by omar-haris
Provides intelligent semantic code search for AI assistants by indexing codebases with local AI model embeddings and exposing searchable context through the Model Context Protocol.
Smart Coding MCP enables AI coding assistants to retrieve relevant code fragments based on meaning rather than keyword matches. It builds a vector index of your project's source files using a local embedding model (nomic‑embed‑text‑v1.5) with flexible Matryoshka Representation Learning dimensions, then answers natural‑language queries with ranked code snippets.
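Matryoshka Representation Learning means the model's full 768-dimensional vectors can be truncated to a shorter prefix and renormalized with little quality loss. A minimal sketch of that truncation step (function names here are illustrative, not the server's API):

```javascript
// Matryoshka (MRL) embeddings: the first N dimensions of the full
// 768-d vector form a usable lower-dimensional embedding on their own.
// Truncate to the configured dimension, then renormalize to unit length
// so cosine similarity still behaves as expected.
function truncateEmbedding(vector, dim) {
  const prefix = vector.slice(0, dim);
  const norm = Math.sqrt(prefix.reduce((sum, x) => sum + x * x, 0));
  return prefix.map((x) => x / norm);
}

// Example: shrink a full 768-d vector to the default 128-d index size.
const full = Array.from({ length: 768 }, (_, i) => Math.sin(i + 1));
const small = truncateEmbedding(full, 128);
console.log(small.length); // 128
```

Smaller dimensions shrink the on-disk index and speed up similarity scans at some cost in retrieval accuracy.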
npm install -g smart-coding-mcp
# or, one‑off execution
npx -y smart-coding-mcp --workspace /path/to/project
Once configured, your IDE or assistant can call semantic_search, index_codebase, clear_cache, d_check_last_version, e_set_workspace, or f_get_status directly.

Q: Do I need an internet connection?
A: No. The embedding model is bundled and runs locally via ONNX Runtime.
Q: Can I use a different embedding model?
A: Yes, set SMART_CODING_EMBEDDING_MODEL to another compatible model name.
Q: How does the server affect my laptop's performance?
A: By default it caps CPU usage at 50% and processes files in configurable batches, keeping the system responsive.
Q: What languages are supported?
A: The default configuration targets JavaScript/TypeScript, but the chunker can be extended to other languages.
Q: How do I reset the index?
A: Use the clear_cache tool or delete the .smart-coding-cache/ folder and run index_codebase again.
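Under the hood these tools are ordinary MCP tool calls. For example, a client resets the index by sending a `tools/call` JSON-RPC request like the following (the request `id` is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "clear_cache",
    "arguments": {}
  }
}
```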
An extensible Model Context Protocol (MCP) server that provides intelligent semantic code search for AI assistants. Built with local AI models using Matryoshka Representation Learning (MRL) for flexible embedding dimensions (64-768d), with runtime workspace switching and comprehensive status reporting.
| Tool | Description | Example |
|---|---|---|
| `semantic_search` | Find code by meaning, not just keywords | "Where do we validate user input?" |
| `index_codebase` | Manually trigger reindexing | Use after major refactoring or branch switches |
| `clear_cache` | Reset the embeddings cache | Useful when the cache becomes corrupted |
| `d_check_last_version` | Get the latest version of any package (20 ecosystems) | "express", "npm:react", "pip:requests" |
| `e_set_workspace` | Change the project path at runtime | Switch to a different project without restarting |
| `f_get_status` | Get server info: version, index status, config | Check indexing progress, model info, cache size |
AI coding assistants work better when they can find relevant code quickly. Traditional keyword search falls short: if you ask "where do we handle authentication?" but your code uses "login" and "session", keyword search misses it.
This MCP server solves that by indexing your codebase with AI embeddings. Your AI assistant can search by meaning instead of exact keywords, finding relevant code even when the terminology differs.

- Better Code Understanding
- Performance
- Privacy
- Progressive Indexing
- Resource Throttling
- SQLite Cache
- Optimized Defaults
Install globally via npm:
npm install -g smart-coding-mcp
To update to the latest version:
npm update -g smart-coding-mcp
Add to your MCP configuration file. The location depends on your IDE and OS:
| IDE | OS | Config Path |
|---|---|---|
| Claude Desktop | macOS | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Claude Desktop | Windows | %APPDATA%\Claude\claude_desktop_config.json |
| Cascade (Cursor) | All | Configured via UI Settings > Features > MCP |
| Antigravity | macOS | ~/.gemini/antigravity/mcp_config.json |
| Antigravity | Windows | %USERPROFILE%\.gemini\antigravity\mcp_config.json |
Add the server configuration to the mcpServers object in your config file:
{
  "mcpServers": {
    "smart-coding-mcp": {
      "command": "smart-coding-mcp",
      "args": ["--workspace", "/absolute/path/to/your/project"]
    }
  }
}
To index multiple projects at once, run one server instance per workspace:
{
  "mcpServers": {
    "smart-coding-mcp-frontend": {
      "command": "smart-coding-mcp",
      "args": ["--workspace", "/path/to/frontend"]
    },
    "smart-coding-mcp-backend": {
      "command": "smart-coding-mcp",
      "args": ["--workspace", "/path/to/backend"]
    }
  }
}
⚠️ Warning: Most MCP clients (including Antigravity and Claude Desktop) do NOT support `${workspaceFolder}` variable expansion. The server will exit with an error if the variable is not expanded.
For clients that support dynamic variables (VS Code, Cursor):
{
  "mcpServers": {
    "smart-coding-mcp": {
      "command": "smart-coding-mcp",
      "args": ["--workspace", "${workspaceFolder}"]
    }
  }
}
| Client | Supports `${workspaceFolder}` |
|---|---|
| VS Code | ✅ Yes |
| Cursor (Cascade) | ✅ Yes |
| Antigravity | ❌ No |
| Claude Desktop | ❌ No |
Override configuration settings via environment variables in your MCP config:
| Variable | Type | Default | Description |
|---|---|---|---|
| `SMART_CODING_VERBOSE` | boolean | `false` | Enable detailed logging |
| `SMART_CODING_BATCH_SIZE` | number | `100` | Files to process in parallel |
| `SMART_CODING_MAX_FILE_SIZE` | number | `1048576` | Max file size in bytes (1 MB) |
| `SMART_CODING_CHUNK_SIZE` | number | `25` | Lines of code per chunk |
| `SMART_CODING_MAX_RESULTS` | number | `5` | Max search results |
| `SMART_CODING_SMART_INDEXING` | boolean | `true` | Enable smart project detection |
| `SMART_CODING_WATCH_FILES` | boolean | `false` | Enable file watching for auto-reindex |
| `SMART_CODING_SEMANTIC_WEIGHT` | number | `0.7` | Weight for semantic similarity (0-1) |
| `SMART_CODING_EXACT_MATCH_BOOST` | number | `1.5` | Boost for exact text matches |
| `SMART_CODING_EMBEDDING_MODEL` | string | `nomic-ai/nomic-embed-text-v1.5` | AI embedding model to use |
| `SMART_CODING_EMBEDDING_DIMENSION` | number | `128` | MRL dimension (64, 128, 256, 512, 768) |
| `SMART_CODING_DEVICE` | string | `cpu` | Inference device (`cpu`, `webgpu`, `auto`) |
| `SMART_CODING_CHUNKING_MODE` | string | `smart` | Code chunking mode (`smart`, `ast`, `line`) |
| `SMART_CODING_WORKER_THREADS` | string | `auto` | Worker threads (`auto` or 1-32) |
| `SMART_CODING_MAX_CPU_PERCENT` | number | `50` | Max CPU usage during indexing (10-100%) |
| `SMART_CODING_BATCH_DELAY` | number | `100` | Delay between batches in ms (0-5000) |
| `SMART_CODING_MAX_WORKERS` | string | `auto` | Override max worker threads limit |
Example with environment variables:
{
  "mcpServers": {
    "smart-coding-mcp": {
      "command": "smart-coding-mcp",
      "args": ["--workspace", "/path/to/project"],
      "env": {
        "SMART_CODING_VERBOSE": "true",
        "SMART_CODING_BATCH_SIZE": "200",
        "SMART_CODING_MAX_FILE_SIZE": "2097152"
      }
    }
  }
}
Note: The server starts instantly and indexes in the background, so your IDE won't be blocked waiting for indexing to complete.
| Component | Technology |
|---|---|
| Protocol | Model Context Protocol (JSON-RPC) |
| AI Model | nomic-embed-text-v1.5 (MRL) |
| Inference | transformers.js + ONNX Runtime |
| Chunking | Smart regex / Tree-sitter AST |
| Search | Cosine similarity + exact match boost |
Query → Vector embedding → Cosine similarity → Ranked results
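The ranking step of that pipeline can be sketched in a few lines: score every indexed chunk against the query vector with cosine similarity and keep the top matches. This is a toy illustration, not the server's actual implementation; the 5-result cap mirrors the `SMART_CODING_MAX_RESULTS` default.

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Score every chunk against the query embedding and return the best matches.
function rank(queryVec, chunks, maxResults = 5) {
  return chunks
    .map((chunk) => ({ ...chunk, score: cosine(queryVec, chunk.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, maxResults);
}
```

Because vectors encode meaning, a chunk about "login" can outrank an exact-word mismatch for a query about "authentication".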
Natural language search:
Query: "How do we handle cache persistence?"
Result:
// lib/cache.js (Relevance: 38.2%)
async save() {
await fs.writeFile(cacheFile, JSON.stringify(this.vectorStore));
await fs.writeFile(hashFile, JSON.stringify(this.fileHashes));
}
Typo tolerance:
Query: "embeding modle initializashun"
Still finds embedding model initialization code despite multiple typos.
Conceptual search:
Query: "error handling and exceptions"
Finds all try/catch blocks and error handling patterns.
Cache directory: .smart-coding-cache/
Embedding Model: nomic-embed-text-v1.5 via transformers.js v3
Legacy Model: all-MiniLM-L6-v2 (fallback)
Vector Similarity: Cosine similarity
Hybrid Scoring: Combines semantic similarity with exact text matching
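A plausible sketch of that combination using the documented defaults (0.7 semantic weight, 1.5 exact-match boost). The blend-then-boost formula below is an assumption for illustration, not the server's exact code:

```javascript
// Hypothetical hybrid scoring: blend semantic and keyword similarity
// by the configured weight, then boost chunks that contain an exact
// text match. Defaults mirror SMART_CODING_SEMANTIC_WEIGHT (0.7)
// and SMART_CODING_EXACT_MATCH_BOOST (1.5).
function hybridScore(semanticSim, keywordSim, hasExactMatch,
                     semanticWeight = 0.7, exactBoost = 1.5) {
  const blended = semanticWeight * semanticSim + (1 - semanticWeight) * keywordSim;
  return hasExactMatch ? blended * exactBoost : blended;
}
```

Raising `SMART_CODING_SEMANTIC_WEIGHT` toward 1 favors meaning over literal text; raising the boost favors chunks that quote the query verbatim.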
This project builds on research from Cursor showing that semantic search improves AI coding agent performance by 12.5% on average across question-answering tasks. The key insight is that AI assistants benefit more from relevant context than from large amounts of context.
See: https://cursor.com/blog/semsearch
MIT License
Copyright (c) 2025 Omar Haris
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Alternatively, run the server via npx without a global install:
{
  "mcpServers": {
    "smart-coding-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "smart-coding-mcp",
        "--workspace",
        "/absolute/path/to/your/project"
      ]
    }
  }
}
For Claude Code, register the server from the command line:
claude mcp add smart-coding-mcp npx -y smart-coding-mcp --workspace /absolute/path/to/your/project