by stippi
Provides an LLM‑powered autonomous coding assistant with command‑line, graphical, and MCP server interfaces for real‑time code analysis, modification, and project exploration.
Code Assistant enables developers to interact with large language models (LLMs) to analyze, refactor, and extend codebases autonomously. It supports multiple interaction modes—including a modern GUI, a terminal UI, and a headless MCP server—so the assistant can be used directly or integrated into other tools such as Claude Desktop.
Installation
git clone https://github.com/stippi/code-assistant
cd code-assistant
cargo build --release
The compiled binary appears at target/release/code-assistant.
Configure projects (optional) – create ~/.config/code-assistant/projects.json to list directories the assistant can access.
Run the desired mode
code-assistant --ui [--task "Your task"]
code-assistant --task "Your task" [--provider <provider> --model <model>]
code-assistant server
Select LLM provider via environment variables or command-line flags (--provider, --model). Supported providers include Anthropic, OpenAI, Ollama, SAP AI Core, Vertex AI, OpenRouter, Groq, and MistralAI.
Q: Do I need to rebuild the binary for every LLM provider? A: No. The binary is provider‑agnostic; you select the provider at runtime via flags or environment variables.
Q: Can the assistant modify files outside the configured project? A: By design, tool calls reject absolute paths and are limited to the project directory (including temporary projects). Absolute or out‑of‑scope relative paths are blocked.
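A minimal sketch of such a guard, assuming paths are resolved component by component against the project root (the function name and error handling are illustrative, not the project's actual code):

use std::path::{Component, Path, PathBuf};

/// Hypothetical guard: resolve a tool-supplied path against the project root,
/// rejecting absolute paths and any `..` traversal out of the project.
fn resolve_in_project(root: &Path, requested: &str) -> Result<PathBuf, String> {
    let requested = Path::new(requested);
    if requested.is_absolute() {
        return Err("absolute paths are not allowed".to_string());
    }
    let mut resolved = root.to_path_buf();
    for component in requested.components() {
        match component {
            Component::Normal(part) => resolved.push(part),
            // `.` is harmless; `..` (and any other component) could escape
            // the project root, so reject it outright.
            Component::CurDir => {}
            _ => return Err("path escapes the project directory".to_string()),
        }
    }
    Ok(resolved)
}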
Q: How does streaming work with providers that lack native function calling? A: The assistant falls back to XML‑style tags or caret blocks, which stream parameters token‑by‑token and are parsed by custom streaming processors.
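As an illustration of the tag-based fallback, a toy streaming processor might accumulate chunks and cut out complete tool spans as they arrive (the <tool> tag is invented for this sketch; the real grammar is defined by the assistant's system prompts):

/// Toy streaming processor: buffers LLM output chunks and extracts complete
/// `<tool>...</tool>` spans as soon as they close. Surrounding plain text is
/// simply discarded to keep the sketch short.
struct StreamingToolParser {
    buffer: String,
}

impl StreamingToolParser {
    fn new() -> Self {
        Self { buffer: String::new() }
    }

    /// Feed one streamed chunk; return any tool invocations completed so far.
    fn push(&mut self, chunk: &str) -> Vec<String> {
        self.buffer.push_str(chunk);
        let mut complete = Vec::new();
        while let (Some(start), Some(end)) =
            (self.buffer.find("<tool>"), self.buffer.find("</tool>"))
        {
            if end < start {
                break; // malformed so far; wait for more input
            }
            let body = self.buffer[start + "<tool>".len()..end].to_string();
            complete.push(body);
            self.buffer.drain(..end + "</tool>".len());
        }
        complete
    }
}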
Q: Is there sandboxing for executed commands? A: Currently the execute_command tool runs the command directly in the host shell. Future releases aim to add sandboxing to restrict file system access.
Q: How can I integrate Code Assistant with Claude Desktop? A: Add an entry under mcpServers in Claude Desktop's developer settings pointing to the compiled binary with the server argument, as shown in the README.
An AI coding assistant built in Rust that provides both command-line and graphical interfaces for autonomous code analysis and modification.
Multi-Modal Tool Execution: Adapts to different LLM capabilities with pluggable tool invocation modes - native function calling, XML-style tags, and triple-caret blocks - ensuring compatibility across various AI providers.
Real-Time Streaming Interface: Advanced streaming processors parse and display tool invocations as they stream from the LLM, with smart filtering to prevent unsafe tool combinations.
Session-Based Project Management: Each chat session is tied to a specific project and maintains persistent state, working memory, and draft messages with attachment support.
Multiple Interface Options: Choose between a modern GUI built on Zed's GPUI framework, traditional terminal interface, or headless MCP server mode for integration with MCP clients such as Claude Desktop.
Intelligent Project Exploration: Autonomously builds understanding of codebases through working memory that tracks file structures, dependencies, and project context.
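One way to picture that working memory is as a small per-session state structure (the field names here are guesses for illustration, not the project's actual types):

use std::collections::HashMap;
use std::path::PathBuf;

/// Illustrative shape of per-session working memory; the real structure
/// in the codebase may differ.
struct WorkingMemory {
    /// Files the assistant has read, keyed by path, with their contents.
    loaded_files: HashMap<PathBuf, String>,
    /// Directory entries gathered while exploring the project.
    file_tree: Vec<PathBuf>,
    /// Free-form notes the agent keeps about the project.
    notes: Vec<String>,
}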
git clone https://github.com/stippi/code-assistant
cd code-assistant
cargo build --release
The binary will be available at target/release/code-assistant.
Create ~/.config/code-assistant/projects.json to define available projects:
{
"code-assistant": {
"path": "/Users/<username>/workspace/code-assistant"
},
"my-project": {
"path": "/Users/<username>/workspace/my-project"
}
}
# Start with graphical interface
code-assistant --ui
# Start GUI with initial task
code-assistant --ui --task "Analyze the authentication system"
# Basic usage
code-assistant --task "Explain the purpose of this codebase"
# With specific provider and model
code-assistant --task "Add error handling" --provider openai --model gpt-5
code-assistant server
Configure in Claude Desktop settings (Developer tab → Edit Config):
{
"mcpServers": {
"code-assistant": {
"command": "/path/to/code-assistant/target/release/code-assistant",
"args": ["server"],
"env": {
"PERPLEXITY_API_KEY": "pplx-...", // optional, enables perplexity_ask tool
"SHELL": "/bin/zsh" // your login shell, required when configuring "env" here
}
}
}
}
Anthropic (default):
export ANTHROPIC_API_KEY="sk-ant-..."
code-assistant --provider anthropic --model claude-sonnet-4-20250514
OpenAI:
export OPENAI_API_KEY="sk-..."
code-assistant --provider openai --model gpt-4o
SAP AI Core:
Create ~/.config/code-assistant/ai-core.json:
{
"auth": {
"client_id": "<service-key-client-id>",
"client_secret": "<service-key-client-secret>",
"token_url": "https://<your-url>/oauth/token",
"api_base_url": "https://<your-url>/v2/inference"
},
"models": {
"claude-sonnet-4": "<deployment-id>"
}
}
Ollama:
code-assistant --provider ollama --model llama2 --num-ctx 4096
Other providers: Vertex AI (Google), OpenRouter, Groq, MistralAI
Tool Syntax Modes:
--tool-syntax native: Use the provider's built-in tool calling (most reliable, but streaming of parameters depends on the provider)
--tool-syntax xml: XML-style tags that allow parameters to stream token by token
--tool-syntax caret: Triple-caret blocks for token efficiency and streaming of parameters
Session Recording:
# Record session (Anthropic only)
code-assistant --record session.json --task "Optimize database queries"
# Playback session
code-assistant --playback session.json --fast-playback
Other Options:
--continue-task: Resume from previous session state
--use-diff-format: Enable alternative diff format for file editing
--verbose: Enable detailed logging
--base-url: Custom API endpoint

The code-assistant features several innovative architectural decisions:
Adaptive Tool Syntax: Automatically generates different system prompts and streaming processors based on the target LLM's capabilities, allowing the same core logic to work across providers with varying function calling support.
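A condensed sketch of that idea (the enum, function, and prompt strings are invented for illustration; the real code selects a matching streaming processor alongside the prompt):

/// Sketch: the same agent core asks for a prompt flavor per syntax mode.
enum ToolSyntax {
    Native,
    Xml,
    Caret,
}

fn system_prompt(syntax: ToolSyntax) -> &'static str {
    match syntax {
        // Native mode relies on the provider's function-calling API,
        // so the prompt needs no tool grammar at all.
        ToolSyntax::Native => "Tools are available via the function-calling API.",
        ToolSyntax::Xml => "Invoke tools using XML-style tags as described below.",
        ToolSyntax::Caret => "Invoke tools using triple-caret blocks as described below.",
    }
}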
Smart Tool Filtering: Real-time analysis of tool invocation patterns prevents logical errors like attempting to edit files before reading them, with the ability to truncate responses mid-stream when unsafe combinations are detected.
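A toy version of one such rule, read-before-edit, might look like this (replace_in_file appears elsewhere in this document; read_files and the tracking model are simplified for the sketch):

use std::collections::HashSet;

/// Sketch of a read-before-edit guard.
#[derive(Default)]
struct ToolFilter {
    files_read: HashSet<String>,
}

impl ToolFilter {
    /// Returns an error if the LLM tries to edit a file it has never read;
    /// the caller can then truncate the response mid-stream.
    fn check(&mut self, tool: &str, path: &str) -> Result<(), String> {
        match tool {
            "read_files" => {
                self.files_read.insert(path.to_string());
                Ok(())
            }
            "replace_in_file" if !self.files_read.contains(path) => {
                Err(format!("refusing to edit {path} before it was read"))
            }
            _ => Ok(()),
        }
    }
}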
Multi-Threaded Streaming: Sophisticated async architecture that handles real-time parsing of tool invocations while maintaining responsive UI updates and proper state management across multiple chat sessions.
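As a sketch of that pattern (assuming the tokio runtime; the event names and chunks are invented), one task can parse the stream while another drains events so the UI never blocks:

use tokio::sync::mpsc;

/// Sketch: decouple network streaming from UI updates with a channel.
/// Real sessions carry much richer events than this.
#[derive(Debug)]
enum UiEvent {
    Text(String),
    ToolInvocation(String),
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<UiEvent>(64);

    // Producer: stands in for the task receiving chunks from the LLM stream.
    tokio::spawn(async move {
        for chunk in ["Analyzing ", "the project", "<tool>list_files</tool>"] {
            let event = if chunk.starts_with("<tool>") {
                UiEvent::ToolInvocation(chunk.to_string())
            } else {
                UiEvent::Text(chunk.to_string())
            };
            let _ = tx.send(event).await;
        }
    });

    // Consumer: the UI side drains events and stays responsive.
    while let Some(event) = rx.recv().await {
        println!("{event:?}");
    }
}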
Contributions are welcome! The codebase demonstrates advanced patterns in async Rust, AI agent architecture, and cross-platform UI development.
This section is not really a roadmap, as the items are in no particular order. Below are some topics that are likely the next focus.
Block edits to files that changed on disk: The tool is replace_in_file and we know in which file quite early. If we also know this file has changed since the LLM last read it, we can block the attempt with an appropriate error message.
Sandboxing of executed commands: The execute_command tool runs a shell with the provided command line, which at the moment is completely unchecked.
Better matching of search blocks: Matching currently applies only simple normalization (\n line endings, no trailing white space). This increases the success rate of matching search blocks quite a bit, but certain ways of fuzzy matching might increase the success even more. Failed matches introduce quite a bit of inefficiency, since they almost always trigger the LLM to re-read a file, even when the error output of the replace_in_file tool includes the complete file and tells the LLM not to re-read the file.