by Mng-dev-ai
Self‑hosted Claude Code workspace with multi‑provider routing, sandboxed execution, and a full in‑browser IDE.
Agentrove delivers a self‑hosted web UI that wraps Claude Code as the execution harness. It supports routing requests to Anthropic, OpenAI, GitHub Copilot, OpenRouter, or any custom Anthropic‑compatible endpoint while preserving Claude Code’s tool behaviour, permissions, and MCP orchestration. Workspaces provide isolated sandboxes (Docker or host) and a shared file system, enabling a VS Code‑like experience directly in the browser or as a native desktop app.
Open http://localhost:3000 (web) or launch the bundled macOS app.
Q: Do I need a separate database for the desktop version? A: No. The desktop mode uses a local SQLite file and an in-memory cache/pub-sub, removing the need for PostgreSQL or Redis.
Q: Can I run Agentrove without Docker? A: Yes. In desktop mode the sandbox runs on the host process, and in web mode you can choose the host sandbox provider when creating a workspace.
Q: How does provider switching keep the context? A: The workspace’s filesystem, .claude configuration, and MCP settings remain unchanged; only the model endpoint changes, so chat history and state are preserved.
Q: What is the Anthropic Bridge? A: It is a lightweight service launched inside the sandbox that exposes an Anthropic‑compatible API, routing calls to non‑Anthropic providers while preserving Claude Code request semantics.
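To make the bridge idea above concrete, here is an illustrative sketch of translating an Anthropic Messages-style payload into an OpenAI-style chat payload. The field names follow the two public APIs, but the function itself is an assumption for illustration, not Agentrove's actual bridge code.

```python
# Illustrative sketch only: how an Anthropic-compatible bridge might map an
# Anthropic Messages payload onto an OpenAI-style chat payload.
def anthropic_to_openai(payload: dict) -> dict:
    messages = []
    if "system" in payload:  # Anthropic carries the system prompt out-of-band
        messages.append({"role": "system", "content": payload["system"]})
    messages.extend(payload["messages"])
    return {
        "model": payload["model"].split("/", 1)[-1],  # strip "openai/" prefix
        "messages": messages,
        "max_tokens": payload.get("max_tokens", 1024),
    }

req = {
    "model": "openai/gpt-5.2-codex",
    "system": "You are a coding agent.",
    "max_tokens": 512,
    "messages": [{"role": "user", "content": "hi"}],
}
print(anthropic_to_openai(req)["model"])  # -> gpt-5.2-codex
```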
Q: How are recurring tasks scheduled? A: Agentrove includes an in‑process asynchronous scheduler that runs tasks defined in skills or agents; no external worker process is required.
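A minimal sketch of what such an in-process asynchronous scheduler can look like, using plain asyncio; the interval and task here are invented for illustration and are not Agentrove's scheduler API.

```python
# Sketch: run a task N times on a fixed interval inside the event loop,
# with no external worker process.
import asyncio

async def run_every(interval: float, task, results: list, times: int) -> None:
    for _ in range(times):
        await asyncio.sleep(interval)
        results.append(task())

results: list = []
asyncio.run(run_every(0.01, lambda: "tick", results, 3))
print(results)  # ['tick', 'tick', 'tick']
```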
Note: Agentrove is under active development. Expect breaking changes between releases.
Join the Discord server.
React/Vite Frontend
-> FastAPI Backend
-> PostgreSQL + Redis (web/docker mode)
-> SQLite + in-memory cache/pubsub (desktop mode)
-> Sandbox runtime (Docker/Host)
-> Claude Code CLI + claude-agent-sdk
Agentrove runs chats through claude-agent-sdk, which drives the Claude Code CLI in the selected sandbox. This keeps Claude Code-native behavior for tools, session flow, permission modes, and MCP orchestration.
For OpenAI, OpenRouter, and Copilot providers, Agentrove starts anthropic-bridge inside the sandbox and routes Claude Code requests through:
ANTHROPIC_BASE_URL=http://127.0.0.1:3456
The bridge reads OPENROUTER_API_KEY and GITHUB_COPILOT_TOKEN for credentials, and models are addressed with provider-prefixed IDs such as openai/gpt-5.2-codex, openrouter/moonshotai/kimi-k2.5, and copilot/gpt-5.2-codex.
Agentrove UI
-> Claude Agent SDK + Claude Code CLI
-> Anthropic-compatible request shape
-> Anthropic Bridge (OpenAI/OpenRouter/Copilot)
-> Target provider model
For Anthropic providers, Agentrove uses your Claude auth token directly. For custom providers, Agentrove calls your configured Anthropic-compatible base_url.
Anthropic and custom providers are reached through claude-agent-sdk directly; bridge-backed providers (openai/*, openrouter/*, copilot/*) go through Anthropic Bridge.
Workspaces are the top-level organizational unit. Each workspace owns a dedicated sandbox and groups all related chats under one project context.
Each workspace gets its own sandbox instance (Docker container or host process). Chats within a workspace share the same filesystem, installed tools, and .claude configuration. Switching between workspaces switches the entire execution environment.
When creating a workspace you can override the default sandbox provider (Docker or Host). The provider is locked at creation time — all chats in that workspace use the same provider.
git clone https://github.com/Mng-dev-ai/agentrove.git
cd agentrove
docker compose -p agentrove-web -f docker-compose.yml up -d
Open http://localhost:3000.
docker compose -p agentrove-web -f docker-compose.yml down
docker compose -p agentrove-web -f docker-compose.yml logs -f
Desktop mode uses Tauri with a bundled Python backend sidecar on localhost:8081, with local SQLite storage.
When running in desktop mode:
Tauri Desktop App
-> React frontend (.env.desktop)
-> bundled backend sidecar (localhost:8081)
-> local SQLite database
Requirements:
Dev workflow:
cd frontend
npm install
npm run desktop:dev
Build (unsigned dev):
cd frontend
npm run desktop:build
App bundle output:
frontend/src-tauri/target/release/bundle/macos/Agentrove.app
Desktop troubleshooting:
Free port 8081 if it is already in use.
Configure providers in Settings -> Providers.
anthropic: paste token from claude setup-token
openai: authenticate with OpenAI device flow in UI
copilot: authenticate with GitHub device flow in UI
openrouter: add OpenRouter API key and model IDs
custom: set Anthropic-compatible base_url, token, and model IDs
Example OpenAI models: gpt-5.2-codex, gpt-5.2, gpt-5.3-codex
Example OpenRouter models: moonshotai/kimi-k2.5, minimax/minimax-m2.1, google/gemini-3-pro-preview
Example custom models: GLM-5, M2.5, or private org-specific endpoints (depends on your backend compatibility)
Switching providers within a workspace does not require a new workflow:
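Given the provider-prefixed model IDs used throughout this README, a hypothetical parser shows how a prefix might select the route. The prefix set mirrors the README; the function itself is illustrative, not Agentrove's implementation.

```python
# Sketch: split "openrouter/moonshotai/kimi-k2.5" into (provider, model).
def split_model_id(model_id: str) -> tuple:
    prefix, _, rest = model_id.partition("/")
    if prefix in {"openai", "openrouter", "copilot"}:
        return prefix, rest
    return "anthropic", model_id  # unprefixed IDs go straight to Anthropic

print(split_model_id("openrouter/moonshotai/kimi-k2.5"))
```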
.claude resources (skills, agents, commands)
This is the main value of using Claude Code as the harness while changing inference providers behind Anthropic Bridge.
Default ports: 3000, 8080, 5432, 6379, 5900, 6080, 8765
Health checks: GET /health and GET /api/v1/readyz
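A hedged sketch of a client-side readiness probe against the endpoint mentioned above; the default base URL assumes the web UI port from this README, and the exact success semantics of the endpoint are an assumption.

```python
# Sketch: poll GET /api/v1/readyz and report whether the backend is up.
from urllib.request import urlopen

def is_ready(base: str = "http://localhost:3000") -> bool:
    try:
        with urlopen(f"{base}/api/v1/readyz", timeout=2) as resp:
            return resp.status == 200
    except OSError:  # connection refused, DNS failure, timeout, HTTP error
        return False
```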
The frontend is served at / and the API under /api/*.

Apache 2.0. See LICENSE.
Contributions are welcome. Please open an issue first to discuss what you would like to change, then submit a pull request.