by doobidoo
Provides a universal memory service with semantic search, intelligent memory triggers, OAuth‑enabled team collaboration, and multi‑client support for Claude Desktop, Claude Code, VS Code, Cursor and over a dozen AI applications.
MCP Memory Service offers a persistent, searchable knowledge store that AI assistants can read from and write to automatically. It integrates with the Model Context Protocol to supply relevant memories during a chat, supports natural‑language time and tag queries, and can consolidate memories across sessions.
git clone https://github.com/doobidoo/mcp-memory-service.git
cd mcp-memory-service
python install.py # default lightweight SQLite‑vec backend
# optional ML stack
# python install.py --with-ml
# optional ChromaDB backend for multi‑client sharing
# python install.py --with-chromadb
uv run memory server # or: python -m mcp_memory_service.server
export MCP_OAUTH_ENABLED=true
uv run memory server --http # starts HTTP API on 8000
claude mcp add --transport http memory-service http://localhost:8000/mcp
The client will auto‑discover OAuth endpoints and obtain a token.
uv run memory store "Fixed authentication race condition"
uv run memory recall "authentication race condition"
uv run memory search --tags python debugging
uv run memory health # shows server health and OAuth status
Q: Do I need heavy ML libraries?
A: No. The default installation uses SQLite‑vec with ONNX embeddings (<100 MB). Add --with-ml only if you need advanced models.
Q: My Python version is 3.13 – will it work?
A: SQLite‑vec wheels may be missing for 3.13. The installer falls back to building from source, or you can switch to the ChromaDB backend.
Q: How does OAuth work for Claude Code?
A: Enable MCP_OAUTH_ENABLED=true and start the server with --http. Clients discover the /oauth/.well-known endpoint, register themselves, and receive JWTs for each request.
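As a rough sketch of the discovery step described above: a client first builds the well-known metadata URL from the server's base address. The `/oauth/.well-known` prefix comes from the answer above; the metadata document name (`oauth-authorization-server`, per RFC 8414) is an assumption, not confirmed by this README.

```python
def oauth_discovery_url(base_url: str) -> str:
    """Build the OAuth metadata URL a client would probe first.

    The /oauth/.well-known prefix is stated in the FAQ; the final
    path segment follows RFC 8414 and is an assumption here.
    """
    return base_url.rstrip("/") + "/oauth/.well-known/oauth-authorization-server"

print(oauth_discovery_url("http://localhost:8000"))
# http://localhost:8000/.well-known path under /oauth, as above
```

A client would then GET this URL, register itself at the advertised registration endpoint, and attach the issued JWT as a Bearer token on each request.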
Q: Can I run the service in Docker?
A: Yes. Use the provided docker-compose.yml for MCP protocol or docker-compose.http.yml for the HTTP/OAuth API.
Q: What storage backend should I choose?
A: Use SQLite‑vec for single‑user or lightweight scenarios. Choose ChromaDB when multiple users need to share the same memory store.
Production-ready MCP memory service with zero database locks, hybrid backend (fast local + cloud sync), and intelligent memory search for AI assistants. Features v8.9.0 auto-configuration for multi-client access, 5ms local reads with background Cloudflare sync, Natural Memory Triggers with 85%+ accuracy, and OAuth 2.1 team collaboration. Works with Claude Desktop, VS Code, Cursor, Continue, and 13+ AI applications.
Memory Awareness Enhancements
Previous Releases:
📖 Full Details: CHANGELOG.md | All Releases
# One-command installation with auto-configuration
git clone https://github.com/doobidoo/mcp-memory-service.git
cd mcp-memory-service && python install.py
# Choose option 4 (Hybrid - RECOMMENDED) when prompted
# Installer automatically configures:
# ✅ SQLite pragmas for concurrent access
# ✅ Cloudflare credentials for cloud sync
# ✅ Claude Desktop integration
# Done! Fast local + cloud sync with zero database locks
Install from PyPI:
# Install latest version from PyPI
pip install mcp-memory-service
# Or with uv (faster)
uv pip install mcp-memory-service
Then configure Claude Desktop by adding to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or equivalent:
{
"mcpServers": {
"memory": {
"command": "memory",
"args": ["server"],
"env": {
"MCP_MEMORY_STORAGE_BACKEND": "hybrid"
}
}
}
}
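If you are unsure whether your config file is well-formed, a quick sanity check is to parse it and inspect the entry. This is a minimal sketch (not an official validator); the key names simply mirror the JSON shown above.

```python
import json

# Parse the same structure as the Claude Desktop config above and
# verify the fields the MCP client actually needs are present.
config = json.loads("""
{
  "mcpServers": {
    "memory": {
      "command": "memory",
      "args": ["server"],
      "env": {"MCP_MEMORY_STORAGE_BACKEND": "hybrid"}
    }
  }
}
""")

server = config["mcpServers"]["memory"]
assert server["command"] == "memory" and "server" in server["args"]
print("config entry looks well-formed")
```

Running this against your real file (`json.load(open(path))`) catches the most common failure mode: a typo in `mcpServers` or a missing `command` key, which Claude Desktop reports only as a silently absent server.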
For advanced configuration with the interactive installer, clone the repo and run python scripts/installation/install.py.
For development and contributing, use editable install to ensure source code changes take effect immediately:
# Clone repository
git clone https://github.com/doobidoo/mcp-memory-service.git
cd mcp-memory-service
# Create and activate virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# CRITICAL: Editable install (code changes take effect immediately)
pip install -e .
# Verify editable mode (should show source directory, not site-packages)
pip show mcp-memory-service | grep Location
# Expected: Location: /path/to/mcp-memory-service/src
# Start development server
uv run memory server
⚠️ Important: Editable install (-e flag) ensures MCP servers load from source code, not stale site-packages. Without this, source changes won't be reflected until you reinstall the package.
Version Mismatch Check:
# Verify installed version matches source code
python scripts/validation/check_dev_setup.py
See CLAUDE.md for complete development guidelines.
Universal Installer (Most Compatible):
# Clone and install with automatic platform detection
git clone https://github.com/doobidoo/mcp-memory-service.git
cd mcp-memory-service
# Lightweight installation (SQLite-vec with ONNX embeddings - recommended)
python install.py
# Add full ML capabilities (torch + sentence-transformers for advanced features)
python install.py --with-ml
# Install with hybrid backend (SQLite-vec + Cloudflare sync)
python install.py --storage-backend hybrid
📝 Installation Options Explained:
--with-ml: Adds PyTorch + sentence-transformers for advanced ML features (heavier but more capable)
--storage-backend hybrid: Hybrid backend with SQLite-vec + Cloudflare sync (best for multi-device access)
Docker (Fastest):
# For MCP protocol (Claude Desktop)
docker-compose up -d
# For HTTP API + OAuth (Team Collaboration)
docker-compose -f docker-compose.http.yml up -d
Smithery (Claude Desktop):
# Auto-install for Claude Desktop
npx -y @smithery/cli install @doobidoo/mcp-memory-service --client claude
Updating from an older version? Scripts have been reorganized for better maintainability:
Use python -m mcp_memory_service.server in your Claude Desktop config (no path dependencies!)
Use uv run memory server with UV tooling
scripts/run_memory_server.py has moved to scripts/server/run_memory_server.py
On your first run, you'll see some warnings that are completely normal:
These warnings disappear after the first successful run. The service is working correctly! For details, see our First-Time Setup Guide.
sqlite-vec may not have pre-built wheels for Python 3.13 yet. If installation fails:
Install Python 3.12 instead: brew install python@3.12
Or switch backends: --storage-backend cloudflare
macOS users may encounter enable_load_extension errors with sqlite-vec:
Use Homebrew Python (extension loading enabled): brew install python && rehash
Or rebuild via pyenv: PYTHON_CONFIGURE_OPTS='--enable-loadable-sqlite-extensions' pyenv install 3.12.0
Or switch backends: --storage-backend cloudflare or --storage-backend hybrid
Intelligent Context Injection - See how the memory service automatically surfaces relevant information at session start:
What you're seeing:
Result: Claude starts every session with full project context - no manual prompting needed.
👉 Visit our comprehensive Wiki for detailed guides:
SQLite pragmas auto-configured for concurrent access (busy_timeout=15000, cache_size=20000)
/session-start command for manual session initialization (workaround for issue #160)
Note: All heavy ML dependencies (PyTorch, sentence-transformers) are now optional to dramatically reduce build times and image sizes. SQLite-vec uses lightweight ONNX embeddings by default. Install with --with-ml for full ML capabilities.
# Start server with web interface
uv run memory server --http
# Access interactive dashboard
open http://127.0.0.1:8888/
# Upload documents via CLI
curl -X POST http://127.0.0.1:8888/api/documents/upload \
-F "file=@document.pdf" \
-F "tags=documentation,reference"
# Search document content
curl -X POST http://127.0.0.1:8888/api/search \
-H "Content-Type: application/json" \
-d '{"query": "authentication flow", "limit": 10}'
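The same search call can be made from Python. The sketch below only prepares the request so it runs without a live server; the `/api/search` route and the `query`/`limit` payload keys are taken from the curl example above, not from a separate API reference.

```python
import json
from urllib import request

def build_search_request(base_url: str, query: str, limit: int = 10) -> request.Request:
    """Prepare the search POST shown in the curl example above."""
    payload = json.dumps({"query": query, "limit": limit}).encode("utf-8")
    return request.Request(
        f"{base_url}/api/search",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_search_request("http://127.0.0.1:8888", "authentication flow")
print(req.get_method(), req.full_url)
# POST http://127.0.0.1:8888/api/search
# With the server running, send it via: request.urlopen(req)
```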
# Start OAuth-enabled server for team collaboration
export MCP_OAUTH_ENABLED=true
uv run memory server --http
# Claude Code team members connect via HTTP transport
claude mcp add --transport http memory-service http://your-server:8000/mcp
# → Automatic OAuth discovery, registration, and authentication
# Store a memory
uv run memory store "Fixed race condition in authentication by adding mutex locks"
# Search for relevant memories
uv run memory recall "authentication race condition"
# Search by tags
uv run memory search --tags python debugging
# Check system health (shows OAuth status)
uv run memory health
Recommended approach - Add to your Claude Desktop config (~/.claude/config.json):
{
"mcpServers": {
"memory": {
"command": "python",
"args": ["-m", "mcp_memory_service.server"],
"env": {
"MCP_MEMORY_STORAGE_BACKEND": "sqlite_vec"
}
}
}
}
Alternative approaches:
// Option 1: UV tooling (if using UV)
{
"mcpServers": {
"memory": {
"command": "uv",
"args": ["--directory", "/path/to/mcp-memory-service", "run", "memory", "server"],
"env": {
"MCP_MEMORY_STORAGE_BACKEND": "sqlite_vec"
}
}
}
}
// Option 2: Direct script path (v6.17.0+)
{
"mcpServers": {
"memory": {
"command": "python",
"args": ["/path/to/mcp-memory-service/scripts/server/run_memory_server.py"],
"env": {
"MCP_MEMORY_STORAGE_BACKEND": "sqlite_vec"
}
}
}
}
Hybrid Backend (v8.9.0+ RECOMMENDED):
# Hybrid backend with auto-configured pragmas
export MCP_MEMORY_STORAGE_BACKEND=hybrid
export MCP_MEMORY_SQLITE_PRAGMAS="busy_timeout=15000,cache_size=20000"
# Cloudflare credentials (required for hybrid)
export CLOUDFLARE_API_TOKEN="your-token"
export CLOUDFLARE_ACCOUNT_ID="your-account"
export CLOUDFLARE_D1_DATABASE_ID="your-db-id"
export CLOUDFLARE_VECTORIZE_INDEX="mcp-memory-index"
# Enable HTTP API
export MCP_HTTP_ENABLED=true
export MCP_HTTP_PORT=8000
# Security
export MCP_API_KEY="your-secure-key"
SQLite-vec Only (Local):
# Local-only storage
export MCP_MEMORY_STORAGE_BACKEND=sqlite_vec
export MCP_MEMORY_SQLITE_PRAGMAS="busy_timeout=15000,cache_size=20000"
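For illustration, the `MCP_MEMORY_SQLITE_PRAGMAS` value is a comma-separated list of `key=value` pairs. The parsing below is a plausible sketch of how such a string breaks down, not the service's actual implementation:

```python
import os

# Assumed format: comma-separated key=value pragma pairs, matching the
# export lines above. The parsing logic here is a sketch only.
os.environ["MCP_MEMORY_SQLITE_PRAGMAS"] = "busy_timeout=15000,cache_size=20000"

def parse_pragmas(raw: str) -> dict:
    pairs = (item.split("=", 1) for item in raw.split(",") if item)
    return {key.strip(): int(value) for key, value in pairs}

pragmas = parse_pragmas(os.environ["MCP_MEMORY_SQLITE_PRAGMAS"])
print(pragmas)
# {'busy_timeout': 15000, 'cache_size': 20000}
```

In SQLite terms, `busy_timeout=15000` makes writers wait up to 15 s for a lock instead of failing immediately, which is what allows multiple clients to share one database file without lock errors.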
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ AI Clients │ │ MCP Memory │ │ Storage Backend │
│ │ │ Service v8.9 │ │ │
│ • Claude Desktop│◄──►│ • MCP Protocol │◄──►│ • Hybrid 🌟 │
│ • Claude Code │ │ • HTTP Transport│ │ (5ms local + │
│ (HTTP/OAuth) │ │ • OAuth 2.1 Auth│ │ cloud sync) │
│ • VS Code │ │ • Memory Store │ │ • SQLite-vec │
│ • Cursor │ │ • Semantic │ │ • Cloudflare │
│ • 13+ AI Apps │ │ Search │ │ │
│ • Web Dashboard │ │ • Doc Ingestion │ │ Zero DB Locks ✅│
│ (Port 8888) │ │ • Zero DB Locks │ │ Auto-Config ✅ │
└─────────────────┘ └─────────────────┘ └─────────────────┘
mcp-memory-service/
├── src/mcp_memory_service/ # Core application
│ ├── models/ # Data models
│ ├── storage/ # Storage backends
│ ├── web/ # HTTP API & dashboard
│ └── server.py # MCP server
├── scripts/ # Utilities & installation
├── tests/ # Test suite
└── tools/docker/ # Docker configuration
See CONTRIBUTING.md for detailed guidelines.
Run python scripts/validation/validate_configuration_complete.py to check your setup
Real-world metrics from active deployments:
Apache License 2.0 - see LICENSE for details.
Ready to supercharge your AI workflow? 🚀
👉 Start with our Installation Guide or explore the Wiki for comprehensive documentation.
Transform your AI conversations into persistent, searchable knowledge that grows with you.