by wojtyniak
Provides a searchable, continuously updated database of MCP servers, allowing AI assistants to discover and provision the appropriate tools via natural‑language queries.
MCP-MCP serves as a "phone book" for MCP servers. It aggregates thousands of server definitions from official listings and community-curated collections, deduplicates them, and stores them in a database that can be queried semantically.
- Installation: uvx (or pipx) after cloning the repository; provides the mcp-mcp command.
- Modes: STDIO (uv run main.py) or HTTP (uv run main.py --http).
- Development: just commands, auto-reload, and full test suite.
Q: Do I need to run my own instance? A: Yes, MCP-MCP is a self-hosted service; you can run it locally or deploy it in a container for production.
Q: Which Python version is required? A: Python 3.13 or newer.
Q: How are duplicate servers avoided? A: An intelligent deduplication process merges entries across sources.
Q: Can I contribute new server definitions? A: Absolutely – fork the repo, add entries to the source lists, and submit a pull request.
Q: Is there a Docker image? A: Not yet, but Docker support is on the roadmap.
MCP-MCP is a Meta-MCP Server that acts as a tool discovery and provisioning service for the Model Context Protocol (MCP). When an AI assistant needs a capability that isn't currently available, it can ask MCP-MCP to discover and suggest appropriate MCP servers from a comprehensive database of over a thousand servers aggregated from multiple curated sources.
Think of it as a "phone book" for MCP servers - one tool to find all other tools.
MCP-MCP provides access to a comprehensive database aggregated from multiple curated sources, including official MCP server listings and community-curated collections.
The database is automatically updated every 3 hours with the latest servers from the community.
Agents Just Wanna Have Tools
Why make agents (and users) hunt for tools when we can bring the tools to them?
Add MCP-MCP to your Claude Desktop configuration file:
macOS: ~/Library/Application\ Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"mcp-mcp": {
"command": "uvx",
"args": ["mcp-mcp"]
}
}
}
Alternatively, if the mcp-mcp command is already installed on your PATH (for example via pipx), reference it directly:
{
"mcpServers": {
"mcp-mcp": {
"command": "mcp-mcp"
}
}
}
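A quick sanity check of the edited file before restarting Claude Desktop (a minimal sketch using Python's built-in json.tool; macOS path shown, adjust for your OS):
# Verify the config file is well-formed JSON
python3 -m json.tool ~/Library/Application\ Support/Claude/claude_desktop_config.json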
Add MCP-MCP to Claude Code using its CLI:
claude mcp add mcp-mcp uvx mcp-mcp
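To confirm the entry was registered, the Claude Code CLI can list configured servers (assuming your CLI version provides the mcp list subcommand):
# List MCP servers known to Claude Code
claude mcp list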
Once configured, you can ask Claude Desktop to discover MCP servers using natural language:
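For example (illustrative prompts, not taken from the project's documentation):
"Find an MCP server that can check whether a domain name is available"
"Is there an MCP server for fetching current weather data?"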
# Clone the repository
git clone https://github.com/your-username/mcp-mcp.git
cd mcp-mcp
# Install dependencies
uv sync
# Run tests
uv run pytest
# Run the server
uv run main.py
For testing the installed package:
uvx mcp-mcp
This installs and runs the MCP-MCP server directly via uvx.
This project includes a justfile
for common development tasks:
# List all available commands
just help
# Development with auto-reload
just dev # STDIO mode with file watching
just dev-http # HTTP mode with file watching
# Running without auto-reload
just run-stdio # STDIO mode
just run-http # HTTP mode
# Testing
just test # Unit tests only
just test-integration # Include GitHub integration tests
# Building and publishing
just build # Build package
just publish-test # Publish to Test PyPI
just publish-prod # Publish to Production PyPI
# Utilities
just version # Show version
just clean # Clean build artifacts
For development and testing, use HTTP transport (easier to stop with Ctrl+C):
# HTTP mode (accessible at http://localhost:8000)
uv run main.py --http
# OR with justfile:
just run-http
# With auto-reload during development
just dev-http
# Custom host/port
uv run main.py --http --host 0.0.0.0 --port 3000
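As a quick smoke test (assuming the defaults above), you can confirm the server is listening; even a non-200 response on the root path shows the process is up:
# Check that something is accepting connections on the default HTTP port
curl -i http://localhost:8000/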
# STDIO mode (for MCP clients like Claude Desktop)
uv run main.py # Note: To stop STDIO mode, use Ctrl+D (EOF), not Ctrl+C
# OR with justfile:
just run-stdio
# With auto-reload during development
just dev
# Build package
uv build
# OR with justfile:
just build
# Test local installation
uvx --from ./dist/mcp_mcp-0.1.0-py3-none-any.whl mcp-mcp
mcp-mcp --help
| Option | Description | Default |
|---|---|---|
| --transport {stdio,http} | Transport method | stdio |
| --http | Use HTTP transport | - |
| --host HOST | Host for HTTP transport | localhost |
| --port PORT | Port for HTTP transport | 8000 |
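Based on the table above, the flags can be combined; for example, running the installed package in HTTP mode on a custom host and port:
# Equivalent to the earlier uv run example, using the installed CLI
mcp-mcp --http --host 0.0.0.0 --port 3000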
# Run all tests (unit + integration)
uv run pytest
# OR with justfile:
just test
# Run only unit tests (fast, no network)
uv run pytest db/ -v
# OR with justfile:
just test-unit
# Run only integration/e2e tests
uv run pytest tests/ -v
# OR with justfile:
just test-integration
# Run GitHub integration tests (optional, requires network)
MCP_MCP_TEST_GITHUB_INTEGRATION=1 uv run pytest tests/
# OR with justfile:
just test-integration-github
# Run all tests including GitHub integration
MCP_MCP_TEST_GITHUB_INTEGRATION=1 uv run pytest
# OR with justfile:
just test-all
# Run with coverage
uv run pytest --cov=db
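For per-line detail, pytest-cov's term-missing report works with the same invocation (assuming pytest-cov is available in the development environment):
# Show which lines in db/ are not covered
uv run pytest --cov=db --cov-report=term-missing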
Test Structure:
- Unit tests live in db/ alongside the code they test (Go-style)
- Integration/e2e tests live in the tests/ directory
Integration Tests: Set MCP_MCP_TEST_GITHUB_INTEGRATION=1 to test real GitHub downloads and verify the complete first-user onboarding experience. These tests ensure users get fast startup (< 5 seconds) with 1,728+ servers.
We welcome contributions! Please see our development setup and:
1. Create a feature branch (git checkout -b feature/amazing-feature)
2. Commit your changes (git commit -m 'Add amazing feature')
3. Push to the branch (git push origin feature/amazing-feature)
4. Open a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
Made with ❤️ for the MCP ecosystem