plugged.in MCP Proxy Server
by VeriTeknik
Provides a unified interface to manage multiple MCP servers, offering an AI playground, RAG capabilities, real‑time notifications, and support for both STDIO and Streamable HTTP transport modes.
It aggregates several Model Context Protocol (MCP) servers into a single endpoint, allowing AI clients to discover tools, resources, prompts, and documents across all configured servers without extra client setup.
npx -y @pluggedin/pluggedin-mcp-proxy@latest --pluggedin-api-key YOUR_API_KEY
npx -y @pluggedin/pluggedin-mcp-proxy@latest \
--transport streamable-http \
--port 8080 \
--pluggedin-api-key YOUR_API_KEY
Add --require-api-auth to enforce Bearer-token authentication and --stateless for a new session on every request.

Alternatively, build the Docker image (docker build -t pluggedin-mcp-proxy .) and run it either in STDIO mode or with --transport streamable-http.

Use the npx command in Claude Desktop, Cline, Cursor, etc., or point the client to http://localhost:12006/mcp when HTTP mode is enabled.

Q: Do I need to set the API key as an environment variable, or can I pass it on the command line?
A: Both are supported. The CLI flag --pluggedin-api-key overrides the PLUGGEDIN_API_KEY environment variable.
Q: Which transport should I choose?
A: Use STDIO for local development and direct integration with desktop MCP clients. Choose Streamable HTTP when you need remote access, web-based integrations, or containerised deployments.
Q: How does authentication work for the HTTP mode?
A: By default the endpoint is open (useful for local testing). Adding --require-api-auth makes the server expect an Authorization: Bearer <API_KEY> header.
Q: Can I run multiple instances with different workspaces?
A: Yes. Each instance reads its configuration from the plugged.in App using the provided API key, so you can launch separate containers with different keys to isolate workspaces.
Q: Is OAuth token handling automatic?
A: For Streamable HTTP downstream servers the proxy fetches and refreshes OAuth tokens from the plugged.in App; no manual token management is needed.
Q: How do I stop a specific session in stateless mode?
A: Stateless mode creates a new session per request, so there is nothing to clean up. In stateful mode send a DELETE /mcp request with the mcp-session-id header to terminate the session.
The plugged.in MCP Proxy Server is a powerful middleware that aggregates multiple Model Context Protocol (MCP) servers into a single unified interface. It fetches tool, prompt, and resource configurations from the plugged.in App and intelligently routes requests to the appropriate underlying MCP servers.
This proxy enables seamless integration with any MCP client (Claude, Cline, Cursor, etc.) while providing advanced management capabilities through the plugged.in ecosystem.
Knowledge (RAG v2 / AI Document Exchange)
Search and ground model outputs with unified, attribution‑aware document retrieval. MCP servers can create and manage documents in your library with versioning, visibility controls, and model attribution. Use the built‑in RAG to search across all connected sources and return relevant snippets and metadata.
Memory (Persistent AI Memory)
Long‑lived, workspace/profile‑scoped memory that survives sessions. The hub integrates with the plugged.in App's persistent memory so agent actions and insights can be stored and recalled across tasks. Built‑in memory tools are on the roadmap to expose low‑friction get/put/search patterns under the same auth model.
Tools
Aggregate built‑in capabilities with downstream MCP servers (STDIO, SSE, Streamable HTTP). Tool discovery is cached and can be refreshed on demand; hub‑level discovery returns a unified catalog for any MCP client. The hub supports tools, resources, resource templates, and prompts.
Proxy
One connection for every client. Run as STDIO (default) or Streamable HTTP with optional API auth and stateless mode. Works with Claude Desktop, Cline, Cursor, MCP Inspector, and more; keep your existing client configs while centralizing policies and telemetry.
⭐ If you find this project useful, please consider giving it a star on GitHub! It helps us reach more developers and motivates us to keep improving.
Documents are tracked by source: ai_generated, upload, or api.

The proxy provides two distinct categories of tools:
These tools are built into the proxy and work without any server configuration:
- **pluggedin_discover_tools** - Smart discovery with caching for instant results
- **pluggedin_rag_query** - RAG v2 search across your documents with AI filtering capabilities
- **pluggedin_send_notification** - Send notifications with optional email delivery
- **pluggedin_create_document** - (Coming Soon) Create AI-generated documents in your library

These tools come from your configured MCP servers and can be turned on/off:
The discovery tool intelligently shows both categories, giving AI models immediate access to all available capabilities.
# Quick discovery - returns cached data instantly
pluggedin_discover_tools()
# Force refresh - shows current tools + runs background discovery
pluggedin_discover_tools({"force_refresh": true})
# Discover specific server
pluggedin_discover_tools({"server_uuid": "uuid-here"})
Example Response:
## 🔧 Static Built-in Tools (Always Available):
1. **pluggedin_discover_tools** - Smart discovery with caching
2. **pluggedin_rag_query** - RAG v2 search across documents with AI filtering
3. **pluggedin_send_notification** - Send notifications
4. **pluggedin_create_document** - (Coming Soon) Create AI-generated documents
## ⚡ Dynamic MCP Tools (8) - From Connected Servers:
1. **query** - Run read-only SQL queries
2. **generate_random_integer** - Generate secure random integers
...
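The cache-then-refresh behavior behind pluggedin_discover_tools can be modeled roughly like this. A hypothetical sketch: the real proxy's cache keys, expiry, and background-refresh mechanics differ, and for simplicity the forced refresh here runs inline rather than in the background.

```python
class DiscoveryCache:
    """Return the cached tool listing instantly; re-run discovery on demand."""

    def __init__(self, fetch_tools):
        self.fetch_tools = fetch_tools  # callable performing real discovery
        self.cache: list[str] | None = None

    def discover(self, force_refresh: bool = False) -> list[str]:
        # First call (or force_refresh=True) triggers actual discovery;
        # subsequent calls return the cached listing without any fetch.
        if self.cache is None or force_refresh:
            self.cache = self.fetch_tools()
        return self.cache
```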
The enhanced RAG v2 system allows MCP servers to create and search documents with full AI attribution:
# Search for documents created by specific AI models
pluggedin_rag_query({
  "query": "system architecture",
  "filters": {
    "modelName": "Claude 3 Opus",
    "source": "ai_generated",
    "tags": ["technical"]
  }
})
# Search across all document sources
pluggedin_rag_query({
  "query": "deployment guide",
  "filters": {
    "dateFrom": "2024-01-01",
    "visibility": "workspace"
  }
})
# Future: Create AI-generated documents (Coming Soon)
pluggedin_create_document({
  "title": "Analysis Report",
  "content": "# Market Analysis\n\nDetailed findings...",
  "format": "md",
  "tags": ["analysis", "market"],
  "metadata": {
    "model": {
      "name": "Claude 3 Opus",
      "provider": "Anthropic"
    }
  }
})
# Install and run with npx (latest v1.0.0)
npx -y @pluggedin/pluggedin-mcp-proxy@latest --pluggedin-api-key YOUR_API_KEY
For existing installations, see our Migration Guide for detailed upgrade instructions.
# Quick upgrade
npx -y @pluggedin/pluggedin-mcp-proxy@1.0.0 --pluggedin-api-key YOUR_API_KEY
Add the following to your Claude Desktop configuration:
{
  "mcpServers": {
    "pluggedin": {
      "command": "npx",
      "args": ["-y", "@pluggedin/pluggedin-mcp-proxy@latest"],
      "env": {
        "PLUGGEDIN_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}
Add the following to your Cline configuration:
{
  "mcpServers": {
    "pluggedin": {
      "command": "npx",
      "args": ["-y", "@pluggedin/pluggedin-mcp-proxy@latest"],
      "env": {
        "PLUGGEDIN_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}
For Cursor, you can use command-line arguments instead of environment variables:
npx -y @pluggedin/pluggedin-mcp-proxy@latest --pluggedin-api-key YOUR_API_KEY
| Variable | Description | Required | Default |
|---|---|---|---|
| `PLUGGEDIN_API_KEY` | API key from plugged.in App | Yes | - |
| `PLUGGEDIN_API_BASE_URL` | Base URL for plugged.in App | No | `https://plugged.in` |
Command line arguments take precedence over environment variables:
npx -y @pluggedin/pluggedin-mcp-proxy@latest --pluggedin-api-key YOUR_API_KEY --pluggedin-api-base-url https://your-custom-url.com
| Option | Description | Default |
|---|---|---|
| `--transport <type>` | Transport type: `stdio` or `streamable-http` | `stdio` |
| `--port <number>` | Port for Streamable HTTP server | `12006` |
| `--stateless` | Enable stateless mode for Streamable HTTP | `false` |
| `--require-api-auth` | Require API key for Streamable HTTP requests | `false` |
For a complete list of options:
npx -y @pluggedin/pluggedin-mcp-proxy@latest --help
The proxy can run as an HTTP server instead of STDIO, enabling web-based access and remote connections.
# Run as HTTP server on default port (12006)
npx -y @pluggedin/pluggedin-mcp-proxy@latest --transport streamable-http --pluggedin-api-key YOUR_API_KEY
# Custom port
npx -y @pluggedin/pluggedin-mcp-proxy@latest --transport streamable-http --port 8080 --pluggedin-api-key YOUR_API_KEY
# With authentication required
npx -y @pluggedin/pluggedin-mcp-proxy@latest --transport streamable-http --require-api-auth --pluggedin-api-key YOUR_API_KEY
# Stateless mode (new session per request)
npx -y @pluggedin/pluggedin-mcp-proxy@latest --transport streamable-http --stateless --pluggedin-api-key YOUR_API_KEY
- `POST /mcp` - Send MCP messages
- `GET /mcp` - Server-sent events stream (optional)
- `DELETE /mcp` - Terminate session
- `GET /health` - Health check endpoint

In stateful mode (default), use the mcp-session-id header to maintain sessions:
# First request creates a session
curl -X POST http://localhost:12006/mcp \
-H "Content-Type: application/json" \
-H "Accept: application/json, text/event-stream" \
-d '{"jsonrpc":"2.0","method":"tools/list","id":1}'
# Subsequent requests use the same session
curl -X POST http://localhost:12006/mcp \
-H "Content-Type: application/json" \
-H "Accept: application/json, text/event-stream" \
-H "mcp-session-id: YOUR_SESSION_ID" \
-d '{"jsonrpc":"2.0","method":"tools/call","params":{"name":"tool_name"},"id":2}'
When using --require-api-auth, include your API key as a Bearer token:
curl -X POST http://localhost:12006/mcp \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-H "Accept: application/json, text/event-stream" \
-d '{"jsonrpc":"2.0","method":"ping","id":1}'
You can also build and run the proxy server using Docker.
Ensure you have Docker installed and running. Navigate to the pluggedin-mcp directory and run:
docker build -t pluggedin-mcp-proxy:latest .
A .dockerignore file is included to optimize the build context.
Run the container in STDIO mode for MCP Inspector testing:
docker run -it --rm \
-e PLUGGEDIN_API_KEY="YOUR_API_KEY" \
-e PLUGGEDIN_API_BASE_URL="YOUR_API_BASE_URL" \
--name pluggedin-mcp-container \
pluggedin-mcp-proxy:latest
Run the container as an HTTP server:
docker run -d --rm \
-e PLUGGEDIN_API_KEY="YOUR_API_KEY" \
-e PLUGGEDIN_API_BASE_URL="YOUR_API_BASE_URL" \
-p 12006:12006 \
--name pluggedin-mcp-http \
pluggedin-mcp-proxy:latest \
--transport streamable-http --port 12006
Replace YOUR_API_KEY and YOUR_API_BASE_URL (if not using the default https://plugged.in).
While the container is running, you can connect to it using the MCP Inspector:
npx @modelcontextprotocol/inspector docker://pluggedin-mcp-container
This will connect to the standard input/output of the running container.
Press Ctrl+C in the terminal where docker run is executing. The --rm flag ensures the container is removed automatically upon stopping.
Deploy the plugged.in MCP Proxy to Smithery Cloud for hosted, always-available access to your MCP servers.
Deployment uses the pluggedin-mcp repository and its smithery.yaml configuration. For complete deployment instructions, configuration options, troubleshooting, and technical details, see the Smithery deployment documentation.
The hub is designed to support agentic loops end‑to‑end:
MCP Client → plugged.in MCP Hub → (Plan → Act → Reflect)
↘ Knowledge ↘ Memory ↘ Tools
Safety & Ops
Enable --require-api-auth in Streamable HTTP mode; use allowlists for commands, arguments, and env. Combine server‑level validation with client‑side prompts hardened against prompt‑injection. Leverage existing logging/telemetry to track tool usage and document mutations.
The plugged.in MCP Proxy Server acts as a bridge between MCP clients and multiple underlying MCP servers:
Smart discovery (pluggedin_discover_tools):

- With force_refresh=true, runs discovery in the background while showing current tools

Request routing:

- `tools/list`: Fetches from `/api/tools` (includes static + dynamic tools)
- `resources/list`: Fetches from `/api/resources`
- `resource-templates/list`: Fetches from `/api/resource-templates`
- `prompts/list`: Fetches from `/api/prompts` and `/api/custom-instructions`, merges results
- `tools/call`: Parses the prefix from the tool name, looks up the server in an internal map
- `resources/read`: Calls `/api/resolve/resource?uri=...` to get server details
- `prompts/get`: Checks for a custom instruction prefix or calls `/api/resolve/prompt?name=...`

The plugged.in MCP Proxy implements comprehensive security measures to protect your system and data:
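The prefix-based lookup for tools/call can be sketched like this. The actual separator and the shape of the internal server map are implementation details not documented here; a double-underscore separator and the example UUIDs below are assumptions purely for illustration.

```python
# Hypothetical server map: prefix -> downstream server identifier.
SERVER_MAP = {
    "postgres": "uuid-postgres-server",
    "random": "uuid-random-server",
}


def route_tool_call(prefixed_name: str) -> tuple[str, str]:
    """Split a prefixed tool name and look up the owning downstream server."""
    prefix, _, tool = prefixed_name.partition("__")
    if not tool or prefix not in SERVER_MAP:
        raise KeyError(f"no server registered for tool {prefixed_name!r}")
    return SERVER_MAP[prefix], tool
```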
- Parses `.env` files with proper handling of quotes and multiline values
- Uses `execFile()` instead of `exec()` to prevent shell injection
- Allowlisted commands:
  - `node`, `npx` - Node.js commands
  - `python`, `python3` - Python commands
  - `uv`, `uvx`, `uvenv` - UV Python tools

A dedicated security-utils.ts module centralizes these protections.
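The command-allowlist idea can be sketched in a few lines. This is illustrative only, not the actual security-utils.ts logic; the function name is an assumption.

```python
# Commands the proxy is willing to spawn for downstream STDIO servers,
# mirroring the allowlist described above.
ALLOWED_COMMANDS = {"node", "npx", "python", "python3", "uv", "uvx", "uvenv"}


def is_allowed(command: str) -> bool:
    """Reject any command outside the allowlist before spawning a child process."""
    return command in ALLOWED_COMMANDS
```

Combined with spawning via an execFile()-style API (no shell involved), this keeps arbitrary commands and shell metacharacters from reaching the host.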
For detailed security implementation, see SECURITY.md.
The plugged.in MCP Proxy Server is designed to work seamlessly with the plugged.in App, which provides:
Contributions are welcome! Please feel free to submit a Pull Request.
- Uses the `sanitize-html` library for robust HTML content filtering
- `/health` endpoint for service monitoring

See Release Notes for complete details.
Tests are included for development purposes but are excluded from Docker builds to minimize the container footprint.
# Run tests locally
npm test
# or
./scripts/test-local.sh
# Run tests in watch mode
npm run test:watch
# Run tests with UI
npm run test:ui
The Docker image is optimized for minimal footprint:
# Build optimized Docker image
docker build -t pluggedin-mcp .
# Check image size
docker images pluggedin-mcp
This project is licensed under the MIT License - see the LICENSE file for details.
{
  "mcpServers": {
    "pluggedin": {
      "command": "npx",
      "args": [
        "-y",
        "@pluggedin/pluggedin-mcp-proxy@latest"
      ],
      "env": {
        "PLUGGEDIN_API_KEY": "<YOUR_API_KEY>"
      }
    }
  }
}

Or add it via the Claude Code CLI:

claude mcp add pluggedin npx -y @pluggedin/pluggedin-mcp-proxy@latest