by VeriTeknik
Provides a self‑hostable web interface that unifies discovery, configuration, and management of MCP servers while turning AI conversations into searchable, versioned organizational knowledge.
Plugged.in is a unified platform that aggregates dozens of MCP servers, captures every AI interaction, and stores it as versioned, searchable documents. It acts like a Git‑style content management system for AI‑generated output, offering persistent memory, multi‑model attribution, and semantic RAG capabilities.
Q: How do I install Plugged.in?
A: Clone the repository, copy .env.example to .env, and start the stack with Docker Compose (docker compose up --build -d).
Q: Do I need to run a separate MCP server?
A: No. Plugged.in includes an MCP Proxy that aggregates your existing MCP servers. You only need to provide API keys for the models you use.
Q: Can I run Plugged.in without Docker?
A: Yes. Install Node.js 18+, PostgreSQL, and Redis, then follow the manual installation steps (pnpm install, pnpm build, pnpm start).
Q: How is data secured?
A: All sensitive data is encrypted with AES-256-GCM. Each workspace gets its own encryption key, and all communication uses TLS. Rate limiting and sandboxed execution protect against abuse.
Q: Which AI models are supported?
A: Any model that speaks the Model Context Protocol, including Claude, GPT-4, Gemini, and community-hosted servers.
Q: How does RAG work?
A: Documents are vectorized on ingest; queries are embedded and matched against the vector store, and the most relevant passages are injected into the model prompt. A sketch of the retrieval step follows this FAQ.
Q: Is there a free tier?
A: The hosted version offers a free tier with storage and request limits. Self-hosting is unrestricted aside from your own infrastructure costs.
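In code terms, the retrieval step from the FAQ looks roughly like the sketch below. The `embed` function and the in-memory store are illustrative stand-ins for whatever embedding model and vector database a deployment actually uses; none of this is plugged.in's internal API.

```typescript
// Minimal RAG retrieval sketch: embed the query, rank stored chunks by
// cosine similarity, and return the top passages for prompt injection.
type Chunk = { text: string; vector: number[] };

// Stand-in for an embedding model call (hypothetical).
declare function embed(text: string): Promise<number[]>;

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function retrieve(query: string, store: Chunk[], k = 3): Promise<string[]> {
  const q = await embed(query);
  return store
    .map((c) => ({ text: c.text, score: cosine(q, c.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)          // keep the k most relevant passages
    .map((c) => c.text);  // these get injected into the model prompt
}
```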

Turn your AI conversations into permanent organizational memory
🚀 Get Started • 📚 Documentation • 🌟 Features • 💬 Community
Every day, you have brilliant conversations with AI - strategy sessions with GPT-4, code reviews with Claude, analysis with Gemini. But when you close that chat window, all that knowledge vanishes. This is the "AI knowledge evaporation" problem.
plugged.in is the world's first AI Content Management System (AI-CMS) - a platform that transforms ephemeral AI interactions into persistent, versioned, and searchable organizational knowledge.
Think of it as "Git for AI-generated content" meets "WordPress for AI interactions".
Your AI conversations become permanent assets. Every document is versioned, attributed, and searchable.
Claude writes v1, GPT-4 adds technical specs in v2, Gemini refines in v3 - all tracked and attributed (a sketch of what such a record might look like follows this list).
Works with 1,500+ MCP servers. Connect any tool, any AI, any workflow - all through one interface.
End-to-end encryption, OAuth 2.1, rate limiting, and sandboxed execution for your peace of mind.
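As a mental model for the versioning and attribution described above, each managed document can be thought of as a record like the following. This is a hypothetical shape for illustration, not plugged.in's actual schema.

```typescript
// Hypothetical shape of a versioned, attributed document record.
interface DocumentVersion {
  version: number;   // 1, 2, 3, ... Git-style history
  content: string;
  model: string;     // e.g. "claude", "gpt-4", "gemini" - who wrote this iteration
  createdAt: Date;
}

interface ManagedDocument {
  id: string;
  title: string;
  versions: DocumentVersion[]; // full, searchable history with per-version attribution
}
```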
Documents Managed: 90+ (72% AI-generated)
Integrated MCP Servers: 1,568
Active Versioning: Documents with up to 4 iterations
Model Attributions: 17 different AI models tracked
Search Performance: Sub-second RAG queries
Security: AES-256-GCM encryption, Redis rate limiting
# Clone and setup
git clone https://github.com/VeriTeknik/pluggedin-app.git
cd pluggedin-app
cp .env.example .env
# Start with Docker
docker compose up --build -d
# Visit http://localhost:12005
Visit plugged.in for instant access - no installation required.

plugged.in acts as the central hub connecting various AI clients, development tools, and programming languages with your knowledge base and the broader MCP ecosystem. The architecture is designed for maximum flexibility and extensibility.
The MCP Proxy serves as a unified gateway that aggregates multiple MCP servers into a single interface:
How it works: Each AI client connects via STDIO and gains access to all of your configured MCP servers through a single connection. The proxy handles request routing, response aggregation, RAG context injection, and activity logging.
Direct programmatic access through official SDKs:
- TypeScript/JavaScript (@pluggedin/sdk) - Full-featured SDK for Node.js and the browser
- Python (pluggedin-sdk) - Pythonic interface for AI workflows
- Go (pluggedin-go) - High-performance Go implementation

Use Cases:
// Hypothetical client setup - the exact constructor name may differ; see the SDK docs
import { PluggedIn } from "@pluggedin/sdk";
const client = new PluggedIn({ apiKey: process.env.PLUGGEDIN_API_KEY });

// Create documents programmatically
const doc = await client.documents.create({
  title: "API Analysis",
  content: "...",
  source: "api"
});

// Query the RAG knowledge base
const results = await client.rag.query("How do we handle auth?");
The plugged.in web platform provides:
- Knowledge Base (RAG)
- Document Store
- MCP Registry
- Tools Management
Native connectors are direct integrations that bypass the MCP Proxy for better performance. They also provide persistent memory across sessions, so context captured in one conversation remains available in the next.
User Request Flow:
1. User asks question in Claude Desktop
2. MCP Proxy receives request
3. Proxy checks RAG for relevant context
4. Combines context + user question
5. Routes to appropriate MCP servers
6. Aggregates responses
7. Logs activity to database
8. Returns enriched response to user
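The same pipeline, compressed into a sketch. The RAG store, server registry, and logger interfaces here are assumptions for illustration, not the proxy's real internals:

```typescript
// Illustrative proxy pipeline for a single user request (steps 2-8 above).
interface McpServer {
  handles(question: string): boolean;
  call(prompt: string): Promise<string>;
}

async function handleRequest(
  question: string,                                     // 1-2. request arrives at the proxy
  rag: { query: (q: string) => Promise<string[]> },
  servers: McpServer[],
  log: (event: object) => Promise<void>,
): Promise<string> {
  const context = await rag.query(question);            // 3. check RAG for relevant context
  const prompt = [...context, question].join("\n\n");   // 4. combine context + user question
  const targets = servers.filter((s) => s.handles(question)); // 5. route to matching servers
  const responses = await Promise.all(targets.map((s) => s.call(prompt))); // 6. aggregate
  await log({ question, serversUsed: targets.length }); // 7. log activity to the database
  return responses.join("\n");                          // 8. return the enriched response
}
```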
Document Creation Flow:
1. AI generates document via SDK
2. Content processed and sanitized
3. Model attribution recorded
4. Version created in Document Store
5. Vectors generated for RAG
6. Document searchable immediately
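And the creation flow as a sketch, with the sanitizer, version counter, and vectorizer as hypothetical stand-ins for the platform's internals:

```typescript
// Illustrative ingest pipeline mirroring steps 1-6 above.
interface NewDocument {
  title: string;
  content: string;
  model: string; // attribution: which AI generated this content
}

async function ingest(
  doc: NewDocument,                                  // 1. document arrives via SDK
  deps: {
    sanitize: (raw: string) => string;
    nextVersion: (title: string) => Promise<number>;
    saveVersion: (d: NewDocument, version: number) => Promise<void>;
    vectorize: (text: string) => Promise<void>;
  },
): Promise<void> {
  const clean = deps.sanitize(doc.content);          // 2. process and sanitize content
  const version = await deps.nextVersion(doc.title); // 3-4. record attribution, bump version
  await deps.saveVersion({ ...doc, content: clean }, version);
  await deps.vectorize(clean);                       // 5. generate vectors for RAG
  // 6. the document is immediately searchable
}
```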
All connections use TLS, sensitive data is encrypted at rest with AES-256-GCM (sketched below), and Redis-backed rate limiting plus sandboxed execution guard against abuse. The architecture is designed to remain flexible and extensible as the MCP ecosystem grows.
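For reference, authenticated encryption with AES-256-GCM looks like this in Node's built-in crypto module. This is a generic sketch, not plugged.in's actual key-management code; in the real system each workspace would hold its own key.

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "crypto";

// AES-256-GCM: a fresh 96-bit IV per message plus an auth tag that lets
// the decryptor detect tampering.
function encrypt(plaintext: string, key: Buffer) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decrypt(box: { iv: Buffer; ciphertext: Buffer; tag: Buffer }, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag); // decryption throws if the data was tampered with
  return Buffer.concat([decipher.update(box.ciphertext), decipher.final()]).toString("utf8");
}

// Usage: a 32-byte key per workspace.
const key = randomBytes(32);
console.log(decrypt(encrypt("workspace secret", key), key)); // "workspace secret"
```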
Visit our comprehensive documentation at docs.plugged.in
Create a .env file with:
# Core (Required)
DATABASE_URL=postgresql://user:pass@localhost:5432/pluggedin
NEXTAUTH_URL=http://localhost:12005
NEXTAUTH_SECRET=your-secret-key # Generate: openssl rand -base64 32
# Security (Required)
NEXT_SERVER_ACTIONS_ENCRYPTION_KEY= # Generate: openssl rand -base64 32
# Features (Optional)
ENABLE_RAG=true
ENABLE_NOTIFICATIONS=true
ENABLE_EMAIL_VERIFICATION=true
REDIS_URL=redis://localhost:6379 # For Redis rate limiting
# Email (For notifications)
EMAIL_SERVER_HOST=smtp.example.com
EMAIL_SERVER_PORT=587
EMAIL_FROM=noreply@example.com
# Performance (Optional)
RAG_CACHE_TTL_MS=60000 # Cache TTL in milliseconds
# Install dependencies
pnpm install
# Setup database
pnpm db:migrate:auth
pnpm db:generate
pnpm db:migrate
# Build for production
NODE_ENV=production pnpm build
# Start the server
pnpm start
Connect your AI clients to plugged.in:
{
"mcpServers": {
"pluggedin": {
"command": "npx",
"args": ["-y", "@pluggedin/pluggedin-mcp-proxy@latest"],
"env": {
"PLUGGEDIN_API_KEY": "YOUR_API_KEY"
}
}
}
}
Or run the proxy directly from the command line:
npx -y @pluggedin/pluggedin-mcp-proxy@latest --pluggedin-api-key YOUR_API_KEY
With Claude Code, you can register the proxy in one step:
claude mcp add pluggedin npx -y @pluggedin/pluggedin-mcp-proxy@latest
| Feature | plugged.in | Traditional AI Chat | MCP Clients Alone |
|---|---|---|---|
| Persistent Memory | ✅ Full versioning | ❌ Session only | ❌ No storage |
| Multi-Model Support | ✅ All models | ⚠️ Single vendor | ✅ Multiple |
| Document Management | ✅ Complete CMS | ❌ None | ❌ None |
| Attribution Tracking | ✅ Full audit trail | ❌ None | ❌ None |
| Team Collaboration | ✅ Built-in | ❌ None | ❌ Limited |
| Self-Hostable | ✅ Yes | ⚠️ Varies | ✅ Yes |
| RAG Integration | ✅ Native | ⚠️ Limited | ❌ None |
We love contributions! See our Contributing Guide for details.
# Fork the repo, then:
git clone https://github.com/YOUR_USERNAME/pluggedin-app.git
cd pluggedin-app
pnpm install
pnpm dev
MIT License - see LICENSE for details.
Built on top of amazing projects including the Model Context Protocol, Next.js, PostgreSQL, and Redis.
Latest Release: v2.12.0 - Enhanced Security & Performance
View the full changelog and release notes at docs.plugged.in/releases
Ready to give your AI permanent memory?
🚀 Start Now • ⭐ Star on GitHub
If you find plugged.in useful, please star the repo - it helps others discover the project!
Explore related MCPs that share similar capabilities and solve comparable challenges:
by modelcontextprotocol
A basic implementation of persistent memory using a local knowledge graph. This lets Claude remember information about the user across chats.
by topoteretes
Provides dynamic memory for AI agents through modular ECL (Extract, Cognify, Load) pipelines, enabling seamless integration with graph and vector stores using minimal code.
by basicmachines-co
Enables persistent, local‑first knowledge management by allowing LLMs to read and write Markdown files during natural conversations, building a traversable knowledge graph that stays under the user’s control.
by smithery-ai
Provides read and search capabilities for Markdown notes in an Obsidian vault for Claude Desktop and other MCP clients.
by chatmcp
Summarize chat messages by querying a local chat database and returning concise overviews.
by dmayboroda
Provides on‑premises conversational retrieval‑augmented generation (RAG) with configurable Docker containers, supporting fully local execution, ChatGPT‑based custom GPTs, and Anthropic Claude integration.
by qdrant
Provides a Model Context Protocol server that stores and retrieves semantic memories using Qdrant vector search, acting as a semantic memory layer.
by doobidoo
Provides a universal memory service with semantic search, intelligent memory triggers, OAuth‑enabled team collaboration, and multi‑client support for Claude Desktop, Claude Code, VS Code, Cursor and over a dozen AI applications.
by GreatScottyMac
Provides a project‑specific memory bank that stores decisions, progress, architecture, and custom data, exposing a structured knowledge graph via MCP for AI assistants and IDE tools.