by Abiorh001
Provides a complete AI agent platform that combines the OmniAgent builder, a local tools system, autonomous background agents, multi‑tier memory with vector search, real‑time event streaming, and a universal MCP client usable via CLI or embeddable library.
MCPOmni Connect delivers an end‑to‑end AI ecosystem. It lets developers create custom agents with the OmniAgent builder, register Python functions as tools, run self‑flying background agents, and store short‑ and long‑term context in Redis, databases, or vector stores. The same runtime also acts as a feature‑rich MCP client that can talk to any Model Context Protocol server through stdio, SSE, HTTP, Docker, or NPX transports.
# Recommended
uv add mcpomni-connect
# Or with pip
pip install mcpomni-connect
Create a .env file with at least one LLM API key:
LLM_API_KEY=your_provider_key
Optional variables enable Redis, vector DB, or Opik tracing (see README).
python examples/basic.py             # Simple MCP client
python examples/omni_agent_example.py # Full OmniAgent demo
python examples/web_server.py # FastAPI web UI (http://localhost:8000)
Create agents with OmniAgent in your code, passing custom tool registries and MCP server definitions. The @tool_registry.register_tool decorator turns any Python function into an AI‑accessible tool.
Domain | Example |
---|---|
Automation | Build a background agent that monitors a directory, extracts new PDFs, and summarizes them using a vector‑backed memory. |
Enterprise tooling | Combine internal MCP services (file system, code repo, issue tracker) with custom analysis tools to create an AI‑assistant for DevOps. |
Customer support | Deploy a chat‑based agent that can query knowledge‑base resources, retrieve relevant documents from a vector DB, and execute privileged internal tools after user approval. |
Data enrichment | Use the multi‑tier memory to store embeddings of large datasets; agents can later perform semantic search and generate insights on demand. |
Rapid prototyping | Developers can spin up the CLI, connect to any MCP server, and experiment with tool orchestration without writing boilerplate code. |
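To make one of these concrete, here is a hedged sketch of the customer‑support row using the OmniAgent constructor documented later in this guide; the knowledge‑base server endpoint and token are hypothetical placeholders:

from mcpomni_connect.omni_agent import OmniAgent
from mcpomni_connect.memory_store.memory_router import MemoryRouter
from mcpomni_connect.events.event_router import EventRouter

# Hypothetical knowledge-base MCP server; swap in your real endpoint and token.
support_agent = OmniAgent(
    name="support_agent",
    system_instruction="Answer from the knowledge base; ask before using privileged tools.",
    model_config={"provider": "openai", "model": "gpt-4o", "temperature": 0.3},
    mcp_tools=[{
        "name": "kb-server",
        "transport_type": "streamable_http",
        "url": "http://localhost:8080/mcp",
        "headers": {"Authorization": "Bearer your-token"},
    }],
    memory_store=MemoryRouter(memory_store_type="redis"),
    event_router=EventRouter(event_store_type="in_memory"),
)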
Q: Do I need a vector database to use the platform?
A: No. The platform works with in‑memory or Redis memory out‑of‑the‑box. Enabling ENABLE_VECTOR_DB=true activates long‑term semantic memory via Qdrant or ChromaDB.
Q: How do I add a new MCP server at runtime?
A: Use the CLI command /add_servers:<path/to/config.json> or call agent.add_server(...) in code. The JSON follows the same schema shown in the README.
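A hedged sketch of the in‑code route: the exact add_server signature isn't documented here, so this assumes it accepts a dict in the servers_config.json schema and may need to be awaited:

# Assumption: add_server takes a dict shaped like the servers_config.json
# entries shown later in this guide; adjust if the real signature differs.
await agent.add_server({
    "new-server": {
        "transport_type": "streamable_http",
        "auth": {"method": "oauth"},
        "url": "http://localhost:8000/mcp",
    }
})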
Q: Can I run the agent without internet access?
A: Yes, if you use a locally hosted LLM (Ollama) and a local vector store. Only the LLM provider key is required for remote models.
Q: What authentication methods are supported?
A: OAuth 2.0 (auto‑starts a callback server on http://localhost:3000), bearer tokens via the Authorization header, and arbitrary custom headers.
Q: How is usage billed?
A: Billing is determined by the underlying LLM provider. The optional /api_stats command shows total tokens, requests, and costs if you configure provider‑specific cost tracking.
Q: How do I enable tracing?
A: Set OPIK_API_KEY and OPIK_WORKSPACE in .env. The library automatically decorates LLM calls, tool executions, and memory operations with Opik spans.
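The same @track mechanism can wrap your own helpers. A minimal sketch, assuming the opik package is installed and the credentials above are set:

from opik import track

@track  # records inputs, outputs, and latency as a span in your Opik workspace
def summarize(text: str) -> str:
    return text[:100]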
MCPOmni Connect is the complete AI platform that evolved from a world-class MCP client into a revolutionary ecosystem. It now includes OmniAgent - the ultimate AI agent builder born from MCPOmni Connect's powerful foundation. Build production-ready AI agents, use the advanced MCP CLI, or combine both for maximum power.
New to MCPOmni Connect? Get started in 2 minutes:
# Install with uv (recommended)
uv add mcpomni-connect
# Or with pip
pip install mcpomni-connect
# Create .env file with your LLM API key
echo "LLM_API_KEY=your_openai_api_key_here" > .env
# Try the basic MCP client
python examples/basic.py
# Or try OmniAgent with custom tools
python examples/omni_agent_example.py
# Or use the advanced MCP CLI
python examples/run.py
➡️ Next: Check out Examples or jump to Configuration Guide
Born from MCPOmni Connect's foundation - create intelligent, autonomous agents with:
Advanced command-line interface for connecting to any Model Context Protocol server with:
🎯 Perfect for: Developers who want the complete AI ecosystem - build custom agents AND have world-class MCP connectivity.
🌟 Introducing OmniAgent - A revolutionary AI agent system that brings plug-and-play intelligence to your applications!
Register custom tools with @tool_registry.register_tool("tool_name"); see run_omni_agent.py for 12+ EXAMPLE tool registration patterns.
# Basic MCP client usage - Simple connection patterns
python examples/basic.py
# Advanced MCP CLI - Full-featured client interface
python examples/run.py
# Complete OmniAgent demo - All features showcase
python examples/omni_agent_example.py
# Advanced OmniAgent patterns - Study 12+ tool examples
python examples/run_omni_agent.py
# Self-flying background agents - Autonomous task execution
python examples/background_agent_example.py
# FastAPI implementation - Clean API endpoints
python examples/fast_api_iml.py
# Web server with UI - Interactive interface for OmniAgent
python examples/web_server.py
# Open http://localhost:8000 for web interface
All LLM provider examples are consolidated in:
# See examples/llm_usage-config.json for:
# - Anthropic Claude models
# - Groq ultra-fast inference
# - Azure OpenAI enterprise
# - Ollama local models
# - OpenRouter 200+ models
# - And more providers...
🚀 Want to start building right away? Jump to Quick Start | Examples | Configuration
- Memory backends: /memory_store:redis, /memory_store:database:postgresql://user:pass@host/db
- Memory strategies: /memory_mode:sliding_window:5, /memory_mode:token_budget:3000
- Long-term vector memory: ENABLE_VECTOR_DB=true
- Event stores: /event_store:redis_stream, /event_store:in_memory
- Opik tracing via @track decorators
📚 Prefer hands-on learning? Skip to Examples or Configuration
MCPOmni Connect Platform
├── 🤖 OmniAgent System (Revolutionary Agent Builder)
│ ├── Local Tools Registry
│ ├── Background Agent Manager
│ ├── Custom Agent Creation
│ └── Agent Orchestration Engine
├── 🔌 Universal MCP Client (World-Class CLI)
│ ├── Transport Layer (stdio, SSE, HTTP, Docker, NPX)
│ ├── Multi-Server Orchestration
│ ├── Authentication & Security
│ └── Connection Lifecycle Management
├── 🧠 Shared Memory System (Both Systems)
│ ├── Multi-Backend Storage (Redis, DB, In-Memory)
│ ├── Vector Database Integration (ChromaDB, Qdrant)
│ ├── Memory Strategies (Sliding Window, Token Budget)
│ └── Session Management
├── 📡 Event System (Both Systems)
│ ├── In-Memory Event Processing
│ ├── Redis Streams for Persistence
│ └── Real-Time Event Monitoring
├── 🛠️ Tool Management (Both Systems)
│ ├── Dynamic Tool Discovery
│ ├── Cross-Server Tool Routing
│ ├── Local Python Tool Registration
│ └── Tool Execution Engine
└── 🤖 AI Integration (Both Systems)
├── LiteLLM (100+ Models)
├── Context Management
├── ReAct Agent Processing
└── Response Generation
Required:
Optional (for advanced features):
# Option 1: UV (recommended - faster)
uv add mcpomni-connect
# Option 2: Pip (standard)
pip install mcpomni-connect
Minimal setup (get started immediately):
# Just set your API key - that's it!
echo "LLM_API_KEY=your_api_key_here" > .env
Advanced setup (optional features):
📖 Need more options? See the complete Configuration Guide below for all environment variables, vector database setup, memory configuration, and advanced features.
Path A: Build Custom Agents (OmniAgent)
python examples/omni_agent_example.py
Path B: Advanced MCP Client (CLI)
python examples/run.py
Path C: Web Interface
python examples/web_server.py
# Open http://localhost:8000
⚡ Quick Setup: Only need LLM_API_KEY to get started! | 🔍 Detailed Setup: Vector DB | Tracing
Create a .env file with your configuration. Only the LLM API key is required - everything else is optional for advanced features.
# ===============================================
# REQUIRED: AI Model API Key (Choose one provider)
# ===============================================
LLM_API_KEY=your_openai_api_key_here
# OR for other providers:
# LLM_API_KEY=your_anthropic_api_key_here
# LLM_API_KEY=your_groq_api_key_here
# LLM_API_KEY=your_azure_openai_api_key_here
# See examples/llm_usage-config.json for all provider configs
# ===============================================
# Tracing & Observability (OPTIONAL) - NEW!
# ===============================================
# For advanced monitoring and performance optimization
# 🔗 Sign up: https://www.comet.com/signup?from=llm
OPIK_API_KEY=your_opik_api_key_here
OPIK_WORKSPACE=your_opik_workspace_name
# ===============================================
# Vector Database (OPTIONAL) - Smart Memory
# ===============================================
# ⚠️ Warning: 30-60s startup time for sentence transformer
# ⚠️ IMPORTANT: You MUST choose a provider - no local fallback
ENABLE_VECTOR_DB=true # Default: false
# Choose ONE provider (required if ENABLE_VECTOR_DB=true):
# Option 1: Qdrant Remote (RECOMMENDED)
OMNI_MEMORY_PROVIDER=qdrant-remote
QDRANT_HOST=localhost
QDRANT_PORT=6333
# Option 2: ChromaDB Remote
# OMNI_MEMORY_PROVIDER=chroma-remote
# CHROMA_HOST=localhost
# CHROMA_PORT=8000
# Option 3: ChromaDB Cloud
# OMNI_MEMORY_PROVIDER=chroma-cloud
# CHROMA_TENANT=your_tenant
# CHROMA_DATABASE=your_database
# CHROMA_API_KEY=your_api_key
# ===============================================
# Persistent Memory Storage (OPTIONAL)
# ===============================================
# These have sensible defaults - only set if you need custom configuration
# Redis - for memory_store_type="redis" (defaults to: redis://localhost:6379/0)
# REDIS_URL=redis://your-remote-redis:6379/0
# REDIS_URL=redis://:password@localhost:6379/0 # With password
# Database - for memory_store_type="database" (defaults to: sqlite:///mcpomni_memory.db)
# DATABASE_URL=postgresql://user:password@localhost:5432/mcpomni
# DATABASE_URL=mysql://user:password@localhost:3306/mcpomni
💡 Quick Start: Just set LLM_API_KEY and you're ready to go! Add other variables only when you need advanced features.
The server configuration file (servers_config.json) defines MCP server connections and agent settings.
MCPOmni Connect supports multiple ways to connect to MCP servers:
Use when: Connecting to local MCP servers that run as separate processes
{
"server-name": {
"transport_type": "stdio",
"command": "uvx",
"args": ["mcp-server-package"]
}
}
Use when: Connecting to HTTP-based MCP servers using Server-Sent Events
{
"server-name": {
"transport_type": "sse",
"url": "http://your-server.com:4010/sse",
"headers": {
"Authorization": "Bearer your-token"
},
"timeout": 60,
"sse_read_timeout": 120
}
}
Use when: Connecting to HTTP-based MCP servers with or without OAuth
Without OAuth (Bearer Token):
{
"server-name": {
"transport_type": "streamable_http",
"url": "http://your-server.com:4010/mcp",
"headers": {
"Authorization": "Bearer your-token"
},
"timeout": 60
}
}
With OAuth:
{
"server-name": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://your-server.com:4010/mcp"
}
}
Important: When using OAuth authentication, MCPOmni Connect automatically starts an OAuth callback server. You will see:
🖥️ Started callback server on http://localhost:3000
- The callback URL http://localhost:3000 is hardcoded and cannot be changed.
- The callback server starts whenever "auth": {"method": "oauth"} appears in your config.
- If your server does not use OAuth, remove the "auth" section from your server configuration and use "headers" with "Authorization": "Bearer token" instead.
Possible Causes & Solutions:
Wrong Transport Type
Problem: Your server expects 'stdio' but you configured 'streamable_http'
Solution: Check your server's documentation for the correct transport type
OAuth Configuration Mismatch
Problem: Your server doesn't support OAuth but you have "auth": {"method": "oauth"}
Solution: Remove the "auth" section entirely and use headers instead:
"headers": {
"Authorization": "Bearer your-token"
}
Server Not Running
Problem: The MCP server at the specified URL is not running
Solution: Start your MCP server first, then connect with MCPOmni Connect
Wrong URL or Port
Problem: URL in config doesn't match where your server is running
Solution: Verify the server's actual address and port
Yes, this is completely normal when you include "auth": {"method": "oauth"} in any server configuration. If you don't want the OAuth server, remove "auth": {"method": "oauth"} from all server configurations.
{
"mcpServers": {
"local-tools": {
"transport_type": "stdio",
"command": "uvx",
"args": ["mcp-server-tools"]
}
}
}
{
"mcpServers": {
"remote-api": {
"transport_type": "streamable_http",
"url": "http://api.example.com:8080/mcp",
"headers": {
"Authorization": "Bearer abc123token"
}
}
}
}
{
"mcpServers": {
"oauth-server": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://oauth-server.com:8080/mcp"
}
}
}
Start the CLI - ensure your API key is exported or create a .env file:
# Basic MCP client
python examples/basic.py
# Or advanced MCP CLI
python examples/run.py
# Run all tests with verbose output
pytest tests/ -v
# Run specific test file
pytest tests/test_specific_file.py -v
# Run tests with coverage report
pytest tests/ --cov=src --cov-report=term-missing
tests/
├── unit/ # Unit tests for individual components
Installation
# Clone the repository
git clone https://github.com/Abiorh001/mcp_omni_connect.git
cd mcp_omni_connect
# Create and activate virtual environment
uv venv
source .venv/bin/activate
# Install dependencies
uv sync
Configuration
# Set up environment variables
echo "LLM_API_KEY=your_api_key_here" > .env
# Configure your servers in servers_config.json
Start Client
uv run examples/run.py
Or:
python examples/run.py
Use Case | Choose | Best For |
---|---|---|
Build custom AI apps | OmniAgent | Web apps, automation, custom workflows |
Connect to MCP servers | MCP CLI | Daily workflow, server management, debugging |
Learn & experiment | Examples | Understanding patterns, proof of concepts |
Production deployment | Both | Full-featured AI applications |
Perfect for: Custom applications, automation, web apps
# Study the examples to learn patterns:
python examples/basic.py # Simple MCP client
python examples/omni_agent_example.py # Complete OmniAgent demo
python examples/background_agent_example.py # Self-flying agents
python examples/web_server.py # Web interface
# Then build your own using the patterns!
Perfect for: Daily workflow, server management, debugging
# Basic MCP client - Simple connection patterns
python examples/basic.py
# World-class MCP client with advanced features
python examples/run.py
# Features: Connect to MCP servers, agentic modes, advanced memory
Perfect for: Learning, understanding patterns, experimentation
# Comprehensive testing interface - Study 12+ EXAMPLE tools
python examples/run_omni_agent.py --mode cli
# Study this file to see tool registration patterns and CLI features
# Contains many examples of how to create custom tools
💡 Pro Tip: Most developers use both paths - the MCP CLI for daily workflow and OmniAgent for building custom solutions!
One of OmniAgent's most powerful features is the ability to register your own Python functions as AI tools. The agent can then intelligently use these tools to complete tasks.
from mcpomni_connect.agents.tools.local_tools_registry import ToolRegistry
# Create tool registry
tool_registry = ToolRegistry()
# Register your custom tools with simple decorator
@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
"""Calculate the area of a rectangle."""
area = length * width
return f"Area of rectangle ({length} x {width}): {area} square units"
@tool_registry.register_tool("analyze_text")
def analyze_text(text: str) -> str:
"""Analyze text and return word count and character count."""
words = len(text.split())
chars = len(text)
return f"Analysis: {words} words, {chars} characters"
@tool_registry.register_tool("system_status")
def get_system_status() -> str:
"""Get current system status information."""
import platform
import time
return f"System: {platform.system()}, Time: {time.strftime('%Y-%m-%d %H:%M:%S')}"
# Use tools with OmniAgent
agent = OmniAgent(
name="my_agent",
local_tools=tool_registry, # Your custom tools!
# ... other config
)
# Now the AI can use your tools!
result = await agent.run("Calculate the area of a 10x5 rectangle and tell me the current system time")
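Since agent.run is a coroutine, a standalone script needs an event loop; a minimal sketch driving the agent defined above:

import asyncio

async def main():
    result = await agent.run(
        "Calculate the area of a 10x5 rectangle and tell me the current system time"
    )
    print(result)

asyncio.run(main())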
No built-in tools - You create exactly what you need! Study these EXAMPLE patterns from run_omni_agent.py:
Mathematical Tools Examples:
@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
area = length * width
return f"Area: {area} square units"
@tool_registry.register_tool("analyze_numbers")
def analyze_numbers(numbers: str) -> str:
num_list = [float(x.strip()) for x in numbers.split(",")]
return f"Count: {len(num_list)}, Average: {sum(num_list)/len(num_list):.2f}"
System Tools Examples:
@tool_registry.register_tool("system_info")
def get_system_info() -> str:
import platform
return f"OS: {platform.system()}, Python: {platform.python_version()}"
File Tools Examples:
@tool_registry.register_tool("list_files")
def list_directory(path: str = ".") -> str:
import os
files = os.listdir(path)
return f"Found {len(files)} items in {path}"
1. Simple Function Tools:
@tool_registry.register_tool("weather_check")
def check_weather(city: str) -> str:
"""Get weather information for a city."""
# Your weather API logic here
return f"Weather in {city}: Sunny, 25°C"
2. Complex Analysis Tools:
@tool_registry.register_tool("data_analysis")
def analyze_data(data: str, analysis_type: str = "summary") -> str:
"""Analyze data with different analysis types."""
import json
try:
data_obj = json.loads(data)
if analysis_type == "summary":
return f"Data contains {len(data_obj)} items"
elif analysis_type == "detailed":
# Complex analysis logic
return "Detailed analysis results..."
except:
return "Invalid data format"
3. File Processing Tools:
@tool_registry.register_tool("process_file")
def process_file(file_path: str, operation: str) -> str:
    """Process files with different operations."""
    try:
        if operation == "read":
            with open(file_path, 'r') as f:
                content = f.read()
            return f"File content (first 100 chars): {content[:100]}..."
        elif operation == "count_lines":
            with open(file_path, 'r') as f:
                lines = len(f.readlines())
            return f"File has {lines} lines"
        else:
            return f"Unknown operation: {operation}"
    except Exception as e:
        return f"Error processing file: {e}"
MCPOmni Connect provides advanced memory capabilities through vector databases for intelligent, semantic search and long-term memory.
# Enable vector memory - you MUST choose a provider
ENABLE_VECTOR_DB=true
# Option 1: Qdrant (recommended)
OMNI_MEMORY_PROVIDER=qdrant-remote
QDRANT_HOST=localhost
QDRANT_PORT=6333
# Option 2: ChromaDB Remote
OMNI_MEMORY_PROVIDER=chroma-remote
CHROMA_HOST=localhost
CHROMA_PORT=8000
1. Qdrant Remote (Recommended Default)
# Install and run Qdrant
docker run -p 6333:6333 qdrant/qdrant
# Configure
ENABLE_VECTOR_DB=true
OMNI_MEMORY_PROVIDER=qdrant-remote
QDRANT_HOST=localhost
QDRANT_PORT=6333
2. ChromaDB Remote
# Install and run ChromaDB server
docker run -p 8000:8000 chromadb/chroma
# Configure
ENABLE_VECTOR_DB=true
OMNI_MEMORY_PROVIDER=chroma-remote
CHROMA_HOST=localhost
CHROMA_PORT=8000
3. ChromaDB Cloud
ENABLE_VECTOR_DB=true
OMNI_MEMORY_PROVIDER=chroma-cloud
CHROMA_TENANT=your_tenant
CHROMA_DATABASE=your_database
CHROMA_API_KEY=your_api_key
Monitor and optimize your AI agents with production-grade observability:
Sign up for Opik (Free & Open Source):
Add to your .env file (see Environment Variables above):
OPIK_API_KEY=your_opik_api_key_here
OPIK_WORKSPACE=your_opik_workspace_name
Once configured, MCPOmni Connect automatically tracks:
Agent Execution Trace:
├── agent_execution: 4.6s
│ ├── tools_registry_retrieval: 0.02s ✅
│ ├── memory_retrieval_step: 0.08s ✅
│ ├── llm_call: 4.5s ⚠️ (bottleneck identified!)
│ ├── response_parsing: 0.01s ✅
│ └── action_execution: 0.03s ✅
💡 Pro Tip: Opik is completely optional. If you don't set the credentials, MCPOmni Connect works normally without tracing.
Memory Store Management:
# Switch between memory backends
/memory_store:in_memory # Fast in-memory storage (default)
/memory_store:redis # Redis persistent storage
/memory_store:database # SQLite database storage
/memory_store:database:postgresql://user:pass@host/db # PostgreSQL
/memory_store:database:mysql://user:pass@host/db # MySQL
# Memory strategy configuration
/memory_mode:sliding_window:10 # Keep last 10 messages
/memory_mode:token_budget:5000 # Keep under 5000 tokens
Event Store Management:
# Switch between event backends
/event_store:in_memory # Fast in-memory events (default)
/event_store:redis_stream # Redis Streams for persistence
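The same backends can be selected programmatically. A sketch using the MemoryRouter and EventRouter constructors from the library example below; it assumes the CLI tokens (redis, redis_stream, and so on) match the constructor values:

from mcpomni_connect.memory_store.memory_router import MemoryRouter
from mcpomni_connect.events.event_router import EventRouter

# Programmatic equivalents of /memory_store:redis and /event_store:redis_stream
memory = MemoryRouter(memory_store_type="redis")
events = EventRouter(event_store_type="redis_stream")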
Enhanced Commands:
# Memory operations
/history # Show conversation history
/clear_history # Clear conversation history
/save_history <file> # Save history to file
/load_history <file> # Load history from file
# Server management
/add_servers:<config.json> # Add servers from config
/remove_server:<server_name> # Remove specific server
/refresh # Refresh server capabilities
# Debugging and monitoring
/debug # Toggle debug mode
/api_stats # Show API usage statistics
The MCPOmni Connect CLI is the most advanced MCP client available, providing professional-grade MCP functionality with enhanced memory, event management, and agentic modes:
# Launch the advanced MCP CLI
python examples/run.py
# Core MCP client commands:
/tools # List all available tools
/prompts # List all available prompts
/resources # List all available resources
/prompt:<name> # Execute a specific prompt
/resource:<uri> # Read a specific resource
/subscribe:<uri> # Subscribe to resource updates
/query <your_question> # Ask questions using tools
# Advanced platform features:
/memory_store:redis # Switch to Redis memory
/event_store:redis_stream # Switch to Redis events
/add_servers:<config.json> # Add MCP servers dynamically
/remove_server:<name> # Remove MCP server
/mode:auto # Switch to autonomous agentic mode
/mode:orchestrator # Switch to multi-server orchestration
MCPOmni Connect is not just a CLI tool—it's also a powerful Python library. OmniAgent consolidates everything - you no longer need to manually manage MCP clients, configurations, and agents separately!
OmniAgent automatically includes MCP client functionality - just specify your MCP servers and you're ready to go:
from mcpomni_connect.omni_agent import OmniAgent
from mcpomni_connect.memory_store.memory_router import MemoryRouter
from mcpomni_connect.events.event_router import EventRouter
from mcpomni_connect.agents.tools.local_tools_registry import ToolRegistry
# Create tool registry for custom tools
tool_registry = ToolRegistry()
@tool_registry.register_tool("analyze_data")
def analyze_data(data: str) -> str:
"""Analyze data and return insights."""
return f"Analysis complete: {len(data)} characters processed"
# OmniAgent automatically handles MCP connections + your tools
agent = OmniAgent(
name="my_app_agent",
system_instruction="You are a helpful assistant with access to MCP servers and custom tools.",
model_config={
"provider": "openai",
"model": "gpt-4o",
"temperature": 0.7
},
# Your custom local tools
local_tools=tool_registry,
# MCP servers - automatically connected!
mcp_tools=[
{
"name": "filesystem",
"transport_type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"]
},
{
"name": "github",
"transport_type": "streamable_http",
"url": "http://localhost:8080/mcp",
"headers": {"Authorization": "Bearer your-token"}
}
],
memory_store=MemoryRouter(memory_store_type="redis"),
event_router=EventRouter(event_store_type="in_memory")
)
# Use in your app - gets both MCP tools AND your custom tools!
result = await agent.run("List files in the current directory and analyze the filenames")
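Follow-up turns can share context by reusing the returned session ID (a sketch; the result fields mirror the FastAPI example below, which reads result['response'] and result['session_id']):

first = await agent.run("List files in the current directory")
print(first["response"])

# Reusing session_id keeps the follow-up in the same conversation context.
followup = await agent.run("Now analyze those filenames", session_id=first["session_id"])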
If you need the old manual approach (managing the MCP client, configuration, and agents separately), it remains available, but OmniAgent makes it unnecessary for most applications.
OmniAgent makes building APIs incredibly simple. See examples/web_server.py for a complete FastAPI example:
from fastapi import FastAPI
from mcpomni_connect.omni_agent import OmniAgent
app = FastAPI()
agent = OmniAgent(...) # Your agent setup from above
@app.post("/chat")
async def chat(message: str, session_id: str | None = None):
result = await agent.run(message, session_id)
return {"response": result['response'], "session_id": result['session_id']}
@app.get("/tools")
async def get_tools():
# Returns both MCP tools AND your custom tools automatically
return agent.get_available_tools()
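A quick client-side smoke test for the endpoint above (assuming the server runs on localhost:8000 and the requests package is installed; FastAPI exposes the plain message parameter as a query parameter):

import requests

resp = requests.post("http://localhost:8000/chat", params={"message": "hello"})
print(resp.json()["response"])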
Key Benefits:
💡 Quick Reference: See examples/llm_usage-config.json for all LLM provider configurations (Anthropic, Groq, Azure, Ollama, OpenRouter, etc.)
{
"AgentConfig": {
"tool_call_timeout": 30,
"max_steps": 15,
"request_limit": 1000,
"total_tokens_limit": 100000
},
"LLM": {
"provider": "openai",
"model": "gpt-4",
"temperature": 0.5,
"max_tokens": 5000,
"max_context_length": 30000,
"top_p": 0
},
"mcpServers": {
"ev_assistant": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://localhost:8000/mcp"
},
"sse-server": {
"transport_type": "sse",
"url": "http://localhost:3000/sse",
"headers": {
"Authorization": "Bearer token"
},
"timeout": 60,
"sse_read_timeout": 120
},
"streamable_http-server": {
"transport_type": "streamable_http",
"url": "http://localhost:3000/mcp",
"headers": {
"Authorization": "Bearer token"
},
"timeout": 60,
"sse_read_timeout": 120
}
}
}
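If you build with OmniAgent instead of the CLI, the same mcpServers entries can be reused. A hedged sketch that converts the dict-style section above into the list-style entries OmniAgent's mcp_tools parameter expects (each entry carries a name field, as in the library example later in this guide):

import json

with open("servers_config.json") as f:
    config = json.load(f)

# Fold each server's name into its spec to match the mcp_tools list format.
mcp_tools = [{"name": name, **spec} for name, spec in config["mcpServers"].items()]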
{
"LLM": {
"provider": "anthropic",
"model": "claude-3-5-sonnet-20241022",
"temperature": 0.7,
"max_tokens": 4000,
"max_context_length": 200000,
"top_p": 0.95
}
}
{
"LLM": {
"provider": "groq",
"model": "llama-3.1-8b-instant",
"temperature": 0.5,
"max_tokens": 2000,
"max_context_length": 8000,
"top_p": 0.9
}
}
{
"LLM": {
"provider": "azureopenai",
"model": "gpt-4",
"temperature": 0.7,
"max_tokens": 2000,
"max_context_length": 100000,
"top_p": 0.95,
"azure_endpoint": "https://your-resource.openai.azure.com",
"azure_api_version": "2024-02-01",
"azure_deployment": "your-deployment-name"
}
}
{
"LLM": {
"provider": "ollama",
"model": "llama3.1:8b",
"temperature": 0.5,
"max_tokens": 5000,
"max_context_length": 100000,
"top_p": 0.7,
"ollama_host": "http://localhost:11434"
}
}
{
"LLM": {
"provider": "openrouter",
"model": "anthropic/claude-3.5-sonnet",
"temperature": 0.7,
"max_tokens": 4000,
"max_context_length": 200000,
"top_p": 0.95
}
}
MCPOmni Connect supports multiple authentication methods for secure server connections:
{
"server_name": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://your-server/mcp"
}
}
{
"server_name": {
"transport_type": "streamable_http",
"headers": {
"Authorization": "Bearer your-token-here"
},
"url": "http://your-server/mcp"
}
}
{
"server_name": {
"transport_type": "streamable_http",
"headers": {
"X-Custom-Header": "value",
"Authorization": "Custom-Auth-Scheme token"
},
"url": "http://your-server/mcp"
}
}
MCPOmni Connect supports dynamic server configuration through commands:
# Add one or more servers from a configuration file
/add_servers:path/to/config.json
The configuration file can include multiple servers with different authentication methods:
{
"new-server": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://localhost:8000/mcp"
},
"another-server": {
"transport_type": "sse",
"headers": {
"Authorization": "Bearer token"
},
"url": "http://localhost:3000/sse"
}
}
# Remove a server by its name
/remove_server:server_name
/tools - List all available tools across servers
/prompts - View available prompts
/prompt:<name>/<args> - Execute a prompt with arguments
/resources - List available resources
/resource:<uri> - Access and analyze a resource
/debug - Toggle debug mode
/refresh - Update server capabilities
/memory - Toggle Redis memory persistence (on/off)
/mode:auto - Switch to autonomous agentic mode
/mode:chat - Switch back to interactive chat mode
/add_servers:<config.json> - Add one or more servers from a configuration file
/remove_server:<server_name> - Remove a server by its name
# Enable Redis memory persistence
/memory
# Check memory status
Memory persistence is now ENABLED using Redis
# Disable memory persistence
/memory
# Check memory status
Memory persistence is now DISABLED
# Switch to autonomous mode
/mode:auto
# System confirms mode change
Now operating in AUTONOMOUS mode. I will execute tasks independently.
# Switch back to chat mode
/mode:chat
# System confirms mode change
Now operating in CHAT mode. I will ask for approval before executing tasks.
Chat Mode (Default)
Autonomous Mode
Orchestrator Mode
# List all available prompts
/prompts
# Basic prompt usage
/prompt:weather/location=tokyo
# Prompt with multiple arguments (argument names depend on the server's prompt requirements)
/prompt:travel-planner/from=london/to=paris/date=2024-03-25
# JSON format for complex arguments
/prompt:analyze-data/{
"dataset": "sales_2024",
"metrics": ["revenue", "growth"],
"filters": {
"region": "europe",
"period": "q1"
}
}
# Nested argument structures
/prompt:market-research/target=smartphones/criteria={
"price_range": {"min": 500, "max": 1000},
"features": ["5G", "wireless-charging"],
"markets": ["US", "EU", "Asia"]
}
The client intelligently:
MCPOmni Connect now provides advanced controls and visibility over your API usage and resource limits.
Use the /api_stats command to see your current usage:
/api_stats
This will display:
You can set limits to automatically stop execution when thresholds are reached:
You can configure these in your servers_config.json under the AgentConfig section:
"AgentConfig": {
"tool_call_timeout": 30, // Tool call timeout in seconds
"max_steps": 15, // Max number of steps before termination
"request_limit": 1000, // Max number of requests allowed
"total_tokens_limit": 100000 // Max number of tokens allowed
}
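Note: strict JSON parsers reject the // comments shown above, so remove them before saving this block to servers_config.json.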
# Check your current API usage and limits
/api_stats
# Set a new request limit (example)
# (This can be done by editing servers_config.json or via future CLI commands)
# Example of automatic tool chaining if the tool is available in the servers connected
User: "Find charging stations near Silicon Valley and check their current status"
# Client automatically:
1. Uses Google Maps API to locate Silicon Valley
2. Searches for charging stations in the area
3. Checks station status through EV network API
4. Formats and presents results
# Automatic resource processing
User: "Analyze the contents of /path/to/document.pdf"
# Client automatically:
1. Identifies resource type
2. Extracts content
3. Processes through LLM
4. Provides intelligent summary
🚨 Most Common Issues: Check Quick Fixes below first!
📖 For comprehensive setup help: See ⚙️ Configuration Guide | 🧠 Vector DB Setup
Error | Quick Fix |
---|---|
Error: Invalid API key | Check your .env file: LLM_API_KEY=your_actual_key |
ModuleNotFoundError: mcpomni_connect | Run: uv add mcpomni-connect or pip install mcpomni-connect |
Connection refused | Ensure the MCP server is running before connecting |
ChromaDB not available | Install: pip install chromadb - See Vector DB Setup |
Redis connection failed | Install Redis or use in-memory mode (default) |
Tool execution failed | Check tool permissions and arguments |
Connection Issues (Error: Could not connect to MCP server): make sure the server is running and that the transport type and URL in servers_config.json are correct.
API Key Issues (Error: Invalid API key): verify that LLM_API_KEY is set correctly in your .env file.
Redis Connection (Error: Could not connect to Redis): confirm Redis is running and that REDIS_URL in your .env points to it, or use the default in-memory mode.
Tool Execution Failures (Error: Tool execution failed): check the tool's permissions and arguments.
Enable debug mode for detailed logging:
/debug
More examples are available in the examples/ directory. We welcome contributions! See our Contributing Guide for details.
Complete documentation is available at: MCPOmni Connect Docs
To build documentation locally:
./docs.sh serve # Start development server at http://127.0.0.1:8080
./docs.sh build # Build static documentation
This project is licensed under the MIT License - see the LICENSE file for details.
{ "mcpServers": { "example-server": { "command": "npx", "args": [ "-y", "package-name" ], "env": { "API_KEY": "<YOUR_API_KEY>" } } } }
by gptme
Provides a personal AI assistant that runs directly in the terminal, capable of executing code, manipulating files, browsing the web, using vision, and interfacing with various LLM providers.