by mcp-use
A Python SDK that simplifies interaction with MCP servers and enables developers to create custom agents with tool‑calling capabilities.
mcp-use provides a high-level interface for connecting any LLM that supports tool calling to MCP servers. It abstracts server management, tool discovery, and agent orchestration so developers can focus on agent logic rather than low-level communication details.
Getting started:

1. Install the package: `pip install mcp-use` (or `pip install "mcp-use[e2b]"` for sandboxed execution).
2. Create an `MCPClient` from the configuration.
3. Create any tool-calling LLM (e.g. `ChatOpenAI`, `ChatAnthropic`).
4. Create an `MCPAgent` with the LLM and client, then call `agent.run()` or `agent.astream()` for streaming output.

```python
import asyncio, os

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient


async def main():
    load_dotenv()
    client = MCPClient.from_config_file("browser_mcp.json")
    llm = ChatOpenAI(model="gpt-4o")
    agent = MCPAgent(llm=llm, client=client)
    result = await agent.run("Find the best restaurant in San Francisco")
    print(result)


asyncio.run(main())
```
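Streaming output is consumed as an async iterator. The pattern can be sketched with a stand-in generator (`fake_astream` below is a hypothetical helper, not part of mcp-use; a real agent would yield its own chunk objects):

```python
import asyncio


# Hypothetical stand-in for agent.astream(); the real method yields
# incremental response chunks as the agent works.
async def fake_astream(query: str):
    for chunk in ["Searching", " for restaurants", " in San Francisco..."]:
        yield chunk


async def main() -> str:
    parts = []
    # With a real agent this loop would be:
    #   async for chunk in agent.astream(query): ...
    async for chunk in fake_astream("Find the best restaurant in San Francisco"):
        parts.append(chunk)
    return "".join(parts)


print(asyncio.run(main()))
# → Searching for restaurants in San Francisco...
```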
Notes:

- `agent.astream()` provides real-time incremental responses.
- Tool access can be restricted with the `disallowed_tools` list when creating `MCPAgent`.
- The async entry point runs under `asyncio.run`.
- Node-based MCP servers are launched via `npx`.

Create an AI agent that can use MCP tools to accomplish complex tasks.
```bash
pip install mcp-use langchain-openai
```
```python
import asyncio

from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient


async def main():
    # Configure MCP server
    config = {
        "mcpServers": {
            "filesystem": {
                "command": "npx",
                "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
            }
        }
    }
    client = MCPClient.from_dict(config)
    llm = ChatOpenAI(model="gpt-4o")
    agent = MCPAgent(llm=llm, client=client)
    result = await agent.run("List all files in the directory")
    print(result)


asyncio.run(main())
```
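A single `from_dict` configuration can also declare several servers at once (mcp-use's multi-server support), and the agent can then use tools from any of them. A sketch — the server names are illustrative, and the browser entry reuses the `@playwright/mcp` package:

```python
# Illustrative multi-server configuration: MCPClient.from_dict accepts
# several entries under "mcpServers". Server names are examples.
multi_config = {
    "mcpServers": {
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
        },
        "browser": {
            "command": "npx",
            "args": ["-y", "@playwright/mcp@latest"],
        },
    }
}

# client = MCPClient.from_dict(multi_config)  # then proceed as above
print(sorted(multi_config["mcpServers"]))
# → ['browser', 'filesystem']
```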
→ Full Python Agent Documentation
```bash
npm install mcp-use @langchain/openai
```
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { MCPAgent, MCPClient } from "mcp-use";

async function main() {
  // Configure MCP server
  const config = {
    mcpServers: {
      filesystem: {
        command: "npx",
        args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
      },
    },
  };
  const client = MCPClient.fromDict(config);
  const llm = new ChatOpenAI({ modelName: "gpt-4o" });
  const agent = new MCPAgent({ llm, client });
  const result = await agent.run("List all files in the directory");
  console.log(result);
}

main();
```
→ Full TypeScript Agent Documentation
Connect to MCP servers directly without an AI agent for programmatic tool access.
```python
import asyncio

from mcp_use import MCPClient


async def main():
    config = {
        "mcpServers": {
            "calculator": {
                "command": "npx",
                "args": ["-y", "@modelcontextprotocol/server-everything"]
            }
        }
    }
    client = MCPClient.from_dict(config)
    await client.create_all_sessions()
    session = client.get_session("calculator")
    result = await session.call_tool(name="add", arguments={"a": 5, "b": 3})
    print(f"Result: {result.content[0].text}")
    await client.close_all_sessions()


asyncio.run(main())
```
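The `result.content[0].text` access above follows the MCP tool-result shape: a list of content items, each with a `type` and, for text items, a `text` field. Sketched as plain data (the values are illustrative):

```python
# Illustrative shape of a tool-call result, matching the
# result.content[0].text access used in the example above.
tool_result = {
    "content": [
        {"type": "text", "text": "8"}  # 5 + 3 from the "add" tool
    ]
}

print(tool_result["content"][0]["text"])
# → 8
```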
```typescript
import { MCPClient } from "mcp-use";

async function main() {
  const config = {
    mcpServers: {
      calculator: {
        command: "npx",
        args: ["-y", "@modelcontextprotocol/server-everything"],
      },
    },
  };
  const client = new MCPClient(config);
  await client.createAllSessions();
  const session = client.getSession("calculator");
  const result = await session.callTool("add", { a: 5, b: 3 });
  console.log(`Result: ${result.content[0].text}`);
  await client.closeAllSessions();
}

main();
```
→ TypeScript Client Documentation
Build your own MCP server with custom tools, resources, and prompts.
```bash
npx create-mcp-use-app my-server
cd my-server
npm install
```
```typescript
import { createMCPServer } from "mcp-use/server";
import { z } from "zod";

const server = createMCPServer("my-server", {
  version: "1.0.0",
  description: "My custom MCP server",
});

// Define a tool
server.tool("get_weather", {
  description: "Get weather for a city",
  parameters: z.object({
    city: z.string().describe("City name"),
  }),
  execute: async ({ city }) => {
    return { temperature: 72, condition: "sunny", city };
  },
});

// Start server with auto-inspector
server.listen(3000);
// 🎉 Inspector at http://localhost:3000/inspector
```
→ Full TypeScript Server Documentation
Coming Soon! For now, please use the TypeScript implementation to create MCP servers.
Debug and test your MCP servers with the interactive web-based inspector.
When you create a server with mcp-use, the inspector is automatically available:
```typescript
server.listen(3000);
// Inspector automatically at: http://localhost:3000/inspector
```
Inspect any MCP server via CLI:
```bash
npx @mcp-use/inspector --url http://localhost:3000/sse
```
→ Full Inspector Documentation
This monorepo contains multiple packages for both Python and TypeScript:
| Package | Description | Version |
|---|---|---|
| mcp-use | Complete MCP client and agent library | |
```
mcp-use/
├── libraries/
│   ├── python/                     → Python implementation
│   │   ├── mcp_use/                → Core library
│   │   ├── examples/               → Python examples
│   │   └── docs/                   → Python documentation
│   │
│   └── typescript/                 → TypeScript implementation
│       └── packages/
│           ├── mcp-use/            → Core framework
│           ├── cli/                → Build tool
│           ├── inspector/          → Web inspector
│           └── create-mcp-use-app/ → Scaffolding
└── README.md                       → This file
```
Build everything from AI agents to servers - not just clients. Create the full MCP ecosystem in your preferred language.
Choose Python for ML/data workflows or TypeScript for web applications. Same great features, different languages.
Includes observability, streaming, multi-server support, sandboxing, and tool access controls out of the box.
Hot reload, TypeScript/Python type safety, built-in inspector, and comprehensive documentation.
MIT licensed and community-driven. Contribute, fork, or extend as needed.
MIT © MCP-Use Contributors
We love contributions! Check out our contributing guidelines:
If you use MCP-Use in your research or project, please cite:
```bibtex
@software{mcp_use2025,
  author    = {Zullo, Pietro and Contributors},
  title     = {MCP-Use: Complete MCP Ecosystem for Python and TypeScript},
  year      = {2025},
  publisher = {GitHub},
  url       = {https://github.com/mcp-use/mcp-use}
}
```
Thanks to all our amazing contributors!