by GongRzhe
Dynamically creates MCP servers from web API configurations, enabling seamless integration of REST, GraphQL, or other web services into AI assistant tools.
API Weaver provides a FastMCP server that can register arbitrary web APIs at runtime and automatically expose each endpoint as an MCP tool. By converting REST, GraphQL, or custom HTTP services into MCP‑compatible tools, AI assistants such as Claude can call external services without custom code.
```shell
git clone https://github.com/GongRzhe/APIWeaver.git
cd APIWeaver
pip install -r requirements.txt
# optional: pip install . (adds the `apiweaver` CLI command)
```

Run the server:

```shell
apiweaver run
apiweaver run --transport streamable-http --host 127.0.0.1 --port 8000
apiweaver run --transport sse --host 127.0.0.1 --port 8000
```
APIs are registered with the `register_api` tool, providing a JSON definition that includes base URL, authentication, headers, and endpoint specifications. The `test_api_connection` tool validates connectivity before use.

Q: Which transport should I use?
A: Prefer `streamable-http` for modern deployments (cloud, containers). Use `stdio` for local desktop tools, and `sse` only for legacy clients.
Q: How do I secure the server? A: Run it behind a firewall or reverse proxy, use HTTPS for the HTTP transports, and keep authentication tokens out of the config file (use environment variables where possible).
Q: Can I register GraphQL APIs? A: Yes – define GraphQL endpoints in the configuration; the server will treat each operation as a separate MCP tool.
Q: Do I need to restart the server after adding an API?
A: No. APIs are managed at runtime via the `register_api`, `unregister_api`, and `list_apis` tools.
Q: What languages can I call the server from? A: Any language that can speak MCP (e.g., via STDIO, HTTP, or SSE). The server is language‑agnostic.
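Since the server speaks plain MCP, any client starts a session by sending a JSON-RPC `initialize` request. The sketch below shows the shape of that first message; the exact `protocolVersion` string depends on the MCP spec revision your client targets, and the client name/version are placeholders.

```python
import json

# JSON-RPC 2.0 "initialize" request, the first message of an MCP session.
# The protocolVersion below is illustrative; use the revision your client
# targets. clientInfo values are placeholders.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Any language that can serialize this JSON and POST it to the MCP endpoint
# (http://127.0.0.1:8000/mcp with the defaults above) can drive the server.
payload = json.dumps(initialize_request)
print(payload)
```

After the handshake, the client can call `tools/list` and `tools/call` in the same JSON-RPC style, which is why no language-specific SDK is strictly required.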
A FastMCP server that dynamically creates MCP (Model Context Protocol) servers from web API configurations. This allows you to easily integrate any REST API, GraphQL endpoint, or web service into an MCP-compatible tool that can be used by AI assistants like Claude.
APIWeaver supports three transport types to accommodate various deployment scenarios:

- **STDIO** (default): `apiweaver run` or `apiweaver run --transport stdio`
- **SSE** (legacy): `apiweaver run --transport sse --host 127.0.0.1 --port 8000` (endpoint: `http://host:port/mcp`)
- **Streamable HTTP** (recommended): `apiweaver run --transport streamable-http --host 127.0.0.1 --port 8000` (endpoint: `http://host:port/mcp`)
```shell
# Clone or download this repository
cd ~/Desktop/APIWeaver

# Install dependencies
pip install -r requirements.txt
```
Example MCP client configuration using `uvx`:

```json
{
  "mcpServers": {
    "apiweaver": {
      "command": "uvx",
      "args": ["apiweaver", "run"]
    }
  }
}
```
There are several ways to run the APIWeaver server with different transport types:
1. After installation (recommended):
If you have installed the package (e.g., with `pip install .` from the project root after installing requirements):
```shell
# Default STDIO transport
apiweaver run

# Streamable HTTP transport (recommended for web deployments)
apiweaver run --transport streamable-http --host 127.0.0.1 --port 8000

# SSE transport (legacy compatibility)
apiweaver run --transport sse --host 127.0.0.1 --port 8000
```
2. Directly from the repository (for development):
```shell
# From the root of the repository
python -m apiweaver.cli run [OPTIONS]
```
Transport Options:

- `--transport`: Choose from `stdio` (default), `sse`, or `streamable-http`
- `--host`: Host address for HTTP transports (default: 127.0.0.1)
- `--port`: Port for HTTP transports (default: 8000)
- `--path`: URL path for the MCP endpoint (default: /mcp)

Run `apiweaver run --help` for all available options.
APIWeaver is designed to expose web APIs as tools for AI assistants that support the Model Context Protocol (MCP). Here's how to use it:
Start the APIWeaver Server:

```shell
# For modern MCP clients (recommended)
apiweaver run --transport streamable-http --host 127.0.0.1 --port 8000

# For legacy compatibility
apiweaver run --transport sse --host 127.0.0.1 --port 8000

# For local desktop applications (STDIO transport)
apiweaver run
```

Configure Your AI Assistant: with either HTTP transport, the MCP endpoint will be available at `http://127.0.0.1:8000/mcp`.
Register APIs and Use Tools: once connected, use the built-in `register_api` tool to define web APIs, then use the generated endpoint tools.
The server provides built-in tools for managing APIs at runtime, including `register_api`, `unregister_api`, `list_apis`, and `test_api_connection`. A typical `register_api` configuration looks like this:
```json
{
  "name": "my_api",
  "base_url": "https://api.example.com",
  "description": "Example API integration",
  "auth": {
    "type": "bearer",
    "bearer_token": "your-token-here"
  },
  "headers": {
    "Accept": "application/json"
  },
  "endpoints": [
    {
      "name": "list_users",
      "description": "Get all users",
      "method": "GET",
      "path": "/users",
      "params": [
        {
          "name": "limit",
          "type": "integer",
          "location": "query",
          "required": false,
          "default": 10,
          "description": "Number of users to return"
        }
      ]
    }
  ]
}
```
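A tool generated from a registration like the one above ultimately boils down to assembling an HTTP request from the config. The following is a rough sketch of that mapping for the `list_users` endpoint, not APIWeaver's actual implementation; `build_request_url` is a hypothetical helper.

```python
from urllib.parse import urlencode

def build_request_url(base_url: str, endpoint: dict, args: dict) -> str:
    """Assemble the request URL for an endpoint spec (illustrative only)."""
    query = {}
    for param in endpoint.get("params", []):
        if param.get("location") == "query":
            # Fall back to the declared default when the caller omits a value.
            value = args.get(param["name"], param.get("default"))
            if value is not None:
                query[param["name"]] = value
    url = base_url.rstrip("/") + endpoint["path"]
    return url + ("?" + urlencode(query) if query else "")

list_users = {
    "name": "list_users",
    "method": "GET",
    "path": "/users",
    "params": [{"name": "limit", "type": "integer", "location": "query",
                "required": False, "default": 10}],
}

print(build_request_url("https://api.example.com", list_users, {}))
# -> https://api.example.com/users?limit=10
```

Note how the `default` of 10 is applied when the AI assistant calls the tool without arguments.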
Example: OpenWeatherMap API (API key in a query parameter):

```json
{
  "name": "weather",
  "base_url": "https://api.openweathermap.org/data/2.5",
  "description": "OpenWeatherMap API",
  "auth": {
    "type": "api_key",
    "api_key": "your-api-key",
    "api_key_param": "appid"
  },
  "endpoints": [
    {
      "name": "get_current_weather",
      "description": "Get current weather for a city",
      "method": "GET",
      "path": "/weather",
      "params": [
        {
          "name": "q",
          "type": "string",
          "location": "query",
          "required": true,
          "description": "City name"
        },
        {
          "name": "units",
          "type": "string",
          "location": "query",
          "required": false,
          "default": "metric",
          "enum": ["metric", "imperial", "kelvin"]
        }
      ]
    }
  ]
}
```
Example: GitHub REST API (bearer token, path parameter):

```json
{
  "name": "github",
  "base_url": "https://api.github.com",
  "description": "GitHub REST API",
  "auth": {
    "type": "bearer",
    "bearer_token": "ghp_your_token_here"
  },
  "headers": {
    "Accept": "application/vnd.github.v3+json"
  },
  "endpoints": [
    {
      "name": "get_user",
      "description": "Get a GitHub user's information",
      "method": "GET",
      "path": "/users/{username}",
      "params": [
        {
          "name": "username",
          "type": "string",
          "location": "path",
          "required": true,
          "description": "GitHub username"
        }
      ]
    }
  ]
}
```
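Path parameters such as `{username}` above are substituted into the URL template before the request is sent. A minimal sketch of that substitution step (illustrative, not the library's actual code; `fill_path` is a hypothetical helper):

```python
def fill_path(path_template: str, endpoint_params: list, args: dict) -> str:
    """Substitute path-location parameters into a URL template."""
    path = path_template
    for param in endpoint_params:
        if param.get("location") == "path":
            name = param["name"]
            if param.get("required") and name not in args:
                raise ValueError(f"missing required path parameter: {name}")
            path = path.replace("{" + name + "}", str(args[name]))
    return path

params = [{"name": "username", "type": "string",
           "location": "path", "required": True}]
print(fill_path("/users/{username}", params, {"username": "octocat"}))
# -> /users/octocat
```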
Bearer token:

```json
{
  "auth": {
    "type": "bearer",
    "bearer_token": "your-token-here"
  }
}
```

API key in a header:

```json
{
  "auth": {
    "type": "api_key",
    "api_key": "your-key-here",
    "api_key_header": "X-API-Key"
  }
}
```

API key in a query parameter:

```json
{
  "auth": {
    "type": "api_key",
    "api_key": "your-key-here",
    "api_key_param": "api_key"
  }
}
```

Basic authentication:

```json
{
  "auth": {
    "type": "basic",
    "username": "your-username",
    "password": "your-password"
  }
}
```

Custom headers:

```json
{
  "auth": {
    "type": "custom",
    "custom_headers": {
      "X-Custom-Auth": "custom-value",
      "X-Client-ID": "client-123"
    }
  }
}
```
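Each auth block above changes a different part of the outgoing request: headers for `bearer`, `basic`, `custom`, and header-style API keys; query parameters for param-style API keys. A sketch of that mapping under those assumptions (`apply_auth` is a hypothetical helper, not APIWeaver's actual code):

```python
import base64

def apply_auth(auth: dict) -> tuple:
    """Return (headers, query_params) derived from an auth config (illustrative)."""
    headers, query = {}, {}
    kind = auth.get("type")
    if kind == "bearer":
        headers["Authorization"] = f"Bearer {auth['bearer_token']}"
    elif kind == "api_key":
        if "api_key_header" in auth:
            headers[auth["api_key_header"]] = auth["api_key"]
        elif "api_key_param" in auth:
            query[auth["api_key_param"]] = auth["api_key"]
    elif kind == "basic":
        token = base64.b64encode(
            f"{auth['username']}:{auth['password']}".encode()).decode()
        headers["Authorization"] = f"Basic {token}"
    elif kind == "custom":
        headers.update(auth.get("custom_headers", {}))
    return headers, query

headers, query = apply_auth(
    {"type": "api_key", "api_key": "k", "api_key_param": "api_key"})
print(headers, query)  # -> {} {'api_key': 'k'}
```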
Parameters can live in the query string (e.g., `?param=value`) or in the URL path (e.g., `/users/{id}`). A per-request timeout, in seconds, can also be configured:

```json
{
  "timeout": 60.0
}
```
Restrict a parameter to a fixed set of values with `enum`:

```json
{
  "name": "status",
  "type": "string",
  "enum": ["active", "inactive", "pending"]
}
```

Supply a `default` that is used when the caller omits the parameter:

```json
{
  "name": "page",
  "type": "integer",
  "default": 1
}
```
Streamable HTTP transport:

```json
{
  "mcpServers": {
    "apiweaver": {
      "command": "apiweaver",
      "args": ["run", "--transport", "streamable-http", "--host", "127.0.0.1", "--port", "8000"]
    }
  }
}
```

STDIO transport (default):

```json
{
  "mcpServers": {
    "apiweaver": {
      "command": "apiweaver",
      "args": ["run"]
    }
  }
}
```
The server provides detailed error messages when API calls fail.
- Use `streamable-http` for modern deployments, `stdio` for local tools.
- Run `test_api_connection` after registering an API.
- Run with verbose logging (if installed): `apiweaver run --verbose`
Feel free to extend this server with additional features.
MIT License - feel free to use and modify as needed.