by MCPJam
Visually test and debug MCP server tools, resources, prompts, and OAuth flows with an integrated LLM playground and transport support.
MCPJam Inspector provides a local‑first UI for interacting with MCP servers. It lets developers observe JSON‑RPC traffic, experiment with tools and resources, and run LLM chats against their server implementations.
# Quick start via npx (recommended)
npx @mcpjam/inspector@latest
Optional flags:
- `--port <number>` – run on a custom port.
- `--ollama <model>` – start with an Ollama model.
- `--config mcp.json` – import server definitions from an mcp.json file.
The UI is then reachable at http://localhost:3001 (or the custom port you chose). You can add servers via the MCP Servers tab, selecting the STDIO, SSE, or HTTP transport, and test OAuth 2.0 or Dynamic Client Registration flows.
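As an illustration of the `--config` flag, the sketch below writes a minimal `mcp.json` and checks it before launch. The server names and paths are placeholders, and the `mcpServers` layout follows the common MCP config convention; adjust both to your own servers.

```shell
# Sketch of an mcp.json with one STDIO server and one HTTP server.
# Server names, commands, and URLs are placeholders.
cat > mcp.json <<'EOF'
{
  "mcpServers": {
    "local-stdio": {
      "command": "uv",
      "args": ["run", "fastmcp", "run", "server.py"]
    },
    "remote-http": {
      "url": "http://localhost:8080/mcp"
    }
  }
}
EOF

# Sanity-check the JSON before importing it.
python3 -m json.tool mcp.json > /dev/null && echo "mcp.json OK"

# Then launch the inspector on a custom port with this config:
# npx @mcpjam/inspector@latest --port 4000 --config mcp.json
```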
You can also run the inspector with Docker (`docker run -p 3001:3001 mcpjam/mcp-inspector:latest`).

Q: Do I need to install Node.js?
A: Yes, Node.js 20+ is required. The tool runs via npx so no global install is needed.
Q: Can I run the inspector offline?
A: The UI itself works offline, but the LLM playground features need internet access to contact external model APIs unless you use a local Ollama model.
Q: How do I add a server that uses STDIO?
A: Use the Add Server UI, select STDIO, and provide the command (e.g., uv run fastmcp run path/to/server.py). The inspector will launch the process and communicate via pipes.
Q: Is there support for Windows/macOS desktop apps?
A: Pre‑built desktop binaries are available on the MCPJam website; they internally launch the same npx‑based server.
Q: Where can I report bugs or request features?
A: Open an issue on the GitHub repository or join the Discord community linked in the README.
MCPJam Inspector is the testing and debugging platform for MCP servers and OpenAI apps. Visually inspect your server's tools, resources, prompts, and OAuth flows, and try your server against different models in the LLM playground.
We recommend starting MCPJam inspector via npx:
npx @mcpjam/inspector@latest
We also have a Mac and Windows desktop app:
Run MCPJam Inspector using Docker:
# Using Docker Compose (recommended)
docker-compose up -d
# Or using Docker run
docker run -d \
-p 6274:6274 \
--env-file .env.production \
-e NODE_ENV=production \
--add-host host.docker.internal:host-gateway \
--name mcp-inspector \
--restart unless-stopped \
mcpjam/mcp-inspector:latest
The application will be available at http://127.0.0.1:6274.
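The Docker Compose route above assumes a `docker-compose.yml` in the working directory. A minimal sketch that mirrors the `docker run` flags shown above (the service name is an assumption) might look like:

```yaml
# Hypothetical docker-compose.yml mirroring the docker run command above.
services:
  mcp-inspector:
    image: mcpjam/mcp-inspector:latest
    container_name: mcp-inspector
    ports:
      - "6274:6274"
    env_file:
      - .env.production
    environment:
      - NODE_ENV=production
    extra_hosts:
      - "host.docker.internal:host-gateway"
    restart: unless-stopped
```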
Important for macOS/Windows users:
- Access the UI at http://127.0.0.1:6274 (not localhost).
- To reach an MCP server running on your host machine, use http://host.docker.internal:PORT instead of http://127.0.0.1:PORT. Example:
# Your MCP server runs on host at: http://127.0.0.1:8080/mcp
# In Docker, configure it as: http://host.docker.internal:8080/mcp
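That address rewrite can be expressed as a small shell helper — a hypothetical convenience for scripting, not part of the inspector itself:

```shell
# Hypothetical helper: rewrite a host-local URL so it is reachable from
# inside the Docker container (127.0.0.1 or localhost -> host.docker.internal).
to_docker_url() {
  printf '%s\n' "$1" | sed -E 's#//(127\.0\.0\.1|localhost)([:/])#//host.docker.internal\2#'
}

to_docker_url "http://127.0.0.1:8080/mcp"
# prints: http://host.docker.internal:8080/mcp
```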
| Capability | Description |
|---|---|
| Multi-protocol servers | Connect to STDIO, SSE, and streamable HTTP MCP servers. |
| Flexible auth | Supports OAuth 2.1 and bearer tokens, including custom scopes and client credentials. |
| Rich configuration | Configure environment variables, custom headers, and timeouts. |
| Manual tool invocation | Manually invoke MCP tools, resources, resource templates, and elicitation flows. |
| Server info | View server icons, version, capabilities, instructions, and ChatGPT widget metadata exposed by the server. |
Develop OpenAI apps or MCP-UI apps locally. No ngrok needed. MCPJam is the only local-first OpenAI app emulator.
View every step of the OAuth handshake in detail, with guided explanations.
Try your server against any LLM. We provide frontier models like GPT-5, Claude Sonnet, and Gemini 2.5. No API key needed; it's on us.
We're grateful that you're considering contributing to MCPJam. Please read our contributing guide.
You can also reach out to the contributors that hang out in our Discord channel.
Some of our partners and favorite frameworks:
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Explore related MCPs that share similar capabilities and solve comparable challenges.
by modelcontextprotocol
A Model Context Protocol server for Git repository interaction and automation.
by zed-industries
A high‑performance, multiplayer code editor designed for speed and collaboration.
by modelcontextprotocol
Model Context Protocol Servers
by modelcontextprotocol
A Model Context Protocol server that provides time and timezone conversion capabilities.
by cline
An autonomous coding assistant that can create and edit files, execute terminal commands, and interact with a browser directly from your IDE, operating step‑by‑step with explicit user permission.
by upstash
Provides up-to-date, version‑specific library documentation and code examples directly inside LLM prompts, eliminating outdated information and hallucinated APIs.
by daytonaio
Provides a secure, elastic infrastructure that creates isolated sandboxes for running AI‑generated code with sub‑90 ms startup, unlimited persistence, and OCI/Docker compatibility.
by continuedev
Enables faster shipping of code by integrating continuous AI agents across IDEs, terminals, and CI pipelines, offering chat, edit, autocomplete, and customizable agent workflows.
by github
Connects AI tools directly to GitHub, enabling natural‑language interactions for repository browsing, issue and pull‑request management, CI/CD monitoring, code‑security analysis, and team collaboration.