by getsentry
Provides a remote Model Context Protocol service that proxies Sentry's API, enabling AI‑powered development tools to query, search, and act on Sentry data directly from code editors.
Sentry MCP delivers a remote MCP service that sits between AI coding assistants and the Sentry API. It translates natural‑language requests from tools like Cursor or Claude Code into Sentry queries and actions, allowing developers to debug and explore incidents without leaving their editor.
Quick reference:
- Run the server with `npx @sentry/mcp-server@latest`. Supply a Sentry access token and, if you need AI‑powered search tools, configure an LLM provider via environment variables.
- Pass the token with `--access-token` (and optionally `--host` for a non‑SaaS Sentry instance). Example: `npx @sentry/mcp-server@latest --access-token=YOUR_TOKEN --host=sentry.example.com`.
- Run `pnpm inspector` and point the UI at http://localhost:5173 to test the server interactively.
- Key environment variables: `SENTRY_ACCESS_TOKEN`, `EMBEDDED_AGENT_PROVIDER` (`openai` or `anthropic`), and the corresponding API key (`OPENAI_API_KEY` or `ANTHROPIC_API_KEY`). Optional overrides include `SENTRY_HOST` and `MCP_DISABLE_SKILLS`.
- A stdio transport is available for quick self‑hosted usage.
- AI‑powered search tools (`search_events`, `search_issues`, etc.) translate natural language into Sentry query syntax. Set `EMBEDDED_AGENT_PROVIDER` to `openai` or `anthropic` and provide the appropriate API key.
- Required token scopes: `org:read`, `project:read`, `project:write`, `team:read`, `team:write`, `event:write`.
- Disable specific skills with the `--disable-skills=skill1,skill2` flag or set `MCP_DISABLE_SKILLS` in the environment.
- Contributors use the `pnpm dev` workflow after configuring `.env` files and an OAuth app.
- Set `SENTRY_HOST` or use `--host` to point to a self‑hosted instance.

Sentry's MCP service is primarily designed for human-in-the-loop coding agents. Our tool selection and priorities are focused on developer workflows and debugging use cases, rather than providing a general-purpose MCP server for all Sentry functionality.
This remote MCP server acts as middleware to the upstream Sentry API, optimized for coding assistants like Cursor, Claude Code, and similar development tools. It's based on Cloudflare's work towards remote MCPs.
You'll find everything you need to know by visiting the deployed service in production.
If you're looking to contribute, learn how it works, or to run this for self-hosted Sentry, continue below.
While this repository is focused on acting as an MCP service, we also support a stdio transport. This is still a work in progress, but it is the easiest way to run the MCP against a self-hosted Sentry install.
Note: The AI-powered search tools (search_events, search_issues, etc.) require an LLM provider (OpenAI or Anthropic). These tools use natural language processing to translate queries into Sentry's query syntax. Without a configured provider, these specific tools will be unavailable, but all other tools will function normally.
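As a sketch, using the environment variables documented below with placeholder credentials, a stdio launch with the search tools enabled might look like:

```shell
# Placeholder values: substitute your real token and API key.
export SENTRY_ACCESS_TOKEN="YOUR_SENTRY_TOKEN"
export EMBEDDED_AGENT_PROVIDER="openai"   # or "anthropic"
export OPENAI_API_KEY="sk-..."            # use ANTHROPIC_API_KEY for Anthropic

npx @sentry/mcp-server@latest
```

Without the provider variables, the same command still starts the server; only the AI-powered search tools are withheld.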
To use the stdio transport, you'll need to create a User Auth Token in Sentry with the necessary scopes. As of this writing, those are:
org:read
project:read
project:write
team:read
team:write
event:write
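Before wiring the token into the MCP server, you can sanity-check that it authenticates against the Sentry API. This is an illustrative example (the organizations endpoint and the SaaS host are assumptions; adjust the host for a self-hosted instance):

```shell
# A successful response is a JSON list of organizations the token can read.
curl -sf -H "Authorization: Bearer $SENTRY_ACCESS_TOKEN" \
  "https://sentry.io/api/0/organizations/"
```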
Launch the transport:
npx @sentry/mcp-server@latest --access-token=sentry-user-token
Need to connect to a self-hosted deployment? Add --host (hostname only, e.g. --host=sentry.example.com) when you run the command.
Some features (like Seer) may not be available on self-hosted instances. You can disable specific skills to prevent unsupported tools from being exposed:
npx @sentry/mcp-server@latest --access-token=TOKEN --host=sentry.example.com --disable-skills=seer
SENTRY_ACCESS_TOKEN= # Required: Your Sentry auth token
# LLM Provider Configuration (required for AI-powered search tools)
EMBEDDED_AGENT_PROVIDER= # Required: 'openai' or 'anthropic'
OPENAI_API_KEY= # Required if using OpenAI
ANTHROPIC_API_KEY= # Required if using Anthropic
# Optional overrides
SENTRY_HOST= # For self-hosted deployments
MCP_DISABLE_SKILLS= # Disable specific skills (comma-separated, e.g. 'seer')
Important: Always set EMBEDDED_AGENT_PROVIDER to explicitly specify your LLM provider. Auto-detection based on API keys alone is deprecated and will be removed in a future release. See docs/embedded-agents.md for detailed configuration options.
{
"mcpServers": {
"sentry": {
"command": "npx",
"args": ["@sentry/mcp-server"],
"env": {
"SENTRY_ACCESS_TOKEN": "your-token",
"EMBEDDED_AGENT_PROVIDER": "openai",
"OPENAI_API_KEY": "sk-..."
}
}
}
}
If you leave the host variable unset, the CLI automatically targets the Sentry SaaS service. Only set the override when you operate a self-hosted Sentry instance.
For self-hosted instances that don't support Seer:
{
"mcpServers": {
"sentry": {
"command": "npx",
"args": ["@sentry/mcp-server"],
"env": {
"SENTRY_ACCESS_TOKEN": "your-token",
"SENTRY_HOST": "sentry.example.com",
"MCP_DISABLE_SKILLS": "seer"
}
}
}
}
MCP includes an Inspector for easily testing the service:
pnpm inspector
Enter the MCP server URL (http://localhost:5173) and hit connect. This should trigger the authentication flow for you.
Note: If you have issues with your OAuth flow when accessing the inspector on 127.0.0.1, try using localhost instead by visiting http://localhost:6274.
To contribute changes, you'll need to set up your local environment:
Set up environment files:
make setup-env # Creates both .env files from examples
Create an OAuth App in Sentry (Settings => API => Applications):
- Homepage URL: http://localhost:5173
- Authorized Redirect URI: http://localhost:5173/oauth/callback

Configure your credentials:

- Edit .env in the root directory and add your OPENAI_API_KEY.
- Edit packages/mcp-cloudflare/.env and add:

SENTRY_CLIENT_ID=your_development_sentry_client_id
SENTRY_CLIENT_SECRET=your_development_sentry_client_secret
COOKIE_SECRET=my-super-secret-cookie

Start the development server:
pnpm dev
This runs the server locally and makes it available at http://localhost:5173.
To test the local server, enter http://localhost:5173/mcp into Inspector and hit connect. Once you follow the prompts, you'll be able to "List Tools".
There are three test suites included: unit tests, evaluations, and manual testing.
Unit tests can be run using:
pnpm test
Evaluations require a .env file in the project root with some config:
# .env (in project root)
OPENAI_API_KEY= # Also required for AI-powered search tools in production
Note: The root .env file provides defaults for all packages. Individual packages can have their own .env files to override these defaults during development.
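As an illustration of that layering (the values below are placeholders), a package-level file simply redefines the variable it wants to override:

```
# .env (project root): shared default for all packages
OPENAI_API_KEY=sk-shared-default

# packages/mcp-cloudflare/.env: takes precedence for that package
OPENAI_API_KEY=sk-cloudflare-override
```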
Once that's done you can run them using:
pnpm eval
Manual testing (preferred for testing MCP changes):
# Test with local dev server (default: http://localhost:5173)
pnpm -w run cli "who am I?"
# Test agent mode (use_sentry tool only)
pnpm -w run cli --agent "who am I?"
# Test against production
pnpm -w run cli --mcp-host=https://mcp.sentry.dev "query"
# Test with local stdio mode (requires SENTRY_ACCESS_TOKEN)
pnpm -w run cli --access-token=TOKEN "query"
Note: The CLI defaults to http://localhost:5173. Override with --mcp-host or set MCP_URL environment variable.
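For example, using the MCP_URL variable mentioned above instead of the flag (the production host here is the one named earlier; any reachable MCP host works):

```shell
# Equivalent to passing --mcp-host on every invocation.
MCP_URL=https://mcp.sentry.dev pnpm -w run cli "who am I?"
```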
Comprehensive testing playbooks:
- docs/testing-stdio.md: complete guide on building, running, and testing the stdio implementation (IDEs, MCP Inspector)
- docs/testing-remote.md: complete guide on testing the remote server (OAuth, web UI, CLI client)

This repository uses automated code review tools (like Cursor BugBot) to help identify potential issues in pull requests. These tools provide helpful feedback and suggestions, but we do not recommend making these checks required, as their accuracy is still evolving and they can produce false positives.
Automated reviews should be treated as advisory. When addressing their feedback, focus on the underlying concerns rather than strictly following every suggestion.
Looking to contribute or explore the full documentation map? See CLAUDE.md (also available as AGENTS.md) for contributor workflows and the complete docs index. The docs/ folder contains the per-topic guides and tool-integrated .md files.
{
"mcpServers": {
"sentry": {
"command": "npx",
"args": [
"@sentry/mcp-server@latest"
],
"env": {
"SENTRY_ACCESS_TOKEN": "<YOUR_SENTRY_ACCESS_TOKEN>",
"EMBEDDED_AGENT_PROVIDER": "openai|anthropic",
"OPENAI_API_KEY": "<YOUR_OPENAI_API_KEY>",
"ANTHROPIC_API_KEY": "<YOUR_ANTHROPIC_API_KEY>",
"SENTRY_HOST": "<YOUR_SENTRY_HOST>",
"MCP_DISABLE_SKILLS": "<comma-separated-skills>"
}
}
}
}

To add the server with Claude Code:

claude mcp add sentry npx @sentry/mcp-server@latest