by ntk148v
Provides programmatic access for AI assistants to query and manage Prometheus Alertmanager resources, supporting authentication, multi‑tenant handling, and pagination to fit LLM context limits.
Enables AI assistants and tools to query Alertmanager status, alerts, silences, receivers, and alert groups, as well as create, update, and delete silences and alerts.
Requires uv (or Docker) and network access to an Alertmanager instance. Start the server with python -m src.alertmanager_mcp_server.server, use the provided Make targets (make install), or install via Smithery with:
npx -y @smithery/cli install @ntk148v/alertmanager-mcp-server --client claude
Configure the server with environment variables (ALERTMANAGER_URL is required; ALERTMANAGER_USERNAME, ALERTMANAGER_PASSWORD, and ALERTMANAGER_TENANT are optional). Select the transport (stdio, http, or sse) via MCP_TRANSPORT (default: stdio). For HTTP/SSE, adjust MCP_HOST and MCP_PORT:
python -m src.alertmanager_mcp_server.server --transport http --host 127.0.0.1 --port 9000
Or Docker:
docker run -e ALERTMANAGER_URL=http://alertmanager:9093 -p 8000:8000 ghcr.io/ntk148v/alertmanager-mcp-server
Multi-tenant deployments are supported via ALERTMANAGER_TENANT or a per-request X-Scope-OrgId header. Supported transports: stdio, http (streamable HTTP), and sse (Server-Sent Events).
Q: Which transport should I choose?
A: Use stdio for local CLI integration, http for standard HTTP streaming, or sse when you need Server‑Sent Events with persistent connections.
Q: How does pagination work?
A: Each listing function accepts count (items per page) and offset (items to skip). The response includes a pagination object with total, offset, count, and has_more.
Q: Can I run the server without Docker?
A: Yes – install dependencies with uv or pip, then start the server via the Python module command shown above.
Q: How is authentication handled?
A: Provide the optional ALERTMANAGER_USERNAME and ALERTMANAGER_PASSWORD environment variables; the server forwards them to Alertmanager using Basic Auth (see the sketch after this FAQ).
Q: Do I need to rebuild the Docker image?
A: Not required. The official image is hosted at ghcr.io/ntk148v/alertmanager-mcp-server. Build locally only if you modify the source.
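For illustration, here is a minimal Python sketch of the Basic Auth forwarding pattern described above, using the requests library against Alertmanager's v2 API. The helper name get_status_example is hypothetical and not the server's actual internals:

import os
import requests  # assumed HTTP client for this sketch

def get_status_example():
    """Query Alertmanager's v2 status endpoint, attaching Basic Auth
    only when credentials are configured."""
    url = os.environ["ALERTMANAGER_URL"]
    username = os.getenv("ALERTMANAGER_USERNAME")
    password = os.getenv("ALERTMANAGER_PASSWORD")
    auth = (username, password) if username and password else None
    resp = requests.get(f"{url}/api/v2/status", auth=auth, timeout=10)
    resp.raise_for_status()
    return resp.json()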
Prometheus Alertmanager MCP is a Model Context Protocol (MCP) server for Prometheus Alertmanager. It enables AI assistants and tools to query and manage Alertmanager resources programmatically and securely.
Multi-tenant support is available via the X-Scope-OrgId header (for Mimir/Cortex). To install Prometheus Alertmanager MCP Server for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @ntk148v/alertmanager-mcp-server --client claude
# Clone the repository
$ git clone https://github.com/ntk148v/alertmanager-mcp-server.git
# Set environment variables (see .env.sample)
ALERTMANAGER_URL=http://your-alertmanager:9093
ALERTMANAGER_USERNAME=your_username # optional
ALERTMANAGER_PASSWORD=your_password # optional
ALERTMANAGER_TENANT=your_tenant_id # optional, for multi-tenant setups
For multi-tenant Alertmanager deployments (e.g., Grafana Mimir, Cortex), you can specify the tenant ID in two ways:
- the ALERTMANAGER_TENANT environment variable
- the X-Scope-OrgId header in requests to the MCP server

The X-Scope-OrgId header takes precedence over the static configuration, allowing dynamic tenant switching per request.
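As a concrete illustration of this precedence rule, a small Python sketch; the resolve_tenant helper is hypothetical, not part of the server's public API:

import os

def resolve_tenant(request_headers: dict) -> str | None:
    # A per-request X-Scope-OrgId header wins over the static env var.
    return request_headers.get("X-Scope-OrgId") or os.getenv("ALERTMANAGER_TENANT")

os.environ["ALERTMANAGER_TENANT"] = "team-a"
print(resolve_tenant({"X-Scope-OrgId": "team-b"}))  # -> team-b (header wins)
print(resolve_tenant({}))                           # -> team-a (env fallback)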
You can control how the MCP server communicates with clients using the transport options and host/port settings. These can be set either with command-line flags (which take precedence) or with environment variables.
- --transport (env MCP_TRANSPORT): stdio, http, or sse. Default: stdio.
- --host (env MCP_HOST): bind address for the http or sse transports (used by the embedded uvicorn server). Default: 0.0.0.0.
- --port (env MCP_PORT): port for the http or sse transports. Default: 8000.

Examples:
Use environment variables to set defaults (CLI flags still override):
MCP_TRANSPORT=sse MCP_HOST=0.0.0.0 MCP_PORT=8080 python3 -m src.alertmanager_mcp_server.server
Or pass flags directly to override env vars:
python3 -m src.alertmanager_mcp_server.server --transport http --host 127.0.0.1 --port 9000
Notes:
The stdio transport communicates over standard input/output and ignores host/port.
The http (streamable HTTP) and sse transports are served via an ASGI app (uvicorn) so host/port are respected when using those transports.
Add the server configuration to your client configuration file. For example, for Claude Desktop:
{
"mcpServers": {
"alertmanager": {
"command": "uv",
"args": [
"--directory",
"<full path to alertmanager-mcp-server directory>",
"run",
"src/alertmanager_mcp_server/server.py"
],
"env": {
"ALERTMANAGER_URL": "http://your-alertmanager:9093s",
"ALERTMANAGER_USERNAME": "your_username",
"ALERTMANAGER_PASSWORD": "your_password"
}
}
}
}
$ make install


$ docker run -e ALERTMANAGER_URL=http://your-alertmanager:9093 \
-e ALERTMANAGER_USERNAME=your_username \
-e ALERTMANAGER_PASSWORD=your_password \
-e ALERTMANAGER_TENANT=your_tenant_id \
-p 8000:8000 ghcr.io/ntk148v/alertmanager-mcp-server
{
"mcpServers": {
"alertmanager": {
"command": "docker",
"args": [
"run",
"--rm",
"-i",
"-e",
"ALERTMANAGER_URL",
"-e",
"ALERTMANAGER_USERNAME",
"-e",
"ALERTMANAGER_PASSWORD",
"ghcr.io/ntk148v/alertmanager-mcp-server:latest"
],
"env": {
"ALERTMANAGER_URL": "http://your-alertmanager:9093s",
"ALERTMANAGER_USERNAME": "your_username",
"ALERTMANAGER_PASSWORD": "your_password"
}
}
}
}
This configuration passes the environment variables from Claude Desktop to the Docker container by using the -e flag with just the variable name, and providing the actual values in the env object.
The MCP server exposes tools for querying and managing Alertmanager, following its API v2:
- get_status(): get the current status of the Alertmanager instance.
- get_alerts(filter, silenced, inhibited, active, count, offset): list alerts.
  - count: number of alerts per page (default: 10, max: 25)
  - offset: number of alerts to skip (default: 0)
  - Returns { "data": [...], "pagination": { "total": N, "offset": M, "count": K, "has_more": bool } }
- get_silences(filter, count, offset): list silences.
  - count: number of silences per page (default: 10, max: 50)
  - offset: number of silences to skip (default: 0)
  - Returns { "data": [...], "pagination": { "total": N, "offset": M, "count": K, "has_more": bool } }
- post_silence(silence_dict): create or update a silence.
- delete_silence(silence_id): delete a silence.
- get_receivers(): list configured receivers.
- get_alert_groups(silenced, inhibited, active, count, offset): list alert groups.
  - count: number of alert groups per page (default: 3, max: 5)
  - offset: number of alert groups to skip (default: 0)
  - Returns { "data": [...], "pagination": { "total": N, "offset": M, "count": K, "has_more": bool } }

When working with environments that have many alerts, silences, or alert groups, the pagination feature helps:
- Fetch results page by page with the offset and count parameters
- The has_more flag indicates when additional pages are available

Example: If you have 100 alerts, the LLM can fetch them in manageable chunks (e.g., 10 at a time) and only load what's needed for analysis.
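To make the paging contract concrete, here is a hedged Python sketch of a client-side loop over get_alerts; call_tool stands in for whatever MCP client invocation you use and is not part of this project:

def fetch_all_alerts(call_tool, page_size: int = 10) -> list:
    """Page through get_alerts with offset/count until has_more is False.

    call_tool is a hypothetical callable wrapping your MCP client, e.g.
    call_tool("get_alerts", {"count": 10, "offset": 0}) -> response dict.
    """
    alerts, offset = [], 0
    while True:
        page = call_tool("get_alerts", {"count": page_size, "offset": offset})
        alerts.extend(page["data"])
        if not page["pagination"]["has_more"]:
            return alerts
        offset += page["pagination"]["count"]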
See src/alertmanager_mcp_server/server.py for full API details.
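As a usage sketch, the silence_dict passed to post_silence follows Alertmanager's API v2 silence schema; the matcher values and comment below are placeholders:

from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
silence_dict = {
    # Label matchers select which alerts the silence mutes.
    "matchers": [
        {"name": "alertname", "value": "HighCPUUsage", "isRegex": False}
    ],
    "startsAt": now.isoformat(),
    "endsAt": (now + timedelta(hours=2)).isoformat(),
    "createdBy": "oncall@example.com",
    "comment": "Planned maintenance window",
}
# post_silence(silence_dict) should return the new silence's ID.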
Contributions are welcome! Please open an issue or submit a pull request if you have any suggestions or improvements.
This project uses uv to manage dependencies. Install uv following the instructions for your platform.
# Clone the repository
$ git clone https://github.com/ntk148v/alertmanager-mcp-server.git
$ cd alertmanager-mcp-server
$ make setup
# Run test
$ make test
# Run in development mode
$ mcp dev src/alertmanager_mcp_server/server.py
# Install in Claude Desktop
$ make install
Alternatively, configure the server in your MCP client via npx:
{
"mcpServers": {
"alertmanager": {
"command": "npx",
"args": [
"-y",
"@ntk148v/alertmanager-mcp-server"
],
"env": {
"ALERTMANAGER_URL": "<ALERTMANAGER_URL>",
"ALERTMANAGER_USERNAME": "<ALERTMANAGER_USERNAME>",
"ALERTMANAGER_PASSWORD": "<ALERTMANAGER_PASSWORD>",
"ALERTMANAGER_TENANT": "<ALERTMANAGER_TENANT>"
}
}
}
}

Or register it with the Claude CLI:
claude mcp add alertmanager npx -y @ntk148v/alertmanager-mcp-server