by honeycombio
Enables LLMs to query and analyze Honeycomb observability data across multiple environments via the Model Context Protocol, providing real‑time insights into datasets, SLOs, triggers, and more.
Honeycomb MCP provides a self‑hosted server that exposes Honeycomb datasets, alerts, dashboards, and other telemetry through a standardized protocol that LLMs can interact with. It lets AI assistants run analytics queries, list resources, and retrieve detailed metadata without needing direct API calls.
pnpm install
pnpm run build
The server reads configuration from environment variables. Call tools such as list_datasets, run_query, list_slos, or get_trigger from the LLM client to retrieve data.
Q: Do I need an Enterprise account? A: Yes, the server works only with Honeycomb Enterprise API keys that have full permissions.
Q: Is the server authenticated? A: The server itself is unauthenticated; authentication is handled via the API key supplied in environment variables.
Q: Can I run this in EU?
A: Set HONEYCOMB_API_ENDPOINT to the EU endpoint (e.g., https://api.eu1.honeycomb.io/).
Q: How does caching work?
A: Caching is enabled by default. TTLs and maximum cache size can be tuned with HONEYCOMB_CACHE_* environment variables.
Q: Is the project still maintained? A: The repository is deprecated; users are encouraged to migrate to the hosted Honeycomb MCP solution.
⚠️ DEPRECATED: This self-hosted MCP server is deprecated. Please migrate to the hosted Honeycomb Model Context Protocol (MCP) solution at Honeycomb MCP Documentation.
A Model Context Protocol server for interacting with Honeycomb observability data. This server enables LLMs like Claude to directly analyze and query your Honeycomb datasets across multiple environments.

Honeycomb MCP is effectively a complete alternative interface to Honeycomb, and thus you need broad permissions for the API.
Currently, this is only available for Honeycomb Enterprise customers.
Today, this is a single server process that you must run on your own computer. It is not authenticated. All communication between your client and the server happens over STDIO.
pnpm install
pnpm run build
The build artifact is written to the build/ directory.
To use this MCP server, you need to provide Honeycomb API keys via environment variables in your MCP config.
{
"mcpServers": {
"honeycomb": {
"command": "node",
"args": [
"/fully/qualified/path/to/honeycomb-mcp/build/index.mjs"
],
"env": {
"HONEYCOMB_API_KEY": "your_api_key"
}
}
}
}
For multiple environments:
{
"mcpServers": {
"honeycomb": {
"command": "node",
"args": [
"/fully/qualified/path/to/honeycomb-mcp/build/index.mjs"
],
"env": {
"HONEYCOMB_ENV_PROD_API_KEY": "your_prod_api_key",
"HONEYCOMB_ENV_STAGING_API_KEY": "your_staging_api_key"
}
}
}
}
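The multi-environment variables follow a HONEYCOMB_ENV_&lt;NAME&gt;_API_KEY naming pattern. As a hypothetical sketch (not the server's actual code), discovering the configured environments from such variables could look like:

```python
import re

def discover_environments(env: dict[str, str]) -> dict[str, str]:
    """Map lowercase environment names to API keys found in
    HONEYCOMB_ENV_<NAME>_API_KEY variables (illustrative only)."""
    pattern = re.compile(r"^HONEYCOMB_ENV_(.+)_API_KEY$")
    environments = {}
    for name, value in env.items():
        match = pattern.match(name)
        if match:
            environments[match.group(1).lower()] = value
    return environments
```

With the config above, this would yield keys for a "prod" and a "staging" environment.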
Important: These environment variables must be set in the env block of your MCP config.
EU customers must also set a HONEYCOMB_API_ENDPOINT configuration, since the MCP defaults to the non-EU instance.
# Optional custom API endpoint (defaults to https://api.honeycomb.io)
HONEYCOMB_API_ENDPOINT=https://api.eu1.honeycomb.io/
The MCP server implements caching for all non-query Honeycomb API calls to improve performance and reduce API usage. Caching can be configured using these environment variables:
# Enable/disable caching (default: true)
HONEYCOMB_CACHE_ENABLED=true
# Default TTL in seconds (default: 300)
HONEYCOMB_CACHE_DEFAULT_TTL=300
# Resource-specific TTL values in seconds (defaults shown)
HONEYCOMB_CACHE_DATASET_TTL=900 # 15 minutes
HONEYCOMB_CACHE_COLUMN_TTL=900 # 15 minutes
HONEYCOMB_CACHE_BOARD_TTL=900 # 15 minutes
HONEYCOMB_CACHE_SLO_TTL=900 # 15 minutes
HONEYCOMB_CACHE_TRIGGER_TTL=900 # 15 minutes
HONEYCOMB_CACHE_MARKER_TTL=900 # 15 minutes
HONEYCOMB_CACHE_RECIPIENT_TTL=900 # 15 minutes
HONEYCOMB_CACHE_AUTH_TTL=3600 # 1 hour
# Maximum cache size (items per resource type)
HONEYCOMB_CACHE_MAX_SIZE=1000
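As a rough model of the behavior these variables control, a cache whose entries expire after a TTL and which is capped at a maximum size can be sketched like this (a simplified illustration, not the server's implementation; its actual eviction policy may differ):

```python
import time

class TTLCache:
    """Entries expire after ttl seconds; the oldest entry is evicted
    once max_size is reached (simplified illustration)."""

    def __init__(self, ttl: float = 300.0, max_size: int = 1000):
        self.ttl = ttl
        self.max_size = max_size
        self._store: dict[str, tuple[float, object]] = {}

    def set(self, key: str, value: object) -> None:
        if key not in self._store and len(self._store) >= self.max_size:
            # Evict the entry that was inserted earliest.
            oldest = min(self._store, key=lambda k: self._store[k][0])
            del self._store[oldest]
        self._store[key] = (time.monotonic(), value)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        inserted_at, value = entry
        if time.monotonic() - inserted_at > self.ttl:
            del self._store[key]
            return None
        return value
```

HONEYCOMB_CACHE_DEFAULT_TTL corresponds to ttl here, and HONEYCOMB_CACHE_MAX_SIZE to max_size, applied per resource type.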
Honeycomb MCP has been tested with the following clients:
It will likely work with other clients.
Access Honeycomb datasets using URIs in the format:
honeycomb://{environment}/{dataset}
For example:
honeycomb://production/api-requests
honeycomb://staging/backend-services
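A honeycomb://{environment}/{dataset} URI could be split into its parts like this (an illustrative sketch, not the server's actual parsing code):

```python
from urllib.parse import urlparse

def parse_honeycomb_uri(uri: str) -> tuple[str, str]:
    """Split a honeycomb://{environment}/{dataset} URI into its parts."""
    parsed = urlparse(uri)
    if parsed.scheme != "honeycomb":
        raise ValueError(f"expected a honeycomb:// URI, got {uri!r}")
    environment = parsed.netloc
    dataset = parsed.path.lstrip("/")
    if not environment or not dataset:
        raise ValueError(f"URI must name both an environment and a dataset: {uri!r}")
    return environment, dataset
```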
list_datasets: List all datasets in an environment
{ "environment": "production" }
get_columns: Get column information for a dataset
{
"environment": "production",
"dataset": "api-requests"
}
run_query: Run analytics queries with rich options
{
"environment": "production",
"dataset": "api-requests",
"calculations": [
{ "op": "COUNT" },
{ "op": "P95", "column": "duration_ms" }
],
"breakdowns": ["service.name"],
"time_range": 3600
}
analyze_columns: Analyzes specific columns in a dataset by running statistical queries and returning computed metrics.
list_slos: List all SLOs for a dataset
{
"environment": "production",
"dataset": "api-requests"
}
get_slo: Get detailed SLO information
{
"environment": "production",
"dataset": "api-requests",
"sloId": "abc123"
}
list_triggers: List all triggers for a dataset
{
"environment": "production",
"dataset": "api-requests"
}
get_trigger: Get detailed trigger information
{
"environment": "production",
"dataset": "api-requests",
"triggerId": "xyz789"
}
get_trace_link: Generate a deep link to a specific trace in the Honeycomb UI
get_instrumentation_help: Provides OpenTelemetry instrumentation guidance
{
"language": "python",
"filepath": "app/services/payment_processor.py"
}
Ask Claude things like:
All tool responses are optimized to reduce context window usage while maintaining essential information:
This optimization ensures that responses are concise but complete, allowing LLMs to process more data within context limitations.
The run_query tool supports a comprehensive query specification:
calculations: Array of operations to perform
{"op": "HEATMAP", "column": "duration_ms"}
filters: Array of filter conditions
{"column": "error", "op": "=", "value": true}
filter_combination: "AND" or "OR" (default is "AND")
breakdowns: Array of columns to group results by
["service.name", "http.status_code"]
orders: Array specifying how to sort results
{"op": "COUNT", "order": "descending"}
time_range: Relative time range in seconds (e.g., 3600 for the last hour)
start_time and end_time: UNIX timestamps for absolute time ranges
having: Filter results based on calculation values
{"calculate_op": "COUNT", "op": ">", "value": 100}
Here are some real-world example queries:
{
"environment": "production",
"dataset": "api-requests",
"calculations": [
{"column": "duration_ms", "op": "HEATMAP"},
{"column": "duration_ms", "op": "MAX"}
],
"filters": [
{"column": "trace.parent_id", "op": "does-not-exist"}
],
"breakdowns": ["http.target", "name"],
"orders": [
{"column": "duration_ms", "op": "MAX", "order": "descending"}
]
}
{
"environment": "production",
"dataset": "api-requests",
"calculations": [
{"column": "duration_ms", "op": "HEATMAP"}
],
"filters": [
{"column": "db.statement", "op": "exists"}
],
"breakdowns": ["db.statement"],
"time_range": 604800
}
{
"environment": "production",
"dataset": "api-requests",
"calculations": [
{"op": "COUNT"}
],
"filters": [
{"column": "exception.message", "op": "exists"},
{"column": "parent_name", "op": "exists"}
],
"breakdowns": ["exception.message", "parent_name"],
"orders": [
{"op": "COUNT", "order": "descending"}
]
}
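Since the query spec is plain JSON, it can help to sanity-check a payload before sending it. The field names below come from the spec above, but this helper itself is hypothetical and not part of the MCP; the rule against combining time_range with both start_time and end_time is an assumption:

```python
REQUIRED_FIELDS = {"environment", "dataset", "calculations"}
KNOWN_FIELDS = REQUIRED_FIELDS | {
    "filters", "filter_combination", "breakdowns",
    "orders", "time_range", "start_time", "end_time", "having",
}

def validate_query(query: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload looks well-formed."""
    problems = []
    missing = REQUIRED_FIELDS - query.keys()
    problems += [f"missing required field: {f}" for f in sorted(missing)]
    unknown = query.keys() - KNOWN_FIELDS
    problems += [f"unknown field: {f}" for f in sorted(unknown)]
    for calc in query.get("calculations", []):
        # COUNT is the only column-less operation handled in this sketch.
        if calc.get("op") != "COUNT" and "column" not in calc:
            problems.append(f"calculation {calc.get('op')} needs a column")
    if {"time_range", "start_time", "end_time"} <= query.keys():
        problems.append("use time_range or start_time/end_time, not all three")
    return problems
```

For example, the queries shown above all pass this check, while a payload missing its dataset or using P95 without a column would be flagged.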
pnpm install
pnpm run build
MIT
{
"mcpServers": {
"honeycomb": {
"command": "npx",
"args": [
"-y",
"honeycomb-mcp"
],
"env": {
"HONEYCOMB_API_KEY": "<YOUR_API_KEY>"
}
}
}
}
claude mcp add honeycomb npx -y honeycomb-mcp
Explore related MCPs that share similar capabilities and solve comparable challenges
by netdata
Delivers real‑time, per‑second infrastructure monitoring with zero‑configuration agents, on‑edge machine‑learning anomaly detection, and built‑in dashboards.
by Arize-ai
Open-source AI observability platform enabling tracing, evaluation, dataset versioning, experiment tracking, prompt management, and interactive playground for LLM applications.
by msgbyte
Provides integrated website traffic analysis, uptime checking, and server health monitoring in a single self‑hosted platform.
by grafana
Provides programmatic access to a Grafana instance and its surrounding ecosystem through the Model Context Protocol, enabling AI assistants and other clients to query and manipulate dashboards, datasources, alerts, incidents, on‑call schedules, and more.
by dynatrace-oss
Provides a local server that enables real‑time interaction with the Dynatrace observability platform, exposing tools for querying data, retrieving problems, sending Slack notifications, and integrating AI assistance.
by pydantic
Provides tools to retrieve and query OpenTelemetry trace and metric data from Pydantic Logfire, allowing LLMs to analyze distributed traces and run arbitrary SQL queries against telemetry records.
by VictoriaMetrics-Community
Provides a Model Context Protocol server exposing read‑only VictoriaMetrics APIs, enabling seamless monitoring, observability, and automation through AI‑driven assistants.
by GeLi2001
Enables interaction with the Datadog API through a Model Context Protocol server, providing access to monitors, dashboards, metrics, logs, events, and incident data.
by last9
Provides AI agents with real‑time production context—including logs, metrics, traces, and alerts—through a Model Context Protocol server, enabling automatic code fixing and faster debugging.