by pydantic
Provides tools to retrieve and query OpenTelemetry trace and metric data from Pydantic Logfire, allowing LLMs to analyze distributed traces and run arbitrary SQL queries against telemetry records.
The server exposes Pydantic Logfire telemetry—traces and metrics—through a set of RPC‑style tools. LLMs can call these tools to fetch recent exceptions, execute custom SQL against the Logfire DataFusion database, generate direct links to trace visualisations, and inspect the underlying schema.
Q: What do I need to install?
A: uv (the recommended Python package manager), which provides the uvx command used to run the server.
Q: How do I run the server?
A: Run uvx logfire-mcp@latest and supply the token via the LOGFIRE_READ_TOKEN environment variable or the --read-token flag.
Q: Which configuration options does the server accept?
A: LOGFIRE_READ_TOKEN, plus an optional LOGFIRE_BASE_URL for self-hosted deployments.
Q: Do I need to run the server manually?
A: Only if your MCP client does not manage servers for you. Clients like Cursor or Claude Desktop can launch the server automatically using the provided configuration.
Q: What is the maximum look‑back period?
A: Both tools accept an age argument up to 7 days (10,080 minutes).
Q: How do I point the server at a self‑hosted Logfire instance?
A: Set LOGFIRE_BASE_URL in the environment or use the --base-url flag when starting the server.
Q: Which language is the server written in?
A: It is a Python package distributed via uvx.
Q: Is the server open-source?
A: Yes, under the MIT License.
This repository contains a Model Context Protocol (MCP) server with tools that can access the OpenTelemetry traces and metrics you've sent to Pydantic Logfire.
This MCP server enables LLMs to retrieve your application's telemetry data, analyze distributed traces, and make use of the results of arbitrary SQL queries executed using the Pydantic Logfire APIs.
- find_exceptions_in_file - Get details about the 10 most recent exceptions in the file.
  - filepath (string) - The path to the file to find exceptions in.
  - age (integer) - Number of minutes to look back, e.g. 30 for the last 30 minutes. Maximum allowed value is 7 days.
- arbitrary_query - Run an arbitrary query on the Pydantic Logfire database.
  - query (string) - The query to run, as a SQL string.
  - age (integer) - Number of minutes to look back, e.g. 30 for the last 30 minutes. Maximum allowed value is 7 days.
- logfire_link - Creates a link to help the user view the trace in the Logfire UI.
  - trace_id (string) - The trace ID to link to.
- schema_reference - The database schema for the Logfire DataFusion database.
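Both look-back tools share the same age bound of 7 days (10,080 minutes). A minimal client-side sketch of validating that argument before a call — the helper name is illustrative and not part of the server:

```python
# Hypothetical client-side check for the shared `age` argument.
# Both find_exceptions_in_file and arbitrary_query cap age at 7 days.
MAX_AGE_MINUTES = 7 * 24 * 60  # 10,080 minutes

def validate_age(age: int) -> int:
    """Return age unchanged if within the allowed window, else raise."""
    if not 0 < age <= MAX_AGE_MINUTES:
        raise ValueError(f"age must be between 1 and {MAX_AGE_MINUTES} minutes")
    return age
```

For example, validate_age(1440) (the last 24 hours) passes, while validate_age(20000) raises.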
The first thing to do is make sure uv is installed, as uv is used to run the MCP server.
For installation instructions, see the uv installation docs.
If you already have an older version of uv installed, you might need to update it with uv self update.
In order to make requests to the Pydantic Logfire APIs, the Pydantic Logfire MCP server requires a "read token".
You can create one under the "Read Tokens" section of your project settings in Pydantic Logfire: https://logfire.pydantic.dev/-/redirect/latest-project/settings/read-tokens
[!IMPORTANT] Pydantic Logfire read tokens are project-specific, so you need to create one for the specific project you want to expose to the Pydantic Logfire MCP server.
Once you have uv installed and have a Pydantic Logfire read token, you can manually run the MCP server using uvx (which is provided by uv).
You can specify your read token using the LOGFIRE_READ_TOKEN environment variable:
LOGFIRE_READ_TOKEN=YOUR_READ_TOKEN uvx logfire-mcp@latest
You can also set LOGFIRE_READ_TOKEN in a .env file:
LOGFIRE_READ_TOKEN=pylf_v1_us_...
NOTE: for this to work, the MCP server needs to run with the directory containing the .env file in its working directory.
or using the --read-token flag:
uvx logfire-mcp@latest --read-token=YOUR_READ_TOKEN
[!NOTE] If you are using Cursor, Claude Desktop, Cline, or other MCP clients that manage your MCP servers for you, you do NOT need to manually run the server yourself. The next section will show you how to configure these clients to make use of the Pydantic Logfire MCP server.
If you are running Logfire in a self-hosted environment, you need to specify the base URL.
This can be done using the LOGFIRE_BASE_URL environment variable:
LOGFIRE_BASE_URL=https://logfire.my-company.com uvx logfire-mcp@latest --read-token=YOUR_READ_TOKEN
You can also use the --base-url argument:
uvx logfire-mcp@latest --base-url=https://logfire.my-company.com --read-token=YOUR_READ_TOKEN
Create a .cursor/mcp.json file in your project root:
{
"mcpServers": {
"logfire": {
"command": "uvx",
"args": ["logfire-mcp@latest", "--read-token=YOUR-TOKEN"]
}
}
}
Cursor doesn't accept the env field, so you need to use the --read-token flag instead.
Run the following command:
claude mcp add logfire -e LOGFIRE_READ_TOKEN=YOUR_TOKEN -- uvx logfire-mcp@latest
Add to your Claude settings:
{
"command": ["uvx"],
"args": ["logfire-mcp@latest"],
"type": "stdio",
"env": {
"LOGFIRE_READ_TOKEN": "YOUR_TOKEN"
}
}
Add to your Cline settings in cline_mcp_settings.json:
{
"mcpServers": {
"logfire": {
"command": "uvx",
"args": ["logfire-mcp@latest"],
"env": {
"LOGFIRE_READ_TOKEN": "YOUR_TOKEN"
},
"disabled": false,
"autoApprove": []
}
}
}
Make sure you have enabled MCP support in VS Code.
Create a .vscode/mcp.json file in your project's root directory:
{
"servers": {
"logfire": {
"type": "stdio",
"command": "uvx", // or the absolute /path/to/uvx
"args": ["logfire-mcp@latest"],
"env": {
"LOGFIRE_READ_TOKEN": "YOUR_TOKEN"
}
}
}
}
Create a .zed/settings.json file in your project's root directory:
{
"context_servers": {
"logfire": {
"source": "custom",
"command": "uvx",
"args": ["logfire-mcp@latest"],
"env": {
"LOGFIRE_READ_TOKEN": "YOUR_TOKEN"
},
"enabled": true
}
}
}
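The client configurations above differ mainly in where the token goes: Cursor requires the --read-token flag, while the other clients accept an env block. A hedged sketch of generating either shape (the helper is illustrative, not part of any client):

```python
import json

def logfire_server_config(token: str, use_flag: bool = False) -> dict:
    """Build an MCP server entry for the Logfire server.

    use_flag=True produces the Cursor-style config (token as a CLI flag);
    otherwise the token is passed via the LOGFIRE_READ_TOKEN env var.
    """
    entry = {"command": "uvx", "args": ["logfire-mcp@latest"]}
    if use_flag:
        entry["args"].append(f"--read-token={token}")
    else:
        entry["env"] = {"LOGFIRE_READ_TOKEN": token}
    return {"mcpServers": {"logfire": entry}}

# Cursor-style config, ready to write to .cursor/mcp.json:
print(json.dumps(logfire_server_config("YOUR_TOKEN", use_flag=True), indent=2))
```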
Example call to find_exceptions_in_file:
{
"name": "find_exceptions_in_file",
"arguments": {
"filepath": "app/api.py",
"age": 1440
}
}
Response:
[
{
"created_at": "2024-03-20T10:30:00Z",
"message": "Failed to process request",
"exception_type": "ValueError",
"exception_message": "Invalid input format",
"function_name": "process_request",
"line_number": "42",
"attributes": {
"service.name": "api-service",
"code.filepath": "app/api.py"
},
"trace_id": "1234567890abcdef"
}
]
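The response is ordinary JSON, so results can be post-processed directly. A short sketch of tallying returned exceptions by type, using field names from the sample response above (the sample records here are hypothetical):

```python
from collections import Counter

# Hypothetical records shaped like the find_exceptions_in_file response.
exceptions = [
    {"exception_type": "ValueError", "function_name": "process_request"},
    {"exception_type": "ValueError", "function_name": "parse_body"},
    {"exception_type": "KeyError", "function_name": "lookup_user"},
]

counts = Counter(e["exception_type"] for e in exceptions)
print(counts.most_common())  # ValueError appears twice, KeyError once
```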
Example call to arbitrary_query:
{
"name": "arbitrary_query",
"arguments": {
"query": "SELECT trace_id, message, created_at, attributes->>'service.name' as service FROM records WHERE severity_text = 'ERROR' ORDER BY created_at DESC LIMIT 10",
"age": 1440
}
}
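The SQL passed to arbitrary_query is a plain string, so queries can be assembled client-side. A sketch of building a filtered variant of the query above — the records table and attributes column come from the example; the helper itself is hypothetical, and real code should escape interpolated values rather than use raw f-strings:

```python
from typing import Optional

def error_query(service: Optional[str] = None, limit: int = 10) -> str:
    """Build a query over the Logfire `records` table for ERROR rows,
    optionally filtered to one service via the JSON attributes column.
    Note: naive string interpolation for illustration only."""
    where = "severity_text = 'ERROR'"
    if service:
        where += f" AND attributes->>'service.name' = '{service}'"
    return (
        "SELECT trace_id, message, created_at FROM records "
        f"WHERE {where} ORDER BY created_at DESC LIMIT {limit}"
    )
```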
1. Obtain a Pydantic Logfire read token from: https://logfire.pydantic.dev/-/redirect/latest-project/settings/read-tokens
2. Run the MCP server:
   uvx logfire-mcp@latest --read-token=YOUR_TOKEN
3. Configure your preferred client (Cursor, Claude Desktop, or Cline) using the configuration examples above.
4. Start using the MCP server to analyze your OpenTelemetry traces and metrics!
We welcome contributions to help improve the Pydantic Logfire MCP server. Whether you want to add new trace analysis tools, enhance metrics querying functionality, or improve documentation, your input is valuable.
For examples of other MCP servers and implementation patterns, see the Model Context Protocol servers repository.
Pydantic Logfire MCP is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License.