by avivsinai
Provides a Model Context Protocol server that enables AI agents to query Langfuse trace data for enhanced debugging and observability.
Langfuse MCP offers a server that exposes a set of tools AI agents can use to fetch traces, observations, sessions, and exception data stored in Langfuse. By exposing this data through the Model Context Protocol, agents can incorporate real‑time execution context into their reasoning, making debugging and observability far more effective.
uv pip install langfuse-mcp
export LANGFUSE_PUBLIC_KEY=your_public_key
export LANGFUSE_SECRET_KEY=your_secret_key
export LANGFUSE_HOST=https://cloud.langfuse.com
uvx langfuse-mcp --public-key $LANGFUSE_PUBLIC_KEY \
--secret-key $LANGFUSE_SECRET_KEY \
--host $LANGFUSE_HOST
The server listens on stdin/stdout and can be integrated with Cursor, Claude Desktop, or any MCP‑compatible client: point your client configuration (e.g., .cursor/mcp.json or the Claude Desktop JSON) at the running command.
- Tools for querying trace data: fetch_traces, fetch_observations, find_exceptions, and more.
- Output modes: compact, full_json_string, full_json_file.
- Diagnostic logs are written to /tmp/langfuse_mcp.log with a configurable log level.
Q: Do I need Python 3.10+?
A: Yes. The package requires Python 3.10 or newer (CI uses 3.11).
Q: Can I run the server without specifying --host?
A: If omitted, the server defaults to the Langfuse Cloud endpoint.
Q: How does caching work?
A: An LRU cache (size configurable via CACHE_SIZE) stores recent API responses, automatically evicting the least‑recently used entries.
Q: Is Docker supported?
A: Absolutely. Build the image from the repository and run it with environment variables for credentials.
Q: What output mode should I use for large responses?
A: Use full_json_file to write the complete payload to a file while receiving a concise summary.
Q: How do I integrate with Cursor?
A: Add the provided deep‑link button or place a .cursor/mcp.json file in the project root as shown in the README.
Q: Where are logs stored?
A: By default, logs go to /tmp/langfuse_mcp.log. Override with LANGFUSE_MCP_LOG_FILE env var.
This project provides a Model Context Protocol (MCP) server for Langfuse, allowing AI agents to query Langfuse trace data for better debugging and observability.
🎯 From Cursor IDE: Click the button above (works seamlessly!)
🌐 From GitHub Web: Copy this deeplink and paste into your browser address bar:
cursor://anysphere.cursor-deeplink/mcp/install?name=langfuse-mcp&config=eyJjb21tYW5kIjoidXZ4IiwiYXJncyI6WyJsYW5nZnVzZS1tY3AiLCItLXB1YmxpYy1rZXkiLCJZT1VSX1BVQkxJQ19LRVkiLCItLXNlY3JldC1rZXkiLCJZT1VSX1NFQ1JFVF9LRVkiLCItLWhvc3QiLCJodHRwczovL2Nsb3VkLmxhbmdmdXNlLmNvbSJdfQ==
⚙️ Manual Setup: See Configuration section below
💡 Note: The "Add to Cursor" button only works from within Cursor IDE due to browser security restrictions on custom protocols (cursor://). This is normal and expected behavior per Cursor's documentation.
After installation: Replace YOUR_PUBLIC_KEY and YOUR_SECRET_KEY with your actual Langfuse credentials in Cursor's MCP settings.
The MCP server provides the following tools for AI agents:
- fetch_traces - Find traces based on criteria like user ID, session ID, etc.
- fetch_trace - Get a specific trace by ID
- fetch_observations - Get observations filtered by type
- fetch_observation - Get a specific observation by ID
- fetch_sessions - List sessions in the current project
- get_session_details - Get detailed information about a session
- get_user_sessions - Get all sessions for a user
- find_exceptions - Find exceptions and errors in traces
- find_exceptions_in_file - Find exceptions in a specific file
- get_exception_details - Get detailed information about an exception
- get_error_count - Get the count of errors
- get_data_schema - Get schema information for the data structures
First, make sure uv is installed. For installation instructions, see the uv installation docs.
If you already have an older version of uv installed, you might need to update it with uv self update.
Requirement: The server now depends on the Langfuse Python SDK v3. Installations automatically pull
langfuse>=3.0.0.
uv pip install langfuse-mcp
If you're iterating on this repository, install the local checkout instead of PyPI:
# from the repo root
uv pip install --editable .
For development we suggest creating an isolated environment pinned to Python 3.11 (the version used in CI):
uv venv --python 3.11 .venv
source .venv/bin/activate # On Windows use: .venv\Scripts\activate
uv pip install --python .venv/bin/python -e .
All subsequent examples assume the virtual environment is activated.
You'll need your Langfuse credentials: a public key, a secret key, and (optionally) the host URL.
You can store these in a local .env file instead of passing CLI flags each time:
LANGFUSE_PUBLIC_KEY=your_public_key
LANGFUSE_SECRET_KEY=your_secret_key
LANGFUSE_HOST=https://cloud.langfuse.com
When present, the MCP server reads these values automatically. CLI arguments still override the environment if provided.
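As an illustration of that precedence, here is a minimal sketch (a hypothetical helper, not the package's actual code):

import os

def resolve_setting(cli_value: str | None, env_name: str, default: str | None = None) -> str | None:
    """Prefer an explicit CLI flag, then the environment/.env value, then a default."""
    return cli_value if cli_value is not None else os.getenv(env_name, default)

# With no --host flag, this falls back to LANGFUSE_HOST or the Cloud endpoint.
host = resolve_setting(None, "LANGFUSE_HOST", "https://cloud.langfuse.com")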
Run the server using uvx or the project virtual environment:
uvx langfuse-mcp --public-key YOUR_KEY --secret-key YOUR_SECRET --host https://cloud.langfuse.com
# or, once inside the repo virtual environment
langfuse-mcp --public-key YOUR_KEY --secret-key YOUR_SECRET --host https://cloud.langfuse.com
Local checkout tip: During development, run uv run --from /path/to/langfuse-mcp langfuse-mcp ... (or uv run python -m langfuse_mcp ...) so uv executes the code in your working tree. Using the PyPI shortcut skips repository-only changes such as the new environment-based credential defaults and logging tweaks.
The server writes diagnostic logs to /tmp/langfuse_mcp.log. Remove the --host switch if you are targeting the default Cloud endpoint.
Use --log-level (e.g., --log-level DEBUG) and --log-to-console to control verbosity during debugging.
Build the image from the repository root so the container installs the current checkout instead of the latest PyPI release:
docker build -t langfuse-logs-mcp .
docker run --rm -i \
-e LANGFUSE_PUBLIC_KEY=YOUR_PUBLIC_KEY \
-e LANGFUSE_SECRET_KEY=YOUR_SECRET_KEY \
-e LANGFUSE_HOST=https://cloud.langfuse.com \
-e LANGFUSE_MCP_LOG_FILE=/logs/langfuse_mcp.log \
-v "$(pwd)/logs:/logs" \
langfuse-logs-mcp
Why no -t? Allocating a pseudo-TTY can interfere with MCP stdio clients. Use -i only so the server communicates over plain stdin/stdout.
The Dockerfile copies the local source tree and installs it with pip install ., so the container always runs your latest commits - a must while testing features that have not shipped on PyPI.
Create a .cursor/mcp.json file in your project root:
{
"mcpServers": {
"langfuse": {
"command": "uvx",
"args": ["langfuse-mcp", "--public-key", "YOUR_KEY", "--secret-key", "YOUR_SECRET", "--host", "https://cloud.langfuse.com"]
}
}
}
Add to your Claude settings:
{
"command": ["uvx"],
"args": ["langfuse-mcp"],
"type": "stdio",
"env": {
"LANGFUSE_PUBLIC_KEY": "YOUR_KEY",
"LANGFUSE_SECRET_KEY": "YOUR_SECRET",
"LANGFUSE_HOST": "https://cloud.langfuse.com"
}
}
Each tool supports different output modes to control the level of detail in responses:
- compact (default): Returns a summary with large values truncated
- full_json_string: Returns the complete data as a JSON string
- full_json_file: Saves the complete data to a file and returns a summary with file information
To set up a development environment, clone the repository:
git clone https://github.com/yourusername/langfuse-mcp.git
cd langfuse-mcp
uv venv --python 3.11 .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
uv pip install --python .venv/bin/python -e ".[dev]"
export LANGFUSE_SECRET_KEY="your-secret-key"
export LANGFUSE_PUBLIC_KEY="your-public-key"
export LANGFUSE_HOST="https://cloud.langfuse.com" # Or your self-hosted URL
Run the unit test suite (mirrors CI):
pytest
To run the demo client:
uv run examples/langfuse_client_demo.py --public-key YOUR_PUBLIC_KEY --secret-key YOUR_SECRET_KEY
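To script against the server outside the bundled demo, a minimal stdio client using the MCP Python SDK could look like the sketch below. The tool name fetch_traces comes from the tool list above; the argument names (limit, output_mode) are assumptions for illustration and should be checked against get_data_schema.

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch langfuse-mcp over stdio, the same way Cursor or Claude Desktop would.
    params = StdioServerParameters(
        command="uvx",
        args=["langfuse-mcp", "--public-key", "YOUR_KEY", "--secret-key", "YOUR_SECRET"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Argument names here are assumptions, not the server's documented schema.
            result = await session.call_tool("fetch_traces", arguments={"limit": 5, "output_mode": "compact"})
            print(result.content)

asyncio.run(main())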
This project uses dynamic versioning based on Git tags:
- The package version is derived from tags via uv-dynamic-versioning
- Create a release tag with git tag v0.1.2 (following semantic versioning)
- Push tags with git push --tags
For a detailed history of changes, please see the CHANGELOG.md file.
Notes on the Langfuse SDK v3 integration:
- The server calls the SDK's v3 API surface (langfuse.api.trace.list, langfuse.api.observations.get_many, etc.).
- The test suite checks which fetch_* helpers are invoked, helping catch regressions early.
- Diagnostic logs go to /tmp/langfuse_mcp.log; this is useful when verifying the upgraded integration against a live Langfuse deployment.
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
We use the cachetools library to implement efficient caching with proper size limits:
- Uses cachetools.LRUCache for better reliability
- The maximum cache size is controlled by the CACHE_SIZE constant
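A rough sketch of that approach (not the project's actual module; the real CACHE_SIZE value and cache keys live in the package):

from cachetools import LRUCache

CACHE_SIZE = 128  # assumed value for illustration; the package defines its own constant

_cache = LRUCache(maxsize=CACHE_SIZE)

def cached_call(key, fetch):
    """Return a cached API response, fetching and storing it on a miss."""
    if key in _cache:
        return _cache[key]
    value = fetch()
    _cache[key] = value  # once full, the least-recently-used entry is evicted
    return value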
{
"mcpServers": {
"langfuse-mcp": {
"command": "uvx",
"args": [
"langfuse-mcp"
],
"env": {
"LANGFUSE_PUBLIC_KEY": "<YOUR_PUBLIC_KEY>",
"LANGFUSE_SECRET_KEY": "<YOUR_SECRET_KEY>",
"LANGFUSE_HOST": "https://cloud.langfuse.com"
}
}
}
}
Alternatively, add the server with the Claude Code CLI:
claude mcp add langfuse-mcp uvx langfuse-mcp