by atla-ai

Provides a standardized interface for large language models to evaluate their responses using the Atla API.

The server implements the Model Context Protocol (MCP) to expose Atla's evaluation models as tools that LLMs can call. It offers two evaluation tools: `evaluate_llm_response` for a single criterion and `evaluate_llm_response_on_multiple_criteria` for batch evaluation, each returning scores and textual critiques.

After installing `uv`, run the server with:

```shell
ATLA_API_KEY=<your-api-key> uvx atla-mcp-server
```

Clients connect using the same command and environment variable: the OpenAI Agents SDK via `MCPServerStdio`, Claude Desktop via `claude_desktop_config.json` (with the command, args, and `ATLA_API_KEY`), and Cursor via `.cursor/mcp.json`. The server runs through `uvx`, so no Docker or complex setup is required; you only need `uv` installed and the environment variable `ATLA_API_KEY` set. Contribution guidelines live in `CONTRIBUTING.md`.
> [!CAUTION]
> This repository was archived on July 21, 2025. The Atla API is no longer active.
An MCP server implementation providing a standardized interface for LLMs to interact with the Atla API for state-of-the-art LLM-as-a-Judge (LLMJ) evaluation.
Learn more about Atla here. Learn more about the Model Context Protocol here.
- `evaluate_llm_response`: Evaluate an LLM's response to a prompt against a single evaluation criterion. This tool uses an Atla evaluation model under the hood and returns a dictionary containing a score for the model's response and a textual critique with feedback on it.
- `evaluate_llm_response_on_multiple_criteria`: Evaluate an LLM's response to a prompt across multiple evaluation criteria. This tool uses an Atla evaluation model under the hood and returns a list of dictionaries, each containing an evaluation score and critique for a given criterion.

To use the MCP server, you will need an Atla API key. You can find your existing API key here or create a new one here.
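The tool contract described above can be sketched in plain Python, purely for illustration. The key names (`score`, `critique`), the string-typed score, and the function signatures are assumptions inferred from the descriptions here, not the official Atla schema:

```python
from typing import TypedDict


class Evaluation(TypedDict):
    # Assumed result shape: a score plus a textual critique.
    # The score type (string vs. number) is an assumption.
    score: str
    critique: str


def evaluate_llm_response(prompt: str, response: str, criterion: str) -> Evaluation:
    """Hypothetical stand-in for the MCP tool of the same name.

    A real call would invoke an Atla evaluation model via the Atla API.
    """
    return {"score": "4", "critique": f"Assessed against: {criterion}"}


def evaluate_llm_response_on_multiple_criteria(
    prompt: str, response: str, criteria: list[str]
) -> list[Evaluation]:
    """Hypothetical stand-in: one evaluation per criterion."""
    return [evaluate_llm_response(prompt, response, c) for c in criteria]


results = evaluate_llm_response_on_multiple_criteria(
    "What is 2+2?", "4", ["accuracy", "clarity"]
)
print(len(results))  # → 2, one result per criterion
```

The point is only the shape of the exchange: one dictionary per criterion, each pairing a score with a critique.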
We recommend using `uv` to manage the Python environment. See here for installation instructions.

Once you have `uv` installed and have your Atla API key, you can manually run the MCP server using `uvx` (which is provided by `uv`):

```shell
ATLA_API_KEY=<your-api-key> uvx atla-mcp-server
```
Having issues or need help connecting to another client? Feel free to open an issue or contact us!
For more details on using the OpenAI Agents SDK with MCP servers, refer to the official documentation.
```shell
pip install openai-agents
```

```python
import os

from agents import Agent
from agents.mcp import MCPServerStdio

async with MCPServerStdio(
    params={
        "command": "uvx",
        "args": ["atla-mcp-server"],
        "env": {"ATLA_API_KEY": os.environ.get("ATLA_API_KEY")},
    }
) as atla_mcp_server:
    ...
```
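The `...` placeholder above is where the server gets attached to an agent. A minimal sketch of that step, assuming the OpenAI Agents SDK's `Agent` and `Runner` interfaces and both an `ATLA_API_KEY` and an `OPENAI_API_KEY` in the environment (the prompt and instructions are illustrative):

```python
import asyncio
import os

from agents import Agent, Runner
from agents.mcp import MCPServerStdio


async def main() -> None:
    async with MCPServerStdio(
        params={
            "command": "uvx",
            "args": ["atla-mcp-server"],
            "env": {"ATLA_API_KEY": os.environ.get("ATLA_API_KEY")},
        }
    ) as atla_mcp_server:
        # Expose the Atla evaluation tools to the agent.
        agent = Agent(
            name="assistant",
            instructions=(
                "Answer the user's question, then evaluate your own answer "
                "using the available Atla evaluation tools."
            ),
            mcp_servers=[atla_mcp_server],
        )
        result = await Runner.run(agent, "What is the capital of France?")
        print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```

The agent can then decide to call `evaluate_llm_response` or `evaluate_llm_response_on_multiple_criteria` as part of its run.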
For more details on configuring MCP servers in Claude Desktop, refer to the official MCP quickstart guide.
Add the following to your `claude_desktop_config.json` file:

```json
{
  "mcpServers": {
    "atla-mcp-server": {
      "command": "uvx",
      "args": ["atla-mcp-server"],
      "env": {
        "ATLA_API_KEY": "<your-atla-api-key>"
      }
    }
  }
}
```

You should now see options from `atla-mcp-server` in the list of available MCP tools.
For more details on configuring MCP servers in Cursor, refer to the official documentation.
Add the following to your `.cursor/mcp.json` file:

```json
{
  "mcpServers": {
    "atla-mcp-server": {
      "command": "uvx",
      "args": ["atla-mcp-server"],
      "env": {
        "ATLA_API_KEY": "<your-atla-api-key>"
      }
    }
  }
}
```

You should now see `atla-mcp-server` in the list of available MCP servers.
Contributions are welcome! Please see the CONTRIBUTING.md file for details.
This project is licensed under the MIT License. See the LICENSE file for details.