by Alation
Enables AI agents to access and leverage metadata from the Alation Data Catalog for tasks such as asset curation, search, and data analysis.
The SDK provides a set of Python libraries and server components that let AI agents query, retrieve, and modify catalog metadata stored in Alation. It includes a core API client, LangChain adapters, and an MCP‑compatible server that exposes catalog context to any MCP client.
pip install alation-ai-agent-sdk # core SDK
pip install alation-ai-agent-langchain # LangChain integration
pip install alation-ai-agent-mcp # MCP server
Configure authentication with ServiceAccountAuthParams or UserAccountAuthParams, instantiate AlationAPI, and call get_context, bulk_retrieval, lineage, and related tools, either directly through the core SDK or via LangChain agents. Running alation-ai-agent-mcp starts a server that exposes the catalog context over the Model Context Protocol for external clients. Available tools include alation_context, get_data_products, bulk_retrieval, check_job_status, update_catalog_metadata, generate_data_product, lineage, get_custom_fields_definitions, and get_data_dictionary_instructions, covering capabilities such as data product generation (generate_data_product) and schema retrieval.
Q: Which Python version is required?
A: Python 3.10 or higher.
Q: How do I authenticate if I don't have service‑account credentials?
A: Use UserAccountAuthParams as described in the authentication guide.
Q: Is there a way to retrieve custom field definitions?
A: Yes, the get_custom_fields_definitions tool returns all custom field metadata (admin) or built‑in fields (non‑admin).
Q: Can I run the MCP server in a container?
A: The MCP server is a Python package; you can containerize it like any Python app by installing the package and executing the provided entry point.
Q: What if I need lineage but the feature is in beta?
A: Enable the beta lineage feature on your Alation instance via support, then activate the lineage tool in the SDK.
The Alation AI Agent SDK enables AI agents to access and leverage metadata from the Alation Data Catalog.
This SDK empowers AI agents to query, search, and update catalog metadata, supporting tasks such as asset curation and data analysis.
The project is organized into multiple components:
Core SDK (alation-ai-agent-sdk): The core SDK provides the foundation for interacting with the Alation API. It handles authentication, request formatting, and response parsing.
LangChain Integration (alation-ai-agent-langchain): This component integrates the SDK with the LangChain framework, enabling the creation of sophisticated AI agents that can reason about your data catalog.
Learn more about the LangChain Integration
MCP Integration (alation-ai-agent-mcp): The MCP integration provides an MCP-compatible server that exposes Alation's context capabilities to any MCP client. It supports both traditional STDIO mode for direct MCP client connections and HTTP mode for web applications and API integrations.
Learn more about the MCP Integration
# Install uv
pip install uv
# Install the core SDK
uv pip install alation-ai-agent-sdk==1.0.0rc1
# Install LangChain integration
uv pip install alation-ai-agent-langchain==1.0.0rc1
# Install the MCP integration
uv pip install alation-ai-agent-mcp==1.0.0rc1
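A quick post-install check can confirm the distributions are in place. This is only a minimal sketch; it assumes nothing beyond the package names used in the pip commands above.

# Verify the installed distributions (a minimal sanity check; assumes only
# the package names from the pip commands above)
from importlib.metadata import version

for pkg in ("alation-ai-agent-sdk", "alation-ai-agent-langchain", "alation-ai-agent-mcp"):
    print(pkg, version(pkg))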
The library needs to be configured with your Alation instance credentials. You should use ServiceAccountAuthParams.
from alation_ai_agent_sdk import AlationAPI, ServiceAccountAuthParams

# Initialize the SDK with Service Account Authentication
auth_params = ServiceAccountAuthParams(
    client_id="your_client_id",
    client_secret="your_client_secret"
)

alation_api = AlationAPI(
    base_url="https://your-alation-instance.com",
    auth_method="service_account",
    auth_params=auth_params
)
If you cannot obtain service account credentials (creating them requires admin access), see the User Account Authentication Guide for instructions.
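With the client configured, a context lookup might look like the following. This is a minimal sketch: the get_context name comes from the tool list above, but its exact parameters (a natural-language question plus an optional signature) are assumptions, so consult the SDK reference for the real signature.

# Hedged sketch: get_context and its parameters are assumptions based on the
# tool names in this document, not a verbatim API reference.
response = alation_api.get_context(
    question="Which tables describe customer churn?",
    signature=None,  # optionally scope which objects and fields are returned
)
print(response)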
We're excited to announce that version 1.0.0rc1 of the Alation AI Agent SDK is now available.
IMPORTANT: In a breaking change, user_account is no longer supported as an authorization mode. We recommend migrating to the service_account or bearer_token modes.
The new major version comes with several notable changes that should make the transition worth it.
The Alation Agent Studio gives you first-class support for creating and leveraging the agents your business needs. Whether you're improving catalog curation or building data-centric query agents, the Agent Studio makes it easy to create agents, hone them, and deploy them across your enterprise. It includes a number of expert tools that are ready to be used or composed together as building blocks for more complex scenarios. And any precision agents you build are available within the SDK or MCP server as tools (see custom_agent).
We've heard from a number of customers who want the flexibility of MCP servers without the responsibility of having to install or upgrade the SDK. With our remote MCP server, you don't have to do any of that. After a one-time, MCP-focused authorization setup, it can be as simple as adding a remote MCP server to your favorite MCP client: https://<your_instance>/ai/mcp
Note: MCP clients and platforms are rapidly evolving, and not all of them handle authorization flows or path parameters the same way. If you're running into blockers, please file an issue so we can investigate and come up with a plan. We do not support dynamic client registration, so please use an MCP client that allows you to pass in a client_id and client_secret.
One problem the remote MCP server solves is listing tools dynamically. This dynamic resolution does a lot of work for us: for instance, it can filter out tools the current user cannot use, or list brand-new tools the SDK doesn't even know about.
And since the tools are resolved lazily instead of statically, the API contracts for those tools can also be dynamic. This avoids client-server version mismatches which could otherwise break static integrations.
We will continue to support the SDK and issue new versions regularly, but if you're after a less brittle, more robust integration, you should consider integrating directly with the remote MCP server as a starting point.
In the beginning of the Agent SDK we had only one tool: Alation Context. It offered a powerful way to dynamically select the right objects and their properties to best address a particular question. Its powerful signature parameter made it suitable even for cases without a user question (Bulk Retrieval). At the same time, we saw a fair bit of friction with LLM-generated signature parameters being invalid or just outright wrong. And a surprising amount of usage involved no signature at all, which frequently resulted in poor results.
We've sought to address these issues by moving from a collection of these tools (alation_context, bulk_retrieval) into an agent that performs a series of checks and heuristics to dynamically create a signature when needed to take advantage of your custom fields. That is our new catalog_search_context_agent.
This should translate into fewer instructions needed to convince these tools to play nicely with each other, while also increasing the accuracy of calls.
All tools now support a streaming option. Primarily, this benefits our local MCP server in HTTP mode. If your MCP client supports streaming, you should now see some of the internal processing of tools and agents, giving you more transparency into what is happening under the hood.
By default the SDK has streaming disabled, but it can be enabled if you have a use case for it. To enable it, pass an sdk_options=AgentSDKOptions(enable_streaming=True) argument to the AlationAIAgentSDK constructor. When streaming, you'll need to loop over the result or yield from it to correctly handle the underlying generator.
Most of our tools and agents accept a chat_id parameter when invoked. Including it will associate that tool call with any prior calls referencing the same chat_id. Any chat_id-compatible tool will include a chat_id in its response.
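Here is a minimal sketch tying streaming and chat_id together. AlationAIAgentSDK and AgentSDKOptions are named in this document, but the import path, the remaining constructor arguments (mirroring the earlier configuration example), and the run_tool call are assumptions for illustration only.

# Hedged sketch: enable streaming and carry a chat_id across calls.
# The import path, constructor arguments, and run_tool(...) are assumptions.
from alation_ai_agent_sdk import AlationAIAgentSDK, AgentSDKOptions

sdk = AlationAIAgentSDK(
    base_url="https://your-alation-instance.com",
    auth_method="service_account",
    auth_params=auth_params,  # as built in the configuration example above
    sdk_options=AgentSDKOptions(enable_streaming=True),
)

chat_id = None  # first call starts a new conversation
for chunk in sdk.run_tool("alation_context", question="Top churn tables?", chat_id=chat_id):
    print(chunk)  # streamed intermediate output
    chat_id = getattr(chunk, "chat_id", chat_id)  # reuse on follow-up calls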
Direct usage examples for the Alation AI Agent SDK show how to enable agentic experiences with the Alation Data Catalog and how to harness the SDK to build complex agents and workflows.
The number of published agent frameworks and toolkits appears to be increasing every day. If you don't see the framework or toolkit you're using here, it's still possible to adapt alation-ai-agent-sdk to your needs; it may be as simple as writing a wrapping function to which a decorator is applied, as in the sketch below.
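As a minimal sketch of that pattern: the as_tool decorator below is a stand-in for whatever registration mechanism your framework provides, and the get_context call reuses the assumed signature from the configuration section.

# Hedged sketch: wrap an SDK call in a plain function so a framework-specific
# decorator can register it as a tool. as_tool is a placeholder decorator.
def as_tool(fn):
    """Stand-in for a framework's tool-registration decorator."""
    return fn

@as_tool
def alation_context(question: str) -> str:
    """Expose Alation catalog context as a tool the framework can call."""
    # get_context and its parameters are assumptions based on this document.
    response = alation_api.get_context(question=question)
    return str(response)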
While we want to reach as many developers as possible and make adoption as convenient as possible, we anticipate a long-tail distribution of toolkits and won't be able to write adapters for every case. If you'd like support for a specific toolkit, please create an issue to discuss.
Explore related MCPs that share similar capabilities and solve comparable challenges
by modelcontextprotocol
An MCP server implementation that provides a tool for dynamic and reflective problem-solving through a structured thinking process.
by danny-avila
Provides a self‑hosted ChatGPT‑style interface supporting numerous AI models, agents, code interpreter, image generation, multimodal interactions, and secure multi‑user authentication.
by block
Automates engineering tasks on local machines, executing code, building projects, debugging, orchestrating workflows, and interacting with external APIs using any LLM.
by RooCodeInc
Provides an autonomous AI coding partner inside the editor that can understand natural language, manipulate files, run commands, browse the web, and be customized via modes and instructions.
by pydantic
A Python framework that enables seamless integration of Pydantic validation with large language models, providing type‑safe agent construction, dependency injection, and structured output handling.
by mcp-use
A Python SDK that simplifies interaction with MCP servers and enables developers to create custom agents with tool‑calling capabilities.
by lastmile-ai
Build effective agents using Model Context Protocol and simple, composable workflow patterns.
by Klavis-AI
Provides production‑ready MCP servers and a hosted service for integrating AI applications with over 50 third‑party services via standardized APIs, OAuth, and easy Docker or hosted deployment.
by nanbingxyz
A cross‑platform desktop AI assistant that connects to major LLM providers, supports a local knowledge base, and enables tool integration via MCP servers.