by ognis1205
Provides Unity Catalog Functions as MCP tools, enabling language models to list, retrieve, create, and delete catalog functions directly from a running server.
A Model Context Protocol (MCP) server that wraps Unity Catalog function management APIs, exposing them as tools that can be called by LLMs or other agents. It supports listing, fetching, creating, and deleting functions within a specified catalog and schema.
Requires `uv`. Run the server via `uv run mcp-server-unitycatalog`. Configuration can be supplied through CLI flags, environment variables, or a `.env` file:

- `--uc_server` / `UC_SERVER` – Unity Catalog base URL (required)
- `--uc_catalog` / `UC_CATALOG` – Catalog name (required)
- `--uc_schema` / `UC_SCHEMA` – Schema name (required)
- `--uc_token` / `UC_TOKEN` – Access token (optional)
- `--uc_verbosity` / `UC_VERBOSITY` – Logging level (optional, default `warn`)
- `--uc_log_directory` / `UC_LOG_DIRECTORY` – Log storage path (optional)

Run with `uv`:
```shell
uv run mcp-server-unitycatalog \
  --uc_server https://your-unity-catalog.com \
  --uc_catalog my_catalog \
  --uc_schema my_schema
```
Or via Docker:
```shell
docker run --rm -i mcp/unitycatalog \
  --uc_server https://your-unity-catalog.com \
  --uc_catalog my_catalog \
  --uc_schema my_schema
```
Available tools: `uc_list_functions`, `uc_get_function`, `uc_create_function`, and `uc_delete_function`. A Docker image (`mcp/unitycatalog`) is provided for easy deployment. Dynamic catalog and schema switching is planned via `use_catalog`/`use_schema` methods.

Q: Do I need to install any Python packages?
A: No. The server runs directly with `uv`; it resolves dependencies on the fly.
Q: How do I provide the access token?
A: Use the `--uc_token` flag, set the `UC_TOKEN` environment variable, or place it in a `.env` file.
Q: Can I change the catalog or schema after the server starts?
A: Currently you must specify them at startup, but upcoming `use_catalog` and `use_schema` methods will allow dynamic switching.
Q: Is there a way to view logs?
A: Logs are written to the directory defined by `--uc_log_directory` (default: `.mcp_server_unitycatalog`).
A Model Context Protocol server for Unity Catalog. This server provides Unity Catalog Functions as MCP tools.
You can use all Unity Catalog Functions registered in Unity Catalog alongside the following predefined Unity Catalog AI tools:
- `uc_list_functions`: Lists functions within the configured catalog and schema.
- `uc_get_function`: Retrieves a function.
  - `name` (string): The name of the function (not fully-qualified).
- `uc_create_function`: Creates a function.
  - `name` (string): The name of the function (not fully-qualified).
  - `script` (string): The Python script including the function to be registered.
- `uc_delete_function`: Deletes a function.
  - `name` (string): The name of the function (not fully-qualified).

When using `uv`, no specific installation is needed. We will use `uvx` to directly run mcp-server-unitycatalog.
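As a sketch of what the `script` argument to `uc_create_function` might contain — the function name and body below are hypothetical, not part of this server:

```python
# Hypothetical contents of the `script` string passed to uc_create_function.
# The server registers the Python function defined here as a Unity Catalog
# function in the configured catalog and schema.
def add_numbers(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b
```

Unity Catalog's Python-function tooling typically derives the catalog function's signature from the type hints and docstring, so annotating parameters as above is advisable.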
These values can also be set via CLI options or environment variables (including a `.env` file). Required arguments are the Unity Catalog server, catalog, and schema; the access token and verbosity level are optional. Run `uv run mcp-server-unitycatalog --help` for more detailed configuration options.
| Argument | Environment Variable | Description | Required/Optional |
|---|---|---|---|
| `-u`, `--uc_server` | `UC_SERVER` | The base URL of the Unity Catalog server. | Required |
| `-c`, `--uc_catalog` | `UC_CATALOG` | The name of the Unity Catalog catalog. | Required |
| `-s`, `--uc_schema` | `UC_SCHEMA` | The name of the schema within a Unity Catalog catalog. | Required |
| `-t`, `--uc_token` | `UC_TOKEN` | The access token used to authorize API requests to the Unity Catalog server. | Optional |
| `-v`, `--uc_verbosity` | `UC_VERBOSITY` | The verbosity level for logging. Default: `warn`. | Optional |
| `-l`, `--uc_log_directory` | `UC_LOG_DIRECTORY` | The directory where log files will be stored. Default: `.mcp_server_unitycatalog`. | Optional |
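For example, a `.env` file covering these settings might look like this — all values below are placeholders:

```shell
# .env — placeholder values; required settings first
UC_SERVER=https://your-unity-catalog.com
UC_CATALOG=my_catalog
UC_SCHEMA=my_schema
# Optional settings
UC_TOKEN=your-access-token
UC_VERBOSITY=debug
UC_LOG_DIRECTORY=.mcp_server_unitycatalog
```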
Add this to your `claude_desktop_config.json` (or `cline_mcp_settings.json`):
```json
{
  "mcpServers": {
    "unitycatalog": {
      "command": "uv",
      "args": [
        "--directory",
        "/<path to your local git repository>/mcp-server-unitycatalog",
        "run",
        "mcp-server-unitycatalog",
        "--uc_server",
        "<your unity catalog url>",
        "--uc_catalog",
        "<your catalog name>",
        "--uc_schema",
        "<your schema name>"
      ]
    }
  }
}
```
Or, using Docker:

```json
{
  "mcpServers": {
    "unitycatalog": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "mcp/unitycatalog",
        "--uc_server",
        "<your unity catalog url>",
        "--uc_catalog",
        "<your catalog name>",
        "--uc_schema",
        "<your schema name>"
      ]
    }
  }
}
```
Build the Docker image:

```shell
docker build -t mcp/unitycatalog .
```
Unity Catalog AI functions covered (or planned):

- `list_functions`
- `get_function`
- `create_python_function`
- `execute_function`
- `delete_function`
- `use_xxx` methods

In the current implementation, `catalog` and `schema` need to be defined when starting the server. However, they will be implemented as `use_catalog` and `use_schema` functions, dynamically updating the list of available functions when a `use_xxx` function is executed.

This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.