by wangsqly0407
Provides API interfaces to query OpenStack compute, storage, network, and image resources via the MCP protocol, supporting real-time status, flexible filtering, and multiple detail levels.
Offers a lightweight asynchronous HTTP service that translates OpenStack SDK calls into MCP‑compatible responses, enabling large language models or other MCP clients to retrieve up‑to‑date cloud resource information.
pip install openstack-mcp-server
openstack-mcp-server \
--port 8000 \
--log-level INFO \
--auth-url 'http://<OpenStack-API-Endpoint>:5000/v3' \
--username '<OpenStack-Admin-User>' \
--password '<OpenStack-Admin-Password>'
The MCP endpoint is then available at http://localhost:8000/openstack.
Invoke the tools (e.g. get_instances) from an AI client or any MCP-compatible tool, passing JSON arguments such as filter, limit, and detail_level. Tools are registered in list_tools and dispatched in call_tool in the server code.

Q: Do I need Docker to run the server?
A: No. The server runs directly with Python after installing the package. Docker is only shown as an optional test harness.
Q: Can I get responses in plain JSON instead of SSE?
A: Yes. Use the --json-response flag to disable streaming.
Q: Which OpenStack version is supported?
A: Any version that the official OpenStack SDK (openstacksdk) can communicate with, typically Pike and later.
Q: How do I add a new resource query tool?
A: Implement the acquisition function in server.py, register its name in list_tools, and handle its logic in call_tool (see the sketch after this FAQ).
Q: Is there an authentication mechanism for the MCP endpoint?
A: Authentication is handled by the underlying OpenStack SDK using the credentials supplied at startup; the MCP endpoint itself does not impose additional auth.
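For orientation, the answer above maps onto the low-level MCP Python SDK pattern. The sketch below is a guess at what registering a new tool could look like; the tool name get_volumes, its input schema, and the use of openstack.connect() are illustrative assumptions, not code from this project.

# Hypothetical sketch: adding a "get_volumes" tool, assuming the server is
# built on the low-level MCP Python SDK (pip install mcp) and openstacksdk.
# Names and schemas are illustrative; the real server.py may differ.
import json

import openstack
from mcp.server import Server
import mcp.types as types

server = Server("openstack-mcp-server")

@server.list_tools()
async def list_tools() -> list[types.Tool]:
    # Advertise tools so MCP clients can discover them.
    return [
        types.Tool(
            name="get_volumes",  # hypothetical tool name
            description="Query OpenStack block storage volumes",
            inputSchema={
                "type": "object",
                "properties": {
                    "filter": {"type": "string"},
                    "limit": {"type": "integer", "default": 100},
                },
            },
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    # Dispatch on the tool name and serialize the SDK result for the client.
    if name == "get_volumes":
        conn = openstack.connect()  # credentials from env/clouds.yaml in this sketch
        volumes = [
            {"id": v.id, "name": v.name, "status": v.status}
            for v in conn.block_storage.volumes()
        ][: arguments.get("limit", 100)]
        return [types.TextContent(type="text", text=json.dumps(volumes))]
    raise ValueError(f"Unknown tool: {name}")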
An OpenStack resource query service based on MCP (Model Context Protocol), providing API interfaces to query compute, storage, network, image, and other resources from the OpenStack cloud platform.
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ │ │ │ │ │
│ AI Client │───▶│ MCP Server │───▶│ OpenStack │
│ (LLM) │◀───│ (Server) │◀───│ API │
│ │ │ │ │ │
└─────────────┘ └─────────────┘ └─────────────┘
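Conceptually, the middle box is a thin translation layer: it receives an MCP tool call, runs the matching openstacksdk query, and returns the serialized result. A minimal illustration of that idea follows; get_instances_payload is a hypothetical helper (not project code), and the filtering mirrors the filter/limit parameters documented below.

# Illustrative only: the core translation idea behind the server.
import json

import openstack

def get_instances_payload(name_filter: str | None = None, limit: int = 100) -> str:
    # openstack.connect() reads credentials from env vars or clouds.yaml in
    # this sketch; the real server wires in --auth-url/--username/--password.
    conn = openstack.connect()
    servers = []
    for s in conn.compute.servers():
        # Corresponds to the tool's "filter" argument (name or ID match).
        if name_filter and name_filter not in (s.name or "") and name_filter != s.id:
            continue
        servers.append({"id": s.id, "name": s.name, "status": s.status})
        if len(servers) >= limit:
            break
    return json.dumps(servers)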
pip install openstack-mcp-server
openstack-mcp-server --port 8000 --log-level INFO --auth-url 'http://<OpenStack-API-Endpoint>:5000/v3' --username '<OpenStack-Admin-User>' --password '<OpenStack-Admin-Password>'
After starting, the MCP interface will be available at http://localhost:8000/openstack.
--port: Service listening port, default is 8000
--log-level: Log level; options are DEBUG, INFO, WARNING, ERROR, CRITICAL; default is INFO
--json-response: Use JSON responses instead of an SSE stream, default is False

Through the MCP protocol, you can use the following tools to query OpenStack resources:
{
"name": "get_instances",
"arguments": {
"filter": "web-server",
"limit": 10,
"detail_level": "detailed"
}
}
Parameter description:
filter: Filter condition, such as an instance name or ID (optional)
limit: Maximum number of results to return (optional, default 100)
detail_level: Level of detail in the returned information; options are basic, detailed, full (optional, default detailed)

# Clone repository
git clone https://github.com/wangshqly0407/openstack-mcp-server.git
cd openstack-mcp-server
# Create virtual environment
uv venv
# Activate virtual environment
source .venv/bin/activate
# Initialize runtime environment
uv sync
# Start streaming HTTP MCP server
uv run ./src/mcp_openstack_http/server.py --port 8000 --log-level INFO --auth-url 'http://<OpenStack-API-Endpoint>:5000/v3' --username '<OpenStack-Admin-User>' --password '<OpenStack-Admin-Password>'
# Method 1: Test using npx
npx -y @modelcontextprotocol/inspector uv run ./src/mcp_openstack_http/server.py --port 8000 --log-level INFO --auth-url 'http://<OpenStack-API-Endpoint>:5000/v3' --username '<OpenStack-Admin-User>' --password '<OpenStack-Admin-Password>'
# Method 2: Test using docker
docker run -it --rm -p 6274:6274 -p 6277:6277 -v $(pwd):/app -w /app node:18 npx -y @modelcontextprotocol/inspector uv run ./src/mcp_openstack_http/server.py --port 8000 --log-level INFO --auth-url 'http://<OpenStack-API-Endpoint>:5000/v3' --username '<OpenStack-Admin-User>' --password '<OpenStack-Admin-Password>'
Access: http://localhost:6274/
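Besides the Inspector, you can exercise the endpoint programmatically. The sketch below uses the official MCP Python SDK's streamable HTTP client; it assumes a recent mcp package and the endpoint URL configured above, so treat it as illustrative rather than project-supplied code.

# Hypothetical client sketch (pip install mcp); assumes the server above
# is running and serving MCP at http://localhost:8000/openstack.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    async with streamablehttp_client("http://localhost:8000/openstack") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()  # MCP handshake
            tools = await session.list_tools()
            print("tools:", [t.name for t in tools.tools])
            result = await session.call_tool(
                "get_instances",
                {"filter": "web-server", "limit": 10, "detail_level": "detailed"},
            )
            print(result.content)

asyncio.run(main())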
To extend the server, edit src/mcp_openstack_http/server.py: register the new tool in the list_tools method and implement its logic in the call_tool method (as sketched in the FAQ above).

License: Apache 2.0