by MatiasVara
Provides an experimental MCP server that connects libvirt virtual machine management with AI models via Ollama, allowing AI-driven control and automation of VMs.
Libvirt MCP is an experimental server that implements the Model Context Protocol (MCP) to bridge libvirt, the virtualization management library, with large language models hosted on Ollama. By doing so, it enables AI‑assisted operations such as creating, configuring, and monitoring virtual machines through natural‑language prompts.
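To make this concrete, here is a minimal sketch of what a libvirt-backed MCP tool can look like. It assumes the FastMCP helper from the Python mcp package and the libvirt-python bindings; the tool name is illustrative rather than taken from this server's actual code:

```python
# Sketch of an MCP tool exposing libvirt to a model (illustrative only;
# the real libvirt-mcp server may define different tools).
import libvirt
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("libvirt-mcp")

@mcp.tool()
def list_domains() -> list[str]:
    """Return the names of all virtual machines defined on this host."""
    conn = libvirt.open("qemu:///system")
    try:
        return [dom.name() for dom in conn.listAllDomains()]
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio to an MCP client
```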
Q: Do I need an internet connection?
A: Only for the initial Ollama installation and model download. After the model is cached locally, the server works offline.
Q: Which operating systems are supported?
A: The instructions target Fedora/RHEL-based systems (they use dnf). The setup can be adapted to other Linux distributions by installing equivalent packages.
Q: Can I use a different LLM provider?
A: The current run.sh is hard-coded for Ollama, but the MCP protocol is provider-agnostic; swapping the provider would require custom scripting.
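As an illustration, if run.sh wraps an mcp-cli invocation (an assumption; inspect the script itself), switching providers could be a small change along these lines, with the model name purely illustrative:

```sh
# Hypothetical variant of run.sh pointing at OpenAI instead of Ollama.
# Flags assume mcp-cli's chat mode; server and model names are examples.
export OPENAI_API_KEY=your-key-here
mcp-cli chat --server libvirt --provider openai --model gpt-4o-mini
```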
Q: How do I update the model?
A: Use ollama pull <model-name> to fetch a newer version, then restart run.sh.
Q: Is this production-ready?
A: It is experimental and intended for testing and prototyping rather than production deployments.
This is an experimental MCP server for libvirt. The following steps explain how to use it with mcp-cli and Ollama. First, install mcp-cli:
git clone https://github.com/chrishayuk/mcp-cli
cd mcp-cli
pip3.11 install -e ".[cli,dev]"
Then, install ollama:
curl -fsSL https://ollama.com/install.sh | sh
ollama serve >/dev/null 2>&1 &
ollama pull granite3.2:8b-instruct-q8_0
You also need uv:
pip install uv
You need the following development packages so the libvirt Python bindings can build:
dnf install -y libvirt-devel python3-devel
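Before wiring up the server, it can help to verify that libvirt itself is reachable (a sanity check, not part of the original steps; it assumes the libvirt client tools are installed):

```sh
# List all defined VMs on the local system hypervisor; if this fails,
# fix libvirt permissions before troubleshooting the MCP server.
virsh -c qemu:///system list --all
```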
Then, in the libvirt-mcp directory, first install the dependencies by running:
uv sync
Then, edit server_config.json and set the correct path to the libvirt-mcp server.
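For reference, mcp-cli reads a server_config.json with an mcpServers map; a sketch of what the libvirt entry might look like follows. The server name, command, arguments, and path are assumptions, so adapt them to your checkout:

```json
{
  "mcpServers": {
    "libvirt": {
      "command": "uv",
      "args": ["--directory", "/path/to/libvirt-mcp", "run", "main.py"]
    }
  }
}
```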
Finally, execute run.sh, which uses ollama as the provider and granite as the model.
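A plausible equivalent of that launch, assuming run.sh wraps mcp-cli's chat mode (the server name matches the hypothetical config entry above; check the script for the real flags):

```sh
# Hypothetical equivalent of run.sh: chat with the libvirt MCP server
# through the local Ollama instance and the Granite model pulled earlier.
mcp-cli chat --server libvirt --provider ollama --model granite3.2:8b-instruct-q8_0
```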
For debugging, you can install the mcp package (mcp dev relies on the Node-based MCP Inspector, which is why npm is needed):
dnf install -y npm
pip install mcp
And then, run:
mcp dev setup.py
Explore related MCPs that share similar capabilities and solve comparable challenges.
by zed-industries
A high‑performance, multiplayer code editor designed for speed and collaboration.
by modelcontextprotocol
Model Context Protocol Servers
by modelcontextprotocol
A Model Context Protocol server for Git repository interaction and automation.
by modelcontextprotocol
A Model Context Protocol server that provides time and timezone conversion capabilities.
by cline
An autonomous coding assistant that can create and edit files, execute terminal commands, and interact with a browser directly from your IDE, operating step‑by‑step with explicit user permission.
by continuedev
Enables faster shipping of code by integrating continuous AI agents across IDEs, terminals, and CI pipelines, offering chat, edit, autocomplete, and customizable agent workflows.
by upstash
Provides up-to-date, version‑specific library documentation and code examples directly inside LLM prompts, eliminating outdated information and hallucinated APIs.
by github
Connects AI tools directly to GitHub, enabling natural‑language interactions for repository browsing, issue and pull‑request management, CI/CD monitoring, code‑security analysis, and team collaboration.
by daytonaio
Provides a secure, elastic infrastructure that creates isolated sandboxes for running AI‑generated code with sub‑90 ms startup, unlimited persistence, and OCI/Docker compatibility.