by iconben
Generate images from text prompts using the Z‑Image‑Turbo model via a command‑line interface, a modern web UI, or an MCP server, with automatic GPU acceleration for NVIDIA CUDA, Apple Silicon MPS, and AMD ROCm.
Z Image Studio provides a unified toolbox for local text‑to‑image generation powered by the Tongyi‑MAI/Z‑Image‑Turbo model and its quantized variants. It bundles three access methods—CLI, web UI, and an MCP server—so developers, artists, and AI agents can invoke image generation in the way that best fits their workflow.
Install with uv tool install git+https://github.com/iconben/z-image-studio.git or pip install z-image-studio, then generate an image:

zimg generate "A futuristic city with neon lights" --width 1920 --height 1080 --steps 20 --precision q8
Options include custom output path, seed, LoRA files, and disabling history. Start the web UI with:

zimg serve # starts on http://localhost:8000
Open the URL in a browser to access the responsive interface, switch themes, browse history, and manage LoRAs. Run the MCP server with zimg-mcp (or zimg mcp); when the web server is running, /mcp (streamable) and /mcp-sse (SSE) become available for remote agents. Available tools are generate, list_models, and list_history. Settings are stored in ~/.z-image-studio/config.json. You can script zimg generate with different seeds or LoRAs for dataset creation.

Q: Which GPU does Z Image Studio support?
A: It automatically detects NVIDIA CUDA, Apple Silicon MPS, and AMD ROCm. If none are available, it runs on CPU.
Q: How do I use LoRA models?
A: Upload LoRA files through the web UI or pass --lora <file[:strength]> multiple times on the CLI (max 4). Strength values are clamped between -1.0 and 2.0.
Q: What if I get out-of-memory errors?
A: The app enables attention slicing automatically for low-memory systems and adjusts image dimensions to multiples of 16.
Q: Can I run the MCP server remotely?
A: Yes. Start the web server (zimg serve) and use the /mcp (streamable HTTP) or /mcp-sse endpoints. Clients should try /mcp first and fall back to SSE if needed.
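The try-then-fall-back behavior a client might use can be sketched as follows. This is an illustration only; the `probe` callable is hypothetical (a real client would attempt an MCP initialize request against the endpoint):

```python
def pick_endpoint(base_url: str, probe) -> str:
    """Return the preferred MCP endpoint for a server at base_url.

    Tries the Streamable HTTP endpoint (/mcp) first; if the probe
    reports it unavailable, falls back to the legacy SSE endpoint.
    """
    streamable = f"{base_url}/mcp"
    if probe(streamable):
        return streamable
    return f"{base_url}/mcp-sse"

# Example with a stub probe that rejects the streamable endpoint:
print(pick_endpoint("http://localhost:8000", lambda url: False))
```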
Q: How to enable torch.compile on Python 3.12?
A: Set the environment variable ZIMAGE_ENABLE_TORCH_COMPILE=1 or add it to ~/.z-image-studio/config.json. This is experimental and may cause errors.
Q: Where are generated images stored?
A: By default, under ~/.local/share/z-image-studio/outputs (Linux), ~/Library/Application Support/z-image-studio/outputs (macOS), or %LOCALAPPDATA%\z-image-studio\outputs (Windows). Paths can be overridden with Z_IMAGE_STUDIO_OUTPUT_DIR.
A CLI, a web UI, and an MCP server for the Z-Image-Turbo text-to-image generation model (Tongyi-MAI/Z-Image-Turbo and its variants).
This tool is designed to run efficiently on local machines for Windows/Mac/Linux users. It features specific optimizations for NVIDIA (CUDA), Apple Silicon (MPS), and AMD on Linux (ROCm), falling back to CPU if no compatible GPU is detected.

Hybrid Interfaces:
- CLI, web UI, and MCP server access to the Tongyi-MAI/Z-Image-Turbo model and quantized variants via diffusers.
- Multiple --lora entries (max 4) with optional strengths.
- MCP server (zimg mcp, zimg-mcp) for local agents, SSE available at /mcp-sse, and MCP 2025-03-26 Streamable HTTP transport at /mcp. Clients should try Streamable HTTP (/mcp) first for optimal performance, falling back to SSE (/mcp-sse) if needed.

Requirements:
- uv (recommended for dependency management)
- Python 3.12+

Note: torch.compile is disabled by default for Python 3.12+ due to known compatibility issues with the Z-Image model architecture. If you want to experiment with torch.compile on Python 3.12+, set ZIMAGE_ENABLE_TORCH_COMPILE=1 via environment variable or in ~/.z-image-studio/config.json (experimental, may cause errors).
Note: AMD GPU support currently requires ROCm, which is only available for Linux PyTorch builds. Windows users with AMD GPUs will currently fall back to CPU.
For AMD GPUs on Linux, install a ROCm build of PyTorch (pip install torch --index-url https://download.pytorch.org/whl/rocm6.1 or similar). Ensure the PyTorch ROCm version matches your installed driver version. You can verify detection with zimg models; ROCm is used when torch.version.hip is detected. Some cards require setting HSA_OVERRIDE_GFX_VERSION (e.g., 10.3.0 for RDNA2, 11.0.0 for RDNA3). torch.compile is disabled by default on ROCm due to experimental support. You can force-enable it with ZIMAGE_ENABLE_TORCH_COMPILE=1 if your setup (Triton/ROCm version) supports it.

If you just want the zimg CLI to be available from anywhere, install it as a uv tool:
uv tool install git+https://github.com/iconben/z-image-studio.git
# or, if you have the repo cloned locally:
# git clone https://github.com/iconben/z-image-studio.git
# cd z-image-studio
# uv tool install .
After this, the zimg command is available globally:
zimg --help
To update z-image-studio:
uv tool upgrade z-image-studio
# or, if you have the repo cloned locally, pull the latest source code:
# git pull
For Windows users, a pre-built installer is available that bundles everything you need:
- Installer: Z-Image-Studio-Setup-x.x.x.exe
- Install location: C:\Program Files\Z-Image Studio
- Data location: %LOCALAPPDATA%\z-image-studio (contains database, LoRAs, and outputs)

Install Z-Image Studio via pip or uv:
pip install z-image-studio
# or
uv pip install z-image-studio
After installation, the zimg command is available globally:
zimg --help
git clone https://github.com/iconben/z-image-studio.git
cd z-image-studio
pip install -e .
# or
uv pip install -e .
After installation, you can use the zimg command directly from your terminal.
Generate images directly from the command line using the generate (or gen) subcommand.
# Basic generation
zimg generate "A futuristic city with neon lights"
# Using the alias 'gen'
zimg gen "A cute cat"
# Custom output path
zimg gen "A cute cat" --output "my_cat.png"
# High quality settings
zimg gen "Landscape view" --width 1920 --height 1080 --steps 20
# With a specific seed for reproducibility
zimg gen "A majestic dragon" --seed 12345
# Select model precision (full, q8, q4)
zimg gen "A futuristic city" --precision q8
# Skip writing to history DB
zimg gen "Quick scratch" --no-history
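For scripted dataset generation with varying seeds or LoRAs, the flags above compose straightforwardly. A minimal sketch of a command builder (a hypothetical helper, not part of the package; run the result with subprocess.run):

```python
def build_cmd(prompt, seed=None, precision="q8", loras=()):
    """Assemble a `zimg generate` argv list for use with subprocess.run."""
    cmd = ["zimg", "generate", prompt, "--precision", precision]
    if seed is not None:
        cmd += ["--seed", str(seed)]
    for lora in loras[:4]:  # the CLI accepts at most 4 --lora entries
        cmd += ["--lora", lora]
    return cmd

print(build_cmd("A cute cat", seed=42, loras=["style.safetensors:0.8"]))
```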
Launch the web interface to generate images interactively.
# Start server on default port (http://localhost:8000)
zimg serve
# Start on custom host/port
zimg serve --host 0.0.0.0 --port 9090
Once started, open your browser to the displayed URL.
Run Z-Image Studio as an MCP server:
# stdio transport (ideal for local agents/tools); also available as `zimg mcp`
zimg-mcp
# MCP transports are available when you run the web server:
zimg serve # Both Streamable HTTP (/mcp) and SSE (/mcp-sse) available
zimg serve --disable-mcp # Disable all MCP endpoints
Available tools: generate (prompt to image), list_models, and list_history. Logs are routed to stderr to keep MCP stdio clean.
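Routing logs to stderr matters because the MCP stdio transport reserves stdout for JSON-RPC protocol frames. The pattern can be sketched as follows (an illustration of the technique, not the project's actual logging setup):

```python
import logging
import sys

# stdout must carry only MCP protocol messages, so send all
# diagnostics to stderr instead.
handler = logging.StreamHandler(sys.stderr)
logging.basicConfig(level=logging.INFO, handlers=[handler])
logging.getLogger("zimg-example").info("model loaded")  # emitted on stderr
```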
If zimg-mcp fails to start, ensure dependencies are installed (uv sync) and that zimg-mcp is on PATH (installed via uv tool install . or run locally via uv run zimg-mcp).
In Claude Desktop (or any MCP-aware client), add a local MCP server entry like:
{
"mcpServers": {
"z-image-studio": {
"command": "zimg-mcp",
"args": [],
"env": {}
}
}
}
Adjust the command to a full path if zimg-mcp is not on PATH. If the agent cannot find the zimg-mcp command, you can also try setting the path in its environment configuration.
Different agents may have slightly different parameters; for example, Cline times out quickly if you do not explicitly set a timeout parameter. Here is an example for Cline:
{
"mcpServers": {
"z-image-studio": {
"command": "zimg-mcp",
"type": "stdio",
"args": [],
"env": {},
"disabled": false,
"autoApprove": [],
"timeout": 300
}
}
}
Detailed syntax may vary, please refer to the specific agent's documentation.
For clients that support remote MCP servers, configure the client with the Streamable HTTP MCP endpoint URL (and keep the server up by running zimg serve). Here is an example for Gemini CLI:
{
"mcpServers": {
"z-image-studio": {
"httpUrl": "http://localhost:8000/mcp"
}
}
}
Detailed syntax may vary, please refer to the specific agent's documentation.
For legacy SSE, run zimg serve and configure the client with the SSE endpoint URL. Here is an example for Cline:
{
"mcpServers": {
"z-image-studio": {
"url": "http://localhost:8000/mcp-sse/sse"
}
}
}
Detailed syntax may vary, please refer to the specific agent's documentation.
The agent will receive tools: generate, list_models, list_history.
The generate tool returns a consistent content array with three items in this order:
TextContent: Enhanced metadata including generation info, file details, and preview metadata
{
"message": "Image generated successfully",
"duration_seconds": 1.23,
"width": 1280,
"height": 720,
"precision": "q8",
"model_id": "z-image-turbo-q8",
"seed": 12345,
"filename": "image_12345.png",
"file_path": "/absolute/path/to/image_12345.png",
"access_note": "Access full image via ResourceLink.uri or this URL",
"preview": true,
"preview_size": 400,
"preview_mime": "image/png"
}
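A client receiving this payload can pull out the fields it needs with ordinary JSON parsing. A sketch (field names and values are taken from the example above):

```python
import json

payload = """{
  "message": "Image generated successfully",
  "duration_seconds": 1.23,
  "width": 1280,
  "height": 720,
  "precision": "q8",
  "model_id": "z-image-turbo-q8",
  "seed": 12345,
  "filename": "image_12345.png",
  "file_path": "/absolute/path/to/image_12345.png"
}"""

meta = json.loads(payload)
# Keep the seed so the same image can be regenerated later.
print(meta["seed"], meta["file_path"])  # 12345 /absolute/path/to/image_12345.png
```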
Depending on context, url and access_note point to the absolute image URL, or file_path and access_note point to the local file path.

ResourceLink: Main image file reference with context-appropriate URI
URI Building Priority (SSE/Streamable HTTP):
Example with Context parameter:
@mcp.tool()
async def generate_with_context(..., ctx: Context) -> ...:
    # Derive the public base URL from reverse-proxy headers,
    # falling back to http://localhost when none are present.
    request = ctx.request_context.request
    proto = request.headers.get('x-forwarded-proto', 'http')
    host = request.headers.get('x-forwarded-host', 'localhost')
    return ResourceLink(uri=f"{proto}://{host}/outputs/image.png", ...)
{
"type": "resource_link",
"name": "image_12345.png",
"uri": "https://example.com/outputs/image_12345.png",
"mimeType": "image/png"
}
ImageContent: Thumbnail preview (base64 PNG, max 400px)
{
"data": "base64-encoded-png-data",
"mimeType": "image/png"
}
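To save the thumbnail locally, a client simply base64-decodes the data field. A sketch using a placeholder payload rather than real PNG bytes:

```python
import base64

# Simulated ImageContent item; "data" would be the real PNG bytes.
content = {
    "data": base64.b64encode(b"\x89PNG-placeholder-bytes").decode("ascii"),
    "mimeType": "image/png",
}

# Decode back to raw bytes before writing them to a .png file.
raw = base64.b64decode(content["data"])
assert content["mimeType"] == "image/png"
```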
This structure ensures a consistent, predictable response: metadata first, then the main image reference, then an inline preview.
generate (alias: gen)

| Argument | Short | Type | Default | Description |
|---|---|---|---|---|
| prompt | | str | Required | The text prompt for image generation. |
| --output | -o | str | None | Custom output filename. Defaults to outputs/<prompt-slug>.png inside the data directory. |
| --steps | | int | 9 | Number of inference steps. Higher usually means better quality. |
| --width | -w | int | 1280 | Image width (automatically adjusted to be a multiple of 8). |
| --height | -H | int | 720 | Image height (automatically adjusted to be a multiple of 8). |
| --seed | | int | None | Random seed for reproducible generation. |
| --precision | | str | q8 | Model precision (full, q8, q4). q8 is the default and balanced, full is higher quality but slower, q4 is fastest and uses less memory. |
| --lora | | str | [] | LoRA filename or path, optionally with strength (name.safetensors:0.8). Can be passed multiple times (max 4); strength is clamped to -1.0..2.0. |
| --no-history | | bool | False | Do not record this generation in the history database. |
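The --lora syntax and clamping behavior described in the table can be sketched as a small parser. This is a hypothetical helper mirroring the documented name[:strength] form and the -1.0..2.0 clamp, not the project's actual code:

```python
def parse_lora(spec: str) -> tuple[str, float]:
    """Parse 'name.safetensors[:strength]'; clamp strength to -1.0..2.0."""
    if ":" in spec:
        name, _, raw = spec.rpartition(":")
        try:
            strength = float(raw)
        except ValueError:
            # No numeric suffix: treat the whole spec as a filename.
            name, strength = spec, 1.0
    else:
        name, strength = spec, 1.0
    return name, max(-1.0, min(2.0, strength))

print(parse_lora("style.safetensors:0.8"))  # ('style.safetensors', 0.8)
print(parse_lora("style.safetensors:5"))    # strength clamped to 2.0
```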
serve

| Argument | Type | Default | Description |
|---|---|---|---|
| --host | str | 0.0.0.0 | Host to bind the server to. |
| --port | int | 8000 | Port to bind the server to. |
| --reload | bool | False | Enable auto-reload (for development). |
| --timeout-graceful-shutdown | int | 5 | Seconds to wait for graceful shutdown before forcing exit. |
| --disable-mcp | bool | False | Disable all MCP endpoints (/mcp and /mcp-sse). |
models

| Argument | Short | Type | Default | Description |
|---|---|---|---|---|
| (None) | | | | Lists available image generation models, highlights the one recommended for your system's hardware, and displays their corresponding Hugging Face model IDs. |
mcp

| Argument | Type | Default | Description |
|---|---|---|---|
| (none) | | | Stdio-only MCP server (for agents). Use zimg-mcp or zimg mcp. |
By default, Z-Image Studio uses the following directories:
- Data directory: ~/.local/share/z-image-studio (Linux), ~/Library/Application Support/z-image-studio (macOS), or %LOCALAPPDATA%\z-image-studio (Windows).
- Output directory: <Data Directory>/outputs by default.
- Config file: ~/.z-image-studio/config.json (created on first run after migration).

The data directory can be overridden with Z_IMAGE_STUDIO_DATA_DIR, and the output directory with Z_IMAGE_STUDIO_OUTPUT_DIR.

Directory structure inside Data Directory by default:
- zimage.db: SQLite database
- loras/: LoRA models
- outputs/: Generated image files

On first run without an existing config file, the app migrates legacy data by moving:
outputs/, loras/, and zimage.db from the current working directory (old layout) into the new locations.
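The resolution order described above (environment override first, then the platform default) can be sketched as follows. This is an illustration of the documented precedence, not the project's actual code:

```python
import sys
from pathlib import Path

def data_dir(env: dict) -> Path:
    """Resolve the data directory: env override wins, else platform default."""
    override = env.get("Z_IMAGE_STUDIO_DATA_DIR")
    if override:
        return Path(override)
    if sys.platform == "darwin":
        return Path.home() / "Library" / "Application Support" / "z-image-studio"
    if sys.platform == "win32":
        return Path(env.get("LOCALAPPDATA", "")) / "z-image-studio"
    return Path.home() / ".local" / "share" / "z-image-studio"

# With the override set, the default platform path is ignored.
print(data_dir({"Z_IMAGE_STUDIO_DATA_DIR": "/tmp/zimg-dev"}))
```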
(Screenshot 1: Two column layout with History browser collapsed)
(Screenshot 2: Three column layout with History browser pinned)
(Screenshot 3: Generated Image zoomed to fit the screen)
git clone https://github.com/iconben/z-image-studio.git
cd z-image-studio
Run CLI:
uv run src/zimage/cli.py generate "A prompt"
Run Server:
uv run src/zimage/cli.py serve --reload
Run tests:
uv run pytest
First install it:
```bash
uv pip install -e .
```
After this, the `zimg` command is available **inside this virtual environment**. Then use it in either of two ways:
Using `uv` (recommended):
```bash
uv run zimg generate "A prompt"
```
or in the more traditional way:
```bash
source .venv/bin/activate # Under Windows: .venv\Scripts\activate
zimg serve
```
If you do not want your development data to mix with your production data, you can set the environment variable Z_IMAGE_STUDIO_DATA_DIR to point the data folder somewhere else for development. You can also set Z_IMAGE_STUDIO_OUTPUT_DIR to move the output folder to a separate location.
| Variable | Description |
|---|---|
| ZIMAGE_ENABLE_TORCH_COMPILE | Force-enable torch.compile optimization (experimental). Disabled by default for Python 3.12+ due to known compatibility issues. Set to 1 via environment variable or config file (~/.z-image-studio/config.json) to enable at your own risk. |
| Z_IMAGE_STUDIO_DATA_DIR | Override the default data directory location. |
| Z_IMAGE_STUDIO_OUTPUT_DIR | Override the default output directory location. |
Generation runs with guidance_scale=0.0 as required by the Turbo model distillation process.

For detailed architecture and development guidelines, see docs/architecture.md.