by omergocmen
Provides video generation and status checking via the json2video API for seamless integration with LLMs, agents, and other MCP‑compatible clients.
The server enables programmatic creation of videos using json2video's powerful API. It accepts rich JSON definitions describing scenes, elements (text, image, video, audio, components, subtitles, etc.) and returns a project ID for asynchronous rendering.
npm install -g @omerrgocmen/json2video-mcp
or run it directly via npx.
Requires the JSON2VIDEO_API_KEY environment variable (or supply apiKey per request).
generate_video – send a JSON project definition and receive a project ID.
get_video_status – poll the status of a rendering job using the project ID.
create_template, get_template, list_templates – manage reusable video templates.
Q: Do I need to install Node.js locally?
A: Yes, the server runs on Node.js. Use npx for a quick start without a global install.
Q: How do I obtain the JSON2VIDEO_API_KEY?
A: Sign up at https://json2video.com and generate an API key from your account dashboard.
Q: What if I get a "client closed" error?
A: Run npm i @omerrgocmen/json2video-mcp to ensure the package is properly installed.
Q: Can I specify a custom resolution?
A: Set resolution to custom and provide width and height values in the project JSON.
Q: Is the server compatible with Windows?
A: Yes. Use cmd /c "set JSON2VIDEO_API_KEY=your_key && npx -y @omerrgocmen/json2video-mcp".
A Model Context Protocol (MCP) server implementation for programmatically generating videos using the json2video API. This server exposes powerful video generation and status-checking tools for use with LLMs, agents, or any MCP-compatible client.
env JSON2VIDEO_API_KEY=your_api_key_here npx -y @omerrgocmen/json2video-mcp
npm install -g @omerrgocmen/json2video-mcp
If you are on Windows and encounter issues, try:
cmd /c "set JSON2VIDEO_API_KEY=your_api_key_here && npx -y @omerrgocmen/json2video-mcp"
env JSON2VIDEO_API_KEY=your_api_key_here npx -y @omerrgocmen/json2video-mcp
{
  "mcpServers": {
    "json2video-mcp": {
      "command": "npx",
      "args": ["-y", "@omerrgocmen/json2video-mcp"],
      "env": {
        "JSON2VIDEO_API_KEY": "your_api_key_here"
      }
    }
  }
}
Replace your_api_key_here with your json2video API key. You can get an API key from json2video.com.
After adding, refresh the MCP server list to see the new tools. Your agent or LLM will automatically use json2video MCP when appropriate, or you can explicitly request it by describing your video generation needs.
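If you want to sanity-check the server outside of an agent, the following minimal sketch spawns it over stdio and lists its tools. It is an illustration only: it assumes the official @modelcontextprotocol/sdk TypeScript client (Client, StdioClientTransport, listTools), and the file name and client name are arbitrary.

// check-tools.ts (hypothetical file name) -- run with: npx tsx check-tools.ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Spawn the MCP server exactly as the config above does.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "@omerrgocmen/json2video-mcp"],
    // Forward the current environment plus the API key (placeholder value).
    env: { ...process.env, JSON2VIDEO_API_KEY: "your_api_key_here" } as Record<string, string>,
  });

  const client = new Client({ name: "json2video-check", version: "1.0.0" }, { capabilities: {} });
  await client.connect(transport);

  // Expect generate_video, get_video_status, create_template, get_template, list_templates.
  const { tools } = await client.listTools();
  console.log(tools.map((tool) => tool.name));

  await client.close();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});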
Add this to your mcp.json or similar config:
{
  "mcpServers": {
    "json2video-mcp": {
      "command": "npx",
      "args": ["-y", "@omerrgocmen/json2video-mcp"],
      "env": {
        "JSON2VIDEO_API_KEY": "your_api_key_here"
      }
    }
  }
}
Replace your_api_key_here with your actual json2video API key. This makes the generate_video and get_video_status tools available for use in your workflows.
JSON2VIDEO_API_KEY (required): Your json2video API key. Can be set as an environment variable or provided per request.
Note: If you encounter a "client closed" error, run the following command in your terminal:
npm i @omerrgocmen/json2video-mcp
Generate Video (generate_video)
Create a customizable video project with scenes and elements.
Description: Creates a video project using the json2video API. Each project can contain multiple scenes, and each scene can contain various elements such as text, images, video, audio, components, HTML, voice, audiogram, and subtitles. Video generation is asynchronous; use the returned project ID to check status. See https://json2video.com/docs/api/ for full schema and more examples.
Input Schema:
{
  "id": "string (optional, unique identifier for the movie)",
  "comment": "string (optional, project description)",
  "cache": true,
  "client_data": {},
  "draft": true,
  "quality": "high", // one of: low, medium, high
  "resolution": "custom", // one of: sd, hd, full-hd, squared, instagram-story, instagram-feed, twitter-landscape, twitter-portrait, custom
  "width": 1920, // required if resolution is custom
  "height": 1080, // required if resolution is custom
  "variables": {},
  "elements": [ /* global elements, see below for examples */ ],
  "scenes": [
    {
      "id": "string (optional, unique scene id)",
      "comment": "string (optional)",
      "background_color": "#000000",
      "cache": true,
      "condition": "string (optional)",
      "duration": -1,
      "variables": {},
      "elements": [ /* see element examples below */ ]
    }
  ],
  "apiKey": "string (optional)"
}
Element Types & Examples:
{
  "type": "text",
  "text": "Hello world",
  "duration": 5,
  "settings": { "font-size": "60px", "color": "#FF0000" }
}
{
  "type": "image",
  "src": "https://images.pexels.com/photos/1105666/pexels-photo-1105666.jpeg",
  "width": 1620,
  "height": 1080,
  "x": 0,
  "y": 0
}
{
  "type": "video",
  "src": "https://example.com/path/to/my/video.mp4",
  "duration": 7.3
}
{
  "type": "component",
  "component": "basic/001",
  "settings": {
    "headline": { "text": "Lorem ipsum", "color": "white" },
    "body": { "text": "Dolor sit amet" }
  }
}
{
  "type": "html",
  "html": "<h1>Hello world</h1>",
  "width": 800,
  "height": 600
}
{
  "type": "audio",
  "src": "https://example.com/audio.mp3",
  "duration": 5
}
{
  "type": "voice",
  "text": "This is a voiceover.",
  "voice": "en-US-Wavenet-D"
}
{
  "type": "audiogram",
  "color": "#00FF00",
  "amplitude": 5
}
{
  "type": "subtitles",
  "captions": "1\n00:00:00,000 --> 00:00:02,000\nHello world!"
}
Example Input:
{
  "comment": "MyProject",
  "resolution": "full-hd",
  "scenes": [
    {
      "elements": [
        { "type": "video", "src": "https://example.com/path/to/my/video.mp4" },
        { "type": "text", "text": "Hello world", "duration": 5 },
        { "type": "image", "src": "https://images.pexels.com/photos/1105666/pexels-photo-1105666.jpeg", "width": 1620, "height": 1080, "x": 0, "y": 0 },
        { "type": "component", "component": "basic/001", "settings": { "headline": { "text": "Lorem ipsum" } } },
        { "type": "html", "html": "<h1>Hello world</h1>", "width": 800, "height": 600 },
        { "type": "audio", "src": "https://example.com/audio.mp3", "duration": 5 },
        { "type": "voice", "text": "This is a voiceover.", "voice": "en-US-Wavenet-D" },
        { "type": "audiogram", "color": "#00FF00", "amplitude": 5 },
        { "type": "subtitles", "captions": "1\n00:00:00,000 --> 00:00:02,000\nHello world!" }
      ]
    }
  ]
}
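To make the schema concrete, here is a minimal TypeScript sketch of calling generate_video through an MCP client. It is illustrative only: it assumes the same @modelcontextprotocol/sdk client setup as the verification sketch earlier, the payload is a trimmed-down version of the example input above, and the exact shape of the returned content may vary.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "@omerrgocmen/json2video-mcp"],
    env: { ...process.env, JSON2VIDEO_API_KEY: "your_api_key_here" } as Record<string, string>,
  });
  const client = new Client({ name: "json2video-example", version: "1.0.0" }, { capabilities: {} });
  await client.connect(transport);

  // Submit a minimal project: one full-hd scene containing a single text element.
  const result = await client.callTool({
    name: "generate_video",
    arguments: {
      comment: "MyProject",
      resolution: "full-hd",
      scenes: [
        { elements: [{ type: "text", text: "Hello world", duration: 5 }] },
      ],
    },
  });

  // The response contains the project ID to pass to get_video_status.
  console.log(JSON.stringify(result, null, 2));

  await client.close();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});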
Notes for Users:
If resolution is set to custom, you must also provide width and height.
Video generation is asynchronous; check progress with get_video_status.
Output: Returns a project ID to use with get_video_status.
Get Video Status (get_video_status)
Check the status or retrieve the result of a video generation job.
Description: Retrieves the status or result of a previously started video generation job. Note: Video rendering is asynchronous and may take some time. If the status is not "done", please try again later using the same project ID.
Input Schema:
{
  "project": "string (required)",
  "apiKey": "string (optional)"
}
Example Input:
{
  "project": "q663vmm2"
}
Example Output:
{
  "success": true,
  "movie": {
    "success": true,
    "status": "done",
    "message": "",
    "project": "q663vmm2",
    "url": "https://assets.json2video.com/clients/yourclient/renders/yourvideo.mp4",
    "created_at": "2025-04-27T10:44:18.880Z",
    "ended_at": "2025-04-27T10:44:28.589Z",
    "duration": 11,
    "size": 359630,
    "width": 640,
    "height": 360,
    "rendering_time": 10
  }
}
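Since rendering is asynchronous, clients typically poll until the status becomes "done". The helper below is an illustrative sketch only: it reuses a connected Client from the earlier examples and assumes the tool returns its JSON result as a text content block, so adjust the parsing to whatever your client actually receives.

import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Illustrative polling helper (not part of the package).
async function waitForVideo(client: Client, project: string): Promise<any> {
  for (;;) {
    const result: any = await client.callTool({
      name: "get_video_status",
      arguments: { project },
    });

    // Hypothetical parsing of the tool result; the shape may differ.
    const text = result?.content?.[0]?.text;
    const status = text ? JSON.parse(text) : result;

    if (status?.movie?.status === "done") {
      return status; // status.movie.url points at the rendered MP4
    }

    console.log("Still rendering, retrying in 10s. Current status:", status?.movie?.status);
    await new Promise((resolve) => setTimeout(resolve, 10_000));
  }
}

// Usage (with the project ID returned by generate_video):
// const finished = await waitForVideo(client, "q663vmm2");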
Create Template (create_template)
Create a new template in json2video.
Description: Creates a new template with a given name and optional description.
Input Schema:
{
  "name": "string (required, name of the template)",
  "description": "string (optional, description of the template)",
  "apiKey": "string (optional)"
}
Example Input:
{
  "name": "MyTemplate",
  "description": "A reusable video template."
}
Output:
Get Template (get_template)
Get template details from json2video.
Description: Retrieves details of a template by its name.
Input Schema:
{
  "name": "string (required, name of the template)",
  "apiKey": "string (optional)"
}
Example Input:
{
  "name": "MyTemplate"
}
Output:
{
  "updated_at": "YYYY-MM-DDTHH:MM:SSZ",
  "created_at": "YYYY-MM-DDTHH:MM:SSZ",
  "movie": "{\"id\":\"template1\",\"comment\":\"Example template\",\"resolution\":\"full-hd\",\"quality\":\"high\",\"scenes\":[{\"id\":\"scene1\",\"comment\":\"Scene 1\",\"elements\":[]}],\"elements\":[],\"width\":1920,\"height\":1080}",
  "name": "MyTemplate",
  "id": "MyTemplate_ID"
}
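Note that the movie field in this response is a JSON-encoded string rather than a nested object. Here is a small, illustrative sketch for turning it back into a usable project definition; the TemplateRecord type is just an assumption matching the fields shown above.

// Shape assumed from the example output above.
interface TemplateRecord {
  id: string;
  name: string;
  movie: string; // stringified project JSON
  created_at: string;
  updated_at: string;
}

// Parse the stringified movie definition back into an object.
function parseTemplate(template: TemplateRecord) {
  const movie = JSON.parse(template.movie); // { id, comment, resolution, quality, scenes, ... }
  console.log(`${template.name}: resolution ${movie.resolution}, ${movie.scenes?.length ?? 0} scene(s)`);
  return movie;
}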
List Templates (list_templates)
List all available templates from json2video.
Description: Lists all templates available to the user.
Input Schema:
{
  "apiKey": "string (optional)"
}
Example Input:
{}
Output:
[
  {
    "updated_at": "YYYY-MM-DDTHH:MM:SSZ",
    "created_at": "YYYY-MM-DDTHH:MM:SSZ",
    "movie": "{\"id\":\"template1\",\"comment\":\"Example template\",\"resolution\":\"full-hd\",\"quality\":\"high\",\"scenes\":[{\"id\":\"scene1\",\"comment\":\"Scene 1\",\"elements\":[]}],\"elements\":[],\"width\":1920,\"height\":1080}",
    "name": "MyTemplate1",
    "id": "TEMPLATE_ID_1"
  },
  {
    "updated_at": "YYYY-MM-DDTHH:MM:SSZ",
    "created_at": "YYYY-MM-DDTHH:MM:SSZ",
    "movie": "{\"id\":\"template2\",\"resolution\":\"instagram-story\",\"quality\":\"medium\",\"scenes\":[{\"id\":\"scene2\",\"comment\":\"Scene 2\",\"elements\":[]}],\"elements\":[],\"comment\":\"Another template\"}",
    "name": "MyTemplate2",
    "id": "TEMPLATE_ID_2"
  }
]
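Putting the template tools together, the following illustrative sketch (reusing a connected Client from the earlier examples; names and values are placeholders) creates a template, lists all templates, and fetches one back by name:

import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Illustrative round trip through the template tools.
async function templateRoundTrip(client: Client) {
  // Create a reusable template.
  const created = await client.callTool({
    name: "create_template",
    arguments: { name: "MyTemplate", description: "A reusable video template." },
  });
  console.log("create_template:", JSON.stringify(created));

  // List every template available to the account.
  const all = await client.callTool({ name: "list_templates", arguments: {} });
  console.log("list_templates:", JSON.stringify(all));

  // Fetch a single template back by name.
  const one = await client.callTool({
    name: "get_template",
    arguments: { name: "MyTemplate" },
  });
  console.log("get_template:", JSON.stringify(one));
}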
{ "mcpServers": { "json2video-mcp": { "command": "npx", "args": [ "-y", "@omerrgocmen/json2video-mcp" ], "env": { "JSON2VIDEO_API_KEY": "<YOUR_API_KEY>" } } } }
Explore related MCPs that share similar capabilities and solve comparable challenges
by burningion
Upload, edit, search, and generate videos by leveraging LLM capabilities together with Video Jungle's media library.
by mamertofabian
Generate speech audio from text via ElevenLabs API and manage voice generation tasks through a Model Context Protocol server with a companion SvelteKit web client.
by Flyworks-AI
Create fast, free lip‑sync videos for digital avatars by providing audio or text, with optional avatar generation from images or videos.
by mberg
Generates spoken audio from text, outputting MP3 files locally and optionally uploading them to Amazon S3.
by allvoicelab
Generate natural speech, translate and dub videos, clone voices, remove hardcoded subtitles, and extract subtitles using powerful AI APIs.
by nabid-pf
Extracts YouTube video captions, subtitles, and metadata to supply structured information for AI assistants to generate concise video summaries.
by cartesia-ai
Provides clients such as Cursor, Claude Desktop, and OpenAI agents with capabilities to localize speech, convert text to audio, and infill voice clips via Cartesia's API.
by TSavo
Provides an enterprise‑grade MCP server that exposes 12 AI video generation tools, enabling AI assistants to create avatar videos, URL‑to‑video conversions, short videos, scripts, custom avatars, advanced lip‑sync, and more through natural language interactions.
by zed-industries
A high‑performance, multiplayer code editor designed for speed and collaboration.