by burningion
Upload, edit, search, and generate videos by leveraging LLM capabilities together with Video Jungle's media library.
Video Editor MCP provides an MCP server that lets users manage video assets, perform multimodal analysis, search by content, and automatically generate or edit videos. It integrates with the Video Jungle platform via API keys and exposes a set of tools (add‑video, search‑videos, generate‑edit‑from‑videos, etc.) that can be invoked from LLM prompts.
uv run video-editor-mcp YOUR_API_KEY
or, to enable local Photos app access on macOS:
LOAD_PHOTOS_DB=1 uv run video-editor-mcp YOUR_API_KEY
can you download the video at https://... and name it fly traps?
can you search my videos for fly traps?
can you create an edit of all the times the video says "fly trap"?
app.log: server log written for debugging, or use the MCP Inspector for a UI view.
vj://: URI scheme for referencing individual videos and projects.
edit-locally: tool for professional editing.

Q: Where do I get the API key?
A: Sign in to Video Jungle, go to Profile → Settings, and copy the API key.
Q: Do I need any special environment variables?
A: Only if you want to search your macOS Photos library – set LOAD_PHOTOS_DB=1 before launching the server.
Q: How are videos stored and referenced?
A: Uploaded videos are assigned a vj:// URI which can be used in subsequent tool calls.
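Since vj:// is a custom scheme, a client can split such a URI with the standard library. The kind/identifier layout below is an assumption for illustration, not a documented format:

```python
from urllib.parse import urlparse

def parse_vj_uri(uri: str) -> tuple[str, str]:
    """Split a vj:// URI into (kind, identifier).

    NOTE: the kind/identifier layout is an assumption, not Video Jungle's
    documented format -- adjust to match the URIs the server actually returns.
    """
    parts = urlparse(uri)
    if parts.scheme != "vj":
        raise ValueError(f"not a vj:// URI: {uri}")
    # For "vj://video/abc-123", netloc is "video" and path is "/abc-123".
    return parts.netloc, parts.path.lstrip("/")
```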
Q: Can I see what the server is doing in real time?
A: Yes, logs are written to app.log. For a richer view, launch the MCP Inspector with:
npx @modelcontextprotocol/inspector uv run --directory /path/to/video-editor-mcp video-editor-mcp YOUR_API_KEY
Q: Is there a way to edit videos locally?
A: Use the edit-locally tool: it creates an OpenTimelineIO project and opens it in a running DaVinci Resolve Studio instance.
Q: How do I publish updates to Video Jungle?
A: The update-video-edit tool pushes live updates; changes appear in Video Jungle if the app is open.
See a demo here: https://www.youtube.com/watch?v=KG6TMLD8GmA
Upload, edit, search, and generate videos from everyone's favorite LLM and Video Jungle.
You'll need to sign up for an account at Video Jungle in order to use this tool, and add your API key.
The server implements an interface to upload, generate, and edit videos; more capabilities are coming soon.
The server implements a few tools:
In order to use the tools, you'll need to sign up for Video Jungle and add your API key.
add-video
Here's an example prompt to invoke the add-video tool:
can you download the video at https://www.youtube.com/shorts/RumgYaH5XYw and name it fly traps?
This will download a video from a URL, add it to your library, and analyze it for retrieval later. Analysis is multi-modal, so both audio and visual components can be queried against.
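Under the hood, MCP tool invocations travel as JSON-RPC 2.0 "tools/call" requests. A sketch of what an add-video call might look like on the wire; the argument names "url" and "name" are assumptions, so check the server's tool schema for the real ones:

```python
import json

# Hypothetical argument names ("url", "name") -- the server's actual
# tool schema may use different keys.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",  # standard MCP method for invoking a tool
    "params": {
        "name": "add-video",
        "arguments": {
            "url": "https://www.youtube.com/shorts/RumgYaH5XYw",
            "name": "fly traps",
        },
    },
}
payload = json.dumps(request)  # what gets written to the server's stdin
```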
search-videos
Once you've got a video downloaded and analyzed, you can then run queries against it using the search-videos tool:
can you search my videos for fly traps?
Search results contain relevant metadata for generating a video edit according to details discovered in the initial analysis.
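As a sketch of how a client might work with those results, the snippet below filters matching segments; the field names are hypothetical, not the server's actual metadata schema:

```python
# Hypothetical result shape -- the real metadata fields may differ.
results = [
    {"video": "vj://video/abc-123", "start": 4.2, "end": 7.9,
     "text": "a venus fly trap closes"},
    {"video": "vj://video/abc-123", "start": 31.0, "end": 33.5,
     "text": "the fly trap snaps shut"},
    {"video": "vj://video/abc-123", "start": 50.0, "end": 52.0,
     "text": "unrelated footage"},
]

# Keep only segments whose transcript text mentions the query phrase.
matches = [r for r in results if "fly trap" in r["text"]]
```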
search-local-videos
You must set the environment variable LOAD_PHOTOS_DB=1 to use this tool; when it's set, Claude will prompt for access to files on your local machine.
Once that's done, you can search through your Photos app for videos that exist on your phone, using Apple's tags.
In my case, when I search for "Skateboard", I get 1903 video files.
can you search my local video files for Skateboard?
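The environment-variable gate presumably reduces to a check like the sketch below; the exact comparison the server performs is an assumption:

```python
import os

def photos_search_enabled(env=os.environ) -> bool:
    """Sketch of the LOAD_PHOTOS_DB gate.

    ASSUMPTION: the server likely compares against the literal string "1";
    the actual check may be more permissive.
    """
    return env.get("LOAD_PHOTOS_DB") == "1"
```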
generate-edit-from-videos
Finally, you can use these search results to generate an edit:
can you create an edit of all the times the video says "fly trap"?
Currently, the video edit tool relies on the context within the current chat.
generate-edit-from-single-video
Alternatively, you can cut an edit down from a single existing video:
can you create an edit of all the times this video says the word "fly trap"?
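Conceptually, such an edit boils down to collecting the time ranges where the phrase occurs in the transcript. A sketch under an assumed segment shape (the field names and the padding are illustrative, not the server's algorithm):

```python
def cut_list(segments, phrase, pad=0.25):
    """Collect (start, end) ranges whose transcript text contains the phrase.

    ASSUMPTION: segments carry "start"/"end" seconds and a "text" field;
    a small pad keeps cuts from clipping the spoken word.
    """
    cuts = []
    for seg in segments:
        if phrase in seg["text"].lower():
            cuts.append((max(0.0, seg["start"] - pad), seg["end"] + pad))
    return cuts
```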
You must log in to Video Jungle, open Settings, and copy your API key. Then use it to start the Video Jungle MCP server:
$ uv run video-editor-mcp YOURAPIKEY
To allow this MCP server to search your Photos app on macOS:
$ LOAD_PHOTOS_DB=1 uv run video-editor-mcp YOURAPIKEY
To install Video Editor for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install video-editor-mcp --client claude
Alternatively, you can configure your claude_desktop_config.json manually:
On macOS: ~/Library/Application\ Support/Claude/claude_desktop_config.json
On Windows: %APPDATA%/Claude/claude_desktop_config.json
Published server configuration (runs the released package with uvx):
"mcpServers": {
  "video-editor-mcp": {
    "command": "uvx",
    "args": [
      "video-editor-mcp",
      "YOURAPIKEY"
    ]
  }
}
Development configuration (runs from a local checkout):
"mcpServers": {
  "video-editor-mcp": {
    "command": "uv",
    "args": [
      "--directory",
      "/Users/YOURDIRECTORY/video-editor-mcp",
      "run",
      "video-editor-mcp",
      "YOURAPIKEY"
    ]
  }
}
With local Photos app access enabled (search your Photos app):
"video-jungle-mcp": {
  "command": "uv",
  "args": [
    "--directory",
    "/Users/<PATH_TO>/video-jungle-mcp",
    "run",
    "video-editor-mcp",
    "<YOURAPIKEY>"
  ],
  "env": {
    "LOAD_PHOTOS_DB": "1"
  }
},
Be sure to replace the directory paths with the location of the repository on your computer.
To prepare the package for distribution:
uv sync
uv build
This will create source and wheel distributions in the dist/ directory.
uv publish
Note: You'll need to set PyPI credentials via environment variables or command flags:
--token or UV_PUBLISH_TOKEN
--username or UV_PUBLISH_USERNAME
--password or UV_PUBLISH_PASSWORD
Since MCP servers run over stdio, debugging can be challenging. For the best debugging experience, we strongly recommend using the MCP Inspector.
You can launch the MCP Inspector via npm with this command:
(Be sure to replace YOURDIRECTORY and YOURAPIKEY with the directory this repo is in and your Video Jungle API key, found in the settings page.)
npx @modelcontextprotocol/inspector uv run --directory /Users/YOURDIRECTORY/video-editor-mcp video-editor-mcp YOURAPIKEY
Upon launching, the Inspector will display a URL that you can access in your browser to begin debugging.
Additionally, I've added logging to app.log in the project directory. You can add log lines to diagnose API calls, for example:
logging.info("this is a test log")
A reasonable way to follow along as you're working on the project is to open a terminal session and run:
$ tail -n 90 -f app.log