by Flyworks-AI
Create fast, free lip‑sync videos for digital avatars by providing audio or text, with optional avatar generation from images or videos.
Flyworks MCP provides a Model Context Protocol server that generates lip‑synchronised videos for digital humans. Users can supply an existing avatar or create one from an image/video, then feed audio or plain text (which is converted to speech) to produce a speaking avatar clip.
```shell
pip install httpx "mcp[cli]>=1.6.0"
# or using uv
uv pip install httpx "mcp[cli]>=1.6.0"
```
```shell
export FLYWORKS_API_TOKEN="<YOUR_API_TOKEN>"
export FLYWORKS_API_BASE_URL="https://hfw-api.hifly.cc/api/v2/hifly"
export FLYWORKS_MCP_BASE_PATH="/path/to/output"
```
(A `.env` file works as well.) Run the server:

```shell
python server.py
```

Call the tools (`create_lipsync_video_by_audio` or `create_lipsync_video_by_text`) via any MCP-compatible client (Claude Desktop, Cursor, custom code). Async mode returns a `task_id`; sync mode waits up to 10 minutes and downloads the result automatically.

Dependencies: `httpx`, `mcp`.

A free trial token `2aeda3bcefac46a3` is available (subject to quota and watermarks).

Specify `output_path` in tool parameters or set `FLYWORKS_MCP_BASE_PATH` for default storage.

The Flyworks MCP is a Model Context Protocol (MCP) server that provides a convenient interface for interacting with the Flyworks API. It facilitates fast and free lipsync video creation for a wide range of digital avatars, including realistic and cartoon styles.
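As a client-side sketch, the tools can be invoked programmatically with the `mcp` Python SDK over stdio. The avatar URL and text below are placeholders, and the snippet assumes `server.py` is in the current directory:

```python
import asyncio

async def main() -> None:
    # SDK imports are kept inside the function so the sketch is readable
    # even where the `mcp` package is not installed.
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Launch the Flyworks server as a subprocess and talk to it over stdio.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # async_mode=True returns a task_id immediately instead of waiting.
            result = await session.call_tool(
                "create_lipsync_video_by_text",
                arguments={
                    "avatar_image_url": "https://example.com/avatar.png",  # placeholder
                    "text": "Welcome to the Flyworks MCP server demo.",
                    "async_mode": True,
                },
            )
            print(result)

if __name__ == "__main__":
    asyncio.run(main())
```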
Demo

Input avatar video (footage):

Audio clip with TTS saying: "I am a Flyworks digital human. Welcome to the Flyworks MCP server demo. This tool enables fast and free lipsync video creation for a wide range of digital avatars, including realistic and cartoon styles."

Generated lipsync video:
Dependencies: `httpx`, `mcp[cli]`
Using in Claude Desktop
Go to Claude > Settings > Developer > Edit Config, and edit `claude_desktop_config.json` to include the following:
```json
{
  "mcpServers": {
    "flyworks": {
      "command": "uvx",
      "args": ["flyworks-mcp", "-y"],
      "env": {
        "FLYWORKS_API_TOKEN": "your_api_token_here",
        "FLYWORKS_API_BASE_URL": "https://hfw-api.hifly.cc/api/v2/hifly",
        "FLYWORKS_MCP_BASE_PATH": "/path/to/your/output/directory"
      }
    }
  }
}
```
Using in Cursor
Go to Cursor -> Preferences -> Cursor Settings -> MCP -> Add new global MCP Server and add the above config.
Make sure to replace `your_api_token_here` with your actual API token, and update `FLYWORKS_MCP_BASE_PATH` to a valid directory on your system where output files will be saved.
Note: We offer free trial access to our tool with the token `2aeda3bcefac46a3`. However, please be aware that the daily quota for this free access is limited. Additionally, the generated videos will be watermarked and restricted to a duration of 45 seconds. For full access, please contact us at bd@flyworks.ai to acquire your token.
To install flyworks-mcp for Claude Desktop automatically via Smithery:

```shell
npx -y @smithery/cli install @Flyworks-AI/flyworks-mcp --client claude
```
Clone this repository:

```shell
git clone https://github.com/Flyworks-AI/flyworks-mcp.git
cd flyworks-mcp
```

Install dependencies:

```shell
pip install httpx "mcp[cli]>=1.6.0"
```

Or using `uv`:

```shell
uv pip install httpx "mcp[cli]>=1.6.0"
```
To avoid timeout issues during server startup, we recommend pre-installing all dependencies:

```shell
pip install pygments pydantic-core httpx "mcp[cli]>=1.6.0"
```
Configuration
Set your Flyworks API token as an environment variable:
```shell
# Linux/macOS
export FLYWORKS_API_TOKEN="your_token_here"

# Windows (Command Prompt)
set FLYWORKS_API_TOKEN=your_token_here

# Windows (PowerShell)
$env:FLYWORKS_API_TOKEN="your_token_here"
```
Alternatively, you can create a `.env` file.
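For example, a `.env` file in the project directory might look like this (all values are placeholders; substitute your own token and paths):

```
FLYWORKS_API_TOKEN=your_token_here
FLYWORKS_API_BASE_URL=https://hfw-api.hifly.cc/api/v2/hifly
FLYWORKS_MCP_BASE_PATH=/path/to/output
```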
Run the `server.py` file directly:

```shell
python server.py
```
Troubleshooting: `spawn uvx ENOENT` error

If you hit this error, confirm the absolute path of `uvx` by running this command in your terminal:

```shell
which uvx
```

Once you obtain the absolute path (e.g., /usr/local/bin/uvx), update your configuration to use that path (e.g., "command": "/usr/local/bin/uvx").
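The workaround can be sketched in shell: resolve the absolute path, falling back to a common install location if `uvx` is not on the current PATH (the fallback location is an assumption; adjust for your system), then put that value in the config's `command` field:

```shell
# Resolve the absolute path of uvx; /usr/local/bin/uvx is an assumed
# fallback for when uvx is not on PATH in this shell.
UVX_PATH="$(command -v uvx || echo /usr/local/bin/uvx)"
echo "$UVX_PATH"
```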
`create_lipsync_video_by_audio`

Create a lipsync video with audio input. Animates a digital human avatar to speak in sync with the provided audio.

Parameters:

- `avatar`: Digital human avatar ID. Either this or avatar creation parameters must be provided.
- `avatar_video_url`: URL of a video to create the avatar from.
- `avatar_image_url`: URL of an image to create the avatar from.
- `avatar_video_file`: Local path to a video file to create the avatar from.
- `avatar_image_file`: Local path to an image file to create the avatar from.
- `audio_url`: Remote URL of the audio file. One of `audio_url` or `audio_file` must be provided.
- `audio_file`: Local path to the audio file. One of `audio_url` or `audio_file` must be provided.
- `title`: Optional title for the created video.
- `async_mode`: If true, returns `task_id` immediately. If false, waits for completion and downloads the video. Default is true.
- `output_path`: Where to save the downloaded video if `async_mode` is false. Default is "output.mp4".

Notes:

Returns:
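The mutual-exclusion rules above can be captured in a small caller-side validator. This is an illustrative helper, not part of the server; only the parameter names come from the tool's documentation:

```python
def validate_audio_tool_args(args: dict) -> list:
    """Return a list of problems with create_lipsync_video_by_audio arguments.

    Illustrative only: checks the documented constraints locally,
    it does not call the Flyworks API.
    """
    problems = []
    avatar_creation_keys = {
        "avatar_video_url", "avatar_image_url",
        "avatar_video_file", "avatar_image_file",
    }
    # Either an existing avatar ID or avatar creation parameters must be given.
    if not args.get("avatar") and not (avatar_creation_keys & args.keys()):
        problems.append("provide 'avatar' or one of the avatar creation parameters")
    # Exactly one audio source must be given.
    if bool(args.get("audio_url")) == bool(args.get("audio_file")):
        problems.append("provide exactly one of 'audio_url' or 'audio_file'")
    return problems

# Usage: a well-formed argument set produces no problems.
ok = validate_audio_tool_args({
    "avatar_image_url": "https://example.com/avatar.png",  # placeholder URL
    "audio_file": "/tmp/speech.wav",
})
bad = validate_audio_tool_args({})
print(ok, bad)
```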
`create_lipsync_video_by_text`

Create a lipsync video with text input. Generates audio from the text and animates a digital human avatar to speak it.

Parameters:

- `avatar`: Digital human avatar ID. Either this or avatar creation parameters must be provided.
- `avatar_video_url`: URL of a video to create the avatar from.
- `avatar_image_url`: URL of an image to create the avatar from.
- `avatar_video_file`: Local path to a video file to create the avatar from.
- `avatar_image_file`: Local path to an image file to create the avatar from.
- `text`: Text content to be spoken by the avatar. Required.
- `voice`: Voice ID to use for text-to-speech. If not provided, a random voice will be selected automatically.
- `title`: Optional title for the created video.
- `async_mode`: If true, returns `task_id` immediately. If false, waits for completion and downloads the video. Default is true.
- `output_path`: Where to save the downloaded video if `async_mode` is false. Default is "output.mp4".

Notes:

Returns:
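As an illustration of the defaults listed above, a hypothetical caller-side helper (not part of the server) might assemble the arguments like this, omitting `voice` so the server picks a random one:

```python
from typing import Optional

def build_text_tool_args(avatar: str, text: str, voice: Optional[str] = None,
                         async_mode: bool = True,
                         output_path: str = "output.mp4") -> dict:
    """Assemble arguments for create_lipsync_video_by_text with documented defaults.

    Hypothetical helper: 'voice' is omitted when None so the server
    selects a random voice, per the parameter description above.
    """
    args = {"avatar": avatar, "text": text, "async_mode": async_mode}
    if not async_mode:
        args["output_path"] = output_path  # only used in sync mode
    if voice is not None:
        args["voice"] = voice
    return args

# Sync-mode call: the video is downloaded to the default output path.
sync_args = build_text_tool_args("avatar123", "Hello!", async_mode=False)
print(sync_args)
```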
For tasks run in async mode, you can check their status using the Flyworks API's `/creation/task` endpoint with the `task_id` returned by the tool.
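Client-side polling for async tasks can be sketched as below. The response shape of `/creation/task` is not documented here, so the status check is injected as a callable (and the `"succeeded"`/`"failed"` status values are assumptions); swap in a real `httpx` request against your deployment:

```python
import time

def wait_for_task(task_id: str, fetch_status, timeout_s: float = 600.0,
                  interval_s: float = 1.0) -> dict:
    """Poll fetch_status(task_id) until the task completes or fails.

    fetch_status is any callable returning a dict with a 'status' key;
    with httpx it could GET the /creation/task endpoint (response
    shape and terminal status values are assumptions).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        info = fetch_status(task_id)
        if info.get("status") in ("succeeded", "failed"):
            return info
        time.sleep(interval_s)
    raise TimeoutError(f"task {task_id} did not finish within {timeout_s}s")

# Usage with a stub that reports success on the third poll:
calls = {"n": 0}
def stub(task_id):
    calls["n"] += 1
    return {"status": "succeeded" if calls["n"] >= 3 else "running"}

result = wait_for_task("demo-task", stub, interval_s=0.0)
print(result, calls["n"])
```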