by tan-yong-sheng
Provides AI‑powered analysis of images and videos using Google Gemini or Vertex AI, supporting URLs, local files, and base64 inputs, and returning detailed descriptions, comparisons, object detection with annotated output, and video summarization.
AI Vision MCP Server delivers multimodal vision capabilities through Model Context Protocol. It lets clients submit images or videos and receive AI‑generated insights such as descriptive captions, side‑by‑side comparisons, object detection with bounding‑box annotations, and video summarization. The service can operate with either Google Gemini APIs or Vertex AI, making it flexible for development and production environments.
Quick start: run npx ai-vision-mcp (or add it to your MCP client configuration).
For the Google provider, set IMAGE_PROVIDER=google, VIDEO_PROVIDER=google, GEMINI_API_KEY=...
For the Vertex AI provider, set IMAGE_PROVIDER=vertex_ai, VIDEO_PROVIDER=vertex_ai, VERTEX_CREDENTIALS=..., GCS_BUCKET_NAME=...
Call the tools (analyze_image, compare_images, detect_objects_in_image, analyze_video) from any MCP‑compatible client (Claude Desktop, Cursor, Cline, etc.). Provide the required parameters (source, prompt, optional options) as JSON.
Q: Which provider should I choose?
A: Use the Google Gemini provider for quick prototyping; switch to Vertex AI for production‑grade scaling and GCS storage.
Q: What video sources are accepted?
A: YouTube URLs and local file paths are supported. Other public URLs are not currently handled.
Q: How are annotated images returned?
A: If outputFilePath is supplied, the file is saved there and returned as a file object. Otherwise, the server saves the image to a temporary directory and returns a tempFile reference.
Q: Can I customize model parameters per function?
A: Yes. Environment variables like TEMPERATURE_FOR_ANALYZE_IMAGE or MAX_TOKENS_FOR_DETECT_OBJECTS_IN_IMAGE override defaults for specific tools.
Q: What timeout settings are recommended?
A: Set MCP_TIMEOUT ≥ 60000 ms for server start‑up and MCP_TOOL_TIMEOUT ≥ 300000 ms for tool execution, especially for video analysis.
A powerful Model Context Protocol (MCP) server that provides AI-powered image and video analysis using Google Gemini and Vertex AI models.
You can use either the google provider or the vertex_ai provider. For simplicity, the google provider is recommended.
Below are the environment variables you need to set based on your selected provider. (Note: it's recommended to set your MCP client's timeout configuration to more than 5 minutes.)
(i) Using Google AI Studio Provider
export IMAGE_PROVIDER="google" # or vertex_ai
export VIDEO_PROVIDER="google" # or vertex_ai
export GEMINI_API_KEY="your-gemini-api-key"
Get your Google AI Studio API key here
(ii) Using Vertex AI Provider
export IMAGE_PROVIDER="vertex_ai"
export VIDEO_PROVIDER="vertex_ai"
export VERTEX_CREDENTIALS="/path/to/service-account.json"
export GCS_BUCKET_NAME="your-gcs-bucket"
Refer to the guide here on how to set this up.
Below are installation guides for this MCP server on different MCP clients, such as Claude Desktop, Claude Code, Cursor, Cline, etc.
Add to your Claude Desktop configuration:
(i) Using Google AI Studio Provider
{
"mcpServers": {
"ai-vision-mcp": {
"command": "npx",
"args": ["ai-vision-mcp"],
"env": {
"IMAGE_PROVIDER": "google",
"VIDEO_PROVIDER": "google",
"GEMINI_API_KEY": "your-gemini-api-key"
}
}
}
}
(ii) Using Vertex AI Provider
{
"mcpServers": {
"ai-vision-mcp": {
"command": "npx",
"args": ["ai-vision-mcp"],
"env": {
"IMAGE_PROVIDER": "vertex_ai",
"VIDEO_PROVIDER": "vertex_ai",
"VERTEX_CREDENTIALS": "/path/to/service-account.json",
"GCS_BUCKET_NAME": "ai-vision-mcp-{VERTEX_PROJECT_ID}"
}
}
}
}
(i) Using Google AI Studio Provider
claude mcp add ai-vision-mcp \
-e IMAGE_PROVIDER=google \
-e VIDEO_PROVIDER=google \
-e GEMINI_API_KEY=your-gemini-api-key \
-- npx ai-vision-mcp
(ii) Using Vertex AI Provider
claude mcp add ai-vision-mcp \
-e IMAGE_PROVIDER=vertex_ai \
-e VIDEO_PROVIDER=vertex_ai \
-e VERTEX_CREDENTIALS=/path/to/service-account.json \
-e GCS_BUCKET_NAME=ai-vision-mcp-{VERTEX_PROJECT_ID} \
-- npx ai-vision-mcp
Note: Increase the MCP startup timeout to 1 minute and the MCP tool execution timeout to about 5 minutes by updating ~\.claude\settings.json as follows:
{
"env": {
"MCP_TIMEOUT": "60000",
"MCP_TOOL_TIMEOUT": "300000"
}
}
Go to: Settings -> Cursor Settings -> MCP -> Add new global MCP server
The recommended approach is to paste the following configuration into your ~/.cursor/mcp.json file. You may also install it in a specific project by creating .cursor/mcp.json in your project folder. See the Cursor MCP docs for more info.
(i) Using Google AI Studio Provider
{
"mcpServers": {
"ai-vision-mcp": {
"command": "npx",
"args": ["ai-vision-mcp"],
"env": {
"IMAGE_PROVIDER": "google",
"VIDEO_PROVIDER": "google",
"GEMINI_API_KEY": "your-gemini-api-key"
}
}
}
}
(ii) Using Vertex AI Provider
{
"mcpServers": {
"ai-vision-mcp": {
"command": "npx",
"args": ["ai-vision-mcp"],
"env": {
"IMAGE_PROVIDER": "vertex_ai",
"VIDEO_PROVIDER": "vertex_ai",
"VERTEX_CREDENTIALS": "/path/to/service-account.json",
"GCS_BUCKET_NAME": "ai-vision-mcp-{VERTEX_PROJECT_ID}"
}
}
}
}
Cline uses a JSON configuration file to manage MCP servers. To integrate the provided MCP server configuration:
(i) Using Google AI Studio Provider
{
"mcpServers": {
"timeout": 300,
"type": "stdio",
"ai-vision-mcp": {
"command": "npx",
"args": ["ai-vision-mcp"],
"env": {
"IMAGE_PROVIDER": "google",
"VIDEO_PROVIDER": "google",
"GEMINI_API_KEY": "your-gemini-api-key"
}
}
}
}
(ii) Using Vertex AI Provider
{
"mcpServers": {
"ai-vision-mcp": {
"timeout": 300,
"type": "stdio",
"command": "npx",
"args": ["ai-vision-mcp"],
"env": {
"IMAGE_PROVIDER": "vertex_ai",
"VIDEO_PROVIDER": "vertex_ai",
"VERTEX_CREDENTIALS": "/path/to/service-account.json",
"GCS_BUCKET_NAME": "ai-vision-mcp-{VERTEX_PROJECT_ID}"
}
}
}
}
The server uses stdio transport and follows the standard MCP protocol. It can be integrated with any MCP-compatible client by running:
npx ai-vision-mcp
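For example, a minimal programmatic client might spawn the server over stdio and call a tool like this. This is a sketch assuming the official @modelcontextprotocol/sdk TypeScript package; the tool name and arguments mirror the analyze_image tool documented below and are not taken from this project's own code.
// Minimal sketch of a programmatic MCP client over stdio,
// assuming the official @modelcontextprotocol/sdk package.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Spawn the server over stdio, passing provider credentials via env.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["ai-vision-mcp"],
    env: {
      IMAGE_PROVIDER: "google",
      VIDEO_PROVIDER: "google",
      GEMINI_API_KEY: process.env.GEMINI_API_KEY ?? "",
    },
  });

  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);

  // Call one of the four tools with JSON parameters.
  const result = await client.callTool({
    name: "analyze_image",
    arguments: {
      imageSource: "https://example.com/image.jpg",
      prompt: "Describe this image in detail.",
    },
  });
  console.log(result);

  await client.close();
}

main().catch(console.error);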
The server provides four main MCP tools:
analyze_image
Analyzes an image using AI and returns a detailed description.
Parameters:
imageSource (string): URL, base64 data, or file path to the image
prompt (string): Question or instruction for the AI
options (object, optional): Analysis options including temperature and max tokens
Examples:
{
"imageSource": "https://plus.unsplash.com/premium_photo-1710965560034-778eedc929ff",
"prompt": "What is this image about? Describe what you see in detail."
}
{
"imageSource": "C:\\Users\\username\\Downloads\\image.jpg",
"prompt": "What is this image about? Describe what you see in detail."
}
compare_images
Compares multiple images using AI and returns a detailed comparison analysis.
Parameters:
imageSources (array): Array of image sources (URLs, base64 data, or file paths) - minimum 2, maximum 4 images
prompt (string): Question or instruction for comparing the images
options (object, optional): Analysis options including temperature and max tokens
Examples:
{
"imageSources": [
"https://example.com/image1.jpg",
"https://example.com/image2.jpg"
],
"prompt": "Compare these two images and tell me the differences"
}
{
"imageSources": [
"https://example.com/image1.jpg",
"C:\\\\Users\\\\username\\\\Downloads\\\\image2.jpg",
"data:image/jpeg;base64,/9j/4AAQSkZJRgAB..."
],
"prompt": "Which image has the best lighting quality?"
}
detect_objects_in_image
Detects objects in an image using AI vision models and generates annotated images with bounding boxes. Returns the detected objects with coordinates and saves the annotated image either to a specified file or to a temporary directory.
Parameters:
imageSource (string): URL, base64 data, or file path to the image
prompt (string): Custom detection prompt describing what to detect or recognize in the image
outputFilePath (string, optional): Explicit output path for the annotated image
Configuration:
This function uses optimized default parameters for object detection and does not accept a runtime options parameter. To customize the AI parameters (temperature, topP, topK, maxTokens), use environment variables:
# Recommended environment variable settings for object detection (these are now the defaults)
TEMPERATURE_FOR_DETECT_OBJECTS_IN_IMAGE=0.0 # Deterministic responses
TOP_P_FOR_DETECT_OBJECTS_IN_IMAGE=0.95 # Nucleus sampling
TOP_K_FOR_DETECT_OBJECTS_IN_IMAGE=30 # Vocabulary selection
MAX_TOKENS_FOR_DETECT_OBJECTS_IN_IMAGE=8192 # High token limit for JSON
File Handling Logic:
Response Types:
file object when an explicit outputFilePath is provided
tempFile object when no explicit outputFilePath is provided, in which case the annotated image is auto-saved to a temporary folder
detections array with detected objects and coordinates
summary with percentage-based coordinates for browser automation
(A sketch of these response shapes follows the examples below.)
Examples:
{
"imageSource": "https://example.com/image.jpg",
"prompt": "Detect all objects in this image"
}
{
"imageSource": "C:\\Users\\username\\Downloads\\image.jpg",
"outputFilePath": "C:\\Users\\username\\Documents\\annotated_image.png"
}
{
"imageSource": "data:image/jpeg;base64,/9j/4AAQSkZJRgAB...",
"prompt": "Detect and label all electronic devices in this image"
}
analyze_video
Analyzes a video using AI and returns a detailed description.
Parameters:
videoSource (string): YouTube URL, GCS URI, or local file path to the video
prompt (string): Question or instruction for the AI
options (object, optional): Analysis options including temperature and max tokens
Supported video sources:
YouTube URLs (e.g., https://www.youtube.com/watch?v=...)
Local file paths (e.g., C:\Users\username\Downloads\video.mp4)
Examples:
{
"videoSource": "https://www.youtube.com/watch?v=9hE5-98ZeCg",
"prompt": "What is this video about? Describe what you see in detail."
}
{
"videoSource": "C:\\Users\\username\\Downloads\\video.mp4",
"prompt": "What is this video about? Describe what you see in detail."
}
Note: YouTube URLs are the only public video URLs currently supported.
For basic setup, you only need to configure the provider selection and required credentials:
export IMAGE_PROVIDER="google"
export VIDEO_PROVIDER="google"
export GEMINI_API_KEY="your-gemini-api-key"
export IMAGE_PROVIDER="vertex_ai"
export VIDEO_PROVIDER="vertex_ai"
export VERTEX_CREDENTIALS="/path/to/service-account.json"
export GCS_BUCKET_NAME="your-gcs-bucket"
For comprehensive environment variable documentation, 👉 see the Environment Variable Guide.
The server uses a hierarchical configuration system where more specific settings override general ones:
1. Function-specific settings (TEMPERATURE_FOR_ANALYZE_IMAGE, etc.)
2. Task-specific settings (TEMPERATURE_FOR_IMAGE, etc.)
3. Global settings (TEMPERATURE, etc.)
A resolution sketch follows the examples below.
Basic Optimization:
# General settings
export TEMPERATURE=0.7
export MAX_TOKENS=1500
# Task-specific optimization
export TEMPERATURE_FOR_IMAGE=0.2 # More precise for images
export TEMPERATURE_FOR_VIDEO=0.5 # More creative for videos
Function-specific Optimization:
# Optimize individual functions
export TEMPERATURE_FOR_ANALYZE_IMAGE=0.1
export TEMPERATURE_FOR_COMPARE_IMAGES=0.3
export TEMPERATURE_FOR_DETECT_OBJECTS_IN_IMAGE=0.0 # Deterministic
export MAX_TOKENS_FOR_DETECT_OBJECTS_IN_IMAGE=8192 # High token limit
Model Selection:
# Choose models per function
export ANALYZE_IMAGE_MODEL="gemini-2.5-flash-lite"
export COMPARE_IMAGES_MODEL="gemini-2.5-flash"
export ANALYZE_VIDEO_MODEL="gemini-2.5-pro"
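To make the precedence concrete, a resolver along the following lines would check the function-specific variable first, then the task-specific one, then the global default. This is a sketch of the documented override behavior, not the project's actual implementation.
// Sketch: resolve a setting via the documented precedence
// (function-specific > task-specific > global).
function resolveSetting(
  name: string,            // e.g., "TEMPERATURE"
  task: "IMAGE" | "VIDEO", // task scope
  fn: string               // e.g., "ANALYZE_IMAGE"
): string | undefined {
  return (
    process.env[`${name}_FOR_${fn}`] ??   // e.g., TEMPERATURE_FOR_ANALYZE_IMAGE
    process.env[`${name}_FOR_${task}`] ?? // e.g., TEMPERATURE_FOR_IMAGE
    process.env[name]                     // e.g., TEMPERATURE
  );
}

// Example: with the settings above, analyze_image resolves to 0.1,
// compare_images to 0.3, and analyze_video falls back to 0.5.
const temperature = Number(resolveSetting("TEMPERATURE", "IMAGE", "ANALYZE_IMAGE") ?? 0.7);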
# Clone the repository
git clone https://github.com/tan-yong-sheng/ai-vision-mcp.git
cd ai-vision-mcp
# Install dependencies
npm install
# Build the project
npm run build
# Start development server
npm run dev
npm run build - Build the TypeScript project
npm run dev - Start development server with watch mode
npm run lint - Run ESLint
npm run format - Format code with Prettier
npm start - Start the built server
The project follows a modular architecture (a factory sketch follows the layout below):
src/
├── providers/ # AI provider implementations
│ ├── gemini/ # Google Gemini provider
│ ├── vertexai/ # Vertex AI provider
│ └── factory/ # Provider factory
├── services/ # Core services
│ ├── ConfigService.ts
│ └── FileService.ts
├── storage/ # Storage implementations
├── file-upload/ # File upload strategies
├── types/ # TypeScript type definitions
├── utils/ # Utility functions
└── server.ts # Main MCP server
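As a rough illustration of how the providers/ and factory/ folders might fit together, here is a hypothetical factory sketch; the interface and class names are illustrative, not the project's actual exports.
// Hypothetical provider factory sketch; the real providers live in
// src/providers/gemini and src/providers/vertexai.
interface VisionProvider {
  analyzeImage(source: string, prompt: string): Promise<string>;
}

class GeminiProvider implements VisionProvider {
  constructor(private apiKey: string) {}
  async analyzeImage(source: string, prompt: string): Promise<string> {
    // A real implementation would call the Gemini API here.
    return `Gemini analysis of ${source}: ${prompt}`;
  }
}

class VertexAIProvider implements VisionProvider {
  constructor(private credentialsPath: string, private bucket: string) {}
  async analyzeImage(source: string, prompt: string): Promise<string> {
    // A real implementation would upload to GCS and call Vertex AI here.
    return `Vertex AI analysis of ${source}: ${prompt}`;
  }
}

// Selects the provider based on IMAGE_PROVIDER / VIDEO_PROVIDER.
function createProvider(kind: string): VisionProvider {
  switch (kind) {
    case "google":
      return new GeminiProvider(process.env.GEMINI_API_KEY ?? "");
    case "vertex_ai":
      return new VertexAIProvider(
        process.env.VERTEX_CREDENTIALS ?? "",
        process.env.GCS_BUCKET_NAME ?? ""
      );
    default:
      throw new Error(`Unknown provider: ${kind}`);
  }
}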
The server includes comprehensive error handling.
Contributing: fork the repository, create a feature branch (git checkout -b feature/amazing-feature), commit your changes (git commit -m 'Add amazing feature'), push the branch (git push origin feature/amazing-feature), and open a pull request.
This project is licensed under the MIT License - see the LICENSE file for details.