by 66julienmartin
Provides a Model Context Protocol server for the Qwen Max language model, enabling seamless integration with Claude Desktop and other MCP‑compatible clients.
A server implementation that exposes Qwen series language models (Max, Plus, Turbo) through the Model Context Protocol, allowing Claude Desktop and other tools to perform text generation, configure parameters, and handle errors via a standard MCP interface.
Install via Smithery:
npx -y @smithery/cli install @66julienmartin/mcp-server-qwen_max --client claude
Or install manually:
git clone https://github.com/66julienmartin/mcp-server-qwen-max.git
cd Qwen_Max
npm install
Create a .env file with your Dashscope API key:
DASHSCOPE_API_KEY=your-api-key-here
Add the server entry to the Claude Desktop configuration, pointing to the compiled index.js (or use the npx command shown in the serverConfig below).
npm run start # or use npx as configured
Example tool call:
{
"name": "qwen_max",
"arguments": {
"prompt": "Your prompt here",
"max_tokens": 8192,
"temperature": 0.7
}
}
The model is configured in src/index.ts; supported generation parameters include max_tokens and temperature.

Q: Which Node.js version is required?
A: Node.js v18 or higher.
Q: How do I switch to a different Qwen model?
A: Edit the model field in src/index.ts to qwen-plus or qwen-turbo.
Q: Where do I obtain a Dashscope API key?
A: Sign up on Alibaba Cloud Dashscope and generate an API key from the console.
Q: What environment variable is needed?
A: DASHSCOPE_API_KEY must be set in the .env file or the MCP server configuration.
Q: Can I run the server in watch mode during development?
A: Yes, use npm run dev, which recompiles on file changes.
Q: Is the server open source?
A: Yes, it is licensed under the MIT license.
A Model Context Protocol (MCP) server implementation for the Qwen Max language model.
Why Node.js? This implementation uses Node.js/TypeScript as it currently provides the most stable and reliable integration with MCP servers compared to other languages like Python. The Node.js SDK for MCP offers better type safety, error handling, and compatibility with Claude Desktop.
To install Qwen Max MCP Server for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @66julienmartin/mcp-server-qwen_max --client claude
To install manually:
git clone https://github.com/66julienmartin/mcp-server-qwen-max.git
cd Qwen_Max
npm install
By default, this server uses the Qwen-Max model. The Qwen series offers several commercial models with different capabilities:
Qwen-Max: provides the best inference performance, especially for complex and multi-step tasks. Context window: 32,768 tokens.
Qwen-Plus: balanced combination of performance, speed, and cost, ideal for moderately complex tasks. Context window: 131,072 tokens.
Qwen-Turbo: fast speed and low cost, suitable for simple tasks.
To modify the model, update the model name in src/index.ts:
// For Qwen-Max (default)
model: "qwen-max"
// For Qwen-Plus
model: "qwen-plus"
// For Qwen-Turbo
model: "qwen-turbo"
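If editing the source each time is inconvenient, the same switch could be driven by an environment variable instead. A minimal sketch; the QWEN_MODEL variable and the resolveModel helper are illustrative and not part of the current implementation:

// Hypothetical env-driven model selection (QWEN_MODEL is not a documented setting).
const SUPPORTED_MODELS = ["qwen-max", "qwen-plus", "qwen-turbo"] as const;
type QwenModel = (typeof SUPPORTED_MODELS)[number];

function resolveModel(): QwenModel {
  const requested = process.env.QWEN_MODEL ?? "qwen-max"; // default matches the server
  if ((SUPPORTED_MODELS as readonly string[]).includes(requested)) {
    return requested as QwenModel;
  }
  throw new Error(`Unsupported model "${requested}"; expected one of: ${SUPPORTED_MODELS.join(", ")}`);
}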
For more detailed information about available models, see the Alibaba Cloud model documentation: https://www.alibabacloud.com/help/en/model-studio/getting-started/models
qwen-max-mcp/
├── src/
│ ├── index.ts # Main server implementation
├── build/ # Compiled files
│ ├── index.js
├── LICENSE
├── README.md
├── package.json
├── package-lock.json
└── tsconfig.json
Create a .env file in the project root:
DASHSCOPE_API_KEY=your-api-key-here
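The server reads this key at startup. A minimal sketch of how that loading might look, assuming the dotenv package is used (the actual src/index.ts may wire this differently):

import "dotenv/config"; // reads .env from the project root

const apiKey = process.env.DASHSCOPE_API_KEY;
if (!apiKey) {
  // Failing fast here gives a clearer error than a 401 on the first API call.
  throw new Error("DASHSCOPE_API_KEY is not set; add it to .env or the MCP server config.");
}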
Add the server to your Claude Desktop configuration:
{
"mcpServers": {
"qwen_max": {
"command": "node",
"args": ["/path/to/Qwen_Max/build/index.js"],
"env": {
"DASHSCOPE_API_KEY": "your-api-key-here"
}
}
}
}
npm run dev # Watch mode
npm run build # Build
npm run start # Start server
// Example tool call
{
"name": "qwen_max",
"arguments": {
"prompt": "Your prompt here",
"max_tokens": 8192,
"temperature": 0.7
}
}
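For orientation, the following sketch shows how such a tool is typically wired together with the official @modelcontextprotocol/sdk and Dashscope's OpenAI-compatible endpoint. It is not the actual src/index.ts: the endpoint URL (international region), the version string, and the defaults shown are assumptions.

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import OpenAI from "openai";

// Dashscope exposes an OpenAI-compatible API, so the openai client can be reused.
// Assumption: international endpoint; mainland-China deployments use a different host.
const client = new OpenAI({
  apiKey: process.env.DASHSCOPE_API_KEY,
  baseURL: "https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
});

const server = new Server(
  { name: "qwen_max", version: "1.0.0" }, // version string is illustrative
  { capabilities: { tools: {} } },
);

// Advertise the single qwen_max tool and its input schema.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "qwen_max",
      description: "Generate text with the Qwen-Max model",
      inputSchema: {
        type: "object",
        properties: {
          prompt: { type: "string" },
          max_tokens: { type: "number" },
          temperature: { type: "number" },
        },
        required: ["prompt"],
      },
    },
  ],
}));

// Forward tool calls to the Qwen API and return the completion as text.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { prompt, max_tokens = 8192, temperature = 0.7 } =
    request.params.arguments as { prompt: string; max_tokens?: number; temperature?: number };
  const completion = await client.chat.completions.create({
    model: "qwen-max",
    messages: [{ role: "user", content: prompt }],
    max_tokens,
    temperature,
  });
  return {
    content: [{ type: "text", text: completion.choices[0]?.message?.content ?? "" }],
  };
});

async function main() {
  await server.connect(new StdioServerTransport()); // speak MCP over stdio
}
main().catch(console.error);

Claude Desktop then invokes the tool exactly as in the JSON example above.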
The temperature parameter controls the randomness of the model's output:
Lower values (0.0-0.7): more focused and deterministic outputs
Higher values (0.7-1.0): more creative and varied outputs
Recommended temperature settings by task:
Code generation: 0.0-0.3
Technical writing: 0.3-0.5
General tasks: 0.7 (default)
Creative writing: 0.8-1.0
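When driving the server programmatically, these recommendations can be encoded as defaults. A purely illustrative helper; the task names are hypothetical and not part of the server's API:

// Map task types to the recommended ranges above (midpoints chosen).
const TEMPERATURE_BY_TASK: Record<string, number> = {
  "code-generation": 0.2,   // recommended 0.0-0.3
  "technical-writing": 0.4, // recommended 0.3-0.5
  "general": 0.7,           // the server default
  "creative-writing": 0.9,  // recommended 0.8-1.0
};

const temperature = TEMPERATURE_BY_TASK["code-generation"] ?? 0.7;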
The server provides detailed error messages for common issues:
API authentication errors
Invalid parameters
Rate limiting
Network issues
Token limit exceeded
Model availability issues
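One way such failures can be surfaced is as MCP tool results flagged with isError, so a client like Claude Desktop can show the message instead of the server crashing. A sketch under the same endpoint assumption as above, not the actual implementation:

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.DASHSCOPE_API_KEY,
  baseURL: "https://dashscope-intl.aliyuncs.com/compatible-mode/v1", // assumed intl endpoint
});

type ToolResult = { content: Array<{ type: "text"; text: string }>; isError?: boolean };

// Return API failures as flagged tool results rather than throwing.
async function safeGenerate(prompt: string): Promise<ToolResult> {
  try {
    const completion = await client.chat.completions.create({
      model: "qwen-max",
      messages: [{ role: "user", content: prompt }],
    });
    return { content: [{ type: "text", text: completion.choices[0]?.message?.content ?? "" }] };
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    return { content: [{ type: "text", text: `Qwen API error: ${message}` }], isError: true };
  }
}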
Contributions are welcome! Please feel free to submit a Pull Request.
MIT
serverConfig:
{
  "mcpServers": {
    "qwen_max": {
      "command": "npx",
      "args": ["-y", "@66julienmartin/mcp-server-qwen_max"],
      "env": {
        "DASHSCOPE_API_KEY": "<YOUR_API_KEY>"
      }
    }
  }
}
Explore related MCPs that share similar capabilities and solve comparable challenges
by DMontgomery40
A Model Context Protocol server that proxies DeepSeek's language models, enabling seamless integration with MCP‑compatible applications.
by deepfates
Runs Replicate models through the Model Context Protocol, exposing tools for model discovery, prediction management, and image handling via a simple CLI interface.
by 66julienmartin
A Model Context Protocol (MCP) server implementation connecting Claude Desktop with DeepSeek's language models (R1/V3)
by ruixingshi
Provides Deepseek model's chain‑of‑thought reasoning to MCP‑enabled AI clients, supporting both OpenAI API mode and local Ollama mode.
by groundlight
Expose HuggingFace zero‑shot object detection models as tools for large language or vision‑language models, enabling object localisation and zoom functionality on images.
by Verodat
Enables AI models to interact with Verodat's data management capabilities through a set of standardized tools for retrieving, creating, and managing datasets.
Run advanced AI models locally with high performance while maintaining full data privacy, accessible through native desktop applications and a browser‑based platform.
Upload, analyze, and visualize documents, compare multiple AI model responses side‑by‑side, generate diagrams, solve math with KaTeX, and collaborate securely within a single unified interface.
by zed-industries
A high‑performance, multiplayer code editor designed for speed and collaboration.