by FradSer
Provides advanced sequential thinking capabilities via a multi‑agent system, exposing a `sequentialthinking` MCP tool that orchestrates six specialized agents to analyze, research, and synthesize user thoughts.
Mcp Server Mas Sequential Thinking implements a multi‑dimensional thinking process using six specialized AI agents (Factual, Emotional, Critical, Optimistic, Creative, and Synthesis). Each request follows a deterministic full_exploration workflow where agents run in parallel, optionally perform web research via ExaTools, and a final synthesis agent composes a coherent answer.
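The parallel-perspectives-then-synthesis pattern described above can be sketched in plain Python. This is a simplified illustration with stub functions, not the actual Agno implementation: `run_agent` and `full_exploration` are hypothetical stand-ins for LLM-backed agents.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the perspective agents; the real system uses
# Agno agents backed by an LLM. Each "agent" here just labels its angle.
PERSPECTIVES = ["Factual", "Emotional", "Critical", "Optimistic", "Creative"]

def run_agent(perspective: str, thought: str) -> str:
    # Placeholder for an LLM call from one cognitive perspective.
    return f"[{perspective}] analysis of: {thought}"

def full_exploration(thought: str) -> str:
    # The perspective agents run in parallel...
    with ThreadPoolExecutor(max_workers=len(PERSPECTIVES)) as pool:
        results = list(pool.map(lambda p: run_agent(p, thought), PERSPECTIVES))
    # ...then a synthesis step composes a single coherent answer.
    return "Synthesis of:\n" + "\n".join(results)

print(full_exploration("How should we price the new product?"))
```

The real workflow adds optional web research via ExaTools before synthesis; this sketch only shows the fan-out/fan-in shape.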
Typical usage:
- Call the `sequentialthinking` tool with the required `ThoughtData` fields.
- Read `structuredContent.should_continue` from the response and loop calls until it is `false`, using the suggested next-call arguments for smooth iteration.

Q: Do I need an Exa API key?
A: No. The system works without web research; agents rely on internal reasoning. Provide EXA_API_KEY only if you want enhanced data retrieval.
Q: Which LLM provider should I choose?
A: DeepSeek is the default and cost‑effective. Switch to Groq, OpenRouter, Anthropic, GitHub Models, or Ollama by setting LLM_PROVIDER and the corresponding API key.
Q: How many tokens will a call consume?
A: Expect 5-10× the token count of a single-agent tool because six agents generate output plus a synthesis step.
Q: Can I branch from a previous thought?
A: Yes. Use branchFromThought and branchId fields in the request to create a divergent line of reasoning.
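A branching request might look like the following payload sketch; the field names follow the `ThoughtData` schema, while the values (thought text, branch identifier) are illustrative:

```python
# Hypothetical request payload that branches from thought 3 of an existing
# sequence; both branch fields must be set together.
branch_request = {
    "thought": "Explore an alternative pricing model instead.",
    "thoughtNumber": 4,
    "totalThoughts": 6,
    "nextThoughtNeeded": True,
    "isRevision": False,
    "branchFromThought": 3,     # the earlier step to diverge from
    "branchId": "alt-pricing",  # identifier for this divergent line
    "needsMoreThoughts": False,
}
```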
Q: How do I know when to stop the loop?
A: Examine structuredContent.should_continue. When it is false, the process has reached its conclusion.
Q: What's the difference from the original TypeScript version?
A: The Python/Agno implementation replaces a passive state-tracker with an active multi-agent system, adds research capabilities, uses Pydantic validation, and supports multiple LLM providers.
English | 简体中文
This project implements an advanced sequential thinking process using a Multi-Agent System (MAS) built with the Agno framework and served via MCP. It represents a significant evolution from simpler state-tracking approaches by leveraging coordinated, specialized agents for deeper analysis and problem decomposition.
This is an MCP server - not a standalone application. It runs as a background service that extends your LLM client (like Claude Desktop) with sophisticated sequential thinking capabilities. The server provides a sequentialthinking tool that processes thoughts through multiple specialized AI agents, each examining the problem from a different cognitive angle.
The system employs 6 specialized thinking agents, each focused on a distinct cognitive perspective:
The system uses AI-driven complexity analysis to determine the optimal thinking sequence:
`full_exploration` is mandatory for all requests. The AI analyzer still evaluates:
Key Insights:
4 out of 6 agents are equipped with web research capabilities via ExaTools:
Research is optional - requires EXA_API_KEY environment variable. The system works perfectly without it, using pure reasoning capabilities.
This Python/Agno implementation marks a fundamental shift from the original TypeScript version:
| Feature/Aspect | Python/Agno Version (Current) | TypeScript Version (Original) |
|---|---|---|
| Architecture | Multi-Agent System (MAS); Active processing by a team of agents. | Single Class State Tracker; Simple logging/storing. |
| Intelligence | Distributed Agent Logic; Embedded in specialized agents & Coordinator. | External LLM Only; No internal intelligence. |
| Processing | Active Analysis & Synthesis; Agents act on the thought. | Passive Logging; Merely recorded the thought. |
| Frameworks | Agno (MAS) + FastMCP (Server); Uses dedicated MAS library. | MCP SDK only. |
| Coordination | Explicit Team Coordination Logic (Team in coordinate mode). | None; No coordination concept. |
| Validation | Pydantic Schema Validation; Robust data validation. | Basic Type Checks; Less reliable. |
| External Tools | Integrated (Exa via Researcher); Can perform research tasks. | None. |
| Logging | Structured Python Logging (File + Console); Configurable. | Console Logging with Chalk; Basic. |
| Language & Ecosystem | Python; Leverages Python AI/ML ecosystem. | TypeScript/Node.js. |
In essence, the system evolved from a passive thought recorder to an active thought processor powered by a collaborative team of AI agents.
Typical workflow:
1. The client calls the `sequentialthinking` tool to define the problem and initiate the process.
2. Each subsequent call invokes the `sequentialthinking` tool with the current thought, structured according to the `ThoughtData` model.
3. The server processes every thought through the `full_exploration` multi-step sequence.

High Token Usage: Due to the Multi-Agent System architecture, this tool consumes significantly more tokens than single-agent alternatives or the previous TypeScript version. Each `sequentialthinking` call invokes multiple specialized agents in parallel, leading to substantially higher token usage (potentially 5-10x more than simpler sequential approaches), but provides correspondingly deeper and more comprehensive analysis.
The server exposes a single MCP tool, `sequentialthinking`, that processes sequential thoughts:
{
thought: string, // One focused reasoning step
thoughtNumber: number, // 1-based step index; increment each call
totalThoughts: number, // Planned number of steps
nextThoughtNeeded: boolean, // true for intermediate steps, false on final step
isRevision: boolean, // true only when revising earlier conclusions
branchFromThought?: number, // Set with branchId to branch from a prior step
branchId?: string, // Branch identifier (required when branching)
needsMoreThoughts: boolean // true only when extending beyond totalThoughts
}
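A minimal first call to the tool could use a payload like this sketch (shape only; the actual invocation goes through your MCP client, and the thought text is illustrative):

```python
# Step 1 of a planned 5-step sequence: intermediate step, no revision,
# no branch, no extension beyond the planned total.
first_call = {
    "thought": "Define the problem: why is churn rising this quarter?",
    "thoughtNumber": 1,
    "totalThoughts": 5,
    "nextThoughtNeeded": True,   # true for every step except the final one
    "isRevision": False,
    "needsMoreThoughts": False,  # only true when extending past totalThoughts
}
```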
The tool returns both:
- `content`: human-readable synthesis text
- `structuredContent`: machine-readable loop control fields

{
should_continue: boolean, // Canonical continuation signal
  next_thought_number?: number,     // Recommended next thoughtNumber
stop_reason: string, // Why to continue/stop/retry
current_thought_number: number,
total_thoughts: number,
next_call_arguments?: { // Suggested next-call arguments when applicable
thoughtNumber: number,
totalThoughts: number,
nextThoughtNeeded: boolean,
needsMoreThoughts: boolean
},
parameter_usage: Record<string, string>
}
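The client-side loop driven by `should_continue` can be sketched as follows. The MCP call is stubbed out here (`fake_call_tool` is a placeholder, not a real API); in a real client you would substitute your MCP client's tool-call method:

```python
# Stub that mimics the structuredContent loop-control fields described above.
def fake_call_tool(args: dict) -> dict:
    n = args["thoughtNumber"]
    done = n >= args["totalThoughts"]
    return {
        "structuredContent": {
            "should_continue": not done,
            "next_thought_number": None if done else n + 1,
            "stop_reason": "complete" if done else "more steps planned",
        }
    }

args = {"thought": "step", "thoughtNumber": 1, "totalThoughts": 3,
        "nextThoughtNeeded": True, "isRevision": False,
        "needsMoreThoughts": False}

while True:
    sc = fake_call_tool(args)["structuredContent"]
    if not sc["should_continue"]:   # canonical continuation signal
        break
    # Build the next request from the server's suggestions.
    args["thoughtNumber"] = sc["next_thought_number"]
    args["nextThoughtNeeded"] = args["thoughtNumber"] < args["totalThoughts"]

print(args["thoughtNumber"])  # → 3, the final step
```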
Loop guidance:
- Check `structuredContent.should_continue`.
- Keep calling `sequentialthinking` until `should_continue` is false.
- To revise an earlier conclusion, set `isRevision=true`.
- Use `structuredContent.next_thought_number` and `next_call_arguments` when building the next request.

Prerequisites:
- An API key for your chosen provider: `DEEPSEEK_API_KEY` (default, recommended), `GROQ_API_KEY`, `OPENROUTER_API_KEY`, `GITHUB_TOKEN`, or `ANTHROPIC_API_KEY`
- Optional: `EXA_API_KEY` for web research capabilities
- `uv` package manager (recommended) or pip

Install via Smithery:

npx -y @smithery/cli install @FradSer/mcp-server-mas-sequential-thinking --client claude
# Clone the repository
git clone https://github.com/FradSer/mcp-server-mas-sequential-thinking.git
cd mcp-server-mas-sequential-thinking
# Install with uv (recommended)
uv pip install .
# Or with pip
pip install .
Add to your MCP client configuration:
{
"mcpServers": {
"sequential-thinking": {
"command": "mcp-server-mas-sequential-thinking",
"env": {
"LLM_PROVIDER": "deepseek",
"DEEPSEEK_API_KEY": "your_api_key",
"EXA_API_KEY": "your_exa_key_optional"
}
}
}
}
Create a .env file or set these variables:
# LLM Provider (required)
LLM_PROVIDER="deepseek" # deepseek, groq, openrouter, github, anthropic, ollama
DEEPSEEK_API_KEY="sk-..."
# Optional: Enhanced/Standard Model Selection
# DEEPSEEK_ENHANCED_MODEL_ID="deepseek-chat" # For synthesis
# DEEPSEEK_STANDARD_MODEL_ID="deepseek-chat" # For other agents
# Optional: Web Research (enables ExaTools)
# EXA_API_KEY="your_exa_api_key"
# Optional: Custom endpoint
# LLM_BASE_URL="https://custom-endpoint.com"
# Groq with different models
GROQ_ENHANCED_MODEL_ID="openai/gpt-oss-120b"
GROQ_STANDARD_MODEL_ID="openai/gpt-oss-20b"
# Anthropic with Claude models
ANTHROPIC_ENHANCED_MODEL_ID="claude-3-5-sonnet-20241022"
ANTHROPIC_STANDARD_MODEL_ID="claude-3-5-haiku-20241022"
# GitHub Models
GITHUB_ENHANCED_MODEL_ID="gpt-4o"
GITHUB_STANDARD_MODEL_ID="gpt-4o-mini"
Once installed and configured in your MCP client, the `sequentialthinking` tool becomes available automatically.

Run the server manually for testing:
# Using installed script
mcp-server-mas-sequential-thinking
# Using uv
uv run mcp-server-mas-sequential-thinking
# Using Python
python src/mcp_server_mas_sequential_thinking/main.py
# Clone repository
git clone https://github.com/FradSer/mcp-server-mas-sequential-thinking.git
cd mcp-server-mas-sequential-thinking
# Create virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install with dev dependencies
uv pip install -e ".[dev]"
# Format and lint
uv run ruff check . --fix
uv run ruff format .
uv run mypy .
# Run tests (when available)
uv run pytest
npx @modelcontextprotocol/inspector uv run mcp-server-mas-sequential-thinking
Open http://127.0.0.1:6274/ and test the sequentialthinking tool.
mcp-server-mas-sequential-thinking/
├── src/mcp_server_mas_sequential_thinking/
│ ├── main.py # MCP server entry point
│ ├── processors/
│ │ ├── multi_thinking_core.py # 6 thinking agents definition
│ │ └── multi_thinking_processor.py # Sequential processing logic
│ ├── routing/
│ │ ├── ai_complexity_analyzer.py # AI-powered analysis
│ │ └── multi_thinking_router.py # Intelligent routing
│ ├── services/
│ │ ├── server_core.py # ThoughtProcessor implementation
│ │ ├── workflow_executor.py
│ │ └── context_builder.py
│ └── config/
│ ├── modernized_config.py # Provider strategies
│ └── constants.py # System constants
├── pyproject.toml # Project configuration
└── README.md # This file
See CHANGELOG.md for version history.
Contributions are welcome! Please ensure:
This project is licensed under the MIT License - see the LICENSE file for details.
Note: This is an MCP server, designed to work with MCP-compatible clients like Claude Desktop. It is not a standalone chat application.
Alternatively, register the server with the Claude Code CLI:

claude mcp add sequential-thinking mcp-server-mas-sequential-thinking