by Rai220
Provides a Model Context Protocol server that implements the “think” tool, enabling agents to pause, record explicit thoughts, and improve multi-step reasoning without altering the environment.
"mcpServers": {
"think-mcp": {
"command": "uvx",
"args": ["think-mcp"],
"enabled": true
}
}
Agents invoke the think tool by sending a payload with a thought string. In advanced mode, additional tools (criticize, plan, search) become available via the --advanced flag; the search tool requires a TAVILY API key. The server depends on the mcp Python package and is started with uvx think-mcp (or uvx think-mcp --advanced).
Q: Do I need a specific model to use the think tool?
A: No, the tool works with any model that can interact via MCP; it adds reasoning capability externally.
Q: How does advanced mode differ from the basic mode?
A: Advanced mode enables additional tools (criticize, plan, search). The search tool requires a TAVILY API key set in the environment.
Q: Can I run Think MCP locally?
A: Yes, install the think-mcp package and start the server with uvx think-mcp (or uvx think-mcp --advanced).
Q: What format should the thought input take?
A: A simple string describing the agent’s current consideration or decision point.
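For illustration, an MCP tools/call request carrying such a thought might look like this (a sketch of the standard MCP request shape; the exact envelope depends on the client and transport):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "think",
    "arguments": {
      "thought": "The user asked for a refund; policy requires checking the purchase date before proceeding."
    }
  }
}
```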
Think MCP is an implementation of an MCP (Model Context Protocol) server that provides a "think" tool for structured reasoning in agentic AI workflows. This project is inspired by the Anthropic engineering article: The "think" tool: Enabling Claude to stop and think in complex tool use situations.
According to the referenced article, adding the think tool can lead to improved evaluation metrics by enabling reasoning capabilities even in models that do not natively possess advanced reasoning skills.
The "think" tool allows an AI agent to pause and record an explicit thought during complex reasoning or multi-step tool use. It does not change the environment or database, but appends the thought to the log, helping the agent process information, backtrack, or comply with detailed policies.
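Conceptually, the tool's behavior can be sketched in a few lines of Python. This is a simplified stand-in, not the actual think-mcp implementation (the real server registers its handler through the mcp package): it only appends to a log and returns an acknowledgement, never touching the environment.

```python
# In-memory log of recorded thoughts; nothing outside this list is modified.
thought_log: list[str] = []

def think(thought: str) -> str:
    """Record an explicit thought without changing any external state."""
    thought_log.append(thought)
    return f"Thought #{len(thought_log)} recorded."
```

Because the handler is side-effect-free with respect to the environment, the agent can call it as often as needed between real tool calls to reason, backtrack, or check a policy.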
This approach is especially useful when an agent must analyze tool outputs before acting, follow detailed policy guidelines, or make sequential decisions where each step builds on the previous ones.
Add this MCP server to your favorite agent.
"mcpServers": {
"think-mcp": {
"command": "uvx",
"args": ["think-mcp"],
"enabled": true
}
}
The "think" tool is defined as:
thought
(string) — A thought to think about.Adds aditional tools for your agent:
"mcpServers": {
"think-mcp": {
"command": "uvx",
"args": ["think-mcp", "--advanced"],
"enabled": true,
"env": {
"TAVILY_API_KEY": ... YOUR TAVILY API KEY HERE ...
}
}
}
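For reference, the basic think tool's definition corresponds to a JSON schema along these lines (a sketch modeled on the tool description in the Anthropic article, not taken verbatim from this project):

```json
{
  "name": "think",
  "description": "Use the tool to think about something. It will not obtain new information or change anything, but just appends the thought to the log.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "thought": {
        "type": "string",
        "description": "A thought to think about."
      }
    },
    "required": ["thought"]
  }
}
```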
MIT License — see LICENSE
Explore related MCPs that share similar capabilities and solve comparable challenges.
by modelcontextprotocol
An MCP server implementation that provides a tool for dynamic and reflective problem-solving through a structured thinking process.
by danny-avila
Provides a self‑hosted ChatGPT‑style interface supporting numerous AI models, agents, code interpreter, image generation, multimodal interactions, and secure multi‑user authentication.
by block
Automates engineering tasks on local machines, executing code, building projects, debugging, orchestrating workflows, and interacting with external APIs using any LLM.
by RooCodeInc
Provides an autonomous AI coding partner inside the editor that can understand natural language, manipulate files, run commands, browse the web, and be customized via modes and instructions.
by pydantic
A Python framework that enables seamless integration of Pydantic validation with large language models, providing type‑safe agent construction, dependency injection, and structured output handling.
by lastmile-ai
Build effective agents using Model Context Protocol and simple, composable workflow patterns.
by mcp-use
A Python SDK that simplifies interaction with MCP servers and enables developers to create custom agents with tool‑calling capabilities.
by nanbingxyz
A cross‑platform desktop AI assistant that connects to major LLM providers, supports a local knowledge base, and enables tool integration via MCP servers.
by gptme
Provides a personal AI assistant that runs directly in the terminal, capable of executing code, manipulating files, browsing the web, using vision, and interfacing with various LLM providers.