A desktop‑optimized AI chatbot that connects to any LLM, supports multi‑modal inputs (audio, PDF, images, text files), provides real‑time web search via Tavily or a local browser, and keeps all user data stored locally for privacy.
ChatWise is a fast, privacy‑first AI chatbot that works with any language model—GPT‑4, Claude, Gemini, and others. It runs as a desktop‑focused web app, offering multi‑modal conversations, integrated web search, and seamless tool integration through the Model Context Protocol (MCP).
Q: Do I need an OpenAI API key? A: Only if you want to use OpenAI models. ChatWise works with many providers; you supply the relevant key for the model you choose.
Q: Is my chat data stored on the cloud? A: No. All conversation history is kept locally on your device unless you explicitly export it.
Q: Can I use ChatWise offline? A: You can compose messages offline, but generating LLM responses requires an internet connection to reach the provider's API.
Q: How does the web‑search feature work? A: It calls Tavily’s Search API or your local browser to retrieve current information and inserts the results into the conversation.
Q: What is MCP and why is it useful? A: MCP enables ChatWise to call external services (Notion, Google Sheets, browsers, etc.) directly from the chat UI, extending the assistant’s capabilities.
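Under the hood, MCP messages are JSON-RPC 2.0: a client like ChatWise invokes a server-side tool with a `tools/call` request. A minimal sketch of constructing such a request follows; the tool name `search_pages` and its arguments are hypothetical, standing in for whatever a Notion or Google Sheets MCP server actually exposes.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, per the MCP spec."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical tool on a Notion MCP server
msg = make_tool_call(1, "search_pages", {"query": "meeting notes"})
print(msg)
```

The server replies with a matching JSON-RPC response whose result the chat UI renders back into the conversation.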
Any LLM
ChatWise supports any LLM, including GPT-4, Claude, Gemini, and more.
Performance
ChatWise is built with performance in mind and optimized for the desktop experience.
Privacy
All your data is stored locally, and never leaves your device (except for sending chat requests to your LLM provider).
Simplicity
Simple yet powerful, ChatWise is designed to be easy to use, without the bloat.
Multi-modal
Chat with audio, PDF, images, text files, and more.
Web search
Search the web with Tavily or local browsers for free.
Render HTML/React/Charts/Documents
Let the AI use tools such as Notion, Google Sheets, browsers, and more
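The web-search feature mentioned above queries Tavily's Search API over HTTP. A rough sketch of building such a request body is below; the endpoint and field names are assumptions based on Tavily's public documentation, the API key is a placeholder, and no network call is made.

```python
import json

# Assumed endpoint from Tavily's public docs
TAVILY_ENDPOINT = "https://api.tavily.com/search"

def build_search_payload(api_key: str, query: str, max_results: int = 5) -> str:
    """Assemble the JSON body for a Tavily search request (field names assumed)."""
    payload = {
        "api_key": api_key,
        "query": query,
        "max_results": max_results,
    }
    return json.dumps(payload)

# Placeholder key; a real client would POST this body to TAVILY_ENDPOINT
body = build_search_payload("tvly-XXXX", "latest MCP spec changes")
print(body)
```

The returned results are then inserted into the conversation as context for the model's reply.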
Fix: use the Responses API for OpenAI models when reasoning summaries are enabled (required to display them)
Explore related MCPs that share similar capabilities and solve comparable challenges
by modelcontextprotocol
An MCP server implementation that provides a tool for dynamic and reflective problem-solving through a structured thinking process.
by danny-avila
Provides a self‑hosted ChatGPT‑style interface supporting numerous AI models, agents, code interpreter, image generation, multimodal interactions, and secure multi‑user authentication.
by block
Automates engineering tasks on local machines, executing code, building projects, debugging, orchestrating workflows, and interacting with external APIs using any LLM.
by RooCodeInc
Provides an autonomous AI coding partner inside the editor that can understand natural language, manipulate files, run commands, browse the web, and be customized via modes and instructions.
by pydantic
A Python framework that enables seamless integration of Pydantic validation with large language models, providing type‑safe agent construction, dependency injection, and structured output handling.
by lastmile-ai
Build effective agents using Model Context Protocol and simple, composable workflow patterns.
by mcp-use
A Python SDK that simplifies interaction with MCP servers and enables developers to create custom agents with tool‑calling capabilities.
by nanbingxyz
A cross‑platform desktop AI assistant that connects to major LLM providers, supports a local knowledge base, and enables tool integration via MCP servers.
by gptme
Provides a personal AI assistant that runs directly in the terminal, capable of executing code, manipulating files, browsing the web, using vision, and interfacing with various LLM providers.