by ggozad
Interact with locally hosted Ollama models via an intuitive terminal UI with persistent chat sessions, model customization, and tool integration.
oterm provides a terminal-based interface to Ollama models, letting users chat, customize system prompts, adjust parameters, and leverage MCP tools directly from the command line.
Install and run quickly with `uvx oterm` (or install via Homebrew). Launch `oterm` in a supported terminal, create or select a chat session, choose a model, optionally edit the system prompt and parameters, then start typing. Tools and image inputs can be invoked as needed.
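The quick start above boils down to a couple of commands (this assumes `uv` or Homebrew is already installed, and that an Ollama server with at least one pulled model is running locally):

```shell
# Run oterm directly with uv (fetches the published package on first use)
uvx oterm

# Or install via Homebrew, then launch
brew install oterm
oterm
```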
Q: Do I need an Ollama server running? A: Yes. Ollama must be installed and models pulled locally; oterm communicates with the local Ollama instance.
Q: Which terminals are supported? A: Any standard terminal emulator on Linux, macOS, or Windows.
Q: How are chat histories persisted? A: They are saved in a SQLite database within the user’s config directory.
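A minimal sketch of how a chat-history database like this could be inspected with Python's `sqlite3` module. The `chat` table name, its columns, and the seeded row below are illustrative assumptions for the demo, not oterm's actual schema; to explore the real file, point `db_path` at the database in your config directory.

```python
import sqlite3

def list_chats(db_path=":memory:"):
    """Open a chat-history database and return (name, model) pairs.

    Uses an in-memory database seeded with one hypothetical row so the
    sketch is self-contained and runnable.
    """
    conn = sqlite3.connect(db_path)
    # Hypothetical schema for demonstration purposes only.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS chat (id INTEGER PRIMARY KEY, name TEXT, model TEXT)"
    )
    conn.execute("INSERT INTO chat (name, model) VALUES (?, ?)", ("demo", "llama3"))
    rows = conn.execute("SELECT name, model FROM chat").fetchall()
    conn.close()
    return rows
```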
Q: Can I use custom models? A: Absolutely, any model available to Ollama can be selected and customized.
Q: How do I install on macOS? A: Via Homebrew (`brew install oterm`) or the generic `uvx oterm` command.
oterm, the terminal client for Ollama.
Run `uvx oterm` in your terminal.
See Installation for more details.
oterm is now part of Homebrew!
The splash screen animation that greets users when they start oterm.
A view of the chat interface, showcasing the conversation between the user and the model.
The model selection screen, allowing users to choose and customize available models.
oterm using the git MCP server to access its own repo.
The image selection interface, demonstrating how users can include images in their conversations.
oterm supports multiple themes, allowing users to customize the appearance of the interface.
This project is licensed under the MIT License.