by gptme
Provides a personal AI assistant that runs directly in the terminal, capable of executing code, manipulating files, browsing the web, using vision, and interfacing with various LLM providers.
Gptme is a terminal‑based AI assistant that augments large language models with practical tools such as a shell executor, file patcher, web browser, and vision capabilities. It enables developers and knowledge workers to interact with LLMs as if they had direct access to the local computer environment.
Install with pipx: `pipx install gptme` (requires Python 3.10+). Run `gptme` to start an interactive session, or pass a prompt: `gptme "write a Python script that fetches Bitcoin price"`. Inside a session, slash commands (`/undo`, `/tools`, `/model`, etc.) control the conversation. Command-line options let you pick a model (`-m openai/gpt-4`), set a workspace (`-w path`), enable or disable specific tools (`-t shell,patch,browser`), or run in non-interactive mode for scripting. Local models are supported via `llama.cpp`.

Q: Do I need an internet connection?
A: Only when using remote LLM providers or the web-browser tool. Local models (`llama.cpp`) run entirely offline.
Q: How is privacy handled?
A: When using local models, no data leaves the machine. Remote providers follow their own policies; you can restrict usage to local models for full privacy.

Q: Can I extend Gptme with new tools?
A: Yes. Tools are modular Python classes; adding a new tool involves implementing the required interface and registering it.

Q: What platforms are supported?
A: Works on macOS, Linux, and Windows (via WSL or native Python). The web UI is browser-agnostic.
Q: How do I run Gptme as a server?
A: Use the built‑in server command (gptme --serve
) to start the REST API and optional web UI.
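The tool-extension answer above can be sketched in Python. Note that this is an illustrative stand-in, not gptme's actual API: the `ToolSpec` dataclass, field names, and `register_tool` registry below are assumptions made for the sketch; consult the gptme documentation for the real tool interface.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical registry; gptme's real tool-registration mechanism may differ.
TOOL_REGISTRY: dict[str, "ToolSpec"] = {}

@dataclass
class ToolSpec:
    """Minimal stand-in for a gptme-style tool specification."""
    name: str
    desc: str
    execute: Callable[[str], str]  # takes the tool's input block, returns output
    instructions: str = ""         # guidance injected into the system prompt

def register_tool(spec: ToolSpec) -> None:
    """Make the tool available to the assistant by name."""
    TOOL_REGISTRY[spec.name] = spec

def run_wordcount(text: str) -> str:
    """Example tool body: count words in the given text block."""
    return f"{len(text.split())} words"

register_tool(ToolSpec(
    name="wordcount",
    desc="Count words in a block of text",
    execute=run_wordcount,
    instructions="Use the wordcount tool to count words in text.",
))

# Dispatch the way an assistant runtime might:
print(TOOL_REGISTRY["wordcount"].execute("one two three"))  # prints "3 words"
```

The point of the sketch is the shape of the contract: a tool bundles a name, a description the model sees, prompt instructions, and an execute callable, and registration is what exposes it to the assistant.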
> [!NOTE]
> These demos are very out of date (2023) and do not reflect the latest capabilities.
You can find more Demos and Examples in the documentation.
Local models are supported via `llama.cpp`. Prompts can be piped via `stdin` or passed as arguments. Set `GPTME_TOOL_SOUNDS=true` to enable tool sounds.
Code quality is maintained with `mypy`, `ruff`, and `pyupgrade`.

Install with pipx:
```shell
# requires Python 3.10+
pipx install gptme
```

Now, to get started, run:

```shell
gptme
```
Here are some examples:

```shell
gptme 'write an impressive and colorful particle effect using three.js to particles.html'
gptme 'render mandelbrot set to mandelbrot.png'
gptme 'suggest improvements to my vimrc'
gptme 'convert to h265 and adjust the volume' video.mp4
git diff | gptme 'complete the TODOs in this diff'
make test | gptme 'fix the failing tests'
```
For more, see the Getting Started guide and the Examples in the documentation.
```
$ gptme --help
Usage: gptme [OPTIONS] [PROMPTS]...

  gptme is a chat-CLI for LLMs, empowering them with tools to run shell
  commands, execute code, read and manipulate files, and more.

  If PROMPTS are provided, a new conversation will be started with it. PROMPTS
  can be chained with the '-' separator.

  The interface provides user commands that can be used to interact with the
  system.

  Available commands:
    /undo         Undo the last action
    /log          Show the conversation log
    /tools        Show available tools
    /model        List or switch models
    /edit         Edit the conversation in your editor
    /rename       Rename the conversation
    /fork         Copy the conversation using a new name
    /summarize    Summarize the conversation
    /replay       Rerun tools in the conversation, won't store output
    /impersonate  Impersonate the assistant
    /tokens       Show the number of tokens used
    /export       Export conversation as HTML
    /commit       Ask assistant to git commit
    /setup        Setup gptme with completions and configuration
    /help         Show this help message
    /exit         Exit the program

  Keyboard shortcuts:
    Ctrl+X Ctrl+E  Edit prompt in your editor
    Ctrl+J         Insert a new line without executing the prompt

Options:
  --name TEXT            Name of conversation. Defaults to generating a random
                         name.
  -m, --model TEXT       Model to use, e.g. openai/gpt-5, anthropic/claude-
                         sonnet-4-20250514. If only provider given then a
                         default is used.
  -w, --workspace TEXT   Path to workspace directory. Pass '@log' to create a
                         workspace in the log directory.
  --agent-path TEXT      Path to agent workspace directory.
  -r, --resume           Load most recent conversation.
  -y, --no-confirm       Skip all confirmation prompts.
  -n, --non-interactive  Non-interactive mode. Implies --no-confirm.
  --system TEXT          System prompt. Options: 'full', 'short', or something
                         custom.
  -t, --tools TEXT       Tools to allow as comma-separated list. Available:
                         append, browser, chats, choice, computer, gh,
                         ipython, morph, patch, rag, read, save, screenshot,
                         shell, subagent, tmux, tts, vision, youtube.
  --tool-format TEXT     Tool format to use. Options: markdown, xml, tool
  --no-stream            Don't stream responses
  --show-hidden          Show hidden system messages.
  -v, --verbose          Show verbose output.
  --version              Show version and configuration information
  --help                 Show this message and exit.
```