by g0t4
Provides a Model Context Protocol server that enables LLMs to execute arbitrary shell commands and receive stdout and stderr, with optional stdin for interactive scripts and file creation.
Enables large language models to request execution of shell commands through the `run_command` tool, returning the command's output and error streams. Designed for integration with Claude Desktop, Groq Desktop, and other MCP clients.
How to use:

- Build with `npm run build` (if using source), or run directly via the `npx` command.
- Call the `run_command` tool with the desired command, optionally providing `stdin`.

Key features:

- `run_command` tool supporting any shell command (e.g., `hostname`, `ls -al`, `python`).
- Returns `STDOUT` and `STDERR` as text.
- Optional `stdin` allows piping code or creating files via commands like `cat >> file.txt`.
- Launchable via `npx`.
- `mcpo` bridge for web UI integration.
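For example, a minimal `tools/call` request sketch; the `command` argument name follows the tool description above, and the exact framing may vary by client:

```json
{
  "method": "tools/call",
  "params": {
    "name": "run_command",
    "arguments": {
      "command": "hostname"
    }
  }
}
```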
Q: Do I need root privileges?
A: No. Run the server as a regular user and avoid `sudo` for security.

Q: How does logging work?
A: Logs are written to STDERR and captured by the client (e.g., Claude Desktop). Use `--verbose` to increase detail.

Q: Can I run the server over HTTP?
A: Yes, by using `mcpo` to expose an OpenAPI-compatible endpoint.

Q: Is there a way to debug communication?
A: Use the MCP Inspector via `npm run inspector` for a browser-based debugging UI.
Q: What models work best with this server?
A: Any model that can emit tool calls, such as Claude Sonnet 3.5, OpenHands LM, DevStral, or Qwen2.5‑Coder.
Tools are for LLMs to request. Claude Sonnet 3.5 intelligently uses `run_command`, and initial testing shows promising results with Groq Desktop's MCP support and llama4 models.
Currently, just one command to rule them all!
- `run_command` - run a command, e.g. `hostname` or `ls -al` or `echo "hello world"`, etc.
  - Returns `STDOUT` and `STDERR` as text.
  - Optional `stdin` parameter means your LLM can:
    - Pass code to interpreters, like `fish`, `bash`, `zsh`, `python`.
    - Create files with `cat >> foo/bar.txt` from the text in `stdin` (see the sketch below).
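For instance, a hypothetical arguments payload that pipes `stdin` into a file-creating command; the argument names are assumed from the description above:

```json
{
  "name": "run_command",
  "arguments": {
    "command": "cat >> foo/bar.txt",
    "stdin": "hello from stdin\n"
  }
}
```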
> [!WARNING]
> Be careful what you ask this server to run! In the Claude Desktop app, use `Approve Once` (not `Allow for This Chat`) so you can review each command. Use `Deny` if you don't trust the command. Permissions are dictated by the user that runs the server. DO NOT run with `sudo`.
Prompts are for users to include in chat history, i.e. via Zed's slash commands (in its AI Chat panel).

- `run_command` - generate a prompt message with the command output

Install dependencies:
```
npm install
```

Build the server:

```
npm run build
```

For development with auto-rebuild:

```
npm run watch
```
To use with Claude Desktop, add the server config:
- On MacOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- On Windows: `%APPDATA%/Claude/claude_desktop_config.json`
- Groq Desktop (beta, macOS) uses `~/Library/Application Support/groq-desktop-app/settings.json`

Published to npm as `mcp-server-commands` using this workflow.
```json
{
  "mcpServers": {
    "mcp-server-commands": {
      "command": "npx",
      "args": ["mcp-server-commands"]
    }
  }
}
```
Make sure to run `npm run build`:
```json
{
  "mcpServers": {
    "mcp-server-commands": {
      // works b/c of shebang in index.js
      "command": "/path/to/mcp-server-commands/build/index.js"
    }
  }
}
```
As always, review each command; don't approve `run_command` requests without double checking. Some local models to try with Ollama:

```sh
# NOTE: make sure to review variants and sizes, so the model fits in your VRAM to perform well!
# Probably the best so far is OpenHands LM:
# https://www.all-hands.dev/blog/introducing-openhands-lm-32b----a-strong-open-coding-agent-model
ollama pull https://huggingface.co/lmstudio-community/openhands-lm-32b-v0.1-GGUF

# https://ollama.com/library/devstral
ollama pull devstral

# Qwen2.5-Coder has tool use but you have to coax it
ollama pull qwen2.5-coder
```
The server is implemented with the STDIO transport. For HTTP, use `mcpo` for an OpenAPI-compatible web server interface. This works with Open-WebUI:

```sh
uvx mcpo --port 3010 --api-key "supersecret" -- npx mcp-server-commands
# uvx runs mcpo => mcpo runs npx => npx runs mcp-server-commands
# then, mcpo bridges STDIO <=> HTTP
```
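A hypothetical request against the bridged endpoint, assuming mcpo's default of exposing each tool at its own path with the API key as a Bearer token; check the generated docs at `http://localhost:3010/docs` for the actual routes:

```sh
# hypothetical route; verify against /docs
curl -X POST http://localhost:3010/run_command \
  -H "Authorization: Bearer supersecret" \
  -H "Content-Type: application/json" \
  -d '{"command": "hostname"}'
```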
> [!WARNING]
> I briefly used `mcpo` with `open-webui`; make sure to vet it for security concerns.
The Claude Desktop app writes logs to `~/Library/Logs/Claude/mcp-server-mcp-server-commands.log`.
By default, only important messages are logged (i.e. errors).
If you want to see more messages, add `--verbose` to the `args` when configuring the server, as in the sketch below.
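For example, a minimal config sketch (same shape as the install config above) with verbose logging enabled:

```json
{
  "mcpServers": {
    "mcp-server-commands": {
      "command": "npx",
      "args": ["mcp-server-commands", "--verbose"]
    }
  }
}
```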
By the way, logs are written to STDERR because that is what Claude Desktop routes to the log files. In the future, I expect well-formatted log messages to be written over the STDIO transport to the MCP client (note: not the Claude Desktop app).
Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the MCP Inspector, which is available as a package script:
```
npm run inspector
```
The Inspector will provide a URL to access debugging tools in your browser.
{ "mcpServers": { "mcp-server-commands": { "command": "npx", "args": [ "mcp-server-commands" ] } } }