by bazinga012
Execute Python code within a specified Conda, virtualenv, or UV environment, supporting incremental code generation and dynamic dependency handling for LLMs.
What is MCP Code Executor about?

MCP Code Executor provides an MCP server that lets large language models run Python snippets or full scripts inside a user-defined Python environment (Conda, standard virtualenv, or UV virtualenv). It handles dependency installation, environment checks, and incremental file building to bypass token limits.

How to use MCP Code Executor?

1. Run the server via npx (or clone the repository and build it locally with Node).
2. Set the environment variables (CODE_STORAGE_DIR, ENV_TYPE, and the env-specific variable such as CONDA_ENV_NAME).
3. Call the tools (execute_code, initialize_code_file, append_to_code_file, etc.) to generate, modify, and run code.

Key features of MCP Code Executor

- Incremental file building (initialize_code_file, append_to_code_file, read_code_file) for large scripts.
- Dependency installation into the configured environment (install_dependencies).
- Dynamic environment reconfiguration (configure_environment).
FAQ from the MCP Code Executor

Do I need to build the project myself? Run npm install and npm run build if you run it locally with Node. When using npx, the build step is handled by the published package.

Can I change the Python environment at runtime? Yes. Use the configure_environment tool to switch ENV_TYPE and the corresponding environment identifier.

What if my code needs packages that are not installed? Call install_dependencies first; the server will install the packages into the configured environment.

Where are generated files stored? In CODE_STORAGE_DIR; each file gets a unique identifier to avoid collisions.

The MCP Code Executor is an MCP server that allows LLMs to execute Python code within a specified Python environment. This enables LLMs to run code with access to libraries and dependencies defined in the environment. It also supports incremental code generation for handling large code blocks that may exceed token limits.

To install and build from source:
git clone https://github.com/bazinga012/mcp_code_executor.git
cd mcp_code_executor
npm install
npm run build
To configure the MCP Code Executor server, add the following to your MCP servers configuration file:
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "node",
      "args": [
        "/path/to/mcp_code_executor/build/index.js"
      ],
      "env": {
        "CODE_STORAGE_DIR": "/path/to/code/storage",
        "ENV_TYPE": "conda",
        "CONDA_ENV_NAME": "your-conda-env"
      }
    }
  }
}
Alternatively, to run the server in a Docker container:

{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "mcp-code-executor"
      ]
    }
  }
}
Note: The Dockerfile has been tested with the venv-uv environment type only. Other environment types may require additional configuration.
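Whichever ENV_TYPE is chosen, the server ultimately has to map it to a concrete interpreter invocation. The helper below is only a plausible sketch of that mapping (the function name and structure are assumptions, not the server's actual code):

```python
# Hypothetical sketch: map an environment config to the command that
# would run a Python file inside that environment. Illustrative only.
import os

def build_run_command(env: dict, script_path: str) -> list[str]:
    env_type = env["ENV_TYPE"]
    if env_type == "conda":
        # Execute inside the named Conda environment.
        return ["conda", "run", "-n", env["CONDA_ENV_NAME"], "python", script_path]
    if env_type == "venv":
        # Use the virtualenv's own interpreter directly.
        return [os.path.join(env["VENV_PATH"], "bin", "python"), script_path]
    if env_type == "venv-uv":
        # UV-managed venvs share the standard venv layout.
        return [os.path.join(env["UV_VENV_PATH"], "bin", "python"), script_path]
    raise ValueError(f"Unknown ENV_TYPE: {env_type}")
```

On Windows the venv interpreter lives under Scripts\python.exe rather than bin/python, so a real implementation would branch on the platform as well.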
Environment variables:

- CODE_STORAGE_DIR: Directory where the generated code will be stored

For Conda:

- ENV_TYPE: Set to conda
- CONDA_ENV_NAME: Name of the Conda environment to use

For Standard Virtualenv:

- ENV_TYPE: Set to venv
- VENV_PATH: Path to the virtualenv directory

For UV Virtualenv:

- ENV_TYPE: Set to venv-uv
- UV_VENV_PATH: Path to the UV virtualenv directory

The MCP Code Executor provides the following tools to LLMs:
execute_code
Executes Python code in the configured environment. Best for short code snippets.
{
"name": "execute_code",
"arguments": {
"code": "import numpy as np\nprint(np.random.rand(3,3))",
"filename": "matrix_gen"
}
}
install_dependencies
Installs Python packages in the environment.
{
"name": "install_dependencies",
"arguments": {
"packages": ["numpy", "pandas", "matplotlib"]
}
}
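Under the hood, installing into a specific environment amounts to invoking pip through that environment's interpreter. The sketch below shows the standard invocation; the helper names are illustrative, not the server's actual implementation:

```python
# Hypothetical sketch of a dependency-install step.
import subprocess

def pip_install_command(python_exe: str, packages: list[str]) -> list[str]:
    # "python -m pip install" targets pip for that specific interpreter,
    # so packages land in the right environment.
    return [python_exe, "-m", "pip", "install", *packages]

def install_dependencies(python_exe: str, packages: list[str]) -> None:
    # check=True raises if pip exits non-zero, surfacing install failures.
    subprocess.run(pip_install_command(python_exe, packages), check=True)
```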
check_installed_packages
Checks if packages are already installed in the environment.
{
"name": "check_installed_packages",
"arguments": {
"packages": ["numpy", "pandas", "non_existent_package"]
}
}
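A package check can be approximated with importlib.metadata, which reports installed distributions for the current interpreter. This is an assumption about how such a check might look, not the server's code; the real tool must run the check inside the configured environment rather than the host interpreter:

```python
# Hypothetical sketch: report which distributions are installed.
from importlib.metadata import PackageNotFoundError, version

def check_installed(packages: list[str]) -> dict[str, bool]:
    result = {}
    for name in packages:
        try:
            version(name)  # raises PackageNotFoundError if absent
            result[name] = True
        except PackageNotFoundError:
            result[name] = False
    return result
```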
configure_environment
Dynamically changes the environment configuration.
{
"name": "configure_environment",
"arguments": {
"type": "conda",
"conda_name": "new_env_name"
}
}
get_environment_config
Gets the current environment configuration.
{
"name": "get_environment_config",
"arguments": {}
}
initialize_code_file
Creates a new Python file with initial content. Use this as the first step for longer code that may exceed token limits.
{
"name": "initialize_code_file",
"arguments": {
"content": "def main():\n print('Hello, world!')\n\nif __name__ == '__main__':\n main()",
"filename": "my_script"
}
}
append_to_code_file
Appends content to an existing Python code file. Use this to add more code to a file created with initialize_code_file.
{
"name": "append_to_code_file",
"arguments": {
"file_path": "/path/to/code/storage/my_script_abc123.py",
"content": "\ndef another_function():\n print('This was appended to the file')\n"
}
}
execute_code_file
Executes an existing Python file. Use this as the final step after building up code with initialize_code_file and append_to_code_file.
{
"name": "execute_code_file",
"arguments": {
"file_path": "/path/to/code/storage/my_script_abc123.py"
}
}
read_code_file
Reads the content of an existing Python code file. Use this to verify the current state of a file before appending more content or executing it.
{
"name": "read_code_file",
"arguments": {
"file_path": "/path/to/code/storage/my_script_abc123.py"
}
}
Once configured, the MCP Code Executor will allow LLMs to execute Python code by generating a file in the specified CODE_STORAGE_DIR
and running it within the configured environment.
LLMs can generate and execute code by referencing this MCP server in their prompts.
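The flow described above (write the code to a uniquely named file under CODE_STORAGE_DIR, then run it with the environment's interpreter) can be sketched as follows. This is an illustration, not the server's source; for simplicity it uses the current interpreter rather than the configured environment:

```python
# Hypothetical sketch of the execute_code flow: persist the snippet
# with a collision-proof name, run it, and capture its output.
import subprocess
import sys
import uuid
from pathlib import Path

def execute_code(code: str, storage_dir: str, filename: str = "snippet") -> str:
    # A unique suffix avoids collisions between generated files.
    path = Path(storage_dir) / f"{filename}_{uuid.uuid4().hex[:8]}.py"
    path.write_text(code)
    result = subprocess.run(
        [sys.executable, str(path)], capture_output=True, text=True, check=True
    )
    return result.stdout
```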
For larger code blocks that might exceed LLM token limits, use the incremental code generation approach:

1. initialize_code_file - Create the file with the initial content
2. append_to_code_file - Add more code in subsequent chunks
3. read_code_file - Verify the current state of the file
4. execute_code_file - Run the completed file
This approach allows LLMs to write complex, multi-part code without running into token limitations.
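The four-step sequence can be mirrored with plain file operations. The sketch below simulates the tool calls locally (the function bodies are illustrative; the real tools run server-side in the configured environment):

```python
# Hypothetical local simulation of the incremental workflow:
# initialize -> append -> read -> execute.
import subprocess
import sys
import uuid
from pathlib import Path

def initialize_code_file(storage_dir: str, content: str, filename: str) -> Path:
    # Unique suffix mirrors the server's collision-avoidance naming.
    path = Path(storage_dir) / f"{filename}_{uuid.uuid4().hex[:8]}.py"
    path.write_text(content)
    return path

def append_to_code_file(path: Path, content: str) -> None:
    with path.open("a") as f:
        f.write(content)

def read_code_file(path: Path) -> str:
    return path.read_text()

def execute_code_file(path: Path) -> str:
    result = subprocess.run(
        [sys.executable, str(path)], capture_output=True, text=True, check=True
    )
    return result.stdout
```

A typical sequence: initialize with a function definition, append the call site, read back to confirm, then execute.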
This package maintains backward compatibility with earlier versions. Users of previous versions who only specified a Conda environment will continue to work without any changes to their configuration.
Contributions are welcome! Please open an issue or submit a pull request.
This project is licensed under the MIT License.
To use the published package directly via npx, the equivalent configuration is:

{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-code-executor"
      ],
      "env": {
        "CODE_STORAGE_DIR": "/path/to/code/storage",
        "ENV_TYPE": "conda",
        "CONDA_ENV_NAME": "your-conda-env"
      }
    }
  }
}