by wilsonchenghy
Enables large language models to query ShaderToy data and generate complex GLSL shaders through MCP tools.
ShaderToy MCP provides Model Context Protocol (MCP) tools that let an LLM retrieve information about any shader on ShaderToy, search the ShaderToy catalogue, and synthesize new, sophisticated GLSL shaders by learning from existing examples.
Install uv. On macOS:
brew install uv
On Windows:
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
and add the uv binary to your PATH.
Clone the project: git clone https://github.com/wilsonchenghy/ShaderToy-MCP.git
Edit claude_desktop_config.json so that it points to the server script and supplies your ShaderToy API key.
Use get_shader_info() or search_shader() in your prompts. Generated shaders follow the ShaderToy mainImage signature, automatically crediting source inspirations.
Q: Do I need a ShaderToy account?
A: Only an API key is required. Obtain it from the ShaderToy developer portal and set it in the SHADERTOY_APP_KEY environment variable.
Q: Can I run the server on Linux?
A: Yes. Install uv for your distribution and follow the same steps as macOS/Windows.
Q: What languages are supported for the generated shader?
A: The output follows ShaderToy's GLSL fragment-shader format, using the void mainImage( out vec4 fragColor, in vec2 fragCoord ) entry point.
Q: How does the LLM get access to the shader data?
A: The LLM calls the MCP tools (get_shader_info, search_shader), which internally invoke the ShaderToy API and return structured results.
Q: Is there a way to limit the number of search results?
A: The search_shader tool accepts optional parameters (e.g., limit) that you can include in your prompt to control output size.
MCP Server for ShaderToy, a website for creating, running, and sharing GLSL shaders (https://www.shadertoy.com/). It connects LLMs such as Claude with ShaderToy through the Model Context Protocol (MCP), allowing the LLM to query and read the site's shader pages and thereby create increasingly complex shaders it normally isn't capable of producing on its own.
Examples of the complex shaders it generates:
Ocean (https://www.shadertoy.com/view/tXs3Wf)
Mountains (https://www.shadertoy.com/view/W3l3Df)
Matrix Digital Rain (https://www.shadertoy.com/view/33l3Df)
On macOS, install uv with
brew install uv
On Windows:
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
and then add it to your PATH:
set Path=C:\Users\<username>\.local\bin;%Path%
Otherwise, installation instructions are on their website: Install uv
Git clone the project with git clone https://github.com/wilsonchenghy/ShaderToy-MCP.git
Go to Claude > Settings > Developer > Edit Config > claude_desktop_config.json and include the following (replace your_actual_api_key with your ShaderToy API key):
{
"mcpServers": {
"ShaderToy_MCP": {
"command": "uv",
"args": [
"run",
"--with",
"mcp[cli]",
"mcp",
"run",
"<path_to_project>/ShaderToy-MCP/src/ShaderToy-MCP/server.py"
],
"env": {
"SHADERTOY_APP_KEY": "your_actual_api_key"
}
}
}
}
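A common pitfall here is malformed JSON: the format does not allow // comments, and a stray one will break Claude's config loading. A quick stdlib check, using the placeholder values from the snippet above, can catch this before restarting Claude:

```python
import json

# Paste your claude_desktop_config.json contents here to verify they parse.
config_text = '''{
  "mcpServers": {
    "ShaderToy_MCP": {
      "command": "uv",
      "args": ["run", "--with", "mcp[cli]", "mcp", "run",
               "<path_to_project>/ShaderToy-MCP/src/ShaderToy-MCP/server.py"],
      "env": {"SHADERTOY_APP_KEY": "your_actual_api_key"}
    }
  }
}'''

# json.loads raises json.JSONDecodeError on malformed input (e.g., // comments).
config = json.loads(config_text)
server = config["mcpServers"]["ShaderToy_MCP"]
assert server["command"] == "uv"
```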
Once the config file has been set, Claude will show a hammer icon for the MCP. Test with the example command to see if it correctly utilizes the MCP tools.
Generate shader code of a {object}, if it is based on someone's work on ShaderToy, credit it, make the code follow the ShaderToy format: void mainImage( out vec4 fragColor, in vec2 fragCoord ) {}
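For reference, the required entry point matches ShaderToy's default new-shader template. A minimal example in that format (iResolution and iTime are ShaderToy built-in uniforms):

```glsl
// Minimal ShaderToy fragment shader in the required format.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord / iResolution.xy;                         // normalize pixel coords to [0,1]
    vec3 col = 0.5 + 0.5 * cos(iTime + uv.xyx + vec3(0.0, 2.0, 4.0)); // animated color gradient
    fragColor = vec4(col, 1.0);
}
```

Any shader the LLM generates must expose this same mainImage signature to run on ShaderToy.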