by hungryrobot1
Enables AI models to create, modify, and execute tools at runtime while maintaining safety through sandboxing, code validation, and comprehensive journaling.
MCP-PIF-CLJS provides a Model Context Protocol (MCP) server written in ClojureScript that lets AI assistants generate new tools on the fly, execute them safely, and keep an auditable log of every change. The server treats code as data, allowing dynamic evolution without restarting.
git clone https://github.com/hungryrobot1/MCP-PIF
cd MCP-PIF
npm install
npx shadow-cljs compile mcp-server
Then configure your MCP client (e.g. Claude Desktop) to launch the compiled mcp-server.js via Node:
{
  "mcpServers": {
    "mcp-pif-cljs": {
      "command": "node",
      "args": ["/full/path/to/MCP-PIF/out/mcp-server.js"]
    }
  }
}
The server exposes a small set of built-in tools (memory-store, memory-retrieve, journal-recent, server-info, meta-evolve, execute-tool, etc.). New tools are created at runtime with the meta-evolve tool and invoked through a generic dispatcher (execute-tool) that bypasses client-side caching of newly created tools: call execute-tool with the tool name and its arguments. State and history can be inspected with journal-recent and server-info. Run ./package-dxt.sh to create a .dxt bundle that can be installed by dragging it onto Claude Desktop.

A JSON-native lambda calculus runtime with metacircular evaluation, designed as an MCP (Model Context Protocol) server. Enables language models to evolve tools dynamically through metaprogramming.
# Build the project
cabal build
# Enable debug mode for detailed evaluation tracing
MCP_DEBUG=1 cabal run mcp-pif
# Debug output (to stderr) shows:
# - Each evaluation step
# - Environment keys at each step
# - Closure creation and application
# - Tool code lookups
// Create a tool
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "evolve",
    "arguments": {
      "name": "square",
      "description": "Squares a number",
      "code": {"lam": "x", "body": {"mul": [{"var": "x"}, {"var": "x"}]}}
    }
  }
}
// Use the tool
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "run",
    "arguments": {
      "tool": "square",
      "input": 7
    }
  }
}
// Returns: 49
// Get help
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "help",
    "arguments": {"category": "lists"}
  }
}
// Returns: Documentation for list primitives
Language models need computational tools for pure computation, but existing systems either provide fixed APIs or unrestricted code execution. In the context of metaprogramming and self-modification, neither is ideal. This project offers a middle ground: a structure for safe, inspectable, evolvable computation.
In the ideal sense, MCP-PIF can be thought of as a generic, metamorphic computer interface into which the language model drops in as an executive function. Here, we provide a simple vocabulary of lambda calculus primitives for access to pure computation only.
MCP-PIF provides a lambda calculus with three metaprogramming primitives:
- quote - Treat code as data (prevent evaluation)
- eval - Execute quoted code dynamically
- code_of - Introspect any tool's source code

This creates a metacircular system where tools can analyze and transform other tools while maintaining deterministic, fuel-bounded execution. It is homoiconic, but constrained.
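As a minimal sketch of the interplay (built only from primitives in the table below; running such a term inline relies on run's support for inline expressions):

// Quoting turns code into data; eval forces it back into computation
{"eval": {"quote": {"add": [2, 3]}}}
// → 5. Without the outer eval, the quoted add term is returned as data, unevaluated.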
It's worth noting quote and eval are not perfectly symmetric:
-- eval cleans the environment:
let cleanEnv = M.filterWithKey (\k _ -> not $ k `elem` ["__tool_name", "__self"]) env
This prevents eval'd code from inheriting the wrong tool context. When eval executes quoted code:
- The environment is cleaned (__tool_name and __self are removed to prevent tool context confusion)
- Registered tool source remains available for lookup (for code_of introspection)
- An __eval_depth counter is added (max depth: 100, prevents infinite eval loops)

Because this framework is built on MCP, it makes significant compromises in purity. Namely, updating the server code is not part of the metaprogramming loop: the list of primitives, the parsing strategies, and the evolution and evaluation processes themselves are not modifiable at runtime.
There are two kinds of tools:
- evolve (effectful, mutates the registry)
- run (pure, functional)

Evolved tools can interact with each other but cannot themselves evolve new tools without access to the protocol-level tool registry. This "simulacrum" implementation prevents unbounded self-modification by isolating the very invariants that enable the rich metaprogramming.
| Category | Primitive | JSON Syntax | Description |
|---|---|---|---|
| Lambda | Variable | `{"var": "x"}` | Variable reference |
| | Function | `{"lam": "x", "body": ...}` | Lambda abstraction |
| | Application | `{"app": {"func": ..., "arg": ...}}` | Function application |
| Arithmetic | Add | `{"add": [a, b]}` | Addition |
| | Subtract | `{"sub": [a, b]}` | Subtraction |
| | Multiply | `{"mul": [a, b]}` | Multiplication |
| | Divide | `{"div": [a, b]}` | Division (errors on 0) |
| | Modulo | `{"mod": [a, b]}` | Modulo (errors on 0) |
| Comparison | Equal | `{"eq": [a, b]}` | Equality test |
| | Less Than | `{"lt": [a, b]}` | Less than |
| | Less Than or Equal | `{"lte": [a, b]}` | Less than or equal |
| | Greater Than | `{"gt": [a, b]}` | Greater than |
| | Greater Than or Equal | `{"gte": [a, b]}` | Greater than or equal |
| Logic | And | `{"and": [a, b]}` | Logical AND (short-circuit) |
| | Or | `{"or": [a, b]}` | Logical OR (short-circuit) |
| | Not | `{"not": a}` | Logical NOT |
| Control | If | `{"if": {"cond": c, "then": t, "else": e}}` | Conditional |
| | Continue | `{"continue": {"input": x}}` | Recursive step |
| Lists | Nil | `{"nil": true}` | Empty list |
| | Cons | `{"cons": {"head": ..., "tail": ...}}` | List construction |
| | Fold | `{"fold": [func, init, list]}` | Universal reducer (see GUIDE.md for pair parameter details) |
| Pairs | Pair | `{"pair": [a, b]}` | Pair construction |
| | First | `{"fst": pair}` | Get first element |
| | Second | `{"snd": pair}` | Get second element |
| Meta | Quote | `{"quote": term}` | Prevent evaluation |
| | Eval | `{"eval": quoted}` | Execute quoted code |
| | Code Of | `{"code_of": "tool_name"}` | Get tool source |
| | Self | `{"self": true}` | Current closure reference (tools only) |
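To illustrate how these primitives compose, here is a sketch of an absolute-value tool built only from syntax in the table (the name abs and its description are illustrative, not part of the shipped toolset):

{
  "name": "evolve",
  "arguments": {
    "name": "abs",
    "description": "Absolute value of a number",
    "code": {
      "lam": "x",
      "body": {
        "if": {
          "cond": {"lt": [{"var": "x"}, 0]},
          "then": {"sub": [0, {"var": "x"}]},
          "else": {"var": "x"}
        }
      }
    }
  }
}

Calling run on this tool with input -3 would return 3.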
Literal values are accepted directly in code and input:

- 42, -17, 3.14 → Integers (floats rounded)
- true, false → Booleans
- "hello", "world" → Strings
- [1, 2, 3] → Converted to cons lists
- null → Unit value

The run tool automatically normalizes inputs to make CLI usage more ergonomic:
"42" → 42"true" → true, "false" → false"{\"x\": 5}" → Parsed as JSON objectThis allows flexible input formats while maintaining type safety during evaluation.
For detailed patterns and examples, see the User Guide.
"square"), inline lambdas, or via code_of(accumulator, item), not two parameters{"self": true} only works inside registered tools (created via evolve), not in inline lambdas passed to runNeed to recurse? Use this decision tree:
Can you structure it with an accumulator?
├─ Yes → Use `continue` (works for any depth)
│ Pattern: take pair [state, accumulator]
│ Base case: return accumulator
│ Recursive: compute new accumulator, continue with [new_state, new_acc]
│
└─ No, need result immediately?
├─ Small input (n < 20) → Use `self`
└─ Large input → Redesign with accumulator or use fold
Examples of each approach appear below: the continue + accumulator pattern (factorial), direct recursion with self (factorial_self), fold for list processing (map), and code analysis with eval + code_of (count_operations).

Tools can use continuation-based recursion for step-by-step execution. Since continue pauses evaluation and returns control to the MCP layer, you must use an accumulator pattern where computation happens during recursion, not after:
{
  "name": "evolve",
  "arguments": {
    "name": "factorial",
    "description": "Computes factorial using continuation with accumulator",
    "code": {
      "lam": "n_acc",
      "body": {
        "if": {
          "cond": {"lte": [{"fst": {"var": "n_acc"}}, 1]},
          "then": {"snd": {"var": "n_acc"}},
          "else": {
            "continue": {
              "input": {
                "pair": [
                  {"sub": [{"fst": {"var": "n_acc"}}, 1]},
                  {"mul": [{"fst": {"var": "n_acc"}}, {"snd": {"var": "n_acc"}}]}
                ]
              }
            }
          }
        }
      }
    }
  }
}
Usage:
{
  "name": "run",
  "arguments": {
    "code": "factorial",
    "input": {"pair": [5, 1]}
  }
}
The program will return a structured response:
{
  "type": "continuation",
  "message": "Recursive step needed. Call run again with:",
  "tool": "factorial",
  "next_input": {
    "pair": [4, 5]
  },
  "step": 1
}
In the current implementation, the client sees this as a raw Haskell representation of the JSON value:
Object (fromList [("message",String "Recursive step needed. Call run again with:"),("next_input",Object (fromList [("pair",Array [Number 4.0,Number 5.0])])),("step",Number 1.0),("tool",String "factorial"),("type",String "continuation")])
Important: The tool takes a pair [n, accumulator] as input. Start with [5, 1] to compute 5!. Each continuation step multiplies the accumulator by the current n, then decrements n.
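For concreteness, the sequence of continuation inputs when computing 5! with this tool looks like the following (each line is the next_input returned by one run call):

{"pair": [5, 1]}   // initial input
{"pair": [4, 5]}   // 5 * 1
{"pair": [3, 20]}  // 4 * 5
{"pair": [2, 60]}  // 3 * 20
{"pair": [1, 120]} // 2 * 60
// n ≤ 1, so the accumulator is returned: 5! = 120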
The continue primitive doesn't return a value you can compute with—it returns a continuation marker. All computation must happen before calling continue, stored in the accumulator. The pattern is:
- Input: a pair [n, acc] where acc holds the partial result
- Base case: when n ≤ 1, return the accumulator
- Recursive case: compute the new accumulator (n * acc), then continue with [n-1, new_acc]

Alternative: Direct recursion with self (fuel-limited):
{
  "name": "evolve",
  "arguments": {
    "name": "factorial_self",
    "description": "Simple factorial using self (small n only)",
    "code": {
      "lam": "n",
      "body": {
        "if": {
          "cond": {"lte": [{"var": "n"}, 1]},
          "then": 1,
          "else": {
            "mul": [
              {"var": "n"},
              {"app": {"func": {"self": true}, "arg": {"sub": [{"var": "n"}, 1]}}}
            ]
          }
        }
      }
    }
  }
}
This works for small inputs but will hit the fuel limit (10,000 steps) around n = 20.

Higher-order tools can also be built from fold; for example, a map tool:
{
  "name": "map",
  "description": "Maps a function over a list",
  "code": {
    "lam": "f",
    "body": {
      "lam": "list",
      "body": {
        "fold": [
          {"lam": "acc_item", "body": {
            "cons": {
              "head": {"app": {"func": {"var": "f"}, "arg": {"snd": {"var": "acc_item"}}}},
              "tail": {"fst": {"var": "acc_item"}}
            }
          }},
          {"nil": true},
          {"var": "list"}
        ]
      }
    }
  }
}
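For comparison with the map tool above, fold can also be used directly. Here is a sketch that sums a list, relying on the documented behaviors that JSON arrays are converted to cons lists and that the reducer receives a single (accumulator, item) pair:

{
  "fold": [
    {"lam": "acc_item", "body": {"add": [{"fst": {"var": "acc_item"}}, {"snd": {"var": "acc_item"}}]}},
    0,
    [1, 2, 3, 4]
  ]
}
// → 10

The meta primitives can likewise be turned on tool code itself, as the next example shows: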
{
  "name": "count_operations",
  "description": "Counts arithmetic operations in a tool",
  "code": {
    "lam": "tool_name",
    "body": {
      "eval": {
        "quote": {
          "analyze": [{"code_of": {"var": "tool_name"}}]
        }
      }
    }
  }
}
The code_of primitive returns a tool's source as quoted data, enabling program analysis and transformation.
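For instance (assuming the square tool from the quick start is registered), introspecting it returns its definition as data:

{"code_of": "square"}
// → the quoted term {"lam": "x", "body": {"mul": [{"var": "x"}, {"var": "x"}]}}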
JSON Input → Parser (validation) → Term (syntax) → Evaluator (execution) → RuntimeValue (values) → Encoder (serialization) → JSON Output
| Module | Purpose |
|---|---|
| Main.hs | Entry point, JSON-RPC loop |
| Server.hs | MCP protocol, request routing |
| Core/Parser.hs | JSON → Term validation |
| Core/Evaluator.hs | Term execution with fuel |
| Core/Encoder.hs | RuntimeValue → JSON |
| Core/Types.hs | Core type definitions |
| Core/Syntax.hs | Term ADT |
| Tools/Registry.hs | Tool storage |
MCP-PIF implements the Model Context Protocol for tool discovery and execution:
- evolve - Create new tools (stores in registry)
- run - Execute tools or inline lambda expressions
- list - Show all registered tools
- help - Display documentation for primitives and system tools

# Example using Python MCP SDK
import asyncio
import mcp

async def main():
    # The client API below follows the original sketch; adjust to your MCP SDK version
    async with mcp.Client() as client:
        await client.connect(stdio_transport("cabal run mcp-pif"))

        # Create a tool
        await client.call_tool("evolve", {
            "name": "double",
            "description": "Doubles a number",
            "code": {"lam": "x", "body": {"mul": [{"var": "x"}, 2]}}
        })

        # Use it
        result = await client.call_tool("run", {
            "tool": "double",
            "input": 21
        })
        print(result)  # 42

asyncio.run(main())
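The registry can also be inspected directly over JSON-RPC. A sketch of a list call in the same format as the earlier examples (assuming list takes no arguments):

{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "list",
    "arguments": {}
  }
}
// Returns: the names and descriptions of all registered tools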
MCP-PIF's current design maintains a clear boundary between pure computation and effectful operations. Several extensions have been considered that would expand these boundaries in interesting ways:
The current system is primarily synthetic - using primitives to compose new functions. A natural extension would be analytic capabilities for inspecting and transforming existing tools.
These analytic functions would operate on code-as-data (quoted terms) and could enable powerful metaprogramming patterns. However, they require careful design to maintain the simplicity of the core calculus while providing meaningful guarantees.
Another direction involves controlled introduction of effects:
System Interaction:
- File operations (readFile, writeFile)
- Process and environment access (exec, env)
- Network operations (fetch, serve)

Design Challenges:
One approach might be capability-based security: tools could declare required capabilities (file access, network, etc.) at creation time, with the MCP layer enforcing access control. This is consistent with the idea that evolved tools should bear proof of their own validity.
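As a purely hypothetical sketch of that idea (nothing below is implemented; the capabilities field, the fs:read value, and the readFile primitive are invented for illustration), an evolve call might one day declare its required effects up front:

{
  "name": "evolve",
  "arguments": {
    "name": "read_config",
    "description": "Reads a configuration file (hypothetical effectful tool)",
    "capabilities": ["fs:read"],
    "code": {"lam": "path", "body": {"readFile": {"var": "path"}}}
  }
}
// The MCP layer would reject calls whose declared capabilities the client has not granted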
The ultimate metacircular goal is implementing MCP-PIF's evaluator in MCP-PIF itself. A self-hosted PIF could enable runtime evolution of the evaluation strategy itself - a truly reflective system.
The MCP boundary could also support additional protocol-level operations. These extensions would maintain the event horizon principle while enriching the protocol layer's capabilities.
The key design principle for any extension: preserve the simplicity and predictability that makes PIF a reliable substrate for language model computation.
MIT