by GBSOSS
Converts any MCP server into a Claude Skill, achieving up to 90% context savings while keeping tool functionality external to the model.
The MCP-to-Skill Converter transforms a Model Context Protocol (MCP) server into a Claude Skill. By loading only minimal metadata into the model context and fetching full tool instructions on demand, it reduces the token footprint from tens of thousands to a few hundred, preserving more context for user prompts.
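The metadata Claude loads up front lives in the generated SKILL.md frontmatter. A sketch of what that might look like (the exact fields the generator emits are an assumption here, following the standard Skill frontmatter convention):

```markdown
---
name: github
description: GitHub tools exposed via the github MCP server. Run executor.py to list, describe, and call tools on demand.
---
```

Only this small block stays resident; full tool schemas are fetched by the executor when a tool is actually used.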
The generated skill consists of SKILL.md and executor.py, and depends on the mcp Python library.
Q: Do I need to modify my existing MCP server? A: No. The converter reads the server's JSON config and generates a wrapper; the original server runs unchanged.
Q: What Python version is required? A: Python 3.8 or newer.
Q: Can I convert multiple MCP servers at once? A: Yes. Iterate over your config files and invoke the CLI for each output directory.
Q: How does authentication work?
A: Include required env vars (e.g., GITHUB_TOKEN, SLACK_TOKEN) in the MCP config; the generated skill passes them to the executor.
Q: What if a tool needs a persistent connection? A: The converter is best for stateless tools. For persistent connections, consider keeping those tools as native MCP servers.
Q: Is the generated skill compatible with Claude’s latest version?
A: The skill follows Claude’s standard Skill layout (SKILL.md, executor.py), so it works with current Claude Skill loading mechanisms.
Convert any MCP server into a Claude Skill with 90% context savings.
MCP servers are great but load all tool definitions into context at startup. With 20+ tools, that's 30-50k tokens gone before Claude does any work.
This converter applies the "progressive disclosure" pattern (inspired by playwright-skill) to any MCP server:
# 1. Create your MCP config file
cat > github-mcp.json << 'EOF'
{
"name": "github",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {"GITHUB_TOKEN": "your-token-here"}
}
EOF
# 2. Convert to Skill
python mcp_to_skill.py \
--mcp-config github-mcp.json \
--output-dir ./skills/github
# 3. Install dependencies
cd skills/github
pip install mcp
# 4. Copy to Claude
cp -r . ~/.claude/skills/github
Done! Claude can now use GitHub tools with minimal context.
The converter generates:
SKILL.md - Instructions for Claude
executor.py - Handles MCP calls dynamically
Before (MCP):
20 tools = 30k tokens always loaded
Context available: 170k / 200k = 85%
After (Skills):
20 skills = 2k tokens metadata
When 1 skill active: 7k tokens
Context available: 193k / 200k = 96.5%
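The arithmetic behind those figures can be checked directly (window size and token counts taken from the comparison above):

```python
# Context-window math for a 200k-token window.
WINDOW = 200_000

mcp_loaded = 30_000     # 20 MCP tools, always loaded
skills_active = 7_000   # skill metadata plus one fully loaded skill

mcp_free = (WINDOW - mcp_loaded) / WINDOW        # fraction of context left free
skills_free = (WINDOW - skills_active) / WINDOW

print(f"MCP:    {mcp_free:.1%} free")    # → MCP:    85.0% free
print(f"Skills: {skills_free:.1%} free") # → Skills: 96.5% free
```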
GitHub MCP server (8 tools):
| Metric | MCP | Skill | Savings |
|---|---|---|---|
| Idle | 8,000 tokens | 100 tokens | 98.75% |
| Active | 8,000 tokens | 5,000 tokens | 37.5% |
Any standard MCP server:
Use this converter when:
Stick with MCP when:
Best approach: Use both
pip install mcp
Python 3.8+ required.
┌─────────────────────────────────────┐
│ Your MCP Config │
│ (JSON file) │
└──────────┬──────────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ mcp_to_skill.py │
│ - Reads config │
│ - Generates Skill structure │
└──────────┬──────────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ Generated Skill │
│ ├── SKILL.md (100 tokens) │
│ ├── executor.py (dynamic calls) │
│ └── config files │
└─────────────────────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ Claude │
│ - Loads metadata only │
│ - Full docs when needed │
│ - Calls executor for tools │
└─────────────────────────────────────┘
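A minimal sketch of the generation step the diagram describes, assuming the tool simply writes a small SKILL.md plus a copy of the server config for the executor to use. The `generate_skill` function and the `mcp_config.json` file name are hypothetical, not the converter's actual code:

```python
import json
from pathlib import Path

SKILL_TEMPLATE = """---
name: {name}
description: Tools from the {name} MCP server. Run executor.py to list, describe, and call tools on demand.
---
Use `python executor.py --list` to see available tools.
"""

def generate_skill(config_path, output_dir):
    """Read an MCP server config and emit a Skill directory (sketch)."""
    config = json.loads(Path(config_path).read_text())
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    # Small metadata file: this is all Claude loads up front.
    (out / "SKILL.md").write_text(SKILL_TEMPLATE.format(name=config["name"]))
    # The server config travels with the skill so the executor can spawn it.
    (out / "mcp_config.json").write_text(json.dumps(config, indent=2))
    return out
```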
# Create config
cat > github.json << 'EOF'
{
"name": "github",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {"GITHUB_TOKEN": "ghp_your_token"}
}
EOF
# Convert
python mcp_to_skill.py --mcp-config github.json --output-dir ./skills/github
# Result: GitHub tools accessible with 100 tokens vs 8k
# Convert multiple MCP servers
for config in configs/*.json; do
name=$(basename "$config" .json)
python mcp_to_skill.py --mcp-config "$config" --output-dir "./skills/$name"
done
pip install mcp
Check your config file:
cd skills/your-skill
# List tools
python executor.py --list
# Describe a tool
python executor.py --describe tool_name
# Call a tool
python executor.py --call '{"tool": "tool_name", "arguments": {...}}'
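For reference, the `--call` path of such an executor can be sketched with the official `mcp` Python SDK. This is a simplified illustration, not the generated executor.py; `parse_call_payload` and the `mcp_config.json` file name are assumptions:

```python
import asyncio
import json
import sys
from pathlib import Path

def parse_call_payload(payload):
    """Split the JSON passed to --call into (tool, arguments)."""
    data = json.loads(payload)
    return data["tool"], data.get("arguments", {})

async def call_tool(payload):
    # Lazy import so the script can fail gracefully without the SDK installed.
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    cfg = json.loads(Path("mcp_config.json").read_text())
    params = StdioServerParameters(
        command=cfg["command"], args=cfg.get("args", []), env=cfg.get("env")
    )
    tool, arguments = parse_call_payload(payload)
    # Spawn the original MCP server over stdio only for the duration of the call.
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(tool, arguments=arguments)
            print(result)

if __name__ == "__main__":
    if len(sys.argv) == 3 and sys.argv[1] == "--call":
        asyncio.run(call_tool(sys.argv[2]))
```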
mcp Python package
This is a proof of concept. Contributions welcome:
Inspired by:
License: MIT
Status: Functional but early stage
Feedback: Issues and PRs welcome
Questions: Open an issue
{
"mcpServers": {
"github": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-github"
],
"env": {
"GITHUB_TOKEN": "<YOUR_API_KEY>"
}
}
}
}
claude mcp add github npx -y @modelcontextprotocol/server-github