by cyberchitta
Inject relevant code and text from a project into Large Language Model chat interfaces via the clipboard or Model Context Protocol, using smart file selection based on .gitignore patterns and rule‑based customization.
LLM Context enables developers to quickly share selected portions of a codebase or document collection with any LLM chat interface. It extracts files that match project‑specific rules, formats them, and sends the content either through the system clipboard or directly over the Model Context Protocol.
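Conceptually, the selection step walks the project tree and drops anything a .gitignore-style pattern excludes. A minimal illustrative sketch of that idea (simplified pattern semantics, not llm-context's actual implementation; the function names are hypothetical):

```python
import fnmatch
from pathlib import Path

def is_excluded(rel_path: str, patterns: list[str]) -> bool:
    """Very rough .gitignore-style check (illustrative only).

    - A leading '/' anchors the pattern at the project root.
    - Other patterns match the file name or any path component.
    """
    parts = rel_path.split("/")
    for pat in patterns:
        if pat.startswith("/"):
            anchored = pat.lstrip("/")
            if rel_path == anchored or rel_path.startswith(anchored + "/"):
                return True
        elif any(fnmatch.fnmatch(part, pat) for part in parts):
            return True
    return False

def select_files(root: Path, exclude: list[str]) -> list[str]:
    """Return project-relative paths of files that survive the filters."""
    selected = []
    for path in sorted(root.rglob("*")):
        if path.is_file():
            rel = path.relative_to(root).as_posix()
            if not is_excluded(rel, exclude):
                selected.append(rel)
    return selected
```

Real .gitignore semantics (negation with `!`, directory-only trailing `/`, `**`) are richer than this; the sketch only conveys the shape of rule-driven selection.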
1. Install – `uv tool install "llm-context>=0.3.0"` (or upgrade with `uv tool upgrade llm-context`).
2. Initialize – run `lc-init` at the root of your project to create configuration files.
3. Select files – run `lc-sel-files` (or `lc-sel-outlines` for outlines) to pick the files that should be part of the context.
4. Generate context – run `lc-context` (add `-p` for prompt instructions, `-u` for user notes, or `-f FILE` to write to a file). The result is copied to the clipboard.
5. MCP setup (optional) – add the server to `claude_desktop_config.json`:

   ```json
   {
     "mcpServers": {
       "CyberChitta": {
         "command": "uvx",
         "args": ["--from", "llm-context", "lc-mcp"]
       }
     }
   }
   ```

   After configuration, Claude Desktop can request the project context directly.
6. Iterative workflow – when the LLM asks for additional files, copy the file list, run `lc-clip-files`, and paste the returned contents back into the chat.
File selection respects .gitignore patterns, and requested code implementations can be supplied with `lc-clip-implementations`.

Q: Do I need to reinstall after each update?
A: No, simply run uv tool upgrade llm-context to get the latest version.
Q: Will my configuration be overwritten?
A: Updates may overwrite files prefixed with lc-. Keep them under version control.
Q: Can I use LLM Context with large monorepos?
A: It’s optimized for projects that fit within an LLM’s context window; large‑project support is being developed.
Q: How does rule switching work?
A: Run lc-set-rule <n> where <n> corresponds to a rule profile defined in the configuration. System rules are prefixed with lc-.
Q: Is there a Python API?
A: The primary interface is the CLI, but the underlying library can be imported from the llm_context package for custom integrations.
Reduce friction when providing context to LLMs. Share relevant project files instantly through smart selection and rule-based filtering.
Getting project context into LLM chats is tedious. LLM Context makes it nearly instant:

```shell
lc-select     # Smart file selection
lc-context    # Instant formatted context
# Paste and work - AI can access additional files seamlessly
```

Result: From "I need to share my project" to productive AI collaboration in seconds.
Note: This project was developed in collaboration with several Claude Sonnets (3.5, 3.6, 3.7 and 4.0), as well as Groks (3 and 4), using LLM Context itself to share code during development. All code in the repository is heavily human-curated (by me 😇, @restlessronin).
```shell
uv tool install "llm-context>=0.5.0"

# One-time setup
cd your-project
lc-init

# Daily usage
lc-select
lc-context
```
```json
{
  "mcpServers": {
    "llm-context": {
      "command": "uvx",
      "args": ["--from", "llm-context", "lc-mcp"]
    }
  }
}
```
With MCP, AI can access additional files directly during conversations.
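Registering the server is just a JSON edit; if you prefer to script it, a small sketch like this merges the entry without clobbering other servers (illustrative only, not an official tool; the config file's location varies by OS):

```python
import json
from pathlib import Path

def add_llm_context_server(config_path: Path) -> dict:
    """Merge the llm-context MCP server entry into a Claude Desktop config.

    Existing entries under "mcpServers" are preserved; the file is created
    if it does not exist yet.
    """
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    servers = config.setdefault("mcpServers", {})
    servers["llm-context"] = {
        "command": "uvx",
        "args": ["--from", "llm-context", "lc-mcp"],
    }
    config_path.write_text(json.dumps(config, indent=2))
    return config
```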
```shell
# Create project-specific filters
cat > .llm-context/rules/flt-repo-base.md << 'EOF'
---
compose:
  filters: [lc/flt-base]
gitignores:
  full-files: ["*.md", "/tests", "/node_modules"]
---
EOF

# Customize main development rule
cat > .llm-context/rules/prm-code.md << 'EOF'
---
instructions: [lc/ins-developer, lc/sty-python]
compose:
  filters: [flt-repo-base]
  excerpters: [lc/exc-base]
---
Additional project-specific guidelines and context.
EOF
```
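Composition (the `compose.filters` key) means a rule inherits pattern lists from the filters it names. The exact merge semantics are defined by llm-context itself; as a rough mental model only, composing filters accumulates their gitignore patterns, something like (the `lc/flt-base` contents below are invented for illustration):

```python
def compose_gitignores(*filters: dict) -> dict:
    """Accumulate "gitignores" pattern lists across composed filter rules.

    A mental model only - the real merge semantics live in llm-context.
    """
    merged: dict[str, list[str]] = {}
    for flt in filters:
        for key, patterns in flt.get("gitignores", {}).items():
            bucket = merged.setdefault(key, [])
            bucket.extend(p for p in patterns if p not in bucket)
    return {"gitignores": merged}

# Hypothetical base filter contents, plus the project filter defined above
base = {"gitignores": {"full-files": [".git", "*.lock"]}}
repo = {"gitignores": {"full-files": ["*.md", "/tests", "/node_modules"]}}
```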
Choose based on your LLM environment:
- `lc-context -p` (AI Studio, etc.)
- `lc-context -p -m` (Grok, etc.)
- `lc-prompt` + `lc-context -m`
- `lc-context` (Claude Projects, etc.)
- `lc-context -m` (force into context)

See Deployment Patterns in the user guide for details.
| Command | Purpose |
|---|---|
| `lc-init` | Initialize project configuration |
| `lc-select` | Select files based on current rule |
| `lc-context` | Generate and copy context |
| `lc-context -p` | Generate context with prompt |
| `lc-context -m` | Send context as separate message |
| `lc-context -nt` | No tools (for Project/Files inclusion) |
| `lc-context -f` | Write context to file |
| `lc-set-rule <n>` | Switch between rules |
| `lc-missing` | Handle file and context requests (non-MCP) |
Rules use a systematic five-category structure:

- Prompt rules (`prm-`): Generate project contexts (e.g., `lc/prm-developer`, `lc/prm-rule-create`)
- Filter rules (`flt-`): Control file inclusion (e.g., `lc/flt-base`, `lc/flt-no-files`)
- Instruction rules (`ins-`): Provide guidelines (e.g., `lc/ins-developer`, `lc/ins-rule-framework`)
- Style rules (`sty-`): Enforce coding standards (e.g., `lc/sty-python`, `lc/sty-code`)
- Excerpt rules (`exc-`): Configure extractions for context reduction (e.g., `lc/exc-base`)

For example, a task-focused rule:

```markdown
---
description: "Debug API authentication issues"
compose:
  filters: [lc/flt-no-files]
  excerpters: [lc/exc-base]
also-include:
  full-files: ["/src/auth/**", "/tests/auth/**"]
---
Focus on authentication system and related tests.
```
Let AI create focused rules for specific tasks. There are two approaches depending on your setup:
How it works: A global Claude Skill helps you create rules interactively. It requires project context (with overview) already shared via llm-context, and uses lc-missing to examine specific files as needed.
Setup:
```shell
lc-init   # Installs skill to ~/.claude/skills/
# Restart Claude Desktop or Claude Code
```
Workflow:
```shell
# 1. Share any project context (overview is required)
lc-context   # Can use any rule - overview will be included

# 2. Paste into Claude

# 3. Ask the Skill to help create a rule:
#    "Create a rule for refactoring authentication to JWT"
#    "I need a rule to debug the payment processing system"
```
Claude will:
- use `lc-missing` to examine specific files as needed for deeper analysis
- save the finished rule to `.llm-context/rules/tmp-prm-<task-name>.md`

Skill Files:

- `Skill.md` - Quick workflow and patterns (always loaded)
- `PATTERNS.md` - Common rule patterns (on demand)
- `SYNTAX.md` - Detailed syntax reference (on demand)
- `EXAMPLES.md` - Complete walkthroughs (on demand)
- `TROUBLESHOOTING.md` - Problem solving (on demand)

Skill Updates: Automatically updated when you upgrade llm-context. Restart Claude to use the new version.
How it works: You use a project rule that loads comprehensive rule-creation documentation as context.
Setup: No special setup needed - the documentation is built-in.
Usage:
```shell
# 1. Load the rule creation framework into context
lc-set-rule lc/prm-rule-create
lc-select
lc-context -nt

# 2. Paste into any LLM and describe your task:
#    "I need to add OAuth integration to the auth system"

# 3. The LLM generates a focused rule using the framework

# 4. Use the focused context
lc-set-rule tmp-prm-oauth-task
lc-select
lc-context
```
Documentation Included:
- `lc/ins-rule-intro` - Chat-based rule creation introduction
- `lc/ins-rule-framework` - Comprehensive decision framework, semantics, and best practices

| Aspect | Skill | Instruction Rules |
|---|---|---|
| Setup | Automatic with `lc-init` | Already available |
| Requires project context | Yes (overview needed) | Yes (overview needed) |
| Interaction | Interactive, multi-turn in Claude | Static documentation in context |
| Exploration | Uses `lc-missing` as needed | Manual or via AI requests |
| Best for | Claude Desktop/Code users | Any LLM, API, automation |
Both approaches require sharing project context first via lc-context. They produce equivalent results - choose based on your environment and preference.
```shell
lc-set-rule lc/prm-developer
lc-select
lc-context
# AI can review changes, access additional files as needed
```
```shell
# Share project context
lc-context

# Then ask Skill (Claude Desktop/Code):
#   "Create a rule for [your task]"

# Or work with any LLM using instruction rules:
#   lc-set-rule lc/prm-rule-create && lc-context -nt
```
Apache License, Version 2.0. See LICENSE for details.