by cyberchitta
Inject relevant code and text from a project into Large Language Model chat interfaces via the clipboard or Model Context Protocol, using smart file selection based on .gitignore patterns and rule‑based customization.
LLM Context enables developers to quickly share selected portions of a codebase or document collection with any LLM chat interface. It extracts files that match project‑specific rules, formats them, and sends the content either through the system clipboard or directly over the Model Context Protocol.
1. Install: uv tool install "llm-context>=0.3.0" (or upgrade with uv tool upgrade llm-context).
2. Run lc-init at the root of your project to create configuration files.
3. Run lc-sel-files (or lc-sel-outlines for outlines) to pick the files that should be part of the context.
4. Run lc-context (add -p for prompt instructions, -u for user notes, or -f FILE to write to a file). The result is copied to the clipboard.
5. For MCP use, add the following to claude_desktop_config.json:
:{
"mcpServers": {
"CyberChitta": {
"command": "uvx",
"args": ["--from", "llm-context", "lc-mcp"]
}
}
}
After configuration, Claude Desktop can request the project context directly.
6. Iterative workflow – when the LLM asks for additional files, copy the file list, run lc-clip-files, and paste the returned contents back into the chat.
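Conceptually, that round trip amounts to reading the requested paths and re-sharing them with path headers. A minimal Python sketch of the idea (illustrative only – the real lc-clip-files works through the clipboard and the project's own formatting templates):

```python
from pathlib import Path


def gather_requested_files(root: str, requested: list[str]) -> str:
    """Join the contents of the requested files, each prefixed with its
    project-relative path, ready to paste back into the chat.

    Illustrative sketch only -- not the actual lc-clip-files implementation.
    """
    chunks = []
    for rel in requested:
        text = (Path(root) / rel).read_text()
        chunks.append(f"--- {rel} ---\n{text}")
    return "\n\n".join(chunks)
```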
Other highlights include smart file selection based on .gitignore patterns and code implementation extraction via lc-clip-implementations.
Q: Do I need to reinstall after each update?
A: No, simply run uv tool upgrade llm-context
to get the latest version.
Q: Will my configuration be overwritten?
A: Updates may overwrite files prefixed with lc-
. Keep them under version control.
Q: Can I use LLM Context with large monorepos? A: It’s optimized for projects that fit within an LLM’s context window; large‑project support is being developed.
Q: How does rule switching work?
A: Run lc-set-rule <n>
where <n>
corresponds to a rule profile defined in the configuration. System rules are prefixed with lc-
.
Q: Is there a Python API?
A: The primary interface is the CLI, but the underlying library can be imported from the llm_context
package for custom integrations.
LLM Context is a tool that helps developers quickly inject relevant content from code/text projects into Large Language Model chat interfaces. It leverages .gitignore
patterns for smart file selection and provides both a streamlined clipboard workflow using the command line and direct LLM integration through the Model Context Protocol (MCP).
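The .gitignore-based "smart file selection" can be pictured as walking the project tree and dropping anything an ignore pattern matches. A simplified sketch using the stdlib fnmatch module (real .gitignore semantics also cover negation, anchoring, and directory-only rules, which this omits):

```python
import fnmatch
from pathlib import Path


def select_files(root: str, ignore_patterns: list[str]) -> list[str]:
    """Return project-relative paths of files not matching any ignore
    pattern. Simplified illustration of .gitignore-style filtering.
    """
    selected = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        rel = path.relative_to(root).as_posix()
        parts = rel.split("/")
        ignored = any(
            fnmatch.fnmatch(rel, pat)
            # A pattern like "build/" ignores anything under that directory.
            or any(fnmatch.fnmatch(part, pat.rstrip("/")) for part in parts)
            for pat in ignore_patterns
        )
        if not ignored:
            selected.append(rel)
    return sorted(selected)
```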
Note: This project was developed in collaboration with several Claude Sonnets - 3.5, 3.6 and 3.7 (and more recently Grok-3 as well), using LLM Context itself to share code during development. All code in the repository is human-curated (by me 😇, @restlessronin).
For an in-depth exploration of the reasoning behind LLM Context and its approach to AI-assisted development, check out our article: LLM Context: Harnessing Vanilla AI Chats for Development
To see LLM Context in action with real-world examples and workflows, read: Full Context Magic - When AI Finally Understands Your Entire Project
Install LLM Context using uv:
uv tool install "llm-context>=0.3.0"
To upgrade to the latest version:
uv tool upgrade llm-context
Warning: LLM Context is under active development. Updates may overwrite configuration files prefixed with
lc-
. We recommend all configuration files be version controlled for this reason.
Add to 'claude_desktop_config.json':
{
"mcpServers": {
"CyberChitta": {
"command": "uvx",
"args": ["--from", "llm-context", "lc-mcp"]
}
}
}
Once configured, you can start working with your project in two simple ways:
Say: "I would like to work with my project" and Claude will ask you for the project root path.
Or directly specify: "I would like to work with my project /path/to/your/project" and Claude will load the project context automatically.
For optimal results, combine initial context through Claude's Project Knowledge UI with dynamic code access via MCP. This provides both comprehensive understanding and access to latest changes. See Full Context Magic for details and examples.
1. lc-init (only needed once)
2. lc-sel-files
3. Review the selection in .llm-context/curr_ctx.yaml
4. lc-context (with optional flags: -p for prompt, -u for user notes)
5. Use lc-context -p to include instructions
6. When the LLM requests files, run lc-clip-files
- lc-init: Initialize project configuration
- lc-set-rule <n>: Switch rules (system rules are prefixed with "lc-")
- lc-sel-files: Select files for inclusion
- lc-sel-outlines: Select files for outline generation
- lc-context [-p] [-u] [-f FILE]: Generate and copy context
  - -p: Include prompt instructions
  - -u: Include user notes
  - -f FILE: Write to output file
- lc-prompt: Generate project instructions for LLMs
- lc-clip-files: Process LLM file requests
- lc-changed: List files modified since last context generation
- lc-outlines: Generate outlines for code files
- lc-clip-implementations: Extract code implementations requested by LLMs (does not support C/C++)

LLM Context provides advanced features for customizing how project content is captured and presented:
- Smart file selection using .gitignore patterns
- Code implementation extraction via the lc-clip-implementations command

See our User Guide for detailed documentation of these features.
Check out our comprehensive list of alternatives - the sheer number of tools tackling this problem demonstrates its importance to the developer community.
LLM Context evolves from a lineage of AI-assisted development tools:
I am grateful for the open-source community's innovations and the AI assistance that have shaped this project's evolution.
I am grateful for the help of Claude-3.5-Sonnet in the development of this project.
This project is licensed under the Apache License, Version 2.0. See the LICENSE file for details.