by GreatScottyMac
Provides a project‑specific memory bank that stores decisions, progress, architecture, and custom data, exposing a structured knowledge graph via MCP for AI assistants and IDE tools.
Context Portal creates a project‑specific knowledge base that captures entities such as decisions, progress items, system patterns, and custom key‑value data. The information is stored in a SQLite database per workspace and enriched with vector embeddings for semantic search, enabling Retrieval Augmented Generation (RAG) for AI assistants.
Quick start:
1. Install with uvx (or via a virtual environment) and ensure Python 3.8+ is available.
2. Run the context_portal_mcp binary in STDIO mode, passing the absolute workspace path via --workspace_id.
3. Register the server in your MCP client settings (e.g., under the name conport) and supply the command/args configuration.
4. Call tools (get_product_context, log_decision, search_decisions_fts, etc.) from your IDE or LLM agent, always providing the workspace_id.
5. Relate items to one another with link_conport_items to build out the knowledge graph.
Q: Do I need to supply workspace_id for every tool call?
A: Yes. It allows a single server process to manage multiple workspaces safely.

Q: Can I run the server without uvx?
A: Absolutely. You can clone the repo, create a virtual environment, install dependencies from requirements.txt, and launch src/context_portal_mcp/main.py directly.

Q: What happens if I start the server in the wrong directory?
A: The initial --workspace_id check prevents the server from creating its database inside its own installation folder, protecting against misconfiguration.

Q: How are vector embeddings generated?
A: The server integrates with an LLM provider (configurable via environment variables) to compute embeddings for stored text before indexing.
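Conceptually, that means embedding text at write time, storing the vector next to the item, and ranking by similarity at query time. The sketch below illustrates the idea only; the embed() function, the table layout, and the brute-force scan are assumptions, not ConPort's actual implementation:

```python
# Conceptual sketch only -- NOT ConPort's actual code. The embed() call,
# the table layout, and the brute-force scan are illustrative assumptions.
import json
import math
import sqlite3

def embed(text: str) -> list[float]:
    """Stand-in for the embedding call made to the configured LLM provider."""
    raise NotImplementedError("wired to the provider selected via env vars")

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def index_text(db: sqlite3.Connection, item_id: int, text: str) -> None:
    # Compute the vector at write time so searches can compare against it later.
    db.execute("INSERT INTO vectors (item_id, vec) VALUES (?, ?)",
               (item_id, json.dumps(embed(text))))

def semantic_search(db: sqlite3.Connection, query: str, k: int = 5):
    # Rank every stored vector against the query embedding, highest first.
    query_vec = embed(query)
    rows = db.execute("SELECT item_id, vec FROM vectors").fetchall()
    ranked = sorted(((cosine(query_vec, json.loads(vec)), item_id)
                     for item_id, vec in rows), reverse=True)
    return ranked[:k]
```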
Q: Is there a way to see recent activity?
A: Use the get_recent_activity_summary tool to obtain a concise list of recent changes across all item types.
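For instance, a client might call it like this (a hypothetical call site built on the MCP Python SDK's ClientSession; the argument names follow the tool reference later in this document):

```python
from mcp import ClientSession  # official MCP Python SDK

async def recent_activity(session: ClientSession, workspace: str):
    # Every ConPort tool call must carry the workspace_id explicitly.
    return await session.call_tool(
        "get_recent_activity_summary",
        {
            "workspace_id": workspace,  # absolute path to the project
            "hours_ago": 24,            # look back one day
            "limit_per_type": 5,        # default per the tool reference
        },
    )
```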
A database-backed Model Context Protocol (MCP) server for managing structured project context, designed to be used by AI assistants and developer tools within IDEs and other interfaces.
Context Portal (ConPort) is your project's memory bank. It's a tool that helps AI assistants understand your specific software project better by storing important information like decisions, tasks, and architectural patterns in a structured way. Think of it as building a project-specific knowledge base that the AI can easily access and use to give you more accurate and helpful responses.
What it does:
ConPort provides a robust and structured way for AI assistants to store, retrieve, and manage various types of project context. It effectively builds a project-specific knowledge graph, capturing entities like decisions, progress, and architecture, along with their relationships. This structured knowledge base, enhanced by vector embeddings for semantic search, then serves as a powerful backend for Retrieval Augmented Generation (RAG), enabling AI assistants to access precise, up-to-date information for more context-aware and accurate responses.
It replaces older file-based context management systems by offering a more reliable and queryable database backend (SQLite per workspace). ConPort is designed to be a generic context backend, compatible with various IDEs and client interfaces that support MCP.
Key features include:
- An MCP server (context_portal_mcp) built with Python/FastAPI.
- A dedicated SQLite database per workspace, selected via workspace_id.

Before you begin, ensure you have the following installed:
- Python 3.8+
- uv (recommended): uv significantly simplifies virtual environment creation and dependency installation.
The recommended way to install and run ConPort is by using uvx
to execute the package directly from PyPI. This method avoids the need to manually create and manage virtual environments.
Configuration

In your MCP client settings (e.g., mcp_settings.json), use the following configuration:
{
"mcpServers": {
"conport": {
"command": "uvx",
"args": [
"--from",
"context-portal-mcp",
"conport-mcp",
"--mode",
"stdio",
"--workspace_id",
"${workspaceFolder}",
"--log-file",
"./logs/conport.log",
"--log-level",
"INFO"
]
}
}
}
- command: uvx handles the environment for you.
- args: Contains the arguments to run the ConPort server.
- ${workspaceFolder}: This IDE variable automatically provides the absolute path of the current project workspace.
- --log-file: Optional. Path to a file where server logs will be written. If not provided, logs are directed to stderr (console). Useful for persistent logging and debugging server behavior.
- --log-level: Optional. Sets the minimum logging level for the server. Valid choices are DEBUG, INFO, WARNING, ERROR, and CRITICAL. Defaults to INFO. Set to DEBUG for verbose output during development or troubleshooting.

These instructions guide you through setting up ConPort for development or contribution by cloning its Git repository and installing dependencies.
Clone the Repository: Open your terminal or command prompt and run:
git clone https://github.com/GreatScottyMac/context-portal.git
cd context-portal
Create and Activate a Virtual Environment:
In the context-portal
directory:
uv venv
Activate the environment:
- Linux/macOS (bash/zsh): source .venv/bin/activate
- Windows (Command Prompt): .venv\Scripts\activate.bat
- Windows (PowerShell): .venv\Scripts\Activate.ps1
Install Dependencies: With your virtual environment activated:
uv pip install -r requirements.txt
Verify Installation (Optional): Ensure your virtual environment is activated.
uv run python src/context_portal_mcp/main.py --help
This should output the command-line help for the ConPort server.
Purpose of the --workspace_id Command-Line Argument:
When you launch the ConPort server, particularly in STDIO mode (--mode stdio), the --workspace_id argument serves several key purposes:
- It identifies the project workspace the server should operate on.
- It enables a startup safety check that prevents the server from creating its data files (e.g., context.db, conport_vector_data/) inside its own installation directory. This protects against misconfigurations where the client might not correctly provide the workspace path.

Important Note: The --workspace_id provided at server startup is not automatically used as the workspace_id parameter for every subsequent MCP tool call. ConPort tools are designed to require the workspace_id parameter explicitly in each call (e.g., get_product_context({"workspace_id": "..."})). This design supports the possibility of a single server instance managing multiple workspaces and ensures clarity for each operation. Your client IDE/MCP client is responsible for providing the correct workspace_id with each tool call.
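To illustrate, the sketch below shows one session calling the same tool against two different workspaces; the ClientSession wiring from the MCP Python SDK is assumed, and the paths are illustrative:

```python
from mcp import ClientSession

async def show_multi_workspace(session: ClientSession):
    # One server process, two projects: the startup --workspace_id does not
    # carry over, so each call names its target workspace explicitly.
    ctx_a = await session.call_tool(
        "get_product_context", {"workspace_id": "/home/me/project-a"}
    )
    ctx_b = await session.call_tool(
        "get_product_context", {"workspace_id": "/home/me/project-b"}
    )
    return ctx_a, ctx_b
```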
Key Takeaway: ConPort critically relies on an accurate --workspace_id
to identify the target project. Ensure this argument correctly resolves to the absolute path of your project workspace, either through IDE variables like ${workspaceFolder}
or by providing a direct absolute path.
For pre-upgrade cleanup, including clearing Python bytecode cache, please refer to the v0.2.4_UPDATE_GUIDE.md.
ConPort's effectiveness with LLM agents is significantly enhanced by providing specific custom instructions or system prompts to the LLM. This repository includes tailored strategy files for different environments:
For Roo Code:
- roo_code_conport_strategy: Contains detailed instructions for LLMs operating within the Roo Code VS Code extension, guiding them on how to use ConPort tools for context management.

For Cline:
- cline_conport_strategy: Contains detailed instructions for LLMs operating within the Cline VS Code extension, guiding them on how to use ConPort tools for context management.

For Windsurf Cascade:
- cascade_conport_strategy: Specific guidance for LLMs integrated with the Windsurf Cascade environment. Important: When initiating a session in Cascade, it is necessary to explicitly tell the LLM: Initialize according to custom instructions

For General/Platform-Agnostic Use:
- generic_conport_strategy: Provides a platform-agnostic set of instructions for any MCP-capable LLM. It emphasizes using ConPort's get_conport_schema operation to dynamically discover the exact ConPort tool names and their parameters, guiding the LLM on when and why to perform conceptual interactions (like logging a decision or updating product context) rather than hardcoding specific tool invocation details.

How to Use These Strategy Files:
These instructions equip the LLM with the knowledge to:
- Initialize and load existing project context at the start of a session.
- Use the appropriate ConPort tools to log and retrieve information during the session.
- Provide the correct workspace_id with every tool call.
Important Tip for Starting Sessions:
To ensure the LLM agent correctly initializes and loads context, especially in interfaces that might not always strictly adhere to custom instructions on the first message, it's a good practice to start your interaction with a clear directive like:
Initialize according to custom instructions.
This can help prompt the agent to perform its ConPort initialization sequence as defined in its strategy file.

When you first start using ConPort in a new or existing project workspace, the ConPort database (context_portal/context.db) will be automatically created by the server if it doesn't exist. To help bootstrap the initial project context, especially the Product Context, consider the following:
Option 1: Create a projectBrief.md File (Recommended)
- In the root directory of your project workspace, create a file named projectBrief.md and use it to describe your project at a high level.
- When the LLM agent (using a strategy file like roo_code_conport_strategy) initializes in the workspace, it is designed to detect projectBrief.md and offer to import its content into the Product Context.

Option 2: Enter the Context Manually
- If projectBrief.md is not found, or if you choose not to import it, you can define the initial Product Context manually (for example, via the update_product_context tool).

By providing initial context, either through projectBrief.md or manual entry, you enable ConPort and the connected LLM agent to have a better foundational understanding of your project from the start.
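For the manual route, a seeding call might look like the following sketch; the field names inside content are purely illustrative, while the tool name and its content parameter come from the tool reference below:

```python
from mcp import ClientSession

async def bootstrap_product_context(session: ClientSession, workspace: str):
    # Hand-rolled equivalent of importing projectBrief.md: seed the Product
    # Context with a full `content` object (the field names are up to you).
    return await session.call_tool(
        "update_product_context",
        {
            "workspace_id": workspace,
            "content": {
                "goals": "Ship a CLI for nightly data exports",
                "features": ["incremental sync", "S3 upload"],
                "architecture": "Python CLI -> SQLite cache -> S3",
            },
        },
    )
```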
The ConPort server exposes the following tools via MCP, allowing interaction with the underlying project knowledge graph. This includes tools for semantic search powered by vector data storage. These tools provide the Retrieval step that is crucial for Retrieval Augmented Generation (RAG) by AI agents. All tools require a workspace_id argument (string, required) to specify the target project workspace.
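Before the full reference, here is a brief sketch of the calling convention (hypothetical client code using the MCP Python SDK's ClientSession; the tool names and arguments follow the list below):

```python
from mcp import ClientSession

async def demo_calls(session: ClientSession, ws: str):
    # Retrieve the overall project context...
    product = await session.call_tool(
        "get_product_context", {"workspace_id": ws}
    )
    # ...and log a decision with the fields the tool expects.
    await session.call_tool(
        "log_decision",
        {
            "workspace_id": ws,
            "summary": "Adopt SQLite for per-workspace storage",
            "rationale": "Zero-ops, file-based, easy to back up",
            "tags": ["architecture", "storage"],
        },
    )
    return product
```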
- get_product_context: Retrieves the overall project goals, features, and architecture.
- update_product_context: Updates the product context. Accepts full content (object) or patch_content (object) for partial updates (use __DELETE__ as a value in a patch to remove a key).
- get_active_context: Retrieves the current working focus, recent changes, and open issues.
- update_active_context: Updates the active context. Accepts full content (object) or patch_content (object) for partial updates (use __DELETE__ as a value in a patch to remove a key).
- log_decision: Logs an architectural or implementation decision. Args: summary (str, req), rationale (str, opt), implementation_details (str, opt), tags (list[str], opt).
- get_decisions: Retrieves logged decisions. Args: limit (int, opt), tags_filter_include_all (list[str], opt), tags_filter_include_any (list[str], opt).
- search_decisions_fts: Full-text search across decision fields (summary, rationale, details, tags). Args: query_term (str, req), limit (int, opt).
- delete_decision_by_id: Deletes a decision by its ID. Args: decision_id (int, req).
- log_progress: Logs a progress entry or task status. Args: status (str, req), description (str, req), parent_id (int, opt), linked_item_type (str, opt), linked_item_id (str, opt).
- get_progress: Retrieves progress entries. Args: status_filter (str, opt), parent_id_filter (int, opt), limit (int, opt).
- update_progress: Updates an existing progress entry. Args: progress_id (int, req), status (str, opt), description (str, opt), parent_id (int, opt).
- delete_progress_by_id: Deletes a progress entry by its ID. Args: progress_id (int, req).
- log_system_pattern: Logs or updates a system/coding pattern. Args: name (str, req), description (str, opt), tags (list[str], opt).
- get_system_patterns: Retrieves system patterns. Args: tags_filter_include_all (list[str], opt), tags_filter_include_any (list[str], opt).
- delete_system_pattern_by_id: Deletes a system pattern by its ID. Args: pattern_id (int, req).
- log_custom_data: Stores/updates a custom key-value entry under a category; the value must be JSON-serializable. Args: category (str, req), key (str, req), value (any, req).
- get_custom_data: Retrieves custom data. Args: category (str, opt), key (str, opt).
- delete_custom_data: Deletes a specific custom data entry. Args: category (str, req), key (str, req).
- search_project_glossary_fts: Full-text search within the 'ProjectGlossary' custom data category. Args: query_term (str, req), limit (int, opt).
- search_custom_data_value_fts: Full-text search across all custom data values, categories, and keys. Args: query_term (str, req), category_filter (str, opt), limit (int, opt).
- link_conport_items: Creates a relationship link between two ConPort items, explicitly building out the project knowledge graph. Args: source_item_type (str, req), source_item_id (str, req), target_item_type (str, req), target_item_id (str, req), relationship_type (str, req), description (str, opt).
- get_linked_items: Retrieves items linked to a specific item. Args: item_type (str, req), item_id (str, req), relationship_type_filter (str, opt), linked_item_type_filter (str, opt), limit (int, opt).
- get_item_history: Retrieves version history for the Product or Active Context. Args: item_type ("product_context" | "active_context", req), version (int, opt), before_timestamp (datetime, opt), after_timestamp (datetime, opt), limit (int, opt).
- get_recent_activity_summary: Provides a summary of recent ConPort activity. Args: hours_ago (int, opt), since_timestamp (datetime, opt), limit_per_type (int, opt, default: 5).
- get_conport_schema: Retrieves the schema of available ConPort tools and their arguments.
- export_conport_to_markdown: Exports ConPort data to markdown files. Args: output_path (str, opt, default: "./conport_export/").
- import_markdown_to_conport: Imports data from markdown files into ConPort. Args: input_path (str, opt, default: "./conport_export/").
- batch_log_items: Logs multiple items of the same type (e.g., decisions, progress entries) in a single call. Args: item_type (str, req; e.g., "decision", "progress_entry"), items (list[dict], req; list of Pydantic model dicts for the item type).

For a more in-depth understanding of ConPort's design, architecture, and advanced usage patterns, please refer to the documentation in the repository.
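As a closing sketch, the snippet below composes a partial update (including a __DELETE__ patch) with an explicit link between two items; the item IDs, relationship name, and session wiring are illustrative assumptions:

```python
from mcp import ClientSession

async def patch_and_link(session: ClientSession, ws: str):
    # Partial update: change one key and delete another via __DELETE__.
    await session.call_tool(
        "update_active_context",
        {
            "workspace_id": ws,
            "patch_content": {
                "current_focus": "payment flow refactor",
                "stale_note": "__DELETE__",  # removes this key from the context
            },
        },
    )
    # Relate a decision to the progress entry that implements it,
    # growing the knowledge graph explicitly (IDs are illustrative).
    await session.call_tool(
        "link_conport_items",
        {
            "workspace_id": ws,
            "source_item_type": "decision",
            "source_item_id": "12",
            "target_item_type": "progress_entry",
            "target_item_id": "34",
            "relationship_type": "implemented_by",
        },
    )
```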
Please see our CONTRIBUTING.md guide for details on how to contribute to the ConPort project.
This project is licensed under the Apache-2.0 license.
For detailed instructions on how to manage your context.db
file, especially when updating ConPort across versions that include database schema changes, please refer to the dedicated v0.2.4_UPDATE_GUIDE.md. This guide provides steps for manual data migration (export/import) if needed, and troubleshooting tips.
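A sketch of that export/import round trip, with the session wiring assumed and the paths set to the documented defaults:

```python
from mcp import ClientSession

async def migrate_via_markdown(session: ClientSession, ws: str):
    # Manual migration path described in the update guide: export all data
    # to markdown before upgrading, then import into the new schema.
    await session.call_tool(
        "export_conport_to_markdown",
        {"workspace_id": ws, "output_path": "./conport_export/"},
    )
    # ...upgrade ConPort, then:
    await session.call_tool(
        "import_markdown_to_conport",
        {"workspace_id": ws, "input_path": "./conport_export/"},
    )
```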
Explore related MCPs that share similar capabilities and solve comparable challenges
by modelcontextprotocol
A basic implementation of persistent memory using a local knowledge graph. This lets Claude remember information about the user across chats.
by topoteretes
Provides dynamic memory for AI agents through modular ECL (Extract, Cognify, Load) pipelines, enabling seamless integration with graph and vector stores using minimal code.
by basicmachines-co
Enables persistent, local‑first knowledge management by allowing LLMs to read and write Markdown files during natural conversations, building a traversable knowledge graph that stays under the user’s control.
by smithery-ai
Provides read and search capabilities for Markdown notes in an Obsidian vault for Claude Desktop and other MCP clients.
by chatmcp
Summarize chat messages by querying a local chat database and returning concise overviews.
by dmayboroda
Provides on‑premises conversational retrieval‑augmented generation (RAG) with configurable Docker containers, supporting fully local execution, ChatGPT‑based custom GPTs, and Anthropic Claude integration.
by andrea9293
Provides document management and AI-powered semantic search for storing, retrieving, and querying text, markdown, and PDF files locally without external databases.
by scorzeth
Provides a local MCP server that interfaces with a running Anki instance to retrieve, create, and update flashcards through standard MCP calls.
by sirmews
Read and write records in a Pinecone vector index via Model Context Protocol, enabling semantic search and document management for Claude Desktop.