by pinecone-io
Retrieves information from Pinecone Assistant via an MCP server, offering configurable result limits and easy Docker deployment.
A lightweight MCP server that connects to Pinecone Assistant to fetch relevant information. It runs as a Docker container (or Rust binary) and supports returning multiple results based on a configurable limit.
docker build -t pinecone/assistant-mcp .
docker run -i --rm \
  -e PINECONE_API_KEY=<YOUR_PINECONE_API_KEY> \
  -e PINECONE_ASSISTANT_HOST=<YOUR_PINECONE_ASSISTANT_HOST> \
  pinecone/assistant-mcp
PINECONE_API_KEY – required API key from the Pinecone Console.
PINECONE_ASSISTANT_HOST – optional; defaults to https://prod-1-data.ke.pinecone.io.
LOG_LEVEL – optional logging level (default: info).
To use with Claude Desktop, add the server to claude_desktop_config.json as shown in the README below.
To build from source instead of using Docker:
cargo build --release
./target/release/assistant-mcp
To test locally, export your credentials and launch the MCP Inspector:
export PINECONE_API_KEY=...
export PINECONE_ASSISTANT_HOST=...
npx @modelcontextprotocol/inspector cargo run
Q: Do I need a Pinecone account?
A: Yes, you must have a Pinecone API key and an Assistant created in the Pinecone Console.
Q: Can I run the server without Docker?
A: Absolutely. Clone the repo, install Rust, and use cargo build --release.
Q: How many results can I request?
A: The server allows you to configure the limit; the default is defined in the source code (commonly 10).
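For illustration only, a tool call that limits results over MCP's JSON-RPC transport might look like the sketch below; the tool name (assistant_context) and limit parameter (top_k) are assumptions, not confirmed by this README, so check the server's tools/list response for the actual schema:
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "assistant_context",
    "arguments": {
      "query": "What does the Q3 planning doc say about hiring?",
      "top_k": 5
    }
  }
}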
Q: What logging levels are supported?
A: error, warn, info (default), debug, and trace.
Q: Is there a way to test the server locally?
A: Use the @modelcontextprotocol/inspector tool as demonstrated in the README.
An MCP server implementation for retrieving information from Pinecone Assistant.
To build the Docker image:
docker build -t pinecone/assistant-mcp .
Run the server with your Pinecone API key:
docker run -i --rm \
  -e PINECONE_API_KEY=<YOUR_PINECONE_API_KEY_HERE> \
  -e PINECONE_ASSISTANT_HOST=<YOUR_PINECONE_ASSISTANT_HOST_HERE> \
  pinecone/assistant-mcp
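The LOG_LEVEL variable described below can be passed the same way if you need more verbose output, for example:
docker run -i --rm \
  -e PINECONE_API_KEY=<YOUR_PINECONE_API_KEY_HERE> \
  -e PINECONE_ASSISTANT_HOST=<YOUR_PINECONE_ASSISTANT_HOST_HERE> \
  -e LOG_LEVEL=debug \
  pinecone/assistant-mcp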
PINECONE_API_KEY (required): Your Pinecone API key
PINECONE_ASSISTANT_HOST (optional): Pinecone Assistant API host (default: https://prod-1-data.ke.pinecone.io)
LOG_LEVEL (optional): Logging level (default: info)
Add this to your claude_desktop_config.json:
{
  "mcpServers": {
    "pinecone-assistant": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "PINECONE_API_KEY",
        "-e",
        "PINECONE_ASSISTANT_HOST",
        "pinecone/assistant-mcp"
      ],
      "env": {
        "PINECONE_API_KEY": "<YOUR_PINECONE_API_KEY_HERE>",
        "PINECONE_ASSISTANT_HOST": "<YOUR_PINECONE_ASSISTANT_HOST_HERE>"
      }
    }
  }
}
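Note the pattern above: passing -e PINECONE_API_KEY to docker run without a value forwards the variable from the environment Docker is launched in, and Claude Desktop populates that environment from the env map, so the secrets never appear in the args array.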
If you prefer to build from source without Docker:
cargo build --release
./target/release/assistant-mcp
The binary reads the same environment variables as the Docker image, so export them before running:
export PINECONE_API_KEY=<YOUR_PINECONE_API_KEY_HERE>
export PINECONE_ASSISTANT_HOST=<YOUR_PINECONE_ASSISTANT_HOST_HERE>
# Run the inspector alone
npx @modelcontextprotocol/inspector cargo run
# Or run with Docker directly through the inspector
npx @modelcontextprotocol/inspector -- docker run -i --rm -e PINECONE_API_KEY -e PINECONE_ASSISTANT_HOST pinecone/assistant-mcp
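You can also exercise the server without the inspector by speaking MCP's JSON-RPC protocol over stdio directly. A minimal smoke test, assuming the standard initialize / initialized / tools/list handshake (the protocolVersion string may need to match what the server expects):
printf '%s\n' \
  '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.1"}}}' \
  '{"jsonrpc":"2.0","method":"notifications/initialized"}' \
  '{"jsonrpc":"2.0","id":2,"method":"tools/list"}' \
  | docker run -i --rm -e PINECONE_API_KEY -e PINECONE_ASSISTANT_HOST pinecone/assistant-mcp
The tools/list response shows the tool names and input schemas the server actually exposes.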
This project is licensed under the terms specified in the LICENSE file.
Explore related MCPs that share similar capabilities and solve comparable challenges
by exa-labs
Provides real-time web search capabilities to AI assistants via a Model Context Protocol server, enabling safe and controlled access to the Exa AI Search API.
by perplexityai
Enables Claude and other MCP‑compatible applications to perform real‑time web searches through the Perplexity (Sonar) API without leaving the MCP ecosystem.
by MicrosoftDocs
Provides semantic search and fetch capabilities for Microsoft official documentation, returning content in markdown format via a lightweight streamable HTTP transport for AI agents and development tools.
by elastic
Enables natural‑language interaction with Elasticsearch indices via the Model Context Protocol, exposing tools for listing indices, fetching mappings, performing searches, running ES|QL queries, and retrieving shard information.
by graphlit
Enables integration between MCP clients and the Graphlit platform, providing ingestion, extraction, retrieval, and RAG capabilities across a wide range of data sources and connectors.
by mamertofabian
Fast cross‑platform file searching leveraging the Everything SDK on Windows, Spotlight on macOS, and locate/plocate on Linux.
by cr7258
Provides Elasticsearch and OpenSearch interaction via Model Context Protocol, enabling document search, index management, cluster monitoring, and alias operations.
by kagisearch
Provides web search and video summarization capabilities through the Model Context Protocol, enabling AI assistants like Claude to perform queries and summarizations.
by liuyoshio
Provides natural‑language search and recommendation for Model Context Protocol servers, delivering rich metadata and real‑time updates.