by pinecone-io
Enable AI assistants to interact with Pinecone projects, allowing documentation search, index configuration, code generation, and data operations directly from development environments.
Pinecone Developer MCP Server bridges Pinecone indexes with AI coding assistants such as Cursor and Claude, providing tools that let assistants search Pinecone documentation, list and describe indexes, create inference‑enabled indexes, upsert and search records, and perform cascading searches.
Q: What are the requirements? A: Node.js must be installed, with node and npx available in your PATH.
Q: How do I set it up? A: Configure your AI assistant (create .cursor/mcp.json or edit claude_desktop_config.json) with the npx command and your API key.
Q: Do I need an API key to use the server? A: An API key is required for any operation that manages or queries your Pinecone indexes. Without one, only documentation search works.
Q: Which assistants are supported? A: The server works with any MCP‑compatible tool, such as Cursor, Claude desktop, and other coding assistants that can invoke MCP servers.
Q: Are indexes without integrated inference supported? A: No. The server only operates on indexes that have integrated inference enabled.
Q: How do I enable the server globally? A: Place the same mcp.json configuration in the .cursor folder of your home directory.
Q: Can I customize the tool permissions? A: Assistants typically ask for permission before invoking a tool; you can grant or deny per request.
The Model Context Protocol (MCP) is a standard that allows coding assistants and other AI tools to interact with platforms like Pinecone. The Pinecone Developer MCP Server allows you to connect these tools with Pinecone projects and documentation.
Once connected, AI tools can:
- Search Pinecone documentation to answer questions accurately
- List, describe, and create indexes with integrated inference
- Upsert records and search them, including cascading searches across multiple indexes
See the docs for more detailed information.
This MCP server is focused on improving the experience of developers working with Pinecone as part of their technology stack. It is intended for use with coding assistants. Pinecone also offers the Assistant MCP, which is designed to provide AI assistants with relevant context sourced from your knowledge base.
To configure the MCP server to access your Pinecone project, you will need to generate an API key using the console. Without an API key, your AI tool will still be able to search documentation. However, it will not be able to manage or query your indexes.
The MCP server requires Node.js. Ensure that node and npx are available in your PATH.
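A quick way to verify from a terminal (both commands should print a version number; if either fails, install Node.js, which bundles npx):

node --version
npx --version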
Next, you will need to configure your AI assistant to use the MCP server.
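If you want to sanity-check the installation first, you can launch the server manually from a terminal (shown here with POSIX shell syntax). The server speaks MCP over stdio, so it will simply wait for a client to connect; press Ctrl+C to stop it:

PINECONE_API_KEY="<your pinecone api key>" npx -y @pinecone-database/mcp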
To add the Pinecone MCP server to a project, create a .cursor/mcp.json file in the project root (if it doesn't already exist) and add the following configuration:
{
  "mcpServers": {
    "pinecone": {
      "command": "npx",
      "args": [
        "-y",
        "@pinecone-database/mcp"
      ],
      "env": {
        "PINECONE_API_KEY": "<your pinecone api key>"
      }
    }
  }
}
You can check the status of the server in Cursor Settings > MCP.
To enable the server globally, add the configuration to the .cursor/mcp.json in your home directory instead.
It is recommended to use rules to instruct Cursor on proper usage of the MCP server. Check out the docs for some suggestions.
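For example, a project rule could look like the sketch below. The file name and wording are illustrative, not official suggestions; adapt them to your project:

.cursor/rules/pinecone.mdc:
---
description: How to use the Pinecone MCP server
alwaysApply: true
---
- Before writing Pinecone code, use the search-docs tool to check the current documentation.
- When creating indexes, use create-index-for-model so that integrated inference is enabled.
- Confirm with me before upserting or deleting records.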
In Claude desktop, locate the claude_desktop_config.json file by navigating to Settings > Developer > Edit Config, then add the following configuration:
{
  "mcpServers": {
    "pinecone": {
      "command": "npx",
      "args": [
        "-y",
        "@pinecone-database/mcp"
      ],
      "env": {
        "PINECONE_API_KEY": "<your pinecone api key>"
      }
    }
  }
}
Restart Claude desktop. On the new chat screen, you should see a hammer (MCP) icon appear with the new MCP tools available.
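Alternatively, you can register the server with the Claude CLI in a single command:

claude mcp add pinecone npx -y @pinecone-database/mcp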
Once configured, your AI tool will automatically make use of the MCP to interact with Pinecone. You may be prompted for permission before a tool can be used. Try asking your AI assistant to set up an example index, upload sample data, or search for you!
Pinecone Developer MCP Server provides the following tools for AI assistants to use:
- search-docs: Search the official Pinecone documentation.
- list-indexes: Lists all Pinecone indexes.
- describe-index: Describes the configuration of an index.
- describe-index-stats: Provides statistics about the data in the index, including the number of records and available namespaces.
- create-index-for-model: Creates a new index that uses an integrated inference model to embed text as vectors.
- upsert-records: Inserts or updates records in an index with integrated inference.
- search-records: Searches for records in an index based on a text query, using integrated inference for embedding. Has options for metadata filtering and reranking.
- cascading-search: Searches for records across multiple indexes, deduplicating and reranking the results.
- rerank-documents: Reranks a collection of records or text documents using a specialized reranking model.
Note that only indexes with integrated inference are supported. Assistants, indexes without integrated inference, standalone embeddings, and vector search are not supported.
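For a sense of what happens under the hood when an assistant uses these tools, here is a minimal sketch of a client invoking the server with the official TypeScript MCP SDK (@modelcontextprotocol/sdk). The argument shape passed to search-docs ("query") is an assumption for illustration; inspect each tool's input schema via listTools before relying on it:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the Pinecone MCP server the same way the assistants do.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@pinecone-database/mcp"],
  env: { PINECONE_API_KEY: process.env.PINECONE_API_KEY ?? "" },
});

const client = new Client({ name: "example-client", version: "0.0.1" });
await client.connect(transport);

// List the tools the server exposes (search-docs, list-indexes, ...).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// Invoke a tool; the "query" argument name is assumed for illustration.
const result = await client.callTool({
  name: "search-docs",
  arguments: { query: "how do I create an index with integrated inference?" },
});
console.log(result.content);

await client.close();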
We welcome your collaboration in improving the developer MCP experience. Please submit issues in the GitHub issue tracker. Information about contributing can be found in CONTRIBUTING.md.
Explore related MCPs that share similar capabilities and solve comparable challenges:
by zed-industries
A high‑performance, multiplayer code editor designed for speed and collaboration.
by modelcontextprotocol
Model Context Protocol Servers
by modelcontextprotocol
A Model Context Protocol server for Git repository interaction and automation.
by modelcontextprotocol
A Model Context Protocol server that provides time and timezone conversion capabilities.
by cline
An autonomous coding assistant that can create and edit files, execute terminal commands, and interact with a browser directly from your IDE, operating step‑by‑step with explicit user permission.
by continuedev
Enables faster shipping of code by integrating continuous AI agents across IDEs, terminals, and CI pipelines, offering chat, edit, autocomplete, and customizable agent workflows.
by upstash
Provides up-to-date, version‑specific library documentation and code examples directly inside LLM prompts, eliminating outdated information and hallucinated APIs.
by github
Connects AI tools directly to GitHub, enabling natural‑language interactions for repository browsing, issue and pull‑request management, CI/CD monitoring, code‑security analysis, and team collaboration.
by daytonaio
Provides a secure, elastic infrastructure that creates isolated sandboxes for running AI‑generated code with sub‑90 ms startup, unlimited persistence, and OCI/Docker compatibility.