by ivo-toby
Provides a Model Context Protocol (MCP) server that maps Contentful's Content Management API to MCP tools, enabling AI agents and other MCP clients to create, read, update, delete, and publish entries, assets, spaces, environments, content types, and comments, with support for bulk operations, pagination, and comment threading.
npx -y @ivotoby/contentful-management-mcp-server --management-token <YOUR_TOKEN>

Add `--http` (and optionally `--port` and `--host`) to run the server as an HTTP service. Authentication uses a Content Management API token (`CONTENTFUL_MANAGEMENT_ACCESS_TOKEN`). Configure the server in `claude_desktop_config.json`, then invoke tools (`search_entries`, `create_comment`, `bulk_publish`, and more) from the MCP client (over stdio, or over HTTP when started with `--http`).

Q: Do I need to clone the repository to use the server?
A: No. The server can be launched directly with `npx` as shown above, or installed via Smithery for Claude Desktop.
Q: Which authentication methods are supported?
A: Either a Content Management API token (`CONTENTFUL_MANAGEMENT_ACCESS_TOKEN`) or App Identity (app ID, private key, space ID, environment ID).
Q: How does pagination work?
A: List tools return up to 3 items plus metadata (`total`, `skip`, `remaining`). The LLM can request the next page using the provided `skip` value.
Q: Can I run the server as an HTTP service?
A: Yes. Add the `--http` flag and specify `--port`/`--host`. The server will use the StreamableHTTP transport.
Q: Is the server officially supported by Contentful?
A: It is a community-driven project; Contentful has its own official server separate from this implementation.
This is a community-driven server! Contentful has released an official server, which you can find here
An MCP server implementation that integrates with Contentful's Content Management API, providing comprehensive content management capabilities.
To prevent context window overflow in LLMs, list operations (like search_entries and list_assets) are limited to 3 items per request. Each response includes:
This pagination system allows the LLM to efficiently handle large datasets while maintaining context window limits.
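As a sketch of the shape described above, a `search_entries` response might look like this (the `total`, `skip`, and `remaining` fields follow the description above; the `items` wrapper and values are illustrative):

```json
{
  "items": [
    { "sys": { "id": "entry-1" }, "fields": { "title": "First post" } },
    { "sys": { "id": "entry-2" }, "fields": { "title": "Second post" } },
    { "sys": { "id": "entry-3" }, "fields": { "title": "Third post" } }
  ],
  "total": 12,
  "skip": 0,
  "remaining": 9
}
```

A follow-up call would pass the next `skip` value (here, 3) to retrieve the next page of three items.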
The bulk operations feature provides efficient management of multiple content items simultaneously:
These bulk operation tools are ideal for content migrations, mass updates, or batch publishing workflows.
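As a purely illustrative sketch (the exact argument schema is not documented here, and `entityIds` is a hypothetical parameter name), a `bulk_publish` call might take a list of entry IDs along these lines:

```json
{
  "tool": "bulk_publish",
  "arguments": {
    "entityIds": ["entry-1", "entry-2", "entry-3"]
  }
}
```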
Comments support threading functionality to enable structured conversations and work around the 512-character limit. Use the `parent` parameter in `create_comment` to reply to an existing comment.

Example usage:

- Create a top-level comment: call `create_comment` with `entryId`, `body`, and `status`.
- Reply to a comment: call `create_comment` with `entryId`, `body`, `status`, and `parent` (the ID of the comment you're replying to).
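Putting the parameters above together, a threaded reply might look like this (IDs are placeholders, and the `status` value is illustrative):

```json
{
  "tool": "create_comment",
  "arguments": {
    "entryId": "<entry-id>",
    "body": "Agreed - let's split this entry in two.",
    "status": "active",
    "parent": "<comment-id-being-replied-to>"
  }
}
```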
The project includes an MCP Inspector tool that helps with development and debugging:
Run `npm run inspect` to start the inspector; you can open it by going to http://localhost:5173. Run `npm run inspect:watch` to automatically restart the inspector when files change. The project also contains an `npm run dev` command, which rebuilds and reloads the MCP server on every change.
These variables can also be set as arguments:

- `CONTENTFUL_HOST` / `--host`: Contentful Management API endpoint (defaults to https://api.contentful.com)
- `CONTENTFUL_MANAGEMENT_ACCESS_TOKEN` / `--management-token`: your Content Management API token
- `ENABLE_HTTP_SERVER` / `--http`: set to "true" to enable HTTP/SSE mode
- `HTTP_PORT` / `--port`: port for the HTTP server (default: 3000)
- `HTTP_HOST` / `--http-host`: host for the HTTP server (default: localhost)

You can scope the space ID and environment ID to ensure the LLM will only perform operations on the defined space/environment IDs.
This is mainly to support agents that are meant to operate within specific spaces. If both the `SPACE_ID` and `ENVIRONMENT_ID` environment variables are set, the tools will not report needing these values and the handlers will use the environment variables for CMA operations. You will also lose access to the tools in the space handler, since those tools operate across spaces.
You can also set `SPACE_ID` and `ENVIRONMENT_ID` using the arguments `--space-id` and `--environment-id`.
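As a sketch, a Claude Desktop entry scoped to one space and environment could look like this (IDs are placeholders):

```json
{
  "mcpServers": {
    "contentful": {
      "command": "npx",
      "args": ["-y", "@ivotoby/contentful-management-mcp-server"],
      "env": {
        "CONTENTFUL_MANAGEMENT_ACCESS_TOKEN": "<Your CMA token>",
        "SPACE_ID": "<your-space-id>",
        "ENVIRONMENT_ID": "master"
      }
    }
  }
}
```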
Instead of providing a management token, you can also leverage App Identity for authentication. You will have to set up and install a Contentful App and pass the following parameters when calling the MCP server:

- `--app-id`: the app ID which provides the AppToken
- `--private-key`: the private key you created in the user interface for your app, tied to the `app_id`
- `--space-id`: the space ID in which the app is installed
- `--environment-id`: the environment ID (within the space) in which the app is installed

With these values, the MCP server will request a temporary AppToken to perform content operations in the defined space/environment. This is especially useful when using this MCP server in backend systems that act as MCP clients (like chat agents).
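For example, a configuration using App Identity instead of a management token might look like this (all values are placeholders; a multiline private key passed inline will need appropriate escaping):

```json
{
  "mcpServers": {
    "contentful": {
      "command": "npx",
      "args": [
        "-y",
        "@ivotoby/contentful-management-mcp-server",
        "--app-id", "<app-id>",
        "--private-key", "<private-key>",
        "--space-id", "<space-id>",
        "--environment-id", "<environment-id>"
      ]
    }
  }
}
```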
You do not need to clone this repo to use this MCP, you can simply add it to
your claude_desktop_config.json
:
Add or edit ~/Library/Application Support/Claude/claude_desktop_config.json
and add the following lines:
{
"mcpServers": {
"contentful": {
"command": "npx",
"args": ["-y", "@ivotoby/contentful-management-mcp-server"],
"env": {
"CONTENTFUL_MANAGEMENT_ACCESS_TOKEN": "<Your CMA token>"
}
}
}
}
If your MCP client does not support setting environment variables, you can also set the management token using an argument, like this:
{
  "mcpServers": {
    "contentful": {
      "command": "npx",
      "args": [
        "-y",
        "@ivotoby/contentful-management-mcp-server",
        "--management-token",
        "<your token>",
        "--host",
        "https://api.contentful.com"
      ]
    }
  }
}
To install Contentful Management Server for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @ivotoby/contentful-management-mcp-server --client claude
If you want to contribute and test what Claude does with your contributions:

- Run `npm run dev`; this starts the watcher that rebuilds the MCP server on every change.
- Update `claude_desktop_config.json` to reference the project directly, i.e.:

{
"mcpServers": {
"contentful": {
"command": "node",
"args": ["/Users/ivo/workspace/contentful-mcp/bin/mcp-server.js"],
"env": {
"CONTENTFUL_MANAGEMENT_ACCESS_TOKEN": "<Your CMA Token>"
}
}
}
}
This allows you to test any modification to the MCP server with Claude directly. However, if you add new tools or resources you will need to restart Claude Desktop.
The MCP server supports two transport modes:
The default transport mode uses standard input/output streams for communication. This is ideal for integration with MCP clients that support stdio transport, like Claude Desktop.
To use stdio mode, simply run the server without the `--http` flag:
npx -y contentful-mcp --management-token YOUR_TOKEN
# or alternatively
npx -y @ivotoby/contentful-management-mcp-server --management-token YOUR_TOKEN
The server also supports the StreamableHTTP transport as defined in the MCP protocol. This mode is useful for web-based integrations or when running the server as a standalone service.
To use StreamableHTTP mode, run with the `--http` flag:
npx -y contentful-mcp --management-token YOUR_TOKEN --http --port 3000
# or alternatively
npx -y @ivotoby/contentful-management-mcp-server --management-token YOUR_TOKEN --http --port 3000
The implementation follows the standard MCP protocol specification, allowing any MCP client to connect to the server without special handling.
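For reference, a StreamableHTTP client sends JSON-RPC messages as HTTP POST bodies; an `initialize` request (following the MCP specification, with an illustrative client name and protocol version) looks roughly like this:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "0.1.0" }
  }
}
```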
The server implements comprehensive error handling.
MIT License
This MCP server enables Claude (or other agents that can consume MCP resources) to update and delete content, spaces, and content models, so be careful about what you allow Claude to do with your Contentful spaces!
This MCP server is not officially supported by Contentful (yet).