by membranehq
Provides actions for connected integrations on Integration.app, exposing tools through Model Context Protocol endpoints for static and dynamic usage.
The MCP server powers Integration.app by exposing integration-specific tools (actions) via MCP endpoints. It can operate in static mode, returning every available tool for the authenticated user, or in dynamic mode, where a single enable-tools tool is offered to selectively activate the needed actions.
Q: Which transport should I use?
A: Use Streamable HTTP (/mcp) – it is the current, bidirectional, and recommended transport.
Q: How do I limit the tools returned?
A: Deploy the server in dynamic mode (?mode=dynamic) and invoke the enable-tools tool with the desired list.
Q: Can I run the server in the cloud?
A: Yes. Build the Docker image (docker build -t integration-app-mcp-server .) and run it on any container service.
Q: How do I fetch tools for only Google Calendar?
A: Add apps=google-calendar to the query string, e.g., /mcp?apps=google-calendar.
Q: How is chat session persistence achieved?
A: Include an x-chat-id header with a unique identifier; the server maintains a session mapped to that ID.
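The query parameters mentioned in the answers above can be combined on the same request. As an illustrative sketch (the helper below is not part of the server or SDK, and the host is a placeholder), building the endpoint URL might look like:

```typescript
// Illustrative helper (hypothetical, not part of this project): build an
// /mcp URL with the optional mode and apps query parameters from the FAQ.
function buildMcpUrl(base: string, opts: { mode?: string; apps?: string[] } = {}): string {
  const url = new URL('/mcp', base);
  if (opts.mode) url.searchParams.set('mode', opts.mode);
  if (opts.apps && opts.apps.length > 0) url.searchParams.set('apps', opts.apps.join(','));
  return url.toString();
}

console.log(buildMcpUrl('https://example.com', { mode: 'dynamic', apps: ['google-calendar'] }));
// → https://example.com/mcp?mode=dynamic&apps=google-calendar
```

Note that `URLSearchParams` percent-encodes commas when multiple apps are joined; servers that parse query strings normally decode this transparently.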
The Integration App MCP Server is a Model Context Protocol (MCP) server that provides actions for connected integrations on your Integration.app membrane as tools.
Here's our official AI Agent Example that shows you how to use this MCP server in your application.
git clone https://github.com/integration-app/mcp-server.git
cd mcp-server
npm install
npm run build
To run the development server locally, start it with:
npm run dev
The server will be live at http://localhost:3000 ⚡️
# Run the server in test mode
npm run start:test
# then run tests
npm test
Deploy your own instance of this MCP server to any cloud hosting service of your choice.
The project includes a Dockerfile for easy containerized deployment.
docker build -t integration-app-mcp-server .
docker run -p 3000:3000 integration-app-mcp-server
This MCP server supports two transports:
| Transport | Endpoint | Status |
|---|---|---|
| SSE (Server‑Sent Events) | /sse | 🔴 Deprecated as of November 5, 2024 in the MCP spec |
| HTTP (Streamable HTTP) | /mcp | 🟢 Recommended; replaces SSE and supports bidirectional streaming |
Provide an Integration.app access token either as a query parameter or in the Authorization header:
?token=ACCESS_TOKEN
Authorization: Bearer ACCESS_TOKEN
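As a sketch of the two equivalent auth styles above (the helper names are hypothetical, not part of the SDK):

```typescript
// Illustrative helpers (hypothetical names) for the two auth styles.
function withTokenQuery(endpoint: string, token: string): string {
  const url = new URL(endpoint);
  url.searchParams.set('token', token); // ?token=ACCESS_TOKEN style
  return url.toString();
}

function withAuthHeader(token: string): Record<string, string> {
  return { Authorization: `Bearer ${token}` }; // Authorization header style
}

console.log(withTokenQuery('https://example.com/mcp', 'ACCESS_TOKEN'));
// → https://example.com/mcp?token=ACCESS_TOKEN
```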
SSE (Deprecated)
import { SSEClientTransport } from '@modelcontextprotocol/sdk/client/sse.js';

await client.connect(
  new SSEClientTransport(
    new URL(`https://<HOSTED_MCP_SERVER_URL>/sse`),
    {
      requestInit: {
        headers: {
          Authorization: `Bearer ${ACCESS_TOKEN}`,
        },
      },
    }
  )
);
Streamable HTTP (Recommended)
await client.connect(
  new StreamableHTTPClientTransport(
    new URL(`https://<HOSTED_MCP_SERVER_URL>/mcp`),
    {
      requestInit: {
        headers: {
          Authorization: `Bearer ${ACCESS_TOKEN}`,
        },
      },
    }
  )
);
By default, the MCP server runs in static mode, which means it returns all available tools (actions) for all connected integrations.
With dynamic mode (?mode=dynamic), the server will only return one tool: enable-tools. You can use this tool to selectively enable the tools you actually need for that session.
In dynamic mode, your implementation should figure out which tools are most relevant to the user's query. Once you've identified them, prompt the LLM to call the enable-tools tool with the appropriate list.
Want to see how this works in practice? Check out our AI Agent Example.
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';
const client = new Client({
name: 'example-integration-app-mcp-client',
version: '1.0.0',
});
const transport = new StreamableHTTPClientTransport(
new URL(`https://<HOSTED_MCP_SERVER_URL>/mcp?mode=dynamic`),
{
requestInit: {
headers: {
Authorization: `Bearer ${ACCESS_TOKEN}`,
},
},
}
);
await client.connect(transport);
await client.callTool({
name: 'enable-tools',
arguments: {
tools: ['gmail-send-email', 'gmail-read-email'],
},
});
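Deciding which tools to enable is left to your implementation. A naive keyword-matching sketch (illustrative only; in practice agents usually let the LLM pick from the full tool list) could look like:

```typescript
// Naive illustration (not part of the server): pick tools whose names
// contain a word from the user's query, then pass them to enable-tools.
function selectTools(available: string[], query: string): string[] {
  const words = query.toLowerCase().split(/\W+/).filter(Boolean);
  return available.filter((name) =>
    words.some((w) => name.toLowerCase().includes(w))
  );
}

selectTools(['gmail-send-email', 'slack-post-message'], 'send an email');
// returns ['gmail-send-email']
```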
In static mode, the MCP server fetches tools from all active connections associated with the provided token.
You can choose to only fetch tools for a specific integration by passing the apps query parameter: /mcp?apps=google-calendar,google-docs
The MCP server (streamable-http transport only) supports persistent chat sessions. Include an x-chat-id header in your requests to automatically track sessions for that specific chat. This is an experimental feature that we provide in addition to standard MCP sessions.
Starting a new chat session:
POST /mcp
Authorization: Bearer YOUR_ACCESS_TOKEN
x-chat-id: my-awesome-chat-123
Retrieving your chat sessions:
GET /mcp/sessions
Authorization: Bearer YOUR_ACCESS_TOKEN
Response:
{
"my-awesome-chat-123": "session-uuid-1",
"another-chat-456": "session-uuid-2"
}
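A client-side sketch of the retrieval call above (the helper name is hypothetical, and the fetch implementation is injectable only to make the sketch testable):

```typescript
// Sketch: retrieve chat-session mappings from the /mcp/sessions endpoint.
// baseUrl and accessToken are placeholders for your deployment.
type FetchLike = (
  url: string,
  init?: { headers?: Record<string, string> }
) => Promise<{ ok: boolean; status: number; json(): Promise<unknown> }>;

async function getChatSessions(
  baseUrl: string,
  accessToken: string,
  fetchImpl: FetchLike = fetch as unknown as FetchLike
): Promise<Record<string, string>> {
  const res = await fetchImpl(`${baseUrl}/mcp/sessions`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`Failed to list sessions: ${res.status}`);
  return (await res.json()) as Record<string, string>;
}
```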
This feature lets you reuse the same session across a conversation. Check out our AI Agent Example to see how this works in practice.
To use this server with Cursor, update the ~/.cursor/mcp.json file:
{
"mcpServers": {
"integration-app": {
"url": "https://<HOSTED_MCP_SERVER_URL>/sse?token={ACCESS_TOKEN}"
}
}
}
Restart Cursor for the changes to take effect.
To use this server with Claude, update the config file (Settings > Developer > Edit Config):
{
"mcpServers": {
"integration-app": {
"url": "https://<HOSTED_MCP_SERVER_URL>/sse?token={ACCESS_TOKEN}"
}
}
}