by knocklabs
Enables integration of Knock's notification and management APIs into AI agent frameworks and Model Context Protocol clients, supplying ready‑made tools for OpenAI, Vercel AI SDK, LangChain, and custom MCP servers.
Provides a set of helper libraries that expose Knock's Management API as function‑calling tools for popular AI agent ecosystems (OpenAI, Vercel AI SDK, LangChain, Mastra) and for direct use via a Model Context Protocol (MCP) server. The toolkit abstracts authentication, permission handling, and context defaults so agents can trigger notifications, manage workflows, and interact with Knock resources without custom boilerplate.
npm install @knocklabs/agent-toolkit
To get started, import the createKnockToolkit helper from the entry point that matches your framework (e.g., @knocklabs/agent-toolkit/openai), then pass the generated tool definitions to your LLM call (tools in OpenAI, toolCalls in Vercel AI SDK, bindTools in LangChain, etc.). Execute the resulting tool calls with handleToolCall or by invoking the returned tool objects directly. For a stand‑alone MCP server, run:
npx -y @knocklabs/agent-toolkit -p local-mcp --service-token <YOUR_SERVICE_TOKEN>
You can limit exposed tools with --tools and enable workflow‑as‑tool mode with --workflows.
You can set default context values (environment, userId, tenantId) to reduce repetitive parameters, and workflow‑as‑tool mode lets agents run approved workflows (e.g., comment-created, activate-account) directly from LLM‑driven conversations.
Q: Do I need a Knock account to use the toolkit?
A: Yes. You must have a Knock account and generate a service token.
Q: Which LLM providers are supported?
A: The toolkit includes helpers for OpenAI, Vercel AI SDK, LangChain, and Mastra. Any provider that supports function calling can be used by manually passing the generated tool definitions.
Q: How do I restrict which tools are exposed?
A: Use the permissions object when creating the toolkit or the --tools flag when starting the MCP server.
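For example, a minimal sketch (placeholder token) that exposes only read-level workflow tools:
const toolkit = await createKnockToolkit({
  serviceToken: "kst_12345",
  permissions: {
    workflows: { read: true }, // omit run/manage to keep those tools hidden
  },
});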
Q: Can I set default user or tenant context?
A: Yes. Provide userId, tenantId, and environment in the toolkit configuration or as CLI options for the MCP server.
Q: Is the MCP server production‑ready?
A: It is provided as a beta local server for development and testing. For production workloads, embed the toolkit directly in your service or host the MCP server behind appropriate security controls.
The Knock Agent Toolkit enables popular agent frameworks, including OpenAI and Vercel's AI SDK, to integrate with Knock's APIs using tools (otherwise known as function calling). It also allows you to integrate Knock into a Model Context Protocol (MCP) client such as Cursor, Windsurf, or Claude Code.
Using the Knock agent toolkit allows you to build powerful agent systems that are capable of sending cross-channel notifications to the humans who need to be in the loop. As a developer, it also helps you build Knock integrations and manage your Knock account.
You can read more in the documentation.
The Knock Agent Toolkit provides five main entry points:
@knocklabs/agent-toolkit/ai-sdk: Helpers for integrating with Vercel's AI SDK.
@knocklabs/agent-toolkit/langchain: Helpers for integrating with LangChain's JS SDK.
@knocklabs/agent-toolkit/openai: Helpers for integrating with the OpenAI SDK.
@knocklabs/agent-toolkit/mastra: Helpers for integrating with the Mastra framework.
@knocklabs/agent-toolkit/modelcontextprotocol: Low-level helpers for integrating with the Model Context Protocol (MCP).
The agent toolkit exposes a large subset of the Knock Management API, plus the Knock API endpoints you might need to invoke via an agent. You can see the full list of tools in the source code.
It's possible to pass additional context to the configuration of each library to help scope the calls made by the agent toolkit to Knock. The available properties to configure are:
environment: The slug of the Knock environment you wish to execute actions in by default, such as development.
userId: The user ID of the current user. When set, this will be the default passed to user tools.
tenantId: The ID of the current tenant. When set, this will be the default passed to any tool that accepts a tenant.
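For example, a minimal sketch (with placeholder values) that scopes all tool calls to a default environment, user, and tenant:
import { createKnockToolkit } from "@knocklabs/agent-toolkit/ai-sdk";

const toolkit = await createKnockToolkit({
  serviceToken: process.env.KNOCK_SERVICE_TOKEN!,
  environment: "development", // default environment slug for tool calls
  userId: "user_123", // default user passed to user-scoped tools
  tenantId: "tenant_abc", // default tenant for tenant-aware tools
});
To start using the Knock MCP as a local server, you must start it with a service token. You can run it using npx.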
npx -y @knocklabs/agent-toolkit -p local-mcp --service-token kst_12345
By default, the MCP server will expose all tools to the LLM. To limit the tools available, you can use the --tools (-t) flag:
# Pass all tools
npx -y @knocklabs/agent-toolkit -p local-mcp --tools="*"
# Pass a specific category of tools
npx -y @knocklabs/agent-toolkit -p local-mcp --tools "workflows.*"
# Pass specific tools only
npx -y @knocklabs/agent-toolkit -p local-mcp --tools "workflows.triggerWorkflow"
If you wish to enable workflows-as-tools within the MCP server, you must set the --workflows flag to pass in a list of approved workflow keys to expose. This ensures that you keep the number of tools exposed to your MCP client to a minimum.
npx -y @knocklabs/agent-toolkit -p local-mcp --workflows comment-created activate-account
It's also possible to pass environment, userId, and tenant to the local MCP server to set default values. Use the --help flag to view additional server options.
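A hedged sketch of what setting those defaults might look like (the --environment, --user-id, and --tenant flag names below are assumptions; confirm the exact names with --help):
# Assumed flag names for illustration only; verify against --help output
npx -y @knocklabs/agent-toolkit -p local-mcp --service-token kst_12345 --environment development --user-id user_123 --tenant tenant_abc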
The agent toolkit provides a createKnockToolkit helper under the /ai-sdk path for easily integrating with the AI SDK and returning tools ready for use.
npm install @knocklabs/agent-toolkit
Import the createKnockToolkit helper, configure it, and use it in your LLM call:
import { createKnockToolkit } from "@knocklabs/agent-toolkit/ai-sdk";
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";
import { systemPrompt } from "@/lib/ai/prompts";
export const maxDuration = 30;
export async function POST(req: Request) {
  const { messages } = await req.json();

  const toolkit = await createKnockToolkit({
    serviceToken: "kst_12345",
    permissions: {
      // Set the permissions of the tools to expose
      workflows: { read: true, run: true, manage: true },
    },
  });

  const result = streamText({
    model: openai("gpt-4o"),
    system: systemPrompt,
    messages,
    tools: {
      // The tools exposed here are determined by the `permissions`
      // configuration above; in this example, only the workflow
      // tools are made available to the model.
      ...toolkit.getAllTools(),
    },
  });

  return result.toDataStreamResponse();
}
The agent toolkit provides a createKnockToolkit helper under the /openai path for easily integrating with the OpenAI SDK and returning tools ready for use.
npm install @knocklabs/agent-toolkit
Import the createKnockToolkit helper, configure it, and use it in your LLM call:
import { createKnockToolkit } from "@knocklabs/agent-toolkit/openai";
import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  const toolkit = await createKnockToolkit({
    serviceToken: "kst_12345",
    permissions: {
      // Set the permissions of the tools to expose
      workflows: { read: true, run: true, manage: true },
    },
  });

  // Example conversation; replace with your own messages
  const messages = [
    { role: "user" as const, content: "Trigger the comment-created workflow for user_123." },
  ];

  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages,
    // The tools given here are determined by the `permissions`
    // configuration above; in this example, only the workflow
    // tools are exposed.
    tools: toolkit.getAllTools(),
  });

  // Execute any tool calls requested by the model
  const message = completion.choices[0].message;
  const toolMessages = await Promise.all(
    (message.tool_calls ?? []).map((tc) => toolkit.handleToolCall(tc))
  );
}

main();
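To complete the loop, append the assistant message and the tool results and call the model again; a minimal sketch (assuming handleToolCall returns OpenAI-compatible tool messages, per the quick-start above):
// Inside main(), after executing the tool calls:
const followUp = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [...messages, message, ...toolMessages],
});
console.log(followUp.choices[0].message.content);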
The agent toolkit provides a createKnockToolkit helper under the /langchain path for easily integrating with the LangChain JS SDK and returning tools ready for use.
npm install @knocklabs/agent-toolkit
Import the createKnockToolkit helper, configure it, and use it in your LLM call:
import { createKnockToolkit } from "@knocklabs/agent-toolkit/langchain";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";
import { LangChainAdapter } from "ai";
const systemPrompt = `You are a helpful assistant.`;
export const maxDuration = 30;
export async function POST(req: Request) {
  const { prompt } = await req.json();

  // Optional: resolve your auth context here if you need per-user scoping,
  // e.g. with your auth provider's helper: const authContext = await auth.protect();

  // Instantiate a new Knock toolkit
  const toolkit = await createKnockToolkit({
    serviceToken: "kst_12345",
    permissions: {
      // (optional but recommended): Set the permissions of the tools to expose
      workflows: { read: true, run: true, manage: true },
    },
  });

  const model = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });
  const modelWithTools = model.bindTools(toolkit.getAllTools());

  const messages = [new SystemMessage(systemPrompt), new HumanMessage(prompt)];
  const aiMessage = await modelWithTools.invoke(messages);
  messages.push(aiMessage);

  for (const toolCall of aiMessage.tool_calls || []) {
    // Look up the selected tool by its `name` and invoke it
    const selectedTool = toolkit.getToolMap()[toolCall.name];
    const toolMessage = await selectedTool.invoke(toolCall);
    messages.push(toolMessage);
  }

  // To simplify the setup, this example uses the ai-sdk LangChain adapter
  // to stream the results back to the /langchain page.
  // For more details, see: https://sdk.vercel.ai/providers/adapters/langchain
  const stream = await modelWithTools.stream(messages);
  return LangChainAdapter.toDataStreamResponse(stream);
}
The agent toolkit provides a createKnockToolkit helper under the /mastra path for easily integrating with the Mastra framework and returning tools ready for use.
npm install @knocklabs/agent-toolkit
Import the createKnockToolkit helper, configure it, and use it when defining your agent:
import { anthropic } from "@ai-sdk/anthropic";
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
import { createKnockToolkit } from "@knocklabs/agent-toolkit/mastra";
const toolkit = await createKnockToolkit({
  serviceToken: "knock_st_",
  permissions: {
    // (optional but recommended): Set the permissions of the tools to expose
    workflows: { read: true, run: true, manage: true },
  },
  userId: "10",
});

export const weatherAgent = new Agent({
  name: "Weather Agent",
  instructions: `You are a helpful weather assistant that provides accurate weather information.`,
  model: anthropic("claude-3-5-sonnet-20241022"),
  tools: toolkit.getAllTools(),
  memory: new Memory({
    storage: new LibSQLStore({
      url: "file:../mastra.db", // path is relative to the .mastra/output directory
    }),
  }),
});
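You can then call the agent as usual; a minimal usage sketch (assuming Mastra's standard agent.generate API):
// Elsewhere in your app:
const result = await weatherAgent.generate(
  "Send user 10 a notification if severe weather is expected tomorrow."
);
console.log(result.text);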
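To register the local MCP server with an MCP client such as Cursor, Windsurf, or Claude Code, add an entry like the following to your client's MCP configuration: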
{
  "mcpServers": {
    "local-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "@knocklabs/agent-toolkit",
        "-p",
        "local-mcp",
        "--service-token",
        "<YOUR_SERVICE_TOKEN>"
      ],
      "env": {}
    }
  }
}
Alternatively, register it with the Claude Code CLI:
claude mcp add local-mcp npx -y @knocklabs/agent-toolkit -p local-mcp --service-token <YOUR_SERVICE_TOKEN>