by awslabs
Wrap existing stdio-based MCP servers to run as AWS Lambda functions, exposing them over HTTPS via API Gateway, Bedrock AgentCore Gateway, or Lambda function URLs, or directly through the Lambda Invoke API.
Enables any stdio‑based MCP server—written in Python or TypeScript—to be packaged and executed inside an AWS Lambda function. Each Lambda invocation starts the server process, forwards the request, returns the response, and shuts down the process, providing a stateless, on‑demand execution model.
Q: How do I tell the adapter how to start my server? A: Provide the server command and arguments as StdioServerParameters (Python) or a plain object (TypeScript). The library supports Python (python -m …) and TypeScript (npx …) stdio servers.
Q: Can I use languages other than Python or TypeScript? A: The library currently only provides adapters for Python and TypeScript stdio servers. Other languages would require a custom wrapper.
Q: What happens to server state between invocations? A: The Lambda execution model is stateless; any in‑memory or on‑disk state is lost after the request finishes. Use stateless tools or external storage if persistence is needed.
Q: How are secrets (API keys, tokens) handled? A: Supply them as encrypted Lambda environment variables. Access is controlled by IAM policies; only callers with permission can invoke the function.
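As a sketch, a small helper can copy only the needed variables out of the Lambda environment into the dict you pass as the env field of your server parameters (the variable names below are hypothetical):

```python
import os


def child_env(*names: str) -> dict[str, str]:
    """Copy selected variables from the Lambda environment for the child server process."""
    return {name: os.environ[name] for name in names if name in os.environ}


# Hypothetical usage: pass child_env("API_KEY") as the `env` of your server parameters,
# so the child process sees only the secrets it needs.
```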
Q: Which authentication methods are supported? A: OAuth via API Gateway or Bedrock AgentCore Gateway, and AWS IAM (SigV4) for Lambda function URLs and the Invoke API.
Q: Do I need to package the MCP server binary inside the Lambda deployment package? A: Yes. Bundling the server avoids the overhead of downloading it on every invocation and ensures it runs in the constrained Lambda environment.
This project enables you to run Model Context Protocol stdio-based servers in AWS Lambda functions.
Currently, most implementations of MCP servers and clients are entirely local on a single machine. A desktop application such as an IDE or Claude Desktop initiates MCP servers locally as child processes and communicates with each of those servers over a long-running stdio stream.
This library helps you to wrap existing stdio MCP servers into Lambda functions. You can invoke these function-based MCP servers from your application using the MCP protocol over short-lived HTTPS connections. Your application can then be a desktop-based app, a distributed system running in the cloud, or any other architecture.
Using this library, the Lambda function will manage the lifecycle of your stdio MCP server. Each Lambda function invocation will:
1. Start the stdio MCP server as a child process.
2. Forward the incoming request to the server over stdio.
3. Return the server's response.
4. Shut down the server process.
This library supports connecting to Lambda-based MCP servers in four ways:
1. An API Gateway endpoint, with OAuth authentication
2. A Bedrock AgentCore Gateway, with OAuth authentication
3. A Lambda function URL, with AWS IAM (SigV4) authentication
4. The Lambda Invoke API, with AWS IAM authentication
The documentation for many stdio-based MCP servers encourages using tools that download and run the server on demand, for example uvx my-mcp-server or npx my-mcp-server. These tools are often not pre-packaged in the Lambda environment, and re-downloading the server on every Lambda invocation can be inefficient. Instead, the examples in this repository show how to package the MCP server along with the Lambda function code, then start it with python or node (or npx --offline) directly.
You will need to determine the right parameters depending on your MCP server's package. This is often a trial-and-error process done locally, since MCP server packaging varies.
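When experimenting locally, it can help to render a candidate set of server parameters as the exact shell command it would run, then try that command in a terminal. A stdlib-only sketch (the module name and flag mirror the placeholder example below):

```python
import shlex
import sys


def to_shell_command(command: str, args: list[str]) -> str:
    """Render a command/args pair as a copy-pasteable shell line."""
    return " ".join(shlex.quote(part) for part in [command, *args])


# Hypothetical server parameters, matching the basic example.
print(to_shell_command(sys.executable, [
    "-m", "my_mcp_server_python_module",
    "--my-server-command-line-parameter", "some_value",
]))
```

If the printed command starts your server and it responds on stdio, the same command and args should work inside the Lambda adapter.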
Basic example:
import sys
from mcp.client.stdio import StdioServerParameters
server_params = StdioServerParameters(
command=sys.executable,
args=[
"-m",
"my_mcp_server_python_module",
"--my-server-command-line-parameter",
"some_value",
],
)
Locally, you would run this module using:
python -m my_mcp_server_python_module --my-server-command-line-parameter some_value
Other examples:
python -m mcpdoc.cli # Note the sub-module
python -c "from mcp_openapi_proxy import main; main()"
python -c "import asyncio; from postgres_mcp.server import main; asyncio.run(main())"
If you use Lambda layers, you need to also set the PYTHONPATH for the python sub-process:
import sys
from mcp.client.stdio import StdioServerParameters
lambda_paths = ["/opt/python"] + sys.path
env_config = {"PYTHONPATH": ":".join(lambda_paths)}
server_params = StdioServerParameters(
command=sys.executable,
args=[
"-c",
"from mcp_openapi_proxy import main; main()",
],
env=env_config,
)
Basic example:
const serverParams = {
command: "npx",
args: [
"--offline",
"my-mcp-server-typescript-module",
"--my-server-command-line-parameter",
"some_value",
],
};
Locally, you would run this module using:
npx --offline my-mcp-server-typescript-module --my-server-command-line-parameter some_value
Other examples:
node /var/task/node_modules/@ivotoby/openapi-mcp-server/bin/mcp-server.js
This solution is compatible with most MCP clients that support the streamable HTTP transport. MCP servers deployed with this architecture can typically be used with off-the-shelf MCP-compatible applications such as Cursor, Cline, Claude Desktop, etc.
You can choose your desired OAuth server provider for this solution. The examples in this repository use Amazon Cognito, or you can use third-party providers such as Okta or Auth0 with API Gateway custom authorization.
import sys
from mcp.client.stdio import StdioServerParameters
from mcp_lambda import APIGatewayProxyEventHandler, StdioServerAdapterRequestHandler
server_params = StdioServerParameters(
command=sys.executable,
args=[
"-m",
"my_mcp_server_python_module",
"--my-server-command-line-parameter",
"some_value",
],
)
request_handler = StdioServerAdapterRequestHandler(server_params)
event_handler = APIGatewayProxyEventHandler(request_handler)
def handler(event, context):
return event_handler.handle(event, context)
See a full, deployable example here.
import {
Handler,
Context,
APIGatewayProxyWithCognitoAuthorizerEvent,
APIGatewayProxyResult,
} from "aws-lambda";
import {
APIGatewayProxyEventHandler,
StdioServerAdapterRequestHandler,
} from "@aws/run-mcp-servers-with-aws-lambda";
const serverParams = {
command: "npx",
args: [
"--offline",
"my-mcp-server-typescript-module",
"--my-server-command-line-parameter",
"some_value",
],
};
const requestHandler = new APIGatewayProxyEventHandler(
new StdioServerAdapterRequestHandler(serverParams)
);
export const handler: Handler = async (
event: APIGatewayProxyWithCognitoAuthorizerEvent,
context: Context
): Promise<APIGatewayProxyResult> => {
return requestHandler.handle(event, context);
};
See a full, deployable example here.
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
# Create OAuth client provider here
async with streamablehttp_client(
url="https://abc123.execute-api.us-west-2.amazonaws.com/prod/mcp",
auth=oauth_client_provider,
) as (
read_stream,
write_stream,
_,
):
async with ClientSession(read_stream, write_stream) as session:
await session.initialize()
tool_result = await session.call_tool("echo", {"message": "hello"})
See a full example as part of the sample chatbot here.
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
const client = new Client(
{
name: "my-client",
version: "0.0.1",
},
{
capabilities: {
sampling: {},
},
}
);
// Create OAuth client provider here
const transport = new StreamableHTTPClientTransport(
new URL("https://abc123.execute-api.us-west-2.amazonaws.com/prod/mcp"),
{
authProvider: oauthProvider,
}
);
await client.connect(transport);
See a full example as part of the sample chatbot here.
This solution is compatible with most MCP clients that support the streamable HTTP transport. MCP servers deployed with this architecture can typically be used with off-the-shelf MCP-compatible applications such as Cursor, Cline, Claude Desktop, etc.
You can choose your desired OAuth server provider with Bedrock AgentCore Gateway, such as Amazon Cognito, Okta, or Auth0.
Using Bedrock AgentCore Gateway in front of your stdio-based MCP server requires that you retrieve the MCP server's tool schema, and provide it in the AgentCore Gateway Lambda target configuration. AgentCore Gateway can then advertise the schema to HTTP clients and validate request inputs and outputs.
To retrieve and save your stdio-based MCP server's tool schema to a file, run:
npx @modelcontextprotocol/inspector --cli --method tools/list <your MCP server command and arguments> > tool-schema.json
# For example:
npx @modelcontextprotocol/inspector --cli --method tools/list uvx mcp-server-time > tool-schema.json
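Before wiring the saved file into the AgentCore Gateway target configuration, it can help to sanity-check its contents. A stdlib-only sketch, assuming the tools/list output shape with a top-level "tools" array:

```python
import json
from pathlib import Path


def tool_names(schema_path: str) -> list[str]:
    """List the tool names declared in a saved tools/list dump."""
    data = json.loads(Path(schema_path).read_text())
    return [tool["name"] for tool in data.get("tools", [])]


# Hypothetical usage: tool_names("tool-schema.json")
```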
import sys
from mcp.client.stdio import StdioServerParameters
from mcp_lambda import BedrockAgentCoreGatewayTargetHandler, StdioServerAdapterRequestHandler
server_params = StdioServerParameters(
command=sys.executable,
args=[
"-m",
"my_mcp_server_python_module",
"--my-server-command-line-parameter",
"some_value",
],
)
request_handler = StdioServerAdapterRequestHandler(server_params)
event_handler = BedrockAgentCoreGatewayTargetHandler(request_handler)
def handler(event, context):
return event_handler.handle(event, context)
See a full, deployable example here.
import { Handler, Context } from "aws-lambda";
import {
BedrockAgentCoreGatewayTargetHandler,
StdioServerAdapterRequestHandler,
} from "@aws/run-mcp-servers-with-aws-lambda";
const serverParams = {
command: "npx",
args: [
"--offline",
"my-mcp-server-typescript-module",
"--my-server-command-line-parameter",
"some_value",
],
};
const requestHandler = new BedrockAgentCoreGatewayTargetHandler(
new StdioServerAdapterRequestHandler(serverParams)
);
export const handler: Handler = async (
event: Record<string, unknown>,
context: Context
): Promise<Record<string, unknown>> => {
return requestHandler.handle(event, context);
};
See a full, deployable example here.
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
# Create OAuth client provider here
async with streamablehttp_client(
url="https://abc123.gateway.bedrock-agentcore.us-west-2.amazonaws.com/mcp",
auth=oauth_client_provider,
) as (
read_stream,
write_stream,
_,
):
async with ClientSession(read_stream, write_stream) as session:
await session.initialize()
tool_result = await session.call_tool("echo", {"message": "hello"})
See a full example as part of the sample chatbot here.
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
const client = new Client(
{
name: "my-client",
version: "0.0.1",
},
{
capabilities: {
sampling: {},
},
}
);
// Create OAuth client provider here
const transport = new StreamableHTTPClientTransport(
new URL("https://abc123.gateway.bedrock-agentcore.us-west-2.amazonaws.com/mcp"),
{
authProvider: oauthProvider,
}
);
await client.connect(transport);
See a full example as part of the sample chatbot here.
This solution uses AWS IAM for authentication, and relies on granting Lambda InvokeFunctionUrl permission to your IAM users and roles to enable access to the MCP server. Clients must use an extension to the MCP Streamable HTTP transport that signs requests with AWS SigV4. Off-the-shelf MCP-compatible applications are unlikely to support this custom transport, so this solution is better suited to service-to-service communication than to end users.
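Under the hood, SigV4 derives a per-day, per-region, per-service signing key from the secret access key via a chain of HMAC-SHA256 operations; the SigV4 transports below do this for you, but the derivation itself is small. A sketch following the algorithm AWS documents (credentials below are AWS's published example values, not real secrets):

```python
import hashlib
import hmac


def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: chained HMAC-SHA256 over date, region, service."""
    k_date = hmac.new(("AWS4" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()
```

The resulting key then signs a canonical representation of each HTTP request, which is what lets Lambda authorize the caller via IAM.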
import sys
from mcp.client.stdio import StdioServerParameters
from mcp_lambda import LambdaFunctionURLEventHandler, StdioServerAdapterRequestHandler
server_params = StdioServerParameters(
command=sys.executable,
args=[
"-m",
"my_mcp_server_python_module",
"--my-server-command-line-parameter",
"some_value",
],
)
request_handler = StdioServerAdapterRequestHandler(server_params)
event_handler = LambdaFunctionURLEventHandler(request_handler)
def handler(event, context):
return event_handler.handle(event, context)
See a full, deployable example here.
import {
Handler,
Context,
APIGatewayProxyEventV2WithIAMAuthorizer,
APIGatewayProxyResultV2,
} from "aws-lambda";
import {
LambdaFunctionURLEventHandler,
StdioServerAdapterRequestHandler,
} from "@aws/run-mcp-servers-with-aws-lambda";
const serverParams = {
command: "npx",
args: [
"--offline",
"my-mcp-server-typescript-module",
"--my-server-command-line-parameter",
"some_value",
],
};
const requestHandler = new LambdaFunctionURLEventHandler(
new StdioServerAdapterRequestHandler(serverParams)
);
export const handler: Handler = async (
event: APIGatewayProxyEventV2WithIAMAuthorizer,
context: Context
): Promise<APIGatewayProxyResultV2> => {
return requestHandler.handle(event, context);
};
See a full, deployable example here.
from mcp import ClientSession
from mcp_proxy_for_aws.client import aws_iam_streamablehttp_client
async with aws_iam_streamablehttp_client(
endpoint="https://url-id-12345.lambda-url.us-west-2.on.aws",
aws_service="lambda",
aws_region="us-west-2",
) as (
read_stream,
write_stream,
_,
):
async with ClientSession(read_stream, write_stream) as session:
await session.initialize()
tool_result = await session.call_tool("echo", {"message": "hello"})
See a full example as part of the sample chatbot here.
import { StreamableHTTPClientWithSigV4Transport } from "@aws/run-mcp-servers-with-aws-lambda";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
const client = new Client(
{
name: "my-client",
version: "0.0.1",
},
{
capabilities: {
sampling: {},
},
}
);
const transport = new StreamableHTTPClientWithSigV4Transport(
new URL("https://url-id-12345.lambda-url.us-west-2.on.aws"),
{
service: "lambda",
region: "us-west-2",
}
);
await client.connect(transport);
See a full example as part of the sample chatbot here.
Like the Lambda function URL approach, this solution uses AWS IAM for authentication. It relies on granting Lambda InvokeFunction permission to your IAM users and roles to enable access to the MCP server. Clients must use a custom MCP transport that directly calls the Lambda Invoke API. Off-the-shelf MCP-compatible applications are unlikely to support this custom transport, so this solution is better suited to service-to-service communication than to end users.
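MCP messages are JSON-RPC 2.0, so a transport over the Invoke API sends one JSON-RPC message per function invocation and reads the response from the invocation result. A stdlib sketch of the payload shape (the tool name and arguments are placeholders):

```python
import json


def jsonrpc_request(method: str, params: dict, request_id: int = 1) -> bytes:
    """Build a JSON-RPC 2.0 request payload, one per Lambda invocation."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }).encode()


# A payload like this would be sent as the Lambda Invoke request body,
# e.g. lambda_client.invoke(FunctionName=..., Payload=payload) with boto3.
payload = jsonrpc_request("tools/call", {"name": "echo", "arguments": {"message": "hello"}})
```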
import sys
from mcp.client.stdio import StdioServerParameters
from mcp_lambda import stdio_server_adapter
server_params = StdioServerParameters(
command=sys.executable,
args=[
"-m",
"my_mcp_server_python_module",
"--my-server-command-line-parameter",
"some_value",
],
)
def handler(event, context):
return stdio_server_adapter(server_params, event, context)
See a full, deployable example here.
import { Handler, Context } from "aws-lambda";
import { stdioServerAdapter } from "@aws/run-mcp-servers-with-aws-lambda";
const serverParams = {
command: "npx",
args: [
"--offline",
"my-mcp-server-typescript-module",
"--my-server-command-line-parameter",
"some_value",
],
};
export const handler: Handler = async (event, context: Context) => {
return await stdioServerAdapter(serverParams, event, context);
};
See a full, deployable example here.
from mcp import ClientSession
from mcp_lambda import LambdaFunctionParameters, lambda_function_client
server_params = LambdaFunctionParameters(
function_name="my-mcp-server-function",
region_name="us-west-2",
)
async with lambda_function_client(server_params) as (
read_stream,
write_stream,
):
async with ClientSession(read_stream, write_stream) as session:
await session.initialize()
tool_result = await session.call_tool("echo", {"message": "hello"})
See a full example as part of the sample chatbot here.
import {
LambdaFunctionParameters,
LambdaFunctionClientTransport,
} from "@aws/run-mcp-servers-with-aws-lambda";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
const serverParams: LambdaFunctionParameters = {
functionName: "my-mcp-server-function",
regionName: "us-west-2",
};
const client = new Client(
{
name: "my-client",
version: "0.0.1",
},
{
capabilities: {
sampling: {},
},
}
);
const transport = new LambdaFunctionClientTransport(serverParams);
await client.connect(transport);
See a full example as part of the sample chatbot here.
See the development guide for instructions to deploy and run the examples in this repository.
See CONTRIBUTING for more information.
This project is licensed under the Apache-2.0 License.