by QAInsights
Execute JMeter test plans through Model Context Protocol clients, capture console output, generate HTML dashboards, and automatically analyze JTL results to surface performance metrics, bottlenecks, and actionable recommendations.
JMeter MCP Server enables remote execution of JMeter test plans in both GUI and non‑GUI modes via MCP‑compatible tools. It captures execution logs, builds the standard JMeter report dashboard, parses JTL files, computes detailed performance statistics, identifies bottlenecks, and produces visualizations and HTML reports.
Quick start: install uv (or use a Python environment) and add numpy and matplotlib. Create a .env file with JMETER_HOME and optionally JMETER_JAVA_OPTS. Start the server with python jmeter_server.py (or uv run jmeter_server.py). Add the mcpServers entry to your client's JSON config (see README), then send prompts such as "Run JMeter test /path/to/test.jmx".
Exposed tools include execute_jmeter_test_non_gui, analyze_jmeter_results, identify_performance_bottlenecks, get_performance_insights, and generate_visualization.

Q: Do I need a GUI for test execution? A: GUI mode is optional and intended for test development; non-GUI mode is the default for performance.
Q: Which JTL formats are supported? A: Both XML and CSV JTL files are parsed.
Q: How are recommendations generated? A: The Insights Generator compares observed metrics against common performance patterns and provides prioritized actions.
Q: Can I customize the report layout? A: The server outputs a standard JMeter HTML dashboard; additional customization requires modifying the visualization engine.
Q: What MCP clients are compatible? A: Any client that follows the Model Context Protocol, such as Claude Desktop, Cursor, or Windsurf.
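To illustrate the CSV flavor mentioned above: a CSV JTL file is an ordinary comma-separated file with standard JMeter column headers, so a few lines of Python (independent of the server, using inline sample data) show the kind of metrics the analyzer derives from it:

```python
import csv
import io

# Minimal inline sample of a CSV JTL file (standard JMeter column names).
sample_jtl = """timeStamp,elapsed,label,responseCode,success,bytes,Latency
1700000000000,120,Home Page,200,true,5120,80
1700000000200,340,Login,200,true,2048,300
1700000000500,950,Search,500,false,1024,900
"""

rows = list(csv.DictReader(io.StringIO(sample_jtl)))

# Per-sample fields the analyzer aggregates: elapsed time and success flag.
elapsed = [int(r["elapsed"]) for r in rows]
errors = [r for r in rows if r["success"] != "true"]

avg_ms = sum(elapsed) / len(elapsed)
error_rate = len(errors) / len(rows)

print(f"samples={len(rows)} avg={avg_ms:.0f}ms error_rate={error_rate:.1%}")
```

The XML JTL format carries the same fields as attributes on `<httpSample>`/`<sample>` elements, so the downstream statistics are identical regardless of which format the test plan writes.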
This is a Model Context Protocol (MCP) server that allows executing JMeter tests through MCP-compatible clients and analyzing test results.
[!IMPORTANT] 📢 Looking for an AI Assistant inside JMeter? 🚀 Check out Feather Wand
Install uv:
Ensure JMeter is installed on your system and accessible via the command line.
⚠️ Important: Make sure JMeter is executable. You can do this by running:
chmod +x /path/to/jmeter/bin/jmeter
Install the Python dependencies used by the results analyzer:
pip install numpy matplotlib
Create a .env file with your JMeter configuration; refer to the .env.example file for details:
# JMeter Configuration
JMETER_HOME=/path/to/apache-jmeter-5.6.3
JMETER_BIN=${JMETER_HOME}/bin/jmeter
# Optional: JMeter Java options
JMETER_JAVA_OPTS="-Xms1g -Xmx2g"
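Before starting the server, it can help to sanity-check that these values resolve to a runnable JMeter binary. This snippet is not part of jmeter_server.py — just a sketch using the variable names from the example above:

```python
import os

# Assumed to be set in the environment (e.g. exported from the .env file above).
jmeter_home = os.environ.get("JMETER_HOME", "/path/to/apache-jmeter-5.6.3")
jmeter_bin = os.environ.get("JMETER_BIN", os.path.join(jmeter_home, "bin", "jmeter"))

# The binary must exist and be executable (see the chmod step above).
if os.path.isfile(jmeter_bin) and os.access(jmeter_bin, os.X_OK):
    print(f"JMeter found at {jmeter_bin}")
else:
    print(f"JMeter not runnable at {jmeter_bin}; check JMETER_HOME and permissions")
```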
Connect to the server using an MCP-compatible client (e.g., Claude Desktop, Cursor, Windsurf)
Send a prompt to the server:
Run JMeter test /path/to/test.jmx
execute_jmeter_test: Launches JMeter in GUI mode; by JMeter's design, this does not execute the test.
execute_jmeter_test_non_gui: Executes a JMeter test in non-GUI mode (the default mode, for better performance).
analyze_jmeter_results: Analyzes JMeter test results and provides a summary of key metrics and insights.
identify_performance_bottlenecks: Identifies performance bottlenecks in JMeter test results.
get_performance_insights: Provides insights and recommendations for improving performance.
generate_visualization: Generates visualizations of JMeter test results.

Add the following configuration to your MCP client config:
{
"mcpServers": {
"jmeter": {
"command": "/path/to/uv",
"args": [
"--directory",
"/path/to/jmeter-mcp-server",
"run",
"jmeter_server.py"
]
}
}
}
The server will execute the test plan, capture the console output, and return the results to the client.
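This is not necessarily how jmeter_server.py implements it, but a non-GUI run ultimately reduces to the standard JMeter CLI flags (-n non-GUI, -t test plan, -l results file, -e/-o HTML dashboard); a minimal sketch:

```python
import subprocess

def build_jmeter_cmd(jmeter_bin, test_plan, results_file, report_dir=None):
    """Build a JMeter non-GUI command line from the standard CLI flags."""
    cmd = [jmeter_bin, "-n", "-t", test_plan, "-l", results_file]
    if report_dir:
        cmd += ["-e", "-o", report_dir]  # -e/-o: generate the HTML dashboard
    return cmd

def run_jmeter_non_gui(jmeter_bin, test_plan, results_file, report_dir=None):
    """Run the test and return the captured console output."""
    proc = subprocess.run(
        build_jmeter_cmd(jmeter_bin, test_plan, results_file, report_dir),
        capture_output=True, text=True,
    )
    return proc.stdout + proc.stderr

# Example (requires a local JMeter install; paths are placeholders):
# print(run_jmeter_non_gui("/path/to/jmeter/bin/jmeter", "sample_test.jmx", "results.jtl"))
```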
The Test Results Analyzer is a powerful feature that helps you understand your JMeter test results better. It parses JTL files (XML and CSV), computes detailed performance statistics, identifies bottlenecks, and feeds the Insights Generator and visualization engine.
# Run a JMeter test and generate a results file
Run JMeter test sample_test.jmx in non-GUI mode and save results to results.jtl
# Analyze the results
Analyze the JMeter test results in results.jtl and provide detailed insights
# Identify bottlenecks
What are the performance bottlenecks in the results.jtl file?
# Get recommendations
What recommendations do you have for improving performance based on results.jtl?
# Generate visualizations
Create a time series graph of response times from results.jtl