by marctheshark3
Offers a unified JSON interface for Ergo blockchain data by converting heterogeneous API responses (JSON, Markdown, plain text) into a consistent structure, and adds advanced analytics such as address clustering, token history tracking, and token estimation for LLM-friendly usage.
Ergo Explorer MCP bridges AI assistants and the Ergo blockchain ecosystem, delivering blockchain data (blocks, transactions, token information, address activity) in a predictable JSON format while supporting both human‑readable (Markdown) and machine‑readable outputs.
git clone https://github.com/marctheshark3/ergo-mcp
cd ergo-mcp
pip install -r requirements.txt
python -m ergo_explorer.server
The service will start on the default port (e.g., 8000).

from ergo_explorer.api import make_request
response = make_request("address_clustering/identify", {
    "address": "9gUDVVx75KyZ...",
    "depth": 2,
    "tx_limit": 100
})
python mcp_response_standardizer.py blockchain_status response.txt
Q: Do I need a running Ergo node? A: No. The MCP can query the public Ergo Explorer API; a node is optional for advanced features.
Q: How are errors returned?
A: Errors follow the standardized schema with `status: "error"`, an `error` object containing `code` and `message`, and full metadata.
Q: Can I get both Markdown and JSON from the same endpoint?
A: Yes. Endpoints support dual-format responses; the standardizer chooses the appropriate conversion based on the `Accept` header or content detection.
Q: How is token estimation performed without `tiktoken`?
A: The MCP falls back to a simple word‑count heuristic, ensuring an estimate is always available.
Q: Is there a way to limit response size?
A: The `metadata.is_truncated` flag indicates when a response was cut to respect token thresholds; you can configure limits via request parameters.
A standardization tool for Ergo MCP API responses that transforms various output formats (JSON, Markdown, plaintext) into a consistent JSON structure for improved integration and usability.
The MCP API returns responses in inconsistent formats: some endpoints emit JSON, others Markdown, and others plain text. This inconsistency makes it difficult to integrate with other systems and requires custom handling for each endpoint.
The `MCPResponseStandardizer` transforms all responses into a consistent JSON structure:
{
"success": true,
"data": {
// Standardized response data extracted from the original
},
"meta": {
"format": "json|markdown|text|mixed",
"endpoint": "endpoint_name",
"timestamp": "ISO-timestamp"
}
}
For error responses:
{
"success": false,
"error": {
"code": 400,
"message": "Error message"
},
"meta": {
"format": "json|markdown|text|mixed",
"endpoint": "endpoint_name",
"timestamp": "ISO-timestamp"
}
}
from mcp_response_standardizer import MCPResponseStandardizer

# Initialize the standardizer
standardizer = MCPResponseStandardizer()

# Standardize a response
endpoint_name = "blockchain_status"
response_content = "..."  # Content from the MCP API
status_code = 200  # HTTP status code from the API call

# Get standardized response
standardized = standardizer.standardize_response(
    endpoint_name,
    response_content,
    status_code
)

# Access the standardized data
if standardized["success"]:
    data = standardized["data"]
    # Use the standardized data...
else:
    error = standardized["error"]
    print(f"Error {error['code']}: {error['message']}")
You can also use the standardizer from the command line:
python mcp_response_standardizer.py blockchain_status response.txt
Where:

- `blockchain_status` is the endpoint name
- `response.txt` is a file containing the response content

A test script, `test_standardizer.py`, is provided to demonstrate the standardizer with sample responses:
python test_standardizer.py
This script runs the standardizer against the sample responses in the `sample_responses` directory. The standardizer uses the following approach:
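A plausible sketch of such an approach: detect the input format by inspecting the content, then dispatch to the matching converter. The detection rules below are assumptions for illustration, not the standardizer's exact logic:

```python
import json

def detect_format(content: str) -> str:
    """Guess the format of a raw MCP response body."""
    try:
        json.loads(content)
        return "json"
    except ValueError:
        pass
    # Markdown markers: headings, tables, or bold text
    if any(marker in content for marker in ("# ", "|", "**")):
        return "markdown"
    return "text"

print(detect_format('{"height": 1000000}'))          # json
print(detect_format("# Status\n| field | value |"))  # markdown
print(detect_format("Blockchain height is high"))    # text
```

Once the format is known, each branch can be normalized into the `success`/`data`/`meta` envelope shown above.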
Ergo Explorer Model Context Protocol (MCP) is a comprehensive server that provides AI assistants with direct access to Ergo blockchain data through a standardized interface.
This project bridges the gap between AI assistants and the Ergo blockchain ecosystem by:
All endpoints in the Ergo Explorer MCP implement a standardized response format system, applied through the `@standardize_response` decorator for automatic format conversion:

{
"status": "success", // or "error"
"data": {
// Endpoint-specific structured data
},
"metadata": {
"execution_time_ms": 123,
"result_size_bytes": 456,
"is_truncated": false,
"token_estimate": 789
}
}
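The decorator and envelope described above could be sketched roughly as follows. This is a simplified illustration, not the project's actual implementation; the real decorator also handles Markdown conversion and truncation:

```python
import functools
import json
import time

def standardize_response(func):
    """Wrap an endpoint so it always returns the standardized envelope."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            body = {"status": "success", "data": func(*args, **kwargs)}
        except Exception as exc:
            body = {"status": "error", "error": {"code": 500, "message": str(exc)}}
        serialized = json.dumps(body)
        body["metadata"] = {
            "execution_time_ms": int((time.monotonic() - start) * 1000),
            "result_size_bytes": len(serialized.encode("utf-8")),
            "is_truncated": False,
            "token_estimate": len(serialized) // 4,  # crude chars-per-token guess
        }
        return body
    return wrapper

@standardize_response
def blockchain_status():
    # Hypothetical endpoint body for illustration
    return {"height": 1000000, "network": "mainnet"}

result = blockchain_status()
print(result["status"])  # success
```

Because the wrapper catches exceptions, callers can rely on every response carrying `status`, `data` or `error`, and `metadata`.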
For more information on response standardization, see RESPONSE_STANDARDIZATION.md.
The Ergo Explorer MCP provides advanced entity identification capabilities through address clustering algorithms. This feature helps identify groups of addresses likely controlled by the same entity.
The following endpoints are available for entity identification:
/address_clustering/identify
/address_clustering/visualize
/address_clustering/openwebui_entity_tool
/address_clustering/openwebui_viz_tool
Ergo Explorer MCP integrates with Open WebUI to provide enhanced visualization and interaction capabilities:
To identify entities related to an address:
from ergo_explorer.api import make_request

# Identify entities for an address
response = make_request("address_clustering/identify", {
    "address": "9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN",
    "depth": 2,
    "tx_limit": 100
})

# Get visualization for an address
viz_response = make_request("address_clustering/visualize", {
    "address": "9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN",
    "depth": 2,
    "tx_limit": 100
})

# Access entity clusters
entities = response["data"]["clusters"]
for entity_id, entity_data in entities.items():
    print(f"Entity {entity_id}: {len(entity_data['addresses'])} addresses")
    print(f"Confidence: {entity_data['confidence_score']}")
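Address clustering of this kind is commonly built on the common-input-ownership heuristic: addresses that spend inputs in the same transaction are assumed to belong to one entity. A minimal union-find sketch of that idea (illustrative only, not the MCP's actual algorithm):

```python
from collections import defaultdict

def cluster_addresses(transactions):
    """Group addresses that co-spend inputs in the same transaction."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for tx in transactions:
        inputs = tx["inputs"]
        for addr in inputs:
            find(addr)  # register every address, even lone inputs
        for addr in inputs[1:]:
            union(inputs[0], addr)

    clusters = defaultdict(set)
    for addr in parent:
        clusters[find(addr)].add(addr)
    return list(clusters.values())

# Hypothetical transactions with placeholder addresses
txs = [
    {"inputs": ["addr1", "addr2"]},  # addr1 and addr2 co-spend
    {"inputs": ["addr2", "addr3"]},  # links addr3 into the same entity
    {"inputs": ["addr4"]},           # independent entity
]
print(sorted(len(c) for c in cluster_addresses(txs)))  # [1, 3]
```

Real clustering also weighs transaction patterns to produce the `confidence_score` seen in the response.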
To use the Open WebUI tools:
[Tool: openwebui_entity_tool]
[Address: 9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN]
[Depth: 2]
[TX Limit: 100]
[Tool: openwebui_viz_tool]
[Address: 9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN]
[Depth: 2]
[TX Limit: 100]
The Ergo Explorer MCP includes built-in token estimation capabilities to help AI assistants optimize their context window usage. This feature provides an estimate of the number of tokens in each response for various LLM models.
When `tiktoken` is not available, the MCP falls back to a word-count estimator. Token estimation is included in the `metadata` section of all standardized responses:
{
"status": "success",
"data": {
// Response data
},
"metadata": {
"execution_time_ms": 123,
"result_size_bytes": 456,
"is_truncated": false,
"token_estimate": 789,
"token_breakdown": {
"data": 650,
"metadata": 89,
"status": 50
}
}
}
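The word-count fallback mentioned above (used when `tiktoken` is unavailable) can be approximated like this; the character-based floor is an extra assumption here, added to avoid underestimating dense JSON with few spaces:

```python
import json

def estimate_tokens(payload, chars_per_token: float = 4.0) -> int:
    """Rough token estimate when tiktoken is unavailable."""
    text = payload if isinstance(payload, str) else json.dumps(payload)
    word_estimate = len(text.split())
    char_estimate = int(len(text) / chars_per_token)
    return max(word_estimate, char_estimate)

print(estimate_tokens("the quick brown fox"))  # 4
```

With `tiktoken` installed, the same number would instead come from the model-specific tokenizer, which is what makes `model_type` (below) meaningful.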
To access token estimates in responses:
from ergo_explorer.api import make_request

# Make a request to any endpoint
response = make_request("blockchain/status")

# Access token estimation information
token_count = response["metadata"]["token_estimate"]
is_truncated = response["metadata"]["is_truncated"]

print(f"Response contains approximately {token_count} tokens")
if is_truncated:
    print("Response was truncated to fit within token limits")
You can specify which LLM model to use for token estimation:
from ergo_explorer.api import make_request

# Request with a specific model type for token estimation
response = make_request(
    "blockchain/address_info",
    {"address": "9hdcMw4eRpJPJGx8RJhvdRgFRsE1URpQCsAWM3wG547gQ9awZgi"},
    model_type="gpt-4"
)
# The token_estimate will be calculated based on GPT-4's tokenization
| Response Type | Target Token Range | Optimization Strategy |
|---|---|---|
| Simple queries | < 500 tokens | Full response without truncation |
| Standard queries | 500-2000 tokens | Selective field inclusion |
| Complex queries | 2000-5000 tokens | Pagination or truncated response |
| Data-intensive | > 5000 tokens | Summary with optional detail retrieval |
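The thresholds in the table translate directly into a dispatch function; the strategy names below are illustrative labels, not identifiers from the codebase:

```python
def choose_strategy(token_estimate: int) -> str:
    """Map a token estimate onto the optimization strategies above."""
    if token_estimate < 500:
        return "full_response"
    if token_estimate <= 2000:
        return "selective_fields"
    if token_estimate <= 5000:
        return "paginate_or_truncate"
    return "summary_with_detail_retrieval"

print(choose_strategy(120))   # full_response
print(choose_strategy(3500))  # paginate_or_truncate
```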
The Ergo Explorer MCP includes comprehensive functionality for tracking the historical ownership of tokens and analyzing how distribution changes over time:
// Simple request with just essential parameters
GET /token/historical_token_holders
{
"token_id": "d71693c49a84fbbecd4908c94813b46514b18b67a99952dc1e6e4791556de413",
"max_transactions": 200
}
Response format includes detailed token transfer history and snapshots of token distribution at various points in time (or block heights).
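The same query can be issued from Python in the style of the earlier `make_request` snippets; the request parameters below come from the GET example above, while the snapshot field names are assumptions based on the description:

```python
# Parameters from the GET example above, as a Python dict.
params = {
    "token_id": "d71693c49a84fbbecd4908c94813b46514b18b67a99952dc1e6e4791556de413",
    "max_transactions": 200,
}

# With the project installed, the call mirrors the earlier snippets:
#   from ergo_explorer.api import make_request
#   response = make_request("token/historical_token_holders", params)
#   for snapshot in response["data"]["snapshots"]:  # field name assumed
#       print(snapshot)

print(len(params["token_id"]))  # 64: a token id is a 32-byte hex digest
```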
Clone the repository:
git clone https://github.com/ergo-mcp/ergo-explorer-mcp.git
cd ergo-explorer-mcp
Install dependencies:
pip install -r requirements.txt
Configure your environment:
# Set up environment variables
export ERGO_EXPLORER_API="https://api.ergoplatform.com/api/v1"
export ERGO_NODE_API="http://your-node-address:9053" # Optional
export ERGO_NODE_API_KEY="your-api-key" # Optional
Run the MCP server:
python -m ergo_explorer.server
Build the Docker image:
docker build -t ergo-explorer-mcp .
Run the container:
docker run -d -p 8000:8000 \
-e ERGO_EXPLORER_API="https://api.ergoplatform.com/api/v1" \
-e ERGO_NODE_API="http://your-node-address:9053" \
-e ERGO_NODE_API_KEY="your-api-key" \
--name ergo-mcp ergo-explorer-mcp
To contribute to the project:
pip install -r requirements.txt
pip install -r requirements.test.txt
pytest
For comprehensive documentation, see the documentation files in the repository (e.g., RESPONSE_STANDARDIZATION.md).
This project is licensed under the MIT License - see the LICENSE file for details.