by tuannvm
Enables large language models to interact with real tools and systems through Slack conversations, acting as a bridge that routes LLM requests to Model Context Protocol (MCP) servers and returns the results to Slack users.
Provides a production‑ready bridge that lets AI models (OpenAI, Anthropic, Ollama) execute commands on MCP servers—such as filesystem, Git, Kubernetes, databases—directly from Slack threads.
To get started, set the required environment variables (SLACK_BOT_TOKEN, SLACK_APP_TOKEN, OPENAI_API_KEY), create a config.json that defines Slack credentials, LLM provider settings, and MCP servers, then run the binary (slack-mcp-client --config config.json) or launch the Docker image (ghcr.io/tuannvm/slack-mcp-client:latest).
Q: Which LLM providers are supported?
A: OpenAI, Anthropic, and local Ollama models via LangChain.
Q: How does the bot handle tool naming collisions? A: Each tool name is prefixed with the MCP server identifier, guaranteeing uniqueness.
Q: Can I run the client without Docker?
A: Yes, install via go install github.com/tuannvm/slack-mcp-client@latest or build from source.
Q: Is there a way to limit which tools are exposed?
A: The unified config allows allowList / blockList definitions per MCP server.
Q: How are metrics exposed?
A: A /metrics endpoint (default port 8080) provides Prometheus-compatible counters for tool invocations and LLM token usage.
A production-ready bridge between Slack and AI models with full MCP compatibility.
This client enables AI models (OpenAI, Anthropic, Ollama) to interact with real tools and systems through Slack conversations. Built on the industry-standard Model Context Protocol (MCP), it provides secure access to filesystems, databases, Kubernetes clusters, Git repositories, and custom tools.
Compatible with MCP Specification 2025-06-18 - Compliant with the latest Model Context Protocol standards
Compliant with the official Model Context Protocol (2025-06-18 specification):
Authentication with Server-Sent Events (SSE) MCP servers can be achieved using the following setup:
Example:
{
"httpHeaders": {
"Authorization": "Bearer YOUR_TOKEN_HERE"
}
}
Make sure to replace YOUR_TOKEN_HERE with your actual token for authentication.
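For context, here is a sketch of how these headers might sit inside a full mcpServers entry, combining the httpHeaders block with the SSE transport fields shown in the configuration reference below (the server name and URL are placeholders):
{
  "mcpServers": {
    "web-api": {
      "url": "http://localhost:8080/mcp",
      "transport": "sse",
      "initializeTimeoutSeconds": 30,
      "httpHeaders": {
        "Authorization": "Bearer YOUR_TOKEN_HERE"
      }
    }
  }
}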
vectorStoreId support (reuse an existing OpenAI vector store; see the RAG configuration below)
Download the latest binary from the GitHub releases page or install using Go:
# Install latest version using Go
go install github.com/tuannvm/slack-mcp-client@latest
# Or build from source
git clone https://github.com/tuannvm/slack-mcp-client.git
cd slack-mcp-client
make build
# Binary will be in ./bin/slack-mcp-client
After installing the binary, you can run it locally with the following steps:
# Using environment variables directly
export SLACK_BOT_TOKEN="xoxb-your-bot-token"
export SLACK_APP_TOKEN="xapp-your-app-token"
export OPENAI_API_KEY="sk-your-openai-key"
export OPENAI_MODEL="gpt-4o"
export LOG_LEVEL="info"
# Or create a .env file and source it
cat > .env << EOL
SLACK_BOT_TOKEN="xoxb-your-bot-token"
SLACK_APP_TOKEN="xapp-your-app-token"
OPENAI_API_KEY="sk-your-openai-key"
OPENAI_MODEL="gpt-4o"
LOG_LEVEL="info"
EOL
source .env
# Create config.json with the new unified configuration format
cat > config.json << EOL
{
"\$schema": "https://github.com/tuannvm/slack-mcp-client/schema/config-schema.json",
"version": "2.0",
"slack": {
"botToken": "\${SLACK_BOT_TOKEN}",
"appToken": "\${SLACK_APP_TOKEN}"
},
"llm": {
"provider": "openai",
"useNativeTools": true,
"providers": {
"openai": {
"model": "gpt-4o",
"apiKey": "\${OPENAI_API_KEY}",
"temperature": 0.7
}
}
},
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "\$HOME"]
}
},
"monitoring": {
"enabled": true,
"metricsPort": 8080,
"loggingLevel": "info"
}
}
EOL
# Run with unified configuration (looks for config.json in current directory)
slack-mcp-client --config config.json
# Enable debug mode with structured logging
slack-mcp-client --config config.json --debug
# Validate configuration before running
slack-mcp-client --config-validate --config config.json
# Configure metrics port via config file or flag
slack-mcp-client --config config.json --metrics-port 9090
If you have an existing mcp-servers.json file from a previous version, you can migrate to the new unified configuration format:
# Automatic migration (recommended)
slack-mcp-client --migrate-config --config legacy-mcp-servers.json --output config.json
# Manual migration: Use examples as templates
cp examples/minimal.json config.json
# Edit config.json with your specific settings
# Validate the new configuration
slack-mcp-client --config-validate --config config.json
The new configuration format provides:
A single config.json file
${VAR_NAME} syntax for secrets
The application will connect to Slack and start listening for messages. You can check the logs for any errors or connection issues.
The client includes an improved RAG (Retrieval-Augmented Generation) system that's compatible with LangChain Go and provides professional-grade performance:
{
"$schema": "https://github.com/tuannvm/slack-mcp-client/schema/config-schema.json",
"version": "2.0",
"slack": {
"botToken": "${SLACK_BOT_TOKEN}",
"appToken": "${SLACK_APP_TOKEN}"
},
"llm": {
"provider": "openai",
"useNativeTools": true,
"providers": {
"openai": {
"model": "gpt-4o",
"apiKey": "${OPENAI_API_KEY}"
}
}
},
"rag": {
"enabled": true,
"provider": "simple",
"chunkSize": 1000,
"providers": {
"simple": {
"databasePath": "./knowledge.json"
},
"openai": {
"indexName": "my-knowledge-base",
"vectorStoreId": "vs_existing_store_id",
"dimensions": 1536,
"maxResults": 10
}
}
}
}
# Ingest PDF files from a directory
slack-mcp-client --rag-ingest ./company-docs --rag-db ./knowledge.json
# Test search functionality
slack-mcp-client --rag-search "vacation policy" --rag-db ./knowledge.json
# Get database statistics
slack-mcp-client --rag-stats --rag-db ./knowledge.json
Once configured, the LLM can automatically search your knowledge base:
User: "What's our vacation policy?"
AI: "Let me search our knowledge base for vacation policy information..." (Automatically searches RAG database)
AI: "Based on our company policy documents, you get 15 days of vacation..."
The client supports advanced prompt engineering capabilities for creating specialized AI assistants:
Create custom AI personalities and behaviors:
# Create a custom system prompt file
cat > sales-assistant.txt << EOL
You are SalesGPT, a helpful sales assistant specializing in B2B software sales.
Your expertise includes:
- Lead qualification and discovery
- Solution positioning and value propositions
- Objection handling and negotiation
- CRM best practices and sales processes
Always:
- Ask qualifying questions to understand prospect needs
- Provide specific, actionable sales advice
- Reference industry best practices
- Maintain a professional yet friendly tone
When discussing pricing, always emphasize value over cost.
EOL
# Use the custom prompt
slack-mcp-client --system-prompt ./sales-assistant.txt
Define prompts in your configuration:
{
"$schema": "https://github.com/tuannvm/slack-mcp-client/schema/config-schema.json",
"version": "2.0",
"slack": {
"botToken": "${SLACK_BOT_TOKEN}",
"appToken": "${SLACK_APP_TOKEN}"
},
"llm": {
"provider": "openai",
"useNativeTools": true,
"customPrompt": "You are a helpful DevOps assistant specializing in Kubernetes and cloud infrastructure.",
"providers": {
"openai": {
"model": "gpt-4o",
"apiKey": "${OPENAI_API_KEY}",
"temperature": 0.7
}
}
}
}
Create specialized assistants for different use cases:
Agent Mode enables more interactive and context-aware conversations using LangChain's agent framework. Instead of single-prompt interactions, agents can engage in multi-step reasoning, use tools more strategically, and maintain better context throughout conversations.
Agent Mode uses LangChain's conversational agent framework to provide:
Enable Agent Mode in your configuration file:
{
"$schema": "https://github.com/tuannvm/slack-mcp-client/schema/config-schema.json",
"version": "2.0",
"slack": {
"botToken": "${SLACK_BOT_TOKEN}",
"appToken": "${SLACK_APP_TOKEN}"
},
"llm": {
"provider": "openai",
"useNativeTools": true,
"useAgent": true,
"customPrompt": "You are a DevOps expert specializing in Kubernetes and cloud infrastructure. Always think through problems step by step.",
"maxAgentIterations": 20,
"providers": {
"openai": {
"model": "gpt-4o",
"apiKey": "${OPENAI_API_KEY}",
"temperature": 0.7
}
}
},
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
},
"github": {
"command": "github-mcp-server",
"args": ["stdio"],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
}
}
}
}
llm.useAgent: Enable agent mode (default: false)
llm.useNativeTools: Use native LangChain tools vs system prompt-based tools (default: false)
llm.customPrompt: System prompt for agent behavior
llm.maxAgentIterations: Maximum agent reasoning steps (default: 20)
Standard Mode:
Agent Mode:
Interactive Development Consultation:
User: "I need help optimizing my React app performance"
Agent Response:
🤖 I'd be happy to help optimize your React app performance! Let me understand your current setup better.
[Agent maintains conversation context and asks relevant follow-up questions]
Agent: "What specific performance issues are you experiencing? Are you seeing slow renders, large bundle sizes, or something else?"
User: "The app takes too long to load initially"
Agent: "Let me check your current bundle setup and suggest optimizations..."
[Agent uses filesystem tools to analyze the project structure and provides targeted advice]
Contextual Problem Solving:
User: "Can you help me with my deployment pipeline?"
Agent Response:
🤖 I'll help you with your deployment pipeline. Since I know you're working on a React project, let me check your current CI/CD setup.
[Agent leverages previous conversation context and user information to provide personalized assistance]
[Agent strategically uses relevant tools based on the conversation flow]
For deploying to Kubernetes, a Helm chart is available in the helm-chart directory. This chart provides a flexible way to deploy the slack-mcp-client with proper configuration and secret management.
The Helm chart is also available directly from GitHub Container Registry, allowing for easier installation without needing to clone the repository:
# Log in to the GitHub Container Registry (only needed once)
helm registry login ghcr.io -u USERNAME -p GITHUB_TOKEN
# Pull the Helm chart
helm pull oci://ghcr.io/tuannvm/charts/slack-mcp-client --version 0.1.0
# Or install directly
helm install my-slack-bot oci://ghcr.io/tuannvm/charts/slack-mcp-client --version 0.1.0 -f values.yaml
You can check available versions by visiting the GitHub Container Registry in your browser.
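You can also inspect the published chart from the CLI. This is a sketch using standard Helm commands; it assumes a Helm version with OCI support and reuses the chart reference and version from above:
# Show chart metadata for a specific version
helm show chart oci://ghcr.io/tuannvm/charts/slack-mcp-client --version 0.1.0
# Show the default values shipped with the chart
helm show values oci://ghcr.io/tuannvm/charts/slack-mcp-client --version 0.1.0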
# Create a values file with your configuration
cat > values.yaml << EOL
secret:
  create: true
  env:
    SLACK_BOT_TOKEN: "xoxb-your-bot-token"
    SLACK_APP_TOKEN: "xapp-your-app-token"
    OPENAI_API_KEY: "sk-your-openai-key"
    OPENAI_MODEL: "gpt-4o"
    LOG_LEVEL: "info"
# Optional: Configure MCP servers
configMap:
  create: true
EOL
# Install the chart
helm install my-slack-bot ./helm-chart/slack-mcp-client -f values.yaml
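Once installed, the release can be managed with the usual Helm commands (a sketch, reusing the release name and chart path from the install step above):
# Apply updated values or a newer chart version
helm upgrade my-slack-bot ./helm-chart/slack-mcp-client -f values.yaml
# Check the release status
helm status my-slack-bot
# Remove the release
helm uninstall my-slack-bot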
The Helm chart supports various configuration options including:
For more details, see the Helm chart README.
The Helm chart uses the Docker image from GitHub Container Registry (GHCR) by default. You can specify a particular version or use the latest tag:
# In your values.yaml
image:
  repository: ghcr.io/tuannvm/slack-mcp-client
  tag: "latest" # Or use a specific version like "1.0.0"
  pullPolicy: IfNotPresent
To manually pull the image:
# Pull the latest image
docker pull ghcr.io/tuannvm/slack-mcp-client:latest
# Or pull a specific version
docker pull ghcr.io/tuannvm/slack-mcp-client:1.0.0
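For a quick local test without Compose or Kubernetes, the image can also be started with a plain docker run. This is only a sketch mirroring the Compose service shown below; the environment variables and the /app/mcp-servers.json mount path are assumptions taken from that example:
docker run --rm \
  -e SLACK_BOT_TOKEN="xoxb-your-bot-token" \
  -e SLACK_APP_TOKEN="xapp-your-app-token" \
  -e OPENAI_API_KEY="sk-your-openai-key" \
  -e OPENAI_MODEL="gpt-4o" \
  -v "$(pwd)/mcp-servers.json:/app/mcp-servers.json:ro" \
  ghcr.io/tuannvm/slack-mcp-client:latest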
If you're using private images, you can configure image pull secrets in your values:
imagePullSecrets:
  - name: my-ghcr-secret
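The referenced pull secret can be created with kubectl. This is a generic sketch where GHCR_USERNAME and GHCR_TOKEN are placeholders for your GitHub username and a token with read:packages scope:
kubectl create secret docker-registry my-ghcr-secret \
  --docker-server=ghcr.io \
  --docker-username=GHCR_USERNAME \
  --docker-password=GHCR_TOKEN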
For local testing and development, you can use Docker Compose to easily run the slack-mcp-client along with additional MCP servers.
Create a .env file with your credentials:
# Create .env file from example
cp .env.example .env
# Edit the file with your credentials
nano .env
Create the mcp-servers.json file (or use the example):
# Create mcp-servers.json from example
cp mcp-servers.json.example mcp-servers.json
# Edit if needed
nano mcp-servers.json
# Start services in detached mode
docker-compose up -d
# View logs
docker-compose logs -f
# Stop services
docker-compose down
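To pick up a newer image later, the usual Compose update flow applies (sketch):
# Pull the latest images and recreate the services
docker-compose pull
docker-compose up -d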
The included docker-compose.yml provides the slack-mcp-client service configured via your .env file:
version: '3.8'
services:
  slack-mcp-client:
    image: ghcr.io/tuannvm/slack-mcp-client:latest
    container_name: slack-mcp-client
    environment:
      - SLACK_BOT_TOKEN=${SLACK_BOT_TOKEN}
      - SLACK_APP_TOKEN=${SLACK_APP_TOKEN}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - OPENAI_MODEL=${OPENAI_MODEL:-gpt-4o}
    volumes:
      - ./mcp-servers.json:/app/mcp-servers.json:ro
You can easily extend this setup to include additional MCP servers in the same network.
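As an illustration only, an additional SSE-based MCP server could be added under services: as one more Compose service and then referenced by its service name; the image name and port below are hypothetical placeholders, not part of this project:
  web-api:
    image: your-org/your-mcp-server:latest  # hypothetical MCP server image
    ports:
      - "8080:8080"
With such a service on the same network, mcp-servers.json could point the client at http://web-api:8080/mcp using the SSE transport described in the configuration section.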
Allow users to send slash commands and messages from the chat tab on the App Home page so they can message the Slack app directly. The bot token scopes and event subscriptions below are required (a manifest sketch follows the list):
app_mentions:read
chat:write
im:history
im:read
im:write
users:read
users.profile:read
app_mention
message.im
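If you create the app from a manifest, the scopes and events above map onto the standard Slack app manifest format. The snippet below is only an illustrative sketch (names and optional fields are assumptions), not an official manifest for this project:
display_information:
  name: slack-mcp-client
features:
  app_home:
    messages_tab_enabled: true
    messages_tab_read_only_enabled: false
  bot_user:
    display_name: slack-mcp-client
oauth_config:
  scopes:
    bot:
      - app_mentions:read
      - chat:write
      - im:history
      - im:read
      - im:write
      - users:read
      - users.profile:read
settings:
  event_subscriptions:
    bot_events:
      - app_mention
      - message.im
  socket_mode_enabled: true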
For detailed instructions on Slack app configuration, token setup, required permissions, and troubleshooting common issues, see the Slack Configuration Guide.
The client supports multiple LLM providers through a flexible integration system:
The LangChain gateway enables seamless integration with various LLM providers:
The custom LLM-MCP bridge layer enables any LLM to use MCP tools without requiring native function-calling capabilities:
LLM providers can be configured via environment variables or command-line flags:
# Set OpenAI as the provider (default)
export LLM_PROVIDER="openai"
export OPENAI_MODEL="gpt-4o"
# Use Anthropic
export LLM_PROVIDER="anthropic"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
export ANTHROPIC_MODEL="claude-3-5-sonnet-20241022"
# Or use Ollama
export LLM_PROVIDER="ollama"
export LANGCHAIN_OLLAMA_URL="http://localhost:11434"
export LANGCHAIN_OLLAMA_MODEL="llama3"
You can easily switch between providers by changing the LLM_PROVIDER environment variable:
# Use OpenAI
export LLM_PROVIDER=openai
# Use Anthropic
export LLM_PROVIDER=anthropic
# Use Ollama (local)
export LLM_PROVIDER=ollama
The client uses two main configuration approaches:
Configure LLM providers and Slack integration using environment variables:
| Variable | Description | Default |
|---|---|---|
| SLACK_BOT_TOKEN | Bot token for Slack API | (required) |
| SLACK_APP_TOKEN | App-level token for Socket Mode | (required) |
| OPENAI_API_KEY | API key for OpenAI authentication | (required) |
| OPENAI_MODEL | OpenAI model to use | gpt-4o |
| ANTHROPIC_API_KEY | API key for Anthropic authentication | (required for Anthropic) |
| ANTHROPIC_MODEL | Anthropic model to use | claude-3-5-sonnet-20241022 |
| LOG_LEVEL | Logging level (debug, info, warn, error) | info |
| LLM_PROVIDER | LLM provider to use (openai, anthropic, ollama) | openai |
| LANGCHAIN_OLLAMA_URL | URL for Ollama when using LangChain | http://localhost:11434 |
| LANGCHAIN_OLLAMA_MODEL | Model name for Ollama when using LangChain | llama3 |
The client includes Prometheus metrics support for monitoring tool usage and performance:
Metrics are exposed at /metrics on the configured port (default 8080, set with the --metrics-port flag)
slackmcp_tool_invocations_total: Counter for tool invocations with labels for tool name, server, and error status
slackmcp_llm_tokens: Histogram for LLM token usage by type and model
Example metrics access:
# Access metrics endpoint
curl http://localhost:8080/metrics
# Run with custom metrics port
slack-mcp-client --metrics-port 9090
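To collect these metrics, a minimal Prometheus scrape job pointing at the metrics port could look like the following (a sketch; the target host and port depend on where the client runs and on monitoring.metricsPort):
scrape_configs:
  - job_name: slack-mcp-client
    static_configs:
      - targets: ["localhost:8080"]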
All configuration is now managed through a single config.json file with comprehensive options:
{
"$schema": "https://github.com/tuannvm/slack-mcp-client/schema/config-schema.json",
"version": "2.0",
"slack": {
"botToken": "${SLACK_BOT_TOKEN}",
"appToken": "${SLACK_APP_TOKEN}",
"messageHistory": 50,
"thinkingMessage": "Processing..."
},
"llm": {
"provider": "openai",
"useNativeTools": true,
"useAgent": false,
"customPrompt": "You are a helpful assistant.",
"maxAgentIterations": 20,
"providers": {
"openai": {
"model": "gpt-4o",
"apiKey": "${OPENAI_API_KEY}",
"temperature": 0.7,
"maxTokens": 2000
},
"anthropic": {
"model": "claude-3-5-sonnet-20241022",
"apiKey": "${ANTHROPIC_API_KEY}",
"temperature": 0.7
}
}
},
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
"initializeTimeoutSeconds": 30,
"tools": {
"allowList": ["read_file", "write_file", "list_directory"],
"blockList": ["delete_file"]
}
},
"web-api": {
"url": "http://localhost:8080/mcp",
"transport": "sse",
"initializeTimeoutSeconds": 30
}
},
"rag": {
"enabled": true,
"provider": "openai",
"chunkSize": 1000,
"providers": {
"openai": {
"vectorStoreId": "vs_existing_store_id",
"dimensions": 1536,
"maxResults": 10
}
}
},
"timeouts": {
"httpRequestTimeout": "30s",
"toolProcessingTimeout": "3m",
"mcpInitTimeout": "30s"
},
"retry": {
"maxAttempts": 3,
"baseBackoff": "500ms",
"maxBackoff": "5s"
},
"monitoring": {
"enabled": true,
"metricsPort": 8080,
"loggingLevel": "info"
}
}
For detailed configuration options and migration guides, see the Configuration Guide.
The client supports optional automatic reloading to handle MCP server restarts without downtime - perfect for Kubernetes deployments where MCP servers may restart independently.
Note: The reload feature is disabled by default and must be explicitly enabled in your configuration file.
To enable reload functionality, add reload settings to your config.json:
{
"version": "2.0",
"reload": {
"enabled": true,
"interval": "30m"
}
}
Configuration Options:
enabled: Must be set to true to activate reload functionality (default: false)
interval: Time between automatic reloads (default: "30m", minimum: "10s")
Automatic Reload: When enabled, the application automatically reloads at the configured interval to reconnect to MCP servers and refresh tool discovery.
Manual Reload: Even with automatic reload disabled, you can trigger manual reloads using signals:
# In Kubernetes
kubectl exec -it <pod-name> -- kill -USR1 1
# Local process
kill -USR1 <process-id>
When enabled, the reload feature automatically:
Perfect for production environments where MCP servers may restart due to updates, scaling, or maintenance.
The client includes a comprehensive Slack-formatted output system that enhances message display in Slack:
Converts **bold** to *bold* for proper Slack bold formatting
"namespace-name" becomes `namespace-name` in Slack
For more details, see the Slack Formatting Guide.
The client supports three transport modes:
Comprehensive documentation is available in the docs/ directory:
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
This project uses GitHub Actions for continuous integration and GoReleaser for automated releases.
Our CI pipeline performs the following checks on all PRs and commits to the main branch:
When changes are merged to the main branch:
{ "mcpServers": { "filesystem": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-filesystem", "$HOME" ], "env": {} } } }
Explore related MCPs that share similar capabilities and solve comparable challenges
by zed-industries
A high‑performance, multiplayer code editor designed for speed and collaboration.
by modelcontextprotocol
Model Context Protocol Servers
by modelcontextprotocol
A Model Context Protocol server for Git repository interaction and automation.
by modelcontextprotocol
A Model Context Protocol server that provides time and timezone conversion capabilities.
by cline
An autonomous coding assistant that can create and edit files, execute terminal commands, and interact with a browser directly from your IDE, operating step‑by‑step with explicit user permission.
by continuedev
Enables faster shipping of code by integrating continuous AI agents across IDEs, terminals, and CI pipelines, offering chat, edit, autocomplete, and customizable agent workflows.
by upstash
Provides up-to-date, version‑specific library documentation and code examples directly inside LLM prompts, eliminating outdated information and hallucinated APIs.
by GLips
Provides Figma layout and styling information to AI coding agents, enabling one‑shot implementation of designs in any framework.
by idosal
Provides a remote Model Context Protocol server that transforms any public GitHub repository into an up‑to‑date documentation hub, enabling AI assistants to fetch live code and docs, dramatically reducing hallucinations.