by GongRzhe
Provides real-time GUI dialogs for AI assistants to gather user input, present choices, confirmations, and feedback, enabling seamless human‑in‑the‑loop interactions.
Enables AI assistants such as Claude to interact with users through intuitive graphical dialogs. The server supplies tools for text input, multiple‑choice selection, multiline entry, confirmation prompts, and informational messages, all displayed in a modern cross‑platform GUI.
Run with uvx (recommended):
uvx hitl-mcp-server
Or install from PyPI (pip install hitl-mcp-server) and run hitl-mcp-server (or hitl_mcp_server). The server starts a local MCP endpoint and opens GUI windows on demand.
To use it with Claude Desktop, add the following to your claude_desktop_config.json:
{
"mcpServers": {
"human-in-the-loop": {
"command": "uvx",
"args": ["hitl-mcp-server"]
}
}
}
Restart Claude Desktop afterwards.
Q: Do I need a graphical environment?
A: Yes, the dialogs rely on Tkinter and require a desktop session (not a headless server).
Q: How do I enable the dialogs on macOS?
A: Grant Python accessibility permissions via System Preferences → Security & Privacy → Accessibility.
Q: What happens if the user does not respond?
A: After the default 5-minute timeout the dialog returns with "cancelled": true.
Q: Can I customize the timeout?
A: Yes, the server accepts a --timeout argument (check hitl-mcp-server --help).
Q: Is the server thread-safe?
A: Dialogs are executed in separate threads, allowing concurrent calls from multiple AI agents.
A powerful Model Context Protocol (MCP) server that enables AI assistants like Claude to interact with humans through intuitive GUI dialogs. This server bridges the gap between automated AI processes and human decision-making by providing real-time tools for user input, choice selection, confirmation, and feedback.
The easiest way to use this MCP server is with uvx:
# Install and run directly
uvx hitl-mcp-server
# Or use the underscore version
uvx hitl_mcp_server
Install from PyPI:
pip install hitl-mcp-server
Run the server:
hitl-mcp-server
# or
hitl_mcp_server
Clone the repository:
git clone https://github.com/GongRzhe/Human-In-the-Loop-MCP-Server.git
cd Human-In-the-Loop-MCP-Server
Install in development mode:
pip install -e .
To use this server with Claude Desktop, add the following configuration to your claude_desktop_config.json:
{
"mcpServers": {
"human-in-the-loop": {
"command": "uvx",
"args": ["hitl-mcp-server"]
}
}
}
If you installed via pip instead, point the command directly at the installed script:
{
"mcpServers": {
"human-in-the-loop": {
"command": "hitl-mcp-server",
"args": []
}
}
}
The configuration file is located at:
Windows: %APPDATA%\Claude\claude_desktop_config.json
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Linux: ~/.config/Claude/claude_desktop_config.json
Note: You may need to allow Python to control your computer in System Preferences > Security & Privacy > Accessibility for the GUI dialogs to work properly.
After updating the configuration, restart Claude Desktop for the changes to take effect.
get_user_input
Get single-line text, numbers, or other data from users.
Parameters:
title (str): Dialog window title
prompt (str): Question/prompt text
default_value (str): Pre-filled value (optional)
input_type (str): "text", "integer", or "float" (default: "text")
Example Usage:
result = await get_user_input(
title="Project Setup",
prompt="Enter your project name:",
default_value="my-project",
input_type="text"
)
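When input_type is "integer" or "float", the value still arrives as text inside the JSON response, so the client should coerce and validate it. A minimal sketch of that client-side handling, based on the response shape documented in this README; the parse_input helper itself is illustrative, not part of the server API:

```python
def parse_input(response: dict):
    """Coerce a get_user_input response according to its input_type.

    Returns None when the dialog was cancelled or failed; raises
    ValueError when the text cannot be parsed as the requested type.
    """
    if response.get("cancelled") or not response.get("success"):
        return None
    value = response.get("user_input", "")
    input_type = response.get("input_type", "text")
    if input_type == "integer":
        return int(value)
    if input_type == "float":
        return float(value)
    return value

# Example with a response shaped like the documented JSON:
resp = {"success": True, "user_input": "42", "cancelled": False,
        "platform": "windows", "input_type": "integer"}
print(parse_input(resp))  # -> 42
```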
get_user_choice
Present multiple options for user selection.
Parameters:
title (str): Dialog window title
prompt (str): Question/prompt text
choices (List[str]): Available options
allow_multiple (bool): Allow multiple selections (default: false)
Example Usage:
result = await get_user_choice(
title="Framework Selection",
prompt="Choose your preferred framework:",
choices=["React", "Vue", "Angular", "Svelte"],
allow_multiple=False
)
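Single selections come back in selected_choice, while allow_multiple=True populates selected_choices. A small client-side helper that normalizes both cases into a list; the helper name and empty-list fallback are my own conventions layered on the documented response fields:

```python
def selections(response: dict) -> list:
    """Return the user's selection(s) as a list; empty if cancelled."""
    if response.get("cancelled") or not response.get("success"):
        return []
    if response.get("allow_multiple"):
        return list(response.get("selected_choices", []))
    choice = response.get("selected_choice")
    return [choice] if choice is not None else []

single = {"success": True, "cancelled": False,
          "allow_multiple": False, "selected_choice": "React"}
multi = {"success": True, "cancelled": False,
         "allow_multiple": True, "selected_choices": ["React", "Vue"]}
print(selections(single))  # -> ['React']
print(selections(multi))   # -> ['React', 'Vue']
```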
get_multiline_input
Collect longer text content, code, or detailed descriptions.
Parameters:
title (str): Dialog window title
prompt (str): Question/prompt text
default_value (str): Pre-filled text (optional)
Example Usage:
result = await get_multiline_input(
title="Code Review",
prompt="Please provide your detailed feedback:",
default_value=""
)
show_confirmation_dialog
Ask for yes/no confirmation before proceeding.
Parameters:
title (str): Dialog window title
message (str): Confirmation message
Example Usage:
result = await show_confirmation_dialog(
title="Delete Confirmation",
message="Are you sure you want to delete these 5 files? This action cannot be undone."
)
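Because a cancelled dialog and an explicit "No" both mean the action should not proceed, it is safest to treat anything other than an affirmative, successful response as a refusal. A minimal guard; the confirmed field is documented, the helper itself is illustrative:

```python
def is_confirmed(response: dict) -> bool:
    """True only for an explicit, successful 'Yes'."""
    return (bool(response.get("success"))
            and not response.get("cancelled")
            and bool(response.get("confirmed")))

print(is_confirmed({"success": True, "cancelled": False, "confirmed": True}))   # True
print(is_confirmed({"success": True, "cancelled": True, "confirmed": False}))   # False
```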
show_info_message
Display information, notifications, or status updates.
Parameters:
title (str): Dialog window title
message (str): Information message
Example Usage:
result = await show_info_message(
title="Process Complete",
message="Successfully processed 1,247 records in 2.3 seconds!"
)
health_check
Check server status and GUI availability.
Example Usage:
status = await health_check()
# Returns detailed platform and functionality information
All tools return structured JSON responses:
{
"success": true,
"user_input": "User's response text",
"cancelled": false,
"platform": "windows",
"input_type": "text"
}
Common Response Fields:
success (bool): Whether the operation completed successfully
cancelled (bool): Whether the user cancelled the dialog
platform (str): Operating system platform
error (str): Error message if the operation failed
Tool-Specific Fields:
get_user_input: user_input, input_type
get_user_choice: selected_choice, selected_choices, allow_multiple
get_multiline_input: user_input, character_count, line_count
show_confirmation_dialog: confirmed, response
show_info_message: acknowledged
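Since every tool shares the same success/cancelled/error envelope, a client can unwrap responses uniformly before reading tool-specific fields. A sketch of that pattern; the DialogError exception and the unwrap policy are illustrative assumptions, not part of the server:

```python
class DialogError(Exception):
    """Raised when a dialog tool reports a failure."""

def unwrap(response: dict):
    """Return the response for further use, None if the user cancelled,
    or raise DialogError when the tool itself failed."""
    if not response.get("success"):
        raise DialogError(response.get("error", "unknown dialog error"))
    if response.get("cancelled"):
        return None
    return response

result = unwrap({"success": True, "cancelled": False, "user_input": "hi"})
print(result["user_input"])  # -> hi
```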
# Get target directory
location = await get_user_input(
title="Backup Location",
prompt="Enter backup directory path:",
default_value="~/backups"
)
# Choose backup type
backup_type = await get_user_choice(
title="Backup Options",
prompt="Select backup type:",
choices=["Full Backup", "Incremental", "Differential"]
)
# Confirm before proceeding
confirmed = await show_confirmation_dialog(
title="Confirm Backup",
message=f"Create {backup_type['selected_choice']} backup to {location['user_input']}?"
)
if confirmed['confirmed']:
    # Perform backup
    await show_info_message("Success", "Backup completed successfully!")
# Get content requirements
requirements = await get_multiline_input(
title="Content Requirements",
prompt="Describe your content requirements in detail:"
)
# Choose tone and style
tone = await get_user_choice(
title="Content Style",
prompt="Select desired tone:",
choices=["Professional", "Casual", "Friendly", "Technical"]
)
# Generate and show results
# ... content generation logic ...
await show_info_message("Content Ready", "Your content has been generated successfully!")
GUI Not Appearing
Verify Tkinter is available: python -c "import tkinter"
Use the health_check() tool to diagnose issues
Permission Errors (macOS)
Allow Python to control your computer under System Preferences > Security & Privacy > Accessibility
Import Errors
Reinstall the package: pip install hitl-mcp-server
Claude Desktop Integration Issues
Install uvx (pip install uvx) and test the server manually: uvx hitl-mcp-server
Dialog Timeout
Dialogs return "cancelled": true after the default 5-minute timeout; adjust it with the --timeout argument
Enable detailed logging by setting the HITL_DEBUG environment variable when running the server:
HITL_DEBUG=1 uvx hitl-mcp-server
Human-In-the-Loop-MCP-Server/
├── human_loop_server.py # Main server implementation
├── pyproject.toml # Package configuration
├── README.md # Documentation
├── LICENSE # MIT License
├── .gitignore # Git ignore rules
└── demo.gif # Demo animation
Contributions are welcome: fork the repository and create a feature branch:
git checkout -b feature-name
This project is licensed under the MIT License - see the LICENSE file for details.
Made with ❤️ for the AI community - Bridging humans and AI through intuitive interaction