by mrexodia
Enables human‑in‑the‑loop feedback within Cline and Cursor tools via a Model Context Protocol server, allowing developers to request user input before completing tasks.
User Feedback MCP provides a lightweight MCP server that tools like Cline and Cursor can call to solicit real‑time user feedback. It stores configuration in a .user-feedback.json file and can automatically run commands based on that configuration, making it ideal for testing desktop applications that require complex user interactions.
Simple MCP Server to enable a human-in-the-loop workflow in tools like Cline and Cursor. This is especially useful for developing desktop applications that require complex user interactions to test.

Quick start:
- Install uv globally.
- Clone this repository (for example to C:\MCP\user-feedback-mcp).
- Configure the github.com/mrexodia/user-feedback-mcp server in Cline with the appropriate uv command and arguments.
- For development, run uv run fastmcp dev server.py. The web UI will be available at http://localhost:5173.
- Prompt your agent to use the user_feedback MCP tool before completing a task.
- Optionally create a .user-feedback.json to define a command that should be executed automatically or manually after receiving feedback.

FAQ:
- Do I need uv? Yes, the server is built with Python and uv is the recommended package manager for quick installation.
- What does execute_automatically do? When set to true, the command specified in .user-feedback.json runs immediately after feedback is received, without manual interaction.
- How do I install the server in Cline? Add it to cline_mcp_settings.json under mcpServers and ensure the path points to your cloned repository.

For the best results, add the following to your custom prompt:
Before completing the task, use the user_feedback MCP tool to ask the user for feedback.
This will ensure Cline uses this MCP server to request user feedback before marking the task as completed.
Hitting Save Configuration creates a .user-feedback.json file in your project directory that looks like this:
{
  "command": "npm run dev",
  "execute_automatically": false
}
This configuration is loaded on startup; if execute_automatically is enabled, your command is executed immediately (you will not have to click Run manually). For multi-step commands you should use something like Task.
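As a sketch of how such a configuration might be consumed (the function names and defaults below are illustrative, not the server's actual implementation), using only the Python standard library:

```python
import json
import subprocess
from pathlib import Path


def load_feedback_config(project_directory: str) -> dict:
    """Load .user-feedback.json, falling back to defaults (illustrative helper)."""
    config_path = Path(project_directory) / ".user-feedback.json"
    config = {"command": None, "execute_automatically": False}
    if config_path.exists():
        config.update(json.loads(config_path.read_text()))
    return config


def maybe_run_command(config: dict, cwd: str) -> None:
    """Start the configured command only when execute_automatically is set."""
    if config["command"] and config["execute_automatically"]:
        # shell=True so multi-word commands like "npm run dev" work as written
        subprocess.Popen(config["command"], shell=True, cwd=cwd)
```

With execute_automatically left at false, the command is only stored, mirroring the manual Run button described above.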
To install the MCP server in Cline, follow these steps:

1. Install uv globally:
   - Windows: pip install uv
   - Linux/macOS: curl -LsSf https://astral.sh/uv/install.sh | sh
2. Clone this repository, for example to C:\MCP\user-feedback-mcp.
3. Add the following to your cline_mcp_settings.json, making sure the --directory argument points to your clone of the user-feedback-mcp server:
{
  "mcpServers": {
    "github.com/mrexodia/user-feedback-mcp": {
      "command": "uv",
      "args": [
        "--directory",
        "c:\\MCP\\user-feedback-mcp",
        "run",
        "server.py"
      ],
      "timeout": 600,
      "autoApprove": [
        "user_feedback"
      ]
    }
  }
}
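A common failure mode is a --directory path that does not match the actual clone location, in which case the server will not start. A small stdlib-only helper (hypothetical, not part of the project) can read the configured path back for a quick sanity check:

```python
import json
from pathlib import Path

SERVER_KEY = "github.com/mrexodia/user-feedback-mcp"


def resolve_server_directory(settings_path: str) -> str:
    """Return the --directory argument configured for the server entry."""
    settings = json.loads(Path(settings_path).read_text())
    args = settings["mcpServers"][SERVER_KEY]["args"]
    return args[args.index("--directory") + 1]
```

If `Path(resolve_server_directory(...)).exists()` is false, fix the path in cline_mcp_settings.json before retrying.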
For local development and testing, run:

uv run fastmcp dev server.py
This will open a web interface at http://localhost:5173 and allow you to interact with the MCP tools for testing.

Before completing a task, Cline will call the user_feedback tool like this:
<use_mcp_tool>
<server_name>github.com/mrexodia/user-feedback-mcp</server_name>
<tool_name>user_feedback</tool_name>
<arguments>
{
  "project_directory": "C:/MCP/user-feedback-mcp",
  "summary": "I've implemented the changes you requested."
}
</arguments>
</use_mcp_tool>
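The block above is plain XML wrapping a JSON payload. For reference, here is a stdlib-only Python sketch that reproduces the same structure (the helper function is illustrative, not part of Cline or this server):

```python
import json


def render_use_mcp_tool(server_name: str, tool_name: str, arguments: dict) -> str:
    """Render a Cline-style <use_mcp_tool> block (illustrative helper)."""
    return (
        "<use_mcp_tool>\n"
        f"<server_name>{server_name}</server_name>\n"
        f"<tool_name>{tool_name}</tool_name>\n"
        "<arguments>\n"
        f"{json.dumps(arguments, indent=2)}\n"
        "</arguments>\n"
        "</use_mcp_tool>"
    )


block = render_use_mcp_tool(
    "github.com/mrexodia/user-feedback-mcp",
    "user_feedback",
    {
        "project_directory": "C:/MCP/user-feedback-mcp",
        "summary": "I've implemented the changes you requested.",
    },
)
print(block)
```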