by gleanwork
Runs a local MCP server that bridges Glean's API with client applications. Developers can configure and run the server alongside popular MCP clients for rapid development and testing.
```
npx -y @gleanwork/local-mcp-server
```
The server reads its credentials from the `API_KEY` environment variable. The companion package `@gleanwork/configure-mcp-server` can be used to set up client configuration.
Q: Do I need a Glean account to use the server?
A: Yes. An API key from a Glean account is required and must be set in the `API_KEY` environment variable.
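As a minimal sketch of that setup, the variable can be exported in the shell before launching the server (the key value below is a placeholder, not a real key):

```shell
# Export the Glean API key so the server process inherits it
# (the value here is a placeholder, not a real key).
export API_KEY="<YOUR_API_KEY>"

# Confirm the variable is visible in the environment.
printenv API_KEY   # prints <YOUR_API_KEY>
```

Any process launched from this shell, including `npx -y @gleanwork/local-mcp-server`, will then see `API_KEY` in its environment.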
Q: Can I run multiple instances simultaneously?
A: Each instance can be started with a different port configuration; refer to the package docs for custom settings.
Q: Is the server production-ready?
A: The server is intended for local development and testing. For production deployments, use Glean's hosted services.
Q: How do I contribute?
A: See CONTRIBUTING.md in the repository for guidelines on setting up the development environment and submitting pull requests.
This monorepo contains packages for Glean's local MCP server. For more details see the READMEs of the individual packages.
Please see CONTRIBUTING.md for development setup and guidelines.
MIT License - see the LICENSE file for details
```json
{
  "mcpServers": {
    "glean-mcp": {
      "command": "npx",
      "args": ["-y", "@gleanwork/local-mcp-server"],
      "env": {
        "API_KEY": "<YOUR_API_KEY>"
      }
    }
  }
}
```
Explore related MCPs that share similar capabilities and solve comparable challenges
by zed-industries
A high‑performance, multiplayer code editor designed for speed and collaboration.
by modelcontextprotocol
Model Context Protocol Servers
by modelcontextprotocol
A Model Context Protocol server for Git repository interaction and automation.
by modelcontextprotocol
A Model Context Protocol server that provides time and timezone conversion capabilities.
by cline
An autonomous coding assistant that can create and edit files, execute terminal commands, and interact with a browser directly from your IDE, operating step‑by‑step with explicit user permission.
by continuedev
Enables faster shipping of code by integrating continuous AI agents across IDEs, terminals, and CI pipelines, offering chat, edit, autocomplete, and customizable agent workflows.
by upstash
Provides up-to-date, version‑specific library documentation and code examples directly inside LLM prompts, eliminating outdated information and hallucinated APIs.
by github
Connects AI tools directly to GitHub, enabling natural‑language interactions for repository browsing, issue and pull‑request management, CI/CD monitoring, code‑security analysis, and team collaboration.
by daytonaio
Provides a secure, elastic infrastructure that creates isolated sandboxes for running AI‑generated code with sub‑90 ms startup, unlimited persistence, and OCI/Docker compatibility.