by JSONbored
Provides a turnkey, single‑container deployment of the Mem0 OpenMemory AI memory layer, bundling a FastAPI/MCP server, Qdrant vector store, and Next.js UI for easy self‑hosting on Unraid.
Mem0 Aio delivers a click‑and‑play local AI memory stack that combines the OpenMemory web UI, its FastAPI/MCP backend, and an embedded Qdrant vector database inside a single Docker image. The design targets homelab users—especially those running Unraid—who want a self‑hosted, privacy‑first memory layer without wiring separate services.
In short: install via the mem0-aio.xml Unraid template; set OPENAI_API_KEY for the hosted path, or OLLAMA_BASE_URL (e.g., http://host.docker.internal:11434) plus LLM_MODEL and EMBEDDER_MODEL for the local path; releases are tagged like v2.0.0-aio.1.
Q: Do I need an external vector database?
A: No. Qdrant is bundled and persists state in the appdata folder. External backends are optional for advanced setups.
Q: Can I run the container without an OpenAI key?
A: Yes. Leaving OPENAI_API_KEY empty enables the Ollama path, which relies on a locally hosted LLM.
Q: How do I expose the MCP/API securely?
A: The API port is optional for browser use. If you need external access, place it behind a reverse proxy, enable authentication, and treat model/provider credentials as sensitive.
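As a minimal sketch of that advice, assuming the MCP/API listens on container port 8765 and the UI on 3000 (the port numbers and image reference below are illustrative assumptions, and on Unraid these map to template fields rather than a raw docker run), you can bind the API port to the host loopback so only a local, authenticated reverse proxy can reach it:
# Publish the UI normally; bind the MCP/API port to the host loopback only,
# so external clients must come through an authenticated reverse proxy.
docker run -d --name mem0-aio \
  -p 3000:3000 \
  -p 127.0.0.1:8765:8765 \
  your-registry/mem0-aio:latest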
Q: What if I want to use a different vector store?
A: Set the corresponding environment variable (e.g., REDIS_URL, PG_CONNECTION_STRING, ELASTICSEARCH_URL, etc.). Only one backend may be defined; the container will reject conflicting settings.
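For example, a hedged sketch of selecting a single external backend (Postgres here; the connection string and image reference are placeholders, and on Unraid the variable would be set in the template's Advanced View rather than via docker run):
# Configure exactly one external vector backend; leave every other
# backend selector (REDIS_URL, QDRANT_*, ELASTICSEARCH_URL, ...) unset.
docker run -d --name mem0-aio \
  -e PG_CONNECTION_STRING="postgresql://mem0:secret@192.168.1.10:5432/mem0" \
  your-registry/mem0-aio:latest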
Q: Is telemetry enabled for the bundled Qdrant?
A: Telemetry is disabled by default and only re‑enabled if you explicitly configure it.
Q: How are releases versioned?
A: Tags follow the pattern v<upstream>-aio.<revision> (e.g., v2.0.0-aio.1). The latest tag points to the most recent release.
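In practice this means you can pin an exact packaging revision instead of tracking the moving latest tag; a sketch, with the registry path left as a placeholder since it is not named here:
# Track the newest release (tag moves over time)
docker pull your-registry/mem0-aio:latest
# Pin a specific upstream + AIO packaging revision (tag does not move)
docker pull your-registry/mem0-aio:v2.0.0-aio.1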
An Unraid-first, single-container deployment of Mem0 OpenMemory for people who want the easiest reliable self-hosted install without manually wiring a separate vector database on day one.
mem0-aio keeps the critical first-boot dependency bundled: Qdrant plus persistent local storage. The wrapper is opinionated for a predictable beginner install, but it does not hide the real tradeoffs: OpenMemory still needs a valid model/provider configuration to do useful work, external vector backends and hosted model endpoints still need operator knowledge, and exposing the direct MCP/API port is a deliberate security decision rather than a default requirement.
The web UI listens on port 3000 and the MCP/API server on port 8765. If you want the simplest supported path:
1. Set OPENAI_API_KEY for the hosted quick-start, or set OLLAMA_BASE_URL to your external native Ollama root URL for the normal local-LLM path.
2. Set LLM_MODEL and EMBEDDER_MODEL to models you already have pulled on that server.
3. Open the web UI on port 3000.
Leaving OPENAI_API_KEY blank is supported. The intended companion path is external Ollama, not bundled inference inside this image.
When OLLAMA_BASE_URL is set and you do not explicitly override LLM_PROVIDER or EMBEDDER_PROVIDER, the wrapper now defaults both to ollama automatically.
For normal Ollama installs, the wrapper now also auto-detects the embedding dimension it needs for Qdrant. If you use a custom embedder and auto-detection cannot determine the size, set EMBEDDER_DIMENSIONS explicitly in Advanced View.
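Putting the local-LLM path together, here is a minimal sketch of the relevant environment for an external Ollama server, written as a plain docker run for illustration (on Unraid these are template fields; the model names, host address, dimension value, and image reference are assumptions):
# External-Ollama path: no OpenAI key needed; providers default to ollama.
# EMBEDDER_DIMENSIONS is only required when auto-detection cannot infer it.
# Model names and the image reference are illustrative placeholders.
docker run -d --name mem0-aio \
  -p 3000:3000 \
  -e OLLAMA_BASE_URL="http://host.docker.internal:11434" \
  -e LLM_MODEL="llama3.1:8b" \
  -e EMBEDDER_MODEL="nomic-embed-text" \
  -e EMBEDDER_DIMENSIONS="768" \
  your-registry/mem0-aio:latest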
This repo is deliberately not a stripped-down wrapper. The template now tracks the practical OpenMemory self-hosted environment surface exposed by upstream source and docs, plus AIO defaults for the bundled SQLite + Qdrant path. In Advanced View you can:
- Point the OpenAI-compatible /v1 base URLs at auth-protected reverse proxies.
- Connect to an external Qdrant via QDRANT_URL and QDRANT_API_KEY.
- Configure external vector storage, which is exclusive: set one backend only (REDIS_URL, PG_*, external QDRANT_*, Chroma, Weaviate, Milvus, Elasticsearch, OpenSearch, or FAISS). The container rejects competing or partial vector-store selectors instead of letting OpenMemory silently pick the first matching backend.
The wrapper still defaults to the internal bundled storage path so new Unraid users are not forced into extra services on day one.
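If you do opt into an external Qdrant from Advanced View, a hedged sketch of the URL-plus-key form (address, key, and image reference are placeholders):
# External Qdrant selected via URL + API key; no other vector backend
# variables should be set, since backend selection is exclusive.
docker run -d --name mem0-aio \
  -e QDRANT_URL="http://192.168.1.20:6333" \
  -e QDRANT_API_KEY="changeme" \
  your-registry/mem0-aio:latest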
As of 2026-04-17, upstream Mem0 has a newer stable release than the original wrapper baseline; this repo is being moved to the current stable v2.0.0 line rather than staying on the older v1.0.x line.
- An external Qdrant can be configured with QDRANT_URL=http://qdrant:6333 plus QDRANT_API_KEY instead of only QDRANT_HOST and QDRANT_PORT.
- Do not set QDRANT_API_KEY against the bundled Qdrant default; use QDRANT_URL or an external QDRANT_HOST when Qdrant auth is enabled.
- OLLAMA_BASE_URL expects the native Ollama root URL, e.g. http://host.docker.internal:11434, not an OpenAI-compatible /v1 path. If your server only exposes a /v1 endpoint, use the OpenAI-compatible base URL fields instead of the native Ollama provider path.
- For Elasticsearch, the template exposes ELASTICSEARCH_USE_SSL and ELASTICSEARCH_VERIFY_CERTS. For OpenSearch, it now also exposes optional user/password plus OPENSEARCH_USE_SSL and OPENSEARCH_VERIFY_CERTS, with SSL defaulting to true to match modern secured nodes.
- The current release is v2.0.0-aio.1. Changelogs are generated with git-cliff, and the template's <Changes> block is synced from CHANGELOG.md during release preparation.
- Pushing to main publishes latest, the pinned upstream version tag, an explicit AIO packaging line tag, and sha-<commit>. See docs/releases.md for the release workflow details.
Required local validation is pytest-first:
git submodule update --init --recursive
python3 -m venv .venv-local
.venv-local/bin/pip install -r requirements-dev.txt
.venv-local/bin/pytest tests/unit tests/template --junit-xml=reports/pytest-unit.xml -o junit_family=xunit1
.venv-local/bin/pytest tests/integration -m integration --junit-xml=reports/pytest-integration.xml -o junit_family=xunit1
./trunk-analytics-cli validate --junit-paths "reports/pytest-unit.xml,reports/pytest-integration.xml"
trunk check --show-existing --all
CI cost model:
main pushes run the fast validation layers first; the heavier layers run only for main release-metadata commits when publish is still in play, and for manual dispatches.
The external-backend coverage uses the same pytest command. By default it starts a local mock Ollama container for deterministic embeddings; set OLLAMA_CONTAINER only if you intentionally want to test against an existing Ollama container:
.venv-local/bin/pytest tests/integration -m integration
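For example, assuming OLLAMA_CONTAINER is read from the environment (the container name here is a placeholder), a run against an Ollama instance you already operate would look like:
# Reuse an existing Ollama container instead of the default deterministic mock.
OLLAMA_CONTAINER=my-ollama .venv-local/bin/pytest tests/integration -m integration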
If this work saves you time, support it here:
Explore related MCPs that share similar capabilities and solve comparable challenges
by modelcontextprotocol
A basic implementation of persistent memory using a local knowledge graph. This lets Claude remember information about the user across chats.
by topoteretes
Provides dynamic memory for AI agents through modular ECL (Extract, Cognify, Load) pipelines, enabling seamless integration with graph and vector stores using minimal code.
by basicmachines-co
Enables persistent, local‑first knowledge management by allowing LLMs to read and write Markdown files during natural conversations, building a traversable knowledge graph that stays under the user’s control.
by agentset-ai
Provides an open‑source platform to build, evaluate, and ship production‑ready retrieval‑augmented generation (RAG) and agentic applications, offering end‑to‑end tooling from ingestion to hosting.
by smithery-ai
Provides read and search capabilities for Markdown notes in an Obsidian vault for Claude Desktop and other MCP clients.
by chatmcp
Summarize chat messages by querying a local chat database and returning concise overviews.
by dmayboroda
Provides on‑premises conversational retrieval‑augmented generation (RAG) with configurable Docker containers, supporting fully local execution, ChatGPT‑based custom GPTs, and Anthropic Claude integration.
by qdrant
Provides a Model Context Protocol server that stores and retrieves semantic memories using Qdrant vector search, acting as a semantic memory layer.
by doobidoo
Provides a universal memory service with semantic search, intelligent memory triggers, OAuth‑enabled team collaboration, and multi‑client support for Claude Desktop, Claude Code, VS Code, Cursor and over a dozen AI applications.