by AdbC99
Provides reliable, repeatable retrieval of biblical verses for Large Language Models through an MCP server and an OpenAI‑compatible completions API, enabling consistent research and educational outputs.
ai-Bible enables LLMs to look up biblical verses in a deterministic way. It ships an MCP server that returns verse text for given references, and a Docker-wrapped service exposing an OpenAI-compatible completions endpoint, so any client that understands the OpenAI API can query biblical data without custom parsers.
Build the MCP server first:

npm run build

Then build and run the Docker container from the project root:

docker build -f completions/Dockerfile -t mcp-server .
docker run -p 8002:8000 mcp-server
You can check it is running by visiting the Swagger API page at http://localhost:8002/docs and trying the get-verse endpoint with a JSON payload like:
{ "reference": ["Gen.1.1", "Gen.2.1"], "language": "english" }
The OpenAI-compatible completions API itself is served at http://localhost:8002.
Q: Do I need an OpenAI API key?
A: No. The container provides its own OpenAI‑compatible endpoint that works locally.
Q: Which LLMs are supported?
A: Any model that can call an OpenAI completions endpoint – e.g., Claude Desktop, Ollama models (Llama 3.1 8B, etc.).
Q: Can I add other languages?
A: The server accepts a language parameter; adding new translations involves extending the data files.
Q: How is the service licensed?
A: The source code is under GNU GPL v3; data files may have separate licences (see LICENCE.md).
ai-Bible is a project that explores the use of AI for interpreting and understanding biblical text. This repository contains MCP servers and a container compatible with the OpenAI completions API. These let an AI or Large Language Model reliably and repeatably look up data so that it can be represented in different forms for research or educational purposes, with some confidence that results will be reproducible and reasonable.
For a web-accessible front end in the form of a pocket bible, see http://ai-bible.com
The mcp-server folder contains the current implementation of a server for reliably and repeatably retrieving bible verses with LLMs. Claude Desktop can be configured to use the mcp-server.stdio.js file, built into the build folder of this project, as an MCP server (a sample configuration is sketched below).
See the README.md in that subfolder for detailed information.
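As a rough sketch (the install path is a placeholder to adjust, and the server name ai-bible is just an illustrative label), the corresponding entry in Claude Desktop's claude_desktop_config.json might look like:

{
  "mcpServers": {
    "ai-bible": {
      "command": "node",
      "args": ["/path/to/ai-bible/build/mcp-server.stdio.js"]
    }
  }
}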
The Docker container wraps the MCP server using mcpo in order to turn it into a server supporting the OpenAI completions API. Run these commands from the project root after building the mcp-server.
docker build -f completions/Dockerfile -t mcp-server .
docker run -p 8002:8000 mcp-server
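Under the hood, mcpo bridges the stdio MCP server to HTTP. Outside Docker, an equivalent invocation would look roughly like this sketch, based on mcpo's standard command-line usage (the actual command is defined in completions/Dockerfile):

uvx mcpo --port 8000 -- node build/mcp-server.stdio.js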
You can check it is running by visiting the Swagger API page:
http://localhost:8002/docs
Try the get-verse API with parameters:
{
"reference": ["Gen.1.1", "Gen.2.1"],
"language": "english"
}
One way to access the completions API is via Open WebUI; you can then do everything locally with an LLM served by Ollama, using a model such as Llama 3.1 8B. See:
https://docs.openwebui.com/getting-started/quick-start/
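For instance, a typical way to start Open WebUI, adapted from its quick-start documentation (flags may need adjusting for your setup):

docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main

The ai-Bible endpoint at http://localhost:8002 can then be added as an OpenAI-compatible connection in Open WebUI's settings.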
Contributions are welcome! Please feel free to submit a pull request or open an issue for any enhancements or bug fixes.
This project's source code is under the GNU GPL v3 licence. Within the project there are data files that come under different licences. See the file LICENCE.md for details of the GPL licence.
Explore related MCPs that share similar capabilities and solve comparable challenges
by modelcontextprotocol
A basic implementation of persistent memory using a local knowledge graph. This lets Claude remember information about the user across chats.
by topoteretes
Provides dynamic memory for AI agents through modular ECL (Extract, Cognify, Load) pipelines, enabling seamless integration with graph and vector stores using minimal code.
by basicmachines-co
Enables persistent, local‑first knowledge management by allowing LLMs to read and write Markdown files during natural conversations, building a traversable knowledge graph that stays under the user’s control.
by smithery-ai
Provides read and search capabilities for Markdown notes in an Obsidian vault for Claude Desktop and other MCP clients.
by chatmcp
Summarizes chat messages by querying a local chat database and returning concise overviews.
by dmayboroda
Provides on‑premises conversational retrieval‑augmented generation (RAG) with configurable Docker containers, supporting fully local execution, ChatGPT‑based custom GPTs, and Anthropic Claude integration.
by GreatScottyMac
Provides a project‑specific memory bank that stores decisions, progress, architecture, and custom data, exposing a structured knowledge graph via MCP for AI assistants and IDE tools.
by andrea9293
Provides document management and AI-powered semantic search for storing, retrieving, and querying text, markdown, and PDF files locally without external databases.
by scorzeth
Provides a local MCP server that interfaces with a running Anki instance to retrieve, create, and update flashcards through standard MCP calls.