by zzaebok
Provides tools to search Wikidata entity and property identifiers, retrieve metadata such as labels and descriptions, and execute SPARQL queries via an MCP server.
Mcp Wikidata offers an MCP‑compatible server that wraps common Wikidata operations. It enables clients to programmatically search for entities and properties, list an entity's properties, fetch English labels/descriptions, and run arbitrary SPARQL queries against the Wikidata endpoint.
npx -y @smithery/cli install @zzaebok/mcp-wikidata --client claude
or manually with uv:
curl -LsSf https://astral.sh/uv/install.sh | sh
git clone https://github.com/zzaebok/mcp-wikidata.git
cd mcp-wikidata
uv sync
# optional example extras
uv sync --extra example
uv run src/server.py
# example client (in another shell)
uv run src/client.py
- search_entity(query) – returns the best-matching Wikidata entity ID.
- search_property(query) – returns the best-matching property ID.
- get_properties(entity_id) – lists all property IDs attached to an entity.
- get_metadata(entity_id, language="en") – fetches the label and description.
- execute_sparql(sparql_query) – runs a SPARQL query and returns results.

Q: Do I need an API key?
A: No. The server queries the public Wikidata SPARQL endpoint.
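For reference, entity and property search of this kind can be done against Wikidata's public `wbsearchentities` API. The sketch below is illustrative only (stdlib-only, not the project's actual implementation) and shows what `search_entity` and `search_property` presumably wrap:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def search_params(query: str, entity_type: str = "item") -> dict:
    # Parameters for the wbsearchentities endpoint; passing
    # entity_type="property" searches property IDs (P-numbers) instead.
    return {
        "action": "wbsearchentities",
        "search": query,
        "language": "en",
        "type": entity_type,
        "format": "json",
    }

def search_entity(query: str) -> str:
    # Returns the top-ranked entity ID (e.g. "Q495980" for Bong Joon-ho).
    url = WIKIDATA_API + "?" + urlencode(search_params(query))
    with urlopen(url) as resp:
        data = json.load(resp)
    return data["search"][0]["id"]
```

The MCP tools return only the top match, which is usually sufficient when an LLM supplies a well-formed name as the query.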
Q: Which languages are supported for metadata?
A: The get_metadata tool defaults to English but accepts any Wikidata language code.
Q: Can I run multiple instances? A: Yes. The server is a standard Python script; you can start as many processes as needed, each on a different port.
Q: How do I integrate with LangChain?
A: Install langchain-mcp-adapters (included in the example extra) and follow the client example in src/client.py.
Q: What Python version is required?
A: The project uses uv for dependency management; Python 3.9+ is recommended.
A server implementation for the Wikidata API using the Model Context Protocol (MCP). This project provides tools to interact with Wikidata, such as searching identifiers (entity and property), extracting metadata (labels and descriptions), and executing SPARQL queries.
To install Wikidata MCP Server for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @zzaebok/mcp-wikidata --client claude
Install uv if it is not already installed.
$ curl -LsSf https://astral.sh/uv/install.sh | sh
Then, install dependencies.
$ git clone https://github.com/zzaebok/mcp-wikidata.git
$ cd mcp-wikidata
$ uv sync
# if you want to run client example together
$ uv sync --extra example
Run the server with:
$ uv run src/server.py
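For orientation, an MCP server like this typically registers each function as a tool and serves it over stdio. The stub below is a hypothetical sketch (the real src/server.py may be structured differently); the MCP registration steps are shown as comments because they require the MCP Python SDK:

```python
def get_metadata(entity_id: str, language: str = "en") -> dict:
    """Stub tool: a real implementation would call Wikidata's
    wbgetentities API to fetch the label and description."""
    return {"entity_id": entity_id, "language": language}

# Registering and serving the tool (requires `pip install mcp`):
#
#   from mcp.server.fastmcp import FastMCP
#   server = FastMCP("wikidata")
#   server.tool()(get_metadata)   # expose get_metadata as an MCP tool
#   server.run(transport="stdio")
```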
If you want to test it with a simple client (using langchain-mcp-adapters), run the client with:
# in another shell
$ uv run src/client.py
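A minimal sketch of what such a client might look like, assuming langchain-mcp-adapters' MultiServerMCPClient (the exact API varies across versions, so treat the names as illustrative):

```python
import asyncio

# Launch the Wikidata server as a subprocess and talk to it over stdio.
SERVER_CONFIG = {
    "wikidata": {
        "command": "uv",
        "args": ["run", "src/server.py"],
        "transport": "stdio",
    }
}

async def main():
    from langchain_mcp_adapters.client import MultiServerMCPClient
    client = MultiServerMCPClient(SERVER_CONFIG)
    tools = await client.get_tools()  # search_entity, execute_sparql, ...
    print([tool.name for tool in tools])

# To run (with the example dependencies installed):
# asyncio.run(main())
```

The returned tools can then be bound to a LangChain agent, which is what the example transcript below demonstrates.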
The LLM extracts valid entity and property identifiers, executes a SPARQL query, and finally recommends a movie directed by Bong Joon-ho.
{
"messages": [
HumanMessage(
content="Can you recommend me a movie directed by Bong Joonho?",
),
AIMessage(
tool_calls=[
{
"name": "search_entity",
"args": {"query": "Bong Joon-ho"},
}
],
),
ToolMessage(
content="Q495980",
name="search_entity",
),
AIMessage(
tool_calls=[
{
"name": "get_properties",
"args": {"entity_id": "Q495980"},
}
],
),
ToolMessage(
content='["P345", "P244", "P214", "P227", ...]',
name="get_properties",
),
AIMessage(
tool_calls=[
{
"name": "search_property",
"args": {"query": "director"},
}
],
),
ToolMessage(
content="P57",
name="search_property",
),
AIMessage(
tool_calls=[
{
"name": "execute_sparql",
"args": {
"sparql_query": 'SELECT ?film ?filmLabel WHERE {\n ?film wdt:P57 wd:Q495980.\n SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }\n} LIMIT 1'
},
}
],
),
ToolMessage(
content='[{"film": {"type": "uri", "value": "http://www.wikidata.org/entity/Q483761"}, "filmLabel": {"xml:lang": "en", "type": "literal", "value": "Mother"}}]',
name="execute_sparql",
),
AIMessage(
content='I recommend the movie "Mother," which was directed by Bong Joon-ho.',
),
]
}
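The execute_sparql result in the transcript above is the standard SPARQL JSON bindings format, where each row maps variable names to `{"type", "value", ...}` cells. A small helper to flatten such results into plain dicts (a sketch, not part of the project):

```python
import json

def flatten_bindings(raw: str) -> list:
    # Keep only each cell's "value", dropping type/language metadata.
    return [
        {var: cell["value"] for var, cell in row.items()}
        for row in json.loads(raw)
    ]

# The sample payload from the transcript:
sample = (
    '[{"film": {"type": "uri", "value": '
    '"http://www.wikidata.org/entity/Q483761"}, '
    '"filmLabel": {"xml:lang": "en", "type": "literal", "value": "Mother"}}]'
)
rows = flatten_bindings(sample)
```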
The following tools are implemented in the server:
Tool | Description |
---|---|
search_entity(query: str) | Search for a Wikidata entity ID by its query. |
search_property(query: str) | Search for a Wikidata property ID by its query. |
get_properties(entity_id: str) | Get the properties associated with a given Wikidata entity ID. |
execute_sparql(sparql_query: str) | Execute a SPARQL query on Wikidata. |
get_metadata(entity_id: str, language: str = "en") | Retrieve the label and description for a given Wikidata entity ID (English by default). |
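The execute_sparql tool presumably posts queries to the public endpoint at query.wikidata.org. A stdlib-only sketch of such a call (illustrative; the server's actual implementation may differ):

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

def sparql_request(query: str) -> Request:
    # The endpoint returns JSON when asked via the Accept header;
    # a descriptive User-Agent is expected by Wikimedia's usage policy.
    return Request(
        SPARQL_ENDPOINT + "?" + urlencode({"query": query, "format": "json"}),
        headers={
            "Accept": "application/sparql-results+json",
            "User-Agent": "mcp-wikidata-example/0.1",
        },
    )

def execute_sparql(query: str) -> list:
    # Returns the bindings list, one dict per result row.
    with urlopen(sparql_request(query)) as resp:
        return json.load(resp)["results"]["bindings"]
```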
MIT License