by caol64
Automatically format Markdown articles and publish them to WeChat public accounts, supporting theme selection, image upload, and AI integration.
Wenyan MCP Server enables AI assistants to automatically format Markdown articles and submit them to a WeChat public account draft box.
Installation
Install via npm:

```
npm install -g @wenyan-md/mcp
```

Or pull the Docker image:

```
docker pull caol64/wenyan-mcp
```

MCP client configuration (example for Claude Desktop or any MCP client):
```
{
  "mcpServers": {
    "wenyan-mcp": {
      "name": "公众号助手",
      "command": "npx",
      "args": ["@wenyan-md/mcp"],
      "env": {
        "WECHAT_APP_ID": "your_app_id",
        "WECHAT_APP_SECRET": "your_app_secret"
      }
    }
  }
}
```
Running
```
npx @wenyan-md/mcp
```
Provide WECHAT_APP_ID and WECHAT_APP_SECRET as environment variables. Each Markdown file must start with a front‑matter block containing at least a title; a cover image can be provided optionally.
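For example, a minimal article file might look like this (the title and cover values below are illustrative placeholders, not from the project):

```
---
title: My First Article
cover: https://example.com/cover.jpg
---

Article body in Markdown...
```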
Q: Do I need a WeChat public account? A: Yes. Obtain its App ID and App Secret and add the server’s IP to the WeChat IP whitelist.
Q: How are images uploaded? A: Images referenced in the Markdown (local file path or URL) are automatically uploaded to WeChat during the publishing process.
Q: Can I use Docker instead of npm?
A: Absolutely. Pull caol64/wenyan-mcp and configure the MCP client with a Docker command as shown in the README.
Q: What front‑matter fields are required?
A: title is mandatory. cover is optional unless the article contains no images; in that case a cover image must be supplied.
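The front-matter rules above can be sketched as a small check: `title` is required, and `cover` is required only when the body contains no images. This is a hypothetical helper for illustration, not the server's actual code:

```python
import re

def validate_front_matter(markdown: str) -> None:
    """Illustrative check of the stated front-matter rules (hypothetical
    helper, not Wenyan's implementation): `title` is required; `cover`
    is required only when the article body contains no images."""
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", markdown, re.DOTALL)
    if not match:
        raise ValueError("missing front-matter block")
    header, body = match.groups()
    # Parse simple "key: value" lines from the front-matter header.
    fields = {}
    for line in header.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            fields[key.strip()] = value.strip()
    if not fields.get("title"):
        raise ValueError("`title` is required")
    # A Markdown image reference looks like ![alt](src).
    has_image = re.search(r"!\[[^\]]*\]\([^)]+\)", body) is not None
    if not has_image and not fields.get("cover"):
        raise ValueError("`cover` is required when the article has no images")
```

For instance, an article with a `title` and at least one image passes even without a `cover`, while a text-only article without a `cover` is rejected.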
Wenyan (文颜) is a multi-platform Markdown formatting and publishing tool that converts Markdown and publishes it to supported platforms in one click.

Wenyan's goal: let writers focus on content rather than on formatting and platform adaptation.

This repository is the MCP Server edition of Wenyan. Built on the Model Context Protocol, it gives AI assistants (such as Claude Desktop) the ability to automatically format and publish public account articles.

Wenyan currently ships in several forms to cover different use cases.

👉 Built-in theme preview

Wenyan bundles and adapts several excellent Typora themes; our thanks go to their original authors.

Wenyan MCP Server can run in several ways; choose the one that fits your environment.
Install directly on your machine:

```
npm install -g @wenyan-md/mcp
```
Configure your MCP client (such as Claude Desktop):

Add the following to your MCP configuration file:
```
{
  "mcpServers": {
    "wenyan-mcp": {
      "name": "公众号助手",
      "command": "wenyan-mcp",
      "env": {
        "WECHAT_APP_ID": "your_app_id",
        "WECHAT_APP_SECRET": "your_app_secret"
      }
    }
  }
}
```
Suited for deployment on a server, or for users who want environment isolation.

Pull the image:

```
docker pull caol64/wenyan-mcp
```

Configure the MCP client:
```
{
  "mcpServers": {
    "wenyan-mcp": {
      "name": "公众号助手",
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "-v", "/your/host/file/path:/mnt/host-downloads",
        "-e", "WECHAT_APP_ID=your_app_id",
        "-e", "WECHAT_APP_SECRET=your_app_secret",
        "-e", "HOST_FILE_PATH=/your/host/file/path",
        "caol64/wenyan-mcp"
      ]
    }
  }
}
```
Docker-specific notes:

- Mounted directory (`-v`): you must mount the host directory containing your files/images to `/mnt/host-downloads` inside the container.
- Environment variable (`HOST_FILE_PATH`): must match the path of that same host directory.
- How it works: local images referenced in your Markdown articles should be placed in this directory; Docker maps it into the container so the server can read and upload them.
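The path mapping implied by this setup can be sketched as follows: a host path under `HOST_FILE_PATH` is rewritten to the fixed container mount point `/mnt/host-downloads` before the file is read. This is a hypothetical illustration, not the server's actual code:

```python
import os

CONTAINER_MOUNT = "/mnt/host-downloads"  # fixed mount point inside the container

def to_container_path(host_path: str, host_file_path: str) -> str:
    """Rewrite a host image path to its location inside the container.

    Hypothetical sketch of the mapping described in the Docker notes:
    paths under HOST_FILE_PATH become paths under /mnt/host-downloads.
    """
    host_file_path = host_file_path.rstrip("/")
    if not host_path.startswith(host_file_path + "/"):
        raise ValueError(f"{host_path} is outside the mounted directory")
    relative = host_path[len(host_file_path) + 1:]
    return os.path.join(CONTAINER_MOUNT, relative)
```

This is why the `-v` mount target and `HOST_FILE_PATH` must agree: if they differ, the rewritten container path points at a file that was never mounted.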
Whichever method you use, the following environment variables are required to connect to the WeChat public account platform:

- `WECHAT_APP_ID`: App ID of your WeChat public account
- `WECHAT_APP_SECRET`: App Secret of your WeChat public account

For an article to upload correctly, each Markdown file must begin with a front-matter block:
```
---
title: Running a Large Language Model Locally (2) - Giving the Model an External Knowledge Base
cover: /Users/xxx/image.jpg
---
```

Field descriptions:

- `title`: the article title (required)
- `cover`: the article's cover image
The following image sources are supported:

- a local file path (e.g. `/Users/lei/Downloads/result_image.jpg`)
- a web URL (e.g. `https://example.com/image.jpg`)

⚠️ Important

Make sure the IP address of the machine running Wenyan MCP Server has been added to the IP whitelist in the WeChat public account backend; otherwise calls to the upload API will fail.

Configuration guide: https://yuzhi.tech/docs/wenyan/upload
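As noted earlier, images referenced in the Markdown (local paths or URLs) are uploaded automatically during publishing. A simple sketch of how the two source kinds might be distinguished (hypothetical helper, not the server's code):

```python
import re

def find_image_refs(markdown: str) -> list:
    """Collect Markdown image references and classify each as a local
    file path or a remote URL. Illustrative sketch of the behavior
    described above, not Wenyan's actual implementation."""
    refs = []
    # A Markdown image reference looks like ![alt](src).
    for match in re.finditer(r"!\[[^\]]*\]\(([^)\s]+)\)", markdown):
        src = match.group(1)
        kind = "url" if src.startswith(("http://", "https://")) else "local"
        refs.append((kind, src))
    return refs
```

A local reference would then be uploaded from disk (inside Docker, via the mounted directory), while a URL would be fetched and re-uploaded to WeChat.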
A complete example:

```
---
title: Running a Large Language Model Locally (2) - Giving the Model an External Knowledge Base
cover: /Users/lei/Downloads/result_image.jpg
---
```

In the [previous article](https://babyno.top/posts/2024/02/running-a-large-language-model-locally/), we showed how to run a large language model locally. This article covers how to let the model retrieve custom data from an external knowledge base, improving its answer accuracy and making it appear more "intelligent".

## Prepare the model

Visit the `Ollama` model page and search for `qwen`. We will use the "[Qwen](https://ollama.com/library/qwen:7b)" model, which handles Chinese semantics well, for this experiment.

Debugging with the official Inspector is recommended (replace `<command>` with the command that launches the server, e.g. `npx @wenyan-md/mcp`):

```
npx @modelcontextprotocol/inspector <command>
```
On a successful start you will see output like:

```
🔗 Open inspector with token pre-filled:
http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=761c05058aa4f84ad02280e62d7a7e52ec0430d00c4c7a61492cca59f9eac299
(Auto-open is disabled when authentication is enabled)
```

Open the link above to reach the debugging page.

If you find Wenyan helpful, you can buy my cat some canned food ❤️
Apache License Version 2.0
Alternatively, register the server with the Claude Code CLI:

```
claude mcp add wenyan-mcp npx @wenyan-md/mcp
```