by AdamStrojek
Simplifies creation of AI agents with a Rust library that connects to major LLM providers, offers structured output, and includes a flexible ToolBox for custom tool integration.
AgentAI provides a Rust‑native framework for building AI agents. It abstracts LLM provider details, supports the Model‑Context‑Protocol (MCP) for server‑based tools, and delivers structured responses so developers can focus on agent logic rather than low‑level API handling.
cargo add agentai
Optional Cargo features (mcp-client, macros, tools-buildin, tools-web) are enabled by default. Create an Agent instance, select a model, and call run:
use agentai::Agent;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let mut agent = Agent::new("You are a useful assistant");
let answer: String = agent.run("gpt-4o", "Why is the sky blue?", None).await?;
println!("Answer: {}", answer);
Ok(())
}
Implement the ToolBox trait, or enable the macros feature to generate tool scaffolding automatically; the ToolBox trait allows easy creation and management of custom tools.
Q: Is the API stable?
A: The library is under heavy development; interfaces may change without notice.
Q: Do I need to enable mcp-client?
A: It is enabled by default and provides experimental MCP tool support. Disable it only if you do not need MCP integration.
Q: How can I add my own tool?
A: Implement the ToolBox trait for your tool, or use the #[toolbox] macro to generate the boilerplate.
Q: What Rust version is required?
A: The crate follows the latest stable Rust release; consult Cargo.toml for the minimum supported version.
Q: Where can I find more examples?
A: The examples directory on docs.rs contains runnable demos; run them with cargo run --example <name>.
AgentAI is a Rust library designed to simplify the creation of AI agents. It leverages the GenAI library to interface with a wide range of popular Large Language Models (LLMs), making it versatile and powerful. Written in Rust, AgentAI benefits from strong static typing and robust error handling, ensuring reliable and maintainable code. Whether you're developing simple or complex AI agents, AgentAI provides a streamlined and efficient development process.
This library is under heavy development. The interface may change at any time without notice.
ToolBox (version 0.1.5)
This release introduces the ToolBox, a new feature providing an easy-to-use interface for supplying tools to AI agents.
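The idea behind a tool box can be illustrated with a simplified, std-only sketch: named tools registered in a map and dispatched by name on behalf of an agent. This is a conceptual illustration only, not agentai's actual ToolBox API; the Tool trait, Echo tool, and Toolbox struct below are hypothetical names chosen for the example.

```rust
use std::collections::HashMap;

// Hypothetical trait standing in for the idea of an agent tool:
// each tool has a name and can be invoked with string input.
trait Tool {
    fn name(&self) -> &str;
    fn call(&self, input: &str) -> String;
}

// A trivial example tool that echoes its input back.
struct Echo;

impl Tool for Echo {
    fn name(&self) -> &str {
        "echo"
    }
    fn call(&self, input: &str) -> String {
        format!("echo: {input}")
    }
}

// A registry that owns tools and dispatches calls by name.
struct Toolbox {
    tools: HashMap<String, Box<dyn Tool>>,
}

impl Toolbox {
    fn new() -> Self {
        Toolbox { tools: HashMap::new() }
    }

    fn register(&mut self, tool: Box<dyn Tool>) {
        let name = tool.name().to_string();
        self.tools.insert(name, tool);
    }

    // Returns None when no tool with that name is registered.
    fn dispatch(&self, name: &str, input: &str) -> Option<String> {
        self.tools.get(name).map(|tool| tool.call(input))
    }
}

fn main() {
    let mut toolbox = Toolbox::new();
    toolbox.register(Box::new(Echo));
    // In a real agent loop, the tool name and arguments would come
    // from the model's response rather than being hard-coded.
    let out = toolbox.dispatch("echo", "hello").unwrap();
    println!("{out}");
}
```

The map of trait objects is what lets an agent resolve a model-chosen tool name at runtime without knowing the concrete tool types in advance.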
We are continuously working to improve AgentAI. Here are some of the features planned for the near future:
To add the AgentAI crate to your project, run the following command in your project's root directory:
cargo add agentai
This command adds the crate and its dependencies to your project.
Available features for the agentai crate.
To enable any of these features, run:
cargo add agentai -F mcp-client
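For reference, cargo add records the dependency and feature in your Cargo.toml; the resulting entry looks roughly like the following (the version number here is illustrative, since cargo picks the latest release):

```toml
[dependencies]
# Version shown is illustrative; cargo add fills in the current release.
agentai = { version = "0.1", features = ["mcp-client"] }
```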
Features list:
- mcp-client (enabled by default) — enables experimental support for Agent Tools based on MCP servers
- macros (enabled by default) — enables support for the #[toolbox] macro
- tools-buildin (enabled by default) — enables support for built-in tools
- tools-web (enabled by default) — enables support for web tools
Here is a basic example of how to create an AI agent using AgentAI:
use agentai::Agent;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let mut agent = Agent::new("You are a useful assistant");
let answer: String = agent.run("gpt-4o", "Why is the sky blue?", None).await?;
println!("Answer: {}", answer);
Ok(())
}
For more examples, check out the examples directory. To run an example, use the following command, replacing <example_name>
with the name of the example file (without the .rs
extension):
cargo run --example <example_name>
For instance, to run the simple
example:
cargo run --example simple
Full documentation is available on docs.rs.
Contributions are welcome! Please see our CONTRIBUTING.md for more details.
This project is licensed under the MIT License. See the LICENSE file for details.
Special thanks to the creators of the GenAI library for providing a robust framework for interfacing with various LLMs.