# OpenWalrus Documentation
Build and run autonomous AI agents locally with OpenWalrus — a Rust-powered, local-first agent runtime.
OpenWalrus is an open-source agent runtime that runs autonomous AI agents entirely on your machine. Built in Rust, it bundles LLM inference, tool execution, persistent memory, and messaging integrations into a single binary.
## How it works
OpenWalrus runs a background daemon that manages agents, routes events, and dispatches tool calls. You interact with agents through the CLI, or connect them to Telegram and Discord.
```
[Sources] ──→ [Event Loop] ──→ [Agents]
 Socket        mpsc channel     Agent 1
 Telegram                       Agent 2
 Discord                        ...
                Tool calls
```

Every event flows through a single unbounded `mpsc` channel. The event loop dispatches each event as its own async task, so agents never block each other.
## Architecture
The runtime is composed of focused crates:
| Crate | Purpose |
|---|---|
| `walrus-core` | Agent execution, runtime, hooks, model trait |
| `walrus-model` | LLM providers (OpenAI, Claude, DeepSeek, local) |
| `walrus-memory` | Persistent memory (SQLite, filesystem, in-memory) |
| `walrus-daemon` | Background service, config, event loop |
| `walrus-channel` | Telegram and Discord integrations |
| `walrus-socket` | Unix domain socket transport |
| `walrus-cli` | CLI interface and REPL |
Each crate can be used independently. The daemon composes them all into a running system.
## What you can do
- Run agents with built-in LLM inference — no API keys needed
- Connect to remote providers like OpenAI, Claude, and DeepSeek
- Give agents access to your filesystem and shell
- Extend agents with skills and MCP servers
- Deploy bots on Telegram and Discord
- Store long-term knowledge in persistent memory