Agents
How OpenWalrus agents work — lifecycle, configuration, execution modes, and stop reasons.
An agent is the core execution unit in OpenWalrus. Each agent wraps an LLM, maintains conversation history, and can invoke tools.
Agent configuration
Agents are defined as Markdown files in `~/.openwalrus/agents/`:

```markdown
---
name: my-agent
description: A helpful coding assistant
system_prompt: You are a coding assistant. Be concise and precise.
model: deepseek-chat
max_iterations: 20
tool_choice: auto
---
```

| Field | Description | Default |
|---|---|---|
| `name` | Unique identifier | Required |
| `description` | Human-readable description | — |
| `system_prompt` | Instructions for the model | — |
| `model` | Model name (see providers) | Default from config |
| `max_iterations` | Maximum tool-use rounds | 20 |
| `tool_choice` | `auto`, `none`, or a specific tool name | `auto` |
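The fields above can be pictured as a configuration struct with the documented defaults applied. This is an illustrative sketch only; the struct, field, and constructor names are assumptions, not the real OpenWalrus definitions:

```rust
// Hypothetical shape for a parsed agent configuration (not the real
// OpenWalrus type). Defaults mirror the table above.
#[derive(Debug, Clone)]
struct AgentConfig {
    name: String,                  // required, unique identifier
    description: Option<String>,   // human-readable description
    system_prompt: Option<String>, // instructions for the model
    model: Option<String>,         // None = fall back to the config default
    max_iterations: u32,           // maximum tool-use rounds
    tool_choice: String,           // "auto", "none", or a tool name
}

impl AgentConfig {
    // Everything but `name` takes its documented default.
    fn new(name: impl Into<String>) -> Self {
        Self {
            name: name.into(),
            description: None,
            system_prompt: None,
            model: None,
            max_iterations: 20,
            tool_choice: "auto".to_string(),
        }
    }
}

fn main() {
    let cfg = AgentConfig::new("my-agent");
    assert_eq!(cfg.max_iterations, 20);
    assert_eq!(cfg.tool_choice, "auto");
    println!("{cfg:?}");
}
```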
Execution modes
Agents support three execution methods:
Step

Execute a single LLM round — one model call plus any tool dispatches:

```rust
let events = agent.step(&tools).await?;
```

Run

Loop until the agent stops (text response or max iterations):

```rust
let response = agent.run(&tools).await?;
```

Stream

The canonical execution mode. Returns an async stream of events:

```rust
let stream = agent.run_stream(&tools);
```

Agent events
During execution, agents emit events:
| Event | Description |
|---|---|
| `TextDelta` | Incremental text from the model |
| `ToolCallsStart` | Tool calls initiated |
| `ToolResult` | A single tool result returned |
| `ToolCallsComplete` | All tool calls finished |
| `Done` | Agent has stopped |
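A consumer typically matches on each event as it arrives: accumulate `TextDelta` chunks, surface tool activity, and stop on `Done`. The sketch below is a simplified synchronous version (the real `run_stream` is async); the variant names follow the table above, but the payload types are assumptions:

```rust
// Simplified event enum; payload types are illustrative assumptions.
#[derive(Debug)]
enum AgentEvent {
    TextDelta(String),  // incremental text from the model
    ToolCallsStart,     // tool calls initiated
    ToolResult(String), // a single tool result returned
    ToolCallsComplete,  // all tool calls finished
    Done,               // agent has stopped
}

// Accumulate the model's text, logging tool activity as a side channel.
fn render(events: Vec<AgentEvent>) -> String {
    let mut text = String::new();
    for event in events {
        match event {
            AgentEvent::TextDelta(chunk) => text.push_str(&chunk),
            AgentEvent::ToolCallsStart => eprintln!("calling tools..."),
            AgentEvent::ToolResult(r) => eprintln!("tool result: {r}"),
            AgentEvent::ToolCallsComplete => eprintln!("tools finished"),
            AgentEvent::Done => break,
        }
    }
    text
}

fn main() {
    let events = vec![
        AgentEvent::ToolCallsStart,
        AgentEvent::ToolResult("42".into()),
        AgentEvent::ToolCallsComplete,
        AgentEvent::TextDelta("The answer is ".into()),
        AgentEvent::TextDelta("42.".into()),
        AgentEvent::Done,
    ];
    assert_eq!(render(events), "The answer is 42.");
}
```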
Stop reasons
An agent stops when one of these conditions is met:
- `TextResponse` — the model produced text without requesting tools
- `MaxIterations` — reached the configured limit
- `NoAction` — the model returned neither text nor tool calls
- `Error` — an execution error occurred
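The decision made after each round can be sketched as follows. The variant names match the list above; the round-outcome struct and function signature are assumptions made for illustration, not the real OpenWalrus API:

```rust
// Stop-reason variants from the list above (Error payload is assumed).
#[derive(Debug, PartialEq)]
enum StopReason {
    TextResponse,
    MaxIterations,
    NoAction,
    Error(String),
}

// Hypothetical summary of one LLM round.
struct RoundOutcome {
    produced_text: bool,
    requested_tools: bool,
}

// Decide whether to stop after `iteration` (0-based) of `max_iterations`.
// Returning None means: dispatch the requested tools and loop again.
fn stop_reason(outcome: &RoundOutcome, iteration: u32, max_iterations: u32) -> Option<StopReason> {
    if outcome.produced_text && !outcome.requested_tools {
        Some(StopReason::TextResponse) // final answer, no tools requested
    } else if !outcome.produced_text && !outcome.requested_tools {
        Some(StopReason::NoAction) // model returned nothing actionable
    } else if iteration + 1 >= max_iterations {
        Some(StopReason::MaxIterations) // hit the configured limit
    } else {
        None // keep looping
    }
}

fn main() {
    let text_only = RoundOutcome { produced_text: true, requested_tools: false };
    assert_eq!(stop_reason(&text_only, 0, 20), Some(StopReason::TextResponse));

    let tools = RoundOutcome { produced_text: false, requested_tools: true };
    assert_eq!(stop_reason(&tools, 0, 20), None); // continue the loop
    assert_eq!(stop_reason(&tools, 19, 20), Some(StopReason::MaxIterations));
}
```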
Concurrency
Each agent is wrapped in a `Mutex` inside the runtime. Multiple agents run concurrently via `tokio::spawn`, but a single agent processes one request at a time.
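The property this buys can be demonstrated with std threads (the real runtime uses an async `Mutex` and `tokio::spawn`; std primitives are used here only to keep the sketch self-contained, and `Agent` is a stand-in type):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in for the runtime's agent state.
struct Agent {
    requests_handled: u32,
}

// Fire 10 "requests" at each of two agents and return the final counts.
// The per-agent Mutex serializes requests to any one agent, while the
// two agents make progress in parallel.
fn run_demo() -> Vec<u32> {
    let agents: Vec<Arc<Mutex<Agent>>> = (0..2)
        .map(|_| Arc::new(Mutex::new(Agent { requests_handled: 0 })))
        .collect();

    let mut handles = Vec::new();
    for agent in &agents {
        for _ in 0..10 {
            let agent = Arc::clone(agent);
            handles.push(thread::spawn(move || {
                // Holding the lock = this agent is busy with one request.
                agent.lock().unwrap().requests_handled += 1;
            }));
        }
    }
    for h in handles {
        h.join().unwrap();
    }
    agents.iter().map(|a| a.lock().unwrap().requests_handled).collect()
}

fn main() {
    assert_eq!(run_demo(), vec![10, 10]);
}
```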
See event loop for how events are dispatched across agents.