OpenWalrus

Agents

How OpenWalrus agents work — lifecycle, configuration, execution modes, and stop reasons.

An agent is the core execution unit in OpenWalrus. Each agent wraps an LLM model, maintains conversation history, and can invoke tools.

Agent configuration

Agents are defined as Markdown files in `~/.openwalrus/agents/`:

```yaml
---
name: my-agent
description: A helpful coding assistant
system_prompt: You are a coding assistant. Be concise and precise.
model: deepseek-chat
max_iterations: 20
tool_choice: auto
---
```
| Field | Description | Default |
| --- | --- | --- |
| `name` | Unique identifier | Required |
| `description` | Human-readable description | — |
| `system_prompt` | Instructions for the model | — |
| `model` | Model name (see providers) | Default from config |
| `max_iterations` | Maximum tool-use rounds | `20` |
| `tool_choice` | `auto`, `none`, or a specific tool name | `auto` |
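The defaults in the table can be modeled as a plain struct. The types below (`AgentConfig`, `ToolChoice`) are illustrative stand-ins, not the actual OpenWalrus types; they only show how the documented defaults would apply when a field is omitted:

```rust
// Illustrative stand-ins for an agent definition's frontmatter fields.
#[derive(Debug, Clone, PartialEq)]
enum ToolChoice {
    Auto,
    None,
    Tool(String), // a specific tool name
}

struct AgentConfig {
    name: String,          // required: no default
    description: String,
    system_prompt: String,
    model: Option<String>, // None => fall back to the config-wide default model
    max_iterations: u32,   // defaults to 20
    tool_choice: ToolChoice, // defaults to Auto
}

impl AgentConfig {
    // Build a config with only the required field set; everything else
    // takes the documented default.
    fn new(name: &str) -> Self {
        Self {
            name: name.to_string(),
            description: String::new(),
            system_prompt: String::new(),
            model: None,
            max_iterations: 20,
            tool_choice: ToolChoice::Auto,
        }
    }
}

fn main() {
    let cfg = AgentConfig::new("my-agent");
    assert_eq!(cfg.max_iterations, 20);
    assert_eq!(cfg.tool_choice, ToolChoice::Auto);
    assert!(cfg.model.is_none()); // falls back to the default from config
}
```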

Execution modes

Agents support three execution modes:

Step

Execute a single LLM round — one model call plus any tool dispatches:

```rust
let events = agent.step(&tools).await?;
```

Run

Loop until the agent stops (text response or max iterations):

```rust
let response = agent.run(&tools).await?;
```
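The run loop described above can be sketched synchronously with stand-in types. `run`, `step_once`, `StepOutcome`, and this two-variant `StopReason` are hypothetical names for illustration (the error and no-action cases are omitted for brevity), not the actual OpenWalrus API:

```rust
// Sketch of a run loop: repeat steps until the model answers with text
// or the iteration budget is exhausted. All names are illustrative.
#[derive(Debug, PartialEq)]
enum StopReason {
    TextResponse,
    MaxIterations,
}

enum StepOutcome {
    Text(String), // the model answered with text: stop
    ToolRound,    // the model called tools: keep looping
}

fn run(max_iterations: u32, mut step_once: impl FnMut(u32) -> StepOutcome) -> StopReason {
    for i in 0..max_iterations {
        match step_once(i) {
            StepOutcome::Text(_) => return StopReason::TextResponse,
            StepOutcome::ToolRound => continue, // another tool-use round
        }
    }
    StopReason::MaxIterations
}

fn main() {
    // An agent that needs two tool rounds before answering.
    let reason = run(20, |i| {
        if i < 2 {
            StepOutcome::ToolRound
        } else {
            StepOutcome::Text("done".into())
        }
    });
    assert_eq!(reason, StopReason::TextResponse);

    // An agent that never answers hits the iteration cap.
    assert_eq!(run(3, |_| StepOutcome::ToolRound), StopReason::MaxIterations);
}
```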

Stream

The canonical execution mode. Returns an async stream of events:

```rust
let stream = agent.run_stream(&tools);
```

Agent events

During execution, agents emit events:

| Event | Description |
| --- | --- |
| `TextDelta` | Incremental text from the model |
| `ToolCallsStart` | Tool calls initiated |
| `ToolResult` | A single tool result returned |
| `ToolCallsComplete` | All tool calls finished |
| `Done` | Agent has stopped |
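A consumer typically matches on these events. The enum and `consume` function below are illustrative stand-ins (the real event payloads may differ), showing the shape of such a handler:

```rust
// Illustrative stand-in for the agent event type.
enum AgentEvent {
    TextDelta(String),
    ToolCallsStart,
    ToolResult(String),
    ToolCallsComplete,
    Done,
}

// Accumulate streamed text and count tool results from a sequence of events.
fn consume(events: Vec<AgentEvent>) -> (String, usize) {
    let mut text = String::new();
    let mut tool_results = 0;
    for event in events {
        match event {
            AgentEvent::TextDelta(delta) => text.push_str(&delta),
            AgentEvent::ToolResult(_) => tool_results += 1,
            AgentEvent::ToolCallsStart | AgentEvent::ToolCallsComplete => {}
            AgentEvent::Done => break, // the agent has stopped
        }
    }
    (text, tool_results)
}

fn main() {
    let events = vec![
        AgentEvent::ToolCallsStart,
        AgentEvent::ToolResult("42".into()),
        AgentEvent::ToolCallsComplete,
        AgentEvent::TextDelta("Hel".into()),
        AgentEvent::TextDelta("lo".into()),
        AgentEvent::Done,
    ];
    let (text, tool_results) = consume(events);
    assert_eq!(text, "Hello");
    assert_eq!(tool_results, 1);
}
```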

Stop reasons

An agent stops when one of these conditions is met:

- `TextResponse`: the model produced text without requesting tools
- `MaxIterations`: reached the configured limit
- `NoAction`: the model returned neither text nor tool calls
- `Error`: an execution error occurred
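Callers usually branch on the stop reason after a run. The enum and `describe` helper below are an illustrative stand-in, not the actual OpenWalrus type:

```rust
// Illustrative stand-in for the stop-reason type and a handler for it.
#[derive(Debug)]
enum StopReason {
    TextResponse,
    MaxIterations,
    NoAction,
    Error(String),
}

fn describe(reason: &StopReason) -> String {
    match reason {
        StopReason::TextResponse => "finished with a text answer".into(),
        StopReason::MaxIterations => "hit the max_iterations limit".into(),
        StopReason::NoAction => "model returned neither text nor tool calls".into(),
        StopReason::Error(e) => format!("execution error: {e}"),
    }
}

fn main() {
    assert_eq!(describe(&StopReason::MaxIterations), "hit the max_iterations limit");
    assert_eq!(
        describe(&StopReason::Error("timeout".into())),
        "execution error: timeout"
    );
}
```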

Concurrency

Each agent is wrapped in a `Mutex` inside the runtime. Multiple agents run concurrently via `tokio::spawn`, but a single agent processes one request at a time.
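The per-agent serialization can be sketched with standard-library primitives. This is a simplified synchronous analogue (`std::thread` standing in for `tokio::spawn`, and a minimal `Agent` stand-in), not the runtime's actual code:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in agent: the Mutex guarantees one request at a time per agent.
struct Agent {
    handled: u32,
}

impl Agent {
    fn handle_request(&mut self) {
        self.handled += 1;
    }
}

fn main() {
    let agent = Arc::new(Mutex::new(Agent { handled: 0 }));

    // Many concurrent callers: each must acquire the lock before the
    // agent processes its request, so requests are serialized per agent,
    // while different agents (different Mutexes) would run concurrently.
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let agent = Arc::clone(&agent);
            thread::spawn(move || agent.lock().unwrap().handle_request())
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(agent.lock().unwrap().handled, 8);
}
```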

See event loop for how events are dispatched across agents.
