OpenWalrus

Providers

How OpenWalrus routes model requests — two API standards, provider config, and hot-reload.

OpenWalrus supports multiple LLM providers through a unified Model trait. Providers fall into two categories: local (built-in registry) and remote (API-based).
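A minimal sketch of what such a unified trait could look like, with one implementor per category. The names, fields, and signatures here are illustrative assumptions, not the actual OpenWalrus source:

```rust
/// Hypothetical unified model interface (illustrative, not the real trait).
trait Model {
    /// Send a prompt and return the model's completion.
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

/// A local model resolved from the built-in registry.
struct LocalModel {
    registry_key: String,
}

/// A remote model that talks to an HTTP API.
struct RemoteModel {
    model: String,
    base_url: String,
}

impl Model for LocalModel {
    fn complete(&self, prompt: &str) -> Result<String, String> {
        // In the real daemon this would run local inference.
        Ok(format!("[{}] {}", self.registry_key, prompt))
    }
}

impl Model for RemoteModel {
    fn complete(&self, prompt: &str) -> Result<String, String> {
        // In the real daemon this would issue an HTTP request.
        Ok(format!("POST {} model={}", self.base_url, self.model))
    }
}
```

Because both categories sit behind one trait, the rest of the daemon can route requests without caring whether inference happens in-process or over the network.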

Two API standards

All remote providers use one of two wire formats, selected by the standard field in config:

Standard            Protocol                      Used by
openai (default)    OpenAI chat completions API   OpenAI, DeepSeek, Grok, Qwen, Kimi, Ollama, and any compatible endpoint
anthropic           Anthropic Messages API        Claude

If standard is omitted, OpenWalrus defaults to openai. If the base_url contains "anthropic", the Anthropic standard is auto-detected.
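That selection rule can be expressed as a small function. This is an illustrative sketch of the logic described above, not the actual OpenWalrus code: an explicit standard wins, otherwise a base_url containing "anthropic" selects the Anthropic API, otherwise the OpenAI-compatible default applies.

```rust
#[derive(Debug, PartialEq)]
enum ApiStandard {
    OpenAi,
    Anthropic,
}

/// Resolve the wire format for a provider entry (hypothetical helper).
fn resolve_standard(standard: Option<&str>, base_url: Option<&str>) -> ApiStandard {
    match standard {
        // An explicit `standard` field always wins.
        Some("anthropic") => ApiStandard::Anthropic,
        Some(_) => ApiStandard::OpenAi,
        // No `standard`: auto-detect from the base URL, else default to openai.
        None => {
            if base_url.map_or(false, |u| u.contains("anthropic")) {
                ApiStandard::Anthropic
            } else {
                ApiStandard::OpenAi
            }
        }
    }
}
```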

Local vs remote

  • Local models are handled by the built-in model registry. Set model.default to a registry key (e.g., "qwen3-4b") — no provider config needed.
  • Remote models require a [model.providers.*] entry with model, api_key or base_url, and optionally standard.
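For the local case, the entire configuration can be a single line. The registry key below is the example from the bullet above:

```toml
[model]
# Registry key for a built-in local model — no [model.providers.*] entry needed.
default = "qwen3-4b"
```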

Provider configuration

Each remote provider is configured in walrus.toml:

[model]
default = "deepseek-chat"

[model.providers.deepseek-chat]
model = "deepseek-chat"
api_key = "${DEEPSEEK_API_KEY}"

You can configure multiple providers and switch between them:

[model.providers.gpt-4o]
model = "gpt-4o"
api_key = "${OPENAI_API_KEY}"

[model.providers.claude-sonnet]
model = "claude-sonnet-4-20250514"
api_key = "${ANTHROPIC_API_KEY}"
standard = "anthropic"

Any model name is valid — the standard field (not the model name) determines which API protocol to use.

Provider manager

The ProviderManager holds all configured providers and routes requests by model name. It supports hot-reload — update the config and the active provider changes without restarting the daemon.
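One common way to support that kind of hot-reload is to keep the provider table behind a read-write lock, so a config reload can swap the whole table while in-flight requests keep reading the old one. The sketch below illustrates that pattern under assumed names; it is not the actual OpenWalrus implementation.

```rust
use std::collections::HashMap;
use std::sync::RwLock;

/// Minimal stand-in for a configured provider entry (hypothetical fields).
struct ProviderConfig {
    model: String,
    standard: String,
}

/// Illustrative provider manager: routes by provider name, reloadable at runtime.
struct ProviderManager {
    providers: RwLock<HashMap<String, ProviderConfig>>,
}

impl ProviderManager {
    fn new() -> Self {
        ProviderManager { providers: RwLock::new(HashMap::new()) }
    }

    /// Replace the whole provider table, as a config reload would.
    fn reload(&self, table: HashMap<String, ProviderConfig>) {
        *self.providers.write().unwrap() = table;
    }

    /// Route a request by provider name (the [model.providers.*] key),
    /// returning the API standard the request should use.
    fn resolve(&self, name: &str) -> Option<String> {
        self.providers
            .read()
            .unwrap()
            .get(name)
            .map(|p| p.standard.clone())
    }
}
```

Swapping the table atomically under the write lock means the daemon never observes a half-updated configuration.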
