# Providers
How OpenWalrus routes model requests — two API standards, provider config, and hot-reload.
OpenWalrus supports multiple LLM providers through a unified `Model` trait. Providers fall into two categories: local (built-in registry) and remote (API-based).
## Two API standards
All remote providers use one of two wire formats, selected by the `standard` field in config:
| Standard | Protocol | Used by |
|---|---|---|
| `openai` (default) | OpenAI chat completions API | OpenAI, DeepSeek, Grok, Qwen, Kimi, Ollama, and any compatible endpoint |
| `anthropic` | Anthropic Messages API | Claude |
If `standard` is omitted, OpenWalrus defaults to `openai`. If the `base_url` contains `"anthropic"`, the Anthropic standard is auto-detected.
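The resolution order described above can be sketched as follows (the names and types here are illustrative, not the actual OpenWalrus internals):

```rust
// Which wire format a provider entry resolves to.
#[derive(Debug, PartialEq)]
enum ApiStandard {
    OpenAi,
    Anthropic,
}

// Resolution order: an explicit `standard` field wins, then the
// base_url substring heuristic, then the `openai` default.
fn resolve_standard(standard: Option<&str>, base_url: Option<&str>) -> ApiStandard {
    match standard {
        Some("anthropic") => ApiStandard::Anthropic,
        Some(_) => ApiStandard::OpenAi,
        None => {
            if base_url.map_or(false, |u| u.contains("anthropic")) {
                ApiStandard::Anthropic
            } else {
                ApiStandard::OpenAi
            }
        }
    }
}
```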
## Local vs remote
- **Local** models are handled by the built-in model registry. Set `model.default` to a registry key (e.g., `"qwen3-4b"`) — no provider config needed.
- **Remote** models require a `[model.providers.*]` entry with `model`, `api_key` or `base_url`, and optionally `standard`.
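Using the registry key from the example above, a local setup reduces to a single line:

```toml
[model]
default = "qwen3-4b"   # registry key; no [model.providers.*] entry required
```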
## Provider configuration
Each remote provider is configured in `walrus.toml`:
```toml
[model]
default = "deepseek-chat"

[model.providers.deepseek-chat]
model = "deepseek-chat"
api_key = "${DEEPSEEK_API_KEY}"
```

You can configure multiple providers and switch between them:
```toml
[model.providers.gpt-4o]
model = "gpt-4o"
api_key = "${OPENAI_API_KEY}"

[model.providers.claude-sonnet]
model = "claude-sonnet-4-20250514"
api_key = "${ANTHROPIC_API_KEY}"
standard = "anthropic"
```

Any model name is valid — the `standard` field (not the model name) determines which API protocol to use.
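The same mechanism covers any OpenAI-compatible endpoint, such as a local Ollama server. A sketch (the provider key and model name are examples, and the `base_url` shown is Ollama's usual local default — adjust for your setup):

```toml
[model.providers.llama3-local]
model = "llama3"                          # any name the endpoint accepts
base_url = "http://localhost:11434/v1"    # OpenAI-compatible local endpoint
# standard omitted -> defaults to "openai"
```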
## Provider manager
The `ProviderManager` holds all configured providers and routes requests by model name. It supports hot-reload — update the config and the active provider changes without restarting the daemon.
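Routing by model name amounts to a keyed lookup over a table that can be swapped atomically on reload. A minimal sketch under that assumption (illustrative types and method names, not the real OpenWalrus API):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Illustrative provider record; the real config carries more fields.
#[derive(Clone)]
struct Provider {
    model: String,
    standard: String,
}

// Hot-reloadable table: readers route requests while a reload
// swaps in a freshly parsed map without restarting the process.
struct ProviderManager {
    providers: RwLock<HashMap<String, Arc<Provider>>>,
}

impl ProviderManager {
    // Look up a provider by its configured name.
    fn route(&self, name: &str) -> Option<Arc<Provider>> {
        self.providers.read().unwrap().get(name).cloned()
    }

    // Replace the whole table, e.g. after walrus.toml changes on disk.
    fn reload(&self, new: HashMap<String, Arc<Provider>>) {
        *self.providers.write().unwrap() = new;
    }
}
```

In-flight requests that already hold an `Arc<Provider>` keep using the old entry until they finish, while new requests see the reloaded table.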
## What's next
- Local inference — model registry and auto-quantization
- Remote providers — OpenAI, Claude, DeepSeek, and more
- Configuration — full config setup