ZeroClaw routes every model call through a single Provider trait, so swapping backends requires only a config change, with no code changes. Built-in providers cover the major hosted APIs and several local inference servers. Custom endpoints let you target any compatible gateway without writing Rust.
Provider trait
The trait lives at src/providers/traits.rs. The only required method is chat_with_system. Everything else has default implementations that delegate to it.
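A minimal sketch of the shape this takes; the argument and return types below are illustrative assumptions, not the exact signatures in src/providers/traits.rs:

```rust
use async_trait::async_trait;

/// Illustrative sketch only: the real trait lives in src/providers/traits.rs
/// and its exact signatures may differ.
#[async_trait]
pub trait Provider: Send + Sync {
    /// The single method a backend must implement.
    async fn chat_with_system(
        &self,
        system_prompt: &str,
        user_message: &str,
    ) -> anyhow::Result<String>;

    /// Default implementations delegate to `chat_with_system`, so adding a
    /// new provider means writing one method and inheriting the rest.
    async fn chat(&self, user_message: &str) -> anyhow::Result<String> {
        self.chat_with_system("", user_message).await
    }
}
```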
Built-in providers
Use zeroclaw providers to list every provider available in your build, including aliases.
anthropic
Claude model family via the Anthropic API
openai
GPT models via the OpenAI API
gemini
Gemini models via Google AI Studio / Vertex
openrouter
Multi-model routing gateway
azure_openai
OpenAI models via Azure OpenAI Service
bedrock
Amazon Bedrock model access
ollama
Local models via Ollama
llamacpp
Local inference via llama-server (alias: llama.cpp)
vllm
High-throughput local serving via vLLM
sglang
Structured generation via SGLang
copilot
GitHub Copilot backend
telnyx
Telnyx AI inference API
glm
Zhipu AI GLM model family
openai_codex
OpenAI Codex (legacy)
compatible
Generic OpenAI-compatible endpoint
Switching providers in config
Set default_provider in ~/.zeroclaw/config.toml:
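For example, with any provider ID from the list above:

```toml
# ~/.zeroclaw/config.toml
default_provider = "anthropic"
```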
You can also override the configured default for a single invocation with the --provider flag:
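For instance (only the flag itself is documented here; the rest of the command line is illustrative):

```bash
# Run any zeroclaw command with an explicit provider for this invocation.
zeroclaw --provider openrouter ...
```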
Custom OpenAI-compatible endpoints
Prefix your base URL with custom: to route calls through any service that implements the OpenAI chat completions API:
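A sketch assuming the prefixed URL is used wherever a provider name would go, such as default_provider (the host is a placeholder):

```toml
# ~/.zeroclaw/config.toml -- placeholder gateway host
default_provider = "custom:https://llm-gateway.example.com"
```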
The URL must include the scheme (http:// or https://). ZeroClaw appends /v1/chat/completions automatically.
Examples
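For instance, pointing at a local OpenAI-compatible server (host and port are placeholders):

```toml
# Local OpenAI-compatible server; ZeroClaw appends /v1/chat/completions itself.
default_provider = "custom:http://localhost:8000"
```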
Custom Anthropic-compatible endpoints
Prefix with anthropic-custom: for services that implement the Anthropic messages API, such as corporate proxies and self-hosted Anthropic-format gateways:
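A sketch under the same assumption that the prefixed URL goes in default_provider (the host is a placeholder):

```toml
# ~/.zeroclaw/config.toml -- placeholder proxy host
default_provider = "anthropic-custom:https://anthropic-proxy.example.com"
```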
Environment variable auth
For both custom: and anthropic-custom: providers, supply the key via an environment variable instead of the config file:
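For example (the variable name below is hypothetical; this section does not specify the exact name ZeroClaw reads, so substitute the one your build expects):

```bash
# Hypothetical variable name, for illustration only.
export ZEROCLAW_API_KEY="sk-your-key"
```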
Local inference servers
ZeroClaw ships first-class support for three local inference runtimes. None of them require an API key unless you explicitly configure the server to require one.
- llama.cpp
- SGLang
- vLLM
Provider ID: llamacpp (alias: llama.cpp)
Default endpoint: http://localhost:8080/v1
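A minimal sketch of selecting the llama.cpp backend, assuming llama-server is already running on the default endpoint above:

```toml
# ~/.zeroclaw/config.toml
# No API key needed unless llama-server is configured to require one.
default_provider = "llamacpp"
```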