ZeroClaw resolves the active provider from `default_provider` in `config.toml`, with environment variables able to override it at runtime. You can list all built-in providers and their aliases with `zeroclaw providers`.
## Credential resolution order
For every request the runtime picks credentials in this order:

1. Explicit `api_key` value from `config.toml` or CLI flag
2. Provider-specific environment variable (for example `OPENROUTER_API_KEY`)
3. Generic fallbacks: `ZEROCLAW_API_KEY`, then `API_KEY`
With `reliability.fallback_providers` configured, each fallback provider resolves its credentials independently; the primary provider's key is not reused.
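A minimal `config.toml` sketch of this behavior, assuming a `[reliability]` table holds the `fallback_providers` list (check the exact schema against your ZeroClaw version):

```toml
default_provider = "openrouter"   # resolves OPENROUTER_API_KEY, then generic fallbacks

[reliability]
# Each fallback resolves its own credentials (ANTHROPIC_API_KEY, OPENAI_API_KEY);
# the OpenRouter key is never reused for them.
fallback_providers = ["anthropic", "openai"]
```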
The environment variable `ZEROCLAW_PROVIDER` always overrides `default_provider` when non-empty. The legacy `PROVIDER` variable is applied only when the config provider is unset or still set to `openrouter`. Use `ZEROCLAW_PROVIDER` whenever you need a clean runtime override.

## Provider catalog
The catalog covers both cloud and local providers. Cloud providers and their credential variables:
| Canonical ID | Aliases | Provider-specific env var |
|---|---|---|
| `openrouter` | — | `OPENROUTER_API_KEY` |
| `anthropic` | — | `ANTHROPIC_OAUTH_TOKEN`, `ANTHROPIC_API_KEY` |
| `openai` | — | `OPENAI_API_KEY` |
| `gemini` | `google`, `google-gemini` | `GEMINI_API_KEY`, `GOOGLE_API_KEY` |
| `xai` | `grok` | `XAI_API_KEY` |
| `deepseek` | — | `DEEPSEEK_API_KEY` |
| `mistral` | — | `MISTRAL_API_KEY` |
| `groq` | — | `GROQ_API_KEY` |
| `together` | `together-ai` | `TOGETHER_API_KEY` |
| `fireworks` | `fireworks-ai` | `FIREWORKS_API_KEY` |
| `perplexity` | — | `PERPLEXITY_API_KEY` |
| `cohere` | — | `COHERE_API_KEY` |
| `novita` | — | `NOVITA_API_KEY` |
| `nvidia` | `nvidia-nim`, `build.nvidia.com` | `NVIDIA_API_KEY` |
| `vercel` | `vercel-ai` | `VERCEL_API_KEY` |
| `cloudflare` | `cloudflare-ai` | `CLOUDFLARE_API_KEY` |
| `bedrock` | `aws-bedrock` | `AWS_ACCESS_KEY_ID` + `AWS_SECRET_ACCESS_KEY` |
| `moonshot` | `kimi` | `MOONSHOT_API_KEY` |
| `kimi-code` | `kimi_coding`, `kimi_for_coding` | `KIMI_CODE_API_KEY`, `MOONSHOT_API_KEY` |
| `opencode` | `opencode-zen` | `OPENCODE_API_KEY` |
| `opencode-go` | — | `OPENCODE_GO_API_KEY` |
| `zai` | `z.ai` | `ZAI_API_KEY` |
| `glm` | `zhipu` | `GLM_API_KEY` |
| `minimax` | `minimax-intl`, `minimax-oauth`, and others | `MINIMAX_OAUTH_TOKEN`, `MINIMAX_API_KEY` |
| `qianfan` | `baidu` | `QIANFAN_API_KEY` |
| `doubao` | `volcengine`, `ark`, `doubao-cn` | `ARK_API_KEY`, `DOUBAO_API_KEY` |
| `qwen` | `dashscope`, `qwen-intl`, `qwen-code`, and others | `QWEN_OAUTH_TOKEN`, `DASHSCOPE_API_KEY` |
| `venice` | — | `VENICE_API_KEY` |
| `synthetic` | — | `SYNTHETIC_API_KEY` |
| `copilot` | `github-copilot` | GitHub token via `API_KEY` fallback |
## Provider-specific examples
### OpenRouter
OpenRouter is the default provider. Set your key and choose any model from the OpenRouter catalog:
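A hedged `config.toml` sketch; the key value and model ID are illustrative placeholders:

```toml
default_provider = "openrouter"
api_key = "sk-or-..."                        # or export OPENROUTER_API_KEY instead
default_model = "anthropic/claude-sonnet-4"  # any model ID from the OpenRouter catalog
```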
### Anthropic (direct)
Use `ANTHROPIC_API_KEY` for standard API access, or `ANTHROPIC_OAUTH_TOKEN` for OAuth-based access.
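A minimal sketch using only identifiers from the catalog above:

```toml
default_provider = "anthropic"
# Credentials resolve from ANTHROPIC_OAUTH_TOKEN first, then ANTHROPIC_API_KEY.
```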
### OpenAI (direct)
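A hedged sketch; the model ID is illustrative:

```toml
default_provider = "openai"
default_model = "gpt-4o"   # illustrative; credentials resolve from OPENAI_API_KEY
```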
### Ollama (local)
For local Ollama, leave `api_url` unset and run `ollama serve`; ZeroClaw connects to `http://localhost:11434` automatically. You can disable or enable Ollama's built-in reasoning with `[runtime].reasoning_enabled`.
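A sketch assuming the provider ID `ollama`; the model tag is illustrative, and `[runtime].reasoning_enabled` comes from this page:

```toml
default_provider = "ollama"
default_model = "llama3.1"   # illustrative local model tag
# api_url left unset -> http://localhost:11434 is used automatically

[runtime]
reasoning_enabled = false    # toggle Ollama's built-in reasoning
```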
### Ollama (remote / Ollama Cloud)
For a remote Ollama endpoint, set `api_url` and optionally `api_key`. The `:cloud` suffix on a model ID is normalized before the request.
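A sketch with a hypothetical remote host:

```toml
default_provider = "ollama"
api_url = "https://ollama.example.com"   # hypothetical remote endpoint
api_key = "..."                          # only if the endpoint requires one
default_model = "llama3.1:cloud"         # :cloud suffix is normalized before the request
```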
### llama.cpp server
Start `llama-server` and point ZeroClaw at it. Set `LLAMACPP_API_KEY` only when `llama-server` was started with `--api-key`. Use `zeroclaw models refresh --provider llamacpp` to populate the local model list.
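A sketch assuming the provider ID `llamacpp` and llama-server's default port 8080:

```toml
default_provider = "llamacpp"
api_url = "http://localhost:8080/v1"   # llama-server default; adjust if you pass --port
# Set LLAMACPP_API_KEY only if llama-server was started with --api-key.
```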
### vLLM server
vLLM's OpenAI-compatible server listens at `http://localhost:8000/v1` by default. Set `VLLM_API_KEY` only when your server requires authentication.
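A sketch assuming the provider ID `vllm`:

```toml
default_provider = "vllm"
api_url = "http://localhost:8000/v1"   # vLLM's default OpenAI-compatible endpoint
# Set VLLM_API_KEY only when the server requires authentication.
```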
### Osaurus (macOS edge runtime)
Osaurus is a unified AI edge runtime for macOS Apple Silicon that combines local MLX inference with cloud provider proxying through a single endpoint. The default endpoint is `http://localhost:1337/v1`, and the API key defaults to `"osaurus"` and is optional.
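A sketch assuming the provider ID `osaurus`; both values shown are the stated defaults and can be omitted:

```toml
default_provider = "osaurus"
api_url = "http://localhost:1337/v1"   # default endpoint
api_key = "osaurus"                    # default key; optional
```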
### Google Gemini
Credentials resolve from `GEMINI_API_KEY`, `GOOGLE_API_KEY`, or a cached Gemini CLI OAuth token at `~/.gemini/oauth_creds.json`. Thinking models such as `gemini-3-pro-preview` are supported; internal reasoning parts are automatically filtered from the response.
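A minimal sketch using identifiers from this page:

```toml
default_provider = "gemini"              # aliases: google, google-gemini
default_model = "gemini-3-pro-preview"   # thinking model; reasoning parts are filtered
```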
### AWS Bedrock
Bedrock uses AK/SK authentication instead of a single API key. Set `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` in your environment. Use `AWS_SESSION_TOKEN` for temporary STS credentials and `AWS_REGION` to override the default `us-east-1`.
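A sketch; all credential material stays in the environment rather than `config.toml`:

```toml
default_provider = "bedrock"   # alias: aws-bedrock
# Required in the environment: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
# Optional: AWS_SESSION_TOKEN (temporary STS), AWS_REGION (default: us-east-1)
```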
### MiniMax OAuth
Credentials resolve from:

- `MINIMAX_OAUTH_TOKEN` (preferred direct access token)
- `MINIMAX_OAUTH_REFRESH_TOKEN` (auto-refreshes on startup)
- `MINIMAX_API_KEY` (legacy static token)
### Qwen Code OAuth
Credentials resolve from `QWEN_OAUTH_TOKEN`, then `~/.qwen/oauth_creds.json`, then `QWEN_OAUTH_REFRESH_TOKEN`, and finally `DASHSCOPE_API_KEY` as a fallback.

## Custom endpoints
ZeroClaw supports arbitrary OpenAI-compatible and Anthropic-compatible base URLs as first-class provider IDs. Pass the full base URL as a URI-encoded suffix.

## Subscription auth (OpenAI Codex / Claude Code)
ZeroClaw supports OAuth-based subscription auth profiles stored encrypted at `~/.zeroclaw/auth-profiles.json`. Profile IDs use the format `<provider>:<profile_name>`, for example `openai-codex:work`.
### Auth profile format
A profile supports:

- An optional provider type or profile ID override (for example `"openai"`, `"openai-codex"`, or a custom profile name).
- An optional base URL for OpenAI-compatible endpoints when using a named provider profile.
- A flag that, when true, makes the runtime load OpenAI auth material from `OPENAI_API_KEY` or `~/.codex/auth.json`.

## Model routing with hints
You can keep call sites stable while swapping underlying models by defining named routes. Set `default_model = "hint:reasoning"` or pass `hint:fast` from any integration. To swap the underlying model later, edit only the `model` field in the route; nothing else in your config or integrations needs to change.
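A hedged sketch of such a route table. The `[models.routes.*]` table name is an assumption for illustration; only `default_model`, the `hint:` prefix, and the `model` field come from this page:

```toml
default_model = "hint:reasoning"   # call sites reference the hint, not a concrete model

# Assumed route-table shape; check the real section name in your ZeroClaw config schema.
[models.routes.reasoning]
model = "anthropic/claude-sonnet-4"   # illustrative model ID

[models.routes.fast]
model = "openai/gpt-4o-mini"          # illustrative model ID
```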