
ZeroClaw resolves the active provider from default_provider in config.toml; environment variables can override it at runtime. List all built-in providers and their aliases with zeroclaw providers.
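For example, to inspect the catalog and force a provider for a single run (the agent invocation mirrors the one shown under subscription auth below):
zeroclaw providers
ZEROCLAW_PROVIDER=anthropic zeroclaw agent -m "hello"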

Credential resolution order

For every request the runtime picks credentials in this order:
  1. Explicit api_key value from config.toml or CLI flag
  2. Provider-specific environment variable (for example OPENROUTER_API_KEY)
  3. Generic fallbacks: ZEROCLAW_API_KEY, then API_KEY
When you configure reliability.fallback_providers, each fallback provider resolves its credentials independently — the primary provider’s key is not reused.
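A minimal sketch of such a fallback chain, assuming fallback_providers takes a list of provider IDs under a [reliability] table and that each listed provider finds its own key (for example GROQ_API_KEY) in the environment:
[reliability]
fallback_providers = ["groq", "openrouter"]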
Environment variable ZEROCLAW_PROVIDER always overrides default_provider when non-empty. The legacy PROVIDER variable is only applied when the config provider is unset or still set to openrouter. Use ZEROCLAW_PROVIDER whenever you need a clean runtime override.
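For example, with default_provider = "openrouter" in config.toml, either of these selects groq for one run; the legacy form works only because the config value is still the openrouter default:
ZEROCLAW_PROVIDER=groq zeroclaw agent -m "hello"
PROVIDER=groq zeroclaw agent -m "hello"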

Provider catalog

Canonical ID | Aliases | Provider-specific env var
openrouter | (none) | OPENROUTER_API_KEY
anthropic | (none) | ANTHROPIC_OAUTH_TOKEN, ANTHROPIC_API_KEY
openai | (none) | OPENAI_API_KEY
gemini | google, google-gemini | GEMINI_API_KEY, GOOGLE_API_KEY
xai | grok | XAI_API_KEY
deepseek | (none) | DEEPSEEK_API_KEY
mistral | (none) | MISTRAL_API_KEY
groq | (none) | GROQ_API_KEY
together | together-ai | TOGETHER_API_KEY
fireworks | fireworks-ai | FIREWORKS_API_KEY
perplexity | (none) | PERPLEXITY_API_KEY
cohere | (none) | COHERE_API_KEY
novita | (none) | NOVITA_API_KEY
nvidia | nvidia-nim, build.nvidia.com | NVIDIA_API_KEY
vercel | vercel-ai | VERCEL_API_KEY
cloudflare | cloudflare-ai | CLOUDFLARE_API_KEY
bedrock | aws-bedrock | AWS_ACCESS_KEY_ID + AWS_SECRET_ACCESS_KEY
moonshot | kimi | MOONSHOT_API_KEY
kimi-code | kimi_coding, kimi_for_coding | KIMI_CODE_API_KEY, MOONSHOT_API_KEY
opencode | opencode-zen | OPENCODE_API_KEY
opencode-go | (none) | OPENCODE_GO_API_KEY
zai | z.ai | ZAI_API_KEY
glm | zhipu | GLM_API_KEY
minimax | minimax-intl, minimax-oauth, and others | MINIMAX_OAUTH_TOKEN, MINIMAX_API_KEY
qianfan | baidu | QIANFAN_API_KEY
doubao | volcengine, ark, doubao-cn | ARK_API_KEY, DOUBAO_API_KEY
qwen | dashscope, qwen-intl, qwen-code, and others | QWEN_OAUTH_TOKEN, DASHSCOPE_API_KEY
venice | (none) | VENICE_API_KEY
synthetic | (none) | SYNTHETIC_API_KEY
copilot | github-copilot | GitHub token via API_KEY fallback
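Aliases are interchangeable with their canonical IDs; assuming an alias is accepted anywhere a provider ID is, this selects the xai provider:
default_provider = "grok"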

Provider-specific examples

OpenRouter is the default provider. Set your key and choose any model from the OpenRouter catalog:
api_key = "sk-or-..."
default_provider = "openrouter"
default_model = "anthropic/claude-sonnet-4-6"

For Anthropic, set an API key and a Claude model ID:
api_key = "sk-ant-..."
default_provider = "anthropic"
default_model = "claude-sonnet-4-5-20250929"
Anthropic also accepts ANTHROPIC_OAUTH_TOKEN for OAuth-based access.

For OpenAI:
api_key = "sk-..."
default_provider = "openai"
default_model = "gpt-4o"
For local Ollama, leave api_url unset and run ollama serve. ZeroClaw connects to http://localhost:11434 automatically:
default_provider = "ollama"
default_model = "llama3.2"
You can disable or enable Ollama’s built-in reasoning with [runtime].reasoning_enabled:
[runtime]
reasoning_enabled = false   # sends think: false to Ollama
For a remote Ollama endpoint, set api_url and optionally api_key. The :cloud suffix on a model ID is normalized before the request:
default_provider = "ollama"
default_model = "qwen3:cloud"
api_url = "https://ollama.com"
api_key = "ollama_api_key_here"
Using the :cloud suffix while api_url is local or unset causes a startup validation error. :cloud model IDs are only valid with a remote endpoint.
Start llama-server and point ZeroClaw at it:
llama-server -hf ggml-org/gpt-oss-20b-GGUF --jinja -c 133000 --host 127.0.0.1 --port 8033
default_provider = "llamacpp"
api_url = "http://127.0.0.1:8033/v1"
default_model = "ggml-org/gpt-oss-20b-GGUF"
Set LLAMACPP_API_KEY only when llama-server was started with --api-key. Use zeroclaw models refresh --provider llamacpp to populate the local model list.
For vLLM, serve the model with the vllm CLI and select the provider:
vllm serve meta-llama/Llama-3.1-8B-Instruct
default_provider = "vllm"
default_model = "meta-llama/Llama-3.1-8B-Instruct"
The default endpoint is http://localhost:8000/v1. Set VLLM_API_KEY only when your server requires authentication.
Osaurus is a unified AI edge runtime for macOS on Apple Silicon that combines local MLX inference with cloud provider proxying behind a single endpoint:
default_provider = "osaurus"
default_model = "qwen3-30b-a3b-8bit"
The default endpoint is http://localhost:1337/v1. The API key defaults to "osaurus" and is optional.
api_key = "AIza..."
default_provider = "gemini"
default_model = "gemini-2.0-flash"
Auth can come from GEMINI_API_KEY, GOOGLE_API_KEY, or a cached Gemini CLI OAuth token at ~/.gemini/oauth_creds.json. Thinking models such as gemini-3-pro-preview are supported — internal reasoning parts are automatically filtered from the response.
Bedrock uses AWS access-key/secret-key (AK/SK) authentication instead of a single API key:
default_provider = "bedrock"
default_model = "anthropic.claude-sonnet-4-5-20250929-v1:0"
Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in your environment. Use AWS_SESSION_TOKEN for temporary STS credentials and AWS_REGION to override the default us-east-1.
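For example, in the shell (values illustrative):
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-west-2"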
default_provider = "minimax-oauth"
api_key = "minimax-oauth"
Then provide credentials via environment variables:
  • MINIMAX_OAUTH_TOKEN (preferred direct access token)
  • MINIMAX_OAUTH_REFRESH_TOKEN (auto-refreshes on startup)
  • MINIMAX_API_KEY (legacy static token)
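For example (token value illustrative):
export MINIMAX_OAUTH_TOKEN="eyJ..."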
default_provider = "qwen-code"
api_key = "qwen-oauth"
Credentials are resolved from QWEN_OAUTH_TOKEN, then ~/.qwen/oauth_creds.json, then QWEN_OAUTH_REFRESH_TOKEN, and finally DASHSCOPE_API_KEY as a fallback.

Custom endpoints

ZeroClaw supports arbitrary OpenAI-compatible and Anthropic-compatible base URLs as first-class provider IDs. Pass the full base URL as a URI-encoded suffix:
default_provider = "custom:https://your-api.example.com"
api_key = "your-key"
default_model = "your-model-id"

Subscription auth (OpenAI Codex / Claude Code)

ZeroClaw supports OAuth-based subscription auth profiles stored encrypted at ~/.zeroclaw/auth-profiles.json. Profile IDs use the format <provider>:<profile_name> — for example openai-codex:work.
  1. Log in with device-code flow (headless servers):
zeroclaw auth login --provider openai-codex --device-code
  2. Or use browser callback flow with paste fallback:
zeroclaw auth login --provider openai-codex --profile default
zeroclaw auth paste-redirect --provider openai-codex --profile default
  3. For Anthropic, paste a setup token:
zeroclaw auth paste-token --provider anthropic --profile default --auth-kind authorization
  4. Run the agent with a specific profile:
zeroclaw agent --provider openai-codex --auth-profile openai-codex:work -m "hello"
Anthropic updated their Authentication and Credential Use terms on 2026-02-19. Claude Code OAuth tokens (Free, Pro, Max) are intended exclusively for Claude Code and Claude.ai — using them in other tools may violate the Consumer Terms of Service. Check the official terms before using subscription auth.

Auth profile format

model_providers.<name>.name (string)
Optional provider type or profile ID override (for example "openai", "openai-codex", or a custom profile name).

model_providers.<name>.base_url (string)
Optional base URL for OpenAI-compatible endpoints when using a named provider profile.

model_providers.<name>.requires_openai_auth (boolean, default: false)
When true, the runtime loads OpenAI auth material from OPENAI_API_KEY or ~/.codex/auth.json.
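A sketch of how these fields might sit together in config.toml (the table name work and the base URL are illustrative, not required values):
[model_providers.work]
name = "openai"
base_url = "https://your-api.example.com/v1"
requires_openai_auth = true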

Model routing with hints

You can keep call sites stable while swapping underlying models by defining named routes. Set default_model = "hint:reasoning" or pass hint:fast from any integration:
[[model_routes]]
hint = "reasoning"
provider = "openrouter"
model = "anthropic/claude-opus-4-20250514"

[[model_routes]]
hint = "fast"
provider = "groq"
model = "llama-3.3-70b-versatile"
When a provider deprecates a model ID, update only the model field in the route — nothing else in your config or integrations needs to change.
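To route through a hint, point the default model at it; a sketch, assuming default_model sits at the top level as in the provider examples above:
default_model = "hint:reasoning"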
