

ZeroClaw routes every model call through a single Provider trait, so swapping backends requires only a config change — no code changes. Built-in providers cover the major hosted APIs and several local inference servers. Custom endpoints let you target any compatible gateway without writing Rust.

Provider trait

The trait lives at src/providers/traits.rs. The only required method is chat_with_system. Everything else has default implementations that delegate to it.
// Required method — all other methods have defaults
async fn chat_with_system(
    &self,
    system_prompt: Option<&str>,
    message: &str,
    model: &str,
    temperature: f64,
) -> Result<String>;

// Default implementations provided:
// simple_chat()          — calls chat_with_system with no system prompt
// chat_with_history()    — calls chat_with_system with history serialized
// capabilities()         — returns no native tool-calling by default
// streaming methods      — return empty/error streams by default
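To make the delegation pattern concrete, here is a minimal sketch of a custom provider: implement only chat_with_system and get simple_chat for free. The trait and method bodies below are illustrative (the real definitions live in src/providers/traits.rs), the EchoProvider backend is a toy, and the hand-rolled block_on exists only so the example runs without an async runtime:

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, Waker};

// Illustrative error type; ZeroClaw's real Result alias lives in the crate.
type Result<T> = std::result::Result<T, String>;

// Sketch of the Provider trait: one required method, defaults delegate to it.
trait Provider {
    async fn chat_with_system(
        &self,
        system_prompt: Option<&str>,
        message: &str,
        model: &str,
        temperature: f64,
    ) -> Result<String>;

    // Default mirroring simple_chat(): delegates with no system prompt.
    async fn simple_chat(&self, message: &str, model: &str, temperature: f64) -> Result<String> {
        self.chat_with_system(None, message, model, temperature).await
    }
}

// A toy backend that echoes its input instead of calling a hosted API.
struct EchoProvider;

impl Provider for EchoProvider {
    async fn chat_with_system(
        &self,
        system_prompt: Option<&str>,
        message: &str,
        model: &str,
        _temperature: f64,
    ) -> Result<String> {
        Ok(format!(
            "[{model}] system={} user={message}",
            system_prompt.unwrap_or("<none>")
        ))
    }
}

// Minimal executor: enough to drive a future that never actually awaits.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let mut cx = Context::from_waker(Waker::noop());
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    let reply = block_on(EchoProvider.simple_chat("hello", "toy-model", 0.7)).unwrap();
    println!("{reply}"); // [toy-model] system=<none> user=hello
}
```

Because simple_chat has a default body, EchoProvider implements only the one required method yet still exposes the full chat surface.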

Built-in providers

Use zeroclaw providers to list every provider available in your build, including aliases.

anthropic

Claude model family via the Anthropic API

openai

GPT models via the OpenAI API

gemini

Gemini models via Google AI Studio / Vertex

openrouter

Multi-model routing gateway

azure_openai

OpenAI models via Azure OpenAI Service

bedrock

Amazon Bedrock model access

ollama

Local models via Ollama

llamacpp

Local inference via llama-server (alias: llama.cpp)

vllm

High-throughput local serving via vLLM

sglang

Structured generation via SGLang

copilot

GitHub Copilot backend

telnyx

Telnyx AI inference API

glm

Zhipu AI GLM model family

openai_codex

OpenAI Codex (legacy)

compatible

Generic OpenAI-compatible endpoint

Switching providers in config

Set default_provider in ~/.zeroclaw/config.toml:
default_provider = "anthropic"
default_model    = "claude-sonnet-4-7"
api_key          = "sk-ant-..."
You can override the provider per-session with the --provider flag:
zeroclaw agent --provider openai -m "summarise this file"

Custom OpenAI-compatible endpoints

Prefix your base URL with custom: to route calls through any service that implements the OpenAI chat completions API:
default_provider = "custom:https://your-api.com"
api_key          = "your-api-key"
default_model    = "your-model-name"
The URL must include the scheme (http:// or https://). ZeroClaw appends /v1/chat/completions automatically.

Examples

default_provider = "custom:http://localhost:8080/v1"
api_key          = "your-api-key-if-required"
default_model    = "local-model"

Custom Anthropic-compatible endpoints

Prefix with anthropic-custom: for services that implement the Anthropic messages API — corporate proxies and self-hosted Anthropic-format gateways:
default_provider = "anthropic-custom:https://llm-proxy.corp.example.com"
api_key          = "internal-token"
default_model    = "claude-sonnet-4-6"
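Both prefix forms amount to splitting the default_provider string into a provider kind and a base URL. A small hypothetical helper (not ZeroClaw's actual code) sketches that parse:

```rust
// Split a default_provider value into (provider kind, optional base URL).
// Hypothetical helper illustrating the custom:/anthropic-custom: prefixes.
fn parse_provider(value: &str) -> (&str, Option<&str>) {
    if let Some(url) = value.strip_prefix("anthropic-custom:") {
        ("anthropic-custom", Some(url))
    } else if let Some(url) = value.strip_prefix("custom:") {
        ("custom", Some(url))
    } else {
        // A built-in provider id like "anthropic" or "ollama".
        (value, None)
    }
}

fn main() {
    assert_eq!(
        parse_provider("custom:https://your-api.com"),
        ("custom", Some("https://your-api.com"))
    );
    assert_eq!(
        parse_provider("anthropic-custom:https://llm-proxy.corp.example.com"),
        ("anthropic-custom", Some("https://llm-proxy.corp.example.com"))
    );
    assert_eq!(parse_provider("ollama"), ("ollama", None));
    println!("ok");
}
```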

Environment variable auth

For both custom: and anthropic-custom: providers, supply the key via environment variable instead of the config file:
export API_KEY="your-api-key"
# or:
export ZEROCLAW_API_KEY="your-api-key"
zeroclaw agent
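The fallback chain can be sketched as follows. The resolve_api_key helper is hypothetical, and the ordering shown (config file first, then ZEROCLAW_API_KEY, then API_KEY) is an assumption for illustration, not ZeroClaw's documented precedence:

```rust
use std::env;

// Hypothetical key resolution: a key in config.toml wins, then the env vars.
// The ZEROCLAW_API_KEY-before-API_KEY ordering is an assumption here.
fn resolve_api_key(config_key: Option<String>) -> Option<String> {
    config_key
        .or_else(|| env::var("ZEROCLAW_API_KEY").ok())
        .or_else(|| env::var("API_KEY").ok())
}

fn main() {
    // A key present in the config file short-circuits the env lookup.
    let key = resolve_api_key(Some("sk-from-config".into()));
    println!("{}", key.unwrap()); // sk-from-config
}
```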

Local inference servers

ZeroClaw ships first-class support for three local inference runtimes. None of them require an API key unless you explicitly configure the server to require one.
Provider ID: llamacpp (alias: llama.cpp)
Default endpoint: http://localhost:8080/v1

Start llama-server:

llama-server -hf ggml-org/gpt-oss-20b-GGUF --jinja -c 133000 \
  --host 127.0.0.1 --port 8033

Point ZeroClaw at it in ~/.zeroclaw/config.toml:

default_provider     = "llamacpp"
api_url              = "http://127.0.0.1:8033/v1"
default_model        = "ggml-org/gpt-oss-20b-GGUF"
default_temperature  = 0.7

Refresh the model list and start a session:

zeroclaw models refresh --provider llamacpp
zeroclaw agent -m "hello"

zeroclaw providers command

zeroclaw providers          # list all registered providers and their aliases
zeroclaw models refresh     # fetch available models from the active provider
zeroclaw models refresh --provider <name>   # fetch from a specific provider

Implementing a custom provider

See Adding providers, channels, tools, and peripherals for the full trait signature and registration pattern.
