
ZeroClaw supports custom API endpoints for both OpenAI-compatible and Anthropic-compatible providers. This means you can connect to any self-hosted inference server, corporate LLM proxy, or third-party gateway without writing a custom provider implementation — just configure the endpoint URL with the appropriate prefix.

OpenAI-compatible endpoints (custom:)

Prefix your endpoint URL with custom: to route calls through any service that implements the OpenAI chat completions API format:
default_provider = "custom:https://your-api.com"
api_key          = "your-api-key"
default_model    = "your-model-name"
The URL must include the scheme (http:// or https://). ZeroClaw appends the completions path automatically.
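
Before wiring an endpoint into ZeroClaw, you can confirm it actually speaks the chat completions format with a hand-rolled request. A minimal sketch, assuming the standard /v1/chat/completions path and bearer-token auth (your gateway's path and auth scheme may differ):
# send a one-message chat request; a JSON response with a "choices"
# array indicates the endpoint is OpenAI-compatible
curl -sS https://your-api.com/v1/chat/completions \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "your-model-name", "messages": [{"role": "user", "content": "ping"}]}'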

Anthropic-compatible endpoints (anthropic-custom:)

Prefix with anthropic-custom: for services that implement the Anthropic messages API format — corporate proxies and self-hosted gateways that mirror the Anthropic API:
default_provider = "anthropic-custom:https://your-api.com"
api_key          = "your-api-key"
default_model    = "your-model-name"
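
An Anthropic-style gateway can be sanity-checked the same way with a minimal messages request. A sketch assuming the standard /v1/messages path and the x-api-key and anthropic-version headers; a corporate proxy may expect different auth:
# the messages API requires model, max_tokens, and messages
curl -sS https://your-api.com/v1/messages \
  -H "x-api-key: $API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{"model": "your-model-name", "max_tokens": 64, "messages": [{"role": "user", "content": "ping"}]}'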

Configuration

Edit ~/.zeroclaw/config.toml:
api_key          = "your-api-key"
default_provider = "anthropic-custom:https://api.example.com"
default_model    = "claude-sonnet-4-6"

Local inference servers

ZeroClaw ships first-class provider support for three local inference runtimes. These use dedicated provider IDs rather than the custom: prefix, and do not require an API key unless you configure the server to require one.
  • Provider ID: llamacpp (alias: llama.cpp)
  • Default endpoint: http://localhost:8080/v1
Start the server:
llama-server -hf ggml-org/gpt-oss-20b-GGUF --jinja -c 133000 \
  --host 127.0.0.1 --port 8033
Configure ZeroClaw:
default_provider    = "llamacpp"
api_url             = "http://127.0.0.1:8033/v1"
default_model       = "ggml-org/gpt-oss-20b-GGUF"
default_temperature = 0.7
Validate:
zeroclaw models refresh --provider llamacpp
zeroclaw agent -m "hello"
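
If the refresh or test message fails, confirm the server itself is answering before debugging the ZeroClaw side. llama-server exposes an OpenAI-compatible surface, so a model listing request against the host and port used above is a quick check:
# should return a JSON list containing the loaded model
curl -sS http://127.0.0.1:8033/v1/models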

Examples

default_provider = "custom:http://localhost:8080/v1"
api_key          = "your-api-key-if-required"
default_model    = "local-model"

Testing your configuration

# Interactive session
zeroclaw agent

# Single message test
zeroclaw agent -m "test message"

Troubleshooting

  • Verify the API key is correct and has not expired.
  • Confirm the endpoint URL includes the scheme (http:// or https://).
  • Ensure the endpoint is reachable from your network: curl -I https://your-api.com
  • Confirm the model name matches the provider’s available models exactly.
  • Check that the endpoint and model family match — some gateways only expose a subset of models.
  • List available models from the configured endpoint and key:
curl -sS https://your-api.com/models \
  -H "Authorization: Bearer $API_KEY"
  • If the gateway does not implement /models, send a minimal chat request (like the verification example under "OpenAI-compatible endpoints" above) and inspect the error text returned by the provider.
  • Check firewall and proxy settings.
  • Confirm the provider service is operational.
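
When a request fails without a useful message, seeing the HTTP status code alongside the body helps distinguish auth failures (401/403) from routing problems (404) and server errors (5xx). A sketch using only standard curl options, with placeholder endpoint and key:
# -w appends the status code after the response body
curl -sS https://your-api.com/models \
  -H "Authorization: Bearer $API_KEY" \
  -w "\nHTTP status: %{http_code}\n"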

Implementing a fully custom provider

If you need behaviour that a URL prefix cannot provide — custom auth headers, a non-standard API shape, streaming quirks — implement the Provider trait directly. See Adding providers, channels, tools, and peripherals for the trait signature and a complete worked example.
