ZeroClaw supports custom API endpoints for both OpenAI-compatible and Anthropic-compatible providers. This means you can connect to any self-hosted inference server, corporate LLM proxy, or third-party gateway without writing a custom provider implementation — just configure the endpoint URL with the appropriate prefix.
## OpenAI-compatible endpoints (`custom:`)
Prefix your endpoint URL with `custom:` to route calls through any service that implements the OpenAI chat completions API format:
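For example, with a hypothetical gateway host (only the `custom:` prefix and the scheme are significant):

```text
custom:https://llm-gateway.internal.example.com/v1
```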
The URL must include the scheme (`http://` or `https://`). ZeroClaw appends the completions path automatically.

## Anthropic-compatible endpoints (`anthropic-custom:`)
Prefix with `anthropic-custom:` for services that implement the Anthropic messages API format — corporate proxies and self-hosted gateways that mirror the Anthropic API:
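Again with a hypothetical host:

```text
anthropic-custom:https://claude-proxy.corp.example.com
```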
## Configuration methods

Configure the endpoint either in the config file or through environment variables. For the config file, edit `~/.zeroclaw/config.toml`:
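A minimal config-file sketch; the key names (`default_provider`, `default_model`, `api_key`) are assumptions here, so confirm them against the configuration reference for your ZeroClaw version:

```toml
# Illustrative key names; verify against your version's config reference.
default_provider = "custom:https://llm-gateway.internal.example.com/v1"
default_model = "llama-3.1-70b-instruct"
api_key = "sk-your-gateway-key"
```

For the environment-variable method, the equivalent settings can be exported before launching ZeroClaw; the variable names below are likewise hypothetical:

```bash
# Hypothetical variable names; check the environment-variable reference.
export ZEROCLAW_PROVIDER="custom:https://llm-gateway.internal.example.com/v1"
export ZEROCLAW_API_KEY="sk-your-gateway-key"
```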
## Local inference servers

ZeroClaw ships first-class provider support for three local inference runtimes. These use dedicated provider IDs rather than the `custom:` prefix, and do not require an API key unless you configure the server to require one.
- llama.cpp — provider ID: `llamacpp` (alias: `llama.cpp`); default endpoint: `http://localhost:8080/v1`
- SGLang
- vLLM
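As a quick way to stand one of these up, llama.cpp's bundled `llama-server` serves an OpenAI-compatible API on port 8080 by default, which lines up with ZeroClaw's default `llamacpp` endpoint (the model path below is illustrative):

```bash
# Serve a local GGUF model over llama.cpp's OpenAI-compatible API.
# Port 8080 matches ZeroClaw's default llamacpp endpoint.
llama-server -m ./models/qwen2.5-7b-instruct-q4_k_m.gguf --port 8080
```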
## Examples
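Two common setups, sketched with illustrative hostnames, model names, and config keys (verify the exact key names against the configuration reference):

```toml
# Local llama.cpp server: dedicated provider ID, no API key needed
# unless the server is configured to require one.
default_provider = "llamacpp"
default_model = "qwen2.5-7b-instruct"
```

```toml
# Corporate Anthropic-compatible proxy behind the anthropic-custom: prefix.
default_provider = "anthropic-custom:https://claude-proxy.corp.example.com"
default_model = "claude-sonnet-4-5"
api_key = "corp-gateway-token"
```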
## Testing your configuration
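Independent of ZeroClaw, you can verify an OpenAI-compatible endpoint end to end with a minimal chat request (replace the host, key, and model with your own values):

```bash
curl -sS https://llm-gateway.internal.example.com/v1/chat/completions \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "llama-3.1-70b-instruct", "messages": [{"role": "user", "content": "ping"}]}'
```

If this returns a completion, the endpoint, key, and model name are all valid, so any remaining failure lies in the ZeroClaw configuration itself.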
## Troubleshooting
### Authentication errors

- Verify the API key is correct and has not expired.
- Confirm the endpoint URL includes the scheme (`http://` or `https://`).
- Ensure the endpoint is reachable from your network: `curl -I https://your-api.com`
### Model not found

- Confirm the model name matches the provider’s available models exactly.
- Check that the endpoint and model family match — some gateways only expose a subset of models.
- List available models from the configured endpoint and key (see the sketch after this list).
- If the gateway does not implement `/models`, send a minimal chat request and inspect the error text returned by the provider.
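A sketch of that model-listing request against an OpenAI-compatible gateway (hypothetical host; `/v1/models` is the standard OpenAI-style listing route):

```bash
curl -sS https://llm-gateway.internal.example.com/v1/models \
  -H "Authorization: Bearer $API_KEY"
```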
### Connection issues

- Test endpoint accessibility: `curl -I https://your-api.com`
- Check firewall and proxy settings.
- Confirm the provider service is operational.
## Implementing a fully custom provider

If you need behaviour that a URL prefix cannot provide — custom auth headers, a non-standard API shape, streaming quirks — implement the `Provider` trait directly. See Adding providers, channels, tools, and peripherals for the trait signature and a complete worked example.