ZeroClaw is the runtime operating system for agentic workflows — a single Rust binary that abstracts AI models, communication channels, tools, memory, and execution so you can build agents once and run them anywhere. It starts in under 10ms, uses less than 5MB of RAM, and runs on everything from a $10 board to a cloud VM.

Documentation Index
Fetch the complete documentation index at: https://mintlify.com/openagen/zeroclaw/llms.txt
Use this file to discover all available pages before exploring further.
Quickstart
Set up ZeroClaw and send your first agent message in minutes
Installation
Install on Linux, macOS, Windows, or ARM devices
Configuration
Configure providers, channels, memory, and runtime options
CLI Reference
Complete reference for all ZeroClaw commands and flags
Why ZeroClaw
ZeroClaw is built for teams and individuals who need an autonomous AI assistant that is lean, secure, and fully under their control.

Lean by default
Less than 5MB of RAM and sub-10ms cold starts. Runs on a $10 ARM board with the same binary as your production server.
Secure by design
Gateway pairing, strict sandbox, filesystem scoping, encrypted secrets, and deny-by-default channel allowlists.
Fully swappable
Every subsystem is a trait — swap AI providers, messaging channels, memory backends, and tools with a single config line.
No lock-in
OpenAI-compatible and Anthropic-compatible provider support. Run any model locally with Ollama, llama.cpp, or vLLM.
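As a minimal sketch of what "every subsystem is a trait" can mean in practice: the trait and type names below (`Provider`, `OpenAiCompatible`, `OllamaLocal`, `provider_from_config`) are illustrative assumptions, not ZeroClaw's actual API.

```rust
// Hypothetical illustration of trait-based swappability; these names are
// assumptions for the sketch, not ZeroClaw's real types.
trait Provider {
    fn complete(&self, prompt: &str) -> String;
}

struct OpenAiCompatible;
struct OllamaLocal;

impl Provider for OpenAiCompatible {
    fn complete(&self, prompt: &str) -> String {
        format!("[openai-compatible] {prompt}")
    }
}

impl Provider for OllamaLocal {
    fn complete(&self, prompt: &str) -> String {
        format!("[ollama] {prompt}")
    }
}

// A single config value selects which backend is constructed at startup;
// the rest of the runtime only ever sees the trait object.
fn provider_from_config(name: &str) -> Box<dyn Provider> {
    match name {
        "ollama" => Box::new(OllamaLocal),
        _ => Box::new(OpenAiCompatible),
    }
}

fn main() {
    let provider = provider_from_config("ollama");
    println!("{}", provider.complete("hello"));
}
```

Because callers hold only a `Box<dyn Provider>`, changing the backend is a one-line config change rather than a code change, which is the design property the cards above describe.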
Get started in three steps
Install ZeroClaw
Install via Homebrew or the one-line script, or download a pre-built binary for your platform.
Explore by topic
Deployment
Run ZeroClaw as a daemon, background service, or in Docker
Channels
Connect Telegram, Discord, Slack, WhatsApp, and 70+ more
Providers
OpenAI, Anthropic, Ollama, llama.cpp, vLLM, and custom endpoints
Memory
SQLite hybrid search, PostgreSQL, Markdown, or no-op backends
Security
Pairing, sandboxing, allowlists, encrypted secrets
Hardware
Run on ARM boards, Android, STM32, and Raspberry Pi