ZeroClaw includes a full-stack memory search engine built without external dependencies — no Pinecone, no Elasticsearch, no LangChain. The agent automatically recalls, saves, and manages memory through dedicated tools. All memory settings live under `[memory]` in `~/.zeroclaw/config.toml`.
## Memory system architecture
| Layer | Implementation |
|---|---|
| Vector DB | Embeddings stored as BLOB in SQLite, cosine similarity search |
| Keyword search | FTS5 virtual tables with BM25 scoring |
| Hybrid merge | Custom weighted merge function (vector.rs) |
| Embeddings | EmbeddingProvider trait — OpenAI, custom URL, or noop |
| Chunking | Line-based markdown chunker with heading preservation |
| Caching | SQLite embedding_cache table with LRU eviction |
| Safe reindex | Rebuild FTS5 and re-embed missing vectors atomically |
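The hybrid merge layer above can be sketched in a few lines. This is an illustration of weighted score blending, not the actual `vector.rs` implementation; the 0.6/0.4 weights and the assumption that the BM25 score is already normalized to `0.0..1.0` are illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def hybrid_score(vector_score, keyword_score,
                 vector_weight=0.6, keyword_weight=0.4):
    """Weighted merge of a vector similarity score and a
    normalized keyword (BM25) score for one memory entry."""
    return vector_weight * vector_score + keyword_weight * keyword_score

# An entry found by both searches gets a blended score;
# entries scoring below the minimum threshold are then dropped.
score = hybrid_score(cosine([1.0, 0.0], [0.6, 0.8]), 0.5)
print(round(score, 3))  # → 0.56
```

Because the weights sum to 1.0, the merged score stays in the same `0.0..1.0` range as its inputs, which is what makes a single minimum-score cutoff meaningful.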
## Core memory parameters
- **Storage backend.** Accepted values: `"sqlite"`, `"lucid"`, `"postgres"`, `"markdown"`, `"none"`.
- **Persist user-stated conversation inputs to memory.** Assistant outputs are excluded to prevent old model-authored summaries from being treated as facts.
- **Embedding provider** for vector search. Accepted values: `"none"`, `"openai"`, `"custom:https://..."`.
- **Embedding model ID.** Accepts a literal model name or a `hint:<name>` route (see embedding routing).
- **Embedding dimensions.** Expected vector size for the selected embedding model. Must match the model output dimension.
- **Vector weight.** Weight for vector similarity in hybrid search. Must be between `0.0` and `1.0`. Pair with `keyword_weight`.
- **Keyword weight.** Weight for keyword BM25 scoring in hybrid search. Must be between `0.0` and `1.0`.
- **Minimum score.** Minimum hybrid score for a memory entry to be included in context. Memories scoring below this threshold are dropped to prevent irrelevant context from bleeding into conversations.
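Putting the parameters above together, a `[memory]` block might look like the following. The key names and values here are illustrative assumptions, not confirmed configuration keys (except `keyword_weight` and `embedding_model`, which appear in this page):

```toml
[memory]
backend = "sqlite"              # "sqlite" | "lucid" | "postgres" | "markdown" | "none"
embedding_provider = "openai"   # "none" | "openai" | "custom:https://..."
embedding_model = "text-embedding-3-small"
embedding_dimensions = 1536     # must match the model's output dimension
vector_weight = 0.6             # hybrid-search weights, each in 0.0..1.0
keyword_weight = 0.4
min_score = 0.25                # entries below this hybrid score are dropped
```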
## Backends
- SQLite (default)
- PostgreSQL
- Lucid
- Markdown
- None (disable memory)
SQLite is the default backend. It combines FTS5 full-text search and vector similarity into a single local file, with no external services required.

To enable semantic vector search, configure an embedding provider (see below).

Optional SQLite tuning:

- Maximum seconds to wait when opening the SQLite database file. Useful when the file may be locked by another process. Leave unset for no timeout (the default).
- Maximum embedding cache entries before LRU eviction.
- Maximum tokens per chunk for document splitting.
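Under stated assumptions about the key names (they are illustrative, not confirmed), the tuning options above might be expressed as:

```toml
[memory]
backend = "sqlite"

# Illustrative key names:
db_open_timeout_secs = 5       # wait up to 5 s for a locked database file
embedding_cache_max = 10000    # embedding cache entries before LRU eviction
chunk_max_tokens = 400         # maximum tokens per document chunk
```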
## Embedding providers
Controls the embedding backend used for vector search.
"none"— disables vector embeddings. Only keyword (BM25) search is used."openai"— uses OpenAI’s embeddings API. Requiresapi_keyto be set."custom:https://..."— any OpenAI-compatible embeddings endpoint.
## Custom embedding endpoint
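A hedged sketch of a custom-endpoint configuration; the URL, model name, and key names other than `embedding_model` are placeholders:

```toml
[memory]
embedding_provider = "custom:https://embeddings.internal.example/v1"
embedding_model = "nomic-embed-text"   # whatever the endpoint serves
embedding_dimensions = 768             # must match that model's output
```

Any endpoint that speaks the OpenAI embeddings wire format should work this way, including locally hosted inference servers.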
## Embedding routing
You can route embedding calls to different providers by hint, the same way model routing works. This lets you keep `embedding_model` stable while swapping the underlying provider or model:
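As a sketch, the stable side of this arrangement could look like the snippet below. Only the `hint:<name>` form is documented on this page; how a hint resolves to a concrete provider is an assumption modeled on model routing:

```toml
[memory]
# Stays stable even when the route's target changes:
embedding_model = "hint:fast"

# The "fast" hint would resolve elsewhere in config.toml,
# e.g. to a local OpenAI-compatible endpoint for development
# and to a hosted model in production.
```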
## Memory hygiene and snapshots
ZeroClaw can automatically archive and purge old memory entries, and optionally export core memories to a Markdown snapshot file.

- Run periodic memory hygiene passes (archiving and retention cleanup).
- Archive daily and session files older than this many days.
- Purge archived files older than this many days.
- For the SQLite backend, prune conversation rows older than this many days.
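A hedged sketch of the retention settings above; every key name and value here is an illustrative assumption:

```toml
[memory]
hygiene_enabled = true            # periodic archive/retention passes
archive_after_days = 7            # archive daily and session files
purge_archived_after_days = 30    # purge archived files
prune_conversations_days = 90     # SQLite: prune old conversation rows
```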
- Enable periodic export of core memories to `MEMORY_SNAPSHOT.md`.
- Run a snapshot export during hygiene passes.
- Automatically hydrate from `MEMORY_SNAPSHOT.md` when `brain.db` is missing.

## Response caching
ZeroClaw can cache LLM responses to avoid paying for duplicate prompts on repeated inputs.

- Enable LLM response caching.
- TTL in minutes for cached responses.
- Maximum number of cached responses before LRU eviction.
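As a sketch (key names and the config section are assumptions, not confirmed):

```toml
[memory]
cache_responses = true     # enable LLM response caching
cache_ttl_minutes = 60     # cached responses expire after this TTL
cache_max_entries = 500    # LRU eviction beyond this count
```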