Overview
The `memory reindex` command regenerates embeddings for all memories using the embedding provider currently configured in `config.yaml`. This is necessary after changing embedding settings, or when adding embeddings to an existing vault.
Syntax
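The basic invocation; settings come from `config.yaml` rather than arguments, as described above:

```shell
memory reindex
```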
When to Use
After changing embedding provider
When you switch providers, the existing vectors no longer match what the new provider produces, so every memory must be re-embedded.

After changing embedding model

When you switch models within the same provider, the vector dimensions or semantics change, so a reindex is required.

Adding embeddings to existing vault

When you initially used keyword-only search and now want semantic search, reindexing generates vectors for memories that have none.

After embedding provider upgrade

When your provider releases a new model version, reindex to pick up the improved embeddings.

Examples
Basic reindex
When no memories exist
With progress updates
For large vaults, you'll see a running progress counter as each memory is processed.

How It Works
- Count memories: Queries the database for the total memory count
- Load config: Reads the current embedding provider and model from config.yaml
- Generate embeddings: For each memory:
  - Extracts text (title + what + why + impact + details)
  - Sends the text to the embedding provider API
  - Stores the returned vector in the database
- Show progress: Updates the counter for each memory processed
- Report results: Shows total count, model name, and vector dimensions
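The steps above can be sketched in Python. This is a minimal sketch: the table layout, column names, and the `embed` stand-in are assumptions for illustration, not the tool's actual code.

```python
import sqlite3

def extract_text(memory: dict) -> str:
    # Concatenate the fields that feed the embedding (title + what + why + impact + details).
    parts = [memory.get(k, "") for k in ("title", "what", "why", "impact", "details")]
    return " ".join(p for p in parts if p)

def embed(text: str) -> list[float]:
    # Stand-in for the provider API call (Ollama/OpenAI); returns a fixed-size vector.
    return [float(len(text))] * 4  # hypothetical 4-dimensional vector

def reindex(db: sqlite3.Connection) -> int:
    rows = db.execute(
        "SELECT id, title, what, why, impact, details FROM memories"
    ).fetchall()
    total = len(rows)
    for i, row in enumerate(rows, 1):
        memory = dict(zip(("id", "title", "what", "why", "impact", "details"), row))
        vector = embed(extract_text(memory))
        # INSERT OR REPLACE is what makes rerunning safe: vectors are replaced, never duplicated.
        db.execute(
            "INSERT OR REPLACE INTO embeddings (memory_id, vector) VALUES (?, ?)",
            (memory["id"], repr(vector)),
        )
        print(f"Reindexed {i}/{total}", end="\r")
    db.commit()
    return total
```

The replace-not-append write is why the command is idempotent: running it twice leaves one vector per memory.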
Performance
Speed
- Ollama (local): 10-50 memories/second (depends on hardware)
- OpenAI: 5-20 memories/second (depends on rate limits)
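OpenAI-class throughput is bounded by rate limits, so a retry loop with exponential backoff keeps a reindex moving instead of failing. A sketch, assuming a hypothetical provider call that raises `RateLimitError`:

```python
import time

class RateLimitError(Exception):
    """Hypothetical error a provider client raises when throttled."""

def embed_with_backoff(embed, text, max_retries=5, base_delay=1.0):
    # Retry the provider call, doubling the wait after each rate-limit error.
    for attempt in range(max_retries):
        try:
            return embed(text)
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))
```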
Time estimates
- 50 memories: ~5-15 seconds
- 200 memories: ~20-60 seconds
- 1000 memories: ~2-5 minutes
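These estimates are just count ÷ throughput, since memories are embedded one at a time:

```python
def estimate_seconds(memory_count: int, per_second: float) -> float:
    # Rough wall-clock estimate for a sequential reindex.
    return memory_count / per_second

print(estimate_seconds(50, 10))   # → 5.0   (low end of the ~5-15 s range)
print(estimate_seconds(1000, 5))  # → 200.0 seconds, i.e. roughly 3.3 minutes
```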
Rate limiting
If you hit rate limits, reduce the request rate or simply wait and rerun the command; reindexing is idempotent, so rerunning is safe.

Supported Providers
Ollama (local)
- nomic-embed-text: 768 dimensions, fast, good quality
- mxbai-embed-large: 1024 dimensions, slower, better quality
OpenAI
- text-embedding-3-small: 1536 dimensions, fast, cheaper
- text-embedding-3-large: 3072 dimensions, slower, more expensive
- text-embedding-ada-002: 1536 dimensions (legacy)
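Choosing one of these providers is a `config.yaml` edit. A sketch with assumed key names (check your actual config schema, e.g. via `memory config init`):

```yaml
# Hypothetical config.yaml fragment -- the key names here are assumptions
embedding:
  provider: ollama          # or: openai
  model: nomic-embed-text   # 768 dimensions
  # api_key: ...            # required for hosted providers such as OpenAI
```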
Error Handling
Provider not configured

If config.yaml does not define an embedding provider, there is nothing to generate vectors with; configure one first (`memory config init` generates a template).

Provider unavailable

If the provider cannot be reached (for example, the local Ollama server is not running), embedding requests fail; start the server or check connectivity, then rerun.

Invalid API key

For hosted providers such as OpenAI, an invalid or expired API key causes requests to be rejected; update the key in config.yaml and rerun.
What Gets Updated
Database changes
- embeddings table: All vectors are replaced
- model metadata: Updated with new model name and dimensions
- memories table: Unchanged (content is preserved)
Files unchanged
- Memory markdown files in vault/ are NOT modified
- Only the database vectors are regenerated
Use Cases
Migrate from keyword-only to semantic search
Switch from Ollama to OpenAI
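A provider migration can be sketched as a short session (hypothetical; exact config keys and flags may differ):

```shell
# 1. Point config.yaml at the new provider (edit by hand, or regenerate with memory config init)
# 2. Regenerate every vector with the new provider
memory reindex
# 3. Spot-check semantic search against the new embeddings
memory search "a query you know should match"
```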
Upgrade to better model
Fix corrupted embeddings
If embeddings are corrupted or inconsistent, rerunning the reindex replaces every vector with a freshly generated one.

Verification
Check embedding count
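The count check can be done with plain SQL against the vector database; a sketch assuming an `embeddings` table with one row per memory (table and column names are assumptions):

```python
import sqlite3

def embedding_count(db: sqlite3.Connection) -> int:
    # After a successful reindex this should equal the number of memories.
    return db.execute("SELECT COUNT(*) FROM embeddings").fetchone()[0]
```

Compare the result against your memory count; a mismatch suggests the reindex did not complete.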
Check model metadata
Test semantic search
Related Commands
- `memory config` - Configure embedding provider
- `memory config init` - Generate config template
- `memory search` - Test semantic search after reindexing
- `memory context --semantic` - Force semantic context retrieval
Reindexing is safe and idempotent. You can run it multiple times without risk. It only updates the vector database, never modifies your memory files.