## Installation Issues
### sqlite-vec extension fails to load
EchoVault requires the sqlite-vec extension for vector operations. If you see errors about loading extensions:

**Solution:** Install `pysqlite3-binary` (`pip install pysqlite3-binary`), which includes extension support. The system will automatically fall back to the standard `sqlite3` module if `pysqlite3` is unavailable, but extension loading may fail depending on your Python installation.
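The fallback described above can be sketched as a guarded import. Here `pysqlite3` is the module provided by the `pysqlite3-binary` package, and checking for `enable_load_extension` is a generic way to see whether your SQLite build supports loadable extensions:

```python
# A sketch of the fallback: prefer pysqlite3 (extension support bundled),
# fall back to the stdlib sqlite3 module otherwise.
try:
    import pysqlite3 as sqlite3  # from the pysqlite3-binary package
except ImportError:
    import sqlite3  # stdlib build; extension loading may be compiled out

conn = sqlite3.connect(":memory:")
# Builds without extension support lack the enable_load_extension method.
supports_extensions = hasattr(conn, "enable_load_extension")
print("extension loading available:", supports_extensions)
conn.close()
```

If this prints `False`, installing `pysqlite3-binary` is the most direct fix.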
### Command 'memory' not found after installation
After installing with pip, the `memory` command may not be in your PATH.

**Solution:** Ensure the directory where pip installs console scripts is on your PATH, or reinstall with `pipx`, which manages PATH entries for you.

## Embedding Issues
### DimensionMismatchError when saving memories
This error occurs when your embedding provider's dimension doesn't match the dimension stored in the database.

**Cause:** You changed embedding models or providers after memories were already saved.

**Solution:** Rebuild the vector index with the new model. The reindex command:

- Detects the new embedding dimension
- Drops and recreates the vector table
- Re-embeds all memories with the current provider
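To see why the error is raised, here is an illustrative version of the check; the function name and message format are hypothetical, not EchoVault's actual code:

```python
# Illustrative dimension check: a vector table created for one dimension
# cannot accept embeddings of another.
def check_dimension(stored_dim, embedding):
    if len(embedding) != stored_dim:
        raise ValueError(
            f"DimensionMismatchError: index expects {stored_dim} dimensions, "
            f"provider returned {len(embedding)}"
        )

check_dimension(768, [0.0] * 768)  # ok: dimensions agree

try:
    # e.g. after switching from a 768-dim model to a 1536-dim model
    check_dimension(768, [0.0] * 1536)
except ValueError as err:
    print(err)
```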
### Embedding failed - memory saved without vector
You may see a warning that the embedding failed and the memory was saved without a vector.

**Cause:** The embedding provider is unreachable or returned an error.

**What happens:** Your memory is still saved to the database and markdown file, but without a vector embedding. You can still find it using keyword search (FTS5).

**Solutions:**

For Ollama: verify that the Ollama server is running and that the embedding model is available.

For OpenAI:

- Verify your API key is correct in `~/.memory/config.yaml`
- Check your network connection
- Ensure you have API credits remaining
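For the Ollama case, a quick reachability probe can rule out a downed server before you dig into configuration. The default Ollama port is 11434; adjust the URL if your server differs:

```python
# Probe whether a local Ollama server is reachable before blaming the config.
import urllib.request
import urllib.error

def ollama_reachable(url="http://localhost:11434", timeout=2.0):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print("Ollama reachable:", ollama_reachable())
```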
### Ollama timeout or slow embedding
If embeddings are slow or timing out, Ollama may need to load the model into memory first.

**Solution:** Pre-warm the model by sending it a small embedding request. This loads the model into memory, and subsequent embeddings will be much faster.

EchoVault's `auto` semantic mode (the default) checks whether the Ollama model is already loaded before using vector search. If the model isn't warm, it falls back to keyword search to avoid delays.

## Search Issues
### Search returns no results
If `memory search` returns no results:

**Check 1: Verify memories exist.** Confirm that memories have actually been saved for the project you are searching.

**Check 2: Try keyword search.** If semantic search finds nothing, a plain keyword query may still match.

**Check 3: Check the project filter.** Memories are project-scoped. If you saved a memory in one project and search from another, you won't find it unless you omit `--project`.

**Check 4: FTS5 tokenization.** FTS5 uses the Porter stemmer and may not match exact technical terms. Try searching for the root word or partial matches.
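The Porter-stemming behavior from Check 4 is easy to see with a throwaway FTS5 table (this assumes your Python's SQLite was compiled with FTS5, which most builds are):

```python
# Porter stemming in FTS5: "configure" and "configuring" share the stem
# "configur", so the query term matches the stored text.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE notes USING fts5(body, tokenize='porter')")
conn.execute("INSERT INTO notes VALUES ('configuring the embedding provider')")

hits = conn.execute(
    "SELECT body FROM notes WHERE notes MATCH 'configure'"
).fetchall()
print(hits)
conn.close()
```

Exact technical tokens (hyphenated names, version strings) may tokenize unexpectedly, which is why searching the root word often helps.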
### Vector search not working
If semantic search isn't finding relevant results:

**Check if vectors are enabled:** Run `memory config` and look for the `embedding` section. If no provider is configured, you're only using keyword search.

**Enable vector search:** Edit `~/.memory/config.yaml` to configure Ollama or OpenAI.

**Rebuild vectors for existing memories** with the reindex command.

## MCP Integration Issues
### Agent doesn't see memory tools
After running `memory setup <agent>`, the agent should have access to the `memory_save`, `memory_search`, and `memory_context` tools.

**Troubleshooting:**

- For Claude Code: verify that the `echovault` server entry exists in the MCP configuration.
- For Cursor, OpenCode, and Codex: check the corresponding MCP configuration file for the same entry.

After updating the MCP config, restart your agent to load the new configuration.
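For reference, MCP server entries generally follow the shape below. The server name `echovault` comes from the setup described above, but the `args` value shown here is an assumption; copy the exact entry that `memory setup <agent>` writes rather than this sketch:

```json
{
  "mcpServers": {
    "echovault": {
      "command": "uvx",
      "args": ["echovault"]
    }
  }
}
```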
### MCP server fails to start
If the agent reports that the MCP server failed to start:

**Check 1: Test the server manually.** Run the server directly in stdio mode; if it fails, you'll see the error directly.

**Check 2: Verify the Python environment.** The MCP config uses `uvx` or `npx` to run the server. Ensure your Python environment is accessible.

**Check 3: Review agent logs.** Each agent stores MCP logs in a different location:

- Claude Code: check the developer console
- Cursor: `~/Library/Application Support/Cursor/logs/`
- OpenCode: terminal output where you launched it
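A quick way to run Check 2 from Python: verify that the launchers your MCP config references actually resolve on PATH.

```python
# Print where (or whether) each launcher resolves on PATH.
import shutil

for tool in ("uvx", "npx", "python3"):
    path = shutil.which(tool)
    print(f"{tool}: {path or 'NOT FOUND on PATH'}")
```

If the launcher named in your MCP config prints `NOT FOUND`, the server can never start from the agent's environment.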
## Configuration Issues
### Config changes not taking effect
After editing `~/.memory/config.yaml`, changes should take effect immediately.

**Verify your config:** Run `memory config`, which displays the effective configuration with API keys redacted.

**Common mistakes:**

- YAML indentation errors
- Wrong key names (e.g., `api-key` instead of `api_key`)
- Missing quotes around special characters
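One of the indentation mistakes above deserves a callout: tabs are strictly illegal in YAML indentation. A small scan (a hypothetical helper, not part of EchoVault) catches them before you puzzle over parser errors:

```python
# YAML forbids tab characters in indentation; find offending lines.
def find_tab_indents(text):
    """Return 1-based line numbers whose leading whitespace contains a tab."""
    bad = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        indent = line[: len(line) - len(line.lstrip())]
        if "\t" in indent:
            bad.append(lineno)
    return bad

sample = "embedding:\n\tprovider: ollama\n"  # tab-indented: invalid YAML
print(find_tab_indents(sample))  # -> [2]
```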
### MEMORY_HOME environment variable not working
The `MEMORY_HOME` environment variable sets the memory storage location.

**Check the current value:** Run `memory config` and look for `memory_home` and `memory_home_source`.

**Priority order:**

- `MEMORY_HOME` env var (highest)
- Persisted home via `memory config set-home`
- Default `~/.memory` (lowest)

**Set per-session:** export `MEMORY_HOME` in your shell before running `memory` commands.
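The priority order can be expressed as a small resolver. This is a sketch: `resolve_memory_home` and its `persisted` argument are illustrative names, not EchoVault internals.

```python
# Sketch of the documented resolution order for the storage location.
import os
from pathlib import Path

def resolve_memory_home(persisted=None):
    env = os.environ.get("MEMORY_HOME")
    if env:                              # 1. environment variable (highest)
        return Path(env)
    if persisted:                        # 2. value saved via `config set-home`
        return Path(persisted)
    return Path.home() / ".memory"       # 3. default (lowest)

os.environ["MEMORY_HOME"] = "/tmp/example-vault"
print(resolve_memory_home(persisted="/srv/memory"))  # env var wins
```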
## File System Issues
### Markdown files missing or corrupted
EchoVault stores memories as Markdown files in `~/.memory/vault/`. Each project has its own subdirectory containing session files named `YYYY-MM-DD-session.md`.

**If files are missing:**

- Check whether `MEMORY_HOME` is set to a different location
- Verify file permissions on the vault directory
- Memories are still in the SQLite database (`~/.memory/index.db`) even if markdown files are deleted
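To make the layout concrete, this snippet recreates it in a temporary directory; real vaults live under `~/.memory/vault/`, and `my-project` is a placeholder name:

```python
# Build a miniature vault matching the documented layout, then list it.
import tempfile
from pathlib import Path

vault = Path(tempfile.mkdtemp()) / "vault"
project = vault / "my-project"          # one subdirectory per project
project.mkdir(parents=True)
(project / "2024-06-01-session.md").write_text("# session notes\n")

for md in sorted(vault.rglob("*.md")):
    print(md.relative_to(vault))
```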
### Can't edit memories in Obsidian
EchoVault markdown files are designed to be Obsidian-compatible, but editing them directly can cause sync issues.

**Best practice:**

- Use Obsidian for reading memories
- Use `memory save` for creating new entries
- The database is the source of truth for search
## Database Issues
### Database locked or corrupted
SQLite database issues are rare but can occur.

**Symptoms:**

- "Database is locked" errors
- "Malformed database" errors
- Search returns unexpected results

**Backup and rebuild:** The markdown files in `~/.memory/vault/` are the authoritative source. If the database is lost or corrupted, back up the old file and rebuild the index by reindexing.
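Before rebuilding, you can copy the database with SQLite's online backup API, which is safe even while connections are open. An in-memory pair is shown here; in practice you would pass file paths such as `~/.memory/index.db` and a `.bak` target:

```python
# Copy a live SQLite database with the online backup API.
import sqlite3

src = sqlite3.connect(":memory:")       # in practice: your index.db
src.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, text TEXT)")
src.execute("INSERT INTO memories (text) VALUES ('remember this')")

dest = sqlite3.connect(":memory:")      # in practice: index.db.bak
src.backup(dest)                        # copies schema and rows

print(dest.execute("SELECT text FROM memories").fetchone())
```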
### FTS5 search not working
If keyword search is failing but vector search works, rebuild the FTS index. The rebuild repopulates the FTS5 index from the memories table.
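Under the hood, an FTS5 rebuild is a single special INSERT. This self-contained sketch uses illustrative table names (`memories`, `memories_fts`), not EchoVault's actual schema:

```python
# FTS5 external-content table: the 'rebuild' command repopulates the
# full-text index from the content table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, text TEXT)")
conn.execute(
    "CREATE VIRTUAL TABLE memories_fts USING fts5("
    "text, content='memories', content_rowid='id')"
)
conn.execute("INSERT INTO memories (text) VALUES ('vector search tips')")

# The index is stale until rebuilt from the content table.
conn.execute("INSERT INTO memories_fts(memories_fts) VALUES ('rebuild')")

hits = conn.execute(
    "SELECT rowid FROM memories_fts WHERE memories_fts MATCH 'vector'"
).fetchall()
print(hits)  # the inserted row is now searchable
```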
## Performance Issues
### Slow search with large memory vault
If you have thousands of memories, search may become slower.

**Optimizations:**

- Use project filters to narrow the search scope
- Use source filters
- Reduce the context limit
- Disable semantic search temporarily by editing `~/.memory/config.yaml`
### Reindex taking too long
Reindexing re-embeds all memories. For large vaults, this can take time.

**Progress tracking:** The command reports progress as it runs, e.g. `Reindexing memories: 245/1000`.

**Speed up embedding:**

- Use Ollama locally instead of OpenAI (no network latency)
- Use a smaller embedding model (faster but less accurate)
- Run reindex during off-hours

Reindexing is only needed when you change embedding providers or models. Normal operations don't require it.
## Getting Help
If you encounter issues not covered here:

- Check the GitHub Issues for known problems
- Run `memory config` to verify your setup
- Check agent logs for MCP-related errors
- Open a new issue with:
  - Your OS and Python version
  - The output of `memory config`
  - The full error message and stack trace

When reporting issues, use `memory config` to share your configuration. API keys are automatically redacted.