This guide covers common issues you may encounter while using EchoVault and how to resolve them.

Installation Issues

EchoVault requires the sqlite-vec extension for vector operations. If you see errors about loading extensions:
Solution: Install pysqlite3-binary, which includes extension-loading support:
pip install pysqlite3-binary
The system will automatically fall back to the standard sqlite3 module if pysqlite3 is unavailable, but extension loading may fail depending on your Python installation.
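The fallback behaves roughly like the import pattern below (a sketch, not EchoVault's actual module layout):

```python
# Prefer pysqlite3 (ships with extension loading enabled); fall back to stdlib.
try:
    import pysqlite3 as sqlite3  # provided by pysqlite3-binary
except ImportError:
    import sqlite3

conn = sqlite3.connect(":memory:")
try:
    conn.enable_load_extension(True)  # required before loading sqlite-vec
    extensions_ok = True
except (AttributeError, sqlite3.NotSupportedError):
    # stdlib builds compiled without loadable-extension support land here
    extensions_ok = False
```

If `extensions_ok` ends up False, installing pysqlite3-binary is the fix.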
After installing with pip, the memory command may not be in your PATH.
Solution: Try one of these approaches:
# Option 1: Use python -m
python -m memory init

# Option 2: Add pip's bin directory to PATH
export PATH="$HOME/.local/bin:$PATH"

# Option 3: Reinstall with --user flag
pip install --user git+https://github.com/mraza007/echovault.git

Embedding Issues

This error occurs when your embedding provider’s dimension doesn’t match the dimension stored in the database.
DimensionMismatchError: Embedding dimension mismatch: database has 768, provider returned 384
Cause: You changed embedding models or providers after memories were already saved.
Solution: Rebuild the vector index with the new model:
memory reindex
Reindexing will re-embed all existing memories with your current provider. This may take several minutes if you have many memories stored.
The reindex command:
  • Detects the new embedding dimension
  • Drops and recreates the vector table
  • Re-embeds all memories with the current provider
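The mismatch check itself is simple; a hypothetical sketch (function name invented for illustration):

```python
def check_dimension(stored_dim, vector):
    """Raise if a new embedding's length disagrees with the dimension
    already recorded in the database (None means nothing stored yet)."""
    if stored_dim is not None and len(vector) != stored_dim:
        raise ValueError(
            f"Embedding dimension mismatch: database has {stored_dim}, "
            f"provider returned {len(vector)}"
        )
    return len(vector)
```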
You may see warnings like:
Warning: embedding failed (HTTPError). Memory saved without vector.
Cause: The embedding provider is unreachable or returned an error.
What happens: Your memory is still saved to the database and markdown file, but without a vector embedding. You can still search it using keyword search (FTS5).
Solutions:
For Ollama:
# Check if Ollama is running
ollama list

# Start Ollama if needed
ollama serve

# Pull the model if not installed
ollama pull nomic-embed-text
For OpenAI:
  • Verify your API key is correct in ~/.memory/config.yaml
  • Check your network connection
  • Ensure you have API credits remaining
After fixing the provider:
memory reindex
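The degrade-gracefully behavior described above can be sketched as follows (names hypothetical; `embed` stands for any provider callable):

```python
def save_memory(text, embed):
    """Try to embed; on failure, keep the memory but leave its vector
    empty so FTS5 keyword search still works."""
    try:
        vector = embed(text)
    except Exception as exc:
        print(f"Warning: embedding failed ({type(exc).__name__}). "
              "Memory saved without vector.")
        vector = None
    return {"text": text, "vector": vector}
```

Memories saved this way get a vector later when you run memory reindex.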
If embeddings are slow or timing out, Ollama may need to load the model into memory first.
Solution: Pre-warm the model:
ollama run nomic-embed-text "test"
This loads the model into memory. Subsequent embeddings will be much faster.
EchoVault’s auto semantic mode (default) checks if the Ollama model is already loaded before using vector search. If the model isn’t warm, it falls back to keyword search to avoid delays.
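A warm-check along these lines can be done against Ollama's /api/ps endpoint, which lists currently loaded models (a sketch of the idea, not EchoVault's exact implementation):

```python
import json
import urllib.request

def ollama_model_loaded(model, base_url="http://localhost:11434"):
    """Return True only if `model` appears in Ollama's loaded-model list."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/ps", timeout=2) as resp:
            data = json.load(resp)
    except OSError:
        return False  # Ollama not running or unreachable -> treat as cold
    return any(m.get("name", "").startswith(model)
               for m in data.get("models", []))
```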

Search Issues

If memory search returns no results:
Check 1: Verify memories exist
memory context --project
Check 2: Try keyword search
memory search "exact word from your memory"
Check 3: Check project filter
Memories are project-scoped. If you saved a memory in one project and search from another, you won’t find it unless you search across all projects with --all:
# Search current project only
memory search "auth"

# Search all projects
memory search "auth" --all
Check 4: FTS5 tokenization
FTS5 uses the Porter stemmer and may not match exact technical terms. Try searching for the root word or partial matches.
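You can see the stemming behavior directly with Python's sqlite3 module (assuming your Python build ships FTS5):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE notes USING fts5(body, tokenize='porter')")
conn.execute("INSERT INTO notes VALUES ('refactored the authentication module')")

# Porter stemming reduces 'refactoring' and 'refactored' to the same root,
# so this query matches even though the exact word never appears.
hits = conn.execute(
    "SELECT body FROM notes WHERE notes MATCH 'refactoring'"
).fetchall()
```

Identifiers containing punctuation (dotted paths, snake_case names) are split into separate tokens, so searching for the individual words is more reliable than the full identifier.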
If semantic search isn’t finding relevant results:
Check if vectors are enabled:
memory config
Look for the embedding section. If no provider is configured, you’re only using keyword search.
Enable vector search:
memory config init
Then edit ~/.memory/config.yaml to configure Ollama or OpenAI.
Rebuild vectors for existing memories:
memory reindex

MCP Integration Issues

After running memory setup <agent>, the agent should have access to the memory_save, memory_search, and memory_context tools.
Troubleshooting:
For Claude Code:
# Check global config
cat ~/.claude.json

# Or check project config
cat .mcp.json
Verify the echovault server entry exists.
For Cursor:
cat .cursor/mcp.json
For OpenCode:
# Global
cat ~/.config/opencode/opencode.json

# Project
cat opencode.json
For Codex:
cat .codex/config.toml
After updating MCP config, restart your agent to load the new configuration.
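For the JSON-based configs above, the check can be scripted (this assumes the conventional mcpServers key; the exact layout may vary by agent):

```python
import json

def has_echovault_server(config_path):
    """Return True if an MCP JSON config declares an 'echovault' server."""
    try:
        with open(config_path) as f:
            cfg = json.load(f)
    except (OSError, ValueError):
        return False  # missing file or invalid JSON
    return "echovault" in cfg.get("mcpServers", {})
```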
If the agent reports that the MCP server failed to start:
Check 1: Test the server manually
memory mcp
This starts the MCP server in stdio mode. If it fails, you’ll see the error directly.
Check 2: Verify Python environment
The MCP config uses uvx or npx to run the server. Ensure your Python environment is accessible:
which python
python -m memory mcp
Check 3: Review agent logs
Each agent stores MCP logs in a different location:
  • Claude Code: Check the developer console
  • Cursor: ~/Library/Application Support/Cursor/logs/
  • OpenCode: Terminal output where you launched it

Configuration Issues

After editing ~/.memory/config.yaml, changes should take effect immediately.
Verify your config:
memory config
This displays the effective configuration, with API keys redacted.
Common mistakes:
  • YAML indentation errors
  • Wrong key names (e.g., api-key instead of api_key)
  • Missing quotes around special characters
Validate YAML syntax:
python -c "import yaml; yaml.safe_load(open('$HOME/.memory/config.yaml'))"
The MEMORY_HOME environment variable sets the memory storage location.
Check current value:
memory config
Look for memory_home and memory_home_source.
Priority order:
  1. MEMORY_HOME env var (highest)
  2. Persisted home via memory config set-home
  3. Default ~/.memory (lowest)
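The priority order can be expressed as a small resolver (a sketch; argument name invented):

```python
import os
from pathlib import Path

def resolve_memory_home(persisted_home=None):
    """MEMORY_HOME env var beats a persisted set-home value,
    which beats the ~/.memory default."""
    env = os.environ.get("MEMORY_HOME")
    if env:
        return Path(env)
    if persisted_home:
        return Path(persisted_home)
    return Path.home() / ".memory"
```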
Set persistently:
memory config set-home /path/to/memory
Set per-session:
export MEMORY_HOME=/path/to/memory
memory context --project

File System Issues

EchoVault stores memories as Markdown files in ~/.memory/vault/.
List session files:
memory sessions
Check vault directory:
ls -la ~/.memory/vault/
Each project has its own subdirectory, with session files named YYYY-MM-DD-session.md.
If files are missing:
  • Check if MEMORY_HOME is set to a different location
  • Verify file permissions on the vault directory
  • Memories are still in the SQLite database (~/.memory/index.db) even if markdown files are deleted
EchoVault markdown files are designed to be Obsidian-compatible, but editing them directly can cause sync issues.
Best practice:
  • Use Obsidian for reading memories
  • Use memory save for creating new entries
  • The database is the source of truth for search
Editing markdown files directly won’t update the SQLite index or embeddings. Changes won’t appear in search results.
If you edit in Obsidian: You’ll need to manually update the database or re-save the memory via CLI.

Database Issues

SQLite database issues are rare but can occur.
Symptoms:
  • “Database is locked” errors
  • “Malformed database” errors
  • Search returns unexpected results
Solutions:
Check for active connections:
lsof ~/.memory/index.db
Backup and rebuild:
# Backup the database
cp ~/.memory/index.db ~/.memory/index.db.backup

# Export all memories (if possible)
memory context --all > memories-backup.txt

# Delete the database
rm ~/.memory/index.db

# Reinitialize
memory init

# Reindex will rebuild from markdown files
memory reindex
The markdown files in ~/.memory/vault/ are the authoritative source. If the database is lost, you can rebuild it by reindexing.
If keyword search is failing but vector search works:
Rebuild FTS index:
sqlite3 ~/.memory/index.db "INSERT INTO memories_fts(memories_fts) VALUES('rebuild');"
This rebuilds the FTS5 index from the memories table.
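If the sqlite3 CLI isn't installed, the same rebuild can be issued from Python (the table name memories_fts is taken from the command above):

```python
import sqlite3

def rebuild_fts(db_path, fts_table="memories_fts"):
    """Issue FTS5's special 'rebuild' command to regenerate the index.
    The table name is interpolated, so pass trusted identifiers only."""
    conn = sqlite3.connect(db_path)
    conn.execute(f"INSERT INTO {fts_table}({fts_table}) VALUES('rebuild')")
    conn.commit()
    conn.close()
```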

Performance Issues

If you have thousands of memories, search may become slower.
Optimizations:
Use project filters:
memory search "query" --project
Use source filters:
memory search "query" --source opencode
Adjust context limit:
memory context --project --limit 5
Disable semantic search temporarily: Edit ~/.memory/config.yaml:
context:
  semantic: never  # Forces keyword search only
  topup_recent: true
Reindexing re-embeds all memories. For large vaults, this can take time.
Progress tracking:
memory reindex
The command shows progress: Reindexing memories: 245/1000
Speed up embedding:
  • Use Ollama locally instead of OpenAI (no network latency)
  • Use a smaller embedding model (faster but less accurate)
  • Run reindex during off-hours
Reindexing is only needed when you change embedding providers or models. Normal operations don’t require it.

Getting Help

If you encounter issues not covered here:
  1. Check the GitHub Issues for known problems
  2. Run memory config to verify your setup
  3. Check agent logs for MCP-related errors
  4. Open a new issue with:
    • Your OS and Python version
    • Output of memory config
    • Full error message and stack trace
When reporting issues, use memory config to share your configuration. API keys are automatically redacted.
