Core Privacy Principles
Local Storage
All memories are stored on your machine by default. No cloud database, no remote storage.
You Own Your Data
Memories are stored as human-readable Markdown files. No proprietary formats, no lock-in.
Secret Redaction
3-layer redaction pipeline prevents API keys, passwords, and credentials from being stored.
Optional Cloud
Cloud embeddings are opt-in. Use Ollama for fully local operation with zero network calls.
Data Storage Location
By default, EchoVault stores all data in `~/.memory/` on your local machine.
Directory Structure
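A typical layout looks like this (the project name and date below are invented; exact filenames vary):

```text
~/.memory/
├── config.yaml                  # embedding and retrieval settings
├── index.db                     # SQLite search index
├── .memoryignore                # custom redaction patterns
└── vault/
    └── my-project/
        └── 2024-06-01-session.md
```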
What’s Stored Where
Markdown files (vault/)
Location:
`~/.memory/vault/<project>/<date>-session.md`
Contents:
- Memory title, category, tags
- What/why/impact descriptions
- Full decision details
- Timestamps and metadata
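A saved memory might look roughly like this (the frontmatter field names are assumptions based on the list above, and the values are invented):

```markdown
---
title: Switched to SQLite WAL mode
category: decision
tags: [database, performance]
created: 2024-06-01T14:32:00Z
---

## What
Enabled WAL journaling on index.db.

## Why
Readers were blocking on long writes.

## Impact
Search stays responsive while memories are being saved.
```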
SQLite database (index.db)
Location:
`~/.memory/index.db`
Contents:
- FTS5 full-text search index
- Vector embeddings (if configured)
- Memory metadata (IDs, timestamps, projects)
- Pointers to markdown files
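The FTS5 index is what powers full-text search. A minimal sketch of how an FTS5 table behaves (this is not EchoVault's actual schema, which also stores embeddings and metadata):

```python
import sqlite3

# Toy FTS5 index; EchoVault's real index.db schema is richer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE memories USING fts5(title, body)")
conn.execute(
    "INSERT INTO memories VALUES (?, ?)",
    ("Switched to SQLite WAL mode", "Readers no longer block on writes."),
)
# MATCH runs a ranked full-text query; matching is case-insensitive.
rows = conn.execute(
    "SELECT title FROM memories WHERE memories MATCH 'wal' ORDER BY rank"
).fetchall()
print(rows)  # -> [('Switched to SQLite WAL mode',)]
```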
The index can be rebuilt from the Markdown files at any time with `memory reindex`.
Configuration (config.yaml)
Location:
`~/.memory/config.yaml`
Contents:
- Embedding provider settings
- API keys (for OpenAI, if configured)
- Context retrieval preferences
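An illustrative `config.yaml` (the key names below are assumptions for illustration, not confirmed EchoVault options):

```yaml
embedding:
  provider: ollama          # or "openai" for cloud embeddings
  model: nomic-embed-text   # illustrative model name
context:
  recent_limit: 10          # how many recent memories to load at session start
```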
When you run `memory config`, API keys are redacted in the output.
Network Calls and Cloud Services
EchoVault can operate completely offline or with optional cloud embeddings.
Fully Local Operation (Default)
If you use Ollama for embeddings (or no embeddings at all), EchoVault makes zero network calls.
- Memories are saved to local disk
- Embeddings are generated by Ollama locally
- Search queries run against local SQLite database
- No data leaves your machine
To verify Ollama is running locally, check http://localhost:11434 in your browser. You should see “Ollama is running”.
Optional Cloud Embeddings (OpenAI)
If you configure OpenAI for embeddings, EchoVault will make API calls to OpenAI’s servers.
What is sent:
- Text to embed: `<title> <what> <why> <impact> <tags>`
- This is sent when saving memories or running `memory reindex`
What is NOT sent:
- Full memory details (only the summary fields are sent)
- Markdown files
- Database contents
- Your configuration
How OpenAI handles it:
- API data is not used to train models (as of their current policy)
- Data is retained for 30 days for abuse monitoring
- Review OpenAI’s privacy policy for details
On-Premises Embeddings (Custom)
You can use your own embedding service via any OpenAI-compatible API.
Secret Redaction System
EchoVault includes a 3-layer redaction pipeline to prevent secrets from being stored.
Layer 1: Explicit Redaction Tags
You can manually mark sensitive content:
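For example (the key value is invented):

```text
Configured API key <redacted>sk_live_abc123</redacted>
```

is stored as:

```text
Configured API key [REDACTED]
```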
The content between <redacted> tags is replaced before storage.
Layer 2: Automatic Pattern Detection
EchoVault automatically detects and redacts known secret formats:
| Pattern | Example | Redacted |
|---|---|---|
| Stripe keys | sk_live_abc123 | [REDACTED] |
| GitHub tokens | ghp_abc123 | [REDACTED] |
| AWS access keys | AKIAIOSFODNN7EXAMPLE | [REDACTED] |
| Slack tokens | xoxb-abc-123 | [REDACTED] |
| JWT tokens | eyJhbGc... | [REDACTED] |
| Private keys | -----BEGIN PRIVATE KEY----- | [REDACTED] |
| Password fields | password: secret123 | [REDACTED] |
| API key fields | api_key: abc123 | [REDACTED] |
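These rules can be thought of as a list of regular expressions applied before storage. An illustrative sketch (the real patterns live in src/memory/redaction.py and are more thorough):

```python
import re

# Illustrative redaction patterns; not the exact ones EchoVault ships.
PATTERNS = [
    re.compile(r"sk_live_[A-Za-z0-9]+"),          # Stripe live keys
    re.compile(r"ghp_[A-Za-z0-9]+"),              # GitHub tokens
    re.compile(r"(password|api_key)\s*:\s*\S+"),  # key/value secrets
]

def redact(text: str) -> str:
    """Replace every match of every pattern with [REDACTED]."""
    for pattern in PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(redact("Rotated key sk_live_abc123 and set password: hunter2"))
# -> Rotated key [REDACTED] and set [REDACTED]
```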
The full pattern list lives in `src/memory/redaction.py`.
Layer 3: Custom Patterns (.memoryignore)
Add project-specific patterns to `~/.memory/.memoryignore`:
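A sketch of what the file might contain (the one-pattern-per-line, regex-style format is an assumption, and the patterns are invented):

```text
# Internal hostnames
internal-[a-z]+\.corp\.example\.com
# Project-specific token prefix
ACME_TOKEN_[A-Za-z0-9]+
```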
Lines starting with # are comments.
Redaction happens before storage. Once a memory is saved, redacted content is permanently replaced with [REDACTED]. The original secret is never written to disk.
Testing Redaction
You can preview what would be redacted before saving a memory.
What Agents Can Access
EchoVault exposes three MCP tools to agents:
memory_context
Purpose: Load recent memories at session start
What the agent can access:
- Memory IDs, titles, categories, tags
- Project names and creation timestamps
- Summary fields (what/why/impact)
What the agent cannot access:
- Full memory details (unless they call memory_search with the ID)
- Configuration file
- Database internals
memory_search
Purpose: Find relevant memories during work
What the agent can access:
- Search results matching the query
- Memory pointers (IDs, titles, summaries)
- Relevance scores
What the agent cannot access:
- Memories from other projects (unless they explicitly search across all projects)
- Raw database queries
memory_save
Purpose: Persist decisions and learnings
What the agent can do:
- Save new memories to the current project
- Provide title, what, why, impact, tags, category, details
What the agent cannot do:
- Save to arbitrary file paths
- Bypass redaction
- Delete existing memories (unless you’ve granted file system access)
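Over MCP, a memory_save invocation is an ordinary tools/call request. A sketch with invented argument values:

```json
{
  "method": "tools/call",
  "params": {
    "name": "memory_save",
    "arguments": {
      "title": "Switched to SQLite WAL mode",
      "what": "Enabled WAL journaling on index.db",
      "why": "Readers were blocking on long writes",
      "impact": "Search stays responsive during saves",
      "tags": ["database", "performance"],
      "category": "decision"
    }
  }
}
```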
Syncing Across Machines
EchoVault doesn’t include built-in sync. To share memories across machines:
Option 1: Cloud Storage
Point MEMORY_HOME to a synced folder:
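For example, with a Dropbox folder (the path is invented):

```shell
# Relocate the memory store into a synced folder.
export MEMORY_HOME="$HOME/Dropbox/memory"
mkdir -p "$MEMORY_HOME"
```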
- Your memories are now stored in Dropbox/iCloud/Google Drive
- They’re encrypted in transit but readable by the cloud provider
- Consider this if you have sensitive codebases
Option 2: Git Repository
Version control your memory vault:
- Memories are stored in Git history (even if deleted later)
- Use a private repository
- Consider encrypting the repo with git-crypt or similar
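An initialization sketch (the block below uses a temporary directory as a stand-in for `~/.memory` and a throwaway Git identity, so it is safe to run anywhere):

```shell
set -e
cd "$(mktemp -d)"    # stand-in for ~/.memory
mkdir -p vault/my-project
echo "# Decision: use WAL mode" > vault/my-project/2024-06-01-session.md
git init -q
git add vault/
git -c user.name=memory -c user.email=memory@example.com \
    commit -qm "Sync memory vault"
git log --oneline
```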
Option 3: Manual Sync
Use rsync or scp:
- Full control over when and where data is synced
- No third-party involvement
If you sync across machines, avoid running multiple agents simultaneously. SQLite doesn’t handle concurrent writes well.
Compliance and Data Residency
GDPR (European Union)
EchoVault is compliant-friendly:
- Data minimization: Only stores what you explicitly save
- Right to erasure: Delete memories with `memory delete <id>` or `rm -rf ~/.memory`
- Data portability: Memories are stored as Markdown (open format)
- Local processing: No third-party data processors (unless you use OpenAI)
CCPA (California)
Similar to GDPR:
- You control all data
- No sale of personal information (EchoVault doesn’t collect any)
- Delete your data anytime
SOC 2, ISO 27001, etc.
If you work at a company with compliance requirements:
- Use Ollama for fully local operation
- Store MEMORY_HOME on encrypted volumes
- Exclude ~/.memory/ from backup systems if required
- Use .memoryignore to enforce redaction policies
Security Best Practices
Use Ollama
For fully local operation with zero network calls. Ideal for sensitive work.
Encrypt Your Disk
Use FileVault (macOS), BitLocker (Windows), or LUKS (Linux) to encrypt `~/.memory/`.
Review .memoryignore
Add patterns for project-specific secrets (internal IPs, custom tokens, etc.).
Don't Commit Memories
Add `~/.memory/` to `.gitignore` if you’re syncing via Git. Use private repos only.
File Permissions
EchoVault creates files with default permissions. You can restrict access with standard permissions, e.g. `chmod -R go-rwx ~/.memory/`.
API Key Storage
If you use OpenAI, your API key is stored in `~/.memory/config.yaml`.
Protection:
- The file is only readable by your user (default Linux/macOS permissions)
- API keys are redacted in `memory config` output
- Keys are never logged or transmitted anywhere except to the configured API
- Use environment variables for extra security:
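For example (the value is a placeholder):

```shell
# Keep the key out of config.yaml by exporting it in your shell profile.
export OPENAI_API_KEY="sk-your-key-here"   # placeholder value
```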
Then reference it in `config.yaml`. (Note: environment-variable references are not currently supported, but this is a common pattern.)
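A sketch of the pattern being described (hypothetical syntax; EchoVault does not support this today):

```yaml
embedding:
  api_key: ${OPENAI_API_KEY}   # hypothetical env-var reference
```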
Deleting All Data
To completely remove EchoVault and all memories, delete the following:
- All markdown files
- The SQLite database
- Configuration file
- MCP server configs
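A removal sketch (the MCP config location depends on your client, so that step is shown as a comment):

```shell
# Remove the memory store: vault/, index.db, and config.yaml.
rm -rf ~/.memory/
# Then delete the EchoVault entry from your MCP client's server config
# (file location depends on the client).
```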
Questions About Privacy?
If you have specific privacy concerns:
- Review the source code (MIT licensed, fully transparent)
- Check src/memory/redaction.py for the redaction implementation
- Check src/memory/core.py for the data storage logic
- Open an issue on GitHub for clarification
EchoVault is open source. You can audit exactly what data is stored and where it goes.