Engram’s Git Sync feature allows you to share memories through git repositories using compressed chunks with a manifest index. This design avoids merge conflicts, keeps files small, and works seamlessly for teams.

Quick Start

# Export new memories as a compressed chunk
# (automatically filters by current directory name as project)
engram sync

# Commit to git
git add .engram/ && git commit -m "sync engram memories"

# On another machine / clone: import new chunks
engram sync --import

# Check sync status
engram sync --status

Architecture

Directory Structure

.engram/
├── manifest.json          ← small index (git diffs this)
├── chunks/
│   ├── a3f8c1d2.jsonl.gz ← chunk by Alan (compressed, ~2KB)
│   ├── b7d2e4f1.jsonl.gz ← chunk by Juan
│   └── c9f1a2b3.jsonl.gz ← chunk by Alan (next day)
└── engram.db              ← gitignored (local working DB)

How It Works

1. Export creates a new chunk

Each engram sync creates a new chunk; old chunks are never modified. The chunk contains:
  • Sessions created after the last chunk
  • Observations created after the last chunk
  • Prompts created after the last chunk
2. Chunk is compressed and hashed

  • Chunk content is serialized to JSON
  • Compressed with gzip (typically ~2KB for 8 sessions + 10 observations)
  • SHA-256 hash is computed from content
  • First 8 characters of hash become the chunk ID
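The steps above can be sketched in Go. This is an illustrative example, not Engram's actual source; the function name `chunkID` is an assumption:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// chunkID returns the first 8 hex characters of the SHA-256 hash
// of the serialized chunk content. (Sketch; the real function name
// and signature are assumptions.)
func chunkID(content []byte) string {
	sum := sha256.Sum256(content)
	return hex.EncodeToString(sum[:])[:8]
}

func main() {
	fmt.Println(chunkID([]byte(`{"sessions":[]}`)))
}
```

Because the ID is derived from content, identical chunks produced independently get the same ID, which is what makes hash-based deduplication possible.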
3. Manifest is updated

The manifest.json file is updated with a new entry:
{
  "version": 1,
  "chunks": [
    {
      "id": "a3f8c1d2",
      "created_by": "alan",
      "created_at": "2026-03-03T10:15:30Z",
      "sessions": 8,
      "memories": 10,
      "prompts": 5
    }
  ]
}
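Appending a manifest entry might look like the following sketch, using structs that mirror the JSON above (the `Manifest` type and `appendEntry` helper are assumptions for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ChunkEntry mirrors the manifest JSON shown above.
type ChunkEntry struct {
	ID        string `json:"id"`
	CreatedBy string `json:"created_by"`
	CreatedAt string `json:"created_at"`
	Sessions  int    `json:"sessions"`
	Memories  int    `json:"memories"`
	Prompts   int    `json:"prompts"`
}

// Manifest is the top-level structure of manifest.json.
type Manifest struct {
	Version int          `json:"version"`
	Chunks  []ChunkEntry `json:"chunks"`
}

// appendEntry adds a new chunk entry; existing entries are never touched,
// which keeps the manifest append-only.
func appendEntry(m *Manifest, e ChunkEntry) {
	m.Chunks = append(m.Chunks, e)
}

func main() {
	m := Manifest{Version: 1}
	appendEntry(&m, ChunkEntry{ID: "a3f8c1d2", CreatedBy: "alan",
		CreatedAt: "2026-03-03T10:15:30Z", Sessions: 8, Memories: 10, Prompts: 5})
	out, _ := json.MarshalIndent(m, "", "  ")
	fmt.Println(string(out))
}
```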
4. Chunk is tracked locally

The local DB stores a sync_chunks table with the chunk IDs that have been imported or exported. This prevents re-importing the same data if sync --import runs multiple times.
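The skip-if-seen behavior can be sketched with an in-memory set standing in for the sync_chunks table (the real implementation uses the local DB; `importChunk` is a hypothetical name):

```go
package main

import "fmt"

// importedSet stands in for the sync_chunks table: the set of chunk IDs
// already imported into the local DB.
type importedSet map[string]bool

// importChunk imports a chunk only if its ID hasn't been seen before,
// making repeated `sync --import` runs idempotent.
func importChunk(imported importedSet, id string) bool {
	if imported[id] {
		return false // already imported; skip
	}
	// ... read, decompress, and insert the chunk's data here ...
	imported[id] = true
	return true
}

func main() {
	seen := importedSet{}
	fmt.Println(importChunk(seen, "a3f8c1d2")) // first run imports
	fmt.Println(importChunk(seen, "a3f8c1d2")) // second run skips
}
```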

Commands

Export Memories

engram sync
Exports new memories as a compressed chunk to .engram/chunks/.
Project detection: By default, engram sync uses the current directory name as the project filter. Only memories from sessions matching that project are exported. Example:
cd ~/projects/my-app
engram sync  # Exports only memories from project "my-app"

Export All Projects

engram sync --all
Exports ALL memories from every project, ignoring the directory-based filter.
Use case: A shared team knowledge base or documentation repo where you want to sync memories from multiple projects into one centralized location.

Override Project

engram sync --project other-name
Manually specify a project name instead of using the directory name.

Import Chunks

engram sync --import
Imports chunks listed in the manifest that haven’t been imported yet. How it works:
  1. Reads manifest.json
  2. Gets list of chunks already imported from local DB (sync_chunks table)
  3. For each chunk in the manifest not yet imported:
    • Reads and decompresses the chunk file
    • Imports sessions, observations, and prompts into local DB
    • Records the chunk ID as imported
Duplicate handling:
  • Sessions use INSERT OR IGNORE (skip if session ID already exists)
  • Observations and prompts are imported as-is (Engram’s deduplication logic handles duplicates at save time)
Auto-import: The OpenCode plugin automatically runs engram sync --import when it detects .engram/manifest.json in the project directory. Clone a repo → open OpenCode → team memories are loaded automatically.

Check Status

engram sync --status
Shows:
  • How many chunks exist locally (in your DB)
  • How many chunks exist remotely (in the manifest)
  • How many chunks are pending import
Example output:
Local chunks: 3
Remote chunks: 5
Pending import: 2
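The pending count is a set difference: chunks listed in the manifest minus chunks recorded locally. A sketch (the `pendingChunks` helper is an assumption, not Engram's actual code):

```go
package main

import "fmt"

// pendingChunks returns manifest chunk IDs not yet recorded in the local
// DB, which is what `engram sync --status` reports as "Pending import".
func pendingChunks(remote []string, local map[string]bool) []string {
	var pending []string
	for _, id := range remote {
		if !local[id] {
			pending = append(pending, id)
		}
	}
	return pending
}

func main() {
	remote := []string{"a3f8c1d2", "b7d2e4f1", "c9f1a2b3", "d1e2f3a4", "e5f6a7b8"}
	local := map[string]bool{"a3f8c1d2": true, "b7d2e4f1": true, "c9f1a2b3": true}
	fmt.Printf("Local chunks: %d\n", len(local))
	fmt.Printf("Remote chunks: %d\n", len(remote))
	fmt.Printf("Pending import: %d\n", len(pendingChunks(remote, local)))
}
```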

Why Chunks?

Engram uses a chunked architecture instead of a single large JSON file. Here’s why:
Each engram sync creates a new chunk — old chunks are never modified. When multiple developers sync independently:
  • Alan creates a3f8c1d2.jsonl.gz
  • Juan creates b7d2e4f1.jsonl.gz
  • Git just adds both files — no conflicts
The manifest is the only file git diffs, and it’s small and append-only.
Each chunk is identified by the first 8 characters of its SHA-256 content hash. This means:
  • Each chunk is imported only once (tracked in local sync_chunks table)
  • If two devs create identical chunks, the hash deduplicates them
  • No risk of double-importing the same data
Chunks are gzipped JSONL. Typical sizes:
  • 8 sessions + 10 observations + 5 prompts = ~2KB compressed
  • 50 sessions + 100 observations = ~10KB compressed
Git handles these small binary files efficiently.
The manifest is the only file git needs to diff/merge:
{
  "version": 1,
  "chunks": [
    {"id": "a3f8c1d2", "created_by": "alan", ...},
    {"id": "b7d2e4f1", "created_by": "juan", ...}
  ]
}
It’s small, human-readable, and append-only (new chunks are added, old entries never change).

Workflow Examples

Solo Developer (Multiple Machines)

# On laptop
cd ~/projects/my-app
engram sync
git add .engram/ && git commit -m "sync memories" && git push

# On desktop
cd ~/projects/my-app
git pull
engram sync --import

Team Collaboration

# Developer A
cd ~/projects/team-app
engram sync
git add .engram/ && git commit -m "sync engram" && git push

# Developer B
cd ~/projects/team-app
git pull
engram sync --import  # Imports A's memories
# Work, then sync their own memories
engram sync
git add .engram/ && git commit -m "sync engram" && git push

# Developer A
git pull
engram sync --import  # Imports B's memories
No merge conflicts because each dev creates independent chunks. Git just adds new files to .engram/chunks/.

Shared Knowledge Base

Create a separate repo for team-wide memories across multiple projects:
# Create a shared memory repo
mkdir team-knowledge && cd team-knowledge
git init

# From any project, sync ALL memories to this repo
cd ~/projects/project-a
engram sync --all  # Use --all to ignore project filter
cp -r .engram ~/team-knowledge/
cd ~/team-knowledge
git add .engram/ && git commit -m "sync all memories" && git push

# Team members clone and import
git clone <repo-url> team-knowledge
cd team-knowledge
engram sync --import

Data Model

Manifest Entry

type ChunkEntry struct {
    ID        string `json:"id"`         // SHA-256 hash prefix (8 chars)
    CreatedBy string `json:"created_by"` // Username or machine identifier
    CreatedAt string `json:"created_at"` // ISO 8601 timestamp
    Sessions  int    `json:"sessions"`   // Number of sessions in chunk
    Memories  int    `json:"memories"`   // Number of observations in chunk
    Prompts   int    `json:"prompts"`    // Number of prompts in chunk
}

Chunk Data

type ChunkData struct {
    Sessions     []Session     `json:"sessions"`
    Observations []Observation `json:"observations"`
    Prompts      []Prompt      `json:"prompts"`
}
Each chunk is a single gzipped JSON object containing arrays of sessions, observations, and prompts.

Implementation Details

Time-Based Filtering

When exporting, Engram filters data based on the timestamp of the last chunk:
  1. Read the manifest
  2. Find the most recent chunk’s created_at timestamp
  3. Export only sessions/observations/prompts created after that timestamp
First sync: If there are no chunks yet, everything is exported. Incremental syncs: Only new data is exported (data created since the last chunk).
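Both cases can be sketched in one filter function. This is illustrative; the `Session` field names and the `newerThan` helper are assumptions:

```go
package main

import (
	"fmt"
	"time"
)

// Session is a simplified stand-in for Engram's session record.
type Session struct {
	ID        string
	CreatedAt string // RFC 3339, as in the manifest
}

// newerThan keeps only sessions created strictly after the last chunk's
// timestamp. An unparseable cutoff (e.g. empty on first sync) means
// everything is exported.
func newerThan(sessions []Session, lastChunk string) []Session {
	cutoff, err := time.Parse(time.RFC3339, lastChunk)
	if err != nil {
		return sessions // first sync: no cutoff, export everything
	}
	var out []Session
	for _, s := range sessions {
		t, err := time.Parse(time.RFC3339, s.CreatedAt)
		if err == nil && t.After(cutoff) {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	sessions := []Session{
		{"old", "2026-03-02T09:00:00Z"},
		{"new", "2026-03-03T11:00:00Z"},
	}
	fmt.Println(len(newerThan(sessions, "2026-03-03T10:15:30Z"))) // 1
}
```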

Project Filtering

When --project is specified (or auto-detected from directory name):
  1. Filter sessions by project field
  2. Include only observations/prompts from those sessions
Example:
cd ~/projects/my-app
engram sync  # Auto-detects project as "my-app"
This ensures each project’s memories are scoped to that project’s git repo.
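The two-step filter — select sessions by project, then keep only the observations belonging to them — might look like this sketch (field names and the `filterByProject` helper are assumptions):

```go
package main

import "fmt"

// Session and Observation are simplified stand-ins for Engram's records.
type Session struct {
	ID      string
	Project string
}

type Observation struct {
	SessionID string
	Title     string
}

// filterByProject keeps sessions matching the project, then keeps only
// observations that belong to one of those sessions.
func filterByProject(sessions []Session, obs []Observation, project string) ([]Session, []Observation) {
	keep := map[string]bool{}
	var fs []Session
	for _, s := range sessions {
		if s.Project == project {
			fs = append(fs, s)
			keep[s.ID] = true
		}
	}
	var fo []Observation
	for _, o := range obs {
		if keep[o.SessionID] {
			fo = append(fo, o)
		}
	}
	return fs, fo
}

func main() {
	sessions := []Session{{"s1", "my-app"}, {"s2", "other"}}
	obs := []Observation{{"s1", "uses sqlite"}, {"s2", "unrelated"}}
	fs, fo := filterByProject(sessions, obs, "my-app")
	fmt.Println(len(fs), len(fo)) // 1 1
}
```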

Compression

Chunks are gzipped using Go’s compress/gzip:
func writeGzip(path string, data []byte) error {
    f, err := os.Create(path)
    if err != nil {
        return err
    }
    defer f.Close()

    gz := gzip.NewWriter(f)
    if _, err := gz.Write(data); err != nil {
        return err
    }
    return gz.Close()
}
Typical compression ratios: 5-10x (e.g. 20KB JSON → 2KB gzipped).

Deduplication

  • At export time: Engram checks whether a chunk with the same content hash already exists. If yes, the export is skipped.
  • At import time: Engram checks the sync_chunks table to see if the chunk ID has already been imported. If yes, the import is skipped.
  • At save time: Engram’s normal deduplication logic (normalized hash + project + scope + type + title) prevents duplicate observations even if the same data is imported multiple times.
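The save-time deduplication key described above might be computed like this. The exact normalization, field order, and separator are assumptions; only the ingredients (normalized content + project + scope + type + title) come from the description:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// dedupKey sketches a save-time deduplication key: a hash over normalized
// content plus project, scope, type, and title. Normalization and field
// order here are assumptions for illustration.
func dedupKey(content, project, scope, typ, title string) string {
	norm := strings.ToLower(strings.TrimSpace(content))
	h := sha256.Sum256([]byte(norm + "|" + project + "|" + scope + "|" + typ + "|" + title))
	return hex.EncodeToString(h[:])
}

func main() {
	a := dedupKey("Uses SQLite for storage.", "my-app", "project", "fact", "storage")
	b := dedupKey("  uses sqlite for storage.  ", "my-app", "project", "fact", "storage")
	fmt.Println(a == b) // normalization makes trivially different copies collide
}
```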

MCP Tools

All 14 MCP tools for agents

Privacy

Redact sensitive data before syncing

CLI Reference

Full command-line reference

Export/Import

JSON export/import (alternative to git sync)
