Anchor supports two PostgreSQL deployment modes: embedded (default) and external. Choose based on your deployment requirements and scale.

Embedded PostgreSQL

The default configuration uses an embedded PostgreSQL instance running inside the Anchor container. This is ideal for:
  • Single-server deployments
  • Development and testing
  • Small to medium workloads
  • Simplified setup with minimal configuration

How It Works

The Anchor Docker image is built on postgres:18-alpine and includes:
  1. PostgreSQL 18 database server
  2. Node.js runtime (for API and web services)
  3. Supervisord process manager
When PG_HOST is empty, Anchor:
  • Initializes PostgreSQL in /data/postgres
  • Listens on 127.0.0.1:5432 (internal only)
  • Runs migrations automatically
  • Starts API and web services
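The mode switch hinges entirely on whether PG_HOST is set. A simplified sketch of that startup decision (the real entrypoint script is more involved, and the function name here is illustrative):

```shell
#!/bin/sh
# Illustrative sketch of the entrypoint's mode selection, keyed off PG_HOST.
select_pg_mode() {
  if [ -z "${PG_HOST:-}" ]; then
    # No external host configured: init and start the bundled PostgreSQL.
    echo "embedded"
  else
    echo "external"
  fi
}

unset PG_HOST
select_pg_mode                  # -> embedded
PG_HOST=postgres.example.com
select_pg_mode                  # -> external
```

An empty string counts the same as unset, so `PG_HOST=` in your compose file also selects embedded mode.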

Configuration

services:
  anchor:
    image: ghcr.io/zhfahim/anchor:latest
    ports:
      - "3000:3000"
    volumes:
      - anchor_data:/data

volumes:
  anchor_data:

Data Location

All data is stored in the /data volume:
/data/
├── postgres/          # PostgreSQL data directory (PGDATA)
│   ├── base/
│   ├── global/
│   ├── pg_wal/
│   └── ...
└── .jwt_secret        # Auto-generated JWT secret

Advantages

  • Simple setup: Single container, no external dependencies
  • Auto-configured: Credentials and initialization handled automatically
  • Resource efficient: Minimal overhead for small workloads
  • Easy backups: Single volume contains everything

Limitations

  • No horizontal scaling: Cannot run multiple Anchor instances
  • Resource sharing: Database and application share container resources
  • Limited tuning: PostgreSQL settings use container defaults
  • Backup complexity: Consistent volume-level backups require stopping the container (logical backups with pg_dump can still run online)

External PostgreSQL

For production deployments, use a dedicated PostgreSQL server. This is ideal for:
  • High-availability setups
  • Horizontal scaling (multiple Anchor instances)
  • Managed database services (AWS RDS, Azure Database, etc.)
  • Advanced database tuning and monitoring
  • Easier backup and disaster recovery

Configuration

1. Set up PostgreSQL server

Create a database and user for Anchor:
CREATE USER anchor WITH PASSWORD 'secure-password';
CREATE DATABASE anchor OWNER anchor;
GRANT ALL PRIVILEGES ON DATABASE anchor TO anchor;
2. Configure Anchor

Set PG_HOST to your PostgreSQL server:
docker-compose.yml
services:
  anchor:
    image: ghcr.io/zhfahim/anchor:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      - APP_URL=https://notes.example.com
      - PG_HOST=postgres.example.com
      - PG_PORT=5432
      - PG_USER=anchor
      - PG_PASSWORD=${PG_PASSWORD}
      - PG_DATABASE=anchor
    volumes:
      - anchor_data:/data  # Still needed for JWT secret

volumes:
  anchor_data:
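Docker Compose substitutes `${PG_PASSWORD}` from your shell environment or from a `.env` file next to docker-compose.yml. A minimal example (the value shown is a placeholder):

```shell
# .env — keep this file out of version control
PG_PASSWORD=change-me-to-a-strong-password
```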
3. Start Anchor

Anchor will connect to the external database and run migrations:
docker compose up -d
Check logs to verify connection:
docker compose logs anchor | grep postgres
# Should see: [anchor] Using external Postgres: postgres.example.com:5432
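If Anchor can start before the database is reachable, a retry loop is a simple way to gate deployment scripts. A generic sketch (the probe command is passed in, so it works with pg_isready, nc, or any other check you have available):

```shell
#!/bin/sh
# Retry a probe command until it succeeds or attempts run out.
# Usage: wait_for <attempts> <delay_seconds> <command...>
wait_for() {
  attempts=$1; delay=$2; shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Typical use before `docker compose up -d`:
#   wait_for 30 2 pg_isready -h postgres.example.com -p 5432 -U anchor
```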

Docker Compose with External Database

services:
  anchor:
    image: ghcr.io/zhfahim/anchor:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      - APP_URL=https://notes.example.com
      - PG_HOST=postgres
      - PG_PORT=5432
      - PG_USER=anchor
      - PG_PASSWORD=${PG_PASSWORD}
      - PG_DATABASE=anchor
    volumes:
      - anchor_data:/data
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      - POSTGRES_USER=anchor
      - POSTGRES_PASSWORD=${PG_PASSWORD}
      - POSTGRES_DB=anchor
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U anchor"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  anchor_data:
  postgres_data:

PostgreSQL Version Requirements

  • Minimum version: PostgreSQL 12
  • Recommended: PostgreSQL 15 or 16
  • Tested with: PostgreSQL 18 (embedded)

Connection String Format

Anchor constructs the connection string internally:
postgresql://PG_USER:PG_PASSWORD@PG_HOST:PG_PORT/PG_DATABASE
Example:
postgresql://anchor:secure-password@postgres.example.com:5432/anchor
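The same assembly can be reproduced in a small shell helper, e.g. for smoke-testing connectivity with psql outside the container. A sketch (the defaults for port and database mirror the examples above; a password containing characters such as `@` or `/` would additionally need URL-encoding):

```shell
#!/bin/sh
# Assemble the connection URL from the same PG_* variables Anchor reads.
pg_url() {
  echo "postgresql://${PG_USER}:${PG_PASSWORD}@${PG_HOST}:${PG_PORT:-5432}/${PG_DATABASE:-anchor}"
}

# Example:
#   PG_USER=anchor PG_PASSWORD=secret PG_HOST=postgres.example.com
#   pg_url   -> postgresql://anchor:secret@postgres.example.com:5432/anchor
```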

Migration Between Modes

From Embedded to External

1. Backup embedded database

Export data using pg_dump:
docker exec anchor pg_dump -U anchor -d anchor -h 127.0.0.1 > anchor_backup.sql
2. Set up external PostgreSQL

Create the database and user as shown in the external configuration section.
3. Import data

Restore the backup to your external database:
psql -h postgres.example.com -U anchor -d anchor < anchor_backup.sql
4. Update configuration

Modify docker-compose.yml to use external database:
environment:
  - PG_HOST=postgres.example.com
  - PG_USER=anchor
  - PG_PASSWORD=${PG_PASSWORD}
5. Restart Anchor

docker compose up -d
The container will connect to the external database instead of starting embedded PostgreSQL.

From External to Embedded

1. Backup external database

pg_dump -h postgres.example.com -U anchor -d anchor > anchor_backup.sql
2. Update configuration

Remove or empty the PG_HOST variable:
environment:
  # - PG_HOST=postgres.example.com  # Remove this line
  - PG_USER=anchor
  - PG_PASSWORD=${PG_PASSWORD}
3. Start with embedded mode

docker compose down
docker compose up -d
4. Import data

docker exec -i anchor psql -U anchor -d anchor -h 127.0.0.1 < anchor_backup.sql

Database Maintenance

Backups

# Backup
docker exec anchor pg_dump -U anchor -d anchor -h 127.0.0.1 -F c -f /data/backup.dump

# Copy from container
docker cp anchor:/data/backup.dump ./anchor_backup_$(date +%Y%m%d).dump

# Restore
docker exec -i anchor pg_restore -U anchor -d anchor -h 127.0.0.1 -c < backup.dump
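For scheduled backups, a small retention helper keeps only the most recent N dumps on disk. A sketch under assumed conventions (the `anchor_backup_*.dump` naming follows the copy command above; the keep-count is an example):

```shell
#!/bin/sh
# Keep only the newest $2 files matching anchor_backup_*.dump in directory $1.
prune_backups() {
  dir=$1; keep=$2
  ls -1t "$dir"/anchor_backup_*.dump 2>/dev/null | tail -n +$((keep + 1)) | while read -r f; do
    rm -f "$f"
  done
}

# Typical nightly cron job:
#   docker exec anchor pg_dump -U anchor -d anchor -h 127.0.0.1 -F c -f /data/backup.dump
#   docker cp anchor:/data/backup.dump "./anchor_backup_$(date +%Y%m%d).dump"
#   prune_backups . 7
```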

Monitoring

# Embedded
docker exec anchor psql -U anchor -d anchor -h 127.0.0.1 -c "SELECT pg_size_pretty(pg_database_size('anchor'));"

# External
psql -h postgres.example.com -U anchor -d anchor -c "SELECT pg_size_pretty(pg_database_size('anchor'));"

Performance Tuning

Embedded PostgreSQL

Limited tuning available. Increase container resources:
docker-compose.yml
services:
  anchor:
    image: ghcr.io/zhfahim/anchor:latest
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 4G
        reservations:
          cpus: '1'
          memory: 2G

External PostgreSQL

Full control over PostgreSQL configuration:
postgresql.conf
max_connections = 100
shared_buffers = 256MB
effective_cache_size = 1GB
maintenance_work_mem = 64MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 2621kB
min_wal_size = 1GB
max_wal_size = 4GB
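The values above follow common sizing heuristics; for instance, shared_buffers is often set to roughly 25% of system RAM (256MB on a 1 GB server, matching this example). A helper to compute that starting point (the 25% figure is a general PostgreSQL rule of thumb, not an Anchor requirement):

```shell
#!/bin/sh
# Suggest a shared_buffers value (in MB) as ~25% of total RAM, given in MB.
suggest_shared_buffers() {
  ram_mb=$1
  echo "$((ram_mb / 4))MB"
}

# Example: a 4 GB server
#   suggest_shared_buffers 4096   -> 1024MB
```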

Troubleshooting

Connection refused errors

# Check if PostgreSQL is accessible
docker exec anchor pg_isready -h postgres.example.com -p 5432 -U anchor

# Verify network connectivity
docker exec anchor ping postgres.example.com

# Check Anchor logs
docker compose logs anchor | grep -i postgres
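Minimal Alpine-based images sometimes ship without ping. A bash `/dev/tcp` probe needs no extra tools, at the cost of requiring bash itself (which the Anchor image may or may not include — this is a generic fallback, not a documented Anchor feature):

```shell
# Return 0 if host:port accepts TCP connections within the timeout.
check_tcp() {
  host=$1; port=$2
  timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
}

# Example:
#   check_tcp postgres.example.com 5432 && echo reachable
```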

Migration failures

# View migration status
docker exec anchor sh -c "cd /app/server && npx prisma migrate status"

# Manually run migrations
docker exec anchor sh -c "cd /app/server && npx prisma migrate deploy"

Database corruption (embedded)

# Stop container
docker compose down

# Check PostgreSQL data
docker run --rm -v anchor_data:/data alpine sh -c "ls -la /data/postgres"

# If corrupted, restore from backup or start fresh
docker volume rm anchor_data

Comparison Table

| Feature            | Embedded PostgreSQL | External PostgreSQL    |
|--------------------|---------------------|------------------------|
| Setup complexity   | Minimal             | Moderate               |
| Configuration      | Auto-configured     | Manual setup required  |
| Container count    | 1                   | 2+                     |
| Horizontal scaling | No                  | Yes                    |
| High availability  | No                  | Yes (with replicas)    |
| Resource isolation | Shared              | Separate               |
| Backup complexity  | Volume backup       | Standard pg_dump       |
| Managed services   | No                  | Yes (RDS, Azure, etc.) |
| Performance tuning | Limited             | Full control           |
| Best for           | Small deployments   | Production             |

Next Steps

Configuration

Learn about all configuration options

Updating

Keep your instance up to date
