

Prisma Next migrations are contract-driven: you change your contract, emit it, plan the migration offline, fill in any data transforms, then apply to the database. Every migration package is fully attested on disk — a content-addressed migrationHash in migration.json means the runner can detect hand-edits or partial writes before applying anything.

The migration workflow

1. Bootstrap a new database with db init

For a fresh database, db init plans and applies all additive operations needed to bring an empty database up to your current contract, then writes the contract marker:
prisma-next db init --db postgresql://user:pass@localhost/mydb
Use --dry-run to preview the plan without touching the database:
prisma-next db init --dry-run
db init is additive-only — it creates missing tables, columns, constraints, and indexes. It will not rename or drop anything. If your database already has a marker that does not match the current contract, db init will refuse and ask you to use the migration workflow instead.
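The additive-only guarantee can be pictured as a filter over the planned operations. A minimal sketch in TypeScript, where the operation shapes are illustrative assumptions, not Prisma Next's actual plan format:

```typescript
// Illustrative plan-op shapes; the real Prisma Next format may differ.
type PlanOp =
  | { kind: "createTable"; table: string }
  | { kind: "addColumn"; table: string; column: string }
  | { kind: "createIndex"; table: string; index: string }
  | { kind: "dropTable"; table: string }
  | { kind: "renameColumn"; table: string; from: string; to: string };

const ADDITIVE = new Set(["createTable", "addColumn", "createIndex"]);

// db init would refuse any plan containing non-additive operations.
function assertAdditiveOnly(ops: PlanOp[]): PlanOp[] {
  const offending = ops.filter((op) => !ADDITIVE.has(op.kind));
  if (offending.length > 0) {
    throw new Error(
      `db init is additive-only; found: ${offending.map((o) => o.kind).join(", ")}`,
    );
  }
  return ops;
}
```

Anything that falls outside the additive set is rejected up front, which is why db init can run without confirmation prompts.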
2. Update the contract after schema changes

After editing your schema, emit a fresh contract.json and contract.d.ts:
prisma-next contract emit
This does not connect to the database. It is a pure compilation step that produces deterministic artifacts from your schema definition.
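One way to see "deterministic artifacts": the serializer must produce byte-identical output for the same logical schema, regardless of key insertion order. A common technique is canonical JSON with recursively sorted object keys. This is a sketch of the idea, not Prisma Next's actual serializer:

```typescript
// Canonical JSON: object keys sorted recursively, so the same logical
// contract always serializes to the same bytes on every emit.
function canonicalJson(value: unknown): string {
  if (Array.isArray(value)) {
    return `[${value.map(canonicalJson).join(",")}]`;
  }
  if (value !== null && typeof value === "object") {
    const entries = Object.entries(value as Record<string, unknown>)
      .sort(([a], [b]) => (a < b ? -1 : a > b ? 1 : 0))
      .map(([k, v]) => `${JSON.stringify(k)}:${canonicalJson(v)}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value);
}
```

Deterministic bytes are what make the content-addressed hashes elsewhere in the workflow meaningful: the same schema always hashes to the same value.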
3. Plan a migration

migration plan diffs the latest on-disk migration state against the newly emitted contract and scaffolds a new migration package. No database connection is needed:
prisma-next migration plan --name add-order-status
This writes the following files under migrations/<timestamp>-add-order-status/:

| File | Contents |
| --- | --- |
| migration.ts | Editable migration source with placeholder() slots |
| migration.json | Attested metadata including migrationHash |
| ops.json | Planned operations (or [] if placeholders are unfilled) |
| start-contract.json / end-contract.json | Contract bookends for the migration edge |
Use --from <hash> to branch from a specific contract hash rather than the latest migration target:
prisma-next migration plan --name add-order-status --from sha256:abc123...
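Conceptually, migration plan is a structural diff between the start contract and the end contract. The toy diff below uses a simplified contract shape (table → column → type) purely for illustration; the real contract.json format is richer:

```typescript
type Contract = Record<string, Record<string, string>>; // table -> column -> type

type DiffOp =
  | { kind: "createTable"; table: string }
  | { kind: "addColumn"; table: string; column: string; type: string };

// Diff start -> end, emitting only the additive ops a planner can lower
// automatically; transforms it cannot lower would become placeholder() slots.
function diffContracts(start: Contract, end: Contract): DiffOp[] {
  const ops: DiffOp[] = [];
  for (const [table, columns] of Object.entries(end)) {
    if (!(table in start)) {
      ops.push({ kind: "createTable", table });
      continue;
    }
    for (const [column, type] of Object.entries(columns)) {
      if (!(column in start[table])) {
        ops.push({ kind: "addColumn", table, column, type });
      }
    }
  }
  return ops;
}
```

Because the diff is computed purely from on-disk artifacts, no database connection is needed at planning time.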
4. Fill in placeholder() slots if needed

When the planner cannot lower a data transform automatically (for example, backfilling a new non-nullable column), it inserts a placeholder() slot in migration.ts. Open the file and replace the placeholder with your implementation:
// migrations/20240115-add-order-status/migration.ts
import { placeholder, sql } from '@prisma-next/migration-tools';

export default {
  operations: [
    // ... additive operations planned automatically ...
    placeholder('backfill-order-status', async (tx) => {
      // The planner's empty slot, filled in with the data transform:
      await tx.execute(
        sql`UPDATE orders SET status = 'pending' WHERE status IS NULL`,
      );
    }),
  ],
};
After editing, re-emit ops.json and update the migrationHash by running the file directly with Node:
node migrations/20240115-add-order-status/migration.ts
This regenerates ops.json with your transform included and reattests migration.json. If any placeholder() slot is still unfilled, the script exits with PN-MIG-2001.
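The unfilled-placeholder check can be sketched as a pass over the operations array. The error code PN-MIG-2001 comes from the docs above; the operation shapes here are assumptions for illustration:

```typescript
type Operation =
  | { kind: "planned"; name: string }
  | { kind: "placeholder"; name: string; filled: boolean };

// Mirrors the PN-MIG-2001 behavior: refuse to emit ops.json while any
// placeholder() slot is still unfilled.
function emitOps(operations: Operation[]): Operation[] {
  const unfilled = operations.filter(
    (op) => op.kind === "placeholder" && !op.filled,
  );
  if (unfilled.length > 0) {
    throw new Error(
      `PN-MIG-2001: unfilled placeholder(s): ${unfilled.map((o) => o.name).join(", ")}`,
    );
  }
  return operations;
}
```

Failing fast here is what keeps an empty or partially planned ops.json from ever reaching migration apply.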
5. Apply migrations to the database

migration apply reads the migration graph from disk, determines which migrations are pending relative to the database marker, and executes them in order. Each migration runs in its own transaction:
prisma-next migration apply --db postgresql://user:pass@localhost/mydb
If a migration fails, previously applied migrations are preserved. Re-running migration apply resumes from the last successful migration.
Use --ref to target a named environment ref rather than the current contract hash:
prisma-next migration apply --ref staging
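The resume-from-marker semantics can be sketched with an in-memory marker. Everything here is a simplified model (the real runner persists the marker in the database and wraps each migration in its own transaction):

```typescript
interface Migration {
  name: string;
  run: () => void;
}

// Applies pending migrations in order. On failure, the marker keeps
// everything applied so far, so a re-run resumes after the last success.
function applyMigrations(
  graph: Migration[],
  marker: { applied: string[] },
): void {
  const pending = graph.filter((m) => !marker.applied.includes(m.name));
  for (const m of pending) {
    m.run(); // in the real runner: one transaction per migration
    marker.applied.push(m.name);
  }
}
```

Because the marker only advances after a migration succeeds, a second invocation recomputes the pending set and never re-runs completed migrations.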
6. Inspect migration status

migration status shows the migration graph and how the database marker relates to it:
# With a DB connection — shows applied/pending markers
prisma-next migration status --db postgresql://user:pass@localhost/mydb

# Without a DB connection — shows the graph from disk only
prisma-next migration status
Output includes ◄ DB, ◄ Contract, and ◄ ref:<name> markers so you can see exactly where each environment sits in the graph.
7. Create named refs for multi-environment workflows

Named refs map logical environment names (such as staging or production) to specific contract hashes. Use them to route migration apply and migration status to the right point in the graph for each environment:
# Set a ref
prisma-next migration ref set production sha256:abc123...

# List all refs
prisma-next migration ref list

# Target a ref during apply
prisma-next migration apply --ref production
Refs are stored in migrations/refs.json and written atomically to prevent corruption.

Migration integrity: hash attestation

Every migration package on disk is fully attested. migration.json contains a migrationHash that is computed over (metadata, ops). When migration apply loads packages, it rehashes each one and confirms the result matches the stored hash. If a package has been hand-edited or partially written, the load fails with MIGRATION.HASH_MISMATCH and points at the offending directory:
✖ Migration hash mismatch (MIGRATION.HASH_MISMATCH)
  path: migrations/20240115-add-order-status/
  Fix: Re-run `node migrations/20240115-add-order-status/migration.ts`
       or restore from version control.
This means the runner never executes a migration that has silently diverged from what was planned and reviewed.
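The rehash-and-compare check can be sketched in a few lines. The exact fields and canonicalization Prisma Next hashes are internal; plain JSON.stringify here is a stand-in for whatever stable serialization the real tool uses:

```typescript
import { createHash } from "node:crypto";

// Content-address a migration package over (metadata, ops).
function migrationHash(metadata: object, ops: unknown[]): string {
  const payload = JSON.stringify({ metadata, ops });
  return "sha256:" + createHash("sha256").update(payload).digest("hex");
}

// The loader's check: rehash the package and compare to the stored value.
function verifyPackage(pkg: {
  metadata: object;
  ops: unknown[];
  migrationHash: string;
}): void {
  if (migrationHash(pkg.metadata, pkg.ops) !== pkg.migrationHash) {
    throw new Error("MIGRATION.HASH_MISMATCH");
  }
}
```

Any edit to the operations or metadata changes the recomputed hash, so a hand-edited package is caught at load time rather than mid-apply.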

db init vs db update

db update allows destructive operations (renaming columns, dropping tables) in addition to additive ones. In interactive mode you will be prompted to confirm; in non-interactive mode you must pass -y / --yes. Review the plan carefully before confirming — destructive operations can cause data loss.
| Command | Use case | Allowed operations |
| --- | --- | --- |
| db init | Bootstrap a fresh database | Additive only |
| db update | Bring any existing database to the current contract | Additive, widening, destructive |
db init is the safe default for new databases. db update is the escape hatch for databases that were not initialized through the migration workflow, or for quick local resets. For production, prefer migration plan followed by migration apply to keep a full audit trail in version control.
