Prisma Next ships two Postgres facades with different lifecycle models. The right choice depends on whether your runtime keeps a process alive across requests or spins up a fresh isolate per invocation. This guide covers the per-request facade (@prisma-next/postgres/serverless) and walks through a complete Cloudflare Workers + Hyperdrive deployment.
Two facades, one driver
@prisma-next/postgres exports two facades that compose the same execution stack and differ only in lifecycle ergonomics:
| Surface | postgres() — /runtime | postgresServerless() — /serverless |
|---|---|---|
| Lifecycle | Long-lived process | Per-request invocation |
| sql | yes | yes |
| context | yes | yes |
| stack | yes | yes |
| contract | yes | yes |
| orm | Closure-cached on the client | Constructed per request via createOrmClient(runtime) |
| runtime() | Closure-cached Runtime | Not available — use db.connect({ url }) per request |
| transaction(...) | Closure-cached entrypoint | Not available — use withTransaction(runtime, ...) per request |
| Cursor default | Disabled | Enabled |
| Disposal | None — process owns the lifetime | Symbol.asyncDispose on the runtime; await using disposes |
The static authoring surface (sql, context, stack, contract) is identical on both facades — it is a pure function of the contract and is safe to cache at module scope. The runtime-bound surface differs because closure-caching a Runtime (and the pg.Client wired into it) across fetch invocations causes stale-connection failures after isolate idle and concurrent-fetch races on a shared client.
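The per-request lifecycle leans on JavaScript's explicit resource management protocol. The sketch below is plain TypeScript with no Prisma Next imports — the runtime shape is illustrative, not the library's actual type — and shows roughly what `await using runtime = await db.connect(...)` desugars to, and why disposal still runs on throw paths:

```typescript
// Sketch of the disposal contract the serverless facade relies on.
// `await using` invokes [Symbol.asyncDispose]() when the binding leaves
// scope — including when the body throws — which is what lets each fetch
// open and reliably close its own connection.
const log: string[] = [];

function fakeConnect() {
  // Stand-in for db.connect({ url }); this shape is illustrative only.
  return {
    async execute(query: string): Promise<unknown[]> {
      log.push(`execute:${query}`);
      return [];
    },
    async [Symbol.asyncDispose]() {
      log.push('closed');
    },
  };
}

async function handleFetch(): Promise<void> {
  const runtime = fakeConnect();
  try {
    await runtime.execute('SELECT 1');
  } finally {
    // Roughly what `await using runtime = fakeConnect()` desugars to:
    // disposal runs whether the try body returned or threw.
    await runtime[Symbol.asyncDispose]();
  }
}

await handleFetch();
console.log(log); // execute first, then closed
```

Because disposal is tied to block scope rather than to process lifetime, a fresh runtime per fetch never outlives its isolate — which is exactly the failure mode the closure-cached `/runtime` facade would hit here.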
Cloudflare Workers + Hyperdrive
Cloudflare Workers + Hyperdrive is the primary tested deployment path. Hyperdrive is Cloudflare’s managed Postgres connection pooler at the edge: the Worker connects to it over the standard Postgres wire protocol via the pg library, and Hyperdrive pools connections to your origin Postgres. The Worker reads the connection string off env.HYPERDRIVE.connectionString.
Architecture
┌─────────────────┐ ┌────────────────┐ ┌─────────────────┐
│ Worker isolate │ ───→ │ Hyperdrive │ ───→ │ Origin Postgres │
│ (per fetch) │ pg │ (edge pooler) │ pg │ │
│ db.connect() │ │ │ │ │
└─────────────────┘ └────────────────┘ └─────────────────┘
▲ ▲
│ runtime queries (per fetch) Node-side migrations
│ run directly against
│ the origin URL
Step 1: Provision Hyperdrive
pnpm exec wrangler hyperdrive create my-hyperdrive \
--connection-string="postgres://USER:PASS@HOST:PORT/DBNAME"
Step 2: Configure the binding
Wire the binding ID printed by Wrangler into wrangler.jsonc:
{
"name": "my-worker",
"main": "src/worker.ts",
"compatibility_date": "2025-07-18",
"compatibility_flags": ["nodejs_compat"],
"hyperdrive": [
{
"binding": "HYPERDRIVE",
"id": "<the-id-printed-by-wrangler-hyperdrive-create>"
}
]
}
nodejs_compat is required. The Postgres driver (pg) uses Node built-ins that workerd polyfills under that flag.
For wrangler dev, set the local connection string in .env (not .dev.vars):
# .env (gitignored)
WRANGLER_HYPERDRIVE_LOCAL_CONNECTION_STRING_HYPERDRIVE="postgres://user:pass@127.0.0.1:5432/mydb"
Step 3: Module-scope setup
Construct the db client once per isolate at module scope. Only the static authoring surface is closure-cached here — the runtime-bound surface is acquired per request:
// src/prisma/db.ts
import postgresServerless from '@prisma-next/postgres/serverless';
import type { Contract } from './contract.d';
import contractJson from './contract.json' with { type: 'json' };
export const db = postgresServerless<Contract>({
contractJson,
// cursor: { disabled: true }, // Required if your origin is behind Hyperdrive
// // See warning below.
});
Cursor mode hangs on Cloudflare Hyperdrive. The default cursor path uses pg-cursor’s extended-query named-portal protocol. Against a real deployed Hyperdrive config, after rows are returned, Hyperdrive emits Protocol Error: Unexpected protocol code: C (SQLSTATE 58000) and never sends ReadyForQuery. The connection wedges and Cloudflare kills the request after 30 s with error 1101. This affects every read path (SQL DSL, ORM .all() / .first(), for await). Workaround: pass cursor: { disabled: true } to postgresServerless({...}) to force the simple-protocol path. This bug is tracked upstream as a Cloudflare Hyperdrive issue.
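With that bug in mind, the module-scope setup for a Hyperdrive-backed Worker passes the workaround explicitly. This is the Step 3 file with the documented cursor option uncommented — nothing else changes:

```typescript
// src/prisma/db.ts — Hyperdrive-safe variant of the Step 3 setup
import postgresServerless from '@prisma-next/postgres/serverless';
import type { Contract } from './contract.d';
import contractJson from './contract.json' with { type: 'json' };

export const db = postgresServerless<Contract>({
  contractJson,
  // Force the simple-protocol read path. The default cursor path
  // (pg-cursor named portals) wedges behind Hyperdrive and the
  // request dies with Cloudflare error 1101 after 30 s.
  cursor: { disabled: true },
});
```

If your origin is not behind Hyperdrive, you can omit the option and keep the cursor default.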
Step 4: Per-request handler
// src/worker.ts
import { withTransaction } from '@prisma-next/sql-runtime';
import { createOrmClient } from './orm-client/client';
import { db } from './prisma/db';
interface Env {
HYPERDRIVE: { connectionString: string };
}
export default {
async fetch(request: Request, env: Env): Promise<Response> {
// Fresh runtime per fetch. runtime.close() runs automatically
// when the fetch body returns (including on throw paths).
await using runtime = await db.connect({ url: env.HYPERDRIVE.connectionString });
const url = new URL(request.url);
// SQL DSL
if (url.pathname === '/sql/users') {
const rows = await runtime.execute(
db.sql.user.select('id', 'email').limit(10).build(),
);
return Response.json(rows);
}
// ORM — constructed against the per-request runtime
if (url.pathname === '/orm/users') {
const orm = createOrmClient(runtime);
const rows = await orm.User.newestFirst().take(10).all();
return Response.json(rows);
}
// Transactions
if (url.pathname === '/tx/example') {
const result = await withTransaction(runtime, async (tx) => {
await tx.execute(db.sql.user.update({ /* ... */ }).where(/* ... */).build());
await tx.execute(db.sql.post.insert({ /* ... */ }).build());
return { ok: true };
});
return Response.json(result);
}
return new Response('not found', { status: 404 });
},
};
Wiring the ORM client
createOrmClient(runtime) is a factory that passes the per-request runtime to orm({...}):
// src/orm-client/client.ts
import type { Runtime } from '@prisma-next/sql-runtime';
import { orm } from '@prisma-next/sql-orm-client';
import { db } from '../prisma/db';
import { UserCollection, PostCollection } from './collections';
export function createOrmClient(runtime: Runtime) {
return orm({
runtime,
context: db.context,
collections: {
User: UserCollection,
Post: PostCollection,
},
});
}
Other per-request runtimes
The postgresServerless facade works on any per-request runtime. The only difference between runtimes is how you source the connection string:
| Runtime | Connection-string source |
|---|---|
| AWS Lambda (Node) | process.env.DATABASE_URL |
| Vercel Serverless (Node) | process.env.DATABASE_URL |
| Vercel Edge | process.env.DATABASE_URL |
| Deno Deploy | Deno.env.get('DATABASE_URL') |
| Bun edge | process.env.DATABASE_URL |
The module-scope and per-request patterns are identical to the Cloudflare Workers example. Replace env.HYPERDRIVE.connectionString with the appropriate environment variable for your platform:
// Module scope
export const db = postgresServerless<Contract>({ contractJson });
// Per-request (AWS Lambda example)
import type { APIGatewayEvent } from 'aws-lambda'; // from @types/aws-lambda
export const handler = async (event: APIGatewayEvent) => {
await using runtime = await db.connect({ url: process.env.DATABASE_URL! });
const rows = await runtime.execute(db.sql.user.select('id', 'email').limit(10).build());
return { statusCode: 200, body: JSON.stringify(rows) };
};
Hyperdrive is Cloudflare-specific. On other runtimes, the URL points directly at your origin Postgres or at whatever pooler your platform exposes (RDS Proxy on Lambda, Vercel Postgres pooler, etc.).
Migrations from Node
Migrations stay on Node, against the origin database connection string — not Hyperdrive or any edge pooler.
Migration commands (prisma-next migration apply, prisma-next db init, etc.) are control-plane operations that run in long-lived Node processes (CI runners, deploy pipelines, dev workstations). Routing DDL through Hyperdrive is explicitly not recommended: Hyperdrive caches query results at the edge, which is desirable for runtime reads but unsafe for migration ledger reads (a stale read could cause duplicate-apply or skipped-apply).
# Always run migrations against the origin URL directly
DATABASE_URL="postgres://user:pass@origin-host:5432/mydb" \
prisma-next migration apply
There is no per-request migration story. See Managing database migrations for the full workflow.