
sqlfu/outbox is a small transactional outbox and job queue built on top of the same sqlfu client you already use. It gives you transactional event emission, per-consumer fan-out, retry and dead-letter handling, delayed dispatch, crash recovery, and causation chains, all in a single dependency-free module.
This module is experimental: the API is still in flux, and breaking changes are expected before a stable release. The basic model of events and consumers will remain, so migrations should be manageable.

What the outbox provides

  • Transactional emit — the event row is inserted in the same transaction as your domain write, so either both happen or neither does.
  • Per-consumer fan-out — one emitted event spawns one job per registered consumer.
  • Retry + DLQ — failed jobs are rescheduled according to a retry policy; once a hard attempt cap is hit, they transition to status = 'failed'.
  • Delayed dispatch — a consumer can schedule its job to run later (for example, 24h after emission) via the delay option.
  • Visibility-timeout crash recovery — if a worker dies holding a claimed job, the job becomes re-claimable by a future worker after the visibility timeout expires.
  • Causation chains — an event emitted inside a handler automatically records which job and consumer caused it.

Why SQLite serialises writers

The whole module is built on the observation that SQLite serialises writers. You do not need row-locking or work-leasing: a plain begin; select pending; update to running; commit sequence is enough to safely claim jobs without races.
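
That claim sequence can be sketched against an in-memory stand-in for the jobs table. The Job shape and claimNext below are illustrative, not the module's API; in the real module the same steps run as a single SQLite write transaction:

```typescript
// In-memory stand-in for the sqlfu_outbox_jobs table.
type Job = {id: number; status: 'pending' | 'running'};

const jobs: Job[] = [
  {id: 1, status: 'pending'},
  {id: 2, status: 'pending'},
];

// Because SQLite serialises writers, the real version runs as one
// uninterrupted write transaction: begin; select pending; update to
// running; commit. No other worker can interleave between the select
// and the update, so no row locks or lease columns are needed.
function claimNext(): Job | undefined {
  const job = jobs.find((j) => j.status === 'pending'); // select pending
  if (job) job.status = 'running';                      // update to running
  return job;                                           // commit
}

const first = claimNext();
const second = claimNext();
console.log(first?.id, second?.id); // 1 2: each call claims a distinct job
```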

Setup

1. Define your event types

Start with a TypeScript type map from event name to payload shape. This gives you end-to-end type safety through emit and handler.

type AppEvents = {
  'user:signed_up': {userId: number; email: string};
};

2. Define your consumers

Use defineConsumer to declare a named handler with its options:

import {defineConsumer} from 'sqlfu/outbox';

const welcomeEmail = defineConsumer<AppEvents['user:signed_up']>({
  name: 'welcomeEmail',
  handler: async ({payload}) => {
    await sendEmail(payload.email, 'Welcome!');
  },
});

3. Create the outbox

Wire your consumers together with createOutbox:

import {createOutbox} from 'sqlfu/outbox';

const outbox = createOutbox<AppEvents>({
  client,                                 // any sqlfu Client (SyncClient or AsyncClient)
  consumers: {
    'user:signed_up': [welcomeEmail],
  },
  defaults: {
    visibilityTimeout: '30s',
    maxAttempts: 5,
  },
});

await outbox.setup();                      // idempotent; creates sqlfu_outbox_{events,jobs}

4. Emit events in your domain transactions

Pass {client: tx} to emit the event inside the same transaction as your domain write:

await client.transaction(async (tx) => {
  await tx.run({sql: 'insert into users (email) values (?)', args: [email]});
  await outbox.emit({name: 'user:signed_up', payload: {userId: 1, email}}, {client: tx});
});
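
The all-or-nothing behaviour can be illustrated with a toy in-memory transaction. The users, events, and transaction names below are stand-ins for the real tables and client, not sqlfu's API:

```typescript
// Toy buffered transaction: writes apply only if fn completes without throwing.
const users: string[] = [];
const events: string[] = [];

function transaction(fn: (tx: {users: string[]; events: string[]}) => void): void {
  const tx = {users: [...users], events: [...events]}; // buffer writes
  fn(tx); // a throw here discards the buffered writes (rollback)
  users.splice(0, users.length, ...tx.users);          // commit
  events.splice(0, events.length, ...tx.events);
}

// Failed transaction: the domain write AND the emitted event are discarded.
try {
  transaction((tx) => {
    tx.users.push('a@example.com');
    tx.events.push('user:signed_up'); // emitted inside the same tx
    throw new Error('constraint violation');
  });
} catch {}

// Successful transaction: both land together.
transaction((tx) => {
  tx.users.push('b@example.com');
  tx.events.push('user:signed_up');
});

console.log(users, events); // ['b@example.com'] ['user:signed_up']
```

Either both rows survive or neither does, which is exactly the guarantee the outbox gets by inserting the event row through the same tx handle.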

5. Drive a worker loop

Call tick() on a timer to drain pending jobs. tick() returns quickly; sleep briefly when there is nothing to claim:

while (!signal.aborted) {
  const result = await outbox.tick();
  if (result.claimed === 0) await sleep(500);
}
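
A self-contained version of that loop, with sleep and the abort signal spelled out. mockOutbox is a stand-in that pretends to drain three jobs, and the demo aborts itself after the first idle tick rather than running forever:

```typescript
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Stand-in outbox: three claimable jobs, then nothing left.
let remaining = 3;
const mockOutbox = {
  tick: async () => {
    if (remaining > 0) {
      remaining -= 1;
      return {claimed: 1};
    }
    return {claimed: 0};
  },
};

const controller = new AbortController();
let claimedTotal = 0;

while (!controller.signal.aborted) {
  const result = await mockOutbox.tick();
  claimedTotal += result.claimed;
  if (result.claimed === 0) {
    await sleep(10);    // a real worker would sleep ~500ms and loop again
    controller.abort(); // demo only: stop once the queue is drained
  }
}

console.log(claimedTotal); // 3
```

In production you would keep looping until a shutdown signal fires, exactly as in the loop above; the AbortController here just makes the example terminate.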

Consumer options

Every field except name and handler is optional:

defineConsumer<Payload, AppEvents>({
  name: 'myConsumer',

  // truthy → fan-out includes this consumer for this event
  when: ({payload}) => payload.shouldDispatch,

  // how long to wait before the job is eligible to run
  delay: ({payload}) => '24h',

  // return {retry: true, delay, reason} to reschedule, or {retry: false} to fail fast
  retry: (job, error) => ({retry: true, delay: '30s', reason: String(error)}),

  // how long after claiming a job before another worker may reclaim it
  visibilityTimeout: '2m',

  handler: async ({payload, eventId, job, emit}) => {
    // `emit` is pre-bound to this job's causation context.
    // Events emitted here will have causedBy pointing back to this job.
    await emit({name: 'myConsumer:didAThing', payload: {/* … */}});
  },
});

Time periods use the Ns, Nm, Nh, Nd suffix format (seconds, minutes, hours, days).
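
For illustration, here is how such duration strings map onto milliseconds. parseDuration is hypothetical, not an export of sqlfu/outbox:

```typescript
// Convert 'Ns' / 'Nm' / 'Nh' / 'Nd' duration strings to milliseconds.
const UNIT_MS: Record<string, number> = {
  s: 1_000,
  m: 60_000,
  h: 3_600_000,
  d: 86_400_000,
};

function parseDuration(input: string): number {
  const match = /^(\d+)([smhd])$/.exec(input);
  if (!match) throw new Error(`invalid duration: ${input}`);
  return Number(match[1]) * UNIT_MS[match[2]];
}

console.log(parseDuration('30s')); // 30000
console.log(parseDuration('24h')); // 86400000
```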

Causation chains

Handlers receive an emit helper that already knows its own job context. Events emitted through that helper automatically get context.causedBy = {eventId, consumerName, jobId} pointing back to the originating job.

This is explicit by design. sqlfu runs in browsers, edge workers, and mobile environments, so the outbox avoids any node: imports. AsyncLocalStorage would have made causation automatic for Node users but broken everywhere else. Threading emit through the handler input keeps the module dependency-free at the cost of one extra argument.

If you call outbox.emit(...) from outside a handler, for example in response to a user action, the event is still emitted, just without a causedBy entry. That is the correct behaviour: the event was not caused by another job.
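
The mechanics reduce to closing over the job's identifiers when the handler's emit is created. A minimal sketch, where the event shape, emit, and bindEmit are illustrative rather than the module's internals:

```typescript
type CausedBy = {eventId: number; consumerName: string; jobId: number};
type StoredEvent = {id: number; name: string; causedBy?: CausedBy};

const log: StoredEvent[] = [];
let nextId = 1;

// Top-level emit: no causation context (e.g. called from a request handler).
function emit(name: string, causedBy?: CausedBy): StoredEvent {
  const event: StoredEvent = {id: nextId++, name, causedBy};
  log.push(event);
  return event;
}

// The outbox hands each handler an emit pre-bound to the job running it.
function bindEmit(eventId: number, consumerName: string, jobId: number) {
  return (name: string) => emit(name, {eventId, consumerName, jobId});
}

const root = emit('user:signed_up');                       // no causedBy
const handlerEmit = bindEmit(root.id, 'welcomeEmail', 42); // inside the job
const child = handlerEmit('email:sent');

console.log(child.causedBy); // {eventId: 1, consumerName: 'welcomeEmail', jobId: 42}
```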

Out of scope

  • HTTP integration — wire-up is straightforward: consumer objects are plain data, and outbox.tick() returns quickly. Wrap it in whatever scheduler you like.
  • OpenTelemetry spans per job — use the existing instrument() hook on the sqlfu client; handlers run against the same client.
  • PostHog / Sentry DLQ reporting — the onBookkeepingError hook and the status = 'failed' terminal state are the building blocks. Wiring them into your telemetry pipeline is a downstream concern.
