The Watch API streams DataChanges events whenever relationships or attributes are written or deleted in Permify. Consumers can use these events to react to authorization data changes in real time.
Common use cases for the Watch API include:
- Cache invalidation: invalidate or update downstream permission caches when a relationship changes.
- Audit logging: record every change to authorization data for compliance and forensic purposes.
- Event sourcing: propagate authorization changes to other services via a pub/sub system.
Connecting to the watch stream
Open a Watch stream by calling the Watch endpoint with a tenant ID and an optional snap_token representing the point in time from which you want to start receiving events. If no snap_token is provided, the stream starts from the current head of the transaction log.
Events emitted
Each message on the stream is a DataChanges object. It contains a collection of change records, each describing a single relationship or attribute that was written or deleted.
Each DataChanges message includes a snap_token. Store this token durably: it lets you resume the stream from the last processed event after a reconnect without replaying the full history.
Reconnecting after disconnection
Watch streams are pod-specific and are not handed off when a Permify instance terminates. If the pod running your Watch stream shuts down (scale-in, rolling restart, node eviction), the gRPC stream is terminated and clients must reconnect. Best practices for reconnection:
- Store the last received snap_token durably (for example, in Redis or your application database) so reconnects can resume from where processing left off.
- Implement exponential backoff with jitter to avoid a wave of simultaneous reconnections after a rolling deployment or pod restart.
- Apply a connection budget per client to cap the maximum reconnect rate.
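A minimal sketch of the backoff-with-jitter recommendation above (the base, cap, and attempt count are illustrative defaults, not Permify settings):

```python
import random

def backoff_delays(base: float = 0.5, cap: float = 30.0, attempts: int = 6):
    """Yield reconnect delays: exponential growth with full jitter, capped."""
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))
        # "Full jitter": pick a delay anywhere up to the ceiling, so a fleet
        # of clients restarting together spreads its reconnects over time.
        yield random.uniform(0, ceiling)
```

Each client would sleep for the next yielded delay before retrying its Watch connection, resetting the generator once a stream is successfully re-established.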
Running Watch at scale
Each active Watch stream opens a long-lived connection with a continuous polling loop against the database. At high connection counts this can result in significant CPU and I/O usage.
Fan-in / fan-out architecture
Instead of each application service or pod opening its own Watch stream, run a small number of dedicated Watch consumers (for example, 2–4 Permify pods) and distribute permission-change events internally via a pub/sub system (Kafka, Redis Pub/Sub, NATS, etc.) to the rest of your fleet. This limits the number of concurrent Watch connections to a fixed, controlled count regardless of how many application pods you run.
Separate Watch and Check deployments
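As a toy in-process sketch of the fan-out half of this pattern (a real deployment would use a broker such as Kafka or NATS; the class and event shape here are hypothetical), a single Watch consumer publishes each change once and every application pod reads from its own subscription:

```python
from queue import Queue

class FanOutBus:
    """Minimal in-process stand-in for a pub/sub broker (Kafka, NATS, ...)."""
    def __init__(self):
        self._subscribers: list[Queue] = []

    def subscribe(self) -> Queue:
        q = Queue()
        self._subscribers.append(q)
        return q

    def publish(self, event: dict) -> None:
        # The dedicated Watch consumer calls this once per DataChanges event;
        # every subscribed application pod receives its own copy.
        for q in self._subscribers:
            q.put(event)

bus = FanOutBus()
pod_a, pod_b = bus.subscribe(), bus.subscribe()
bus.publish({"tuple": "document:1#viewer@user:42", "op": "write"})
```

The key property is that the number of Watch streams stays constant (one, here) no matter how many subscribers exist.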
Run Watch-heavy workloads on a dedicated Permify deployment with its own Horizontal Pod Autoscaler (HPA), separate from the fleet serving Check, LookupEntity, and other read APIs. This prevents Watch load from affecting Check API capacity and vice versa.
Tuning watch_buffer_size
The database.watch_buffer_size config key (default: 100) controls how many pending change events can be queued per Watch stream before back-pressure is applied. If your write rate is high and consumers are slow, increase this value to reduce the risk of events being dropped.
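For reference, a config fragment raising the buffer might look like the following (the engine and uri values are illustrative; only watch_buffer_size is the subject here):

```yaml
database:
  engine: postgres
  uri: postgres://user:password@localhost:5432/permify  # illustrative
  watch_buffer_size: 1000  # default 100; raise when write rate is high and consumers lag
```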
See Database Configurations for the full list of configuration options.
Controlling mass reconnections
After a Permify restart or rolling deployment, all Watch clients reconnect simultaneously. To avoid this thundering herd, implement:
- Exponential backoff: double the wait time after each failed attempt.
- Jitter: add a random offset to the backoff to spread reconnects over time.
- Connection budgets: limit the maximum reconnect rate per client.