Ployz owns storage end-to-end. There is no generic “persistent volume” abstraction sitting between you and the filesystem. ZFS is the primary storage backend because its native capabilities — instant snapshots, copy-on-write clones, and incremental streaming send — are exactly the substrate that single-command primitives require. Without them, branch, fork-volume, migrate, and rollback would be multi-step procedures involving downtime and data copies. With ZFS, they are atomic operations.
ZFS as primary backend
ZFS provides four capabilities that directly map to Ployz primitives:

Snapshots
Atomic, instant point-in-time captures of a dataset. The basis for rollback — promoting a branch leaves the old environment snapshotted and instantly restorable.

Clones (copy-on-write)
Instant forks of a snapshot that share unchanged blocks with their origin. The basis for branch and fork-volume — cloning a 500 GB database takes milliseconds, not hours.

Incremental send
Stream only the changed blocks between two snapshots to another machine. The basis for migrate — volumes transfer efficiently without a full copy.

Quota enforcement
Hard per-dataset quotas. Ployz sets a quota on each volume dataset. An overcommit_ratio controls how much of the pool’s available space can be allocated across all volumes.

ZFS configuration
Two ZFS-related settings control how Ployz uses the pool:

- zfs_root — The ZFS dataset under which Ployz creates all volume datasets. For example, tank/ployz. Ployz reads the mountpoint of this dataset at startup and creates child datasets beneath it.
- overcommit_ratio — A multiplier on the pool’s available space used to calculate how much quota can be allocated across all volumes. A ratio of 1.5 allows total allocated quota to reach 1.5× the pool’s actual free space. Set this based on how much you expect volumes to diverge from their allocated size.
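As a sketch of the quota arithmetic these settings imply (function names like can_allocate are illustrative, not Ployz APIs; sizes are assumed to be in bytes):

```python
import math

def validate_overcommit_ratio(ratio: float) -> float:
    # A valid overcommit_ratio is a finite positive number;
    # anything else is rejected up front.
    if not (math.isfinite(ratio) and ratio > 0):
        raise ValueError(f"invalid overcommit_ratio: {ratio!r}")
    return ratio

def can_allocate(pool_free: int, total_allocated: int,
                 requested: int, ratio: float) -> bool:
    # Total allocated quota across all volumes may reach
    # ratio x the pool's actual free space.
    budget = pool_free * validate_overcommit_ratio(ratio)
    return total_allocated + requested <= budget
```

For example, with 1000 GB free and a ratio of 1.5, a further 500 GB quota fits when 900 GB is already allocated (1400 ≤ 1500), but a further 700 GB does not.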
The overcommit_ratio must be a finite positive number. Ployz rejects invalid values at startup rather than silently accepting them and failing later.

Btrfs: the small-machine tier
Btrfs is supported for machines where ZFS is unavailable or impractical — typically smaller machines, VMs with constrained kernel configurations, or environments where ZFS kernel modules cannot be loaded. Btrfs provides the same snapshot and clone primitives that Ployz requires, so branch, fork-volume, and rollback work the same way from the operator’s perspective. What changes is performance and reliability at scale: ZFS has a stronger track record for large datasets, higher-throughput transfer, and more predictable behavior under concurrent workloads.
Volumes and workload attachment
A volume in Ployz is a named dataset declaration in a deploy manifest. When a service mounts a volume, Ployz ensures the corresponding ZFS (or Btrfs) dataset exists and mounts it into the container at the specified path. Volume declarations have a scope:

- single — The volume is attached to exactly one instance. Used for stateful workloads like databases where shared access would cause corruption. A service with scope=single and replicas > 1 is rejected at manifest validation time.
- Other scopes — Shared or replicated volumes for workloads that can safely share access.
A volume’s scope also determines how it is handled during migrate.
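The single-scope rule can be sketched as a validation check (the dataclasses and field names here are illustrative assumptions, not the actual manifest schema):

```python
from dataclasses import dataclass, field

@dataclass
class VolumeDecl:
    name: str
    scope: str  # "single", or a shared/replicated scope

@dataclass
class Service:
    name: str
    replicas: int
    volumes: list = field(default_factory=list)

def validate_manifest(svc: Service) -> None:
    # scope=single with replicas > 1 is rejected at manifest
    # validation time, before anything touches the pool.
    for vol in svc.volumes:
        if vol.scope == "single" and svc.replicas > 1:
            raise ValueError(
                f"service {svc.name!r}: volume {vol.name!r} is scope=single "
                f"but replicas={svc.replicas}"
            )
```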
fork-volume: copy-on-write cloning
fork-volume creates an instant copy-on-write clone of a volume for use by another workload. The clone and the original share all unchanged blocks at the time of the fork. Only blocks written after the fork diverge, and only those blocks consume additional pool space.
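A toy model of that space accounting, using Python object identity to stand in for block sharing (an illustration of the copy-on-write idea, not ZFS internals):

```python
def fork_volume(origin: dict) -> dict:
    # The clone references the same block objects as the origin;
    # no block data is copied at fork time.
    return dict(origin)

def divergent_blocks(origin: dict, clone: dict) -> int:
    # Only blocks rewritten after the fork stop being shared,
    # and only those consume additional pool space.
    return sum(1 for addr, blk in clone.items()
               if blk is not origin.get(addr))
```

Immediately after the fork, divergence is zero regardless of volume size; it grows only with subsequent writes.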
ZFS transfer protocol for migration
When a volume moves between machines — during migrate, machine remove, or a deploy that relocates a stateful workload — Ployz uses a streaming ZFS send/receive protocol over TCP. The default transfer port is 4319.
The transfer flow is:
- Take a snapshot of the volume on the source machine.
- Stream the snapshot (or the delta from a shared base snapshot) to the target machine using zfs send | zfs receive.
- Once the transfer is verified, the deploy commits durable volume movement evidence: which deploy moved the volume, which machines were involved, and which snapshot GUID confirmed the transfer.
- The workload starts on the target machine against the received dataset.
Only verified transfer success is folded into a deploy commit. A transfer that completes but cannot be verified does not become durable movement evidence, and the deploy does not proceed as if the move succeeded.
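The verify-then-commit rule can be sketched as follows (MovementEvidence and the GUID comparison are illustrative assumptions about the bookkeeping, not the actual Ployz record format):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class MovementEvidence:
    deploy_id: str      # which deploy moved the volume
    source: str         # machines involved in the transfer
    target: str
    snapshot_guid: str  # snapshot GUID that confirmed the transfer

def commit_movement(deploy_id: str, source: str, target: str,
                    sent_guid: str,
                    received_guid: Optional[str]) -> MovementEvidence:
    # Only a verified transfer becomes durable movement evidence.
    # A transfer that completed but cannot be verified must not let
    # the deploy proceed as if the move succeeded.
    if received_guid is None or received_guid != sent_guid:
        raise RuntimeError("transfer unverified; refusing to commit move")
    return MovementEvidence(deploy_id, source, target, sent_guid)
```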
ZFS is product strategy
ZFS ownership is what makes the Ployz primitive surface possible. A managed PaaS abstracts storage away to serve many customers; Ployz keeps it because owning the substrate is the unlock for single-command operations. Branch, snapshot, clone, send, and rollback are not features built on top of storage — they are properties of the storage layer itself, surfaced directly as product primitives. When you run ployzctl branch, you are not running a procedure that copies files. You are taking a ZFS snapshot and creating a clone. When you run ployzctl rollback, you are not re-deploying from source. You are restoring a known-good snapshot.