Ployz networking is built on two layers: a WireGuard overlay mesh that connects every node at the IP level, and a set of cluster services — gateway, DNS, and NATS — that run on top of the overlay. This page explains how those layers are configured and how they interact.
WireGuard overlay mesh
Each node in a Ployz cluster runs a WireGuard interface (wg0) and gets a unique subnet carved from the cluster’s address range. Workload containers and sidecars bind to addresses inside that subnet, and they can reach any other node’s addresses directly over the encrypted tunnel.
Address allocation
Two fields in config.toml control the address space:
| Field | Default | Description |
|---|---|---|
| cluster_cidr | 10.101.0.0/16 | The full address range shared by all nodes in the mesh |
| subnet_prefix_len | 24 | The prefix length of each node’s slice of that range |
A /16 range split into /24 subnets gives 256 possible nodes and 254 usable addresses per node. To support more nodes, widen the CIDR. To give each node more addresses, decrease the prefix length.
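The arithmetic is easy to check with Python's ipaddress module (a standalone sketch; this is not a Ployz API):

```python
import ipaddress

# Default Ployz layout: cluster_cidr = 10.101.0.0/16, subnet_prefix_len = 24
cluster = ipaddress.ip_network("10.101.0.0/16")
node_subnets = list(cluster.subnets(new_prefix=24))

print(len(node_subnets))                  # 256 possible node subnets
print(node_subnets[0].num_addresses - 2)  # 254 usable addresses per node
print(node_subnets[0])                    # 10.101.0.0/24, the first node's slice
```

Widening the cluster CIDR or narrowing the node prefix changes those two numbers in opposite directions, which is why the two fields are tuned together.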
Endpoint ordering
When a node advertises its network addresses to peers, Ployz filters and orders them according to a fixed policy. The ordering matters because it becomes the candidate order WireGuard uses for endpoint selection and rotation:

- Dropped entirely: loopback, link-local, IPv6 ULA, interfaces below the minimum MTU for the overlay, and container, bridge, or helper interfaces that are not cluster-facing.
- Ordered by likely usefulness:
  - Private RFC1918 addresses first
  - CGNAT addresses second
  - Public addresses after that
You do not configure endpoint ordering directly. It is applied automatically when a node publishes or refreshes its endpoint record.
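The ordering half of the policy can be sketched in Python (a hedged illustration, not Ployz's actual implementation; the real filter also drops low-MTU and non-cluster-facing interfaces, which bare addresses cannot express):

```python
import ipaddress

CGNAT = ipaddress.ip_network("100.64.0.0/10")   # RFC 6598 shared address space
ULA = ipaddress.ip_network("fc00::/7")          # IPv6 unique local addresses

def order_endpoints(addrs):
    """Drop loopback, link-local, and ULA addresses, then order the
    remainder private -> CGNAT -> public, as described above."""
    def keep(ip):
        if ip.is_loopback or ip.is_link_local:
            return False
        if ip.version == 6 and ip in ULA:
            return False
        return True

    def rank(ip):
        if ip.version == 4 and ip in CGNAT:
            return 1        # CGNAT second
        if ip.is_private:
            return 0        # RFC1918 / private first
        return 2            # public last

    ips = [ipaddress.ip_address(a) for a in addrs]
    return [str(ip) for ip in sorted(filter(keep, ips), key=rank)]

print(order_endpoints(["93.184.216.34", "100.64.1.2", "192.168.1.10", "127.0.0.1"]))
# → ['192.168.1.10', '100.64.1.2', '93.184.216.34']
```

Note the loopback address is dropped entirely rather than ranked last: an endpoint that can never carry cluster traffic should not occupy a rotation slot at all.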
Mesh commands
Use ployzctl mesh to manage the lifecycle of a mesh network.
ployzctl mesh init
Create a new mesh network on this node and activate it as the current network. Pass a name as the argument, or use --name-stdin to read it from standard input. After init, the node generates a WireGuard keypair, allocates a subnet from cluster_cidr, and writes the network record to the store.
ployzctl mesh create
Create a named network record without activating it. Useful when you want to set up a network before starting it.
ployzctl mesh start
Start the WireGuard interface and sidecars for an existing network.
ployzctl mesh stop
Stop the active mesh. Pass --force to stop even if workloads are still running.
ployzctl mesh join --token
Join this node to an existing mesh using an invite token generated on the primary node. The token encodes the network’s public key, CIDR, and initial peer endpoints. After joining, the daemon connects to NATS through the overlay, syncs routing state, and begins receiving peer updates.
ployzctl mesh status / list
Inspect the state of the active network, or list all networks known to this node.
NATS control-plane
NATS is the native substrate for cluster coordination. It provides durable key-value records, streams for ordered events, request/reply for foreground commands, and work queues for distributed operations. NATS is managed as a sidecar by the daemon — you do not run or configure it separately. The daemon starts NATS when the mesh starts and adopts a running NATS process on restart if the configuration matches.

NATS is not a messaging bus for application workloads. It is the internal control-plane substrate. Application services communicate over the overlay network using whatever protocols their workloads require.
Sidecars, including NATS, keep running while ployzd is absent. Daemon restart does not disrupt running workloads or break mesh connectivity.
Gateway
The HTTP/HTTPS gateway runs as a sidecar on each node and proxies inbound traffic to the correct workload container based on routing rules published to the cluster store. Configure the gateway through the daemon’s config.toml:
| Field | Default | Description |
|---|---|---|
gateway_listen_addr | 0.0.0.0:80 | HTTP listen address |
gateway_https_listen_addr | (unset) | HTTPS listen address — enables TLS when set |
gateway_threads | 2 | Worker threads for the gateway process |
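For example, enabling HTTPS alongside the defaults might look like this in config.toml (a sketch using only the fields documented above; the listen addresses are illustrative):

```toml
# Gateway settings in config.toml
gateway_listen_addr = "0.0.0.0:80"          # HTTP (default)
gateway_https_listen_addr = "0.0.0.0:443"   # setting this enables TLS
gateway_threads = 2                         # worker threads (default)
```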
HTTPS support
When gateway_https_listen_addr is set, the gateway serves TLS. Certificates are loaded from the cluster’s routing store using SNI-based selection. You can also supply static certificate paths via the gateway’s environment variables (PLOYZ_GATEWAY_TLS_CERT_PATH and PLOYZ_GATEWAY_TLS_KEY_PATH), but both paths must be set together with gateway_https_listen_addr.
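Conceptually, SNI-based selection picks the certificate whose name matches the hostname the client requested, with a wildcard fallback. A minimal Python sketch against a hypothetical in-memory store (the real gateway reads certificates from the cluster routing store):

```python
# Sketch of SNI-based certificate selection. The store contents and the
# select_cert helper are hypothetical, for illustration only.
def select_cert(sni, certs):
    if sni in certs:                    # exact hostname match first
        return certs[sni]
    _, _, parent = sni.partition(".")   # "api.example.com" -> "example.com"
    return certs.get("*." + parent) if parent else None

store = {"app.example.com": "cert-app", "*.example.com": "cert-wild"}
print(select_cert("app.example.com", store))   # cert-app
print(select_cert("api.example.com", store))   # cert-wild
```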
Cluster DNS
Each node runs a DNS sidecar that answers queries for cluster service names. Services deployed to Ployz are automatically registered in the cluster DNS and are reachable by name from any node in the mesh. The DNS server listens on the node’s overlay IPv6 address on port 53. In Docker runtime mode, it may also bind a bridge address so containers in the ployz-networking namespace can resolve cluster names.
You do not configure cluster DNS directly in config.toml. The daemon provisions and configures the DNS sidecar automatically based on the active network and the node’s overlay address. To expose DNS metrics, set dns_metrics_listen_addr in config.toml or the PLOYZ_DNS_METRICS_LISTEN_ADDR environment variable.
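For example, to expose DNS metrics for scraping you might add (the address and port shown are arbitrary examples, not Ployz defaults):

```toml
# config.toml: expose DNS sidecar metrics (unset by default)
dns_metrics_listen_addr = "127.0.0.1:9153"
```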
ZFS transfer port
When a volume is migrated between nodes, the daemon opens a direct TCP connection from the destination node to the source node to stream the ZFS dataset. The source node listens on zfs_transfer_port for these connections.
| Field | Default | Override |
|---|---|---|
| zfs_transfer_port | 4319 | PLOYZ_ZFS_TRANSFER_PORT or --zfs-transfer-port |
Make sure port 4319 (or your configured value) is reachable between cluster nodes on the overlay network. The transfer always uses the overlay address, not the public IP.
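If a host firewall is active on the source node, it must accept these connections. A sketch of an nftables rule, assuming the default port, the default 10.101.0.0/16 overlay range, and an existing inet filter table with an input chain (adjust names to match your ruleset):

```
# Allow inbound ZFS transfer connections from overlay peers
nft add rule inet filter input ip saddr 10.101.0.0/16 tcp dport 4319 accept
```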
If you run a host firewall such as nftables or iptables, allow inbound TCP on the transfer port for overlay addresses.

macOS networking architecture
On macOS, ployzd runs on the host. The WireGuard interface, NATS, gateway, DNS, and all workload containers run inside the Docker Desktop Linux VM. The daemon bridges the two environments:
OverlayBridge uses userspace WireGuard and a smoltcp TCP stack to bridge the macOS host into the container overlay network. NATS, gateway, and DNS bind on the node’s overlay IPv6 address so other mesh nodes can reach them directly.
macOS requires OrbStack or Docker Desktop to be running. The daemon will not start the mesh if Docker is not reachable.