Nestri is built around three cooperating layers: a Rust streaming server that captures and encodes your game’s display, a Go libp2p relay node that brokers WebRTC connections, and Docker runner containers that house games and launchers. Understanding how these parts interact makes it much easier to configure, troubleshoot, and extend Nestri for your own setup.
Streaming server
Rust process that captures Wayland output, encodes it with GStreamer, and streams it over WebRTC.
Relay
Go libp2p node that handles WebRTC signaling, hole-punching, and state sync between peers.
Runner containers
Layered Docker images that bundle games, launchers, GPU drivers, and the streaming server.
The streaming server
nestri-server is a Rust binary included in every runner container. It captures the Wayland compositor output from inside the container, encodes the video and audio stream, and transmits it to connected viewers over WebRTC.
The server is configured entirely through environment variables or CLI flags (defined in packages/server/src/args.rs):
| Parameter | Env var | Default | Description |
|---|---|---|---|
| Relay URL | RELAY_URL | (required) | WebSocket URL of the relay to connect to |
| Room | NESTRI_ROOM | (required) | Identifies which room this stream belongs to |
| Resolution | RESOLUTION | 1280x720 | Display and stream resolution (WxH) |
| Framerate | FRAMERATE | 60 | Stream framerate, 5–240 fps |
| Video codec | VIDEO_CODEC | h264 | h264, h265, or av1 |
| Video bitrate | VIDEO_BITRATE | 6000 | Target bitrate in kbps |
| Max bitrate | VIDEO_BITRATE_MAX | 8000 | Maximum bitrate in kbps |
| Rate control | VIDEO_RATE_CONTROL | cbr | cbr or vbr |
| Latency control | VIDEO_LATENCY_CONTROL | lowest-latency | Latency mode |
| Encoder type | VIDEO_ENCODER_TYPE | hardware | hardware or software |
| GPU vendor | GPU_VENDOR | (auto) | Select GPU by vendor string |
| GPU name | GPU_NAME | (auto) | Select GPU by name string |
| GPU index | GPU_INDEX | (auto) | Select GPU by index |
| GPU card path | GPU_CARD_PATH | (auto) | Force specific /dev/dri/ path |
| Audio codec | AUDIO_CODEC | opus | Audio codec to use |
| Audio bitrate | AUDIO_BITRATE | 128 | Audio target bitrate in kbps |
| Audio capture | AUDIO_CAPTURE_METHOD | pipewire | Audio capture backend |
| Zero-copy | ZERO_COPY | false | Enable zero-copy DMA-BUF pipeline |
| Software render | SOFTWARE_RENDER | false | Use software Wayland renderer |
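For illustration, here is a minimal Go sketch that sets a few of these variables and launches the server directly. The binary path and the direct-launch approach are assumptions (inside the official containers, supervisord handles startup); only the variable names come from the table above.

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Hypothetical direct launch of nestri-server for testing; the binary
	// path is an assumed install location.
	cmd := exec.Command("/usr/bin/nestri-server")
	cmd.Env = append(os.Environ(),
		"RELAY_URL=ws://localhost:8088", // required: relay WebSocket URL
		"NESTRI_ROOM=my-room",           // required: room identifier
		"RESOLUTION=1920x1080",
		"FRAMERATE=60",
		"VIDEO_CODEC=h264",
		"VIDEO_BITRATE=6000",     // target 6000 kbps
		"VIDEO_BITRATE_MAX=8000", // cap at 8000 kbps
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```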
Video capture and encoding pipeline
nestri-server uses GStreamer to build a media pipeline. Video capture uses waylanddisplaysrc to read frames directly from the Wayland compositor running inside the container. These frames are then passed to a hardware GPU encoder when available.
Supported hardware encoders:
- NVIDIA — NVENC via the nvcodec GStreamer plugin
- AMD / Intel — VAAPI via the gst-plugin-va plugin
- Intel — QSV via the gst-plugin-qsv plugin
- Software fallback — available when no GPU encoder is detected
The encoded stream is handed to webrtcsink, which negotiates a WebRTC peer connection through the relay. A combined sketch of the video and audio paths appears after the audio section below.
Audio capture
Audio is captured via PipeWire (AUDIO_CAPTURE_METHOD=pipewire by default). The runner-common image includes a full PipeWire + WirePlumber stack with latency-optimized configurations applied at build time. Audio is encoded as Opus (AUDIO_CODEC=opus) and transmitted alongside video in the same WebRTC session.
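Putting the two branches together, the pipeline can be pictured in gst-launch-1.0 notation as in the Go sketch below. Element properties and the choice of the VAAPI H.264 encoder are illustrative assumptions; the real pipeline is assembled programmatically in Rust and adapts to the detected GPU.

```go
package main

import "fmt"

func main() {
	// Illustrative gst-launch-1.0 description of the capture/encode path.
	video := "waylanddisplaysrc" + // frames straight from the in-container compositor
		" ! video/x-raw,width=1280,height=720,framerate=60/1" +
		" ! vah264enc bitrate=6000" + // hardware H.264 via gst-plugin-va (assumed)
		" ! webrtcsink name=ws" // negotiates the peer connection through the relay
	audio := "pipewiresrc" + // capture from the container's PipeWire server
		" ! audioconvert" +
		" ! opusenc bitrate=128000" + // Opus at AUDIO_BITRATE=128 kbps
		" ! ws." // feeds the same webrtcsink as the video branch
	fmt.Println(video + " " + audio)
}
```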
Keyframe and latency settings
nestri-server defaults to VIDEO_LATENCY_CONTROL=lowest-latency and KEYFRAME_DIST_SECS=1 to minimise buffering. Keyframes every second let new viewers join quickly without waiting for a full intra-frame. CBR rate control (VIDEO_RATE_CONTROL=cbr) keeps latency predictable under varying scene complexity.
The relay
The relay is a Go binary that acts as a libp2p node (packages/relay/internal/core/core.go). Its primary jobs are WebRTC signaling, P2P hole-punching, and broadcasting room state across a mesh of relay peers.
Transport protocols
A single relay instance listens on the same port across multiple protocols simultaneously. Only one port (ENDPOINT_PORT, default 8088) needs to be open — the relay shares it across TCP and UDP.
libp2p features
The relay uses the following libp2p capabilities (a sketch of this host configuration follows the list):
- Noise — encrypted transport layer (all connections are encrypted)
- Circuit relay — allows peers behind symmetric NAT to route through the relay
- Hole-punching — libp2p.EnableHolePunching() attempts direct P2P connections first
- NAT service — libp2p.EnableNATService() and libp2p.EnableAutoNATv2() for NAT detection
- mDNS discovery — relay nodes on the same LAN discover each other automatically via the rendezvous string /nestri-relay/mdns-discovery/1.0.0
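Taken together, these options map onto go-libp2p roughly as in the sketch below. The multiaddrs and the exact option set are assumptions based on the feature list above, not a copy of the relay's source.

```go
package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/p2p/security/noise"
)

func main() {
	// Sketch of a relay-like libp2p host: one port shared across TCP and
	// QUIC (UDP), Noise-encrypted, with hole-punching and NAT services on.
	host, err := libp2p.New(
		libp2p.ListenAddrStrings(
			"/ip4/0.0.0.0/tcp/8088",         // TCP on ENDPOINT_PORT
			"/ip4/0.0.0.0/udp/8088/quic-v1", // QUIC over UDP on the same port
		),
		libp2p.Security(noise.ID, noise.New), // encrypted transport layer
		libp2p.EnableHolePunching(),          // try direct P2P connections first
		libp2p.EnableNATService(),            // help peers detect their NAT status
		libp2p.EnableAutoNATv2(),             // AutoNAT v2 reachability probing
	)
	if err != nil {
		panic(err)
	}
	defer host.Close()
	fmt.Println("peer ID:", host.ID())
}
```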
GossipSub and room state
The relay uses GossipSub PubSub (from go-libp2p-pubsub) to synchronise room state and relay metrics across all nodes in the mesh. Two topics are maintained:
- pubTopicState — broadcasts which rooms are currently active on which relay
- pubTopicRelayMetrics — broadcasts periodic load and health metrics
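A minimal sketch of this pattern with go-libp2p-pubsub, assuming a placeholder topic name (the real string behind pubTopicState lives in the relay source):

```go
package main

import (
	"context"
	"fmt"

	"github.com/libp2p/go-libp2p"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

func main() {
	ctx := context.Background()
	host, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer host.Close()

	// Attach GossipSub to the host and join a state topic.
	ps, err := pubsub.NewGossipSub(ctx, host)
	if err != nil {
		panic(err)
	}
	topic, err := ps.Join("nestri-relay-state") // hypothetical topic name
	if err != nil {
		panic(err)
	}
	sub, err := topic.Subscribe()
	if err != nil {
		panic(err)
	}

	// Publish a room-state update and read it back locally.
	if err := topic.Publish(ctx, []byte(`{"room":"my-room","active":true}`)); err != nil {
		panic(err)
	}
	msg, err := sub.Next(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Printf("state update from %s: %s\n", msg.ReceivedFrom, msg.Data)
}
```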
Local room tracking
When a runner connects, the relay creates a Room entry in LocalRooms (a thread-safe SafeMap[ulid.ULID, *shared.Room]). Each room has a ULID identifier, a name, an owner peer ID, and a list of participants. The relay fans out incoming RTP packets from the runner to all connected viewer PeerConnections using a lock-free atomic slice of participant channels.
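The sketch below shows the general shape of such a lock-free fan-out in Go: a copy-on-write participant slice behind an atomic pointer, with non-blocking sends so one slow viewer cannot stall the rest. Types and method names are illustrative, not the relay's actual ones.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Room sketches the fan-out pattern: the participant list lives behind an
// atomic pointer to an immutable slice, so the hot RTP path never locks.
type Room struct {
	participants atomic.Pointer[[]chan []byte]
}

// AddParticipant copy-on-writes a new slice and swaps it in atomically.
func (r *Room) AddParticipant(ch chan []byte) {
	for {
		old := r.participants.Load()
		var next []chan []byte
		if old != nil {
			next = append(next, *old...)
		}
		next = append(next, ch)
		if r.participants.CompareAndSwap(old, &next) {
			return
		}
	}
}

// Broadcast fans an RTP packet out without blocking: a slow viewer drops
// packets instead of stalling everyone else.
func (r *Room) Broadcast(pkt []byte) {
	if chans := r.participants.Load(); chans != nil {
		for _, ch := range *chans {
			select {
			case ch <- pkt:
			default: // participant too slow; drop
			}
		}
	}
}

func main() {
	room := &Room{}
	viewer := make(chan []byte, 16)
	room.AddParticipant(viewer)
	room.Broadcast([]byte{0x80, 0x60}) // fake RTP header bytes
	fmt.Printf("viewer received %d bytes\n", len(<-viewer))
}
```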
Relay identity and persistence
Each relay generates an Ed25519 key pair on first start and saves it to PERSIST_DIR/identity.key (default ./persist-data/identity.key). This key becomes the relay’s stable libp2p peer ID. Set REGEN_IDENTITY=true to force a new identity on startup.
The relay also persists known peer addresses in peerstore.json inside PERSIST_DIR and reconnects to them on startup, so the relay mesh re-forms automatically after a restart.
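In go-libp2p terms, the load-or-generate behaviour looks roughly like this sketch; the on-disk format and error handling are simplified assumptions. The returned key would be passed to the host via libp2p.Identity.

```go
package main

import (
	"crypto/rand"
	"fmt"
	"os"
	"path/filepath"

	"github.com/libp2p/go-libp2p/core/crypto"
	"github.com/libp2p/go-libp2p/core/peer"
)

// loadOrCreateIdentity reuses a persisted Ed25519 key if present,
// otherwise generates and saves a new one.
func loadOrCreateIdentity(persistDir string) (crypto.PrivKey, error) {
	path := filepath.Join(persistDir, "identity.key")
	if raw, err := os.ReadFile(path); err == nil {
		return crypto.UnmarshalPrivateKey(raw) // reuse the stable identity
	}
	priv, _, err := crypto.GenerateEd25519Key(rand.Reader)
	if err != nil {
		return nil, err
	}
	raw, err := crypto.MarshalPrivateKey(priv)
	if err != nil {
		return nil, err
	}
	if err := os.MkdirAll(persistDir, 0o700); err != nil {
		return nil, err
	}
	return priv, os.WriteFile(path, raw, 0o600)
}

func main() {
	priv, err := loadOrCreateIdentity("./persist-data")
	if err != nil {
		panic(err)
	}
	id, err := peer.IDFromPrivateKey(priv)
	if err != nil {
		panic(err)
	}
	fmt.Println("stable peer ID:", id)
}
```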
NAT traversal configuration
When a relay is behind NAT, WebRTC ICE candidates must include the public IP. Set WEBRTC_NAT_IPS to your public IP, or set AUTO_ADD_LOCAL_IP=true and the relay will detect a non-loopback, non-private IP from the local interfaces. You can also override the STUN server with STUN_SERVER (default: stun.l.google.com:19302).
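With pion, the Go WebRTC library, advertising a public IP in host candidates looks like the sketch below. Whether the relay wires WEBRTC_NAT_IPS through exactly this call is an assumption; only the STUN default matches the docs.

```go
package main

import "github.com/pion/webrtc/v3"

func main() {
	// Map a public IP onto host candidates (1:1 NAT), as one would do
	// with the value of WEBRTC_NAT_IPS. 203.0.113.7 is a documentation IP.
	se := webrtc.SettingEngine{}
	se.SetNAT1To1IPs([]string{"203.0.113.7"}, webrtc.ICECandidateTypeHost)

	api := webrtc.NewAPI(webrtc.WithSettingEngine(se))
	pc, err := api.NewPeerConnection(webrtc.Configuration{
		ICEServers: []webrtc.ICEServer{
			{URLs: []string{"stun:stun.l.google.com:19302"}}, // STUN_SERVER default
		},
	})
	if err != nil {
		panic(err)
	}
	defer pc.Close()
}
```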
Prometheus metrics
Set METRICS=true and the relay exposes a Prometheus-compatible endpoint at /debug/metrics/prometheus on port METRICS_PORT (default 3030). The relay also publishes metrics periodically to the GossipSub mesh so that the wider relay network stays informed of each node’s load.
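A minimal equivalent of that endpoint in Go, assuming the standard Prometheus client library (the relay's actual wiring may differ):

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Serve the default Prometheus registry; only the path and port here
	// match the documented endpoint.
	http.Handle("/debug/metrics/prometheus", promhttp.Handler())
	if err := http.ListenAndServe(":3030", nil); err != nil { // METRICS_PORT default
		panic(err)
	}
}
```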
Runner containers
Runner containers are the environments where games actually run. They are built in layers, each adding more capabilities:
Base image (runner-base)
Built on CachyOS (docker.io/cachyos/cachyos:latest). Installs lightweight essentials: libssh2, curl, wget, libevdev, libc++abi, and the core GStreamer stack (gstreamer, gst-plugins-base).
Common layer (runner-common)
Adds everything needed for GPU-accelerated streaming:
- GPU driver packages: vulkan-intel, vulkan-radeon, mesa, and all lib32 variants
- Display server components: xorg-xwayland, seatd, libinput, gamescope, wlr-randr
- Full GStreamer plugin set including gst-plugin-va, gst-plugin-qsv, gst-plugin-rswebrtc
- Audio stack: pipewire, pipewire-pulse, pipewire-alsa, wireplumber
- Process supervisor: supervisor (supervisord manages all processes inside the container)
- The nestri-server binary, vimputti-manager (virtual input manager), and bwrap (sandbox)
A nestri user is created with UID/GID 1000 and added to the input, video, render, and seat groups.
Flavor layer (steam, heroic, etc.)
Each flavor installs a specific launcher and sets NESTRI_LAUNCH_CMD:
| Flavor | Launcher | NESTRI_LAUNCH_CMD |
|---|---|---|
| runner-steam | Steam | steam -tenfoot -cef-force-gpu |
| runner-heroic | Heroic Games Launcher | heroic |
Every flavor starts via supervisord -c /etc/nestri/supervisord.conf. Supervisord starts the Wayland compositor, PipeWire, nestri-server, and the launcher in the correct order.
Maitred
Maitred is an orchestration daemon (cloud/packages/maitred/main.go) designed for managed or self-hosted deployments that need lifecycle management of runner containers.
On startup, maitred:
- Identifies the machine using a stable machine ID
- Initialises a container engine (Docker or Podman)
- Connects to SST/MQTT realtime for cloud-triggered events (skipped in --debug mode)
- Automatically creates and starts a local relay container — so every maitred host has a co-located relay
Once running, maitred can:
- Create a new runner container for a game session
- Start, stop, or destroy existing containers
- Clean up all managed containers on shutdown (30-second graceful timeout)
Maitred is used in Nestri’s hosted cloud offering. If you are self-hosting, you manage containers directly with Docker. See the self-hosting guide for details.
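If you script this yourself, creating a runner container with the Docker SDK for Go looks roughly like the sketch below. The image tag, env values, and device mapping are illustrative assumptions, not Nestri's published names.

```go
package main

import (
	"context"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	// Create a runner container with the room and relay settings from above.
	resp, err := cli.ContainerCreate(ctx,
		&container.Config{
			Image: "nestri-runner-steam:latest", // hypothetical local image tag
			Env: []string{
				"RELAY_URL=ws://localhost:8088",
				"NESTRI_ROOM=my-room",
			},
		},
		&container.HostConfig{
			Devices: []container.DeviceMapping{
				// Pass the GPU render nodes through to the container.
				{PathOnHost: "/dev/dri", PathInContainer: "/dev/dri", CgroupPermissions: "rwm"},
			},
		},
		nil, nil, "nestri-runner-my-room")
	if err != nil {
		panic(err)
	}
	if err := cli.ContainerStart(ctx, resp.ID, container.StartOptions{}); err != nil {
		panic(err)
	}
}
```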
Rooms
A room is the fundamental unit of a Nestri game session. Each room ties together one runner (the source of the stream) and one or more viewers (browser participants).
- The room identifier is set via NESTRI_ROOM on the runner container and matched by nestri-server when connecting to the relay
- The relay creates a Room struct (with a ULID identifier) when the runner’s nestri-server registers
- Participants join by connecting their browser to the same relay and subscribing to that room
- The relay fans RTP video and audio packets from the runner’s PeerConnection to each participant’s PeerConnection in real time, using a lock-free broadcast over per-participant channels
Room lifecycle
- Runner starts and nestri-server connects to the relay via WebSocket, advertising NESTRI_ROOM
- Relay creates a Room entry in LocalRooms and publishes room state to the GossipSub mesh
- A viewer opens the Nestri web app and connects to the relay for the matching room name
- The relay adds the viewer as a Participant and starts forwarding RTP packets
- When the runner disconnects, the relay calls Room.Close(), tears down the PeerConnection, and removes the room from LocalRooms
Multi-relay setups
In a multi-relay mesh, room state is broadcast via the GossipSub pubTopicState topic. A viewer can connect to any relay in the mesh — if the room is hosted on a different relay, the mesh routes the connection to the correct peer. mDNS discovery handles automatic relay-to-relay connectivity on LAN setups.