
Nestri is built around three cooperating layers: a Rust streaming server that captures and encodes your game’s display, a Go libp2p relay node that brokers WebRTC connections, and Docker runner containers that house games and launchers. Understanding how these parts interact makes it much easier to configure, troubleshoot, and extend Nestri for your own setup.

Streaming server

Rust process that captures Wayland output and encodes it over WebRTC using GStreamer.

Relay

Go libp2p node that handles WebRTC signaling, hole-punching, and state sync between peers.

Runner containers

Layered Docker images that bundle games, launchers, GPU drivers, and the streaming server.

The streaming server

nestri-server is a Rust binary included in every runner container. It captures the Wayland compositor output from inside the container, encodes the video and audio stream, and transmits it to connected viewers over WebRTC. The server is configured entirely through environment variables or CLI flags (defined in packages/server/src/args.rs):
| Parameter | Env var | Default | Description |
| --- | --- | --- | --- |
| Relay URL | RELAY_URL | (required) | WebSocket URL of the relay to connect to |
| Room | NESTRI_ROOM | (required) | Identifies which room this stream belongs to |
| Resolution | RESOLUTION | 1280x720 | Display and stream resolution (WxH) |
| Framerate | FRAMERATE | 60 | Stream framerate, 5–240 fps |
| Video codec | VIDEO_CODEC | h264 | h264, h265, or av1 |
| Video bitrate | VIDEO_BITRATE | 6000 | Target bitrate in kbps |
| Max bitrate | VIDEO_BITRATE_MAX | 8000 | Maximum bitrate in kbps |
| Rate control | VIDEO_RATE_CONTROL | cbr | cbr or vbr |
| Latency control | VIDEO_LATENCY_CONTROL | lowest-latency | Latency mode |
| Encoder type | VIDEO_ENCODER_TYPE | hardware | hardware or software |
| GPU vendor | GPU_VENDOR | (auto) | Select GPU by vendor string |
| GPU name | GPU_NAME | (auto) | Select GPU by name string |
| GPU index | GPU_INDEX | (auto) | Select GPU by index |
| GPU card path | GPU_CARD_PATH | (auto) | Force a specific /dev/dri/ device path |
| Audio codec | AUDIO_CODEC | opus | Audio codec (Opus) |
| Audio bitrate | AUDIO_BITRATE | 128 | Audio target bitrate in kbps |
| Audio capture | AUDIO_CAPTURE_METHOD | pipewire | Audio capture backend |
| Zero-copy | ZERO_COPY | false | Enable zero-copy DMA-BUF pipeline |
| Software render | SOFTWARE_RENDER | false | Use software Wayland renderer |
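As a concrete illustration, a runner's environment might look something like the following; the relay hostname and the specific values here are placeholders for your own setup, not project defaults beyond what the table lists:

```
RELAY_URL=ws://relay.example.internal:8088
NESTRI_ROOM=my-room
RESOLUTION=1920x1080
FRAMERATE=60
VIDEO_CODEC=h264
VIDEO_BITRATE=6000
VIDEO_BITRATE_MAX=8000
VIDEO_ENCODER_TYPE=hardware
```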
nestri-server uses GStreamer to build a media pipeline. Video capture uses waylanddisplaysrc to read frames directly from the Wayland compositor running inside the container. These frames are then passed to a hardware encoder when one is available.
Supported hardware encoders:
  • NVIDIA — NVENC via the nvcodec GStreamer plugin
  • AMD / Intel — VAAPI via the gst-plugin-va plugin
  • Intel — QSV via the gst-plugin-qsv plugin
  • Software fallback — available when no GPU encoder is detected
The runner-common image installs all encoder plugins:
gst-plugins-good gst-plugins-bad gst-plugin-pipewire
gst-plugin-webrtchttp gst-plugin-rswebrtc gst-plugin-rsrtp
gst-plugin-va gst-plugin-qsv
Encoded frames are sent to viewers via webrtcsink, which negotiates a WebRTC peer connection through the relay.
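Putting those pieces together, the video path can be pictured as the chain below. This is a conceptual sketch rather than the exact pipeline the Rust code assembles, and the encoder element (for example nvh264enc, vah264enc, or qsvh264enc) depends on the detected GPU and the chosen codec:

```
waylanddisplaysrc  (Wayland capture inside the container)
      → hardware encoder  (e.g. nvh264enc / vah264enc / qsvh264enc)
      → webrtcsink  (WebRTC negotiation through the relay)
```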
Audio is captured via PipeWire (AUDIO_CAPTURE_METHOD=pipewire by default). The runner-common image includes a full PipeWire + WirePlumber stack with latency-optimized configurations applied at build time. Audio is encoded as Opus (AUDIO_CODEC=opus) and transmitted alongside video in the same WebRTC session.
nestri-server defaults to VIDEO_LATENCY_CONTROL=lowest-latency and KEYFRAME_DIST_SECS=1 to minimise buffering. Keyframes every second let new viewers join quickly without waiting for a full intra-frame. CBR rate control (VIDEO_RATE_CONTROL=cbr) keeps latency predictable under varying scene complexity.

The relay

The relay is a Go binary that acts as a libp2p node (packages/relay/internal/core/core.go). Its primary jobs are WebRTC signaling, P2P hole-punching, and broadcasting room state across a mesh of relay peers.

Transport protocols

A single relay instance listens on the same port across multiple protocols simultaneously:
/ip4/0.0.0.0/tcp/8088                          # IPv4 raw TCP
/ip6/::/tcp/8088                               # IPv6 raw TCP
/ip4/0.0.0.0/udp/8088/quic-v1/webtransport     # IPv4 QUIC WebTransport
/ip6/::/udp/8088/quic-v1/webtransport          # IPv6 QUIC WebTransport
/ip4/0.0.0.0/udp/8088/quic-v1                  # IPv4 raw QUIC
/ip6/::/udp/8088/quic-v1                       # IPv6 raw QUIC
Only one port (ENDPOINT_PORT, default 8088) needs to be open — the relay shares it across TCP and UDP.

libp2p features

The relay uses the following libp2p capabilities (a configuration sketch follows the list):
  • Noise — encrypted transport layer (all connections are encrypted)
  • Circuit relay — allows peers behind symmetric NAT to route through the relay
  • Hole-punching — libp2p.EnableHolePunching() attempts direct P2P connections first
  • NAT service — libp2p.EnableNATService() and libp2p.EnableAutoNATv2() for NAT detection
  • mDNS discovery — relay nodes on the same LAN discover each other automatically via the rendezvous string /nestri-relay/mdns-discovery/1.0.0
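A minimal sketch of such a host configuration with go-libp2p is shown below. The option names are the real go-libp2p APIs referenced above, but the exact option set, ordering, and error handling in packages/relay may differ:

```go
package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/p2p/security/noise"
)

func main() {
	host, err := libp2p.New(
		// One port (ENDPOINT_PORT, default 8088) shared across TCP and UDP/QUIC.
		libp2p.ListenAddrStrings(
			"/ip4/0.0.0.0/tcp/8088",
			"/ip6/::/tcp/8088",
			"/ip4/0.0.0.0/udp/8088/quic-v1",
			"/ip6/::/udp/8088/quic-v1",
			"/ip4/0.0.0.0/udp/8088/quic-v1/webtransport",
			"/ip6/::/udp/8088/quic-v1/webtransport",
		),
		libp2p.Security(noise.ID, noise.New), // Noise-encrypted transport layer
		libp2p.EnableRelayService(),          // serve as a circuit relay for NATed peers
		libp2p.EnableHolePunching(),          // attempt direct P2P connections first
		libp2p.EnableNATService(),            // answer dial-back requests from other peers
		libp2p.EnableAutoNATv2(),             // detect this node's own NAT status
	)
	if err != nil {
		panic(err)
	}
	defer host.Close()

	fmt.Println("peer ID:", host.ID())
	for _, addr := range host.Addrs() {
		fmt.Println("listening on:", addr)
	}
}
```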

GossipSub and room state

The relay uses GossipSub PubSub (from go-libp2p-pubsub) to synchronise room state and relay metrics across all nodes in the mesh. Two topics are maintained:
  • pubTopicState — broadcasts which rooms are currently active on which relay
  • pubTopicRelayMetrics — broadcasts periodic load and health metrics
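The general go-libp2p-pubsub pattern looks like the sketch below: create a GossipSub router on the host, join a topic, publish to it, and read messages from peers. The topic name strings and payloads are placeholders; the actual names and message formats are defined in the relay source:

```go
package main

import (
	"context"
	"fmt"

	"github.com/libp2p/go-libp2p"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

func main() {
	ctx := context.Background()

	host, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer host.Close()

	ps, err := pubsub.NewGossipSub(ctx, host)
	if err != nil {
		panic(err)
	}

	// Publish this relay's active rooms on the state topic (placeholder name and payload).
	stateTopic, err := ps.Join("nestri-relay/state")
	if err != nil {
		panic(err)
	}
	_ = stateTopic.Publish(ctx, []byte(`{"rooms":["my-room"]}`))

	// Listen for load and health metrics broadcast by other relays (placeholder name).
	metricsTopic, err := ps.Join("nestri-relay/metrics")
	if err != nil {
		panic(err)
	}
	sub, err := metricsTopic.Subscribe()
	if err != nil {
		panic(err)
	}
	for {
		msg, err := sub.Next(ctx)
		if err != nil {
			return
		}
		fmt.Printf("metrics from %s: %s\n", msg.ReceivedFrom, msg.Data)
	}
}
```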

Local room tracking

When a runner connects, the relay creates a Room entry in LocalRooms (a thread-safe SafeMap[ulid.ULID, *shared.Room]). Each room has a ULID identifier, a name, an owner peer ID, and a list of participants. The relay fans out incoming RTP packets from the runner to all connected viewer PeerConnections using a lock-free atomic slice of participant channels.
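An illustrative sketch of that fan-out pattern is below. It is not the relay's actual code: Participant and Room are simplified stand-ins for the real types, the participant slice is swapped atomically so the broadcast path never takes a lock, and a participant that cannot keep up drops packets instead of stalling the room:

```go
// Illustrative sketch of the lock-free fan-out described above.
package rooms

import "sync/atomic"

// Participant holds the channel drained by that viewer's PeerConnection writer.
type Participant struct {
	ID      string
	Packets chan []byte // encoded RTP packets
}

// Room keeps its participant list behind an atomically swapped pointer,
// so the hot broadcast path never locks.
type Room struct {
	participants atomic.Pointer[[]*Participant]
}

// AddParticipant copies the current slice, appends the newcomer, and swaps
// the pointer in (copy-on-write).
func (r *Room) AddParticipant(p *Participant) {
	for {
		old := r.participants.Load()
		var cur []*Participant
		if old != nil {
			cur = *old
		}
		next := append(append([]*Participant(nil), cur...), p)
		if r.participants.CompareAndSwap(old, &next) {
			return
		}
	}
}

// Broadcast forwards one RTP packet to every participant without blocking:
// a participant that falls behind simply misses packets.
func (r *Room) Broadcast(pkt []byte) {
	ptr := r.participants.Load()
	if ptr == nil {
		return
	}
	for _, p := range *ptr {
		select {
		case p.Packets <- pkt:
		default:
		}
	}
}
```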
Each relay generates an Ed25519 key pair on first start and saves it to PERSIST_DIR/identity.key (default ./persist-data/identity.key). This key becomes the relay’s stable libp2p peer ID. Set REGEN_IDENTITY=true to force a new identity on startup.
The relay also persists known peer addresses in peerstore.json inside PERSIST_DIR and reconnects to them on startup, so the relay mesh re-forms automatically after a restart.
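A minimal sketch of that persistence behaviour, using the standard go-libp2p crypto and peer helpers (the relay's own implementation may differ in details such as key encoding and REGEN_IDENTITY handling):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"os"
	"path/filepath"

	"github.com/libp2p/go-libp2p/core/crypto"
	"github.com/libp2p/go-libp2p/core/peer"
)

func loadOrCreateIdentity(persistDir string) (crypto.PrivKey, error) {
	keyPath := filepath.Join(persistDir, "identity.key")

	// Reuse the stored key so the relay keeps the same peer ID across restarts.
	if raw, err := os.ReadFile(keyPath); err == nil {
		return crypto.UnmarshalPrivateKey(raw)
	}

	// First start: generate a new Ed25519 key and persist it for future runs.
	priv, _, err := crypto.GenerateEd25519Key(rand.Reader)
	if err != nil {
		return nil, err
	}
	raw, err := crypto.MarshalPrivateKey(priv)
	if err != nil {
		return nil, err
	}
	if err := os.MkdirAll(persistDir, 0o700); err != nil {
		return nil, err
	}
	return priv, os.WriteFile(keyPath, raw, 0o600)
}

func main() {
	priv, err := loadOrCreateIdentity("./persist-data")
	if err != nil {
		panic(err)
	}
	id, _ := peer.IDFromPrivateKey(priv)
	fmt.Println("stable peer ID:", id)
}
```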
When a relay is behind NAT, WebRTC ICE candidates must include the public IP. Set WEBRTC_NAT_IPS to your public IP, or set AUTO_ADD_LOCAL_IP=true and the relay will detect a non-loopback, non-private IP from the local interfaces. You can also override the STUN server with STUN_SERVER (default: stun.l.google.com:19302).
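For example, for a relay whose public address is 203.0.113.10 (a documentation-range placeholder), the environment could be set like this:

```
WEBRTC_NAT_IPS=203.0.113.10
# or let the relay detect a non-loopback, non-private interface address:
AUTO_ADD_LOCAL_IP=true
STUN_SERVER=stun.l.google.com:19302
```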
Set METRICS=true and the relay exposes a Prometheus-compatible endpoint at /debug/metrics/prometheus on port METRICS_PORT (default 3030). The relay also publishes metrics periodically to the GossipSub mesh so that the wider relay network stays informed of each node’s load.
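With METRICS=true and the default port, the endpoint can be scraped directly:

```
curl http://localhost:3030/debug/metrics/prometheus
```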

Runner containers

Runner containers are the environments where games actually run. They are built in layers, each adding more capabilities:
runner-base (CachyOS + GStreamer base)
    └── runner-common (GPU drivers, PipeWire, GStreamer plugins, nestri-server)
            ├── runner-steam   (Steam + NESTRI_LAUNCH_CMD="steam -tenfoot -cef-force-gpu")
            ├── runner-heroic  (Heroic Games Launcher + NESTRI_LAUNCH_CMD="heroic")
            └── runner-minecraft (Minecraft launcher + custom NESTRI_LAUNCH_CMD)

Base image (runner-base)

Built on CachyOS (docker.io/cachyos/cachyos:latest). Installs lightweight essentials: libssh2, curl, wget, libevdev, libc++abi, and the core GStreamer stack (gstreamer, gst-plugins-base).

Common layer (runner-common)

Adds everything needed for GPU-accelerated streaming:
  • GPU driver packages: vulkan-intel, vulkan-radeon, mesa, and all lib32 variants
  • Display server components: xorg-xwayland, seatd, libinput, gamescope, wlr-randr
  • Full GStreamer plugin set including gst-plugin-va, gst-plugin-qsv, gst-plugin-rswebrtc
  • Audio stack: pipewire, pipewire-pulse, pipewire-alsa, wireplumber
  • Process supervisor: supervisor (supervisord manages all processes inside the container)
  • nestri-server binary, vimputti-manager (virtual input manager), and bwrap (sandbox)
The nestri user is created with UID/GID 1000 and added to the input, video, render, and seat groups.

Flavor layer (steam, heroic, etc.)

Each flavor installs a specific launcher and sets NESTRI_LAUNCH_CMD:
| Flavor | Launcher | NESTRI_LAUNCH_CMD |
| --- | --- | --- |
| runner-steam | Steam | steam -tenfoot -cef-force-gpu |
| runner-heroic | Heroic Games Launcher | heroic |
All flavors share the same entrypoint: supervisord -c /etc/nestri/supervisord.conf. Supervisord starts the Wayland compositor, PipeWire, nestri-server, and the launcher in the correct order.

Maitred

Maitred is an orchestration daemon (cloud/packages/maitred/main.go) designed for managed or self-hosted deployments that need lifecycle management of runner containers. On startup, maitred:
  1. Identifies the machine using a stable machine ID
  2. Initialises a container engine (Docker or Podman)
  3. Connects to SST/MQTT realtime for cloud-triggered events (skipped in --debug mode)
  4. Automatically creates and starts a local relay container — so every maitred host has a co-located relay
When triggered via MQTT realtime events, maitred can:
  • Create a new runner container for a game session
  • Start, stop, or destroy existing containers
  • Clean up all managed containers on shutdown (30-second graceful timeout)
Maitred is used in Nestri’s hosted cloud offering. If you are self-hosting, you manage containers directly with Docker. See the self-hosting guide for details.

Rooms

A room is the fundamental unit of a Nestri game session. Each room ties together one runner (the source of the stream) and one or more viewers (browser participants).
  • The room identifier is set via NESTRI_ROOM on the runner container and matched by nestri-server when connecting to the relay
  • The relay creates a Room struct (with a ULID identifier) when the runner’s nestri-server registers
  • Participants join by connecting their browser to the same relay and subscribing to that room
  • The relay fans out RTP video and audio packets from the runner’s PeerConnection to each participant’s PeerConnection in real time, using a lock-free broadcast over per-participant channels
A typical session proceeds as follows:
  1. Runner starts and nestri-server connects to the relay via WebSocket, advertising NESTRI_ROOM
  2. Relay creates a Room entry in LocalRooms and publishes room state to the GossipSub mesh
  3. A viewer opens the Nestri web app and connects to the relay for the matching room name
  4. The relay adds the viewer as a Participant and starts forwarding RTP packets
  5. When the runner disconnects, the relay calls Room.Close(), tears down the PeerConnection, and removes the room from LocalRooms
In a multi-relay mesh, room state is broadcast via the GossipSub pubTopicState topic. A viewer can connect to any relay in the mesh — if the room is hosted on a different relay, the mesh routes the connection to the correct peer. mDNS discovery handles automatic relay-to-relay connectivity on LAN setups.
