
Nestri is designed to run entirely on hardware you control. You can deploy the relay, runner containers, and — optionally — the maitred orchestration daemon on a single Linux machine or across multiple hosts. This page orients you to the architecture and links to each component’s setup guide.
If you want the fastest path to a running game stream, see the quickstart first. This section goes deeper into each component’s configuration for production or advanced self-hosted deployments.

Hardware requirements

| Requirement | Details |
| --- | --- |
| OS | Linux (any modern distribution with kernel 5.15+) |
| GPU | NVIDIA, AMD, or Intel; hardware encoding is required for low-latency streaming |
| Container runtime | Docker (Podman support is planned) |
| Network | Port 8088/udp reachable from the internet for WebRTC; TLS termination recommended for the relay HTTP/WebSocket endpoint |
| RAM | 8 GB minimum; 16 GB or more recommended when running multiple game sessions |
| Storage | 50 GB minimum for a single runner image; SSD recommended |
NVIDIA users: the runner container downloads and installs the matching host GPU driver at startup. Ensure your host’s NVIDIA driver version is supported and that the container has access to /dev/dri or /dev/nvidia* devices.
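The passthrough flags differ by GPU vendor. As a hedged sketch (the image name below is a placeholder, not a documented value; see the runner containers guide for the real image and environment variables):

```shell
# Sketch only: replace your-nestri-runner-image with the actual runner image.

# Intel/AMD (VA-API): pass the DRM render nodes through to the container.
docker run -d \
  --device /dev/dri:/dev/dri \
  your-nestri-runner-image

# NVIDIA (NVENC): requires the NVIDIA Container Toolkit on the host.
docker run -d \
  --gpus all \
  your-nestri-runner-image
```

`--device` and `--gpus` are standard `docker run` flags; which one you need depends on whether your host uses the kernel DRM nodes (`/dev/dri`) or the NVIDIA device files.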

Architecture overview

A complete Nestri self-hosted stack has three layers. A player’s browser connects through a relay to a runner container that streams video and receives input over WebRTC.
1. Browser initiates a session

The player opens nestri.io (or your self-hosted frontend) and requests a room by name. The frontend contacts the Nestri cloud (or your local maitred instance) to locate the relay endpoint for that room.
2. Relay brokers the connection

The relay — a libp2p node — handles WebRTC signaling, STUN/TURN negotiation, and NAT hole-punching between the browser and the runner container. All media traffic flows peer-to-peer once the connection is established; the relay is not in the media path.
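Since the relay only brokers signaling and NAT traversal, the main deployment concern is making its UDP port reachable. A minimal sketch, assuming the image name is a placeholder and that 8088/udp (from the hardware requirements above) is the port to publish:

```shell
# Sketch: publish the WebRTC signaling/traversal UDP port from the
# hardware requirements table. The image name is a placeholder;
# see the relay setup guide for the actual image and env vars.
docker run -d --name nestri-relay \
  -p 8088:8088/udp \
  your-nestri-relay-image
```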
3. Runner streams the game

The runner container runs nestri-server, which captures the Wayland display output via GStreamer, encodes it with a hardware encoder (VA-API, NVENC, or QSV), and sends it directly to the browser over WebRTC. Input events (mouse, keyboard, gamepad) travel the same channel in reverse.
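Before launching a runner, it can help to confirm the host actually exposes a hardware encoder that GStreamer can load. The element names below are standard GStreamer plugins; which ones are present depends on your distribution's plugin packages and drivers:

```shell
# Each command prints the element's details if the plugin is installed
# and usable, and exits non-zero if it is missing.
gst-inspect-1.0 vah264enc      # VA-API (Intel/AMD, newer "va" plugin)
gst-inspect-1.0 vaapih264enc   # VA-API (older gstreamer-vaapi plugin)
gst-inspect-1.0 nvh264enc      # NVIDIA NVENC
gst-inspect-1.0 qsvh264enc     # Intel Quick Sync (QSV)
```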
4. Maitred manages container lifecycles (optional)

When connected to the Nestri cloud via SST Realtime, maitred listens for session requests and automatically creates, starts, and tears down runner and relay containers. Without maitred, you manage containers manually.
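Because maitred creates and tears down containers on the host, it needs access to the Docker API. A hedged sketch of one common way to grant that (the image name is a placeholder, and mounting the Docker socket is a general orchestration pattern, not a documented maitred requirement; verify against the maitred guide):

```shell
# Sketch only: the image name is a placeholder. Mounting the Docker
# socket lets the daemon create and remove relay/runner containers,
# but it also grants root-equivalent access to the host.
docker run -d --name maitred \
  -v /var/run/docker.sock:/var/run/docker.sock \
  your-maitred-image
```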

Components

Relay

Deploy the WebRTC signaling and libp2p relay node. Covers Docker setup, all environment variables, TLS proxy configuration with Caddy or Traefik, and NAT traversal tips.
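As a hedged illustration of the TLS termination mentioned above, a minimal Caddyfile fronting the relay's HTTP/WebSocket endpoint might look like this (the hostname and upstream port are assumptions; check the relay guide for the actual listen port):

```
# Caddyfile sketch: hostname and upstream port are assumptions.
# Caddy obtains and renews the TLS certificate automatically.
relay.example.com {
    reverse_proxy localhost:8080
}
```

Note that this only covers the HTTP/WebSocket signaling endpoint; WebRTC media on 8088/udp does not pass through the proxy.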

Runner containers

Build and run Steam, Heroic, or Minecraft runner containers. Covers the container image hierarchy, GPU passthrough, encoding env vars, and custom launcher commands.

Maitred

Run the container orchestration daemon that automatically manages relay and runner lifecycles in response to cloud session requests.

Quickstart

Get from zero to a running game stream in minutes with the minimal setup guide.

Managed cloud vs. self-hosting

Nestri offers a managed cloud option at nestri.io that handles relay infrastructure and session routing for you. Self-hosting is the right choice when you need:
  • Full control over where your game data and streams live
  • Lower latency by co-locating the runner with your network
  • Custom launcher configurations or game libraries not covered by the managed service
  • Operation in an air-gapped or private network environment
You can also mix both: run your own runners while using the Nestri managed relay network, or run a standalone relay while using the cloud session management.
