Masterselects is alpha software. Features are added fast and things occasionally break. If something stops working, refresh the page. If the problem persists, open an issue.
Masterselects is a professional video compositor that runs entirely in your browser — no backend, no plugins, no installation. It is built on WebGPU, which gives it direct access to the GPU for rendering, compositing, and encoding. Every frame you preview or export goes through the same GPU pipeline.

Key highlights

  • 30 GPU effects — color correction, blur, distort, stylize, keying, all real-time
  • 37 blend modes — After Effects-style, including stencil and silhouette modes
  • 76 AI tools — direct timeline access via OpenAI function calling (GPT-4/GPT-5)
  • 3D layer support — import OBJ, glTF, GLB, FBX models directly onto the timeline via Three.js
  • WebCodecs export — GPU-accelerated encoding with zero readPixels() calls
  • Zero-copy GPU pipeline — video textures are imported as texture_external, no CPU roundtrip
  • On-device AI — SAM2 segmentation runs in-browser via ONNX Runtime, no server required
  • Local-first — all editing and rendering stay in the browser; API keys are encrypted in IndexedDB
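
The real blend modes run as WGSL shaders on the GPU, but the per-pixel math is easy to illustrate. Below is a minimal TypeScript sketch of the standard "screen" blend (one of the classic After Effects-style modes); the function name and `RGB` type are illustrative, not Masterselects APIs.

```typescript
// Illustrative only: the per-channel math of a "screen" blend, as a WGSL
// blend-mode shader would apply it. Channel values are normalized to [0, 1].
type RGB = [number, number, number];

// screen(a, b) = 1 - (1 - a) * (1 - b), applied per channel
function screenBlend(base: RGB, blend: RGB): RGB {
  return base.map((c, i) => 1 - (1 - c) * (1 - blend[i])) as RGB;
}

// Screening mid-gray over mid-gray brightens the result
const out = screenBlend([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]); // [0.75, 0.75, 0.75]
```

Screen can never darken: blending with black is a no-op, and any channel at 1.0 stays at 1.0.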

Architecture overview

Masterselects is built around a GPU-first rendering model. Preview, scrubbing, and export all run through the same WebGPU ping-pong compositor. There is no Canvas 2D fallback in the hot path.

```
UI Layer       React 19 + TypeScript + Zustand state slices

Engine Layer   WebGPU Compositor → Effects Pipeline → Output Pipeline
                   │  ↑ zero-copy texture_external imports
Media Layer    WebCodecs decode → GPU textures → WebCodecs encode

Native Layer   Rust helper (Firefox storage, yt-dlp downloads, AI bridge)
```
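
The ping-pong pattern at the heart of the compositor is simple: two render targets alternate roles so that each effect pass reads what the previous pass wrote. The sketch below models only that index bookkeeping in TypeScript; a real pass would bind WebGPU textures, and all names here are illustrative, not Masterselects internals.

```typescript
// Sketch of ping-pong bookkeeping: pass N samples the texture pass N-1
// rendered into, then the two targets swap roles for pass N+1.
class PingPong {
  private read = 0;   // index of the texture the next pass samples from
  private write = 1;  // index of the texture the next pass renders into

  runPass(_label: string): { src: number; dst: number } {
    const binding = { src: this.read, dst: this.write };
    // ...encode the pass here: sample texture[src], render into texture[dst]...
    [this.read, this.write] = [this.write, this.read]; // swap for the next pass
    return binding;
  }
}

const pp = new PingPong();
pp.runPass("blur");        // reads texture 0, writes texture 1
pp.runPass("color grade"); // reads texture 1, writes texture 0
```

Because only indices change between passes, a whole effect chain can be encoded into one command buffer and flushed with a single queue submission.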
Scrubbing uses a 3-tier cache:

  • Tier 1: 300 GPU textures held in VRAM for instant access
  • Tier 2: a per-video last-frame cache for seek transitions
  • Tier 3: a 900-frame RAM Preview with CPU/GPU promotion

When the cache is warm, scrubbing does not decode at all. Nested compositions — compositions within compositions — are each rendered to pooled GPU textures with frame-level caching, then composited in the parent’s ping-pong pass, all in a single device.queue.submit().
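To make the tiering concrete, here is a hypothetical TypeScript sketch of a tiered frame-cache lookup: check a small VRAM tier first, fall back to a larger RAM tier (promoting hits), and report a miss otherwise, which would trigger a WebCodecs decode. The class, types, and eviction policy are invented for illustration; a `Map`'s insertion order gives a simple LRU.

```typescript
// Hypothetical tiered frame cache. A frame's timestamp is its key; the
// Frame type stands in for a GPU texture or bitmap.
type Frame = { t: number; data: string };

class TieredCache {
  private vram = new Map<number, Frame>(); // fast tier: small and bounded
  private ram = new Map<number, Frame>();  // slow tier: larger, promotable
  constructor(private vramCapacity: number) {}

  get(t: number): { frame: Frame; tier: "vram" | "ram" } | null {
    const hit = this.vram.get(t);
    if (hit) return { frame: hit, tier: "vram" };
    const ramHit = this.ram.get(t);
    if (ramHit) {
      this.promote(t, ramHit); // hot RAM frames move up into VRAM
      return { frame: ramHit, tier: "ram" };
    }
    return null; // miss: caller decodes the frame and calls promote()
  }

  promote(t: number, frame: Frame): void {
    this.vram.delete(t);
    this.vram.set(t, frame); // re-insert so Map order tracks recency
    if (this.vram.size > this.vramCapacity) {
      const oldest = this.vram.keys().next().value!; // least recently added
      const demoted = this.vram.get(oldest)!;
      this.vram.delete(oldest);
      this.ram.set(oldest, demoted); // evicted frames fall back to RAM
    }
  }
}
```

The point of the design is that a warm scrub path never touches the decoder: every `get()` resolves from VRAM or RAM, and eviction demotes rather than discards.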

Tech stack

| Layer | Technologies |
| --- | --- |
| Frontend | React 19, TypeScript, Vite 7.2 |
| State | Zustand with modular slice architecture (17 timeline slices, 9 media slices) |
| GPU rendering | WebGPU + 2,500+ lines of WGSL shaders |
| 3D engine | Three.js (lazy-loaded, isolated per-layer scene renderer) |
| Video | WebCodecs API, mp4box, mp4-muxer, webm-muxer, HTMLVideoElement fallback |
| Audio | Web Audio API, 10-band live EQ, audio master clock, varispeed |
| AI | OpenAI GPT-4/GPT-5 function calling, SAM2 via ONNX Runtime, Whisper via Transformers.js |
| Storage | File System Access API (Chrome), Native Helper backend (Firefox), IndexedDB |
| Native | Rust helper: Firefox storage backend, yt-dlp downloads, external agent bridge |
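
"Direct timeline access via function calling" means each AI tool is described to the model as a JSON schema, and the app dispatches the model's tool calls against its timeline state. Below is a hedged sketch of what one such tool could look like using the OpenAI tools schema; the `split_clip` name, its parameters, and the `dispatch` helper are invented for illustration and may not match Masterselects' actual tool definitions.

```typescript
// Hypothetical tool definition in the OpenAI function-calling format.
const splitClipTool = {
  type: "function" as const,
  function: {
    name: "split_clip",
    description: "Split the clip on a given track at a timeline position.",
    parameters: {
      type: "object",
      properties: {
        trackIndex: { type: "integer", description: "Zero-based track index" },
        timeSeconds: { type: "number", description: "Split point in seconds" },
      },
      required: ["trackIndex", "timeSeconds"],
    },
  },
};

// The model replies with a tool call carrying JSON-encoded arguments;
// the app parses and applies them to the timeline. Sketched here as a
// function that just echoes the decoded call.
function dispatch(call: { name: string; arguments: string }): string {
  const args = JSON.parse(call.arguments);
  return `${call.name}(track=${args.trackIndex}, t=${args.timeSeconds})`;
}
```

With 76 such tools registered, the model can chain edits (split, move, apply an effect) while the browser remains the single source of truth for timeline state.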

Where to go next

Quick Start

Open the app, import media, and make your first edit in under two minutes.

Browser Requirements

Check which browsers and GPU configurations are supported.

Timeline

Multi-track editing, nested compositions, multicam, and JKL shuttle.

AI features

76 AI editing tools with direct timeline access and an external agent bridge.
