Masterselects includes Segment Anything Model 2 (SAM2) from Meta, running entirely in the browser via ONNX Runtime with WebGPU acceleration. Click on any object in the preview canvas to generate a segmentation mask, then propagate it forward through the clip — no upload, no API key, no external service.
On first use, SAM2 downloads approximately 103 MB of model files (encoder ~82 MB + decoder ~21 MB). This is a one-time download. The model is cached in the browser’s Origin Private File System (OPFS) and loads automatically on subsequent visits.
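The cache-first load behaves roughly like the sketch below. This is a minimal stand-in that uses an in-memory map where the real code would use OPFS file handles; `loadModel`, `fetchBytes`, and the URL key are illustrative names, not the actual internals.

```typescript
// Cache-first model loading, sketched with an in-memory Map standing in
// for OPFS. A real implementation would use navigator.storage.getDirectory()
// and file handles instead of a Map.
type Fetcher = (url: string) => Promise<Uint8Array>;

async function loadModel(
  cache: Map<string, Uint8Array>,
  url: string,
  fetchBytes: Fetcher,
): Promise<Uint8Array> {
  const hit = cache.get(url);
  if (hit !== undefined) return hit;    // subsequent visits: load from cache
  const bytes = await fetchBytes(url);  // first visit: the one-time download
  cache.set(url, bytes);                // persist for next time
  return bytes;
}
```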

How it works

SAM2 runs off the main thread in a Web Worker so the UI stays responsive during inference. The encoder runs with WebGPU acceleration (falling back to WASM if WebGPU is unavailable in the worker context). Each point you place triggers an immediate decode pass, updating the mask in real time. Per-frame masks are stored using run-length encoding (RLE) to keep memory use manageable across long clips.
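The per-frame mask storage can be sketched as a simple run-length codec. This is a minimal version assuming masks arrive as flat 0/1 arrays; the function names are illustrative, not the actual internals.

```typescript
// Run-length encode a flat binary mask: emit alternating run lengths,
// starting with the count of leading zeros (which may be 0).
function rleEncode(mask: Uint8Array): number[] {
  const runs: number[] = [];
  let current = 0; // RLE streams conventionally start with a 0-run
  let length = 0;
  for (const value of mask) {
    if (value === current) {
      length++;
    } else {
      runs.push(length);
      current = value;
      length = 1;
    }
  }
  runs.push(length);
  return runs;
}

// Expand the run lengths back into a flat binary mask of the given size.
function rleDecode(runs: number[], size: number): Uint8Array {
  const mask = new Uint8Array(size);
  let pos = 0;
  let value = 0;
  for (const run of runs) {
    if (value === 1) mask.fill(1, pos, pos + run);
    pos += run;
    value ^= 1; // alternate between 0-runs and 1-runs
  }
  return mask;
}
```

For mostly-solid masks this stores a handful of integers per frame instead of one byte per pixel, which is why it scales to long clips.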

Segmentation workflow

1. Open the SAM2 panel

Click the AI Segment tab in the dock panels, or go to View → AI Segment. If the model has not been downloaded yet, the panel shows a download prompt. Click Download and wait for the progress bar to complete (~103 MB).
2. Select a clip

Click a video clip in the timeline. SAM2 works on the frame at the current playhead position.
3. Activate segmentation mode

Click the Activate button in the panel. A crosshair cursor and a semi-transparent overlay appear on the preview canvas.
4. Click to place points

  • Left-click on an object to add a foreground point (green) — includes that region in the mask.
  • Right-click to add a background point (red) — excludes that region from the mask.
Each point immediately updates the mask. Add as many points as needed to refine the selection. You can remove individual points from the list in the panel. Use Auto-Detect to place a center point automatically and run a full encode + decode cycle — useful for quickly selecting a prominent subject.
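The click-to-prompt mapping can be sketched as below, using the SAM point-prompt convention (label 1 for foreground, 0 for background); the `Click` shape and function name here are illustrative.

```typescript
// Convert user clicks into the flat coordinate/label arrays that a
// SAM-style decoder expects (label 1 = foreground, 0 = background).
interface Click {
  x: number;
  y: number;
  foreground: boolean; // left-click true, right-click false
}

function toPointPrompt(clicks: Click[]): { coords: number[]; labels: number[] } {
  const coords: number[] = [];
  const labels: number[] = [];
  for (const c of clicks) {
    coords.push(c.x, c.y);
    labels.push(c.foreground ? 1 : 0);
  }
  return { coords, labels };
}
```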
5. Adjust the mask display

Use the display settings in the panel to refine how the mask looks:
Setting       Range     Description
Opacity       0–100%    Transparency of the mask overlay
Feather       0–50 px   Edge softness
Invert mask   On/Off    Swap foreground and background regions
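Conceptually, these settings combine per pixel roughly as in the sketch below. This is a simplified illustration with hypothetical names, not the renderer's actual code; feather (a blur applied to mask edges before this step) is omitted.

```typescript
// Compute the overlay alpha for one mask pixel from the display settings.
// maskValue is 0..1 (soft after feathering); opacityPercent is 0..100.
function overlayAlpha(
  maskValue: number,
  opacityPercent: number,
  invert: boolean,
): number {
  const m = invert ? 1 - maskValue : maskValue; // Invert mask swaps regions
  return m * (opacityPercent / 100);            // Opacity scales the overlay
}
```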
6. Propagate across frames

Click Forward to propagate the mask through up to 150 subsequent frames (~5 seconds at 30 fps). A progress bar tracks propagation. Click Stop to cancel at any time. SAM2 uses a memory bank mechanism to track the object across frames.
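The 150-frame budget works out as in the small sketch below; the names are illustrative.

```typescript
// How far one propagation pass reaches from the current playhead frame.
const MAX_PROPAGATION_FRAMES = 150;

function propagationWindow(startFrame: number, clipFrames: number, fps: number) {
  // Frames available after the start frame (frames are 0-indexed).
  const remaining = Math.max(0, clipFrames - startFrame - 1);
  const frames = Math.min(MAX_PROPAGATION_FRAMES, remaining);
  return { frames, seconds: frames / fps }; // 150 frames = 5 s at 30 fps
}
```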
7. Apply the mask

The generated mask is applied directly to the clip as a track mask. It composites into the WebGPU render pipeline alongside any other masks on the clip. Click Clear All to remove all points and masks and start over.
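The compositing step amounts to standard "over" alpha blending. Below is a simplified per-channel sketch; the real pipeline runs on the GPU, and the function name is illustrative.

```typescript
// Standard "over" alpha blend of an overlay color onto a frame pixel,
// computed per channel with values in 0..1.
function blendOver(overlay: number, frame: number, alpha: number): number {
  return overlay * alpha + frame * (1 - alpha);
}
```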

Model details

Detail                Value
Model                 SAM 2 Hiera Small (fp16 encoder + ONNX decoder)
Total download size   ~103 MB (encoder ~82 MB, decoder ~21 MB)
Storage location      Browser Origin Private File System (OPFS)
Compute               WebGPU (encoder), WASM (decoder)
Thread model          Web Worker — UI stays responsive during inference
API key required      No
Data uploaded         None
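The WebGPU-or-WASM choice can be sketched as a simple execution-provider list, as in onnxruntime-web's `executionProviders` session option. This is a minimal illustration; the function name and the capability check are assumptions about how the worker might probe support.

```typescript
// Choose ONNX Runtime execution providers: prefer WebGPU when the
// worker context exposes it, otherwise fall back to WASM only.
function pickProviders(hasWebGpuInWorker: boolean): string[] {
  return hasWebGpuInWorker ? ["webgpu", "wasm"] : ["wasm"];
}
```

In a worker, a list like this would typically be passed to `ort.InferenceSession.create(modelUrl, { executionProviders: ... })`, with the capability check derived from whether `navigator.gpu` is present in the worker scope.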

Model lifecycle

Status           What you see
Not downloaded   Download prompt in the panel
Downloading      Progress bar with percentage
Loading          ONNX inference sessions initializing
Ready            Green status dot — ready for segmentation
Error            Red status dot with error message and retry button

Performance notes

  • The first run downloads ~103 MB. On a typical broadband connection this takes 10–30 seconds.
  • After the initial download, the model loads from the OPFS cache in a few seconds.
  • WebGPU acceleration significantly speeds up the encoder. On hardware without WebGPU support in the worker context, encoding falls back to WASM and will be slower.
  • Propagation processes up to 150 frames per run. For very long clips, run multiple propagation passes or use Stop to apply a partial result.
Use Auto-Detect for a quick first pass on a prominent subject, then refine with additional foreground and background points.
