Multicam editing lets you work with footage from multiple cameras covering the same event. Masterselects synchronizes the cameras automatically using audio cross-correlation, then uses Claude (Anthropic) to generate an Edit Decision List (EDL) — a structured cut plan that tells the editor which camera to use and when.
Multicam AI is experimental. The core pipeline — analysis, transcription, EDL generation, and timeline integration — is functional. Face detection is not yet implemented. Analysis runs on CPU via Canvas 2D rather than WebGPU compute shaders.
EDL generation requires an Anthropic API key. Set it in the Multicam Panel → Settings. Camera sync, CV analysis, and local Whisper transcription do not require any API key.

How multicam AI works

The AI receives no video frames. Instead, it receives extracted metadata — motion curves, sharpness values, audio levels, and a timestamped transcript — and uses that signal to decide where to cut and which camera to show.
Multiple cameras
  ↓
CV Analysis (motion, sharpness, audio levels per camera)
  ↓
Whisper transcription (speaker-attributed, word-level)
  ↓
Claude generates EDL (JSON edit decisions with reasons)
  ↓
Apply EDL to timeline
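As a concrete sketch, the metadata payload Claude reasons over might look like the TypeScript shapes below. The names and fields here are illustrative assumptions, not the actual Masterselects schema:

```typescript
// Hypothetical shapes for the per-camera metadata sent to Claude.
// Field names are assumptions for illustration only.
interface CameraAnalysis {
  cameraId: string;
  role: 'wide' | 'closeup' | 'detail' | 'custom';
  syncOffsetMs: number;   // offset relative to the master camera
  motion: number[];       // 0–1, one sample per 500 ms interval
  sharpness: number[];    // 0–1, normalized Laplacian variance
  audioLevels: number[];  // RMS per interval
}

interface TranscriptWord {
  word: string;
  startMs: number;
  endMs: number;
  speaker: string;
}

interface MulticamPrompt {
  cameras: CameraAnalysis[];
  transcript: TranscriptWord[];
  editStyle: string;      // e.g. 'podcast', 'interview'
}

// Example payload for a two-camera setup:
const prompt: MulticamPrompt = {
  cameras: [
    { cameraId: 'wide',  role: 'wide',    syncOffsetMs: 0,
      motion: [0.1, 0.4], sharpness: [0.8, 0.7], audioLevels: [0.3, 0.5] },
    { cameraId: 'close', role: 'closeup', syncOffsetMs: 1250,
      motion: [0.2, 0.1], sharpness: [0.9, 0.9], audioLevels: [0.3, 0.5] },
  ],
  transcript: [{ word: 'Welcome', startMs: 0, endMs: 400, speaker: 'SPEAKER_1' }],
  editStyle: 'podcast',
};
```

Because the model sees only these compact signals rather than pixels, the whole prompt for a long recording stays small enough to fit in a single request.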

Multicam workflow

1. Import your camera angles

Import all camera files through the Media Panel. Each file becomes one camera angle.
2. Open the Multicam panel

Open the Multicam panel from the dock. Click Add Camera to add your imported clips. Assign roles (wide, closeup, detail, or custom) and set the master camera — the camera whose audio is used as the sync reference.
3. Sync cameras

Click Sync. Masterselects cross-correlates the audio waveforms of all cameras against the master camera to calculate millisecond-accurate sync offsets. Manual offset adjustment is also available.
4. Analyze footage

Click Analyze. The analyzer samples each camera at 500 ms intervals and computes:
  • Motion — frame-difference on luminance (normalized 0–1)
  • Sharpness — Laplacian variance (normalized 0–1)
  • Audio levels — RMS per interval
A progress bar tracks each camera in sequence.
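Both visual metrics are standard computer-vision measures and can be sketched as pure functions over grayscale luminance arrays, the kind you would extract from Canvas 2D `getImageData`. The normalization here is a simplification of whatever the real analyzer does, and `laplacianVariance` returns the raw (un-normalized) variance:

```typescript
// Motion: mean absolute luminance difference between consecutive frames,
// scaled into 0–1 (assumes 8-bit luminance values).
function motionScore(prev: Float32Array, curr: Float32Array): number {
  let sum = 0;
  for (let i = 0; i < curr.length; i++) sum += Math.abs(curr[i] - prev[i]);
  return Math.min(1, sum / (curr.length * 255));
}

// Sharpness: variance of the 4-neighbor Laplacian over interior pixels.
// A flat frame scores 0; strong edges (in-focus detail) score high.
// The real analyzer normalizes this into 0–1; this sketch returns raw variance.
function laplacianVariance(lum: Float32Array, w: number, h: number): number {
  const vals: number[] = [];
  for (let y = 1; y < h - 1; y++) {
    for (let x = 1; x < w - 1; x++) {
      const i = y * w + x;
      vals.push(lum[i - 1] + lum[i + 1] + lum[i - w] + lum[i + w] - 4 * lum[i]);
    }
  }
  const mean = vals.reduce((a, b) => a + b, 0) / vals.length;
  return vals.reduce((a, b) => a + (b - mean) ** 2, 0) / vals.length;
}
```

Laplacian variance is a common focus measure: blur suppresses high-frequency detail, which collapses the Laplacian response toward zero.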
5. Transcribe footage

Click Transcribe to run local Whisper transcription via @huggingface/transformers. The transcript includes word-level timestamps and speaker attribution. You can also import an existing transcript if you have one.
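Word-level output from transformers.js arrives as chunks with second-based timestamps (`{ text, timestamp: [start, end] }`). A small helper along these lines can map them into millisecond word entries; speaker attribution is a placeholder here, since the real pipeline assigns speakers separately:

```typescript
// Shape of one word chunk as returned by a transformers.js ASR pipeline
// when called with { return_timestamps: 'word' }.
interface WhisperChunk {
  text: string;
  timestamp: [number, number]; // seconds
}

interface TranscriptWord {
  word: string;
  startMs: number;
  endMs: number;
  speaker: string;
}

// Convert second-based Whisper chunks into millisecond word entries.
// The speaker label is a placeholder assumption for this sketch.
function toTranscript(chunks: WhisperChunk[], speaker = 'SPEAKER_1'): TranscriptWord[] {
  return chunks.map((c) => ({
    word: c.text.trim(),
    startMs: Math.round(c.timestamp[0] * 1000),
    endMs: Math.round(c.timestamp[1] * 1000),
    speaker,
  }));
}
```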
6. Generate the EDL

Select an edit style and click Generate EDL. Claude receives the camera metadata, analysis curves, audio levels, and full transcript, and returns a JSON edit decision list.

Edit style presets:
  • Podcast: Cut to active speaker, reaction shots sparingly, 3 s minimum cut length
  • Interview: Interviewee primary, interviewer on questions, 2 s minimum
  • Music: Cut on beat, motion-driven, 1–2 s minimum, fast pacing
  • Documentary: Long cuts (5 s+), B-roll, wide establishing shots, follow narrative
  • Custom: Provide your own instructions
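One way the presets above could be encoded is as a constraints object merged into the prompt. The shape and field names here are assumptions for illustration, not the actual Masterselects configuration:

```typescript
// Illustrative encoding of the built-in style presets as prompt constraints.
// Names and structure are assumptions, not the real schema.
const STYLE_PRESETS = {
  podcast: {
    minCutMs: 3000,
    rules: 'Cut to active speaker; use reaction shots sparingly.',
  },
  interview: {
    minCutMs: 2000,
    rules: 'Interviewee primary; cut to interviewer on questions.',
  },
  music: {
    minCutMs: 1000,
    rules: 'Cut on beat; motion-driven; fast pacing.',
  },
  documentary: {
    minCutMs: 5000,
    rules: 'Long cuts; B-roll; wide establishing shots; follow the narrative.',
  },
} as const;
```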
7. Review and apply the EDL

The generated EDL appears in the panel as a list of edit decisions, each showing the camera, start time, end time, and Claude’s reasoning. You can edit, insert, or remove individual decisions before applying.

Click Apply to Timeline to create clips on the timeline tracks according to the EDL, with sync offsets applied.
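Applying the EDL amounts to mapping each decision onto a timeline clip, shifting the source in-point by that camera's sync offset. A minimal sketch, assuming offsets are stored per camera in milliseconds and EDL times are in master-timeline milliseconds (all names here are illustrative):

```typescript
interface EditDecision {
  id: string;
  start: number;    // master-timeline milliseconds
  end: number;
  cameraId: string;
}

interface TimelineClip {
  cameraId: string;
  timelineStart: number; // where the clip lands on the timeline
  sourceIn: number;      // in-point within the camera's source media
  durationMs: number;
}

// Map decisions to clips, compensating for each camera's sync offset.
// Offset sign convention is an assumption: a camera that started recording
// later than the master has a positive offset.
function applyEdl(edl: EditDecision[], offsets: Map<string, number>): TimelineClip[] {
  return edl.map((d) => ({
    cameraId: d.cameraId,
    timelineStart: d.start,
    sourceIn: d.start - (offsets.get(d.cameraId) ?? 0),
    durationMs: d.end - d.start,
  }));
}
```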

Audio synchronization

Camera sync uses cross-correlation of audio waveforms. The master camera is set to offset 0. All other cameras receive a calculated offset in milliseconds. The algorithm handles cameras that started recording at different times, as long as they captured overlapping audio. Manual offset adjustment is available for cases where automatic sync does not produce a clean result.
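The idea can be sketched as a naive time-domain cross-correlation that slides one signal against the other and returns the lag with the highest correlation. The real implementation likely uses an FFT-based approach for speed; this sketch only shows the principle:

```typescript
// Find the lag (in samples) at which `other` best aligns with `master`.
// A positive result means `other`'s audio occurs later in its own file.
// Convert to milliseconds with: lag / sampleRate * 1000.
function bestLag(master: Float32Array, other: Float32Array, maxLag: number): number {
  let best = 0;
  let bestScore = -Infinity;
  for (let lag = -maxLag; lag <= maxLag; lag++) {
    let score = 0;
    for (let i = 0; i < master.length; i++) {
      const j = i + lag;
      if (j >= 0 && j < other.length) score += master[i] * other[j];
    }
    if (score > bestScore) {
      bestScore = score;
      best = lag;
    }
  }
  return best;
}
```

This brute-force loop is O(n × maxLag); FFT-based cross-correlation brings it down to O(n log n), which matters for long recordings at audio sample rates.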

EDL format

The EDL is a JSON array of edit decisions. Each decision specifies:
interface EditDecision {
  id: string;
  start: number;       // milliseconds
  end: number;         // milliseconds
  cameraId: string;
  reason?: string;     // Claude's reasoning for the cut
  confidence?: number; // 0–1
}
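Because the model's output is free-form JSON, it is worth validating before applying it to the timeline. A sketch of a runtime check; the non-overlap rule is an assumption about how Masterselects treats decisions, not a documented constraint:

```typescript
interface EditDecision {
  id: string;
  start: number;       // milliseconds
  end: number;         // milliseconds
  cameraId: string;
  reason?: string;
  confidence?: number; // 0–1
}

// Validate a parsed Claude response: correct field types, end after start,
// and decisions ordered without overlaps (an assumption for this sketch).
function validateEdl(raw: unknown): EditDecision[] {
  if (!Array.isArray(raw)) throw new Error('EDL must be a JSON array');
  let prevEnd = -Infinity;
  return raw.map((d, i) => {
    if (
      typeof d.id !== 'string' ||
      typeof d.cameraId !== 'string' ||
      typeof d.start !== 'number' ||
      typeof d.end !== 'number' ||
      d.end <= d.start
    ) {
      throw new Error(`invalid decision at index ${i}`);
    }
    if (d.start < prevEnd) throw new Error(`decision ${i} overlaps the previous cut`);
    prevEnd = d.end;
    return d as EditDecision;
  });
}
```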

API key setup

The Claude API key for multicam is stored as its own entry, separate from other API keys, but uses the same encrypted IndexedDB storage (Web Crypto API) as every other key in Masterselects.
1. Get an Anthropic API key

Sign up at anthropic.com and create an API key.
2. Enter the key in Masterselects

Open the Multicam Panel, go to Settings, and paste your Claude API key.
EDL generation uses claude-sonnet-4-20250514 with a 4096-token output limit.

Current limitations

  • Face detection is not yet implemented (returns empty).
  • Analysis runs on CPU via Canvas 2D — WebGPU compute shader acceleration is planned.
  • FCPXML and DaVinci Resolve EDL export are not yet available.
  • Beat detection for the music edit style is not yet implemented.
  • Very long recordings may be slow to analyze — cameras are processed sequentially.
