Masterselects has a built-in AI system that can directly operate the timeline — not just suggest edits, but execute them. The AI chat connects to OpenAI and exposes 76 callable editing actions. A separate video generation panel streams AI-generated footage into your timeline. Transcription runs locally in the browser without any server.
All API keys are encrypted with AES-256-GCM via the Web Crypto API before being stored in IndexedDB. Keys never leave the browser except in direct requests to the provider you configure; they are never sent to any other server.

AI Chat Panel

The AI Chat Panel is the primary interface for giving natural-language editing instructions. Location: Default tab in the dock panels — or open it from View → AI Chat. The panel includes:
  • A chat interface with conversation history
  • A model selection dropdown
  • Tool execution indicators that show what the AI is doing in real time
  • A Clear chat button to reset the conversation
  • Auto-scrolling as the AI responds

Available models

  • GPT-5.2, GPT-5.2 Pro
  • GPT-5.1, GPT-5.1 Codex, GPT-5.1 Codex Mini
  • GPT-5, GPT-5 Mini, GPT-5 Nano
  • GPT-4.1, GPT-4.1 Mini, GPT-4.1 Nano
  • GPT-4o, GPT-4o Mini
  • o3, o4-mini, o3-pro
The default model is gpt-5.1. Select any model from the dropdown in the panel header.

Editor mode

Editor mode is enabled by default. When active:
  • The AI receives your current timeline state as context with every message — tracks, clips, playhead position, and in/out markers.
  • 76 editing tools are available as callable functions, covering clip editing, effects, keyframes, masks, transitions, media management, playback, and more.
  • The AI can manipulate the timeline directly in response to your instructions.
All AI edits are undoable with Ctrl+Z.
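To make the context-passing concrete, here is a sketch of what the timeline state sent with each message might look like. The field names and the system-message framing are illustrative assumptions, not Masterselects' actual schema:

```typescript
// Hypothetical shape of the timeline context attached to each chat message.
interface TimelineContext {
  playheadMs: number;
  inMarkerMs: number | null;
  outMarkerMs: number | null;
  tracks: {
    id: string;
    kind: "video" | "audio";
    clips: { id: string; startMs: number; durationMs: number; name: string }[];
  }[];
}

function buildContextMessage(state: TimelineContext): { role: "system"; content: string } {
  // The serialized state lets the model resolve references like
  // "the second clip" or "at the playhead" before choosing tools.
  return { role: "system", content: `Current timeline state:\n${JSON.stringify(state)}` };
}

const ctx = buildContextMessage({
  playheadMs: 4200,
  inMarkerMs: null,
  outMarkerMs: null,
  tracks: [
    { id: "v1", kind: "video", clips: [{ id: "c1", startMs: 0, durationMs: 5000, name: "intro.mp4" }] },
  ],
});
```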

Example prompts

"Trim the first clip to 5 seconds"
"Add a blur effect to the second track"
"Split at all the pauses"
"Remove segments where motion is above 0.7"
"Add a cross dissolve transition between all clips"
"Set opacity to 50% on the selected clip"
"Move the selected clip to track 2"

Iterative editing

1. Send a prompt: Type your instruction in the chat and press Enter. The AI reads your timeline state and decides which tools to call.
2. Watch the AI edit: Tool execution indicators appear as the AI applies changes. The timeline updates live.
3. Preview the result: Play back the edited section to review the changes.
4. Undo or refine: Press Ctrl+Z to undo if needed, or send a follow-up prompt to adjust.

AI editor tools

The AI has access to 76 tools across 15 categories. These are exposed as OpenAI function calls — the model decides which tools to call and in what order based on your prompt.
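A single action exposed this way might look like the following function-calling definition. The tool name and parameters below are hypothetical, chosen only to show the shape of the schema the model sees:

```typescript
// Hypothetical tool definition in OpenAI function-calling format.
// "splitClip" and its parameters are illustrative, not the app's real schema.
const splitClipTool = {
  type: "function" as const,
  function: {
    name: "splitClip",
    description: "Split a clip into two at a given timeline position.",
    parameters: {
      type: "object",
      properties: {
        clipId: { type: "string", description: "ID of the clip to split" },
        atMs: { type: "number", description: "Timeline position in milliseconds" },
      },
      required: ["clipId", "atMs"],
    },
  },
};
```

The model never executes anything itself; it returns the tool name and JSON arguments, and the editor runs the corresponding action against the timeline.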

Clip editing

Split, trim, delete, move, reorder, and cut ranges. Includes batch operations that execute as a single undoable action.

Effects & keyframes

List, add, remove, and update GPU effects. Add, remove, and query keyframes for any clip property.

Masks & transitions

Add rectangle, ellipse, and polygon masks. Add and remove transitions between clips.

Analysis & transcripts

Fetch word-level transcripts, find silence, detect low-quality sections, and trigger background analysis.

Track management

Create, delete, show, hide, mute, and unmute tracks.

Media panel

Create folders, import files, rename items, create compositions, and move media.

Visual capture

Export PNG frames, capture cut-point previews, and get frame grids at multiple timestamps.

Playback & markers

Play, pause, set speed, undo, redo, add markers, and get marker lists.
Use executeBatch to group multiple tool calls into a single undo step. The AI does this automatically for complex operations.
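The batching idea can be sketched as follows; the class and function names are hypothetical, and the real undo mechanism will differ, but the key property holds: every call runs, yet only one undo step is recorded.

```typescript
// Sketch: apply several tool calls but record them as a single undo entry.
type ToolCall = { name: string; args: Record<string, unknown> };

class UndoStack {
  private steps: ToolCall[][] = [];
  push(step: ToolCall[]) { this.steps.push(step); }
  get depth() { return this.steps.length; }
}

function executeBatch(calls: ToolCall[], undo: UndoStack, apply: (c: ToolCall) => void) {
  calls.forEach(apply); // run every call in order
  undo.push(calls);     // record the whole batch as one undo step
}

const undo = new UndoStack();
const applied: string[] = [];
executeBatch(
  [{ name: "splitClip", args: {} }, { name: "deleteClip", args: {} }],
  undo,
  (c) => applied.push(c.name),
);
// Two calls were applied, but undo.depth is 1: Ctrl+Z reverts both at once.
```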

AI Video Panel

The AI Video Panel integrates with PiAPI to generate video from text prompts or images and place clips directly on the timeline. Location: Tab next to AI Chat in the dock panels — or open it from View → AI Video.

Supported providers

PiAPI acts as a unified gateway. The following providers are available from the dropdown in the panel header:
Provider           | Text-to-video | Image-to-video
Kling AI           | Yes           | Yes
Luma Dream Machine | Yes           | Yes
Hailuo (MiniMax)   | Yes           | Yes
Hunyuan            | Yes           | Yes
Wanx (Wan)         | Yes           | Yes
SkyReels           | Yes           | Yes

Generation modes

  • Text-to-video: Describe a scene and select an aspect ratio (16:9, 9:16, or 1:1).
  • Image-to-video: Upload a start frame (or click Use Current Frame to capture the current timeline preview) and animate it. Kling supports an optional end frame to guide the animation.

Timeline integration

Generated videos are automatically imported to an AI Video folder in the Media Panel and placed on the timeline at the playhead position. The History tab keeps the last 50 generated videos, with thumbnails you can drag back onto the timeline at any time.

Setup

Get a PiAPI key from piapi.ai and enter it under Settings → API Keys.

Transcription

Masterselects supports four transcription providers. The local browser option requires no API key and no upload.
Provider                | Runs locally | API key required | Speaker diarization
Local Whisper (browser) | Yes          | No               | No
OpenAI Whisper API      | No           | Yes              | No
AssemblyAI              | No           | Yes              | Yes
Deepgram                | No           | Yes              | Yes
Local Whisper uses @huggingface/transformers to run the Whisper model directly in the browser via ONNX Runtime. The model is downloaded on first use and cached. English audio uses the whisper-tiny.en model; other languages use whisper-tiny. Transcripts are stored at the word level with millisecond timestamps and are accessible to the AI editor tools via getClipTranscript and getClipDetails.
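Transformers.js speech-recognition pipelines report word timestamps as `[start, end]` pairs in seconds, so producing the millisecond word entries described above is a small normalization step. A sketch, with a hypothetical helper name and hand-written sample chunks:

```typescript
// Sketch: normalize word chunks (timestamps in seconds, as returned by a
// transformers.js ASR pipeline with return_timestamps: "word") into
// millisecond word entries. "toWordEntries" is a hypothetical helper.
interface WordChunk { text: string; timestamp: [number, number]; }

function toWordEntries(chunks: WordChunk[]) {
  return chunks.map((c) => ({
    word: c.text.trim(),                       // Whisper emits leading spaces
    startMs: Math.round(c.timestamp[0] * 1000),
    endMs: Math.round(c.timestamp[1] * 1000),
  }));
}

const words = toWordEntries([
  { text: " hello", timestamp: [0.0, 0.42] },
  { text: " world", timestamp: [0.42, 0.9] },
]);
// → [{ word: "hello", startMs: 0, endMs: 420 }, { word: "world", startMs: 420, endMs: 900 }]
```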

Setup: API key configuration

All API keys are managed from Settings → API Keys.
Key                 | Used for
OpenAI              | AI Chat, OpenAI Whisper transcription
PiAPI               | AI Video generation (Kling, Luma, Hailuo, etc.)
AssemblyAI          | Transcription with speaker diarization
Deepgram            | Transcription with speaker diarization
YouTube Data API v3 | YouTube search (optional)
The Claude API key for multicam EDL generation is set separately in the Multicam Panel → Settings. SAM2 segmentation and local Whisper transcription require no API key.

External agent bridge

External agents — Claude Code, custom scripts, or any HTTP client — can drive the running editor over a local HTTP bridge exposed by the Native Helper. This lets you script complex edits, run automated workflows, or integrate Masterselects into a larger agent pipeline. See AI tools bridge for the full API reference, authentication, and example calls.
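As a rough sketch of what such a client might send, the snippet below builds an HTTP request for a tool call. The port, route, payload fields, and bearer-token auth are placeholders; consult the AI tools bridge reference for the real endpoint and authentication scheme.

```typescript
// Hypothetical sketch of a bridge client request. Route, port, payload shape,
// and auth header are placeholders, not the Native Helper's actual API.
function buildBridgeRequest(tool: string, args: Record<string, unknown>, token: string) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`, // placeholder auth scheme
    },
    body: JSON.stringify({ tool, args }),
  };
}

// Usage against a placeholder local endpoint:
// await fetch("http://127.0.0.1:<bridge-port>/tools/call",
//   buildBridgeRequest("splitClip", { clipId: "c1", atMs: 4200 }, token));
```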
