All API keys are encrypted with AES-256-GCM via the Web Crypto API before being stored in IndexedDB. Keys are never sent to any server other than the provider you configure. They never leave the browser.
AI Chat Panel
The AI Chat Panel is the primary interface for giving natural-language editing instructions. Location: Default tab in the dock panels — or open it from View → AI Chat.

The panel includes:
- A chat interface with conversation history
- A model selection dropdown
- Tool execution indicators that show what the AI is doing in real time
- A Clear chat button to reset the conversation
- Auto-scrolling as the AI responds
Available models
Models such as gpt-5.1 are available. Select any model from the dropdown in the panel header.
Editor mode
Editor mode is enabled by default. When active:
- The AI receives your current timeline state as context with every message — tracks, clips, playhead position, and in/out markers.
- 76 editing tools are available as callable functions, covering clip editing, effects, keyframes, masks, transitions, media management, playback, and more.
- The AI can manipulate the timeline directly in response to your instructions.
Every AI edit can be undone with Ctrl+Z.
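The timeline context sent with each message might look something like the shape below. The doc specifies tracks, clips, playhead position, and in/out markers; every field name here is an illustrative assumption, not the app's actual schema.

```javascript
// Hypothetical shape of the timeline state the AI receives as context.
const timelineContext = {
  playheadMs: 12500,   // playhead position
  inPointMs: 0,        // in marker
  outPointMs: 60000,   // out marker
  tracks: [
    {
      id: 'v1', kind: 'video', muted: false, hidden: false,
      clips: [{ id: 'c1', name: 'intro.mp4', startMs: 0, endMs: 8000 }],
    },
    { id: 'a1', kind: 'audio', muted: false, hidden: false, clips: [] },
  ],
};
```

Serializing a compact summary like this keeps the per-message token cost low while still letting the model reason about what is on the timeline.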
Example prompts
Iterative editing
Send a prompt
Type your instruction in the chat and press Enter. The AI reads your timeline state and decides which tools to call.
Watch the AI edit
Tool execution indicators appear as the AI applies changes. The timeline updates live.
AI editor tools
The AI has access to 76 tools across 15 categories. These are exposed as OpenAI function calls — the model decides which tools to call and in what order based on your prompt.

Clip editing
Split, trim, delete, move, reorder, and cut ranges. Includes batch operations that execute as a single undoable action.
Effects & keyframes
List, add, remove, and update GPU effects. Add, remove, and query keyframes for any clip property.
Masks & transitions
Add rectangle, ellipse, and polygon masks. Add and remove transitions between clips.
Analysis & transcripts
Fetch word-level transcripts, find silence, detect low-quality sections, and trigger background analysis.
Track management
Create, delete, show, hide, mute, and unmute tracks.
Media panel
Create folders, import files, rename items, create compositions, and move media.
Visual capture
Export PNG frames, capture cut-point previews, and get frame grids at multiple timestamps.
Playback & markers
Play, pause, set speed, undo, redo, add markers, and get marker lists.
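As a sketch of what "exposed as OpenAI function calls" means in practice, a single tool definition could look like the following. The tool name, description, and parameters are illustrative assumptions, not the app's actual schema.

```javascript
// Hypothetical OpenAI function-call definition for one editing tool.
const splitClipTool = {
  type: 'function',
  function: {
    name: 'splitClip',
    description: 'Split a timeline clip into two at the given time.',
    parameters: {
      type: 'object',
      properties: {
        clipId: { type: 'string', description: 'ID of the clip to split' },
        timeMs: { type: 'number', description: 'Split point in milliseconds from the timeline start' },
      },
      required: ['clipId', 'timeMs'],
    },
  },
};
// Definitions like this are passed to the chat API in the `tools` array;
// the model responds with tool calls such as
//   { name: 'splitClip', arguments: '{"clipId":"c1","timeMs":4000}' }
// which the editor executes against the timeline.
```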
AI Video Panel
The AI Video Panel integrates with PiAPI to generate video from text prompts or images and place clips directly on the timeline. Location: Tab next to AI Chat in the dock panels — or open it from View → AI Video.

Supported providers
PiAPI acts as a unified gateway. The following providers are available from the dropdown in the panel header:

| Provider | Text-to-video | Image-to-video |
|---|---|---|
| Kling AI | Yes | Yes |
| Luma Dream Machine | Yes | Yes |
| Hailuo (MiniMax) | Yes | Yes |
| Hunyuan | Yes | Yes |
| Wanx (Wan) | Yes | Yes |
| SkyReels | Yes | Yes |
Generation modes
Text-to-video: Describe a scene and select an aspect ratio (16:9, 9:16, or 1:1).

Image-to-video: Upload a start frame — or click Use Current Frame to capture the current timeline preview — and animate it. Kling supports an optional end frame to guide the animation.

Timeline integration
Generated videos are automatically imported to an AI Video folder in the Media Panel and placed on the timeline at the playhead position. The History tab keeps the last 50 generated videos, with thumbnails you can drag back onto the timeline at any time.

Setup
Get a PiAPI key from piapi.ai and enter it under Settings → API Keys.

Transcription
Masterselects supports four transcription providers. The local browser option requires no API key and no upload.

| Provider | Runs locally | API key required | Speaker diarization |
|---|---|---|---|
| Local Whisper (browser) | Yes | No | No |
| OpenAI Whisper API | No | Yes | No |
| AssemblyAI | No | Yes | Yes |
| Deepgram | No | Yes | Yes |
Local transcription uses @huggingface/transformers to run the Whisper model directly in the browser via ONNX Runtime. The model is downloaded on first use and cached. English audio uses the whisper-tiny.en model; other languages use whisper-tiny.
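A minimal sketch of the model selection just described. The Hugging Face repo ids shown are assumptions (onnx-community hosts browser-ready Whisper builds); the pipeline call is shown in a comment because the first run downloads the model weights.

```javascript
// Pick a Whisper model per the doc: tiny.en for English, tiny otherwise.
// Repo ids are assumptions, not necessarily the ones the app uses.
function whisperModelFor(language) {
  return language === 'en'
    ? 'onnx-community/whisper-tiny.en'
    : 'onnx-community/whisper-tiny';
}

// In the app (not executed here; the model is fetched and cached on first use):
//   import { pipeline } from '@huggingface/transformers';
//   const asr = await pipeline('automatic-speech-recognition', whisperModelFor('en'));
//   const result = await asr(audioFloat32Array, { return_timestamps: 'word' });
//   // result.chunks holds word-level entries with start/end timestamps.
```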
Transcripts are stored at the word level with millisecond timestamps and are accessible to the AI editor tools via getClipTranscript and getClipDetails.
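With word-level millisecond timestamps stored per clip, silence detection reduces to scanning for gaps between consecutive words. A sketch, assuming a simple `{ word, startMs, endMs }` entry shape (the actual stored shape is not specified here):

```javascript
// Find gaps between consecutive words longer than minGapMs.
// The transcript entry shape is an assumption for illustration.
function findSilences(words, minGapMs = 500) {
  const silences = [];
  for (let i = 1; i < words.length; i++) {
    const gapStart = words[i - 1].endMs;
    const gapEnd = words[i].startMs;
    if (gapEnd - gapStart >= minGapMs) {
      silences.push({ startMs: gapStart, endMs: gapEnd });
    }
  }
  return silences;
}

const words = [
  { word: 'hello', startMs: 0, endMs: 420 },
  { word: 'world', startMs: 500, endMs: 900 },
  { word: 'again', startMs: 2400, endMs: 2800 },
];
console.log(findSilences(words)); // [{ startMs: 900, endMs: 2400 }]
```

The 80 ms gap between the first two words is below the threshold and ignored; the 1.5 s pause before "again" is reported as a silence the AI could then cut.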
Setup: API key configuration
All API keys are managed from Settings → API Keys.

| Key | Used for |
|---|---|
| OpenAI | AI Chat, OpenAI Whisper transcription |
| PiAPI | AI Video generation (Kling, Luma, Hailuo, etc.) |
| AssemblyAI | Transcription with speaker diarization |
| Deepgram | Transcription with speaker diarization |
| YouTube Data API v3 | YouTube search (optional) |