
Overview

Every frame follows the same path: the useEngine React hook initializes the engine once, then a requestAnimationFrame loop drives the RenderDispatcher, which collects textures, composites layers, and submits everything to the GPU in a single device.queue.submit().

Pipeline diagram

useEngine hook (src/hooks/useEngine.ts)
  └── engine.initialize() -> WebGPUContext + all pipelines
        └── RenderLoop.start()
              └── requestAnimationFrame loop (idle detection + fps limiting)
                    └── RenderDispatcher.render(layers)
                          ├── LayerCollector: Import textures (external/cached/scrubbing)
                          ├── Compositor: Ping-pong compositing + effects per layer
                          ├── NestedCompRenderer: Handle compositions-in-compositions
                          ├── OutputPipeline: Output to preview canvas
                          └── SlicePipeline: Output to slice/render target canvases

How a frame is rendered

1. useEngine initializes the engine

src/hooks/useEngine.ts calls engine.initialize(), which sets up WebGPUContext (adapter, device, canvas) and creates all GPU pipelines. The engine is a singleton that survives HMR via import.meta.hot.data.

2. RenderLoop drives the RAF

src/engine/render/RenderLoop.ts starts a requestAnimationFrame loop with:
  • Idle detection: stops rendering after 1s of inactivity (RAF stays alive)
  • Frame rate limiting: ~60 fps during playback, ~30 fps baseline during scrubbing
  • Watchdog: checks every 2s, detects 3s stalls, auto-restarts dead RAF loops
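The idle-detection and frame-rate-limiting rules above can be sketched as a pure decision function. This is an illustrative reduction, not the actual RenderLoop.ts API; the names `LoopState` and `shouldRender` are made up for the example.

```typescript
// Decide whether a given RAF tick should render. The RAF callback itself
// keeps firing; only the rendering work is skipped.
type LoopState = { lastRenderMs: number; lastActivityMs: number };

function shouldRender(
  nowMs: number,
  state: LoopState,
  opts = { targetFps: 60, idleTimeoutMs: 1000 },
): boolean {
  // Idle detection: after 1s with no activity, stop rendering entirely.
  if (nowMs - state.lastActivityMs > opts.idleTimeoutMs) return false;
  // Frame-rate limiting: render only once the target frame interval elapsed.
  const minIntervalMs = 1000 / opts.targetFps;
  return nowMs - state.lastRenderMs >= minIntervalMs;
}
```

A tick 20 ms after the last render (within the ~16.7 ms budget at 60 fps) renders; a tick 5 ms after does not; any tick more than 1 s after the last activity does not.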

3. RenderDispatcher orchestrates the frame

On each RAF tick, src/engine/render/RenderDispatcher.ts runs in order:
  1. compositorPipeline.beginFrame() — clear frame-scoped caches
  2. layerCollector.collect() — import textures from all layer sources
  3. nestedCompRenderer.preRender() — pre-render nested compositions
  4. compositor.composite() — ping-pong compositing with effects
  5. outputPipeline.renderToCanvas() — output to main preview + all active render targets
  6. slicePipeline.renderSlicedOutput() — sliced output for corner-pin targets
  6. device.queue.submit() — single batched GPU submit
  8. performanceStats.recordRenderTiming() — update stats
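The single submit in step 7 is possible because steps 1–6 only *encode* GPU work. A minimal sketch of that batching idea (the stage and command-buffer shapes are stand-ins, not the real RenderDispatcher types):

```typescript
// Each stage records work and returns command buffers; nothing reaches the
// GPU until the one queue.submit() at the end of the frame.
type CommandBuffer = { label: string };

function renderFrame(stages: Array<() => CommandBuffer[]>): string[] {
  const batch: CommandBuffer[] = [];
  for (const stage of stages) batch.push(...stage());
  // In the real dispatcher this is device.queue.submit(batch): one submission.
  return batch.map((cb) => cb.label);
}
```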

4. LayerCollector imports textures

src/engine/render/LayerCollector.ts resolves textures in priority order (default mode, useFullWebCodecsPlayback = false):
  1. Native Helper — ImageBitmap from native decoder
  2. Direct VideoFrame — from parallel decode
  3. HTML Video — HTMLVideoElement (active during scrub/pause)
  4. WebCodecs — full mode or export
  5. Cache fallbacks — scrubbing cache, stall hold frame
  6. Image / Text Canvas / Nested Composition
scrubGraceUntil (~150ms) keeps the HTML preview path active after scrubbing stops, allowing seek completion before switching back to normal decoding.
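The priority walk plus the scrub grace window can be sketched as follows. The `resolveSource` function and the `Source` shape are illustrative, not LayerCollector's actual API; only the priority-then-grace behavior comes from the text above.

```typescript
// Resolve a layer's texture source: during the grace window after scrubbing,
// prefer the HTML video path; otherwise take the first available source in
// priority order, falling back to a held frame.
type Source = { kind: string; available: boolean };

function resolveSource(
  sources: Source[],          // ordered by the priority list above
  nowMs: number,
  scrubGraceUntil: number,    // timestamp until which the grace window holds
): string {
  if (nowMs < scrubGraceUntil) {
    const html = sources.find((s) => s.kind === "html-video" && s.available);
    if (html) return html.kind; // seek completes before decoding resumes
  }
  return sources.find((s) => s.available)?.kind ?? "hold-frame";
}
```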

5. Compositor runs ping-pong compositing

src/engine/render/Compositor.ts alternates between Ping and Pong textures, compositing one layer per pass:
Clear Ping → transparent
Layer 1 → Read Ping, Write Pong
Layer 2 → Read Pong, Write Ping
Layer 3 → Read Ping, Write Pong
...
Inline effects (brightness, contrast, saturation, invert) are applied as uniform values — no extra render passes. Complex effects (blur, glow, etc.) use EffectTemp1/2 textures for pre-processing before each layer is composited.
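The read/write alternation shown above follows directly from the pass index. A minimal sketch (texture names as strings stand in for the real GPU textures):

```typescript
// One composite pass per layer: even passes read Ping and write Pong,
// odd passes swap, so each pass reads the accumulated result so far.
function pingPongPasses(
  layerCount: number,
): Array<{ read: string; write: string }> {
  const tex = ["Ping", "Pong"];
  const passes: Array<{ read: string; write: string }> = [];
  for (let i = 0; i < layerCount; i++) {
    passes.push({ read: tex[i % 2], write: tex[(i + 1) % 2] });
  }
  return passes;
}
```

The final composited image lives in whichever texture the last pass wrote, so downstream pipelines read `passes[layerCount - 1].write`.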

6. NestedCompRenderer handles compositions-in-compositions

Before the main composite, src/engine/render/NestedCompRenderer.ts pre-renders any nested compositions:
  • Pooled ping-pong texture pairs keyed by width×height — no per-frame allocation
  • Frame-level caching: skips the re-render when the composition's time (quantized to 60 fps) and layer count are unchanged
  • Recursive up to MAX_NESTING_DEPTH levels
  • Command buffers are batched with the main composite for a single device.queue.submit()
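The resolution-keyed pooling can be sketched as a small map-backed pool. This is an illustrative reduction; the real pool hands out GPU texture pairs, and the class and method names here are invented for the example.

```typescript
// Reuse one texture pair per resolution across frames instead of
// allocating GPU textures every frame.
type TexturePair = { width: number; height: number };

class TexturePool {
  private pool = new Map<string, TexturePair>();

  acquire(width: number, height: number): TexturePair {
    const key = `${width}x${height}`;
    let pair = this.pool.get(key);
    if (!pair) {
      // Allocate once; later frames at this resolution get the same pair.
      pair = { width, height };
      this.pool.set(key, pair);
    }
    return pair;
  }
}
```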

7. OutputPipeline writes to canvases

src/engine/pipeline/OutputPipeline.ts renders the final composited frame to:
  • Main preview canvas — primary editor display
  • Render target canvases — registered via registerTargetCanvas()
  • Output windows — external popup windows via OutputWindowManager
  • Export canvas — OffscreenCanvas for zero-copy VideoFrame creation
Three uniform buffers (uniformBufferGridOn, uniformBufferGridOff, uniformBufferStackedAlpha) allow different targets in the same command encoder to have different transparency grid or alpha states.
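Selecting one of the three pre-built buffers per target is what lets one command encoder serve targets with different settings, with no uniform writes mid-frame. A sketch of that selection (the buffer names come from the text; the option shape and the precedence of stacked alpha over the grid are assumptions for the example):

```typescript
// Pick the pre-built uniform buffer matching a target's display settings.
type TargetOptions = { showTransparencyGrid: boolean; stackedAlpha: boolean };

function pickUniformBuffer(opts: TargetOptions): string {
  if (opts.stackedAlpha) return "uniformBufferStackedAlpha";
  return opts.showTransparencyGrid
    ? "uniformBufferGridOn"
    : "uniformBufferGridOff";
}
```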

8. SlicePipeline renders corner-pin outputs

src/engine/pipeline/SlicePipeline.ts handles corner-pin warped output slices:
  • 16×16 vertex subdivision per slice for perspective-correct warping
  • CPU-computed vertex positions (position.xy + uv.xy + maskFlag per vertex)
  • Supports inverted and non-inverted mask strips
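The CPU-side vertex generation for one slice can be sketched as below: a 16×16 subdivision gives a 17×17 vertex grid with the interleaved position.xy + uv.xy + maskFlag layout named above. The packing and clip-space mapping here are illustrative, not the actual SlicePipeline code.

```typescript
// Build interleaved vertex data for one slice: 5 floats per vertex
// (x, y, u, v, maskFlag), over a (subdivisions+1)^2 grid.
function buildSliceVertices(subdivisions = 16, maskFlag = 0): Float32Array {
  const n = subdivisions + 1;
  const floatsPerVertex = 5;
  const data = new Float32Array(n * n * floatsPerVertex);
  let i = 0;
  for (let row = 0; row < n; row++) {
    for (let col = 0; col < n; col++) {
      const u = col / subdivisions;
      const v = row / subdivisions;
      data[i++] = u * 2 - 1; // clip-space x in [-1, 1]
      data[i++] = 1 - v * 2; // clip-space y, top row at +1
      data[i++] = u;         // uv.x
      data[i++] = v;         // uv.y
      data[i++] = maskFlag;  // mask strip flag
    }
  }
  return data;
}
```

In the real pipeline the corner-pin homography is applied to the positions before upload; the dense grid is what keeps the warp perspective-correct across the slice.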

Nested composition rendering

Nested compositions (compositions placed on a parent timeline) are rendered to pooled offscreen GPU textures before the parent’s ping-pong composite runs.
  • Pooled textures: texture pairs are reused across frames (keyed by resolution) to avoid per-frame GPU allocation
  • Frame caching: if the nested composition’s time and layer count haven’t changed (quantized to 60 fps), the pre-render is skipped entirely
  • Single GPU submit: nested composition command buffers are enqueued alongside the parent’s composite and flushed in one device.queue.submit()
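The frame-caching rule above can be sketched as a cache key: quantizing time to 60 fps frame indices means sub-frame time jitter does not defeat the cache. The helper names are invented for the example, not the real NestedCompRenderer code.

```typescript
// Cache key for a nested comp's pre-render: 60 fps frame index + layer count.
function nestedCompCacheKey(timeSec: number, layerCount: number): string {
  const frameIndex = Math.round(timeSec * 60); // quantize to 60 fps
  return `${frameIndex}:${layerCount}`;
}

// Re-render only when the key changed since the last frame.
function shouldRerender(prevKey: string | null, key: string): boolean {
  return prevKey !== key;
}
```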

Export pipeline

Export uses the same render path with two differences:
  1. ExportCanvasManager creates an OffscreenCanvas at export resolution
  2. After rendering each frame, new VideoFrame(offscreenCanvas) captures the GPU output directly — no readPixels(), no staging buffers
FrameExporter → engine.render() → OutputPipeline (OffscreenCanvas) →
  VideoFrame(offscreenCanvas) → WebCodecs VideoEncoder → mp4/webm muxer
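A sketch of the per-frame export step. Only the timestamp math runs outside a browser, so the WebCodecs calls are shown as comments; `exportTimestampUs` is an illustrative helper, not part of FrameExporter.

```typescript
// WebCodecs timestamps are in microseconds; derive one per exported frame.
function exportTimestampUs(frameIndex: number, fps: number): number {
  return Math.round((frameIndex * 1_000_000) / fps);
}

// In the browser, per frame (requires WebCodecs + an OffscreenCanvas):
//   engine.render(frameIndex / fps);
//   const frame = new VideoFrame(offscreenCanvas, {
//     timestamp: exportTimestampUs(frameIndex, fps),
//   });
//   encoder.encode(frame);
//   frame.close(); // release the captured GPU output promptly
```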

Performance statistics

src/engine/stats/PerformanceStats.ts tracks per-frame timing:
  • rafGap — EMA-smoothed gap between RAF calls
  • importTexture — time to import all layer textures
  • renderPass — time for compositing passes
  • submit — time for device.queue.submit()
  • total — full frame time
  • fps — updated every 250ms
  • drops — count, last-second rate, reason
Drop reasons: slow_raf, slow_import, slow_render.
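The EMA smoothing and the drop-reason classification can be sketched as below. The thresholds are illustrative assumptions, not PerformanceStats' actual values; only the metric names and reason strings come from the text.

```typescript
// Exponential moving average: each sample nudges the estimate by alpha.
function ema(prev: number, sample: number, alpha = 0.1): number {
  return prev + alpha * (sample - prev);
}

// Classify a dropped frame by the first stage that blew its budget.
function dropReason(
  t: { rafGap: number; importTexture: number; renderPass: number },
  budgetMs = 1000 / 60,
): "slow_raf" | "slow_import" | "slow_render" | null {
  if (t.rafGap > budgetMs * 2) return "slow_raf";       // a whole frame missed
  if (t.importTexture > budgetMs) return "slow_import"; // texture import stalled
  if (t.renderPass > budgetMs) return "slow_render";    // compositing too slow
  return null;
}
```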
See also

  • GPU Engine — subsystem architecture and texture types
  • Debugging — how to inspect the render pipeline at runtime
