The ClientSideExporter class enables video export directly in the browser using the WebCodecs API and MediaBunny library, without requiring a server or FFmpeg.

Overview

Client-side export is useful for:
  • Interactive video editors where users can preview and export
  • Serverless applications that want to avoid backend rendering costs
  • Progressive web apps that work offline
  • Quick previews during development
Client-side export requires a modern browser with WebCodecs support (Chrome 94+ or Edge 94+); it will not work in Firefox or Safari.

Basic usage

import { HeliosPlayer } from '@helios-project/player';

const player = new HeliosPlayer({
  src: './composition.html',
  container: document.getElementById('player-container')
});

const exporter = player.getExporter();

await exporter.export({
  format: 'mp4',
  onProgress: (progress) => {
    console.log(`Export: ${(progress * 100).toFixed(1)}%`);
  }
});

Export options

format
'mp4' | 'webm' | 'png' | 'jpeg'
default: 'mp4'
Output format:
  • 'mp4': H.264 video in MP4 container
  • 'webm': VP9 video in WebM container
  • 'png': Single frame snapshot as PNG
  • 'jpeg': Single frame snapshot as JPEG
mode
'auto' | 'canvas' | 'dom'
default: 'auto'
Capture mode:
  • 'auto': Try canvas first, fall back to DOM
  • 'canvas': Capture from HTMLCanvasElement
  • 'dom': Capture from DOM using screenshots
canvasSelector
string
default: 'canvas'
CSS selector to find the canvas element in canvas mode
width
number
Override output width (defaults to composition width)
height
number
Override output height (defaults to composition height)
bitrate
number
default: 5000000
Video bitrate in bits per second (default: 5 Mbps)
includeCaptions
boolean
default: true
Whether to burn captions into the video
captionStyle
object
Caption styling options:
{
  color?: string;           // Text color (default: 'white')
  backgroundColor?: string; // Background color (default: 'rgba(0,0,0,0.7)')
  fontFamily?: string;      // Font family (default: 'sans-serif')
  scale?: number;           // Font scale (default: 0.05)
}
filename
string
default: 'video'
Output filename (without extension)
onProgress
(progress: number) => void
required
Progress callback receiving values from 0 to 1
signal
AbortSignal
AbortSignal to cancel the export
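
When picking a value for the bitrate option above, a rough rule of thumb is that raw video size is bitrate × duration. A minimal sketch (this helper is illustrative only and not part of the @helios-project/player API):

```javascript
// Rough output-size estimate: bitrate (bits/s) * duration (s) / 8 bits per byte.
// Audio and container overhead are not included.
function estimateFileSizeBytes(bitrate, durationSeconds) {
  return Math.round((bitrate / 8) * durationSeconds);
}

// A 30-second export at the default 5 Mbps:
const bytes = estimateFileSizeBytes(5_000_000, 30);
console.log(`${(bytes / 1_000_000).toFixed(1)} MB`); // "18.8 MB"
```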

Export workflow

1. Pause playback

The exporter pauses the player to ensure consistent frame capture:
this.controller.pause();
2. Determine capture mode

If mode is 'auto', the exporter attempts to capture from canvas first:
const result = await this.controller.captureFrame(0, {
  selector: canvasSelector,
  mode: 'canvas'
});

if (result && result.frame) {
  effectiveMode = 'canvas';
} else {
  effectiveMode = 'dom';
  console.log("Falling back to DOM mode");
}
3. Initialize MediaBunny output

Create output with the selected format:
const target = new BufferTarget();
const outputFormat = format === 'webm' 
  ? new WebMOutputFormat() 
  : new Mp4OutputFormat();

const output = new Output({ format: outputFormat, target });
4. Set up the video track

Configure video encoding:
const videoConfig = {
  codec: format === 'webm' ? 'vp9' : 'avc',
  bitrate: bitrate ?? 5_000_000
};

const videoSource = new VideoSampleSource(videoConfig);
output.addVideoTrack(videoSource);
5. Set up the audio track

If audio tracks exist, configure audio encoding:
const audioTracks = await this.controller.getAudioTracks();

if (audioTracks.length > 0) {
  const audioConfig = format === 'webm'
    ? { codec: 'opus' }
    : { codec: 'aac' };
  
  const audioSource = new AudioSampleSource(audioConfig);
  output.addAudioTrack(audioSource);
}
6. Capture and encode frames

Loop through all frames in the composition:
for (let i = 0; i < totalFrames; i++) {
  const frameIndex = startFrame + i;
  const result = await this.controller.captureFrame(frameIndex, {
    selector: canvasSelector,
    mode: effectiveMode,
    width: targetWidth,
    height: targetHeight
  });
  
  let finalFrame = result.frame;
  
  // Optionally burn captions
  if (includeCaptions && (result.captions?.length ?? 0) > 0) {
    finalFrame = await this.drawCaptions(
      result.frame,
      result.captions,
      captionStyle
    );
    result.frame.close();
  }
  
  const keyFrame = i % (fps * 2) === 0;
  await videoSource.add(new VideoSample(finalFrame), { keyFrame });
  
  finalFrame.close();
  onProgress((i + 1) / totalFrames);
}
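
The keyFrame flag in the loop above requests a keyframe every two seconds of video. The cadence can be checked in isolation (standalone sketch, not part of the exporter):

```javascript
// Keyframe cadence from the capture loop: one keyframe every two seconds.
function isKeyFrame(frameIndex, fps) {
  return frameIndex % (fps * 2) === 0;
}

// At 30 FPS, keyframes land on frames 0, 60, 120, ...
const keyFrames = [];
for (let i = 0; i < 150; i++) {
  if (isKeyFrame(i, 30)) keyFrames.push(i);
}
console.log(keyFrames); // [ 0, 60, 120 ]
```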
7. Process audio

Mix all audio tracks into a single buffer:
const durationInSeconds = totalFrames / fps;
const audioBuffer = await mixAudio(
  audioTracks,
  durationInSeconds,
  48000,
  rangeStartInSeconds
);

// Convert to planar format for MediaBunny
const c0 = audioBuffer.getChannelData(0);
const c1 = audioBuffer.getChannelData(1);
const planarData = new Float32Array(c0.length + c1.length);
planarData.set(c0, 0);
planarData.set(c1, c0.length);

const sample = new AudioSample({
  format: 'f32-planar',
  sampleRate: 48000,
  numberOfChannels: 2,
  timestamp: 0,
  data: planarData
});

await audioSource.add(sample);
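
The packing above hardcodes stereo. A generic version for any channel count might look like this (illustrative sketch; `channels` is an array of per-channel Float32Arrays such as those returned by getChannelData):

```javascript
// Concatenate per-channel data into the single contiguous buffer that the
// 'f32-planar' sample format expects: plane 0, then plane 1, and so on.
function toPlanar(channels) {
  const frameCount = channels[0].length;
  const planar = new Float32Array(frameCount * channels.length);
  channels.forEach((channel, i) => {
    planar.set(channel, i * frameCount); // each channel is one contiguous plane
  });
  return planar;
}

const left = Float32Array.from([0.5, -0.5]);
const right = Float32Array.from([0.25, -0.25]);
console.log(Array.from(toPlanar([left, right]))); // [ 0.5, -0.5, 0.25, -0.25 ]
```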
8. Finalize and download

Complete encoding and trigger browser download:
await output.finalize();

if (target.buffer) {
  const blob = new Blob([target.buffer], { 
    type: format === 'webm' ? 'video/webm' : 'video/mp4' 
  });
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = `${filename}.${format}`;
  a.click();
  URL.revokeObjectURL(url);
}

Image snapshots

Export single frames as images:
await exporter.export({
  format: 'png',
  onProgress: (progress) => console.log(progress)
});
The snapshot captures the current frame and optionally burns captions:
const frameToCapture = state.currentFrame;
const result = await this.controller.captureFrame(frameToCapture, {
  selector: canvasSelector,
  mode: effectiveMode,
  width: targetWidth,
  height: targetHeight
});

let finalFrame = result.frame;

if (includeCaptions && (result.captions?.length ?? 0) > 0) {
  finalFrame = await this.drawCaptions(
    result.frame,
    result.captions,
    captionStyle
  );
  result.frame.close();
}

// Convert VideoFrame to Blob
const canvas = new OffscreenCanvas(finalFrame.displayWidth, finalFrame.displayHeight);
const ctx = canvas.getContext('2d');
ctx.drawImage(finalFrame, 0, 0);
finalFrame.close();

const blob = await canvas.convertToBlob({ type: 'image/png' });

Caption rendering

Captions are burned into the video by drawing them onto each frame:
private async drawCaptions(
  frame: VideoFrame,
  captions: CaptionCue[],
  style?: CaptionStyle
): Promise<VideoFrame> {
  const width = frame.displayWidth;
  const height = frame.displayHeight;
  
  const canvas = new OffscreenCanvas(width, height);
  const ctx = canvas.getContext('2d');
  
  // Draw original frame
  ctx.drawImage(frame, 0, 0);
  
  // Configure caption styling
  const scale = style?.scale ?? 0.05;
  const fontSize = Math.max(16, Math.round(height * scale));
  const padding = fontSize * 0.5;
  const lineHeight = fontSize * 1.2;
  const bottomMargin = height * 0.05;
  
  ctx.font = `${fontSize}px ${style?.fontFamily || 'sans-serif'}`;
  ctx.textAlign = 'center';
  ctx.textBaseline = 'top';
  
  let currentBottomY = height - bottomMargin;
  
  // Render captions from bottom to top
  [...captions].reverse().forEach(cue => {
    const lines = cue.text.split('\n');
    const cueHeight = lines.length * lineHeight + (padding * 2);
    
    // Measure text for background
    let maxLineWidth = 0;
    lines.forEach(line => {
      const m = ctx.measureText(line);
      if (m.width > maxLineWidth) maxLineWidth = m.width;
    });
    
    const bgWidth = maxLineWidth + (fontSize * 1.0);
    const bgTopY = currentBottomY - cueHeight;
    
    // Draw background
    ctx.fillStyle = style?.backgroundColor || 'rgba(0, 0, 0, 0.7)';
    ctx.fillRect((width / 2) - (bgWidth / 2), bgTopY, bgWidth, cueHeight);
    
    // Draw text with shadow
    ctx.shadowColor = 'black';
    ctx.shadowBlur = 2;
    ctx.shadowOffsetY = 1;
    ctx.fillStyle = style?.color || 'white';
    
    lines.forEach((line, i) => {
      const y = bgTopY + padding + (i * lineHeight);
      ctx.fillText(line, width / 2, y);
    });
    
    currentBottomY -= (cueHeight + 4);
  });
  
  return new VideoFrame(canvas, { timestamp: frame.timestamp });
}

Example: Interactive export UI

import { HeliosPlayer } from '@helios-project/player';

const player = new HeliosPlayer({
  src: './composition.html',
  container: document.getElementById('player-container')
});

const exporter = player.getExporter();

// Export button
document.getElementById('export-btn').addEventListener('click', async () => {
  const format = document.getElementById('format-select').value;
  const progressBar = document.getElementById('progress-bar');
  const statusText = document.getElementById('status-text');
  
  const controller = new AbortController();
  
  // Cancel button
  const cancelBtn = document.getElementById('cancel-btn');
  cancelBtn.addEventListener('click', () => controller.abort());
  
  try {
    statusText.textContent = 'Exporting...';
    
    await exporter.export({
      format,
      bitrate: 8_000_000,
      includeCaptions: true,
      captionStyle: {
        color: '#ffffff',
        backgroundColor: 'rgba(0, 0, 0, 0.8)',
        scale: 0.04
      },
      filename: `export-${Date.now()}`,
      onProgress: (progress) => {
        progressBar.value = progress;
        statusText.textContent = `Exporting: ${(progress * 100).toFixed(0)}%`;
      },
      signal: controller.signal
    });
    
    statusText.textContent = 'Export complete!';
  } catch (error) {
    if (error.message === 'Export aborted') {
      statusText.textContent = 'Export cancelled';
    } else {
      statusText.textContent = `Export failed: ${error.message}`;
    }
  }
});

Performance considerations

Frame capture overhead

Capturing frames in the browser is slower than server-side rendering:
  • Canvas mode: ~10-30ms per frame
  • DOM mode: ~50-100ms per frame
For a 60 FPS, 10-second composition:
  • Canvas: ~6-18 seconds
  • DOM: ~30-60 seconds
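
These figures multiply out as frames × per-frame capture cost. A quick estimator (illustrative only, not part of the exporter API):

```javascript
// Export time ~= total frames * per-frame capture cost.
function estimateExportSeconds(fps, durationSeconds, msPerFrame) {
  return (fps * durationSeconds * msPerFrame) / 1000;
}

// 60 FPS, 10-second composition:
console.log(estimateExportSeconds(60, 10, 10));  // 6  (canvas, best case)
console.log(estimateExportSeconds(60, 10, 100)); // 60 (DOM, worst case)
```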

Memory usage

VideoFrames consume memory until closed:
// Always close frames after use
const frame = await captureFrame(i);
try {
  await processFrame(frame);
} finally {
  frame.close();  // Free memory
}

Browser limitations

Large exports may fail due to browser memory limits. Keep compositions under 60 seconds at 1080p.
For longer videos, use server-side rendering with the Renderer class.
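
The pressure comes from uncompressed frame size: an RGBA frame costs width × height × 4 bytes until it is closed, and intermediate buffers accumulate during encoding. A quick estimate (illustrative only):

```javascript
// Memory cost of one uncompressed RGBA frame.
function frameBytes(width, height) {
  return width * height * 4; // 4 bytes per RGBA pixel
}

const mb = frameBytes(1920, 1080) / (1024 * 1024);
console.log(`${mb.toFixed(1)} MB per 1080p frame`); // "7.9 MB per 1080p frame"
```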

Codec support

Different browsers support different codecs:
Format  Codec  Chrome  Edge  Firefox  Safari
MP4     H.264  Yes     Yes   No*      No*
WebM    VP9    Yes     Yes   No*      No

* Firefox and Safari do not support the WebCodecs API.

Check codec support:
if (typeof VideoEncoder === 'undefined') {
  console.error('WebCodecs not supported');
  return;
}

const config = { codec: 'avc1.42001E', width: 1920, height: 1080 };
const support = await VideoEncoder.isConfigSupported(config);

if (!support.supported) {
  console.error('H.264 encoding not supported');
}

Audio mixing

The exporter uses Web Audio API to mix multiple audio tracks:
export async function mixAudio(
  tracks: AudioTrack[],
  duration: number,
  sampleRate: number,
  offset: number
): Promise<AudioBuffer> {
  const audioContext = new OfflineAudioContext(2, duration * sampleRate, sampleRate);
  
  for (const track of tracks) {
    const source = audioContext.createBufferSource();
    source.buffer = track.buffer;
    
    // Apply volume
    const gainNode = audioContext.createGain();
    gainNode.gain.value = track.volume ?? 1.0;
    
    source.connect(gainNode);
    gainNode.connect(audioContext.destination);
    
    // Schedule playback
    const startTime = (track.offset ?? 0) - offset;
    source.start(Math.max(0, startTime));
  }
  
  return await audioContext.startRendering();
}
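
The scheduling line above shifts each track by the export range start and clamps at zero. Isolated as a sketch (hypothetical helper, not exported by the library):

```javascript
// Start-time scheduling from mixAudio: shift by the export range start,
// clamp at zero so tracks that began earlier play from the start of the mix.
function scheduledStart(trackOffset, rangeStartInSeconds) {
  return Math.max(0, (trackOffset ?? 0) - rangeStartInSeconds);
}

// Exporting from t = 2s:
console.log(scheduledStart(5, 2));   // 3 -> track starts 3s into the mix
console.log(scheduledStart(0.5, 2)); // 0 -> track already playing at range start
```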
