The RenderOrchestrator class enables distributed rendering by splitting a composition into chunks that can be rendered in parallel across multiple workers.
Overview
Distributed rendering is ideal for:
- Long compositions that would take too long to render sequentially
- Cloud rendering on serverless platforms (AWS Lambda, Google Cloud Run)
- Multi-core local rendering to maximize CPU/GPU utilization
- CI/CD pipelines that need fast video generation
Basic usage
import { RenderOrchestrator } from '@helios-project/renderer';
await RenderOrchestrator.render(
  'http://localhost:3000/composition.html',
  './output.mp4',
  {
    width: 1920,
    height: 1080,
    fps: 60,
    durationInSeconds: 30,
    concurrency: 4 // Split into 4 chunks
  }
);
Configuration
The DistributedRenderOptions interface extends RendererOptions with two distributed-specific options:
- concurrency — Number of chunks to split the job into. Defaults to Math.max(1, os.cpus().length - 1).
- executor — Custom executor for rendering chunks. Defaults to LocalExecutor, which runs chunks as local processes.
All options from RendererOptions are also supported. See local rendering for the full list.
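The distributed-specific options can be sketched as follows. This is a hedged reconstruction based on the description above, not the library's actual type declarations; the `Sketch` names are illustrative:

```typescript
// Hedged sketch of the distributed options described in this section.
// The real DistributedRenderOptions interface lives in @helios-project/renderer.
interface RenderExecutorSketch {
  render(compositionUrl: string, outputPath: string, options: unknown): Promise<void>;
}

interface DistributedRenderOptionsSketch {
  /** Number of chunks to split the job into. */
  concurrency?: number;
  /** Custom executor for rendering chunks; defaults to a local-process executor. */
  executor?: RenderExecutorSketch;
  // ...plus all RendererOptions fields
}

// Default concurrency, as documented: Math.max(1, os.cpus().length - 1)
const defaultConcurrency = (cpuCount: number): number => Math.max(1, cpuCount - 1);
```

Note that the default never drops below 1, so single-core machines still render (sequentially, as one chunk).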
Distributed rendering workflow
Create render plan
The orchestrator analyzes the composition and creates a RenderPlan:
const plan = RenderOrchestrator.plan(
  compositionUrl,
  outputPath,
  options
);
The plan contains:
- Array of RenderChunk objects defining each chunk's frame range
- Temporary file paths for chunk outputs
- Configuration for concatenation and final mixing
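The exact planning logic is internal to the orchestrator, but the frame-splitting step can be sketched as dividing the total frame count into near-equal contiguous ranges. All names here are illustrative:

```typescript
// Illustrative sketch: divide totalFrames into `concurrency` contiguous
// chunks, spreading any remainder over the first chunks so sizes differ
// by at most one frame.
interface ChunkRange {
  id: number;
  startFrame: number;
  frameCount: number;
}

function splitIntoChunks(totalFrames: number, concurrency: number): ChunkRange[] {
  const chunks: ChunkRange[] = [];
  const base = Math.floor(totalFrames / concurrency);
  let remainder = totalFrames % concurrency;
  let start = 0;
  for (let id = 0; id < concurrency; id++) {
    // The first `remainder` chunks each take one extra frame
    const frameCount = base + (remainder > 0 ? 1 : 0);
    if (remainder > 0) remainder--;
    chunks.push({ id, startFrame: start, frameCount });
    start += frameCount;
  }
  return chunks;
}
```

For example, splitting a 30-second 60 fps composition (1800 frames) four ways yields four chunks of 450 frames each.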
Render chunks in parallel
Each chunk is rendered independently with its own frame range:
const chunkOptions = {
  ...baseOptions,
  startFrame: 60, // Start at frame 60
  frameCount: 30, // Render 30 frames
  audioTracks: [], // No explicit audio in chunks
  audioCodec: 'pcm_s16le' // Uncompressed audio
};
Workers report individual progress, which is aggregated into global progress.
Concatenate chunks
After all chunks complete, they are concatenated using FFmpeg's concat demuxer:
await concatenateVideos(
  plan.concatManifest, // List of chunk files
  plan.concatOutputFile, // Intermediate output
  { ffmpegPath }
);
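FFmpeg's concat demuxer reads a manifest with one `file '<path>'` line per input. A minimal sketch of building that manifest from the chunk files (the function name is illustrative, not part of the library API):

```typescript
// Sketch: build an FFmpeg concat-demuxer manifest from chunk file paths.
// Single quotes inside a path must be escaped as '\'' per FFmpeg's
// quoting rules for the concat file format.
function buildConcatManifest(chunkFiles: string[]): string {
  return chunkFiles
    .map((f) => `file '${f.replace(/'/g, `'\\''`)}'`)
    .join("\n") + "\n";
}
```

Such a manifest is typically consumed with `ffmpeg -f concat -safe 0 -i manifest.txt -c copy out.mp4`.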
This performs a stream copy (no re-encoding) for maximum speed.
Mix final audio
The concatenated video is mixed with the explicit audio tracks:
const mixOptions = {
  ...options,
  videoCodec: 'copy', // Don't re-encode video
  mixInputAudio: true // Mix implicit audio from chunks
};
FFmpeg processes the audio tracks with filters for volume, fades, delays, etc.
Clean up temporary files
All chunk files and intermediate outputs are deleted:
for (const file of plan.cleanupFiles) {
  await fs.promises.unlink(file);
}
Render plan structure
The RenderPlan interface defines the execution plan:
interface RenderPlan {
  totalFrames: number; // Total frames to render
  chunks: RenderChunk[]; // Array of chunk definitions
  concatManifest: string[]; // List of chunk files
  concatOutputFile: string; // Intermediate concatenated file
  finalOutputFile: string; // Final output path
  mixOptions: RendererOptions; // Options for final mix
  cleanupFiles: string[]; // Temp files to delete
}
Render chunk
Each chunk is defined by:
interface RenderChunk {
  id: number; // Chunk identifier
  startFrame: number; // First frame to render
  frameCount: number; // Number of frames in chunk
  outputFile: string; // Temporary output path
  options: RendererOptions; // Render options for this chunk
}
Local parallel rendering
By default, the orchestrator uses LocalExecutor to render chunks on the local machine:
import { RenderOrchestrator } from '@helios-project/renderer';
// Automatically uses all CPU cores minus one
await RenderOrchestrator.render(
  compositionUrl,
  './output.mp4',
  {
    width: 1920,
    height: 1080,
    fps: 60,
    durationInSeconds: 60,
    // concurrency defaults to os.cpus().length - 1
  }
);
Progress tracking
The orchestrator aggregates progress from all workers:
await RenderOrchestrator.render(
  compositionUrl,
  outputPath,
  options,
  {
    onProgress: (globalProgress) => {
      console.log(`Overall: ${(globalProgress * 100).toFixed(1)}%`);
    }
  }
);
Progress calculation:
// Each worker's weight is its share of the total frame count
const workerWeights = chunks.map((chunk) => chunk.frameCount / totalFrames);
// Global progress is the weighted sum of per-worker progress
let globalProgress = 0;
for (let i = 0; i < chunks.length; i++) {
  globalProgress += workerProgress[i] * workerWeights[i];
}
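The same weighted aggregation, written as a self-contained function (the function name is illustrative):

```typescript
// Runnable sketch of the weighted progress aggregation described above:
// each worker's contribution is scaled by its share of the total frames.
function aggregateProgress(
  chunkFrameCounts: number[],
  workerProgress: number[] // per-worker progress, each in [0, 1]
): number {
  const totalFrames = chunkFrameCounts.reduce((a, b) => a + b, 0);
  return chunkFrameCounts.reduce(
    (sum, frames, i) => sum + workerProgress[i] * (frames / totalFrames),
    0
  );
}
```

A worker rendering half the frames at 100% while the other half is untouched reports exactly 50% overall.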
Error handling
If any worker fails, all other workers are immediately aborted:
const promise = executor.render(url, output, chunkOptions, jobOptions)
  .catch(err => {
    // Abort all other workers
    if (!internalController.signal.aborted) {
      console.warn(`[Worker ${i}] failed. Aborting others...`);
      internalController.abort();
    }
    throw err;
  });
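The fail-fast pattern above can be sketched end-to-end with a shared AbortController. This is a simplified model, not the orchestrator's internal code; the task shape is illustrative:

```typescript
// Sketch: run all chunk tasks concurrently; on the first failure, abort
// the shared controller so the remaining workers can stop early.
async function renderAll(
  tasks: Array<(signal: AbortSignal) => Promise<void>>
): Promise<void> {
  const controller = new AbortController();
  const wrapped = tasks.map((task) =>
    task(controller.signal).catch((err) => {
      if (!controller.signal.aborted) controller.abort();
      throw err;
    })
  );
  // Promise.all rejects with the first error; by then the catch above
  // has already signalled the other workers to abort.
  await Promise.all(wrapped);
}
```

Workers that honor the signal (e.g. by listening for its abort event) exit promptly instead of rendering frames that will be discarded.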
Custom executors
Implement the RenderExecutor interface to create custom execution strategies:
interface RenderExecutor {
  render(
    compositionUrl: string,
    outputPath: string,
    options: RendererOptions,
    jobOptions?: RenderJobOptions
  ): Promise<void>;
}
Example: Cloud executor
import { RenderExecutor, RendererOptions, RenderJobOptions } from '@helios-project/renderer';
class LambdaExecutor implements RenderExecutor {
  async render(
    compositionUrl: string,
    outputPath: string,
    options: RendererOptions,
    jobOptions?: RenderJobOptions
  ): Promise<void> {
    // 1. Upload composition to S3
    const s3Url = await this.uploadToS3(compositionUrl);
    // 2. Invoke Lambda function
    const result = await lambda.invoke({
      FunctionName: 'helios-renderer',
      Payload: JSON.stringify({
        compositionUrl: s3Url,
        options,
        outputKey: outputPath
      })
    });
    // 3. Download result from S3
    await this.downloadFromS3(outputPath);
    // 4. Report progress
    if (jobOptions?.onProgress) {
      jobOptions.onProgress(1);
    }
  }
}
// Use custom executor
await RenderOrchestrator.render(
  compositionUrl,
  './output.mp4',
  {
    width: 1920,
    height: 1080,
    fps: 60,
    durationInSeconds: 120,
    concurrency: 10,
    executor: new LambdaExecutor()
  }
);
Audio handling in distributed mode
Distributed rendering uses a two-phase audio approach:
Phase 1: Chunk rendering
Chunks capture implicit audio (from DOM audio elements) using PCM:
const chunkBaseOptions = {
  ...options,
  audioTracks: [], // Remove explicit tracks
  audioFilePath: undefined, // Remove audio file
  audioCodec: 'pcm_s16le' // Uncompressed PCM
};
This ensures:
- No audio re-encoding during chunk rendering
- Implicit audio is preserved for concatenation
- Deterministic audio synchronization
Phase 2: Final mixing
After concatenation, explicit audio tracks are mixed:
const mixOptions = {
  ...options,
  videoCodec: 'copy', // Don't re-encode video
  mixInputAudio: hasImplicitAudio, // Mix PCM from chunks
  subtitles: undefined // Skip subtitles (already burned)
};
The orchestrator detects implicit audio:
function hasAudioStream(filePath: string, ffmpegPath: string): Promise<boolean> {
  return new Promise((resolve) => {
    // `ffmpeg -i <file>` with no output exits non-zero, but prints the
    // stream information we need to stderr
    const proc = spawn(ffmpegPath, ['-i', filePath]);
    let stderr = '';
    proc.stderr.on('data', (d) => stderr += d.toString());
    proc.on('close', () => {
      // Look for an "Audio:" stream line in FFmpeg's output
      resolve(/Stream #.*:.*Audio:/.test(stderr));
    });
  });
}
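The detection regex can be exercised directly against representative `ffmpeg -i` stderr lines (the sample lines below are typical of FFmpeg's output format):

```typescript
// The stream-detection regex used above, tested against representative
// lines from `ffmpeg -i` stderr output.
const hasAudio = /Stream #.*:.*Audio:/;

const withAudio =
  "  Stream #0:1[0x2](und): Audio: aac (LC), 48000 Hz, stereo, fltp";
const videoOnly =
  "  Stream #0:0[0x1](und): Video: h264 (High), yuv420p, 1920x1080";
```

The pattern matches any stream line whose codec description begins with "Audio:", regardless of stream index or codec.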
Optimal concurrency
For CPU-bound workloads (software encoding):
const concurrency = os.cpus().length - 1; // Leave one core free
For GPU-bound workloads (hardware encoding):
const concurrency = 2; // Hardware encoders typically support only a few simultaneous sessions
For cloud rendering:
const concurrency = Math.ceil(totalFrames / (fps * 30)); // 30s per worker
Chunk size
Avoid chunks that are too small:
const MIN_CHUNK_DURATION = 5; // seconds
const minChunkSize = fps * MIN_CHUNK_DURATION;
const maxConcurrency = Math.floor(totalFrames / minChunkSize);
Small chunks have overhead:
- Browser startup time
- FFmpeg initialization
- File I/O operations
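The minimum-chunk-size guard above can be folded into a helper that clamps the requested concurrency (the function name is illustrative):

```typescript
// Clamp requested concurrency so no chunk is shorter than a minimum
// duration; very short chunks spend a disproportionate share of their
// time on browser startup, FFmpeg initialization, and file I/O.
function clampConcurrency(
  requested: number,
  totalFrames: number,
  fps: number,
  minChunkSeconds = 5
): number {
  const minChunkFrames = fps * minChunkSeconds;
  const maxByChunkSize = Math.max(1, Math.floor(totalFrames / minChunkFrames));
  return Math.max(1, Math.min(requested, maxByChunkSize));
}
```

A 10-second composition at 60 fps, for instance, supports at most two 5-second chunks no matter how many workers are requested.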
Memory usage
Each worker runs a full browser instance:
// Estimate: ~200MB per browser instance
const estimatedMemoryMB = concurrency * 200;
console.log(`Estimated memory usage: ${estimatedMemoryMB}MB`);
Limit concurrency on memory-constrained systems:
const availableMemoryGB = os.totalmem() / (1024 ** 3);
const maxConcurrency = Math.floor(availableMemoryGB / 0.5); // 0.5GB per worker
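Combining the CPU and memory constraints gives a single bound. The 0.5 GB-per-worker figure follows the estimate above and is a rough assumption, not a measured value:

```typescript
// Sketch: choose a concurrency bounded by both CPU count and available
// memory, assuming roughly 0.5 GB per worker (browser + encoder).
function concurrencyForSystem(cpuCount: number, totalMemGB: number): number {
  const byCpu = Math.max(1, cpuCount - 1); // leave one core free
  const byMemory = Math.max(1, Math.floor(totalMemGB / 0.5));
  return Math.min(byCpu, byMemory);
}
```

On a 16-core machine with only 2 GB of RAM, memory (not CPU) becomes the limiting factor.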
Example: Full distributed workflow
import { RenderOrchestrator } from '@helios-project/renderer';
import * as os from 'os';
const compositionUrl = 'http://localhost:3000';
const outputPath = './output.mp4';
const options = {
  width: 1920,
  height: 1080,
  fps: 60,
  durationInSeconds: 120, // 2 minutes
  concurrency: os.cpus().length - 1,
  // Video settings
  videoCodec: 'libx264',
  preset: 'medium',
  crf: 23,
  // Audio tracks
  audioTracks: [
    {
      path: './music.mp3',
      volume: 0.7,
      loop: true,
      fadeInDuration: 2,
      fadeOutDuration: 3
    },
    {
      path: './narration.mp3',
      offset: 5,
      volume: 1.0
    }
  ],
  // Performance
  mode: 'canvas',
  intermediateVideoCodec: 'avc1.4d002a', // H.264 hardware
  webCodecsPreference: 'hardware'
};
const jobOptions = {
  onProgress: (progress) => {
    const percent = (progress * 100).toFixed(1);
    process.stdout.write(`\rRendering: ${percent}%`);
  },
  signal: AbortSignal.timeout(600000) // 10 minute timeout
};
try {
  await RenderOrchestrator.render(
    compositionUrl,
    outputPath,
    options,
    jobOptions
  );
  console.log('\nRender complete!');
} catch (error) {
  console.error('\nRender failed:', error);
  process.exit(1);
}
Stateless rendering requirements
For cloud/distributed rendering, compositions must be stateless:
Each chunk renders independently without access to previous frames. Your composition must support deterministic frame seeking.
Requirements
- Deterministic time-based rendering
// Good: Based purely on currentFrame
function render(currentFrame: number) {
  const time = currentFrame / fps;
  const x = Math.sin(time * Math.PI * 2) * 100;
  draw(x);
}
// Bad: Depends on previous state
let position = 0;
function render() {
  position += velocity; // Lost when chunk starts
  draw(position);
}
- No reliance on playback history
// Good: Calculate from absolute frame
const rotation = (currentFrame / fps) * 360;
// Bad: Accumulate over time
rotation += deltaRotation; // Wrong for chunk N > 0
- Bind to document timeline
import { Helios } from '@helios-project/core';
const helios = new Helios({ duration: 10, fps: 60 });
// REQUIRED for distributed rendering
helios.bindToDocumentTimeline();
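The difference between the two styles can be checked directly: a frame-based function yields the same value whether a chunk steps through every frame or seeks straight to frame N (the fps value and function names here are illustrative):

```typescript
const fps = 60;

// Deterministic: rotation is a pure function of the absolute frame number,
// so a chunk starting at frame 90 can compute it with no playback history.
const rotationAt = (frame: number): number => (frame / fps) * 360;

// Stateful accumulation reproduces the same value only if every prior
// frame was stepped through — which a chunk starting mid-timeline never does.
function rotationByAccumulation(frames: number): number {
  let rotation = 0;
  const deltaPerFrame = 360 / fps;
  for (let i = 0; i < frames; i++) rotation += deltaPerFrame;
  return rotation;
}
```

Both agree when stepping from frame 0, but only the pure function gives the correct answer when a worker seeks directly to an arbitrary start frame.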
See the distributed rendering example for a complete stateless composition.