The batch operation sends an array of jobs to the Rust engine in a single call. The engine processes them in parallel using Rayon and returns a combined result. This is more efficient than issuing the same jobs as separate sequential calls because it avoids a round trip per job and lets all CPU cores work on the jobs simultaneously.
Use batch when you need to produce several variants of the same asset at once — for example, responsive image sizes, a WebP conversion, and a low-quality placeholder in a single upload handler. For independent assets with no shared input, issuing separate calls through StreamClient is equally effective and simpler to reason about.
The batch job wraps an array of any valid job objects under the jobs key:
{
  "operation": "batch",
  "jobs": [
    { "operation": "resize", "input": "a.png", "output_dir": "out/", "widths": [320, 640] },
    { "operation": "optimize", "inputs": ["b.jpg"], "output_dir": "out/" }
  ],
  "parallel": true
}
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `jobs` | array | Yes | — | Array of any valid job objects |
| `parallel` | bool | No | `true` | Execute jobs in parallel |
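Setting `parallel` to `false` can help when debugging a misbehaving batch. A minimal sketch, under the assumption that with parallelism disabled the engine executes the jobs one at a time in array order:

```json
{
  "operation": "batch",
  "jobs": [
    { "operation": "resize", "input": "a.png", "output_dir": "out/", "widths": [320] },
    { "operation": "placeholder", "input": "a.png", "inline": true }
  ],
  "parallel": false
}
```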
Go API
There is no BatchJob struct in the Go client. Use an anonymous struct that serializes to the batch JSON format, then call Execute on either Client or StreamClient:
type batchRequest struct {
	Operation string `json:"operation"`
	Jobs      []any  `json:"jobs"`
	Parallel  bool   `json:"parallel"`
}
result, err := sc.Execute(&batchRequest{
	Operation: "batch",
	Parallel:  true,
	Jobs: []any{
		dpf.VideoTranscodeJob{
			Operation: "video_transcode",
			Input:     "video.mp4",
			Output:    "video_720p.mp4",
			Codec:     "h264",
		},
		dpf.VideoThumbnailJob{
			Operation: "video_thumbnail",
			Input:     "video.mp4",
			Output:    "poster.jpg",
			Timestamp: "25%",
		},
		dpf.AudioNormalizeJob{
			Operation:  "audio_normalize",
			Input:      "audio.mp3",
			Output:     "audio_normalized.mp3",
			TargetLUFS: -14.0,
		},
	},
})
if err != nil {
	log.Fatalf("batch failed: %v", err)
}
log.Printf("batch completed in %dms, produced %d files", result.ElapsedMs, len(result.Outputs))
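The combined result flattens every job's outputs into one slice. When you want per-job reporting, the outputs can be bucketed by directory, since each job above writes to its own output location. A minimal sketch, assuming only that each output entry exposes a `Path` field (the `OutputFile` type here mirrors what this guide reads from `result.Outputs`; the real dpf type may carry more fields):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// OutputFile mirrors the fields this guide reads from result.Outputs.
type OutputFile struct {
	Path      string
	SizeBytes int64
}

// groupByDir buckets batch outputs by their directory, so the files
// produced by each job can be reported separately.
func groupByDir(outputs []OutputFile) map[string][]OutputFile {
	groups := make(map[string][]OutputFile)
	for _, f := range outputs {
		dir := filepath.Dir(f.Path)
		groups[dir] = append(groups[dir], f)
	}
	return groups
}

func main() {
	outputs := []OutputFile{
		{Path: "out/responsive/a-320.webp", SizeBytes: 12000},
		{Path: "out/responsive/a-640.webp", SizeBytes: 30000},
		{Path: "out/optimized/b.jpg", SizeBytes: 45000},
	}
	for dir, files := range groupByDir(outputs) {
		fmt.Printf("%s: %d file(s)\n", dir, len(files))
	}
}
```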
Real-world example: process an uploaded image
When a user uploads a source image, you often need responsive sizes, an optimized version, an AVIF conversion, and a placeholder — all at once. A single batch call handles this:
type batchRequest struct {
	Operation string `json:"operation"`
	Jobs      []any  `json:"jobs"`
	Parallel  bool   `json:"parallel"`
}

func processUpload(sc *dpf.StreamClient, srcPath, outDir string) (*dpf.JobResult, error) {
	webp := "webp"
	q := uint8(85)
	optimizeDir := outDir + "/optimized"
	return sc.Execute(&batchRequest{
		Operation: "batch",
		Parallel:  true,
		Jobs: []any{
			// Responsive sizes for <img srcset>
			dpf.ResizeJob{
				Operation: "resize",
				Input:     srcPath,
				OutputDir: outDir + "/responsive",
				Widths:    []uint32{320, 640, 1024, 1920},
				Format:    &webp,
				Quality:   &q,
			},
			// Lossless-optimized original
			dpf.OptimizeJob{
				Operation: "optimize",
				Inputs:    []string{srcPath},
				OutputDir: &optimizeDir,
			},
			// AVIF conversion for modern browsers
			dpf.ConvertJob{
				Operation: "convert",
				Input:     srcPath,
				Output:    outDir + "/hero.avif",
				Format:    "avif",
			},
			// LQIP placeholder (base64-encoded, returned inline)
			dpf.PlaceholderJob{
				Operation: "placeholder",
				Input:     srcPath,
				Inline:    true,
			},
		},
	})
}
The result.Outputs slice contains the files from all four jobs combined. Check result.Success to confirm the batch succeeded as a whole.
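The responsive outputs from a call like this can be turned directly into an `<img srcset>` value. A sketch under the assumption that the resize job names each file with a trailing width suffix such as `hero-640.webp` — the real naming scheme may differ, so adjust the pattern to match actual output paths:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// widthRe extracts the trailing width suffix assumed to be appended to
// each resized file name, e.g. "hero-640.webp" -> "640".
var widthRe = regexp.MustCompile(`-(\d+)\.\w+$`)

// buildSrcset turns responsive output paths into an <img srcset> value,
// skipping any path that does not carry a width suffix.
func buildSrcset(paths []string) string {
	var entries []string
	for _, p := range paths {
		m := widthRe.FindStringSubmatch(p)
		if m == nil {
			continue
		}
		entries = append(entries, fmt.Sprintf("%s %sw", p, m[1]))
	}
	return strings.Join(entries, ", ")
}

func main() {
	paths := []string{
		"out/responsive/hero-320.webp",
		"out/responsive/hero-640.webp",
		"out/responsive/hero-1024.webp",
	}
	fmt.Println(buildSrcset(paths))
}
```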
UI asset generation example
Generate all icon and responsive assets from a single SVG source in one batch call:
type batchRequest struct {
	Operation string `json:"operation"`
	Jobs      []any  `json:"jobs"`
	Parallel  bool   `json:"parallel"`
}

result, err := sc.Execute(&batchRequest{
	Operation: "batch",
	Parallel:  true,
	Jobs: []any{
		dpf.ResizeJob{
			Operation: "resize",
			Input:     "logo.svg",
			OutputDir: "out/responsive",
			Widths:    []uint32{256, 512, 1024},
		},
		dpf.FaviconJob{
			Operation:   "favicon",
			Input:       "logo.svg",
			OutputDir:   "out/favicons",
			GenerateICO: true,
		},
		dpf.PlaceholderJob{
			Operation: "placeholder",
			Input:     "logo.svg",
			Inline:    true,
		},
	},
})
Batch via CLI
You can also drive batch processing directly from the command line using a JSON file:
./dpf/target/release/dpf batch --file jobs.json
Where jobs.json contains the batch JSON structure:
{
  "operation": "batch",
  "jobs": [
    {
      "operation": "resize",
      "input": "hero.png",
      "output_dir": "out/responsive",
      "widths": [320, 640, 1024]
    },
    {
      "operation": "convert",
      "input": "hero.png",
      "output": "out/hero.avif",
      "format": "avif"
    },
    {
      "operation": "placeholder",
      "input": "hero.png",
      "inline": true
    }
  ],
  "parallel": true
}
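Because a malformed `jobs.json` only fails once it reaches the binary, it can be worth checking the file's shape before invoking the CLI. A hypothetical pre-flight check — the `batchFile` type and `validateBatch` helper below are illustrative, not part of dpf:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// batchFile is the minimal shape of the jobs.json file shown above.
type batchFile struct {
	Operation string           `json:"operation"`
	Jobs      []map[string]any `json:"jobs"`
}

// validateBatch catches the most common authoring mistakes before the
// JSON ever reaches the dpf binary.
func validateBatch(data []byte) error {
	var b batchFile
	if err := json.Unmarshal(data, &b); err != nil {
		return fmt.Errorf("malformed JSON: %w", err)
	}
	if b.Operation != "batch" {
		return fmt.Errorf(`operation must be "batch", got %q`, b.Operation)
	}
	if len(b.Jobs) == 0 {
		return fmt.Errorf("jobs array is empty")
	}
	for i, j := range b.Jobs {
		if op, _ := j["operation"].(string); op == "" {
			return fmt.Errorf("job %d is missing an operation", i)
		}
	}
	return nil
}

func main() {
	good := []byte(`{"operation":"batch","jobs":[{"operation":"resize","input":"hero.png"}]}`)
	fmt.Println(validateBatch(good)) // <nil>
}
```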
Error handling
A batch call returns a single *JobResult. Check both the Go error and the Success flag:
result, err := sc.Execute(&batchRequest{...})
if err != nil {
	// Process-level failure (broken pipe, binary not found, malformed JSON)
	log.Printf("batch error: %v", err)
	return
}
if !result.Success {
	log.Printf("batch operation failed: %s elapsed=%dms", result.Operation, result.ElapsedMs)
	return
}
log.Printf("batch produced %d outputs in %dms", len(result.Outputs), result.ElapsedMs)
for _, f := range result.Outputs {
	log.Printf("  %s (%d bytes)", f.Path, f.SizeBytes)
}