Model performance varies significantly across the library. Understanding latency characteristics helps you choose the right balance between speed and quality for your use case.

Performance categories

Models are categorized by processing speed:

| Category | Latency | Best for |
| --- | --- | --- |
| Very Fast | ≤ 1.0s | Real-time processing and batch workflows |
| Fast | 1.0s - 3.0s | Good balance of speed and quality |
| Medium | 3.0s - 5.0s | Higher quality with moderate wait |
| Slow | > 5.0s | Maximum quality, longer processing time |
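
When routing jobs programmatically, the bands above can be encoded in a small helper. This is a sketch, not part of the opencomic-ai-bin API; `speedCategory` is a name invented here:

```javascript
// Map a measured latency (in seconds) to the speed bands above.
// Hypothetical helper - not an opencomic-ai-bin export.
function speedCategory(latencySeconds) {
  if (latencySeconds <= 1.0) return 'Very Fast';
  if (latencySeconds <= 3.0) return 'Fast';
  if (latencySeconds <= 5.0) return 'Medium';
  return 'Slow';
}

console.log(speedCategory(0.5));  // 'Very Fast'
console.log(speedCategory(2.96)); // 'Fast'
console.log(speedCategory(9.44)); // 'Slow'
```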

Upscale model performance

Very fast models (1.3s - 1.6s)

| Model | Latency | Scales |
| --- | --- | --- |
| 4xLSDIRCompactC3 | 1.31s | 2x, 3x, 4x |
| RealESRGAN_General_x4_v3 | 1.35s | 2x, 3x, 4x |
| realesr-animevideov3 | 1.36s | 2x, 3x, 4x |
| RealESRGAN_General_WDN_x4_v3 | 1.6s | 2x, 3x, 4x |
These models are excellent for batch processing and real-time applications.

Fast models (1.0s - 3.0s)

| Model | Latency | Scales |
| --- | --- | --- |
| waifu2x-models-upconv | 2.61s | 2x, 4x, 8x, 16x, 32x |
| realcugan | 2.96s | 2x, 3x, 4x |
Good all-around performers with quality output.

Medium models (3.0s - 5.0s)

| Model | Latency | Scales |
| --- | --- | --- |
| realesrgan-x4plus-anime | 3.61s | 2x, 3x, 4x |
| waifu2x-models-cunet | 5.2s | 2x, 4x, 8x, 16x, 32x |
Balanced quality and performance for anime content.

Slow models (> 5.0s)

| Model | Latency | Scales |
| --- | --- | --- |
| realesrnet-x4plus | 9.35s | 2x, 3x, 4x |
| realesrgan-x4plus | 9.44s | 2x, 3x, 4x |
| 4xInt-RemAnime | 9.46s | 2x, 3x, 4x |
| uniscale_restore_x4 | 9.51s | 2x, 3x, 4x |
| 4xLSDIRplusC | 9.51s | 2x, 3x, 4x |
| 4x-WTP-ColorDS | 9.53s | 2x, 3x, 4x |
| AI-Forever_x4plus | 9.55s | 2x, 3x, 4x |
| 4xNomos8kSC | 9.58s | 2x, 3x, 4x |
| 4x_NMKD-Siax_200k | 9.67s | 2x, 3x, 4x |
| 4xHFA2k | 9.69s | 2x, 3x, 4x |
| ultrasharp-4x | 9.73s | 2x, 3x, 4x |
| 4xNomosWebPhoto_esrgan | 9.77s | 2x, 3x, 4x |
| remacri-4x | 9.84s | 2x, 3x, 4x |
| unknown-2.0.1 | 9.87s | 2x, 3x, 4x |
| ultramix-balanced-4x | 10.0s | 2x, 3x, 4x |
Maximum quality models for professional workflows.

Descreen model performance

| Model | Latency | Speed Category |
| --- | --- | --- |
| 1x_wtp_descreenton_compact | 0.51s | Very Fast |
| opencomic-ai-descreen-hard-compact | 0.52s | Very Fast |
| opencomic-ai-descreen-hard-lite | 3.0s | Fast |
| 1x_halftone_patch_060000_G | 8.26s | Slow |

Artifact removal model performance

| Model | Latency | Speed Category |
| --- | --- | --- |
| opencomic-ai-artifact-removal-compact | 0.5s | Very Fast |
| opencomic-ai-artifact-removal-lite | 2.97s | Fast |
| 1x_NMKD-Jaywreck3-Lite_320k | 2.98s | Fast |
| 1x_NMKD-Jaywreck3-Soft-Lite_320k | 2.98s | Fast |
| 1x-SaiyaJin-DeJpeg | 8.2s | Slow |
| opencomic-ai-artifact-removal | 8.21s | Slow |
| 1x_JPEGDestroyerV2_96000G | 8.22s | Slow |

Daemon mode performance

Daemon mode significantly improves performance for batch processing by loading models once and keeping them in memory. This is only available for upscayl models.

Performance improvements

| Model | Without Daemon | With Daemon | Speedup |
| --- | --- | --- | --- |
| OpenComic AI Upscale Lite | 52.087s | 7.646s | 6.81x faster |
| RealESRGAN x4 Plus | 73.273s | 23.199s | 3.16x faster |
These benchmarks are for processing 10 images at 512x512px. Performance improvement scales with batch size.
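
The speedup column is simply the ratio of the two totals. Reproducing the RealESRGAN x4 Plus figure from the numbers in the table:

```javascript
// Speedup = total time without daemon / total time with daemon,
// using the RealESRGAN x4 Plus totals from the benchmark above.
const withoutDaemon = 73.273; // seconds for 10 images
const withDaemon = 23.199;    // seconds for 10 images
const speedup = withoutDaemon / withDaemon;
console.log(`${speedup.toFixed(2)}x faster`); // "3.16x faster"
```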

How daemon mode works

  1. Model preloading: Model is loaded into memory once
  2. Persistent process: Daemon stays alive between images
  3. Fast processing: No model loading overhead per image
  4. Automatic cleanup: Daemons close after idle timeout
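
The load-once behavior in steps 1-3 can be sketched as a simple in-memory cache. This illustrates the idea only; the library's actual daemon is a separate persistent process, and the loader below is a stand-in:

```javascript
// Sketch of the daemon idea: load each model at most once, then reuse it.
// Not the library's implementation - loadModel here is a fake stand-in.
const loadedModels = new Map();
let loadCount = 0;

function loadModel(name) {
  loadCount += 1; // represents the expensive one-time load step
  return { name };
}

function getModel(name) {
  if (!loadedModels.has(name)) {
    loadedModels.set(name, loadModel(name)); // one-time preload
  }
  return loadedModels.get(name); // no loading overhead afterwards
}

// Ten "images" through the same model trigger a single load.
for (let i = 0; i < 10; i++) getModel('realesrgan-x4plus');
console.log(loadCount); // 1
```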

Enabling daemon mode

import OpenComicAI from 'opencomic-ai-bin';

// Enable up to 3 concurrent daemons (default)
OpenComicAI.setConcurrentDaemons(3);

// Set idle timeout (default 60 seconds)
OpenComicAI.setDaemonIdleTimeout(60000);

// Process multiple images
for (const image of images) {
  await OpenComicAI.pipeline(image.input, image.output, [
    { model: 'realesrgan-x4plus', scale: 4 }
  ]);
}

// Manually close daemons when done
OpenComicAI.closeAllDaemons();

Daemon performance details

Without daemon mode

Each image requires full model loading:
Processing image 1/10: 6.473s  (includes model loading)
Processing image 2/10: 7.809s  (includes model loading)
Processing image 3/10: 7.690s  (includes model loading)
...
Total: 73.273s

With daemon mode enabled

Model loads once, then processes images quickly:
Preload model: 1.165s          (one-time cost)
Processing image 1/10: 2.217s  (inference only)
Processing image 2/10: 2.196s  (inference only)
Processing image 3/10: 2.203s  (inference only)
...
Total: 23.199s
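
Dividing the two totals by the batch size shows how the one-time preload amortizes: the per-image cost drops from roughly 7.3s to 2.3s.

```javascript
// Per-image averages from the RealESRGAN x4 Plus logs above (10 images).
const withoutDaemonPerImage = 73.273 / 10;
const withDaemonPerImage = 23.199 / 10; // 1.165s preload amortized over the batch
console.log(withoutDaemonPerImage.toFixed(2)); // "7.33"
console.log(withDaemonPerImage.toFixed(2));    // "2.32"
```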

When to use daemon mode

Use daemon mode for:
  • Batch processing multiple images
  • Processing image sequences
  • Server applications
  • Automated workflows
Don’t use daemon mode for:
  • Single image processing
  • Memory-constrained environments
  • Different models for each image

Optimization strategies

For speed

  1. Choose fast models: Use Very Fast or Fast category models
  2. Enable daemon mode: 3-7x speedup for batch processing
  3. Use compact models: OpenComic AI Compact variants are optimized for speed
  4. Pipeline efficiently: Combine models to avoid multiple read/write cycles
// Efficient: Single pipeline
await OpenComicAI.pipeline('./input.jpg', './output.jpg', [
  { model: 'opencomic-ai-descreen-hard-compact' },
  { model: 'opencomic-ai-artifact-removal-compact' },
  { model: 'realesr-animevideov3', scale: 4 }
]);

For quality

  1. Choose quality models: Accept slower processing for better results
  2. Use full-size models: Non-compact variants provide better quality
  3. Multi-pass processing: Apply models multiple times for difficult images
  4. Combine complementary models: Use specialized models for each task
// Quality-focused pipeline
await OpenComicAI.pipeline('./input.jpg', './output.jpg', [
  { model: '1x_halftone_patch_060000_G' },
  { model: 'opencomic-ai-artifact-removal' },
  { model: 'realesrgan-x4plus', scale: 4 }
]);

For balanced workflows

  1. Mix fast and quality models: Use fast models for preprocessing
  2. Scale appropriately: Lower scales process faster
  3. Test different models: Find the sweet spot for your content
// Balanced pipeline
await OpenComicAI.pipeline('./input.jpg', './output.jpg', [
  { model: 'opencomic-ai-descreen-hard-compact' },  // Very Fast
  { model: 'opencomic-ai-artifact-removal-lite' },  // Fast
  { model: 'realcugan', scale: 4, noise: 0 }       // Fast
]);

Hardware considerations

Performance depends on:
  • GPU: Models run faster with GPU acceleration
  • CPU: Multi-core CPUs improve parallel processing
  • RAM: Daemon mode requires sufficient memory for loaded models
  • Storage: Fast SSD improves file I/O between pipeline stages
For maximum performance, enable daemon mode, use GPU acceleration, and choose models from the Very Fast or Fast categories.

Benchmarking your system

Test different models on your hardware to find optimal choices:
const models = [
  'realesr-animevideov3',
  'realcugan',
  'realesrgan-x4plus-anime'
];

for (const model of models) {
  const start = Date.now();
  await OpenComicAI.pipeline('./test.jpg', './output.jpg', [
    { model, scale: 4 }
  ]);
  console.log(`${model}: ${(Date.now() - start) / 1000}s`);
}
