Endpoint
Batch predictions are more efficient than individual requests and count as a single rate limit operation.
Authentication
This endpoint requires authentication. Include your API key in the X-API-Key header.
See Authentication for details.
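A minimal sketch of attaching the API key, using Python's standard library. The endpoint URL here is a placeholder; substitute your deployment's actual base URL.

```python
import json
import urllib.request

# Placeholder URL -- replace with the real batch predictions endpoint.
API_URL = "https://api.example.com/v1/predict/batch"
API_KEY = "your-api-key"

def build_request(payload: dict) -> urllib.request.Request:
    """Build a POST request with the API key in the X-API-Key header."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-API-Key": API_KEY,  # required on every call
        },
        method="POST",
    )

req = build_request({"predictions": []})
# urllib stores header names in capitalized form internally.
print(req.get_header("X-api-key"))
```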
Availability
Request Parameters
Array of prediction requests. Each item contains the same parameters as the single Predict endpoint.
- Minimum: 2 items
- Maximum: 100 items (Free/Basic), 500 items (Pro), 1000 items (Enterprise)
If true, the entire batch fails if any single prediction fails. If false, failed predictions return error details while successful ones return results.
- Default: false
- Recommended: false for robustness
Enable parallel processing for faster results. May slightly increase costs.
- Default: true
- Available on Pro tier and above
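The batch size limits above can be enforced client-side before sending a request. A minimal validation sketch, mirroring the per-tier maximums listed under Request Parameters:

```python
# Per-tier maximum batch sizes, as documented under Request Parameters.
TIER_MAX = {"free": 100, "basic": 100, "pro": 500, "enterprise": 1000}

def validate_batch(items: list, tier: str = "basic") -> bool:
    """Raise ValueError if the batch violates the documented size limits."""
    if len(items) < 2:
        raise ValueError("batch requires at least 2 items")
    limit = TIER_MAX[tier]
    if len(items) > limit:
        raise ValueError(f"batch exceeds {tier} limit of {limit} items")
    return True
```

Validating locally avoids a round trip that would end in a 400 Batch Too Large or Batch Too Small response.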
Prediction Item Schema
Each item in the predictions array should contain:
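An illustrative item, built in Python. The id field is the optional tracking ID mentioned under Best Practices; the remaining field names are hypothetical, and should match whatever parameters your single Predict endpoint accepts.

```python
# Illustrative prediction item. Only "id" is documented here (optional,
# echoed back in results); the other fields are hypothetical stand-ins
# for the single Predict endpoint's parameters.
item = {
    "id": "station-42",                   # optional client-supplied ID
    "timestamp": "2024-01-15T08:00:00Z",  # hypothetical input field
    "latitude": 40.71,                    # hypothetical input field
    "longitude": -74.0,                   # hypothetical input field
}
print(sorted(item))
```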
Response Fields
Indicates if the batch request was processed successfully.
Contains batch processing results.
Total number of predictions in the batch.
Number of predictions that completed successfully.
Number of predictions that failed.
Array of prediction results, maintaining the same order as the request. Each result contains either:
- Success: Same structure as single prediction response
- Error: Error details with the original request ID
Total processing time for the entire batch in milliseconds.
ISO 8601 timestamp when batch processing completed.
Example Request
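A sketch of building the request body in Python. The feature fields inside each item are hypothetical placeholders; use your single Predict endpoint's parameters.

```python
import json

# Sketch of a batch request body. The per-item fields beyond "id" and
# "timestamp" are hypothetical -- mirror the single Predict endpoint.
payload = {
    "predictions": [
        {"id": "a", "timestamp": "2024-01-15T08:00:00Z"},
        {"id": "b", "timestamp": "2024-01-15T09:00:00Z"},
    ],
    "fail_on_error": False,       # recommended for robustness
    "parallel_processing": True,  # Pro tier and above
}

body = json.dumps(payload)
print(body)
```

Note the batch contains two items, the documented minimum.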
Example Response
200 Success
Partial Success Response
When fail_on_error is false and some predictions fail:
200 Partial Success
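A sketch of separating successes from failures in a partial-success response. The name of the results container ("batch" here) and the sample values are assumptions for illustration; the counts and ordering follow the Response Fields documented above.

```python
# Illustrative partial-success response; container name "batch" and the
# sample values are assumptions, not the exact wire format.
response = {
    "success": True,
    "batch": {
        "total": 3,
        "successful": 2,
        "failed": 1,
        "results": [
            {"id": "a", "prediction": 42.0},
            {"id": "b", "error": {"message": "invalid timestamp"}},
            {"id": "c", "prediction": 38.5},
        ],
    },
}

results = response["batch"]["results"]
succeeded = [r for r in results if "error" not in r]
failed = [r for r in results if "error" in r]
print(len(succeeded), len(failed))  # results keep request order
```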
Error Responses
400 Bad Request - Batch Too Large
400 Bad Request - Batch Too Small
403 Forbidden - Feature Not Available
500 Internal Server Error
Batch Size Limits
| Subscription Tier | Maximum Batch Size | Processing Speed |
|---|---|---|
| Free | Not available | - |
| Basic | 100 predictions | Standard |
| Pro | 500 predictions | Parallel |
| Enterprise | 1,000 predictions | Optimized |
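Workloads larger than your tier's limit can be split into multiple requests. A minimal chunking sketch using the limits from the table above:

```python
# Maximum batch sizes per tier, from the table above (batch predictions
# are not available on the Free tier).
TIER_LIMITS = {"basic": 100, "pro": 500, "enterprise": 1000}

def chunk(items: list, tier: str) -> list:
    """Split a workload into batches no larger than the tier limit."""
    size = TIER_LIMITS[tier]
    return [items[i:i + size] for i in range(0, len(items), size)]

batches = chunk(list(range(1250)), "pro")
print([len(b) for b in batches])  # [500, 500, 250]
```

If the final chunk would contain a single item, remember the documented minimum of 2 items per batch.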
Use Cases
Historical Data Analysis
Process historical air quality data to identify trends and patterns:
Multi-Location Monitoring
Monitor air quality across multiple locations simultaneously:
Time-Series Forecasting
Generate predictions for different future scenarios:
Best Practices
Optimize batch processing for better performance and reliability.
Batch Size Optimization
- Start with smaller batches (25-50) to test your integration
- Increase batch size gradually to find optimal throughput
- Consider network latency when choosing batch size
- For real-time applications, prefer smaller batches (< 100)
- For bulk processing, maximize batch size for your tier
Error Handling
- Always set fail_on_error: false for production systems
- Implement retry logic for failed predictions
- Log failed predictions for later analysis
- Monitor success rate across batches
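The retry recommendation above can be sketched as follows. send_batch is a hypothetical stand-in for your HTTP call to the batch endpoint, and the response shape follows the fields documented earlier.

```python
import time

def retry_failed(items, send_batch, max_attempts=3, backoff=1.0):
    """Resubmit failed predictions with exponential backoff.

    send_batch is a hypothetical callable that posts a batch and returns
    a dict with a "results" list in the same order as the input items.
    Returns (results_by_id, items_still_failing).
    """
    pending = list(items)
    results = {}
    for attempt in range(max_attempts):
        if not pending:
            break
        response = send_batch(pending)
        still_failing = []
        for item, result in zip(pending, response["results"]):
            if "error" in result:
                still_failing.append(item)  # log these for later analysis
            else:
                results[item["id"]] = result
        pending = still_failing
        if pending:
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    return results, pending
```

Items left in the second return value after max_attempts should be logged for later analysis, per the practices above.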
Performance Tips
- Enable parallel_processing for batches > 50 items
- Group predictions by geographic region for better caching
- Include the optional id field to track individual predictions
- Reuse the same timestamp for simultaneous measurements