The Batches API lets you submit a collection of requests to be processed asynchronously. This is useful for large workloads where you want to avoid rate limits and reduce cost. Portkey proxies batch requests to the configured provider.
Batch processing is provider-dependent. Check your provider’s documentation for supported endpoints and file formats.

Create a batch

POST /v1/batches Creates a new batch job from a previously uploaded .jsonl file.

Request

x-portkey-provider
string
required
The provider to route the request to (e.g. openai).
x-portkey-api-key
string
required
Your Portkey API key or provider API key.
input_file_id
string
required
The ID of the uploaded file containing the batch requests. Use POST /v1/files to upload this file with purpose=batch.
endpoint
string
required
The API endpoint to run the batch against. For example, /v1/chat/completions.
completion_window
string
required
The time window for completion. Currently 24h is the most commonly supported value.
metadata
object
Optional key-value metadata to attach to the batch object.
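Before creating a batch, you need an input file whose lines each describe one request. A minimal sketch of building that file in Python follows; the per-line shape (`custom_id`, `method`, `url`, `body`) follows the OpenAI batch input convention and may differ for other providers, and the model name is a placeholder:

```python
import json

def build_batch_input(chat_requests, model="gpt-4o-mini"):
    """Serialize chat requests into .jsonl lines for a batch input file.

    The custom_id/method/url/body line shape follows the OpenAI batch
    convention; check your provider's documentation for its format.
    """
    lines = []
    for i, messages in enumerate(chat_requests):
        lines.append(json.dumps({
            "custom_id": f"request-{i}",    # used later to match outputs to inputs
            "method": "POST",
            "url": "/v1/chat/completions",  # must match the batch `endpoint`
            "body": {"model": model, "messages": messages},
        }))
    return "\n".join(lines) + "\n"

content = build_batch_input([
    [{"role": "user", "content": "Hello"}],
    [{"role": "user", "content": "Summarize batching in one line"}],
])
with open("batch_input.jsonl", "w") as f:
    f.write(content)
```

Upload the resulting file with POST /v1/files and purpose=batch, then pass the returned file ID as input_file_id.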

Response

id
string
Unique identifier for the batch.
object
string
Always batch.
endpoint
string
The API endpoint the batch runs against.
status
string
Current status of the batch. Values: validating, in_progress, completed, failed, cancelling, cancelled.
input_file_id
string
The ID of the input file.
output_file_id
string
The ID of the output file once the batch has completed.
created_at
number
Unix timestamp when the batch was created.
completed_at
number
Unix timestamp when the batch completed, or null if still in progress.
request_counts
object
Counts of requests in the batch, such as total, completed, and failed.
curl https://your-gateway.example.com/v1/batches \
  -X POST \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: YOUR_API_KEY" \
  -H "x-portkey-provider: openai" \
  -d '{
    "input_file_id": "file-abc123",
    "endpoint": "/v1/chat/completions",
    "completion_window": "24h"
  }'

List batches

GET /v1/batches Returns a list of batch jobs for the configured provider.

Request

x-portkey-provider
string
required
The provider to route the request to.
x-portkey-api-key
string
required
Your Portkey API key or provider API key.
limit
number
default: 20
Maximum number of batches to return.
after
string
Cursor for pagination. Pass the id of the last batch from the previous page.

Response

object
string
Always list.
data
array
Array of batch objects.
has_more
boolean
Whether additional batches are available beyond this page.
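The limit, after, and has_more fields together support cursor pagination. A sketch of walking every page, with the HTTP call injected as a function so the cursor logic stands alone (fetch_page is a hypothetical helper you supply):

```python
def list_all_batches(fetch_page, limit=20):
    """Walk GET /v1/batches pages using `limit` and the `after` cursor.

    `fetch_page(limit, after)` performs the HTTP call and returns the
    parsed JSON response for one page.
    """
    batches, after = [], None
    while True:
        page = fetch_page(limit=limit, after=after)
        batches.extend(page["data"])
        if not page.get("has_more"):
            return batches
        after = page["data"][-1]["id"]  # cursor = id of the last batch on this page
```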
curl https://your-gateway.example.com/v1/batches \
  -H "x-portkey-api-key: YOUR_API_KEY" \
  -H "x-portkey-provider: openai"

Retrieve a batch

GET /v1/batches/:id Retrieves a single batch job by its ID. Use this to poll the status of an in-progress batch.

Request

id
string
required
The ID of the batch to retrieve.
x-portkey-provider
string
required
The provider to route the request to.
x-portkey-api-key
string
required
Your Portkey API key or provider API key.

Response

Returns a single batch object. See Create a batch for the full field list.
curl https://your-gateway.example.com/v1/batches/batch-xyz789 \
  -H "x-portkey-api-key: YOUR_API_KEY" \
  -H "x-portkey-provider: openai"
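Polling this endpoint until the batch reaches a terminal status is the usual pattern. A minimal sketch, with the HTTP call injected as a function (get_batch is a hypothetical helper you supply) and the terminal status set taken from the values above:

```python
import time

TERMINAL_STATUSES = {"completed", "failed", "cancelled"}

def wait_for_batch(get_batch, batch_id, interval=30, timeout=86400, sleep=time.sleep):
    """Poll GET /v1/batches/:id until the batch reaches a terminal status.

    `get_batch(batch_id)` performs the HTTP call and returns the batch object.
    """
    deadline = time.monotonic() + timeout
    while True:
        batch = get_batch(batch_id)
        if batch["status"] in TERMINAL_STATUSES:
            return batch
        if time.monotonic() >= deadline:
            raise TimeoutError(f"batch {batch_id} still {batch['status']}")
        sleep(interval)
```

Pick an interval appropriate to the completion window; with a 24h window, polling every few minutes is usually sufficient.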

Get batch output

GET /v1/batches/*/output Retrieves the output file content for a completed batch. The wildcard path segment accepts the batch ID.
The batch must have status: completed before output is available. Retrieve the batch first to check its output_file_id, then use GET /v1/files/:id/content to download the output directly.

Request

*
string
required
The batch ID. For example, /v1/batches/batch-xyz789/output.
x-portkey-provider
string
required
The provider to route the request to.
x-portkey-api-key
string
required
Your Portkey API key or provider API key.

Response

Returns the raw .jsonl output content. Each line corresponds to a response for a single request in the batch.
curl https://your-gateway.example.com/v1/batches/batch-xyz789/output \
  -H "x-portkey-api-key: YOUR_API_KEY" \
  -H "x-portkey-provider: openai" \
  --output batch_results.jsonl
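Once downloaded, the output file can be indexed by custom_id to match each response back to its input request. A sketch, assuming each output line carries a custom_id alongside the response (the OpenAI batch output convention; check your provider's format):

```python
import json

def parse_batch_output(jsonl_text):
    """Index batch output lines by custom_id.

    Assumes each line is a JSON object with a `custom_id` field, per the
    OpenAI batch output convention.
    """
    results = {}
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        row = json.loads(line)
        results[row["custom_id"]] = row
    return results

sample = '{"custom_id": "request-0", "response": {"status_code": 200}}\n'
out = parse_batch_output(sample)
```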

Cancel a batch

POST /v1/batches/:id/cancel Cancels an in-progress batch. Requests that have already been processed are not reversed.
Cancellation is asynchronous. The batch status transitions to cancelling and then cancelled. Already-processed requests are not refunded.

Request

id
string
required
The ID of the batch to cancel.
x-portkey-provider
string
required
The provider to route the request to.
x-portkey-api-key
string
required
Your Portkey API key or provider API key.

Response

Returns the updated batch object with status: cancelling.
curl https://your-gateway.example.com/v1/batches/batch-xyz789/cancel \
  -X POST \
  -H "x-portkey-api-key: YOUR_API_KEY" \
  -H "x-portkey-provider: openai"
