The Upload class from @aws-sdk/lib-storage handles multipart uploads automatically. It is designed for uploading large files, buffers, blobs, or readable streams of unknown size, using configurable concurrency to maximize throughput.
## Installation

```sh
npm install @aws-sdk/lib-storage @aws-sdk/client-s3
```
## Basic usage

```js
import { Upload } from "@aws-sdk/lib-storage";
import { S3Client } from "@aws-sdk/client-s3";

try {
  const upload = new Upload({
    client: new S3Client({}),
    params: { Bucket, Key, Body },
    // optional: S3 object tags to apply to the completed upload
    tags: [],
    // optional: number of concurrent part uploads (default: 4)
    queueSize: 4,
    // optional: size of each part in bytes, minimum 5 MB (default: 5 MB)
    partSize: 1024 * 1024 * 5,
    // optional: when true, do not call AbortMultipartUpload on failure
    leavePartsOnError: false,
  });

  upload.on("httpUploadProgress", (progress) => {
    console.log(progress);
  });

  await upload.done();
} catch (e) {
  console.error(e);
}
```
## Configuration options

| Option | Type | Default | Description |
|---|---|---|---|
| `client` | `S3Client` or `S3` | required | The S3 client instance to use |
| `params` | object | required | Standard S3 `PutObject` parameters: `Bucket`, `Key`, `Body`, etc. |
| `queueSize` | number | `4` | Number of parts to upload concurrently |
| `partSize` | number | `5242880` | Size of each part in bytes; must be at least 5 MB (5,242,880 bytes) |
| `leavePartsOnError` | boolean | `false` | When `true`, uploaded parts are not cleaned up on failure |
| `tags` | array | `[]` | S3 object tags to apply to the completed upload |
The Body parameter accepts Buffer, Blob, Readable streams, and ReadableStream. This makes Upload suitable for piping file system streams or HTTP responses directly to S3.
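For example, a file can be streamed from disk without buffering it in memory. A minimal sketch; the bucket name and file path below are placeholders:

```js
import { createReadStream } from "node:fs";
import { Upload } from "@aws-sdk/lib-storage";
import { S3Client } from "@aws-sdk/client-s3";

// Stream a local file to S3; Upload reads parts from the stream
// as it goes, so the whole file never sits in memory at once.
// "my-bucket" and "backup.tar.gz" are placeholder names.
const upload = new Upload({
  client: new S3Client({}),
  params: {
    Bucket: "my-bucket",
    Key: "backup.tar.gz",
    Body: createReadStream("./backup.tar.gz"),
  },
});

await upload.done();
```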
## Progress events

Listen to the `httpUploadProgress` event to track upload progress:

```js
upload.on("httpUploadProgress", (progress) => {
  console.log(progress);
  // { loaded: 5242880, total: 52428800, part: 1, Key: "my-file", Bucket: "my-bucket" }
});
```
The progress object contains:

| Field | Description |
|---|---|
| `loaded` | Bytes uploaded so far |
| `total` | Total bytes to upload (if known) |
| `part` | Part number currently being uploaded |
| `Key` | S3 object key |
| `Bucket` | S3 bucket name |
`total` may be `undefined` when uploading a stream of unknown size.
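Because `total` can be missing for streams, guard against it before computing a percentage. A small sketch; the helper name `formatProgress` is illustrative, not part of the library:

```js
// Render an httpUploadProgress event as a human-readable string.
// Falls back to a plain byte count when the total size is unknown.
function formatProgress({ loaded, total }) {
  if (total === undefined) {
    return `${loaded} bytes uploaded`;
  }
  const percent = Math.round((loaded / total) * 100);
  return `${percent}% (${loaded}/${total} bytes)`;
}

console.log(formatProgress({ loaded: 5242880, total: 52428800 })); // "10% (5242880/52428800 bytes)"
console.log(formatProgress({ loaded: 5242880 })); // "5242880 bytes uploaded"
```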
## Aborting an upload

Use an `AbortController` to cancel an in-progress upload:

```js
import { Upload } from "@aws-sdk/lib-storage";
import { S3Client } from "@aws-sdk/client-s3";

const controller = new AbortController();

const upload = new Upload({
  client: new S3Client({}),
  params: { Bucket, Key, Body },
  abortController: controller,
});

// Cancel the upload at any time
controller.abort();

try {
  await upload.done();
} catch (e) {
  // done() rejects with an abort error once the upload is cancelled
  console.error(e);
}
```
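The same mechanism can enforce an upload deadline by pairing the controller with a timer. A sketch; the 30-second limit is an arbitrary choice:

```js
import { Upload } from "@aws-sdk/lib-storage";
import { S3Client } from "@aws-sdk/client-s3";

const controller = new AbortController();
// Give up on the upload if it has not completed within 30 seconds.
const timer = setTimeout(() => controller.abort(), 30_000);

const upload = new Upload({
  client: new S3Client({}),
  params: { Bucket, Key, Body },
  abortController: controller,
});

try {
  await upload.done();
} catch (e) {
  console.error(e); // an abort error if the deadline was hit
} finally {
  clearTimeout(timer);
}
```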
## Error handling

By default, Upload calls AbortMultipartUpload when an upload fails, which removes any parts already uploaded to S3.

Set `leavePartsOnError: true` to disable this behavior and retain the uploaded parts for manual recovery:

```js
const upload = new Upload({
  client: new S3Client({}),
  params: { Bucket, Key, Body },
  leavePartsOnError: true,
});
```
Leaving parts on error can incur storage costs. If you use `leavePartsOnError: true`, you are responsible for listing and aborting incomplete multipart uploads to avoid unnecessary charges.
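One way to clean up afterwards is with the `ListMultipartUploadsCommand` and `AbortMultipartUploadCommand` commands from `@aws-sdk/client-s3`. A sketch; "my-bucket" is a placeholder, and for ongoing cleanup an S3 lifecycle rule with `AbortIncompleteMultipartUpload` is usually simpler:

```js
import {
  S3Client,
  ListMultipartUploadsCommand,
  AbortMultipartUploadCommand,
} from "@aws-sdk/client-s3";

const client = new S3Client({});
const Bucket = "my-bucket"; // placeholder bucket name

// Find incomplete multipart uploads left behind in the bucket...
const { Uploads = [] } = await client.send(
  new ListMultipartUploadsCommand({ Bucket })
);

// ...and abort each one so its stored parts stop incurring charges.
for (const { Key, UploadId } of Uploads) {
  await client.send(
    new AbortMultipartUploadCommand({ Bucket, Key, UploadId })
  );
}
```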