
Reuse clients

Create one client per service + region combination and reuse it across calls. Each client caches credentials, endpoint resolution, and (optionally) the middleware stack. Recreating clients in a loop or per-request wastes this work.
// Creates a new client — and re-resolves credentials and endpoints — on every iteration
for (const item of items) {
  const client = new S3Client({ region, credentials });
  await client.send(new PutObjectCommand(item));
}
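The corrected version creates the client once, outside the loop. As in the snippet above, `region`, `credentials`, and `items` are assumed to be defined elsewhere:
```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// One client: credentials and endpoint resolution are cached across all calls
const client = new S3Client({ region, credentials });

for (const item of items) {
  await client.send(new PutObjectCommand(item));
}
```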
If you need to make requests to multiple regions, instantiate a separate client per region. You can share credentials and request handlers between them to avoid redundant credential resolution:
import { fromTemporaryCredentials } from "@aws-sdk/credential-providers";
import { S3Client } from "@aws-sdk/client-s3";

const credentialProvider = fromTemporaryCredentials();

const s3 = {
  east: new S3Client({ region: "us-east-1", credentials: credentialProvider }),
  west: new S3Client({ region: "us-west-2", credentials: credentialProvider }),
};
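If the set of regions is not known up front, a small cache gives you one lazily created client per region. This is a minimal sketch in plain TypeScript, not an SDK API; `makeClient` stands in for a call like `new S3Client({ region, credentials })` so the example stays self-contained:
```typescript
// Minimal per-region client cache (sketch; getClient is a hypothetical helper)
type RegionalClient = { region: string };

const clientCache = new Map<string, RegionalClient>();

function getClient(
  region: string,
  makeClient: (region: string) => RegionalClient
): RegionalClient {
  let client = clientCache.get(region);
  if (client === undefined) {
    // First request for this region: create and cache the client
    client = makeClient(region);
    clientCache.set(region, client);
  }
  return client;
}
```
Repeated calls for the same region return the same cached instance, so credential and endpoint resolution happen once per region.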

Do not mutate client.config

The client.config object is not the same as the constructor input. It has been through a process called “config resolution”, where many fields (including region and credentials) are wrapped in async provider functions. Writing back to this object causes errors.
client.config.region = "us-west-2" will throw TypeError: config.region is not a function at the next call. Create a new client instead.
// Throws at runtime:
client.config.region = "us-west-2";  // region is already a function, not a string

// Also unreliable — endpoint depends on operation inputs:
const endpoint = await client.config.endpoint(); // may throw
To resolve the endpoint for a specific operation, use getEndpointFromInstructions from @smithy/middleware-endpoint, passing both the command and its input parameters.
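A hedged sketch of that usage follows; the argument order is per @smithy/middleware-endpoint, so verify it against your installed version:
```typescript
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getEndpointFromInstructions } from "@smithy/middleware-endpoint";

const client = new S3Client({ region: "us-east-1" });

// Resolve the endpoint for one specific operation + input combination
const endpoint = await getEndpointFromInstructions(
  { Bucket: "my-bucket", Key: "my-key" }, // operation input
  GetObjectCommand, // supplies the endpoint parameter instructions
  client.config
);

console.log(endpoint.url);
```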

Always read streaming responses to completion

Some operations (most notably S3.GetObject) return a byte stream. Awaiting the command gives you the response headers and status code, but the underlying socket stays open until you read or discard the stream body.
Leaving a streaming response unread exhausts your connection pool. In Node.js this can cause your application to slow down, leak memory, or deadlock.
const response = await client.send(new GetObjectCommand({ Bucket, Key }));

// Socket is still open here — body has not been read
console.log(response.$metadata.httpStatusCode);
Handle the body using one of the built-in methods. Streams can only be read once.
const response = await client.send(new GetObjectCommand({ Bucket, Key }));

// Option 1: buffer as bytes
const bytes = await response.Body.transformToByteArray();

// Option 2: pipe to another destination (e.g. re-upload to S3)
await client.send(new PutObjectCommand({ Bucket: dest, Key, Body: response.Body }));

// Option 3: discard (Node.js Readable exposes destroy(); web ReadableStream exposes cancel())
await (response.Body.destroy?.() ?? response.Body.cancel?.());
Operations with streaming response bodies are marked with <SdkStream> in the “Example Syntax” section of their API reference page.
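To make "reading to completion" concrete, here is a sketch of what a method like transformToByteArray has to do, written against a plain web ReadableStream from the Node.js standard library rather than an SDK response body:
```typescript
import { ReadableStream } from "node:stream/web";

// Drain every chunk, then assemble them into one contiguous byte array.
async function toByteArray(stream: ReadableStream<Uint8Array>): Promise<Uint8Array> {
  const chunks: Uint8Array[] = [];
  const reader = stream.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break; // stream fully consumed; the underlying source is released
    chunks.push(value);
  }
  const out = new Uint8Array(chunks.reduce((n, c) => n + c.length, 0));
  let offset = 0;
  for (const chunk of chunks) {
    out.set(chunk, offset);
    offset += chunk.length;
  }
  return out;
}
```
Only after the final `done` read does the producer side (for an SDK response, the HTTP socket) become free for reuse.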

Allow more connection time for cross-region requests

On Node.js v20 and later, the TCP autoSelectFamilyAttemptTimeout default of 250 ms can be too low for cross-region requests, especially between regions on opposite sides of the globe. This may surface as an AggregateError with code ETIMEDOUT. Increase the timeout at application startup or via a Node.js launch flag:
import net from "node:net";

net.setDefaultAutoSelectFamilyAttemptTimeout(500);
Or as a CLI flag: --network-family-autoselection-attempt-timeout=500

Bundle size: bare-bones vs aggregated clients

V3 ships two styles of client. Choose the one that fits your use case.
  • Bare-bones: import S3Client plus individual command classes. Tree-shakeable: only the commands you import end up in the bundle.
  • Aggregated: import S3. Every command is included, giving a larger bundle but a simpler API.
import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({});
await client.send(new GetObjectCommand({ Bucket, Key }));
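For comparison, the aggregated style imports one class that carries every operation as a method (`Bucket` and `Key` are assumed to be defined, as in the snippet above):
```typescript
import { S3 } from "@aws-sdk/client-s3";

const client = new S3({});
// No per-command imports; every operation is a method on the client
await client.getObject({ Bucket, Key });
```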
Measured bundle size difference (DynamoDB, minified with esbuild):
  • aws-sdk v2: 3.2 MB
  • @aws-sdk/client-dynamodb v3: 214 kB

Dynamic imports for code splitting

If your application only needs an SDK package in certain code paths, use a dynamic import so it is loaded on demand rather than at startup:
async function uploadIfNeeded(condition: boolean) {
  if (!condition) return;

  const { S3Client, PutObjectCommand } = await import("@aws-sdk/client-s3");
  const client = new S3Client({});
  await client.send(new PutObjectCommand({ Bucket, Key, Body }));
}
This reduces the initial module load — particularly useful for Lambda cold start time.
V3 credential providers use dynamic imports internally. Loading @aws-sdk/credential-provider-node in v3 pulls 8 files (~26 kB) into the module cache, compared to 53 files (~398 kB) for the equivalent v2 provider.

Parallel workloads in Node.js

When making many requests concurrently, configure maxSockets on the NodeHttpHandler to match your parallelism level:
import { S3 } from "@aws-sdk/client-s3";

const s3 = new S3({
  cacheMiddleware: true, // cache middleware resolution when not using custom middleware
  requestHandler: {
    httpsAgent: {
      keepAlive: true,
      maxSockets: 50, // set to your parallel batch size
    },
  },
});
Setting maxSockets too low throttles your parallel workload. Setting it too high risks opening too many file descriptors (EMFILE: too many open files). Match it to your actual batch size.
Batch upload example (10,000 files, 100 at a time):
const files = [/* ... */];
const BATCH_SIZE = 100;

const s3 = new S3({
  requestHandler: {
    httpsAgent: { maxSockets: BATCH_SIZE },
  },
});

while (files.length) {
  const batch = files.splice(0, BATCH_SIZE);
  await Promise.all(
    batch.map((file) =>
      s3.putObject({ Bucket: "my-bucket", Key: file.name, Body: file.contents })
    )
  );
}
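The same batching shape can be factored into a generic, reusable helper. A sketch in plain TypeScript; `inBatches` is a hypothetical helper, not an SDK API:
```typescript
// Run `worker` over `items` with at most `batchSize` calls in flight at a time
async function inBatches<T, R>(
  items: readonly T[],
  batchSize: number,
  worker: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // Each batch completes before the next begins, bounding concurrency
    results.push(...(await Promise.all(batch.map(worker))));
  }
  return results;
}
```
Pair the batch size with maxSockets, e.g. `inBatches(files, 100, (file) => s3.putObject({ ... }))` with `maxSockets: 100`.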

Avoiding streaming deadlock

If your maxSockets value is low and you have parallel streaming responses, you can deadlock. The pattern below is dangerous:
// Deadlock risk: streams are not read before awaiting both responses
const responses = await Promise.all([
  s3.getObject({ Bucket, Key: "file1" }),
  s3.getObject({ Bucket, Key: "file2" }),
]);
await Promise.all(responses.map((r) => r.Body.transformToByteArray()));
The second getObject waits for a socket that the first response is holding open. Fix: pipeline stream reading so responses are processed as they arrive:
const gets = [
  s3.getObject({ Bucket, Key: "file1" }),
  s3.getObject({ Bucket, Key: "file2" }),
];
// Set up the read pipeline before awaiting
const reads = gets.map((get) => get.then((r) => r.Body.transformToByteArray()));
await Promise.all(reads);

Sharing credentials and socket pools across clients

If you need multiple regional clients, share their credentials and request handler to avoid redundant work:
import { S3 } from "@aws-sdk/client-s3";
import { fromNodeProviderChain } from "@aws-sdk/credential-providers";
import { NodeHttpHandler } from "@smithy/node-http-handler";

const credentials = fromNodeProviderChain();
const requestHandler = new NodeHttpHandler({
  httpsAgent: { maxSockets: 100 },
});

const s3East = new S3({ region: "us-east-1", credentials, requestHandler });
const s3West = new S3({ region: "us-west-2", credentials, requestHandler });
