Create one client per service + region combination and reuse it across calls. Each client caches credentials, endpoint resolution, and (optionally) the middleware stack. Recreating clients in a loop or per-request wastes this work.
```ts
// Creates a new client — and re-resolves credentials and endpoints — on every iteration
for (const item of items) {
  const client = new S3Client({ region, credentials });
  await client.send(new PutObjectCommand(item));
}
```
If you need to make requests to multiple regions, instantiate a separate client per region. You can share credentials and request handlers between them to avoid redundant credential resolution:
```ts
import { fromTemporaryCredentials } from "@aws-sdk/credential-providers";
import { S3Client } from "@aws-sdk/client-s3";

const credentialProvider = fromTemporaryCredentials();

const s3 = {
  east: new S3Client({ region: "us-east-1", credentials: credentialProvider }),
  west: new S3Client({ region: "us-west-2", credentials: credentialProvider }),
};
```
The client.config object is not the same as the constructor input. It has been through a process called “config resolution”, where many fields (including region and credentials) are wrapped in async provider functions. Writing back to this object causes errors.
`client.config.region = "us-west-2"` will throw `TypeError: config.region is not a function` at the next call. Create a new client instead.
```ts
// Throws at runtime:
client.config.region = "us-west-2"; // region is already a function, not a string

// Also unreliable — endpoint depends on operation inputs:
const endpoint = await client.config.endpoint(); // may throw
```
To resolve the endpoint for a specific operation, use getEndpointFromInstructions from @smithy/middleware-endpoint, passing both the command and its input parameters.
Some operations (most notably S3.GetObject) return a byte stream. Awaiting the command gives you the response headers and status code, but the underlying socket stays open until you read or discard the stream body.
Leaving a streaming response unread exhausts your connection pool. In Node.js this can cause your application to slow down, leak memory, or deadlock.
```ts
const response = await client.send(new GetObjectCommand({ Bucket, Key }));
// Socket is still open here — body has not been read
console.log(response.$metadata.httpStatusCode);
```
Handle the body using one of the built-in methods, such as transformToString, transformToByteArray, or transformToWebStream. Streams can only be read once.
Allow more connection time for cross-region requests
On Node.js v20 and later, the TCP autoSelectFamilyAttemptTimeout default of 250 ms can be too low for cross-region requests, especially between regions on opposite sides of the globe. This may surface as an AggregateError with code ETIMEDOUT. Increase the timeout at application startup or via a Node.js launch flag:
```ts
import net from "node:net";

net.setDefaultAutoSelectFamilyAttemptTimeout(500);
```
Or as a CLI flag: `--network-family-autoselection-attempt-timeout=500`
If your application only needs an SDK package in certain code paths, use a dynamic import so it is loaded on demand rather than at startup:
```ts
async function uploadIfNeeded(condition: boolean) {
  if (!condition) return;
  const { S3Client, PutObjectCommand } = await import("@aws-sdk/client-s3");
  const client = new S3Client({});
  await client.send(new PutObjectCommand({ Bucket, Key, Body }));
}
```
This reduces the initial module load — particularly useful for Lambda cold start time.
V3 credential providers use dynamic imports internally. Loading @aws-sdk/credential-provider-node in v3 pulls 8 files (~26 kB) into the module cache, compared to 53 files (~398 kB) for the equivalent v2 provider.
When making many requests concurrently, configure maxSockets on the NodeHttpHandler to match your parallelism level:
```ts
import { S3 } from "@aws-sdk/client-s3";

const s3 = new S3({
  cacheMiddleware: true, // cache middleware resolution when not using custom middleware
  requestHandler: {
    httpsAgent: {
      keepAlive: true,
      maxSockets: 50, // set to your parallel batch size
    },
  },
});
```
Setting maxSockets too low throttles your parallel workload. Setting it too high risks opening too many file descriptors (EMFILE: too many open files). Match it to your actual batch size.
Batch upload example (10,000 files, 100 at a time):
If your maxSockets value is low and you have parallel streaming responses, you can deadlock. The pattern below is dangerous:
```ts
// Deadlock risk: streams are not read before awaiting both responses
const responses = await Promise.all([
  s3.getObject({ Bucket, Key: "file1" }),
  s3.getObject({ Bucket, Key: "file2" }),
]);
await Promise.all(responses.map((r) => r.Body.transformToByteArray()));
```
The second getObject waits for a socket that the first response is holding open. Fix: attach the stream read to each request before awaiting, so responses are consumed as they arrive:
```ts
const gets = [
  s3.getObject({ Bucket, Key: "file1" }),
  s3.getObject({ Bucket, Key: "file2" }),
];

// Set up the read pipeline before awaiting
const reads = gets.map((get) => get.then((r) => r.Body.transformToByteArray()));

await Promise.all(reads);
```