Bun provides a fast, native S3 client that works with AWS S3, Cloudflare R2, DigitalOcean Spaces, MinIO, and any other S3-compatible storage service. The API is designed to feel like Bun’s local filesystem API — S3File extends Blob, so methods like .text(), .json(), .stream(), and .arrayBuffer() work the same way they do on local files.
import { s3 } from "bun";

const file = s3.file("user-data/123.json");

// Read from S3
const data = await file.json();

// Write to S3
await file.write(JSON.stringify({ name: "Alice", score: 99 }), {
  type: "application/json",
});

// Generate a presigned download URL (synchronous, no network request)
const url = file.presign({ expiresIn: 3600 });

// Delete the file
await file.delete();

Setup

Bun.s3 — global client using environment variables

Bun.s3 (also importable as s3 from "bun") reads credentials from environment variables automatically:
| Option | Environment variable |
| --- | --- |
| accessKeyId | S3_ACCESS_KEY_ID / AWS_ACCESS_KEY_ID |
| secretAccessKey | S3_SECRET_ACCESS_KEY / AWS_SECRET_ACCESS_KEY |
| region | S3_REGION / AWS_REGION |
| endpoint | S3_ENDPOINT / AWS_ENDPOINT |
| bucket | S3_BUCKET / AWS_BUCKET |
| sessionToken | S3_SESSION_TOKEN / AWS_SESSION_TOKEN |
S3_* variables take precedence over AWS_* fallbacks.

S3Client — explicit credentials

import { S3Client } from "bun";

const client = new S3Client({
  accessKeyId: "your-access-key",
  secretAccessKey: "your-secret-key",
  bucket: "my-bucket",
  region: "us-east-1",
  // endpoint: "https://s3.us-east-1.amazonaws.com",
});

Reading files

import { s3 } from "bun";

const file = s3.file("reports/q1.json");

const text    = await file.text();
const json    = await file.json();
const bytes   = await file.bytes();       // Uint8Array
const buffer  = await file.arrayBuffer();

// Stream the file
for await (const chunk of file.stream()) {
  process.stdout.write(chunk);
}

Partial reads with slice()

Use byte ranges to avoid downloading an entire large file:
const first1KB = await s3.file("data.bin").slice(0, 1024).bytes();
const partial  = await s3.file("log.txt").slice(0, 512).text();
Internally this uses the HTTP Range header.

Writing files

await s3.file("hello.txt").write("Hello, world!");
For large files, writer() automatically uses S3 multipart upload. You don’t need to configure this explicitly — it happens whenever you stream data in chunks.

Using Bun.write with S3 files

S3File is a Blob, so you can pass it to Bun.write just like a local BunFile:
import { s3 } from "bun";

await Bun.write(s3.file("output/data.json"), JSON.stringify({ hello: "world" }));

Presigned URLs

Presigned URLs grant time-limited access to a specific S3 object without exposing your credentials. Generating them is synchronous — no network request needed.
import { s3 } from "bun";

// Default: GET, expires in 24 hours
const downloadUrl = s3.presign("reports/q1.pdf");

// Upload URL (PUT), expires in 1 hour
const uploadUrl = s3.presign("uploads/avatar.png", {
  method: "PUT",
  expiresIn: 3600,
  type: "image/png",
});

// Force download with a specific filename
const attachmentUrl = s3.presign("reports/q1.pdf", {
  expiresIn: 3600,
  contentDisposition: 'attachment; filename="Q1 Report.pdf"',
});

// Public read access
const publicUrl = s3.file("public/logo.png").presign({
  acl: "public-read",
  expiresIn: 60 * 60 * 24 * 7, // 1 week
});

ACL options

| ACL | Description |
| --- | --- |
| "public-read" | Readable by anyone. |
| "private" | Readable only by the bucket owner. |
| "public-read-write" | Readable and writable by anyone. |
| "authenticated-read" | Readable by bucket owner and authenticated AWS users. |
| "bucket-owner-read" | Readable by the bucket owner. |
| "bucket-owner-full-control" | Full control for the bucket owner. |

Redirecting clients to S3

Wrap an S3File in a Response to issue a 302 redirect directly to the presigned URL — no need to proxy the file through your server:
import { s3 } from "bun";

export default {
  fetch(req: Request) {
    const file = s3.file("downloads/installer.dmg");
    return new Response(file); // 302 redirect to presigned URL
  },
};

File metadata and existence checks

const file = s3.file("data/users.csv");

// Check if a file exists
const exists = await file.exists();

// Get detailed metadata
const stat = await file.stat();
// { size: 204800, etag: "\"abc123\"", lastModified: Date, type: "text/csv" }

Deleting files

await s3.file("temp/upload.tmp").delete();

// Static method variant
await S3Client.delete("temp/upload.tmp", credentials);

Listing objects

import { S3Client } from "bun";

// List up to 1000 objects
const result = await S3Client.list(null, credentials);

// List with a prefix and pagination
const uploads = await S3Client.list(
  { prefix: "uploads/", maxKeys: 100 },
  credentials,
);

if (uploads.isTruncated) {
  const more = await S3Client.list(
    {
      prefix: "uploads/",
      maxKeys: 100,
      startAfter: uploads.contents!.at(-1)!.key,
    },
    credentials,
  );
}

S3-compatible services

Point endpoint to any S3-compatible service:
import { S3Client } from "bun";

const client = new S3Client({
  accessKeyId: "key",
  secretAccessKey: "secret",
  bucket: "my-bucket",
  // e.g. Cloudflare R2:
  endpoint: "https://<account-id>.r2.cloudflarestorage.com",
  // e.g. MinIO (self-hosted):
  // endpoint: "http://localhost:9000",
});

s3:// protocol

Use the s3:// protocol in Bun.file() and fetch() for a more ergonomic API:
const file = Bun.file("s3://my-bucket/path/to/file.txt");
const response = await fetch("s3://my-bucket/path/to/file.txt");
Pass S3 credentials inline:
const response = await fetch("s3://my-bucket/file.txt", {
  s3: {
    accessKeyId: "key",
    secretAccessKey: "secret",
    endpoint: "https://s3.us-east-1.amazonaws.com",
  },
});

Error codes

| Code | Description |
| --- | --- |
| ERR_S3_MISSING_CREDENTIALS | No credentials found |
| ERR_S3_INVALID_METHOD | HTTP method not allowed for this operation |
| ERR_S3_INVALID_PATH | Object key is invalid |
| ERR_S3_INVALID_ENDPOINT | Endpoint URL is malformed |
| ERR_S3_INVALID_SIGNATURE | Request signature verification failed |
| ERR_S3_INVALID_SESSION_TOKEN | Session token is invalid or expired |
Errors from the S3 service itself are instances of S3Error (an Error with name === "S3Error").
