bun:sql is Bun’s built-in database client. It provides a single unified API for PostgreSQL, MySQL, and SQLite using tagged template literals that automatically parameterize interpolated values, protecting against SQL injection.
import { sql } from "bun";
// PostgreSQL (configured via DATABASE_URL)
const users = await sql`
SELECT * FROM users
WHERE active = ${true}
LIMIT ${10}
`;
// MySQL
import { SQL } from "bun";
const mysql = new SQL("mysql://user:pass@localhost:3306/mydb");
const orders = await mysql`SELECT * FROM orders WHERE status = ${"pending"}`;
// SQLite
const sqlite = new SQL("sqlite://myapp.db");
const logs = await sqlite`SELECT * FROM logs WHERE level = ${"error"}`;
Features:
- Tagged template literals protect against SQL injection
- Unified API for PostgreSQL, MySQL, and SQLite
- Automatic connection pooling
- Transactions with savepoints and distributed transactions
- SCRAM-SHA-256 (SASL), MD5, and plain-text authentication
- TLS/SSL support
- BigInt support for large integers
- Binary protocol for better performance
Database support
PostgreSQL
PostgreSQL is the default adapter. It is used when the connection string does not match MySQL or SQLite patterns, or when no connection string is provided and DATABASE_URL points to a Postgres instance.
import { sql } from "bun"; // uses DATABASE_URL
import { SQL } from "bun";
const pg = new SQL("postgres://user:pass@localhost:5432/mydb");
await pg`SELECT version()`;
MySQL
Pass a mysql:// or mysql2:// connection string to enable MySQL:
import { SQL } from "bun";
const mysql = new SQL("mysql://user:pass@localhost:3306/mydb");
const users = await mysql`SELECT * FROM users WHERE id = ${userId}`;
SQLite
Pass a sqlite:// URL, :memory:, or a file:// path to use SQLite:
import { SQL } from "bun";
const db = new SQL("sqlite://app.db");
const memory = new SQL(":memory:");
Plain filenames like "myapp.db" require an explicit { adapter: "sqlite" } option to avoid being treated as a PostgreSQL connection.
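For example (a sketch, assuming the `(path, options)` constructor form; myapp.db is just an illustrative path):

```typescript
import { SQL } from "bun";

// Without the adapter option, a bare filename would be parsed as a
// PostgreSQL connection string rather than opened as a SQLite file.
const db = new SQL("myapp.db", { adapter: "sqlite" });
await db`CREATE TABLE IF NOT EXISTS logs (level TEXT)`;
```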
Connection via environment variables
The default sql instance (also available as Bun.sql) reads credentials from environment variables automatically.
PostgreSQL environment variables
| Variable | Description |
|---|---|
| POSTGRES_URL | Primary connection URL |
| DATABASE_URL | Alternative URL (auto-detected) |
| PGURL / PG_URL | Alternative URL |
| PGHOST | Host (default: localhost) |
| PGPORT | Port (default: 5432) |
| PGUSERNAME / PGUSER | Username (default: postgres) |
| PGPASSWORD | Password |
| PGDATABASE | Database name (default: username) |
| TLS_POSTGRES_DATABASE_URL | SSL/TLS-enabled connection URL |
MySQL environment variables
| Variable | Default | Description |
|---|---|---|
| MYSQL_URL | — | Primary MySQL URL |
| DATABASE_URL | — | Alternative URL |
| MYSQL_HOST | localhost | Host |
| MYSQL_PORT | 3306 | Port |
| MYSQL_USER | root | Username |
| MYSQL_PASSWORD | (empty) | Password |
| MYSQL_DATABASE | mysql | Database name |
SQLite environment variables
DATABASE_URL=":memory:"
DATABASE_URL="sqlite://./app.db"
DATABASE_URL="file:///absolute/path/to/app.sqlite"
Parameterized queries
All values interpolated into template literals are automatically parameterized — they are never interpolated as raw SQL strings.
const userId = 42;
const users = await sql`SELECT * FROM users WHERE id = ${userId}`;
This generates a prepared statement equivalent to SELECT * FROM users WHERE id = $1 with [42] as the parameter array.
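The mechanics can be illustrated with a plain tagged template function. This is a simplified mock, not Bun's implementation: the literal text becomes the SQL string with numbered placeholders, and the interpolated values become the parameter array.

```typescript
// Simplified illustration of how a tagged template splits a query:
// `strings` holds the literal SQL fragments, `values` the interpolations.
function mockSql(strings: TemplateStringsArray, ...values: unknown[]) {
  let text = strings[0];
  values.forEach((_, i) => {
    text += `$${i + 1}` + strings[i + 1];
  });
  return { text, values };
}

const userId = 42;
const q = mockSql`SELECT * FROM users WHERE id = ${userId}`;
console.log(q.text);   // "SELECT * FROM users WHERE id = $1"
console.log(q.values); // [42]
```

Because the interpolated value never touches the query string, a malicious input like `"1; DROP TABLE users"` simply becomes a bound parameter.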
Inserting data
import { sql } from "bun";
// Single row
const [user] = await sql`
INSERT INTO users (name, email)
VALUES (${name}, ${email})
RETURNING *
`;
// From an object
const userData = { name: "Alice", email: "[email protected]" };
const [newUser] = await sql`INSERT INTO users ${sql(userData)} RETURNING *`;
Bulk insert
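The sql() helper also accepts an array of objects, expanding it into a multi-row VALUES list. A sketch (table and column names are illustrative):

```typescript
import { sql } from "bun";

// Insert many rows in one statement by passing an array of objects.
const users = [
  { name: "Alice", email: "[email protected]" },
  { name: "Bob", email: "[email protected]" },
];
await sql`INSERT INTO users ${sql(users)}`;
```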
Picking specific columns
const user = { name: "Alice", email: "[email protected]", age: 25 };
// Only insert name and email
await sql`INSERT INTO users ${sql(user, "name", "email")}`;
Query results
By default, queries return an array of objects (one per row):
const users = await sql`SELECT * FROM users`;
// [{ id: 1, name: "Alice" }, { id: 2, name: "Bob" }]
.values() — rows as arrays
Useful when column names are duplicated across joined tables.
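A minimal sketch (assuming the users table from above):

```typescript
import { sql } from "bun";

// Each row comes back as an array of column values, in SELECT order.
const rows = await sql`SELECT id, name FROM users`.values();
// e.g. [[1, "Alice"], [2, "Bob"]]
```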
.raw() — rows as Buffer arrays
const rows = await sql`SELECT * FROM users`.raw();
// [[Buffer, Buffer], [Buffer, Buffer]]
SQL fragments and dynamic queries
Dynamic table names
const table = "users";
await sql`SELECT * FROM ${sql(table)}`;
// With schema
await sql`SELECT * FROM ${sql("public.users")}`;
Conditional clauses
const minAge = 21;
const filterAge = true;
await sql`
SELECT * FROM users
WHERE active = ${true}
${filterAge ? sql`AND age > ${minAge}` : sql``}
`;
Dynamic column updates
const updates = { name: "Alice", email: "[email protected]" };
// Update specific columns
await sql`UPDATE users SET ${sql(updates, "name", "email")} WHERE id = ${1}`;
// Update all columns in the object
await sql`UPDATE users SET ${sql(updates)} WHERE id = ${1}`;
Dynamic WHERE IN lists
await sql`SELECT * FROM users WHERE id IN ${sql([1, 2, 3])}`;
const users = [{ id: 1 }, { id: 2 }, { id: 3 }];
await sql`SELECT * FROM users WHERE id IN ${sql(users, "id")}`;
PostgreSQL array literals
await sql`INSERT INTO tags (items) VALUES (${sql.array(["red", "blue", "green"])})`;
// INSERT INTO tags (items) VALUES (ARRAY['red', 'blue', 'green'])
sql.array is PostgreSQL-only.
Transactions
await sql.begin(async tx => {
await tx`INSERT INTO users (name) VALUES (${"Alice"})`;
await tx`UPDATE accounts SET balance = balance - 100 WHERE user_id = ${1}`;
// auto-commits on return, auto-rolls back on throw
});
Savepoints
await sql.begin(async tx => {
await tx`INSERT INTO users (name) VALUES (${"Alice"})`;
await tx.savepoint(async sp => {
await sp`UPDATE users SET role = ${"admin"}`;
if (shouldRollback) throw new Error("rollback savepoint");
});
await tx`INSERT INTO audit_log (action) VALUES (${"user_created"})`;
});
Distributed transactions (two-phase commit)
// PostgreSQL: prepared transactions
// MySQL: XA transactions
await sql.beginDistributed("tx1", async tx => {
await tx`INSERT INTO users (name) VALUES (${"Alice"})`;
});
// Later, in another session or process:
await sql.commitDistributed("tx1");
// or
await sql.rollbackDistributed("tx1");
Connection pooling
Bun’s SQL client manages a connection pool automatically. No connections are created until the first query runs.
const db = new SQL({
url: "postgres://user:pass@localhost:5432/mydb",
max: 20, // max concurrent connections
idleTimeout: 30, // close idle connections after 30s
maxLifetime: 3600, // max connection lifetime in seconds
connectionTimeout: 10,
});
// two queries run concurrently, each on its own connection
await Promise.all([
db`SELECT * FROM users`,
db`SELECT * FROM orders`,
]);
await db.close(); // wait for all queries to finish
await db.close({ timeout: 5 }); // wait up to 5s then close
await db.close({ timeout: 0 }); // close immediately
Reserved connections
const reserved = await sql.reserve();
try {
await reserved`SET search_path TO myschema`;
await reserved`SELECT * FROM users`;
} finally {
reserved.release();
}
// or with Symbol.dispose
{
using reserved = await sql.reserve();
await reserved`SELECT 1`;
} // auto-released
Prepared statements
By default, Bun caches named prepared statements server-side for better performance. Disable this for compatibility with PGBouncer or when queries are highly dynamic:
const db = new SQL({
// ...
prepare: false, // use unnamed prepared statements
});
Multiple statements and raw SQL
.simple() — multiple statements without parameters
await sql`
CREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT);
CREATE INDEX users_name_idx ON users (name);
`.simple();
sql.file() — load SQL from a file
const results = await sql.file("migrations/001.sql");
const rows = await sql.file("queries/find_user.sql", [userId]);
sql.unsafe() — raw SQL strings
sql.unsafe() executes the SQL string as-is: positional parameters in the array are still bound, but the query string itself is not escaped. Only use this when you fully control the input.
const result = await sql.unsafe(`SELECT ${trustedColumn} FROM users WHERE id = $1`, [id]);
Cancelling queries
Bun’s SQL client is lazy — execution only begins when the query is awaited or .execute() is called. Cancel in-progress queries with .cancel():
const query = sql`SELECT pg_sleep(10)`.execute();
setTimeout(() => query.cancel(), 100);
await query;
Preconnection at startup
Start establishing a PostgreSQL connection before your application code runs:
bun --sql-preconnect index.js
DATABASE_URL=postgres://user:pass@localhost/db bun --sql-preconnect index.js
SSL / TLS
const db = new SQL({
hostname: "db.example.com",
username: "user",
password: "secret",
ssl: "verify-full", // disable | prefer | require | verify-ca | verify-full
});
// or inline in connection string
new SQL("postgres://user:[email protected]/mydb?sslmode=verify-full");
| Mode | Description |
|---|---|
| disable | No TLS. Fails if server requires it. |
| prefer | Try TLS, fall back to plaintext. Default. |
| require | Require TLS, skip certificate verification. |
| verify-ca | Verify certificate is signed by a trusted CA. |
| verify-full | Verify certificate and hostname. Most secure. |
Dynamic passwords
For rotating credentials or token-based auth:
const db = new SQL({
url: "postgres://[email protected]/mydb",
password: async () => await tokenProvider.getToken(),
});
Error handling
import { SQL } from "bun";
try {
await sql`INSERT INTO users (id) VALUES (${1}), (${1})`; // duplicate
} catch (error) {
if (error instanceof SQL.PostgresError) {
console.log(error.code); // e.g. "23505" (unique_violation)
console.log(error.detail);
console.log(error.hint);
} else if (error instanceof SQL.SQLiteError) {
console.log(error.code); // e.g. "SQLITE_CONSTRAINT"
console.log(error.errno); // e.g. 19
} else if (error instanceof SQL.SQLError) {
console.log(error.message);
}
}
BigInt handling
Numbers that exceed the 53-bit safe integer range are returned as strings by default:
const [row] = await sql`SELECT 9223372036854777 as x, 100 as y`;
typeof row.x; // "string" — exceeds Number.MAX_SAFE_INTEGER
typeof row.y; // "number"
To get large numbers as bigint instead:
const db = new SQL({ bigint: true });
const [row] = await db`SELECT 9223372036854777 as x`;
typeof row.x; // "bigint"
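The cutoff itself is plain JavaScript: Number.MAX_SAFE_INTEGER is 2^53 − 1, and integers above it cannot be represented exactly as a number, which is why the client falls back to strings (or, with bigint: true, to BigInt):

```typescript
// 2^53 - 1 is the largest integer a JS number can represent exactly.
const max = Number.MAX_SAFE_INTEGER; // 9007199254740991

// The value from the example above is past that boundary...
const big = 9223372036854777n;
console.log(big > BigInt(max)); // true

// ...so round-tripping it through Number loses precision,
// while BigInt (or a string) preserves it.
console.log(BigInt(Number(big)) === big); // false
console.log(BigInt("9223372036854777") === big); // true
```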
Redis
Bun includes a native Redis client with a Promise-based API. Import via bun:
import { redis } from "bun";
// Set a key
await redis.set("greeting", "Hello from Bun!");
// Get a key
const value = await redis.get("greeting");
// Increment a counter
await redis.incr("counter");
// Check existence / delete
const exists = await redis.exists("greeting"); // 1 or 0
await redis.del("greeting");
Connection
By default, redis reads connection info from environment variables (in priority order):
- REDIS_URL
- VALKEY_URL
- Falls back to redis://localhost:6379
Create a custom client for explicit credentials or multiple connections:
import { RedisClient } from "bun";
const client = new RedisClient("redis://username:password@localhost:6379");
await client.set("key", "value");
// Always close when done
client.close();
Common commands
// Strings
await redis.set("key", "value", { ex: 60 }); // expire in 60 seconds
await redis.get("key");
await redis.mset({ a: "1", b: "2" });
await redis.mget("a", "b");
// Numbers
await redis.incr("counter");
await redis.incrby("counter", 5);
// Lists
await redis.lpush("list", "a", "b");
await redis.lrange("list", 0, -1);
// Hashes
await redis.hset("hash", { field: "value" });
await redis.hget("hash", "field");
await redis.hgetall("hash");
// Sets
await redis.sadd("set", "member");
await redis.smembers("set");
// Expiry
await redis.expire("key", 60);
await redis.ttl("key");
Pipeline
Send multiple commands in a single round-trip using pipeline():
const pipeline = redis.pipeline();
pipeline.set("a", "1");
pipeline.set("b", "2");
pipeline.get("a");
const results = await pipeline.exec();
// => [null, null, "1"]
Bun’s Redis client supports Redis 7.2+ and is compatible with Valkey.