Production servers often read, upload, and write files to S3-compatible object storage services instead of the local filesystem. Historically, that has meant the local filesystem APIs you use in development can't be used in production. When you use Bun, things are different.
Bun provides fast, native bindings for interacting with S3-compatible object storage services. Bun's S3 API is designed to be simple and feel similar to fetch's Response and Blob APIs (like Bun's local filesystem APIs).
import { s3, write, S3 } from "bun";
// s3() returns a lazy S3File reference; no network request happens yet
const metadata = s3("123.json", {
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
bucket: "my-bucket",
// endpoint: "https://s3.us-east-1.amazonaws.com",
});
// Download from S3 as JSON
const data = await metadata.json();
// Upload to S3
await write(metadata, JSON.stringify({ name: "John", age: 30 }));
// Presign a URL (synchronous - no network request needed)
const url = metadata.presign({
acl: "public-read",
expiresIn: 60 * 60 * 24, // 1 day
});
S3 is the de facto standard internet filesystem. You can use Bun's S3 API with S3-compatible storage services like:
- AWS S3
- Cloudflare R2
- DigitalOcean Spaces
- MinIO
- Backblaze B2
- ...and any other S3-compatible storage service
Basic Usage
There are several ways to interact with Bun's S3 API.
Using Bun.s3()
The s3() helper function creates a one-off S3File instance for a single file.
import { s3 } from "bun";
// Using the s3() helper
const s3file = s3("my-file.txt", {
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
bucket: "my-bucket",
// endpoint: "https://s3.us-east-1.amazonaws.com", // optional
// endpoint: "https://<account-id>.r2.cloudflarestorage.com", // Cloudflare R2
// endpoint: "https://<region>.digitaloceanspaces.com", // DigitalOcean Spaces
// endpoint: "http://localhost:9000", // MinIO
});
Reading Files
You can read files from S3 using methods similar to Bun's filesystem APIs:
// Read an S3File as text
const text = await s3file.text();
// Read an S3File as JSON
const json = await s3file.json();
// Read an S3File as an ArrayBuffer
const buffer = await s3file.arrayBuffer();
// Get only the first 1024 bytes
const partial = await s3file.slice(0, 1024).text();
// Stream the file
const stream = s3file.stream();
for await (const chunk of stream) {
console.log(chunk);
}
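Reading an object that doesn't exist rejects with an error. If that's a possibility, one pattern is to probe with exists() first. A minimal sketch, assuming s3file was created as above:
// Check for the object before reading it (this makes an extra network request)
if (await s3file.exists()) {
  const text = await s3file.text();
  console.log(text);
} else {
  console.log("object not found");
}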
Writing Files
Writing to S3 is just as simple:
// Write a string (replacing the file)
await s3file.write("Hello World!");
// Write with content type
await s3file.write(JSON.stringify({ name: "John", age: 30 }), {
type: "application/json",
});
// Write using a writer (streaming)
const writer = s3file.writer({ type: "application/json" });
writer.write("Hello");
writer.write(" World!");
await writer.end();
// Write using Bun.write
await Bun.write(s3file, "Hello World!");
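Because write() accepts any Blob, Response, or stream, you can also upload a local file or a fetched response without buffering it yourself. A sketch; the local path and URL are placeholders:
// Upload a local file (Bun.file() returns a Blob-compatible reference)
await s3file.write(Bun.file("./local-file.txt"));
// Upload the body of an HTTP response directly
await s3file.write(await fetch("https://example.com/data.json"), {
  type: "application/json",
});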
Working with large files (streams)
Bun automatically handles multipart uploads for large files and provides streaming capabilities. The same API that works for local files also works for S3 files.
// Write a large file
const bigFile = Buffer.alloc(10 * 1024 * 1024); // 10MB
const writer = s3file.writer({
// Automatically retry on network errors up to 3 times
retry: 3,
// Queue up to 10 requests at a time
queueSize: 10,
// Upload in 5 MB chunks
partSize: 5 * 1024 * 1024,
});
for (let i = 0; i < 10; i++) {
await writer.write(bigFile);
}
await writer.end();
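The writer also composes with streams, so a large local file can be uploaded chunk by chunk without loading it all into memory. A sketch, assuming the writer accepts Uint8Array chunks and that ./big-file.bin is a placeholder path:
const uploader = s3file.writer({ partSize: 5 * 1024 * 1024 });
// Bun.file(...).stream() yields Uint8Array chunks we can forward to S3
for await (const chunk of Bun.file("./big-file.bin").stream()) {
  uploader.write(chunk);
}
await uploader.end();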
Presigning URLs
When your production service needs to let users upload files, it's often more reliable to have users upload directly to S3 rather than routing the upload through your server.
To facilitate this, you can presign URLs for S3 files. This generates a URL with a signature that allows a user to securely upload that specific file to S3, without exposing your credentials or granting them unnecessary access to your bucket.
// Generate a presigned URL that expires in 24 hours (default)
const url = s3file.presign();
// Custom expiration time (in seconds)
const url2 = s3file.presign({ expiresIn: 3600 }); // 1 hour
// Using static method
const url3 = Bun.S3.presign("my-file.txt", {
bucket: "my-bucket",
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
// endpoint: "https://s3.us-east-1.amazonaws.com",
// endpoint: "https://<account-id>.r2.cloudflarestorage.com", // Cloudflare R2
expiresIn: 3600,
});
Setting ACLs
To set an ACL (access control list) on a presigned URL, pass the acl option:
const url = s3file.presign({
acl: "public-read",
expiresIn: 3600,
});
You can pass any of the following ACLs:
| ACL | Explanation |
|---|---|
| "public-read" | The object is readable by the public. |
| "private" | The object is readable only by the bucket owner. |
| "public-read-write" | The object is readable and writable by the public. |
| "authenticated-read" | The object is readable by the bucket owner and authenticated users. |
| "aws-exec-read" | The object is readable by the AWS account that made the request. |
| "bucket-owner-read" | The object is readable by the bucket owner. |
| "bucket-owner-full-control" | The object is readable and writable by the bucket owner. |
| "log-delivery-write" | The object is writable by AWS services used for log delivery. |
Expiring URLs
To set an expiration time for a presigned URL, pass the expiresIn option.
const url = s3file.presign({
// Seconds
expiresIn: 3600, // 1 hour
});
method
To set the HTTP method for a presigned URL, pass the method option.
const url = s3file.presign({
  method: "PUT",
  // method: "DELETE",
  // method: "GET",
  // method: "HEAD",
  // method: "POST",
});
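A presigned PUT URL is what a browser or another service then uses to upload directly to S3, for example with plain fetch. A sketch; the uploaded body is a placeholder:
// Server side: presign a one-hour PUT URL for this object
const uploadUrl = s3file.presign({ method: "PUT", expiresIn: 3600 });
// Client side: upload straight to S3, bypassing your server
await fetch(uploadUrl, {
  method: "PUT",
  body: "Hello World!",
});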
new Response(S3File)
To quickly redirect users to a presigned URL for an S3 file, you can pass an S3File instance to a Response object as the body.
const response = new Response(s3file);
console.log(response);
This will automatically redirect the user to the presigned URL for the S3 file, saving you the memory, time, and bandwidth cost of downloading the file to your server and sending it back to the user.
Response (0 KB) {
ok: false,
url: "",
status: 302,
statusText: "",
headers: Headers {
"location": "https://<account-id>.r2.cloudflarestorage.com/...",
},
redirected: true,
bodyUsed: false
}
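This pairs naturally with an HTTP server: return the S3File-backed Response from a request handler and the client is redirected straight to storage. A minimal sketch using Bun.serve; the port and the outer-scope s3file are assumptions:
Bun.serve({
  port: 3000,
  fetch(req) {
    // Responds with a 302 redirect to a presigned URL for the object
    return new Response(s3file);
  },
});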
Support for S3-Compatible Services
Bun's S3 implementation works with any S3-compatible storage service. Just specify the appropriate endpoint:
import { s3 } from "bun";
// Cloudflare R2
const r2file = s3("my-file.txt", {
accessKeyId: "access-key",
secretAccessKey: "secret-key",
bucket: "my-bucket",
endpoint: "https://<account-id>.r2.cloudflarestorage.com",
});
// DigitalOcean Spaces
const spacesFile = s3("my-file.txt", {
accessKeyId: "access-key",
secretAccessKey: "secret-key",
bucket: "my-bucket",
endpoint: "https://<region>.digitaloceanspaces.com",
});
// MinIO
const minioFile = s3("my-file.txt", {
accessKeyId: "access-key",
secretAccessKey: "secret-key",
bucket: "my-bucket",
endpoint: "http://localhost:9000",
});
Credentials
Credentials are one of the hardest parts of using S3, and we've tried to make it as easy as possible. By default, Bun reads the following environment variables for credentials.
| Option name | Environment variable |
|---|---|
| accessKeyId | S3_ACCESS_KEY_ID |
| secretAccessKey | S3_SECRET_ACCESS_KEY |
| region | S3_REGION |
| endpoint | S3_ENDPOINT |
| bucket | S3_BUCKET |
| sessionToken | S3_SESSION_TOKEN |
If an S3_* environment variable is not set, Bun falls back to the corresponding AWS_* environment variable for each of the options above.
| Option name | Fallback environment variable |
|---|---|
| accessKeyId | AWS_ACCESS_KEY_ID |
| secretAccessKey | AWS_SECRET_ACCESS_KEY |
| region | AWS_REGION |
| endpoint | AWS_ENDPOINT |
| bucket | AWS_BUCKET |
| sessionToken | AWS_SESSION_TOKEN |
These environment variables are read from .env files or from the process environment at initialization time (process.env is not used for this).
These defaults are overridden by the options you pass to s3(credentials), new Bun.S3(credentials), or any of the methods that accept credentials. So if, for example, you use the same credentials for different buckets, you can set the credentials once in your .env file and then pass bucket: "my-bucket" to the s3() helper function without having to specify all the credentials again.
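For example, with shared credentials in .env, each call site only needs the per-file details. A sketch; all values shown are placeholders:
# .env
S3_ACCESS_KEY_ID=your-access-key
S3_SECRET_ACCESS_KEY=your-secret-key
S3_REGION=us-east-1

import { s3 } from "bun";
// Credentials come from the environment; only the bucket is passed here
const s3file = s3("my-file.txt", { bucket: "my-bucket" });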
S3 Buckets
Passing around all of these credentials can be cumbersome. To make it easier, you can create an S3 bucket instance.
import { S3 } from "bun";
const bucket = new S3({
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
bucket: "my-bucket",
// sessionToken: "..."
endpoint: "https://s3.us-east-1.amazonaws.com",
// endpoint: "https://<account-id>.r2.cloudflarestorage.com", // Cloudflare R2
// endpoint: "http://localhost:9000", // MinIO
});
// bucket is a function that creates `S3File` instances (lazy)
const file = bucket("my-file.txt");
// Write to S3
await file.write("Hello World!");
// Read from S3
const text = await file.text();
// Write using a Response
await file.write(new Response("Hello World!"));
// Presign a URL
const url = file.presign({
expiresIn: 60 * 60 * 24, // 1 day
acl: "public-read",
});
// Delete the file
await file.unlink();
Read a file from an S3 bucket
The S3 bucket instance is itself a function that creates S3File instances. It provides a more convenient API for interacting with S3.
const s3file = bucket("my-file.txt");
const text = await s3file.text();
const json = await s3file.json();
const bytes = await s3file.bytes();
const arrayBuffer = await s3file.arrayBuffer();
Write a file to S3
To write a file to the bucket, you can use the write method.
const bucket = new Bun.S3({
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
endpoint: "https://s3.us-east-1.amazonaws.com",
bucket: "my-bucket",
});
await bucket.write("my-file.txt", "Hello World!");
await bucket.write("my-file.txt", new Response("Hello World!"));
You can also call .write on the S3File instance created by the S3 bucket instance.
const s3file = bucket("my-file.txt");
await s3file.write("Hello World!", {
type: "text/plain",
});
await s3file.write(new Response("Hello World!"));
Delete a file from S3
To delete a file from the bucket, you can use the delete method.
const bucket = new Bun.S3({
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
bucket: "my-bucket",
});
await bucket.delete("my-file.txt");
You can also use the unlink method, which is an alias for delete.
// "delete" and "unlink" are aliases of each other.
await bucket.unlink("my-file.txt");
S3File
S3File instances are created by calling an S3 bucket instance or the s3() helper function. Like Bun.file(), S3File instances are lazy. They don't refer to something that necessarily exists at the time of creation. That's why all the methods that don't involve network requests are fully synchronous.
interface S3File extends Blob {
slice(start: number, end?: number): S3File;
exists(): Promise<boolean>;
unlink(): Promise<void>;
presign(options: S3Options): string;
text(): Promise<string>;
json(): Promise<any>;
bytes(): Promise<Uint8Array>;
arrayBuffer(): Promise<ArrayBuffer>;
stream(options: S3Options): ReadableStream;
write(
data:
| string
| Uint8Array
| ArrayBuffer
| Blob
| ReadableStream
| Response
| Request,
options?: BlobPropertyBag,
): Promise<void>;
readonly size: Promise<number>;
// ... more omitted for brevity
}
Like Bun.file(), S3File extends Blob, so all the methods that are available on Blob are also available on S3File. The same API for reading data from a local file is also available for reading data from S3.
| Method | Output |
|---|---|
| await s3File.text() | string |
| await s3File.bytes() | Uint8Array |
| await s3File.json() | JSON |
| await s3File.stream() | ReadableStream |
| await s3File.arrayBuffer() | ArrayBuffer |
That means using S3File instances with fetch(), Response, and other web APIs that accept Blob instances just works.
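For instance, an S3File can serve directly as a fetch request body, streaming the object from S3 to another service without buffering it in memory. A sketch; the destination URL is a placeholder:
// Forward the S3 object as the body of an outgoing request
await fetch("https://example.com/upload", {
  method: "POST",
  body: s3file,
});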
Partial reads
To read a partial range of a file, you can use the slice method.
const partial = s3file.slice(0, 1024);
// Read the partial range as a Uint8Array
const bytes = await partial.bytes();
// Read the partial range as a string
const text = await partial.text();
Internally, this works by using the HTTP Range header to request only the bytes you want. This slice method is the same as Blob.prototype.slice.
Error codes
When Bun's S3 API throws an error, it will have a code property that matches one of the following values:
- ERR_S3_MISSING_CREDENTIALS
- ERR_S3_INVALID_METHOD
- ERR_S3_INVALID_PATH
- ERR_S3_INVALID_ENDPOINT
- ERR_S3_INVALID_SIGNATURE
- ERR_S3_INVALID_SESSION_TOKEN
When the S3 Object Storage service returns an error (that is, not Bun), it will be an S3Error instance (an Error instance with the name "S3Error").
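A sketch of error handling that distinguishes the two cases; the logging is a placeholder:
try {
  await s3file.text();
} catch (error) {
  if (error.name === "S3Error") {
    // The storage service itself rejected the request
    console.error("S3 service error:", error.message);
  } else if (String(error.code).startsWith("ERR_S3_")) {
    // A Bun-side problem, e.g. ERR_S3_MISSING_CREDENTIALS
    console.error("Bun S3 error:", error.code);
  } else {
    throw error;
  }
}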
S3 static methods
The S3 class provides several static methods for interacting with S3.
S3.presign
To generate a presigned URL for an S3 file, you can use the S3.presign method.
import { S3 } from "bun";
const url = S3.presign("my-file.txt", {
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
bucket: "my-bucket",
expiresIn: 3600,
// endpoint: "https://s3.us-east-1.amazonaws.com",
// endpoint: "https://<account-id>.r2.cloudflarestorage.com", // Cloudflare R2
});
This is the same presigning logic as S3File.prototype.presign and new S3(credentials).presign, exposed as a static method on the S3 class.
S3.exists
To check if an S3 file exists, you can use the S3.exists method.
import { S3 } from "bun";
const exists = await S3.exists("my-file.txt", {
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
bucket: "my-bucket",
// endpoint: "https://s3.us-east-1.amazonaws.com",
});
The same method also works on S3File instances.
const s3file = Bun.s3("my-file.txt", {
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
bucket: "my-bucket",
});
const exists = await s3file.exists();
S3.size
To get the size of an S3 file, you can use the S3.size method.
import { S3 } from "bun";
const size = await S3.size("my-file.txt", {
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
bucket: "my-bucket",
// endpoint: "https://s3.us-east-1.amazonaws.com",
});
S3.unlink
To delete an S3 file, you can use the S3.unlink method.
import { S3 } from "bun";
await S3.unlink("my-file.txt", {
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
bucket: "my-bucket",
// endpoint: "https://s3.us-east-1.amazonaws.com",
});
s3:// protocol
To make it easier to use the same code for local files and S3 files, the s3:// protocol is supported in fetch and Bun.file().
const response = await fetch("s3://my-bucket/my-file.txt");
const file = Bun.file("s3://my-bucket/my-file.txt");
This is the equivalent of calling Bun.s3("my-file.txt", { bucket: "my-bucket" }).
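Because both fetch and Bun.file() understand the s3:// scheme, one code path can handle local files and S3 files, switching only the URL. A sketch; the paths are placeholders:
// One helper for local paths and s3:// URLs alike
async function readAny(path) {
  return await Bun.file(path).text();
}
const local = await readAny("./my-file.txt");
const remote = await readAny("s3://my-bucket/my-file.txt");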