mirror of
https://github.com/oven-sh/bun
synced 2026-02-25 19:17:20 +01:00
Compare commits: bun-v1.2.0...ciro/sni-c (1 commit)

Commit 64a409e8d3
@@ -23,7 +23,7 @@ $ hyperfine --prepare 'rm -rf node_modules' --runs 1 'bun install' 'pnpm install
 To run the benchmark with offline mode but without lockfiles:

 ```sh
-$ hyperfine --prepare 'rm -rf node_modules' --warmup 1 'rm bun.lock && bun install' 'rm pnpm-lock.yaml && pnpm install --prefer-offline' 'rm yarn.lock && yarn --offline' 'rm package-lock.json && npm install --prefer-offline'
+$ hyperfine --prepare 'rm -rf node_modules' --warmup 1 'rm bun.lockb && bun install' 'rm pnpm-lock.yaml && pnpm install --prefer-offline' 'rm yarn.lock && yarn --offline' 'rm package-lock.json && npm install --prefer-offline'
 ```

 ##
@@ -68,6 +68,7 @@ const client = new S3Client({
 });

 // Bun.s3 is a global singleton that is equivalent to `new Bun.S3Client()`
+Bun.s3 = client;
 ```

 ### Working with S3 Files
@@ -374,7 +375,7 @@ If the `S3_*` environment variable is not set, Bun will also check for the `AWS_
 These environment variables are read from [`.env` files](/docs/runtime/env) or from the process environment at initialization time (`process.env` is not used for this).

-These defaults are overridden by the options you pass to `s3.file(credentials)`, `new Bun.S3Client(credentials)`, or any of the methods that accept credentials. So if, for example, you use the same credentials for different buckets, you can set the credentials once in your `.env` file and then pass `bucket: "my-bucket"` to the `s3.file()` function without having to specify all the credentials again.
+These defaults are overridden by the options you pass to `s3(credentials)`, `new Bun.S3Client(credentials)`, or any of the methods that accept credentials. So if, for example, you use the same credentials for different buckets, you can set the credentials once in your `.env` file and then pass `bucket: "my-bucket"` to the `s3()` helper function without having to specify all the credentials again.

 ### `S3Client` objects
@@ -458,7 +459,7 @@ const exists = await client.exists("my-file.txt");

 ## `S3File`

-`S3File` instances are created by calling the `S3Client` instance method or the `s3.file()` function. Like `Bun.file()`, `S3File` instances are lazy. They don't refer to something that necessarily exists at the time of creation. That's why all the methods that don't involve network requests are fully synchronous.
+`S3File` instances are created by calling the `S3` instance method or the `s3()` helper function. Like `Bun.file()`, `S3File` instances are lazy. They don't refer to something that necessarily exists at the time of creation. That's why all the methods that don't involve network requests are fully synchronous.

 ```ts
 interface S3File extends Blob {
@@ -481,7 +482,7 @@ interface S3File extends Blob {
     | Response
     | Request,
     options?: BlobPropertyBag,
-  ): Promise<number>;
+  ): Promise<void>;

   exists(options?: S3Options): Promise<boolean>;
   unlink(options?: S3Options): Promise<void>;
@@ -599,9 +600,7 @@ const exists = await S3Client.exists("my-file.txt", credentials);
 The same method also works on `S3File` instances.

 ```ts
-import { s3 } from "bun";
-
-const s3file = s3.file("my-file.txt", {
+const s3file = Bun.s3("my-file.txt", {
   ...credentials,
 });
 const exists = await s3file.exists();
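The lazy-handle behavior that the `S3File` docs in this diff describe — creating an instance performs no I/O, and nothing need exist until a read or write — can be sketched in plain JavaScript. The names here (`lazyFile`, `fetchText`) are hypothetical illustrations, not Bun's implementation:

```javascript
// Sketch of a lazy file handle: constructing it performs no I/O.
// `fetchText` stands in for the network call a real S3 read would make.
function lazyFile(path, fetchText) {
  let requests = 0;
  return {
    path, // synchronous: no network round-trip needed
    get requestCount() {
      return requests;
    },
    async text() {
      requests += 1; // I/O happens only when a method like text() is awaited
      return fetchText(path);
    },
  };
}

const file = lazyFile("my-file.txt", async p => `contents of ${p}`);
// The handle exists, but no request has been made yet.
console.log(file.requestCount); // 0
```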
@@ -20,7 +20,7 @@ const activeUsers = await sql`
 {% features title="Features" %}

-{% icon size=20 name="Shield" /%} Tagged template literals to protect against SQL injection
+{% icon size=20 name="Shield" /%} Tagged template literals to protect againt SQL injection

 {% icon size=20 name="GitMerge" /%} Transactions
@@ -561,7 +561,7 @@ The plan is to add more database drivers in the future.
 npm packages like postgres.js, pg, and node-postgres can be used in Bun too. They're great options.

-Two reasons why:
+Two reaons why:

 1. We think it's simpler for developers to have a database driver built into Bun. The time you spend library shopping is time you could be building your app.
 2. We leverage some JavaScriptCore engine internals to make it faster to create objects that would be difficult to implement in a library
@@ -95,7 +95,7 @@ disableManifest = false
 [install.lockfile]

 # Print a yarn v1 lockfile
-# Note: it does not load the lockfile, it just converts bun.lock into a yarn.lock
+# Note: it does not load the lockfile, it just converts bun.lockb into a yarn.lock
 print = "yarn"

 # Save the lockfile to disk
@@ -170,9 +170,9 @@ Bun stores installed packages from npm in `~/.bun/install/cache/${name}@${versio
 When the `node_modules` folder exists, before installing, Bun checks if the `"name"` and `"version"` in `package/package.json` in the expected node_modules folder matches the expected `name` and `version`. This is how it determines whether it should install. It uses a custom JSON parser which stops parsing as soon as it finds `"name"` and `"version"`.

-When a `bun.lock` doesn’t exist or `package.json` has changed dependencies, tarballs are downloaded & extracted eagerly while resolving.
+When a `bun.lockb` doesn’t exist or `package.json` has changed dependencies, tarballs are downloaded & extracted eagerly while resolving.

-When a `bun.lock` exists and `package.json` hasn’t changed, Bun downloads missing dependencies lazily. If the package with a matching `name` & `version` already exists in the expected location within `node_modules`, Bun won’t attempt to download the tarball.
+When a `bun.lockb` exists and `package.json` hasn’t changed, Bun downloads missing dependencies lazily. If the package with a matching `name` & `version` already exists in the expected location within `node_modules`, Bun won’t attempt to download the tarball.

 ## Platform-specific dependencies?
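The context line in this hunk mentions a custom JSON parser that stops as soon as it has found `"name"` and `"version"`. The early-exit idea can be sketched like this — an illustration only, not Bun's actual parser (which is written in Zig, and, unlike this regex scan, properly tracks nesting):

```javascript
// Sketch of an early-exit scan for "name" and "version" in a package.json
// string. Unlike JSON.parse, it stops as soon as both keys have been seen,
// so it never pays to parse the rest of a large manifest.
function scanNameVersion(json) {
  const out = {};
  const re = /"(name|version)"\s*:\s*"([^"]*)"/g;
  let m;
  while ((m = re.exec(json)) !== null) {
    if (!(m[1] in out)) out[m[1]] = m[2]; // keep the first occurrence
    if (out.name !== undefined && out.version !== undefined) break; // early exit
  }
  return out;
}

const pkg = '{"name":"left-pad","version":"1.3.0","description":"..."}';
console.log(scanNameVersion(pkg)); // { name: 'left-pad', version: '1.3.0' }
```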
@@ -184,9 +184,23 @@ Peer dependencies are handled similarly to yarn. `bun install` will automaticall
 ## Lockfile

-`bun.lock` is Bun’s lockfile format. See [our blogpost about the text lockfile](https://bun.sh/blog/bun-lock-text-lockfile).
+`bun.lockb` is Bun’s binary lockfile format.

-Prior to Bun 1.2, the lockfile was binary and called `bun.lockb`. Old lockfiles can be upgraded to the new format by running `bun install --save-text-lockfile --frozen-lockfile --lockfile-only`, and then deleting `bun.lockb`.
+## Why is it binary?
+
+In a word: Performance. Bun’s lockfile saves & loads incredibly quickly, and saves a lot more data than what is typically inside lockfiles.
+
+## How do I inspect it?
+
+For now, the easiest thing is to run `bun install -y`. That prints a Yarn v1-style yarn.lock file.
+
+## What does the lockfile store?
+
+Packages, metadata for those packages, the hoisted install order, dependencies for each package, what packages those dependencies resolved to, an integrity hash (if available), what each package was resolved to and which version (or equivalent).
+
+## Why is it fast?
+
+It uses linear arrays for all data. [Packages](https://github.com/oven-sh/bun/blob/be03fc273a487ac402f19ad897778d74b6d72963/src/install/install.zig#L1825) are referenced by an auto-incrementing integer ID or a hash of the package name. Strings longer than 8 characters are de-duplicated. Prior to saving on disk, the lockfile is garbage-collected & made deterministic by walking the package tree and cloning the packages in dependency order.

 ## Cache
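The "Why is it fast?" answer above says strings longer than 8 characters are de-duplicated. An interning table with that cutoff can be sketched as follows — a hypothetical illustration, not Bun's Zig string buffer:

```javascript
// Sketch of length-gated string de-duplication: short strings are stored
// in place, longer strings are stored once and referenced by index.
class StringTable {
  constructor() {
    this.buffer = [];          // de-duplicated long strings
    this.offsets = new Map();  // string -> index in buffer
  }
  intern(s) {
    if (s.length <= 8) return { inline: s }; // short: no dedup needed
    if (!this.offsets.has(s)) {
      this.offsets.set(s, this.buffer.length);
      this.buffer.push(s);
    }
    return { ref: this.offsets.get(s) };     // long: shared reference
  }
}

const t = new StringTable();
t.intern("@types/node"); // stored once
t.intern("@types/node"); // second intern reuses the same slot
console.log(t.buffer.length); // 1
```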
@@ -33,7 +33,7 @@ Running `bun install` will:
 - **Install** all `dependencies`, `devDependencies`, and `optionalDependencies`. Bun will install `peerDependencies` by default.
 - **Run** your project's `{pre|post}install` and `{pre|post}prepare` scripts at the appropriate time. For security reasons Bun _does not execute_ lifecycle scripts of installed dependencies.
-- **Write** a `bun.lock` lockfile to the project root.
+- **Write** a `bun.lockb` lockfile to the project root.

 ## Logging
@@ -136,13 +136,13 @@ To install in production mode (i.e. without `devDependencies` or `optionalDepend
 $ bun install --production
 ```

-For reproducible installs, use `--frozen-lockfile`. This will install the exact versions of each package specified in the lockfile. If your `package.json` disagrees with `bun.lock`, Bun will exit with an error. The lockfile will not be updated.
+For reproducible installs, use `--frozen-lockfile`. This will install the exact versions of each package specified in the lockfile. If your `package.json` disagrees with `bun.lockb`, Bun will exit with an error. The lockfile will not be updated.

 ```bash
 $ bun install --frozen-lockfile
 ```

-For more information on Bun's lockfile `bun.lock`, refer to [Package manager > Lockfile](https://bun.sh/docs/install/lockfile).
+For more information on Bun's binary lockfile `bun.lockb`, refer to [Package manager > Lockfile](https://bun.sh/docs/install/lockfile).

 ## Omitting dependencies
@@ -22,12 +22,12 @@ WORKDIR /usr/src/app
 # this will cache them and speed up future builds
 FROM base AS install
 RUN mkdir -p /temp/dev
-COPY package.json bun.lock /temp/dev/
+COPY package.json bun.lockb /temp/dev/
 RUN cd /temp/dev && bun install --frozen-lockfile

 # install with --production (exclude devDependencies)
 RUN mkdir -p /temp/prod
-COPY package.json bun.lock /temp/prod/
+COPY package.json bun.lockb /temp/prod/
 RUN cd /temp/prod && bun install --frozen-lockfile --production

 # copy node_modules from temp directory
@@ -53,7 +53,7 @@ app.listen(port, () => {
 Commit your changes and push to GitHub.

 ```sh
-$ git add app.ts bun.lock package.json
+$ git add app.ts bun.lockb package.json
 $ git commit -m "Create simple Express app"
 $ git push origin main
 ```
@@ -7,7 +7,7 @@ name: Migrate from npm install to bun install
 We've put a lot of work into making sure that the migration path from `npm install` to `bun install` is as easy as running `bun install` instead of `npm install`.

 - **Designed for Node.js & Bun**: `bun install` installs a Node.js compatible `node_modules` folder. You can use it in place of `npm install` for Node.js projects without any code changes and without using Bun's runtime.
-- **Automatically converts `package-lock.json`** to bun's `bun.lock` lockfile format, preserving your existing resolved dependency versions without any manual work on your part. You can secretly use `bun install` in place of `npm install` at work without anyone noticing.
+- **Automatically converts `package-lock.json`** to bun's `bun.lockb` lockfile format, preserving your existing resolved dependency versions without any manual work on your part. You can secretly use `bun install` in place of `npm install` at work without anyone noticing.
 - **`.npmrc` compatible**: bun install reads npm registry configuration from npm's `.npmrc`, so you can use the same configuration for both npm and Bun.
 - **Hardlinks**: On Windows and Linux, `bun install` uses hardlinks to conserve disk space and install times.
@@ -37,7 +37,7 @@ Once this is added, run a fresh install. Bun will re-install your dependencies a
 ```sh
 $ rm -rf node_modules
-$ rm bun.lock
+$ rm bun.lockb
 $ bun install
 ```
@@ -1,5 +1,5 @@
 ---
-name: Generate a yarn-compatible lockfile
+name: Generate a human-readable lockfile
 ---

 {% callout %}
@@ -8,7 +8,11 @@ Bun v1.1.39 introduced `bun.lock`, a JSONC formatted lockfile. `bun.lock` is hum
 ---

-Use the `--yarn` flag to generate a Yarn-compatible `yarn.lock` file (in addition to `bun.lock`).
+By default Bun generates a binary `bun.lockb` file when you run `bun install`. In some cases, it's preferable to generate a human-readable lockfile instead.
+
+---
+
+Use the `--yarn` flag to generate a Yarn-compatible `yarn.lock` file (in addition to `bun.lockb`).

 ```sh
 $ bun install --yarn
@@ -25,7 +29,7 @@ print = "yarn"
 ---

-To print a Yarn lockfile to your console without writing it to disk, "run" your `bun.lockb` with `bun`.
+To print a Yarn lockfile to your console without writing it to disk, just "run" your `bun.lockb` with `bun`.

 ```sh
 $ bun bun.lockb
@@ -41,7 +41,7 @@ Running `bun install` will:
 - **Install** all `dependencies`, `devDependencies`, and `optionalDependencies`. Bun will install `peerDependencies` by default.
 - **Run** your project's `{pre|post}install` scripts at the appropriate time. For security reasons Bun _does not execute_ lifecycle scripts of installed dependencies.
-- **Write** a `bun.lock` lockfile to the project root.
+- **Write** a `bun.lockb` lockfile to the project root.

 To install in production mode (i.e. without `devDependencies`):
@@ -1,10 +1,53 @@
-Running `bun install` will create a lockfile called `bun.lock`.
+Running `bun install` will create a binary lockfile called `bun.lockb`.

-https://bun.sh/blog/bun-lock-text-lockfile
+#### Why is it binary?

-#### Should it be committed to git?
+In a word: Performance. Bun’s lockfile saves & loads incredibly quickly, and saves a lot more data than what is typically inside lockfiles.

-Yes
+#### How do I inspect Bun's lockfile?
+
+Run `bun install -y` to generate a Yarn-compatible `yarn.lock` (v1) that can be inspected more easily.
+
+#### How do I `git diff` Bun's lockfile?
+
+Add the following to your local or global `.gitattributes` file:
+
+```
+*.lockb binary diff=lockb
+```
+
+Then add the following to your local git config with:
+
+```sh
+$ git config diff.lockb.textconv bun
+$ git config diff.lockb.binary true
+```
+
+Or to your global git config (system-wide) with the `--global` option:
+
+```sh
+$ git config --global diff.lockb.textconv bun
+$ git config --global diff.lockb.binary true
+```
+
+**Why this works:**
+
+- `textconv` tells git to run `bun` on the file before diffing
+- `binary` tells git to treat the file as binary (so it doesn't try to diff it line-by-line)
+
+Running `bun` on a lockfile will print a human-readable diff. So we just need to tell `git` to run `bun` on the lockfile before diffing it.
+
+#### Platform-specific dependencies?
+
+Bun stores normalized `cpu` and `os` values from npm in the lockfile, along with the resolved packages. It skips downloading, extracting, and installing packages disabled for the current target at runtime. This means the lockfile won’t change between platforms/architectures even if the packages ultimately installed do change.
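The platform-specific dependency handling described above — install a package only when the current target matches its npm-style `os` and `cpu` fields — can be sketched like this. The function names are hypothetical; Bun performs this check natively against its lockfile data:

```javascript
// Sketch of npm-style os/cpu gating: an empty field means "no restriction",
// entries prefixed with "!" form a block-list, otherwise an allow-list.
function matchesField(field, value) {
  if (!field || field.length === 0) return true; // no restriction
  const negated = field.filter(f => f.startsWith("!"));
  if (negated.length === field.length) {
    return !negated.includes("!" + value); // all negations: block-list
  }
  return field.includes(value); // otherwise: allow-list
}

function shouldInstall(pkg, platform, arch) {
  return matchesField(pkg.os, platform) && matchesField(pkg.cpu, arch);
}

// e.g. a darwin-only package such as fsevents is skipped on linux
const fsevents = { os: ["darwin"], cpu: [] };
console.log(shouldInstall(fsevents, "linux", "x64"));    // false
console.log(shouldInstall(fsevents, "darwin", "arm64")); // true
```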
+#### What does Bun's lockfile store?
+
+Packages, metadata for those packages, the hoisted install order, dependencies for each package, what packages those dependencies resolved to, an integrity hash (if available), what each package was resolved to, and which version (or equivalent).
+
+#### Why is Bun's lockfile fast?
+
+It uses linear arrays for all data. [Packages](https://github.com/oven-sh/bun/blob/be03fc273a487ac402f19ad897778d74b6d72963/src/install/install.zig#L1825) are referenced by an auto-incrementing integer ID or a hash of the package name. Strings longer than 8 characters are de-duplicated. Prior to saving on disk, the lockfile is garbage-collected & made deterministic by walking the package tree and cloning the packages in dependency order.

 #### Generate a lockfile without installing?
@@ -26,7 +69,7 @@ To install without creating a lockfile:
 $ bun install --no-save
 ```

-To install a Yarn lockfile _in addition_ to `bun.lock`.
+To install a Yarn lockfile _in addition_ to `bun.lockb`.

 {% codetabs %}
@@ -36,15 +79,42 @@ $ bun install --yarn
 ```toml#bunfig.toml
 [install.lockfile]
-# whether to save a non-Bun lockfile alongside bun.lock
+# whether to save a non-Bun lockfile alongside bun.lockb
 # only "yarn" is supported
 print = "yarn"
 ```

 {% /codetabs %}

-#### Text-based lockfile
+### Text-based lockfile

-Bun v1.2 changed the default lockfile format to the text-based `bun.lock`. Existing binary `bun.lockb` lockfiles can be migrated to the new format by running `bun install --save-text-lockfile --frozen-lockfile --lockfile-only` and deleting `bun.lockb`.
+Bun v1.1.39 introduced `bun.lock`, a JSONC formatted lockfile. `bun.lock` is human-readable and git-diffable without configuration, at [no cost to performance](https://bun.sh/blog/bun-lock-text-lockfile#cached-bun-install-gets-30-faster).

-More information about the new lockfile format can be found on [our blogpost](https://bun.sh/blog/bun-lock-text-lockfile).
+To generate the lockfile, use `--save-text-lockfile` with `bun install`. You can do this for new projects and existing projects already using `bun.lockb` (resolutions will be preserved).
+
+```bash
+$ bun install --save-text-lockfile
+$ head -n3 bun.lock
+{
+  "lockfileVersion": 0,
+  "workspaces": {
+```
+
+Once `bun.lock` is generated, Bun will use it for all subsequent installs and updates through commands that read and modify the lockfile. If both lockfiles exist, `bun.lock` will be chosen over `bun.lockb`.
+
+Bun v1.2.0 will switch the default lockfile format to `bun.lock`.
+
+{% details summary="Configuring lockfile" %}
+
+```toml
+[install.lockfile]
+
+# whether to save the lockfile to disk
+save = true
+
+# whether to save a non-Bun lockfile alongside bun.lockb
+# only "yarn" is supported
+print = "yarn"
+```
+
+{% /details %}
@@ -6,7 +6,7 @@ It's common for a monorepo to have the following structure:
 tree
 <root>
 ├── README.md
-├── bun.lock
+├── bun.lockb
 ├── package.json
 ├── tsconfig.json
 └── packages
@@ -191,7 +191,7 @@ export default {
   }),
   page("install/lockfile", "Lockfile", {
     description:
-      "Bun's lockfile `bun.lock` tracks your resolved dependency tree, making future installs fast and repeatable.",
+      "Bun's binary lockfile `bun.lockb` tracks your resolved dependency tree, making future installs fast and repeatable.",
   }),
   page("install/registries", "Scopes and registries", {
     description: "How to configure private scopes and custom package registries.",
@@ -14,7 +14,7 @@ The first time you run this script, Bun will auto-install `"foo"` and cache it.
 To determine which version to install, Bun follows the following algorithm:

-1. Check for a `bun.lock` file in the project root. If it exists, use the version specified in the lockfile.
+1. Check for a `bun.lockb` file in the project root. If it exists, use the version specified in the lockfile.
 2. Otherwise, scan up the tree for a `package.json` that includes `"foo"` as a dependency. If found, use the specified semver version or version range.
 3. Otherwise, use `latest`.
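The three-step auto-install resolution in this hunk is a simple precedence chain, which can be sketched as follows. The function and parameter names are hypothetical illustrations of the documented algorithm, not Bun internals:

```javascript
// Sketch of auto-install version resolution:
// 1. lockfile entry wins, 2. nearest package.json dependency, 3. "latest".
function resolveVersion(pkgName, lockfile, packageJsons) {
  if (lockfile && lockfile[pkgName]) return lockfile[pkgName]; // step 1
  for (const pkg of packageJsons) {
    // step 2: packageJsons is ordered nearest-first, walking up the tree
    const deps = { ...pkg.dependencies, ...pkg.devDependencies };
    if (deps[pkgName]) return deps[pkgName];
  }
  return "latest"; // step 3: no lockfile or manifest mentions the package
}

console.log(resolveVersion("foo", { foo: "1.2.3" }, []));                          // 1.2.3
console.log(resolveVersion("foo", null, [{ dependencies: { foo: "^2.0.0" } }]));   // ^2.0.0
console.log(resolveVersion("foo", null, []));                                      // latest
```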
@@ -240,13 +240,13 @@ exact = false
 ### `install.saveTextLockfile`

-If false, generate a binary `bun.lockb` instead of a text-based `bun.lock` file when running `bun install` and no lockfile is present.
+Generate `bun.lock`, a human-readable text-based lockfile. Once generated, Bun will use this file instead of `bun.lockb`, choosing it over the binary lockfile if both are present.

-Default `true` (since Bun v1.2).
+Default `false`. In Bun v1.2.0 the default lockfile format will change to `bun.lock`.

 ```toml
 [install]
-saveTextLockfile = false
+saveTextLockfile = true
 ```

 <!--
@@ -315,7 +315,7 @@ Valid values are:
 ### `install.frozenLockfile`

-When true, `bun install` will not update `bun.lock`. Default `false`. If `package.json` and the existing `bun.lock` are not in agreement, this will error.
+When true, `bun install` will not update `bun.lockb`. Default `false`. If `package.json` and the existing `bun.lockb` are not in agreement, this will error.

 ```toml
 [install]
@@ -423,7 +423,7 @@ Whether to generate a lockfile on `bun install`. Default `true`.
 save = true
 ```

-Whether to generate a non-Bun lockfile alongside `bun.lock`. (A `bun.lock` will always be created.) Currently `"yarn"` is the only supported value.
+Whether to generate a non-Bun lockfile alongside `bun.lockb`. (A `bun.lockb` will always be created.) Currently `"yarn"` is the only supported value.

 ```toml
 [install.lockfile]
jj.js — new file (2 lines)

@@ -0,0 +1,2 @@
+require("fs").writeFileSync("awa2", "meowy", { flag: "a" });
+require("fs").writeFileSync("awa2", "meowy", { flag: "a" });
packages/bun-types/bun.d.ts (vendored) — 32 changes

@@ -1267,7 +1267,33 @@ declare module "bun" {
 }

 var S3Client: S3Client;
-var s3: S3Client;
+
+/**
+ * Creates a new S3File instance for working with a single file.
+ *
+ * @param path The path or key of the file
+ * @param options S3 configuration options
+ * @returns `S3File` instance for the specified path
+ *
+ * @example
+ * import { s3 } from "bun";
+ * const file = s3("my-file.txt", {
+ *   bucket: "my-bucket",
+ *   accessKeyId: "your-access-key",
+ *   secretAccessKey: "your-secret-key"
+ * });
+ *
+ * // Read the file
+ * const content = await file.text();
+ *
+ * @example
+ * // Using s3:// protocol
+ * const file = s3("s3://my-bucket/my-file.txt", {
+ *   accessKeyId: "your-access-key",
+ *   secretAccessKey: "your-secret-key"
+ * });
+ */
+function s3(path: string | URL, options?: S3Options): S3File;

 /**
  * Configuration options for S3 operations
@@ -1571,7 +1597,7 @@ declare module "bun" {
    *
    * // Write large chunks of data efficiently
    * for (const chunk of largeDataChunks) {
-   *   writer.write(chunk);
+   *   await writer.write(chunk);
    * }
    * await writer.end();
    *
@@ -1579,7 +1605,7 @@ declare module "bun" {
    * // Error handling
    * const writer = file.writer();
    * try {
-   *   writer.write(data);
+   *   await writer.write(data);
    *   await writer.end();
    * } catch (err) {
    *   console.error('Upload failed:', err);
@@ -64,6 +64,31 @@ struct loop_ssl_data {
   BIO_METHOD *shared_biom;
 };

+enum us_ssl_sni_result_type {
+  // no cert or error
+  US_SSL_SNI_RESULT_NONE = 0,
+  // we need to parse a new SSL_CTX
+  US_SSL_SNI_RESULT_OPTIONS = 1,
+  // most optimal case
+  US_SSL_SNI_RESULT_SSL_CONTEXT = 2,
+};
+union us_ssl_sni_result {
+  struct us_bun_socket_context_options_t options;
+  SSL_CTX* ssl_context;
+};
+
+// tagged union for sni result
+struct us_tagged_ssl_sni_result {
+  uint8_t tag;
+  union us_ssl_sni_result val;
+};
+
+void (*us_sni_result_cb)(struct us_internal_ssl_socket_t*, struct us_tagged_ssl_sni_result result);
+void (*us_sni_callback)(struct us_internal_ssl_socket_t*,
+    const char *hostname, us_tagged_ssl_sni_result result_cb, void* ctx)
+
 struct us_internal_ssl_socket_context_t {
   struct us_socket_context_t sc;
@@ -98,6 +123,10 @@ struct us_internal_ssl_socket_context_t {
   us_internal_on_handshake_t on_handshake;
   void *handshake_data;

+  // dynamic sni callback
+  us_sni_callback on_sni_callback;
+  void *on_sni_callback_ctx;
 };

 // same here, should or shouldn't it
@@ -114,6 +143,8 @@ struct us_internal_ssl_socket_t {
   unsigned int ssl_read_wants_write : 1;
   unsigned int handshake_state : 2;
   unsigned int fatal_error : 1;
+  unsigned int sni_callback_running : 1;
+  unsigned int cert_cb_running : 1;
 };

 int passphrase_cb(char *buf, int size, int rwflag, void *u) {
@@ -213,6 +244,11 @@ struct us_internal_ssl_socket_t *ssl_on_open(struct us_internal_ssl_socket_t *s,
   s->ssl_read_wants_write = 0;
   s->fatal_error = 0;
   s->handshake_state = HANDSHAKE_PENDING;
+  s->sni_callback_running = 0;
+  s->cert_cb_running = 0;
+  if(context->on_sni_callback) {
+    SSL_set_cert_cb(s->ssl, us_internal_ssl_cert_cb, s);
+  }

   SSL_set_bio(s->ssl, loop_ssl_data->shared_rbio, loop_ssl_data->shared_wbio);
@@ -404,7 +440,8 @@ void us_internal_update_handshake(struct us_internal_ssl_socket_t *s) {
   if (result <= 0) {
     int err = SSL_get_error(s->ssl, result);
-    // as far as I know these are the only errors we want to handle
-    if (err != SSL_ERROR_WANT_READ && err != SSL_ERROR_WANT_WRITE) {
+    // SSL_ERROR_WANT_X509_LOOKUP is a special case for SNI with means the promise/callback is still running
+    if (err != SSL_ERROR_WANT_READ && err != SSL_ERROR_WANT_WRITE && err != SSL_ERROR_WANT_X509_LOOKUP) {
       // clear per thread error queue if it may contain something
       if (err == SSL_ERROR_SSL || err == SSL_ERROR_SYSCALL) {
         ERR_clear_error();
@@ -1341,12 +1378,84 @@ us_internal_ssl_socket_get_sni_userdata(struct us_internal_ssl_socket_t *s) {
   return SSL_CTX_get_ex_data(SSL_get_SSL_CTX(s->ssl), 0);
 }

+void us_internal_ssl_socket_context_sni_result(
+    struct us_internal_ssl_socket_t *s,
+    struct us_tagged_ssl_sni_result result) {
+
+  s->cert_cb_running = 0;
+
+  switch(result.tag) {
+    case US_SSL_SNI_RESULT_OPTIONS:
+      enum create_bun_socket_error_t err = CREATE_BUN_SOCKET_ERROR_NONE;
+      SSL_CTX *ssl_context = create_ssl_context_from_bun_options(result.val.options, &err);
+      if (ssl_context) {
+        SSL_set_SSL_CTX(s->ssl, ssl_context);
+      } else {
+        // error in this case lets fallback to the default and continue
+      }
+      break;
+    case US_SSL_SNI_RESULT_SSL_CONTEXT:
+      SSL_CTX *ssl_context = result.val.ssl_context;
+      if (ssl_context) {
+        // set ssl context
+        SSL_set_SSL_CTX(s->ssl, ssl_context);
+      } else {
+        // error in this case lets fallback to the default and continue
+      }
+      break;
+  }
+  // if cert_cb_running is 1 it means we are in the middle of a handshake already so no need to update again
+  // if cert_cb_running is 0 it means this callback is async and we need to update the handshake
+  if(s->cert_cb_running == 0) {
+    // continue handshake
+    us_internal_update_handshake(s);
+  }
+}
+int us_internal_ssl_cert_cb(SSL *ssl, void *arg) {
+
+  struct us_internal_ssl_socket_t *s = (struct us_internal_ssl_socket_t *)arg;
+  struct us_internal_ssl_socket_context_t *context =
+      (struct us_internal_ssl_socket_context_t *)us_socket_context(0, &s->s);
+
+  if(!context) return 1;
+
+  if(context->on_sni_callback && s->cert_cb_running == 0) {
+    s->cert_cb_running = 1;
+    s->sni_callback_running = 1;
+    context->on_sni_callback(s, SSL_get_servername(ssl, TLSEXT_NAMETYPE_host_name), us_internal_ssl_socket_context_sni_result, context->on_sni_callback_ctx);
+    s->cert_cb_running = 0;
+
+    // if callback is done, return 1
+    if(s->sni_callback_running == 0) {
+      return 1;
+    }
+
+    // still waiting for callback
+    return -1;
+  }
+
+  // if no callback, use default otherwise still waiting for callback
+  return s->sni_callback_running == 0 ? 1 : -1;
+}
+void us_internal_ssl_socket_context_add_sni_callback(
+    struct us_internal_ssl_socket_context_t *context,
+    us_sni_callback cb, void* ctx) {
+
+  context->on_sni_callback = cb;
+  context->on_sni_callback_ctx = ctx;
+}
+
 /* Todo: return error on failure? */
 void us_internal_ssl_socket_context_add_server_name(
     struct us_internal_ssl_socket_context_t *context,
     const char *hostname_pattern, struct us_socket_context_options_t options,
     void *user) {

   /* Try and construct an SSL_CTX from options */
   SSL_CTX *ssl_context = create_ssl_context_from_options(options);
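The control flow in the `us_internal_ssl_cert_cb` code in this diff — return 1 when the SNI callback resolved synchronously, return -1 ("pause the handshake") while an async callback is still pending, then resume the handshake when the result arrives — can be sketched in JavaScript. This is a simplified model with hypothetical names, not the uSockets API:

```javascript
// Sketch of the sync-vs-async cert callback pattern: the cert callback
// reports "done" or "pending", and an async result resumes the handshake.
function makeSocket(sniCallback) {
  const s = { inCertCb: false, pending: false, handshakes: 0 };
  s.onSniResult = () => {
    s.pending = false;
    // If we are no longer inside certCb, the result arrived asynchronously
    // and we must resume the paused handshake ourselves.
    if (!s.inCertCb) s.handshakes += 1;
  };
  s.certCb = () => {
    s.inCertCb = true;
    s.pending = true;
    sniCallback(s, s.onSniResult); // may resolve now or later
    s.inCertCb = false;
    return s.pending ? "pending" : "done"; // -1 vs 1 in the C code
  };
  return s;
}

// Synchronous callback: resolved before certCb returns.
const syncSock = makeSocket((sock, done) => done());
console.log(syncSock.certCb()); // done

// Asynchronous callback: certCb reports pending, handshake resumes later.
let finish;
const asyncSock = makeSocket((sock, done) => { finish = done; });
console.log(asyncSock.certCb()); // pending
finish();
console.log(asyncSock.handshakes); // 1
```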
@@ -25,7 +25,7 @@ At its core is the _Bun runtime_, a fast JavaScript runtime designed as a drop-i
 - Test runner codelens
 - Debugger support
 - Run scripts from package.json
-- Visual lockfile viewer for old binary lockfiles (`bun.lockb`)
+- Visual lockfile viewer (`bun.lockb`)

 ## In-editor error messages
@@ -30,6 +30,7 @@ pub const BunObject = struct {
     pub const registerMacro = toJSCallback(Bun.registerMacro);
     pub const resolve = toJSCallback(Bun.resolve);
     pub const resolveSync = toJSCallback(Bun.resolveSync);
+    pub const s3 = S3File.createJSS3File;
     pub const serve = toJSCallback(Bun.serve);
     pub const sha = toJSCallback(JSC.wrapStaticMethod(Crypto.SHA512_256, "hash_", true));
     pub const shellEscape = toJSCallback(Bun.shellEscape);
@@ -71,7 +72,6 @@ pub const BunObject = struct {
     pub const stdout = toJSGetter(Bun.getStdout);
     pub const unsafe = toJSGetter(Bun.getUnsafe);
     pub const S3Client = toJSGetter(Bun.getS3ClientConstructor);
-    pub const s3 = toJSGetter(Bun.getS3DefaultClient);
     // --- Getters ---

     fn getterName(comptime baseName: anytype) [:0]const u8 {
@@ -133,8 +133,6 @@ pub const BunObject = struct {
         @export(BunObject.semver, .{ .name = getterName("semver") });
         @export(BunObject.embeddedFiles, .{ .name = getterName("embeddedFiles") });
         @export(BunObject.S3Client, .{ .name = getterName("S3Client") });
-        @export(BunObject.s3, .{ .name = getterName("s3") });

         // --- Getters --

         // -- Callbacks --
@@ -159,6 +157,7 @@ pub const BunObject = struct {
         @export(BunObject.resolve, .{ .name = callbackName("resolve") });
         @export(BunObject.resolveSync, .{ .name = callbackName("resolveSync") });
         @export(BunObject.serve, .{ .name = callbackName("serve") });
+        @export(BunObject.s3, .{ .name = callbackName("s3") });
         @export(BunObject.sha, .{ .name = callbackName("sha") });
         @export(BunObject.shellEscape, .{ .name = callbackName("shellEscape") });
         @export(BunObject.shrink, .{ .name = callbackName("shrink") });
@@ -3452,9 +3451,6 @@ pub fn getGlobConstructor(globalThis: *JSC.JSGlobalObject, _: *JSC.JSObject) JSC
 pub fn getS3ClientConstructor(globalThis: *JSC.JSGlobalObject, _: *JSC.JSObject) JSC.JSValue {
     return JSC.WebCore.S3Client.getConstructor(globalThis);
 }
-pub fn getS3DefaultClient(globalThis: *JSC.JSGlobalObject, _: *JSC.JSObject) JSC.JSValue {
-    return globalThis.bunVM().rareData().s3DefaultClient(globalThis);
-}
 pub fn getEmbeddedFiles(globalThis: *JSC.JSGlobalObject, _: *JSC.JSObject) JSC.JSValue {
     const vm = globalThis.bunVM();
     const graph = vm.standalone_module_graph orelse return JSC.JSValue.createEmptyArray(globalThis, 0);
@@ -3877,8 +3877,9 @@ pub const WindowsNamedPipeListeningContext = if (Environment.isWindows) struct {
     BoringSSL.load();

     const ctx_opts: uws.us_bun_socket_context_options_t = JSC.API.ServerConfig.SSLConfig.asUSockets(ssl_options);
+    var err: uws.create_bun_socket_error_t = .none;
     // Create SSL context using uSockets to match behavior of node.js
-    const ctx = uws.create_ssl_context_from_bun_options(ctx_opts) orelse return error.InvalidOptions; // invalid options
+    const ctx = uws.create_ssl_context_from_bun_options(ctx_opts, &err) orelse return error.InvalidOptions; // invalid options
     errdefer BoringSSL.SSL_CTX_free(ctx);
     this.ctx = ctx;
 }
@@ -97,8 +97,9 @@ pub fn SSLWrapper(comptime T: type) type {
     BoringSSL.load();

     const ctx_opts: uws.us_bun_socket_context_options_t = JSC.API.ServerConfig.SSLConfig.asUSockets(ssl_options);
+    var err: uws.create_bun_socket_error_t = .none;
     // Create SSL context using uSockets to match behavior of node.js
-    const ctx = uws.create_ssl_context_from_bun_options(ctx_opts) orelse return error.InvalidOptions; // invalid options
+    const ctx = uws.create_ssl_context_from_bun_options(ctx_opts, &err) orelse return error.InvalidOptions; // invalid options
     errdefer BoringSSL.SSL_CTX_free(ctx);
     return try This.initWithCTX(ctx, is_client, handlers);
 }
@@ -32,7 +32,6 @@
     macro(semver) \
     macro(embeddedFiles) \
     macro(S3Client) \
-    macro(s3) \

 // --- Callbacks ---
 #define FOR_EACH_CALLBACK(macro) \
@@ -59,6 +58,7 @@
     macro(registerMacro) \
     macro(resolve) \
     macro(resolveSync) \
+    macro(s3) \
     macro(serve) \
     macro(sha) \
     macro(shrink) \
@@ -702,7 +702,6 @@ JSC_DEFINE_HOST_FUNCTION(functionFileURLToPath, (JSC::JSGlobalObject * globalObj
 Transpiler BunObject_getter_wrap_Transpiler DontDelete|PropertyCallback
 embeddedFiles BunObject_getter_wrap_embeddedFiles DontDelete|PropertyCallback
 S3Client BunObject_getter_wrap_S3Client DontDelete|PropertyCallback
-s3 BunObject_getter_wrap_s3 DontDelete|PropertyCallback
 allocUnsafe BunObject_callback_allocUnsafe DontDelete|Function 1
 argv BunObject_getter_wrap_argv DontDelete|PropertyCallback
 build BunObject_callback_build DontDelete|Function 1
@@ -755,6 +754,7 @@ JSC_DEFINE_HOST_FUNCTION(functionFileURLToPath, (JSC::JSGlobalObject * globalObj
 resolveSync BunObject_callback_resolveSync DontDelete|Function 1
 revision constructBunRevision ReadOnly|DontDelete|PropertyCallback
 semver BunObject_getter_wrap_semver ReadOnly|DontDelete|PropertyCallback
+s3 BunObject_callback_s3 DontDelete|Function 1
 sql defaultBunSQLObject DontDelete|PropertyCallback
 postgres defaultBunSQLObject DontDelete|PropertyCallback
 SQL constructBunSQLObject DontDelete|PropertyCallback
@@ -50,8 +50,6 @@ temp_pipe_read_buffer: ?*PipeReadBuffer = null,

 aws_signature_cache: AWSSignatureCache = .{},

-s3_default_client: JSC.Strong = .{},
-
 const PipeReadBuffer = [256 * 1024]u8;
 const DIGESTED_HMAC_256_LEN = 32;
 pub const AWSSignatureCache = struct {
@@ -437,23 +435,6 @@ pub fn nodeFSStatWatcherScheduler(rare: *RareData, vm: *JSC.VirtualMachine) *Sta
     };
 }

-pub fn s3DefaultClient(rare: *RareData, globalThis: *JSC.JSGlobalObject) JSC.JSValue {
-    return rare.s3_default_client.get() orelse {
-        const vm = globalThis.bunVM();
-        var aws_options = bun.S3.S3Credentials.getCredentialsWithOptions(vm.transpiler.env.getS3Credentials(), .{}, null, null, globalThis) catch bun.outOfMemory();
-        defer aws_options.deinit();
-        const client = JSC.WebCore.S3Client.new(.{
-            .credentials = aws_options.credentials.dupe(),
-            .options = aws_options.options,
-            .acl = aws_options.acl,
-        });
-        const js_client = client.toJS(globalThis);
-        js_client.ensureStillAlive();
-        rare.s3_default_client = JSC.Strong.create(js_client, globalThis);
-        return js_client;
-    };
-}
-
 pub fn deinit(this: *RareData) void {
     if (this.temp_pipe_read_buffer) |pipe| {
         this.temp_pipe_read_buffer = null;
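The `s3DefaultClient` shown in the hunk above is a lazily created, cached singleton: the default S3 client is only constructed on first access, then the same strongly-held object is returned for every later lookup. A minimal sketch of that caching pattern in plain JavaScript (the factory and names here are stand-ins, not Bun's internals):

```javascript
// Lazily create a value on first access and reuse it afterwards.
// `factory` stands in for the credential lookup + client construction
// that the Zig code performs (hypothetical; not Bun's actual API).
function makeLazy(factory) {
  let cached;
  return () => (cached ??= factory());
}

let constructions = 0;
const getDefaultClient = makeLazy(() => {
  constructions += 1;
  return { name: "default-s3-client" };
});
```

Every call returns the same object, so the client (and its credential resolution) happens at most once per process, mirroring the `rare.s3_default_client.get() orelse { ... }` shape above.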
@@ -462,7 +443,6 @@ pub fn deinit(this: *RareData) void {

     this.aws_signature_cache.deinit();

-    this.s3_default_client.deinit();
     if (this.boring_ssl_engine) |engine| {
         _ = bun.BoringSSL.ENGINE_free(engine);
     }
@@ -118,7 +118,7 @@ pub const S3Client = struct {
         },
     );
 } else {
-    try writer.writeAll(" {");
+    try writer.writeAll(comptime bun.Output.prettyFmt(" {{", enable_ansi_colors));
 }

 try writeFormatCredentials(this.credentials, this.options, this.acl, Formatter, formatter, writer, enable_ansi_colors);
@@ -2641,7 +2641,15 @@ pub const us_bun_socket_context_options_t = extern struct {
     client_renegotiation_limit: u32 = 3,
     client_renegotiation_window: u32 = 600,
 };
-pub extern fn create_ssl_context_from_bun_options(options: us_bun_socket_context_options_t) ?*BoringSSL.SSL_CTX;
-
-pub const create_bun_socket_error_t = enum(i32) {
-    none = 0,
+
+pub const create_bun_socket_error_t = enum(c_int) {
+    none = 0,
+    load_ca_file,
+    invalid_ca_file,
+    invalid_ca,
+};
+
+pub extern fn create_ssl_context_from_bun_options(options: us_bun_socket_context_options_t, err: ?*create_bun_socket_error_t) ?*BoringSSL.SSL_CTX;
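The hunk above threads a `create_bun_socket_error_t` out-parameter through SSL context creation so callers can distinguish why context setup failed (CA file could not be loaded vs. malformed vs. invalid certificate). A rough JavaScript sketch of consuming such an error code (the variant values and message strings are illustrative assumptions, not Bun's actual errors):

```javascript
// Mirror of the Zig enum's variants (sequential values assumed).
const CreateBunSocketError = Object.freeze({
  none: 0,
  load_ca_file: 1,
  invalid_ca_file: 2,
  invalid_ca: 3,
});

// Translate an error code from the native layer into a message.
function describeSocketError(code) {
  switch (code) {
    case CreateBunSocketError.none:
      return null; // context creation succeeded
    case CreateBunSocketError.load_ca_file:
      return "could not load the CA file";
    case CreateBunSocketError.invalid_ca_file:
      return "the CA file is malformed";
    case CreateBunSocketError.invalid_ca:
      return "the CA certificate is invalid";
    default:
      return "unknown SSL context error";
  }
}
```

The point of the out-parameter is exactly this dispatch: a bare `orelse return error.InvalidOptions` loses the cause, while the enum lets the JS-facing layer raise a specific error.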
@@ -44,6 +44,7 @@ const [H2FrameParser, assertSettings, getPackedSettings, getUnpackedSettings] =

 const sensitiveHeaders = Symbol.for("nodejs.http2.sensitiveHeaders");
 const bunHTTP2Native = Symbol.for("::bunhttp2native::");
+const bunHTTP2StreamReadQueue = Symbol.for("::bunhttp2ReadQueue::");

 const bunHTTP2Socket = Symbol.for("::bunhttp2socket::");
 const bunHTTP2StreamFinal = Symbol.for("::bunHTTP2StreamFinal::");
@@ -1506,8 +1507,13 @@ function assertSession(session) {
 }
 hideFromStack(assertSession);

-function pushToStream(stream, data) {
-  stream.push(data);
+function pushToStream(stream, data) {
+  // if (stream.writableEnded) return;
+  const queue = stream[bunHTTP2StreamReadQueue];
+  if (queue.isEmpty()) {
+    if (stream.push(data)) return;
+  }
+  queue.push(data);
 }

 enum StreamState {
@@ -1545,6 +1551,7 @@ class Http2Stream extends Duplex {
   [bunHTTP2StreamStatus]: number = 0;

   rstCode: number | undefined = undefined;
+  [bunHTTP2StreamReadQueue]: Array<Buffer> = $createFIFO();
   [bunHTTP2Headers]: any;
   [kInfoHeaders]: any;
   #sentTrailers: any;
@@ -1765,11 +1772,18 @@ class Http2Stream extends Duplex {
     }
   }

-  _read(_size) {
-    // we always use the internal stream queue now
+  _read(size) {
+    const queue = this[bunHTTP2StreamReadQueue];
+    let chunk;
+    while ((chunk = queue.peek())) {
+      if (!this.push(chunk)) {
+        queue.shift();
+        return;
+      }
+      queue.shift();
+    }
   }

   end(chunk, encoding, callback) {
     const status = this[bunHTTP2StreamStatus];
@@ -101,6 +101,7 @@ plugin_impl_with_needle(const OnBeforeParseArguments *args,

 int fetch_result = result->fetchSourceCode(args, result);
 if (fetch_result != 0) {
   printf("FUCK\n");
   exit(1);
 }
@@ -123,6 +124,7 @@ plugin_impl_with_needle(const OnBeforeParseArguments *args,
 if (needle_count > 0) {
   char *new_source = (char *)malloc(result->source_len);
   if (new_source == nullptr) {
     printf("FUCK\n");
     exit(1);
   }
   memcpy(new_source, result->source_ptr, result->source_len);
@@ -146,6 +148,7 @@ plugin_impl_with_needle(const OnBeforeParseArguments *args,
 } else if (strcmp(needle, "baz") == 0) {
   needle_atomic_value = &external->baz_count;
 }
+printf("FUCK: %d %s\n", needle_count, needle);
 needle_atomic_value->fetch_add(needle_count);
 free_counter = &external->compilation_ctx_freed_count;
 }
test/js/bun/s3/s3-stream-leak-fixture.js
@@ -6,9 +6,9 @@ const { randomUUID } = require("crypto");

 const s3Dest = randomUUID() + "-s3-stream-leak-fixture";

-const s3file = Bun.s3.file(s3Dest);
+const s3file = Bun.s3(s3Dest);
 async function readLargeFile() {
-  const stream = Bun.s3.file(s3Dest).stream();
+  const stream = Bun.s3(s3Dest).stream();
   const reader = stream.getReader();
   while (true) {
     const { done, value } = await reader.read();
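The fixture above drains the S3 stream with the standard `getReader()` loop. The same drain pattern works on any web `ReadableStream`, no S3 required (a sketch for illustration):

```javascript
// Read a ReadableStream to completion and count the bytes seen.
async function drainStream(stream) {
  const reader = stream.getReader();
  let total = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) return total;
    total += value.length;
  }
}
```

Looping until `done` is what releases each chunk for garbage collection as it is consumed, which is exactly the behavior the leak fixture exercises.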
test/js/bun/s3/s3-text-leak-fixture.js
@@ -6,9 +6,9 @@ const { randomUUID } = require("crypto");

 const s3Dest = randomUUID() + "-s3-stream-leak-fixture";

-const s3file = Bun.s3.file(s3Dest);
+const s3file = Bun.s3(s3Dest);
 async function readLargeFile() {
-  await Bun.s3.file(s3Dest).text();
+  await Bun.s3(s3Dest).text();
 }
 async function run(inputType) {
   await s3file.write(inputType);
test/js/bun/s3/s3-write-leak-fixture.js
@@ -6,7 +6,7 @@ const dest = process.argv.at(-1);
 const { randomUUID } = require("crypto");
 const payload = new Buffer(1024 * 1024 + 1, "A".charCodeAt(0)).toString("utf-8");
 async function writeLargeFile() {
-  const s3file = Bun.s3.file(randomUUID());
+  const s3file = Bun.s3(randomUUID());
   await s3file.write(payload);
   await s3file.unlink();
 }
test/js/bun/s3/s3-writer-leak-fixture.js
@@ -6,7 +6,7 @@ const dest = process.argv.at(-1);
 const { randomUUID } = require("crypto");
 const payload = new Buffer(1024 * 256, "A".charCodeAt(0)).toString("utf-8");
 async function writeLargeFile() {
-  const s3file = Bun.s3.file(randomUUID());
+  const s3file = Bun.s3(randomUUID());
   const writer = s3file.writer();
   writer.write(payload);
   await Bun.sleep(10);
@@ -1,8 +1,7 @@
 import { describe, expect, it, beforeAll, afterAll } from "bun:test";
 import { bunExe, bunEnv, getSecret, tempDirWithFiles, isLinux } from "harness";
 import { randomUUID } from "crypto";
-import { S3Client, s3 as defaultS3, file, which } from "bun";
-const s3 = (...args) => defaultS3.file(...args);
+import { S3Client, s3, file, which } from "bun";
 const S3 = (...args) => new S3Client(...args);
 import child_process from "child_process";
 import type { S3Options } from "bun";
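The two lines dropped in the hunk above are a small compatibility shim: tests written against a callable `s3(path)` keep working when `s3` is instead a client object exposing `.file(path)`. The adapter idea in isolation (stub client with hypothetical shapes, not Bun's API):

```javascript
// A stand-in for a client object whose `.file()` returns a handle.
const client = {
  file(path, options) {
    return { path, options };
  },
};

// Re-create the old call-style API on top of the object-style client.
const s3 = (...args) => client.file(...args);
```

Forwarding rest arguments keeps both call shapes (`s3(path)` and `s3(path, options)`) behaving identically to the underlying method, so an existing test suite needs no other changes.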