Compare commits

..

19 Commits

Author SHA1 Message Date
Claude Bot
8612285e3d fix: preserve stack traces for errors after structuredClone
After calling structuredClone() on an Error object, the error would lose
its internal JSC stack trace. When console.error formatted the cloned error,
it fell back to parsing the .stack string property. However, the V8-style
stack trace parser would stop early when encountering frames without
function names (e.g., top-level code execution frames).

These anonymous frames are formatted as "at /path/to/file:line:column"
without parentheses, which the parser previously treated as invalid and
stopped parsing.

This fix updates the V8StackTraceIterator to properly handle frames
without function names by parsing them as anonymous frames with just
the source location information.
2025-11-11 05:01:14 +00:00
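A minimal sketch of the behavior this commit describes (file path and line numbers are illustrative):

```ts
// Before this fix: the clone's stack came from re-parsing the .stack string,
// and the V8-style parser stopped at the first anonymous frame, e.g.
//   at /app/index.ts:3:15   <- no function name, no parentheses
const original = new Error("boom"); // created in top-level code
const clone = structuredClone(original); // internal JSC stack is not cloned
console.error(original); // full stack trace
console.error(clone);    // previously truncated at the first anonymous frame
```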
github-actions[bot]
0a307ed880 deps: update sqlite to 3.51.0 (#24530) 2025-11-09 01:09:25 -08:00
robobun
b4f85c8866 Update docs example versions to 1.3.2 (#24522)
## Summary

Updated all example version placeholders in documentation from 1.3.1 and
1.2.20 to 1.3.2.

## Changes

Updated version examples in:
- Installation examples (Linux/macOS and Windows install commands)
- Package manager output examples (`bun install`, `bun publish`, `bun
pm` commands)
- Test runner output examples
- Spawn/child process output examples
- Fetch User-Agent header examples in debugging docs
- `Bun.version` API example

## Notes

- Historical version references (e.g., "As of Bun v1.x.x..." or "Bun
v1.x.x+ required") were intentionally **preserved** as they document
when features were introduced
- Generic package.json version examples (non-Bun package versions) were
**preserved**
- Only example outputs and code snippets showing current Bun version
were updated

## Files Changed (13 total)

- `docs/installation.mdx`
- `docs/guides/install/from-npm-install-to-bun-install.mdx`
- `docs/guides/install/add-peer.mdx`
- `docs/bundler/html-static.mdx` (6 occurrences)
- `docs/test/dom.mdx`
- `docs/pm/cli/publish.mdx`
- `docs/pm/cli/pm.mdx`
- `docs/guides/test/snapshot.mdx` (2 occurrences)
- `docs/guides/ecosystem/nuxt.mdx`
- `docs/guides/util/version.mdx`
- `docs/runtime/debugger.mdx` (3 occurrences)
- `docs/runtime/networking/fetch.mdx`
- `docs/runtime/child-process.mdx`

**Total:** 23 version references updated

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Michael H <git@riskymh.dev>
2025-11-09 16:20:04 +11:00
Michael H
614e8292e3 docs: fix discord invite (#24498)
### What does this PR do?

We don't have the Discord vanity invite.

### How did you verify your code works?
2025-11-08 21:09:57 -08:00
Michael H
3829b6d0aa add .mdx to .gitattributes (#24525)
### What does this PR do?

### How did you verify your code works?
2025-11-08 20:56:38 -08:00
Meghan Denny
f30e3951a7 Bump 2025-11-07 23:58:34 -08:00
Michael H
b131639cc5 ci: run modified tests first (#24463)
Co-authored-by: Meghan Denny <meghan@bun.com>
2025-11-07 21:49:58 -08:00
Jarred Sumner
b9b07172aa Update package.json 2025-11-07 21:22:53 -08:00
Marko Vejnovic
02b474415d bug(ENG-21479): Fix Valkey URL Parsing (#24458)
### What does this PR do?

Fixes https://github.com/oven-sh/bun/issues/24385

### How did you verify your code works?

Confirmed that the test added in the first commit fails on mainline
`bun` and is fixed in this PR.

---------

Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-11-07 21:18:07 -08:00
Dylan Conway
de9a38bd11 fix(install): create bun.lock instead of bun.lockb if npm/yarn/pnpm migration fails (#24494)
### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-07 20:58:44 -08:00
Marko Vejnovic
2e57c6bf95 fix(ENG-21492): Fix private git+ssh installs (#24490)
### What does this PR do?

This PR is the fix-only version of
https://github.com/oven-sh/bun/pull/24486. Unfortunately, due to the
complexity of setting up all CI agents to reach private git repos, I was
unable to get CI passing there.

### How did you verify your code works?

I ran this:

```
marko@fedora:~/Desktop/bun-4$ bun add git+ssh://git@github.com:oven-sh/private-install-test-repo.git#5b37e644a2ef23fad0da4027042f01b194b179e8
bun add v1.3.2-canary.108 (44402ad2)
  🔍 Resolving [1/1] error: "git clone" for "git+ssh://git@github.com:oven-sh/private-install-test-repo.git#5b37e644a2ef23fad0da4027042f01b194b179e8" failed
error: InstallFailed cloning repository for git+ssh://git@github.com:oven-sh/private-install-test-repo.git#5b37e644a2ef23fad0da4027042f01b194b179e8
error: git+ssh://git@github.com:oven-sh/private-install-test-repo.git#5b37e644a2ef23fad0da4027042f01b194b179e8 failed to resolve
```

followed by

```
marko@fedora:~/Desktop/bun-4$ BUN_DEBUG_QUIET_LOGS=1 ./build/debug/bun-debug add git+ssh://git@github.com:oven-sh/private-install-test-repo.git#5b37e644a2ef23fad0da4027042f01b194b179e8
bun add v1.3.2 (0db90b25)

installed private-install-test-repo@git+ssh://git@github.com:oven-sh/private-install-test-repo.git#5b37e644a2ef23fad0da4027042f01b194b179e8

[1.61s] done
```

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2025-11-07 19:56:34 -08:00
Lydia Hallie
35a57ff008 Add Upstash guide with Bun Redis (#24493) 2025-11-07 18:32:19 -08:00
Meghan Denny
7a931d5b26 [publish images] 2025-11-07 16:41:57 -08:00
Meghan Denny
2b42be9dcc [publish images] 2025-11-07 16:40:13 -08:00
Meghan Denny
e6be28b8d4 [publish images] 2025-11-07 16:37:57 -08:00
Meghan Denny
d0a1984a20 ci: skip running tests when a PR only changes docs (#24459)
fixes https://linear.app/oven/issue/ENG-21489
2025-11-07 15:52:37 -08:00
Lydia Hallie
1896c75d78 Add deploy guides for AWS Lambda, Google Run, DigitalOcean (#24414)
Adds deployment guides for Bun apps on AWS Lambda, Google Cloud Run, and
DigitalOcean using a custom `Dockerfile`

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-07 15:03:36 -08:00
Marko Vejnovic
3a810da66c build(ENG-21466): Fix sccache not caching across builds (#24423) 2025-11-07 14:33:26 -08:00
Jarred Sumner
0db90b2526 Implement isolated event loop for spawnSync (#24436) 2025-11-07 05:28:33 -08:00
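A hedged sketch of the user-visible effect of this change (command and timings are illustrative):

```ts
// spawnSync now ticks an isolated event loop while it blocks, so this
// main-loop timer does not fire mid-spawn; it runs after spawnSync returns.
setTimeout(() => console.log("timer fired"), 10);

const result = Bun.spawnSync({ cmd: ["sleep", "1"] });
console.log("spawnSync done:", result.exitCode); // logs before "timer fired"
```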
100 changed files with 6630 additions and 1740 deletions


@@ -16,6 +16,7 @@ import {
getEmoji,
getEnv,
getLastSuccessfulBuild,
getSecret,
isBuildkite,
isBuildManual,
isFork,
@@ -1203,6 +1204,43 @@ async function main() {
console.log("Generated options:", options);
}
startGroup("Querying GitHub for files...");
if (options && isBuildkite && !isMainBranch()) {
/** @type {string[]} */
let allFiles = [];
/** @type {string[]} */
let newFiles = [];
let prFileCount = 0;
try {
console.log("on buildkite: collecting new files from PR");
const per_page = 50;
const { BUILDKITE_PULL_REQUEST } = process.env;
for (let i = 1; i <= 10; i++) {
const res = await fetch(
`https://api.github.com/repos/oven-sh/bun/pulls/${BUILDKITE_PULL_REQUEST}/files?per_page=${per_page}&page=${i}`,
{ headers: { Authorization: `Bearer ${getSecret("GITHUB_TOKEN")}` } },
);
const doc = await res.json();
console.log(`-> page ${i}, found ${doc.length} items`);
if (doc.length === 0) break;
for (const { filename, status } of doc) {
prFileCount += 1;
allFiles.push(filename);
if (status !== "added") continue;
newFiles.push(filename);
}
if (doc.length < per_page) break;
}
console.log(`- PR ${BUILDKITE_PULL_REQUEST}, ${prFileCount} files, ${newFiles.length} new files`);
} catch (e) {
console.error(e);
}
if (allFiles.every(filename => filename.startsWith("docs/"))) {
console.log(`- PR is only docs, skipping tests!`);
return;
}
}
startGroup("Generating pipeline...");
const pipeline = await getPipeline(options);
if (!pipeline) {

.gitattributes vendored

@@ -16,6 +16,7 @@
*.map text eol=lf whitespace=blank-at-eol,-blank-at-eof,-space-before-tab,tab-in-indent,tabwidth=2
*.md text eol=lf whitespace=blank-at-eol,-blank-at-eof,-space-before-tab,tab-in-indent,tabwidth=2
*.mdc text eol=lf whitespace=blank-at-eol,-blank-at-eof,-space-before-tab,tab-in-indent,tabwidth=2
*.mdx text eol=lf whitespace=blank-at-eol,-blank-at-eof,-space-before-tab,tab-in-indent,tabwidth=2
*.mjs text eol=lf whitespace=blank-at-eol,-blank-at-eof,-space-before-tab,tab-in-indent,tabwidth=2
*.mts text eol=lf whitespace=blank-at-eol,-blank-at-eof,-space-before-tab,tab-in-indent,tabwidth=2


@@ -38,16 +38,36 @@ If no valid issue number is provided, find the best existing file to modify inst
### Writing Tests
Tests use Bun's Jest-compatible test runner with proper test fixtures:
Tests use Bun's Jest-compatible test runner with proper test fixtures.
- For **single-file tests**, prefer `-e` over `tempDir`.
- For **multi-file tests**, prefer `tempDir` and `Bun.spawn`.
```typescript
import { test, expect } from "bun:test";
import { bunEnv, bunExe, normalizeBunSnapshot, tempDir } from "harness";
test("my feature", async () => {
test("(single-file test) my feature", async () => {
await using proc = Bun.spawn({
cmd: [bunExe(), "-e", "console.log('Hello, world!')"],
env: bunEnv,
});
const [stdout, stderr, exitCode] = await Promise.all([
proc.stdout.text(),
proc.stderr.text(),
proc.exited,
]);
expect(normalizeBunSnapshot(stdout)).toMatchInlineSnapshot(`"Hello, world!"`);
expect(exitCode).toBe(0);
});
test("(multi-file test) my feature", async () => {
// Create temp directory with test files
using dir = tempDir("test-prefix", {
"index.js": `console.log("hello");`,
"index.js": `import { foo } from "./foo.ts"; foo();`,
"foo.ts": `export function foo() { console.log("foo"); }`,
});
// Spawn Bun process

LATEST

@@ -1 +1 @@
1.3.1
1.3.2


@@ -25,7 +25,7 @@ bun ./index.html
```
```
Bun v1.2.20
Bun v1.3.2
ready in 6.62ms
→ http://localhost:3000/
Press h + Enter to show shortcuts
@@ -51,7 +51,7 @@ bun index.html
```
```
Bun v1.2.20
Bun v1.3.2
ready in 6.62ms
→ http://localhost:3000/
Press h + Enter to show shortcuts
@@ -81,7 +81,7 @@ bun ./index.html ./about.html
```
```txt
Bun v1.2.20
Bun v1.3.2
ready in 6.62ms
→ http://localhost:3000/
Routes:
@@ -104,7 +104,7 @@ bun ./**/*.html
```
```
Bun v1.2.20
Bun v1.3.2
ready in 6.62ms
→ http://localhost:3000/
Routes:
@@ -122,7 +122,7 @@ bun ./index.html ./about/index.html ./about/foo/index.html
```
```
Bun v1.2.20
Bun v1.3.2
ready in 6.62ms
→ http://localhost:3000/
Routes:
@@ -259,7 +259,7 @@ bun ./index.html --console
```
```
Bun v1.2.20
Bun v1.3.2
ready in 6.62ms
→ http://localhost:3000/
Press h + Enter to show shortcuts


@@ -298,7 +298,14 @@
{
"group": "Deployment",
"icon": "rocket",
"pages": ["/guides/deployment/vercel", "/guides/deployment/railway", "/guides/deployment/render"]
"pages": [
"/guides/deployment/vercel",
"/guides/deployment/railway",
"/guides/deployment/render",
"/guides/deployment/aws-lambda",
"/guides/deployment/digital-ocean",
"/guides/deployment/google-cloud-run"
]
},
{
"group": "Runtime & Debugging",
@@ -368,7 +375,8 @@
"/guides/ecosystem/stric",
"/guides/ecosystem/sveltekit",
"/guides/ecosystem/systemd",
"/guides/ecosystem/vite"
"/guides/ecosystem/vite",
"/guides/ecosystem/upstash"
]
},
{


@@ -0,0 +1,204 @@
---
title: Deploy a Bun application on AWS Lambda
sidebarTitle: Deploy on AWS Lambda
mode: center
---
[AWS Lambda](https://aws.amazon.com/lambda/) is a serverless compute service that lets you run code without provisioning or managing servers.
In this guide, we will deploy a Bun HTTP server to AWS Lambda using a `Dockerfile`.
<Note>
Before continuing, make sure you have:
- A Bun application ready for deployment
- An [AWS account](https://aws.amazon.com/)
- [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) installed and configured
- [Docker](https://docs.docker.com/get-started/get-docker/) installed and added to your `PATH`
</Note>
---
<Steps>
<Step title="Create a new Dockerfile">
Make sure you're in the directory containing your project, then create a new `Dockerfile` in the root of your project. This file contains the instructions to initialize the container, copy your local project files into it, install dependencies, and start the application.
```docker Dockerfile icon="docker"
# Use the official AWS Lambda adapter image to handle the Lambda runtime
FROM public.ecr.aws/awsguru/aws-lambda-adapter:0.9.0 AS aws-lambda-adapter
# Use the official Bun image to run the application
FROM oven/bun:debian AS bun_latest
# Copy the Lambda adapter into the container
COPY --from=aws-lambda-adapter /lambda-adapter /opt/extensions/lambda-adapter
# Set the port to 8080. This is required for the AWS Lambda adapter.
ENV PORT=8080
# Set the work directory to `/var/task`. This is the default work directory for Lambda.
WORKDIR "/var/task"
# Copy the package.json and bun.lock into the container
COPY package.json bun.lock ./
# Install the dependencies
RUN bun install --production --frozen-lockfile
# Copy the rest of the application into the container
COPY . /var/task
# Run the application.
CMD ["bun", "index.ts"]
```
<Note>
Make sure that the start command corresponds to your application's entry point. This can also be `CMD ["bun", "run", "start"]` if you have a start script in your `package.json`.
This image installs dependencies and runs your app with Bun inside a container. If your app doesn't have dependencies, you can omit the `RUN bun install --production --frozen-lockfile` line.
</Note>
Create a new `.dockerignore` file in the root of your project. This file contains the files and directories that should be _excluded_ from the container image, such as `node_modules`. This makes your builds faster and smaller:
```docker .dockerignore icon="Docker"
node_modules
Dockerfile*
.dockerignore
.git
.gitignore
README.md
LICENSE
.vscode
.env
# Any other files or directories you want to exclude
```
</Step>
<Step title="Build the Docker image">
Make sure you're in the directory containing your `Dockerfile`, then build the Docker image. In this case, we'll call the image `bun-lambda-demo` and tag it as `latest`.
```bash terminal icon="terminal"
# cd /path/to/your/app
docker build --provenance=false --platform linux/amd64 -t bun-lambda-demo:latest .
```
</Step>
<Step title="Create an ECR repository">
To push the image to AWS Lambda, we first need to create an [ECR repository](https://aws.amazon.com/ecr/) to push the image to.
By running the following command, we:
- Create an ECR repository named `bun-lambda-demo` in the `us-east-1` region
- Get the repository URI and export it as an environment variable. This is optional, but makes the next steps easier.
```bash terminal icon="terminal"
export ECR_URI=$(aws ecr create-repository --repository-name bun-lambda-demo --region us-east-1 --query 'repository.repositoryUri' --output text)
echo $ECR_URI
```
```txt
[id].dkr.ecr.us-east-1.amazonaws.com/bun-lambda-demo
```
<Note>
If you're using IAM Identity Center (SSO) or have configured AWS CLI with profiles, you'll need to add the `--profile` flag to your AWS CLI commands.
For example, if your profile is named `my-sso-app`, use `--profile my-sso-app`. Check your AWS CLI configuration with `aws configure list-profiles` to see available profiles.
```bash terminal icon="terminal"
export ECR_URI=$(aws ecr create-repository --repository-name bun-lambda-demo --region us-east-1 --profile my-sso-app --query 'repository.repositoryUri' --output text)
echo $ECR_URI
```
</Note>
</Step>
<Step title="Authenticate with the ECR repository">
Log in to the ECR repository:
```bash terminal icon="terminal"
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $ECR_URI
```
```txt
Login Succeeded
```
<Note>
If using a profile, use the `--profile` flag:
```bash terminal icon="terminal"
aws ecr get-login-password --region us-east-1 --profile my-sso-app | docker login --username AWS --password-stdin $ECR_URI
```
</Note>
</Step>
<Step title="Tag and push the docker image to the ECR repository">
Make sure you're in the directory containing your `Dockerfile`, then tag the docker image with the ECR repository URI.
```bash terminal icon="terminal"
docker tag bun-lambda-demo:latest ${ECR_URI}:latest
```
Then, push the image to the ECR repository.
```bash terminal icon="terminal"
docker push ${ECR_URI}:latest
```
</Step>
<Step title="Create an AWS Lambda function">
Go to **AWS Console** > **Lambda** > [**Create Function**](https://us-east-1.console.aws.amazon.com/lambda/home?region=us-east-1#/create/function?intent=authorFromImage) > Select **Container image**
<Warning>Make sure you've selected the right region; this URL defaults to `us-east-1`.</Warning>
<Frame>
![Create Function](/images/guides/lambda1.png)
</Frame>
Give the function a name, like `my-bun-function`.
</Step>
<Step title="Select the container image">
Then, in the **Container image URI** section, click **Browse images** and select the image we just pushed to the ECR repository.
<Frame>
![Select Container Repository](/images/guides/lambda2.png)
</Frame>
Then, select the `latest` image, and click on **Select image**.
<Frame>
![Select Container Image](/images/guides/lambda3.png)
</Frame>
</Step>
<Step title="Configure the function">
To get a public URL for the function, we need to go to **Additional configurations** > **Networking** > **Function URL**.
Set this to **Enable**, with Auth Type **NONE**.
<Frame>
![Set the Function URL](/images/guides/lambda4.png)
</Frame>
</Step>
<Step title="Create the function">
Click **Create function** at the bottom of the page to create the function.
<Frame>
![Create Function](/images/guides/lambda6.png)
</Frame>
</Step>
<Step title="Get the function URL">
Once the function has been created, you'll be redirected to the function's page, where you can see the function URL in the **"Function URL"** section.
<Frame>
![Function URL](/images/guides/lambda5.png)
</Frame>
</Step>
<Step title="Test the function">
🥳 Your app is now live! To test the function, you can either go to the **Test** tab, or call the function URL directly.
```bash terminal icon="terminal"
curl -X GET https://[your-function-id].lambda-url.us-east-1.on.aws/
```
```txt
Hello from Bun on Lambda!
```
</Step>
</Steps>


@@ -0,0 +1,161 @@
---
title: Deploy a Bun application on DigitalOcean
sidebarTitle: Deploy on DigitalOcean
mode: center
---
[DigitalOcean](https://www.digitalocean.com/) is a cloud platform that provides a range of services for building and deploying applications.
In this guide, we will deploy a Bun HTTP server to DigitalOcean using a `Dockerfile`.
<Note>
Before continuing, make sure you have:
- A Bun application ready for deployment
- A [DigitalOcean account](https://www.digitalocean.com/)
- [DigitalOcean CLI](https://docs.digitalocean.com/reference/doctl/how-to/install/#step-1-install-doctl) installed and configured
- [Docker](https://docs.docker.com/get-started/get-docker/) installed and added to your `PATH`
</Note>
---
<Steps>
<Step title="Create a new DigitalOcean Container Registry">
Create a new Container Registry to store the Docker image.
<Tabs>
<Tab title="Through the DigitalOcean dashboard">
In the DigitalOcean dashboard, go to [**Container Registry**](https://cloud.digitalocean.com/registry), and enter the details for the new registry.
<Frame>
![DigitalOcean registry dashboard](/images/guides/digitalocean-7.png)
</Frame>
Make sure the details are correct, then click **Create Registry**.
</Tab>
<Tab title="Through the DigitalOcean CLI">
```bash terminal icon="terminal"
doctl registry create bun-digitalocean-demo
```
```txt
Name Endpoint Region slug
bun-digitalocean-demo registry.digitalocean.com/bun-digitalocean-demo sfo2
```
</Tab>
</Tabs>
You should see the new registry in the [**DigitalOcean registry dashboard**](https://cloud.digitalocean.com/registry):
<Frame>
![DigitalOcean registry dashboard](/images/guides/digitalocean-1.png)
</Frame>
</Step>
<Step title="Create a new Dockerfile">
Make sure you're in the directory containing your project, then create a new `Dockerfile` in the root of your project. This file contains the instructions to initialize the container, copy your local project files into it, install dependencies, and start the application.
```docker Dockerfile icon="docker"
# Use the official Bun image to run the application
FROM oven/bun:debian
# Set the work directory to `/app`
WORKDIR /app
# Copy the package.json and bun.lock into the container
COPY package.json bun.lock ./
# Install the dependencies
RUN bun install --production --frozen-lockfile
# Copy the rest of the application into the container
COPY . .
# Expose the port (DigitalOcean will set PORT env var)
EXPOSE 8080
# Run the application
CMD ["bun", "index.ts"]
```
<Note>
Make sure that the start command corresponds to your application's entry point. This can also be `CMD ["bun", "run", "start"]` if you have a start script in your `package.json`.
This image installs dependencies and runs your app with Bun inside a container. If your app doesn't have dependencies, you can omit the `RUN bun install --production --frozen-lockfile` line.
</Note>
Create a new `.dockerignore` file in the root of your project. This file contains the files and directories that should be _excluded_ from the container image, such as `node_modules`. This makes your builds faster and smaller:
```docker .dockerignore icon="Docker"
node_modules
Dockerfile*
.dockerignore
.git
.gitignore
README.md
LICENSE
.vscode
.env
# Any other files or directories you want to exclude
```
</Step>
<Step title="Authenticate Docker with DigitalOcean registry">
Before building and pushing the Docker image, authenticate Docker with the DigitalOcean Container Registry:
```bash terminal icon="terminal"
doctl registry login
```
```txt
Successfully authenticated with registry.digitalocean.com
```
<Note>
This command authenticates Docker with DigitalOcean's registry using your DigitalOcean credentials. Without this step, the build and push command will fail with a 401 authentication error.
</Note>
</Step>
<Step title="Build and push the Docker image to the DigitalOcean registry">
Make sure you're in the directory containing your `Dockerfile`, then build and push the Docker image to the DigitalOcean registry in one command:
```bash terminal icon="terminal"
docker buildx build --platform=linux/amd64 -t registry.digitalocean.com/bun-digitalocean-demo/bun-digitalocean-demo:latest --push .
```
<Note>
If you're building on an ARM Mac (M1/M2), you must use `docker buildx` with `--platform=linux/amd64` to ensure compatibility with DigitalOcean's infrastructure. Using `docker build` without the platform flag will create an ARM64 image that won't run on DigitalOcean.
</Note>
Once the image is pushed, you should see it in the [**DigitalOcean registry dashboard**](https://cloud.digitalocean.com/registry):
<Frame>
![DigitalOcean registry dashboard](/images/guides/digitalocean-2.png)
</Frame>
</Step>
<Step title="Create a new DigitalOcean App Platform project">
In the DigitalOcean dashboard, go to [**App Platform**](https://cloud.digitalocean.com/apps) > **Create App**. We can create a project directly from the container image.
<Frame>
![DigitalOcean App Platform project dashboard](/images/guides/digitalocean-3.png)
</Frame>
Make sure the details are correct, then click **Next**.
<Frame>
![DigitalOcean App Platform service dashboard](/images/guides/digitalocean-4.png)
</Frame>
Review and configure resource settings, then click **Create app**.
<Frame>
![DigitalOcean App Platform service dashboard](/images/guides/digitalocean-6.png)
</Frame>
</Step>
<Step title="Visit your live application">
🥳 Your app is now live! Once the app is created, you should see it in the App Platform dashboard with the public URL.
<Frame>
![DigitalOcean App Platform app dashboard](/images/guides/digitalocean-5.png)
</Frame>
</Step>
</Steps>


@@ -0,0 +1,197 @@
---
title: Deploy a Bun application on Google Cloud Run
sidebarTitle: Deploy on Google Cloud Run
mode: center
---
[Google Cloud Run](https://cloud.google.com/run) is a managed platform for deploying and scaling serverless applications. Google handles the infrastructure for you.
In this guide, we will deploy a Bun HTTP server to Google Cloud Run using a `Dockerfile`.
<Note>
Before continuing, make sure you have:
- A Bun application ready for deployment
- A [Google Cloud account](https://cloud.google.com/) with billing enabled
- [Google Cloud CLI](https://cloud.google.com/sdk/docs/install) installed and configured
</Note>
---
<Steps>
<Step title={<span>Initialize <code>gcloud</code> by selecting or creating a project</span>}>
Make sure that you've initialized the Google Cloud CLI. This command logs you in, and prompts you to either select an existing project or create a new one.
For more help with the Google Cloud CLI, see the [official documentation](https://docs.cloud.google.com/sdk/gcloud/reference/init).
```bash terminal icon="terminal"
gcloud init
```
```txt
Welcome! This command will take you through the configuration of gcloud.
You must sign in to continue. Would you like to sign in (Y/n)? Y
You are signed in as [email@example.com].
Pick cloud project to use:
[1] existing-bun-app-1234
[2] Enter a project ID
[3] Create a new project
Please enter numeric choice or text value (must exactly match list item): 3
Enter a Project ID. my-bun-app
Your current project has been set to: [my-bun-app]
The Google Cloud CLI is configured and ready to use!
```
</Step>
<Step title="(Optional) Store your project info in environment variables">
Set variables for your project ID and number so they're easier to reuse in the following steps.
```bash terminal icon="terminal"
PROJECT_ID=$(gcloud projects list --format='value(projectId)' --filter='name="my bun app"')
PROJECT_NUMBER=$(gcloud projects list --format='value(projectNumber)' --filter='name="my bun app"')
echo $PROJECT_ID $PROJECT_NUMBER
```
```txt
my-bun-app-... [PROJECT_NUMBER]
```
</Step>
<Step title="Link a billing account">
List your available billing accounts and link one to your project:
```bash terminal icon="terminal"
gcloud billing accounts list
```
```txt
ACCOUNT_ID NAME OPEN MASTER_ACCOUNT_ID
[BILLING_ACCOUNT_ID] My Billing Account True
```
Link your billing account to your project. Replace `[BILLING_ACCOUNT_ID]` with the ID of your billing account.
```bash terminal icon="terminal"
gcloud billing projects link $PROJECT_ID --billing-account=[BILLING_ACCOUNT_ID]
```
```txt
billingAccountName: billingAccounts/[BILLING_ACCOUNT_ID]
billingEnabled: true
name: projects/my-bun-app-.../billingInfo
projectId: my-bun-app-...
```
</Step>
<Step title="Enable APIs and configure IAM roles">
Activate the necessary services and grant Cloud Build permissions:
```bash terminal icon="terminal"
gcloud services enable run.googleapis.com cloudbuild.googleapis.com
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member=serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com \
--role=roles/run.builder
```
<Note>
These commands enable Cloud Run (`run.googleapis.com`) and Cloud Build (`cloudbuild.googleapis.com`), which are required for deploying from source. Cloud Run runs your containerized app, while Cloud Build handles building and packaging it.
The IAM binding grants the Compute Engine service account (`$PROJECT_NUMBER-compute@developer.gserviceaccount.com`) permission to build and deploy images on your behalf.
</Note>
</Step>
<Step title="Add a Dockerfile">
Create a new `Dockerfile` in the root of your project. This file contains the instructions to initialize the container, copy your local project files into it, install dependencies, and start the application.
```docker Dockerfile icon="docker"
# Use the official Bun image to run the application
FROM oven/bun:latest
# Copy the package.json and bun.lock into the container
COPY package.json bun.lock ./
# Install the dependencies
RUN bun install --production --frozen-lockfile
# Copy the rest of the application into the container
COPY . .
# Run the application
CMD ["bun", "index.ts"]
```
<Note>
Make sure that the start command corresponds to your application's entry point. This can also be `CMD ["bun", "run", "start"]` if you have a start script in your `package.json`.
This image installs dependencies and runs your app with Bun inside a container. If your app doesn't have dependencies, you can omit the `RUN bun install --production --frozen-lockfile` line.
</Note>
Create a new `.dockerignore` file in the root of your project. This file contains the files and directories that should be _excluded_ from the container image, such as `node_modules`. This makes your builds faster and smaller:
```docker .dockerignore icon="Docker"
node_modules
Dockerfile*
.dockerignore
.git
.gitignore
README.md
LICENSE
.vscode
.env
# Any other files or directories you want to exclude
```
</Step>
<Step title="Deploy your service">
Make sure you're in the directory containing your `Dockerfile`, then deploy directly from your local source:
<Note>
Update the `--region` flag to your preferred region. You can also omit this flag to get an interactive prompt to select a region.
</Note>
```bash terminal icon="terminal"
gcloud run deploy my-bun-app --source . --region=us-west1 --allow-unauthenticated
```
```txt
Deploying from source requires an Artifact Registry Docker repository to store built containers. A repository named
[cloud-run-source-deploy] in region [us-west1] will be created.
Do you want to continue (Y/n)? Y
Building using Dockerfile and deploying container to Cloud Run service [my-bun-app] in project [my-bun-app-...] region [us-west1]
✓ Building and deploying... Done.
✓ Validating Service...
✓ Uploading sources...
✓ Building Container... Logs are available at [https://console.cloud.google.com/cloud-build/builds...].
✓ Creating Revision...
✓ Routing traffic...
✓ Setting IAM Policy...
Done.
Service [my-bun-app] revision [my-bun-app-...] has been deployed and is serving 100 percent of traffic.
Service URL: https://my-bun-app-....us-west1.run.app
```
</Step>
<Step title="Visit your live application">
🎉 Your Bun application is now live!
Visit the Service URL (`https://my-bun-app-....us-west1.run.app`) to confirm everything works as expected.
</Step>
</Steps>


@@ -14,7 +14,7 @@ bunx nuxi init my-nuxt-app
✔ Which package manager would you like to use?
bun
◐ Installing dependencies...
bun install v1.3.1 (16b4bf34)
bun install v1.3.2 (16b4bf34)
+ @nuxt/devtools@0.8.2
+ nuxt@3.7.0
785 packages installed [2.67s]


@@ -0,0 +1,87 @@
---
title: Bun Redis with Upstash
sidebarTitle: Upstash with Bun
mode: center
---
[Upstash](https://upstash.com/) is a fully managed Redis database as a service. Upstash works with the Redis® API, which means you can use Bun's native Redis client to connect to your Upstash database.
<Note>TLS is enabled by default for all Upstash Redis databases.</Note>
---
<Steps>
<Step title="Create a new project">
Create a new project by running `bun init`:
```sh terminal icon="terminal"
bun init bun-upstash-redis
cd bun-upstash-redis
```
</Step>
<Step title="Create an Upstash Redis database">
Go to the [Upstash dashboard](https://console.upstash.com/) and create a new Redis database. After completing the [getting started guide](https://upstash.com/docs/redis/overall/getstarted), you'll see your database page with connection information.
The database page displays two connection methods: HTTP and TLS. For Bun's Redis client, you need the **TLS** connection details; this URL starts with `rediss://`.
<Frame>
![Upstash Redis database page](/images/guides/upstash-1.png)
</Frame>
</Step>
<Step title="Connect using Bun's Redis client">
You can connect to Upstash by setting environment variables with Bun's default `redis` client.
Set the `REDIS_URL` environment variable in your `.env` file using the Redis endpoint (not the REST URL):
```env .env icon="settings"
REDIS_URL=rediss://********@********.upstash.io:6379
```
Bun's Redis client reads connection information from `REDIS_URL` by default:
```ts index.ts icon="/icons/typescript.svg"
import { redis } from "bun";
// Reads from process.env.REDIS_URL automatically
await redis.set("counter", "0"); // [!code ++]
```
Alternatively, you can create a custom client using `RedisClient`:
```ts index.ts icon="/icons/typescript.svg"
import { RedisClient } from "bun";
const redis = new RedisClient(process.env.REDIS_URL); // [!code ++]
```
</Step>
<Step title="Use the Redis client">
You can now use the Redis client to interact with your Upstash Redis database:
```ts index.ts icon="/icons/typescript.svg"
import { redis } from "bun";
// Get a value
let counter = await redis.get("counter");
// Set a value if it doesn't exist
if (!counter) {
await redis.set("counter", "0");
}
// Increment the counter
await redis.incr("counter");
// Get the updated value
counter = await redis.get("counter");
console.log(counter);
```
```txt
1
```
The Redis client automatically handles connections in the background. No need to manually connect or disconnect for basic operations.
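If you do want explicit control over the connection lifecycle, a minimal sketch (assuming `RedisClient`'s `connect()` and `close()` methods):

```ts
import { RedisClient } from "bun";

const client = new RedisClient(process.env.REDIS_URL);
await client.connect(); // optional: the client otherwise connects lazily
await client.set("greeting", "hello");
console.log(await client.get("greeting")); // "hello"
client.close(); // release the connection when you're done
```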
</Step>
</Steps>


@@ -17,7 +17,7 @@ This will add the package to `peerDependencies` in `package.json`.
```json package.json icon="file-json"
{
"peerDependencies": {
"@types/bun": "^1.3.1" // [!code ++]
"@types/bun": "^1.3.2" // [!code ++]
}
}
```
@@ -29,7 +29,7 @@ Running `bun install` will install peer dependencies by default, unless marked o
```json package.json icon="file-json"
{
"peerDependencies": {
"@types/bun": "^1.3.1"
"@types/bun": "^1.3.2"
},
"peerDependenciesMeta": {
"@types/bun": {


@@ -99,7 +99,7 @@ bun update
bun update @types/bun --latest
# Update a dependency to a specific version
bun update @types/bun@1.3.1
bun update @types/bun@1.3.2
# Update all dependencies to the latest versions
bun update --latest


@@ -64,7 +64,7 @@ Later, when this test file is executed again, Bun will read the snapshot file an
```sh terminal icon="terminal"
bun test
bun test v1.3.1 (9c68abdb)
bun test v1.3.2 (9c68abdb)
```
```txt
@@ -83,7 +83,7 @@ To update snapshots, use the `--update-snapshots` flag.
```sh terminal icon="terminal"
bun test --update-snapshots
bun test v1.3.1 (9c68abdb)
bun test v1.3.2 (9c68abdb)
```
```txt


@@ -7,7 +7,7 @@ mode: center
Get the current version of Bun in a semver format.
```ts index.ts icon="/icons/typescript.svg"
Bun.version; // => "1.3.1"
Bun.version; // => "1.3.2"
```
---

[14 binary image files added (guide screenshots under /images/guides/, 195 KiB to 1.4 MiB each); previews not shown]


@@ -38,7 +38,7 @@ Bun ships as a single, dependency-free executable. You can install it via script
</Warning>
For support and discussion, please join the **#windows** channel on our [Discord](https://discord.gg/bun).
For support and discussion, please join the **#windows** channel on our [Discord](https://bun.com/discord).
</Tab>
@@ -209,7 +209,7 @@ Since Bun is a single binary, you can install older versions by re-running the i
To install a specific version, pass the git tag to the install script:
```bash terminal icon="terminal"
curl -fsSL https://bun.com/install | bash -s "bun-v1.3.1"
curl -fsSL https://bun.com/install | bash -s "bun-v1.3.2"
```
</Tab>
@@ -217,7 +217,7 @@ Since Bun is a single binary, you can install older versions by re-running the i
On Windows, pass the version number to the PowerShell install script:
```powershell PowerShell icon="windows"
iex "& {$(irm https://bun.com/install.ps1)} -Version 1.3.1"
iex "& {$(irm https://bun.com/install.ps1)} -Version 1.3.2"
```
</Tab>


@@ -244,7 +244,7 @@ bun pm version
```
```txt
bun pm version v1.3.1 (ca7428e9)
bun pm version v1.3.2 (ca7428e9)
Current package version: v1.0.0
Increment:


@@ -13,7 +13,7 @@ bun publish
```
```txt
bun publish v1.3.1 (ca7428e9)
bun publish v1.3.2 (ca7428e9)
packed 203B package.json
packed 224B README.md


@@ -100,7 +100,7 @@ You can read results from the subprocess via the `stdout` and `stderr` propertie
```ts
const proc = Bun.spawn(["bun", "--version"]);
const text = await proc.stdout.text();
console.log(text); // => "1.3.1\n"
console.log(text); // => "1.3.2\n"
```
Configure the output stream by passing one of the following values to `stdout/stderr`:


@@ -146,11 +146,11 @@ await fetch("https://example.com", {
```
```txt
[fetch] $ curl --http1.1 "https://example.com/" -X POST -H "content-type: application/json" -H "Connection: keep-alive" -H "User-Agent: Bun/1.3.1" -H "Accept: */*" -H "Host: example.com" -H "Accept-Encoding: gzip, deflate, br" --compressed -H "Content-Length: 13" --data-raw "{\"foo\":\"bar\"}"
[fetch] $ curl --http1.1 "https://example.com/" -X POST -H "content-type: application/json" -H "Connection: keep-alive" -H "User-Agent: Bun/1.3.2" -H "Accept: */*" -H "Host: example.com" -H "Accept-Encoding: gzip, deflate, br" --compressed -H "Content-Length: 13" --data-raw "{\"foo\":\"bar\"}"
[fetch] > HTTP/1.1 POST https://example.com/
[fetch] > content-type: application/json
[fetch] > Connection: keep-alive
[fetch] > User-Agent: Bun/1.3.1
[fetch] > User-Agent: Bun/1.3.2
[fetch] > Accept: */*
[fetch] > Host: example.com
[fetch] > Accept-Encoding: gzip, deflate, br
@@ -190,7 +190,7 @@ await fetch("https://example.com", {
[fetch] > HTTP/1.1 POST https://example.com/
[fetch] > content-type: application/json
[fetch] > Connection: keep-alive
[fetch] > User-Agent: Bun/1.3.1
[fetch] > User-Agent: Bun/1.3.2
[fetch] > Accept: */*
[fetch] > Host: example.com
[fetch] > Accept-Encoding: gzip, deflate, br


@@ -342,7 +342,7 @@ This will print the request and response headers to your terminal:
```sh
[fetch] > HTTP/1.1 GET http://example.com/
[fetch] > Connection: keep-alive
[fetch] > User-Agent: Bun/1.3.1
[fetch] > User-Agent: Bun/1.3.2
[fetch] > Accept: */*
[fetch] > Host: example.com
[fetch] > Accept-Encoding: gzip, deflate, br, zstd


@@ -91,6 +91,7 @@ html.dark .shiki span {
footer#footer a[href*="mintlify.com"] {
display: none;
}
.nav-tabs {
width: 100% !important;
}
@@ -144,6 +145,10 @@ html.dark .code-block + .code-block[language="text"] div[data-component-part="co
border-bottom-left-radius: 0px;
}
div.callout .code-block {
margin-bottom: 0px;
}
.code-block[language="shellscript"] code span.line:not(:empty):has(span)::before {
content: "$ ";
color: #6272a4;


@@ -65,7 +65,7 @@ bun test
```
```
bun test v1.2.20
bun test v1.3.2
dom.test.ts:
✓ dom test [0.82ms]


@@ -1,7 +1,7 @@
{
"private": true,
"name": "bun",
"version": "1.3.2",
"version": "1.3.3",
"workspaces": [
"./packages/bun-types",
"./packages/@types/bun"


@@ -10,11 +10,13 @@
},
"files": [
"./*.d.ts",
"./vendor/**/*.d.ts",
"./docs/**/*.md",
"./docs/*.md",
"./CLAUDE.md",
"./README.md"
"./docs/*.md",
"./docs/*.mdx",
"./docs/**/*.md",
"./docs/**/*.mdx",
"./README.md",
"./vendor/**/*.d.ts"
],
"homepage": "https://bun.com",
"dependencies": {
@@ -28,7 +30,7 @@
},
"scripts": {
"prebuild": "echo $(pwd)",
"copy-docs": "rm -rf docs && cp -rL ../../docs/ ./docs && find ./docs -type f -name '*.md' -exec sed -i 's/\\$BUN_LATEST_VERSION/'\"${BUN_VERSION#bun-v}\"'/g' {} +",
"copy-docs": "rm -rf docs && cp -rL ../../docs/ ./docs && find ./docs -type f -name '*.{md,mdx}' -exec sed -i 's/\\$BUN_LATEST_VERSION/'\"${BUN_VERSION#bun-v}\"'/g' {} +",
"build": "bun run copy-docs && cp ../../src/init/rule.md CLAUDE.md && bun scripts/build.ts",
"test": "tsc",
"fmt": "echo $(which biome) && biome format --write ."


@@ -256,6 +256,30 @@ function Install-Tailscale {
Install-Package tailscale
}
function Create-Buildkite-Environment-Hooks {
param (
[Parameter(Mandatory = $true)]
[string]$BuildkiteHome
)
Write-Output "Creating Buildkite environment hooks..."
$hooksDir = Join-Path $BuildkiteHome "hooks"
if (-not (Test-Path $hooksDir)) {
New-Item -Path $hooksDir -ItemType Directory -Force | Out-Null
}
$environmentHook = Join-Path $hooksDir "environment.ps1"
$buildPath = Join-Path $BuildkiteHome "build"
@"
# Buildkite environment hook
`$env:BUILDKITE_BUILD_CHECKOUT_PATH = "$buildPath"
"@ | Set-Content -Path $environmentHook -Encoding UTF8
Write-Output "Environment hook created at $environmentHook"
}
function Install-Buildkite {
if (Which buildkite-agent) {
return
@@ -266,6 +290,14 @@ function Install-Buildkite {
$installScript = Download-File "https://raw.githubusercontent.com/buildkite/agent/main/install.ps1"
Execute-Script $installScript
Refresh-Path
if ($CI) {
$buildkiteHome = "C:\buildkite-agent"
if (-not (Test-Path $buildkiteHome)) {
New-Item -Path $buildkiteHome -ItemType Directory -Force | Out-Null
}
Create-Buildkite-Environment-Hooks -BuildkiteHome $buildkiteHome
}
}
function Install-Build-Essentials {


@@ -1,5 +1,5 @@
#!/bin/sh
# Version: 19
# Version: 20
# A script that installs the dependencies needed to build and test Bun.
# This should work on macOS and Linux with a POSIX shell.
@@ -1391,6 +1391,25 @@ create_buildkite_user() {
for file in $buildkite_files; do
create_file "$file"
done
local opts=$-
set -ef
# I do not want to use create_file because it creates directories with 777
# permissions and files with 664 permissions. This is dumb, for obvious
# reasons.
local hook_dir="${home}/hooks"
mkdir -p -m 755 "${hook_dir}";
cat <<EOF > "${hook_dir}/environment"
#!/bin/sh
set -efu
export BUILDKITE_BUILD_CHECKOUT_PATH=${home}/build
EOF
execute_sudo chown -R "$user:$group" "$hook_dir"
execute_sudo chmod 744 "${hook_dir}/environment"
set +ef -"$opts"
}
install_buildkite() {


@@ -103,7 +103,8 @@ async function build(args) {
await startGroup("CMake Build", () => spawn("cmake", buildArgs, { env }));
if (isCI) {
const target = buildOptions["--target"] || buildOptions["-t"];
if (isCI && target === "build-cpp") {
await startGroup("sccache stats", () => {
spawn("sccache", ["--show-stats"], { env });
});


@@ -185,32 +185,33 @@ if (options["quiet"]) {
isQuiet = true;
}
/** @type {string[]} */
let allFiles = [];
/** @type {string[]} */
let newFiles = [];
let prFileCount = 0;
if (isBuildkite) {
try {
console.log("on buildkite: collecting new files from PR");
const per_page = 50;
for (let i = 1; i <= 5; i++) {
const { BUILDKITE_PULL_REQUEST } = process.env;
for (let i = 1; i <= 10; i++) {
const res = await fetch(
`https://api.github.com/repos/oven-sh/bun/pulls/${process.env.BUILDKITE_PULL_REQUEST}/files?per_page=${per_page}&page=${i}`,
{
headers: {
Authorization: `Bearer ${getSecret("GITHUB_TOKEN")}`,
},
},
`https://api.github.com/repos/oven-sh/bun/pulls/${BUILDKITE_PULL_REQUEST}/files?per_page=${per_page}&page=${i}`,
{ headers: { Authorization: `Bearer ${getSecret("GITHUB_TOKEN")}` } },
);
const doc = await res.json();
console.log(`-> page ${i}, found ${doc.length} items`);
if (doc.length === 0) break;
for (const { filename, status } of doc) {
prFileCount += 1;
allFiles.push(filename);
if (status !== "added") continue;
newFiles.push(filename);
}
if (doc.length < per_page) break;
}
console.log(`- PR ${process.env.BUILDKITE_PULL_REQUEST}, ${prFileCount} files, ${newFiles.length} new files`);
console.log(`- PR ${BUILDKITE_PULL_REQUEST}, ${prFileCount} files, ${newFiles.length} new files`);
} catch (e) {
console.error(e);
}
@@ -1890,6 +1891,27 @@ function getRelevantTests(cwd, testModifiers, testExpectations) {
filteredTests.push(...availableTests);
}
// Prioritize modified test files
if (allFiles.length > 0) {
const modifiedTests = new Set(
allFiles
.filter(filename => filename.startsWith("test/") && isTest(filename))
.map(filename => filename.slice("test/".length)),
);
if (modifiedTests.size > 0) {
return filteredTests
.map(testPath => testPath.replaceAll("\\", "/"))
.sort((a, b) => {
const aModified = modifiedTests.has(a);
const bModified = modifiedTests.has(b);
if (aModified && !bModified) return -1;
if (!aModified && bModified) return 1;
return 0;
});
}
}
return filteredTests;
}
@@ -2615,8 +2637,18 @@ export async function main() {
]);
}
const results = await runTests();
const ok = results.every(({ ok }) => ok);
let doRunTests = true;
if (isCI) {
if (allFiles.every(filename => filename.startsWith("docs/"))) {
doRunTests = false;
}
}
let ok = true;
if (doRunTests) {
const results = await runTests();
ok = results.every(({ ok }) => ok);
}
let waitForUser = false;
while (isCI) {


@@ -322,7 +322,7 @@ pub fn buildWithVm(ctx: bun.cli.Command.Context, cwd: []const u8, vm: *VirtualMa
allocator,
.{ .js = vm.event_loop },
);
const bundled_outputs = bundled_outputs_list.items();
const bundled_outputs = bundled_outputs_list.items;
if (bundled_outputs.len == 0) {
Output.prettyln("done", .{});
Output.flush();


@@ -466,6 +466,25 @@ pub fn spawnMaybeSync(
!jsc_vm.isInspectorEnabled() and
!bun.feature_flag.BUN_FEATURE_FLAG_DISABLE_SPAWNSYNC_FAST_PATH.get();
// For spawnSync, use an isolated event loop to prevent JavaScript timers from firing
// and to avoid interfering with the main event loop
const event_loop: *jsc.EventLoop = if (comptime is_sync)
&jsc_vm.rareData().spawnSyncEventLoop(jsc_vm).event_loop
else
jsc_vm.eventLoop();
if (comptime is_sync) {
jsc_vm.rareData().spawnSyncEventLoop(jsc_vm).prepare(jsc_vm);
}
defer {
if (comptime is_sync) {
jsc_vm.rareData().spawnSyncEventLoop(jsc_vm).cleanup(jsc_vm, jsc_vm.eventLoop());
}
}
const loop_handle = jsc.EventLoopHandle.init(event_loop);
const spawn_options = bun.spawn.SpawnOptions{
.cwd = cwd,
.detached = detached,
@@ -488,7 +507,7 @@ pub fn spawnMaybeSync(
.windows = if (Environment.isWindows) .{
.hide_window = windows_hide,
.verbatim_arguments = windows_verbatim_arguments,
.loop = jsc.EventLoopHandle.init(jsc_vm),
.loop = loop_handle,
},
};
@@ -534,9 +553,8 @@ pub fn spawnMaybeSync(
.result => |result| result,
};
const loop = jsc_vm.eventLoop();
const process = spawned.toProcess(loop, is_sync);
// Use the isolated loop for spawnSync operations
const process = spawned.toProcess(loop_handle, is_sync);
var subprocess = bun.new(Subprocess, .{
.ref_count = .init(),
@@ -571,7 +589,7 @@ pub fn spawnMaybeSync(
.pid_rusage = null,
.stdin = Writable.init(
&stdio[0],
loop,
event_loop,
subprocess,
spawned.stdin,
&promise_for_stream,
@@ -581,7 +599,7 @@ pub fn spawnMaybeSync(
},
.stdout = Readable.init(
stdio[1],
loop,
event_loop,
subprocess,
spawned.stdout,
jsc_vm.allocator,
@@ -590,7 +608,7 @@ pub fn spawnMaybeSync(
),
.stderr = Readable.init(
stdio[2],
loop,
event_loop,
subprocess,
spawned.stderr,
jsc_vm.allocator,
@@ -688,14 +706,15 @@ pub fn spawnMaybeSync(
var send_exit_notification = false;
// This must go before other things happen so that the exit handler is registered before onProcessExit can potentially be called.
if (timeout) |timeout_val| {
subprocess.event_loop_timer.next = bun.timespec.msFromNow(timeout_val);
globalThis.bunVM().timer.insert(&subprocess.event_loop_timer);
subprocess.setEventLoopTimerRefd(true);
}
if (comptime !is_sync) {
// This must go before other things happen so that the exit handler is
// registered before onProcessExit can potentially be called.
if (timeout) |timeout_val| {
subprocess.event_loop_timer.next = bun.timespec.msFromNow(timeout_val);
globalThis.bunVM().timer.insert(&subprocess.event_loop_timer);
subprocess.setEventLoopTimerRefd(true);
}
bun.debugAssert(out != .zero);
if (on_exit_callback.isCell()) {
@@ -743,7 +762,7 @@ pub fn spawnMaybeSync(
}
if (subprocess.stdout == .pipe) {
if (subprocess.stdout.pipe.start(subprocess, loop).asErr()) |err| {
if (subprocess.stdout.pipe.start(subprocess, event_loop).asErr()) |err| {
_ = subprocess.tryKill(subprocess.killSignal);
_ = globalThis.throwValue(err.toJS(globalThis)) catch {};
return error.JSError;
@@ -754,7 +773,7 @@ pub fn spawnMaybeSync(
}
if (subprocess.stderr == .pipe) {
if (subprocess.stderr.pipe.start(subprocess, loop).asErr()) |err| {
if (subprocess.stderr.pipe.start(subprocess, event_loop).asErr()) |err| {
_ = subprocess.tryKill(subprocess.killSignal);
_ = globalThis.throwValue(err.toJS(globalThis)) catch {};
return error.JSError;
@@ -767,15 +786,16 @@ pub fn spawnMaybeSync(
should_close_memfd = false;
// Once everything is set up, we can add the abort listener
// Adding the abort listener may call the onAbortSignal callback immediately if it was already aborted
// Therefore, we must do this at the very end.
if (abort_signal) |signal| {
signal.pendingActivityRef();
subprocess.abort_signal = signal.addListener(subprocess, Subprocess.onAbortSignal);
abort_signal = null;
}
if (comptime !is_sync) {
// Once everything is set up, we can add the abort listener
// Adding the abort listener may call the onAbortSignal callback immediately if it was already aborted
// Therefore, we must do this at the very end.
if (abort_signal) |signal| {
signal.pendingActivityRef();
subprocess.abort_signal = signal.addListener(subprocess, Subprocess.onAbortSignal);
abort_signal = null;
}
if (!subprocess.process.hasExited()) {
jsc_vm.onSubprocessSpawn(subprocess.process);
}
@@ -813,14 +833,50 @@ pub fn spawnMaybeSync(
jsc_vm.onSubprocessSpawn(subprocess.process);
}
// We cannot release heap access while JS is running
var did_timeout = false;
// Use the isolated event loop to tick instead of the main event loop
// This ensures JavaScript timers don't fire and stdin/stdout from the main process aren't affected
{
const old_vm = jsc_vm.uwsLoop().internal_loop_data.jsc_vm;
jsc_vm.uwsLoop().internal_loop_data.jsc_vm = null;
defer {
jsc_vm.uwsLoop().internal_loop_data.jsc_vm = old_vm;
var absolute_timespec = bun.timespec.epoch;
var now = bun.timespec.now();
var user_timespec: bun.timespec = if (timeout) |timeout_ms| now.addMs(timeout_ms) else absolute_timespec;
// Support `AbortSignal.timeout`, but it's best-effort.
// Specifying both `timeout: number` and `AbortSignal.timeout` chooses the soonest one.
// This does mean if an AbortSignal times out it will throw
if (subprocess.abort_signal) |signal| {
if (signal.getTimeout()) |abort_signal_timeout| {
if (abort_signal_timeout.event_loop_timer.state == .ACTIVE) {
if (user_timespec.eql(&.epoch) or abort_signal_timeout.event_loop_timer.next.order(&user_timespec) == .lt) {
user_timespec = abort_signal_timeout.event_loop_timer.next;
}
}
}
}
const has_user_timespec = !user_timespec.eql(&.epoch);
const sync_loop = jsc_vm.rareData().spawnSyncEventLoop(jsc_vm);
while (subprocess.computeHasPendingActivity()) {
// Re-evaluate this at each iteration of the loop since it may change between iterations.
const bun_test_timeout: bun.timespec = if (bun.jsc.Jest.Jest.runner) |runner| runner.getActiveTimeout() else .epoch;
const has_bun_test_timeout = !bun_test_timeout.eql(&.epoch);
if (has_bun_test_timeout) {
switch (bun_test_timeout.orderIgnoreEpoch(user_timespec)) {
.lt => absolute_timespec = bun_test_timeout,
.eq => {},
.gt => absolute_timespec = user_timespec,
}
} else if (has_user_timespec) {
absolute_timespec = user_timespec;
} else {
absolute_timespec = .epoch;
}
const has_timespec = !absolute_timespec.eql(&.epoch);
if (subprocess.stdin == .buffer) {
subprocess.stdin.buffer.watch();
}
@@ -833,10 +889,52 @@ pub fn spawnMaybeSync(
subprocess.stdout.pipe.watch();
}
jsc_vm.tick();
jsc_vm.eventLoop().autoTick();
// Tick the isolated event loop without passing timeout to avoid blocking
// The timeout check is done at the top of the loop
switch (sync_loop.tickWithTimeout(if (has_timespec and !did_timeout) &absolute_timespec else null)) {
.completed => {
now = bun.timespec.now();
},
.timeout => {
now = bun.timespec.now();
const did_user_timeout = has_user_timespec and (absolute_timespec.eql(&user_timespec) or user_timespec.order(&now) == .lt);
if (did_user_timeout) {
did_timeout = true;
_ = subprocess.tryKill(subprocess.killSignal);
}
// Support bun:test timeouts AND spawnSync() timeout.
// There is a scenario where inside of spawnSync() a totally
// different test fails, and that SHOULD be okay.
if (has_bun_test_timeout) {
if (bun_test_timeout.order(&now) == .lt) {
var active_file_strong = bun.jsc.Jest.Jest.runner.?.bun_test_root.active_file
// TODO: add a .cloneNonOptional()?
.clone();
defer active_file_strong.deinit();
var taken_active_file = active_file_strong.take().?;
defer taken_active_file.deinit();
bun.jsc.Jest.Jest.runner.?.removeActiveTimeout(jsc_vm);
// This might internally call `std.c.kill` on this
// spawnSync process. Even if we do that, we still
// need to reap the process. So we may go through
// the event loop again, but it should wake up
// ~instantly so we can drain the events.
jsc.Jest.bun_test.BunTest.bunTestTimeoutCallback(taken_active_file, &absolute_timespec, jsc_vm);
}
}
},
}
}
}
if (globalThis.hasException()) {
// e.g. a termination exception.
return .zero;
}
subprocess.updateHasPendingActivity();
@@ -845,16 +943,11 @@ pub fn spawnMaybeSync(
const stdout = try subprocess.stdout.toBufferedValue(globalThis);
const stderr = try subprocess.stderr.toBufferedValue(globalThis);
const resource_usage: JSValue = if (!globalThis.hasException()) try subprocess.createResourceUsageObject(globalThis) else .zero;
const exitedDueToTimeout = subprocess.event_loop_timer.state == .FIRED;
const exitedDueToTimeout = did_timeout;
const exitedDueToMaxBuffer = subprocess.exited_due_to_maxbuf;
const resultPid = jsc.JSValue.jsNumberFromInt32(subprocess.pid());
subprocess.finalize();
if (globalThis.hasException()) {
// e.g. a termination exception.
return .zero;
}
const sync_value = jsc.JSValue.createEmptyObject(globalThis, 5 + @as(usize, @intFromBool(!signalCode.isEmptyOrUndefinedOrNull())));
sync_value.put(globalThis, jsc.ZigString.static("exitCode"), exitCode);
if (!signalCode.isEmptyOrUndefinedOrNull()) {


@@ -110,8 +110,12 @@ pub fn NewStaticPipeWriter(comptime ProcessType: type) type {
return @sizeOf(@This()) + this.source.memoryCost() + this.writer.memoryCost();
}
pub fn loop(this: *This) *uws.Loop {
return this.event_loop.loop();
pub fn loop(this: *This) *bun.Async.Loop {
if (comptime bun.Environment.isWindows) {
return this.event_loop.loop().uv_loop;
} else {
return this.event_loop.loop();
}
}
pub fn watch(this: *This) void {
@@ -132,7 +136,6 @@ const bun = @import("bun");
const Environment = bun.Environment;
const Output = bun.Output;
const jsc = bun.jsc;
const uws = bun.uws;
const Subprocess = jsc.API.Subprocess;
const Source = Subprocess.Source;


@@ -189,8 +189,12 @@ pub fn eventLoop(this: *PipeReader) *jsc.EventLoop {
return this.event_loop;
}
pub fn loop(this: *PipeReader) *uws.Loop {
return this.event_loop.virtual_machine.uwsLoop();
pub fn loop(this: *PipeReader) *bun.Async.Loop {
if (comptime bun.Environment.isWindows) {
return this.event_loop.virtual_machine.uwsLoop().uv_loop;
} else {
return this.event_loop.virtual_machine.uwsLoop();
}
}
fn deinit(this: *PipeReader) void {
@@ -213,7 +217,6 @@ fn deinit(this: *PipeReader) void {
const bun = @import("bun");
const Environment = bun.Environment;
const default_allocator = bun.default_allocator;
const uws = bun.uws;
const jsc = bun.jsc;
const JSGlobalObject = jsc.JSGlobalObject;


@@ -472,7 +472,11 @@ const StreamTransfer = struct {
}
pub fn loop(this: *StreamTransfer) *Async.Loop {
return this.eventLoop().loop();
if (comptime bun.Environment.isWindows) {
return this.eventLoop().loop().uv_loop;
} else {
return this.eventLoop().loop();
}
}
fn onWritable(this: *StreamTransfer, _: u64, _: AnyResponse) bool {


@@ -8,7 +8,7 @@ pub const AbortSignal = opaque {
extern fn WebCore__AbortSignal__ref(arg0: *AbortSignal) *AbortSignal;
extern fn WebCore__AbortSignal__toJS(arg0: *AbortSignal, arg1: *JSGlobalObject) JSValue;
extern fn WebCore__AbortSignal__unref(arg0: *AbortSignal) void;
extern fn WebCore__AbortSignal__getTimeout(arg0: *AbortSignal) ?*Timeout;
pub fn listen(
this: *AbortSignal,
comptime Context: type,
@@ -138,6 +138,19 @@ pub const AbortSignal = opaque {
return WebCore__AbortSignal__new(global);
}
/// Returns a borrowed handle to the internal Timeout, or null.
///
/// Lifetime: owned by AbortSignal; may become invalid if the timer fires/cancels.
///
/// Thread-safety: not thread-safe; call only on the owning thread/loop.
///
/// Usage: if you need to operate on the Timeout (run/cancel/deinit), hold a ref
/// to `this` for the duration (e.g., `this.ref(); defer this.unref();`) and avoid
/// caching the pointer across turns.
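/// Example (hypothetical sketch; `cancelIfNeeded` stands in for whatever
/// operation the caller performs on the Timeout):
///
///     _ = this.ref();
///     defer this.unref();
///     if (this.getTimeout()) |timeout| cancelIfNeeded(timeout);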
pub fn getTimeout(this: *AbortSignal) ?*Timeout {
return WebCore__AbortSignal__getTimeout(this);
}
pub const Timeout = struct {
event_loop_timer: jsc.API.Timer.EventLoopTimer,

View File

@@ -303,9 +303,70 @@ public:
return true;
}
// For any other frame without parentheses, terminate parsing as before
offset = stack.length();
return false;
// Frames without function names (e.g., top-level code) don't have parentheses
// Format: "/path/to/file.ts:line:column" or "/path/to/file.ts:line"
// Parse these directly as anonymous frames
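// Illustrative examples (hypothetical inputs) of how this fallback parses
// such frames:
//   "/app/index.ts:10:5"   -> sourceURL="/app/index.ts", line=10, column=5
//   "/app/index.ts:10"     -> sourceURL="/app/index.ts", line=10
//   "C:\app\index.ts:10:5" -> sourceURL="C:\app\index.ts", line=10, column=5
//     (the last-two-colons scan below keeps drive-letter colons in the path)
//   "native"               -> sourceURL="native" (no colons at all)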
auto marker1 = 0u;
auto marker2 = line.find(':', marker1);
if (marker2 == WTF::notFound) {
// No colons found, treat entire line as source URL
frame.sourceURL = line;
frame.functionName = StringView();
return true;
}
auto marker3 = line.find(':', marker2 + 1);
if (marker3 == WTF::notFound) {
marker3 = line.length();
auto segment1 = StringView_slice(line, marker1, marker2);
auto segment2 = StringView_slice(line, marker2 + 1, marker3);
if (auto int1 = WTF::parseIntegerAllowingTrailingJunk<unsigned int>(segment2)) {
frame.sourceURL = segment1;
frame.lineNumber = WTF::OrdinalNumber::fromOneBasedInt(int1.value());
} else {
frame.sourceURL = StringView_slice(line, marker1, marker3);
}
frame.functionName = StringView();
return true;
}
// Find the last two colons to extract line:column
while (true) {
auto newcolon = line.find(':', marker3 + 1);
if (newcolon == WTF::notFound)
break;
marker2 = marker3;
marker3 = newcolon;
}
auto marker4 = line.length();
auto segment1 = StringView_slice(line, marker1, marker2);
auto segment2 = StringView_slice(line, marker2 + 1, marker3);
auto segment3 = StringView_slice(line, marker3 + 1, marker4);
if (auto int1 = WTF::parseIntegerAllowingTrailingJunk<unsigned int>(segment2)) {
if (auto int2 = WTF::parseIntegerAllowingTrailingJunk<unsigned int>(segment3)) {
frame.sourceURL = segment1;
frame.lineNumber = WTF::OrdinalNumber::fromOneBasedInt(int1.value());
frame.columnNumber = WTF::OrdinalNumber::fromOneBasedInt(int2.value());
} else {
frame.sourceURL = segment1;
frame.lineNumber = WTF::OrdinalNumber::fromOneBasedInt(int1.value());
}
} else {
if (auto int2 = WTF::parseIntegerAllowingTrailingJunk<unsigned int>(segment3)) {
frame.sourceURL = StringView_slice(line, marker1, marker3);
frame.lineNumber = WTF::OrdinalNumber::fromOneBasedInt(int2.value());
} else {
frame.sourceURL = StringView_slice(line, marker1, marker4);
}
}
frame.functionName = StringView();
return true;
}
auto lineInner = StringView_slice(line, openingParentheses + 1, closingParentheses);

View File

@@ -5474,6 +5474,15 @@ extern "C" JSC::EncodedJSValue WebCore__AbortSignal__abortReason(WebCore::AbortS
return JSC::JSValue::encode(abortSignal->reason().getValue(jsNull()));
}
extern "C" WebCore::AbortSignalTimeout WebCore__AbortSignal__getTimeout(WebCore::AbortSignal* arg0)
{
WebCore::AbortSignal* abortSignal = reinterpret_cast<WebCore::AbortSignal*>(arg0);
if (!abortSignal->hasActiveTimeoutTimer()) {
return nullptr;
}
return abortSignal->getTimeout();
}
extern "C" WebCore::AbortSignal* WebCore__AbortSignal__ref(WebCore::AbortSignal* abortSignal)
{
abortSignal->ref();

File diff suppressed because it is too large

View File

@@ -147,9 +147,12 @@ extern "C" {
** [sqlite3_libversion_number()], [sqlite3_sourceid()],
** [sqlite_version()] and [sqlite_source_id()].
*/
#define SQLITE_VERSION "3.50.4"
#define SQLITE_VERSION_NUMBER 3050004
#define SQLITE_SOURCE_ID "2025-07-30 19:33:53 4d8adfb30e03f9cf27f800a2c1ba3c48fb4ca1b08b0f5ed59a4d5ecbf45e20a3"
#define SQLITE_VERSION "3.51.0"
#define SQLITE_VERSION_NUMBER 3051000
#define SQLITE_SOURCE_ID "2025-11-04 19:38:17 fb2c931ae597f8d00a37574ff67aeed3eced4e5547f9120744a-experimental"
#define SQLITE_SCM_BRANCH "unknown"
#define SQLITE_SCM_TAGS "unknown"
#define SQLITE_SCM_DATETIME "2025-11-04T19:38:17.314Z"
/*
** CAPI3REF: Run-Time Library Version Numbers
@@ -169,9 +172,9 @@ extern "C" {
** assert( strcmp(sqlite3_libversion(),SQLITE_VERSION)==0 );
** </pre></blockquote>)^
**
** ^The sqlite3_version[] string constant contains the text of [SQLITE_VERSION]
** macro. ^The sqlite3_libversion() function returns a pointer to the
** to the sqlite3_version[] string constant. The sqlite3_libversion()
** ^The sqlite3_version[] string constant contains the text of the
** [SQLITE_VERSION] macro. ^The sqlite3_libversion() function returns a
** pointer to the sqlite3_version[] string constant. The sqlite3_libversion()
** function is provided for use in DLLs since DLL users usually do not have
** direct access to string constants within the DLL. ^The
** sqlite3_libversion_number() function returns an integer equal to
@@ -371,7 +374,7 @@ typedef int (*sqlite3_callback)(void*,int,char**, char**);
** without having to use a lot of C code.
**
** ^The sqlite3_exec() interface runs zero or more UTF-8 encoded,
** semicolon-separate SQL statements passed into its 2nd argument,
** semicolon-separated SQL statements passed into its 2nd argument,
** in the context of the [database connection] passed in as its 1st
** argument. ^If the callback function of the 3rd argument to
** sqlite3_exec() is not NULL, then it is invoked for each result row
@@ -404,7 +407,7 @@ typedef int (*sqlite3_callback)(void*,int,char**, char**);
** result row is NULL then the corresponding string pointer for the
** sqlite3_exec() callback is a NULL pointer. ^The 4th argument to the
** sqlite3_exec() callback is an array of pointers to strings where each
** entry represents the name of corresponding result column as obtained
** entry represents the name of a corresponding result column as obtained
** from [sqlite3_column_name()].
**
** ^If the 2nd parameter to sqlite3_exec() is a NULL pointer, a pointer
@@ -498,6 +501,9 @@ SQLITE_API int sqlite3_exec(
#define SQLITE_ERROR_MISSING_COLLSEQ (SQLITE_ERROR | (1<<8))
#define SQLITE_ERROR_RETRY (SQLITE_ERROR | (2<<8))
#define SQLITE_ERROR_SNAPSHOT (SQLITE_ERROR | (3<<8))
#define SQLITE_ERROR_RESERVESIZE (SQLITE_ERROR | (4<<8))
#define SQLITE_ERROR_KEY (SQLITE_ERROR | (5<<8))
#define SQLITE_ERROR_UNABLE (SQLITE_ERROR | (6<<8))
#define SQLITE_IOERR_READ (SQLITE_IOERR | (1<<8))
#define SQLITE_IOERR_SHORT_READ (SQLITE_IOERR | (2<<8))
#define SQLITE_IOERR_WRITE (SQLITE_IOERR | (3<<8))
@@ -532,6 +538,8 @@ SQLITE_API int sqlite3_exec(
#define SQLITE_IOERR_DATA (SQLITE_IOERR | (32<<8))
#define SQLITE_IOERR_CORRUPTFS (SQLITE_IOERR | (33<<8))
#define SQLITE_IOERR_IN_PAGE (SQLITE_IOERR | (34<<8))
#define SQLITE_IOERR_BADKEY (SQLITE_IOERR | (35<<8))
#define SQLITE_IOERR_CODEC (SQLITE_IOERR | (36<<8))
#define SQLITE_LOCKED_SHAREDCACHE (SQLITE_LOCKED | (1<<8))
#define SQLITE_LOCKED_VTAB (SQLITE_LOCKED | (2<<8))
#define SQLITE_BUSY_RECOVERY (SQLITE_BUSY | (1<<8))
@@ -590,7 +598,7 @@ SQLITE_API int sqlite3_exec(
** Note in particular that passing the SQLITE_OPEN_EXCLUSIVE flag into
** [sqlite3_open_v2()] does *not* cause the underlying database file
** to be opened using O_EXCL. Passing SQLITE_OPEN_EXCLUSIVE into
** [sqlite3_open_v2()] has historically be a no-op and might become an
** [sqlite3_open_v2()] has historically been a no-op and might become an
** error in future versions of SQLite.
*/
#define SQLITE_OPEN_READONLY 0x00000001 /* Ok for sqlite3_open_v2() */
@@ -684,7 +692,7 @@ SQLITE_API int sqlite3_exec(
** SQLite uses one of these integer values as the second
** argument to calls it makes to the xLock() and xUnlock() methods
** of an [sqlite3_io_methods] object. These values are ordered from
** lest restrictive to most restrictive.
** least restrictive to most restrictive.
**
** The argument to xLock() is always SHARED or higher. The argument to
** xUnlock is either SHARED or NONE.
@@ -925,7 +933,7 @@ struct sqlite3_io_methods {
** connection. See also [SQLITE_FCNTL_FILE_POINTER].
**
** <li>[[SQLITE_FCNTL_SYNC_OMITTED]]
** No longer in use.
** The SQLITE_FCNTL_SYNC_OMITTED file-control is no longer used.
**
** <li>[[SQLITE_FCNTL_SYNC]]
** The [SQLITE_FCNTL_SYNC] opcode is generated internally by SQLite and
@@ -1000,7 +1008,7 @@ struct sqlite3_io_methods {
**
** <li>[[SQLITE_FCNTL_VFSNAME]]
** ^The [SQLITE_FCNTL_VFSNAME] opcode can be used to obtain the names of
** all [VFSes] in the VFS stack. The names are of all VFS shims and the
** all [VFSes] in the VFS stack. The names of all VFS shims and the
** final bottom-level VFS are written into memory obtained from
** [sqlite3_malloc()] and the result is stored in the char* variable
** that the fourth parameter of [sqlite3_file_control()] points to.
@@ -1014,7 +1022,7 @@ struct sqlite3_io_methods {
** ^The [SQLITE_FCNTL_VFS_POINTER] opcode finds a pointer to the top-level
** [VFSes] currently in use. ^(The argument X in
** sqlite3_file_control(db,SQLITE_FCNTL_VFS_POINTER,X) must be
** of type "[sqlite3_vfs] **". This opcodes will set *X
** of type "[sqlite3_vfs] **". This opcode will set *X
** to a pointer to the top-level VFS.)^
** ^When there are multiple VFS shims in the stack, this opcode finds the
** upper-most shim only.
@@ -1204,7 +1212,7 @@ struct sqlite3_io_methods {
** <li>[[SQLITE_FCNTL_EXTERNAL_READER]]
** The EXPERIMENTAL [SQLITE_FCNTL_EXTERNAL_READER] opcode is used to detect
** whether or not there is a database client in another process with a wal-mode
** transaction open on the database or not. It is only available on unix.The
** transaction open on the database or not. It is only available on unix. The
** (void*) argument passed with this file-control should be a pointer to a
** value of type (int). The integer value is set to 1 if the database is a wal
** mode database and there exists at least one client in another process that
@@ -1222,6 +1230,15 @@ struct sqlite3_io_methods {
** database is not a temp db, then the [SQLITE_FCNTL_RESET_CACHE] file-control
** purges the contents of the in-memory page cache. If there is an open
** transaction, or if the db is a temp-db, this opcode is a no-op, not an error.
**
** <li>[[SQLITE_FCNTL_FILESTAT]]
** The [SQLITE_FCNTL_FILESTAT] opcode returns low-level diagnostic information
** about the [sqlite3_file] objects used to access the database and journal files
** for the given schema. The fourth parameter to [sqlite3_file_control()]
** should be an initialized [sqlite3_str] pointer. JSON text describing
** various aspects of the sqlite3_file object is appended to the sqlite3_str.
** The SQLITE_FCNTL_FILESTAT opcode is usually a no-op, unless compile-time
** options are used to enable it.
** </ul>
*/
#define SQLITE_FCNTL_LOCKSTATE 1
@@ -1267,6 +1284,7 @@ struct sqlite3_io_methods {
#define SQLITE_FCNTL_RESET_CACHE 42
#define SQLITE_FCNTL_NULL_IO 43
#define SQLITE_FCNTL_BLOCK_ON_CONNECT 44
#define SQLITE_FCNTL_FILESTAT 45
/* deprecated names */
#define SQLITE_GET_LOCKPROXYFILE SQLITE_FCNTL_GET_LOCKPROXYFILE
@@ -1629,7 +1647,7 @@ struct sqlite3_vfs {
** SQLite interfaces so that an application usually does not need to
** invoke sqlite3_initialize() directly. For example, [sqlite3_open()]
** calls sqlite3_initialize() so the SQLite library will be automatically
** initialized when [sqlite3_open()] is called if it has not be initialized
** initialized when [sqlite3_open()] is called if it has not been initialized
** already. ^However, if SQLite is compiled with the [SQLITE_OMIT_AUTOINIT]
** compile-time option, then the automatic calls to sqlite3_initialize()
** are omitted and the application must call sqlite3_initialize() directly
@@ -1886,21 +1904,21 @@ struct sqlite3_mem_methods {
** The [sqlite3_mem_methods]
** structure is filled with the currently defined memory allocation routines.)^
** This option can be used to overload the default memory allocation
** routines with a wrapper that simulations memory allocation failure or
** routines with a wrapper that simulates memory allocation failure or
** tracks memory usage, for example. </dd>
**
** [[SQLITE_CONFIG_SMALL_MALLOC]] <dt>SQLITE_CONFIG_SMALL_MALLOC</dt>
** <dd> ^The SQLITE_CONFIG_SMALL_MALLOC option takes single argument of
** <dd> ^The SQLITE_CONFIG_SMALL_MALLOC option takes a single argument of
** type int, interpreted as a boolean, which if true provides a hint to
** SQLite that it should avoid large memory allocations if possible.
** SQLite will run faster if it is free to make large memory allocations,
** but some application might prefer to run slower in exchange for
** but some applications might prefer to run slower in exchange for
** guarantees about memory fragmentation that are possible if large
** allocations are avoided. This hint is normally off.
** </dd>
**
** [[SQLITE_CONFIG_MEMSTATUS]] <dt>SQLITE_CONFIG_MEMSTATUS</dt>
** <dd> ^The SQLITE_CONFIG_MEMSTATUS option takes single argument of type int,
** <dd> ^The SQLITE_CONFIG_MEMSTATUS option takes a single argument of type int,
** interpreted as a boolean, which enables or disables the collection of
** memory allocation statistics. ^(When memory allocation statistics are
** disabled, the following SQLite interfaces become non-operational:
@@ -1945,7 +1963,7 @@ struct sqlite3_mem_methods {
** ^If pMem is NULL and N is non-zero, then each database connection
** does an initial bulk allocation for page cache memory
** from [sqlite3_malloc()] sufficient for N cache lines if N is positive or
** of -1024*N bytes if N is negative, . ^If additional
** of -1024*N bytes if N is negative. ^If additional
** page cache memory is needed beyond what is provided by the initial
** allocation, then SQLite goes to [sqlite3_malloc()] separately for each
** additional cache line. </dd>
@@ -1974,7 +1992,7 @@ struct sqlite3_mem_methods {
** <dd> ^(The SQLITE_CONFIG_MUTEX option takes a single argument which is a
** pointer to an instance of the [sqlite3_mutex_methods] structure.
** The argument specifies alternative low-level mutex routines to be used
** in place the mutex routines built into SQLite.)^ ^SQLite makes a copy of
** in place of the mutex routines built into SQLite.)^ ^SQLite makes a copy of
** the content of the [sqlite3_mutex_methods] structure before the call to
** [sqlite3_config()] returns. ^If SQLite is compiled with
** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then
@@ -2016,7 +2034,7 @@ struct sqlite3_mem_methods {
**
** [[SQLITE_CONFIG_GETPCACHE2]] <dt>SQLITE_CONFIG_GETPCACHE2</dt>
** <dd> ^(The SQLITE_CONFIG_GETPCACHE2 option takes a single argument which
** is a pointer to an [sqlite3_pcache_methods2] object. SQLite copies of
** is a pointer to an [sqlite3_pcache_methods2] object. SQLite copies off
** the current page cache implementation into that object.)^ </dd>
**
** [[SQLITE_CONFIG_LOG]] <dt>SQLITE_CONFIG_LOG</dt>
@@ -2033,7 +2051,7 @@ struct sqlite3_mem_methods {
** the logger function is a copy of the first parameter to the corresponding
** [sqlite3_log()] call and is intended to be a [result code] or an
** [extended result code]. ^The third parameter passed to the logger is
** log message after formatting via [sqlite3_snprintf()].
** a log message after formatting via [sqlite3_snprintf()].
** The SQLite logging interface is not reentrant; the logger function
** supplied by the application must not invoke any SQLite interface.
** In a multi-threaded application, the application-defined logger
@@ -2224,7 +2242,7 @@ struct sqlite3_mem_methods {
** These constants are the available integer configuration options that
** can be passed as the second parameter to the [sqlite3_db_config()] interface.
**
** The [sqlite3_db_config()] interface is a var-args functions. It takes a
** The [sqlite3_db_config()] interface is a var-args function. It takes a
** variable number of parameters, though always at least two. The number of
** parameters passed into sqlite3_db_config() depends on which of these
** constants is given as the second parameter. This documentation page
@@ -2336,17 +2354,20 @@ struct sqlite3_mem_methods {
**
** [[SQLITE_DBCONFIG_ENABLE_FTS3_TOKENIZER]]
** <dt>SQLITE_DBCONFIG_ENABLE_FTS3_TOKENIZER</dt>
** <dd> ^This option is used to enable or disable the
** [fts3_tokenizer()] function which is part of the
** [FTS3] full-text search engine extension.
** There must be two additional arguments.
** The first argument is an integer which is 0 to disable fts3_tokenizer() or
** positive to enable fts3_tokenizer() or negative to leave the setting
** unchanged.
** The second parameter is a pointer to an integer into which
** is written 0 or 1 to indicate whether fts3_tokenizer is disabled or enabled
** following this call. The second parameter may be a NULL pointer, in
** which case the new setting is not reported back. </dd>
** <dd> ^This option is used to enable or disable using the
** [fts3_tokenizer()] function - part of the [FTS3] full-text search engine
** extension - without using bound parameters as the parameters. Doing so
** is disabled by default. There must be two additional arguments. The first
** argument is an integer. If it is passed 0, then using fts3_tokenizer()
** without bound parameters is disabled. If it is passed a positive value,
** then calling fts3_tokenizer without bound parameters is enabled. If it
** is passed a negative value, this setting is not modified - this can be
** used to query for the current setting. The second parameter is a pointer
** to an integer into which is written 0 or 1 to indicate the current value
** of this setting (after it is modified, if applicable). The second
** parameter may be a NULL pointer, in which case the value of the setting
** is not reported back. Refer to [FTS3] documentation for further details.
** </dd>
**
** [[SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION]]
** <dt>SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION</dt>
@@ -2358,8 +2379,8 @@ struct sqlite3_mem_methods {
** When the first argument to this interface is 1, then only the C-API is
** enabled and the SQL function remains disabled. If the first argument to
** this interface is 0, then both the C-API and the SQL function are disabled.
** If the first argument is -1, then no changes are made to state of either the
** C-API or the SQL function.
** If the first argument is -1, then no changes are made to the state of either
** the C-API or the SQL function.
** The second parameter is a pointer to an integer into which
** is written 0 or 1 to indicate whether [sqlite3_load_extension()] interface
** is disabled or enabled following this call. The second parameter may
@@ -2477,7 +2498,7 @@ struct sqlite3_mem_methods {
** [[SQLITE_DBCONFIG_LEGACY_ALTER_TABLE]]
** <dt>SQLITE_DBCONFIG_LEGACY_ALTER_TABLE</dt>
** <dd>The SQLITE_DBCONFIG_LEGACY_ALTER_TABLE option activates or deactivates
** the legacy behavior of the [ALTER TABLE RENAME] command such it
** the legacy behavior of the [ALTER TABLE RENAME] command such that it
** behaves as it did prior to [version 3.24.0] (2018-06-04). See the
** "Compatibility Notice" on the [ALTER TABLE RENAME documentation] for
** additional information. This feature can also be turned on and off
@@ -2526,7 +2547,7 @@ struct sqlite3_mem_methods {
** <dt>SQLITE_DBCONFIG_LEGACY_FILE_FORMAT</dt>
** <dd>The SQLITE_DBCONFIG_LEGACY_FILE_FORMAT option activates or deactivates
** the legacy file format flag. When activated, this flag causes all newly
** created database file to have a schema format version number (the 4-byte
** created database files to have a schema format version number (the 4-byte
** integer found at offset 44 into the database header) of 1. This in turn
** means that the resulting database file will be readable and writable by
** any SQLite version back to 3.0.0 ([dateof:3.0.0]). Without this setting,
@@ -2553,7 +2574,7 @@ struct sqlite3_mem_methods {
** the database handle both when the SQL statement is prepared and when it
** is stepped. The flag is set (collection of statistics is enabled)
** by default. <p>This option takes two arguments: an integer and a pointer to
** an integer.. The first argument is 1, 0, or -1 to enable, disable, or
** an integer. The first argument is 1, 0, or -1 to enable, disable, or
** leave unchanged the statement scanstatus option. If the second argument
** is not NULL, then the value of the statement scanstatus setting after
** processing the first argument is written into the integer that the second
@@ -2596,8 +2617,8 @@ struct sqlite3_mem_methods {
** <dd>The SQLITE_DBCONFIG_ENABLE_ATTACH_WRITE option enables or disables the
** ability of the [ATTACH DATABASE] SQL command to open a database for writing.
** This capability is enabled by default. Applications can disable or
** reenable this capability using the current DBCONFIG option. If the
** the this capability is disabled, the [ATTACH] command will still work,
** reenable this capability using the current DBCONFIG option. If
** this capability is disabled, the [ATTACH] command will still work,
** but the database will be opened read-only. If this option is disabled,
** then the ability to create a new database using [ATTACH] is also disabled,
** regardless of the value of the [SQLITE_DBCONFIG_ENABLE_ATTACH_CREATE]
@@ -2631,7 +2652,7 @@ struct sqlite3_mem_methods {
**
** <p>Most of the SQLITE_DBCONFIG options take two arguments, so that the
** overall call to [sqlite3_db_config()] has a total of four parameters.
** The first argument (the third parameter to sqlite3_db_config()) is a integer.
** The first argument (the third parameter to sqlite3_db_config()) is an integer.
** The second argument is a pointer to an integer. If the first argument is 1,
** then the option becomes enabled. If the first integer argument is 0, then the
** option is disabled. If the first argument is -1, then the option setting
@@ -2921,7 +2942,7 @@ SQLITE_API int sqlite3_is_interrupted(sqlite3*);
** ^These routines return 0 if the statement is incomplete. ^If a
** memory allocation fails, then SQLITE_NOMEM is returned.
**
** ^These routines do not parse the SQL statements thus
** ^These routines do not parse the SQL statements and thus
** will not detect syntactically incorrect SQL.
**
** ^(If SQLite has not been initialized using [sqlite3_initialize()] prior
@@ -3038,7 +3059,7 @@ SQLITE_API int sqlite3_busy_timeout(sqlite3*, int ms);
** indefinitely if possible. The results of passing any other negative value
** are undefined.
**
** Internally, each SQLite database handle store two timeout values - the
** Internally, each SQLite database handle stores two timeout values - the
** busy-timeout (used for rollback mode databases, or if the VFS does not
** support blocking locks) and the setlk-timeout (used for blocking locks
** on wal-mode databases). The sqlite3_busy_timeout() method sets both
@@ -3068,7 +3089,7 @@ SQLITE_API int sqlite3_setlk_timeout(sqlite3*, int ms, int flags);
** This is a legacy interface that is preserved for backwards compatibility.
** Use of this interface is not recommended.
**
** Definition: A <b>result table</b> is memory data structure created by the
** Definition: A <b>result table</b> is a memory data structure created by the
** [sqlite3_get_table()] interface. A result table records the
** complete query results from one or more queries.
**
@@ -3211,7 +3232,7 @@ SQLITE_API char *sqlite3_vsnprintf(int,char*,const char*, va_list);
** ^Calling sqlite3_free() with a pointer previously returned
** by sqlite3_malloc() or sqlite3_realloc() releases that memory so
** that it might be reused. ^The sqlite3_free() routine is
** a no-op if is called with a NULL pointer. Passing a NULL pointer
** a no-op if it is called with a NULL pointer. Passing a NULL pointer
** to sqlite3_free() is harmless. After being freed, memory
** should neither be read nor written. Even reading previously freed
** memory might result in a segmentation fault or other severe error.
@@ -3229,13 +3250,13 @@ SQLITE_API char *sqlite3_vsnprintf(int,char*,const char*, va_list);
** sqlite3_free(X).
** ^sqlite3_realloc(X,N) returns a pointer to a memory allocation
** of at least N bytes in size or NULL if insufficient memory is available.
** ^If M is the size of the prior allocation, then min(N,M) bytes
** of the prior allocation are copied into the beginning of buffer returned
** ^If M is the size of the prior allocation, then min(N,M) bytes of the
** prior allocation are copied into the beginning of the buffer returned
** by sqlite3_realloc(X,N) and the prior allocation is freed.
** ^If sqlite3_realloc(X,N) returns NULL and N is positive, then the
** prior allocation is not freed.
**
** ^The sqlite3_realloc64(X,N) interfaces works the same as
** ^The sqlite3_realloc64(X,N) interface works the same as
** sqlite3_realloc(X,N) except that N is a 64-bit unsigned integer instead
** of a 32-bit signed integer.
**
@@ -3285,7 +3306,7 @@ SQLITE_API sqlite3_uint64 sqlite3_msize(void*);
** was last reset. ^The values returned by [sqlite3_memory_used()] and
** [sqlite3_memory_highwater()] include any overhead
** added by SQLite in its implementation of [sqlite3_malloc()],
** but not overhead added by the any underlying system library
** but not overhead added by any underlying system library
** routines that [sqlite3_malloc()] may call.
**
** ^The memory high-water mark is reset to the current value of
@@ -3737,7 +3758,7 @@ SQLITE_API void sqlite3_progress_handler(sqlite3*, int, int(*)(void*), void*);
** there is no harm in trying.)
**
** ^(<dt>[SQLITE_OPEN_SHAREDCACHE]</dt>
** <dd>The database is opened [shared cache] enabled, overriding
** <dd>The database is opened with [shared cache] enabled, overriding
** the default shared cache setting provided by
** [sqlite3_enable_shared_cache()].)^
** The [use of shared cache mode is discouraged] and hence shared cache
@@ -3745,7 +3766,7 @@ SQLITE_API void sqlite3_progress_handler(sqlite3*, int, int(*)(void*), void*);
** this option is a no-op.
**
** ^(<dt>[SQLITE_OPEN_PRIVATECACHE]</dt>
** <dd>The database is opened [shared cache] disabled, overriding
** <dd>The database is opened with [shared cache] disabled, overriding
** the default shared cache setting provided by
** [sqlite3_enable_shared_cache()].)^
**
@@ -4163,7 +4184,7 @@ SQLITE_API void sqlite3_free_filename(sqlite3_filename);
** subsequent calls to other SQLite interface functions.)^
**
** ^The sqlite3_errstr(E) interface returns the English-language text
** that describes the [result code] E, as UTF-8, or NULL if E is not an
** that describes the [result code] E, as UTF-8, or NULL if E is not a
** result code for which a text error message is available.
** ^(Memory to hold the error message string is managed internally
** and must not be freed by the application)^.
@@ -4171,7 +4192,7 @@ SQLITE_API void sqlite3_free_filename(sqlite3_filename);
** ^If the most recent error references a specific token in the input
** SQL, the sqlite3_error_offset() interface returns the byte offset
** of the start of that token. ^The byte offset returned by
** sqlite3_error_offset() assumes that the input SQL is UTF8.
** sqlite3_error_offset() assumes that the input SQL is UTF-8.
** ^If the most recent error does not reference a specific token in the input
** SQL, then the sqlite3_error_offset() function returns -1.
**
@@ -4196,6 +4217,34 @@ SQLITE_API const void *sqlite3_errmsg16(sqlite3*);
SQLITE_API const char *sqlite3_errstr(int);
SQLITE_API int sqlite3_error_offset(sqlite3 *db);
/*
** CAPI3REF: Set Error Codes And Message
** METHOD: sqlite3
**
** Set the error code of the database handle passed as the first argument
** to errcode, and the error message to a copy of the nul-terminated string
** zErrMsg. If zErrMsg is passed NULL, then the error message is set to
** the default message associated with the supplied error code. Subsequent
** calls to [sqlite3_errcode()] and [sqlite3_errmsg()] and similar will
** return the values set by this routine in place of what was previously
** set by SQLite itself.
**
** This function returns SQLITE_OK if the error code and error message are
** successfully set, SQLITE_NOMEM if an OOM occurs, and SQLITE_MISUSE if
** the database handle is NULL or invalid.
**
** The error code and message set by this routine remain in effect until
** they are changed, either by another call to this routine or until they
** are changed by SQLite itself to reflect the result of some subsequent
** API call.
**
** This function is intended for use by SQLite extensions or wrappers. The
** idea is that an extension or wrapper can use this routine to set error
** messages and error codes and thus behave more like a core SQLite
** feature from the point of view of an application.
*/
SQLITE_API int sqlite3_set_errmsg(sqlite3 *db, int errcode, const char *zErrMsg);
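/*
** Example (illustrative sketch, not part of the official documentation):
** a wrapper layer reports its own failure through the database handle so
** that callers see it via the normal error APIs.
**
**   int rc = sqlite3_set_errmsg(db, SQLITE_CONSTRAINT, "row rejected by policy");
**   assert( rc==SQLITE_OK );
**   assert( sqlite3_errcode(db)==SQLITE_CONSTRAINT );
*/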
/*
** CAPI3REF: Prepared Statement Object
** KEYWORDS: {prepared statement} {prepared statements}
@@ -4270,8 +4319,8 @@ SQLITE_API int sqlite3_limit(sqlite3*, int id, int newVal);
**
** These constants define various performance limits
** that can be lowered at run-time using [sqlite3_limit()].
** The synopsis of the meanings of the various limits is shown below.
** Additional information is available at [limits | Limits in SQLite].
** A concise description of these limits follows, and additional information
** is available at [limits | Limits in SQLite].
**
** <dl>
** [[SQLITE_LIMIT_LENGTH]] ^(<dt>SQLITE_LIMIT_LENGTH</dt>
@@ -4336,7 +4385,7 @@ SQLITE_API int sqlite3_limit(sqlite3*, int id, int newVal);
/*
** CAPI3REF: Prepare Flags
**
** These constants define various flags that can be passed into
** These constants define various flags that can be passed into the
** "prepFlags" parameter of the [sqlite3_prepare_v3()] and
** [sqlite3_prepare16_v3()] interfaces.
**
@@ -4423,7 +4472,7 @@ SQLITE_API int sqlite3_limit(sqlite3*, int id, int newVal);
** there is a small performance advantage to passing an nByte parameter that
** is the number of bytes in the input string <i>including</i>
** the nul-terminator.
** Note that nByte measure the length of the input in bytes, not
** Note that nByte measures the length of the input in bytes, not
** characters, even for the UTF-16 interfaces.
**
** ^If pzTail is not NULL then *pzTail is made to point to the first byte
@@ -4557,7 +4606,7 @@ SQLITE_API int sqlite3_prepare16_v3(
**
** ^The sqlite3_expanded_sql() interface returns NULL if insufficient memory
** is available to hold the result, or if the result would exceed the
** the maximum string length determined by the [SQLITE_LIMIT_LENGTH].
** maximum string length determined by the [SQLITE_LIMIT_LENGTH].
**
** ^The [SQLITE_TRACE_SIZE_LIMIT] compile-time option limits the size of
** bound parameter expansions. ^The [SQLITE_OMIT_TRACE] compile-time
@@ -4745,7 +4794,7 @@ typedef struct sqlite3_value sqlite3_value;
**
** The context in which an SQL function executes is stored in an
** sqlite3_context object. ^A pointer to an sqlite3_context object
** is always first parameter to [application-defined SQL functions].
** is always the first parameter to [application-defined SQL functions].
** The application-defined SQL function implementation will pass this
** pointer through into calls to [sqlite3_result_int | sqlite3_result()],
** [sqlite3_aggregate_context()], [sqlite3_user_data()],
@@ -4869,9 +4918,11 @@ typedef struct sqlite3_context sqlite3_context;
** associated with the pointer P of type T. ^D is either a NULL pointer or
** a pointer to a destructor function for P. ^SQLite will invoke the
** destructor D with a single argument of P when it is finished using
** P. The T parameter should be a static string, preferably a string
** literal. The sqlite3_bind_pointer() routine is part of the
** [pointer passing interface] added for SQLite 3.20.0.
** P, even if the call to sqlite3_bind_pointer() fails. Due to a
** historical design quirk, results are undefined if D is
** SQLITE_TRANSIENT. The T parameter should be a static string,
** preferably a string literal. The sqlite3_bind_pointer() routine is
** part of the [pointer passing interface] added for SQLite 3.20.0.
**
** ^If any of the sqlite3_bind_*() routines are called with a NULL pointer
** for the [prepared statement] or with a prepared statement for which
@@ -5482,7 +5533,7 @@ SQLITE_API int sqlite3_column_type(sqlite3_stmt*, int iCol);
**
** ^The sqlite3_finalize() function is called to delete a [prepared statement].
** ^If the most recent evaluation of the statement encountered no errors
** or if the statement is never been evaluated, then sqlite3_finalize() returns
** or if the statement has never been evaluated, then sqlite3_finalize() returns
** SQLITE_OK. ^If the most recent evaluation of statement S failed, then
** sqlite3_finalize(S) returns the appropriate [error code] or
** [extended error code].
@@ -5714,7 +5765,7 @@ SQLITE_API int sqlite3_create_window_function(
/*
** CAPI3REF: Text Encodings
**
** These constant define integer codes that represent the various
** These constants define integer codes that represent the various
** text encodings supported by SQLite.
*/
#define SQLITE_UTF8 1 /* IMP: R-37514-35566 */
@@ -5806,7 +5857,7 @@ SQLITE_API int sqlite3_create_window_function(
** result.
** Every function that invokes [sqlite3_result_subtype()] should have this
** property. If it does not, then the call to [sqlite3_result_subtype()]
** might become a no-op if the function is used as term in an
** might become a no-op if the function is used as a term in an
** [expression index]. On the other hand, SQL functions that never invoke
** [sqlite3_result_subtype()] should avoid setting this property, as the
** purpose of this property is to disable certain optimizations that are
@@ -5933,7 +5984,7 @@ SQLITE_API SQLITE_DEPRECATED int sqlite3_memory_alarm(void(*)(void*,sqlite3_int6
** sqlite3_value_nochange(X) interface returns true if and only if
** the column corresponding to X is unchanged by the UPDATE operation
** that the xUpdate method call was invoked to implement and if
** and the prior [xColumn] method call that was invoked to extracted
** the prior [xColumn] method call that was invoked to extract
** the value for that column returned without setting a result (probably
** because it queried [sqlite3_vtab_nochange()] and found that the column
** was unchanging). ^Within an [xUpdate] method, any value for which
@@ -6206,6 +6257,7 @@ SQLITE_API void sqlite3_set_auxdata(sqlite3_context*, int N, void*, void (*)(voi
** or a NULL pointer if there were no prior calls to
** sqlite3_set_clientdata() with the same values of D and N.
** Names are compared using strcmp() and are thus case sensitive.
** sqlite3_set_clientdata() returns 0 on success and SQLITE_NOMEM on allocation failure.
**
** If P and X are both non-NULL, then the destructor X is invoked with
** argument P on the first of the following occurrences:
@@ -8882,9 +8934,18 @@ SQLITE_API int sqlite3_status64(
** ^The sqlite3_db_status() routine returns SQLITE_OK on success and a
** non-zero [error code] on failure.
**
** ^The sqlite3_db_status64(D,O,C,H,R) routine works exactly the same
** way as the sqlite3_db_status(D,O,C,H,R) routine except that the C and H
** parameters are pointers to 64-bit integers (type: sqlite3_int64) instead
** of pointers to 32-bit integers, which allows larger status values
** to be returned. If a status value exceeds 2,147,483,647 then
** sqlite3_db_status() will truncate the value whereas sqlite3_db_status64()
** will return the full value.
**
** See also: [sqlite3_status()] and [sqlite3_stmt_status()].
*/
SQLITE_API int sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int resetFlg);
SQLITE_API int sqlite3_db_status64(sqlite3*,int,sqlite3_int64*,sqlite3_int64*,int);
/*
** CAPI3REF: Status Parameters for database connections
@@ -8981,6 +9042,10 @@ SQLITE_API int sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int r
** If an IO or other error occurs while writing a page to disk, the effect
** on subsequent SQLITE_DBSTATUS_CACHE_WRITE requests is undefined.)^ ^The
** highwater mark associated with SQLITE_DBSTATUS_CACHE_WRITE is always 0.
** <p>
** ^(There is overlap between the quantities measured by this parameter
** (SQLITE_DBSTATUS_CACHE_WRITE) and SQLITE_DBSTATUS_TEMPBUF_SPILL.
** Resetting one will reduce the other.)^
** </dd>
**
** [[SQLITE_DBSTATUS_CACHE_SPILL]] ^(<dt>SQLITE_DBSTATUS_CACHE_SPILL</dt>
@@ -8996,6 +9061,18 @@ SQLITE_API int sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int r
** <dd>This parameter returns zero for the current value if and only if
** all foreign key constraints (deferred or immediate) have been
** resolved.)^ ^The highwater mark is always 0.
**
** [[SQLITE_DBSTATUS_TEMPBUF_SPILL]] ^(<dt>SQLITE_DBSTATUS_TEMPBUF_SPILL</dt>
** <dd>^(This parameter returns the number of bytes written to temporary
** files on disk that could have been kept in memory had sufficient memory
** been available. This value includes writes to intermediate tables that
** are part of complex queries, external sorts that spill to disk, and
** writes to TEMP tables.)^
** ^The highwater mark is always 0.
** <p>
** ^(There is overlap between the quantities measured by this parameter
** (SQLITE_DBSTATUS_TEMPBUF_SPILL) and SQLITE_DBSTATUS_CACHE_WRITE.
** Resetting one will reduce the other.)^
** </dd>
** </dl>
*/
@@ -9012,7 +9089,8 @@ SQLITE_API int sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int r
#define SQLITE_DBSTATUS_DEFERRED_FKS 10
#define SQLITE_DBSTATUS_CACHE_USED_SHARED 11
#define SQLITE_DBSTATUS_CACHE_SPILL 12
#define SQLITE_DBSTATUS_MAX 12 /* Largest defined DBSTATUS */
#define SQLITE_DBSTATUS_TEMPBUF_SPILL 13
#define SQLITE_DBSTATUS_MAX 13 /* Largest defined DBSTATUS */
/*
@@ -9777,7 +9855,7 @@ SQLITE_API void sqlite3_log(int iErrCode, const char *zFormat, ...);
** is the number of pages currently in the write-ahead log file,
** including those that were just committed.
**
** The callback function should normally return [SQLITE_OK]. ^If an error
** ^The callback function should normally return [SQLITE_OK]. ^If an error
** code is returned, that error will propagate back up through the
** SQLite code base to cause the statement that provoked the callback
** to report an error, though the commit will have still occurred. If the
@@ -9785,13 +9863,26 @@ SQLITE_API void sqlite3_log(int iErrCode, const char *zFormat, ...);
** that does not correspond to any valid SQLite error code, the results
** are undefined.
**
** A single database handle may have at most a single write-ahead log callback
** registered at one time. ^Calling [sqlite3_wal_hook()] replaces any
** previously registered write-ahead log callback. ^The return value is
** a copy of the third parameter from the previous call, if any, or 0.
** ^Note that the [sqlite3_wal_autocheckpoint()] interface and the
** [wal_autocheckpoint pragma] both invoke [sqlite3_wal_hook()] and will
** overwrite any prior [sqlite3_wal_hook()] settings.
** ^A single database handle may have at most a single write-ahead log
** callback registered at one time. ^Calling [sqlite3_wal_hook()]
** replaces the default behavior or previously registered write-ahead
** log callback.
**
** ^The return value is a copy of the third parameter from the
** previous call, if any, or 0.
**
** ^The [sqlite3_wal_autocheckpoint()] interface and the
** [wal_autocheckpoint pragma] both invoke [sqlite3_wal_hook()] and
** will overwrite any prior [sqlite3_wal_hook()] settings.
**
** ^If a write-ahead log callback is set using this function then
** [sqlite3_wal_checkpoint_v2()] or [PRAGMA wal_checkpoint]
** should be invoked periodically to keep the write-ahead log file
** from growing without bound.
**
** ^Passing a NULL pointer for the callback disables automatic
** checkpointing entirely. To re-enable the default behavior, call
** sqlite3_wal_autocheckpoint(db,1000) or use the [wal_autocheckpoint pragma].
*/
SQLITE_API void *sqlite3_wal_hook(
sqlite3*,
@@ -9808,7 +9899,7 @@ SQLITE_API void *sqlite3_wal_hook(
** to automatically [checkpoint]
** after committing a transaction if there are N or
** more frames in the [write-ahead log] file. ^Passing zero or
** a negative value as the nFrame parameter disables automatic
** a negative value as the N parameter disables automatic
** checkpoints entirely.
**
** ^The callback registered by this function replaces any existing callback
@@ -9824,9 +9915,10 @@ SQLITE_API void *sqlite3_wal_hook(
**
** ^Every new [database connection] defaults to having the auto-checkpoint
** enabled with a threshold of 1000 or [SQLITE_DEFAULT_WAL_AUTOCHECKPOINT]
** pages. The use of this interface
** is only necessary if the default setting is found to be suboptimal
** for a particular application.
** pages.
**
** ^The use of this interface is only necessary if the default setting
** is found to be suboptimal for a particular application.
*/
SQLITE_API int sqlite3_wal_autocheckpoint(sqlite3 *db, int N);
@@ -9891,6 +9983,11 @@ SQLITE_API int sqlite3_wal_checkpoint(sqlite3 *db, const char *zDb);
** ^This mode works the same way as SQLITE_CHECKPOINT_RESTART with the
** addition that it also truncates the log file to zero bytes just prior
** to a successful return.
**
** <dt>SQLITE_CHECKPOINT_NOOP<dd>
** ^This mode always checkpoints zero frames. The only reason to invoke
** a NOOP checkpoint is to access the values returned by
** sqlite3_wal_checkpoint_v2() via output parameters *pnLog and *pnCkpt.
** </dl>
**
** ^If pnLog is not NULL, then *pnLog is set to the total number of frames in
@@ -9961,6 +10058,7 @@ SQLITE_API int sqlite3_wal_checkpoint_v2(
** See the [sqlite3_wal_checkpoint_v2()] documentation for details on the
** meaning of each of these checkpoint modes.
*/
#define SQLITE_CHECKPOINT_NOOP -1 /* Do no work at all */
#define SQLITE_CHECKPOINT_PASSIVE 0 /* Do as much as possible w/o blocking */
#define SQLITE_CHECKPOINT_FULL 1 /* Wait for writers, then checkpoint */
#define SQLITE_CHECKPOINT_RESTART 2 /* Like FULL but wait for readers */
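/*
** Example (illustrative sketch): read the WAL frame counts without doing
** any checkpoint work, using SQLITE_CHECKPOINT_NOOP as described above.
**
**   int nLog = 0, nCkpt = 0;
**   sqlite3_wal_checkpoint_v2(db, "main", SQLITE_CHECKPOINT_NOOP, &nLog, &nCkpt);
*/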
@@ -10788,7 +10886,7 @@ typedef struct sqlite3_snapshot {
** The [sqlite3_snapshot_get()] interface is only available when the
** [SQLITE_ENABLE_SNAPSHOT] compile-time option is used.
*/
SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_get(
SQLITE_API int sqlite3_snapshot_get(
sqlite3 *db,
const char *zSchema,
sqlite3_snapshot **ppSnapshot
@@ -10837,7 +10935,7 @@ SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_get(
** The [sqlite3_snapshot_open()] interface is only available when the
** [SQLITE_ENABLE_SNAPSHOT] compile-time option is used.
*/
SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_open(
SQLITE_API int sqlite3_snapshot_open(
sqlite3 *db,
const char *zSchema,
sqlite3_snapshot *pSnapshot
@@ -10854,7 +10952,7 @@ SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_open(
** The [sqlite3_snapshot_free()] interface is only available when the
** [SQLITE_ENABLE_SNAPSHOT] compile-time option is used.
*/
SQLITE_API SQLITE_EXPERIMENTAL void sqlite3_snapshot_free(sqlite3_snapshot*);
SQLITE_API void sqlite3_snapshot_free(sqlite3_snapshot*);
/*
** CAPI3REF: Compare the ages of two snapshot handles.
@@ -10881,7 +10979,7 @@ SQLITE_API SQLITE_EXPERIMENTAL void sqlite3_snapshot_free(sqlite3_snapshot*);
** This interface is only available if SQLite is compiled with the
** [SQLITE_ENABLE_SNAPSHOT] option.
*/
SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_cmp(
SQLITE_API int sqlite3_snapshot_cmp(
sqlite3_snapshot *p1,
sqlite3_snapshot *p2
);
@@ -10909,7 +11007,7 @@ SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_cmp(
** This interface is only available if SQLite is compiled with the
** [SQLITE_ENABLE_SNAPSHOT] option.
*/
SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_recover(sqlite3 *db, const char *zDb);
SQLITE_API int sqlite3_snapshot_recover(sqlite3 *db, const char *zDb);
/*
** CAPI3REF: Serialize a database
@@ -10983,12 +11081,13 @@ SQLITE_API unsigned char *sqlite3_serialize(
**
** The sqlite3_deserialize(D,S,P,N,M,F) interface causes the
** [database connection] D to disconnect from database S and then
** reopen S as an in-memory database based on the serialization contained
** in P. The serialized database P is N bytes in size. M is the size of
** the buffer P, which might be larger than N. If M is larger than N, and
** the SQLITE_DESERIALIZE_READONLY bit is not set in F, then SQLite is
** permitted to add content to the in-memory database as long as the total
** size does not exceed M bytes.
** reopen S as an in-memory database based on the serialization
** contained in P. If S is a NULL pointer, the main database is
** used. The serialized database P is N bytes in size. M is the size
** of the buffer P, which might be larger than N. If M is larger than
** N, and the SQLITE_DESERIALIZE_READONLY bit is not set in F, then
** SQLite is permitted to add content to the in-memory database as
** long as the total size does not exceed M bytes.
**
** If the SQLITE_DESERIALIZE_FREEONCLOSE bit is set in F, then SQLite will
** invoke sqlite3_free() on the serialization buffer when the database
@@ -11055,6 +11154,54 @@ SQLITE_API int sqlite3_deserialize(
#define SQLITE_DESERIALIZE_RESIZEABLE 2 /* Resize using sqlite3_realloc64() */
#define SQLITE_DESERIALIZE_READONLY 4 /* Database is read-only */
/*
** CAPI3REF: Bind array values to the CARRAY table-valued function
**
** The sqlite3_carray_bind(S,I,P,N,F,X) interface binds an array value to
** the first argument of the [carray() table-valued function]. The
** S parameter is a pointer to the [prepared statement] that uses the carray()
** function. I is the parameter index to be bound. P is a pointer to the
** array to be bound, and N is the number of elements in the array. The
** F argument is one of the constants [SQLITE_CARRAY_INT32], [SQLITE_CARRAY_INT64],
** [SQLITE_CARRAY_DOUBLE], [SQLITE_CARRAY_TEXT], or [SQLITE_CARRAY_BLOB] to
** indicate the datatype of the array being bound. If the X argument is not a
** NULL pointer, then SQLite will invoke the function X on the P parameter
** after it has finished using P, even if the call to
** sqlite3_carray_bind() fails. The special-case finalizer
** SQLITE_TRANSIENT has no effect here.
*/
SQLITE_API int sqlite3_carray_bind(
sqlite3_stmt *pStmt, /* Statement to be bound */
int i, /* Parameter index */
void *aData, /* Pointer to array data */
int nData, /* Number of data elements */
int mFlags, /* CARRAY flags */
void (*xDel)(void*) /* Destructor for aData */
);
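/*
** Example (illustrative sketch): bind three 32-bit integers to carray().
** SQLITE_STATIC is passed as the destructor because aVals outlives the
** statement.
**
**   static int aVals[] = { 1, 2, 3 };
**   sqlite3_stmt *pStmt = 0;
**   sqlite3_prepare_v2(db, "SELECT value FROM carray(?1)", -1, &pStmt, 0);
**   sqlite3_carray_bind(pStmt, 1, aVals, 3, SQLITE_CARRAY_INT32, SQLITE_STATIC);
*/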
/*
** CAPI3REF: Datatypes for the CARRAY table-valued function
**
** The fifth argument to the [sqlite3_carray_bind()] interface must be
** one of the following constants, to specify the datatype of the array
** that is being bound into the [carray table-valued function].
*/
#define SQLITE_CARRAY_INT32 0 /* Data is 32-bit signed integers */
#define SQLITE_CARRAY_INT64 1 /* Data is 64-bit signed integers */
#define SQLITE_CARRAY_DOUBLE 2 /* Data is doubles */
#define SQLITE_CARRAY_TEXT 3 /* Data is char* */
#define SQLITE_CARRAY_BLOB 4 /* Data is struct iovec */
/*
** Versions of the above #defines that omit the initial SQLITE_, for
** legacy compatibility.
*/
#define CARRAY_INT32 0 /* Data is 32-bit signed integers */
#define CARRAY_INT64 1 /* Data is 64-bit signed integers */
#define CARRAY_DOUBLE 2 /* Data is doubles */
#define CARRAY_TEXT 3 /* Data is char* */
#define CARRAY_BLOB 4 /* Data is struct iovec */
/*
** Undo the hack that converts floating point types to integer for
** builds on processors without floating point support.
@@ -12314,14 +12461,32 @@ SQLITE_API void sqlite3changegroup_delete(sqlite3_changegroup*);
** update the "main" database attached to handle db with the changes found in
** the changeset passed via the second and third arguments.
**
** All changes made by these functions are enclosed in a savepoint transaction.
** If any other error (aside from a constraint failure when attempting to
** write to the target database) occurs, then the savepoint transaction is
** rolled back, restoring the target database to its original state, and an
** SQLite error code returned. Additionally, starting with version 3.51.0,
** an error code and error message that may be accessed using the
** [sqlite3_errcode()] and [sqlite3_errmsg()] APIs are left in the database
** handle.
**
** The fourth argument (xFilter) passed to these functions is the "filter
** callback". If it is not NULL, then for each table affected by at least one
** change in the changeset, the filter callback is invoked with
** the table name as the second argument, and a copy of the context pointer
** passed as the sixth argument as the first. If the "filter callback"
** returns zero, then no attempt is made to apply any changes to the table.
** Otherwise, if the return value is non-zero or the xFilter argument to
** is NULL, all changes related to the table are attempted.
** callback". This may be passed NULL, in which case all changes in the
** changeset are applied to the database. For sqlite3changeset_apply() and
** sqlite3changeset_apply_v2(), if it is not NULL, then it is invoked once
** for each table affected by at least one change in the changeset. In this
** case the table name is passed as the second argument, and a copy of
** the context pointer passed as the sixth argument to apply() or apply_v2()
** as the first. If the "filter callback" returns zero, then no attempt is
** made to apply any changes to the table. Otherwise, if the return value is
** non-zero, all changes related to the table are attempted.
**
** For sqlite3changeset_apply_v3(), the xFilter callback is invoked once
** per change. The second argument in this case is an sqlite3_changeset_iter
** that may be queried using the usual APIs for the details of the current
** change. If the "filter callback" returns zero in this case, then no attempt
** is made to apply the current change. If it returns non-zero, the change
** is applied.
**
** For each table that is not excluded by the filter callback, this function
** tests that the target database contains a compatible table. A table is
@@ -12342,11 +12507,11 @@ SQLITE_API void sqlite3changegroup_delete(sqlite3_changegroup*);
** one such warning is issued for each table in the changeset.
**
** For each change for which there is a compatible table, an attempt is made
** to modify the table contents according to the UPDATE, INSERT or DELETE
** change. If a change cannot be applied cleanly, the conflict handler
** function passed as the fifth argument to sqlite3changeset_apply() may be
** invoked. A description of exactly when the conflict handler is invoked for
** each type of change is below.
** to modify the table contents according to each UPDATE, INSERT or DELETE
** change that is not excluded by a filter callback. If a change cannot be
** applied cleanly, the conflict handler function passed as the fifth argument
** to sqlite3changeset_apply() may be invoked. A description of exactly when
** the conflict handler is invoked for each type of change is below.
**
** Unlike the xFilter argument, xConflict may not be passed NULL. The results
** of passing anything other than a valid function pointer as the xConflict
@@ -12442,12 +12607,6 @@ SQLITE_API void sqlite3changegroup_delete(sqlite3_changegroup*);
** This can be used to further customize the application's conflict
** resolution strategy.
**
** All changes made by these functions are enclosed in a savepoint transaction.
** If any other error (aside from a constraint failure when attempting to
** write to the target database) occurs, then the savepoint transaction is
** rolled back, restoring the target database to its original state, and an
** SQLite error code returned.
**
** If the output parameters (ppRebase) and (pnRebase) are non-NULL and
** the input is a changeset (not a patchset), then sqlite3changeset_apply_v2()
** may set (*ppRebase) to point to a "rebase" that may be used with the
@@ -12497,6 +12656,23 @@ SQLITE_API int sqlite3changeset_apply_v2(
void **ppRebase, int *pnRebase, /* OUT: Rebase data */
int flags /* SESSION_CHANGESETAPPLY_* flags */
);
SQLITE_API int sqlite3changeset_apply_v3(
sqlite3 *db, /* Apply change to "main" db of this handle */
int nChangeset, /* Size of changeset in bytes */
void *pChangeset, /* Changeset blob */
int(*xFilter)(
void *pCtx, /* Copy of sixth arg to _apply() */
sqlite3_changeset_iter *p /* Handle describing change */
),
int(*xConflict)(
void *pCtx, /* Copy of sixth arg to _apply() */
int eConflict, /* DATA, MISSING, CONFLICT, CONSTRAINT */
sqlite3_changeset_iter *p /* Handle describing change and conflict */
),
void *pCtx, /* First argument passed to xConflict */
void **ppRebase, int *pnRebase, /* OUT: Rebase data */
int flags /* SESSION_CHANGESETAPPLY_* flags */
);
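/*
** Example xFilter for sqlite3changeset_apply_v3() (illustrative sketch):
** apply INSERT and UPDATE changes but skip DELETEs.
**
**   static int xFilterSkipDeletes(void *pCtx, sqlite3_changeset_iter *p){
**     const char *zTab; int nCol, op, bIndirect;
**     sqlite3changeset_op(p, &zTab, &nCol, &op, &bIndirect);
**     return op!=SQLITE_DELETE;
**   }
*/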
/*
** CAPI3REF: Flags for sqlite3changeset_apply_v2
@@ -12916,6 +13092,23 @@ SQLITE_API int sqlite3changeset_apply_v2_strm(
void **ppRebase, int *pnRebase,
int flags
);
SQLITE_API int sqlite3changeset_apply_v3_strm(
sqlite3 *db, /* Apply change to "main" db of this handle */
int (*xInput)(void *pIn, void *pData, int *pnData), /* Input function */
void *pIn, /* First arg for xInput */
int(*xFilter)(
void *pCtx, /* Copy of sixth arg to _apply() */
sqlite3_changeset_iter *p
),
int(*xConflict)(
void *pCtx, /* Copy of sixth arg to _apply() */
int eConflict, /* DATA, MISSING, CONFLICT, CONSTRAINT */
sqlite3_changeset_iter *p /* Handle describing change and conflict */
),
void *pCtx, /* First argument passed to xConflict */
void **ppRebase, int *pnRebase,
int flags
);
SQLITE_API int sqlite3changeset_concat_strm(
int (*xInputA)(void *pIn, void *pData, int *pnData),
void *pInA,

View File

@@ -124,6 +124,8 @@ public:
size_t memoryCost() const;
AbortSignalTimeout getTimeout() const { return m_timeout; }
private:
enum class Aborted : bool {
No,

View File

@@ -512,6 +512,15 @@ pub fn tick(this: *EventLoop) void {
this.global.handleRejectedPromises();
}
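/// Like `tick`, but repeatedly drains concurrent tasks and the task queue
/// until both are empty, without the JS-facing bookkeeping (such as
/// `handleRejectedPromises`) that `tick` performs.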
pub fn tickWithoutJS(this: *EventLoop) void {
const ctx = this.virtual_machine;
this.tickConcurrent();
while (this.tickWithCount(ctx) > 0) {
this.tickConcurrent();
}
}
pub fn waitForPromise(this: *EventLoop, promise: jsc.AnyPromise) void {
const jsc_vm = this.virtual_machine.jsc_vm;
switch (promise.status(jsc_vm)) {
@@ -652,6 +661,12 @@ pub fn getActiveTasks(globalObject: *jsc.JSGlobalObject, _: *jsc.CallFrame) bun.
return result;
}
pub fn deinit(this: *EventLoop) void {
this.tasks.deinit();
this.immediate_tasks.clearAndFree(bun.default_allocator);
this.next_immediate_tasks.clearAndFree(bun.default_allocator);
}
pub const AnyEventLoop = @import("./event_loop/AnyEventLoop.zig").AnyEventLoop;
pub const ConcurrentPromiseTask = @import("./event_loop/ConcurrentPromiseTask.zig").ConcurrentPromiseTask;
pub const WorkTask = @import("./event_loop/WorkTask.zig").WorkTask;

View File

@@ -0,0 +1,188 @@
//! Isolated event loop for spawnSync operations.
//!
//! This provides a completely separate event loop instance to ensure that:
//! - JavaScript timers don't fire during spawnSync
//! - stdin/stdout from the main process aren't affected
//! - The subprocess runs in complete isolation
//! - We don't recursively run the main event loop
//!
//! Implementation approach:
//! - Creates a separate uws.Loop instance with its own kqueue/epoll fd (POSIX) or libuv loop (Windows)
//! - Wraps it in a full jsc.EventLoop instance
//! - On POSIX: temporarily overrides vm.event_loop_handle to point to isolated loop
//! - On Windows: stores isolated loop pointer in EventLoop.uws_loop
//! - Minimal handler callbacks (wakeup/pre/post are no-ops)
//!
//! Similar to Node.js's approach in vendor/node/src/spawn_sync.cc but adapted for Bun's architecture.
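//!
//! Usage sketch (hypothetical call site; `vm`, `prev_event_loop`, `deadline`,
//! and `subprocess` are assumed to exist where spawnSync runs):
//!
//!     var sync_loop: SpawnSyncEventLoop = undefined;
//!     sync_loop.init(vm);
//!     defer sync_loop.deinit();
//!     sync_loop.prepare(vm);
//!     defer sync_loop.cleanup(vm, prev_event_loop);
//!     while (subprocess.hasPendingActivity()) {
//!         if (sync_loop.tickWithTimeout(&deadline) == .timeout) break;
//!     }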
const SpawnSyncEventLoop = @This();
/// Separate JSC EventLoop instance for this spawnSync
/// This is a FULL event loop, not just a handle
event_loop: jsc.EventLoop,
/// Completely separate uws.Loop instance - critical for avoiding recursive event loop execution
uws_loop: *uws.Loop,
/// On POSIX, we need to temporarily override the VM's event_loop_handle
/// Store the original so we can restore it
original_event_loop_handle: @FieldType(jsc.VirtualMachine, "event_loop_handle") = undefined,
uv_timer: if (bun.Environment.isWindows) ?*bun.windows.libuv.Timer else void = if (bun.Environment.isWindows) null else {},
did_timeout: bool = false,
/// Minimal handler for the isolated loop
const Handler = struct {
pub fn wakeup(loop: *uws.Loop) callconv(.C) void {
_ = loop;
// No-op: we don't need to wake up from another thread for spawnSync
}
pub fn pre(loop: *uws.Loop) callconv(.C) void {
_ = loop;
// No-op: no pre-tick work needed for spawnSync
}
pub fn post(loop: *uws.Loop) callconv(.C) void {
_ = loop;
// No-op: no post-tick work needed for spawnSync
}
};
pub fn init(self: *SpawnSyncEventLoop, vm: *jsc.VirtualMachine) void {
const loop = uws.Loop.create(Handler);
self.* = .{
.event_loop = undefined,
.uws_loop = loop,
};
// Initialize the JSC EventLoop with empty state
// CRITICAL: On Windows, store our isolated loop pointer
self.event_loop = .{
.tasks = jsc.EventLoop.Queue.init(bun.default_allocator),
.global = vm.global,
.virtual_machine = vm,
.uws_loop = if (bun.Environment.isWindows) self.uws_loop else {},
};
// Set up the loop's internal data to point to this isolated event loop
self.uws_loop.internal_loop_data.setParentEventLoop(jsc.EventLoopHandle.init(&self.event_loop));
self.uws_loop.internal_loop_data.jsc_vm = null;
}
fn onCloseUVTimer(timer: *bun.windows.libuv.Timer) callconv(.C) void {
bun.default_allocator.destroy(timer);
}
pub fn deinit(this: *SpawnSyncEventLoop) void {
if (comptime bun.Environment.isWindows) {
if (this.uv_timer) |timer| {
timer.stop();
timer.unref();
this.uv_timer = null;
libuv.uv_close(@alignCast(@ptrCast(timer)), @ptrCast(&onCloseUVTimer));
}
}
this.event_loop.deinit();
this.uws_loop.deinit();
}
/// Configure the event loop for a specific VM context
pub fn prepare(this: *SpawnSyncEventLoop, vm: *jsc.VirtualMachine) void {
this.event_loop.global = vm.global;
this.did_timeout = false;
this.event_loop.virtual_machine = vm;
this.original_event_loop_handle = vm.event_loop_handle;
vm.event_loop_handle = if (bun.Environment.isPosix) this.uws_loop else this.uws_loop.uv_loop;
}
/// Restore the original event loop handle after spawnSync completes
pub fn cleanup(this: *SpawnSyncEventLoop, vm: *jsc.VirtualMachine, prev_event_loop: *jsc.EventLoop) void {
vm.event_loop_handle = this.original_event_loop_handle;
vm.event_loop = prev_event_loop;
if (bun.Environment.isWindows) {
if (this.uv_timer) |timer| {
timer.stop();
timer.unref();
}
}
}
/// Get an EventLoopHandle for this isolated loop
pub fn handle(this: *SpawnSyncEventLoop) jsc.EventLoopHandle {
return jsc.EventLoopHandle.init(&this.event_loop);
}
fn onUVTimer(timer_: *bun.windows.libuv.Timer) callconv(.C) void {
const this: *SpawnSyncEventLoop = @ptrCast(@alignCast(timer_.data));
this.did_timeout = true;
this.uws_loop.uv_loop.stop();
}
const TickState = enum { timeout, completed };
fn prepareTimerOnWindows(this: *SpawnSyncEventLoop, ts: *const bun.timespec) void {
const timer: *bun.windows.libuv.Timer = this.uv_timer orelse brk: {
const uv_timer: *bun.windows.libuv.Timer = bun.default_allocator.create(bun.windows.libuv.Timer) catch |e| bun.handleOom(e);
uv_timer.* = std.mem.zeroes(bun.windows.libuv.Timer);
uv_timer.init(this.uws_loop.uv_loop);
break :brk uv_timer;
};
timer.start(ts.msUnsigned(), 0, &onUVTimer);
timer.ref();
this.uv_timer = timer;
timer.data = this;
}
/// Tick the isolated event loop with an optional timeout
/// This is similar to the main event loop's tick but completely isolated
pub fn tickWithTimeout(this: *SpawnSyncEventLoop, timeout: ?*const bun.timespec) TickState {
const duration: ?*const bun.timespec = if (timeout) |ts| &ts.duration(&.now()) else null;
if (bun.Environment.isWindows) {
if (duration) |ts| {
prepareTimerOnWindows(this, ts);
}
}
// Tick the isolated uws loop with the specified timeout
// This will only process I/O related to this subprocess
// and will NOT interfere with the main event loop
this.uws_loop.tickWithTimeout(duration);
if (timeout) |ts| {
if (bun.Environment.isWindows) {
this.uv_timer.?.unref();
this.uv_timer.?.stop();
} else {
this.did_timeout = bun.timespec.now().order(ts) != .lt;
}
}
this.event_loop.tickWithoutJS();
const did_timeout = this.did_timeout;
this.did_timeout = false;
if (did_timeout) {
return .timeout;
}
return .completed;
}
/// Check if the loop has any active handles
pub fn isActive(this: *const SpawnSyncEventLoop) bool {
return this.uws_loop.isActive();
}
const std = @import("std");
const bun = @import("bun");
const jsc = bun.jsc;
const uws = bun.uws;
const libuv = bun.windows.libuv;
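The user-visible guarantee of this isolation: main-loop work (timers, queued callbacks) armed before a `Bun.spawnSync` call stays parked until the call returns. A minimal sketch of that behavior, mirroring the tests added later in this diff (the sleep duration and log strings are illustrative only):

```ts
// Sketch: a timer armed before spawnSync must not fire while it blocks.
const timer = setTimeout(() => {
  console.log("timer fired"); // acceptable only AFTER spawnSync returns
}, 1);

// Blocks for ~50ms; the isolated loop only services this subprocess's I/O.
const result = Bun.spawnSync({
  cmd: [process.execPath, "-e", "Bun.sleepSync(50)"],
});

// The 1ms timer is still pending here because the main loop never ticked.
console.log("spawnSync done, success:", result.success);
clearTimeout(timer);
```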


@@ -42,6 +42,8 @@ valkey_context: ValkeyContext = .{},
tls_default_ciphers: ?[:0]const u8 = null,
#spawn_sync_event_loop: bun.ptr.Owned(?*SpawnSyncEventLoop) = .initNull(),
const PipeReadBuffer = [256 * 1024]u8;
const DIGESTED_HMAC_256_LEN = 32;
pub const AWSSignatureCache = struct {
@@ -537,6 +539,7 @@ pub fn deinit(this: *RareData) void {
bun.default_allocator.destroy(pipe);
}
this.#spawn_sync_event_loop.deinit();
this.aws_signature_cache.deinit();
this.s3_default_client.deinit();
@@ -569,6 +572,17 @@ pub fn websocketDeflate(this: *RareData) *WebSocketDeflate.RareData {
};
}
pub const SpawnSyncEventLoop = @import("./event_loop/SpawnSyncEventLoop.zig");
pub fn spawnSyncEventLoop(this: *RareData, vm: *jsc.VirtualMachine) *SpawnSyncEventLoop {
return this.#spawn_sync_event_loop.get() orelse brk: {
this.#spawn_sync_event_loop = .new(undefined);
const ptr: *SpawnSyncEventLoop = this.#spawn_sync_event_loop.get().?;
ptr.init(vm);
break :brk ptr;
};
}
const IPC = @import("./ipc.zig");
const UUID = @import("./uuid.zig");
const WebSocketDeflate = @import("../http/websocket_client/WebSocketDeflate.zig");


@@ -97,6 +97,22 @@ pub const TestRunner = struct {
bun_test_root: bun_test.BunTestRoot,
pub fn getActiveTimeout(this: *const TestRunner) bun.timespec {
const active_file = this.bun_test_root.active_file.get() orelse return .epoch;
if (active_file.timer.state != .ACTIVE or active_file.timer.next.eql(&.epoch)) {
return .epoch;
}
return active_file.timer.next;
}
pub fn removeActiveTimeout(this: *TestRunner, vm: *jsc.VirtualMachine) void {
const active_file = this.bun_test_root.active_file.get() orelse return;
if (active_file.timer.state != .ACTIVE or active_file.timer.next.eql(&.epoch)) {
return;
}
vm.timer.remove(&active_file.timer);
}
pub const Summary = struct {
pass: u32 = 0,
expectations: u32 = 0,


@@ -159,7 +159,11 @@ pub fn eventLoop(this: *const FileReader) jsc.EventLoopHandle {
}
pub fn loop(this: *const FileReader) *bun.Async.Loop {
return this.eventLoop().loop();
if (comptime bun.Environment.isWindows) {
return this.eventLoop().loop().uv_loop;
} else {
return this.eventLoop().loop();
}
}
pub fn setup(


@@ -357,7 +357,11 @@ pub fn setup(this: *FileSink, options: *const FileSink.Options) bun.sys.Maybe(vo
}
pub fn loop(this: *FileSink) *bun.Async.Loop {
return this.event_loop_handle.loop();
if (comptime bun.Environment.isWindows) {
return this.event_loop_handle.loop().uv_loop;
} else {
return this.event_loop_handle.loop();
}
}
pub fn eventLoop(this: *FileSink) jsc.EventLoopHandle {


@@ -1548,7 +1548,7 @@ pub const BundleV2 = struct {
bake_options: BakeOptions,
alloc: std.mem.Allocator,
event_loop: EventLoop,
) !bun.collections.ArrayListDefault(options.OutputFile) {
) !std.ArrayList(options.OutputFile) {
var this = try BundleV2.init(
server_transpiler,
bake_options,
@@ -1596,7 +1596,7 @@ pub const BundleV2 = struct {
);
if (chunks.len == 0) {
return bun.collections.ArrayListDefault(options.OutputFile).init();
return std.ArrayList(options.OutputFile).init(bun.default_allocator);
}
return try this.linker.generateChunksInParallel(chunks, false);


@@ -2,7 +2,7 @@ pub fn generateChunksInParallel(
c: *LinkerContext,
chunks: []Chunk,
comptime is_dev_server: bool,
) !if (is_dev_server) void else bun.collections.ArrayListDefault(options.OutputFile) {
) !if (is_dev_server) void else std.ArrayList(options.OutputFile) {
const trace = bun.perf.trace("Bundler.generateChunksInParallel");
defer trace.end();


@@ -127,8 +127,12 @@ pub const ProcessHandle = struct {
return this.state.event_loop;
}
pub fn loop(this: *This) *bun.uws.Loop {
return this.state.event_loop.loop;
pub fn loop(this: *This) *bun.Async.Loop {
if (comptime bun.Environment.isWindows) {
return this.state.event_loop.loop.uv_loop;
} else {
return this.state.event_loop.loop;
}
}
};


@@ -634,6 +634,11 @@ pub const Loop = extern struct {
this.active_handles -= 1;
}
pub fn stop(this: *Loop) void {
log("stop", .{});
uv_stop(this);
}
pub fn isActive(this: *Loop) bool {
const loop_alive = uv_loop_alive(this) != 0;
// This log may be helpful if you are curious what exact handles are active


@@ -165,6 +165,10 @@ pub const PosixLoop = extern struct {
pub fn shouldEnableDateHeaderTimer(this: *const PosixLoop) bool {
return this.internal_loop_data.shouldEnableDateHeaderTimer();
}
pub fn deinit(this: *PosixLoop) void {
c.us_loop_free(this);
}
};
pub const WindowsLoop = extern struct {
@@ -261,6 +265,10 @@ pub const WindowsLoop = extern struct {
c.uws_loop_date_header_timer_update(this);
}
pub fn deinit(this: *WindowsLoop) void {
c.us_loop_free(this);
}
fn NewHandler(comptime UserType: type, comptime callback_fn: fn (UserType) void) type {
return struct {
loop: *Loop,


@@ -457,6 +457,10 @@ pub fn isTLS(this: *WindowsNamedPipe) bool {
return this.flags.is_ssl;
}
pub fn loop(this: *WindowsNamedPipe) *bun.Async.Loop {
return this.vm.uvLoop();
}
pub fn encodeAndWrite(this: *WindowsNamedPipe, data: []const u8) i32 {
log("encodeAndWrite (len: {})", .{data.len});
if (this.wrapper) |*wrapper| {


@@ -779,8 +779,12 @@ pub const SecurityScanSubprocess = struct {
return &this.manager.event_loop;
}
pub fn loop(this: *const SecurityScanSubprocess) *bun.uws.Loop {
return this.manager.event_loop.loop();
pub fn loop(this: *const SecurityScanSubprocess) *bun.Async.Loop {
if (comptime bun.Environment.isWindows) {
return this.manager.event_loop.loop().uv_loop;
} else {
return this.manager.event_loop.loop();
}
}
pub fn onReaderDone(this: *SecurityScanSubprocess) void {


@@ -526,7 +526,7 @@ pub fn isGitHubShorthand(npa_str: []const u8) bool {
return seen_slash and does_not_end_with_slash;
}
const UrlProtocol = union(enum) {
pub const UrlProtocol = union(enum) {
well_formed: WellDefinedProtocol,
// A protocol which is not known by the library. Includes the : character, but not the
@@ -545,7 +545,7 @@ const UrlProtocol = union(enum) {
}
};
const UrlProtocolPair = struct {
pub const UrlProtocolPair = struct {
const Self = @This();
url: union(enum) {
@@ -723,11 +723,10 @@ fn normalizeProtocol(npa_str: []const u8) UrlProtocolPair {
return .{ .url = .{ .unmanaged = npa_str }, .protocol = .unknown };
}
/// Attempt to correct an scp-style URL into a proper URL, parsable with jsc.URL. Potentially
/// mutates the original input.
/// Attempt to correct an scp-style URL into a proper URL, parsable with jsc.URL.
///
/// This function assumes that the input is an scp-style URL.
fn correctUrl(
pub fn correctUrl(
url_proto_pair: *const UrlProtocolPair,
allocator: std.mem.Allocator,
) error{OutOfMemory}!UrlProtocolPair {


@@ -47,8 +47,12 @@ pub const LifecycleScriptSubprocess = struct {
pub const OutputReader = bun.io.BufferedReader;
pub fn loop(this: *const LifecycleScriptSubprocess) *bun.uws.Loop {
return this.manager.event_loop.loop();
pub fn loop(this: *const LifecycleScriptSubprocess) *bun.Async.Loop {
if (comptime bun.Environment.isWindows) {
return this.manager.event_loop.loop().uv_loop;
} else {
return this.manager.event_loop.loop();
}
}
pub fn eventLoop(this: *const LifecycleScriptSubprocess) *jsc.AnyEventLoop {


@@ -1257,7 +1257,7 @@ pub fn saveToDisk(this: *Lockfile, load_result: *const LoadResult, options: *con
break :bytes writer_buf.list.items;
}
var bytes = bun.collections.ArrayListDefault(u8).init();
var bytes = std.ArrayList(u8).init(bun.default_allocator);
var total_size: usize = 0;
var end_pos: usize = 0;
@@ -1265,9 +1265,9 @@ pub fn saveToDisk(this: *Lockfile, load_result: *const LoadResult, options: *con
Output.err(err, "failed to serialize lockfile", .{});
Global.crash();
};
if (bytes.items().len >= end_pos)
bytes.items()[end_pos..][0..@sizeOf(usize)].* = @bitCast(total_size);
break :bytes bytes.toOwnedSlice() catch bun.outOfMemory();
if (bytes.items.len >= end_pos)
bytes.items[end_pos..][0..@sizeOf(usize)].* = @bitCast(total_size);
break :bytes bytes.items;
};
defer bun.default_allocator.free(bytes);


@@ -30,7 +30,7 @@ pub fn detectAndLoadOtherLockfile(
.step = .migrating,
.value = err,
.lockfile_path = "package-lock.json",
.format = .binary,
.format = .text,
} };
};
@@ -54,7 +54,7 @@ pub fn detectAndLoadOtherLockfile(
.step = .migrating,
.value = err,
.lockfile_path = "yarn.lock",
.format = .binary,
.format = .text,
} };
};
@@ -126,7 +126,7 @@ pub fn detectAndLoadOtherLockfile(
.step = .migrating,
.value = err,
.lockfile_path = "pnpm-lock.yaml",
.format = .binary,
.format = .text,
} };
};


@@ -395,10 +395,35 @@ pub const Repository = extern struct {
return null;
}
if (strings.hasPrefixComptime(url, "git@") or strings.hasPrefixComptime(url, "ssh://")) {
if (strings.hasPrefixComptime(url, "git@")) {
return url;
}
if (strings.hasPrefixComptime(url, "ssh://")) {
// TODO(markovejnovic): This is a stop-gap. Ideally, hosted_git_info would be integrated
// more thoroughly into the codebase so we could avoid the allocation and copy here. For
// now, the thread-local buffer is a good-enough solution that avoids handling init/deinit.
// Fix malformed ssh:// URLs with colons using hosted_git_info.correctUrl
// ssh://git@github.com:user/repo -> ssh://git@github.com/user/repo
var pair = hosted_git_info.UrlProtocolPair{
.url = .{ .unmanaged = url },
.protocol = .{ .well_formed = .git_plus_ssh },
};
var corrected = hosted_git_info.correctUrl(&pair, bun.default_allocator) catch {
return url; // If correction fails, return original
};
defer corrected.deinit();
// Copy corrected URL to thread-local buffer
const corrected_str = corrected.urlSlice();
const result = ssh_path_buf[0..corrected_str.len];
bun.copy(u8, result, corrected_str);
return result;
}
if (Dependency.isSCPLikePath(url)) {
ssh_path_buf[0.."ssh://git@".len].* = "ssh://git@".*;
var rest = ssh_path_buf["ssh://git@".len..];
@@ -675,6 +700,7 @@ const string = []const u8;
const Dependency = @import("./dependency.zig");
const DotEnv = @import("../env_loader.zig");
const Environment = @import("../env.zig");
const hosted_git_info = @import("./hosted_git_info.zig");
const std = @import("std");
const FileSystem = @import("../fs.zig").FileSystem;


@@ -709,6 +709,7 @@ const WindowsBufferedReaderVTable = struct {
chunk: []const u8,
hasMore: ReadState,
) bool = null,
loop: *const fn (*anyopaque) *Async.Loop,
};
pub const WindowsBufferedReader = struct {
@@ -757,12 +758,16 @@ pub const WindowsBufferedReader = struct {
fn onReaderError(this: *anyopaque, err: bun.sys.Error) void {
return Type.onReaderError(@as(*Type, @alignCast(@ptrCast(this))), err);
}
fn loop(this: *anyopaque) *Async.Loop {
return Type.loop(@as(*Type, @alignCast(@ptrCast(this))));
}
};
return .{
.vtable = .{
.onReadChunk = if (@hasDecl(Type, "onReadChunk")) &fns.onReadChunk else null,
.onReaderDone = &fns.onReaderDone,
.onReaderError = &fns.onReaderError,
.loop = &fns.loop,
},
};
}
@@ -909,7 +914,10 @@ pub const WindowsBufferedReader = struct {
pub fn start(this: *WindowsBufferedReader, fd: bun.FileDescriptor, _: bool) bun.sys.Maybe(void) {
bun.assert(this.source == null);
const source = switch (Source.open(uv.Loop.get(), fd)) {
// Use the event loop from the parent, not the global one
// This is critical for spawnSync to use its isolated loop
const loop = this.vtable.loop(this.parent);
const source = switch (Source.open(loop, fd)) {
.err => |err| return .{ .err = err },
.result => |source| source,
};
@@ -1058,7 +1066,7 @@ pub const WindowsBufferedReader = struct {
file_ptr.iov = uv.uv_buf_t.init(buf);
this.flags.has_inflight_read = true;
if (uv.uv_fs_read(uv.Loop.get(), &file_ptr.fs, file_ptr.file, @ptrCast(&file_ptr.iov), 1, if (this.flags.use_pread) @intCast(this._offset) else -1, onFileRead).toError(.write)) |err| {
if (uv.uv_fs_read(this.vtable.loop(this.parent), &file_ptr.fs, file_ptr.file, @ptrCast(&file_ptr.iov), 1, if (this.flags.use_pread) @intCast(this._offset) else -1, onFileRead).toError(.write)) |err| {
file_ptr.complete(false);
this.flags.has_inflight_read = false;
this.flags.is_paused = true;
@@ -1108,7 +1116,7 @@ pub const WindowsBufferedReader = struct {
file.iov = uv.uv_buf_t.init(buf);
this.flags.has_inflight_read = true;
if (uv.uv_fs_read(uv.Loop.get(), &file.fs, file.file, @ptrCast(&file.iov), 1, if (this.flags.use_pread) @intCast(this._offset) else -1, onFileRead).toError(.write)) |err| {
if (uv.uv_fs_read(this.vtable.loop(this.parent), &file.fs, file.file, @ptrCast(&file.iov), 1, if (this.flags.use_pread) @intCast(this._offset) else -1, onFileRead).toError(.write)) |err| {
file.complete(false);
this.flags.has_inflight_read = false;
return .{ .err = err };


@@ -258,7 +258,9 @@ pub fn PosixBufferedWriter(Parent: type, function_table: anytype) type {
pub fn registerPoll(this: *PosixWriter) void {
var poll = this.getPoll() orelse return;
switch (poll.registerWithFd(bun.uws.Loop.get(), .writable, .dispatch, poll.fd)) {
// Use the event loop from the parent, not the global one
const loop = this.parent.eventLoop().loop();
switch (poll.registerWithFd(loop, .writable, .dispatch, poll.fd)) {
.err => |err| {
onError(this.parent, err);
},
@@ -897,7 +899,10 @@ fn BaseWindowsPipeWriter(
else => @compileError("Expected `bun.FileDescriptor` or `*bun.MovableIfWindowsFd` but got: " ++ @typeName(rawfd)),
};
bun.assert(this.source == null);
const source = switch (Source.open(uv.Loop.get(), fd)) {
// Use the event loop from the parent, not the global one
// This is critical for spawnSync to use its isolated loop
const loop = this.parent.loop();
const source = switch (Source.open(loop, fd)) {
.result => |source| source,
.err => |err| return .{ .err = err },
};
@@ -1059,7 +1064,7 @@ pub fn WindowsBufferedWriter(Parent: type, function_table: anytype) type {
file.prepare();
this.write_buffer = uv.uv_buf_t.init(buffer);
if (uv.uv_fs_write(uv.Loop.get(), &file.fs, file.file, @ptrCast(&this.write_buffer), 1, -1, onFsWriteComplete).toError(.write)) |err| {
if (uv.uv_fs_write(this.parent.loop(), &file.fs, file.file, @ptrCast(&this.write_buffer), 1, -1, onFsWriteComplete).toError(.write)) |err| {
file.complete(false);
this.close();
onError(this.parent, err);
@@ -1097,7 +1102,7 @@ pub fn WindowsBufferedWriter(Parent: type, function_table: anytype) type {
/// Basic std.ArrayList(u8) + usize cursor wrapper
pub const StreamBuffer = struct {
list: bun.collections.ArrayListDefault(u8) = bun.collections.ArrayListDefault(u8).init(),
list: std.ArrayList(u8) = std.ArrayList(u8).init(bun.default_allocator),
cursor: usize = 0,
pub fn reset(this: *StreamBuffer) void {
@@ -1107,19 +1112,19 @@ pub const StreamBuffer = struct {
}
pub fn maybeShrink(this: *StreamBuffer) void {
if (this.list.capacity() > std.heap.pageSize()) {
if (this.list.capacity > std.heap.pageSize()) {
// workaround insane zig decision to make it undefined behavior to resize .len < .capacity
this.list.expandToCapacity(undefined);
this.list.expandToCapacity();
this.list.shrinkAndFree(std.heap.pageSize());
}
}
pub fn memoryCost(this: *const StreamBuffer) usize {
return this.list.capacity();
return this.list.capacity;
}
pub fn size(this: *const StreamBuffer) usize {
return this.list.items().len - this.cursor;
return this.list.items.len - this.cursor;
}
pub fn isEmpty(this: *const StreamBuffer) bool {
@@ -1152,7 +1157,7 @@ pub const StreamBuffer = struct {
pub fn writeTypeAsBytesAssumeCapacity(this: *StreamBuffer, comptime T: type, data: T) void {
var byte_list = bun.ByteList.moveFromList(&this.list);
defer this.list = byte_list.moveToListManaged(this.list.allocator());
defer this.list = byte_list.moveToListManaged(this.list.allocator);
byte_list.writeTypeAsBytesAssumeCapacity(T, data);
}
@@ -1164,20 +1169,20 @@ pub const StreamBuffer = struct {
{
var byte_list = bun.ByteList.moveFromList(&this.list);
defer this.list = byte_list.moveToListManaged(this.list.allocator());
_ = try byte_list.writeLatin1(this.list.allocator(), buffer);
defer this.list = byte_list.moveToListManaged(this.list.allocator);
_ = try byte_list.writeLatin1(this.list.allocator, buffer);
}
return this.list.items()[this.cursor..];
return this.list.items[this.cursor..];
} else if (comptime @TypeOf(writeFn) == @TypeOf(&writeUTF16) and writeFn == &writeUTF16) {
{
var byte_list = bun.ByteList.moveFromList(&this.list);
defer this.list = byte_list.moveToListManaged(this.list.allocator());
defer this.list = byte_list.moveToListManaged(this.list.allocator);
_ = try byte_list.writeUTF16(this.list.allocator(), buffer);
_ = try byte_list.writeUTF16(this.list.allocator, buffer);
}
return this.list.items()[this.cursor..];
return this.list.items[this.cursor..];
} else if (comptime @TypeOf(writeFn) == @TypeOf(&write) and writeFn == &write) {
return buffer;
} else {
@@ -1193,25 +1198,25 @@ pub const StreamBuffer = struct {
}
var byte_list = bun.ByteList.moveFromList(&this.list);
defer this.list = byte_list.moveToListManaged(this.list.allocator());
defer this.list = byte_list.moveToListManaged(this.list.allocator);
_ = try byte_list.writeLatin1(this.list.allocator(), buffer);
_ = try byte_list.writeLatin1(this.list.allocator, buffer);
}
pub fn writeUTF16(this: *StreamBuffer, buffer: []const u16) OOM!void {
var byte_list = bun.ByteList.moveFromList(&this.list);
defer this.list = byte_list.moveToListManaged(this.list.allocator());
defer this.list = byte_list.moveToListManaged(this.list.allocator);
_ = try byte_list.writeUTF16(this.list.allocator(), buffer);
_ = try byte_list.writeUTF16(this.list.allocator, buffer);
}
pub fn slice(this: *const StreamBuffer) []const u8 {
return this.list.items()[this.cursor..];
return this.list.items[this.cursor..];
}
pub fn deinit(this: *StreamBuffer) void {
this.cursor = 0;
if (this.list.capacity() > 0) {
if (this.list.capacity > 0) {
this.list.clearAndFree();
}
}
@@ -1404,7 +1409,7 @@ pub fn WindowsStreamingWriter(comptime Parent: type, function_table: anytype) ty
file.prepare();
this.write_buffer = uv.uv_buf_t.init(bytes);
if (uv.uv_fs_write(uv.Loop.get(), &file.fs, file.file, @ptrCast(&this.write_buffer), 1, -1, onFsWriteComplete).toError(.write)) |err| {
if (uv.uv_fs_write(this.parent.loop(), &file.fs, file.file, @ptrCast(&this.write_buffer), 1, -1, onFsWriteComplete).toError(.write)) |err| {
file.complete(false);
this.last_write_result = .{ .err = err };
onError(this.parent, err);


@@ -909,13 +909,9 @@ function ClientRequest(input, options, cb) {
this[kEmitState] = 0;
this.setSocketKeepAlive = (_enable = true, _initialDelay = 0) => {
$debug(`${NODE_HTTP_WARNING}\n`, "WARN: ClientRequest.setSocketKeepAlive is a no-op");
};
this.setSocketKeepAlive = (_enable = true, _initialDelay = 0) => {};
this.setNoDelay = (_noDelay = true) => {
$debug(`${NODE_HTTP_WARNING}\n`, "WARN: ClientRequest.setNoDelay is a no-op");
};
this.setNoDelay = (_noDelay = true) => {};
this[kClearTimeout] = () => {
const timeoutTimer = this[kTimeoutTimer];


@@ -59,16 +59,16 @@ pub const S3ListObjectsV2Result = struct {
continuation_token: ?[]const u8,
next_continuation_token: ?[]const u8,
start_after: ?[]const u8,
common_prefixes: ?bun.collections.ArrayListDefault([]const u8),
contents: ?bun.collections.ArrayListDefault(S3ListObjectsContents),
common_prefixes: ?std.ArrayList([]const u8),
contents: ?std.ArrayList(S3ListObjectsContents),
pub fn deinit(this: *const @This()) void {
if (this.contents) |contents| {
for (contents.items()) |*item| item.deinit();
for (contents.items) |*item| item.deinit();
contents.deinit();
}
if (this.common_prefixes) |common_prefixes| {
common_prefixes.deinitShallow();
common_prefixes.deinit();
}
}
@@ -115,9 +115,9 @@ pub const S3ListObjectsV2Result = struct {
}
if (this.contents) |contents| {
const jsContents = try JSValue.createEmptyArray(globalObject, contents.items().len);
const jsContents = try JSValue.createEmptyArray(globalObject, contents.items.len);
for (contents.items(), 0..) |item, i| {
for (contents.items, 0..) |item, i| {
const objectInfo = JSValue.createEmptyObject(globalObject, 1);
objectInfo.put(globalObject, jsc.ZigString.static("key"), try bun.String.createUTF8ForJS(globalObject, item.key));
@@ -165,9 +165,9 @@ pub const S3ListObjectsV2Result = struct {
}
if (this.common_prefixes) |common_prefixes| {
const jsCommonPrefixes = try JSValue.createEmptyArray(globalObject, common_prefixes.items().len);
const jsCommonPrefixes = try JSValue.createEmptyArray(globalObject, common_prefixes.items.len);
for (common_prefixes.items(), 0..) |prefix, i| {
for (common_prefixes.items, 0..) |prefix, i| {
const jsPrefix = JSValue.createEmptyObject(globalObject, 1);
jsPrefix.put(globalObject, jsc.ZigString.static("prefix"), try bun.String.createUTF8ForJS(globalObject, prefix));
try jsCommonPrefixes.putIndex(globalObject, @intCast(i), jsPrefix);
@@ -196,8 +196,8 @@ pub fn parseS3ListObjectsResult(xml: []const u8) !S3ListObjectsV2Result {
.start_after = null,
};
var contents = bun.collections.ArrayListDefault(S3ListObjectsContents).init();
var common_prefixes = bun.collections.ArrayListDefault([]const u8).init();
var contents = std.ArrayList(S3ListObjectsContents).init(bun.default_allocator);
var common_prefixes = std.ArrayList([]const u8).init(bun.default_allocator);
// we don't use the trailing ">" as it may end with xmlns=...
if (strings.indexOf(xml, "<ListBucketResult")) |delete_result_pos| {
@@ -482,17 +482,17 @@ pub fn parseS3ListObjectsResult(xml: []const u8) !S3ListObjectsV2Result {
}
}
if (contents.items().len != 0) {
if (contents.items.len != 0) {
result.contents = contents;
} else {
for (contents.items()) |*item| item.deinit();
for (contents.items) |*item| item.deinit();
contents.deinit();
}
if (common_prefixes.items().len != 0) {
if (common_prefixes.items.len != 0) {
result.common_prefixes = common_prefixes;
} else {
common_prefixes.deinitShallow();
common_prefixes.deinit();
}
}


@@ -46,8 +46,12 @@ pub fn eventLoop(this: *IOReader) jsc.EventLoopHandle {
return this.evtloop;
}
pub fn loop(this: *IOReader) *bun.uws.Loop {
return this.evtloop.loop();
pub fn loop(this: *IOReader) *bun.Async.Loop {
if (comptime bun.Environment.isWindows) {
return this.evtloop.loop().uv_loop;
} else {
return this.evtloop.loop();
}
}
pub fn init(fd: bun.FileDescriptor, evtloop: jsc.EventLoopHandle) *IOReader {


@@ -185,6 +185,14 @@ pub fn eventLoop(this: *IOWriter) jsc.EventLoopHandle {
return this.evtloop;
}
pub fn loop(this: *IOWriter) *bun.Async.Loop {
if (comptime bun.Environment.isWindows) {
return this.evtloop.loop().uv_loop;
} else {
return this.evtloop.loop();
}
}
/// Idempotent write call
fn write(this: *IOWriter) enum {
suspended,


@@ -1035,8 +1035,12 @@ pub const PipeReader = struct {
return p.reader.buffer().items[this.written..];
}
pub fn loop(this: *CapturedWriter) *uws.Loop {
return this.parent().event_loop.loop();
pub fn loop(this: *CapturedWriter) *bun.Async.Loop {
if (comptime bun.Environment.isWindows) {
return this.parent().event_loop.loop().uv_loop;
} else {
return this.parent().event_loop.loop();
}
}
pub fn parent(this: *CapturedWriter) *PipeReader {
@@ -1340,8 +1344,12 @@ pub const PipeReader = struct {
return this.event_loop;
}
pub fn loop(this: *PipeReader) *uws.Loop {
return this.event_loop.loop();
pub fn loop(this: *PipeReader) *bun.Async.Loop {
if (comptime bun.Environment.isWindows) {
return this.event_loop.loop().uv_loop;
} else {
return this.event_loop.loop();
}
}
fn deinit(this: *PipeReader) void {
@@ -1402,7 +1410,6 @@ const Output = bun.Output;
const assert = bun.assert;
const default_allocator = bun.default_allocator;
const strings = bun.strings;
const uws = bun.uws;
const jsc = bun.jsc;
const JSGlobalObject = jsc.JSGlobalObject;


@@ -857,10 +857,10 @@ pub const String = extern struct {
pub fn createFormatForJS(globalObject: *jsc.JSGlobalObject, comptime fmt: [:0]const u8, args: anytype) bun.JSError!jsc.JSValue {
jsc.markBinding(@src());
var builder = bun.collections.ArrayListDefault(u8).init();
var builder = std.ArrayList(u8).init(bun.default_allocator);
defer builder.deinit();
bun.handleOom(builder.writer().print(fmt, args));
return bun.cpp.BunString__createUTF8ForJS(globalObject, builder.items().ptr, builder.items().len);
return bun.cpp.BunString__createUTF8ForJS(globalObject, builder.items.ptr, builder.items.len);
}
pub fn parseDate(this: *String, globalObject: *jsc.JSGlobalObject) bun.JSError!f64 {


@@ -274,16 +274,45 @@ pub const JSValkeyClient = struct {
else
bun.String.static("valkey://localhost:6379");
defer url_str.deref();
var fallback_url_buf: [2048]u8 = undefined;
// Parse and validate the URL using URL.zig's fromString which returns null for invalid URLs
const parsed_url = URL.fromString(url_str) orelse {
if (url_str.tag != .StaticZigString) {
const url_utf8 = url_str.toUTF8WithoutRef(this_allocator);
defer url_utf8.deinit();
return globalObject.throwInvalidArguments("Invalid URL format: \"{s}\"", .{url_utf8.slice()});
// TODO(markovejnovic): The following check for :// is a stop-gap. I would expect
// URL.fromString to return null when the protocol is not specified. This is not, in fact,
// the case right now, and I do not understand why. It will take some work in JSC to
// figure out why this is happening, but since I need to uncork valkey, I'm adding this
// workaround for now.
const parsed_url = get_url: {
const url_slice = url_str.toUTF8WithoutRef(this_allocator);
defer url_slice.deinit();
const url_byte_slice = url_slice.slice();
if (url_byte_slice.len == 0) {
return globalObject.throwInvalidArguments("Invalid URL format", .{});
}
// This should never happen since our default URL is valid
return globalObject.throwInvalidArguments("Invalid URL format", .{});
if (bun.strings.contains(url_byte_slice, "://")) {
break :get_url URL.fromString(url_str) orelse {
return globalObject.throwInvalidArguments("Invalid URL format", .{});
};
}
const corrected_url = get_url_slice: {
const written = std.fmt.bufPrintZ(
&fallback_url_buf,
"valkey://{s}",
.{url_byte_slice},
) catch {
return globalObject.throwInvalidArguments("URL is too long.", .{});
};
break :get_url_slice fallback_url_buf[0..written.len];
};
break :get_url URL.fromUTF8(corrected_url) orelse {
return globalObject.throwInvalidArguments("Invalid URL format", .{});
};
};
defer parsed_url.deinit();
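For context on the user-visible effect: a bare host:port with no protocol should now parse by falling back to the valkey:// scheme instead of throwing. A hedged sketch, assuming the RedisClient constructor accepts the same URL strings that the REDIS_URL/VALKEY_URL env vars do (as exercised by the test added at the end of this diff):

```ts
import { RedisClient } from "bun";

// Previously rejected as "Invalid URL format"; now treated as valkey://localhost:6379.
const client = new RedisClient("localhost:6379");
client.close();
```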


@@ -27,14 +27,47 @@ Use `bun:test` with files that end in `*.test.{ts,js,jsx,tsx,mjs,cjs}`. If it's
When spawning Bun processes, use `bunExe` and `bunEnv` from `harness`. This ensures the same build of Bun is used to run the test and ensures debug logging is silenced.
##### Use `-e` for single-file tests
```ts
import { bunEnv, bunExe, tempDir } from "harness";
import { test, expect } from "bun:test";
test("spawns a Bun process", async () => {
test("single-file test spawns a Bun process", async () => {
await using proc = Bun.spawn({
cmd: [bunExe(), "-e", "console.log('Hello, world!')"],
env: bunEnv,
});
const [stdout, stderr, exitCode] = await Promise.all([
proc.stdout.text(),
proc.stderr.text(),
proc.exited,
]);
expect(stderr).toBe("");
expect(stdout).toBe("Hello, world!\n");
expect(exitCode).toBe(0);
});
```
##### When multi-file tests are required:
```ts
import { bunEnv, bunExe, tempDir } from "harness";
import { test, expect } from "bun:test";
test("multi-file test spawns a Bun process", async () => {
// If a test MUST use multiple files:
using dir = tempDir("my-test-prefix", {
"my.fixture.ts": `
console.log("Hello, world!");
import { foo } from "./foo.ts";
foo();
`,
"foo.ts": `
export function foo() {
console.log("Hello, world!");
}
`,
});

test/_util/collection.ts

@@ -0,0 +1,11 @@
/**
* Computes the Cartesian product of two arrays.
*
* @param left The first input array.
* @param right The second input array.
* @returns An array of [left, right] pairs covering every combination of elements.
*/
export function cartesianProduct<T, U>(left: T[], right: U[]): [T, U][] {
return left.flatMap(leftItem =>
right.map(rightItem => [leftItem, rightItem] as [T, U])
);
}
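Usage sketch for this helper (values illustrative):

```ts
import { cartesianProduct } from "_util/collection";

// [[1, "a"], [1, "b"], [2, "a"], [2, "b"]]
console.log(cartesianProduct([1, 2], ["a", "b"]));
```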


@@ -28,7 +28,7 @@ describe("console depth", () => {
function normalizeOutput(output: string): string {
// Normalize line endings and trim whitespace
return output.replace(/\r\n/g, "\n").replace(/\r/g, "\n").trim();
return output.replace(/\r\n?/g, "\n").trim();
}
test("default console depth should be 2", async () => {


@@ -16,7 +16,7 @@ test("custom registry doesn't have multiple trailing slashes in pathname", async
port: 0,
async fetch(req) {
urls.push(req.url);
return new Response("ok");
return Response.json({ broken: true, message: "This is a test response" });
},
});
const { port, hostname } = server;
@@ -39,7 +39,7 @@ registry = "http://${hostname}:${port}/prefixed-route/"
}),
);
Bun.spawnSync({
await using proc = Bun.spawn({
cmd: [bunExe(), "install", "--force"],
env: bunEnv,
cwd: package_dir,
@@ -48,6 +48,9 @@ registry = "http://${hostname}:${port}/prefixed-route/"
stdin: "ignore",
});
// The install should fail, but we're just testing the request goes to the right route.
expect(await proc.exited).toBe(1);
expect(urls.length).toBe(1);
expect(urls).toEqual([`http://${hostname}:${port}/prefixed-route/react`]);
});


@@ -1,6 +1,6 @@
import { beforeAll, expect, setDefaultTimeout, test } from "bun:test";
import fs from "fs";
import { bunEnv, bunExe, tmpdirSync } from "harness";
import { bunEnv, bunExe, tempDirWithFiles, tmpdirSync } from "harness";
import { join } from "path";
beforeAll(() => {
@@ -131,3 +131,38 @@ test("npm lockfile with relative workspaces", async () => {
expect(exitCode).toBe(0);
});
const lockfiles = ["package-lock.json", "yarn.lock", "pnpm-lock.yaml"];
for (const lockfile of lockfiles) {
test(`should create bun.lock if ${lockfile} migration fails`, async () => {
const testDir = tempDirWithFiles("migration-failure", {
"package.json": JSON.stringify({
name: "pkg",
dependencies: {
"dep-1": "file:dep-1",
},
}),
[lockfile]: "{}",
"dep-1/package.json": JSON.stringify({
name: "dep-1",
}),
});
const { exited } = Bun.spawn({
cmd: [bunExe(), "install"],
cwd: testDir,
stderr: "ignore",
stdout: "ignore",
});
expect(await exited).toBe(0);
expect(
await Promise.all([
fs.promises.exists(join(testDir, "bun.lock")),
fs.promises.exists(join(testDir, "bun.lockb")),
]),
).toEqual([true, false]);
});
}


@@ -1,9 +1,9 @@
import { spawnSync, write } from "bun";
import { write } from "bun";
import { describe, expect, test } from "bun:test";
import { bunEnv, bunExe, tmpdirSync } from "harness";
import { join } from "path";
describe("redact", async () => {
describe.concurrent("redact", async () => {
const tests = [
{
title: "url password",
@@ -71,7 +71,7 @@ describe("redact", async () => {
]);
// once without color
let proc = spawnSync({
await using proc1 = Bun.spawn({
cmd: [bunExe(), "install"],
cwd: testDir,
env: { ...bunEnv, NO_COLOR: "1" },
@@ -79,13 +79,13 @@ describe("redact", async () => {
stderr: "pipe",
});
let out = proc.stdout.toString();
let err = proc.stderr.toString();
expect(proc.exitCode).toBe(+!!bunfig);
expect(err).toContain(expected || "*");
const [out1, err1, exitCode1] = await Promise.all([proc1.stdout.text(), proc1.stderr.text(), proc1.exited]);
expect(exitCode1).toBe(+!!bunfig);
expect(err1).toContain(expected || "*");
// once with color
proc = spawnSync({
await using proc2 = Bun.spawn({
cmd: [bunExe(), "install"],
cwd: testDir,
env: { ...bunEnv, NO_COLOR: undefined, FORCE_COLOR: "1" },
@@ -93,10 +93,10 @@ describe("redact", async () => {
stderr: "pipe",
});
out = proc.stdout.toString();
err = proc.stderr.toString();
expect(proc.exitCode).toBe(+!!bunfig);
expect(err).toContain(expected || "*");
const [out2, err2, exitCode2] = await Promise.all([proc2.stdout.text(), proc2.stderr.text(), proc2.exited]);
expect(exitCode2).toBe(+!!bunfig);
expect(err2).toContain(expected || "*");
});
}
});


@@ -1,4 +1,4 @@
import { expect, test } from "bun:test";
import { test } from "bun:test";
import { bunEnv, bunExe } from "harness";
test("test timeout kills dangling processes", async () => {


@@ -5,7 +5,7 @@ import path from "path";
if (isFlaky && isLinux) {
test.todo("processes get killed");
} else {
test.each([true, false])(`processes get killed (sync: %p)`, async sync => {
test.concurrent.each([true, false])(`processes get killed (sync: %p)`, async sync => {
const { exited, stdout, stderr } = Bun.spawn({
cmd: [
bunExe(),


@@ -10,7 +10,7 @@
".stdDir()": 42,
".stdFile()": 16,
"// autofix": 164,
": [^=]+= undefined,$": 255,
": [^=]+= undefined,$": 256,
"== alloc.ptr": 0,
"== allocator.ptr": 0,
"@import(\"bun\").": 0,


@@ -67,26 +67,3 @@ test("spawnSync AbortSignal works as timeout", async () => {
const end = performance.now();
expect(end - start).toBeLessThan(100);
});
// TODO: this test should fail.
// It passes because we are ticking the event loop incorrectly in spawnSync.
// it should be ticking a different event loop.
test("spawnSync AbortSignal...executes javascript?", async () => {
const start = performance.now();
var signal = AbortSignal.timeout(10);
signal.addEventListener("abort", () => {
console.log("abort", performance.now());
});
const subprocess = Bun.spawnSync({
cmd: [bunExe(), "--eval", "await Bun.sleep(100000)"],
env: bunEnv,
stdout: "inherit",
stderr: "inherit",
stdin: "inherit",
signal,
});
console.log("after", performance.now());
expect(subprocess.success).toBeFalse();
const end = performance.now();
expect(end - start).toBeLessThan(100);
});


@@ -0,0 +1,161 @@
import { describe, expect, test } from "bun:test";
import { bunEnv, bunExe } from "harness";
describe.concurrent("spawnSync isolated event loop", () => {
test("JavaScript timers should not fire during spawnSync", async () => {
await using proc = Bun.spawn({
cmd: [
bunExe(),
"-e",
`
let timerFired = false;
// Set a timer that should NOT fire during spawnSync
const interval = setInterval(() => {
timerFired = true;
console.log("TIMER_FIRED");
process.exit(1);
}, 1);
// Run a subprocess synchronously
const result = Bun.spawnSync({
cmd: ["${bunExe()}", "-e", "Bun.sleepSync(16)"],
env: process.env,
});
clearInterval(interval);
console.log("SUCCESS: Timer did not fire during spawnSync");
process.exit(0);
`,
],
env: bunEnv,
stderr: "pipe",
stdout: "pipe",
});
const [stdout, exitCode] = await Promise.all([proc.stdout.text(), proc.exited]);
expect(stdout).toContain("SUCCESS");
expect(stdout).not.toContain("TIMER_FIRED");
expect(stdout).not.toContain("FAIL");
expect(exitCode).toBe(0);
});
test("microtasks should not drain during spawnSync", async () => {
await using proc = Bun.spawn({
cmd: [
bunExe(),
"-e",
`
queueMicrotask(() => {
console.log("MICROTASK_FIRED");
process.exit(1);
});
// Run a subprocess synchronously
const result = Bun.spawnSync({
cmd: ["${bunExe()}", "-e", "42"],
env: process.env,
});
console.log("SUCCESS: Timer did not fire during spawnSync");
process.exit(0);
`,
],
env: bunEnv,
stderr: "pipe",
stdout: "pipe",
});
const [stdout, exitCode] = await Promise.all([proc.stdout.text(), proc.exited]);
expect(stdout).toContain("SUCCESS");
expect(stdout).not.toContain("MICROTASK_FIRED");
expect(stdout).not.toContain("FAIL");
expect(exitCode).toBe(0);
});
test("stdin/stdout from main process should not be affected by spawnSync", async () => {
await using proc = Bun.spawn({
cmd: [
bunExe(),
"-e",
`
// Write to stdout before spawnSync
console.log("BEFORE");
// Run a subprocess synchronously
const result = Bun.spawnSync({
cmd: ["echo", "SUBPROCESS"],
env: process.env,
});
// Write to stdout after spawnSync
console.log("AFTER");
// Verify subprocess output
const subprocessOut = new TextDecoder().decode(result.stdout);
if (!subprocessOut.includes("SUBPROCESS")) {
console.log("FAIL: Subprocess output missing");
process.exit(1);
}
console.log("SUCCESS");
process.exit(0);
`,
],
env: bunEnv,
stderr: "pipe",
stdout: "pipe",
});
const [stdout, exitCode] = await Promise.all([proc.stdout.text(), proc.exited]);
expect(stdout).toContain("BEFORE");
expect(stdout).toContain("AFTER");
expect(stdout).toContain("SUCCESS");
expect(exitCode).toBe(0);
});
test("multiple spawnSync calls should each use isolated event loop", async () => {
await using proc = Bun.spawn({
cmd: [
bunExe(),
"-e",
`
let timerCount = 0;
// Set timers that should NOT fire during spawnSync
setTimeout(() => { timerCount++; }, 10);
setTimeout(() => { timerCount++; }, 20);
setTimeout(() => { timerCount++; }, 30);
// Run multiple subprocesses synchronously
for (let i = 0; i < 3; i++) {
const result = Bun.spawnSync({
cmd: ["${bunExe()}", "-e", "Bun.sleepSync(50)"],
});
if (timerCount > 0) {
console.log(\`FAIL: Timer fired during spawnSync iteration \${i}\`);
process.exit(1);
}
}
console.log("SUCCESS: No timers fired during any spawnSync call");
process.exit();
`,
],
env: bunEnv,
stderr: "pipe",
stdout: "pipe",
});
const [stdout, exitCode] = await Promise.all([proc.stdout.text(), proc.exited]);
expect(stdout).toContain("SUCCESS");
expect(stdout).not.toContain("FAIL");
expect(exitCode).toBe(0);
});
});


@@ -7,7 +7,6 @@
*/
import { bunEnv, bunExe, exampleSite, randomPort } from "harness";
import { createTest } from "node-harness";
import { spawnSync } from "node:child_process";
import { EventEmitter, once } from "node:events";
import nodefs, { unlinkSync } from "node:fs";
import http, {
@@ -832,7 +831,12 @@ describe("node:http", () => {
it("should correctly stream a multi-chunk response #5320", async done => {
runTest(done, (server, serverPort, done) => {
const req = request({ host: "localhost", port: `${serverPort}`, path: "/multi-chunk-response", method: "GET" });
const req = request({
host: "localhost",
port: `${serverPort}`,
path: "/multi-chunk-response",
method: "GET",
});
req.on("error", err => done(err));
@@ -1046,9 +1050,10 @@ describe("node:http", () => {
});
});
test("test unix socket server", done => {
test("test unix socket server", async () => {
const { promise, resolve, reject } = Promise.withResolvers();
const socketPath = `${tmpdir()}/bun-server-${Math.random().toString(32)}.sock`;
const server = createServer((req, res) => {
await using server = createServer((req, res) => {
expect(req.method).toStrictEqual("GET");
expect(req.url).toStrictEqual("/bun?a=1");
res.writeHead(200, {
@@ -1059,18 +1064,20 @@ describe("node:http", () => {
res.end();
});
server.listen(socketPath, () => {
// TODO: unix socket is not implemented in fetch.
const output = spawnSync("curl", ["--unix-socket", socketPath, "http://localhost/bun?a=1"]);
server.listen(socketPath, async () => {
try {
expect(output.stdout.toString()).toStrictEqual("Bun\n");
done();
const response = await fetch(`http://localhost/bun?a=1`, {
unix: socketPath,
});
const text = await response.text();
expect(text).toBe("Bun\n");
resolve();
} catch (err) {
done(err);
} finally {
server.close();
reject(err);
}
});
await promise;
});
test("should not decompress gzip, issue#4397", async () => {
@@ -1284,26 +1291,26 @@ describe("server.address should be valid IP", () => {
});
it("should propagate exception in sync data handler", async () => {
const { exitCode, stdout } = Bun.spawnSync({
await using proc = Bun.spawn({
cmd: [bunExe(), "run", path.join(import.meta.dir, "node-http-error-in-data-handler-fixture.1.js")],
stdout: "pipe",
stderr: "inherit",
env: bunEnv,
});
expect(stdout.toString()).toContain("Test passed");
const [stdout, exitCode] = await Promise.all([proc.stdout.text(), proc.exited]);
expect(stdout).toContain("Test passed");
expect(exitCode).toBe(0);
});
it("should propagate exception in async data handler", async () => {
const { exitCode, stdout } = Bun.spawnSync({
await using proc = Bun.spawn({
cmd: [bunExe(), "run", path.join(import.meta.dir, "node-http-error-in-data-handler-fixture.2.js")],
stdout: "pipe",
stderr: "inherit",
env: bunEnv,
});
expect(stdout.toString()).toContain("Test passed");
const [stdout, exitCode] = await Promise.all([proc.stdout.text(), proc.exited]);
expect(stdout).toContain("Test passed");
expect(exitCode).toBe(0);
});


@@ -469,7 +469,12 @@ describe("browserify path tests", () => {
const failures = [];
const cwd = process.cwd();
const cwdParent = path.dirname(cwd);
const parentIsRoot = isWindows ? cwdParent.match(/^[A-Z]:\\$/) : cwdParent === "/";
const parentIsRoot = (levels = 1) => {
const dir = Array(levels)
.fill()
.reduce(wd => path.dirname(wd), cwd);
return isWindows ? dir.match(/^[a-zA-Z]:\\$/) : dir === "/";
};
const relativeTests = [
[
@@ -529,19 +534,19 @@ describe("browserify path tests", () => {
["/webp4ck-hot-middleware", "/webpack/buildin/module.js", "../webpack/buildin/module.js"],
["/webpack-hot-middleware", "/webp4ck/buildin/module.js", "../webp4ck/buildin/module.js"],
["/var/webpack-hot-middleware", "/var/webpack/buildin/module.js", "../webpack/buildin/module.js"],
["/app/node_modules/pkg", "../static", `../../..${parentIsRoot ? "" : path.posix.resolve("../")}/static`],
["/app/node_modules/pkg", "../static", `../../..${parentIsRoot() ? "" : path.posix.resolve("../")}/static`],
[
"/app/node_modules/pkg",
"../../static",
`../../..${parentIsRoot ? "" : path.posix.resolve("../../")}/static`,
`../../..${parentIsRoot(2) ? "" : path.posix.resolve("../../")}/static`,
],
["/app", "../static", `..${parentIsRoot ? "" : path.posix.resolve("../")}/static`],
["/app", "../static", `..${parentIsRoot() ? "" : path.posix.resolve("../")}/static`],
["/app", "../".repeat(64) + "static", "../static"],
[".", "../static", cwd == "/" ? "static" : "../static"],
["/", "../static", parentIsRoot ? "static" : `${path.posix.resolve("../")}/static`.slice(1)],
["/", "../static", parentIsRoot() ? "static" : `${path.posix.resolve("../")}/static`.slice(1)],
["../", "../", ""],
["../", "../../", parentIsRoot ? "" : ".."],
["../../", "../", parentIsRoot ? "" : path.basename(cwdParent)],
["../", "../../", parentIsRoot() ? "" : ".."],
["../../", "../", parentIsRoot() ? "" : path.basename(cwdParent)],
["../../", "../../", ""],
],
],


@@ -39,6 +39,7 @@ if (common.isWindows) {
switch (process.argv[2]) {
case 'child':
console.log('child started');
setTimeout(() => {
debug('child fired');
process.exit(1);


@@ -4,21 +4,21 @@ import { bunEnv, nodeExe } from "harness";
import { join } from "path";
const fixtureDir = join(import.meta.dirname, "fixtures");
function postNodeFormData(port) {
const result = Bun.spawnSync({
async function postNodeFormData(port) {
const result = Bun.spawn({
cmd: [nodeExe(), join(fixtureDir, "node-form-data.fetch.fixture.js"), port?.toString()],
env: bunEnv,
stdio: ["inherit", "inherit", "inherit"],
});
expect(result.exitCode).toBe(0);
expect(await result.exited).toBe(0);
}
function postNodeAction(port) {
const result = Bun.spawnSync({
async function postNodeAction(port) {
const result = Bun.spawn({
cmd: [nodeExe(), join(fixtureDir, "node-action.fetch.fixture.js"), port?.toString()],
env: bunEnv,
stdio: ["inherit", "inherit", "inherit"],
});
expect(result.exitCode).toBe(0);
expect(await result.exited).toBe(0);
}
describe("astro", async () => {
@@ -66,7 +66,7 @@ describe("astro", async () => {
});
test("is able todo a POST request to an astro action using node", async () => {
postNodeAction(previewServer.port);
await postNodeAction(previewServer.port);
});
test("is able to post form data to an astro using bun", async () => {
@@ -89,6 +89,6 @@ describe("astro", async () => {
});
});
test("is able to post form data to an astro using node", async () => {
postNodeFormData(previewServer.port);
await postNodeFormData(previewServer.port);
});
});


@@ -87,15 +87,17 @@ function getBody() {
return body;
}
async function iterate() {
const promises = [];
for (let j = 0; j < batch; j++) {
promises.push(fetch(server, { method: "POST", body: getBody() }));
}
await Promise.all(promises);
}
try {
for (let i = 0; i < iterations; i++) {
{
const promises = [];
for (let j = 0; j < batch; j++) {
promises.push(fetch(server, { method: "POST", body: getBody() }));
}
await Promise.all(promises);
}
await iterate();
{
Bun.gc(true);


@@ -1,5 +1,5 @@
import { describe, expect, test } from "bun:test";
import { bunEnv, bunExe, bunRun, tls as COMMON_CERT, gc, isCI } from "harness";
import { bunEnv, bunExe, tls as COMMON_CERT, gc, isCI } from "harness";
import { once } from "node:events";
import { createServer } from "node:http";
import { join } from "node:path";
@@ -17,7 +17,7 @@ describe("fetch doesn't leak", () => {
},
});
const proc = Bun.spawn({
await using proc = Bun.spawn({
env: {
...bunEnv,
SERVER: server.url.href,
@@ -76,7 +76,7 @@ describe("fetch doesn't leak", () => {
env.COUNT = "1000";
}
const proc = Bun.spawn({
await using proc = Bun.spawn({
env,
stderr: "inherit",
stdout: "inherit",
@@ -114,7 +114,7 @@ describe.each(["FormData", "Blob", "Buffer", "String", "URLSearchParams", "strea
const rss = [];
const process = Bun.spawn({
await using process = Bun.spawn({
cmd: [
bunExe(),
"--smol",
@@ -189,16 +189,19 @@ test("should not leak using readable stream", async () => {
const buffer = Buffer.alloc(1024 * 128, "b");
using server = Bun.serve({
port: 0,
fetch: req => {
return new Response(buffer);
},
routes: { "/*": new Response(buffer) },
});
const { stdout, stderr } = bunRun(join(import.meta.dir, "fetch-leak-test-fixture-6.js"), {
...bunEnv,
SERVER_URL: server.url.href,
MAX_MEMORY_INCREASE: "5", // in MB
await using proc = Bun.spawn([bunExe(), join(import.meta.dir, "fetch-leak-test-fixture-6.js")], {
env: {
...bunEnv,
SERVER_URL: server.url.href,
MAX_MEMORY_INCREASE: "5", // in MB
},
stdout: "pipe",
stderr: "pipe",
});
expect(stderr).toBe("");
expect(stdout).toContain("done");
const [exited, stdout, stderr] = await Promise.all([proc.exited, proc.stdout.text(), proc.stderr.text()]);
expect(stdout + stderr).toContain("done");
expect(exited).toBe(0);
});


@@ -0,0 +1,98 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe } from "harness";
test("structuredClone() should not lose Error stack trace", async () => {
await using proc = Bun.spawn({
cmd: [
bunExe(),
"-e",
`
function okay() {
const error = new Error("OKAY");
console.error(error);
}
function broken() {
const error = new Error("BROKEN");
structuredClone(error);
console.error(error);
}
function main() {
okay();
broken();
}
main();
`,
],
env: bunEnv,
stderr: "pipe",
});
const [stderr, exitCode] = await Promise.all([proc.stderr.text(), proc.exited]);
// Both errors should have full stack traces
// The "okay" error should have the full stack
expect(stderr).toContain("at okay");
expect(stderr).toContain("at main");
// The "broken" error should ALSO have the full stack after structuredClone
const lines = stderr.split("\n");
const brokenErrorIndex = lines.findIndex(line => line.includes("BROKEN"));
expect(brokenErrorIndex).toBeGreaterThan(-1);
// Find the stack trace lines after BROKEN
const stackLinesAfterBroken = lines.slice(brokenErrorIndex);
const stackTraceStr = stackLinesAfterBroken.join("\n");
// Should have "at broken" in the stack
expect(stackTraceStr).toContain("at broken");
// Should also have "at main" in the stack (not just the first line)
expect(stackTraceStr).toContain("at main");
// CRITICAL: Should also have the top-level frame (the one that calls main())
// This is the frame that was being lost after structuredClone
// It appears as "at /path/to/file:line" without a function name
// Count the number of "at " occurrences in the BROKEN error stack trace
const brokenStackMatches = stackTraceStr.match(/\s+at\s+/g);
const okayErrorIndex = lines.findIndex(line => line.includes("OKAY"));
const okayStackLines = lines.slice(okayErrorIndex);
const okayStackTraceStr = okayStackLines.slice(0, brokenErrorIndex - okayErrorIndex).join("\n");
const okayStackMatches = okayStackTraceStr.match(/\s+at\s+/g);
// Both errors should have the same number of stack frames (or at least 3)
// Before the fix, BROKEN would only show 2 frames instead of 3+
expect(brokenStackMatches?.length).toBeGreaterThanOrEqual(3);
expect(okayStackMatches?.length).toBeGreaterThanOrEqual(3);
expect(exitCode).toBe(0);
});
test("error.stack should remain intact after structuredClone", async () => {
await using proc = Bun.spawn({
cmd: [
bunExe(),
"-e",
`
function broken() {
const error = new Error("BROKEN");
structuredClone(error);
console.log(error.stack);
}
broken();
`,
],
env: bunEnv,
stdout: "pipe",
});
const [stdout, exitCode] = await Promise.all([proc.stdout.text(), proc.exited]);
// The stack should contain both "at broken" and be properly formatted
expect(stdout).toContain("Error: BROKEN");
expect(stdout).toContain("at broken");
expect(exitCode).toBe(0);
});


@@ -0,0 +1,37 @@
// https://github.com/oven-sh/bun/issues/24385
// Test for Redis client import regression
import { cartesianProduct } from "_util/collection";
import { expect, test } from "bun:test";
import { bunEnv, bunExe } from "harness";
test.concurrent.each(
cartesianProduct(
["REDIS_URL", "VALKEY_URL"],
[
"localhost:6379",
"redis+tls+unix:///tmp/redis.sock",
"redis+tls://localhost:6379",
"redis+unix:///tmp/redis.sock",
"redis://localhost:6379",
"rediss://localhost:6379",
"valkey://localhost:6379",
],
).map(([k, v]) => ({ key: k, value: v })),
)("Redis loads with $key=$value", async ({ key, value }) => {
const env = { ...bunEnv, [key]: value };
await using proc = Bun.spawn({
// We need to call redis.duplicate() since Bun lazily imports redis.
cmd: [bunExe(), "-e", 'import { redis } from "bun"; const d = redis.duplicate(); console.log("success");'],
env,
stderr: "pipe",
stdout: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(stderr).not.toContain("Expected url protocol to be one of");
expect(stdout).toContain("success");
expect(exitCode).toBe(0);
});