Compare commits

...

411 Commits

Author SHA1 Message Date
Claude Bot
5d2030cfc5 fix(wasm): disable Wasm OSR on Linux x64 to prevent crashes
Disables useWasmOSR on Linux x64 to work around a JavaScriptCore bug
that causes crashes (SIGILL/segfault) when calling Emscripten-exported
Wasm functions via direct method calls after many iterations.

The crash occurs specifically with direct method call patterns like
`module._func(arg)` vs working alternatives like `const fn = module._func; fn(arg)`.

This workaround trades some Wasm performance (no on-stack replacement
from interpreter to JIT) for stability. Wasm code still gets JIT-compiled
after enough executions via the normal tiering mechanism.

Fixes #26366
Fixes #17841

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 18:54:23 +00:00
robobun
2a9980076d feat(windows): Add Windows ARM64 support (#26215) 2026-01-22 04:22:45 -08:00
Dylan Conway
1da41b7f91 update WebKit (#26324)
### What does this PR do?
Updates WebKit to
87c6cde57d
### How did you verify your code works?

---------

Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-01-21 20:17:46 -08:00
robobun
136d345752 fix(install): show dependency name when file: path resolution fails (#26340)
## Summary
- When `bun install` encounters a stale lockfile with a `file:`
dependency path that differs from the package.json, it now shows which
dependency caused the issue instead of the misleading "Bun could not
find a package.json file to install from" error.

## Test plan
- Added regression test in `test/regression/issue/26337.test.ts`
- Verified test fails with system bun (`USE_SYSTEM_BUN=1 bun test
test/regression/issue/26337.test.ts`)
- Verified test passes with debug build (`bun bd test
test/regression/issue/26337.test.ts`)

Fixes #26337

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 18:41:15 -08:00
Alistair Smith
93d5cc6e56 fix: rename compile "Target" ➜ "CompileTarget" 2026-01-21 13:58:32 -08:00
Jarred Sumner
37c41137f8 feat(transpiler): add replMode option for REPL transforms (#26246)
## Summary

Add a new `replMode` option to `Bun.Transpiler` that transforms code for
interactive REPL evaluation. This enables building a Node.js-compatible
REPL using `Bun.Transpiler` with `vm.runInContext` for persistent
variable scope.

## Features

- **Expression result capture**: Wraps the last expression in `{
__proto__: null, value: expr }` for result capture
- **IIFE wrappers**: Uses sync/async IIFE wrappers to avoid extra
parentheses around object literals in output
- **Variable hoisting**: Hoists `var`/`let`/`const` declarations outside
the IIFE for persistence across REPL lines
- **const → let conversion**: Converts `const` to `let` for REPL
mutability (allows re-declaration)
- **Function hoisting**: Hoists function declarations with
`this.funcName = funcName` assignment for vm context persistence
- **Class hoisting**: Hoists class declarations with `var` for vm
context persistence
- **Object literal detection**: Auto-detects object literals (code
starting with `{` without trailing `;`) and wraps them in parentheses

## Usage

```typescript
import vm from "node:vm";

const transpiler = new Bun.Transpiler({
  loader: "tsx",
  replMode: true,
});

const context = vm.createContext({ console, Promise });

async function repl(code: string) {
  const transformed = transpiler.transformSync(code);
  const result = await vm.runInContext(transformed, context);
  return result.value;
}

// Example REPL session
await repl("var x = 10");        // 10
await repl("x + 5");             // 15
await repl("class Counter {}"); // [class Counter]
await repl("new Counter()");    // Counter {}
await repl("{a: 1, b: 2}");     // {a: 1, b: 2} (auto-detected object literal)
await repl("await Promise.resolve(42)"); // 42
```

## Transform Examples

| Input | Output |
|-------|--------|
| `42` | `(() => { return { __proto__: null, value: 42 }; })()` |
| `var x = 10` | `var x; (() => { return { __proto__: null, value: x = 10 }; })()` |
| `await fetch()` | `(async () => { return { __proto__: null, value: await fetch() }; })()` |
| `{a: 1}` | `(() => { return { __proto__: null, value: ({a: 1}) }; })()` |
| `class Foo {}` | `var Foo; (() => { return { __proto__: null, value: Foo = class Foo {} }; })()` |

## Files Changed

- `src/ast/repl_transforms.zig`: New module containing REPL transform
logic
- `src/ast/P.zig`: Calls REPL transforms after parsing in REPL mode
- `src/bun.js/api/JSTranspiler.zig`: Adds `replMode` config option and
object literal detection
- `src/options.zig`, `src/runtime.zig`, `src/transpiler.zig`: Propagate
`repl_mode` flag
- `packages/bun-types/bun.d.ts`: TypeScript type definitions
- `test/js/bun/transpiler/repl-transform.test.ts`: Test cases

## Testing

```bash
bun bd test test/js/bun/transpiler/repl-transform.test.ts
```

34 tests covering:
- Basic transform output
- REPL session with node:vm
- Variable persistence across lines
- Object literal detection
- Edge cases (empty input, comments, TypeScript, etc.)
- No-transform cases (await inside async functions)

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-21 13:39:25 -08:00
Dylan Conway
dc203e853a fix patch-bounds-check.test.ts (#26344)
### What does this PR do?

### How did you verify your code works?
2026-01-21 13:22:21 -08:00
robobun
2febdb5b49 feat(cli): add --cpu-prof-md flag for markdown CPU profile output (#26327)
## Summary
- Adds `--cpu-prof-md` flag that outputs CPU profiling data in markdown
format optimized for GitHub rendering and LLM analysis
- Complements the existing `--cpu-prof` flag which outputs Chrome
DevTools JSON format
- `--cpu-prof-md` works standalone or combined with `--cpu-prof` to
generate both formats

## Usage
```bash
# Markdown only
bun --cpu-prof-md script.js

# Both formats
bun --cpu-prof --cpu-prof-md script.js
```

## Example Output

# CPU Profile

| Duration | Samples | Interval | Functions |
|----------|---------|----------|----------|
| 255.7ms | 178 | 1ms | 32 |

**Top 10:** \`fibonacci\` 23.6%, \`fibonacci\` 12.6%, \`parseModule\`
11.7%, \`(anonymous)\` 9.5%, \`loadAndEvaluateModule\` 5.5%,
\`requestSatisfyUtil\` 3.7%, \`main\` 2.7%,
\`moduleDeclarationInstantiation\` 2.6%, \`loadModule\` 2.5%,
\`cacheSatisfyAndReturn\` 2.5%

## Hot Functions (Self Time)

| Self% | Self | Total% | Total | Function | Location |
|------:|-----:|-------:|------:|----------|----------|
| 23.6% | 60.5ms | 23.6% | 60.5ms | \`fibonacci\` | /tmp/test-profile.js |
| 12.6% | 32.3ms | 100.0% | 1.29s | \`fibonacci\` | /tmp/test-profile.js:3 |
| 11.7% | 29.9ms | 11.7% | 29.9ms | \`parseModule\` | [native code] |
| 9.5% | 24.3ms | 43.4% | 111.0ms | \`(anonymous)\` | [native code] |
| 5.5% | 14.2ms | 99.9% | 255.5ms | \`loadAndEvaluateModule\` | [native code] |

## Call Tree (Total Time)

| Total% | Total | Self% | Self | Function | Location |
|-------:|------:|------:|-----:|----------|----------|
| 100.0% | 1.29s | 12.6% | 32.3ms | \`fibonacci\` | /tmp/test-profile.js:3 |
| 99.9% | 255.5ms | 5.5% | 14.2ms | \`loadAndEvaluateModule\` | [native code] |
| 86.0% | 219.9ms | 1.3% | 3.3ms | \`moduleEvaluation\` | [native code] |
| 43.4% | 111.0ms | 9.5% | 24.3ms | \`(anonymous)\` | [native code] |

## Function Details

### \`fibonacci\`

- **Location:** \`/tmp/test-profile.js:3\`
- **Self:** 12.6% (32.3ms) | **Total:** 100.0% (1.29s)
- **Called by:** \`fibonacci\` (864), \`main\` (68)
- **Calls:** \`fibonacci\` (864), \`fibonacci\` (44), \`fibonacci\` (2)

### \`main\`

- **Location:** \`/tmp/test-profile.js:9\`
- **Self:** 0.0% (0us) | **Total:** 38.4% (98.2ms)
- **Called by:** \`(module)\` (72)
- **Calls:** \`fibonacci\` (68), \`inspect\` (2), \`fibonacci\` (2)

## Files

| Self% | Self | File |
|------:|-----:|------|
| 58.8% | 150.6ms | \`[native code]\` |
| 40.1% | 102.6ms | \`/tmp/test-profile.js\` |
| 0.9% | 2.4ms | \`bun:main\` |

## Test plan
- [x] `--cpu-prof-md` generates `.md` file with markdown tables
- [x] `--cpu-prof-md` works standalone without `--cpu-prof`
- [x] Both flags together generate both `.cpuprofile` and `.md` files
- [x] Custom filename with `--cpu-prof-name` works
- [x] Custom directory with `--cpu-prof-dir` works
- [x] All 9 tests pass

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-01-21 13:21:01 -08:00
Jarred Sumner
b6b3626c14 fix(bindings): handle errors from String.toJS() for oversized strings (#26213)
## Summary

- When a string exceeds `WTF::String::MaxLength` (~4GB),
`bun.String.createUninitialized()` returns a `.Dead` tag
- The C++ layer now properly throws `ERR_STRING_TOO_LONG` when this
happens
- Updated `String.toJS()` in Zig to return `bun.JSError!jsc.JSValue`
instead of just `jsc.JSValue`
- Updated ~40 Zig caller files to handle the error with `try`
- C++ callers updated with `RETURN_IF_EXCEPTION` checks

## Test plan

- [x] `bun bd test test/js/node/buffer.test.js` - 449 tests pass
- [x] `bun bd
test/js/node/test/parallel/test-buffer-tostring-rangeerror.js` - passes

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
2026-01-21 13:01:25 -08:00
Dylan Conway
7f70b01259 feat: support CPU profiling via environment variables (#26313)
Adds environment variable support for enabling CPU profiling without CLI
flags:

- `BUN_CPU_PROFILE=1` — enables the CPU profiler
- `BUN_CPU_PROFILE_DIR=<path>` — sets the output directory for the
profile
- `BUN_CPU_PROFILE_NAME=<name>` — sets the profile file name

These are used as fallbacks when the corresponding `--cpu-prof` CLI
options are not provided. This is useful for profiling in contexts where
modifying the command line isn't practical (e.g. scripts invoked by
other tools).
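
As an illustration (not part of the PR itself), the same effect can be applied to a child process by setting the variables in its environment, for example via `Bun.spawn`; the script name and output paths below are placeholders:

```typescript
// Hypothetical example: profile a child script without touching its CLI flags.
const proc = Bun.spawn(["bun", "script.js"], {
  env: {
    ...process.env,
    BUN_CPU_PROFILE: "1",               // enable the CPU profiler
    BUN_CPU_PROFILE_DIR: "./profiles",  // output directory for the profile
    BUN_CPU_PROFILE_NAME: "script.cpuprofile",
  },
});
await proc.exited; // the profile is written when the child process exits
```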
2026-01-20 22:43:36 -08:00
robobun
08103aa2ff fix(compile): ensure bytecode alignment accounts for section header (#26299)
## Summary

Fixes bytecode alignment in standalone executables to prevent crashes
when loading bytecode cache on Windows.

The bytecode offset needs to be aligned such that when loaded at
runtime, the bytecode pointer is 128-byte aligned. Previously, alignment
was based on arbitrary memory addresses during compilation, which didn't
account for the 8-byte section header prepended at runtime. This caused
the bytecode to be misaligned, leading to segfaults in
`JSC::CachedJSValue::decode` on Windows.

## Root Cause

At runtime, embedded data starts 8 bytes after the PE/Mach-O section
virtual address (which is page-aligned, hence 128-byte aligned). For
bytecode at offset `O` to be aligned:
```
(section_va + 8 + O) % 128 == 0
=> (8 + O) % 128 == 0
=> O % 128 == 120
```

The previous code used `std.mem.alignInSlice()` which found aligned
addresses based on the compilation buffer's arbitrary address, not
accounting for the 8-byte header offset at load time.
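
A minimal sketch of the alignment rule described above (illustrative only, not the actual Zig implementation): pad the offset until the bytecode lands 128-byte aligned once the 8-byte header is prepended.

```typescript
// Given the current end of the embedded data, pick the next bytecode offset
// such that (8 + offset) % 128 === 0, i.e. offset % 128 === 120.
function alignBytecodeOffset(currentEnd: number): number {
  const headerSize = 8;   // section header prepended at runtime
  const alignment = 128;  // required bytecode alignment
  const remainder = (currentEnd + headerSize) % alignment;
  return remainder === 0 ? currentEnd : currentEnd + (alignment - remainder);
}

console.log(alignBytecodeOffset(0));   // 120 -> (8 + 120) % 128 === 0
console.log(alignBytecodeOffset(121)); // 248 -> 248 % 128 === 120
```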

## Changes

- **`src/StandaloneModuleGraph.zig`**: Calculate bytecode offset to
satisfy `offset % 128 == 120` instead of using `alignInSlice`
- **`test/regression/issue/26298.test.ts`**: Added regression tests for
bytecode cache in standalone executables

## Test plan

- [x] Added regression test `test/regression/issue/26298.test.ts` with 3
test cases
- [x] Existing `HelloWorldBytecode` test passes
- [x] Build succeeds

Fixes #26298

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 22:42:38 -08:00
Jarred Sumner
bb4d150aed Try using internal string-width in node:readline (#26306)
### What does this PR do?

Remove NFKDC normalization and stripVTControlCharacters since
Bun.stringWidth does this now

### How did you verify your code works?

ci

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-20 22:41:14 -08:00
Dylan Conway
45af3335e6 Missing changes from #26080 (#26308)
### What does this PR do?

### How did you verify your code works?
2026-01-20 14:51:36 -08:00
robobun
dbad2857ea fix(test): delete setTimeout.clock property when disabling fake timers (#26285)
## Summary

- Fixes the `setTimeout.clock` property not being properly deleted after
`jest.useRealTimers()` is called
- Previously, the property was set to `false` instead of deleted,
causing `hasOwnProperty` checks to return `true`
- This broke React Testing Library and other libraries that check for
fake timers using `Object.prototype.hasOwnProperty.call(setTimeout,
'clock')`
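
A minimal sketch of the check this fixes, assuming `bun:test` fake timers mark `setTimeout` with a `clock` property as described above:

```typescript
import { expect, jest, test } from "bun:test";

test("setTimeout.clock is removed after useRealTimers()", () => {
  jest.useFakeTimers();
  // Fake timers mark setTimeout with a `clock` property.
  expect(Object.prototype.hasOwnProperty.call(setTimeout, "clock")).toBe(true);

  jest.useRealTimers();
  // After this fix the property is deleted rather than set to `false`,
  // so libraries probing with hasOwnProperty see real timers again.
  expect(Object.prototype.hasOwnProperty.call(setTimeout, "clock")).toBe(false);
});
```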

## Changes

- Added `JSValue.deleteProperty()` binding in Zig to call JSC's
`deleteProperty()` method
- Updated `setFakeTimerMarker()` in `FakeTimers.zig` to delete the
`clock` property when disabling fake timers
- Updated existing test in `test/regression/issue/25869.test.ts` to
verify correct behavior
- Added new regression test in `test/regression/issue/26284.test.ts`

## Test plan

- [x] Verified new test fails with system bun (before fix)
- [x] Verified new test passes with debug build (after fix)
- [x] Verified existing fake timer tests still pass
- [x] Verified test for issue #25869 passes with fix

Fixes #26284

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 12:51:54 -08:00
robobun
6140eb5faf fix(windows): chunk large buffers in preadv/pwritev to avoid integer overflow (#26303)
## Summary
- Fix panic "integer does not fit in destination type" when
reading/writing large files on Windows
- Add chunking for iovec arrays that exceed `c_uint` max entries
- Add chunking for individual buffers that exceed 4GB (`u32` max)

The libuv functions `uv_fs_read` and `uv_fs_write` have two size
limitations:
1. `nbufs` parameter is `c_uint` (32-bit), limiting the number of iovec
entries
2. `uv_buf_t.len` is `ULONG` (u32 on Windows), limiting individual
buffer sizes to 4GB

This change processes large operations in chunks, accumulating results
and updating file positions between chunks.

## Test plan
- [x] Verified `bun run zig:check-all` passes on all platforms
- [x] Verified `bun bd` builds successfully
- [x] Verified basic file read/write operations work correctly

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-20 12:47:07 -08:00
Alistair Smith
497a4d4818 Fix duplicate exports when two entrypoints share symbols (#26089)
### What does this PR do?

Fixes #5344
Fixes #6356

### How did you verify your code works?

Some test coverage

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
2026-01-20 12:40:33 -08:00
Dylan Conway
66d8397bd7 fix(compile): fix native module export corruption with multiple NAPI modules (#26080)
## Summary

Fixes native module export corruption when compiling multiple NAPI
modules with `bun build --compile` on Linux.

- When loading multiple `.node` files in a compiled binary, the second
module would incorrectly get the first module's exports
- Root cause: memfd file descriptors were closed after dlopen, allowing
fd reuse. Since dlopen caches by path (`/proc/self/fd/N`), it returned
the wrong cached handle
- This bug occurs when loading native modules in quick succession, as
the fd number is likely to be reused immediately after being closed
- Fix: Disable the memfd optimization and always use temp files with
unique paths

## Test plan

- [x] Added regression test in `test/regression/issue/26045/`
- [x] Test fails with production bun (v1.3.6)
- [x] Test passes with the fix

Fixes https://github.com/oven-sh/bun/issues/26045

🤖 Generated with [Claude Code](https://claude.com/claude-code)
2026-01-20 12:40:12 -08:00
robobun
170f8f7962 fix(tls): use correct variable in setVerifyMode (#26255)
## Summary
- Fix typo in `setVerifyMode` where `reject_unauthorized` was
incorrectly reading from `request_cert_js` instead of
`reject_unauthorized_js`

## Test plan
- Existing TLS renegotiation tests pass
- Code inspection shows the fix is correct (simple variable name typo)

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 22:47:18 -08:00
robobun
5f470278d1 fix(update): 'l' key now selects package in interactive update (#26265)
## Summary
- The 'l' key in `bun update --interactive` now correctly selects the
package when toggling between Target and Latest versions
- Previously, pressing 'l' would toggle `use_latest` but not mark the
package as selected, causing the underline indicator to disappear and the
package to be excluded when confirming

## Test plan
- [x] Added regression test `test/regression/issue/24131.test.ts` that
verifies 'l' selects the package
- [x] Test fails with system bun (before fix) and passes with debug
build (after fix)
- [x] `bun bd test test/regression/issue/24131.test.ts` passes

Fixes #24131

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 22:46:36 -08:00
robobun
3a4daa95ac fix(bundler): fix out-of-bounds access in DynamicBitSetUnmanaged.bytes() (#26283)
## Summary

Fixes an index out-of-bounds panic that occurs during bundler code
splitting on Windows.

- The `bytes()` function in `DynamicBitSetUnmanaged` was accessing
`masks[0..numMasks(bit_length) + 1]`, reading one element past the
allocated array
- When the bit set has exactly one mask (bit_length <= 64), this causes
a panic: "index out of bounds: index 1, len 1"
- The bug manifests when sorting chunk deduplication keys derived from
`AutoBitSet.bytes()`

The fix simply removes the erroneous `+ 1` from the slice bounds.

## Test plan

- [x] `bun bd test test/bundler/bundler_splitting.test.ts` - passes
- [x] `bun bd test test/bundler/esbuild/splitting.test.ts` - passes
- [x] `bun bd test test/bundler/bundler_regressions.test.ts` - passes
- [x] `bun bd test test/bundler/bundler_edgecase.test.ts` - passes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 22:45:56 -08:00
robobun
d3f8bec565 fix(Terminal): callbacks not invoked inside AsyncLocalStorage.run() (#26288)
## Summary
- Fixed `Bun.Terminal` callbacks (data, exit, drain) not being invoked
when the terminal is created inside `AsyncLocalStorage.run()`

## Root Cause
The bug was a redundant `isCallable()` check when storing callbacks in
`initTerminal()`:

1. In `Options.parseFromJS()`, callbacks are validated with
`isCallable()`, then wrapped with `withAsyncContextIfNeeded()`
2. Inside `AsyncLocalStorage.run()`, `withAsyncContextIfNeeded()`
returns an `AsyncContextFrame` object that wraps the callback + async
context
3. An `AsyncContextFrame` is NOT callable - it's a wrapper object. So
the second `isCallable()` check fails
4. Because the check fails, the callback is never stored via
`js.gc.set()`
5. When `onReadChunk()` tries to get the callback, it returns `null` and
the callback is never invoked

## Fix
Removed the redundant `isCallable()` check in `initTerminal()`. The
check was already performed in `parseFromJS()` before wrapping. Other
similar patterns (socket Handlers, Timer) simply store the wrapped
callback without re-checking.

Fixes #26286

## Test plan
- [x] Added regression test in `test/regression/issue/26286.test.ts`
- [x] Verified test fails with system Bun (times out because callback
never invoked)
- [x] Verified test passes with debug build

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 22:45:22 -08:00
robobun
62834e1bfe fix(init): resolve TypeScript errors in react-tailwind template build.ts (#26258)
## Summary
- Fixes TypeScript errors in the react-tailwind template's `build.ts`
when used with the template's strict `tsconfig.json`

## Test plan
- Added regression test `test/regression/issue/24364.test.ts` that
verifies TypeScript compilation passes
- Verified test fails with old template code and passes with fix

Closes #24364

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 17:12:42 -08:00
robobun
704252e85f fix(npm): show helpful error when postinstall script hasn't run (#26259)
## Summary
- Replaces empty placeholder executables with shell scripts that print
helpful error messages
- The scripts exit with code 1 instead of silently succeeding with code
0
- Helps users diagnose issues when installing with `--ignore-scripts` or
using pnpm

## Problem
When installing the `bun` npm package with `--ignore-scripts` or using
pnpm (which skips postinstall by default), the placeholder `bun.exe` and
`bunx.exe` files were empty, causing them to silently exit with code 0
and produce no output. This made it very difficult for users to
understand why bun wasn't working.

## Solution
The placeholder files are now shell scripts that:
1. Print a clear error message explaining the issue
2. Provide instructions on how to fix it (manually running postinstall
or reinstalling without `--ignore-scripts`)
3. Exit with code 1 to indicate failure

Example output when running the placeholder:
```
Error: Bun's postinstall script was not run.

This occurs when using --ignore-scripts during installation, or when using a
package manager like pnpm that does not run postinstall scripts by default.

To fix this, run the postinstall script manually:
  cd node_modules/bun && node install.js

Or reinstall bun without the --ignore-scripts flag.
```

## Test plan
- [x] Added regression test that verifies the placeholder script
behavior
- [x] Test passes with `bun bd test test/regression/issue/24329.test.ts`

Fixes #24329

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 17:11:38 -08:00
robobun
04f441453d fix(assert): partialDeepStrictEqual now correctly handles Map subset checking (#26257)
## Summary

- Fixed `assert.partialDeepStrictEqual` to correctly handle Map subset
checking
- Previously, Map comparison used `Bun.deepEquals` which required exact
equality
- Now properly checks that all entries in the expected Map exist in the
actual Map with matching values
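
A short usage sketch of the subset behavior (illustrative, not taken from the test suite):

```typescript
import assert from "node:assert";

const actual = new Map([
  ["key1", { a: 1 }],
  ["key2", { b: 2 }],
]);

// Passes after the fix: the expected Map only needs to be a subset of `actual`.
assert.partialDeepStrictEqual(actual, new Map([["key2", { b: 2 }]]));

// Still throws: a key missing from `actual` is not a subset.
assert.throws(() =>
  assert.partialDeepStrictEqual(actual, new Map([["key3", { c: 3 }]])),
);
```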

Fixes #24338

## Test plan

- Added comprehensive test suite in
`test/regression/issue/24338.test.ts` covering:
  - Basic subset checking (key2 in Map with key1 and key2)
  - Exact match cases
  - Empty expected Map
  - Multiple matching entries
  - Nested objects as values
  - Failure cases when expected has more keys
  - Failure cases when key is missing in actual
  - Failure cases when values differ
  - Nested Map values
  - Non-string keys

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 17:10:47 -08:00
robobun
362839c987 fix(websocket): forward URL credentials as Authorization header (#26278)
## Summary

- Extracts credentials from WebSocket URL (`ws://user:pass@host`) and
sends them as Basic Authorization header
- User-provided `Authorization` header takes precedence over URL
credentials
- Credentials are properly URL-decoded before being Base64-encoded
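
A rough end-to-end sketch of the new behavior (the echo server and port handling are illustrative, not from the PR):

```typescript
const server = Bun.serve({
  port: 0,
  fetch(req, srv) {
    // Echo the Authorization header back over the socket once it opens.
    const auth = req.headers.get("authorization") ?? "none";
    if (srv.upgrade(req, { data: { auth } })) return;
    return new Response("expected a websocket upgrade", { status: 400 });
  },
  websocket: {
    open(ws) {
      ws.send((ws.data as { auth: string }).auth);
    },
    message() {},
  },
});

// Credentials in the URL are URL-decoded, then sent as `Basic base64("user:pass")`.
const ws = new WebSocket(`ws://user:pass@localhost:${server.port}`);
ws.onmessage = (event) => {
  console.log(event.data); // "Basic dXNlcjpwYXNz"
  ws.close();
  server.stop();
};
```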

Fixes #24388

## Test plan

- [x] Added regression test `test/regression/issue/24388.test.ts` with 5
test cases:
  - Basic credentials in URL
  - Empty password
  - No credentials (no header sent)
  - Custom Authorization header takes precedence
  - Special characters (URL-encoded) in credentials
- [x] Tests pass with `bun bd test test/regression/issue/24388.test.ts`
- [x] Tests fail with `USE_SYSTEM_BUN=1 bun test` (confirming the bug
existed)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 17:04:44 -08:00
robobun
0dda0f6310 fix(socket): free resources when socket is reused for reconnection (#26280)
## Summary
- Fix memory leak in socket reconnection path by freeing old resources
before reassignment
- Add regression test for socket connection/close operations

## Problem
When sockets are reused in `connectInner` (common with MongoDB driver
reconnection patterns), the old connection metadata was being
overwritten without being freed first. This caused memory leaks of:
- `connection` (hostname/path strings)
- `protos` (ALPN protocol strings)
- `server_name` (SNI hostname string)
- `socket_context` (SSL context)

## Solution
This fix ensures these resources are properly freed before reassignment
when a socket is reused for reconnection. This matches the cleanup
pattern already used in the socket's `deinit()` function.

## Test plan
- [x] Added regression test `test/regression/issue/24118.test.ts`
- [x] Verified build compiles successfully
- [x] Verified test passes with `bun bd test
test/regression/issue/24118.test.ts`

Fixes #24118

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-01-19 15:46:52 -08:00
robobun
5e8a84e5e7 fix(fs): Dirent.isFIFO() incorrectly returns true for unknown type (#26263)
## Summary
- Fix `fs.Dirent.isFIFO()` incorrectly returning `true` for unknown file
types (e.g., on sshfs/NFS mounts)
- Remove the `EventPort` check from `isFIFO()` since `EventPort = 0 =
Unknown`
- Add regression test for the fix

Fixes #24129

## Root Cause
In `NodeDirent.cpp`, the `isFIFO()` method was checking:
```cpp
type == static_cast<int32_t>(DirEntType::NamedPipe) || type == static_cast<int32_t>(DirEntType::EventPort)
```

Since `EventPort = 0` and `Unknown = 0` (they share the same enum
value), any file with unknown type (returned by filesystems like sshfs,
NFS, etc. that don't populate `d_type`) would incorrectly trigger
`isFIFO() === true`.

## Test plan
- [x] Regression test: `bun bd test test/regression/issue/24129.test.ts`
- [x] Existing Dirent tests: `bun bd test test/js/node/fs/fs.test.ts -t
"Dirent"`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 15:44:27 -08:00
robobun
01fac4a63c fix(shell): prevent double-free of ShellArgs in error path (#26277)
## Summary
- Fixes a double-free bug in the shell interpreter error handling path

## What Changed
When `interpreter.init()` succeeds but `globalThis.hasException()` is
true, the code was calling `shargs.deinit()` before
`interpreter.finalize()`. However, `interpreter.args` points to `shargs`
after `init()` succeeds, so calling `interpreter.finalize()` ->
`deinitFromFinalizer()` -> `this.args.deinit()` would then try to deinit
an already-freed `ShellArgs`, causing a double-free.

This issue is related to #24368, which reported crashes during GC
finalization of ShellInterpreter objects. While the main fix for #24368
was added in v1.3.6 (commit 367eeb308e), this fixes an additional
double-free bug in the error handling path.

## Test plan
- [x] Added regression tests in `test/regression/issue/24368.test.ts`
- [x] Tests pass with `bun bd test test/regression/issue/24368.test.ts`
- [x] Basic shell functionality verified

Fixes #24368

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-01-19 15:24:57 -08:00
robobun
f8adf01f51 fix(install): handle null metadata gracefully instead of panicking (#26238)
## Summary

- Fixed a panic in `bun add` when HTTP requests fail before receiving
response headers
- The panic "Assertion failure: Expected metadata to be set" now becomes
a graceful error message

## Root Cause

In `src/install/PackageManagerTask.zig`, the code assumed `metadata` is
always non-null and panicked when it wasn't. However, `metadata` can be
null when:
- HTTP request fails before receiving response headers
- Network connection is refused/lost
- Timeout occurs before response
- Firewall blocks/corrupts the response

## Fix

Replaced the panic with proper error handling, following the existing
pattern in `runTasks.zig`:

```zig
const metadata = manifest.network.response.metadata orelse {
    // Handle the error gracefully instead of panicking
    const err = manifest.network.response.fail orelse error.HTTPError;
    // ... show user-friendly error message
};
```

## Test plan

- [x] Added regression test `test/regression/issue/26236.test.ts`
- [x] Test verifies Bun shows graceful error instead of panicking
- [x] `bun bd test test/regression/issue/26236.test.ts` passes

Fixes #26236

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-01-19 11:26:25 -08:00
robobun
d85d40ea29 fix(ffi): respect C_INCLUDE_PATH and LIBRARY_PATH env vars (#26250)
## Summary
- Make `bun:ffi`'s TinyCC compiler check standard C compiler environment
variables
- Add support for `C_INCLUDE_PATH` (include paths) and `LIBRARY_PATH`
(library paths)
- Fixes compilation on NixOS and other systems that don't use standard
FHS paths
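
For illustration, a `bun:ffi` `cc` call that relies on the new environment-variable lookup; the `add.c` source file and the Nix store path are hypothetical:

```typescript
import { cc } from "bun:ffi";

// Run as: C_INCLUDE_PATH=/nix/store/<hash>-glibc-dev/include bun run add.ts
// With the fix, TinyCC also searches the colon-separated paths from
// C_INCLUDE_PATH (headers) and LIBRARY_PATH (libraries).
const { symbols } = cc({
  source: "./add.c", // hypothetical C file defining: int add(int a, int b)
  symbols: {
    add: { args: ["int", "int"], returns: "int" },
  },
});

console.log(symbols.add(2, 3)); // 5
```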

## Test plan
- [x] Added regression test `test/regression/issue/26249.test.ts` that
verifies:
  - Single path in `C_INCLUDE_PATH` works
  - Multiple colon-separated paths in `C_INCLUDE_PATH` work
- [x] Verified test fails with system bun (without fix)
- [x] Verified test passes with debug build (with fix)
- [x] Verified existing `cc.test.ts` tests still pass

Closes #26249

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 11:24:02 -08:00
Jarred Sumner
c47f84348a Update CLAUDE.md 2026-01-18 14:07:30 -08:00
SUZUKI Sosuke
f8a049e9f2 perf(buffer): optimize swap16/swap64 with __builtin_bswap (#26190)
## Summary

Optimize `Buffer.swap16()` and `Buffer.swap64()` by replacing
byte-by-byte swapping loops with `__builtin_bswap16/64` compiler
intrinsics.

## Problem

`Buffer.swap16` and `Buffer.swap64` were significantly slower than
Node.js due to inefficient byte-level operations:

- **swap16**: Swapped bytes one at a time in a loop
- **swap64**: Used a nested loop with 4 byte swaps per 8-byte element

## Solution

Replace the manual byte swapping with `__builtin_bswap16/64` intrinsics,
which compile to single CPU instructions (`BSWAP` on x86, `REV` on ARM).

Use `memcpy` for loading/storing values to handle potentially unaligned
buffers safely.
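
For reference, the byte-swap semantics the intrinsics implement, shown via plain `Buffer` usage (not the C++ change itself):

```typescript
const buf = Buffer.from([0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08]);

buf.swap64(); // reverse each 8-byte group in place
console.log(buf); // <Buffer 08 07 06 05 04 03 02 01>

buf.swap16(); // swap each 2-byte pair in place
console.log(buf); // <Buffer 07 08 05 06 03 04 01 02>
```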

## Benchmark Results (64KB buffer, Apple M4 Max)

| Operation | Bun 1.3.6 | Node.js 24 | This PR | Improvement |
|-----------|-----------|------------|---------|-------------|
| swap16    | 1.00 µs   | 0.57 µs    | 0.56 µs | **1.79x faster** |
| swap32 | 0.55 µs | 0.77 µs | 0.54 µs | (no change, already fast) |
| swap64    | 2.02 µs   | 0.58 µs    | 0.56 µs | **3.6x faster** |

Bun now matches or exceeds Node.js performance for all swap operations.

## Notes

- `swap32` was not modified as the compiler already optimizes the 4-byte
swap pattern
- All existing tests pass
2026-01-18 13:33:04 -08:00
SUZUKI Sosuke
12a45b7cbf Remove dead data URL check in fetch implementation (#26197)
## Summary
- Remove unreachable dead code that checked for data URLs in
`fetchImpl()`
- Data URLs are already handled earlier in the function via the
`dispatch_request` block which processes `.data` scheme URLs
- This redundant check at lines 375-387 could never be reached

## Test plan
- [ ] Verify existing fetch tests pass with `bun bd test
test/js/web/fetch/`
- [ ] Confirm data URL fetching still works correctly (handled by
earlier code path)

## Changelog
<!-- CHANGELOG:START -->
<!-- No user-facing changes - internal code cleanup only -->
<!-- CHANGELOG:END -->

🤖 Generated with [Claude Code](https://claude.com/claude-code) (100%
12-shotted by claude-opus-4-5)

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 13:32:20 -08:00
robobun
039c89442f chore: bump TinyCC to latest upstream (Jan 2026) (#26210)
## What does this PR do?

Updates the oven-sh/tinycc fork to the latest upstream TinyCC,
incorporating 30+ upstream commits while preserving all Bun-specific
patches.

### Upstream changes incorporated
- Build system improvements (c2str.exe handling, cross-compilation)
- macOS 15 compatibility fixes
- libtcc debugging support
- pic/pie support for i386
- arm64 alignment and symbol offset fixes
- RISC-V 64 improvements (pointer difference, assembly, Zicsr extension)
- Relocation updates
- Preprocessor improvements (integer literal overflow handling)
- x86-64 cvts*2si fix
- Various bug fixes

### Bun-specific patches preserved
- Fix crash on macOS x64 (libxcselect.dylib memory handling)
- Implement `-framework FrameworkName` on macOS (for framework header
parsing)
- Add missing #ifdef guards for TCC_IS_NATIVE
- Make `__attribute__(deprecated)` a no-op
- Fix `__has_include` with framework paths
- Support attributes after identifiers in enums
- Fix dlsym behavior on macOS (RTLD_SELF first, then RTLD_DEFAULT)
- Various tccmacho.c improvements

### Related PRs
- TinyCC fork CI is passing:
https://github.com/oven-sh/tinycc/actions/runs/21105489093

## How did you verify your code works?

- [x] TinyCC fork CI passes on all platforms (Linux
x86_64/arm64/armv7/riscv64, macOS x86_64/arm64, Windows i386/x86_64)
- [ ] Bun CI passes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 13:31:21 -08:00
robobun
c3b4e5568c fix(http): check socket state before operations in doRedirect (#26221)
## Summary
- Fix assertion failure when using HTTP proxy with redirects and socket
closes during redirect processing
- Add `isClosedOrHasError()` checks before `releaseSocket` and
`closeSocket` in `doRedirect`

Fixes #26220

## Root Cause
In `doRedirect` (`src/http.zig:786-797`), the code called
`releaseSocket` or `closeSocket` without checking if the socket was
already closed. When `onClose` is triggered while `is_redirect_pending`
is true, it calls `doRedirect`, but the socket is already closed at that
point, causing the assertion in `HTTPContext.zig:168` to fail:

```zig
assert(!socket.isClosed());  // FAILS - socket IS closed
```

## Fix
Added `!socket.isClosedOrHasError()` checks before socket operations in
`doRedirect`, matching the pattern already used at line 1790 in the same
file.

## Test plan
- [x] All existing proxy redirect tests pass (`bun bd test
test/js/bun/http/proxy.test.ts`)
- [x] Build completes successfully (`bun bd`)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 13:19:30 -08:00
robobun
3d46ae2fa4 fix(node-fetch): convert old-style Node.js streams to Web streams (#26226)
## Summary
- Fix multipart uploads using form-data + node-fetch@2 +
fs.createReadStream() being truncated
- Convert old-style Node.js streams (that don't implement
`Symbol.asyncIterator`) to Web ReadableStreams before passing to native
fetch

## Test plan
- [x] New tests in `test/regression/issue/26225.test.ts` verify:
  - Multipart uploads with form-data and createReadStream work correctly
  - Async iterable bodies still work (regression test)
  - Large file streams work correctly
- [x] Tests fail with `USE_SYSTEM_BUN=1` and pass with debug build

Fixes #26225

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 13:19:02 -08:00
wovw
716801e92d gitignore: add .direnv dir (#26198)
### What does this PR do?

The `.direnv` folder is created by [direnv](https://direnv.net/) when
using `use flake` in `.envrc` to automatically load the Nix development
shell. Since the repo already includes a flake.nix, developers on NixOS
commonly use direnv (via nix-direnv) to auto-load the environment. This
folder contains cached environment data and should not be committed.
2026-01-18 00:17:14 -08:00
wovw
939f5cf7af fix(nix): disable fortify hardening for debug builds (#26199)
### What does this PR do?

NixOS enables security hardening flags by default in `mkShell` /
`devShells` e.g. `_FORTIFY_SOURCE=2`. This flag adds runtime buffer
overflow checks but requires compiler optimization (`-O1` or higher) to
work, since it needs to inline functions to insert checks.
Debug builds use `-O0` (no optimization), which causes this compilation
error:
`error: _FORTIFY_SOURCE requires compiling with optimization (-O)
[-Werror,-W#warnings]`

This patch is a standard Nix way to disable this specific flag while
keeping other hardening features intact. It doesn't affect release
builds since it's scoped to `devShells`.

### How did you verify your code works?

`bun bd test` successfully runs test cases.
2026-01-18 00:17:01 -08:00
SUZUKI Sosuke
496aeb97f9 refactor(wrapAnsi): use WTF::find for character searches (#26200)
## Summary

This PR addresses the review feedback from #26061
([comment](https://github.com/oven-sh/bun/pull/26061#discussion_r2697257836))
requesting the use of `WTF::find` for newline searches in
`wrapAnsi.cpp`.

## Changes

### 1. CRLF Normalization (lines 628-639)
Replaced manual loop with `WTF::findNextNewline` which provides
SIMD-optimized detection for `\r`, `\n`, and `\r\n` sequences.

**Before:**
```cpp
for (size_t i = 0; i < input.size(); ++i) {
    if (i + 1 < input.size() && input[i] == '\r' && input[i + 1] == '\n') {
        normalized.append(static_cast<Char>('\n'));
        i++;
    } else {
        normalized.append(input[i]);
    }
}
```

**After:**
```cpp
size_t pos = 0;
while (pos < input.size()) {
    auto newline = WTF::findNextNewline(input, pos);
    if (newline.position == WTF::notFound) {
        normalized.append(std::span { input.data() + pos, input.size() - pos });
        break;
    }
    if (newline.position > pos)
        normalized.append(std::span { input.data() + pos, newline.position - pos });
    normalized.append(static_cast<Char>('\n'));
    pos = newline.position + newline.length;
}
```

### 2. Word Length Calculation (lines 524-533)
Replaced manual loop with `WTF::find` for space character detection.

**Before:**
```cpp
for (const Char* it = lineStart; it <= lineEnd; ++it) {
    if (it == lineEnd || *it == ' ') {
        // word boundary logic
    }
}
```

**After:**
```cpp
auto lineSpan = std::span<const Char>(lineStart, lineEnd);
size_t wordStartIdx = 0;
while (wordStartIdx <= lineSpan.size()) {
    size_t spacePos = WTF::find(lineSpan, static_cast<Char>(' '), wordStartIdx);
    // word boundary logic using spacePos
}
```

## Benchmark Results

Tested on Apple M4 Max. No performance regression observed - most
benchmarks show slight improvements.

| Benchmark | Before | After | Change |
|-----------|--------|-------|--------|
| Short text (45 chars) | 613 ns | 583 ns | -4.9% |
| Medium text (810 chars) | 10.85 µs | 10.31 µs | -5.0% |
| Long text (8100 chars) | 684 µs | 102 µs | -85% * |
| Colored short | 1.26 µs | 806 ns | -36% |
| Colored medium | 19.24 µs | 13.80 µs | -28% |
| Japanese (full-width) | 7.74 µs | 7.43 µs | -4.0% |
| Emoji text | 9.35 µs | 9.27 µs | -0.9% |
| Hyperlink (OSC 8) | 5.73 µs | 5.58 µs | -2.6% |

\* Large variance in baseline measurement

## Testing

- All 35 existing tests pass
- Manual verification of CRLF normalization and word wrapping edge cases

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-17 23:43:02 -08:00
robobun
3b5f2fe756 chore(deps): update BoringSSL fork to latest upstream (#26212)
## Summary

Updates the BoringSSL fork to the latest upstream (337 commits since
last update) with bug fixes for Node.js crypto compatibility.

### Upstream BoringSSL Changes (337 commits)

| Category | Count |
|----------|-------|
| API Changes (including namespacing) | 42 |
| Code Cleanup/Refactoring | 35 |
| Testing/CI | 32 |
| Build System (Bazel, CMake) | 27 |
| Bug Fixes | 25 |
| Post-Quantum Cryptography | 14 |
| TLS/SSL Changes | 12 |
| Rust Bindings/Wrappers | 9 |
| Performance Improvements | 8 |
| Documentation | 8 |

#### Highlights

**Post-Quantum Cryptography**
- ML-DSA (Module-Lattice Digital Signature Algorithm): Full EVP
integration, Wycheproof tests, external mu verification
- SLH-DSA: Implementation of pure SLH-DSA-SHAKE-256f
- Merkle Tree Certificates: New support for verifying signatureless MTCs

**Major API Changes**
- New `CRYPTO_IOVEC` based AEAD APIs for zero-copy I/O across all
ciphers
- Massive namespacing effort moving internal symbols into `bssl`
namespace
- `bssl::Span` modernization to match `std::span` behavior

**TLS/SSL**
- Added `TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256` support
- HMAC on SHA-384 for TLS 1.3
- Improved Lucky 13 mitigation

**Build System**
- Bazel 8.x and 9.0.0 compatibility
- CI upgrades: Ubuntu 24.04, Android NDK r29

---

### Bun-specific Patches (in oven-sh/boringssl)

1. **Fix SHA512-224 EVP final buffer size** (`digests.cc.inc`)
   - `BCM_sha512_224_final` writes 32 bytes but `EVP_MD.md_size` is 28 bytes
   - Now uses a temp buffer to avoid buffer overwrite

2. **Fix `EVP_do_all_sorted` to return only lowercase names** (`evp_do_all.cc`)
   - `EVP_CIPHER_do_all_sorted` and `EVP_MD_do_all_sorted` now return only lowercase names
   - Matches Node.js behavior for `crypto.getCiphers()` and `crypto.getHashes()`

---

### Changes in Bun

- Updated BoringSSL commit hash to
`4f4f5ef8ebc6e23cbf393428f0ab1b526773f7ac`
- Removed `ignoreSHA512_224` parameter from `ncrypto::getDigestByName()`
to enable SHA512-224 support
- Removed special SHA512-224 buffer handling in `JSHash.cpp` (no longer
needed after BoringSSL fix)

## Test plan
- [x] `crypto.createHash('sha512-224')` works correctly
- [x] `crypto.getHashes()` returns lowercase names (md4, md5, sha1,
sha256, etc.)
- [x] `crypto.getCiphers()` returns lowercase names (aes-128-cbc,
aes-256-gcm, etc.)
- [x] `test/regression/issue/crypto-names.test.ts` passes
- [x] All CI tests pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-17 23:39:04 -08:00
robobun
f833f11afa fix(bake): respect --no-clear-screen in DevServer HMR (#26184) 2026-01-16 21:33:36 -08:00
robobun
b2e5c6c7d1 Upgrade WebKit to ea1bfb85d259 (#26161)
## Summary
- Upgrades WebKit from `c4d4cae03ece` to `ea1bfb85d259`
- Merges upstream WebKit changes into oven-sh/webkit fork

## WebKit Upgrade Summary (JavaScriptCore Changes)

### JSType Enum Changes

**No breaking changes to JSType enum from upstream.** The diff showing
`InternalFieldTupleType` removal is actually showing Bun's custom
addition - upstream WebKit does not have this type. The Bun fork
maintains `InternalFieldTupleType` after `DerivedStringObjectType`,
which is preserved during the upgrade.

### Notable Performance Improvements

#### ARM64 Conditional Compare Chain (ccmp/ccmn)
- **Commit:** `2cd6a734ed6c`
- Implements ARM64 `ccmp`/`ccmn` instruction chaining for compound
boolean expressions
- Converts patterns like `if (x0 == 0 && x1 == 1)` into efficient
conditional compare sequences
- Reduces branch prediction misses and code size
- Introduces new Air opcodes: `CompareOnFlags`,
`CompareConditionallyOnFlags`, `BranchOnFlags`

#### Extended Constant Materialization for Float16/Float/Double/V128
- **Commit:** `0521cc7f331a`
- Enhanced ARM64 constant materialization using `movi`, `mvni`, and
vector `fmov`
- Avoids memory loads for Float constants (32-bit values can now be
materialized directly)
- Adds `FPImm128` and `Move128ToVector` Air instructions

#### DFG/FTL Storage Pointer Improvements
- **Commits:** `00c0add58ec3`, `7051d3ac1f34`
- FTL Phis now properly support storage (butterfly) pointers
- Introduces `KnownStorageUse` for all storage operands in DFG/FTL
- Fixes issues with Array allocation sinking when creating storage Phis
- Improves GC safety by ensuring butterfly pointers are properly tracked

### Bug Fixes

#### Thread Termination Race Condition
- **Commit:** `23922a766f07`
- Fixes race condition in `VM::m_hasTerminationRequest` between main
thread and worker threads
- Moves `setHasTerminationRequest()` call into `VMTraps::handleTraps()`
to eliminate race

#### ThrowScope Exception Clearing
- **Commit:** `67abaaa35c4d`
- ThrowScopes can no longer accidentally clear termination exceptions
- Introduces `tryClearException()` which fails on termination exceptions
- Affects iterator operations, promises, and WebCore stream handling

#### Bytecode Cache JIT Threshold
- **Commit:** `e0644034f46e`
- Functions loaded from bytecode cache now correctly set JIT threshold
- Previously, cached functions would JIT immediately on first execution

#### Wasm Fixes
- **Commit:** `8579516f4b61` - Fix JIT-less Wasm-to-JS i31ref
marshalling for i31 values in double format
- **Commit:** `22b6a610f6ff` - Fix nullability for wasm js-string
builtins return types (`cast`, `fromCharCode`, `fromCodePoint`,
`concat`, `substring`)
- **Commit:** `5ad2efd177db` - Optimize Wasm BlockSignature to avoid
lock contention during parsing

#### 32-bit ARM (Armv7) Fix
- **Commit:** `9cc23c0e75b7`
- Fixes tail call shuffler register allocation on 32-bit ARM
- Prevents assertion failures when JSValue can load via FPR but GPRs are
exhausted

### New Features

#### Temporal PlainYearMonth Support
- **Commit:** `d865004780e6`
- Enables all PlainYearMonth test262 tests
- Fixes several bugs in month code handling and rounding modes

#### Wasm IPInt Execution Tracing
- **Commit:** `634156af4114`
- Adds `--traceWasmIPIntExecution` option for debugging WebAssembly
interpreter execution

### Code Quality Improvements

- **Commit:** `31bc5e6778d4` - `JSRegExpStringIterator` reduced from 56
to 40 bytes by merging boolean fields into bitfield
- **Commit:** `cda948675446` - Fix fragile include dependency in
`JSC::getCallDataInline`
- **Commit:** `bd87f5db107e` - Fix unretained local variable warnings in
JavaScriptCore/API

## Merge Conflicts Resolved

Fixed 4 merge conflicts related to Bun-specific patches:
1. `Source/JavaScriptCore/API/JSVirtualMachine.mm` - Removed
JSLockHolder as per Bun's patch
2. `Source/JavaScriptCore/runtime/JSBoundFunction.h` - Used relative
includes instead of framework includes
3. `Source/JavaScriptCore/runtime/JSObjectInlines.h` - Used relative
includes and updated `JSFunction.h` to `JSFunctionInlines.h`
4. `Source/WTF/wtf/text/WTFString.h` - Preserved ExternalStringImpl
support

## Test plan
- [x] WebKit builds successfully (`bun build.ts debug`)
- [x] JSType enum values verified to be compatible
- [ ] CI builds and tests pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: vadim-anthropic <vadim@anthropic.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2026-01-16 18:46:48 -08:00
robobun
1344151576 fix(json): prevent stack overflow in JSONC parser on deeply nested input (#26174)
## Summary
- Add stack overflow protection to JSON/JSONC parser to prevent
segmentation faults
- Parser now throws `RangeError: Maximum call stack size exceeded`
instead of crashing
- Fixes DoS vulnerability when parsing deeply nested JSON structures
(~150k+ depth)

## Test plan
- [x] Added regression tests for deeply nested arrays and objects (25k
depth)
- [x] Verified system Bun v1.3.6 crashes with segfault at 150k depth
- [x] Verified fix throws proper error instead of crashing
- [x] All existing JSONC tests pass

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 16:23:01 -08:00
SUZUKI Sosuke
44df912d37 Add Bun.wrapAnsi() for text wrapping with ANSI escape code preservation (#26061)
## Summary

Adds `Bun.wrapAnsi()`, a native implementation of the popular
[wrap-ansi](https://www.npmjs.com/package/wrap-ansi) npm package for
wrapping text with ANSI escape codes.

## API

```typescript
Bun.wrapAnsi(string: string, columns: number, options?: WrapAnsiOptions): string

interface WrapAnsiOptions {
  hard?: boolean;              // default: false - Break words longer than columns
  wordWrap?: boolean;          // default: true - Wrap at word boundaries
  trim?: boolean;              // default: true - Trim leading/trailing whitespace
  ambiguousIsNarrow?: boolean; // default: true - Treat ambiguous-width chars as narrow
}
```

## Features

- Wraps text to fit within specified column width
- Preserves ANSI escape codes (SGR colors/styles)
- Supports OSC 8 hyperlinks
- Respects Unicode display widths (full-width characters, emoji)
- Normalizes `\r\n` to `\n`
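
A small usage sketch based on the API above (a minimal example, assuming a color-capable terminal):

```typescript
const red = "\u001b[31m";
const reset = "\u001b[39m";

// Soft wrap at 16 columns; the color is closed and reopened around each line break.
console.log(Bun.wrapAnsi(`${red}The quick brown fox jumps over the lazy dog${reset}`, 16));

// Hard wrap breaks words that are longer than the column width.
console.log(Bun.wrapAnsi("averyveryverylongword", 8, { hard: true }));
```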

## Implementation Details

The implementation closes and reopens ANSI codes around line breaks for
robust terminal compatibility. This differs slightly from the npm
package in edge cases but produces visually equivalent output.

### Behavioral Differences from npm wrap-ansi

1. **ANSI code preservation**: Bun always maintains complete ANSI escape
sequences. The npm version can output malformed codes (missing ESC
character) in certain edge cases with `wordWrap: false, trim: false`.

2. **Newline ANSI handling**: Bun closes and reopens ANSI codes around
newlines for robustness. The npm version sometimes keeps them spanning
across newlines. The visual output is equivalent.

## Tests

- 27 custom tests covering basic functionality, ANSI codes, Unicode, and
options
- 23 tests ported from the npm package (MIT licensed, credited in file
header)
- All 50 tests pass

## Benchmark

<!-- Benchmark results will be added -->
```
$ cd /Users/sosuke/code/bun/bench && ../build/release/bun snippets/wrap-ansi.js
clk: ~3.82 GHz
cpu: Apple M4 Max
runtime: bun 1.3.7 (arm64-darwin)

benchmark                    avg (min … max) p75   p99    (min … top 1%)
-------------------------------------------- -------------------------------
Short text (45 chars) - npm    25.81 µs/iter  21.71 µs  █
                      (16.79 µs … 447.38 µs) 110.96 µs ▆█▃▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
Short text (45 chars) - Bun   685.55 ns/iter 667.00 ns    █
                       (459.00 ns … 2.16 ms)   1.42 µs ▁▁▁█▃▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁

summary
  Short text (45 chars) - Bun
   37.65x faster than Short text (45 chars) - npm

-------------------------------------------- -------------------------------
Medium text (810 chars) - npm 568.12 µs/iter 578.00 µs  ▄▅█▆▆▃
                     (525.25 µs … 944.71 µs) 700.75 µs ▄██████▆▅▄▃▃▂▂▂▁▁▁▁▁▁
Medium text (810 chars) - Bun  11.22 µs/iter  11.28 µs                     █
                       (11.04 µs … 11.46 µs)  11.33 µs █▁▁▁██▁█▁▁▁▁█▁█▁▁█▁▁█

summary
  Medium text (810 chars) - Bun
   50.62x faster than Medium text (810 chars) - npm

-------------------------------------------- -------------------------------
Long text (8100 chars) - npm    7.66 ms/iter   7.76 ms     ▂▂▅█   ▅
                         (7.31 ms … 8.10 ms)   8.06 ms ▃▃▄▃█████▇▇███▃▆▆▆▄▁▃
Long text (8100 chars) - Bun  112.14 µs/iter 113.50 µs        █
                     (102.50 µs … 146.04 µs) 124.92 µs ▁▁▁▁▁▁██▇▅█▃▂▂▂▂▁▁▁▁▁

summary
  Long text (8100 chars) - Bun
   68.27x faster than Long text (8100 chars) - npm

-------------------------------------------- -------------------------------
Colored short - npm            28.46 µs/iter  28.56 µs              █
                       (27.90 µs … 29.34 µs)  28.93 µs ▆▁▆▁▁▆▁▁▆▆▆▁▆█▁▁▁▁▁▁▆
Colored short - Bun           861.64 ns/iter 867.54 ns         ▂  ▇█▄▂
                     (839.68 ns … 891.12 ns) 882.04 ns ▃▅▄▅▆▆▇▆██▇████▆▃▅▅▅▂

summary
  Colored short - Bun
   33.03x faster than Colored short - npm

-------------------------------------------- -------------------------------
Colored medium - npm          557.84 µs/iter 562.63 µs      ▂▃█▄
                     (508.08 µs … 911.92 µs) 637.96 µs ▁▁▁▂▄█████▅▂▂▁▁▁▁▁▁▁▁
Colored medium - Bun           14.91 µs/iter  14.94 µs ██  ████ ██ █      ██
                       (14.77 µs … 15.17 µs)  15.06 µs ██▁▁████▁██▁█▁▁▁▁▁▁██

summary
  Colored medium - Bun
   37.41x faster than Colored medium - npm

-------------------------------------------- -------------------------------
Colored long - npm              7.84 ms/iter   7.90 ms       █  ▅
                         (7.53 ms … 8.38 ms)   8.19 ms ▂▂▂▄▃▆██▇██▇▃▂▃▃▃▄▆▂▂
Colored long - Bun            176.73 µs/iter 175.42 µs       █
                       (162.50 µs … 1.37 ms) 204.46 µs ▁▁▂▄▇██▅▂▂▂▁▁▁▁▁▁▁▁▁▁

summary
  Colored long - Bun
   44.37x faster than Colored long - npm

-------------------------------------------- -------------------------------
Hard wrap long - npm            8.05 ms/iter   8.12 ms       ▃ ▇█
                         (7.67 ms … 8.53 ms)   8.50 ms ▄▁▁▁▃▄█████▄▃▂▆▄▃▂▂▂▂
Hard wrap long - Bun          111.85 µs/iter 112.33 µs         ▇█
                     (101.42 µs … 145.42 µs) 123.88 µs ▁▁▁▁▁▁▁████▄▃▂▂▂▁▁▁▁▁

summary
  Hard wrap long - Bun
   72.01x faster than Hard wrap long - npm

-------------------------------------------- -------------------------------
Hard wrap colored - npm         8.82 ms/iter   8.92 ms   ▆ ██
                         (8.55 ms … 9.47 ms)   9.32 ms ▆▆████▆▆▄▆█▄▆▄▄▁▃▁▃▄▃
Hard wrap colored - Bun       174.38 µs/iter 175.54 µs   █ ▂
                     (165.75 µs … 210.25 µs) 199.50 µs ▁▃█▆███▃▂▃▂▂▂▂▂▁▁▁▁▁▁

summary
  Hard wrap colored - Bun
   50.56x faster than Hard wrap colored - npm

-------------------------------------------- -------------------------------
Japanese (full-width) - npm    51.00 µs/iter  52.67 µs    █▂   █▄
                      (40.71 µs … 344.88 µs)  66.13 µs ▁▁▃██▄▃▅██▇▄▃▄▃▂▂▁▁▁▁
Japanese (full-width) - Bun     7.46 µs/iter   7.46 µs       █
                        (6.50 µs … 34.92 µs)   9.38 µs ▁▁▁▁▁██▆▂▁▂▁▁▁▁▁▁▁▁▁▁

summary
  Japanese (full-width) - Bun
   6.84x faster than Japanese (full-width) - npm

-------------------------------------------- -------------------------------
Emoji text - npm              173.63 µs/iter 222.17 µs   █
                     (129.42 µs … 527.25 µs) 249.58 µs ▁▃█▆▃▃▃▁▁▁▁▁▁▁▂▄▆▄▂▂▁
Emoji text - Bun                9.42 µs/iter   9.47 µs           ██
                         (9.32 µs … 9.52 µs)   9.50 µs █▁▁███▁▁█▁██▁▁▁▁██▁▁█

summary
  Emoji text - Bun
   18.44x faster than Emoji text - npm

-------------------------------------------- -------------------------------
Hyperlink (OSC 8) - npm       208.00 µs/iter 254.25 µs   █
                     (169.58 µs … 542.17 µs) 281.00 µs ▁▇█▃▃▂▂▂▁▁▁▁▁▁▁▃▃▅▃▂▁
Hyperlink (OSC 8) - Bun         6.00 µs/iter   6.06 µs      █           ▄
                         (5.88 µs … 6.11 µs)   6.10 µs ▅▅▅▁▅█▅▁▅▁█▁▁▅▅▅▅█▅▁█

summary
  Hyperlink (OSC 8) - Bun
   34.69x faster than Hyperlink (OSC 8) - npm

-------------------------------------------- -------------------------------
No trim long - npm              8.32 ms/iter   8.38 ms  █▇
                        (7.61 ms … 13.67 ms)  11.74 ms ▃████▄▂▃▂▂▃▁▁▁▁▁▁▁▁▁▂
No trim long - Bun             93.92 µs/iter  94.42 µs           █▂
                      (82.75 µs … 162.38 µs) 103.83 µs ▁▁▁▁▁▁▁▁▄███▄▃▂▂▁▁▁▁▁

summary
  No trim long - Bun
   88.62x faster than No trim long - npm
```

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-16 16:12:23 -08:00
robobun
05434add3e fix(bundler): legal comments no longer break module.exports = require() redirect optimization (#26113)
## Summary

- Legal comments (`/*! ... */`) were preventing the `module.exports =
require()` redirect optimization from being applied to CommonJS wrapper
modules
- The fix scans all parts to find a single meaningful statement,
skipping comments, directives, and empty statements
- If exactly one such statement exists and matches the `module.exports =
require()` pattern, the redirect optimization is now applied

This fixes an issue where wrapper modules like Express's `index.js`:
```js
/*!
 * express
 * MIT Licensed
 */

'use strict';

module.exports = require('./lib/express');
```

were generating unnecessary wrapper functions instead of being
redirected directly to the target module.

## Test plan

- [x] Added regression test in `test/regression/issue/3179.test.ts`
- [x] Verified test fails with system bun and passes with the fix
- [x] Tested manual reproduction scenario

Fixes #3179

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 15:57:27 -08:00
robobun
7e9fa4ab08 feat(scripts): enhance buildkite-failures.ts to fetch and save full logs (#26177)
## Summary
- Fetches complete logs from BuildKite's public API (no token required)
- Saves logs to `/tmp/bun-build-{number}-{platform}-{step}.log`
- Shows log file path in output for each failed job
- Displays brief error summary (unique errors, max 5)
- Adds help text with usage examples (`--help`)
- Groups failures by type (build/test/other)
- Shows annotation counts with link to view full annotations
- Documents usage in CLAUDE.md

## Test plan
- [x] Tested with build #35051 (9 failed jobs)
- [x] Verified logs saved to `/tmp/bun-build-35051-*.log`
- [x] Verified error extraction and deduplication works
- [x] Verified `--help` flag shows usage

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 15:37:31 -08:00
Dylan Conway
6f6f76f0c0 fix(macho): only update signature size on ARM64 with codesigning enabled (#26175)
The signature size adjustment was being applied unconditionally, but it
should only happen when building for ARM64 and codesigning is enabled.
This prevents incorrect offset calculations on non-ARM64 platforms.

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-16 14:18:48 -08:00
vadim-anthropic
8da29af1ae feat(node:inspector): implement Profiler API (#25939) 2026-01-16 10:12:28 -08:00
robobun
bcbb4fc35d fix(cli): show helpful error for unsupported file types instead of "File not found" (#26126)
## Summary

- When running `bun <file>` on a file with an unsupported type (e.g.,
`.css`, `.yaml`, `.toml`), Bun now shows a helpful error message instead
of the misleading "File not found"
- Tracks when a file is resolved but has a loader that can't be run
directly
- Shows the actual file path and file type in the error message

**Before:**
```
error: File not found "test.css"
```

**After:**
```
error: Cannot run "/path/to/test.css"
note: Bun cannot run css files directly
```

## Test plan

- [x] Added regression test in `test/regression/issue/1365.test.ts`
- [x] Test verifies unsupported files show "Cannot run" error
- [x] Test verifies nonexistent files still show "File not found"
- [x] Test fails with `USE_SYSTEM_BUN=1` and passes with debug build

Fixes #1365

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-01-15 23:40:45 -08:00
robobun
ad4aabf486 fix(Request): set cache and mode options correctly (#26099)
## Summary

- `new Request()` was ignoring `cache` and `mode` options, always
returning hardcoded default values ("default" for cache, "navigate" for
mode)
- Added proper storage and handling of these options in the Request
struct
- Both options are now correctly parsed from the constructor init object
and preserved when cloning

Fixes #2993
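
A quick sketch of the now-supported behavior, using only the standard Fetch API surface:

```typescript
// Both options are read from the init object and survive cloning:
const req = new Request("https://example.com/data", {
  cache: "no-store",
  mode: "same-origin",
});
console.log(req.cache, req.mode); // "no-store" "same-origin"

const copy = req.clone();
console.log(copy.cache, copy.mode); // "no-store" "same-origin"
```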

## Test plan

- [x] Added regression test in `test/regression/issue/2993.test.ts`
- [x] Tests verify all valid cache values: "default", "no-store",
"reload", "no-cache", "force-cache", "only-if-cached"
- [x] Tests verify all valid mode values: "same-origin", "no-cors",
"cors", "navigate"
- [x] Tests verify default values (cache: "default", mode: "cors")
- [x] Tests verify `Request.clone()` preserves options
- [x] Tests verify `new Request(request)` preserves options
- [x] Tests verify `new Request(request, init)` allows overriding
options


🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 23:02:16 -08:00
robobun
5b25a3abdb fix: don't call Bun.serve() on exported Server instances (#26144)
## Summary

- Fixes the entry point wrapper to distinguish between Server
configuration objects and already-running Server instances
- When a Server object from `Bun.serve()` is exported as the default
export, Bun no longer tries to call `Bun.serve()` on it again

## Root Cause

The entry point wrapper in `src/bundler/entry_points.zig` checks if the
default export has a `fetch` method to auto-start servers:

```javascript
if (typeof entryNamespace?.default?.fetch === 'function' || ...) {
   const server = Bun.serve(entryNamespace.default);
}
```

However, `Server` objects returned from `Bun.serve()` also have a
`fetch` method (for programmatic request handling), so the wrapper
mistakenly tried to call `Bun.serve(server)` on an already-running
server.

## Solution

Added an `isServerConfig()` helper that checks:
1. The object has a `fetch` function or `app` property (config object
indicators)
2. The object does NOT have a `stop` method (Server instance indicator)

Server instances have `stop`, `reload`, `upgrade`, etc. methods, while
config objects don't.
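
A rough TypeScript sketch of that check; the real helper lives in the generated entry point wrapper, so names and details here are illustrative:

```typescript
function isServerConfig(value: unknown): boolean {
  if (typeof value !== "object" || value === null) return false;
  const obj = value as Record<string, unknown>;
  const looksLikeConfig = typeof obj.fetch === "function" || "app" in obj;
  const looksLikeRunningServer = typeof obj.stop === "function"; // Server instances expose stop()
  return looksLikeConfig && !looksLikeRunningServer;
}
```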

## Test plan

- [x] Added regression test that verifies exporting a Server as default
export works without errors
- [x] Added test that verifies config objects with `fetch` still trigger
auto-start
- [x] Verified test fails with `USE_SYSTEM_BUN=1` and passes with the
fix

Fixes #26142

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 20:28:32 -08:00
robobun
12243b9715 fix(ws): pass selected protocol from handleProtocols to upgrade response (#26118)
## Summary
- Fixes the `handleProtocols` option not setting the selected protocol
in WebSocket upgrade responses
- Removes duplicate protocol header values in responses
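
For example, a server using the `ws` package's `handleProtocols` option should now see its selection echoed back in the `Sec-WebSocket-Protocol` response header (a sketch, assuming `ws` is installed):

```typescript
import { WebSocketServer } from "ws";

const wss = new WebSocketServer({
  port: 0,
  // Pick one of the protocols offered by the client; the chosen value is now
  // included in the upgrade response instead of being dropped.
  handleProtocols: (protocols) => (protocols.has("graphql-ws") ? "graphql-ws" : false),
});
```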

## Test plan
- Added regression tests in `test/regression/issue/3613.test.ts`
- Verified using fetch to check actual response headers contain the
correct protocol

Fixes #3613

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-15 18:22:01 -08:00
Ciro Spaciari
5d3f37d7ae feat(s3): add Content-Encoding header support for S3 uploads (#26149)
## Summary

Add support for setting the `Content-Encoding` header in S3 `.write()`
and `.writer()` calls, following the same pattern as
`Content-Disposition`.

This allows users to specify the encoding of uploaded content:

```typescript
// With .write()
await s3file.write("compressed data", { contentEncoding: "gzip" });

// With .writer()
const writer = s3file.writer({ contentEncoding: "gzip" });
writer.write("compressed data");
await writer.end();

// With bucket.write()
await bucket.write("key", data, { contentEncoding: "br" });
```

## Implementation

- Extended `SignedHeaders.Key` from 6 bits to 7 bits (64→128
combinations) to accommodate the new header
- Added `content_encoding` to `S3CredentialsWithOptions`, `SignOptions`,
and `SignResult` structs
- Updated `CanonicalRequest` format strings to include
`content-encoding` in AWS SigV4 signing
- Added `getContentEncoding()` method to `Headers` for fetch-based S3
uploads
- Expanded `_headers` array from 9 to 10 elements
- Pass `content_encoding` through all S3 upload paths (upload,
uploadStream, writableStream)

## Test plan

- Added tests for "should be able to set content-encoding" 
- Added tests for "should be able to set content-encoding in writer"
- Tests verify the Content-Encoding header is properly set on uploaded
objects via presigned URL fetch
- All 4 new tests pass with `bun bd test` and fail with
`USE_SYSTEM_BUN=1` (confirming the feature is new)

## Changelog

Added `contentEncoding` option to S3 `.write()` and `.writer()` methods,
allowing users to set the `Content-Encoding` header when uploading
objects.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 18:09:33 -08:00
robobun
2a483631fb fix(http): allow body on GET/HEAD/OPTIONS requests for Node.js compatibility (#26145)
## Summary

Fixed `http.request()` and `https.request()` hanging indefinitely when a
GET request includes a body (via `req.write()`).

### Approach

Instead of adding a public `allowGetBody` option to `fetch()`, this PR
creates a dedicated internal function `nodeHttpClient` that:
- Uses a comptime parameter to avoid code duplication
- Allows body on GET/HEAD/OPTIONS requests (Node.js behavior)
- Is only accessible internally via `$newZigFunction`
- Keeps the public `Bun.fetch()` API unchanged (Web Standards compliant)

### Implementation

1. **fetch.zig**: Refactored to use `fetchImpl(comptime allow_get_body:
bool, ...)` shared implementation
- `Bun__fetch_()` calls `fetchImpl(false, ...)` - validates body on
GET/HEAD/OPTIONS
- `nodeHttpClient()` calls `fetchImpl(true, ...)` - allows body on
GET/HEAD/OPTIONS

2. **_http_client.ts**: Uses `$newZigFunction("fetch.zig",
"nodeHttpClient", 2)` for HTTP requests

## Test plan

- [x] Added regression test at `test/regression/issue/26143.test.ts`
- [x] Test verifies GET requests with body complete successfully
- [x] Test verifies HEAD requests with body complete successfully
- [x] Test verifies `Bun.fetch()` still throws on GET with body (Web
Standards)
- [x] Test fails on current release (v1.3.6) and passes with this fix

Fixes #26143

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Ciro Spaciari <ciro.spaciari@gmail.com>
Co-authored-by: Ciro Spaciari MacBook <ciro@anthropic.com>
2026-01-15 17:46:07 -08:00
robobun
cdcff11221 fix(cli): handle BrokenPipe gracefully in bun completions (#26097)
## Summary
- Fixes `bun completions` crashing with `BrokenPipe` error when piped to
commands that close stdout early (e.g., `bun completions | true`)
- The fix catches `error.BrokenPipe` and exits cleanly with status 0
instead of propagating the error

## Test plan
- [x] Added regression test that pipes `bun completions` to `true` and
verifies no BrokenPipe error occurs
- [x] Verified test fails with system Bun and passes with fixed build

Fixes #2977

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-01-15 17:44:59 -08:00
robobun
dfa704cc62 fix(s3): add contentDisposition and type support to presign() (#25999)
## Summary
- S3 `File.presign()` was ignoring the `contentDisposition` and `type`
options
- These options are now properly included as
`response-content-disposition` and `response-content-type` query
parameters in the presigned URL
- Added `content_type` field to `SignOptions` and
`S3CredentialsWithOptions` structs
- Added parsing for the `type` option in `getCredentialsWithOptions()`
- Query parameters are added in correct alphabetical order for AWS
Signature V4 compliance
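
A hedged sketch of how the options are used; the bucket name is a placeholder and credentials are assumed to come from the environment:

```typescript
import { S3Client } from "bun";

const bucket = new S3Client({ bucket: "my-bucket" });

const url = bucket.file("report.pdf").presign({
  expiresIn: 3600,
  type: "application/pdf", // emitted as response-content-type
  contentDisposition: 'attachment; filename="report.pdf"', // emitted as response-content-disposition
});
```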

## Test plan
- [x] Added regression test in `test/regression/issue/25750.test.ts`
- [x] Verified tests pass with debug build: `bun bd test
test/regression/issue/25750.test.ts`
- [x] Verified tests fail with system bun (without fix):
`USE_SYSTEM_BUN=1 bun test test/regression/issue/25750.test.ts`
- [x] Verified existing S3 presign tests still pass
- [x] Verified existing S3 signature order tests still pass

Fixes #25750

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 15:33:43 -08:00
SUZUKI Sosuke
f01467d3dc perf(buffer): optimize Buffer.from(array) by using setFromArrayLike directly (#26135)
## Summary

Optimizes `Buffer.from(array)` by bypassing `JSC::construct()` overhead
(~30ns) and leveraging JSC's internal array optimizations.
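
The fast path applies to plain JS number arrays passed directly to `Buffer.from`:

```typescript
// Int32-shaped and double-shaped arrays are bulk-copied without the construct() detour:
const fromInts = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8]);
const fromDoubles = Buffer.from([0.25, 1.5, 2.75, 255.9]); // values are truncated to 0-255 bytes
console.log(fromInts.length, fromDoubles[3]); // 8 255
```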

## Changes

- For JSArray inputs, directly use `setFromArrayLike()` which internally
detects array indexing types (Int32Shape/DoubleShape) and uses bulk copy
operations (`copyFromInt32ShapeArray`/`copyFromDoubleShapeArray`)
- Array-like objects and iterables continue to use the existing slow
path
- Added mitata benchmark for measuring performance

## Benchmark Results

| Test | Before | After | Improvement |
|------|--------|-------|-------------|
| Buffer.from(int32[8]) | ~85ns | ~43ns | ~50% faster |
| Buffer.from(int32[64]) | ~207ns | ~120ns | ~42% faster |
| Buffer.from(int32[1024]) | ~1.85μs | ~1.32μs | ~29% faster |
| Buffer.from(double[8]) | ~86ns | ~50ns | ~42% faster |
| Buffer.from(double[64]) | ~212ns | ~151ns | ~29% faster |

Bun is now faster than Node.js for these operations.

## Test

All 449 buffer tests pass.
2026-01-15 12:10:47 -08:00
Jarred Sumner
b268004715 Upgrade WebKit to d5bd162d9ab2 (#25958) 2026-01-15 10:26:43 -08:00
Alistair Smith
97feb66189 Double the hardcoded max http header count (#26130)
### What does this PR do?

Doubles the hardcoded max http header count

### How did you verify your code works?

ci (?)

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-15 00:35:37 -08:00
Jarred Sumner
8466f12671 Update outdated doc. io_uring has not been required in years. 2026-01-15 00:34:49 -08:00
robobun
ed75a0e2d1 fix(http): use correct client certificate for mTLS fetch() requests (#26129)
## Summary
- Fixes bug where `fetch()` with mTLS would use the first client
certificate for all subsequent requests to the same host, ignoring
per-request `tls` options
- Corrects `SSLConfig.isSame()` to properly compare all fields (was
incorrectly returning early when both optional fields were null)
- Sets `disable_keepalive=true` when reusing cached SSL contexts to
prevent socket pooling issues

Fixes #26125
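
A hedged example of the per-request pattern that is now honored; the certificates are placeholder PEM strings:

```typescript
declare const certA: string, keyA: string, certB: string, keyB: string; // placeholder PEM material

// Each request now uses its own client certificate instead of the first cached SSL context:
const resA = await fetch("https://mtls.internal.example/", { tls: { cert: certA, key: keyA } });
const resB = await fetch("https://mtls.internal.example/", { tls: { cert: certB, key: keyB } });
```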

## Test plan
- [x] Added regression test `test/regression/issue/26125.test.ts`
- [x] Verified test fails with system Bun 1.3.6 (demonstrates the bug)
- [x] Verified test passes with patched build

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 19:22:08 -08:00
Jarred Sumner
fdb956f2fe Update CLAUDE.md 2026-01-14 19:09:29 -08:00
Jarred Sumner
d4e5197208 Fix exception scope verification error 2026-01-14 19:09:18 -08:00
robobun
7d640cccd1 fix(tls): check SSL_is_init_finished directly for _secureEstablished (#26086)
## Summary

Fixes flaky test
`test/js/node/test/parallel/test-http-url.parse-https.request.js` where
`request.socket._secureEstablished` sometimes returned `false` even when
the TLS handshake had completed.

## Root Cause

There's a race condition between when the TLS handshake completes and
when the `on_handshake` callback fires. The HTTP request handler could
start executing before the callback set `httpResponseData->isAuthorized
= true`, causing `_secureEstablished` to return `false`.

## Previous Failed Approach (PR #25946)

Attempted to trigger the handshake callback earlier in `ssl_on_data`,
but this broke gRPC and HTTP/2 tests because the callback has side
effects that disrupted the data processing.

## This Fix

Instead of changing when the callback fires, directly query OpenSSL's
`SSL_is_init_finished()` when checking `_secureEstablished`:

1. Added `us_socket_is_ssl_handshake_finished()` API that wraps
`SSL_is_init_finished()`
2. Modified `JSNodeHTTPServerSocket::isAuthorized()` to use this
function directly

This approach is non-invasive - it doesn't change any TLS processing
logic, just reads the correct state at the point where it's needed.

## Test plan

- [x] Original flaky test passes under high parallelism (50/50 runs)
- [x] gRPC tests pass (`test-channel-credentials.test.ts`)
- [x] All `test-http-url.parse-*.js` tests pass
- [x] HTTPS tests pass (`test-https-simple.js`, `test-https-agent.js`)

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-01-14 18:48:09 -08:00
robobun
65a9a2a580 fix(process): emit EPIPE error on broken pipe for process.stdout.write() (#26124)
## Summary
- Fixes the broken pipe behavior for `process.stdout.write()` to match
Node.js
- When writing to a broken pipe (stdout destroyed), the process now
properly exits with code 1 instead of 0
- EPIPE errors are now properly propagated to JavaScript via the
stream's error event

## Test plan
- [x] Added regression test `test/regression/issue/1632.test.ts`
- [x] Verified test fails with system bun (exit code 0) and passes with
debug build (exit code 1)
- [x] Verified `console.log` still ignores errors (uses `catch {}`) and
doesn't crash
- [x] Verified callback-based `process.stdout.write()` receives EPIPE
error

## Changes
1. **`src/io/PipeWriter.zig`**: Return EPIPE as an error instead of
treating it as successful end-of-file (`.done`)
2. **`src/shell/IOWriter.zig`**: Track `broken_pipe` flag when EPIPE is
received via `onError` callback, and propagate error properly
3. **`src/js/internal/fs/streams.ts`**: When a write fails without a
callback, emit the error on the stream via `this.destroy(err)` to match
Node.js behavior
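
For example, a write to a broken pipe now surfaces the error instead of silently succeeding:

```typescript
// With stdout attached to a closed pipe (e.g. `bun script.ts | head -1`):
process.stdout.write("lots of output\n", (err) => {
  if (err) {
    // err.code === "EPIPE"; the stream also emits 'error', so the process exits non-zero
    process.exitCode = 1;
  }
});
```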

Fixes #1632

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 18:37:54 -08:00
robobun
393198d190 fix(install): quote workspace: versions in yarn lockfile (#26106)
## Summary
- Add colon (`:`) to the list of characters that require quoting in yarn
lockfile version strings
- This fixes yarn parse errors when using `workspace:*` dependencies in
monorepo setups

Fixes #3192

## Test plan
- [x] Added regression test that verifies `workspace:*` versions are
properly quoted
- [x] Test fails with system bun (before fix)
- [x] Test passes with debug build (after fix)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 18:22:32 -08:00
robobun
798b48c898 fix(runtime): exclude globalThis from auto-serve detection (#26107)
## Summary
- When a module exports `globalThis` (e.g., `module.exports =
globalThis`), Bun's auto-serve detection incorrectly triggered because
`globalThis.fetch` is the Fetch API function
- Scripts that export globalThis (like `core-js/es/global-this.js`)
would start a development server on port 3000 instead of exiting
normally
- Added explicit check to skip auto-serve when the default export is
`globalThis` itself

Fixes #440

## Test plan
- [x] Added test case `test/regression/issue/440.test.ts` that verifies:
  - `module.exports = globalThis` does not start a server
  - `export default globalThis` does not start a server
- [x] Verified test fails with system Bun (without fix)
- [x] Verified test passes with debug build (with fix)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 17:41:21 -08:00
robobun
6104705f5f fix(yaml): fix memory leak in YAML parser (#26090)
## Summary
- Fix memory leak in YAML parser that caused segfaults after high-volume
parsing
- Added `defer parser.deinit()` to free internal data structures
(context, block_indents, anchors, tag_handles, whitespace_buf)
- Fixes #26088
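
A minimal reproduction sketch, assuming the `Bun.YAML.parse` entry point exercises the fixed parser:

```typescript
// High-volume parsing previously accumulated internal parser state and eventually crashed:
for (let i = 0; i < 100_000; i++) {
  Bun.YAML.parse("name: bun\nitems: [1, 2, 3]\n");
}
```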

## Test plan
- [x] Added regression test at `test/regression/issue/26088.test.ts`
- [x] Verified YAML parsing still works correctly with debug build
- [x] Ran subset of YAML tests to confirm no regressions


🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-14 17:13:30 -08:00
robobun
11aedbe402 fix(fs.watch): emit 'change' events for files in watched directories on Linux (#26009)
## Summary
- Fixes #3657 - `fs.watch` on directory doesn't emit `change` events for
files created after watch starts

When watching a directory with `fs.watch`, files created after the watch
was established would only emit a 'rename' event on creation, but
subsequent modifications would not emit 'change' events.

## Root Cause

The issue was twofold:
1. `watch_dir_mask` in INotifyWatcher.zig was missing `IN.MODIFY`, so
the inotify system call was not subscribed to file modification events
for watched directories.
2. When directory events were processed in path_watcher.zig, all events
were hardcoded to emit 'rename' instead of properly distinguishing
between file creation/deletion ('rename') and file modification
('change').

## Changes

- Adds `IN.MODIFY` to `watch_dir_mask` to receive modification events
- Adds a `create` flag to `WatchEvent.Op` to track `IN.CREATE` events
- Updates directory event processing to emit 'change' for pure write
events and 'rename' for create/delete/move events
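
For example, watching a directory now reports modifications to files created after the watch started (paths here are illustrative):

```typescript
import fs from "node:fs";
import os from "node:os";
import path from "node:path";

const dir = fs.mkdtempSync(path.join(os.tmpdir(), "watched-"));
const watcher = fs.watch(dir, (eventType, filename) => {
  // 'rename' on create/delete/move, 'change' on content modification
  console.log(eventType, filename);
});

const file = path.join(dir, "late.txt");
fs.writeFileSync(file, "created");   // -> rename late.txt
fs.appendFileSync(file, " updated"); // -> change late.txt (previously never emitted on Linux)

setTimeout(() => watcher.close(), 100);
```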

## Test plan
- [x] Added regression test `test/regression/issue/3657.test.ts`
- [x] Verified test fails with system Bun (before fix)
- [x] Verified test passes with debug build (after fix)
- [x] Verified manual reproduction from issue now works correctly

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 16:46:20 -08:00
robobun
05df51ff84 fix(runner): filter out non-JS files from node tests (#26092)
## Summary
- `isNodeTest()` was only checking if the path included the node test
directories but not verifying the file was actually a JavaScript file
- This caused `test/js/node/test/parallel/CLAUDE.md` to be incorrectly
treated as a test file
- Added `isJavaScript(path)` check to filter out non-JS files

## Test plan
- [x] Verify CLAUDE.md is no longer picked up as a test file

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 16:28:29 -08:00
robobun
b72af3d329 fix(compile): respect autoloadBunfig: false when execArgv is present (#26017)
## Summary

Fixes #25640

- Fixed bug where compiled binaries with `autoloadBunfig: false` would
still load `bunfig.toml` when `execArgv` was also provided
- The issue was that `Command.init(.AutoCommand)` was called to parse
execArgv, which loaded bunfig before checking the disable flag

## Test plan

- [x] Added tests for `autoloadBunfig: false` with `execArgv` in
`test/bundler/bundler_compile_autoload.test.ts`
- [x] Verified tests pass with debug build: `bun bd test
test/bundler/bundler_compile_autoload.test.ts`
- [x] Verified tests fail with system bun (demonstrates fix works):
`USE_SYSTEM_BUN=1 bun test test/bundler/bundler_compile_autoload.test.ts
-t "AutoloadBunfigDisabledWithExecArgv"`
- [x] All existing autoload tests still pass (22 tests total)

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 16:08:49 -08:00
robobun
f27c6768ce fix(bundler): include lazy chunks in frontend.files for compiled fullstack builds (#26024)
## Summary

- Fixed lazy-loaded chunks from dynamic imports not appearing in
`frontend.files` when using `--splitting` with `--compile` in fullstack
builds
- Updated `computeChunks.zig` to mark non-entry-point chunks as browser
chunks when they contain browser-targeted files
- Updated `HTMLImportManifest.zig` to include browser chunks from server
builds in the files manifest

Fixes #25628

## Test plan

- [ ] Added regression test `test/regression/issue/25628.test.ts` that
verifies lazy chunks appear in `frontend.files`
- [ ] Manually verified: system bun reports `CHUNK_COUNT:1` (bug), debug
bun reports `CHUNK_COUNT:2` (fix)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 16:08:06 -08:00
robobun
c57d0f73b4 fix(css): preserve logical border-radius properties (#26006)
## Summary
- CSS logical border-radius properties (`border-start-start-radius`,
`border-start-end-radius`, `border-end-end-radius`,
`border-end-start-radius`) were being silently dropped when processed by
the CSS bundler
- The bug was in `src/css/properties/border_radius.zig` where
`VendorPrefix{}` (all fields false) was used instead of `VendorPrefix{
.none = true }` when computing prefixes for logical properties
- This caused the properties to be dropped by a later `isEmpty()` check
since an empty prefix struct was returned

## Test plan
- [x] Added regression test `test/regression/issue/25785.test.ts`
- [x] Verified test fails with system Bun (`USE_SYSTEM_BUN=1 bun test`)
- [x] Verified test passes with fixed bun-debug (`bun bd test`)

Fixes #25785

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-01-14 13:34:31 -08:00
robobun
6a27a25e5b fix(debugger): retroactively report tests when TestReporter.enable is called (#25986)
## Summary
- Fixes #25972: TestReporter domain events not firing when debugger
connects after test discovery

When a debugger client connects and enables the TestReporter domain
after tests have been discovered (e.g., using `--inspect` instead of
`--inspect-wait`), the `TestReporter.found`, `TestReporter.start`, and
`TestReporter.end` events would not fire. This is because tests
discovered without an enabled debugger have `test_id_for_debugger = 0`,
and the event emission code checks for non-zero IDs.

The fix retroactively assigns test IDs and reports discovered tests when
`TestReporter.enable` is called:

1. Check if there's an active test file in collection or execution phase
2. Iterate through the test tree (DescribeScopes and test entries)
3. Assign unique `test_id_for_debugger` values to each test/describe
4. Send `TestReporter.found` events for each discovered test

## Test plan
- [ ] Verify IDE integrations can now receive test telemetry when
connecting after test discovery
- [ ] Ensure existing `--inspect-wait` behavior continues to work
(debugger enabled before discovery)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 13:32:51 -08:00
robobun
2b86ab0cd3 fix(shell): implement long listing format for ls -l builtin (#25991)
## Summary
- Implements the `-l` (long listing) flag functionality for the shell
`ls` builtin
- The flag was being parsed but never used - output was identical to
short format
- Now displays proper long listing format: file type, permissions, hard
link count, UID, GID, size, modification time, and filename

## Test plan
- [x] Added regression test in `test/regression/issue/25831.test.ts`
- [x] Test passes with debug build: `bun bd test
test/regression/issue/25831.test.ts`
- [x] Test fails with system bun (confirming the bug exists):
`USE_SYSTEM_BUN=1 bun test test/regression/issue/25831.test.ts`

Example output with fix:
```
$ bun -e 'import { $ } from "bun"; console.log(await $`ls -l`.text())'
drwxr-xr-x   2  1000  1000     4096 Jan 12 15:30 subdir
-rw-r--r--   1  1000  1000       11 Jan 12 15:30 file.txt
```

Fixes #25831

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 13:31:06 -08:00
robobun
6e6896510a fix(cli): prevent --version/--help interception in standalone executables with compile-exec-argv (#26083)
## Summary

Fixes https://github.com/oven-sh/bun/issues/26082

- Fixes a bug where standalone executables compiled with
`--compile-exec-argv` would intercept `--version`, `-v`, `--help`, and
`-h` flags before user code could handle them
- CLI applications using libraries like `commander` can now properly
implement their own version and help commands

## Root Cause

When `--compile-exec-argv` is used, `Command.init` was being called with
`.AutoCommand`, which parses ALL arguments (including user arguments).
The `Arguments.parse` function intercepts `--version`/`--help` flags for
`AutoCommand`, preventing them from reaching user code.

## Fix

Temporarily set `bun.argv` to only include the executable name +
embedded exec argv options when calling `Command.init`. This ensures:
1. Bun's embedded options (like `--smol`, `--use-system-ca`) are
properly parsed
2. User arguments (including `--version`/`--help`) are NOT intercepted
by Bun's parser
3. User arguments are properly passed through to user code

## Test plan

- [x] Added tests for `--version`, `-v`, `--help`, and `-h` flags in
`compile-argv.test.ts`
- [x] Verified tests fail with `USE_SYSTEM_BUN=1` (proving the bug
exists)
- [x] Verified tests pass with debug build
- [x] Verified existing compile-argv tests still pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 13:10:53 -08:00
robobun
5a71ead8a2 Add CLAUDE.md for Node.js compatibility tests (#26084)
## Summary
- Adds a CLAUDE.md file to `test/js/node/test/parallel/` documenting
that these are official Node.js tests
- Explains that these tests should not be modified since they come from
the Node.js repository
- Documents how to run these tests with debug builds (`bun bd
<file-path>` instead of `bun bd test`)

## Test plan
- [x] Verified file was created correctly

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 13:07:38 -08:00
robobun
a9b5f5cbd1 fix(sql): prevent hang in sequential MySQL transactions with returned array queries (#26048)
## Summary

- Fix a hang in sequential MySQL transactions where an INSERT is awaited
followed by a SELECT returned in an array
- The issue occurred because `handleResultSetOK`'s defer block only
called `queue.advance()` without flushing, causing queries added during
the JS callback to not be properly sent
- Changed to call `flushQueue()` instead of just `advance()` to ensure
data is actually sent to the server

Fixes #26030
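
A hedged sketch of the pattern that previously hung; the connection string and table are placeholders:

```typescript
import { SQL } from "bun";

const sql = new SQL("mysql://user:pass@localhost:3306/app");

for (let i = 0; i < 5; i++) {
  await sql.begin(async (tx) => {
    await tx`INSERT INTO events (name) VALUES (${"run-" + i})`;
    // Returning queries in an array after an awaited INSERT is what used to stall
    // the next transaction before the queue was flushed.
    return [tx`SELECT COUNT(*) AS count FROM events`];
  });
}
```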

## Test plan

- Added regression test `test/regression/issue/26030.test.ts` with three
test cases:
- `Sequential transactions with INSERT and returned SELECT should not
hang` - reproduces the exact pattern from the bug report
- `Sequential transactions with returned array of multiple queries` -
tests returning multiple queries in array
- `Many sequential transactions with awaited INSERT and returned SELECT`
- stress tests with 5 sequential transactions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 12:53:04 -08:00
robobun
7333500df8 fix(bundler): rename named function expressions when shadowing outer symbol (#26027)
## Summary
- Fixed a bug where named function expressions were not renamed when
their name shadowed an outer symbol that's referenced inside the
function body
- This caused infinite recursion at runtime when namespace imports were
inlined
- Particularly affected Svelte 5 apps in dev mode

## Test plan
- [x] Added regression test that reproduces the issue
- [x] Verified test fails with system bun and passes with fix
- [x] Ran bundler tests (bundler_regressions, bundler_naming,
bundler_edgecase, bundler_minify) - all pass

## Root cause
The bundler was skipping `function_args` scopes when renaming symbols.
This meant named function expression names (which are declared in the
function_args scope) were never considered for renaming when they
collided with outer symbols.

For example, this code:
```javascript
import * as $ from './lib';
$.doSomething(function get() {
  return $.get(123);  // Should call outer get
});
```

Would be bundled as:
```javascript
function get(x) { return x * 2; } // from lib
doSomething(function get() {
  return get(123);  // Calls itself - infinite recursion!
});
```

Instead of:
```javascript
function get(x) { return x * 2; }
doSomething(function get2() {  // Renamed to avoid collision
  return get(123);  // Correctly calls outer get
});
```

Fixes #25648

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-01-14 12:52:41 -08:00
robobun
e6733333f0 fix(sql): MySQL VARCHAR with binary collations returns string instead of Buffer (#26064)
## Summary

- Fixed MySQL VARCHAR/CHAR/TEXT columns with binary collations (like
`utf8mb4_bin`) being incorrectly returned as `Buffer` instead of
`string`
- The fix checks for `character_set == 63` (binary collation) in
addition to the BINARY flag to properly distinguish true binary types

Fixes #26063

## Root Cause

PR #26011 introduced a fix for binary column handling that checked
`column.flags.BINARY` to determine if data should be returned as
`Buffer`. However, MySQL sets the BINARY flag on VARCHAR/CHAR/TEXT
columns with binary collations (like `utf8mb4_bin`) even though they
should return strings.

The proper way to detect true binary types (BINARY, VARBINARY, BLOB) is
to check if `character_set == 63` (the "binary" collation), not just the
BINARY flag.

## Changes

1. **Text Protocol** (`ResultSet.zig:143-148`): Updated binary check to
`column.flags.BINARY and column.character_set == 63`
2. **Binary Protocol** (`DecodeBinaryValue.zig:154-156`): Added
`character_set` parameter and updated binary check

## Test plan

- [ ] Added regression test `test/regression/issue/26063.test.ts` that
tests VARCHAR, CHAR, and TEXT columns with `utf8mb4_bin` collation
return strings
- [ ] Test verifies that true BINARY/VARBINARY/BLOB columns still return
Buffers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-14 12:50:36 -08:00
Ciro Spaciari
22bebfc467 respect agent options and connectOpts in https module (#25937) 2026-01-14 11:52:53 -08:00
robobun
1800093a64 fix(install): use scope-specific registry for scoped packages in frozen lockfile (#26047)
## Summary
- Fixed `bun install --frozen-lockfile` to use scope-specific registry
for scoped packages when the lockfile has an empty registry URL

When parsing a `bun.lock` file with an empty registry URL for a scoped
package (like `@example/test-package`), bun was unconditionally using
the default npm registry (`https://registry.npmjs.org/`) instead of
looking up the scope-specific registry from `bunfig.toml`.

For example, with this configuration in `bunfig.toml`:
```toml
[install.scopes]
example = { url = "https://npm.pkg.github.com" }
```

And this lockfile entry with an empty registry URL:
```json
"@example/test-package": ["@example/test-package@1.0.0", "", {}, "sha512-AAAA"]
```

bun would try to fetch from
`https://registry.npmjs.org/@example/test-package/-/...` instead of
`https://npm.pkg.github.com/@example/test-package/-/...`.

The fix uses `manager.scopeForPackageName()` (the same pattern used in
`pnpm.zig`) to look up the correct scope-specific registry URL.

## Test plan
- [x] Added regression test `test/regression/issue/026039.test.ts` that
verifies:
  - Scoped packages use the scope-specific registry from `bunfig.toml`
  - Non-scoped packages continue to use the default registry
- [x] Verified test fails with system bun (without fix) and passes with
debug build (with fix)

Fixes #26039

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-01-14 10:22:30 -08:00
Jarred Sumner
967a6a2021 Fix blocking realpathSync call (#26056)
### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-13 23:05:46 -08:00
Jarred Sumner
49d0fbd2de Update 25716.test.ts 2026-01-13 22:38:31 -08:00
Tommy D. Rossi
af2317deb4 fix(bundler): allow reactFastRefresh Bun.build option with non-browser targets (#26035)
### What does this PR do?

Previously, reactFastRefresh was silently ignored when target was not
'browser', even when explicitly enabled. This was confusing as there was
no warning or error.

This change removes the `target == .browser` check, trusting explicit
user intent. If users enable reactFastRefresh with a non-browser target,
the transform will now be applied. If `$RefreshReg$` is not defined at
runtime, it will fail fast with a clear error rather than silently doing
nothing.

Use case: Terminal UIs (like [termcast](https://termcast.app)) need
React Fast Refresh with target: 'bun' for hot reloading in non-browser
environments.

### How did you verify your code works?

Updated the existing test, removing the browser target.
2026-01-13 22:13:17 -08:00
robobun
ab009fe00d fix(init): respect --minimal flag for agent rule files (#26051)
## Summary
- Fixes `bun init --minimal` creating Cursor rules files and CLAUDE.md
when it shouldn't
- Adds regression test to verify `--minimal` only creates package.json
and tsconfig.json

## Test plan
- [x] Verify test fails with system bun (unfixed): `USE_SYSTEM_BUN=1 bun
test test/cli/init/init.test.ts -t "bun init --minimal"`
- [x] Verify test passes with debug build: `bun bd test
test/cli/init/init.test.ts -t "bun init --minimal"`
- [x] All existing init tests pass: `bun bd test
test/cli/init/init.test.ts`

Fixes #26050

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 18:33:42 -08:00
Jarred Sumner
fbd800551b Bump 2026-01-13 15:06:36 -08:00
robobun
113cdd9648 fix(completions): add update command to Fish completions (#25978)
## Summary

- Add the `update` subcommand to Fish shell completions
- Apply the install/add/remove flags (--global, --dry-run, --force,
etc.) to the `update` command

Previously, Fish shell autocompletion for `bun update --gl<TAB>` would
not work because:
1. The `update` command was missing from the list of built-in commands
2. The install/add/remove flags were not being applied to `update`

Fixes #25953

## Test plan

- [x] Verify `update` appears in subcommand completions (`bun <TAB>`)
- [x] Verify `--global` flag completion works (`bun update --gl<TAB>`)
- [x] Verify other install flags work with update (--dry-run, --force,
etc.)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 15:05:00 -08:00
robobun
3196178fa7 fix(timers): add _idleStart property to Timeout object (#26021)
## Summary

- Add `_idleStart` property (getter/setter) to the Timeout object
returned by `setTimeout()` and `setInterval()`
- The property returns a monotonic timestamp (in milliseconds)
representing when the timer was created
- This mimics Node.js's behavior where `_idleStart` is the libuv
timestamp at timer creation time

## Test plan

- [x] Verified test fails with `USE_SYSTEM_BUN=1 bun test
test/regression/issue/25639.test.ts`
- [x] Verified test passes with `bun bd test
test/regression/issue/25639.test.ts`
- [x] Manual verification:
  ```bash
  # Bun with fix - _idleStart exists
./build/debug/bun-debug -e "const t = setTimeout(() => {}, 0);
console.log('_idleStart' in t, typeof t._idleStart); clearTimeout(t)"
  # Output: true number
  
  # Node.js reference - same behavior
node -e "const t = setTimeout(() => {}, 0); console.log('_idleStart' in
t, typeof t._idleStart); clearTimeout(t)"
  # Output: true number
  ```

Closes #25639

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 19:35:11 -08:00
robobun
d530ed993d fix(css): restore handler context after minifying nested rules (#25997)
## Summary
- Fixes handler context not being restored after minifying nested CSS
rules
- Adds regression test for the issue

## Test plan
- [x] Test fails with `USE_SYSTEM_BUN=1 bun test
test/regression/issue/25794.test.ts`
- [x] Test passes with `bun bd test test/regression/issue/25794.test.ts`

Fixes #25794

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 14:55:27 -08:00
Dylan Conway
959169dfaf feat(archive): change API to constructor-based with S3 support (#25940)
## Summary
- Change Archive API from `Bun.Archive.from(data)` to `new
Bun.Archive(data, options?)`
- Change compression options from `{ gzip: true }` to `{ compress:
"gzip", level?: number }`
- Default to no compression when no options provided
- Use `{ compress: "gzip" }` to enable gzip compression (level 6 by
default)
- Add Archive support for S3 and local file writes via `Bun.write()`

## New API

```typescript
// Create archive - defaults to uncompressed tar
const archive = new Bun.Archive({
  "hello.txt": "Hello, World!",
  "data.json": JSON.stringify({ foo: "bar" }),
});

// Enable gzip compression
const compressed = new Bun.Archive(files, { compress: "gzip" });

// Gzip with custom level (1-12)
const maxCompression = new Bun.Archive(files, { compress: "gzip", level: 12 });

// Write to local file
await Bun.write("archive.tar", archive);           // uncompressed by default
await Bun.write("archive.tar.gz", compressed);     // gzipped

// Write to S3
await client.write("archive.tar.gz", compressed);          // S3Client.write()
await Bun.write("s3://bucket/archive.tar.gz", compressed); // S3 URL
await s3File.write(compressed);                            // s3File.write()

// Get bytes/blob (uses compression setting from constructor)
const bytes = await archive.bytes();
const blob = await archive.blob();
```

## TypeScript Types

```typescript
type ArchiveCompression = "gzip";

type ArchiveOptions = {
  compress?: "gzip";
  level?: number;  // 1-12, default 6 when gzip enabled
};
```

## Test plan
- [x] 98 archive tests pass
- [x] S3 integration tests updated to new API
- [x] TypeScript types updated
- [x] Documentation updated with new examples

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-01-12 14:54:21 -08:00
SUZUKI Sosuke
461ad886bd fix(http): fix Strong reference leak in server response streaming (#25965)
## Summary

Fix a memory leak in `RequestContext.doRenderWithBody()` where
`Strong.Impl` memory was leaked when proxying streaming responses
through Bun's HTTP server.

## Problem

When a streaming response (e.g., from a proxied fetch request) was
forwarded through Bun's server:

1. `response_body_readable_stream_ref` was initialized at line 1836
(from `lock.readable`) or line 1841 (via `Strong.init()`)
2. For `.Bytes` streams with `has_received_last_chunk=false`, a **new**
Strong reference was created at line 1902
3. The old Strong reference was **never deinit'd**, causing
`Strong.Impl` memory to leak

This leak accumulated over time with every streaming response proxied
through the server.
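
A minimal sketch of the proxy shape that triggered the leak; ports and payloads are illustrative:

```typescript
const backend = Bun.serve({
  port: 0,
  fetch: () =>
    new Response(
      new ReadableStream({
        async start(controller) {
          controller.enqueue(new TextEncoder().encode("chunk"));
          await Bun.sleep(10); // keep the stream open briefly so the last chunk arrives late
          controller.close();
        },
      }),
    ),
});

// Forwarding the streaming body used to leak one Strong reference per request:
const proxy = Bun.serve({
  port: 0,
  fetch: () => fetch(`http://localhost:${backend.port}/`),
});
```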

## Solution

Add `this.response_body_readable_stream_ref.deinit()` before creating
the new Strong reference. This is safe because:

- `stream` exists as a stack-local variable
- JSC's conservative GC tracks stack-local JSValues
- No GC can occur between consecutive synchronous Zig statements
- Therefore, `stream` won't be collected between `deinit()` and
`Strong.init()`

## Test

Added `test/js/web/fetch/server-response-stream-leak.test.ts` which:
- Creates a backend server that returns delayed streaming responses
- Creates a proxy server that forwards the streaming responses
- Makes 200 requests and checks that ReadableStream objects don't
accumulate
- Fails on system Bun v1.3.5 (202 leaked), passes with the fix

## Related

Similar to the Strong reference leak fixes in:
- #23313 (fetch memory leak)
- #25846 (fetch cyclic reference leak)
2026-01-12 14:41:58 -08:00
Markus Schmidt
b6abbd50a0 fix(Bun.SQL): handle binary columns in MySQL correctly (#26011)
## What does this PR do?
Currently, binary columns are returned as strings, which means they get
corrupted when encoded as UTF-8. This PR returns binary columns as
Buffers, which is what users actually expect and is also consistent with
PostgreSQL and SQLite.
### How did you verify your code works?
I added tests to verify the correct behavior. Before there were no tests
for binary columns at all.

This fixes #23991
2026-01-12 11:56:02 -08:00
Alex Miller
beccd01647 fix(FileSink): add Promise<number> to FileSink.write() return type (#25962)
Co-authored-by: Alistair Smith <hi@alistair.sh>
2026-01-11 12:51:16 -08:00
github-actions[bot]
35eb53994a deps: update sqlite to 3.51.200 (#25957)
## What does this PR do?

Updates SQLite to version 3.51.200

Compare: https://sqlite.org/src/vdiff?from=3.51.1&to=3.51.200

Auto-updated by [this
workflow](https://github.com/oven-sh/bun/actions/workflows/update-sqlite3.yml)

Co-authored-by: Jarred-Sumner <709451+Jarred-Sumner@users.noreply.github.com>
2026-01-10 22:46:17 -08:00
robobun
ebf39e9811 fix(install): prevent use-after-free when retrying failed HTTP requests (#25949) 2026-01-10 18:03:24 -08:00
Ciro Spaciari
b610e80ee0 fix(http): properly handle pipelined data in CONNECT requests (#25938)
Fixes #25862

### What does this PR do?

When a client sends pipelined data immediately after CONNECT request
headers in the same TCP segment, Bun now properly delivers this data to
the `head` parameter of the 'connect' event handler, matching Node.js
behavior.

This enables compatibility with Cap'n Proto's KJ HTTP library used by
Cloudflare's workerd runtime, which pipelines RPC data after CONNECT.
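
For example, a standard Node-style CONNECT proxy can now forward the pipelined bytes it receives in `head`:

```typescript
import http from "node:http";
import net from "node:net";

const proxy = http.createServer();
proxy.on("connect", (req, clientSocket, head) => {
  const [host, port = "443"] = (req.url ?? "").split(":");
  const upstream = net.connect(Number(port), host, () => {
    clientSocket.write("HTTP/1.1 200 Connection Established\r\n\r\n");
    if (head.length > 0) upstream.write(head); // bytes sent right after the CONNECT headers
    upstream.pipe(clientSocket);
    clientSocket.pipe(upstream);
  });
});
proxy.listen(0);
```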

### How did you verify your code works?
![CleanShot 2026-01-09 at 15 30 22@2x](https://github.com/user-attachments/assets/3ffe840e-1792-429c-8303-d98ac3e6912a)

Tests

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-09 19:08:02 -08:00
robobun
7076a49bb1 feat(archive): add TypeScript types, docs, and files() benchmark (#25922)
## Summary

- Add comprehensive TypeScript type definitions for `Bun.Archive` in
`bun.d.ts`
  - `ArchiveInput` and `ArchiveCompression` types
- Full JSDoc documentation with examples for all methods (`from`,
`write`, `extract`, `blob`, `bytes`, `files`)
- Add documentation page at `docs/runtime/archive.mdx`
  - Quickstart examples
  - Creating and extracting archives
  - `files()` method with glob filtering
  - Compression support
  - Full API reference section
- Add Archive to docs sidebar under "Data & Storage"
- Add `files()` benchmark comparing `Bun.Archive.files()` vs node-tar
- Shows ~7x speedup for reading archive contents into memory (59µs vs
434µs)

## Test plan

- [x] TypeScript types compile correctly
- [x] Documentation renders properly in Mintlify format
- [x] Benchmark runs successfully and shows performance comparison
- [x] Verified `files()` method works correctly with both Bun.Archive
and node-tar

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-01-09 19:00:19 -08:00
robobun
d4a966f8ae fix(install): prevent symlink path traversal in tarball extraction (#25584)
## Summary

- Fixes a path traversal vulnerability via symlink when installing
GitHub packages
- Validates symlink targets before creation to ensure they stay within
the extraction directory
- Rejects absolute symlinks and relative paths that would escape the
extraction directory

## Details

When extracting GitHub tarballs, Bun did not validate symlink targets. A
malicious tarball could:
1. Create a symlink pointing outside the extraction directory (e.g.,
`../../../../../../../tmp`)
2. Include a file entry through that symlink path (e.g.,
`symlink-to-tmp/pwned.txt`)

When extracted, the symlink would be created first, then the file would
be written through it, ending up outside the intended package directory
(e.g., `/tmp/pwned.txt`).

### The Fix

Added `isSymlinkTargetSafe()` function that:
1. Rejects absolute symlink targets (starting with `/`)
2. Normalizes the combined path (symlink location + target) and rejects
if the result starts with `..` (would escape)
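
A rough TypeScript sketch of that check (the real implementation is in Bun's Zig extraction code, so this is only illustrative):

```typescript
import path from "node:path";

function isSymlinkTargetSafe(linkPathInArchive: string, target: string): boolean {
  if (path.posix.isAbsolute(target)) return false; // absolute targets always escape
  const resolved = path.posix.normalize(
    path.posix.join(path.posix.dirname(linkPathInArchive), target),
  );
  return resolved !== ".." && !resolved.startsWith("../"); // reject anything that leaves the root
}
```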

## Test plan

- [x] Added regression test
`test/cli/install/symlink-path-traversal.test.ts`
- [x] Tests verify relative path traversal symlinks are blocked
- [x] Tests verify absolute symlink targets are blocked  
- [x] Tests verify safe relative symlinks within the package still work
- [x] Verified test fails with system bun (vulnerable) and passes with
debug build (fixed)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2026-01-09 16:56:31 -08:00
Dylan Conway
7704dca660 feat(build): add --compile-executable-path CLI flag (#25934)
## Summary

Adds a new CLI flag `--compile-executable-path` that allows specifying a
custom Bun executable path for cross-compilation instead of downloading
from the npm registry.

## Usage

```bash
bun build --compile --target=bun-linux-x64 \
  --compile-executable-path=/path/to/bun-linux-x64 app.ts
```

## Motivation

The `executablePath` option was already available in the JavaScript
`Bun.build()` API. This exposes the same functionality from the CLI.

## Changes

- Added `--compile-executable-path <STR>` CLI parameter in
`src/cli/Arguments.zig`
- Added `compile_executable_path` field to `BundlerOptions` in
`src/cli.zig`
- Wired the option through to `StandaloneModuleGraph.toExecutable()` in
`src/cli/build_command.zig`

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-09 16:21:40 -08:00
Jarred Sumner
1e0f51ddcc Revert "feat(shell): add $.trace for analyzing shell commands without execution (#25667)"
This reverts commit 6b5de25d8a.
2026-01-09 16:20:18 -08:00
Ciro Spaciari
32a76904fe remove agent in global WebSocket add agent support in ws module (#25935)
### What does this PR do?
Removes the `agent` option from the global WebSocket (in Node.js it uses a
dispatcher, not an agent) and adds `agent` support to the `ws` module
(which actually uses an agent).
### How did you verify your code works?
Tests
2026-01-09 16:18:47 -08:00
Jarred Sumner
367eeb308e Fixes #23177 (#25936)
### What does this PR do?

Fixes #23177

### How did you verify your code works?
2026-01-09 15:14:00 -08:00
freya
1879b7eeca Note that Windows API handles should be u64, not ptr (#25886)
### What does this PR do?

Update FFI documentation with a note on Windows API handle values, as
pointer encoding to double causes intermittent failures with some
classes of handles.

Putting it in this accordion feels less than ideal but it's also a very
specific use case.

The specific issue I experienced: HDCs and HBITMAPs are basically 32
bits, although they are typedef'd in the headers to HANDLE. They are
returned from and passed to the GDI APIs as sign-extended versions of
the underlying 32-bit value, and so when going through the ptr <->
double pathway of bun FFI, the bottom 11 bits of those values are lost
if the original value had bit 31 set, and subsequent calls will fail.
Probably this is fixable by correctly encoding 'negative' pointers in
the double representation, and I might tackle that if I find time.
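
A hedged sketch of treating handles as `u64` in `bun:ffi`; the GDI binding shown is illustrative:

```typescript
import { dlopen, FFIType } from "bun:ffi";

// Declaring handle parameters and returns as u64 keeps all 64 bits intact,
// avoiding the lossy pointer <-> double round-trip described above.
const { symbols } = dlopen("gdi32.dll", {
  CreateCompatibleDC: { args: [FFIType.u64], returns: FFIType.u64 },
  DeleteDC: { args: [FFIType.u64], returns: FFIType.i32 },
});

const hdc = symbols.CreateCompatibleDC(0n);
symbols.DeleteDC(hdc);
```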
2026-01-09 14:04:14 -08:00
robobun
70fa6af355 feat: add Bun.Archive API for creating and extracting tarballs (#25665)
## Summary

- Adds new `Bun.Archive` API for working with tar archives
- `Bun.Archive.from(data)` - Create archive from object, Blob,
TypedArray, or ArrayBuffer
- `Bun.Archive.write(path, data, compress?)` - Write archive to disk
(async)
- `archive.extract(path)` - Extract to directory, returns
`Promise<number>` (file count)
- `archive.blob(compress?)` - Get archive as Blob (async)
- `archive.bytes(compress?)` - Get archive as Uint8Array (async)

Key implementation details:
- Uses existing libarchive bindings for tarball creation/extraction via
`extractToDisk`
- Uses libdeflate for gzip compression
- Immediate byte copying for GC safety (no JSValue protection, no
`hasPendingActivity`)
- Async operations run on worker pool threads with proper VM reference
handling
- Growing memory buffer via `archive_write_open2` callbacks for
efficient tarball creation

## Test plan

- [x] 65 comprehensive tests covering:
  - Normal operations (create, extract, blob, bytes, write)
  - GC safety (unreferenced archives, mutation isolation)  
  - Error handling (invalid args, corrupted data, I/O errors)
- Edge cases (large files, many files, special characters, path
normalization)
  - Concurrent operations

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-01-09 00:33:35 -08:00
robobun
eb5b498c62 fix(test): make fake timers work with testing-library/react (#25915)
## Summary

Fixes #25869

Two fixes to enable `jest.useFakeTimers()` to work with
`@testing-library/react` and `@testing-library/user-event`:

- Set `setTimeout.clock = true` when fake timers are enabled.
testing-library/react's `jestFakeTimersAreEnabled()` checks for this
property to determine if `jest.advanceTimersByTime()` should be called
when draining the microtask queue. Without this, testing-library never
advances timers.

- Make `advanceTimersByTime(0)` fire `setTimeout(fn, 0)` timers.
`setTimeout(fn, 0)` is internally scheduled with a 1ms delay per HTML
spec. Jest/testing-library expect `advanceTimersByTime(0)` to fire such
"immediate" timers, but we were advancing by 0ms so they never fired.

## Test plan

- [x] All 30 existing fake timer tests pass
- [x] New regression test validates both fixes
- [x] Original user-event reproduction now works (test completes instead
of hanging)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-01-08 20:25:35 -08:00
Dylan Conway
596e83c918 fix: correct logic bugs in libarchive, s3 credentials, and postgres bindings (#25905)
## Summary

- **libarchive.zig:110**: Fix self-assignment bug where `this.pos` was
assigned to itself instead of `new_pos`
- **s3/credentials.zig:165,176,199**: Fix impossible range checks -
`and` should be `or` for pageSize, partSize, and retry validation (a
value cannot be both less than MIN and greater than MAX simultaneously)
- **postgres.zig:14**: Fix copy-paste error where createConnection
function was internally named "createQuery"
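
To make the range-check bug concrete, a language-neutral sketch in TypeScript (the bounds and names are illustrative, not the Zig constants): with `and`, the rejection branch can never execute.

```ts
const MIN_PART_SIZE = 5;      // illustrative bounds
const MAX_PART_SIZE = 5000;

function validatePartSize(partSize: number) {
  // Broken: `partSize < MIN && partSize > MAX` is impossible, so nothing is rejected.
  if (partSize < MIN_PART_SIZE || partSize > MAX_PART_SIZE) {
    throw new RangeError(`partSize must be between ${MIN_PART_SIZE} and ${MAX_PART_SIZE}`);
  }
}

try {
  validatePartSize(1); // now correctly rejected as out of range
} catch (err) {
  console.log((err as Error).message);
}
```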

## Test plan

- [ ] Verify S3 credential validation now properly rejects out-of-range
values for pageSize, partSize, and retry
- [ ] Verify libarchive seek operations work correctly
- [ ] Verify postgres createConnection function has correct internal
name

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
2026-01-08 19:46:06 -08:00
SUZUKI Sosuke
3842a5ee18 Fix stack precommit crash on Windows (#25891)
### What does this PR do?

Attempt to fix stack precommit crash on Windows

https://github.com/oven-sh/WebKit/pull/128

### How did you verify your code works?
2026-01-08 18:12:51 -08:00
robobun
50daf5df27 fix(io): respect mode option when copying files with Bun.write() (#25906)
## Summary
- Fixes #25903 - `Bun.write()` mode option ignored when copying from
`Bun.file()`
- The destination file now correctly uses the specified `mode` option
instead of default permissions
- Works on Linux (via open flags), macOS (chmod after clonefile), and
Windows (chmod after copyfile)
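
A minimal sketch of the fixed behavior (file names and the octal mode are illustrative):

```ts
import { stat } from "node:fs/promises";

// Copy a file and request explicit permissions on the destination.
await Bun.write("copy.sh", Bun.file("original.sh"), { mode: 0o755 });

const { mode } = await stat("copy.sh");
console.log((mode & 0o777).toString(8)); // expected "755" after the fix
```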

## Test plan
- [x] Added regression test in `test/regression/issue/25903.test.ts`
- [x] Test passes with `bun bd test test/regression/issue/25903.test.ts`
- [x] Test fails with `USE_SYSTEM_BUN=1 bun test
test/regression/issue/25903.test.ts` (verifies the bug exists)

## Changes
- `src/bun.js/webcore/Blob.zig`: Add `mode` field to `WriteFileOptions`
and parse from options
- `src/bun.js/webcore/blob/copy_file.zig`: Use `destination_mode` in
`CopyFile` struct and `doOpenFile`
- `packages/bun-types/bun.d.ts`: Add `mode` option to BunFile copy
overloads

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-01-08 17:51:42 -08:00
Ciro Spaciari
c90c0e69cb feat(websocket): add HTTP/HTTPS proxy support (#25614)
## Summary

Add `proxy` option to WebSocket constructor for connecting through HTTP
CONNECT proxies.

### Features
- Support for `ws://` and `wss://` through HTTP proxies
- Support for `ws://` and `wss://` through HTTPS proxies (with
`rejectUnauthorized: false`)
- Proxy authentication via URL credentials (Basic auth)
- Custom proxy headers support
- Full TLS options (`ca`, `cert`, `key`, etc.) for target connections
using `SSLConfig.fromJS`

### API

```javascript
// String format
new WebSocket("wss://example.com", { proxy: "http://proxy:8080" })

// With credentials
new WebSocket("wss://example.com", { proxy: "http://user:pass@proxy:8080" })

// Object format with custom headers
new WebSocket("wss://example.com", {
  proxy: { url: "http://proxy:8080", headers: { "X-Custom": "value" } }
})

// HTTPS proxy
new WebSocket("ws://example.com", {
  proxy: "https://proxy:8443",
  tls: { rejectUnauthorized: false }
})
```

### Implementation

| File | Changes |
|------|---------|
| `WebSocketUpgradeClient.zig` | Proxy state machine and CONNECT handling |
| `WebSocketProxyTunnel.zig` | **New** - TLS tunnel inside CONNECT for wss:// through HTTP proxy |
| `JSWebSocket.cpp` | Parse proxy option and TLS options using `SSLConfig.fromJS` |
| `WebSocket.cpp` | Pass proxy parameters to Zig, handle HTTPS proxy socket selection |
| `bun.d.ts` | Add `proxy` and full TLS options to WebSocket types |

### Supported Scenarios

| Scenario | Status |
|----------|--------|
| ws:// through HTTP proxy | Working |
| wss:// through HTTP proxy | Working (TLS tunnel) |
| ws:// through HTTPS proxy | Working (with `rejectUnauthorized: false`) |
| wss:// through HTTPS proxy | Working (with `rejectUnauthorized: false`) |
| Proxy authentication (Basic) | Working |
| Custom proxy headers | Working |
| Custom CA for HTTPS proxy | Working |

## Test plan

- [x] API tests verify proxy option is accepted in various formats
- [x] Functional tests with local HTTP CONNECT proxy server
- [x] Proxy authentication tests (Basic auth)
- [x] HTTPS proxy tests with `rejectUnauthorized: false`
- [x] Error handling tests (auth failures, wrong credentials)

Run tests: `bun test test/js/web/websocket/websocket-proxy.test.ts`

## Changelog

- Added `proxy` option to `WebSocket` constructor for HTTP/HTTPS proxy
support
- Added full TLS options (`ca`, `cert`, `key`, `passphrase`, etc.) to
`WebSocket` constructor

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-08 16:21:34 -08:00
robobun
24b97994e3 feat(bundler): add files option for in-memory bundling (#25852)
## Summary

Add support for in-memory entrypoints and files in `Bun.build` via the
`files` option:

```ts
await Bun.build({
  entrypoints: ["/app/index.ts"],
  files: {
    "/app/index.ts": `
      import { greet } from "./greet.ts";
      console.log(greet("World"));
    `,
    "/app/greet.ts": `
      export function greet(name: string) {
        return "Hello, " + name + "!";
      }
    `,
  },
});
```

### Features

- **Bundle entirely from memory**: No files on disk needed
- **Override files on disk**: In-memory files take priority over disk
files
- **Mix disk and virtual files**: Real files can import virtual files
and vice versa
- **Multiple content types**: Supports `string`, `Blob`, `TypedArray`,
and `ArrayBuffer`

### Use Cases

- Code generation at build time
- Injecting build-time constants
- Testing with mock modules
- Bundling dynamically generated code
- Overriding configuration files for different environments

### Implementation Details

- Added `FileMap` struct in `JSBundler.zig` with `resolve`, `get`,
`contains`, `fromJS`, and `deinit` methods
- Uses `"memory"` namespace to avoid `pathWithPrettyInitialized`
allocation issues during linking phase
- FileMap checks added in:
  - `runResolver` (entry point resolution)
  - `runResolutionForParseTask` (import resolution)
  - `enqueueEntryPoints` (entry point handling)
  - `getCodeForParseTaskWithoutPlugins` (file content reading)
- Root directory defaults to cwd when all entrypoints are in the FileMap
- Added TypeScript types with JSDoc documentation
- Added bundler documentation with examples

## Test plan

- [x] Basic in-memory file bundling
- [x] In-memory files with absolute imports
- [x] In-memory files with relative imports (same dir, subdirs, parent
dirs)
- [x] Nested/chained imports between in-memory files
- [x] TypeScript and JSX support
- [x] Blob, Uint8Array, and ArrayBuffer content types
- [x] Re-exports and default exports
- [x] In-memory file overrides real file on disk
- [x] Real file on disk imports in-memory file via relative path
- [x] Mixed disk and memory files with complex import graphs

Run tests with: `bun bd test test/bundler/bundler_files.test.ts`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2026-01-08 15:05:41 -08:00
Jarred Sumner
dda9a9b0fd Increase shard count for Windows & Linux test runs (#25913)
### What does this PR do?

### How did you verify your code works?
2026-01-08 14:38:35 -08:00
robobun
eeef013365 Add Bun.JSONC API for parsing JSON with comments and trailing commas (#22115)
## Summary

This PR implements a new `Bun.JSONC.parse()` API that allows parsing
JSONC (JSON with Comments) files. It addresses the feature request from
issue #16257 by providing a native API for parsing JSON with comments
and trailing commas.

The implementation follows the same pattern as `Bun.YAML` and
`Bun.TOML`, leveraging the existing `TSConfigParser` which already
handles JSONC parsing internally.

## Features

- **Parse JSON with comments**: Supports both `//` single-line and `/*
*/` block comments
- **Handle trailing commas**: Works with trailing commas in objects and
arrays
- **Full JavaScript object conversion**: Returns native JavaScript
objects/arrays
- **Error handling**: Proper error throwing for invalid JSON
- **TypeScript compatibility**: Works with TypeScript config files and
other JSONC formats

## Usage Example

```javascript
const result = Bun.JSONC.parse(`{
  // This is a comment
  "name": "my-app",
  "version": "1.0.0", // trailing comma is allowed
  "dependencies": {
    "react": "^18.0.0",
  },
}`);
// Returns native JavaScript object
```

## Implementation Details

- Created `JSONCObject.zig` following the same pattern as
`YAMLObject.zig` and `TOMLObject.zig`
- Uses the existing `TSConfigParser` from `json.zig` which already
handles comments and trailing commas
- Added proper C++ bindings and exports following Bun's established
patterns
- Comprehensive test suite covering various JSONC features

## Test Plan

- [x] Basic JSON parsing works
- [x] Single-line comments (`//`) are handled correctly
- [x] Block comments (`/* */`) are handled correctly  
- [x] Trailing commas in objects and arrays work
- [x] Complex nested structures parse correctly
- [x] Error handling for invalid JSON
- [x] Empty objects and arrays work
- [x] Boolean and null values work correctly

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Jarred-Sumner <709451+Jarred-Sumner@users.noreply.github.com>
2026-01-08 13:27:47 -08:00
Kaj Kowalski
65d006aae0 fix(parser): fix bytecode CJS pragma detection after shebang (#25868)
### What does this PR do?

Fix bytecode CJS pragma detection when source file contains a shebang.

When bundling with `--bytecode` and the source file has a shebang, the
output silently fails to execute (exits 0, no output).

Reproduction:
[github.com/kjanat/bun-bytecode-banner-bug](https://github.com/kjanat/bun-bytecode-banner-bug)

```js
// Bundled output:
#!/usr/bin/env bun           // shebang preserved
// @bun @bytecode @bun-cjs   // pragma on line 2
(function(exports, require, module, __filename, __dirname) { ... })
```

The pragma parser in `hasBunPragma()` correctly skips the shebang line,
but uses `self.lexer.end` instead of `contents.len` when scanning for
`@bun-cjs`/`@bytecode` tokens. This causes the pragma to not be
recognized.

**Fix:**

```zig
// Before
while (cursor < self.lexer.end) : (cursor += 1) {

// After
while (cursor < end) : (cursor += 1) {
```

Where `end` is already defined as `contents.len` at the top of the
function.

### How did you verify your code works?

- Added bundler test `banner/SourceHashbangWithBytecodeAndCJSTargetBun`
in `test/bundler/bundler_banner.test.ts`
- Added regression tests in
`test/regression/issue/bun-bytecode-shebang.test.ts` that verify:
  - CJS wrapper executes when source has shebang
  - CJS wrapper executes when source has shebang + bytecode pragma
- End-to-end: bundled bytecode output with source shebang runs correctly
- Ran the tests in the
[kjanat/bun-bytecode-banner-bug](https://github.com/kjanat/bun-bytecode-banner-bug)
repo to verify the issue is fixed

---------

Signed-off-by: Kaj Kowalski <info@kajkowalski.nl>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-08 11:32:08 -08:00
Alistair Smith
8b59b8d17d types: Missing methods on udp socket 2026-01-08 09:38:31 +00:00
robobun
a1f1252771 refactor(test): migrate bun-install tests to concurrent execution (#25895) 2026-01-08 01:06:03 -08:00
Jarred Sumner
bf1e4922b4 Speed up some more tests (#25892)
### What does this PR do?

### How did you verify your code works?
2026-01-07 23:39:10 -08:00
SUZUKI Sosuke
fbf47d0256 fix: use JSPromise helper methods instead of direct internal field manipulation (#25889)
## Summary

Fixes `ASSERTION FAILED: isUInt32AsAnyInt()` errors that occurred
intermittently, particularly in inspector-related tests.

## Problem

The code was directly manipulating JSPromise internal fields using
`asUInt32AsAnyInt()`:

```cpp
promise->internalField(JSC::JSPromise::Field::Flags).get().asUInt32AsAnyInt()
```

This caused assertion failures when the internal field state was not a
valid uint32.

## Solution

Use WebKit's official JSPromise helper methods instead:

| Before | After |
|--------|-------|
| Direct `internalField` manipulation for rejected promise | `promise->rejectAsHandled(vm, globalObject, value)` |
| Direct `internalField` manipulation for resolved promise | `promise->fulfill(vm, globalObject, value)` |
| Direct `internalField` manipulation for handled flag | `promise->markAsHandled()` |

## Files Changed

- `src/bun.js/bindings/ZigGlobalObject.cpp`
- `src/bun.js/bindings/ModuleLoader.cpp`
- `src/bake/BakeGlobalObject.cpp`

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 23:23:27 -08:00
robobun
f83214e0a9 test(http2): refactor tests to use describe.concurrent and await using (#25893)
## Summary
- Use `describe.concurrent` at module scope for parallel test execution
across node/bun executables and padding strategies
- Replace `Bun.spawnSync` with async `Bun.spawn` in memory leak test
- Replace `beforeEach`/`afterEach` server setup with `await using` in
each test
- Add `Symbol.asyncDispose` to `nodeEchoServer` helper for proper
cleanup
- Fix IPv6/IPv4 binding issue by explicitly binding echo server to
127.0.0.1

## Test plan
- [x] Run `bun test test/js/node/http2/node-http2.test.js` - all 245
tests pass (6 skipped)
- [x] Verify tests run faster due to concurrent execution

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-01-07 23:15:50 -08:00
robobun
81debb4269 feat(bundler): add metafile support matching esbuild format (#25842) 2026-01-07 22:46:51 -08:00
robobun
962ac0c2fd refactor(test): use describe.concurrent and async spawn in bun-run.test.ts (#25890)
## Summary

- Wrap all tests in `describe.concurrent` at module scope for parallel
test execution
- Replace `Bun.spawnSync` with `Bun.spawn` + `await` throughout
- Replace `run_dir`/`writeFile` pattern with `tempDir` for automatic
cleanup via `using` declarations
- Remove `beforeEach` hook that created shared temp directory

## Test plan

- [x] All 291 tests pass with `bun bd test
test/cli/install/bun-run.test.ts`
- [x] All tests pass with `USE_SYSTEM_BUN=1 bun test
test/cli/install/bun-run.test.ts`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-01-07 21:52:41 -08:00
Jarred Sumner
bdc95c2dc5 Speed up shell leak test (#25880)
### What does this PR do?
18s -> 3s

### How did you verify your code works?
2026-01-07 21:31:33 -08:00
Jarred Sumner
29a6c0d263 Speed up require-cache.test.ts (#25887)
### What does this PR do?

21.92s -> 6s

### How did you verify your code works?
2026-01-07 21:13:28 -08:00
Jarred Sumner
39e2c22e1a fix(http): disable keep-alive on proxy authentication failure (407) (#25884)
## Summary

- Disable HTTP keep-alive when a proxy returns a 407 (Proxy
Authentication Required) status code
- This prevents subsequent requests from trying to reuse a connection
that the proxy server has closed
- Refactored proxy tests to use `describe.concurrent` and async
`Bun.spawn` patterns

## Test plan

- [x] Added test `simultaneous proxy auth failures should not hang` that
verifies multiple concurrent requests with invalid proxy credentials
complete without hanging
- [x] Existing proxy tests pass

🤖 Generated with [Claude Code](https://claude.ai/code)
2026-01-07 20:49:30 -08:00
robobun
b20a70dc40 fix: use JSONParseWithException for proper error handling (#25881)
## Summary
- **SQLClient.cpp**: Fix bug where `RETURN_IF_EXCEPTION` after
`JSONParse` would never trigger on JSON parse failure since `JSONParse`
doesn't throw
- **BunString.cpp**: Simplify by using `JSONParseWithException` instead
of manually checking for empty result and throwing

## Details

`JSC::JSONParse` returns an empty `JSValue` on failure **without
throwing an exception**. This means that `RETURN_IF_EXCEPTION(scope,
{})` will never catch JSON parsing errors when used after `JSONParse`.

Before this fix in `SQLClient.cpp`:
```cpp
JSC::JSValue json = JSC::JSONParse(globalObject, str);
RETURN_IF_EXCEPTION(scope, {});  // This never triggers on parse failure!
return json;  // Returns empty JSValue
```

This could cause issues when parsing invalid JSON data from SQL
databases (e.g., PostgreSQL's JSON/JSONB columns).

`JSONParseWithException` properly throws a `SyntaxError` exception that
`RETURN_IF_EXCEPTION` can catch.

## Test plan
- [x] Build succeeds with `bun bd`
- The changes follow the same pattern used in `ModuleLoader.cpp` and
`BunObject.cpp`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-01-07 20:13:01 -08:00
Jarred Sumner
1f22f4447d Align temp directory resolution with os.tmpdir() (#25878)
## Summary
- Aligns Bun's temp directory resolution with Node.js's `os.tmpdir()`
behavior
- Checks `TMPDIR`, `TMP`, and `TEMP` environment variables in order
(matching Node.js)
- Uses `bun.once` for lazy initialization instead of mutable static
state
- Removes `setTempdir` function and simplifies the API to use
`RealFS.tmpdirPath()` directly
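
A TypeScript sketch of the resolution order described above (the fallback paths are assumptions for illustration; the real logic lives in Zig):

```ts
function tmpdirSketch(): string {
  const fallback = process.platform === "win32" ? "C:\\Windows\\Temp" : "/tmp";
  return process.env.TMPDIR || process.env.TMP || process.env.TEMP || fallback;
}

console.log(tmpdirSketch());
```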

## Test plan
- [ ] Verify temp directory resolution matches Node.js behavior
- [ ] Test with various environment variable configurations
- [ ] Ensure existing tests pass with `bun bd test`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-07 16:09:49 -08:00
Bradley Walters
ff590e9cfd fix(cmake): fix include paths for local WebKit (#25815)
### What does this PR do?

Without this change, building with `-DWEBKIT_LOCAL=ON` fails with:
```
/work/bun/src/bun.js/bindings/BunObject.cpp:12:10: fatal error: 'JavaScriptCore/JSBase.h' file not found
   12 | #include <JavaScriptCore/JSBase.h>
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
```

The reason for this is because the directory structure differs between
downloaded and local WebKit.

Downloaded WebKit:
```
build/debug/cache/webkit-6d0f3aac0b817cc0/
  └── include/
      └── JavaScriptCore/
          └── JSBase.h          ← Direct path
```

Local WebKit:
```
vendor/WebKit/WebKitBuild/Debug/
  └── JavaScriptCore/Headers/
      └── JavaScriptCore/
          └── JSBase.h          ← Nested path
```

The include paths are thus configured differently for each build type.

For Remote WebKit (when WEBKIT_LOCAL=OFF):
- SetupWebKit.cmake line 22 sets: WEBKIT_INCLUDE_PATH =
${WEBKIT_PATH}/include
- BuildBun.cmake line 1253 adds:
include_directories(${WEBKIT_INCLUDE_PATH})
- This resolves to: build/debug/cache/webkit-6d0f3aac0b817cc0/include/
- So #include <JavaScriptCore/JSBase.h> finds the file at
include/JavaScriptCore/JSBase.h 

For Local WebKit (when WEBKIT_LOCAL=ON):
- The original code only added:
${WEBKIT_PATH}/JavaScriptCore/Headers/JavaScriptCore
- This resolves to:
vendor/WebKit/WebKitBuild/Debug/JavaScriptCore/Headers/JavaScriptCore/
- So #include <JavaScriptCore/JSBase.h> fails because there's no
JavaScriptCore/ subdirectory at that level 
- The fix adds: ${WEBKIT_PATH}/JavaScriptCore/Headers
- Now the include path includes:
vendor/WebKit/WebKitBuild/Debug/JavaScriptCore/Headers/
- So #include <JavaScriptCore/JSBase.h> finds the file at
Headers/JavaScriptCore/JSBase.h 

### How did you verify your code works?

Built locally.

Co-authored-by: Carl Smedstad <carsme@archlinux.org>
2026-01-07 15:46:46 -08:00
Bradley Walters
18f242daa1 feat(cmake): simplify bindgenv2 error handling using COMMAND_ERROR_IS_FATAL (#25814)
### What does this PR do?

In CMake, failure to execute a process will place a message string in
the RESULT_VARIABLE. If the message string starts with 'No' such as in
'No such file or directory' then CMake will interpret that as the
boolean false and not halt the build. The new code uses a built-in
option to halt the build on any failure, so the script will halt
correctly if that error occurs. This could also be fixed by quoting, but
might as well use the CMake feature.

I encountered this error when I improperly defined BUN_EXECUTABLE.

### How did you verify your code works?

<details>

<summary>Ran the build with invalid executable path:</summary>

```
$ node ./scripts/build.mjs \
                   -GNinja \
                   -DCMAKE_BUILD_TYPE=Debug \
                   -B build/debug \
                   --log-level=NOTICE \
                   -DBUN_EXECUTABLE="foo" \
                   -DNPM_EXECUTABLE="$(which npm)" \
                   -DZIG_EXECUTABLE="$(which zig)" \
                   -DENABLE_ASAN=OFF

Globbed 1971 sources [178.21ms]
CMake Configure
  $ cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug -B /work/bun/build/debug --log-level NOTICE -DBUN_EXECUTABLE=foo -DNPM_EXECUTABLE=/bin/npm -DZIG_EXECUTABLE=/bin/zig -DENABLE_ASAN=OFF -S /work/bun -DCACHE_STRATEGY=auto
sccache: Using local cache strategy.
```

</details>

Result:

```
CMake Error at cmake/targets/BuildBun.cmake:422 (execute_process):
  execute_process failed command indexes:

    1: "Abnormal exit with child return code: no such file or directory"

Call Stack (most recent call first):
  CMakeLists.txt:66 (include)
```
2026-01-07 15:46:00 -08:00
Bradley Walters
fbc175692f feat(cmake): log download failure from SetupWebKit.cmake (#25813)
### What does this PR do?

Before this change, downloading WebKit would fail silently if there were
e.g. a connection or TLS certificate issue. I experienced this when
trying to build bun in an overly-restrictive sandbox.

After this change, the failure reason will be visible.

### How did you verify your code works?

<details>

<summary>Brought down my network interface and ran the build:</summary>

```sh
$ node ./scripts/build.mjs \
                   -GNinja \
                   -DCMAKE_BUILD_TYPE=Debug \
                   -B build/debug \
                   --log-level=NOTICE \
                   -DBUN_EXECUTABLE="$(which node)" \
                   -DNPM_EXECUTABLE="$(which npm)" \
                   -DZIG_EXECUTABLE="$(which zig)" \
                   -DENABLE_ASAN=OFF

Globbed 1971 sources [180.67ms]
CMake Configure
  $ cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug -B /work/bun/build/debug --log-level NOTICE -DBUN_EXECUTABLE=/bin/node -DNPM_EXECUTABLE=/bin/npm -DZIG_EXECUTABLE=/bin/zig -DENABLE_ASAN=OFF -S /work/bun -DCACHE_STRATEGY=auto
sccache: Using local cache strategy.
...
```

</details>

Result:

```
CMake Error at cmake/tools/SetupWebKit.cmake:99 (message):
  Failed to download WebKit: 6;"Could not resolve hostname"
Call Stack (most recent call first):
  cmake/targets/BuildBun.cmake:1204 (include)
  CMakeLists.txt:66 (include)
```
2026-01-07 15:44:57 -08:00
Jarred Sumner
22315000e0 Update CLAUDE.md 2026-01-07 12:33:21 -08:00
M Waheed
bc02c18dc5 Fix typo in PM2 configuration example
According to PM2 documentation, the name of the application is defined
by "name" key, not "title"
2026-01-07 14:04:06 +00:00
Dylan Conway
4c492c66b8 fix(bundler): fix --compile with 8+ embedded files (#25859)
## Summary

Fixes #20821

When `bun build --compile` was used with 8 or more embedded files, the
compiled binary would silently fail to execute any code (exit code 0, no
output).

**Root cause:** Chunks were sorted alphabetically by their `entry_bits`
key bytes. For entry point 0, the key starts with bit 0 set (byte
pattern `0x01`), but for entry point 8, the key has bit 8 set in the
byte (pattern `0x00, 0x01`). Alphabetically, `0x00 < 0x01`, so entry
point 8's chunk sorted before entry point 0.

This caused the wrong entry point to be identified as the main entry,
resulting in asset wrapper code being executed instead of the user's
code.

**Fix:** Custom sort that ensures `entry_point_id=0` (the main entry
point) always sorts first, with remaining chunks sorted alphabetically
for determinism.
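
A TypeScript sketch of the comparator the fix describes (the real code is in the bundler's Zig chunk sorting; the field names here are illustrative):

```ts
interface Chunk {
  entryPointId: number;
  entryBitsKey: string; // byte-pattern key the old sort compared alphabetically
}

function compareChunks(a: Chunk, b: Chunk): number {
  if (a.entryPointId === 0) return -1; // main entry point always first
  if (b.entryPointId === 0) return 1;
  return a.entryBitsKey < b.entryBitsKey ? -1 : a.entryBitsKey > b.entryBitsKey ? 1 : 0;
}

const chunks: Chunk[] = [
  { entryPointId: 8, entryBitsKey: "\x00\x01" },
  { entryPointId: 0, entryBitsKey: "\x01" },
];
chunks.sort(compareChunks);
console.log(chunks[0].entryPointId); // 0, even though "\x00" sorts before "\x01"
```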

## Test plan

- Added regression test `compile/ManyEmbeddedFiles` that embeds 8 files
and verifies the main entry point runs correctly
- Verified manually with reproduction case from issue

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-06 23:05:01 +00:00
Jack Kleeman
46801ec926 Send http2 window resize frames after half the window is used up (#25847)
### What does this PR do?
It is standard practice to send HTTP2 window resize frames once half the
window is used up:
- [nghttp2](https://en.wikipedia.org/wiki/Nghttp2#HTTP/2_implementation)
- [Go](https://github.com/golang/net/blob/master/http2/flow.go#L31)
- [Rust
h2](https://github.com/hyperium/h2/blob/master/src/proto/streams/flow_control.rs#L17)

The current behaviour of Bun is to send a window update once the client
has sent 65535 bytes exactly. This leads to an interruption in
throughput while the window update is received.
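
A simplified TypeScript sketch of the flow-control change (not Bun's HTTP/2 internals): send a `WINDOW_UPDATE` once half the window has been consumed instead of waiting for the full 65535 bytes.

```ts
const INITIAL_WINDOW = 65535;

function makeReceiveWindow(sendWindowUpdate: (increment: number) => void) {
  let consumed = 0;
  return {
    onData(byteLength: number) {
      consumed += byteLength;
      if (consumed >= INITIAL_WINDOW / 2) {
        sendWindowUpdate(consumed); // refill before the sender can stall
        consumed = 0;
      }
    },
  };
}

const recv = makeReceiveWindow(n => console.log(`WINDOW_UPDATE +${n}`));
recv.onData(32767); // triggers an update at half the window, as in this PR
```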

This is not just a performance concern however, I think some clients
will not handle this well, as it means that if you stop sending data
even 1 byte before 65535, you have a deadlock. The reason I came across
this issue is that it appears that the Rust `hyper` crate always
reserves an additional 1 byte from the connection for each http2 stream
(https://github.com/hyperium/hyper/blob/master/src/proto/h2/mod.rs#L122).
This means that when you have two concurrent requests, the client treats
it like the window is exhausted when it actually has a byte remaining,
leading to a sequential dependency between the requests that can create
deadlocks if they depend on running concurrently. I accept this is not a
problem with bun, but it's a happy accident that we can resolve such
off-by-one issues by increasing the window size once it is 50% utilized.

### How did you verify your code works?
Using wireshark, bun debug logging, and client logging I can observe
that the window updates are now sent after 32767 bytes. This also
resolves the h2 crate client issue.
2026-01-06 11:37:50 -08:00
robobun
5617b92a5a test: refactor spawnSync to spawn with describe.concurrent (#25849)
## Summary

- Refactor 16 test files to use async `Bun.spawn` instead of
`Bun.spawnSync`
- Wrap tests in `describe.concurrent` blocks for parallel execution
- Use `await using` for automatic resource cleanup

## Performance Improvement

| Test File | Before | After | Improvement |
|-----------|--------|-------|-------------|
| `node-module-module.test.js` (28 tests) | ~325ms | ~185ms | **43% faster** |
| `non-english-import.test.js` (3 tests) | ~238ms | ~157ms | **34% faster** |

## Files Changed

- `test/cli/run/commonjs-invalid.test.ts`
- `test/cli/run/commonjs-no-export.test.ts`
- `test/cli/run/empty-file.test.ts`
- `test/cli/run/jsx-symbol-collision.test.ts`
- `test/cli/run/run-cjs.test.ts`
- `test/cli/run/run-extensionless.test.ts`
- `test/cli/run/run-shell.test.ts`
- `test/cli/run/run-unicode.test.ts`
- `test/js/bun/resolve/non-english-import.test.js`
- `test/js/node/module/node-module-module.test.js`
- `test/regression/issue/00631.test.ts`
- `test/regression/issue/03216.test.ts`
- `test/regression/issue/03830.test.ts`
- `test/regression/issue/04011.test.ts`
- `test/regression/issue/04893.test.ts`
- `test/regression/issue/hashbang-still-works.test.ts`

## Test plan

- [x] All refactored tests pass with `USE_SYSTEM_BUN=1 bun test <file>`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-01-06 15:37:56 +00:00
Jarred Sumner
c6a73fc23e Hardcode __NR_close_range (#25840)
### What does this PR do?

Fixes https://github.com/oven-sh/bun/issues/25839

### How did you verify your code works?
2026-01-06 15:07:19 +00:00
Andrew Johnston
3de2dc1287 fix: use the computed size of the Offsets struct instead of hard-co… (#25816)
### What does this PR do?

- I was trying to understand how the SEA bundling worked in Bun and
noticed the size of the `Offsets` struct is hard-coded here to 32. It
should use the computed size to be future proof to changes in the
schema.

### How did you verify your code works?

- I didn't. Can add tests if this is not covered by existing tests.
ChatGPT agreed with me though. =)
2026-01-06 15:04:28 +00:00
SUZUKI Sosuke
370e6fb9fa fix(fetch): fix ReadableStream memory leak when using stream body (#25846)
## Summary

This PR fixes a memory leak that occurs when `fetch()` is called with a
`ReadableStream` body. The ReadableStream objects were not being
properly released, causing them to accumulate in memory.

## Problem

When using `fetch()` with a ReadableStream body:

```javascript
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("data"));
    controller.close();
  }
});

await fetch(url, { method: "POST", body: stream });
```

The ReadableStream objects leak because `FetchTasklet.clearData()` has a
conditional that prevents `detach()` from being called on ReadableStream
request bodies after streaming has started.

### Root Cause

The problematic condition in `clearData()`:

```zig
if (this.request_body != .ReadableStream or this.is_waiting_request_stream_start) {
    this.request_body.detach();
}
```

After `startRequestStream()` is called:
- `is_waiting_request_stream_start` becomes `false`
- `request_body` is still `.ReadableStream`
- The condition evaluates to `(false or false) = false`
- `detach()` is skipped → **memory leak**

### Why the Original Code Was Wrong

The original code appears to assume that when `startRequestStream()` is
called, ownership of the Strong reference is transferred to
`ResumableSink`. However, this is incorrect:

1. `startRequestStream()` creates a **new independent** Strong reference
in `ResumableSink` (see `ResumableSink.zig:119`)
2. The FetchTasklet's original reference is **not transferred** - it
becomes redundant
3. Strong references in Bun are independent - calling `deinit()` on one
does not affect the other

## Solution

Remove the conditional and always call `detach()`:

```zig
// Always detach request_body regardless of type.
// When request_body is a ReadableStream, startRequestStream() creates
// an independent Strong reference in ResumableSink, so FetchTasklet's
// reference becomes redundant and must be released to avoid leaks.
this.request_body.detach();
```

### Safety Analysis

This change is safe because:

1. **Strong references are independent**: Each Strong reference
maintains its own ref count. Detaching FetchTasklet's reference doesn't
affect ResumableSink's reference
2. **Idempotency**: `detach()` is safe to call on already-detached
references
3. **Timing**: `clearData()` is only called from `deinit()` after
streaming has completed (ref_count = 0)
4. **No UAF risk**: `deinit()` only runs when ref_count reaches 0, which
means all streaming operations have completed

## Test Results

Before fix (with system Bun):
```
Expected: <= 100
Received: 501   (Request objects leaked)
Received: 1002  (ReadableStream objects leaked)
```

After fix:
```
6 pass
0 fail
```

## Test Coverage

Added comprehensive tests in
`test/js/web/fetch/fetch-cyclic-reference.test.ts` covering:
- Response stream leaks with cyclic references
- Streaming response body leaks
- Request body stream leaks with cyclic references
- ReadableStream body leaks (no cyclic reference needed to reproduce)
- Concurrent fetch operations with cyclic references

---------

Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-01-06 15:00:52 +00:00
Dylan Conway
91f7a94d84 Add symbols.def to link-metadata.json (#25841)
Include the Windows module definition file (symbols.def) in
link-metadata.json, similar to how symbols.txt and symbols.dyn are
already included for macOS and Linux.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-05 17:45:41 -08:00
Prithvish Baidya
9ab6365a13 Add support for Requester Pays in S3 operations (#25514)
- Introduced `requestPayer` option in S3-related functions and
structures to handle Requester Pays buckets.
- Updated S3 client methods to accept and propagate the `requestPayer`
flag.
- Enhanced documentation for the `requestPayer` option in the S3 type
definitions.
- Adjusted existing S3 operations to utilize the `requestPayer`
parameter where applicable, ensuring compatibility with AWS S3's
Requester Pays feature.
- Ensured that the new functionality is integrated into multipart
uploads and simple requests.
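
A hedged usage sketch, assuming `requestPayer` is accepted wherever other per-request S3 options go; the bucket, key, and credentials are illustrative.

```ts
import { S3Client } from "bun";

const client = new S3Client({
  region: "us-east-1",
  bucket: "hl-mainnet-evm-blocks",
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
});

// Opt in to paying transfer costs for a Requester Pays bucket.
const data = await client.file("0/0/1.rmp.lz4", { requestPayer: true }).arrayBuffer();
console.log(data.byteLength);
```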

### What does this PR do?

This change allows users to specify whether they are willing to pay for
data transfer costs when accessing objects in Requester Pays buckets,
improving flexibility and compliance with AWS S3's billing model.

This closes #25499

### How did you verify your code works?

I have added a new test file to verify this functionality, and all my
tests pass.
I also tested this against an actual S3 bucket which can only be
accessed if requester pays. I can confirm that it's accessible with
`requestPayer` is `true`, and the default of `false` does not allow
access.

An example bucket is here: s3://hl-mainnet-evm-blocks/0/0/1.rmp.lz4
(my usecase is indexing [hyperliquid block
data](https://hyperliquid.gitbook.io/hyperliquid-docs/for-developers/hyperevm/raw-hyperevm-block-data)
which is stored in s3, and I want to use bun to index faster)

---------

Co-authored-by: Alistair Smith <hi@alistair.sh>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Ciro Spaciari <ciro.spaciari@gmail.com>
2026-01-05 15:04:20 -08:00
Alistair Smith
bf937f7294 sql: filter out undefined values in INSERT helper instead of treating as NULL (#25830)
### What does this PR do?

the `sql()` helper now filters out `undefined` values in INSERT
statements instead of converting them to `NULL`. This allows columns
with `DEFAULT` values to use their defaults when `undefined` is passed,
rather than being overridden with `NULL`.

**Before:** `sql({ foo: undefined, id: "123" })` in INSERT would
generate `(foo, id) VALUES (NULL, "123")`, causing NOT NULL constraint
violations even when the column has a DEFAULT.

**After:** `sql({ foo: undefined, id: "123" })` in INSERT generates
`(id) VALUES ("123")`, omitting the undefined column entirely and
letting the database use the DEFAULT value.
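
A sketch of the new behavior using Bun's tagged-template `sql` API (the table and column names are illustrative):

```ts
import { sql } from "bun";

const row = { foo: undefined, id: "123" };

// Before: generated (foo, id) VALUES (NULL, '123'), which could violate NOT NULL.
// After: generates (id) VALUES ('123'), so the column's DEFAULT is used.
await sql`INSERT INTO things ${sql(row)}`;
```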

Also fixes a data loss bug in bulk inserts where columns were determined
only from the first item - now all items are checked, so values in later
items aren't silently dropped.

Fixes #25829

### How did you verify your code works?

- Added regression test for #25829 (NOT NULL column with DEFAULT)
- Added tests for bulk insert with mixed undefined patterns which is the
data loss scenario
2026-01-05 15:03:20 -08:00
Meghan Denny
ce9788716f test: resolve this napi TODO (#25287)
since https://github.com/oven-sh/bun/pull/20772

Co-authored-by: Ciro Spaciari <ciro.spaciari@gmail.com>
2026-01-05 12:07:28 -08:00
robobun
4301af9f3e Harden TLS hostname verification (#25727)
## Summary
- Tighten wildcard certificate matching logic for improved security
- Add tests for wildcard hostname verification edge cases

## Test plan
- [x] `bun bd test test/js/web/fetch/fetch.tls.wildcard.test.ts` passes
- [x] Existing TLS tests continue to pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-01-05 10:21:49 -08:00
Jarred Sumner
8d1de78c7e Deflake stress.test.ts 2026-01-05 17:21:34 +00:00
Darwin ❤️❤️❤️
27ff6aaae0 fix(web): make URLSearchParams.prototype.size configurable (#25762) 2026-01-02 04:57:48 -08:00
robobun
779764332a feat(cli): add --grep as alias for -t/--test-name-pattern in bun test (#25788) 2026-01-02 04:52:47 -08:00
Alex Miller
0141a4fac9 docs: fix shell prompt rendering and remove hardcoded prompts (#25792) 2026-01-02 01:43:19 +00:00
robobun
113830d3cf docs: update npmrc.mdx with all supported .npmrc fields (#25783) 2025-12-31 01:46:37 -08:00
robobun
d9ae93e025 fix(cmake): fix JSON parsing in SetupBuildkite.cmake (#25755)
## Summary

- Fix CMake JSON parsing error when Buildkite API returns commit
messages with newlines

CMake's `file(READ ...)` reads files with literal newlines, which breaks
`string(JSON ...)` when the JSON contains escape sequences like `\n` in
string values (e.g., commit messages from Buildkite API).

Use `file(STRINGS ...)` to read line-by-line, then join with `\n` to
preserve valid JSON escape sequences while avoiding literal newlines.

## Test plan

- Verify CMake configure succeeds when Buildkite build has commit
messages with newlines

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-30 23:51:29 -08:00
robobun
604c83c8a6 perf(ipc): fix O(n²) JSON scanning for large chunked messages (#25743)
## Summary

- Fix O(n²) performance bug in JSON mode IPC when receiving large
messages that arrive in chunks
- Add `JsonIncomingBuffer` wrapper that tracks newline positions to
avoid re-scanning
- Each byte is now scanned exactly once (on arrival or when preceding
message is consumed)

## Problem

When data arrives in chunks in JSON mode, `decodeIPCMessage` was calling
`indexOfChar(data, '\n')` on the ENTIRE accumulated buffer every time.
For a 10MB message arriving in 160 chunks of 64KB:

- Chunk 1: scan 64KB
- Chunk 2: scan 128KB  
- Chunk 3: scan 192KB
- ...
- Chunk 160: scan 10MB

Total: ~800MB scanned for one 10MB message.

## Solution

Introduced a `JsonIncomingBuffer` struct that:
1. Tracks `newline_pos: ?u32` - position of known upcoming newline (if
any)
2. On `append(bytes)`: Only scans new chunk for `\n` if no position is
cached
3. On `consume(bytes)`: Updates or re-scans as needed after message
processing

This ensures O(n) scanning instead of O(n²).

## Test plan

- [x] `bun run zig:check-all` passes (all platforms compile)
- [x] `bun bd test test/js/bun/spawn/spawn.ipc.test.ts` - 4 tests pass
- [x] `bun bd test test/js/node/child_process/child_process_ipc.test.js`
- 1 test pass
- [x] `bun bd test test/js/bun/spawn/bun-ipc-inherit.test.ts` - 1 test
pass
- [x] `bun bd test test/js/bun/spawn/spawn.ipc.bun-node.test.ts` - 1
test pass
- [x] `bun bd test test/js/bun/spawn/spawn.ipc.node-bun.test.ts` - 1
test pass
- [x] `bun bd test
test/js/node/child_process/child_process_ipc_large_disconnect.test.js` -
1 test pass
- [x] Manual verification with `child-process-send-cb-more.js` (32KB
messages)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-12-29 20:02:18 -08:00
robobun
370b25c086 perf(Buffer.indexOf): use SIMD-optimized search functions (#25745)
## Summary

Optimize `Buffer.indexOf` and `Buffer.includes` by replacing
`std::search` with Highway SIMD-optimized functions:

- **Single-byte search**: `highway_index_of_char` - SIMD-accelerated
character search
- **Multi-byte search**: `highway_memmem` - SIMD-accelerated substring
search

These Highway functions are already used throughout Bun's codebase for
fast string searching (URL parsing, JS lexer, etc.) and provide
significant speedups for pattern matching in large buffers.

### Changes

```cpp
// Before: std::search (scalar)
auto it = std::search(haystack.begin(), haystack.end(), needle.begin(), needle.end());

// After: SIMD-optimized
if (valueLength == 1) {
    size_t result = highway_index_of_char(haystackPtr, haystackLen, valuePtr[0]);
} else {
    void* result = highway_memmem(haystackPtr, haystackLen, valuePtr, valueLength);
}
```

## Test plan

- [x] Debug build compiles successfully
- [x] Basic functionality verified (`indexOf`, `includes` with single
and multi-byte patterns)
- [ ] Existing Buffer tests pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-29 17:27:12 -08:00
Tommy D. Rossi
538be1399c feat(bundler): expose reactFastRefresh option in Bun.build API (#25731)
Fixes #25716

Adds support for a `reactFastRefresh: boolean` option in the `Bun.build`
JavaScript API, matching the existing `--react-fast-refresh` CLI flag.

```ts
const result = await Bun.build({
    reactFastRefresh: true,
    entrypoints: ["src/App.tsx"],
});
```

When enabled, the bundler adds React Fast Refresh transform code
(`$RefreshReg$`, `$RefreshSig$`) to the output.
2025-12-28 22:07:47 -08:00
robobun
d04b86d34f perf: use jsonStringifyFast for faster JSON serialization (#25733)
## Summary

Apply the same optimization technique from PR #25717 (Response.json) to
other APIs that use JSON.stringify internally:

- **IPC message serialization** (`ipc.zig`) - used for inter-process
communication
- **console.log with %j format** (`ConsoleObject.zig`) - commonly used
for debugging
- **PostgreSQL JSON/JSONB types** (`PostgresRequest.zig`) - database
operations
- **MySQL JSON type** (`MySQLTypes.zig`) - database operations
- **Jest %j/%o format specifiers** (`jest.zig`) - test output formatting
- **Transpiler tsconfig/macros** (`JSTranspiler.zig`) - build
configuration

### Root Cause

When calling `JSONStringify(globalObject, value, 0)`, the space
parameter `0` becomes `jsNumber(0)`, which is NOT `undefined`. This
causes JSC's FastStringifier (SIMD-optimized) to bail out:

```cpp
// In WebKit's JSONObject.cpp FastStringifier::stringify()
if (!space.isUndefined()) {
    logOutcome("space"_s);
    return { };  // Bail out to slow path
}
```

Using `jsonStringifyFast` which passes `jsUndefined()` triggers the fast
path.

### Expected Performance Improvement

Based on PR #25717 results, these changes should provide ~3x speedup for
JSON serialization in the affected APIs.

## Test plan

- [x] Debug build compiles successfully
- [x] Basic functionality verified (IPC, console.log %j, Response.json)
- [x] Existing tests pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-28 18:01:07 -08:00
robobun
37fc8e99f7 Harden WebSocket client decompression (#25724)
## Summary
- Add maximum decompressed message size limit to WebSocket client
deflate handling
- Add test coverage for decompression limits

## Test plan
- Run `bun test
test/js/web/websocket/websocket-permessage-deflate-edge-cases.test.ts`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-28 17:58:24 -08:00
robobun
6b5de25d8a feat(shell): add $.trace for analyzing shell commands without execution (#25667)
## Summary

Adds `Bun.$.trace` for tracing shell commands without executing them.

```js
const result = $.trace`cat /tmp/file.txt > output.txt`;
// { operations: [...], cwd: "...", success: true, error: null }
```

## Test plan

- [x] `bun bd test test/js/bun/shell/trace.test.ts`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-12-27 17:25:52 -08:00
Alex Miller
7b49654db6 fix(io): Prevent data corruption in Bun.write for files >2GB (#25720)
Closes #8254

Fixes a data corruption bug in `Bun.write()` where files larger than 2GB
would have chunks skipped resulting in corrupted output with missing
data.

The `doWriteLoop` had an issue where it effectively applied the offset twice for every 2GB chunk:
- it first sliced the buffer by `total_written`: `remain = remain[@min(this.total_written, remain.len)..]`
- it then incremented `bytes_blob.offset`: `this.bytes_blob.offset += @truncate(wrote)`

But because `sharedView()` already applies the blob offset (`slice_ = slice_[this.offset..]`), the offset ended up being doubled.

In a local reproduction writing a 16GB file with each 2GB chunk filled with incrementing values `[1, 2, 3, 4, 5, 6, 7, 8]`, the buggy version produced: `[1, 3, 5, 7, …]`, skipping every other chunk.

The fix is to simply remove the redundant manual offset and rely only on `total_written` to track write progress.
2025-12-27 16:58:36 -08:00
SUZUKI Sosuke
603bbd18a0 Enable CHECK_REF_COUNTED_LIFECYCLE in WebKit (#25705)
### What does this PR do?

Enables `CHECK_REF_COUNTED_LIFECYCLE` in WebKit (
https://github.com/oven-sh/WebKit/pull/121 )

See also
a978fae619

#### `CHECK_REF_COUNTED_LIFECYCLE`?

A compile-time macro that enables lifecycle validation for
reference-counted objects in debug builds.

**Definition**
```cpp
  #if ASSERT_ENABLED || ENABLE(SECURITY_ASSERTIONS)
  #define CHECK_REF_COUNTED_LIFECYCLE 1
  #else
  #define CHECK_REF_COUNTED_LIFECYCLE 0
  #endif
```
**Purpose**

  Detects three categories of bugs:
1. Missing adoption - Objects stored in RefPtr without using adoptRef()
2. Ref during destruction - ref() called while destructor is running
(causes dangling pointers)
  3. Thread safety violations - Unsafe ref/deref across threads

**Implementation**

  When enabled, RefCountDebugger adds two tracking flags:
  - m_deletionHasBegun - Set when destructor starts
  - m_adoptionIsRequired - Cleared when adoptRef() is called

These flags are checked on every ref()/deref() call, with assertions
failing on violations.

**Motivation**

  Refactored debug code into a separate RefCountDebugger class to:
  - Improve readability of core refcount logic
- Eliminate duplicate code across RefCounted, ThreadSafeRefCounted, etc.
  - Simplify adding new refcount classes

 **Overhead**

Zero in release builds - the flags and checks are compiled out entirely.

### How did you verify your code works?

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-27 15:02:11 -08:00
robobun
1d7cb4bbad perf(Response.json): use JSC's FastStringifier by passing undefined for space (#25717)
## Summary

- Fix performance regression where `Response.json()` was 2-3x slower
than `JSON.stringify() + new Response()`
- Root cause: The existing code called `JSC::JSONStringify` with
`indent=0`, which internally passes `jsNumber(0)` as the space
parameter. This bypasses WebKit's FastStringifier optimization.
- Fix: Add a new `jsonStringifyFast` binding that passes `jsUndefined()`
for the space parameter, triggering JSC's FastStringifier
(SIMD-optimized) code path.

## Root Cause Analysis

In WebKit's `JSONObject.cpp`, the `stringify()` function has this logic:

```cpp
static NEVER_INLINE String stringify(JSGlobalObject& globalObject, JSValue value, JSValue replacer, JSValue space)
{
    // ...
    if (String result = FastStringifier<Latin1Character, BufferMode::StaticBuffer>::stringify(globalObject, value, replacer, space, failureReason); !result.isNull())
        return result;
    // Falls back to slow Stringifier...
}
```

And `FastStringifier::stringify()` checks:
```cpp
if (!space.isUndefined()) {
    logOutcome("space"_s);
    return { };  // Bail out to slow path
}
```

So when we called `JSONStringify(globalObject, value, (unsigned)0)`, it
converted to `jsNumber(0)` which is NOT `undefined`, causing
FastStringifier to bail out.

## Performance Results

### Before (3.5x slower than manual approach)
```
Response.json():                2415ms
JSON.stringify() + Response():  689ms
Ratio:                          3.50x
```

### After (parity with manual approach)
```
Response.json():                ~700ms  
JSON.stringify() + Response():  ~700ms
Ratio:                          ~1.09x
```

## Test plan

- [x] Existing `Response.json()` tests pass
(`test/regression/issue/21257.test.ts`)
- [x] Response tests pass (`test/js/web/fetch/response.test.ts`)
- [x] Manual verification that output is correct for various JSON inputs

Fixes #25693

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Sosuke Suzuki <sosuke@bun.com>
2025-12-27 15:01:28 -08:00
SUZUKI Sosuke
01de0ecbd9 Add simple benchmark for Array.of (#25711)
**before:**

```
$ bun bench/snippets/array-of.js
cpu: Apple M4 Max
runtime: bun 1.3.5 (arm64-darwin)

benchmark                                  time (avg)             (min … max)       p75       p99      p999
----------------------------------------------------------------------------- -----------------------------
int: Array.of(1,2,3,4,5)                 9.19 ns/iter       (8.1 ns … 108 ns)   9.28 ns  13.63 ns  69.44 ns
int: Array.of(100 elements)             1'055 ns/iter   (94.58 ns … 1'216 ns)  1'108 ns  1'209 ns  1'216 ns
double: Array.of(1.1,2.2,3.3,4.4,5.5)   10.34 ns/iter      (8.81 ns … 102 ns)  10.17 ns  17.19 ns  73.51 ns
double: Array.of(100 elements)          1'073 ns/iter     (124 ns … 1'215 ns)  1'136 ns  1'204 ns  1'215 ns
object: Array.of(obj x5)                19.19 ns/iter     (16.58 ns … 122 ns)  19.06 ns   77.6 ns  85.75 ns
object: Array.of(100 elements)          1'340 ns/iter     (294 ns … 1'568 ns)  1'465 ns  1'537 ns  1'568 ns
```

**after:**

```
$ ./build/release/bun bench/snippets/array-of.js
cpu: Apple M4 Max
runtime: bun 1.3.6 (arm64-darwin)

benchmark                                  time (avg)             (min … max)       p75       p99      p999
----------------------------------------------------------------------------- -----------------------------
int: Array.of(1,2,3,4,5)                 2.68 ns/iter    (2.14 ns … 92.96 ns)   2.52 ns   3.95 ns  59.73 ns
int: Array.of(100 elements)             23.69 ns/iter     (18.88 ns … 155 ns)  20.91 ns  83.82 ns  96.66 ns
double: Array.of(1.1,2.2,3.3,4.4,5.5)    3.62 ns/iter    (2.97 ns … 75.44 ns)   3.46 ns   5.05 ns  65.82 ns
double: Array.of(100 elements)          26.96 ns/iter     (20.14 ns … 156 ns)  24.45 ns  87.75 ns  96.88 ns
object: Array.of(obj x5)                11.82 ns/iter     (9.6 ns … 87.38 ns)  11.23 ns  68.99 ns  77.09 ns
object: Array.of(100 elements)            236 ns/iter       (206 ns … 420 ns)    273 ns    325 ns    386 ns
```
2025-12-27 00:05:57 -08:00
Oleksandr Herasymov
d3a5f2eef2 perf: speed up Bun.hash.crc32 by switching to zlib CRC32 (#25692)
## What does this PR do?
Switch `Bun.hash.crc32` to use `zlib`'s CRC32 implementation. Bun
already links `zlib`, which provides highly optimized,
hardware-accelerated CRC32. Because `zlib.crc32` takes a 32-bit length,
chunk large inputs to avoid truncation/regressions on buffers >4GB.

Note: This was tried before (PR #12164 by Jarred), which switched CRC32
to zlib for speed. This proposal keeps that approach and adds explicit
chunking to avoid the >4GB length pitfall.

**Problem:** `Bun.hash.crc32` is a significant outlier in
microbenchmarks compared to other hash functions (about 21x slower than
`zlib.crc32` in a 1MB test on M1).

**Root cause:** `Bun.hash.crc32` uses Zig's `std.hash.Crc32`
implementation, which is software-only and does not leverage hardware
acceleration (e.g., `PCLMULQDQ` on x86 or `CRC32` instructions on ARM).

**Implementation:**
https://github.com/oven-sh/bun/blob/main/src/bun.js/api/HashObject.zig

```zig
pub const crc32 = hashWrap(struct {
    pub fn hash(seed: u32, bytes: []const u8) u32 {
        // zlib takes a 32-bit length, so chunk large inputs to avoid truncation.
        var crc: u64 = seed;
        var offset: usize = 0;
        while (offset < bytes.len) {
            const remaining = bytes.len - offset;
            const max_len: usize = std.math.maxInt(u32);
            const chunk_len: u32 = if (remaining > max_len) @intCast(max_len) else @intCast(remaining);
            crc = bun.zlib.crc32(crc, bytes.ptr + offset, chunk_len);
            offset += chunk_len;
        }
        return @intCast(crc);
    }
});
```

### How did you verify your code works?
**Benchmark (1MB payload):**
- **Before:** Bun 1.3.5 `Bun.hash.crc32` = 2,644,444 ns/op vs
`zlib.crc32` = 124,324 ns/op (~21x slower)
- **After (local bun-debug):** `Bun.hash.crc32` = 360,591 ns/op vs
`zlib.crc32` = 359,069 ns/op (~1.0x), results match

## Test environment
- **Machine:** MacBook Pro 13" (M1, 2020)
- **OS:** macOS 15.7.3
- **Baseline Bun:** 1.3.5
- **After Bun:** local `bun-debug` (build/debug)
2025-12-26 23:41:10 -08:00
robobun
b51e993bc2 fix: reject null bytes in spawn args, env, and shell arguments (#25698)
## Summary

- Reject null bytes in command-line arguments passed to `Bun.spawn` and
`Bun.spawnSync`
- Reject null bytes in environment variable keys and values
- Reject null bytes in shell (`$`) template literal arguments

This prevents null byte injection attacks (CWE-158) where null bytes in
strings could cause unintended truncation when passed to the OS,
potentially allowing attackers to bypass file extension validation or
create files with unexpected names.
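
A sketch of the kind of input that is now rejected (the exact error type and message are assumptions, not taken from the PR):

```ts
const userControlled = "report.txt\0.sh"; // the null byte would truncate the name at the OS level

try {
  Bun.spawnSync(["touch", userControlled]);
} catch (err) {
  console.log("rejected:", (err as Error).message);
}
```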

## Test plan

- [x] Added tests in `test/js/bun/spawn/null-byte-injection.test.ts`
- [x] Tests pass with debug build: `bun bd test
test/js/bun/spawn/null-byte-injection.test.ts`
- [x] Tests fail with system Bun (confirming the fix works)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-12-26 23:39:37 -08:00
SUZUKI Sosuke
92f105dbe1 Add microbench for String#includes (#25699)
note: This is due to constant folding by the JIT. For `String#includes`
on dynamic strings, the performance improvement is not this significant.

**before:**
```
$ bun ./bench/snippets/string-includes.mjs
cpu: Apple M4 Max
runtime: bun 1.3.5 (arm64-darwin)

benchmark                                  time (avg)             (min … max)       p75       p99      p999
----------------------------------------------------------------------------- -----------------------------
String.includes - short, hit (middle)   82.24 ns/iter     (14.95 ns … 881 ns)  84.98 ns    470 ns    791 ns
String.includes - short, hit (start)    37.44 ns/iter      (8.46 ns … 774 ns)  26.08 ns    379 ns    598 ns
String.includes - short, hit (end)      97.27 ns/iter     (16.93 ns … 823 ns)    107 ns    537 ns    801 ns
String.includes - short, miss             102 ns/iter       (0 ps … 1'598 µs)     42 ns    125 ns    167 ns
String.includes - long, hit (middle)    16.01 ns/iter     (14.34 ns … 115 ns)  16.03 ns   20.1 ns   53.1 ns
String.includes - long, miss              945 ns/iter       (935 ns … 972 ns)    948 ns    960 ns    972 ns
String.includes - with position          9.83 ns/iter    (8.44 ns … 58.45 ns)   9.83 ns  12.31 ns  15.69 ns
```

**after:**
```
$ ./build/release/bun bench/snippets/string-includes.mjs
cpu: Apple M4 Max
runtime: bun 1.3.6 (arm64-darwin)

benchmark                                  time (avg)             (min … max)       p75       p99      p999
----------------------------------------------------------------------------- -----------------------------
String.includes - short, hit (middle)     243 ps/iter     (203 ps … 10.13 ns)    244 ps    325 ps    509 ps !
String.includes - short, hit (start)      374 ps/iter     (244 ps … 19.78 ns)    387 ps    488 ps    691 ps
String.includes - short, hit (end)        708 ps/iter     (407 ps … 18.03 ns)    651 ps   2.62 ns   2.69 ns
String.includes - short, miss            1.49 ns/iter     (407 ps … 27.93 ns)   2.87 ns   3.09 ns   3.78 ns
String.includes - long, hit (middle)     3.28 ns/iter      (3.05 ns … 118 ns)   3.15 ns   8.75 ns  16.07 ns
String.includes - long, miss             7.28 ns/iter      (3.44 ns … 698 ns)   9.34 ns  42.85 ns    240 ns
String.includes - with position          7.97 ns/iter       (3.7 ns … 602 ns)   9.68 ns  52.19 ns    286 ns
```
2025-12-26 21:49:00 -08:00
Dylan Conway
d0bd1b121f Fix DCE producing invalid syntax for empty objects in spreads (#25710)
## Summary
- Fixes dead code elimination producing invalid syntax like `{ ...a, x:
}` when simplifying empty objects in spread contexts
- The issue was that `simplifyUnusedExpr` and `joinAllWithCommaCallback`
could return `E.Missing` instead of `null` to indicate "no side effects"
- Added checks to return `null` when the result is `E.Missing`

Fixes #25609

## Test plan
- [x] Added regression test that fails on v1.3.5 and passes with fix
- [x] `bun bd test test/regression/issue/25609.test.ts` passes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-26 21:48:19 -08:00
robobun
81b4a40fbd [publish images] Remove sccache, use ccache only (#25682)
## Summary
- Remove sccache support entirely, use ccache only
- Missing ccache no longer fails the build (just skips caching)
- Remove S3 distributed cache support

## Changes
- Remove `cmake/tools/SetupSccache.cmake` and S3 distributed cache
support
- Simplify `CMakeLists.txt` to only use ccache
- Update `SetupCcache.cmake` to not fail when ccache is missing
- Replace sccache with ccache in bootstrap scripts (sh, ps1)
- Update `.buildkite/Dockerfile` to install ccache instead of sccache
- Update `flake.nix` and `shell.nix` to use ccache
- Update documentation (CONTRIBUTING.md, contributing.mdx,
building-windows.mdx)
- Remove `scripts/build-cache/` directory (was only for sccache S3
access)

## Test plan
- [x] Build completes successfully with `bun bd`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-26 20:24:27 -08:00
Nico Cevallos
5715b54614 add test for dependency order when a package's name is larger than 8 characters + fix (#25697)
### What does this PR do?

- Add a test that fails before the code changes, and adjust the previous test so the dependency's script takes a bit of time to execute. Without the `setTimeout` in the tests, race conditions made it always pass. I tried a single test combining both cases, with dependencies `dep0` and `larger-than-8-char`, but it also passed whenever the timeouts were the same.
- Fix the new use case by using the correct buffer for `Dependency.name`; otherwise the name contains garbage when the package name is longer than 8 characters. This should fix #12203

### How did you verify your code works?

Reverted the code changes to confirm the new test fails, then re-ran it with the changes applied to confirm it passes.
2025-12-25 23:49:23 -08:00
Jarred Sumner
28fd495b39 Deflake test/js/bun/resolve/load-same-js-file-a-lot.test.ts 2025-12-25 17:43:43 -08:00
SUZUKI Sosuke
699d8b1e1c Upgrade WebKit Dec 24, 2025 (#25684)
- WTFMove → WTF::move / std::move: Replaced WTFMove() macro with
WTF::move() function for WTF types, std::move() for std types
- SortedArrayMap removed: Replaced with if-else chains in
EventFactory.cpp, JSCryptoKeyUsage.cpp
- Wasm::Memory::create signature changed: Removed VM parameter
- URLPattern allocation: Changed from WTF_MAKE_ISO_ALLOCATED to
WTF_MAKE_TZONE_ALLOCATED

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-12-25 14:00:58 -08:00
robobun
2247c3859a chore: convert .cursor/rules to .claude/skills (#25683)
## Summary
- Migrate Cursor rules to Claude Code skills format
- Add 4 new skills for development guidance:
  - `writing-dev-server-tests`: HMR/dev server test guidance
  - `implementing-jsc-classes-cpp`: C++ JSC class implementation  
  - `implementing-jsc-classes-zig`: Zig JSC bindings generator
  - `writing-bundler-tests`: bundler test guidance with itBundled
- Remove all `.cursor/rules/` files

## Test plan
- [x] Skills follow Claude Code skill authoring guidelines
- [x] Each skill has proper YAML frontmatter with name and description
- [x] Skills are concise and actionable

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-24 23:37:26 -08:00
Jarred Sumner
08e03814e5 [publish images] Fix CI, remove broken freebsd image step 2025-12-24 20:02:56 -08:00
Jarred Sumner
0dd4f025b6 [publish images] (+ add Object.hasOwn benchmark) 2025-12-24 19:55:44 -08:00
Jarred Sumner
79067037ff Add Promise.race microbenchmark 2025-12-23 22:53:24 -08:00
Aiden Cline
822d75a380 fix(@types/bun): add missing autoloadTsconfig and autoloadPackageJson types (#25501)
### What does this PR do?

Adds missing types, fixes typo

### How did you verify your code works?

Add missing types from: 
https://github.com/oven-sh/bun/pull/25340/changes

---------

Co-authored-by: Alistair Smith <hi@alistair.sh>
2025-12-24 06:47:07 +00:00
SUZUKI Sosuke
bffccf3d5f Upgrade WebKit 2025/12/07 (#25429)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
2025-12-23 22:24:18 -08:00
robobun
0300150324 docs: fix incorrect [env] section documentation in bunfig.toml (#25634)
## Summary
- Fixed documentation that incorrectly claimed you could use `[env]` as
a TOML section to set environment variables directly
- The `env` option in bunfig.toml only controls whether automatic `.env`
file loading is disabled (via `env = false`)
- Updated to show the correct approaches: using preload scripts or
`.env` files with `--env-file`
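
For example, the preload-script approach could look like this (the file name and variable are hypothetical):

```typescript
// preload.ts — register it via `preload = ["./preload.ts"]` in bunfig.toml,
// or run with `bun --preload ./preload.ts app.ts`.
process.env.MY_FLAG ??= "1";
```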

## Test plan
- Documentation-only change, no code changes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-23 15:31:12 -08:00
robobun
34a1e2adad fix: use LLVM unstable repo for Debian Trixie (13) (#25657)
## Summary

- Fix LLVM installation on Debian Trixie (13) by using the unstable
repository from apt.llvm.org

The `llvm.sh` script doesn't automatically detect that Debian Trixie
needs to use the unstable repository. This is because trixie's `VERSION`
is `13 (trixie)` rather than `testing`, and apt.llvm.org doesn't have a
dedicated trixie repository.

Without this fix, the LLVM installation falls back to Debian's main
repository packages, which don't include `libclang-rt-19-dev` (the
compiler-rt sanitizer libraries) by default. This causes builds with
ASan (AddressSanitizer) to fail with:

```
ld.lld: error: cannot open /usr/lib/llvm-19/lib/clang/19/lib/x86_64-pc-linux-gnu/libclang_rt.asan.a: No such file or directory
```

This was breaking the [Daily Docker
Build](https://github.com/oven-sh/bun-development-docker-image/actions/runs/20437290601)
in the bun-development-docker-image repo.

## Test plan

- [ ] Wait for the PR CI to pass
- [ ] After merging, the next Daily Docker Build should succeed

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-22 23:10:15 -08:00
robobun
8484e1b827 perf: shrink ConcurrentTask from 24 bytes to 16 bytes (#25636) 2025-12-22 12:07:24 -08:00
robobun
3898ed5e3f perf: pack boolean flags and reorder fields to reduce struct padding (#25627) 2025-12-21 17:12:42 -08:00
Jarred Sumner
c08ffadf56 perf(linux): add memfd optimizations and typed flags (#25597)
## Summary

- Add `MemfdFlags` enum to replace raw integer flags for `memfd_create`,
providing semantic clarity for different use cases (`executable`,
`non_executable`, `cross_process`)
- Add support for `MFD_EXEC` and `MFD_NOEXEC_SEAL` flags (Linux 6.3+)
with automatic fallback to older kernel flags when `EINVAL` is returned
- Use memfd + `/proc/self/fd/{fd}` path for loading embedded `.node`
files in standalone builds, avoiding disk writes entirely on Linux

## Test plan

- [ ] Verify standalone builds with embedded `.node` files work on Linux
- [ ] Verify fallback works on older kernels (pre-6.3)
- [ ] Verify subprocess stdio memfd still works correctly

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-19 23:18:21 -08:00
Dylan Conway
fa983247b2 fix(create): crash when running postinstall task with --no-install (#25616)
## Summary
- Fix segmentation fault in `bun create` when using `--no-install` with
a template that has a `bun-create.postinstall` task starting with "bun "
- The bug was caused by unconditionally slicing `argv[2..]` which
created an empty array when `npm_client` was null
- Added check for `npm_client != null` before slicing

## Reproduction
```bash
# Create template with bun-create.postinstall
mkdir -p ~/.bun-create/test-template
echo '{"name":"test","bun-create":{"postinstall":"bun install"}}' > ~/.bun-create/test-template/package.json

# This would crash before the fix
bun create test-template /tmp/my-app --no-install
```

## Test plan
- [x] Verified the reproduction case crashes before the fix
- [x] Verified the reproduction case works after the fix

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-19 23:17:51 -08:00
Dylan Conway
99b0a16c33 fix: prevent out-of-bounds access in NO_PROXY parsing (#25617)
## Summary
- Fix out-of-bounds access when parsing `NO_PROXY` environment variable
with empty entries
- Empty entries (e.g., `"localhost, , example.com"`) would cause a panic
when checking if the host starts with a dot
- Skip empty entries after trimming whitespace

fixes BUN-110G
fixes BUN-128V

## Test plan
- [x] Verify `NO_PROXY="localhost, , example.com"` no longer crashes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-19 23:17:29 -08:00
Dylan Conway
085e25d5d1 fix: protect StringOrBuffer from GC in async operations (#25594)
## Summary

- Fix use-after-free crash in async zstd compression, scrypt, and
JSTranspiler operations
- When `StringOrBuffer.fromJSMaybeAsync` is called with `is_async=true`,
the buffer's JSValue is now protected from garbage collection
- Previously, the buffer could be GC'd while a worker thread was still
accessing it, causing segfaults in zstd's `HIST_count_simple` and
similar functions

Fixes BUN-167Z

## Changes

- `fromJSMaybeAsync`: Call `protect()` on buffer when `is_async=true`
- `fromJSWithEncodingMaybeAsync`: Same protection for the early return
path
- `Scrypt`: Fix cleanup to use `deinitAndUnprotect()` for async path,
add missing `deinit()` in sync path
- `JSTranspiler`: Use new protection mechanism instead of manual
`protect()`/`unprotect()` calls
- Simplify `createOnJSThread` signatures to not return errors (OOM is
handled internally)
- Update all callers to use renamed/simplified APIs

## Test plan

- [x] Code review of all callsites to verify correct protect/unprotect
pairing
- [ ] Run existing zstd tests
- [ ] Run existing scrypt tests
- [ ] Run existing transpiler tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-19 17:30:26 -08:00
Jarred Sumner
ce5c336ea5 Revert "fix: memory leaks in IPC message handling (#25602)"
This reverts commit 05b12e0ed0.

The tests did not fail with the system version of Bun.
2025-12-19 17:28:54 -08:00
robobun
05b12e0ed0 fix: memory leaks in IPC message handling (#25602)
## Summary

- Add periodic memory reclamation for IPC buffers after processing
messages
- Fix missing `deref()` on `bun.String` created from `cmd` property in
`handleIPCMessage`
- Add `reclaimMemory()` function to shrink incoming buffer and send
queue when they exceed 2MB capacity
- Track message count to trigger memory reclamation every 256 messages

The incoming `ByteList` buffer and send queue `ArrayList` would grow but
never shrink, causing memory accumulation during sustained IPC
messaging.

## Test plan

- [x] Added regression tests in
`test/js/bun/spawn/spawn-ipc-memory.test.ts`
- [x] Existing IPC tests pass (`spawn.ipc.test.ts`)
- [x] Existing cluster tests pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-19 17:27:09 -08:00
Angus Comrie
d9459f8540 Fix postgres empty check when handling arrays (#25607)
### What does this PR do?
Closes #25505. This adjusts the byte length check in
`DataCell.fromBytes` to 12 bytes instead of 16, as zero-dimensional arrays will
have a shorter preamble.

### How did you verify your code works?
Test suite passes, and I've added a new test that fails in the main
branch but passes with this change. The issue only seems to crop up when
a connection is _reused_, which is curious.
2025-12-19 14:49:12 -08:00
Jarred Sumner
e79b512a9d Propagate debugger CLI config in single-file executables (#25600) 2025-12-19 09:49:02 -08:00
robobun
9902039b1f fix: memory leaks in error-handling code for Brotli, Zstd, and Zlib compression state machines (#25592)
## Summary

Fix several memory leaks in the compression libraries:

- **NativeBrotli/NativeZstd reset()** - Each call to `reset()` allocated
a new encoder/decoder without freeing the previous one
- **NativeBrotli/NativeZstd init() error paths** - If `setParams()`
failed after `stream.init()` succeeded, the instance was leaked
- **NativeZstd init()** - If `setPledgedSrcSize()` failed after context
creation, the context was leaked
- **ZlibCompressorArrayList** - After `deflateInit2_()` succeeded, if
`ensureTotalCapacityPrecise()` failed with OOM, zlib internal state was
never freed
- **NativeBrotli close()** - Now sets state to null to prevent potential
double-free (defensive)
- **LibdeflateState** - Added `deinit()` for API consistency

## Test plan

- [x] Added regression test that calls `reset()` 100k times and measures
memory growth
- [x] Test shows memory growth dropped from ~600MB to ~10MB for Brotli
- [x] Verified no double-frees by tracing code paths
- [x] Existing zlib tests pass (except pre-existing timeout in debug
build)

Before fix (system bun 1.3.3):
```
Memory growth after 100000 reset() calls: 624.38 MB  (BrotliCompress)
Memory growth after 100000 reset() calls: 540.63 MB  (BrotliDecompress)
```

After fix:
```
Memory growth after 100000 reset() calls: 11.84 MB   (BrotliCompress)
Memory growth after 100000 reset() calls: 0.16 MB    (BrotliDecompress)
```

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-18 21:42:14 -08:00
Dylan Conway
f3fd7506ef fix(windows): handle UV_UNKNOWN and UV_EAI_* error codes in libuv errno mapping (#25596)
## Summary
- Add missing `UV_UNKNOWN` and `UV_EAI_*` error code mappings to the
`errno()` function in `ReturnCode`
- Fixes panic "integer does not fit in destination type" on Windows when
libuv returns unmapped error codes
- Speculative fix for BUN-131E

## Root Cause
The `errno()` function was missing mappings for `UV_UNKNOWN` (-4094) and
all `UV_EAI_*` address info errors (-3000 to -3014). When libuv returned
these codes, the switch fell through to `else => null`, and the caller
at `sys_uv.zig:317` assumed success and tried to cast the negative
return code to `usize`, causing a panic.

This was triggered in `readFileWithOptions` -> `preadv` when:
- Memory-mapped file operations encounter exceptions (file
modified/truncated by another process, network drive issues)
- Windows returns error codes that libuv cannot map to standard errno
values

## Crash Report
```
Bun v1.3.5 (1e86ceb) on windows x86_64baseline []
panic: integer does not fit in destination type
sys_uv.zig:294: preadv
node_fs.zig:5039: readFileWithOptions
```

## Test plan
- [ ] This fix prevents a panic, converting it to a proper error.
Testing would require triggering `UV_UNKNOWN` from libuv, which is
difficult to do reliably (requires memory-mapped file exceptions or
unusual Windows errors).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-18 21:41:41 -08:00
robobun
c21c51a0ff test(security-scanner): add TTY prompt tests using Bun.Terminal (#25587)
Co-authored-by: Alistair Smith <hi@alistair.sh>
2025-12-19 05:21:44 +00:00
robobun
0bbf6c74b5 test: add describe blocks for grouping in bun-types.test.ts (#25598)
Co-authored-by: Alistair Smith <hi@alistair.sh>
2025-12-19 04:53:18 +00:00
Dylan Conway
57cbbc09e4 fix: correct off-by-one bounds checks in bundler and package installer (#25582)
## Summary

- Fix two off-by-one bounds check errors that used `>` instead of `>=`
- Both bugs could cause undefined behavior (array out-of-bounds access)
when an index equals the array length

## The Bugs

### 1. `src/install/postinstall_optimizer.zig:62`

```zig
// Before (buggy):
if (resolution > metas.len) continue;
const meta: *const Meta = &metas[resolution];  // Out-of-bounds when resolution == metas.len

// After (fixed):
if (resolution >= metas.len) continue;
```

### 2. `src/bundler/linker_context/doStep5.zig:10`

```zig
// Before (buggy):
if (id > c.graph.meta.len) return;
const resolved_exports = &c.graph.meta.items(.resolved_exports)[id];  // Out-of-bounds when id == c.graph.meta.len

// After (fixed):
if (id >= c.graph.meta.len) return;
```

## Why These Are Bugs

Valid array indices are `0` to `len - 1`. When `index == len`:
- `index > len` evaluates to `false` → check passes
- `array[index]` accesses `array[len]` → out-of-bounds / undefined
behavior

## Codebase Patterns

The rest of the codebase correctly uses `>=` for these checks:
- `lockfile.zig:484`: `if (old_resolution >= old.packages.len)
continue;`
- `lockfile.zig:522`: `if (old_resolution >= old.packages.len)
continue;`
- `LinkerContext.zig:389`: `if (source_index >= import_records_list.len)
continue;`
- `LinkerContext.zig:1667`: `if (source_index >= c.graph.ast.len) {`

## Test plan

- [x] Verified fix aligns with existing codebase patterns
- [ ] CI passes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-18 18:04:28 -08:00
Jarred Sumner
7f589ffb4b Disable coderabbit enrichment 2025-12-18 18:03:23 -08:00
Francis F
cea59d7fc0 docs(sqlite): fix .run() return value documentation (#25060)
Co-authored-by: Alistair Smith <hi@alistair.sh>
2025-12-18 20:44:35 +00:00
Jarred Sumner
4ea1454e4a Delete unused workflow 2025-12-18 12:04:28 -08:00
Dylan Conway
8941a363c3 fix: dupe ca string in .npmrc to prevent use-after-free (#25563)
## Summary

- Fix use-after-free bug when parsing `ca` option from `.npmrc`
- The `ca` string was being stored directly from the parser's arena
without duplication
- Since the parser arena is freed at the end of `loadNpmrc`, this
created a dangling pointer

## The Bug

In `src/ini.zig`, the `ca` string wasn't being duplicated like all other
string properties:

```zig
// Lines 983-986 explicitly warn about this:
// Need to be very, very careful here with strings.
// They are allocated in the Parser's arena, which of course gets
// deinitialized at the end of the scope.
// We need to dupe all strings

// Line 981: Parser arena is freed here
defer parser.deinit();

// Line 1016-1020: THE BUG - string not duped!
if (out.asProperty("ca")) |query| {
    if (query.expr.asUtf8StringLiteral()) |str| {
        install.ca = .{
            .str = str,  // ← Dangling pointer after parser.deinit()!
        };
```

All other string properties in the same function correctly duplicate:
- `registry` (line 996): `try allocator.dupe(u8, str)`
- `cache` (line 1002): `try allocator.dupe(u8, str)`
- `cafile` (line 1037): `asStringCloned(allocator)`
- `ca` array items (line 1026): `asStringCloned(allocator)`

## User Impact

When a user has `ca=<certificate>` in their `.npmrc` file:
1. The certificate string is parsed and stored
2. The parser arena is freed
3. `install.ca.str` becomes a dangling pointer
4. Later TLS/SSL operations access freed memory
5. Could cause crashes, undefined behavior, or security issues

## Test plan

- Code inspection confirms this matches the pattern used for all other
string properties
- The fix adds `try allocator.dupe(u8, str)` to match `cache`,
`registry`, etc.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-17 19:56:25 -08:00
Dylan Conway
722ac3aa5a fix: check correct variable in subprocess stdin cleanup (#25562)
## Summary

- Fix typo in `onProcessExit` where `existing_stdin_value.isCell()` was
checked instead of `existing_value.isCell()`
- Since `existing_stdin_value` is always `.zero` at that point, the
condition was always false, making the inner block dead code

## The Bug

In `src/bun.js/api/bun/subprocess.zig:593`:

```zig
var existing_stdin_value = jsc.JSValue.zero;  // Line 590 - always .zero
if (this_jsvalue != .zero) {
    if (jsc.Codegen.JSSubprocess.stdinGetCached(this_jsvalue)) |existing_value| {
        if (existing_stdin_value.isCell()) {  // BUG! Should be existing_value
            // This block was DEAD CODE - never executed
```

Compare with the correct pattern used elsewhere:
```zig
// shell/subproc.zig:251-252 (CORRECT)
if (jsc.Codegen.JSSubprocess.stdinGetCached(subprocess.this_jsvalue)) |existing_value| {
    jsc.WebCore.FileSink.JSSink.setDestroyCallback(existing_value, 0);  // Uses existing_value
}
```

## Impact

The dead code prevented:
- Recovery of stdin from cached JS value when `weak_file_sink_stdin_ptr`
is null
- Proper cleanup via `onAttachedProcessExit` on the FileSink  
- `setDestroyCallback` cleanup in `onProcessExit`

Note: The user-visible impact was mitigated by redundant cleanup paths
in `Writable.zig` that also call `setDestroyCallback`.

## Test plan

- Code inspection confirms this is a straightforward typo fix
- Existing subprocess tests continue to pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-17 18:34:58 -08:00
Dylan Conway
a333d02f84 fix: correct inverted buffer allocation logic in Postgres array parsing (#25564)
## Summary

- Fix inverted buffer allocation logic when parsing strings in Postgres
arrays
- Strings larger than 16KB were incorrectly using the stack buffer
instead of dynamically allocating
- This caused spurious `InvalidByteSequence` errors for valid data

## The Bug

In `src/sql/postgres/DataCell.zig`, the condition for when to use
dynamic allocation was inverted:

```zig
// BEFORE (buggy):
const needs_dynamic_buffer = str_bytes.len < stack_buffer.len;  // TRUE when SMALL

// AFTER (fixed):
const needs_dynamic_buffer = str_bytes.len > stack_buffer.len;  // TRUE when LARGE
```

## What happened with large strings (>16KB):

1. `needs_dynamic_buffer` = false (e.g., 20000 < 16384 is false)
2. Uses `stack_buffer[0..]` which is only 16KB
3. `unescapePostgresString` hits bounds check and returns
`BufferTooSmall`
4. Error converted to `InvalidByteSequence`
5. User gets error even though data is valid

## User Impact

Users with Postgres arrays containing JSON or string elements larger
than 16KB would get spurious InvalidByteSequence errors even though
their data was perfectly valid.

## Test plan

- Code inspection confirms the logic was inverted
- The fix aligns with the intended behavior: use stack buffer for small
strings, dynamic allocation for large strings

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-17 18:34:17 -08:00
Dylan Conway
c1acb0b9a4 fix(shell): prevent double-close of fd when using &> redirect with builtins (#25568)
## Summary

- Fix double-close of file descriptor when using `&>` redirect with
shell builtin commands
- Add `dupeRef()` helper for cleaner reference counting semantics
- Add tests for `&>` and `&>>` redirects with builtins

## Test plan

- [x] Added tests in `test/js/bun/shell/file-io.test.ts` that reproduce
the bug
- [x] All file-io tests pass

## The Bug

When using `&>` to redirect both stdout and stderr to the same file with
a shell builtin command (e.g., `pwd &> file.txt`), the code was creating
two separate `IOWriter` instances that shared the same file descriptor.
When both `IOWriter`s were destroyed, they both tried to close the same
fd, causing an `EBADF` (bad file descriptor) error.

```javascript
import { $ } from "bun";
await $`pwd &> output.txt`; // Would crash with EBADF
```

## The Fix

1. Share a single `IOWriter` between stdout and stderr when both are
redirected to the same file, with proper reference counting
2. Rename `refSelf` to `dupeRef` for clarity across `IOReader`,
`IOWriter`, `CowFd`, and add it to `Blob` for consistency
3. Fix the `Body.Value` blob case to also properly reference count when
the same blob is assigned to multiple outputs

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Latest model <noreply@anthropic.com>
2025-12-17 18:33:53 -08:00
Jarred Sumner
ffd2240c31 Bump 2025-12-17 11:42:54 -08:00
190n
fa5a5bbe55 fix: v8::Value::IsInt32()/IsUint32() edge cases (#25548)
### What does this PR do?

- fixes both functions returning false for double-encoded values (even
if the numeric value is a valid int32/uint32)
- fixes IsUint32() returning false for values that don't fit in int32
- fixes the test from #22462 not testing anything (the native functions
were being passed a callback to run garbage collection as the first
argument, so it was only ever testing what the type check APIs returned
for that function)
- extends the test to cover the first edge case above

### How did you verify your code works?

The new tests fail without these fixes.

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-12-17 00:52:16 -08:00
Dylan Conway
1e86cebd74 Add bun_version to link metadata (#25545)
## Summary
- Add `bun_version` field to `link-metadata.json`
- Pass `VERSION` CMake variable to the metadata script as `BUN_VERSION`
env var

This ensures the build version is captured in the link metadata JSON
file, which is useful for tracking which version produced a given build
artifact.

## Test plan
- Build with `bun bd` and verify `link-metadata.json` includes
`bun_version`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-16 19:53:05 -08:00
robobun
bc47f87450 fix(ini): support env var expansion in quoted .npmrc values (#25518)
## Summary

Fixes environment variable expansion in quoted `.npmrc` values and adds
support for the `?` optional modifier.

### Changes

**Simplified quoted value handling:**
- Removed unnecessary `isProperlyQuoted` check that added complexity
without benefit
- When JSON.parse succeeds for quoted strings, expand env vars in the
result
- When JSON.parse fails for single-quoted strings like `'${VAR}'`, still
expand env vars

**Added `?` modifier support (matching npm behavior):**
- `${VAR}` - if VAR is undefined, leaves as `${VAR}` (no expansion)
- `${VAR?}` - if VAR is undefined, expands to empty string

This applies consistently to both quoted and unquoted values.

### Examples

```ini
# Env var found - all expand to the value
token = ${NPM_TOKEN}
token = "${NPM_TOKEN}"
token = '${NPM_TOKEN}'

# Env var NOT found - left as-is
token = ${NPM_TOKEN}         # → ${NPM_TOKEN}
token = "${NPM_TOKEN}"       # → ${NPM_TOKEN}
token = '${NPM_TOKEN}'       # → ${NPM_TOKEN}

# Optional modifier (?) - expands to empty if not found
token = ${NPM_TOKEN?}        # → (empty)
token = "${NPM_TOKEN?}"      # → (empty)
auth = "Bearer ${TOKEN?}"    # → Bearer 
```

### Test Plan

- Added 8 new tests for the `?` modifier covering quoted and unquoted
values
- Verified all expected values match `npm config get` behavior
- All 30 ini tests pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2025-12-16 19:49:23 -08:00
Dylan Conway
698b004ea4 Add step in CI to upload link metadata (#25448)
### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-16 14:30:10 -08:00
robobun
b135c207ed fix(yaml): remove YAML 1.1 legacy boolean values for YAML 1.2 compliance (#25537)
## Summary

- Remove YAML 1.1 legacy boolean values (`yes/no/on/off/y/Y`) that are
not part of the YAML 1.2 Core Schema
- Keep YAML 1.2 Core Schema compliant values: `true/True/TRUE`,
`false/False/FALSE`, `null/Null/NULL`, `0x` hex, `0o` octal
- Add comprehensive roundtrip tests for YAML 1.2 compliance

**Removed (now parsed as strings):**
- `yes`, `Yes`, `YES` (were `true`)
- `no`, `No`, `NO` (were `false`)
- `on`, `On`, `ON` (were `true`)
- `off`, `Off`, `OFF` (were `false`)
- `y`, `Y` (were `true`)

This fixes a common pain point where GitHub Actions workflow files with
`on:` keys would have the key parsed as boolean `true` instead of the
string `"on"`.
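
A short sketch of the behavior change, assuming `Bun.YAML.parse` as the entry point:

```typescript
const doc = Bun.YAML.parse("on: push\nenabled: yes");
// YAML 1.1 (before): the key `on` and the value `yes` were coerced to booleans.
// YAML 1.2 Core Schema (after): both stay strings.
console.log(doc); // { on: "push", enabled: "yes" }
```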

## YAML 1.2 Core Schema Specification

From [YAML 1.2.2 Section 10.3.2 Tag
Resolution](https://yaml.org/spec/1.2.2/#1032-tag-resolution):

| Regular expression | Resolved to tag |
|-------------------|-----------------|
| `null \| Null \| NULL \| ~` | tag:yaml.org,2002:null |
| `/* Empty */` | tag:yaml.org,2002:null |
| `true \| True \| TRUE \| false \| False \| FALSE` |
tag:yaml.org,2002:bool |
| `[-+]? [0-9]+` | tag:yaml.org,2002:int (Base 10) |
| `0o [0-7]+` | tag:yaml.org,2002:int (Base 8) |
| `0x [0-9a-fA-F]+` | tag:yaml.org,2002:int (Base 16) |
| `[-+]? ( \. [0-9]+ \| [0-9]+ ( \. [0-9]* )? ) ( [eE] [-+]? [0-9]+ )?`
| tag:yaml.org,2002:float |
| `[-+]? ( \.inf \| \.Inf \| \.INF )` | tag:yaml.org,2002:float
(Infinity) |
| `\.nan \| \.NaN \| \.NAN` | tag:yaml.org,2002:float (Not a number) |

Note: `yes`, `no`, `on`, `off`, `y`, `n` are **not** in the YAML 1.2
Core Schema boolean list. These were removed from YAML 1.1 as noted in
[YAML 1.2 Section 1.2](https://yaml.org/spec/1.2.2/#12-yaml-history):

> The YAML 1.2 specification was published in 2009. Its primary focus
was making YAML a strict superset of JSON. **It also removed many of the
problematic implicit typing recommendations.**

## Test plan

- [x] Updated existing YAML tests to reflect YAML 1.2 Core Schema
behavior
- [x] Added roundtrip tests (stringify → parse) for YAML 1.2 compliance
- [x] Verified tests fail with system Bun (YAML 1.1 behavior) and pass
with debug build (YAML 1.2)
- [x] Run `bun bd test test/js/bun/yaml/yaml.test.ts`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-16 14:29:39 -08:00
Ciro Spaciari
a1dd26d7db fix(usockets) fix last_write_failed flag (#25496)
https://github.com/oven-sh/bun/pull/25361 needs to be merged before this
PR

## Summary
- Move `last_write_failed` flag from loop-level to per-socket flag for
correctness

## Changes

- Move `last_write_failed` from `loop->data` to `socket->flags`
- More semantically correct since write status is per-socket, not
per-loop

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-16 14:26:42 -08:00
Christian Rishøj
7c06320d0f fix(ws): fix zlib version mismatch on Windows (segfault) (#25538)
## Summary

Fixes #24593 - WebSocket segfault on Windows when publishing large
messages with `perMessageDeflate: true`.
Also fixes #21028 (duplicate issue).
Also closes #25457 (alternative PR).

**Root cause:** 

On Windows, the C++ code was compiled against system zlib headers
(1.3.1) but linked against Bun's vendored Cloudflare zlib (1.2.8).

This version mismatch caused `deflateInit2()` to return
`Z_VERSION_ERROR` (-6), leaving the deflate stream in an invalid state.
All subsequent `deflate()` calls returned `Z_STREAM_ERROR` (-2),
producing zero output, which then caused an integer underflow when
subtracting the 4-byte trailer → segfault in memcpy.

**Fix:** 

Add `${VENDOR_PATH}/zlib` to the C++ include paths in
`cmake/targets/BuildBun.cmake`. This ensures the vendored zlib headers
are found before system headers, maintaining header/library version
consistency.

This is a simpler alternative to #25457 which worked around the issue by
using libdeflate exclusively.

## Test plan

- [x] Added regression test `test/regression/issue/24593.test.ts` with 4
test cases:
  - Large ~109KB JSON message publish (core reproduction)
  - Multiple rapid publishes (buffer corruption)
  - Broadcast to multiple subscribers
  - Messages at CORK_BUFFER_SIZE boundary (16KB)
- [x] Tests pass on Windows (was crashing before fix)
- [x] Tests pass on macOS

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-16 14:23:29 -08:00
robobun
dd04c57258 feat: implement V8 Value type checking APIs (#22462)
## Summary

This PR implements four V8 C++ API methods for type checking that are
commonly used by native Node.js modules:
- `v8::Value::IsMap()` - checks if value is a Map
- `v8::Value::IsArray()` - checks if value is an Array  
- `v8::Value::IsInt32()` - checks if value is a 32-bit integer
- `v8::Value::IsBigInt()` - checks if value is a BigInt

## Implementation Details

The implementation maps V8's type checking APIs to JavaScriptCore's
equivalent functionality:
- `IsMap()` uses JSC's `inherits<JSC::JSMap>()` check
- `IsArray()` uses JSC's `isArray()` function with the global object
- `IsInt32()` uses JSC's `isInt32()` method  
- `IsBigInt()` uses JSC's `isBigInt()` method

## Changes

- Added method declarations to `V8Value.h`
- Implemented the methods in `V8Value.cpp` 
- Added symbol exports to `napi.zig` (both Unix and Windows mangled
names)
- Added symbols to `symbols.txt` and `symbols.dyn`
- Added comprehensive tests in `v8-module/main.cpp` and `v8.test.ts`

## Testing

The implementation has been verified to:
- Compile successfully without errors
- Export the correct symbols in the binary
- Follow established patterns in the V8 compatibility layer

Tests cover various value types including empty and populated
Maps/Arrays, different numeric ranges, BigInts, and other JavaScript
types.

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-12-15 19:50:11 -08:00
robobun
344b2c1dfe fix: Response.clone() no longer locks body when body was accessed before clone (#25484)
## Summary
- Fix bug where `Response.clone()` would lock the original response's
body when `response.body` was accessed before cloning
- Apply the same fix to `Request.clone()`

## Root Cause
When `response.body` was accessed before calling `response.clone()`, the
original response's body would become locked after cloning. This
happened because:

1. When the cloned response was wrapped with `toJS()`,
`checkBodyStreamRef()` was called which moved the stream from
`Locked.readable` to `js.gc.stream` and cleared `Locked.readable`
2. The subsequent code tried to get the stream from `Locked.readable`,
which was now empty, so the body cache update was skipped
3. The JavaScript-level body property cache still held the old locked
stream

## Fix
Updated the cache update logic to:
1. For the cloned response: use `js.gc.stream.get()` instead of
`Locked.readable.get()` since `toJS()` already moved the stream
2. For the original response: use `Locked.readable.get()` which still
holds the teed stream since `checkBodyStreamRef` hasn't been called yet

## Reproduction
```javascript
const readableStream = new ReadableStream({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("Hello, world!"));
    controller.close();
  },
});

const response = new Response(readableStream);
console.log(response.body?.locked); // Accessing body before clone
const cloned = response.clone();
console.log(response.body?.locked); // Expected: false, Actual: true 
console.log(cloned.body?.locked);   // Expected: false, Actual: false 
```

## Test plan
- [x] Added regression tests for `Response.clone()` in
`test/js/web/fetch/response.test.ts`
- [x] Added regression test for `Request.clone()` in
`test/js/web/request/request.test.ts`
- [x] Verified tests fail with system bun (before fix) and pass with
debug build (after fix)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-12-15 18:46:02 -08:00
Ciro Spaciari
aef0b5b4a6 fix(usockets): safely handle socket reallocation during context adoption (#25361)
## Summary
- Fix use-after-free vulnerability during socket adoption by properly
tracking reallocated sockets
- Add safety checks to prevent linking closed sockets to context lists
- Properly track socket state with new `is_closed`, `adopted`, and
`is_tls` flags

## What does this PR do?

This PR improves event loop stability by addressing potential
use-after-free issues that can occur when sockets are reallocated during
adoption (e.g., when upgrading a TCP socket to TLS).

### Key Changes

**Socket State Tracking
([internal.h](packages/bun-usockets/src/internal/internal.h))**
- Added `is_closed` flag to explicitly track when a socket has been
closed
- Added `adopted` flag to mark sockets that were reallocated during
context adoption
- Added `is_tls` flag to track TLS socket state for proper low-priority
queue handling

**Safe Socket Adoption
([context.c](packages/bun-usockets/src/context.c))**
- When `us_poll_resize()` returns a new pointer (reallocation occurred),
the old socket is now:
  - Marked as closed (`is_closed = 1`)
  - Added to the closed socket cleanup list
  - Marked as adopted (`adopted = 1`)
  - Has its `prev` pointer set to the new socket for event redirection
- Added guards to
`us_internal_socket_context_link_socket/listen_socket/connecting_socket`
to prevent linking already-closed sockets

**Event Loop Handling ([loop.c](packages/bun-usockets/src/loop.c))**
- After callbacks that can trigger socket adoption (`on_open`,
`on_writable`, `on_data`), the event loop now checks if the socket was
reallocated and redirects to the new socket
- Low-priority socket handling now properly checks `is_closed` state and
uses `is_tls` flag for correct SSL handling

**Poll Resize Safety
([epoll_kqueue.c](packages/bun-usockets/src/eventing/epoll_kqueue.c))**
- Changed `us_poll_resize()` to always allocate new memory with
`us_calloc()` instead of `us_realloc()` to ensure the old pointer
remains valid for cleanup
- Now takes `old_ext_size` parameter to correctly calculate memory sizes
- Re-enabled `us_internal_loop_update_pending_ready_polls()` call in
`us_poll_change()` to ensure pending events are properly redirected

### How did you verify your code works?
Run existing CI and existing socket upgrade tests under asan build
2025-12-15 18:43:51 -08:00
robobun
740fb23315 fix(windows): improve bunx metadata validation (#25012)
## Summary

- Improved validation for bunx metadata files on Windows
- Added graceful error handling for malformed metadata instead of
crashing
- Added regression test for the fix

## Test plan

- [x] Run `bun bd test test/cli/install/bunx.test.ts -t "should not
crash on corrupted"`
- [x] Manual testing with corrupted `.bunx` files
- [x] Verified normal operation still works

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-12-15 18:37:09 -08:00
robobun
2dd997c4b5 fix(node): support duplicate dlopen calls with DLHandleMap (#24404)
## Summary

Fixes an issue where loading the same native module
(NODE_MODULE_CONTEXT_AWARE) multiple times would fail with:
```
symbol 'napi_register_module_v1' not found in native module
```

Fixes https://github.com/oven-sh/bun/issues/23136
Fixes https://github.com/oven-sh/bun/issues/21432

## Root Cause

When a native module is loaded for the first time:
1. `dlopen()` loads the shared library
2. Static constructors run and call `node_module_register()`
3. The module registers successfully

On subsequent loads of the same module:
1. `dlopen()` returns the same handle (library already loaded)
2. Static constructors **do not run again**
3. No registration occurs, leading to the "symbol not found" error
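
A minimal sketch of the failing pattern (the addon path is hypothetical):

```typescript
const first: any = { exports: {} };
const second: any = { exports: {} };

process.dlopen(first, "./build/Release/addon.node");  // static constructors register the module
process.dlopen(second, "./build/Release/addon.node"); // previously: "symbol 'napi_register_module_v1' not found"

console.log(Object.keys(first.exports), Object.keys(second.exports));
```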

## Solution

Implemented a thread-safe `DLHandleMap` to cache and replay module
registrations:

1. **Thread-local storage** captures the `node_module*` during static
constructor execution
2. **After successful first load**, save the registration to the global
map
3. **On subsequent loads**, look up the cached registration and replay
it

This approach matches Node.js's `global_handle_map` implementation.

## Changes

- Created `src/bun.js/bindings/DLHandleMap.h` - thread-safe singleton
cache
- Added thread-local storage in `src/bun.js/bindings/v8/node.cpp`
- Modified `src/bun.js/bindings/BunProcess.cpp` to save/lookup cached
modules
- Also includes the exports fix (using `toObject()` to match Node.js
behavior)

## Test Plan

Added `test/js/node/process/dlopen-duplicate-load.test.ts` with tests
that:
- Build a native addon using node-gyp
- Load it twice with `process.dlopen`
- Verify both loads succeed
- Test with different exports objects

All tests pass.

## Related Issue

Fixes the second bug discovered in the segfault investigation.

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-12-15 18:35:26 -08:00
robobun
4061e1cb4f fix: handle EINVAL from copy_file_range on eCryptfs (#25534)
## Summary
- Add `EINVAL` and `OPNOTSUPP` to the list of errors that trigger
fallback from `copy_file_range` to `sendfile`/read-write loop
- Fixes `Bun.write` and `fs.copyFile` failing on eCryptfs filesystems
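
For example (paths are hypothetical), both of these now fall back instead of surfacing `EINVAL` on eCryptfs:

```typescript
import { copyFile } from "node:fs/promises";

// Bun.write with a file source copies via copy_file_range when possible,
// and now falls back to sendfile/read-write if the filesystem rejects it.
await Bun.write("/home/user/Private/copy.bin", Bun.file("/home/user/Private/original.bin"));
await copyFile("/home/user/Private/original.bin", "/home/user/Private/copy2.bin");
```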

## Test plan
- [x] Existing `copyFile` tests pass (`bun bd test
test/js/node/fs/fs.test.ts -t "copyFile"`)
- [x] Existing `copy_file_range` fallback tests pass (`bun bd test
test/js/bun/io/bun-write.test.js -t "should work when copyFileRange is
not available"`)

Fixes #13968

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-12-15 17:47:08 -08:00
robobun
6386eef8aa fix(bunx): handle empty string arguments on Windows (#25025)
## Summary

Fixes #13316
Fixes #18275

Running `bunx cowsay ""` (or any package with an empty string argument)
on Windows caused a panic. Additionally, `bunx concurrently "command
with spaces"` was splitting quoted arguments incorrectly.

**Repro #13316:**
```bash
bunx cowsay ""
# panic(main thread): reached unreachable code
```

**Repro #18275:**
```bash
bunx concurrently "bun --version" "bun --version"
# Only runs once, arguments split incorrectly
# Expected: ["bun --version", "bun --version"]
# Actual: ["bun", "--version", "bun", "--version"]
```

## Root Cause

The bunx fast path on Windows bypasses libuv and calls `CreateProcessW`
directly to save 5-12ms. The command line building logic had two issues:

1. **Empty strings**: Not quoted at all, resulting in invalid command
line
2. **Arguments with spaces**: Not quoted, causing them to be split into
multiple arguments

## Solution

Implement Windows command-line argument quoting using libuv's proven
algorithm:
- Port of libuv's `quote_cmd_arg` function (process backwards + reverse)
- Empty strings become `""`
- Strings with spaces/tabs/quotes are wrapped in quotes
- Backslashes before quotes are properly escaped per Windows rules

**Why not use libuv directly?**
- Normal `Bun.spawn()` uses `uv_spawn()` which handles quoting
internally
- bunx fast path bypasses libuv to save 5-12ms (calls `CreateProcessW`
directly)
- libuv's `quote_cmd_arg` is a static function (not exported)
- Solution: port the algorithm to Zig

## Test Plan

- [x] Added regression test for empty strings (#13316)
- [x] Added regression test for arguments with spaces (#18275)
- [x] Verified system bun (v1.3.3) fails both tests
- [x] Verified fix passes both tests
- [x] Implementation based on battle-tested libuv algorithm

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-12-15 17:29:04 -08:00
robobun
3394fd3bdd fix(node:url): return empty string for invalid domains in domainToASCII/domainToUnicode (#25196)
## Summary
- Fixes `url.domainToASCII` and `url.domainToUnicode` to return empty
string instead of throwing `TypeError` when given invalid domains
- Per Node.js docs: "if `domain` is an invalid domain, the empty string
is returned"

## Test plan
- [x] Run `bun bd test test/regression/issue/24191.test.ts` - all 2
tests pass
- [x] Verify tests fail with system Bun (`USE_SYSTEM_BUN=1`) to confirm
fix validity
- [x] Manual verification: `url.domainToASCII('xn--iñvalid.com')`
returns `""`

## Example

Before (bug):
```
$ bun -e "import url from 'node:url'; console.log(url.domainToASCII('xn--iñvalid.com'))"
TypeError: domainToASCII failed
```

After (fixed):
```
$ bun -e "import url from 'node:url'; console.log(url.domainToASCII('xn--iñvalid.com'))"
(empty string output)
```

Closes #24191

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-15 17:26:32 -08:00
robobun
5a8cdc08f0 docs(bundler): add comprehensive Bun.build compile API documentation (#25536) 2025-12-15 16:20:50 -08:00
Jarred Sumner
dcc3386611 [publish images] 2025-12-15 15:34:04 -08:00
robobun
8dc79641c8 fix(http): support proxy passwords longer than 4096 characters (#25530)
## Summary
- Fixes silent 401 Unauthorized errors when using proxies with long
passwords (e.g., JWT tokens > 4096 chars)
- Bun was silently dropping proxy passwords exceeding 4095 characters,
falling through to code that only encoded the username

## Changes
- Added `PercentEncoding.decodeWithFallback` which uses a 4KB stack
buffer for the common case and falls back to heap allocation only for
larger inputs
- Updated proxy auth encoding in `AsyncHTTP.zig` to use the new fallback
method

## Test plan
- [x] Added test case that verifies passwords > 4096 chars are handled
correctly
- [x] Test fails with system bun (v1.3.3), passes with this fix
- [x] All 29 proxy tests pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-15 13:21:41 -08:00
robobun
d865ef41e2 feat: add Bun.Terminal API for pseudo-terminal (PTY) support (#25415)
## Summary

This PR adds a new `Bun.Terminal` API for creating and managing
pseudo-terminals (PTYs), enabling interactive terminal applications in
Bun.

### Features

- **Standalone Terminal**: Create PTYs directly with `new
Bun.Terminal(options)`
- **Spawn Integration**: Spawn processes with PTY attached via
`Bun.spawn({ terminal: options })`
- **Full PTY Control**: Write data, resize, set raw mode, and handle
callbacks

## Examples

### Basic Terminal with Spawn (Recommended)

```typescript
const proc = Bun.spawn(["bash"], {
  terminal: {
    cols: 80,
    rows: 24,
    data(terminal, data) {
      // Handle output from the terminal
      process.stdout.write(data);
    },
    exit(terminal, code, signal) {
      console.log(`Process exited with code ${code}`);
    },
  },
});

// Write commands to the terminal
proc.terminal.write("echo Hello from PTY!\n");
proc.terminal.write("exit\n");

await proc.exited;
proc.terminal.close();
```

### Interactive Shell

```typescript
// Create an interactive shell that mirrors to stdout
const proc = Bun.spawn(["bash", "-i"], {
  terminal: {
    cols: process.stdout.columns || 80,
    rows: process.stdout.rows || 24,
    data(term, data) {
      process.stdout.write(data);
    },
  },
});

// Forward stdin to the terminal
process.stdin.setRawMode(true);
for await (const chunk of process.stdin) {
  proc.terminal.write(chunk);
}
```

### Running Interactive Programs (vim, htop, etc.)

```typescript
const proc = Bun.spawn(["vim", "file.txt"], {
  terminal: {
    cols: process.stdout.columns,
    rows: process.stdout.rows,
    data(term, data) {
      process.stdout.write(data);
    },
  },
});

// Handle terminal resize
process.stdout.on("resize", () => {
  proc.terminal.resize(process.stdout.columns, process.stdout.rows);
});

// Forward input
process.stdin.setRawMode(true);
for await (const chunk of process.stdin) {
  proc.terminal.write(chunk);
}
```

### Capturing Colored Output

```typescript
const chunks: Uint8Array[] = [];

const proc = Bun.spawn(["ls", "--color=always"], {
  terminal: {
    data(term, data) {
      chunks.push(data);
    },
  },
});

await proc.exited;
proc.terminal.close();

// Output includes ANSI color codes
const output = Buffer.concat(chunks).toString();
console.log(output);
```

### Standalone Terminal (Advanced)

```typescript
const terminal = new Bun.Terminal({
  cols: 80,
  rows: 24,
  data(term, data) {
    console.log("Received:", data.toString());
  },
});

// Use terminal.stdin as the fd for child process stdio
const proc = Bun.spawn(["bash"], {
  stdin: terminal.stdin,
  stdout: terminal.stdin,
  stderr: terminal.stdin,
});

terminal.write("echo hello\n");

// Clean up
terminal.close();
```

### Testing TTY Detection

```typescript
const proc = Bun.spawn([
  "bun", "-e", 
  "console.log('isTTY:', process.stdout.isTTY)"
], {
  terminal: {},
});

// Output: isTTY: true
```

## API

### `Bun.spawn()` with `terminal` option

```typescript
const proc = Bun.spawn(cmd, {
  terminal: {
    cols?: number,        // Default: 80
    rows?: number,        // Default: 24  
    name?: string,        // Default: "xterm-256color"
    data?: (terminal: Terminal, data: Uint8Array) => void,
    exit?: (terminal: Terminal, code: number, signal: string | null) => void,
    drain?: (terminal: Terminal) => void,
  }
});

// Access the terminal
proc.terminal.write(data);
proc.terminal.resize(cols, rows);
proc.terminal.setRawMode(enabled);
proc.terminal.close();

// Note: proc.stdin, proc.stdout, proc.stderr return null when terminal is used
```

### `new Bun.Terminal(options)`

```typescript
const terminal = new Bun.Terminal({
  cols?: number,
  rows?: number,
  name?: string,
  data?: (terminal, data) => void,
  exit?: (terminal, code, signal) => void,
  drain?: (terminal) => void,
});

terminal.stdin;   // Slave fd (for child process)
terminal.stdout;  // Master fd (for reading)
terminal.closed;  // boolean
terminal.write(data);
terminal.resize(cols, rows);
terminal.setRawMode(enabled);
terminal.ref();
terminal.unref();
terminal.close();
await terminal[Symbol.asyncDispose]();
```

## Implementation Details

- Uses `openpty()` to create pseudo-terminal pairs
- Properly manages file descriptor lifecycle with reference counting
- Integrates with Bun's event loop via `BufferedReader` and
`StreamingWriter`
- Supports `await using` syntax for automatic cleanup
- POSIX only (Linux, macOS) - not available on Windows

## Test Results

- 80 tests passing
- Covers: construction, writing, reading, resize, raw mode, callbacks,
spawn integration, error handling, GC safety

## Changes

- `src/bun.js/api/bun/Terminal.zig` - Terminal implementation
- `src/bun.js/api/bun/Terminal.classes.ts` - Class definition for
codegen
- `src/bun.js/api/bun/subprocess.zig` - Added terminal field and getter
- `src/bun.js/api/bun/js_bun_spawn_bindings.zig` - Terminal option
parsing
- `src/bun.js/api/BunObject.classes.ts` - Terminal getter on Subprocess
- `packages/bun-types/bun.d.ts` - TypeScript types
- `docs/runtime/child-process.mdx` - Documentation
- `test/js/bun/terminal/terminal.test.ts` - Comprehensive tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-12-15 12:51:13 -08:00
robobun
e66b4639bd fix: use LLVM unstable repo for Debian trixie/forky [publish images] (#25470)
## Summary
- Fix Docker image build failure on Debian trixie by using LLVM's
`unstable` repository instead of the non-existent `trixie` repository
- The LLVM apt repository (`apt.llvm.org`) doesn't have packages for
Debian trixie (13) or forky - attempts to access
`llvm-toolchain-trixie-19` return 404
- Pass `-n=unstable` flag to `llvm.sh` when running on these Debian
versions

## Test plan
- [ ] Verify the Docker image build succeeds at
https://github.com/oven-sh/bun-development-docker-image/actions

Fixes the build failure from:
https://github.com/oven-sh/bun-development-docker-image/actions/runs/20105199193

Error was:
```
E: The repository 'http://apt.llvm.org/trixie llvm-toolchain-trixie-19 Release' does not have a Release file.
```

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-15 12:45:45 -08:00
robobun
8698d25c52 fix: ensure TLS handshake callback fires before HTTP request handler (#25525)
## Summary

Fixes a flaky test (`test-http-url.parse-https.request.js`) where
`request.socket._secureEstablished` was intermittently `false` when the
HTTP request handler was called on HTTPS servers.

## Root Cause

The `isAuthorized` flag was stored in
`HttpContextData::flags.isAuthorized`, which is **shared across all
sockets** in the same context. This meant multiple concurrent TLS
connections could overwrite each other's authorization state, and the
value could be stale when read.

## Fix

Moved the `isAuthorized` flag from the context-level `HttpContextData`
to the per-socket `AsyncSocketData` base class. This ensures each socket
has its own authorization state that is set correctly during its TLS
handshake callback.

## Changes

- **`AsyncSocketData.h`**: Added per-socket `bool isAuthorized` field
- **`HttpContext.h`**: Updated handshake callback to set per-socket flag
instead of context-level flag
- **`JSNodeHTTPServerSocket.cpp`**: Updated `isAuthorized()` to read
from per-socket `AsyncSocketData` (via `HttpResponseData` which inherits
from it)

## Testing

Ran the flaky test 50+ times with 100% pass rate.

Also verified gRPC and HTTP2 tests still pass.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-15 12:44:26 -08:00
github-actions[bot]
81a5c79928 deps: update c-ares to v1.34.6 (#25509)
## What does this PR do?

Updates c-ares to version v1.34.6

Compare:
d3a507e920...3ac47ee46e

Auto-updated by [this
workflow](https://github.com/oven-sh/bun/actions/workflows/update-cares.yml)

Co-authored-by: Jarred-Sumner <709451+Jarred-Sumner@users.noreply.github.com>
2025-12-15 11:36:18 -08:00
Alistair Smith
fa996ad1a8 fix: Support @types/node@25.0.2 (#25532)
### What does this PR do?

CI failed again because of a change in @types/node

### How did you verify your code works?

bun-types.test.ts passes
2025-12-15 11:29:04 -08:00
robobun
ed1d6e595c skip HandleScope GC test on ASAN builds (#25523)
## Summary
- Skip `test_handle_scope_gc` test on ASAN builds due to false positives
from dynamic library boundary crossing (Bun built with ASAN+UBSAN,
native addon without sanitizers)

## Test plan
- CI should pass on ASAN builds with this test skipped
- Non-ASAN builds continue to run the test normally

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-15 11:11:11 -08:00
Jarred Sumner
7c98b0f440 Deflake test/js/node/test/parallel/test-fs-readdir-stack-overflow.js 2025-12-14 22:49:51 -08:00
robobun
f0d18d73c9 Disable CodeRabbit issue enrichment (#25520) 2025-12-14 14:56:58 -08:00
Ciro Spaciari
a5712b92b8 Fix 100% CPU usage with idle WebSocket connections on macOS (kqueue) (#25475)
### What does this PR do?

Fixes a bug where idle WebSocket connections would cause 100% CPU usage
on macOS and other BSD systems using kqueue.

**Root cause:** The kqueue event filter comparison was using bitwise AND
(`&`) instead of equality (`==`) when checking the filter type. Combined
with missing `EV_ONESHOT` flags on writable events, this caused the
event loop to continuously spin even when no actual I/O was pending.

**Changes:**
1. **Fixed filter comparison** in `epoll_kqueue.c`: Changed `filter &
EVFILT_READ` to `filter == EVFILT_READ` (same for `EVFILT_WRITE`). The
filter field is a value, not a bitmask.

2. **Added `EV_ONESHOT` flag** to writable events: kqueue writable
events now use one-shot mode to prevent continuous triggering.

3. **Re-arm writable events when needed**: After a one-shot writable
event fires, the code now properly updates the poll state and re-arms
the writable event if another write is still pending.

### How did you verify your code works?

Added a test that:
1. Creates a TLS WebSocket server and client
2. Sends messages then lets the connection sit idle
3. Measures CPU usage over 3 seconds
4. Fails if CPU usage exceeds 2% (expected is ~0.XX% when idle)
2025-12-12 11:10:22 -08:00
robobun
7dcd49f832 fix(install): only apply default trusted dependencies to npm packages (#25163)
## Summary
- The default trusted dependencies list should only apply to packages
installed from npm
- Non-npm sources (file:, link:, git:, github:) now require explicit
trustedDependencies
- This prevents malicious packages from spoofing trusted names through
local paths or git repos

## Test plan
- [x] Added test: file: dependency named "esbuild" does NOT auto-run
postinstall scripts
- [x] Added test: file: dependency runs scripts when explicitly added to
trustedDependencies
- [x] Verified tests fail with system bun (old behavior) and pass with
new build
- [x] Build compiles successfully

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2025-12-11 17:44:41 -08:00
robobun
c59a6997cd feat(bundler): add statically-analyzable dead-code elimination via feature flags (#25462)
## Summary
- Adds `import { feature } from "bun:bundle"` for compile-time feature
flag checking
- `feature("FLAG_NAME")` calls are replaced with `true`/`false` at
bundle time
- Enables dead-code elimination through `--feature=FLAG_NAME` CLI
argument
- Works in `bun build`, `bun run`, and `bun test`
- Available in both CLI and `Bun.build()` JavaScript API

## Usage

```ts
import { feature } from "bun:bundle";

if (feature("SUPER_SECRET")) {
  console.log("Secret feature enabled!");
} else {
  console.log("Normal mode");
}
```

### CLI
```bash
# Enable feature during build
bun build --feature=SUPER_SECRET index.ts

# Enable at runtime
bun run --feature=SUPER_SECRET index.ts

# Enable in tests
bun test --feature=SUPER_SECRET
```

### JavaScript API
```ts
await Bun.build({
  entrypoints: ['./index.ts'],
  outdir: './out',
  features: ['SUPER_SECRET', 'ANOTHER_FLAG'],
});
```

## Implementation
- Added `bundler_feature_flags` (as `*const bun.StringSet`) to
`RuntimeFeatures` and `BundleOptions`
- Added `bundler_feature_flag_ref` to Parser struct to track the
`feature` import
- Handle `bun:bundle` import at parse time (similar to macros) - capture
ref, return empty statement
- Handle `feature()` calls in `e_call` visitor - replace with boolean
based on flags
- Wire feature flags through CLI arguments and `Bun.build()` API to
bundler options
- Added `features` option to `JSBundler.zig` for JavaScript API support
- Added TypeScript types in `bun.d.ts`
- Added documentation to `docs/bundler/index.mdx`

## Test plan
- [x] Basic feature flag enabled/disabled tests (both CLI and API
backends)
- [x] Multiple feature flags test
- [x] Dead code elimination verification tests
- [x] Error handling for invalid arguments
- [x] Runtime tests with `bun run --feature=FLAG`
- [x] Test runner tests with `bun test --feature=FLAG`
- [x] Aliased import tests (`import { feature as checkFeature }`)
- [x] Ternary operator DCE tests
- [x] Tests use `itBundled` with both `backend: "cli"` and `backend:
"api"`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Alistair Smith <hi@alistair.sh>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-12-11 17:44:14 -08:00
Alistair Smith
1d50af7fe8 @types/bun: Update to @types/node@25, fallback to PropertyKey in test expect matchers when keyof unknown is used (#25460)
More accurately, developers cannot pass a value when expect values
resolve to `never`. This is easy to fall into when using the
`toContainKey*` matchers. Falling back to `PropertyKey` when this happens
is a sensible default.

### What does this PR do?

fixes #25456, cc @MonsterDeveloper
fixes #25461

### How did you verify your code works?

bun types integration test
2025-12-10 18:15:55 -08:00
Jarred Sumner
98cee5a57e Improve Bun.stringWidth accuracy and robustness (#25447)
This PR significantly improves `Bun.stringWidth` to handle a wider
variety of Unicode characters and escape sequences correctly.

## Zero-width character handling

Added support for many previously unhandled zero-width characters:
- Soft hyphen (U+00AD)
- Word joiner and invisible operators (U+2060-U+2064)
- Lone surrogates (U+D800-U+DFFF)
- Arabic formatting characters (U+0600-U+0605, U+06DD, U+070F, U+08E2)
- Indic script combining marks (Devanagari through Malayalam)
- Thai and Lao combining marks
- Combining Diacritical Marks Extended and Supplement
- Tag characters (U+E0000-U+E007F)

## ANSI escape sequence handling

### CSI sequences
- Now properly handles ALL CSI final bytes (0x40-0x7E), not just `m`
- This means cursor movement (A/B/C/D), erase (J/K), scroll (S/T), and
other CSI commands are now correctly excluded from width calculation

### OSC sequences
- Added support for OSC sequences (ESC ] ... BEL/ST)
- OSC 8 hyperlinks are now properly handled
- Supports both BEL (0x07) and ST (ESC \) terminators

### ESC ESC fix
- Fixed state machine bug where `ESC ESC` would incorrectly reset state
- Now correctly handles consecutive ESC characters

## Emoji handling

Added proper grapheme-aware emoji width calculation:
- Flag emoji (regional indicator pairs) → width 2
- Skin tone modifiers → width 2
- ZWJ sequences (family, professions, etc.) → width 2
- Keycap sequences → width 2
- Variation selectors (VS15 for text, VS16 for emoji presentation)
- Uses ICU's `UCHAR_EMOJI` property for accurate emoji detection

## Test coverage

Added comprehensive test suite with **94 tests** covering:
- All zero-width character categories
- All CSI final bytes
- OSC sequences with various terminators
- Emoji edge cases (flags, skin tones, ZWJ, keycaps, variation
selectors)
- East Asian width (CJK, fullwidth, halfwidth katakana)
- Indic and Thai script combining marks
- Fuzzer-like stress tests for robustness

## Breaking changes

This is a behavior change - `stringWidth` will return different values
for some inputs. However, the new values are more accurate
representations of terminal display width:

| Input | Old | New | Why |
|-------|-----|-----|-----|
| Flag emoji 🇺🇸 | 1 | 2 | Flags display as 2 cells |
| Skin tone 👋🏽 | 4 | 2 | Emoji + modifier = 1 grapheme |
| ZWJ family 👨‍👩‍👧 | 8 | 2 | ZWJ sequence = 1 grapheme |
| Word joiner U+2060 | 1 | 0 | Invisible character |
| OSC 8 hyperlinks | counted URL | just visible text | URLs are invisible |
| Cursor movement ESC[5A | counted | 0 | Control sequence |
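A few of the new results as a quick sketch (values taken from the table above):

```ts
Bun.stringWidth("🇺🇸");      // 2 — a flag (regional indicator pair) occupies two cells
Bun.stringWidth("👋🏽");      // 2 — emoji + skin-tone modifier is a single grapheme
Bun.stringWidth("a\u2060b"); // 2 — the word joiner (U+2060) is invisible
Bun.stringWidth("\x1b[5A");  // 0 — cursor movement is a control sequence
```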

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
2025-12-10 16:17:57 -08:00
Elfayeur - Remi
ac0099ebc6 docs: fix code highlight (#25411)
### What does this PR do?

Fix code highlight line, see problem:
<img width="684" height="663" alt="Screenshot 2025-12-08 at 11 40 39 AM"
src="https://github.com/user-attachments/assets/9894a7b7-ddd6-4bad-b7de-3e0e55ecd8cd"
/>


### How did you verify your code works?

Co-authored-by: Michael H <git@riskymh.dev>
2025-12-10 16:06:45 +11:00
Ryan Machado
64146d47f9 Update os-signals.mdx (#25372)
### What does this PR do?

Remove a loose section from os-signals documentation

### How did you verify your code works?

Just a small documentation change.

Co-authored-by: Michael H <git@riskymh.dev>
2025-12-10 16:06:39 +11:00
robobun
a2d8b75962 fix(yaml): quote strings ending with colons (#25443)
## Summary
- Fixes strings ending with colons (e.g., `"tin:"`) not being quoted in
YAML.stringify output
- This caused YAML.parse to fail with "Unexpected token" when parsing
the output back
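A round-trip sketch of the fixed behavior (assuming the helpers are reachable as `Bun.YAML`):

```ts
const text = Bun.YAML.stringify({ label: "tin:" });
// "tin:" is now emitted as a quoted scalar, so parsing the output back succeeds
console.log(Bun.YAML.parse(text)); // { label: "tin:" }
```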

## Test plan
- Added regression tests in `test/regression/issue/25439.test.ts`
- Verified round-trip works for various strings ending with colons
- Ran existing YAML tests to ensure no regressions

Fixes #25439

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-09 18:20:26 -08:00
Kyle
a15fe76bf2 add brotli and zstd to CompressionStream and DecompressionStream types (#25374)
### What does this PR do?

- removes the `Unimplemented in Bun` comment on `CompressionStream` and
`DecompressionStream`
- updates the types for `CompressionStream` and `DecompressionStream` to
add a new internal `CompressionFormat` type to the constructor, which
adds `brotli` and `zstd` to the union
- adds tests for brotli and zstd usage
- adds lib.dom.d.ts exclusions for brotli and zstd as these don't exist
in the DOM version of CompressionFormat

fixes #25367
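A hedged sketch of the newly typed formats (the literal strings `"brotli"` and `"zstd"` are the ones this PR adds to the union):

```ts
const source = new Response("hello zstd").body!;
const compressed = source.pipeThrough(new CompressionStream("zstd"));
const restored = compressed.pipeThrough(new DecompressionStream("zstd"));
console.log(await new Response(restored).text()); // "hello zstd"
```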

### How did you verify your code works?

typechecks and tests
2025-12-09 17:56:55 -08:00
robobun
8dc084af5f fix(fetch): ignore proxy object without url property (#25414)
## Summary
- When a URL object is passed as the proxy option, or when a proxy
object lacks a "url" property, ignore it instead of throwing an error
- This fixes a regression introduced in 1.3.4 where libraries like taze
that pass URL objects as proxy values would fail
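A sketch of what is now tolerated (the proxy host is a placeholder; `as any` only quiets the type for the URL-instance case):

```ts
// A URL instance has no own "url" property, so it is ignored and the
// request is made directly instead of throwing.
const res = await fetch("https://example.com/", {
  proxy: new URL("http://proxy.example.com:8080") as any,
});
console.log(res.status);
```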

## Test plan
- Added test: "proxy as URL object should be ignored (no url property)"
- passes a URL object directly as proxy
- Updated test: "proxy object without url is ignored (regression
#25413)" - proxy object with headers but no url
- Updated test: "proxy object with null url is ignored (regression
#25413)" - proxy object where url is null
- All 29 proxy tests pass

Fixes #25413

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-12-09 12:31:45 -08:00
Alistair Smith
2028e21d60 fmt bun.d.ts 2025-12-08 18:00:09 -08:00
Ciro Spaciari
f25ea59683 feat(s3): add Content-Disposition support for S3 uploads (#25363)
### What does this PR do?
- Add `contentDisposition` option to S3 file uploads to control the
`Content-Disposition` HTTP header
- Support passing `contentDisposition` through all S3 upload paths
(simple uploads, multipart uploads, and streaming uploads)
- Add TypeScript types for the new option
Fixes https://github.com/oven-sh/bun/issues/25362
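A hedged sketch of the new option (bucket name, key, and upload shape are illustrative):

```ts
import { S3Client } from "bun";

const s3 = new S3Client({ bucket: "my-bucket" });

// contentDisposition becomes the Content-Disposition header stored with the object
await s3.file("hello.txt").write("hello from Bun", {
  type: "text/plain",
  contentDisposition: 'attachment; filename="hello.txt"',
});
```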
### How did you verify your code works?
Test
2025-12-08 15:30:20 -08:00
Jarred Sumner
55c6afb498 Deflake test/js/bun/net/socket.test.ts 2025-12-08 11:16:26 -08:00
Jarred Sumner
0aca002161 Deflake test/js/bun/util/sleep.test.ts 2025-12-08 11:14:52 -08:00
robobun
4980736786 docs(server): fix satisfies Serve type in export default example (#25410) 2025-12-08 09:25:00 -08:00
robobun
3af0d23d53 docs: expand single-file executable file embedding documentation (#25408)
## Summary

- Expanded documentation for embedding files in single-file executables
with `with { type: "file" }`
- Added clear explanation of how the import attribute works and path
transformation at build time
- Added examples for reading embedded files with both `Bun.file()` and
Node.js `fs` APIs
- Added practical examples: JSON configs, HTTP static assets, templates,
binary files (WASM, fonts)
- Improved `Bun.embeddedFiles` section with a dynamic asset server
example
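A minimal sketch of the pattern being documented:

```ts
// config.json is embedded at build time; the import resolves to its path at runtime
import configPath from "./config.json" with { type: "file" };

const config = await Bun.file(configPath).json();
console.log(config);
```

Compiled with `bun build --compile ./app.ts --outfile app`, the JSON travels inside the executable and stays readable through `Bun.file()` or `node:fs`.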

## Test plan

- [x] Verified all code examples compile and run correctly with `bun
build --compile`
- [x] Tested `Bun.file()` reads embedded files correctly
- [x] Tested `node:fs` APIs (`readFileSync`, `promises.readFile`,
`stat`) work with embedded files
- [x] Tested `Bun.embeddedFiles` returns correct blob array
- [x] Tested `--asset-naming` flag removes content hashes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-12-07 18:43:00 -08:00
Hamidreza Hanafi
9c96937329 fix(transpiler): preserve simplified property values in object spread expressions (#25401)
Fixes #25398

### What does this PR do?

Fixes a bug where object expressions with spread properties and nullish
coalescing to empty objects (e.g., `k?.x ?? {}`) would produce invalid
JavaScript output like `k?.x ?? ` (missing `{}`).

### Root Cause

In `src/ast/SideEffects.zig`, the `simplifyUnusedExpr` function handles
unused object expressions with spread properties. When simplifying
property values:

1. The code creates a mutable copy `prop` from the original `prop_`
2. When a property value is simplified (e.g., `k?.x ?? {}` → `k?.x`), it
updates `prop.value`
3. **Bug:** The code then wrote back `prop_` (the original) instead of
`prop` (the modified copy)

Because `simplifyUnusedExpr` mutates the AST in place when handling
nullish coalescing (setting `bin.right` to empty), the original `prop_`
now contained an expression with `bin.right` as an empty/missing
expression, resulting in invalid output.
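A repro sketch of the pattern that used to emit invalid output (`k` is a stand-in binding):

```ts
declare const k: { x?: Record<string, unknown> } | undefined;

// This object literal is unused, so the simplifier rewrites the spread value;
// before the fix the emitted code dropped the `{}` and became `k?.x ?? `.
({ ...(k?.x ?? {}) });
```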

### How did you verify your code works?
- Added regression test in `test/regression/issue/25398.test.ts`
- Verified the original reproduction case passes
- Verified existing CommonJS tests continue to pass
- Verified test fails with system bun and passes with the fix
2025-12-07 16:33:37 -08:00
robobun
d4eaaf8363 docs: document autoload options for standalone executables (#25385)
## Summary

- Document new default behavior in v1.3.4: `tsconfig.json` and
`package.json` loading is now disabled by default for standalone
executables
- Add documentation for `--compile-autoload-tsconfig` and
`--compile-autoload-package-json` CLI flags
- Document all four JavaScript API options: `autoloadTsconfig`,
`autoloadPackageJson`, `autoloadDotenv`, `autoloadBunfig`
- Note that `.env` and `bunfig.toml` may also be disabled by default in
a future version

## Test plan

- [ ] Review rendered documentation for accuracy and formatting

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-12-07 15:44:06 -08:00
Jarred Sumner
e1aa437694 Bump 2025-12-07 15:42:23 -08:00
robobun
73c3f0004f fix(vm): delete internal Loader property from node:vm global object (#25397) 2025-12-07 13:29:32 -08:00
robobun
b80cb629c6 fix(docs): correct mock documentation examples (#25384)
## Summary

- Fix `mock.mock.args` to `mock.mock.calls` in mock-functions.mdx (the
`.args` property doesn't exist)
- Fix mock.restore example to use module methods instead of spy
functions (calling spy functions after restore returns `undefined`)
- Add missing `vi` import in Vitest compatibility example
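For reference, a small sketch of the corrected property:

```ts
import { mock } from "bun:test";

const add = mock((a: number, b: number) => a + b);
add(1, 2);
add(3, 4);

// Call arguments live on `.mock.calls`; there is no `.mock.args`.
console.log(add.mock.calls); // [[1, 2], [3, 4]]
```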

## Test plan

- [x] Verified each code block works by running tests against the debug
build

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-06 22:17:51 -08:00
Jarred Sumner
8773f7ab65 Delete TODO.md 2025-12-06 19:58:21 -08:00
Dylan Conway
5eb2145b31 fix(compile): use 8-byte header for embedded section to ensure bytecode alignment (#25377)
## Summary
- Change the size header in embedded Mach-O and PE sections from `u32`
(4 bytes) to `u64` (8 bytes)
- Ensures the data payload starts at an 8-byte aligned offset, which is
required for the bytecode cache

## Test plan
- [x] Test standalone compilation on macOS
- [ ] Test standalone compilation on Windows

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-06 16:37:09 -08:00
Jarred Sumner
cde167cacd Revert "Add Tanstack Start to bun init (#24648)"
Adding a 260 KB bun header image is not a good use of binary size

This reverts commit 830fd9b0ae.
2025-12-05 18:32:51 -08:00
robobun
6ce419d3f8 fix(napi): napi_typeof returns napi_object for String objects (#25365)
## Summary

- Fix `napi_typeof` to return `napi_object` for boxed String objects
(`new String("hello")`) instead of incorrectly returning `napi_string`
- Add regression test for boxed primitive objects (String, Number,
Boolean)

The issue was that `StringObjectType` and `DerivedStringObjectType` JSC
cell types were falling through to return `napi_string`, but these
represent object wrappers around strings, not primitive strings.

## Test plan

- [x] `bun bd test test/napi/napi.test.ts -t "napi_typeof"` passes
- [x] Test fails with `USE_SYSTEM_BUN=1` (confirming the bug exists in
released version)

Fixes #25351

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-05 18:27:06 -08:00
Alistair Smith
05508a627d Reapply "use event.message when no event.error in HMR during event" (#25360)
This reverts commit b4c8379447.

### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-05 17:38:56 -08:00
robobun
23383b32b0 feat(compile): add --compile-autoload-tsconfig and --compile-autoload-package-json flags (#25340)
## Summary

By default, standalone executables no longer load `tsconfig.json` and
`package.json` at runtime. This improves startup performance and
prevents unexpected behavior from config files in the runtime
environment.

- Added `--compile-autoload-tsconfig` / `--no-compile-autoload-tsconfig`
CLI flags (default: false)
- Added `--compile-autoload-package-json` /
`--no-compile-autoload-package-json` CLI flags (default: false)
- Added `autoloadTsconfig` and `autoloadPackageJson` options to the
`Bun.build()` compile config
- Flags are stored in `StandaloneModuleGraph.Flags` and applied at
runtime boot

This follows the same pattern as the existing
`--compile-autoload-dotenv` and `--compile-autoload-bunfig` flags.
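A hedged sketch of the JS API shape (the option names follow this PR; the rest of the compile object is illustrative):

```ts
await Bun.build({
  entrypoints: ["./cli.ts"],
  compile: {
    outfile: "./cli",
    // opt back in to loading these config files at runtime (default: false)
    autoloadTsconfig: true,
    autoloadPackageJson: true,
  },
});
```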

## Test plan

- [x] Added tests in `test/bundler/bundler_compile_autoload.test.ts`
- [x] Verified standalone executables work correctly with runtime config
files that differ from compile-time configs
- [x] Verified the new CLI flags are properly parsed and applied
- [x] Verified the JS API options work correctly

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-12-05 14:43:53 -08:00
eroderust
0d5a7c36ed chore: remove duplicate words in comment (#25347) 2025-12-05 11:19:47 -08:00
Alistair Smith
b4c8379447 Revert "use event.message when no event.error in HMR during event"
This reverts commit 438aaf9e95.
2025-12-05 11:16:52 -08:00
Alistair Smith
438aaf9e95 use event.message when no event.error in HMR during event 2025-12-05 11:14:45 -08:00
robobun
4d60b6f69d docs: clarify SQLite embed example requires existing database (#25329)
Co-authored-by: Alistair Smith <hi@alistair.sh>
2025-12-03 22:02:39 -08:00
pfg
e9e93244cb remove CMakeCache before building (#24860)
So it doesn't cache flags that are passed to the build
2025-12-01 22:02:46 -08:00
pfg
800a937cc2 Add fake timers for bun:test (#23764)
Fixes ENG-21288

TODO: Test with `@testing-library/react` `waitFor`

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-01 21:59:11 -08:00
Lydia Hallie
830fd9b0ae Add Tanstack Start to bun init (#24648)
Co-authored-by: Alistair Smith <hi@alistair.sh>
2025-12-01 21:05:47 -08:00
Meghan Denny
fe0aba79f4 test: add regression tests for building docker containers (#25210) 2025-12-01 20:20:06 -08:00
sdm2345
a4aaec5b2f Fix http.Agent connection pool not reusing connections (#24351)
This fixes a critical bug where `http.Agent` with `keepAlive: true` was
not reusing connections, causing a 66% performance degradation compared
to Node.js. Every request was establishing a new TCP/TLS connection
instead of reusing existing ones.

**Root Cause:**

Three independent bugs were causing the connection pool to fail:

1. **TypeScript layer** (`src/js/node/_http_client.ts:271`)
   - Reading wrong property: `keepalive` instead of `keepAlive`
   - User's `keepAlive: true` setting was being ignored

2. **Request header handling** (`src/http.zig:591`)
   - Only handled `Connection: close`, ignored `Connection: keep-alive`
   - Missing explicit flag update for keep-alive header

3. **Response header handling** (`src/http.zig:2240`)
   - Used compile-time function `eqlComptime` at runtime (always failed)
   - Inverted logic: disabled pool when NOT "keep-alive"
   - Ignored case-sensitivity (should use `eqlIgnoreCase` per RFC 7230)

**Performance Impact:**

- **Before**: All requests ~940ms, stddev 33ms (0% improvement) 
- **After**: First request ~930ms, subsequent ~320ms (65.9% improvement)

- Performance now matches Node.js (65.9% vs 66.5% improvement)
- QPS increased from 4.2 to 12.2 req/s (190% improvement)

**Files Changed:**
- `src/js/node/_http_client.ts` - Fix property name (1 line)
- `src/http.zig` - Fix request/response header handling (5 lines)

Fixes #12053

### What does this PR do?

This PR fixes the HTTP connection pool by correcting three bugs:

1. **Fixes TypeScript property name**: Changes `this[kAgent]?.keepalive`
to `this[kAgent]?.keepAlive` to properly read the user's keepAlive
setting from http.Agent

2. **Adds keep-alive request header handling**: Explicitly sets
`disable_keepalive = false` when receiving `Connection: keep-alive`
header

3. **Fixes response header parsing**: 
- Replaces compile-time `strings.eqlComptime()` with runtime
`std.ascii.eqlIgnoreCase()`
- Corrects inverted logic to properly enable connection pool on
`Connection: keep-alive`
   - Makes header comparison case-insensitive per RFC 7230

All three bugs must be fixed together - any single bug would cause the
connection pool to fail.

### How did you verify your code works?

**Test 1: Minimal reproduction with 10 sequential HTTPS requests**
```typescript
const agent = new https.Agent({ keepAlive: true });
// Make 10 requests to https://api.example.com
```
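A fuller version of that repro, as a sketch (the URL is a placeholder):

```ts
import https from "node:https";

const agent = new https.Agent({ keepAlive: true });

const get = (url: string) =>
  new Promise<number>((resolve, reject) => {
    const start = Date.now();
    https
      .get(url, { agent }, res => {
        res.resume();
        res.on("end", () => resolve(Date.now() - start));
      })
      .on("error", reject);
  });

for (let i = 0; i < 10; i++) {
  // With the fix, every request after the first reuses the socket and
  // skips the TCP/TLS handshake.
  console.log(`request ${i}: ${await get("https://example.com/")}ms`);
}
```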

Results:
- First request: 930ms (cold start with TCP/TLS handshake)
- Subsequent requests: ~320ms average (connection reused)
- **Improvement: 65.9%** (matches Node.js 66.5%)
- Verified across 3 repeated test runs for stability

**Test 2: Response header validation**
- Confirmed server returns `Connection: Keep-Alive`
- Verified Bun correctly parses and applies the header

**Test 3: Performance comparison**

| Runtime | First Request | Subsequent Avg | Improvement | QPS |
|---------|--------------|----------------|-------------|-----|
| Node.js v20.18.0 | 938ms | 314ms | **66.5%** | 12.2 |
| Bun v1.3.1 (broken) | 935ms | 942ms | -0.7%  | 4.2 |
| Bun v1.3.2 (fixed) | 930ms | 317ms | **65.9%**  | 12.2 |

Bun now performs identically to Node.js, confirming the connection pool
works correctly.
2025-12-01 17:31:15 -08:00
Meghan Denny
24bc8aa416 ci: remove ubuntu 24 (#25288)
redundant with 25
2025-12-01 17:01:14 -08:00
robobun
2ab6efeea3 fix(ffi): restore CString constructor functionality (#25257)
## Summary
- Fix regression where `new Bun.FFI.CString(ptr)` throws "function is
not a constructor"
- Pass the same function as both call and constructor callbacks for
CString

## Root Cause
PR #24910 replaced `jsc.createCallback` with `jsc.JSFunction.create` for
all FFI functions. However, `JSFunction.create` doesn't allow
constructor calls by default (it uses `callHostFunctionAsConstructor`
which throws). The old `createCallback` used `JSFFIFunction` which
allowed the same function to be called with `new`.

## Fix
Pass the same function as both the `implementation` and `constructor`
option to `JSFunction.create` for CString specifically. This allows `new
CString(ptr)` to work while keeping the refactoring from #24910.

Additionally, the `bun:ffi` module now replaces `Bun.FFI.CString` with
the proper JS CString class after loading, so users get the full class
with `.ptr`, `.byteOffset`, etc. properties.
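A small sketch of the restored constructor path (the buffer stands in for memory returned by a native library):

```ts
import { CString, ptr } from "bun:ffi";

const bytes = new TextEncoder().encode("hello\0");
const address = ptr(bytes);

// Previously threw "function is not a constructor"; now yields the full class.
const str = new CString(address);
console.log(str.toString()); // "hello"
```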

## Test plan
- [x] Added regression test `test/regression/issue/25231.test.ts`
- [x] Test fails with `USE_SYSTEM_BUN=1` (v1.3.3), passes with fix
- [x] Verified reproduction case from issue works

Fixes #25231

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-01 15:47:27 -08:00
Amdadul Haq
6745bdaa85 Add protocol property to serve.d.ts (#25267) 2025-12-01 13:43:14 -08:00
Lydia Hallie
dce7a02f4d Docs: Minor fixes and improvements (#25284)
This PR addresses several issues opened for the docs:

- Add callout for SQLite caching behavior between prepare() and query()
- Fix SQLite types and fix deprecated exec to run
- Fix Secrets API example
- Update SolidStart guide
- Add bun upgrade guide
- Prefer `process.versions.bun` over `typeof Bun` for detection
- Document complete `bunx` flags
- Improve Nitro preset documentation for Nuxt

Fixes #23165, #24424, #24294, #25175, #18433, #16804, #22967, #22527,
#10560, #14744
2025-12-01 13:32:08 -08:00
Michael H
9c420c9eff fix production build for vscode extension (#25274) 2025-12-01 12:59:27 -08:00
github-actions[bot]
9c2ca4b8fd deps: update sqlite to 3.51.100 (#25243)
## What does this PR do?

Updates SQLite to version 3.51.100

Compare: https://sqlite.org/src/vdiff?from=3.51.0&to=3.51.100

Auto-updated by [this
workflow](https://github.com/oven-sh/bun/actions/workflows/update-sqlite3.yml)

Co-authored-by: Jarred-Sumner <709451+Jarred-Sumner@users.noreply.github.com>
2025-12-01 11:58:34 -08:00
robobun
e624f1e571 fix(jest): handle null SourceOrigin in jest.mock() to prevent crash (#25281)
## Summary
- Added null check for `sourceOrigin` before accessing its URL in
`jest.mock()`
- When `callerSourceOrigin()` returns null (e.g., when called with
invalid arguments), the code now safely returns early instead of
crashing

## Test plan
- [x] Added regression test `test/regression/issue/ENG-24434.test.ts`
- [x] `bun bd test test/regression/issue/ENG-24434.test.ts` passes

Fixes ENG-24434

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-01 11:54:03 -08:00
robobun
27381063b6 fix(windows): use GetConsoleCP() instead of GetConsoleOutputCP() for input codepage (#25252)
## Summary

- Fix typo where `GetConsoleOutputCP()` was called twice instead of
calling `GetConsoleCP()` for the input codepage
- Add missing `GetConsoleCP()` extern declaration in windows.zig

The code was saving the output codepage twice, meaning the input
codepage was never properly saved and thus couldn't be correctly
restored.

## Note

This fix corrects a bug in the codepage save/restore logic, but **may
not fully resolve the garbled text issue** in #25151. The garbled text
problem occurs when `bunx` (without `--bun`) runs a package via Node.js,
and that package tries to spawn `bun`. The error message from cmd.exe
gets garbled on non-English Windows systems.

Further investigation may be needed to determine if additional codepage
handling is required when spawning processes.

Related to #25151

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-11-30 23:11:33 -08:00
Michael H
9ca8de6eb9 vscode test runner add the new test functions to static analysis (#25256) 2025-11-30 17:31:17 -08:00
robobun
fdcfac6a75 fix(node:tls): use SSL_session_reused for TLSSocket.isSessionReused (#25258)
## Summary
Fixes `TLSSocket.isSessionReused()` to use BoringSSL's
`SSL_session_reused()` API instead of incorrectly checking if a session
was set.

The previous implementation returned `!!this[ksession]` which would
return `true` if `setSession()` was called, even if the session wasn't
actually reused by the SSL layer. This fix correctly uses the native SSL
API like Node.js does.
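A minimal sketch of the corrected semantics (host is a placeholder):

```ts
import tls from "node:tls";

const socket = tls.connect({ host: "example.com", port: 443 }, () => {
  // Mirrors SSL_session_reused(): false for a fresh handshake, true only
  // when the TLS layer actually resumed a session.
  console.log(socket.isSessionReused());
  socket.end();
});
```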

## Changes
- Added native `isSessionReused` function in Zig that calls
`SSL_session_reused()`
- Updated `TLSSocket.prototype.isSessionReused` to use the native
implementation
- Added regression tests

## Test plan
- [x] `bun bd test test/regression/issue/25190.test.ts` passes
- [x] `bun bd test test/js/node/tls/node-tls-connect.test.ts` passes

Fixes #25190

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-30 17:00:25 -08:00
robobun
ce1981c525 fix(node:assert): handle Number and Boolean wrappers in deepStrictEqual (#25201)
## Summary
- Fixes `assert.deepStrictEqual()` to properly compare Number and
Boolean wrapper objects
- Previously, `new Number(1)` and `new Number(2)` were incorrectly
considered equal because they have no enumerable properties
- Now correctly extracts and compares internal values using
`JSC::sameValue()`, then falls through to check own properties

## Test plan
- [x] Run `bun bd test test/regression/issue/24045.test.ts` - all 6
tests pass
- [x] Verify tests fail with system Bun (`USE_SYSTEM_BUN=1`) to confirm
fix validity
- [x] Verified behavior matches Node.js exactly (see table below)

## Node.js Compatibility

| Test Case | Node.js | Bun |
|-----------|---------|-----|
| Different Number values (`new Number(1)` vs `new Number(2)`) | throws | throws |
| Same Number values (`new Number(1)` vs `new Number(1)`) | equal | equal |
| 0 vs -0 (`new Number(0)` vs `new Number(-0)`) | throws | throws |
| NaN equals NaN (`new Number(NaN)` vs `new Number(NaN)`) | equal | equal |
| Different Boolean values (`new Boolean(true)` vs `new Boolean(false)`) | throws | throws |
| Same Boolean values | equal | equal |
| Number wrapper vs primitive (`new Number(1)` vs `1`) | throws | throws |
| Number vs Boolean wrapper | throws | throws |
| Same value, different own properties | throws | throws |
| Same value, same own properties | equal | equal |
| Different own property values | throws | throws |

## Example

Before (bug):
```javascript
assert.deepStrictEqual(new Number(1), new Number(2)); // passes incorrectly
```

After (fixed):
```javascript
assert.deepStrictEqual(new Number(1), new Number(2)); // throws AssertionError
```

Closes #24045

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-11-30 16:58:58 -08:00
Dylan Conway
cc3fc5a1d3 fix ENG-24015 (#25222)
### What does this PR do?
Ensures `ptr` is either a number or heap big int before converting to a
number.

also fixes ENG-24039
### How did you verify your code works?
Added a test
2025-11-29 19:13:32 -08:00
Dylan Conway
d83e0eb1f1 fix ENG-24017 (#25224)
### What does this PR do?
Fixes checking for exceptions when creating empty or used readable
streams

also fixes ENG-24038
### How did you verify your code works?
Added a test for creating empty streams
2025-11-29 19:13:06 -08:00
Dylan Conway
72b9525507 update bunfig telemetry docs (#25237)
### What does this PR do?

### How did you verify your code works?
2025-11-29 19:12:18 -08:00
robobun
0f7494569e fix(console): implement %j format specifier for JSON output (#25195)
## Summary
- Implements the `%j` format specifier for `console.log` and related
console methods
- `%j` outputs the JSON stringified representation of the value
- Previously, `%j` was not recognized and was left as literal text in
the output

## Test plan
- [x] Run `bun bd test test/regression/issue/24234.test.ts` - all 5
tests pass
- [x] Verify tests fail with system Bun (`USE_SYSTEM_BUN=1`) to confirm
fix validity
- [x] Manual verification: `console.log('%j', {foo: 'bar'})` outputs
`{"foo":"bar"}`

## Example

Before (bug):
```
$ bun -e "console.log('%j %s', {foo: 'bar'}, 'hello')"
%j [object Object] hello
```

After (fixed):
```
$ bun -e "console.log('%j %s', {foo: 'bar'}, 'hello')"
{"foo":"bar"} hello
```

Closes #24234

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-28 22:57:55 -08:00
Meghan Denny
9fd6b54c10 ci: fix windows git dependencies tests (#25213) 2025-11-28 22:56:54 -08:00
robobun
19acc4dcac fix(buffer): handle string allocation failures in encoding operations (#25214)
## Summary
- Add proper bounds checking for encoding operations that produce larger
output than input
- Handle allocation failures gracefully by returning appropriate errors
- Add defensive checks in string initialization functions

## Test plan
- Added test case for encoding operations with large buffers
- Verified existing buffer tests still pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-28 22:56:28 -08:00
Meghan Denny
56da7c4fd9 zig: address a VM todo (#23678) 2025-11-28 19:38:26 -08:00
Meghan Denny
5bdb8ec0cb all: update to debian 13 (#24055) [publish images] 2025-11-28 15:01:40 -08:00
Meghan Denny
4cf9b794c9 ci: update buildkite agent to v3.114.0 (#25127) [publish images] 2025-11-28 15:01:20 -08:00
Meghan Denny
998ec54da9 test: fix spacing in sql.test.ts (#24691) 2025-11-28 14:40:58 -08:00
Jarred Sumner
0305f3d4d2 feat(url): implement URLPattern API (#25168)
## Summary

Implements the [URLPattern Web
API](https://developer.mozilla.org/en-US/docs/Web/API/URLPattern) based
on WebKit's implementation. URLPattern provides declarative pattern
matching for URLs, similar to how regular expressions work for strings.

### Features

- **Constructor**: Create patterns from strings or `URLPatternInit`
dictionaries
- **`test()`**: Check if a URL matches the pattern (returns boolean)
- **`exec()`**: Extract matched groups from a URL (returns
`URLPatternResult` or null)
- **Pattern properties**: `protocol`, `username`, `password`,
`hostname`, `port`, `pathname`, `search`, `hash`
- **`hasRegExpGroups`**: Detect if the pattern uses custom regular
expressions

### Example Usage

```js
// Match URLs with a user ID parameter
const pattern = new URLPattern({ pathname: '/users/:id' });

pattern.test('https://example.com/users/123'); // true
pattern.test('https://example.com/posts/456'); // false

const result = pattern.exec('https://example.com/users/123');
console.log(result.pathname.groups.id); // "123"

// Wildcard matching
const filesPattern = new URLPattern({ pathname: '/files/*' });
const match = filesPattern.exec('https://example.com/files/image.png');
console.log(match.pathname.groups[0]); // "image.png"
```

## Implementation Notes

- Adapted from WebKit's URLPattern implementation
- Modified JS bindings to work with Bun's infrastructure (simpler
`convertDictionary` patterns, WTF::Variant handling)
- Added IsoSubspaces for proper GC integration

## Test Plan

- [x] 408 tests from Web Platform Tests pass
- [x] Tests fail with system Bun (URLPattern not defined), pass with
debug build
- [x] Manual testing of basic functionality

Fixes https://github.com/oven-sh/bun/issues/2286

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-28 00:04:30 -08:00
Henrique Fonseca
1006a4fac2 Fix incorrect file name in React component test example 🌟 (#25167)
# What does this PR do?

Nyaa~ This PR fixes a small mistake in the documentation where the code
block for a React component test example was using the wrong filename!
(;ω;)💦

It was previously labeled as `matchers.d.ts`, but it should be something
like `myComponent.test.tsx` to properly reflect a test file for a React
component using `@testing-library/react`. 🧁

This makes the example clearer and more accurate for developers using
Bun to test their React components~! 💻🌸💕

# How did you verify your code works?

It's just docs, one single line 🥺

Pwease review and merge it when you can, senpai~~! UwU 🌈🫧
2025-11-27 23:15:34 -08:00
Michael H
c7f7d9bb82 run fmt (#25148)
prettier released a new update which seems to have changed a few
formatting rules
2025-11-28 17:51:45 +11:00
robobun
37bce389a0 docs: document inlining process.env.* values in static HTML bundling (#25084)
## Summary

- Add documentation for the `env` option that inlines `process.env.*`
values in frontend code when bundling HTML files
- Document runtime configuration via `bunfig.toml` `[serve.static]`
section for `bun ./index.html`
- Document production build configuration via CLI (`--env=PUBLIC_*`) and
`Bun.build` API (`env: "PUBLIC_*"`)
- Explain prefix filtering to avoid exposing sensitive environment
variables
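A short sketch of the production-build configuration described above:

```ts
await Bun.build({
  entrypoints: ["./index.html"],
  outdir: "./dist",
  // Only variables whose names match the prefix are inlined into client code.
  env: "PUBLIC_*",
});
```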

## Test plan

- [x] Verify documentation renders correctly in local preview
- [x] Cross-reference with existing `env` documentation in
bundler/index.mdx

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Michael H <git@riskymh.dev>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-28 17:51:11 +11:00
robobun
bab583497c fix(cli): correct --dry-run help text for bun publish (#25137)
## Summary
- Fix `bun publish --help` showing incorrect `--dry-run` description
("Don't install anything" → "Perform a dry run without making changes")
- The `--dry-run` flag is in a shared params array used by multiple
commands, so the new generic message works for all of them

Fixes #24806

## Test plan
- [x] Verify `bun publish --help` shows "Perform a dry run without
making changes" for --dry-run
- [x] Regression test added that validates the correct help text is
shown
- [x] Test passes with debug build, fails with system bun (validating it
tests the right thing)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-27 18:12:07 -08:00
robobun
a83fceafc7 fix(http2): return server from setTimeout for method chaining (#25138)
## Summary
- Make `Http2Server.setTimeout()` and `Http2SecureServer.setTimeout()`
return `this` to enable method chaining
- Matches Node.js behavior where `server.setTimeout(1000).listen()`
works

Fixes #24924
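A minimal sketch of the chaining this enables:

```ts
import http2 from "node:http2";

const server = http2.createServer();

// setTimeout() now returns the server, so the call can be chained.
server.setTimeout(1000).listen(0, () => {
  console.log("listening with a 1s socket timeout");
  server.close();
});
```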

## Test plan
- [x] Test that `Http2Server.setTimeout()` returns server instance
- [x] Test that `Http2SecureServer.setTimeout()` returns server instance
- [x] Test method chaining works (e.g.,
`server.setTimeout(1000).close()`)
- [x] Tests pass with debug build, fail with system bun

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-27 16:46:15 -08:00
robobun
ef8eef3df8 fix(http): stricter validation in chunked encoding parser (#25159)
## Summary
- Adds stricter validation for chunk boundaries in the HTTP chunked
transfer encoding parser
- Ensures conformance with RFC 9112 requirements for chunk formatting
- Adds additional test coverage for chunked encoding edge cases

## Test plan
- Added new tests in `test/js/bun/http/request-smuggling.test.ts`
- All existing HTTP tests pass
- `bun bd test test/js/bun/http/request-smuggling.test.ts` passes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-27 16:29:35 -08:00
robobun
69b571da41 Delete claude.yml workflow (#25157) 2025-11-27 12:26:50 -08:00
robobun
908ab9ce30 feat(fetch): add proxy object format with headers support (#25090)
## Summary

- Extends `fetch()` proxy option to accept an object format: `proxy: {
url: string, headers?: Headers }`
- Allows sending custom headers to the proxy server (useful for proxy
authentication, custom routing headers, etc.)
- Headers are sent in CONNECT requests (for HTTPS targets) and direct
proxy requests (for HTTP targets)
- User-provided `Proxy-Authorization` header overrides auto-generated
credentials from URL

## Usage

```typescript
// Old format (still works)
fetch(url, { proxy: "http://proxy.example.com:8080" });

// New object format with headers
fetch(url, {
  proxy: {
    url: "http://proxy.example.com:8080",
    headers: {
      "Proxy-Authorization": "Bearer token",
      "X-Custom-Proxy-Header": "value"
    }
  }
});
```

## Test plan

- [x] Test proxy object with url string works same as string proxy
- [x] Test proxy object with headers sends headers to proxy (HTTP
target)
- [x] Test proxy object with headers sends headers in CONNECT request
(HTTPS target)
- [x] Test proxy object with Headers instance
- [x] Test proxy object with empty headers
- [x] Test proxy object with undefined headers
- [x] Test user-provided Proxy-Authorization overrides URL credentials
- [x] All existing proxy tests pass (25 total)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-26 15:11:45 -08:00
robobun
43c46b1f77 fix(FormData): throw error instead of assertion failure on very large input (#25006)
## Summary

- Fix crash in `FormData.from()` when called with very large ArrayBuffer
input
- Add length check in C++ `toString` function against both Bun's
synthetic limit and WebKit's `String::MaxLength`
- For UTF-8 tagged strings, use simdutf to calculate actual UTF-16
length only when byte length exceeds the limit

## Root Cause

When `FormData.from()` was called with a very large ArrayBuffer (e.g.,
`new Uint32Array(913148244)` = ~3.6GB), the code would crash with:

```
ASSERTION FAILED: data.size() <= MaxLength
vendor/WebKit/Source/WTF/wtf/text/StringImpl.h(886)
```

The `toString()` function in `helpers.h` was only checking against
`Bun__stringSyntheticAllocationLimit` (which defaults to ~4GB), but not
against WebKit's `String::MaxLength` (INT32_MAX, ~2GB). When the input
exceeded `String::MaxLength`, `createWithoutCopying()` would fail with
an assertion.

## Changes

1. **helpers.h**: Added `|| str.len > WTF::String::MaxLength` checks to
all three code paths in `toString()`:
- UTF-8 tagged pointer path (with simdutf length calculation only when
needed)
   - External pointer path
   - Non-copying creation path

2. **url.zig**: Reverted the incorrect Zig-side check (UTF-8 byte length
!= UTF-16 character length)

## Test plan

- [x] Added test that verifies FormData.from with oversized input
doesn't crash
- [x] Verified original crash case now returns empty FormData instead of
crashing:
  ```js
  const v3 = new Uint32Array(913148244);
  FormData.from(v3); // No longer crashes
  ```

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-11-26 13:46:08 -08:00
robobun
a0c5f3dc69 fix(mmap): use coerceToInt64 for offset/size to prevent assertion failure (#25101)
## Summary

- Fix assertion failure in `Bun.mmap` when `offset` or `size` options
are non-numeric values
- Add validation to reject negative `offset`/`size` with clear error
messages

Minimal reproduction: `Bun.mmap("", { offset: null });`

## Root Cause

`Bun.mmap` was calling `toInt64()` directly on the `offset` and `size`
options without validating they are numbers first. `toInt64()` has an
assertion that the value must be a number or BigInt, which fails when
non-numeric values like `null` or functions are passed.

## Test plan

- [x] Added tests for negative offset/size rejection
- [x] Added tests for non-number inputs (null, undefined)
- [x] `bun bd test test/js/bun/util/mmap.test.js` passes

Closes ENG-22413

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-26 13:37:41 -08:00
robobun
5965ff18ea fix(test): fix assertion failure in expect.extend with non-JSFunction callables (#25099)
## Summary

- Fix debug assertion failure in `JSWrappingFunction` when
`expect.extend()` is called with objects containing non-`JSFunction`
callables
- The crash occurred because `jsCast<JSFunction*>` was used, which
asserts the value inherits from `JSFunction`, but callable class
constructors (like `Expect`) inherit from `InternalFunction` instead

## Changes

- Change `JSWrappingFunction` to store `JSObject*` instead of
`JSFunction*`
- Use `jsDynamicCast` instead of `jsCast` in `getWrappedFunction`
- Use `getObject()` instead of `jsCast` in `create()`

## Reproduction

```js
const jest = Bun.jest();
jest.expect.extend(jest);
```

Before fix (debug build):
```
ASSERTION FAILED: !from || from->JSCell::inherits(std::remove_pointer<To>::type::info())
JSCast.h(40) : To JSC::jsCast(From *) [To = JSC::JSFunction *, From = JSC::JSCell]
```

After fix: Properly throws `TypeError: expect.extend: 'jest' is not a
valid matcher`

## Test plan

- [x] Added regression test
`test/regression/issue/fuzzer-ENG-22942.test.ts`
- [x] Existing `expect-extend.test.js` tests pass (27 tests)
- [x] Build succeeds

Fixes ENG-22942

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-26 13:34:02 -08:00
robobun
44f2328111 fix(fs.access): handle Windows device paths correctly (#25023)
## Summary

Fixes #23292

`fs.access()` and `fs.accessSync()` threw EUNKNOWN (-134) when checking
named pipes on Windows (paths like `\\.\pipe\name`), but Node.js worked
fine.

**Repro:**
```ts
// Server creates pipe at \\.\pipe\bun-test
import net from 'net';
const server = net.createServer();
server.listen('\\\\.\\pipe\\bun-test');

// Client tries to check if pipe exists
import fs from 'fs';
fs.accessSync('\\\\.\\pipe\\bun-test', fs.constants.F_OK);
// Error: EUNKNOWN: unknown error, access '\\.\pipe\bun-test'
```

## Root Cause

The `osPathKernel32` function normalizes paths before passing to Windows
APIs. The normalization logic treats a single `.` as a "current
directory" component and removes it, so `\.\pipe\name` incorrectly
became `\pipe\name` - an invalid path.

## Solution

Detect Windows device paths (starting with `\\.\` or `\\?\`) and skip
normalization for these special paths, preserving the device prefix.

## Test Plan

- [x] Added regression test `test/regression/issue/23292.test.ts`
- [x] Test fails with system bun (v1.3.3): 3 failures (EUNKNOWN)
- [x] Test passes with fix: 4 pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-26 00:07:51 -08:00
robobun
85e0a723f3 Add analytics tracking for CPU profiling and heap snapshots (#25053)
## Summary

- Add `cpu_profile` and `heap_snapshot` counters to `Analytics.Features`
- Export `heap_snapshot` to C++ as `Bun__Feature__heap_snapshot`
- Increment `cpu_profile` when `--cpu-prof` flag is used
- Increment `heap_snapshot` in all heap snapshot creation locations:
  - `Bun.generateHeapSnapshot()`
  - `bun:jsc` `generateHeapSnapshotForDebugging()`
  - `console.takeHeapSnapshot()`
  - Internal `JSC__JSGlobalObject__generateHeapSnapshot()`

## Test plan

- [x] Build succeeds
- [x] Heap snapshot generation works
- [x] CPU profiling works with `--cpu-prof`
- [x] Existing tests pass: `test/js/bun/util/v8-heap-snapshot.test.ts`
- [x] Existing tests pass: `test/cli/run/cpu-prof.test.ts`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-26 00:02:43 -08:00
robobun
d50c5a385f Add analytics tracking for HTTP client proxy usage (#25056)
## Summary
- Added `http_client_proxy` counter to `analytics.Features` struct
- Incremented counter in `ProxyTunnel.onOpen()` when proxy tunnel
connection opens successfully

This allows tracking HTTP client proxy usage in analytics/crash reports
alongside other features like `fetch`, `WebSocket`, `http_server`, etc.

## Test plan
- [x] Build completes successfully (`bun bd`)
- [x] Existing proxy tests pass (`bun bd test
test/js/bun/http/proxy.test.ts`)
- [x] Counter is properly integrated into the analytics framework

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-26 00:02:26 -08:00
Jarred Sumner
bb7641b3b0 fix: use GC-safe operations when populating error stack traces (#25096)
## Summary
- Adds `FinalizerSafety` enum to control whether `functionName` uses
GC-safe operations when iterating over stack traces
- Uses the enum in `populateStackFrameMetadata` to choose between the
richer callee-based path vs the GC-safe path
- Updates call sites in `ZigException.cpp` and
`FormatStackTraceForJS.cpp`

Fixes https://github.com/oven-sh/bun/issues/25094
Fixes https://github.com/oven-sh/bun/issues/20399
Fixes https://github.com/oven-sh/bun/issues/22662

## Test plan
- [ ] Add regression test

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-25 23:55:41 -08:00
Meghan Denny
22f4dfae7c ci: fix release step
don't try to release freebsd yet
2025-11-25 22:27:02 -08:00
Meghan Denny
9fce97bac3 scripts: add freebsd support to bootstrap.sh (#24534) 2025-11-25 17:16:38 -08:00
Dylan Conway
2fb3aa8991 update minimumReleaseAge (#25057)
### What does this PR do?

### How did you verify your code works?
2025-11-25 11:06:24 -08:00
robobun
dc25d66b00 fix(Buffer): improve input validation in *Write methods (#25011)
## Summary
Improve bounds checking logic in Buffer.*Write methods (utf8Write,
base64urlWrite, etc.) to properly handle edge cases with non-numeric
offset and length arguments, matching Node.js behavior.

## Changes
- Handle non-numeric offset by converting to integer (treating invalid
values as 0)
- Clamp length to available buffer space instead of throwing
- Reorder operations to check buffer state after argument conversion

## Node.js Compatibility

This matches Node.js's C++ implementation in `node_buffer.cc`:

**Offset handling via `ParseArrayIndex`**
([node_buffer.cc:211-234](https://github.com/nodejs/node/blob/main/src/node_buffer.cc#L211-L234)):
```cpp
inline MUST_USE_RESULT Maybe<bool> ParseArrayIndex(Environment* env,
                                                   Local<Value> arg,
                                                   size_t def,
                                                   size_t* ret) {
  if (arg->IsUndefined()) {
    *ret = def;
    return Just(true);
  }

  int64_t tmp_i;
  if (!arg->IntegerValue(env->context()).To(&tmp_i))
    return Nothing<bool>();
  // ...
}
```
V8's `IntegerValue` converts non-numeric values (including NaN) to 0.

**Length clamping in `SlowWriteString`**
([node_buffer.cc:1498-1502](https://github.com/nodejs/node/blob/main/src/node_buffer.cc#L1498-L1502)):
```cpp
THROW_AND_RETURN_IF_OOB(ParseArrayIndex(env, args[2], 0, &offset));
THROW_AND_RETURN_IF_OOB(
    ParseArrayIndex(env, args[3], ts_obj_length - offset, &max_length));

max_length = std::min(ts_obj_length - offset, max_length);
```
Node.js clamps `max_length` to available buffer space rather than
throwing.

## Test plan
- Added regression tests for all `*Write` methods verifying proper
handling of edge cases
- Verified behavior matches Node.js
- All 447 buffer tests pass

fixes ENG-21985, fixes ENG-21863, fixes ENG-21751, fixes ENG-21984

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-24 23:34:36 -08:00
robobun
ae29340708 fix: prevent calling class constructors marked with call: false (#25047)
## Summary

Fixes a crash (ENG-22243) where calling class constructors marked with
`call: false` would create invalid instances instead of throwing an
error.

## Root Cause

When a class definition has `call: false` (like `Bun.RedisClient`), the
code generator was still allowing the constructor to be invoked without
`new`. This created invalid instances that caused a buffer overflow
during garbage collection.

## The Fix

Modified `src/codegen/generate-classes.ts` to properly check the `call`
property:
- When `call: false`: throws `TypeError: Class constructor X cannot be
invoked without 'new'`
- When `call: true`: behaves as before, allowing construction without
`new`

## Test Plan

- [x] Added regression test in `test/regression/issue/22243.test.ts`
- [x] Test fails with system bun (has the bug)
- [x] Test passes with fixed build
- [x] Verified `Bun.RedisClient()` now throws proper error
- [x] Verified `new Bun.RedisClient()` still works

## Before

```bash
$ bun -e "Bun.RedisClient()"
# Creates invalid instance, no error
```

## After

```bash
$ bun -e "Bun.RedisClient()"
TypeError: Class constructor RedisClient cannot be invoked without 'new'
```

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-24 23:34:16 -08:00
robobun
123ac9dc2d chore: disable comment-lint and labeled workflows (#25059)
## Summary
- Disable `comment-lint.yml` (C++ linter comment workflow)
- Disable `labeled.yml` (Issue/PR labeled automation workflow)

Workflows are disabled by renaming them to `.yml.disabled`.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-24 21:15:57 -08:00
Marko Vejnovic
48617563b5 ENG-21534: Satisfy aikido (#24880)
### What does this PR do?

- Bumps some packages
- Applies some _best practices_ in certain areas to minimize Aikido noise.

### How did you verify your code works?

CI.

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-24 20:16:03 -08:00
robobun
cc393e43f2 fix(test): use putDirectMayBeIndex in spyOn for indexed property keys (#25020)
## Summary
- Fix `spyOn` crash when using indexed property keys (e.g., `spyOn(arr,
0)`)
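
A hedged example of the call shape that used to crash (the array and handler names are illustrative):

```ts
import { test, expect, spyOn } from "bun:test";

test("spyOn accepts indexed property keys", () => {
  const handlers = [() => "first"];

  // Previously crashed; indexed keys like 0 (or "0") are now stored via putDirectMayBeIndex.
  const spy = spyOn(handlers, 0);

  handlers[0]();
  expect(spy).toHaveBeenCalledTimes(1);
});
```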

## Test plan
- [x] Added tests for `spyOn` with numeric indexed properties
- [x] Added tests for `spyOn` with string indexed properties (e.g.,
`"0"`)
- [x] All existing `spyOn` tests pass
- [x] Full `mock-fn.test.js` test suite passes

Fixes ENG-21973

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-24 19:27:33 -08:00
robobun
3f0681996f fix(indexOfLine): properly coerce non-number offset argument (#25021)
## Summary
- Fix assertion failure when `Bun.indexOfLine` is called with a
non-number offset argument
- Changed from `.to(u32)` to `.coerce(i32, globalThis)` for proper
JavaScript type coercion
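
A hedged sketch of the call shape that used to trip the assertion (buffer contents are illustrative; exact return semantics per Bun's docs):

```ts
const data = Buffer.from("first\nsecond\n");

// Locate the next newline from the start of the buffer.
console.log(Bun.indexOfLine(data));

// A non-number offset is now coerced like any other JavaScript number argument
// instead of hitting an internal assertion.
console.log(Bun.indexOfLine(data, "6" as any));
```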

## Test plan
- [x] Added regression test in `test/js/bun/util/index-of-line.test.ts`
- [x] `bun bd test test/js/bun/util/index-of-line.test.ts` passes

Closes ENG-21997

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-11-24 19:27:14 -08:00
robobun
e14d5593c5 fix(install): use >= instead of > for resolution bounds check (#25028)
## Summary

- Fix off-by-one error in `preprocessUpdateRequests` where the bounds
check used `>` instead of `>=` when validating package IDs from the
resolution buffer
- When `old_resolution == packages.len`, the check `> packages.len`
passes but `resolutions_of_yore[old_resolution]` is out of bounds since
valid indices are `0` to `packages.len-1`
- This causes an internal assertion failure during `bun install` with
update requests

## The Bug

```zig
// BEFORE (buggy) - at lockfile.zig:484 and :522
if (old_resolution > old.packages.len) continue;
const res = resolutions_of_yore[old_resolution];  // OOB when old_resolution == packages.len

// AFTER (fixed)
if (old_resolution >= old.packages.len) continue;
const res = resolutions_of_yore[old_resolution];  // Now safe
```

## Crash Report

From
[bun.report](https://bun.report/1.3.3/wi1274e01cAggkggB+rt/F+pvBiw3rDqul/Doyi4Emzi5Ewj44FuvbgjMog00yDCYKERNEL32.DLLut0LCSntdll.dll4zijBA0eNrzzCtJLcpLzFFILC5OLSrJzM9TSEvMzCktSgUAiSkKPg/view):

```
panic: Internal assertion failure
- lockfile.zig:523: preprocessUpdateRequests
- install_with_manager.zig:605: installWithManager
- updatePackageJSONAndInstall.zig:340
Features: extracted_packages, text_lockfile
```

## Test plan

- [x] `bun run zig:check` passes
- [ ] CI passes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-24 19:22:45 -08:00
taylor.fish
7335cb747b Fix conversions from JSValue to FFI pointer (#25045)
Fixes this issue, where two identical JS numbers could become two
different FFI pointers:

```c
// gcc -fpic -shared -o main.so main.c
#include <stdio.h>

void* getPtr(void) {
    return (void*)123;
}

void printPtr(void* ptr) {
    printf("%zu\n", (size_t)ptr);
}
```

```js
import { dlopen, FFIType } from "bun:ffi";

const lib = dlopen("./main.so", {
  getPtr: { args: [], returns: FFIType.ptr },
  printPtr: { args: [FFIType.ptr], returns: FFIType.void },
});

const ptr = lib.symbols.getPtr();
console.log(`${typeof ptr} ${ptr}`);

const ptr2 = Number(String(ptr));
console.log(`${typeof ptr2} ${ptr2}`);

console.log(`pointers equal? ${ptr === ptr2}`);
lib.symbols.printPtr(ptr);
lib.symbols.printPtr(ptr2);
```

```console
$ bun main.js
number 123
number 123
pointers equal? true
123
18446744073709551615
```

Fixes #20072

(For internal tracking: fixes ENG-22327)
2025-11-24 17:34:39 -08:00
Marko Vejnovic
9c8575f975 bug(#19588): Fix undici bindings overflow (#25046)
### What does this PR do?

- Fixes an overflow that happened in the undici bindings generator.
- Establishes a pattern using `std::tuple` so this doesn't happen again.

Fixes:

-
https://bun-p9.sentry.io/issues/6597708115/?query=createUndiciInternalBinding&referrer=issue-stream
- https://github.com/oven-sh/bun/issues/19588

### How did you verify your code works?

CI.

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-24 17:31:11 -08:00
robobun
0da132ef6d fix(test): skip grpc-js resolver tests that use unavailable domain (#25039)
## Summary

- Skip 2 tests that use `grpctest.kleinsch.com` (domain no longer
exists)
- Fix flaky "should not keep repeating failed resolutions" test

These tests were originally skipped when added in #14286, but were
accidentally un-skipped in #20051. This restores them to match upstream
grpc-node.

## To re-enable these tests in the future

Bun could set up its own DNS TXT record at `*.bun.sh`. According to the
[gRPC A2
spec](https://github.com/grpc/proposal/blob/master/A2-service-configs-in-dns.md):

**DNS Setup needed:**
1. A record: `grpctest.bun.sh` → any valid IP (e.g., `127.0.0.1`)
2. TXT record: `_grpc_config.grpctest.bun.sh` with value:
   ```
   grpc_config=[{"serviceConfig":{"loadBalancingPolicy":"round_robin","methodConfig":[{"name":[{"service":"MyService","method":"Foo"}],"waitForReady":true}]}}]
   ```

Then update the tests to use `grpctest.bun.sh` instead.

## Test plan

- [x] `bun bd test test/js/third_party/grpc-js/test-resolver.test.ts`
passes (20 pass, 3 skip, 1 todo, 0 fail)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-24 10:34:26 -08:00
Jarred Sumner
0d863ba237 Don't use globalThis.takeException when rejecting Promise values (#24986)
### What does this PR do?

We can't use globalThis.takeException() because it throws an out-of-memory
error when we instead need to take the exception.

### How did you verify your code works?
2025-11-23 20:27:15 -08:00
Michael H
f31db64bd4 cross-platform bun bd (#24983)
closes #24969
2025-11-23 15:09:43 -08:00
robobun
ddcec61f59 fix: use >= instead of > for String.max_length() check (#24988)
## Summary

- Fixed boundary check in `String.zig` to use `>=` instead of `>` for
`max_length()` comparisons
- Strings fail when the length is exactly equal to `max_length()`, not
just when exceeding it
- This affects both `createExternal` and
`createExternalGloballyAllocated` functions

## Test plan

- Existing tests should continue to pass
- Strings with length exactly equal to `max_length()` will now be
properly rejected

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-23 01:42:32 -08:00
Dylan Conway
29051f9340 fix(Bun.plugin): return on invalid target error (#24945)
### What does this PR do?

### How did you verify your code works?

Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-11-23 01:41:42 -08:00
robobun
7076fbbe68 fix(glob): fix typo that caused patterns like .*/* to escape cwd boundary (#24939)
## Summary

- Fixed a typo in `makeComponent` that incorrectly identified
2-character patterns starting with `.` (like `.*`) as `..` (DotBack)
patterns
- The condition checked `pattern[component.start] == '.'` twice instead
of checking both characters at positions 0 and 1
- This caused patterns like `.*/*` to be parsed as `../` + `*`, making
the glob walker traverse into parent directories

Fixes #24936
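
A hedged sketch of the pattern in question using `Bun.Glob` (directory contents are illustrative):

```ts
import { Glob } from "bun";

// ".*" was misparsed as "..", so this pattern used to walk into the parent directory.
const glob = new Glob(".*/*");

for (const entry of glob.scanSync({ cwd: "." })) {
  // After the fix, every entry stays inside the current working directory.
  console.log(entry);
}
```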

## Test plan

- [x] Added tests in `test/js/bun/glob/scan.test.ts` that verify
patterns like `.*/*` and `.*/**/*.ts` don't escape the cwd boundary
- [x] Tests fail with system bun (bug reproduced) and pass with the fix
- [x] All existing glob tests pass (169 tests)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-11-23 01:41:17 -08:00
Marko Vejnovic
67be07fca4 Fix fuzzilli_command.zig (#24941)
### What does this PR do?

Needed to fix `fuzzilli_command.zig` to get it to build

### How did you verify your code works?
2025-11-23 00:34:27 -08:00
Dylan Conway
25b91e5c86 Update JSValue.toSliceClone to use JSError (#24949)
### What does this PR do?
Removes a TODO
### How did you verify your code works?

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-23 00:32:38 -08:00
Jarred Sumner
3c60fcda33 Fixes #24711 (#24982)
### What does this PR do?

Calling `.withAsyncContextIfNeeded` on a Promise is both unnecessary and
incorrect

### How did you verify your code works?
2025-11-22 23:50:34 -08:00
robobun
8da699681e test: add security scanner integration tests for minimum-release-age (#24944) 2025-11-22 15:08:12 -08:00
Alistair Smith
7a06dfcb89 fix: collect all dependencies from workspace packages in scanner (#24942)
### What does this PR do?

Fixes #23688

### How did you verify your code works?

Another test
2025-11-21 18:31:45 -08:00
Michael H
9ed53283a4 bump versions to v1.3.3 (#24933) 2025-11-21 14:22:28 -08:00
Michael H
4450d738fa docs: more consistency + minor updates (#24764)
Co-authored-by: RiskyMH <git@riskymh.dev>
2025-11-21 14:06:19 -08:00
robobun
7ec1aa8c95 docs: document new features from v1.3.2 and v1.3.3 (#24932)
## What does this PR do?

Adds missing documentation for features introduced in Bun v1.3.2 and
v1.3.3:

- **Standalone executable config flags**
(`docs/bundler/executables.mdx`): Document
`--no-compile-autoload-dotenv` and `--no-compile-autoload-bunfig` flags
that control automatic config file loading in compiled binaries
- **Test retry/repeats** (`docs/test/writing-tests.mdx`): Document the
`retry` and `repeats` test options for handling flaky tests
- **Disable env file loading**
(`docs/runtime/environment-variables.mdx`): Document `--no-env-file`
flag and `env = false` bunfig option

## How did you verify your code works?

- [x] Verified documentation is accurate against source code
implementation in `src/cli/Arguments.zig`
- [x] Verified features are not already documented elsewhere
- [x] Cross-referenced with v1.3.2 and v1.3.3 release notes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-21 12:45:57 -08:00
Marko Vejnovic
abb1b0c4d7 test(ENG-21524): Fuzzilli Stop-Gap (#24826)
### What does this PR do?

Adds [@mschwarzl's Fuzzilli Support
PR](https://github.com/oven-sh/bun/pull/23862) with the changes
necessary to be able to:

- Run it in CI
- Make no impact on `debug` and `release` mode.

### How did you verify your code works?

---------

Co-authored-by: Martin Schwarzl <mschwarzl@cloudflare.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2025-11-20 23:37:31 -08:00
Dylan Conway
274e01c737 remove jsc.createCallback (#24910)
### What does this PR do?
This was creating `Zig::FFIFunction` when we could instead use a plain
`JSC::JSFunction`
### How did you verify your code works?
Added a test
2025-11-20 20:56:02 -08:00
Conner Phillippi
a0c5edb15b Add new package paths to .aikido configuration 2025-11-20 17:35:16 -08:00
Meghan Denny
5702b39ef1 runtime: implement CompressionStream/DecompressionStream (#24757)
Closes https://github.com/oven-sh/bun/issues/1723
Closes https://github.com/oven-sh/bun/pull/22214
Closes https://github.com/oven-sh/bun/pull/24241

also supports the `"brotli"` and `"zstd"` formats
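
A minimal round-trip sketch using the new streams (`"gzip"` shown; per this PR, `"brotli"` and `"zstd"` follow the same shape):

```ts
const source = new Blob(["hello hello hello hello"]).stream();

// Compress, then immediately decompress, and read the result back as text.
const compressed = source.pipeThrough(new CompressionStream("gzip"));
const restored = compressed.pipeThrough(new DecompressionStream("gzip"));

console.log(await new Response(restored).text()); // "hello hello hello hello"
```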

<img width="1244" height="547" alt="image"
src="https://github.com/user-attachments/assets/aecf4489-29ad-411d-9f6b-3bee50ed1b27"
/>
2025-11-20 17:14:37 -08:00
Dylan Conway
b72ba31441 fix(Blob.prototype.stream): handle undefined chunkSize (#24900)
### What does this PR do?
`blob.stream(undefined)`
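
A minimal sketch of the case being fixed (per this PR, `stream()` takes an optional chunk size):

```ts
const blob = new Blob(["hello"]);

// An explicit undefined chunk size now behaves the same as calling stream() with no argument.
console.log(await new Response(blob.stream(undefined)).text()); // "hello"
```
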
### How did you verify your code works?
Added a test

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-20 17:01:24 -08:00
Meghan Denny
b92d2edcff Rename test-http-chunked-encoding-must be-valid-after-without-flushHeaders.ts to test-http-chunked-encoding-must-be-valid-after-without-flushHeaders.ts 2025-11-20 15:36:31 -08:00
Meghan Denny
a4af0aa4a8 Rename test-http-should-not-emit-or-throw error-when-writing-after-socket.end.ts to test-http-should-not-emit-or-throw-error-when-writing-after-socket.end.ts 2025-11-20 15:36:02 -08:00
Conner Phillippi
28b950e2b0 Add 'bench' to excluded paths in .aikido (#24879)
### What does this PR do?

### How did you verify your code works?
2025-11-19 23:46:30 -08:00
Conner Phillippi
595ad7de93 Add .aikido configuration file to exclude test and scripts directories (#24877) 2025-11-19 23:37:55 -08:00
Alistair Smith
b38ba38a18 types: correct ReadableStream methods, allow Response instance for serve routes under a method (#24872) 2025-11-19 23:08:49 -08:00
Jarred Sumner
788f03454d Show debugger in crash reports (#24871)
### What does this PR do?

Show debugger in crash reports

### How did you verify your code works?
2025-11-19 22:52:01 -08:00
Dylan Conway
0e23375d20 fix ENG-21527 (#24861)
### What does this PR do?
fixes ENG-21527
### How did you verify your code works?
Added a test
2025-11-19 22:44:21 -08:00
Jarred Sumner
d584c86d5b Delete incorrect debug assertion 2025-11-19 22:16:41 -08:00
Dylan Conway
0480d55a67 fix(YAML): handle exponential merge keys (#24729)
### What does this PR do?
Fixes ENG-21490
### How did you verify your code works?
Added a test that would previously fail due to timeout. It also confirms
the parsed result is correct.

---------

Co-authored-by: taylor.fish <contact@taylor.fish>
2025-11-19 21:20:55 -08:00
Jarred Sumner
9189fc4fa1 Fixes #24817 (#24864)
### What does this PR do?

Fixes #24817

### How did you verify your code works?
Test

---------

Co-authored-by: taylor.fish <contact@taylor.fish>
2025-11-19 21:17:51 -08:00
Dylan Conway
b554626662 fix ENG-21528 (#24865)
### What does this PR do?
Makes sure we are creating error messages with an allocator that will
not `deinit` at the end of function scope on error.

fixes ENG-21528
### How did you verify your code works?
Added a test
2025-11-19 20:31:37 -08:00
Dylan Conway
0054506538 fix(windows): close worker libuv loop (#24811)
### What does this PR do?
We need to call `uv_loop_close` in order to remove the threadlocal loop
from a list in libuv so it won't be used later. This explains the crash
reports because they have `workers_terminated` in features.

Fixes #24804
Closes BUN-3NV
Closes ENG-21523
### How did you verify your code works?
Manually. I'm not sure how to write a test yet other than manually
clicking sleep

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Zack Radisic <zack@theradisic.com>
2025-11-19 18:54:41 -08:00
Meghan Denny
1b36d35253 cmake: separate variable for ZIG_COMPILER_SAFE default is unnecessary (#24797) 2025-11-18 17:51:19 -08:00
Ciro Spaciari
9a9473d4b9 fix(createEmptyObject) fix some createEmptyObject values part 2 (#24827)
### What does this PR do?
We must use the right number of properties, or otherwise set it to 0

### How did you verify your code works?
Read the code to check the number of properties + CI
2025-11-18 14:02:21 -08:00
Ciro Spaciari
2d954995cd Fix Response wrapper and us_internal_disable_sweep_timer (#24623)
### What does this PR do?
Fix Response wrapper and us_internal_disable_sweep_timer
Fixes
https://linear.app/oven/issue/ENG-21510/panic-attempt-to-use-null-value-in-responsezig
### How did you verify your code works?
CI
2025-11-18 14:00:53 -08:00
Meghan Denny
11b20aa508 runtime: fix small leak in Bun.listen (#24799)
pulled out of https://github.com/oven-sh/bun/pull/21663

Co-authored-by: Ciro Spaciari <ciro.spaciari@gmail.com>
2025-11-18 12:00:56 -08:00
Meghan Denny
cac8e62635 zig: switch exhaustively on os + arch more (#24796) 2025-11-18 10:49:21 -08:00
Meghan Denny
f03957474e runtime: fix small leak in Bun.spawn (#24798)
pulled out of https://github.com/oven-sh/bun/pull/21663
2025-11-18 09:57:19 -05:00
Meghan Denny
ab80bbe4c2 runtime: fix small leak in node:net.SocketAddress (#24800)
pulled out of https://github.com/oven-sh/bun/pull/21663
2025-11-18 09:55:45 -05:00
Meghan Denny
af498a0483 runtime: fix small leak in Blob deinit (#24802)
pulled out of https://github.com/oven-sh/bun/pull/21663
2025-11-18 09:55:15 -05:00
robobun
7c485177ee Add compile-time flags to control .env and bunfig.toml autoloading (#24790)
## Summary

This PR adds two new compile options to control whether standalone
executables autoload `.env` files and `bunfig.toml` configuration files.

## New Options

### JavaScript API
```js
await Bun.build({
  entrypoints: ["./entry.ts"],
  compile: {
    autoloadDotenv: false,  // Disable .env loading (default: true)
    autoloadBunfig: false,  // Disable bunfig.toml loading (default: true)
  }
});
```

### CLI Flags
```bash
bun build --compile --no-compile-autoload-dotenv entry.ts
bun build --compile --no-compile-autoload-bunfig entry.ts
bun build --compile --compile-autoload-dotenv entry.ts
bun build --compile --compile-autoload-bunfig entry.ts
```

## Implementation

The flags are stored in a new `Flags` packed struct in
`StandaloneModuleGraph`:
```zig
pub const Flags = packed struct(u32) {
    disable_default_env_files: bool = false,
    disable_autoload_bunfig: bool = false,
    _padding: u30 = 0,
};
```

These flags are:
1. Set during compilation from CLI args or JS API options
2. Serialized into the `StandaloneModuleGraph` embedded in the
executable
3. Read at runtime in `bootStandalone()` to conditionally load config
files

## Testing

Manually tested and verified:
-  Default behavior loads `.env` files
-  `--no-compile-autoload-dotenv` disables `.env` loading
-  `--compile-autoload-dotenv` explicitly enables `.env` loading
-  Default behavior loads `bunfig.toml` (verified with preload script)
-  `--no-compile-autoload-bunfig` disables `bunfig.toml` loading

Test cases added in `test/bundler/bundler_compile_autoload.test.ts`

## Files Changed

- `src/StandaloneModuleGraph.zig` - Added Flags struct, updated
encode/decode
- `src/bun.js.zig` - Checks flags in bootStandalone()
- `src/bun.js/api/JSBundler.zig` - Added autoload options to
CompileOptions
- `src/bundler/bundle_v2.zig` - Pass flags to toExecutable()
- `src/cli.zig` - Added flags to BundlerOptions
- `src/cli/Arguments.zig` - Added CLI argument parsing
- `src/cli/build_command.zig` - Pass flags from context
- `test/bundler/expectBundled.ts` - Support new compile options
- `test/bundler/bundler_compile_autoload.test.ts` - New test file

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-11-18 09:46:44 -05:00
Marko Vejnovic
9513c1d1d9 chore: HttpContext.h cleanup (#24730) 2025-11-17 13:36:03 -08:00
robobun
509a97a435 Add --no-env-file flag to disable automatic .env loading (#24767)
## Summary

Implements `--no-env-file` CLI flag and bunfig configuration options to
disable automatic `.env` file loading at runtime and in the bundler.

## Motivation

Users may want to disable automatic `.env` file loading for:
- Production environments where env vars are managed externally
- CI/CD pipelines where .env files should be ignored
- Testing scenarios where explicit env control is needed
- Security contexts where .env files should not be trusted

## Changes

### CLI Flag
- Added `--no-env-file` flag that disables loading of default .env files
- Still respects explicit `--env-file` arguments for intentional env
loading

### Bunfig Configuration
Added support for disabling .env loading via `bunfig.toml`:
- `env = false` - disables default .env file loading
- `env = null` - disables default .env file loading  
- `env.file = false` - disables default .env file loading
- `env.file = null` - disables default .env file loading

### Implementation
- Added `disable_default_env_files` field to `api.TransformOptions` with
serialization support
- Added `disable_default_env_files` field to `options.Env` struct
- Implemented `loadEnvConfig` in bunfig parser to handle env
configuration
- Wired up flag throughout runtime and bundler code paths
- Preserved package.json script runner behavior (always skips default
.env files)

## Tests

Added comprehensive test suite (`test/cli/run/no-envfile.test.ts`) with
9 tests covering:
- `--no-env-file` flag with `.env`, `.env.local`,
`.env.development.local`
- Bunfig configurations: `env = false`, `env.file = false`, `env = true`
- `--no-env-file` with `-e` eval flag
- `--no-env-file` combined with `--env-file` (explicit files still load)
- Production mode behavior

All tests pass with debug bun and fail with system bun (as expected).

## Example Usage

```bash
# Disable all default .env files
bun --no-env-file index.js

# Disable defaults but load explicit file
bun --no-env-file --env-file .env.production index.js

# Disable via bunfig.toml
cat > bunfig.toml << 'CONFIG'
env = false
CONFIG
bun index.js
```

## Files Changed
- `src/cli/Arguments.zig` - CLI flag parsing
- `src/api/schema.zig` - API schema field with encode/decode
- `src/options.zig` - Env struct field and wiring
- `src/bunfig.zig` - Config parsing with loadEnvConfig
- `src/transpiler.zig` - Runtime wiring
- `src/bun.js.zig` - Runtime wiring
- `src/cli/exec_command.zig` - Runtime wiring
- `src/cli/run_command.zig` - Preserved package.json script runner
behavior
- `test/cli/run/no-envfile.test.ts` - Comprehensive test suite

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-17 15:04:42 -05:00
Dylan Conway
983bb52df7 fix #24550 (#24726)
### What does this PR do?
Fixes a regression introduced in Bun v1.3.2 with #24283.

We are not able to skip `sharp` lifecycle scripts before v0.33.0 because
previous versions did not use optional dependencies with prebuilds.

Fixes #24550
Fixes ENG-21519
### How did you verify your code works?
Manually

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-17 15:04:20 -05:00
Meghan Denny
8b5b36ec7a runtime: fix n-api ThreadSafeFunction finalizer (#24771)
Closes https://github.com/oven-sh/bun/issues/24552
Closes https://github.com/oven-sh/bun/issues/24664
Closes https://github.com/oven-sh/bun/issues/24702
Closes https://github.com/oven-sh/bun/issues/24703
Closes https://github.com/oven-sh/bun/issues/24768
2025-11-17 11:23:13 -08:00
Michael H
87eca6bbc7 docs: re-apply many recent changes that somehow aren't present (#24719)
lots of recent changes aren't present, so this reapplies them
2025-11-16 19:23:01 +11:00
Meghan Denny
2cb8d4eae8 cmake: remove GIT_CHANGED_SOURCES (#24737)
dead code
2025-11-15 16:45:37 -08:00
Meghan Denny
e53ceb62ec zig: fix missing uses of bun.callmod_inline (#24738)
results in better stack traces in debug mode
2025-11-15 16:36:15 -08:00
pfg
277fc558e2 only-failures fix (#24701)
### What does this PR do?

Removes these accidental blank lines

<img width="170" height="139" alt="image"
src="https://github.com/user-attachments/assets/b44d6496-a497-4be6-9666-8134a70d7324"
/>


### How did you verify your code works?
2025-11-14 19:52:43 -08:00
Dylan Conway
5908bfbfc6 fix(YAML.stringify): number-like strings prefixed with 0 (#24731)
### What does this PR do?
Ensures strings that would parse as a number with leading zeroes aren't
emitted without quotes.
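
A hedged illustration, assuming the `Bun.YAML` parse/stringify API this commit refers to:

```ts
// Assumes Bun.YAML.stringify/parse as referenced by this commit.
// "0123" looks like a number with a leading zero, so it must stay quoted;
// otherwise a round-trip would reparse it as a number and change the value.
const doc = Bun.YAML.stringify({ id: "0123" });

console.log(doc); // the id value is emitted with quotes
console.log(Bun.YAML.parse(doc)); // { id: "0123" }
```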

fixes #23691

### How did you verify your code works?
Added a test
2025-11-14 17:43:36 -08:00
Dylan Conway
19f21c00bd fix #24510 (#24563)
### What does this PR do?
The assertion was too strict.

This PR changes the assertion to allow multiple of the same dependency id
to be present. It also changes all the assertions to debug assertions.

fixes #24510
### How did you verify your code works?
Manually, and added a new test

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Marko Vejnovic <marko@bun.com>
2025-11-14 16:49:21 -08:00
Lydia Hallie
8650e7ace4 Docs: Add templates to guides (#24732)
Adds template cards to the TanStack Start and Next.js guides
2025-11-14 16:45:21 -08:00
robobun
b2c219a56c Implement retry and repeats options for bun:test (#23713)
Fixes #16051, Fixes ENG-21437

Implements retry/repeats

```ts
test("my test", () => {
    if (Math.random() < 0.1) throw new Error("uh oh!");
}, {repeats: 20});
```

```
Error: uh oh!
✗ my test
```

```ts
test("my test", () => {
    if (Math.random() < 0.1) throw new Error("uh oh!");
}, {retry: 5});
```

```
Error: uh oh!
✓ my test (attempt 2)
```

Also fixes a bug where onTestFinished inside a test would not run if the
test failed

```ts
test("abc", () => {
    onTestFinished(() => { console.log("hello") });
    throw new Error("uh oh!");
});
```

```
Error: uh oh!
hello
```

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: pfg <pfg@pfg.pw>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-14 16:21:04 -08:00
Luke Parker
f216673f98 fix: Add missing SIGWINCH for windows (#24704)
### What does this PR do?
Fixes https://github.com/oven-sh/bun/issues/22288
Fixes #22402
Fixes https://github.com/oven-sh/bun/issues/23224
Fixes https://github.com/oven-sh/bun/issues/17803

cc: Should unblock opencode/opentui window resize on windows
https://github.com/sst/opentui/issues/152
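
For reference, the standard Node-style handlers that this unblocks on Windows (a hedged sketch):

```ts
// Terminal resize notifications now also arrive on Windows.
process.on("SIGWINCH", () => {
  console.log(`resized to ${process.stdout.columns}x${process.stdout.rows}`);
});

process.stdout.on("resize", () => {
  console.log("stdout resize event fired");
});
```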

### How did you verify your code works?
Clone the linked repro, verified latest bun failed, node worked, then
iterated till my local bun worked.

Here is a screenshot of the branch working with bun on windows

<img width="1427" height="891" alt="image"
src="https://github.com/user-attachments/assets/18642db7-4cb6-4758-bb76-a38d277cbc23"
/>

Additionally using bun vs bun-debug on a little test for our downstream
package proves this works

<img width="1137" height="679" alt="image"
src="https://github.com/user-attachments/assets/4dbe7605-ced9-4bcb-84f0-ed793f8aa942"
/>
<img width="1138" height="684" alt="image"
src="https://github.com/user-attachments/assets/f658b3b9-e4bc-4bfa-84f0-e1eb3af83d89"
/>
2025-11-14 14:05:47 -08:00
Lydia Hallie
a70f2b7ff9 Docs: Add custom server instructions to TanStack guide (#24723)
Add docs on how to deploy a custom Bun server for TanStack Start. Based
on [this
example](https://github.com/TanStack/router/tree/main/examples/react/start-bun/server.ts)
2025-11-14 11:53:23 -08:00
Braden Wong
65a215bb4e docs(watch): use relativePath parameter name in recursive example (#24716)
This updates the documentation for `fs.watch()` to use `relativePath`
instead of `filename` in the recursive example, following the same
convention from PR #23990.

When `recursive: true` is set on `fs.watch()`, the callback receives a
relative path to the changed file rather than just a simple filename.
Using `relativePath` as the parameter name makes this distinction
clearer to users.

**Related to:** https://github.com/oven-sh/bun/pull/23990

Co-authored-by: Michael H <git@riskymh.dev>
2025-11-15 01:10:35 +11:00
Michael H
c3c91442ac docs: fix custom loader example to be correct & other file-type doc updates (#24677) 2025-11-14 16:32:00 +11:00
Nino
93ab167a8d docs: fix environment variable syntax in executable example (#24706)
### What does this PR do?

Fixes a typo in the docs. 

`bun_BE_BUN=1` doesn't work, it has to be capitalized `BUN_BE_BUN=1`
2025-11-13 21:31:07 -08:00
pfg
d8ee26509c Fix progress showing kb for downloading packages instead of count (#24700)
- show bytes for upgrading bun
- show no unit for other progress bars

Fix for issue introduced in #24266
2025-11-13 19:29:16 -08:00
Ciro Spaciari
21d582a3cd fix(createEmptyObject) fix some createEmptyObject values (#22512)
### What does this PR do?
We must use the right number of properties (not more or less), or otherwise
set it to 0
### How did you verify your code works?
Read the code; this will avoid potential crashes and improve stability
2025-11-13 15:19:18 -08:00
Meghan Denny
d7bf4fb443 ci/format: update bun version (#24693) 2025-11-13 15:00:40 -08:00
Ciro Spaciari
263d1ab178 update(crypto) update root certificates to NSS 3.117 (#24607)
### What does this PR do?
This is the certdata.txt[0] from NSS 3.117, released on 2025-11-11.

This is the version of NSS that will ship in Firefox 145.0 on
2025-11-11.

Certificates added:
- OISTE Server Root ECC G1
-  OISTE Server Root RSA G1

[0]
https://hg.mozilla.org/projects/nss/raw-file/NSS_3_117_RTM/lib/ckfw/builtins/certdata.txt
765c9e86-0a91-4dad-b410-801cd60f8b32

Fixes https://linear.app/oven/issue/ENG-21508/update-root-certs
### How did you verify your code works?
CI
2025-11-13 13:26:34 -08:00
Marko Vejnovic
08843030f5 [publish images] 2025-11-13 12:04:47 -08:00
Kristjan Broder Lund
9ccc8fb795 docs: format code blocks correctly (#24672)
### What does this PR do?

The code blocks were not properly formatted, and did not render
correctly

`main`:
<img width="699" height="196" alt="image"
src="https://github.com/user-attachments/assets/c08bc29e-9481-47ae-bafe-dd94b22d0c09"
/>

this pr:
<img width="691" height="306" alt="image"
src="https://github.com/user-attachments/assets/947fb9d7-04f3-42e8-aafe-0d70127fefd1"
/>

### How did you verify your code works?

ran docs locally with mint

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Michael H <git@riskymh.dev>
2025-11-13 20:36:45 +11:00
S.T.P
4c03d3b8b6 docs: remove the redundant tags (#24668)
### What does this PR do?

Remove the redundant code block tag

<img width="1469" height="918" alt="image"
src="https://github.com/user-attachments/assets/3eb3b499-3165-409c-9360-2fe1872162ed"
/>

After change

<img width="1458" height="1006" alt="image"
src="https://github.com/user-attachments/assets/69eac47c-28cd-4459-9478-0098b51f78fe"
/>


### How did you verify your code works?

Preview the documentation locally

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Michael H <git@riskymh.dev>
2025-11-13 18:37:22 +11:00
Marko Vejnovic
6d2ce3892b build(ENG-21514): Fix sccache invocation (#24651)
### What does this PR do?

Fixes some miswritten `cmake` steps so that `sccache` actually works

### How did you verify your code works?
2025-11-12 14:51:08 -08:00
Marko Vejnovic
e03d3bee10 ci(ENG-21502): Fix sccache not working inside Docker (#24597) 2025-11-12 14:40:12 -08:00
Caio Borghi
fff47f0267 docs: update EdgeDB references to Gel rebrand (#24487)
## Summary
EdgeDB has rebranded to Gel. This PR comprehensively updates all
documentation to reflect the rebrand.

## Changes Made

### Documentation & Branding
- **Guide title**: "Use EdgeDB with Bun" → "Use Gel with Bun"
- **File renamed**: `docs/guides/ecosystem/edgedb.mdx` → `gel.mdx`
- **Description**: Added "(formerly EdgeDB)" note
- **All path references**: Updated from `/guides/ecosystem/edgedb` to
`/guides/ecosystem/gel`

### CLI Commands
- `edgedb project init` → `gel project init`
- `edgedb` → `gel` (REPL)
- `edgedb migration create` → `gel migration create`
- `edgedb migrate` → `gel migrate`

### npm Packages
- `edgedb` → `gel`
- `@edgedb/generate` → `@gel/generate`

### Installation & Documentation URLs
- Installation link: `docs.geldata.com/learn/installation` (functional)
- Documentation reference: `docs.geldata.com/` (operational)
- Installation scripts: Verified working (`https://www.geldata.com/sh`
and `ps1`)
- Added Homebrew option: `brew install geldata/tap/gel-cli`

### Code Examples
- Updated all imports: `import { createClient } from "gel"`
- Updated codegen commands: `bunx @gel/generate`

## Verified
All commands verified against official Gel documentation at
https://docs.geldata.com/

Fixes #17721

---------

Co-authored-by: Lydia Hallie <lydiajuliettehallie@gmail.com>
2025-11-12 14:18:59 -08:00
Ciro Spaciari
4e1d9a2cbc remove dead code in src/bake/DevServer/SerializedFailure.zig (#24635)
### What does this PR do?
remove dead code in src/bake/DevServer/SerializedFailure.zig
### How did you verify your code works?
It builds
2025-11-12 13:39:36 -08:00
Ciro Spaciari
1f0c885e91 proper handle on_data if we receive null (#24624)
### What does this PR do?
If for some reason data is null, we should handle it as empty.
Fixes
https://linear.app/oven/issue/ENG-21511/panic-attempt-to-use-null-value-in-socket-on-data
### How did you verify your code works?
Ci
2025-11-12 12:42:06 -08:00
Ciro Spaciari
ab32a2fc4a fix(bun getcompletes) add windows support and remove TODO panic (#24620)
### What does this PR do?
Fixes https://linear.app/oven/issue/ENG-21509/panic-todo-in-completions
### How did you verify your code works?
Test
2025-11-12 12:41:47 -08:00
Ciro Spaciari
8912957aa5 compatibility(node:net) _handle.fd property (#24575)
### What does this PR do?
Expose fd property in _handle for node:net/node:tls
Fixes
https://linear.app/oven/issue/ENG-21507/expose-fd-in-nodenetnodetls
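
A hedged sketch of the compatibility surface (host and port are illustrative; `_handle` is an internal, untyped property, hence the cast):

```ts
import net from "node:net";

const socket = net.connect(443, "example.com", () => {
  // Libraries that reach into the internal handle can now read the file descriptor.
  console.log((socket as any)._handle?.fd);
  socket.end();
});
```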

### How did you verify your code works?
Test

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-11-12 12:28:55 -08:00
Ciro Spaciari
df4e42bf1c fix(DevServer) remove panic in case of source type none (#24634)
### What does this PR do?
Removes the panic in the case of source type `none` so we can handle it more
gracefully. We can discuss whether this is the best solution, but it looks
sensible to me. This is really hard to reproduce, but it can happen when
deleting files referenced by dynamic imports.


Fixes
https://linear.app/oven/issue/ENG-21513/panic-missing-internal-precomputed-line-count-in-renderjson-on
Fixes https://github.com/oven-sh/bun/issues/21714
### How did you verify your code works?
CI

---------

Co-authored-by: taylor.fish <contact@taylor.fish>
2025-11-12 12:28:17 -08:00
Ciro Spaciari
f67bec90c5 refactor(us_socket_t.zig) safer use of intCast (#24622)
### What does this PR do?
Make sure to always use a safe `intCast` in `us_socket_t`.
### How did you verify your code works?
Compiles
2025-11-12 11:02:39 -08:00
Michael H
fa099336da docs: node does support "import path re-mapping" (#17133)
fixes #4545
2025-11-13 06:02:12 +11:00
Michael H
7f8dff64c4 docs: revert minifier doc's format (#24639) 2025-11-12 11:01:25 -08:00
Michael H
98a01e5d2a docs: fix some pages (#24632) 2025-11-13 06:00:05 +11:00
Michael H
d1fa27acce docs: document more loaders (#24616)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-12 10:48:03 -08:00
Michael H
cf6662d48f types: document configVersion in BunLockFile (#24641) 2025-11-12 10:47:30 -08:00
Marko Vejnovic
2563a9b3ad build(ENG-21491): Improve sccache behavior on developer machines (#24568)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-12 09:11:33 -08:00
Meghan Denny
e0aae8adc1 ci: remove unified-builds and unified-tests options (#24626) 2025-11-11 22:52:46 -08:00
Marko Vejnovic
6b8a75f6ab chore(ENG-21504): Remove bit-rotted scripts (#24606)
### What does this PR do?

Removes some scripts which haven't been tested in a while.

### How did you verify your code works?

CI passes
2025-11-11 22:39:20 -08:00
pfg
97c113d010 remove unused writer type parameters in src/css/ (#24571)
No longer needed after zig upgrade

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-11 21:09:50 -08:00
Meghan Denny
7f4e65464e zig: fix spurious dependency loop compile error in ResumableSink (#24618) 2025-11-11 20:22:24 -08:00
Meghan Denny
4b05629131 ci: update no-validate-exceptions.txt 2025-11-11 20:21:24 -08:00
Ciro Spaciari
d868c6019c fix(DevServer) unconditional unwrap in IncrementalGraph (#24608)
### What does this PR do?
Fixes
https://linear.app/oven/issue/ENG-21505/panic-attempt-to-use-null-value-at-incrementalgraph-by-misusing-jscode

When calling `takeJSBundleToList`/`takeJSBundle`, the desired behavior is
to get only JS chunks from the graph. Since the graph can also contain CSS
chunks, we can simply continue and ignore those, keeping the desired
behavior in a safe way instead of unconditionally unwrapping something that
is not guaranteed to have a `jsCode`.

### How did you verify your code works?
CI
2025-11-11 16:46:28 -08:00
robobun
0c42b46af3 docs: remove outdated version mentions (1.0.x and 1.1.x) (#24570)
## Summary

Remove outdated version mentions (1.0.x and 1.1.x) from documentation
for better consistency. These versions are over a year old - you should
be using a recent version of bun :).

## What changed

**Removed version mentions from:**
- `docs/pm/lifecycle.mdx` - v1.0.16 (trusted dependencies)
- `docs/bundler/executables.mdx` - v1.0.23, v1.1.25, v1.1.30 (various
features)
- `docs/guides/install/jfrog-artifactory.mdx` - v1.0.3+ (env var
comment)
- `docs/guides/install/azure-artifacts.mdx` - v1.0.3+ (env var comment)
- `docs/runtime/workers.mdx` - v1.1.13, v1.1.35 (blob URLs, preload)
- `docs/runtime/networking/dns.mdx` - v1.1.9 (DNS caching)
- `docs/guides/runtime/import-html.mdx` - v1.1.5
- `docs/guides/runtime/define-constant.mdx` - v1.1.5
- `docs/runtime/sqlite.mdx` - v1.1.31

**Kept version mentions in:**
- All 1.2.x versions (still recent, less than a year old)
- Benchmark version numbers (e.g., S3 performance comparison with
v1.1.44)
- `docs/guides/install/yarnlock.mdx` (bun.lock introduction context)
- `docs/project/building-windows.mdx` (build requirements)
- `docs/runtime/http/websockets.mdx` (performance benchmarks)

## Why

The docs lack consistency around version mentions - we don't document
every feature's version, so keeping scattered old version numbers looks
inconsistent. These changes represent a small percentage of features
added recently, and users on ancient versions have bigger problems than
needing to know exactly when a feature landed.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: RiskyMH <git@riskymh.dev>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-12 10:44:52 +11:00
Nathan Soares
1cee6cf36b docs: Change code block header from package.json to tsconfig.json (#24511)
Co-authored-by: Michael H <git@riskymh.dev>
2025-11-12 10:25:23 +11:00
pfg
9671a98dca remove unused "recommended zig version" (#24611)
Dead code.
2025-11-11 15:10:57 -08:00
yinheli
c6aa5a97dc fix(docs): Remove duplicate sections in guides.jsx (#24595)
### What does this PR do?

This PR fixes an issue on the Guides page where duplicate sections were
being displayed. The problem was caused by a misplaced return statement
and a duplicated JSX block introduced in commit
[1606a9f24e](https://github.com/oven-sh/bun/blob/1606a9f24e/docs/snippets/guides.jsx#L504-L514).
2025-11-11 14:53:59 -08:00
robobun
925e8bcfe1 Format download sizes in human-readable format (#24266)
## Summary

- Use `std.fmt.fmtIntSizeBin` to format progress indicators with byte
sizes
- Improves readability during operations like `bun upgrade`
- Changes display from raw bytes (e.g., "23982378/2398284") to
human-readable format (e.g., "23.2MiB/100MiB")

## Changes

Modified `src/Progress.zig`:
- Updated progress formatting to use `std.fmt.fmtIntSizeBin` for both
current and total sizes
- Applied to both progress with total (`[current/total unit]`) and
without total (`[current unit]`)

## Test plan

- [x] Build succeeds with `bun bd`
- [ ] Manual verification with `bun upgrade` shows human-readable sizes

Fixes #24226 fixes #7826

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: pfg <pfg@pfg.pw>
2025-11-10 20:02:00 -08:00
robobun
b87ac4a781 Update ci_info with more CI detection (#23708)
Fixes ENG-21481

Updates ci_info to include more CIs. It codegens the CI detection based on
the JSON from the ci-info package. It also supports setting CI=true to force
CI detection.

---------

Co-authored-by: pfg <pfg@pfg.pw>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-10 19:58:02 -08:00
robobun
b876938f6d docs: update documentation for Bun v1.3.2 features (#24503)
## Summary
Updates documentation for all major features and changes introduced in
Bun v1.3.2 blog post.

## Changes

### Package Manager
-  Document `configVersion` system for controlling default linker
behavior
-  Clarify that "existing projects (made pre-v1.3.2)" use hoisted
installs for backward compatibility
-  Add smart postinstall script optimization with environment variable
flags
-  Document improved Git dependency resolution with HTTP tarball
optimization
-  Add `bun list` alias for `bun pm ls`

### Testing
-  Document new `onTestFinished` lifecycle hook with simple example
-  Add to lifecycle hooks table in test documentation

### Runtime & Performance
-  Add CPU profiling with `--cpu-prof` flag documentation
-  Place after memory usage section for better flow

### WebSockets
-  Add `subscriptions` getter to existing pub/sub example
-  Add TypeScript reference for the subscriptions property

## Documentation Improvements
All documentation now consistently:
- Uses "made pre-v1.3.2" to clarify existing project behavior
- Simplifies default linker explanations with clear references to
`/docs/pm/isolated-installs`
- Uses `/docs/pm/isolated-installs` for all internal references
- Avoids confusing technical details in favor of user-friendly summaries

## Files Modified
- `docs/guides/install/add-git.mdx` - Added GitHub tarball optimization
note
- `docs/pm/cli/install.mdx` - Added installation strategies and smart
postinstall docs
- `docs/pm/cli/pm.mdx` - Added bun list alias
- `docs/pm/isolated-installs.mdx` - Updated default behavior section
with configVersion table
- `docs/project/benchmarking.mdx` - Added CPU profiling section
- `docs/runtime/bunfig.mdx` - Clarified install.linker defaults
- `docs/runtime/http/websockets.mdx` - Added subscriptions to example
and TypeScript interface
- `docs/test/lifecycle.mdx` - Added onTestFinished hook documentation

## Diff

````diff
diff --git a/docs/guides/install/add-git.mdx b/docs/guides/install/add-git.mdx
index 70950e1a63..7f8f3c8d81 100644
--- a/docs/guides/install/add-git.mdx
+++ b/docs/guides/install/add-git.mdx
@@ -33,6 +33,8 @@ bun add git@github.com:lodash/lodash.git
 bun add github:colinhacks/zod
 ```
 
+**Note:** GitHub dependencies download via HTTP tarball when possible for faster installation.
+
 ---
 
 See [Docs > Package manager](https://bun.com/docs/cli/install) for complete documentation of Bun's package manager.
diff --git a/docs/pm/cli/install.mdx b/docs/pm/cli/install.mdx
index 7affb62646..dde268b7e5 100644
--- a/docs/pm/cli/install.mdx
+++ b/docs/pm/cli/install.mdx
@@ -88,6 +88,13 @@ Lifecycle scripts will run in parallel during installation. To adjust the maximu
 bun install --concurrent-scripts 5
 ```
 
+Bun automatically optimizes postinstall scripts for popular packages (like `esbuild`, `sharp`, etc.) by determining which scripts need to run. To disable these optimizations:
+
+```bash terminal icon="terminal"
+BUN_FEATURE_FLAG_DISABLE_NATIVE_DEPENDENCY_LINKER=1 bun install
+BUN_FEATURE_FLAG_DISABLE_IGNORE_SCRIPTS=1 bun install
+```
+
 ---
 
 ## Workspaces
@@ -231,7 +238,7 @@ Bun supports installing dependencies from Git, GitHub, and local or remotely-hos
 
 Bun supports two package installation strategies that determine how dependencies are organized in `node_modules`:
 
-### Hoisted installs (default for single projects)
+### Hoisted installs
 
 The traditional npm/Yarn approach that flattens dependencies into a shared `node_modules` directory:
 
@@ -249,7 +256,15 @@ bun install --linker isolated
 
 Isolated installs create a central package store in `node_modules/.bun/` with symlinks in the top-level `node_modules`. This ensures packages can only access their declared dependencies.
 
-For complete documentation on isolated installs, refer to [Package manager > Isolated installs](/pm/isolated-installs).
+### Default strategy
+
+The default linker strategy depends on whether you're starting fresh or have an existing project:
+
+- **New workspaces/monorepos**: `isolated` (prevents phantom dependencies)
+- **New single-package projects**: `hoisted` (traditional npm behavior)
+- **Existing projects (made pre-v1.3.2)**: `hoisted` (preserves backward compatibility)
+
+The default is controlled by a `configVersion` field in your lockfile. For a detailed explanation, see [Package manager > Isolated installs](/docs/pm/isolated-installs).
 
 ---
 
@@ -319,8 +334,7 @@ dryRun = false
 concurrentScripts = 16 # (cpu count or GOMAXPROCS) x2
 
 # installation strategy: "hoisted" or "isolated"
-# default: "hoisted" (for single-project projects)
-# default: "isolated" (for monorepo projects)
+# default varies by project type - see /docs/pm/isolated-installs
 linker = "hoisted"
 
 
diff --git a/docs/pm/cli/pm.mdx b/docs/pm/cli/pm.mdx
index fc297753d3..9c8faa7da1 100644
--- a/docs/pm/cli/pm.mdx
+++ b/docs/pm/cli/pm.mdx
@@ -115,6 +115,8 @@ To print a list of installed dependencies in the current project and their resol
 
 ```bash terminal icon="terminal"
 bun pm ls
+# or
+bun list
 ```
 
 ```txt
@@ -130,6 +132,8 @@ To print all installed dependencies, including nth-order dependencies.
 
 ```bash terminal icon="terminal"
 bun pm ls --all
+# or
+bun list --all
 ```
 
 ```txt
diff --git a/docs/pm/isolated-installs.mdx b/docs/pm/isolated-installs.mdx
index 73c6748b15..17afe02fe1 100644
--- a/docs/pm/isolated-installs.mdx
+++ b/docs/pm/isolated-installs.mdx
@@ -5,7 +5,7 @@ description: "Strict dependency isolation similar to pnpm's approach"
 
 Bun provides an alternative package installation strategy called **isolated installs** that creates strict dependency isolation similar to pnpm's approach. This mode prevents phantom dependencies and ensures reproducible, deterministic builds.
 
-This is the default installation strategy for monorepo projects.
+This is the default installation strategy for **new** workspace/monorepo projects (with `configVersion = 1` in the lockfile). Existing projects continue using hoisted installs unless explicitly configured.
 
 ## What are isolated installs?
 
@@ -43,8 +43,23 @@ linker = "isolated"
 
 ### Default behavior
 
-- For monorepo projects, Bun uses the **isolated** installation strategy by default.
-- For single-project projects, Bun uses the **hoisted** installation strategy by default.
+The default linker strategy depends on your project's lockfile `configVersion`:
+
+| `configVersion` | Using workspaces? | Default Linker |
+| --------------- | ----------------- | -------------- |
+| `1`             | Yes               | `isolated`     |
+| `1`             | No                | `hoisted`      |
+| `0`             | Yes               | `hoisted`      |
+| `0`             | No                | `hoisted`      |
+
+**New projects**: Default to `configVersion = 1`. In workspaces, v1 uses the isolated linker by default; otherwise it uses hoisted linking.
+
+**Existing Bun projects (made pre-v1.3.2)**: If your existing lockfile doesn't have a version yet, Bun sets `configVersion = 0` when you run `bun install`, preserving the previous hoisted linker default.
+
+**Migrations from other package managers**:
+
+- From pnpm: `configVersion = 1` (using isolated installs in workspaces)
+- From npm or yarn: `configVersion = 0` (using hoisted installs)
 
 You can override the default behavior by explicitly specifying the `--linker` flag or setting it in your configuration file.
 
diff --git a/docs/project/benchmarking.mdx b/docs/project/benchmarking.mdx
index 1263a06729..2ab8bcafc8 100644
--- a/docs/project/benchmarking.mdx
+++ b/docs/project/benchmarking.mdx
@@ -216,3 +216,26 @@ numa nodes:       1
    elapsed:       0.068 s
    process: user: 0.061 s, system: 0.014 s, faults: 0, rss: 57.4 MiB, commit: 64.0 MiB
 ```
+
+## CPU profiling
+
+Profile JavaScript execution to identify performance bottlenecks with the `--cpu-prof` flag.
+
+```sh terminal icon="terminal"
+bun --cpu-prof script.js
+```
+
+This generates a `.cpuprofile` file you can open in Chrome DevTools (Performance tab → Load profile) or VS Code's CPU profiler.
+
+### Options
+
+```sh terminal icon="terminal"
+bun --cpu-prof --cpu-prof-name my-profile.cpuprofile script.js
+bun --cpu-prof --cpu-prof-dir ./profiles script.js
+```
+
+| Flag                         | Description          |
+| ---------------------------- | -------------------- |
+| `--cpu-prof`                 | Enable profiling     |
+| `--cpu-prof-name <filename>` | Set output filename  |
+| `--cpu-prof-dir <dir>`       | Set output directory |
diff --git a/docs/runtime/bunfig.mdx b/docs/runtime/bunfig.mdx
index 91005c1607..5b7fe49823 100644
--- a/docs/runtime/bunfig.mdx
+++ b/docs/runtime/bunfig.mdx
@@ -497,9 +497,9 @@ print = "yarn"
 
 ### `install.linker`
 
-Configure the default linker strategy. Default `"hoisted"` for single-project projects, `"isolated"` for monorepo projects.
+Configure the linker strategy for installing dependencies. Defaults to `"isolated"` for new workspaces, `"hoisted"` for new single-package projects and existing projects (made pre-v1.3.2).
 
-For complete documentation refer to [Package manager > Isolated installs](/pm/isolated-installs).
+For complete documentation refer to [Package manager > Isolated installs](/docs/pm/isolated-installs).
 
 ```toml title="bunfig.toml" icon="settings"
 [install]
diff --git a/docs/runtime/http/websockets.mdx b/docs/runtime/http/websockets.mdx
index b33f37c29f..174043200d 100644
--- a/docs/runtime/http/websockets.mdx
+++ b/docs/runtime/http/websockets.mdx
@@ -212,6 +212,9 @@ const server = Bun.serve({
       // this is a group chat
       // so the server re-broadcasts incoming message to everyone
       server.publish("the-group-chat", `${ws.data.username}: ${message}`);
+
+      // inspect current subscriptions
+      console.log(ws.subscriptions); // ["the-group-chat"]
     },
     close(ws) {
       const msg = `${ws.data.username} has left the chat`;
@@ -393,6 +396,7 @@ interface ServerWebSocket {
   readonly data: any;
   readonly readyState: number;
   readonly remoteAddress: string;
+  readonly subscriptions: string[];
   send(message: string | ArrayBuffer | Uint8Array, compress?: boolean): number;
   close(code?: number, reason?: string): void;
   subscribe(topic: string): void;
diff --git a/docs/test/lifecycle.mdx b/docs/test/lifecycle.mdx
index 6427175df6..3837f0e948 100644
--- a/docs/test/lifecycle.mdx
+++ b/docs/test/lifecycle.mdx
@@ -6,11 +6,12 @@ description: "Learn how to use beforeAll, beforeEach, afterEach, and afterAll li
 The test runner supports the following lifecycle hooks. This is useful for loading test fixtures, mocking data, and configuring the test environment.
 
 | Hook             | Description                                                |
-| ------------ | --------------------------- |
+| ---------------- | ---------------------------------------------------------- |
 | `beforeAll`      | Runs once before all tests.                                |
 | `beforeEach`     | Runs before each test.                                     |
 | `afterEach`      | Runs after each test.                                      |
 | `afterAll`       | Runs once after all tests.                                 |
+| `onTestFinished` | Runs after a single test finishes (after all `afterEach`). |
 
 ## Per-Test Setup and Teardown
 
@@ -90,6 +91,23 @@ describe("test group", () => {
 });
 ```
 
+### `onTestFinished`
+
+Use `onTestFinished` to run a callback after a single test completes. It runs after all `afterEach` hooks.
+
+```ts title="test.ts" icon="/icons/typescript.svg"
+import { test, onTestFinished } from "bun:test";
+
+test("cleanup after test", () => {
+  onTestFinished(() => {
+    // runs after all afterEach hooks
+    console.log("test finished");
+  });
+});
+```
+
+Not supported in concurrent tests; use `test.serial` instead.
+
 ## Global Setup and Teardown
 
 To scope the hooks to an entire multi-file test run, define the hooks in a separate file.
````

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Michael H <git@riskymh.dev>
Co-authored-by: Lydia Hallie <lydiajuliettehallie@gmail.com>
2025-11-10 18:18:07 -08:00
Lydia Hallie
6b70b71895 Add TanStack Start guide (#24572)
Adds a guide on how to build and deploy TanStack Start with Bun
2025-11-10 17:38:48 -08:00
Marko Vejnovic
80a5b59fe5 bug(ENG-21501): Fix integer overflow in hosted_git_info.zig (#24561)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-10 16:30:47 -08:00
Alistair Smith
d87a928b94 Remove dependency on React's types in @types/bun 2025-11-10 15:50:45 -08:00
pfg
05d0475c6c Update to zig 0.15.2 (#24204)
Fixes ENG-21287

Build times, from `bun run build && echo '//' >> src/main.zig && time
bun run build`

|Platform|0.14.1|0.15.2|Speedup|
|-|-|-|-|
|macos debug asan|126.90s|106.27s|1.19x|
|macos debug noasan|60.62s|50.85s|1.19x|
|linux debug asan|292.77s|241.45s|1.21x|
|linux debug noasan|146.58s|130.94s|1.12x|
|linux debug use_llvm=false|n/a|78.27s|1.87x|
|windows debug asan|177.13s|142.55s|1.24x|

Runtime performance:

- next build memory usage may have gone up by 5%. Otherwise seems the
same. Some code with writers may have gotten slower, especially one
instance of a counting writer and a few instances of unbuffered writers
that now have vtable overhead.
- File size reduced by 800kb (from 100.2mb to 99.4mb)

Improvements:

- `@export` hack is no longer needed for watch
- native x86_64 backend for linux builds faster. to use it, set use_llvm
false and no_link_obj false. also set `ASAN_OPTIONS=detect_leaks=0`
otherwise it will spam the output with tens of thousands of lines of
debug info errors. may need to use the zig lldb fork for debugging.
- zig test-obj, which we will be able to use for zig unit tests

Still an issue:

- false 'dependency loop' errors remain in watch mode
- watch mode crashes observed

Follow-up:

- [ ] search `comptime Writer: type` and `comptime W: type` and remove
- [ ] remove format_mode in our zig fork
- [ ] remove deprecated.zig autoFormatLabelFallback
- [ ] remove deprecated.zig autoFormatLabel
- [ ] remove deprecated.BufferedWriter and BufferedReader
- [ ] remove override_no_export_cpp_apis as it is no longer needed
- [ ] css Parser(W) -> Parser, and remove all the comptime writer: type
params
- [ ] remove deprecated writer fully

Files that add lines:

```
649     src/deprecated.zig
167     scripts/pack-codegen-for-zig-team.ts
54      scripts/cleartrace-impl.js
46      scripts/cleartrace.ts
43      src/windows.zig
18      src/fs.zig
17      src/bun.js/ConsoleObject.zig
16      src/output.zig
12      src/bun.js/test/debug.zig
12      src/bun.js/node/node_fs.zig
8       src/env_loader.zig
7       src/css/printer.zig
7       src/cli/init_command.zig
7       src/bun.js/node.zig
6       src/string/escapeRegExp.zig
6       src/install/PnpmMatcher.zig
5       src/bun.js/webcore/Blob.zig
4       src/crash_handler.zig
4       src/bun.zig
3       src/install/lockfile/bun.lock.zig
3       src/cli/update_interactive_command.zig
3       src/cli/pack_command.zig
3       build.zig
2       src/Progress.zig
2       src/install/lockfile/lockfile_json_stringify_for_debugging.zig
2       src/css/small_list.zig
2       src/bun.js/webcore/prompt.zig
1       test/internal/ban-words.test.ts
1       test/internal/ban-limits.json
1       src/watcher/WatcherTrace.zig
1       src/transpiler.zig
1       src/shell/builtin/cp.zig
1       src/js_printer.zig
1       src/io/PipeReader.zig
1       src/install/bin.zig
1       src/css/selectors/selector.zig
1       src/cli/run_command.zig
1       src/bun.js/RuntimeTranspilerStore.zig
1       src/bun.js/bindings/JSRef.zig
1       src/bake/DevServer.zig
```

Files that remove lines:

```
-1      src/test/recover.zig
-1      src/sql/postgres/SocketMonitor.zig
-1      src/sql/mysql/MySQLRequestQueue.zig
-1      src/sourcemap/CodeCoverage.zig
-1      src/css/values/color_js.zig
-1      src/compile_target.zig
-1      src/bundler/linker_context/convertStmtsForChunk.zig
-1      src/bundler/bundle_v2.zig
-1      src/bun.js/webcore/blob/read_file.zig
-1      src/ast/base.zig
-2      src/sql/postgres/protocol/ArrayList.zig
-2      src/shell/builtin/mkdir.zig
-2      src/install/PackageManager/patchPackage.zig
-2      src/install/PackageManager/PackageManagerDirectories.zig
-2      src/fmt.zig
-2      src/css/declaration.zig
-2      src/css/css_parser.zig
-2      src/collections/baby_list.zig
-2      src/bun.js/bindings/ZigStackFrame.zig
-2      src/ast/E.zig
-3      src/StandaloneModuleGraph.zig
-3      src/deps/picohttp.zig
-3      src/deps/libuv.zig
-3      src/btjs.zig
-4      src/threading/Futex.zig
-4      src/shell/builtin/touch.zig
-4      src/meta.zig
-4      src/install/lockfile.zig
-4      src/css/selectors/parser.zig
-5      src/shell/interpreter.zig
-5      src/css/error.zig
-5      src/bun.js/web_worker.zig
-5      src/bun.js.zig
-6      src/cli/test_command.zig
-6      src/bun.js/VirtualMachine.zig
-6      src/bun.js/uuid.zig
-6      src/bun.js/bindings/JSValue.zig
-9      src/bun.js/test/pretty_format.zig
-9      src/bun.js/api/BunObject.zig
-14     src/install/install_binding.zig
-14     src/fd.zig
-14     src/bun.js/node/path.zig
-14     scripts/pack-codegen-for-zig-team.sh
-17     src/bun.js/test/diff_format.zig
```

`git diff --numstat origin/main...HEAD | awk '{ print ($1-$2)"\t"$3 }' |
sort -rn`

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
Co-authored-by: Meghan Denny <meghan@bun.com>
Co-authored-by: taylor.fish <contact@taylor.fish>
2025-11-10 14:38:26 -08:00
Lydia Hallie
143ad2ea58 Remove bun version from Vercel guide (#24562) 2025-11-10 14:09:04 -08:00
Dylan Conway
6f9843ea9a fix(install): bun pm ls with unresolved dependencies (#24541)
### What does this PR do?
Fixes `bun pm ls --all` crash with unresolved optional peer
dependencies.
Fixes `bun pm ls` crash with empty lockfiles.

Fixes #24502 
### How did you verify your code works?
Added a test for both crashes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-10 11:19:57 -08:00
github-actions[bot]
0a307ed880 deps: update sqlite to 3.51.0 (#24530) 2025-11-09 01:09:25 -08:00
robobun
b4f85c8866 Update docs example versions to 1.3.2 (#24522)
## Summary

Updated all example version placeholders in documentation from 1.3.1 and
1.2.20 to 1.3.2.

## Changes

Updated version examples in:
- Installation examples (Linux/macOS and Windows install commands)
- Package manager output examples (`bun install`, `bun publish`, `bun
pm` commands)
- Test runner output examples
- Spawn/child process output examples
- Fetch User-Agent header examples in debugging docs
- `Bun.version` API example

## Notes

- Historical version references (e.g., "As of Bun v1.x.x..." or "Bun
v1.x.x+ required") were intentionally **preserved** as they document
when features were introduced
- Generic package.json version examples (non-Bun package versions) were
**preserved**
- Only example outputs and code snippets showing current Bun version
were updated

## Files Changed (13 total)

- `docs/installation.mdx`
- `docs/guides/install/from-npm-install-to-bun-install.mdx`
- `docs/guides/install/add-peer.mdx`
- `docs/bundler/html-static.mdx` (6 occurrences)
- `docs/test/dom.mdx`
- `docs/pm/cli/publish.mdx`
- `docs/pm/cli/pm.mdx`
- `docs/guides/test/snapshot.mdx` (2 occurrences)
- `docs/guides/ecosystem/nuxt.mdx`
- `docs/guides/util/version.mdx`
- `docs/runtime/debugger.mdx` (3 occurrences)
- `docs/runtime/networking/fetch.mdx`
- `docs/runtime/child-process.mdx`

**Total:** 23 version references updated

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Michael H <git@riskymh.dev>
2025-11-09 16:20:04 +11:00
Michael H
614e8292e3 docs: fix discord invite (#24498)
### What does this PR do?

we don't have the discord vanity invite

### How did you verify your code works?
2025-11-08 21:09:57 -08:00
Michael H
3829b6d0aa add .mdx to .gitattributes (#24525)
### What does this PR do?

### How did you verify your code works?
2025-11-08 20:56:38 -08:00
Meghan Denny
f30e3951a7 Bump 2025-11-07 23:58:34 -08:00
1840 changed files with 105359 additions and 37947 deletions

19
.aikido Normal file
View File

@@ -0,0 +1,19 @@
exclude:
paths:
- test
- scripts
- bench
- packages/bun-lambda
- packages/bun-release
- packages/bun-wasm
- packages/bun-vscode
- packages/bun-plugin-yaml
- packages/bun-plugin-svelte
- packages/bun-native-plugin-rs
- packages/bun-native-bundler-plugin-api
- packages/bun-inspector-protocol
- packages/bun-inspector-frontend
- packages/bun-error
- packages/bun-debug-adapter-protocol
- packages/bun-build-mdx-rs
- packages/@types/bun

View File

@@ -26,7 +26,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
wget curl git python3 python3-pip ninja-build \
software-properties-common apt-transport-https \
ca-certificates gnupg lsb-release unzip \
libxml2-dev ruby ruby-dev bison gawk perl make golang \
libxml2-dev ruby ruby-dev bison gawk perl make golang ccache \
&& add-apt-repository ppa:ubuntu-toolchain-r/test \
&& apt-get update \
&& apt-get install -y gcc-13 g++-13 libgcc-13-dev libstdc++-13-dev \
@@ -35,7 +35,8 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
&& wget https://apt.llvm.org/llvm.sh \
&& chmod +x llvm.sh \
&& ./llvm.sh ${LLVM_VERSION} all \
&& rm llvm.sh
&& rm llvm.sh \
&& rm -rf /var/lib/apt/lists/*
RUN --mount=type=tmpfs,target=/tmp \
@@ -48,14 +49,6 @@ RUN --mount=type=tmpfs,target=/tmp \
wget -O /tmp/cmake.sh "$cmake_url" && \
sh /tmp/cmake.sh --skip-license --prefix=/usr
RUN --mount=type=tmpfs,target=/tmp \
sccache_version="0.12.0" && \
arch=$(uname -m) && \
sccache_url="https://github.com/mozilla/sccache/releases/download/v${sccache_version}/sccache-v${sccache_version}-${arch}-unknown-linux-musl.tar.gz" && \
wget -O /tmp/sccache.tar.gz "$sccache_url" && \
tar -xzf /tmp/sccache.tar.gz -C /tmp && \
install -m755 /tmp/sccache-v${sccache_version}-${arch}-unknown-linux-musl/sccache /usr/local/bin
RUN update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-13 130 \
--slave /usr/bin/g++ g++ /usr/bin/g++-13 \
--slave /usr/bin/gcc-ar gcc-ar /usr/bin/gcc-ar-13 \
@@ -133,6 +126,18 @@ RUN ARCH=$(if [ "$TARGETARCH" = "arm64" ]; then echo "arm64"; else echo "amd64";
RUN mkdir -p /var/cache/buildkite-agent /var/log/buildkite-agent /var/run/buildkite-agent /etc/buildkite-agent /var/lib/buildkite-agent/cache/bun
# The following is necessary to configure buildkite to use a stable
# checkout directory for ccache to be effective.
RUN mkdir -p -m 755 /var/lib/buildkite-agent/hooks && \
cat <<'EOF' > /var/lib/buildkite-agent/hooks/environment
#!/bin/sh
set -efu
export BUILDKITE_BUILD_CHECKOUT_PATH=/var/lib/buildkite-agent/build
EOF
RUN chmod 744 /var/lib/buildkite-agent/hooks/environment
COPY ../*/agent.mjs /var/bun/scripts/
ENV BUN_INSTALL_CACHE=/var/lib/buildkite-agent/cache/bun

View File

@@ -114,6 +114,8 @@ const buildPlatforms = [
{ os: "linux", arch: "x64", abi: "musl", baseline: true, distro: "alpine", release: "3.22" },
{ os: "windows", arch: "x64", release: "2019" },
{ os: "windows", arch: "x64", baseline: true, release: "2019" },
// TODO: Enable when Windows ARM64 CI runners are ready
// { os: "windows", arch: "aarch64", release: "2019" },
];
/**
@@ -124,21 +126,20 @@ const testPlatforms = [
{ os: "darwin", arch: "aarch64", release: "13", tier: "previous" },
{ os: "darwin", arch: "x64", release: "14", tier: "latest" },
{ os: "darwin", arch: "x64", release: "13", tier: "previous" },
{ os: "linux", arch: "aarch64", distro: "debian", release: "12", tier: "latest" },
{ os: "linux", arch: "x64", distro: "debian", release: "12", tier: "latest" },
{ os: "linux", arch: "x64", baseline: true, distro: "debian", release: "12", tier: "latest" },
{ os: "linux", arch: "x64", profile: "asan", distro: "debian", release: "12", tier: "latest" },
{ os: "linux", arch: "aarch64", distro: "debian", release: "13", tier: "latest" },
{ os: "linux", arch: "x64", distro: "debian", release: "13", tier: "latest" },
{ os: "linux", arch: "x64", baseline: true, distro: "debian", release: "13", tier: "latest" },
{ os: "linux", arch: "x64", profile: "asan", distro: "debian", release: "13", tier: "latest" },
{ os: "linux", arch: "aarch64", distro: "ubuntu", release: "25.04", tier: "latest" },
{ os: "linux", arch: "aarch64", distro: "ubuntu", release: "24.04", tier: "latest" },
{ os: "linux", arch: "x64", distro: "ubuntu", release: "25.04", tier: "latest" },
{ os: "linux", arch: "x64", distro: "ubuntu", release: "24.04", tier: "latest" },
{ os: "linux", arch: "x64", baseline: true, distro: "ubuntu", release: "25.04", tier: "latest" },
{ os: "linux", arch: "x64", baseline: true, distro: "ubuntu", release: "24.04", tier: "latest" },
{ os: "linux", arch: "aarch64", abi: "musl", distro: "alpine", release: "3.22", tier: "latest" },
{ os: "linux", arch: "x64", abi: "musl", distro: "alpine", release: "3.22", tier: "latest" },
{ os: "linux", arch: "x64", abi: "musl", baseline: true, distro: "alpine", release: "3.22", tier: "latest" },
{ os: "windows", arch: "x64", release: "2019", tier: "oldest" },
{ os: "windows", arch: "x64", release: "2019", baseline: true, tier: "oldest" },
// TODO: Enable when Windows ARM64 CI runners are ready
// { os: "windows", arch: "aarch64", release: "2019", tier: "oldest" },
];
/**
@@ -556,7 +557,6 @@ function getBuildBunStep(platform, options) {
/**
* @typedef {Object} TestOptions
* @property {string} [buildId]
* @property {boolean} [unifiedTests]
* @property {string[]} [testFiles]
* @property {boolean} [dryRun]
*/
@@ -569,12 +569,13 @@ function getBuildBunStep(platform, options) {
*/
function getTestBunStep(platform, options, testOptions = {}) {
const { os, profile } = platform;
const { buildId, unifiedTests, testFiles } = testOptions;
const { buildId, testFiles } = testOptions;
const args = [`--step=${getTargetKey(platform)}-build-bun`];
if (buildId) {
args.push(`--build-id=${buildId}`);
}
if (testFiles) {
args.push(...testFiles.map(testFile => `--include=${testFile}`));
}
@@ -591,7 +592,7 @@ function getTestBunStep(platform, options, testOptions = {}) {
agents: getTestAgent(platform, options),
retry: getRetry(),
cancel_on_build_failing: isMergeQueue(),
parallelism: unifiedTests ? undefined : os === "darwin" ? 2 : 10,
parallelism: os === "darwin" ? 2 : 20,
timeout_in_minutes: profile === "asan" || os === "windows" ? 45 : 30,
env: {
ASAN_OPTIONS: "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=0",
@@ -773,8 +774,6 @@ function getBenchmarkStep() {
* @property {Platform[]} [buildPlatforms]
* @property {Platform[]} [testPlatforms]
* @property {string[]} [testFiles]
* @property {boolean} [unifiedBuilds]
* @property {boolean} [unifiedTests]
*/
/**
@@ -945,22 +944,6 @@ function getOptionsStep() {
default: "false",
options: booleanOptions,
},
{
key: "unified-builds",
select: "Do you want to build each platform in a single step?",
hint: "If true, builds will not be split into separate steps (this will likely slow down the build)",
required: false,
default: "false",
options: booleanOptions,
},
{
key: "unified-tests",
select: "Do you want to run tests in a single step?",
hint: "If true, tests will not be split into separate steps (this will be very slow)",
required: false,
default: "false",
options: booleanOptions,
},
],
};
}
@@ -1026,8 +1009,6 @@ async function getPipelineOptions() {
buildImages: parseBoolean(options["build-images"]),
publishImages: parseBoolean(options["publish-images"]),
testFiles: parseArray(options["test-files"]),
unifiedBuilds: parseBoolean(options["unified-builds"]),
unifiedTests: parseBoolean(options["unified-tests"]),
buildPlatforms: buildPlatformKeys?.length
? buildPlatformKeys.flatMap(key => buildProfiles.map(profile => ({ ...buildPlatformsMap.get(key), profile })))
: Array.from(buildPlatformsMap.values()),
@@ -1093,7 +1074,7 @@ async function getPipeline(options = {}) {
const imagePlatforms = new Map(
buildImages || publishImages
? [...buildPlatforms, ...testPlatforms]
.filter(({ os }) => os === "linux" || os === "windows")
.filter(({ os }) => os !== "darwin")
.map(platform => [getImageKey(platform), platform])
: [],
);
@@ -1109,7 +1090,7 @@ async function getPipeline(options = {}) {
});
}
let { skipBuilds, forceBuilds, unifiedBuilds, dryRun } = options;
let { skipBuilds, forceBuilds, dryRun } = options;
dryRun = dryRun || !!buildImages;
/** @type {string | undefined} */
@@ -1127,7 +1108,7 @@ async function getPipeline(options = {}) {
const includeASAN = !isMainBranch();
if (!buildId) {
const relevantBuildPlatforms = includeASAN
let relevantBuildPlatforms = includeASAN
? buildPlatforms
: buildPlatforms.filter(({ profile }) => profile !== "asan");
@@ -1140,13 +1121,16 @@ async function getPipeline(options = {}) {
dependsOn.push(`${imageKey}-build-image`);
}
const steps = [];
steps.push(getBuildCppStep(target, options));
steps.push(getBuildZigStep(target, options));
steps.push(getLinkBunStep(target, options));
return getStepWithDependsOn(
{
key: getTargetKey(target),
group: getTargetLabel(target),
steps: unifiedBuilds
? [getBuildBunStep(target, options)]
: [getBuildCppStep(target, options), getBuildZigStep(target, options), getLinkBunStep(target, options)],
steps,
},
...dependsOn,
);
@@ -1155,13 +1139,13 @@ async function getPipeline(options = {}) {
}
if (!isMainBranch()) {
const { skipTests, forceTests, unifiedTests, testFiles } = options;
const { skipTests, forceTests, testFiles } = options;
if (!skipTests || forceTests) {
steps.push(
...testPlatforms.map(target => ({
key: getTargetKey(target),
group: getTargetLabel(target),
steps: [getTestBunStep(target, options, { unifiedTests, testFiles, buildId })],
steps: [getTestBunStep(target, options, { testFiles, buildId })],
})),
);
}

View File

@@ -36,16 +36,20 @@ function Log-Debug {
}
}
# Detect system architecture
$script:IsARM64 = [System.Runtime.InteropServices.RuntimeInformation]::OSArchitecture -eq [System.Runtime.InteropServices.Architecture]::Arm64
$script:VsArch = if ($script:IsARM64) { "arm64" } else { "amd64" }
# Load Visual Studio environment if not already loaded
function Ensure-VSEnvironment {
if ($null -eq $env:VSINSTALLDIR) {
Log-Info "Loading Visual Studio environment..."
Log-Info "Loading Visual Studio environment for $script:VsArch..."
$vswhere = "C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe"
if (!(Test-Path $vswhere)) {
throw "Command not found: vswhere (did you install Visual Studio?)"
}
$vsDir = & $vswhere -prerelease -latest -property installationPath
if ($null -eq $vsDir) {
$vsDir = Get-ChildItem -Path "C:\Program Files\Microsoft Visual Studio\2022" -Directory -ErrorAction SilentlyContinue
@@ -54,20 +58,20 @@ function Ensure-VSEnvironment {
}
$vsDir = $vsDir.FullName
}
Push-Location $vsDir
try {
$vsShell = Join-Path -Path $vsDir -ChildPath "Common7\Tools\Launch-VsDevShell.ps1"
. $vsShell -Arch amd64 -HostArch amd64
. $vsShell -Arch $script:VsArch -HostArch $script:VsArch
} finally {
Pop-Location
}
Log-Success "Visual Studio environment loaded"
}
if ($env:VSCMD_ARG_TGT_ARCH -eq "x86") {
throw "Visual Studio environment is targeting 32 bit, but only 64 bit is supported."
throw "Visual Studio environment is targeting 32 bit x86, but only 64-bit architectures (x64/arm64) are supported."
}
}
@@ -186,8 +190,10 @@ function Install-KeyLocker {
}
# Download MSI installer
$msiUrl = "https://bun-ci-assets.bun.sh/Keylockertools-windows-x64.msi"
$msiPath = Join-Path $env:TEMP "Keylockertools-windows-x64.msi"
# Note: KeyLocker tools currently only available for x64, but works on ARM64 via emulation
$msiArch = "x64"
$msiUrl = "https://bun-ci-assets.bun.sh/Keylockertools-windows-${msiArch}.msi"
$msiPath = Join-Path $env:TEMP "Keylockertools-windows-${msiArch}.msi"
Log-Info "Downloading MSI from: $msiUrl"
Log-Info "Downloading to: $msiPath"

View File

@@ -219,6 +219,9 @@ function create_release() {
bun-windows-x64-profile.zip
bun-windows-x64-baseline.zip
bun-windows-x64-baseline-profile.zip
# TODO: Enable when Windows ARM64 CI runners are ready
# bun-windows-aarch64.zip
# bun-windows-aarch64-profile.zip
)
function upload_artifact() {

View File

@@ -6,8 +6,7 @@ To do that:
- git fetch upstream
- git merge upstream main
- Fix the merge conflicts
- cd ../../ (back to bun)
- make jsc-build (this will take about 7 minutes)
- bun build.ts debug
- While it compiles, in another task review the JSC commits between the last version of Webkit and the new version. Write up a summary of the webkit changes in a file called "webkit-changes.md"
- bun run build:local (build a build of Bun with the new Webkit, make sure it compiles)
- After making sure it compiles, run some code to make sure things work. something like ./build/debug-local/bun-debug --print '42' should be all you need
@@ -21,3 +20,7 @@ To do that:
- commit + push (without adding the webkit-changes.md file)
- create PR titled "Upgrade Webkit to the <commit-sha>", paste your webkit-changes.md into the PR description
- delete the webkit-changes.md file
Things to check for a successful upgrade:
- Did JSType in vendor/WebKit/Source/JavaScriptCore have any recent changes? Do the enum values align with what's present in src/bun.js/bindings/JSType.zig?
- Were there any changes to the WebCore code generator? If there are C++ compilation errors, check for differences in some of the generated code, e.g. in vendor/WebKit/Source/WebCore/bindings/scripts/test/JS/

View File

@@ -0,0 +1,184 @@
---
name: implementing-jsc-classes-cpp
description: Implements JavaScript classes in C++ using JavaScriptCore. Use when creating new JS classes with C++ bindings, prototypes, or constructors.
---
# Implementing JavaScript Classes in C++
## Class Structure
For publicly accessible Constructor and Prototype, create 3 classes:
1. **`class Foo : public JSC::DestructibleObject`** - if C++ fields exist; otherwise use `JSC::constructEmptyObject` with `putDirectOffset`
2. **`class FooPrototype : public JSC::JSNonFinalObject`**
3. **`class FooConstructor : public JSC::InternalFunction`**
No public constructor? Only Prototype and class needed.
## Iso Subspaces
Classes with C++ fields need subspaces in:
- `src/bun.js/bindings/webcore/DOMClientIsoSubspaces.h`
- `src/bun.js/bindings/webcore/DOMIsoSubspaces.h`
```cpp
template<typename MyClassT, JSC::SubspaceAccess mode>
static JSC::GCClient::IsoSubspace* subspaceFor(JSC::VM& vm) {
if constexpr (mode == JSC::SubspaceAccess::Concurrently)
return nullptr;
return WebCore::subspaceForImpl<MyClassT, WebCore::UseCustomHeapCellType::No>(
vm,
[](auto& spaces) { return spaces.m_clientSubspaceForMyClassT.get(); },
[](auto& spaces, auto&& space) { spaces.m_clientSubspaceForMyClassT = std::forward<decltype(space)>(space); },
[](auto& spaces) { return spaces.m_subspaceForMyClassT.get(); },
[](auto& spaces, auto&& space) { spaces.m_subspaceForMyClassT = std::forward<decltype(space)>(space); });
}
```
## Property Definitions
```cpp
static JSC_DECLARE_HOST_FUNCTION(jsFooProtoFuncMethod);
static JSC_DECLARE_CUSTOM_GETTER(jsFooGetter_property);
static const HashTableValue JSFooPrototypeTableValues[] = {
{ "property"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessor), NoIntrinsic, { HashTableValue::GetterSetterType, jsFooGetter_property, 0 } },
{ "method"_s, static_cast<unsigned>(PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsFooProtoFuncMethod, 1 } },
};
```
## Prototype Class
```cpp
class JSFooPrototype final : public JSC::JSNonFinalObject {
public:
using Base = JSC::JSNonFinalObject;
static constexpr unsigned StructureFlags = Base::StructureFlags;
static JSFooPrototype* create(JSC::VM& vm, JSC::JSGlobalObject* globalObject, JSC::Structure* structure) {
JSFooPrototype* prototype = new (NotNull, allocateCell<JSFooPrototype>(vm)) JSFooPrototype(vm, structure);
prototype->finishCreation(vm);
return prototype;
}
template<typename, JSC::SubspaceAccess>
static JSC::GCClient::IsoSubspace* subspaceFor(JSC::VM& vm) { return &vm.plainObjectSpace(); }
DECLARE_INFO;
static JSC::Structure* createStructure(JSC::VM& vm, JSC::JSGlobalObject* globalObject, JSC::JSValue prototype) {
auto* structure = JSC::Structure::create(vm, globalObject, prototype, JSC::TypeInfo(JSC::ObjectType, StructureFlags), info());
structure->setMayBePrototype(true);
return structure;
}
private:
JSFooPrototype(JSC::VM& vm, JSC::Structure* structure) : Base(vm, structure) {}
void finishCreation(JSC::VM& vm);
};
void JSFooPrototype::finishCreation(VM& vm) {
Base::finishCreation(vm);
reifyStaticProperties(vm, JSFoo::info(), JSFooPrototypeTableValues, *this);
JSC_TO_STRING_TAG_WITHOUT_TRANSITION();
}
```
## Getter/Setter/Function Definitions
```cpp
// Getter
JSC_DEFINE_CUSTOM_GETTER(jsFooGetter_prop, (JSGlobalObject* globalObject, EncodedJSValue thisValue, PropertyName)) {
VM& vm = globalObject->vm();
auto scope = DECLARE_THROW_SCOPE(vm);
JSFoo* thisObject = jsDynamicCast<JSFoo*>(JSValue::decode(thisValue));
if (UNLIKELY(!thisObject)) {
Bun::throwThisTypeError(*globalObject, scope, "JSFoo"_s, "prop"_s);
return {};
}
return JSValue::encode(jsBoolean(thisObject->value()));
}
// Function
JSC_DEFINE_HOST_FUNCTION(jsFooProtoFuncMethod, (JSGlobalObject* globalObject, CallFrame* callFrame)) {
VM& vm = globalObject->vm();
auto scope = DECLARE_THROW_SCOPE(vm);
auto* thisObject = jsDynamicCast<JSFoo*>(callFrame->thisValue());
if (UNLIKELY(!thisObject)) {
Bun::throwThisTypeError(*globalObject, scope, "Foo"_s, "method"_s);
return {};
}
return JSValue::encode(thisObject->doSomething(vm, globalObject));
}
```
## Constructor Class
```cpp
class JSFooConstructor final : public JSC::InternalFunction {
public:
using Base = JSC::InternalFunction;
static constexpr unsigned StructureFlags = Base::StructureFlags;
static JSFooConstructor* create(JSC::VM& vm, JSC::Structure* structure, JSC::JSObject* prototype) {
JSFooConstructor* constructor = new (NotNull, JSC::allocateCell<JSFooConstructor>(vm)) JSFooConstructor(vm, structure);
constructor->finishCreation(vm, prototype);
return constructor;
}
DECLARE_INFO;
template<typename CellType, JSC::SubspaceAccess>
static JSC::GCClient::IsoSubspace* subspaceFor(JSC::VM& vm) { return &vm.internalFunctionSpace(); }
static JSC::Structure* createStructure(JSC::VM& vm, JSC::JSGlobalObject* globalObject, JSC::JSValue prototype) {
return JSC::Structure::create(vm, globalObject, prototype, JSC::TypeInfo(JSC::InternalFunctionType, StructureFlags), info());
}
private:
JSFooConstructor(JSC::VM& vm, JSC::Structure* structure) : Base(vm, structure, callFoo, constructFoo) {}
void finishCreation(JSC::VM& vm, JSC::JSObject* prototype) {
Base::finishCreation(vm, 0, "Foo"_s);
putDirectWithoutTransition(vm, vm.propertyNames->prototype, prototype, JSC::PropertyAttribute::DontEnum | JSC::PropertyAttribute::DontDelete | JSC::PropertyAttribute::ReadOnly);
}
};
```
## Structure Caching
Add to `ZigGlobalObject.h`:
```cpp
JSC::LazyClassStructure m_JSFooClassStructure;
```
Initialize in `ZigGlobalObject.cpp`:
```cpp
m_JSFooClassStructure.initLater([](LazyClassStructure::Initializer& init) {
Bun::initJSFooClassStructure(init);
});
```
Visit in `visitChildrenImpl`:
```cpp
m_JSFooClassStructure.visit(visitor);
```
## Expose to Zig
```cpp
extern "C" JSC::EncodedJSValue Bun__JSFooConstructor(Zig::GlobalObject* globalObject) {
return JSValue::encode(globalObject->m_JSFooClassStructure.constructor(globalObject));
}
extern "C" EncodedJSValue Bun__Foo__toJS(Zig::GlobalObject* globalObject, Foo* foo) {
auto* structure = globalObject->m_JSFooClassStructure.get(globalObject);
return JSValue::encode(JSFoo::create(globalObject->vm(), structure, globalObject, WTFMove(foo)));
}
```
Include `#include "root.h"` at the top of C++ files.

View File

@@ -0,0 +1,206 @@
---
name: implementing-jsc-classes-zig
description: Creates JavaScript classes using Bun's Zig bindings generator (.classes.ts). Use when implementing new JS APIs in Zig with JSC integration.
---
# Bun's JavaScriptCore Class Bindings Generator
Bridge JavaScript and Zig through `.classes.ts` definitions and Zig implementations.
## Architecture
1. **Zig Implementation** (.zig files)
2. **JavaScript Interface Definition** (.classes.ts files)
3. **Generated Code** (C++/Zig files connecting them)
## Class Definition (.classes.ts)
```typescript
define({
name: "TextDecoder",
constructor: true,
JSType: "object",
finalize: true,
proto: {
decode: { args: 1 },
encoding: { getter: true, cache: true },
fatal: { getter: true },
},
});
```
Options:
- `name`: Class name
- `constructor`: Has public constructor
- `JSType`: "object", "function", etc.
- `finalize`: Needs cleanup
- `proto`: Properties/methods
- `cache`: Cache property values via WriteBarrier
## Zig Implementation
```zig
pub const TextDecoder = struct {
pub const js = JSC.Codegen.JSTextDecoder;
pub const toJS = js.toJS;
pub const fromJS = js.fromJS;
pub const fromJSDirect = js.fromJSDirect;
encoding: []const u8,
fatal: bool,
pub fn constructor(
globalObject: *JSGlobalObject,
callFrame: *JSC.CallFrame,
) bun.JSError!*TextDecoder {
return bun.new(TextDecoder, .{ .encoding = "utf-8", .fatal = false });
}
pub fn decode(
this: *TextDecoder,
globalObject: *JSGlobalObject,
callFrame: *JSC.CallFrame,
) bun.JSError!JSC.JSValue {
const args = callFrame.arguments();
if (args.len < 1 or args.ptr[0].isUndefinedOrNull()) {
return globalObject.throw("Input cannot be null", .{});
}
return JSC.JSValue.jsString(globalObject, "result");
}
pub fn getEncoding(this: *TextDecoder, globalObject: *JSGlobalObject) JSC.JSValue {
return JSC.JSValue.createStringFromUTF8(globalObject, this.encoding);
}
fn deinit(this: *TextDecoder) void {
// Release resources
}
pub fn finalize(this: *TextDecoder) void {
this.deinit();
bun.destroy(this);
}
};
```
**Key patterns:**
- Use `bun.JSError!JSValue` return type for error handling
- Use `globalObject` not `ctx`
- `deinit()` for cleanup, `finalize()` called by GC
- Update `src/bun.js/bindings/generated_classes_list.zig`
## CallFrame Access
```zig
const args = callFrame.arguments();
const first_arg = args.ptr[0]; // Access as slice
const argCount = args.len;
const thisValue = callFrame.thisValue();
```
## Property Caching
For `cache: true` properties, generated accessors:
```zig
// Get cached value
pub fn encodingGetCached(thisValue: JSC.JSValue) ?JSC.JSValue {
const result = TextDecoderPrototype__encodingGetCachedValue(thisValue);
if (result == .zero) return null;
return result;
}
// Set cached value
pub fn encodingSetCached(thisValue: JSC.JSValue, globalObject: *JSC.JSGlobalObject, value: JSC.JSValue) void {
TextDecoderPrototype__encodingSetCachedValue(thisValue, globalObject, value);
}
```
## Error Handling
```zig
pub fn method(this: *MyClass, globalObject: *JSGlobalObject, callFrame: *JSC.CallFrame) bun.JSError!JSC.JSValue {
const args = callFrame.arguments();
if (args.len < 1) {
return globalObject.throw("Missing required argument", .{});
}
return JSC.JSValue.jsString(globalObject, "Success!");
}
```
## Memory Management
```zig
pub fn deinit(this: *TextDecoder) void {
this._encoding.deref();
if (this.buffer) |buffer| {
bun.default_allocator.free(buffer);
}
}
pub fn finalize(this: *TextDecoder) void {
JSC.markBinding(@src());
this.deinit();
bun.default_allocator.destroy(this);
}
```
## Creating a New Binding
1. Define interface in `.classes.ts`:
```typescript
define({
name: "MyClass",
constructor: true,
finalize: true,
proto: {
myMethod: { args: 1 },
myProperty: { getter: true, cache: true },
},
});
```
2. Implement in `.zig`:
```zig
pub const MyClass = struct {
pub const js = JSC.Codegen.JSMyClass;
pub const toJS = js.toJS;
pub const fromJS = js.fromJS;
value: []const u8,
pub const new = bun.TrivialNew(@This());
pub fn constructor(globalObject: *JSGlobalObject, callFrame: *JSC.CallFrame) bun.JSError!*MyClass {
return MyClass.new(.{ .value = "" });
}
pub fn myMethod(this: *MyClass, globalObject: *JSGlobalObject, callFrame: *JSC.CallFrame) bun.JSError!JSC.JSValue {
return JSC.JSValue.jsUndefined();
}
pub fn getMyProperty(this: *MyClass, globalObject: *JSGlobalObject) JSC.JSValue {
return JSC.JSValue.jsString(globalObject, this.value);
}
pub fn deinit(this: *MyClass) void {}
pub fn finalize(this: *MyClass) void {
this.deinit();
bun.destroy(this);
}
};
```
3. Add to `src/bun.js/bindings/generated_classes_list.zig`
## Generated Components
- **C++ Classes**: `JSMyClass`, `JSMyClassPrototype`, `JSMyClassConstructor`
- **Method Bindings**: `MyClassPrototype__myMethodCallback`
- **Property Accessors**: `MyClassPrototype__myPropertyGetterWrap`
- **Zig Bindings**: External function declarations, cached value accessors

View File

@@ -0,0 +1,222 @@
---
name: writing-bundler-tests
description: Guides writing bundler tests using itBundled/expectBundled in test/bundler/. Use when creating or modifying bundler, transpiler, or code transformation tests.
---
# Writing Bundler Tests
Bundler tests use `itBundled()` from `test/bundler/expectBundled.ts` to test Bun's bundler.
## Basic Usage
```typescript
import { describe } from "bun:test";
import { itBundled, dedent } from "./expectBundled";
describe("bundler", () => {
itBundled("category/TestName", {
files: {
"index.js": `console.log("hello");`,
},
run: {
stdout: "hello",
},
});
});
```
Test ID format: `category/TestName` (e.g., `banner/CommentBanner`, `minify/Empty`)
## File Setup
```typescript
{
files: {
"index.js": `console.log("test");`,
"lib.ts": `export const foo = 123;`,
"nested/file.js": `export default {};`,
},
entryPoints: ["index.js"], // defaults to first file
runtimeFiles: { // written AFTER bundling
"extra.js": `console.log("added later");`,
},
}
```
## Bundler Options
```typescript
{
outfile: "/out.js",
outdir: "/out",
format: "esm" | "cjs" | "iife",
target: "bun" | "browser" | "node",
// Minification
minifyWhitespace: true,
minifyIdentifiers: true,
minifySyntax: true,
// Code manipulation
banner: "// copyright",
footer: "// end",
define: { "PROD": "true" },
external: ["lodash"],
// Advanced
sourceMap: "inline" | "external",
splitting: true,
treeShaking: true,
drop: ["console"],
}
```
## Runtime Verification
```typescript
{
run: {
stdout: "expected output", // exact match
stdout: /regex/, // pattern match
partialStdout: "contains this", // substring
stderr: "error output",
exitCode: 1,
env: { NODE_ENV: "production" },
runtime: "bun" | "node",
// Runtime errors
error: "ReferenceError: x is not defined",
},
}
```
## Bundle Errors/Warnings
```typescript
{
bundleErrors: {
"/file.js": ["error message 1", "error message 2"],
},
bundleWarnings: {
"/file.js": ["warning message"],
},
}
```
## Dead Code Elimination (DCE)
Add markers in source code:
```javascript
// KEEP - this should survive
const used = 1;
// REMOVE - this should be eliminated
const unused = 2;
```
```typescript
{
dce: true,
dceKeepMarkerCount: 5, // expected KEEP markers
}
```
## Capture Pattern
Verify exact transpilation with `capture()`:
```typescript
itBundled("string/Folding", {
files: {
"index.ts": `capture(\`\${1 + 1}\`);`,
},
capture: ['"2"'], // expected captured value
minifySyntax: true,
});
```
## Post-Bundle Assertions
```typescript
{
onAfterBundle(api) {
api.expectFile("out.js").toContain("console.log");
api.assertFileExists("out.js");
const content = api.readFile("out.js");
expect(content).toMatchSnapshot();
const values = api.captureFile("out.js");
expect(values).toEqual(["2"]);
},
}
```
## Common Patterns
**Simple output verification:**
```typescript
itBundled("banner/Comment", {
banner: "// copyright",
files: { "a.js": `console.log("Hello")` },
onAfterBundle(api) {
api.expectFile("out.js").toContain("// copyright");
},
});
```
**Multi-file CJS/ESM interop:**
```typescript
itBundled("cjs/ImportSyntax", {
files: {
"entry.js": `import lib from './lib.cjs'; console.log(lib);`,
"lib.cjs": `exports.foo = 'bar';`,
},
run: { stdout: '{"foo":"bar"}' },
});
```
**Error handling:**
```typescript
itBundled("edgecase/InvalidLoader", {
files: { "index.js": `...` },
bundleErrors: {
"index.js": ["Unsupported loader type"],
},
});
```
## Test Organization
```text
test/bundler/
├── bundler_banner.test.ts
├── bundler_string.test.ts
├── bundler_minify.test.ts
├── bundler_cjs.test.ts
├── bundler_edgecase.test.ts
├── bundler_splitting.test.ts
├── css/
├── transpiler/
└── expectBundled.ts
```
## Running Tests
```bash
bun bd test test/bundler/bundler_banner.test.ts
BUN_BUNDLER_TEST_FILTER="banner/Comment" bun bd test bundler_banner.test.ts
BUN_BUNDLER_TEST_DEBUG=1 bun bd test bundler_minify.test.ts
```
## Key Points
- Use `dedent` for readable multi-line code
- File paths are relative (e.g., `/index.js`)
- Use `capture()` to verify exact transpilation results
- Use `.toMatchSnapshot()` for complex outputs
- Pass an array to `run` for multiple test scenarios (see the sketch below)
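A minimal sketch of those last points, using `dedent` for the source and an array for `run`; the test ID, file contents, and `GREETING` variable are hypothetical, and the exact array shape accepted by `run` should be checked against `expectBundled.ts`:
```typescript
import { describe } from "bun:test";
import { itBundled, dedent } from "./expectBundled";

describe("bundler", () => {
  // Hypothetical example: one bundle, two runtime scenarios checked independently.
  itBundled("edgecase/ArrayRunSketch", {
    files: {
      "index.js": dedent`
        const name = process.env.GREETING ?? "world";
        console.log("hello " + name);
      `,
    },
    run: [
      { stdout: "hello world" },
      { env: { GREETING: "bun" }, stdout: "hello bun" },
    ],
  });
});
```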

View File

@@ -0,0 +1,94 @@
---
name: writing-dev-server-tests
description: Guides writing HMR/Dev Server tests in test/bake/. Use when creating or modifying dev server, hot reloading, or bundling tests.
---
# Writing HMR/Dev Server Tests
Dev server tests validate hot-reloading robustness and reliability.
## File Structure
- `test/bake/bake-harness.ts` - shared utilities: `devTest`, `prodTest`, `devAndProductionTest`, `Dev` class, `Client` class
- `test/bake/client-fixture.mjs` - subprocess for `Client` (page loading, IPC queries)
- `test/bake/dev/*.test.ts` - dev server and hot reload tests
- `test/bake/dev-and-prod.ts` - tests running on both dev and production mode
## Test Categories
- `bundle.test.ts` - DevServer-specific bundling bugs
- `css.test.ts` - CSS bundling issues
- `plugins.test.ts` - development mode plugins
- `ecosystem.test.ts` - library compatibility (prefer concrete bugs over full package tests)
- `esm.test.ts` - ESM features in development
- `html.test.ts` - HTML file handling
- `react-spa.test.ts` - React, react-refresh transform, server components
- `sourcemap.test.ts` - source map correctness
## devTest Basics
```ts
import { devTest, emptyHtmlFile } from "../bake-harness";
devTest("html file is watched", {
files: {
"index.html": emptyHtmlFile({
scripts: ["/script.ts"],
body: "<h1>Hello</h1>",
}),
"script.ts": `console.log("hello");`,
},
async test(dev) {
await dev.fetch("/").expect.toInclude("<h1>Hello</h1>");
await dev.patch("index.html", { find: "Hello", replace: "World" });
await dev.fetch("/").expect.toInclude("<h1>World</h1>");
await using c = await dev.client("/");
await c.expectMessage("hello");
await c.expectReload(async () => {
await dev.patch("index.html", { find: "World", replace: "Bar" });
});
await c.expectMessage("hello");
},
});
```
## Key APIs
- **`files`**: Initial filesystem state
- **`dev.fetch()`**: HTTP requests
- **`dev.client()`**: Opens browser instance
- **`dev.write/patch/delete`**: Filesystem mutations (wait for hot-reload automatically)
- **`c.expectMessage()`**: Assert console.log output
- **`c.expectReload()`**: Wrap code that causes hard reload
**Important**: Use `dev.write/patch/delete` instead of `node:fs` - they wait for hot-reload.
## Testing Errors
```ts
devTest("import then create", {
files: {
"index.html": `<!DOCTYPE html><html><head></head><body><script type="module" src="/script.ts"></script></body></html>`,
"script.ts": `import data from "./data"; console.log(data);`,
},
async test(dev) {
const c = await dev.client("/", {
errors: ['script.ts:1:18: error: Could not resolve: "./data"'],
});
await c.expectReload(async () => {
await dev.write("data.ts", "export default 'data';");
});
await c.expectMessage("data");
},
});
```
Specify expected errors with the `errors` option:
```ts
await dev.delete("other.ts", {
errors: ['index.ts:1:16: error: Could not resolve: "./other"'],
});
```

View File

@@ -0,0 +1,268 @@
---
name: zig-system-calls
description: Guides using bun.sys for system calls and file I/O in Zig. Use when implementing file operations instead of std.fs or std.posix.
---
# System Calls & File I/O in Zig
Use `bun.sys` instead of `std.fs` or `std.posix` for cross-platform syscalls with proper error handling.
## bun.sys.File (Preferred)
For most file operations, use the `bun.sys.File` wrapper:
```zig
const File = bun.sys.File;
const file = switch (File.open(path, bun.O.RDWR, 0o644)) {
.result => |f| f,
.err => |err| return .{ .err = err },
};
defer file.close();
// Read/write
_ = try file.read(buffer).unwrap();
_ = try file.writeAll(data).unwrap();
// Get file info
const stat = try file.stat().unwrap();
const size = try file.getEndPos().unwrap();
// std.io compatible
const reader = file.reader();
const writer = file.writer();
```
### Complete Example
```zig
const File = bun.sys.File;
pub fn writeFile(path: [:0]const u8, data: []const u8) File.WriteError!void {
const file = switch (File.open(path, bun.O.WRONLY | bun.O.CREAT | bun.O.TRUNC, 0o664)) {
.result => |f| f,
.err => |err| return err.toError(),
};
defer file.close();
_ = switch (file.writeAll(data)) {
.result => {},
.err => |err| return err.toError(),
};
}
```
## Why bun.sys?
| Aspect | bun.sys | std.fs/std.posix |
| ----------- | -------------------------------- | ------------------- |
| Return Type | `Maybe(T)` with detailed Error | Generic error union |
| Windows | Full support with libuv fallback | Limited/POSIX-only |
| Error Info | errno, syscall tag, path, fd | errno only |
| EINTR | Automatic retry | Manual handling |
## Error Handling with Maybe(T)
`bun.sys` functions return `Maybe(T)` - a tagged union:
```zig
const sys = bun.sys;
// Pattern 1: Switch on result/error
switch (sys.read(fd, buffer)) {
.result => |bytes_read| {
// use bytes_read
},
.err => |err| {
// err.errno, err.syscall, err.fd, err.path
if (err.getErrno() == .AGAIN) {
// handle EAGAIN
}
},
}
// Pattern 2: Unwrap with try (converts to Zig error)
const bytes = try sys.read(fd, buffer).unwrap();
// Pattern 3: Unwrap with default
const value = sys.stat(path).unwrapOr(default_stat);
```
## Low-Level File Operations
Only use these when `bun.sys.File` doesn't meet your needs.
### Opening Files
```zig
const sys = bun.sys;
// Use bun.O flags (cross-platform normalized)
const fd = switch (sys.open(path, bun.O.RDONLY, 0)) {
.result => |fd| fd,
.err => |err| return .{ .err = err },
};
defer fd.close();
// Common flags
bun.O.RDONLY, bun.O.WRONLY, bun.O.RDWR
bun.O.CREAT, bun.O.TRUNC, bun.O.APPEND
bun.O.NONBLOCK, bun.O.DIRECTORY
```
### Reading & Writing
```zig
// Single read (may return less than buffer size)
switch (sys.read(fd, buffer)) {
.result => |n| { /* n bytes read */ },
.err => |err| { /* handle error */ },
}
// Read until EOF or buffer full
const total = try sys.readAll(fd, buffer).unwrap();
// Position-based read/write
sys.pread(fd, buffer, offset)
sys.pwrite(fd, data, offset)
// Vector I/O
sys.readv(fd, iovecs)
sys.writev(fd, iovecs)
```
### File Info
```zig
sys.stat(path) // Follow symlinks
sys.lstat(path) // Don't follow symlinks
sys.fstat(fd) // From file descriptor
sys.fstatat(fd, path)
// Linux-only: faster selective stat
sys.statx(path, &.{ .size, .mtime })
```
### Path Operations
```zig
sys.unlink(path)
sys.unlinkat(dir_fd, path)
sys.rename(from, to)
sys.renameat(from_dir, from, to_dir, to)
sys.readlink(path, buf)
sys.readlinkat(fd, path, buf)
sys.link(T, src, dest)
sys.linkat(src_fd, src, dest_fd, dest)
sys.symlink(target, dest)
sys.symlinkat(target, dirfd, dest)
sys.mkdir(path, mode)
sys.mkdirat(dir_fd, path, mode)
sys.rmdir(path)
```
### Permissions
```zig
sys.chmod(path, mode)
sys.fchmod(fd, mode)
sys.fchmodat(fd, path, mode, flags)
sys.chown(path, uid, gid)
sys.fchown(fd, uid, gid)
```
### Closing File Descriptors
Close is on `bun.FD`:
```zig
fd.close(); // Asserts on error (use in defer)
// Or if you need error info:
if (fd.closeAllowingBadFileDescriptor(null)) |err| {
// handle error
}
```
## Directory Operations
```zig
var buf: bun.PathBuffer = undefined;
const cwd = try sys.getcwd(&buf).unwrap();
const cwdZ = try sys.getcwdZ(&buf).unwrap(); // Zero-terminated
sys.chdir(path, destination)
```
### Directory Iteration
Use `bun.DirIterator` instead of `std.fs.Dir.Iterator`:
```zig
var iter = bun.iterateDir(dir_fd);
while (true) {
switch (iter.next()) {
.result => |entry| {
if (entry) |e| {
const name = e.name.slice();
const kind = e.kind; // .file, .directory, .sym_link, etc.
} else {
break; // End of directory
}
},
.err => |err| return .{ .err = err },
}
}
```
## Socket Operations
**Important**: `bun.sys` has limited socket support. For network I/O:
- **Non-blocking sockets**: Use `uws.Socket` (libuwebsockets) exclusively
- **Pipes/blocking I/O**: Use `PipeReader.zig` and `PipeWriter.zig`
Available in bun.sys:
```zig
sys.setsockopt(fd, level, optname, value)
sys.socketpair(domain, socktype, protocol, nonblocking_status)
```
Do NOT use `bun.sys` for socket read/write - use `uws.Socket` instead.
## Other Operations
```zig
sys.ftruncate(fd, size)
sys.lseek(fd, offset, whence)
sys.dup(fd)
sys.dupWithFlags(fd, flags)
sys.fcntl(fd, cmd, arg)
sys.pipe()
sys.mmap(...)
sys.munmap(memory)
sys.access(path, mode)
sys.futimens(fd, atime, mtime)
sys.utimens(path, atime, mtime)
```
## Error Type
```zig
const err: bun.sys.Error = ...;
err.errno // Raw errno value
err.getErrno() // As std.posix.E enum
err.syscall // Which syscall failed (Tag enum)
err.fd // Optional: file descriptor
err.path // Optional: path string
```
## Key Points
- Prefer `bun.sys.File` wrapper for most file operations
- Use low-level `bun.sys` functions only when needed
- Use `bun.O.*` flags instead of `std.os.O.*`
- Handle `Maybe(T)` with switch or `.unwrap()`
- Use `defer fd.close()` for cleanup
- EINTR is handled automatically in most functions
- For sockets, use `uws.Socket` not `bun.sys`

View File

@@ -1,5 +1,9 @@
language: en-US
issue_enrichment:
auto_enrich:
enabled: false
reviews:
profile: assertive
request_changes_workflow: false

View File

@@ -1,41 +0,0 @@
---
description:
globs: src/**/*.cpp,src/**/*.zig
alwaysApply: false
---
### Build Commands
- **Build debug version**: `bun bd` or `bun run build:debug`
- Creates a debug build at `./build/debug/bun-debug`
- Compilation takes ~2.5 minutes
- **Run tests with your debug build**: `bun bd test <test-file>`
- **CRITICAL**: Never use `bun test` directly - it won't include your changes
- **Run any command with debug build**: `bun bd <command>`
### Run a file
To run a file, use:
```sh
bun bd <file> <...args>
```
**CRITICAL**: Never use `bun <file>` directly. It will not have your changes.
### Logging
`BUN_DEBUG_$(SCOPE)=1` enables debug logs for a specific debug log scope.
Debug logs look like this:
```zig
const log = bun.Output.scoped(.${SCOPE}, .hidden);
// ...later
log("MY DEBUG LOG", .{})
```
### Code Generation
Code generation happens automatically as part of the build process. There are no commands to run.

View File

@@ -1,139 +0,0 @@
---
description: Writing HMR/Dev Server tests
globs: test/bake/*
---
# Writing HMR/Dev Server tests
Dev server tests validate that hot-reloading is robust, correct, and reliable. Remember to write thorough, yet concise tests.
## File Structure
- `test/bake/bake-harness.ts` - shared utilities and test harness
- primary test functions `devTest` / `prodTest` / `devAndProductionTest`
- class `Dev` (controls subprocess for dev server)
- class `Client` (controls a happy-dom subprocess for having the page open)
- more helpers
- `test/bake/client-fixture.mjs` - subprocess for what `Client` controls. it loads a page and uses IPC to query parts of the page, run javascript, and much more.
- `test/bake/dev/*.test.ts` - these call `devTest` to test dev server and hot reloading
- `test/bake/dev-and-prod.ts` - these use `devAndProductionTest` to run the same test on dev and production mode. these tests cannot really test hot reloading for obvious reasons.
## Categories
bundle.test.ts - Bundle tests are tests concerning bundling bugs that only occur in DevServer.
css.test.ts - CSS tests concern bundling bugs with CSS files
plugins.test.ts - Plugin tests concern plugins in development mode.
ecosystem.test.ts - These tests involve ensuring certain libraries are correct. It is preferred to test more concrete bugs than testing entire packages.
esm.test.ts - ESM tests are about various esm features in development mode.
html.test.ts - HTML tests are tests relating to HTML files themselves.
react-spa.test.ts - Tests relating to React, our react-refresh transform, and basic server component transforms.
sourcemap.test.ts - Tests verifying source-maps are correct.
## `devTest` Basics
A test takes in two primary inputs: `files` and `async test(dev) {`
```ts
import { devTest, emptyHtmlFile } from "../bake-harness";
devTest("html file is watched", {
files: {
"index.html": emptyHtmlFile({
scripts: ["/script.ts"],
body: "<h1>Hello</h1>",
}),
"script.ts": `
console.log("hello");
`,
},
async test(dev) {
await dev.fetch("/").expect.toInclude("<h1>Hello</h1>");
await dev.fetch("/").expect.toInclude("<h1>Hello</h1>");
await dev.patch("index.html", {
find: "Hello",
replace: "World",
});
await dev.fetch("/").expect.toInclude("<h1>World</h1>");
// Works
await using c = await dev.client("/");
await c.expectMessage("hello");
// Editing HTML reloads
await c.expectReload(async () => {
await dev.patch("index.html", {
find: "World",
replace: "Hello",
});
await dev.fetch("/").expect.toInclude("<h1>Hello</h1>");
});
await c.expectMessage("hello");
await c.expectReload(async () => {
await dev.patch("index.html", {
find: "Hello",
replace: "Bar",
});
await dev.fetch("/").expect.toInclude("<h1>Bar</h1>");
});
await c.expectMessage("hello");
await c.expectReload(async () => {
await dev.patch("script.ts", {
find: "hello",
replace: "world",
});
});
await c.expectMessage("world");
},
});
```
`files` holds the initial state, and the callback runs with the server running. `dev.fetch()` runs HTTP requests, while `dev.client()` opens a browser instance to the code.
Functions `dev.write` and `dev.patch` and `dev.delete` mutate the filesystem. Do not use `node:fs` APIs, as the dev server ones are hooked to wait for hot-reload, and all connected clients to receive changes.
When a change performs a hard-reload, that must be explicitly annotated with `expectReload`. This tells `client-fixture.mjs` that the test is meant to reload the page once; All other hard reloads automatically fail the test.
Clients have `console.log` instrumented, so that any unasserted logs fail the test. This makes it more obvious when an extra reload or re-evaluation happens. Messages are awaited via `c.expectMessage("log")`, or with multiple arguments if there are multiple logs.
## Testing for bundling errors
By default, a client opening a page to an error will fail the test. This makes testing errors explicit.
```ts
devTest("import then create", {
files: {
"index.html": `
<!DOCTYPE html>
<html>
<head></head>
<body>
<script type="module" src="/script.ts"></script>
</body>
</html>
`,
"script.ts": `
import data from "./data";
console.log(data);
`,
},
async test(dev) {
const c = await dev.client("/", {
errors: ['script.ts:1:18: error: Could not resolve: "./data"'],
});
await c.expectReload(async () => {
await dev.write("data.ts", "export default 'data';");
});
await c.expectMessage("data");
},
});
```
Many functions take an options value to allow specifying it will produce errors. For example, this delete is going to cause a resolution failure.
```ts
await dev.delete("other.ts", {
errors: ['index.ts:1:16: error: Could not resolve: "./other"'],
});
```

View File

@@ -1,413 +0,0 @@
---
description: JavaScript class implemented in C++
globs: *.cpp
alwaysApply: false
---
# Implementing JavaScript classes in C++
If there is a publicly accessible Constructor and Prototype, then there are 3 classes:
- IF there are C++ class members we need a destructor, so `class Foo : public JSC::DestructibleObject`, if no C++ class fields (only JS properties) then we don't need a class at all usually. We can instead use JSC::constructEmptyObject(vm, structure) and `putDirectOffset` like in [NodeFSStatBinding.cpp](mdc:src/bun.js/bindings/NodeFSStatBinding.cpp).
- class FooPrototype : public JSC::JSNonFinalObject
- class FooConstructor : public JSC::InternalFunction
If there is no publicly accessible Constructor, just the Prototype and the class is necessary. In some cases, we can avoid the prototype entirely (but that's rare).
If there are C++ fields on the Foo class, the Foo class will need an iso subspace added to [DOMClientIsoSubspaces.h](mdc:src/bun.js/bindings/webcore/DOMClientIsoSubspaces.h) and [DOMIsoSubspaces.h](mdc:src/bun.js/bindings/webcore/DOMIsoSubspaces.h). Prototype and Constructor do not need subspaces.
Usually you'll need to #include "root.h" at the top of C++ files or you'll get lint errors.
Generally, defining the subspace looks like this:
```c++
class Foo : public JSC::DestructibleObject {
// ...
template<typename MyClassT, JSC::SubspaceAccess mode>
static JSC::GCClient::IsoSubspace* subspaceFor(JSC::VM& vm)
{
if constexpr (mode == JSC::SubspaceAccess::Concurrently)
return nullptr;
return WebCore::subspaceForImpl<MyClassT, WebCore::UseCustomHeapCellType::No>(
vm,
[](auto& spaces) { return spaces.m_clientSubspaceFor${MyClassT}.get(); },
[](auto& spaces, auto&& space) { spaces.m_clientSubspaceFor${MyClassT} = std::forward<decltype(space)>(space); },
[](auto& spaces) { return spaces.m_subspaceFor${MyClassT}.get(); },
[](auto& spaces, auto&& space) { spaces.m_subspaceFor${MyClassT} = std::forward<decltype(space)>(space); });
}
```
It's better to put it in the .cpp file instead of the .h file, when possible.
## Defining properties
Define properties on the prototype. Use a const HashTableValues like this:
```C++
static JSC_DECLARE_HOST_FUNCTION(jsX509CertificateProtoFuncCheckEmail);
static JSC_DECLARE_HOST_FUNCTION(jsX509CertificateProtoFuncCheckHost);
static JSC_DECLARE_HOST_FUNCTION(jsX509CertificateProtoFuncCheckIP);
static JSC_DECLARE_HOST_FUNCTION(jsX509CertificateProtoFuncCheckIssued);
static JSC_DECLARE_HOST_FUNCTION(jsX509CertificateProtoFuncCheckPrivateKey);
static JSC_DECLARE_HOST_FUNCTION(jsX509CertificateProtoFuncToJSON);
static JSC_DECLARE_HOST_FUNCTION(jsX509CertificateProtoFuncToLegacyObject);
static JSC_DECLARE_HOST_FUNCTION(jsX509CertificateProtoFuncToString);
static JSC_DECLARE_HOST_FUNCTION(jsX509CertificateProtoFuncVerify);
static JSC_DECLARE_CUSTOM_GETTER(jsX509CertificateGetter_ca);
static JSC_DECLARE_CUSTOM_GETTER(jsX509CertificateGetter_fingerprint);
static JSC_DECLARE_CUSTOM_GETTER(jsX509CertificateGetter_fingerprint256);
static JSC_DECLARE_CUSTOM_GETTER(jsX509CertificateGetter_fingerprint512);
static JSC_DECLARE_CUSTOM_GETTER(jsX509CertificateGetter_subject);
static JSC_DECLARE_CUSTOM_GETTER(jsX509CertificateGetter_subjectAltName);
static JSC_DECLARE_CUSTOM_GETTER(jsX509CertificateGetter_infoAccess);
static JSC_DECLARE_CUSTOM_GETTER(jsX509CertificateGetter_keyUsage);
static JSC_DECLARE_CUSTOM_GETTER(jsX509CertificateGetter_issuer);
static JSC_DECLARE_CUSTOM_GETTER(jsX509CertificateGetter_issuerCertificate);
static JSC_DECLARE_CUSTOM_GETTER(jsX509CertificateGetter_publicKey);
static JSC_DECLARE_CUSTOM_GETTER(jsX509CertificateGetter_raw);
static JSC_DECLARE_CUSTOM_GETTER(jsX509CertificateGetter_serialNumber);
static JSC_DECLARE_CUSTOM_GETTER(jsX509CertificateGetter_validFrom);
static JSC_DECLARE_CUSTOM_GETTER(jsX509CertificateGetter_validTo);
static JSC_DECLARE_CUSTOM_GETTER(jsX509CertificateGetter_validFromDate);
static JSC_DECLARE_CUSTOM_GETTER(jsX509CertificateGetter_validToDate);
static const HashTableValue JSX509CertificatePrototypeTableValues[] = {
{ "ca"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessor), NoIntrinsic, { HashTableValue::GetterSetterType, jsX509CertificateGetter_ca, 0 } },
{ "checkEmail"_s, static_cast<unsigned>(PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsX509CertificateProtoFuncCheckEmail, 2 } },
{ "checkHost"_s, static_cast<unsigned>(PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsX509CertificateProtoFuncCheckHost, 2 } },
{ "checkIP"_s, static_cast<unsigned>(PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsX509CertificateProtoFuncCheckIP, 1 } },
{ "checkIssued"_s, static_cast<unsigned>(PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsX509CertificateProtoFuncCheckIssued, 1 } },
{ "checkPrivateKey"_s, static_cast<unsigned>(PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsX509CertificateProtoFuncCheckPrivateKey, 1 } },
{ "fingerprint"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessor), NoIntrinsic, { HashTableValue::GetterSetterType, jsX509CertificateGetter_fingerprint, 0 } },
{ "fingerprint256"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessor), NoIntrinsic, { HashTableValue::GetterSetterType, jsX509CertificateGetter_fingerprint256, 0 } },
{ "fingerprint512"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessor), NoIntrinsic, { HashTableValue::GetterSetterType, jsX509CertificateGetter_fingerprint512, 0 } },
{ "infoAccess"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessor), NoIntrinsic, { HashTableValue::GetterSetterType, jsX509CertificateGetter_infoAccess, 0 } },
{ "issuer"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessor), NoIntrinsic, { HashTableValue::GetterSetterType, jsX509CertificateGetter_issuer, 0 } },
{ "issuerCertificate"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessor), NoIntrinsic, { HashTableValue::GetterSetterType, jsX509CertificateGetter_issuerCertificate, 0 } },
{ "keyUsage"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessor), NoIntrinsic, { HashTableValue::GetterSetterType, jsX509CertificateGetter_keyUsage, 0 } },
{ "publicKey"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessor), NoIntrinsic, { HashTableValue::GetterSetterType, jsX509CertificateGetter_publicKey, 0 } },
{ "raw"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessor), NoIntrinsic, { HashTableValue::GetterSetterType, jsX509CertificateGetter_raw, 0 } },
{ "serialNumber"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessor), NoIntrinsic, { HashTableValue::GetterSetterType, jsX509CertificateGetter_serialNumber, 0 } },
{ "subject"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessor), NoIntrinsic, { HashTableValue::GetterSetterType, jsX509CertificateGetter_subject, 0 } },
{ "subjectAltName"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessor), NoIntrinsic, { HashTableValue::GetterSetterType, jsX509CertificateGetter_subjectAltName, 0 } },
{ "toJSON"_s, static_cast<unsigned>(PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsX509CertificateProtoFuncToJSON, 0 } },
{ "toLegacyObject"_s, static_cast<unsigned>(PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsX509CertificateProtoFuncToLegacyObject, 0 } },
{ "toString"_s, static_cast<unsigned>(PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsX509CertificateProtoFuncToString, 0 } },
{ "validFrom"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessor), NoIntrinsic, { HashTableValue::GetterSetterType, jsX509CertificateGetter_validFrom, 0 } },
{ "validFromDate"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessorOrValue), NoIntrinsic, { HashTableValue::GetterSetterType, jsX509CertificateGetter_validFromDate, 0 } },
{ "validTo"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessor), NoIntrinsic, { HashTableValue::GetterSetterType, jsX509CertificateGetter_validTo, 0 } },
{ "validToDate"_s, static_cast<unsigned>(PropertyAttribute::ReadOnly | PropertyAttribute::CustomAccessorOrValue), NoIntrinsic, { HashTableValue::GetterSetterType, jsX509CertificateGetter_validToDate, 0 } },
{ "verify"_s, static_cast<unsigned>(PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsX509CertificateProtoFuncVerify, 1 } },
};
```
### Creating a prototype class
Follow a pattern like this:
```c++
class JSX509CertificatePrototype final : public JSC::JSNonFinalObject {
public:
using Base = JSC::JSNonFinalObject;
static constexpr unsigned StructureFlags = Base::StructureFlags;
static JSX509CertificatePrototype* create(JSC::VM& vm, JSC::JSGlobalObject* globalObject, JSC::Structure* structure)
{
JSX509CertificatePrototype* prototype = new (NotNull, allocateCell<JSX509CertificatePrototype>(vm)) JSX509CertificatePrototype(vm, structure);
prototype->finishCreation(vm);
return prototype;
}
template<typename, JSC::SubspaceAccess>
static JSC::GCClient::IsoSubspace* subspaceFor(JSC::VM& vm)
{
return &vm.plainObjectSpace();
}
DECLARE_INFO;
static JSC::Structure* createStructure(JSC::VM& vm, JSC::JSGlobalObject* globalObject, JSC::JSValue prototype)
{
auto* structure = JSC::Structure::create(vm, globalObject, prototype, JSC::TypeInfo(JSC::ObjectType, StructureFlags), info());
structure->setMayBePrototype(true);
return structure;
}
private:
JSX509CertificatePrototype(JSC::VM& vm, JSC::Structure* structure)
: Base(vm, structure)
{
}
void finishCreation(JSC::VM& vm);
};
const ClassInfo JSX509CertificatePrototype::s_info = { "X509Certificate"_s, &Base::s_info, nullptr, nullptr, CREATE_METHOD_TABLE(JSX509CertificatePrototype) };
void JSX509CertificatePrototype::finishCreation(VM& vm)
{
Base::finishCreation(vm);
reifyStaticProperties(vm, JSX509Certificate::info(), JSX509CertificatePrototypeTableValues, *this);
JSC_TO_STRING_TAG_WITHOUT_TRANSITION();
}
} // namespace Bun
```
### Getter definition:
```C++
JSC_DEFINE_CUSTOM_GETTER(jsX509CertificateGetter_ca, (JSGlobalObject * globalObject, EncodedJSValue thisValue, PropertyName))
{
VM& vm = globalObject->vm();
auto scope = DECLARE_THROW_SCOPE(vm);
JSX509Certificate* thisObject = jsDynamicCast<JSX509Certificate*>(JSValue::decode(thisValue));
if (UNLIKELY(!thisObject)) {
Bun::throwThisTypeError(*globalObject, scope, "JSX509Certificate"_s, "ca"_s);
return {};
}
return JSValue::encode(jsBoolean(thisObject->view().isCA()));
}
```
### Setter definition
```C++
JSC_DEFINE_CUSTOM_SETTER(jsImportMetaObjectSetter_require, (JSGlobalObject * jsGlobalObject, JSC::EncodedJSValue thisValue, JSC::EncodedJSValue encodedValue, PropertyName propertyName))
{
ImportMetaObject* thisObject = jsDynamicCast<ImportMetaObject*>(JSValue::decode(thisValue));
if (UNLIKELY(!thisObject))
return false;
JSValue value = JSValue::decode(encodedValue);
if (!value.isCell()) {
// TODO:
return true;
}
thisObject->requireProperty.set(thisObject->vm(), thisObject, value.asCell());
return true;
}
```
### Function definition
```C++
JSC_DEFINE_HOST_FUNCTION(jsX509CertificateProtoFuncToJSON, (JSGlobalObject * globalObject, CallFrame* callFrame))
{
VM& vm = globalObject->vm();
auto scope = DECLARE_THROW_SCOPE(vm);
auto *thisObject = jsDynamicCast<MyClassT*>(callFrame->thisValue());
if (UNLIKELY(!thisObject)) {
Bun::throwThisTypeError(*globalObject, scope, "MyClass"_s, "myFunctionName"_s);
return {};
}
return JSValue::encode(functionThatReturnsJSValue(vm, globalObject, thisObject));
}
```
### Constructor definition
```C++
JSC_DECLARE_HOST_FUNCTION(callStats);
JSC_DECLARE_HOST_FUNCTION(constructStats);
class JSStatsConstructor final : public JSC::InternalFunction {
public:
using Base = JSC::InternalFunction;
static constexpr unsigned StructureFlags = Base::StructureFlags;
static JSStatsConstructor* create(JSC::VM& vm, JSC::Structure* structure, JSC::JSObject* prototype)
{
JSStatsConstructor* constructor = new (NotNull, JSC::allocateCell<JSStatsConstructor>(vm)) JSStatsConstructor(vm, structure);
constructor->finishCreation(vm, prototype);
return constructor;
}
DECLARE_INFO;
template<typename CellType, JSC::SubspaceAccess>
static JSC::GCClient::IsoSubspace* subspaceFor(JSC::VM& vm)
{
return &vm.internalFunctionSpace();
}
static JSC::Structure* createStructure(JSC::VM& vm, JSC::JSGlobalObject* globalObject, JSC::JSValue prototype)
{
return JSC::Structure::create(vm, globalObject, prototype, JSC::TypeInfo(JSC::InternalFunctionType, StructureFlags), info());
}
private:
JSStatsConstructor(JSC::VM& vm, JSC::Structure* structure)
: Base(vm, structure, callStats, constructStats)
{
}
void finishCreation(JSC::VM& vm, JSC::JSObject* prototype)
{
Base::finishCreation(vm, 0, "Stats"_s);
putDirectWithoutTransition(vm, vm.propertyNames->prototype, prototype, JSC::PropertyAttribute::DontEnum | JSC::PropertyAttribute::DontDelete | JSC::PropertyAttribute::ReadOnly);
}
};
```
### Structure caching
If there's a class, prototype, and constructor:
1. Add the `JSC::LazyClassStructure` to [ZigGlobalObject.h](mdc:src/bun.js/bindings/ZigGlobalObject.h)
2. Initialize the class structure in [ZigGlobalObject.cpp](mdc:src/bun.js/bindings/ZigGlobalObject.cpp) in `void GlobalObject::finishCreation(VM& vm)`
3. Visit the class structure in visitChildren in [ZigGlobalObject.cpp](mdc:src/bun.js/bindings/ZigGlobalObject.cpp) in `void GlobalObject::visitChildrenImpl`
```c++#ZigGlobalObject.cpp
void GlobalObject::finishCreation(VM& vm) {
// ...
m_JSStatsBigIntClassStructure.initLater(
[](LazyClassStructure::Initializer& init) {
// Call the function to initialize our class structure.
Bun::initJSBigIntStatsClassStructure(init);
});
```
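For step 3, the structure, prototype, and constructor held by the `LazyClassStructure` must be kept alive by the garbage collector. A minimal sketch of the `visitChildrenImpl` addition, reusing the member name from the snippet above:
```c++#ZigGlobalObject.cpp
template<typename Visitor>
void GlobalObject::visitChildrenImpl(JSCell* cell, Visitor& visitor)
{
    auto* thisObject = jsCast<GlobalObject*>(cell);
    Base::visitChildren(thisObject, visitor);
    // ...
    // LazyClassStructure::visit marks the structure, prototype, and constructor.
    thisObject->m_JSStatsBigIntClassStructure.visit(visitor);
}
```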
Then, implement the function that creates the structure:
```c++
void setupX509CertificateClassStructure(LazyClassStructure::Initializer& init)
{
auto* prototypeStructure = JSX509CertificatePrototype::createStructure(init.vm, init.global, init.global->objectPrototype());
auto* prototype = JSX509CertificatePrototype::create(init.vm, init.global, prototypeStructure);
auto* constructorStructure = JSX509CertificateConstructor::createStructure(init.vm, init.global, init.global->functionPrototype());
auto* constructor = JSX509CertificateConstructor::create(init.vm, init.global, constructorStructure, prototype);
auto* structure = JSX509Certificate::createStructure(init.vm, init.global, prototype);
init.setPrototype(prototype);
init.setStructure(structure);
init.setConstructor(constructor);
}
```
If there's only a class, use `JSC::LazyProperty<JSGlobalObject, Structure>` instead of `JSC::LazyClassStructure`:
1. Add the `JSC::LazyProperty<JSGlobalObject, Structure>` to @ZigGlobalObject.h
2. Initialize the class structure in @ZigGlobalObject.cpp in `void GlobalObject::finishCreation(VM& vm)`
3. Visit the lazy property in visitChildren in @ZigGlobalObject.cpp in `void GlobalObject::visitChildrenImpl`
```c++#ZigGlobalObject.cpp
void GlobalObject::finishCreation(VM& vm) {
// ...
m_myLazyProperty.initLater([](const JSC::LazyProperty<JSC::JSGlobalObject, JSC::Structure>::Initializer& init) {
init.set(Bun::initMyStructure(init.vm, reinterpret_cast<Zig::GlobalObject*>(init.owner)));
});
```
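Step 3 is analogous here: `JSC::LazyProperty` also exposes a `visit` method, so the lazy structure gets marked from `visitChildrenImpl` (sketch, reusing the member name above):
```c++#ZigGlobalObject.cpp
// Inside GlobalObject::visitChildrenImpl, alongside the other lazy members:
thisObject->m_myLazyProperty.visit(visitor);
```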
Then, implement the function that creates the structure:
```c++
Structure* setupX509CertificateStructure(JSC::VM& vm, Zig::GlobalObject* globalObject)
{
// If there is a prototype:
auto* prototypeStructure = JSX509CertificatePrototype::createStructure(vm, globalObject, globalObject->objectPrototype());
auto* prototype = JSX509CertificatePrototype::create(vm, globalObject, prototypeStructure);
// If there is no prototype, pass globalObject->objectPrototype() as the prototype argument below.
auto* structure = JSX509Certificate::createStructure(vm, globalObject, prototype);
return structure;
}
```
Then, use the structure by calling `globalObject->m_myStructureName.get(globalObject)`:
```C++
JSC_DEFINE_HOST_FUNCTION(x509CertificateConstructorConstruct, (JSGlobalObject * globalObject, CallFrame* callFrame))
{
VM& vm = globalObject->vm();
auto scope = DECLARE_THROW_SCOPE(vm);
if (!callFrame->argumentCount()) {
Bun::throwError(globalObject, scope, ErrorCode::ERR_MISSING_ARGS, "X509Certificate constructor requires at least one argument"_s);
return {};
}
JSValue arg = callFrame->uncheckedArgument(0);
if (!arg.isCell()) {
Bun::throwError(globalObject, scope, ErrorCode::ERR_INVALID_ARG_TYPE, "X509Certificate constructor argument must be a Buffer, TypedArray, or string"_s);
return {};
}
auto* zigGlobalObject = defaultGlobalObject(globalObject);
Structure* structure = zigGlobalObject->m_JSX509CertificateClassStructure.get(zigGlobalObject);
JSValue newTarget = callFrame->newTarget();
if (UNLIKELY(zigGlobalObject->m_JSX509CertificateClassStructure.constructor(zigGlobalObject) != newTarget)) {
auto scope = DECLARE_THROW_SCOPE(vm);
if (!newTarget) {
throwTypeError(globalObject, scope, "Class constructor X509Certificate cannot be invoked without 'new'"_s);
return {};
}
auto* functionGlobalObject = defaultGlobalObject(getFunctionRealm(globalObject, newTarget.getObject()));
RETURN_IF_EXCEPTION(scope, {});
structure = InternalFunction::createSubclassStructure(globalObject, newTarget.getObject(), functionGlobalObject->m_JSX509CertificateClassStructure.get(functionGlobalObject));
RETURN_IF_EXCEPTION(scope, {});
}
return JSValue::encode(createX509Certificate(vm, globalObject, structure, arg));
}
```
### Expose to Zig
To expose the constructor to zig:
```c++
extern "C" JSC::EncodedJSValue Bun__JSBigIntStatsObjectConstructor(Zig::GlobalObject* globalobject)
{
return JSValue::encode(globalobject->m_JSStatsBigIntClassStructure.constructor(globalobject));
}
```
Zig:
```zig
extern "c" fn Bun__JSBigIntStatsObjectConstructor(*JSC.JSGlobalObject) JSC.JSValue;
pub const getBigIntStatsConstructor = Bun__JSBigIntStatsObjectConstructor;
```
To create an object (instance) of a JS class defined in C++ from Zig, follow the `__toJS` convention like this:
```c++
// X509* is whatever we need to create the object
extern "C" EncodedJSValue Bun__X509__toJS(Zig::GlobalObject* globalObject, X509* cert)
{
// ... implementation details
auto* structure = globalObject->m_JSX509CertificateClassStructure.get(globalObject);
return JSValue::encode(JSX509Certificate::create(globalObject->vm(), structure, globalObject, WTFMove(cert)));
}
```
And from Zig:
```zig
const X509 = opaque {
// ... class
extern fn Bun__X509__toJS(*JSC.JSGlobalObject, *X509) JSC.JSValue;
pub fn toJS(this: *X509, globalObject: *JSC.JSGlobalObject) JSC.JSValue {
return Bun__X509__toJS(globalObject, this);
}
};
```

View File

@@ -1,203 +0,0 @@
# Registering Functions, Objects, and Modules in Bun
This guide documents the process of adding new functionality to the Bun global object and runtime.
## Overview
Bun's architecture exposes functionality to JavaScript through a set of carefully registered functions, objects, and modules. Most core functionality is implemented in Zig, with JavaScript bindings that make these features accessible to users.
There are several key ways to expose functionality in Bun:
1. **Global Functions**: Direct methods on the `Bun` object (e.g., `Bun.serve()`)
2. **Getter Properties**: Lazily initialized properties on the `Bun` object (e.g., `Bun.sqlite`)
3. **Constructor Classes**: Classes available through the `Bun` object (e.g., `Bun.ValkeyClient`)
4. **Global Modules**: Modules that can be imported directly (e.g., `import {X} from "bun:*"`)
## The Registration Process
Adding new functionality to Bun involves several coordinated steps across multiple files:
### 1. Implement the Core Functionality in Zig
First, implement your feature in Zig, typically in its own directory in `src/`. Examples:
- `src/valkey/` for Redis/Valkey client
- `src/semver/` for SemVer functionality
- `src/smtp/` for SMTP client
### 2. Create JavaScript Bindings
Create bindings that expose your Zig functionality to JavaScript:
- Create a class definition file (e.g., `js_bindings.classes.ts`) to define the JavaScript interface
- Implement `JSYourFeature` struct in a file like `js_your_feature.zig`
Example from a class definition file:
```typescript
// Example from a .classes.ts file
import { define } from "../../codegen/class-definitions";
export default [
define({
name: "YourFeature",
construct: true,
finalize: true,
hasPendingActivity: true,
memoryCost: true,
klass: {},
JSType: "0b11101110",
proto: {
yourMethod: {
fn: "yourZigMethod",
length: 1,
},
property: {
getter: "getProperty",
},
},
values: ["cachedValues"],
}),
];
```
### 3. Register with BunObject in `src/bun.js/bindings/BunObject+exports.h`
Add an entry to the `FOR_EACH_GETTER` macro:
```c
// In BunObject+exports.h
#define FOR_EACH_GETTER(macro) \
macro(CSRF) \
macro(CryptoHasher) \
... \
macro(YourFeature) \
```
### 4. Create a Getter Function in `src/bun.js/api/BunObject.zig`
Implement a getter function in `BunObject.zig` that returns your feature:
```zig
// In BunObject.zig
pub const YourFeature = toJSGetter(Bun.getYourFeatureConstructor);
// In the exportAll() function:
@export(&BunObject.YourFeature, .{ .name = getterName("YourFeature") });
```
### 5. Implement the Getter Function in a Relevant Zig File
Implement the function that creates your object:
```zig
// In your main module file (e.g., src/your_feature/your_feature.zig)
pub fn getYourFeatureConstructor(globalThis: *JSC.JSGlobalObject, _: *JSC.JSObject) JSC.JSValue {
return JSC.API.YourFeature.getConstructor(globalThis);
}
```
### 6. Add to Build System
Ensure your files are included in the build system by adding them to the appropriate targets.
## Example: Adding a New Module
Here's a comprehensive example of adding a hypothetical SMTP module:
1. Create implementation files in `src/smtp/`:
- `index.zig`: Main entry point that exports everything
- `SmtpClient.zig`: Core SMTP client implementation
- `js_smtp.zig`: JavaScript bindings
- `js_bindings.classes.ts`: Class definition
2. Define your JS class in `js_bindings.classes.ts`:
```typescript
import { define } from "../../codegen/class-definitions";
export default [
define({
name: "EmailClient",
construct: true,
finalize: true,
hasPendingActivity: true,
configurable: false,
memoryCost: true,
klass: {},
JSType: "0b11101110",
proto: {
send: {
fn: "send",
length: 1,
},
verify: {
fn: "verify",
length: 0,
},
close: {
fn: "close",
length: 0,
},
},
values: ["connectionPromise"],
}),
];
```
3. Add getter to `BunObject+exports.h`:
```c
#define FOR_EACH_GETTER(macro) \
macro(CSRF) \
... \
macro(SMTP) \
```
4. Add getter function to `BunObject.zig`:
```zig
pub const SMTP = toJSGetter(Bun.getSmtpConstructor);
// In exportAll:
@export(&BunObject.SMTP, .{ .name = getterName("SMTP") });
```
5. Implement getter in your module:
```zig
pub fn getSmtpConstructor(globalThis: *JSC.JSGlobalObject, _: *JSC.JSObject) JSC.JSValue {
return JSC.API.JSEmailClient.getConstructor(globalThis);
}
```
## Best Practices
1. **Follow Naming Conventions**: Align your naming with existing patterns
2. **Reference Existing Modules**: Study similar modules like Valkey or S3Client for guidance
3. **Memory Management**: Be careful with memory management and reference counting
4. **Error Handling**: Use `bun.JSError!JSValue` for proper error propagation
5. **Documentation**: Add JSDoc comments to your JavaScript bindings
6. **Testing**: Add tests for your new functionality
## Common Gotchas
- Be sure to handle reference counting properly with `ref()`/`deref()`
- Always implement proper cleanup in `deinit()` and `finalize()`
- For network operations, manage socket lifetimes correctly
- Use `JSC.Codegen` correctly to generate necessary binding code
## Related Files
- `src/bun.js/bindings/BunObject+exports.h`: Registration of getters and functions
- `src/bun.js/api/BunObject.zig`: Implementation of getters and object creation
- `src/bun.js/api/BunObject.classes.ts`: Class definitions
- `.cursor/rules/zig-javascriptcore-classes.mdc`: More details on class bindings
## Additional Resources
For more detailed information on specific topics:
- See `zig-javascriptcore-classes.mdc` for details on creating JS class bindings
- Review existing modules like `valkey`, `sqlite`, or `s3` for real-world examples

View File

@@ -1,91 +0,0 @@
---
description: Writing tests for Bun
globs:
---
# Writing tests for Bun
## Where tests are found
You'll find all of Bun's tests in the `test/` directory.
* `test/`
* `cli/` - CLI command tests, like `bun install` or `bun init`
* `js/` - JavaScript & TypeScript tests
* `bun/` - `Bun` APIs tests, separated by category, for example: `glob/` for `Bun.Glob` tests
* `node/` - Node.js module tests, separated by module, for example: `assert/` for `node:assert` tests
* `test/` - Vendored Node.js tests, taken from the Node.js repository (does not conform to Bun's test style)
* `web/` - Web API tests, separated by category, for example: `fetch/` for `Request` and `Response` tests
* `third_party/` - npm package tests, to validate that basic usage works in Bun
* `napi/` - N-API tests
* `v8/` - V8 C++ API tests
* `bundler/` - Bundler, transpiler, CSS, and `bun build` tests
* `regression/issue/[number]` - Regression tests, always make one when fixing a particular issue
## How tests are written
Bun's tests are written as JavaScript and TypeScript files with the Jest-style APIs, like `test`, `describe`, and `expect`. They are tested using Bun's own test runner, `bun test`.
```js
import { describe, test, expect } from "bun:test";
import assert, { AssertionError } from "assert";
describe("assert(expr)", () => {
test.each([true, 1, "foo"])(`assert(%p) does not throw`, expr => {
expect(() => assert(expr)).not.toThrow();
});
test.each([false, 0, "", null, undefined])(`assert(%p) throws`, expr => {
expect(() => assert(expr)).toThrow(AssertionError);
});
});
```
## Testing conventions
* See `test/harness.ts` for common test utilities and helpers
* Be rigorous and test for edge-cases and unexpected inputs
* Use data-driven tests, e.g. `test.each`, to reduce boilerplate when possible
* When you need to test Bun as a CLI, use the following pattern:
```js
import { test, expect } from "bun:test";
import { spawn } from "bun";
import { bunExe, bunEnv } from "harness";
test("bun --version", async () => {
const { exited, stdout: stdoutStream, stderr: stderrStream } = spawn({
cmd: [bunExe(), "--version"],
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [ exitCode, stdout, stderr ] = await Promise.all([
exited,
new Response(stdoutStream).text(),
new Response(stderrStream).text(),
]);
expect({ exitCode, stdout, stderr }).toMatchObject({
exitCode: 0,
stdout: expect.stringContaining(Bun.version),
stderr: "",
});
});
```
## Before writing a test
* If you are fixing a bug, write the test first and make sure it fails (as expected) with the canary version of Bun
* If you are fixing a Node.js compatibility bug, create a throw-away snippet of code and test that it works as you expect in Node.js, then that it fails (as expected) with the canary version of Bun
* When the expected behaviour is ambiguous, defer to matching what happens in Node.js
* Always attempt to find related tests in an existing test file before creating a new test file

View File

@@ -1,509 +0,0 @@
---
description: How Zig works with JavaScriptCore bindings generator
globs:
alwaysApply: false
---
# Bun's JavaScriptCore Class Bindings Generator
This document explains how Bun's class bindings generator works to bridge Zig and JavaScript code through JavaScriptCore (JSC).
## Architecture Overview
Bun's binding system creates a seamless bridge between JavaScript and Zig, allowing Zig implementations to be exposed as JavaScript classes. The system has several key components:
1. **Zig Implementation** (.zig files)
2. **JavaScript Interface Definition** (.classes.ts files)
3. **Generated Code** (C++/Zig files that connect everything)
## Class Definition Files
### JavaScript Interface (.classes.ts)
The `.classes.ts` files define the JavaScript API using a declarative approach:
```typescript
// Example: encoding.classes.ts
define({
name: "TextDecoder",
constructor: true,
JSType: "object",
finalize: true,
proto: {
decode: {
// Function definition
args: 1,
},
encoding: {
// Getter with caching
getter: true,
cache: true,
},
fatal: {
// Read-only property
getter: true,
},
ignoreBOM: {
// Read-only property
getter: true,
},
},
});
```
Each class definition specifies:
- The class name
- Whether it has a constructor
- JavaScript type (object, function, etc.)
- Properties and methods in the `proto` field
- Caching strategy for properties
- Finalization requirements
### Zig Implementation (.zig)
The Zig files implement the native functionality:
```zig
// Example: TextDecoder.zig
pub const TextDecoder = struct {
// Expose generated bindings as `js` namespace with trait conversion methods
pub const js = JSC.Codegen.JSTextDecoder;
pub const toJS = js.toJS;
pub const fromJS = js.fromJS;
pub const fromJSDirect = js.fromJSDirect;
// Internal state
encoding: []const u8,
fatal: bool,
ignoreBOM: bool,
// Constructor implementation - note use of globalObject
pub fn constructor(
globalObject: *JSGlobalObject,
callFrame: *JSC.CallFrame,
) bun.JSError!*TextDecoder {
// Implementation
return bun.new(TextDecoder, .{
// Fields
});
}
// Prototype methods - note return type includes JSError
pub fn decode(
this: *TextDecoder,
globalObject: *JSGlobalObject,
callFrame: *JSC.CallFrame,
) bun.JSError!JSC.JSValue {
// Implementation
}
// Getters
pub fn getEncoding(this: *TextDecoder, globalObject: *JSGlobalObject) JSC.JSValue {
return JSC.JSValue.createStringFromUTF8(globalObject, this.encoding);
}
pub fn getFatal(this: *TextDecoder, globalObject: *JSGlobalObject) JSC.JSValue {
return JSC.JSValue.jsBoolean(this.fatal);
}
// Cleanup - note standard pattern of using deinit/deref
fn deinit(this: *TextDecoder) void {
// Release any retained resources
// Free the pointer at the end.
bun.destroy(this);
}
// Finalize - called by JS garbage collector. This should call deinit, or deref if reference counted.
pub fn finalize(this: *TextDecoder) void {
this.deinit();
}
};
```
Key components in the Zig file:
- The struct containing native state
- `pub const js = JSC.Codegen.JS<ClassName>` to include generated code
- Constructor and methods using `bun.JSError!JSValue` return type for proper error handling
- Consistent use of `globalObject` parameter name instead of `ctx`
- Methods matching the JavaScript interface
- Getters/setters for properties
- Proper resource cleanup pattern with `deinit()` and `finalize()`
- Update `src/bun.js/bindings/generated_classes_list.zig` to include the new class
## Code Generation System
The binding generator produces C++ code that connects JavaScript and Zig:
1. **JSC Class Structure**: Creates C++ classes for the JS object, prototype, and constructor
2. **Memory Management**: Handles GC integration through JSC's WriteBarrier
3. **Method Binding**: Connects JS function calls to Zig implementations
4. **Type Conversion**: Converts between JS values and Zig types
5. **Property Caching**: Implements the caching system for properties
The generated C++ code includes:
- A JSC wrapper class (`JSTextDecoder`)
- A prototype class (`JSTextDecoderPrototype`)
- A constructor function (`JSTextDecoderConstructor`)
- Function bindings (`TextDecoderPrototype__decodeCallback`)
- Property getters/setters (`TextDecoderPrototype__encodingGetterWrap`)
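The wrapper class is the piece that owns the pointer back to the Zig object and any cached property barriers. A simplified sketch of its shape (illustrative only; the real generated header differs in details):
```cpp
// Rough shape of a generated wrapper class (not the literal generated code)
class JSTextDecoder final : public JSC::JSDestructibleObject {
public:
    using Base = JSC::JSDestructibleObject;
    DECLARE_INFO;

    // Pointer to the Zig `TextDecoder` instance backing this JS object
    void* wrapped() const { return m_ctx; }

    // WriteBarrier for the cached `encoding` property (`cache: true`)
    mutable JSC::WriteBarrier<JSC::Unknown> m_encoding;

private:
    void* m_ctx;
};
```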
## CallFrame Access
The `CallFrame` object provides access to JavaScript execution context:
```zig
pub fn decode(
this: *TextDecoder,
globalObject: *JSGlobalObject,
callFrame: *JSC.CallFrame
) bun.JSError!JSC.JSValue {
// Get arguments
const input = callFrame.argument(0);
const options = callFrame.argument(1);
// Get this value
const thisValue = callFrame.thisValue();
// Implementation with error handling
if (input.isUndefinedOrNull()) {
return globalObject.throw("Input cannot be null or undefined", .{});
}
// Return value or throw error
return JSC.JSValue.jsString(globalObject, "result");
}
```
CallFrame methods include:
- `argument(i)`: Get the i-th argument
- `argumentCount()`: Get the number of arguments
- `thisValue()`: Get the `this` value
- `callee()`: Get the function being called
## Property Caching and GC-Owned Values
The `cache: true` option in property definitions enables JSC's WriteBarrier to efficiently store values:
```typescript
encoding: {
getter: true,
cache: true, // Enable caching
}
```
### C++ Implementation
In the generated C++ code, caching uses JSC's WriteBarrier:
```cpp
JSC_DEFINE_CUSTOM_GETTER(TextDecoderPrototype__encodingGetterWrap, (...)) {
auto& vm = JSC::getVM(lexicalGlobalObject);
Zig::GlobalObject *globalObject = reinterpret_cast<Zig::GlobalObject*>(lexicalGlobalObject);
auto throwScope = DECLARE_THROW_SCOPE(vm);
JSTextDecoder* thisObject = jsCast<JSTextDecoder*>(JSValue::decode(encodedThisValue));
JSC::EnsureStillAliveScope thisArg = JSC::EnsureStillAliveScope(thisObject);
// Check for cached value and return if present
if (JSValue cachedValue = thisObject->m_encoding.get())
return JSValue::encode(cachedValue);
// Get value from Zig implementation
JSC::JSValue result = JSC::JSValue::decode(
TextDecoderPrototype__getEncoding(thisObject->wrapped(), globalObject)
);
RETURN_IF_EXCEPTION(throwScope, {});
// Store in cache for future access
thisObject->m_encoding.set(vm, thisObject, result);
RELEASE_AND_RETURN(throwScope, JSValue::encode(result));
}
```
### Zig Accessor Functions
For each cached property, the generator creates Zig accessor functions that allow Zig code to work with these GC-owned values:
```zig
// External function declarations
extern fn TextDecoderPrototype__encodingSetCachedValue(JSC.JSValue, *JSC.JSGlobalObject, JSC.JSValue) callconv(JSC.conv) void;
extern fn TextDecoderPrototype__encodingGetCachedValue(JSC.JSValue) callconv(JSC.conv) JSC.JSValue;
/// `TextDecoder.encoding` setter
/// This value will be visited by the garbage collector.
pub fn encodingSetCached(thisValue: JSC.JSValue, globalObject: *JSC.JSGlobalObject, value: JSC.JSValue) void {
JSC.markBinding(@src());
TextDecoderPrototype__encodingSetCachedValue(thisValue, globalObject, value);
}
/// `TextDecoder.encoding` getter
/// This value will be visited by the garbage collector.
pub fn encodingGetCached(thisValue: JSC.JSValue) ?JSC.JSValue {
JSC.markBinding(@src());
const result = TextDecoderPrototype__encodingGetCachedValue(thisValue);
if (result == .zero)
return null;
return result;
}
```
### Benefits of GC-Owned Values
This system provides several key benefits:
1. **Automatic Memory Management**: The JavaScriptCore GC tracks and manages these values
2. **Proper Garbage Collection**: The WriteBarrier ensures values are properly visited during GC
3. **Consistent Access**: Zig code can easily get/set these cached JS values
4. **Performance**: Cached values avoid repeated computation or serialization
### Use Cases
GC-owned cached values are particularly useful for:
1. **Computed Properties**: Store expensive computation results
2. **Lazily Created Objects**: Create objects only when needed, then cache them
3. **References to Other Objects**: Store references to other JS objects that need GC tracking
4. **Memoization**: Cache results based on input parameters
The WriteBarrier mechanism ensures that any JS values stored in this way are properly tracked by the garbage collector.
## Memory Management and Finalization
The binding system handles memory management across the JavaScript/Zig boundary:
1. **Object Creation**: JavaScript `new TextDecoder()` creates both a JS wrapper and a Zig struct
2. **Reference Tracking**: JSC's GC tracks all JS references to the object
3. **Finalization**: When the JS object is collected, the finalizer releases Zig resources
Bun uses a consistent pattern for resource cleanup:
```zig
// Resource cleanup method - separate from finalization
pub fn deinit(this: *TextDecoder) void {
// Release resources like strings
this._encoding.deref(); // String deref pattern
// Free any buffers
if (this.buffer) |buffer| {
bun.default_allocator.free(buffer);
}
}
// Called by the GC when object is collected
pub fn finalize(this: *TextDecoder) void {
JSC.markBinding(@src()); // For debugging
this.deinit(); // Clean up resources
bun.default_allocator.destroy(this); // Free the object itself
}
```
Some objects that hold references to other JS objects use `.deref()` instead:
```zig
pub fn finalize(this: *SocketAddress) void {
JSC.markBinding(@src());
this._presentation.deref(); // Release references
this.destroy();
}
```
## Error Handling with JSError
Bun uses `bun.JSError!JSValue` return type for proper error handling:
```zig
pub fn decode(
this: *TextDecoder,
globalObject: *JSGlobalObject,
callFrame: *JSC.CallFrame
) bun.JSError!JSC.JSValue {
// Throwing an error
if (callFrame.argumentCount() < 1) {
return globalObject.throw("Missing required argument", .{});
}
// Or returning a success value
return JSC.JSValue.jsString(globalObject, "Success!");
}
```
This pattern allows Zig functions to:
1. Return JavaScript values on success
2. Throw JavaScript exceptions on error
3. Propagate errors automatically through the call stack
## Type Safety and Error Handling
The binding system includes robust error handling:
```cpp
// Example of type checking in generated code
JSTextDecoder* thisObject = jsDynamicCast<JSTextDecoder*>(callFrame->thisValue());
if (UNLIKELY(!thisObject)) {
scope.throwException(lexicalGlobalObject,
Bun::createInvalidThisError(lexicalGlobalObject, callFrame->thisValue(), "TextDecoder"_s));
return {};
}
```
## Prototypal Inheritance
The binding system creates proper JavaScript prototype chains:
1. **Constructor**: JSTextDecoderConstructor with standard .prototype property
2. **Prototype**: JSTextDecoderPrototype with methods and properties
3. **Instances**: Each JSTextDecoder instance with `__proto__` pointing to prototype
This ensures JavaScript inheritance works as expected:
```cpp
// From generated code
void JSTextDecoderConstructor::finishCreation(VM& vm, JSC::JSGlobalObject* globalObject, JSTextDecoderPrototype* prototype)
{
Base::finishCreation(vm, 0, "TextDecoder"_s, PropertyAdditionMode::WithoutStructureTransition);
// Set up the prototype chain
putDirectWithoutTransition(vm, vm.propertyNames->prototype, prototype, PropertyAttribute::DontEnum | PropertyAttribute::DontDelete | PropertyAttribute::ReadOnly);
ASSERT(inherits(info()));
}
```
## Performance Considerations
The binding system is optimized for performance:
1. **Direct Pointer Access**: JavaScript objects maintain a direct pointer to Zig objects
2. **Property Caching**: WriteBarrier caching avoids repeated native calls for stable properties
3. **Memory Management**: JSC garbage collection integrated with Zig memory management
4. **Type Conversion**: Fast paths for common JavaScript/Zig type conversions
## Creating a New Class Binding
To create a new class binding in Bun:
1. **Define the class interface** in a `.classes.ts` file:
```typescript
define({
name: "MyClass",
constructor: true,
finalize: true,
proto: {
myMethod: {
args: 1,
},
myProperty: {
getter: true,
cache: true,
},
},
});
```
2. **Implement the native functionality** in a `.zig` file:
```zig
pub const MyClass = struct {
// Generated bindings
pub const js = JSC.Codegen.JSMyClass;
pub const toJS = js.toJS;
pub const fromJS = js.fromJS;
pub const fromJSDirect = js.fromJSDirect;
// State
value: []const u8,
pub const new = bun.TrivialNew(@This());
// Constructor
pub fn constructor(
globalObject: *JSGlobalObject,
callFrame: *JSC.CallFrame,
) bun.JSError!*MyClass {
const arg = callFrame.argument(0);
// Implementation
}
// Method
pub fn myMethod(
this: *MyClass,
globalObject: *JSGlobalObject,
callFrame: *JSC.CallFrame,
) bun.JSError!JSC.JSValue {
// Implementation
}
// Getter
pub fn getMyProperty(this: *MyClass, globalObject: *JSGlobalObject) JSC.JSValue {
return JSC.JSValue.jsString(globalObject, this.value);
}
// Resource cleanup
pub fn deinit(this: *MyClass) void {
// Clean up resources
}
pub fn finalize(this: *MyClass) void {
this.deinit();
bun.destroy(this);
}
};
```
3. **The binding generator** creates all necessary C++ and Zig glue code to connect JavaScript and Zig, including:
- C++ class definitions
- Method and property bindings
- Memory management utilities
- GC integration code
## Generated Code Structure
The binding generator produces several components:
### 1. C++ Classes
For each Zig class, the system generates:
- **JS<Class>**: Main wrapper that holds a pointer to the Zig object (`JSTextDecoder`)
- **JS<Class>Prototype**: Contains methods and properties (`JSTextDecoderPrototype`)
- **JS<Class>Constructor**: Implementation of the JavaScript constructor (`JSTextDecoderConstructor`)
### 2. C++ Methods and Properties
- **Method Callbacks**: `TextDecoderPrototype__decodeCallback`
- **Property Getters/Setters**: `TextDecoderPrototype__encodingGetterWrap`
- **Initialization Functions**: `finishCreation` methods for setting up the class
### 3. Zig Bindings
- **External Function Declarations**:
```zig
extern fn TextDecoderPrototype__decode(*TextDecoder, *JSC.JSGlobalObject, *JSC.CallFrame) callconv(JSC.conv) JSC.EncodedJSValue;
```
- **Cached Value Accessors**:
```zig
pub fn encodingGetCached(thisValue: JSC.JSValue) ?JSC.JSValue { ... }
pub fn encodingSetCached(thisValue: JSC.JSValue, globalObject: *JSC.JSGlobalObject, value: JSC.JSValue) void { ... }
```
- **Constructor Helpers**:
```zig
pub fn create(globalObject: *JSC.JSGlobalObject) bun.JSError!JSC.JSValue { ... }
```
### 4. GC Integration
- **Memory Cost Calculation**: `estimatedSize` method
- **Child Visitor Methods**: `visitChildrenImpl` and `visitAdditionalChildren`
- **Heap Analysis**: `analyzeHeap` for debugging memory issues
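For a class with cached properties, the generated visitor looks roughly like this (simplified sketch):
```cpp
template<typename Visitor>
void JSTextDecoder::visitChildrenImpl(JSCell* cell, Visitor& visitor)
{
    auto* thisObject = jsCast<JSTextDecoder*>(cell);
    ASSERT_GC_OBJECT_INHERITS(thisObject, info());
    Base::visitChildren(thisObject, visitor);
    // Keep the cached `encoding` value alive across collections
    visitor.append(thisObject->m_encoding);
}

DEFINE_VISIT_CHILDREN(JSTextDecoder);
```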
This architecture makes it possible to implement high-performance native functionality in Zig while exposing a clean, idiomatic JavaScript API to users.

1
.gitattributes vendored
View File

@@ -16,6 +16,7 @@
*.map text eol=lf whitespace=blank-at-eol,-blank-at-eof,-space-before-tab,tab-in-indent,tabwidth=2
*.md text eol=lf whitespace=blank-at-eol,-blank-at-eof,-space-before-tab,tab-in-indent,tabwidth=2
*.mdc text eol=lf whitespace=blank-at-eol,-blank-at-eof,-space-before-tab,tab-in-indent,tabwidth=2
*.mdx text eol=lf whitespace=blank-at-eol,-blank-at-eof,-space-before-tab,tab-in-indent,tabwidth=2
*.mjs text eol=lf whitespace=blank-at-eol,-blank-at-eof,-space-before-tab,tab-in-indent,tabwidth=2
*.mts text eol=lf whitespace=blank-at-eol,-blank-at-eof,-space-before-tab,tab-in-indent,tabwidth=2

View File

@@ -1,66 +0,0 @@
name: Claude Code
on:
issue_comment:
types: [created]
pull_request_review_comment:
types: [created]
issues:
types: [opened, assigned]
pull_request_review:
types: [submitted]
jobs:
claude:
if: |
github.repository == 'oven-sh/bun' &&
(
(github.event_name == 'issue_comment' && (github.event.comment.author_association == 'MEMBER' || github.event.comment.author_association == 'OWNER' || github.event.comment.author_association == 'COLLABORATOR')) ||
(github.event_name == 'pull_request_review_comment' && (github.event.comment.author_association == 'MEMBER' || github.event.comment.author_association == 'OWNER' || github.event.comment.author_association == 'COLLABORATOR')) ||
(github.event_name == 'pull_request_review' && (github.event.review.author_association == 'MEMBER' || github.event.review.author_association == 'OWNER' || github.event.review.author_association == 'COLLABORATOR')) ||
(github.event_name == 'issues' && (github.event.issue.author_association == 'MEMBER' || github.event.issue.author_association == 'OWNER' || github.event.issue.author_association == 'COLLABORATOR'))
) &&
(github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
(github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
(github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
(github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
runs-on: claude
env:
IS_SANDBOX: 1
container:
image: localhost:5000/claude-bun:latest
options: --privileged --user 1000:1000
permissions:
contents: read
id-token: write
steps:
- name: Checkout repository
working-directory: /workspace/bun
run: |
git config --global user.email "claude-bot@bun.sh" && \
git config --global user.name "Claude Bot" && \
git config --global url."git@github.com:".insteadOf "https://github.com/" && \
git config --global url."git@github.com:".insteadOf "http://github.com/" && \
git config --global --add safe.directory /workspace/bun && \
git config --global push.default current && \
git config --global pull.rebase true && \
git config --global init.defaultBranch main && \
git config --global core.editor "vim" && \
git config --global color.ui auto && \
git config --global fetch.prune true && \
git config --global diff.colorMoved zebra && \
git config --global merge.conflictStyle diff3 && \
git config --global rerere.enabled true && \
git config --global core.autocrlf input
git fetch origin ${{ github.event.pull_request.head.sha }}
git checkout ${{ github.event.pull_request.head.ref }}
git reset --hard origin/${{ github.event.pull_request.head.ref }}
- name: Run Claude Code
id: claude
uses: anthropics/claude-code-action@v1
with:
timeout_minutes: "180"
claude_args: |
--dangerously-skip-permissions
--system-prompt "You are working on the Bun codebase"
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}

View File

@@ -1,58 +0,0 @@
name: Codex Test Sync
on:
pull_request:
types: [labeled, opened]
env:
BUN_VERSION: "1.2.15"
jobs:
sync-node-tests:
runs-on: ubuntu-latest
if: |
(github.event.action == 'labeled' && github.event.label.name == 'codex') ||
(github.event.action == 'opened' && contains(github.event.pull_request.labels.*.name, 'codex')) ||
contains(github.head_ref, 'codex')
permissions:
contents: write
pull-requests: write
steps:
- name: Checkout
uses: actions/checkout@v4
with:
token: ${{ secrets.GITHUB_TOKEN }}
fetch-depth: 0
- name: Setup Bun
uses: ./.github/actions/setup-bun
with:
bun-version: ${{ env.BUN_VERSION }}
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v44
with:
files: |
test/js/node/test/parallel/**/*.{js,mjs,ts}
test/js/node/test/sequential/**/*.{js,mjs,ts}
- name: Sync tests
if: steps.changed-files.outputs.any_changed == 'true'
shell: bash
run: |
echo "Changed test files:"
echo "${{ steps.changed-files.outputs.all_changed_files }}"
# Process each changed test file
for file in ${{ steps.changed-files.outputs.all_changed_files }}; do
# Extract test name from file path
test_name=$(basename "$file" | sed 's/\.[^.]*$//')
echo "Syncing test: $test_name"
bun node:test:cp "$test_name"
done
- name: Commit changes
uses: stefanzweifel/git-auto-commit-action@v5
with:
commit_message: "Sync Node.js tests with upstream"

View File

@@ -9,7 +9,7 @@ on:
pull_request:
merge_group:
env:
BUN_VERSION: "1.2.20"
BUN_VERSION: "1.3.2"
LLVM_VERSION: "19.1.7"
LLVM_VERSION_MAJOR: "19"

2
.gitignore vendored
View File

@@ -1,4 +1,5 @@
.claude/settings.local.json
.direnv
.DS_Store
.env
.envrc
@@ -9,6 +10,7 @@
.ninja_deps
.ninja_log
.npm
.npmrc
.npm.gz
.parcel-cache
.swcrc

View File

@@ -9,3 +9,6 @@ test/snippets
test/js/node/test
test/napi/node-napi-tests
bun.lock
# the output codeblocks need to stay minified
docs/bundler/minifier.mdx

16
.vscode/settings.json vendored
View File

@@ -27,18 +27,22 @@
"git.ignoreLimitWarning": true,
// Zig
"zig.initialSetupDone": true,
"zig.buildOption": "build",
// "zig.initialSetupDone": true,
// "zig.buildOption": "build",
"zig.zls.zigLibPath": "${workspaceFolder}/vendor/zig/lib",
"zig.buildArgs": ["-Dgenerated-code=./build/debug/codegen", "--watch", "-fincremental"],
"zig.zls.buildOnSaveStep": "check",
"zig.buildOnSaveArgs": [
"-Dgenerated-code=./build/debug/codegen",
"--watch",
"-fincremental"
],
// "zig.zls.buildOnSaveStep": "check",
// "zig.zls.enableBuildOnSave": true,
// "zig.buildOnSave": true,
"zig.buildFilePath": "${workspaceFolder}/build.zig",
// "zig.buildFilePath": "${workspaceFolder}/build.zig",
"zig.path": "${workspaceFolder}/vendor/zig/zig.exe",
"zig.zls.path": "${workspaceFolder}/vendor/zig/zls.exe",
"zig.formattingProvider": "zls",
"zig.zls.enableInlayHints": false,
// "zig.zls.enableInlayHints": false,
"[zig]": {
"editor.tabSize": 4,
"editor.useTabStops": false,

View File

@@ -6,10 +6,12 @@ This is the Bun repository - an all-in-one JavaScript runtime & toolkit designed
- **Build Bun**: `bun bd`
- Creates a debug build at `./build/debug/bun-debug`
- **CRITICAL**: no need for a timeout, the build is really fast!
- **CRITICAL**: do not set a timeout when running `bun bd`
- **Run tests with your debug build**: `bun bd test <test-file>`
- **CRITICAL**: Never use `bun test` directly - it won't include your changes
- **Run any command with debug build**: `bun bd <command>`
- **Run with JavaScript exception scope verification**: `BUN_JSC_validateExceptionChecks=1 BUN_JSC_dumpSimulatedThrows=1 bun bd <command>`
Tip: Bun is already installed and in $PATH. The `bd` subcommand is a package.json script.
@@ -94,7 +96,7 @@ test("(multi-file test) my feature", async () => {
- Always use `port: 0`. Do not hardcode ports. Do not use your own random port number function.
- Use `normalizeBunSnapshot` to normalize snapshot output of the test.
- NEVER write tests that check for no "panic" or "uncaught exception" or similar in the test output. That is NOT a valid test.
- NEVER write tests that check for no "panic" or "uncaught exception" or similar in the test output. These tests will never fail in CI.
- Use `tempDir` from `"harness"` to create a temporary directory. **Do not** use `tmpdirSync` or `fs.mkdtempSync` to create temporary directories.
- When spawning processes, tests should expect(stdout).toBe(...) BEFORE expect(exitCode).toBe(0). This gives you a more useful error message on test failure.
- **CRITICAL**: Do not write flaky tests. Do not use `setTimeout` in tests. Instead, `await` the condition to be met. You are not testing the TIME PASSING, you are testing the CONDITION.
@@ -209,3 +211,24 @@ Built-in JavaScript modules use special syntax and are organized as:
12. **Branch names must start with `claude/`** - This is a requirement for the CI to work.
**ONLY** push up changes after running `bun bd test <file>` and ensuring your tests pass.
## Debugging CI Failures
Use `scripts/buildkite-failures.ts` to fetch and analyze CI build failures:
```bash
# View failures for current branch
bun run scripts/buildkite-failures.ts
# View failures for a specific build number
bun run scripts/buildkite-failures.ts 35051
# View failures for a GitHub PR
bun run scripts/buildkite-failures.ts #26173
bun run scripts/buildkite-failures.ts https://github.com/oven-sh/bun/pull/26173
# Wait for build to complete (polls every 10s until pass/fail)
bun run scripts/buildkite-failures.ts --wait
```
The script fetches logs from BuildKite's public API and saves complete logs to `/tmp/bun-build-{number}-{platform}-{step}.log`. It displays a summary of errors and the file path for each failed job. Use `--wait` to poll continuously until the build completes or fails.

View File

@@ -25,16 +25,6 @@ if(CMAKE_HOST_APPLE)
endif()
include(SetupLLVM)
find_program(SCCACHE_PROGRAM sccache)
if(SCCACHE_PROGRAM AND NOT DEFINED ENV{NO_SCCACHE})
include(SetupSccache)
else()
find_program(CCACHE_PROGRAM ccache)
if(CCACHE_PROGRAM)
include(SetupCcache)
endif()
endif()
# --- Project ---
parse_package_json(VERSION_VARIABLE DEFAULT_VERSION)
@@ -57,6 +47,8 @@ include(SetupEsbuild)
include(SetupZig)
include(SetupRust)
include(SetupCcache)
# Generate dependency versions header
include(GenerateDependencyVersions)

View File

@@ -23,7 +23,7 @@ Using your system's package manager, install Bun's dependencies:
{% codetabs group="os" %}
```bash#macOS (Homebrew)
$ brew install automake cmake coreutils gnu-sed go icu4c libiconv libtool ninja pkg-config rust ruby sccache
$ brew install automake ccache cmake coreutils gnu-sed go icu4c libiconv libtool ninja pkg-config rust ruby
```
```bash#Ubuntu/Debian
@@ -65,43 +65,28 @@ $ brew install bun
{% /codetabs %}
### Optional: Install `sccache`
### Optional: Install `ccache`
sccache is used to cache compilation artifacts, significantly speeding up builds. It must be installed with S3 support:
ccache is used to cache compilation artifacts, significantly speeding up builds:
```bash
# For macOS
$ brew install sccache
$ brew install ccache
# For Linux. Note that the version in your package manager may not have S3 support.
$ cargo install sccache --features=s3
# For Ubuntu/Debian
$ sudo apt install ccache
# For Arch
$ sudo pacman -S ccache
# For Fedora
$ sudo dnf install ccache
# For openSUSE
$ sudo zypper install ccache
```
This will install `sccache` with S3 support. Our build scripts will automatically detect and use `sccache` with our shared S3 cache. **Note**: Not all versions of `sccache` are compiled with S3 support, hence we recommend installing it via `cargo`.
#### Registering AWS Credentials for `sccache` (Core Developers Only)
Core developers have write access to the shared S3 cache. To enable write access, you must log in with AWS credentials. The easiest way to do this is to use the [`aws` CLI](https://aws.amazon.com/cli/) and invoke [`aws configure` to provide your AWS security info](https://docs.aws.amazon.com/cli/latest/reference/configure/).
The `cmake` scripts should automatically detect your AWS credentials from the environment or the `~/.aws/credentials` file.
<details>
<summary>Logging in to the `aws` CLI</summary>
1. Install the AWS CLI by following [the official guide](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
2. Log in to your AWS account console. A team member should provide you with your credentials.
3. Click your name in the top right > Security credentials.
4. Scroll to "Access keys" and create a new access key.
5. Run `aws configure` in your terminal and provide the access key ID and secret access key when prompted.
</details>
<details>
<summary>Common Issues You May Encounter</summary>
- To confirm that the cache is being used, you can use the `sccache --show-stats` command right after a build. This will expose very useful statistics, including cache hits/misses.
- If you have multiple AWS profiles configured, ensure that the correct profile is set in the `AWS_PROFILE` environment variable.
- `sccache` follows a server-client model. If you run into weird issues where `sccache` refuses to use S3, even though you have AWS credentials configured, try killing any running `sccache` servers with `sccache --stop-server` and then re-running the build.
</details>
Our build scripts will automatically detect and use `ccache` if available. You can check cache statistics with `ccache --show-stats`.
## Install LLVM
@@ -201,7 +186,7 @@ Bun generally takes about 2.5 minutes to compile a debug build when there are Zi
- Batch up your changes
- Ensure zls is running with incremental watching for LSP errors (if you use VSCode and install Zig and run `bun run build` once to download Zig, this should just work)
- Prefer using the debugger ("CodeLLDB" in VSCode) to step through the code.
- Use debug logs. `BUN_DEBUG_<scope>=1` will enable debug logging for the corresponding `Output.scoped(.<scope>, .hidden)` logs. You can also set `BUN_DEBUG_QUIET_LOGS=1` to disable all debug logging that isn't explicitly enabled. To dump debug lgos into a file, `BUN_DEBUG=<path-to-file>.log`. Debug logs are aggressively removed in release builds.
- Use debug logs. `BUN_DEBUG_<scope>=1` will enable debug logging for the corresponding `Output.scoped(.<scope>, .hidden)` logs. You can also set `BUN_DEBUG_QUIET_LOGS=1` to disable all debug logging that isn't explicitly enabled. To dump debug logs into a file, `BUN_DEBUG=<path-to-file>.log`. Debug logs are aggressively removed in release builds.
- src/js/\*\*.ts changes are pretty much instant to rebuild. C++ changes are a bit slower, but still much faster than the Zig code (Zig is one compilation unit, C++ is many).
## Code generation scripts

2
LATEST
View File

@@ -1 +1 @@
1.3.1
1.3.6

View File

@@ -54,7 +54,7 @@ Bun supports Linux (x64 & arm64), macOS (x64 & Apple Silicon) and Windows (x64).
curl -fsSL https://bun.com/install | bash
# on windows
powershell -c "irm bun.com/install.ps1 | iex"
powershell -c "irm bun.sh/install.ps1 | iex"
# with npm
npm install -g bun
@@ -104,13 +104,13 @@ bun upgrade --canary
- [File types (Loaders)](https://bun.com/docs/runtime/loaders)
- [TypeScript](https://bun.com/docs/runtime/typescript)
- [JSX](https://bun.com/docs/runtime/jsx)
- [Environment variables](https://bun.com/docs/runtime/env)
- [Environment variables](https://bun.com/docs/runtime/environment-variables)
- [Bun APIs](https://bun.com/docs/runtime/bun-apis)
- [Web APIs](https://bun.com/docs/runtime/web-apis)
- [Node.js compatibility](https://bun.com/docs/runtime/nodejs-apis)
- [Node.js compatibility](https://bun.com/docs/runtime/nodejs-compat)
- [Single-file executable](https://bun.com/docs/bundler/executables)
- [Plugins](https://bun.com/docs/runtime/plugins)
- [Watch mode / Hot Reloading](https://bun.com/docs/runtime/hot)
- [Watch mode / Hot Reloading](https://bun.com/docs/runtime/watch-mode)
- [Module resolution](https://bun.com/docs/runtime/modules)
- [Auto-install](https://bun.com/docs/runtime/autoimport)
- [bunfig.toml](https://bun.com/docs/runtime/bunfig)
@@ -230,7 +230,7 @@ bun upgrade --canary
- Ecosystem
- [Use React and JSX](https://bun.com/guides/ecosystem/react)
- [Use EdgeDB with Bun](https://bun.com/guides/ecosystem/edgedb)
- [Use Gel with Bun](https://bun.com/guides/ecosystem/gel)
- [Use Prisma with Bun](https://bun.com/guides/ecosystem/prisma)
- [Add Sentry to a Bun app](https://bun.com/guides/ecosystem/sentry)
- [Create a Discord bot](https://bun.com/guides/ecosystem/discordjs)

View File

@@ -1,5 +1,6 @@
{
"lockfileVersion": 1,
"configVersion": 0,
"workspaces": {
"": {
"name": "bench",
@@ -22,7 +23,9 @@
"react-dom": "^18.3.1",
"string-width": "7.1.0",
"strip-ansi": "^7.1.0",
"tar": "^7.4.3",
"tinycolor2": "^1.6.0",
"wrap-ansi": "^9.0.0",
"zx": "^7.2.3",
},
"devDependencies": {
@@ -107,6 +110,8 @@
"@fastify/proxy-addr": ["@fastify/proxy-addr@5.0.0", "", { "dependencies": { "@fastify/forwarded": "^3.0.0", "ipaddr.js": "^2.1.0" } }, "sha512-37qVVA1qZ5sgH7KpHkkC4z9SK6StIsIcOmpjvMPXNb3vx2GQxhZocogVYbr2PbbeLCQxYIPDok307xEvRZOzGA=="],
"@isaacs/fs-minipass": ["@isaacs/fs-minipass@4.0.1", "", { "dependencies": { "minipass": "^7.0.4" } }, "sha512-wgm9Ehl2jpeqP3zw/7mo3kRHFp5MEDhqAdwy1fTGkHAwnkGOVsgpvQhL8B5n1qlb01jV3n/bI0ZfZp5lWA1k4w=="],
"@jridgewell/gen-mapping": ["@jridgewell/gen-mapping@0.1.1", "", { "dependencies": { "@jridgewell/set-array": "^1.0.0", "@jridgewell/sourcemap-codec": "^1.4.10" } }, "sha512-sQXCasFk+U8lWYEe66WxRDOE9PjVz4vSM51fTu3Hw+ClTpUSQb718772vH3pyS5pShp6lvQM7SxgIDXXXmOX7w=="],
"@jridgewell/resolve-uri": ["@jridgewell/resolve-uri@3.1.0", "", {}, "sha512-F2msla3tad+Mfht5cJq7LSXcdudKTWCVYUgw6pLFOOHSTtZlj6SWNYAp+AhuqLmWdBO2X5hPrLcu8cVP8fy28w=="],
@@ -165,7 +170,7 @@
"ansi-regex": ["ansi-regex@6.0.1", "", {}, "sha512-n5M855fKb2SsfMIiFFoVrABHJC8QtHwVx+mHWP3QcEqBHYienj5dHSgjbxtC0WEZXYt4wcD6zrQElDPhFuZgfA=="],
"ansi-styles": ["ansi-styles@3.2.1", "", { "dependencies": { "color-convert": "^1.9.0" } }, "sha512-VT0ZI6kZRdTh8YyJw3SMbYm/u+NqfsAxEpWO0Pf9sq8/e94WxxOpPKx9FR1FlyCtOVDNOQ+8ntlqFxiRc+r5qA=="],
"ansi-styles": ["ansi-styles@6.2.3", "https://artifactory.infra.ant.dev:443/artifactory/api/npm/npm-all/ansi-styles/-/ansi-styles-6.2.3.tgz", {}, "sha512-4Dj6M28JB+oAH8kFkTLUo+a2jwOFkuqb3yucU0CANcRRUbxS0cP0nZYCGjcc3BNXwRIsUVmDGgzawme7zvJHvg=="],
"atomic-sleep": ["atomic-sleep@1.0.0", "", {}, "sha512-kNOjDqAh7px0XWNI+4QbzoiR/nTkHAWNud2uvnJquD1/x5a7EQZMJT0AczqK0Qn67oY/TTQ1LbUKajZpp3I9tQ=="],
@@ -181,6 +186,8 @@
"chalk": ["chalk@5.3.0", "", {}, "sha512-dLitG79d+GV1Nb/VYcCDFivJeK1hiukt9QjRNVOsUtTy1rR1YJsmpGGTZ3qJos+uw7WmWF4wUwBd9jxjocFC2w=="],
"chownr": ["chownr@3.0.0", "", {}, "sha512-+IxzY9BZOQd/XuYPRmrvEVjF/nqj5kgT4kEq7VofrDoM1MxoRjEWkrCC3EtLi59TVawxTAn+orJwFQcrqEN1+g=="],
"color": ["color@4.2.3", "", { "dependencies": { "color-convert": "^2.0.1", "color-string": "^1.9.0" } }, "sha512-1rXeuUUiGGrykh+CeBdu5Ie7OJwinCgQY0bc7GCRxy5xVHy+moaqkpL/jqQq0MtQOeYcrqEz4abc5f0KtU7W4A=="],
"color-convert": ["color-convert@2.0.1", "", { "dependencies": { "color-name": "~1.1.4" } }, "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ=="],
@@ -361,6 +368,10 @@
"minimist": ["minimist@1.2.8", "", {}, "sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA=="],
"minipass": ["minipass@7.1.2", "", {}, "sha512-qOOzS1cBTWYF4BH8fVePDBOO9iptMnGUEZwNc/cMWnTV2nVLZ7VoNWEPHkYczZA0pdoA7dl6e7FL659nX9S2aw=="],
"minizlib": ["minizlib@3.1.0", "", { "dependencies": { "minipass": "^7.1.2" } }, "sha512-KZxYo1BUkWD2TVFLr0MQoM8vUUigWD3LlD83a/75BqC+4qE0Hb1Vo5v1FgcfaNXvfXzr+5EhQ6ing/CaBijTlw=="],
"mitata": ["mitata@1.0.25", "", {}, "sha512-0v5qZtVW5vwj9FDvYfraR31BMDcRLkhSFWPTLaxx/Z3/EvScfVtAAWtMI2ArIbBcwh7P86dXh0lQWKiXQPlwYA=="],
"ms": ["ms@2.1.2", "", {}, "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w=="],
@@ -457,6 +468,8 @@
"supports-color": ["supports-color@5.5.0", "", { "dependencies": { "has-flag": "^3.0.0" } }, "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow=="],
"tar": ["tar@7.5.2", "", { "dependencies": { "@isaacs/fs-minipass": "^4.0.0", "chownr": "^3.0.0", "minipass": "^7.1.2", "minizlib": "^3.1.0", "yallist": "^5.0.0" } }, "sha512-7NyxrTE4Anh8km8iEy7o0QYPs+0JKBTj5ZaqHg6B39erLg0qYXN3BijtShwbsNSvQ+LN75+KV+C4QR/f6Gwnpg=="],
"thread-stream": ["thread-stream@3.1.0", "", { "dependencies": { "real-require": "^0.2.0" } }, "sha512-OqyPZ9u96VohAyMfJykzmivOrY2wfMSf3C5TtFJVgN+Hm6aj+voFhlK+kZEIv2FBh1X6Xp3DlnCOfEQ3B2J86A=="],
"through": ["through@2.3.8", "", {}, "sha512-w89qg7PI8wAdvX60bMDP+bFoD5Dvhm9oLheFp5O4a2QF0cSBGsBX4qZmadPMvVqlLJBBci+WqGGOAPvcDeNSVg=="],
@@ -481,7 +494,9 @@
"which": ["which@3.0.1", "", { "dependencies": { "isexe": "^2.0.0" }, "bin": { "node-which": "bin/which.js" } }, "sha512-XA1b62dzQzLfaEOSQFTCOd5KFf/1VSzZo7/7TUjnya6u0vGGKzU96UQBZTAThCb2j4/xjBAyii1OhRLJEivHvg=="],
"yallist": ["yallist@3.1.1", "", {}, "sha512-a4UGQaWPH59mOXUYnAG2ewncQS4i4F43Tv3JoAM+s2VDAmS9NsK8GpDMLrCHPksFT7h3K6TOoUNn2pb7RoXx4g=="],
"wrap-ansi": ["wrap-ansi@9.0.2", "https://artifactory.infra.ant.dev:443/artifactory/api/npm/npm-all/wrap-ansi/-/wrap-ansi-9.0.2.tgz", { "dependencies": { "ansi-styles": "^6.2.1", "string-width": "^7.0.0", "strip-ansi": "^7.1.0" } }, "sha512-42AtmgqjV+X1VpdOfyTGOYRi0/zsoLqtXQckTmqTeybT+BDIbM/Guxo7x3pE2vtpr1ok6xRqM9OpBe+Jyoqyww=="],
"yallist": ["yallist@5.0.0", "", {}, "sha512-YgvUTfwqyc7UXVMrB+SImsVYSmTS8X/tSrtdNZMImM+n7+QTriRXyXim0mBrTXNeqzVF0KWGgHPeiyViFFrNDw=="],
"yaml": ["yaml@2.3.4", "", {}, "sha512-8aAvwVUSHpfEqTQ4w/KMlf3HcRdt50E5ODIQJBw1fQ5RL34xabzxtUlzTXVqc4rkZsPbvrXKWnABCD7kWSmocA=="],
@@ -491,8 +506,6 @@
"@babel/highlight/chalk": ["chalk@2.4.2", "", { "dependencies": { "ansi-styles": "^3.2.1", "escape-string-regexp": "^1.0.5", "supports-color": "^5.3.0" } }, "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ=="],
"ansi-styles/color-convert": ["color-convert@1.9.3", "", { "dependencies": { "color-name": "1.1.3" } }, "sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg=="],
"avvio/fastq": ["fastq@1.19.1", "", { "dependencies": { "reusify": "^1.0.4" } }, "sha512-GwLTyxkCXjXbxqIhTsMI2Nui8huMPtnxg7krajPJAjnEG/iiOS7i+zCtWGZR9G0NBKbXKh6X9m9UIsYX/N6vvQ=="],
"cross-spawn/which": ["which@2.0.2", "", { "dependencies": { "isexe": "^2.0.0" }, "bin": { "node-which": "./bin/node-which" } }, "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA=="],
@@ -501,8 +514,14 @@
"light-my-request/process-warning": ["process-warning@4.0.1", "", {}, "sha512-3c2LzQ3rY9d0hc1emcsHhfT9Jwz0cChib/QN89oME2R451w5fy3f0afAhERFZAwrbDU43wk12d0ORBpDVME50Q=="],
"lru-cache/yallist": ["yallist@3.1.1", "", {}, "sha512-a4UGQaWPH59mOXUYnAG2ewncQS4i4F43Tv3JoAM+s2VDAmS9NsK8GpDMLrCHPksFT7h3K6TOoUNn2pb7RoXx4g=="],
"npm-run-path/path-key": ["path-key@4.0.0", "", {}, "sha512-haREypq7xkM7ErfgIyA0z+Bj4AGKlMSdlQE2jvJo6huWD1EdkKYV+G/T4nq0YEF2vgTT8kqMFKo1uHn950r4SQ=="],
"ansi-styles/color-convert/color-name": ["color-name@1.1.3", "", {}, "sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw=="],
"@babel/highlight/chalk/ansi-styles": ["ansi-styles@3.2.1", "", { "dependencies": { "color-convert": "^1.9.0" } }, "sha512-VT0ZI6kZRdTh8YyJw3SMbYm/u+NqfsAxEpWO0Pf9sq8/e94WxxOpPKx9FR1FlyCtOVDNOQ+8ntlqFxiRc+r5qA=="],
"@babel/highlight/chalk/ansi-styles/color-convert": ["color-convert@1.9.3", "", { "dependencies": { "color-name": "1.1.3" } }, "sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg=="],
"@babel/highlight/chalk/ansi-styles/color-convert/color-name": ["color-name@1.1.3", "", {}, "sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw=="],
}
}

View File

@@ -1,5 +1,6 @@
{
"lockfileVersion": 1,
"configVersion": 0,
"workspaces": {
"": {
"name": "installbench",
@@ -12,7 +13,7 @@
"@trpc/server": "^11.0.0",
"drizzle-orm": "^0.41.0",
"esbuild": "^0.25.11",
"next": "^15.2.3",
"next": "15.5.7",
"next-auth": "5.0.0-beta.25",
"postgres": "^3.4.4",
"react": "^19.0.0",
@@ -175,23 +176,23 @@
"@jridgewell/trace-mapping": ["@jridgewell/trace-mapping@0.3.31", "", { "dependencies": { "@jridgewell/resolve-uri": "3.1.2", "@jridgewell/sourcemap-codec": "1.5.5" } }, "sha512-zzNR+SdQSDJzc8joaeP8QQoCQr8NuYx2dIIytl1QeBEZHJ9uW6hebsrYgbz8hJwUQao3TWCMtmfV8Nu1twOLAw=="],
"@next/env": ["@next/env@15.5.6", "", {}, "sha512-3qBGRW+sCGzgbpc5TS1a0p7eNxnOarGVQhZxfvTdnV0gFI61lX7QNtQ4V1TSREctXzYn5NetbUsLvyqwLFJM6Q=="],
"@next/env": ["@next/env@15.5.7", "", {}, "sha512-4h6Y2NyEkIEN7Z8YxkA27pq6zTkS09bUSYC0xjd0NpwFxjnIKeZEeH591o5WECSmjpUhLn3H2QLJcDye3Uzcvg=="],
"@next/swc-darwin-arm64": ["@next/swc-darwin-arm64@15.5.6", "", { "os": "darwin", "cpu": "arm64" }, "sha512-ES3nRz7N+L5Umz4KoGfZ4XX6gwHplwPhioVRc25+QNsDa7RtUF/z8wJcbuQ2Tffm5RZwuN2A063eapoJ1u4nPg=="],
"@next/swc-darwin-arm64": ["@next/swc-darwin-arm64@15.5.7", "", { "os": "darwin", "cpu": "arm64" }, "sha512-IZwtxCEpI91HVU/rAUOOobWSZv4P2DeTtNaCdHqLcTJU4wdNXgAySvKa/qJCgR5m6KI8UsKDXtO2B31jcaw1Yw=="],
"@next/swc-darwin-x64": ["@next/swc-darwin-x64@15.5.6", "", { "os": "darwin", "cpu": "x64" }, "sha512-JIGcytAyk9LQp2/nuVZPAtj8uaJ/zZhsKOASTjxDug0SPU9LAM3wy6nPU735M1OqacR4U20LHVF5v5Wnl9ptTA=="],
"@next/swc-darwin-x64": ["@next/swc-darwin-x64@15.5.7", "", { "os": "darwin", "cpu": "x64" }, "sha512-UP6CaDBcqaCBuiq/gfCEJw7sPEoX1aIjZHnBWN9v9qYHQdMKvCKcAVs4OX1vIjeE+tC5EIuwDTVIoXpUes29lg=="],
"@next/swc-linux-arm64-gnu": ["@next/swc-linux-arm64-gnu@15.5.6", "", { "os": "linux", "cpu": "arm64" }, "sha512-qvz4SVKQ0P3/Im9zcS2RmfFL/UCQnsJKJwQSkissbngnB/12c6bZTCB0gHTexz1s6d/mD0+egPKXAIRFVS7hQg=="],
"@next/swc-linux-arm64-gnu": ["@next/swc-linux-arm64-gnu@15.5.7", "", { "os": "linux", "cpu": "arm64" }, "sha512-NCslw3GrNIw7OgmRBxHtdWFQYhexoUCq+0oS2ccjyYLtcn1SzGzeM54jpTFonIMUjNbHmpKpziXnpxhSWLcmBA=="],
"@next/swc-linux-arm64-musl": ["@next/swc-linux-arm64-musl@15.5.6", "", { "os": "linux", "cpu": "arm64" }, "sha512-FsbGVw3SJz1hZlvnWD+T6GFgV9/NYDeLTNQB2MXoPN5u9VA9OEDy6fJEfePfsUKAhJufFbZLgp0cPxMuV6SV0w=="],
"@next/swc-linux-arm64-musl": ["@next/swc-linux-arm64-musl@15.5.7", "", { "os": "linux", "cpu": "arm64" }, "sha512-nfymt+SE5cvtTrG9u1wdoxBr9bVB7mtKTcj0ltRn6gkP/2Nu1zM5ei8rwP9qKQP0Y//umK+TtkKgNtfboBxRrw=="],
"@next/swc-linux-x64-gnu": ["@next/swc-linux-x64-gnu@15.5.6", "", { "os": "linux", "cpu": "x64" }, "sha512-3QnHGFWlnvAgyxFxt2Ny8PTpXtQD7kVEeaFat5oPAHHI192WKYB+VIKZijtHLGdBBvc16tiAkPTDmQNOQ0dyrA=="],
"@next/swc-linux-x64-gnu": ["@next/swc-linux-x64-gnu@15.5.7", "", { "os": "linux", "cpu": "x64" }, "sha512-hvXcZvCaaEbCZcVzcY7E1uXN9xWZfFvkNHwbe/n4OkRhFWrs1J1QV+4U1BN06tXLdaS4DazEGXwgqnu/VMcmqw=="],
"@next/swc-linux-x64-musl": ["@next/swc-linux-x64-musl@15.5.6", "", { "os": "linux", "cpu": "x64" }, "sha512-OsGX148sL+TqMK9YFaPFPoIaJKbFJJxFzkXZljIgA9hjMjdruKht6xDCEv1HLtlLNfkx3c5w2GLKhj7veBQizQ=="],
"@next/swc-linux-x64-musl": ["@next/swc-linux-x64-musl@15.5.7", "", { "os": "linux", "cpu": "x64" }, "sha512-4IUO539b8FmF0odY6/SqANJdgwn1xs1GkPO5doZugwZ3ETF6JUdckk7RGmsfSf7ws8Qb2YB5It33mvNL/0acqA=="],
"@next/swc-win32-arm64-msvc": ["@next/swc-win32-arm64-msvc@15.5.6", "", { "os": "win32", "cpu": "arm64" }, "sha512-ONOMrqWxdzXDJNh2n60H6gGyKed42Ieu6UTVPZteXpuKbLZTH4G4eBMsr5qWgOBA+s7F+uB4OJbZnrkEDnZ5Fg=="],
"@next/swc-win32-arm64-msvc": ["@next/swc-win32-arm64-msvc@15.5.7", "", { "os": "win32", "cpu": "arm64" }, "sha512-CpJVTkYI3ZajQkC5vajM7/ApKJUOlm6uP4BknM3XKvJ7VXAvCqSjSLmM0LKdYzn6nBJVSjdclx8nYJSa3xlTgQ=="],
"@next/swc-win32-x64-msvc": ["@next/swc-win32-x64-msvc@15.5.6", "", { "os": "win32", "cpu": "x64" }, "sha512-pxK4VIjFRx1MY92UycLOOw7dTdvccWsNETQ0kDHkBlcFH1GrTLUjSiHU1ohrznnux6TqRHgv5oflhfIWZwVROQ=="],
"@next/swc-win32-x64-msvc": ["@next/swc-win32-x64-msvc@15.5.7", "", { "os": "win32", "cpu": "x64" }, "sha512-gMzgBX164I6DN+9/PGA+9dQiwmTkE4TloBNx8Kv9UiGARsr9Nba7IpcBRA1iTV9vwlYnrE3Uy6I7Aj6qLjQuqw=="],
"@panva/hkdf": ["@panva/hkdf@1.2.1", "", {}, "sha512-6oclG6Y3PiDFcoyk8srjLfVKyMfVCKJ27JwNPViuXziFpmdz+MZnZN/aKY0JGXgYuO/VghU0jcOAZgWXZ1Dmrw=="],
@@ -323,7 +324,7 @@
"nanoid": ["nanoid@3.3.11", "", { "bin": { "nanoid": "bin/nanoid.cjs" } }, "sha512-N8SpfPUnUp1bK+PMYW8qSWdl9U+wwNWI4QKxOYDy9JAro3WMX7p2OeVRF9v+347pnakNevPmiHhNmZ2HbFA76w=="],
"next": ["next@15.5.6", "", { "dependencies": { "@next/env": "15.5.6", "@swc/helpers": "0.5.15", "caniuse-lite": "1.0.30001752", "postcss": "8.4.31", "styled-jsx": "5.1.6" }, "optionalDependencies": { "@next/swc-darwin-arm64": "15.5.6", "@next/swc-darwin-x64": "15.5.6", "@next/swc-linux-arm64-gnu": "15.5.6", "@next/swc-linux-arm64-musl": "15.5.6", "@next/swc-linux-x64-gnu": "15.5.6", "@next/swc-linux-x64-musl": "15.5.6", "@next/swc-win32-arm64-msvc": "15.5.6", "@next/swc-win32-x64-msvc": "15.5.6", "sharp": "0.34.4" }, "peerDependencies": { "react": "19.2.0", "react-dom": "19.2.0" }, "bin": { "next": "dist/bin/next" } }, "sha512-zTxsnI3LQo3c9HSdSf91O1jMNsEzIXDShXd4wVdg9y5shwLqBXi4ZtUUJyB86KGVSJLZx0PFONvO54aheGX8QQ=="],
"next": ["next@15.5.7", "", { "dependencies": { "@next/env": "15.5.7", "@swc/helpers": "0.5.15", "caniuse-lite": "^1.0.30001579", "postcss": "8.4.31", "styled-jsx": "5.1.6" }, "optionalDependencies": { "@next/swc-darwin-arm64": "15.5.7", "@next/swc-darwin-x64": "15.5.7", "@next/swc-linux-arm64-gnu": "15.5.7", "@next/swc-linux-arm64-musl": "15.5.7", "@next/swc-linux-x64-gnu": "15.5.7", "@next/swc-linux-x64-musl": "15.5.7", "@next/swc-win32-arm64-msvc": "15.5.7", "@next/swc-win32-x64-msvc": "15.5.7", "sharp": "^0.34.3" }, "peerDependencies": { "@opentelemetry/api": "^1.1.0", "@playwright/test": "^1.51.1", "babel-plugin-react-compiler": "*", "react": "^18.2.0 || 19.0.0-rc-de68d2f4-20241204 || ^19.0.0", "react-dom": "^18.2.0 || 19.0.0-rc-de68d2f4-20241204 || ^19.0.0", "sass": "^1.3.0" }, "optionalPeers": ["@opentelemetry/api", "@playwright/test", "babel-plugin-react-compiler", "sass"], "bin": { "next": "dist/bin/next" } }, "sha512-+t2/0jIJ48kUpGKkdlhgkv+zPTEOoXyr60qXe68eB/pl3CMJaLeIGjzp5D6Oqt25hCBiBTt8wEeeAzfJvUKnPQ=="],
"next-auth": ["next-auth@5.0.0-beta.25", "", { "dependencies": { "@auth/core": "0.37.2" }, "peerDependencies": { "next": "15.5.6", "react": "19.2.0" } }, "sha512-2dJJw1sHQl2qxCrRk+KTQbeH+izFbGFPuJj5eGgBZFYyiYYtvlrBeUw1E/OJJxTRjuxbSYGnCTkUIRsIIW0bog=="],

View File

@@ -26,7 +26,7 @@
"@trpc/server": "^11.0.0",
"drizzle-orm": "^0.41.0",
"esbuild": "^0.25.11",
"next": "^15.2.3",
"next": "15.5.7",
"next-auth": "5.0.0-beta.25",
"postgres": "^3.4.4",
"react": "^19.0.0",

View File

@@ -18,7 +18,9 @@
"react": "^18.3.1",
"react-dom": "^18.3.1",
"string-width": "7.1.0",
"wrap-ansi": "^9.0.0",
"strip-ansi": "^7.1.0",
"tar": "^7.4.3",
"tinycolor2": "^1.6.0",
"zx": "^7.2.3"
},

View File

@@ -13,7 +13,5 @@ export function run(opts = {}) {
}
export const bench = Mitata.bench;
export function group(_name, fn) {
return Mitata.group(fn);
}
export const group = Mitata.group;
export const summary = Mitata.summary;

477
bench/snippets/archive.mjs Normal file
View File

@@ -0,0 +1,477 @@
import { mkdirSync, mkdtempSync, rmSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";
import { Pack, Unpack } from "tar";
import { bench, group, run } from "../runner.mjs";
// Check if Bun.Archive is available
const hasBunArchive = typeof Bun !== "undefined" && typeof Bun.Archive !== "undefined";
// Test data sizes
const smallContent = "Hello, World!";
const mediumContent = Buffer.alloc(10 * 1024, "x").toString(); // 10KB
const largeContent = Buffer.alloc(100 * 1024, "x").toString(); // 100KB
// Create test files for node-tar (it reads from filesystem)
const setupDir = mkdtempSync(join(tmpdir(), "archive-bench-setup-"));
function setupNodeTarFiles(prefix, files) {
const dir = join(setupDir, prefix);
mkdirSync(dir, { recursive: true });
for (const [name, content] of Object.entries(files)) {
const filePath = join(dir, name);
const fileDir = join(filePath, "..");
mkdirSync(fileDir, { recursive: true });
writeFileSync(filePath, content);
}
return dir;
}
// Setup directories for different test cases
const smallFilesDir = setupNodeTarFiles("small", {
"file1.txt": smallContent,
"file2.txt": smallContent,
"file3.txt": smallContent,
});
const mediumFilesDir = setupNodeTarFiles("medium", {
"file1.txt": mediumContent,
"file2.txt": mediumContent,
"file3.txt": mediumContent,
});
const largeFilesDir = setupNodeTarFiles("large", {
"file1.txt": largeContent,
"file2.txt": largeContent,
"file3.txt": largeContent,
});
const manyFilesEntries = {};
for (let i = 0; i < 100; i++) {
manyFilesEntries[`file${i}.txt`] = smallContent;
}
const manyFilesDir = setupNodeTarFiles("many", manyFilesEntries);
// Pre-create archives for extraction benchmarks
let smallTarGzBuffer, mediumTarGzBuffer, largeTarGzBuffer, manyFilesTarGzBuffer;
let smallTarBuffer, mediumTarBuffer, largeTarBuffer, manyFilesTarBuffer;
let smallBunArchiveGz, mediumBunArchiveGz, largeBunArchiveGz, manyFilesBunArchiveGz;
let smallBunArchive, mediumBunArchive, largeBunArchive, manyFilesBunArchive;
// Create tar buffer using node-tar (with optional gzip)
async function createNodeTarBuffer(cwd, files, gzip = false) {
return new Promise(resolve => {
const pack = new Pack({ cwd, gzip });
const bufs = [];
pack.on("data", chunk => bufs.push(chunk));
pack.on("end", () => resolve(Buffer.concat(bufs)));
for (const file of files) {
pack.add(file);
}
pack.end();
});
}
// Extract tar buffer using node-tar
async function extractNodeTarBuffer(buffer, cwd) {
return new Promise((resolve, reject) => {
const unpack = new Unpack({ cwd });
unpack.on("end", resolve);
unpack.on("error", reject);
unpack.end(buffer);
});
}
// Initialize gzipped archives
smallTarGzBuffer = await createNodeTarBuffer(smallFilesDir, ["file1.txt", "file2.txt", "file3.txt"], true);
mediumTarGzBuffer = await createNodeTarBuffer(mediumFilesDir, ["file1.txt", "file2.txt", "file3.txt"], true);
largeTarGzBuffer = await createNodeTarBuffer(largeFilesDir, ["file1.txt", "file2.txt", "file3.txt"], true);
manyFilesTarGzBuffer = await createNodeTarBuffer(manyFilesDir, Object.keys(manyFilesEntries), true);
// Initialize uncompressed archives
smallTarBuffer = await createNodeTarBuffer(smallFilesDir, ["file1.txt", "file2.txt", "file3.txt"], false);
mediumTarBuffer = await createNodeTarBuffer(mediumFilesDir, ["file1.txt", "file2.txt", "file3.txt"], false);
largeTarBuffer = await createNodeTarBuffer(largeFilesDir, ["file1.txt", "file2.txt", "file3.txt"], false);
manyFilesTarBuffer = await createNodeTarBuffer(manyFilesDir, Object.keys(manyFilesEntries), false);
const smallFiles = { "file1.txt": smallContent, "file2.txt": smallContent, "file3.txt": smallContent };
const mediumFiles = { "file1.txt": mediumContent, "file2.txt": mediumContent, "file3.txt": mediumContent };
const largeFiles = { "file1.txt": largeContent, "file2.txt": largeContent, "file3.txt": largeContent };
if (hasBunArchive) {
smallBunArchiveGz = await Bun.Archive.from(smallFiles).bytes("gzip");
mediumBunArchiveGz = await Bun.Archive.from(mediumFiles).bytes("gzip");
largeBunArchiveGz = await Bun.Archive.from(largeFiles).bytes("gzip");
manyFilesBunArchiveGz = await Bun.Archive.from(manyFilesEntries).bytes("gzip");
smallBunArchive = await Bun.Archive.from(smallFiles).bytes();
mediumBunArchive = await Bun.Archive.from(mediumFiles).bytes();
largeBunArchive = await Bun.Archive.from(largeFiles).bytes();
manyFilesBunArchive = await Bun.Archive.from(manyFilesEntries).bytes();
}
// Create reusable extraction directories (overwriting is fine)
const extractDirNodeTar = mkdtempSync(join(tmpdir(), "archive-bench-extract-node-"));
const extractDirBun = mkdtempSync(join(tmpdir(), "archive-bench-extract-bun-"));
const writeDirNodeTar = mkdtempSync(join(tmpdir(), "archive-bench-write-node-"));
const writeDirBun = mkdtempSync(join(tmpdir(), "archive-bench-write-bun-"));
// ============================================================================
// Create .tar (uncompressed) benchmarks
// ============================================================================
group("create .tar (3 small files)", () => {
bench("node-tar", async () => {
await createNodeTarBuffer(smallFilesDir, ["file1.txt", "file2.txt", "file3.txt"], false);
});
if (hasBunArchive) {
bench("Bun.Archive", async () => {
await Bun.Archive.from(smallFiles).bytes();
});
}
});
group("create .tar (3 x 100KB files)", () => {
bench("node-tar", async () => {
await createNodeTarBuffer(largeFilesDir, ["file1.txt", "file2.txt", "file3.txt"], false);
});
if (hasBunArchive) {
bench("Bun.Archive", async () => {
await Bun.Archive.from(largeFiles).bytes();
});
}
});
group("create .tar (100 small files)", () => {
bench("node-tar", async () => {
await createNodeTarBuffer(manyFilesDir, Object.keys(manyFilesEntries), false);
});
if (hasBunArchive) {
bench("Bun.Archive", async () => {
await Bun.Archive.from(manyFilesEntries).bytes();
});
}
});
// ============================================================================
// Create .tar.gz (compressed) benchmarks
// ============================================================================
group("create .tar.gz (3 small files)", () => {
bench("node-tar", async () => {
await createNodeTarBuffer(smallFilesDir, ["file1.txt", "file2.txt", "file3.txt"], true);
});
if (hasBunArchive) {
bench("Bun.Archive", async () => {
await Bun.Archive.from(smallFiles).bytes("gzip");
});
}
});
group("create .tar.gz (3 x 100KB files)", () => {
bench("node-tar", async () => {
await createNodeTarBuffer(largeFilesDir, ["file1.txt", "file2.txt", "file3.txt"], true);
});
if (hasBunArchive) {
bench("Bun.Archive", async () => {
await Bun.Archive.from(largeFiles).bytes("gzip");
});
}
});
group("create .tar.gz (100 small files)", () => {
bench("node-tar", async () => {
await createNodeTarBuffer(manyFilesDir, Object.keys(manyFilesEntries), true);
});
if (hasBunArchive) {
bench("Bun.Archive", async () => {
await Bun.Archive.from(manyFilesEntries).bytes("gzip");
});
}
});
// ============================================================================
// Extract .tar (uncompressed) benchmarks
// ============================================================================
group("extract .tar (3 small files)", () => {
bench("node-tar", async () => {
await extractNodeTarBuffer(smallTarBuffer, extractDirNodeTar);
});
if (hasBunArchive) {
bench("Bun.Archive", async () => {
await Bun.Archive.from(smallBunArchive).extract(extractDirBun);
});
}
});
group("extract .tar (3 x 100KB files)", () => {
bench("node-tar", async () => {
await extractNodeTarBuffer(largeTarBuffer, extractDirNodeTar);
});
if (hasBunArchive) {
bench("Bun.Archive", async () => {
await Bun.Archive.from(largeBunArchive).extract(extractDirBun);
});
}
});
group("extract .tar (100 small files)", () => {
bench("node-tar", async () => {
await extractNodeTarBuffer(manyFilesTarBuffer, extractDirNodeTar);
});
if (hasBunArchive) {
bench("Bun.Archive", async () => {
await Bun.Archive.from(manyFilesBunArchive).extract(extractDirBun);
});
}
});
// ============================================================================
// Extract .tar.gz (compressed) benchmarks
// ============================================================================
group("extract .tar.gz (3 small files)", () => {
bench("node-tar", async () => {
await extractNodeTarBuffer(smallTarGzBuffer, extractDirNodeTar);
});
if (hasBunArchive) {
bench("Bun.Archive", async () => {
await Bun.Archive.from(smallBunArchiveGz).extract(extractDirBun);
});
}
});
group("extract .tar.gz (3 x 100KB files)", () => {
bench("node-tar", async () => {
await extractNodeTarBuffer(largeTarGzBuffer, extractDirNodeTar);
});
if (hasBunArchive) {
bench("Bun.Archive", async () => {
await Bun.Archive.from(largeBunArchiveGz).extract(extractDirBun);
});
}
});
group("extract .tar.gz (100 small files)", () => {
bench("node-tar", async () => {
await extractNodeTarBuffer(manyFilesTarGzBuffer, extractDirNodeTar);
});
if (hasBunArchive) {
bench("Bun.Archive", async () => {
await Bun.Archive.from(manyFilesBunArchiveGz).extract(extractDirBun);
});
}
});
// ============================================================================
// Write .tar to disk benchmarks
// ============================================================================
let writeCounter = 0;
group("write .tar to disk (3 small files)", () => {
bench("node-tar + writeFileSync", async () => {
const buffer = await createNodeTarBuffer(smallFilesDir, ["file1.txt", "file2.txt", "file3.txt"], false);
writeFileSync(join(writeDirNodeTar, `archive-${writeCounter++}.tar`), buffer);
});
if (hasBunArchive) {
bench("Bun.Archive.write", async () => {
await Bun.Archive.write(join(writeDirBun, `archive-${writeCounter++}.tar`), smallFiles);
});
}
});
group("write .tar to disk (3 x 100KB files)", () => {
bench("node-tar + writeFileSync", async () => {
const buffer = await createNodeTarBuffer(largeFilesDir, ["file1.txt", "file2.txt", "file3.txt"], false);
writeFileSync(join(writeDirNodeTar, `archive-${writeCounter++}.tar`), buffer);
});
if (hasBunArchive) {
bench("Bun.Archive.write", async () => {
await Bun.Archive.write(join(writeDirBun, `archive-${writeCounter++}.tar`), largeFiles);
});
}
});
group("write .tar to disk (100 small files)", () => {
bench("node-tar + writeFileSync", async () => {
const buffer = await createNodeTarBuffer(manyFilesDir, Object.keys(manyFilesEntries), false);
writeFileSync(join(writeDirNodeTar, `archive-${writeCounter++}.tar`), buffer);
});
if (hasBunArchive) {
bench("Bun.Archive.write", async () => {
await Bun.Archive.write(join(writeDirBun, `archive-${writeCounter++}.tar`), manyFilesEntries);
});
}
});
// ============================================================================
// Write .tar.gz to disk benchmarks
// ============================================================================
group("write .tar.gz to disk (3 small files)", () => {
bench("node-tar + writeFileSync", async () => {
const buffer = await createNodeTarBuffer(smallFilesDir, ["file1.txt", "file2.txt", "file3.txt"], true);
writeFileSync(join(writeDirNodeTar, `archive-${writeCounter++}.tar.gz`), buffer);
});
if (hasBunArchive) {
bench("Bun.Archive.write", async () => {
await Bun.Archive.write(join(writeDirBun, `archive-${writeCounter++}.tar.gz`), smallFiles, "gzip");
});
}
});
group("write .tar.gz to disk (3 x 100KB files)", () => {
bench("node-tar + writeFileSync", async () => {
const buffer = await createNodeTarBuffer(largeFilesDir, ["file1.txt", "file2.txt", "file3.txt"], true);
writeFileSync(join(writeDirNodeTar, `archive-${writeCounter++}.tar.gz`), buffer);
});
if (hasBunArchive) {
bench("Bun.Archive.write", async () => {
await Bun.Archive.write(join(writeDirBun, `archive-${writeCounter++}.tar.gz`), largeFiles, "gzip");
});
}
});
group("write .tar.gz to disk (100 small files)", () => {
bench("node-tar + writeFileSync", async () => {
const buffer = await createNodeTarBuffer(manyFilesDir, Object.keys(manyFilesEntries), true);
writeFileSync(join(writeDirNodeTar, `archive-${writeCounter++}.tar.gz`), buffer);
});
if (hasBunArchive) {
bench("Bun.Archive.write", async () => {
await Bun.Archive.write(join(writeDirBun, `archive-${writeCounter++}.tar.gz`), manyFilesEntries, "gzip");
});
}
});
// ============================================================================
// Get files array from archive (files() method) benchmarks
// ============================================================================
// Helper to get files array from node-tar (reads all entries into memory)
async function getFilesArrayNodeTar(buffer) {
return new Promise((resolve, reject) => {
const files = new Map();
let pending = 0;
let closed = false;
const maybeResolve = () => {
if (closed && pending === 0) {
resolve(files);
}
};
const unpack = new Unpack({
onReadEntry: entry => {
if (entry.type === "File") {
pending++;
const chunks = [];
entry.on("data", chunk => chunks.push(chunk));
entry.on("end", () => {
const content = Buffer.concat(chunks);
// Create a File-like object similar to Bun.Archive.files()
files.set(entry.path, new Blob([content]));
pending--;
maybeResolve();
});
}
entry.resume(); // Drain the entry
},
});
unpack.on("close", () => {
closed = true;
maybeResolve();
});
unpack.on("error", reject);
unpack.end(buffer);
});
}
group("files() - get all files as Map (3 small files)", () => {
bench("node-tar", async () => {
await getFilesArrayNodeTar(smallTarBuffer);
});
if (hasBunArchive) {
bench("Bun.Archive.files()", async () => {
await Bun.Archive.from(smallBunArchive).files();
});
}
});
group("files() - get all files as Map (3 x 100KB files)", () => {
bench("node-tar", async () => {
await getFilesArrayNodeTar(largeTarBuffer);
});
if (hasBunArchive) {
bench("Bun.Archive.files()", async () => {
await Bun.Archive.from(largeBunArchive).files();
});
}
});
group("files() - get all files as Map (100 small files)", () => {
bench("node-tar", async () => {
await getFilesArrayNodeTar(manyFilesTarBuffer);
});
if (hasBunArchive) {
bench("Bun.Archive.files()", async () => {
await Bun.Archive.from(manyFilesBunArchive).files();
});
}
});
group("files() - get all files as Map from .tar.gz (3 small files)", () => {
bench("node-tar", async () => {
await getFilesArrayNodeTar(smallTarGzBuffer);
});
if (hasBunArchive) {
bench("Bun.Archive.files()", async () => {
await Bun.Archive.from(smallBunArchiveGz).files();
});
}
});
group("files() - get all files as Map from .tar.gz (100 small files)", () => {
bench("node-tar", async () => {
await getFilesArrayNodeTar(manyFilesTarGzBuffer);
});
if (hasBunArchive) {
bench("Bun.Archive.files()", async () => {
await Bun.Archive.from(manyFilesBunArchiveGz).files();
});
}
});
await run();
// Cleanup
rmSync(setupDir, { recursive: true, force: true });
rmSync(extractDirNodeTar, { recursive: true, force: true });
rmSync(extractDirBun, { recursive: true, force: true });
rmSync(writeDirNodeTar, { recursive: true, force: true });
rmSync(writeDirBun, { recursive: true, force: true });
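
Note: the calls above are the only part of the `Bun.Archive` surface this benchmark exercises (`from`, `bytes`, `extract`, `files`, and the static `write`). A condensed sketch of those calls, not a full API reference, and Bun-only by construction:

```js
// Sketch limited to the Bun.Archive calls that appear in the benchmark above.
const entries = { "hello.txt": "Hello, World!" };

const tarBytes = await Bun.Archive.from(entries).bytes();        // uncompressed .tar
const tgzBytes = await Bun.Archive.from(entries).bytes("gzip");  // .tar.gz

await Bun.Archive.from(tgzBytes).extract("./out");               // unpack into a directory
const files = await Bun.Archive.from(tarBytes).files();          // Map of archive paths to file objects
await Bun.Archive.write("./out.tar.gz", entries, "gzip");        // write an archive straight to disk

console.log(files);
```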

335
bench/snippets/array-of.js Normal file
View File

@@ -0,0 +1,335 @@
import { bench, run } from "../runner.mjs";
let sink;
// Integers
bench("int: Array.of(1,2,3,4,5)", () => {
sink = Array.of(1, 2, 3, 4, 5);
});
bench("int: Array.of(100 elements)", () => {
sink = Array.of(
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
);
});
// Doubles
bench("double: Array.of(1.1,2.2,3.3,4.4,5.5)", () => {
sink = Array.of(1.1, 2.2, 3.3, 4.4, 5.5);
});
bench("double: Array.of(100 elements)", () => {
sink = Array.of(
0.1,
1.1,
2.1,
3.1,
4.1,
5.1,
6.1,
7.1,
8.1,
9.1,
10.1,
11.1,
12.1,
13.1,
14.1,
15.1,
16.1,
17.1,
18.1,
19.1,
20.1,
21.1,
22.1,
23.1,
24.1,
25.1,
26.1,
27.1,
28.1,
29.1,
30.1,
31.1,
32.1,
33.1,
34.1,
35.1,
36.1,
37.1,
38.1,
39.1,
40.1,
41.1,
42.1,
43.1,
44.1,
45.1,
46.1,
47.1,
48.1,
49.1,
50.1,
51.1,
52.1,
53.1,
54.1,
55.1,
56.1,
57.1,
58.1,
59.1,
60.1,
61.1,
62.1,
63.1,
64.1,
65.1,
66.1,
67.1,
68.1,
69.1,
70.1,
71.1,
72.1,
73.1,
74.1,
75.1,
76.1,
77.1,
78.1,
79.1,
80.1,
81.1,
82.1,
83.1,
84.1,
85.1,
86.1,
87.1,
88.1,
89.1,
90.1,
91.1,
92.1,
93.1,
94.1,
95.1,
96.1,
97.1,
98.1,
99.1,
);
});
// Objects
bench("object: Array.of(obj x5)", () => {
sink = Array.of({ a: 1 }, { a: 2 }, { a: 3 }, { a: 4 }, { a: 5 });
});
bench("object: Array.of(100 elements)", () => {
sink = Array.of(
{ a: 0 },
{ a: 1 },
{ a: 2 },
{ a: 3 },
{ a: 4 },
{ a: 5 },
{ a: 6 },
{ a: 7 },
{ a: 8 },
{ a: 9 },
{ a: 10 },
{ a: 11 },
{ a: 12 },
{ a: 13 },
{ a: 14 },
{ a: 15 },
{ a: 16 },
{ a: 17 },
{ a: 18 },
{ a: 19 },
{ a: 20 },
{ a: 21 },
{ a: 22 },
{ a: 23 },
{ a: 24 },
{ a: 25 },
{ a: 26 },
{ a: 27 },
{ a: 28 },
{ a: 29 },
{ a: 30 },
{ a: 31 },
{ a: 32 },
{ a: 33 },
{ a: 34 },
{ a: 35 },
{ a: 36 },
{ a: 37 },
{ a: 38 },
{ a: 39 },
{ a: 40 },
{ a: 41 },
{ a: 42 },
{ a: 43 },
{ a: 44 },
{ a: 45 },
{ a: 46 },
{ a: 47 },
{ a: 48 },
{ a: 49 },
{ a: 50 },
{ a: 51 },
{ a: 52 },
{ a: 53 },
{ a: 54 },
{ a: 55 },
{ a: 56 },
{ a: 57 },
{ a: 58 },
{ a: 59 },
{ a: 60 },
{ a: 61 },
{ a: 62 },
{ a: 63 },
{ a: 64 },
{ a: 65 },
{ a: 66 },
{ a: 67 },
{ a: 68 },
{ a: 69 },
{ a: 70 },
{ a: 71 },
{ a: 72 },
{ a: 73 },
{ a: 74 },
{ a: 75 },
{ a: 76 },
{ a: 77 },
{ a: 78 },
{ a: 79 },
{ a: 80 },
{ a: 81 },
{ a: 82 },
{ a: 83 },
{ a: 84 },
{ a: 85 },
{ a: 86 },
{ a: 87 },
{ a: 88 },
{ a: 89 },
{ a: 90 },
{ a: 91 },
{ a: 92 },
{ a: 93 },
{ a: 94 },
{ a: 95 },
{ a: 96 },
{ a: 97 },
{ a: 98 },
{ a: 99 },
);
});
await run();

View File

@@ -0,0 +1,38 @@
// @runtime bun,node
import { Buffer } from "node:buffer";
import { bench, group, run } from "../runner.mjs";
// Small arrays (common case)
const int32Array8 = [1, 2, 3, 4, 5, 6, 7, 8];
const doubleArray8 = [1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5];
// Medium arrays
const int32Array64 = Array.from({ length: 64 }, (_, i) => i % 256);
const doubleArray64 = Array.from({ length: 64 }, (_, i) => i + 0.5);
// Large arrays
const int32Array1024 = Array.from({ length: 1024 }, (_, i) => i % 256);
// Array-like objects (fallback path)
const arrayLike8 = { 0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, length: 8 };
// Empty array
const emptyArray = [];
group("Buffer.from(array) - Int32 arrays", () => {
bench("Buffer.from(int32[8])", () => Buffer.from(int32Array8));
bench("Buffer.from(int32[64])", () => Buffer.from(int32Array64));
bench("Buffer.from(int32[1024])", () => Buffer.from(int32Array1024));
});
group("Buffer.from(array) - Double arrays", () => {
bench("Buffer.from(double[8])", () => Buffer.from(doubleArray8));
bench("Buffer.from(double[64])", () => Buffer.from(doubleArray64));
});
group("Buffer.from(array) - Edge cases", () => {
bench("Buffer.from([])", () => Buffer.from(emptyArray));
bench("Buffer.from(arrayLike[8])", () => Buffer.from(arrayLike8));
});
await run();
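
Note: the three input shapes above exercise different paths in `Buffer.from(array)`; element values are coerced to integers and wrapped into the 0-255 byte range, and array-likes with a numeric `length` take the generic element-by-element fallback path. A small illustration:

```js
import { Buffer } from "node:buffer";

// Fractional and out-of-range values are truncated/wrapped rather than rejected.
console.log(Buffer.from([1.5, 256, -1]));              // <Buffer 01 00 ff>

// Array-likes (anything with a numeric length) also work, one element at a time.
console.log(Buffer.from({ 0: 65, 1: 66, length: 2 })); // <Buffer 41 42>
```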

View File

@@ -0,0 +1,156 @@
import { bench, group, run } from "../runner.mjs";
const runAll = !process.argv.includes("--simple");
const small = new Uint8Array(1024);
const medium = new Uint8Array(1024 * 100);
const large = new Uint8Array(1024 * 1024);
for (let i = 0; i < large.length; i++) {
const value = Math.floor(Math.sin(i / 100) * 128 + 128);
if (i < small.length) small[i] = value;
if (i < medium.length) medium[i] = value;
large[i] = value;
}
const format = new Intl.NumberFormat("en-US", { notation: "compact", unit: "byte" });
async function compress(data, format) {
const cs = new CompressionStream(format);
const writer = cs.writable.getWriter();
const reader = cs.readable.getReader();
writer.write(data);
writer.close();
const chunks = [];
while (true) {
const { done, value } = await reader.read();
if (done) break;
chunks.push(value);
}
const result = new Uint8Array(chunks.reduce((acc, chunk) => acc + chunk.length, 0));
let offset = 0;
for (const chunk of chunks) {
result.set(chunk, offset);
offset += chunk.length;
}
return result;
}
async function decompress(data, format) {
const ds = new DecompressionStream(format);
const writer = ds.writable.getWriter();
const reader = ds.readable.getReader();
writer.write(data);
writer.close();
const chunks = [];
while (true) {
const { done, value } = await reader.read();
if (done) break;
chunks.push(value);
}
const result = new Uint8Array(chunks.reduce((acc, chunk) => acc + chunk.length, 0));
let offset = 0;
for (const chunk of chunks) {
result.set(chunk, offset);
offset += chunk.length;
}
return result;
}
async function roundTrip(data, format) {
const compressed = await compress(data, format);
return await decompress(compressed, format);
}
const formats = ["deflate", "gzip", "deflate-raw"];
if (runAll) formats.push("brotli", "zstd");
// Small data benchmarks (1KB)
group(`CompressionStream ${format.format(small.length)}`, () => {
for (const fmt of formats) {
try {
new CompressionStream(fmt);
bench(fmt, async () => await compress(small, fmt));
} catch (e) {
// Skip unsupported formats
}
}
});
// Medium data benchmarks (100KB)
group(`CompressionStream ${format.format(medium.length)}`, () => {
for (const fmt of formats) {
try {
new CompressionStream(fmt);
bench(fmt, async () => await compress(medium, fmt));
} catch (e) {
// Skip unsupported formats
}
}
});
// Large data benchmarks (1MB)
group(`CompressionStream ${format.format(large.length)}`, () => {
for (const fmt of formats) {
try {
new CompressionStream(fmt);
bench(fmt, async () => await compress(large, fmt));
} catch (e) {
// Skip unsupported formats
}
}
});
const compressedData = {};
for (const fmt of formats) {
try {
compressedData[fmt] = {
small: await compress(small, fmt),
medium: await compress(medium, fmt),
large: await compress(large, fmt),
};
} catch (e) {
// Skip unsupported formats
}
}
group(`DecompressionStream ${format.format(small.length)}`, () => {
for (const fmt of formats) {
if (compressedData[fmt]) {
bench(fmt, async () => await decompress(compressedData[fmt].small, fmt));
}
}
});
group(`DecompressionStream ${format.format(medium.length)}`, () => {
for (const fmt of formats) {
if (compressedData[fmt]) {
bench(fmt, async () => await decompress(compressedData[fmt].medium, fmt));
}
}
});
group(`DecompressionStream ${format.format(large.length)}`, () => {
for (const fmt of formats) {
if (compressedData[fmt]) {
bench(fmt, async () => await decompress(compressedData[fmt].large, fmt));
}
}
});
group(`roundtrip ${format.format(large.length)}`, () => {
for (const fmt of formats) {
try {
new CompressionStream(fmt);
bench(fmt, async () => await roundTrip(large, fmt));
} catch (e) {
// Skip unsupported formats
}
}
});
await run();
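
Note: the `try { new CompressionStream(fmt) }` probe above works because constructing a stream with an unsupported format throws synchronously, so formats like `brotli` or `zstd` are simply skipped on runtimes that only ship the standard three. The same probe, factored out:

```js
// Feature-detect compression formats the way the benchmark above does.
function supportedFormats(candidates) {
  return candidates.filter(fmt => {
    try {
      new CompressionStream(fmt);
      return true;
    } catch {
      return false;
    }
  });
}

console.log(supportedFormats(["deflate", "gzip", "deflate-raw", "brotli", "zstd"]));
```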

View File

@@ -0,0 +1,4 @@
// Child process for IPC benchmarks - echoes messages back to parent
process.on("message", message => {
process.send(message);
});

View File

@@ -0,0 +1,45 @@
import { fork } from "node:child_process";
import path from "node:path";
import { fileURLToPath } from "node:url";
import { bench, run } from "../runner.mjs";
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const childPath = path.join(__dirname, "ipc-json-child.mjs");
const smallMessage = { type: "ping", id: 1 };
const largeString = Buffer.alloc(10 * 1024 * 1024, "A").toString();
const largeMessage = { type: "ping", id: 1, data: largeString };
async function runBenchmark(message, count) {
let received = 0;
const { promise, resolve } = Promise.withResolvers();
const child = fork(childPath, [], {
stdio: ["ignore", "ignore", "ignore", "ipc"],
serialization: "json",
});
child.on("message", () => {
received++;
if (received >= count) {
resolve();
}
});
for (let i = 0; i < count; i++) {
child.send(message);
}
await promise;
child.kill();
}
bench("ipc json - small messages (1000 roundtrips)", async () => {
await runBenchmark(smallMessage, 1000);
});
bench("ipc json - 10MB messages (10 roundtrips)", async () => {
await runBenchmark(largeMessage, 10);
});
await run();

View File

@@ -0,0 +1,57 @@
import { bench, run } from "../runner.mjs";
const obj = { a: 1, b: 2, c: 3 };
const objDeep = { a: 1, b: 2, c: 3, d: 4, e: 5, f: 6, g: 7, h: 8 };
const sym = Symbol("test");
const objWithSymbol = { [sym]: 1, a: 2 };
const objs = [
{ f: 50 },
{ f: 50, g: 70 },
{ g: 50, f: 70 },
{ h: 50, f: 70 },
{ z: 50, f: 70 },
{ k: 50, f: 70 },
];
bench("Object.hasOwn - hit", () => {
return Object.hasOwn(obj, "a");
});
bench("Object.hasOwn - miss", () => {
return Object.hasOwn(obj, "z");
});
bench("Object.hasOwn - symbol hit", () => {
return Object.hasOwn(objWithSymbol, sym);
});
bench("Object.hasOwn - symbol miss", () => {
return Object.hasOwn(objWithSymbol, Symbol("other"));
});
bench("Object.hasOwn - multiple shapes", () => {
let result = true;
for (let i = 0; i < objs.length; i++) {
result = Object.hasOwn(objs[i], "f") && result;
}
return result;
});
bench("Object.prototype.hasOwnProperty - hit", () => {
return obj.hasOwnProperty("a");
});
bench("Object.prototype.hasOwnProperty - miss", () => {
return obj.hasOwnProperty("z");
});
bench("in operator - hit", () => {
return "a" in obj;
});
bench("in operator - miss", () => {
return "z" in obj;
});
await run();
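
Note: the reason `Object.hasOwn` is benchmarked alongside `hasOwnProperty` and `in` is that it is the only one of the three that is both own-property-only and safe to call on any object. A quick illustration of the difference:

```js
// Object.hasOwn works even when the object has no prototype (or shadows hasOwnProperty).
const dict = Object.create(null);
dict.a = 1;

console.log(Object.hasOwn(dict, "a"));                        // true
console.log(Object.prototype.hasOwnProperty.call(dict, "a")); // true, but needs the .call dance
console.log("a" in dict);                                     // true, but `in` also matches inherited keys on normal objects
// dict.hasOwnProperty("a") would throw: null-prototype objects have no such method.
```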

View File

@@ -0,0 +1,7 @@
import { bench, run } from "../runner.mjs";
bench("Promise.race([p1, p2])", async function () {
return await Promise.race([Promise.resolve(1), Promise.resolve(2)]);
});
await run();

View File

@@ -112,12 +112,40 @@ const obj = {
},
};
bench("Response.json(obj)", async () => {
const smallObj = { id: 1, name: "test" };
const arrayObj = {
items: Array.from({ length: 100 }, (_, i) => ({ id: i, value: `item-${i}` })),
};
bench("Response.json(obj)", () => {
return Response.json(obj);
});
bench("Response.json(obj).json()", async () => {
return await Response.json(obj).json();
bench("new Response(JSON.stringify(obj))", () => {
return new Response(JSON.stringify(obj), {
headers: { "Content-Type": "application/json" },
});
});
bench("Response.json(smallObj)", () => {
return Response.json(smallObj);
});
bench("new Response(JSON.stringify(smallObj))", () => {
return new Response(JSON.stringify(smallObj), {
headers: { "Content-Type": "application/json" },
});
});
bench("Response.json(arrayObj)", () => {
return Response.json(arrayObj);
});
bench("new Response(JSON.stringify(arrayObj))", () => {
return new Response(JSON.stringify(arrayObj), {
headers: { "Content-Type": "application/json" },
});
});
await run();
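
Note: the added cases compare the `Response.json()` static helper against building the same response by hand; both produce the same JSON body, the difference being who stringifies the payload and sets the `Content-Type` header. A minimal equivalence check:

```js
// The two constructions benchmarked above yield identical JSON bodies.
const payload = { id: 1, name: "test" };

const a = Response.json(payload);
const b = new Response(JSON.stringify(payload), {
  headers: { "Content-Type": "application/json" },
});

console.log((await a.text()) === (await b.text())); // true
```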

View File

@@ -0,0 +1,34 @@
import { bench, run } from "../runner.mjs";
const shortStr = "The quick brown fox jumps over the lazy dog";
const longStr = shortStr.repeat(100);
bench("String.includes - short, hit (middle)", () => {
return shortStr.includes("jumps");
});
bench("String.includes - short, hit (start)", () => {
return shortStr.includes("The");
});
bench("String.includes - short, hit (end)", () => {
return shortStr.includes("dog");
});
bench("String.includes - short, miss", () => {
return shortStr.includes("cat");
});
bench("String.includes - long, hit (middle)", () => {
return longStr.includes("jumps");
});
bench("String.includes - long, miss", () => {
return longStr.includes("cat");
});
bench("String.includes - with position", () => {
return shortStr.includes("fox", 10);
});
await run();

View File

@@ -0,0 +1,48 @@
import { bench, group, run } from "../runner.mjs";
const patterns = [
{ name: "string pattern", input: "https://(sub.)?example(.com/)foo" },
{ name: "hostname IDN", input: { hostname: "xn--caf-dma.com" } },
{
name: "pathname + search + hash + baseURL",
input: {
pathname: "/foo",
search: "bar",
hash: "baz",
baseURL: "https://example.com:8080",
},
},
{ name: "pathname with regex", input: { pathname: "/([[a-z]--a])" } },
{ name: "named groups", input: { pathname: "/users/:id/posts/:postId" } },
{ name: "wildcard", input: { pathname: "/files/*" } },
];
const testURL = "https://sub.example.com/foo";
group("URLPattern parse (constructor)", () => {
for (const { name, input } of patterns) {
bench(name, () => {
return new URLPattern(input);
});
}
});
group("URLPattern.test()", () => {
for (const { name, input } of patterns) {
const pattern = new URLPattern(input);
bench(name, () => {
return pattern.test(testURL);
});
}
});
group("URLPattern.exec()", () => {
for (const { name, input } of patterns) {
const pattern = new URLPattern(input);
bench(name, () => {
return pattern.exec(testURL);
});
}
});
await run();
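
Note: the benchmark only times parsing, `test()`, and `exec()`; as a reminder of what `exec()` actually yields for the named-groups case above (assuming a runtime that exposes `URLPattern` globally, as the benchmark does):

```js
// Named path groups come back under <component>.groups on the exec() result.
const pattern = new URLPattern({ pathname: "/users/:id/posts/:postId" });
const match = pattern.exec("https://example.com/users/42/posts/7");
console.log(match?.pathname.groups); // { id: "42", postId: "7" }
```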

103
bench/snippets/wrap-ansi.js Normal file
View File

@@ -0,0 +1,103 @@
import wrapAnsi from "wrap-ansi";
import { bench, run, summary } from "../runner.mjs";
// Test fixtures
const shortText = "The quick brown fox jumped over the lazy dog.";
const mediumText = "The quick brown fox jumped over the lazy dog and then ran away with the unicorn. ".repeat(10);
const longText = "The quick brown fox jumped over the lazy dog and then ran away with the unicorn. ".repeat(100);
// ANSI colored text
const red = s => `\u001B[31m${s}\u001B[39m`;
const green = s => `\u001B[32m${s}\u001B[39m`;
const blue = s => `\u001B[34m${s}\u001B[39m`;
const coloredShort = `The quick ${red("brown fox")} jumped over the ${green("lazy dog")}.`;
const coloredMedium =
`The quick ${red("brown fox jumped over")} the ${green("lazy dog and then ran away")} with the ${blue("unicorn")}. `.repeat(
10,
);
const coloredLong =
`The quick ${red("brown fox jumped over")} the ${green("lazy dog and then ran away")} with the ${blue("unicorn")}. `.repeat(
100,
);
// Full-width characters (Japanese)
const japaneseText = "日本語のテキストを折り返すテストです。全角文字は幅2としてカウントされます。".repeat(5);
// Emoji text
const emojiText = "Hello 👋 World 🌍! Let's test 🧪 some emoji 😀 wrapping 📦!".repeat(5);
// Hyperlink text
const hyperlinkText = "Check out \u001B]8;;https://bun.sh\u0007Bun\u001B]8;;\u0007, it's fast! ".repeat(10);
// Options
const hardOpts = { hard: true };
const noTrimOpts = { trim: false };
// Basic text benchmarks
summary(() => {
bench("Short text (45 chars) - npm", () => wrapAnsi(shortText, 20));
bench("Short text (45 chars) - Bun", () => Bun.wrapAnsi(shortText, 20));
});
summary(() => {
bench("Medium text (810 chars) - npm", () => wrapAnsi(mediumText, 40));
bench("Medium text (810 chars) - Bun", () => Bun.wrapAnsi(mediumText, 40));
});
summary(() => {
bench("Long text (8100 chars) - npm", () => wrapAnsi(longText, 80));
bench("Long text (8100 chars) - Bun", () => Bun.wrapAnsi(longText, 80));
});
// ANSI colored text benchmarks
summary(() => {
bench("Colored short - npm", () => wrapAnsi(coloredShort, 20));
bench("Colored short - Bun", () => Bun.wrapAnsi(coloredShort, 20));
});
summary(() => {
bench("Colored medium - npm", () => wrapAnsi(coloredMedium, 40));
bench("Colored medium - Bun", () => Bun.wrapAnsi(coloredMedium, 40));
});
summary(() => {
bench("Colored long - npm", () => wrapAnsi(coloredLong, 80));
bench("Colored long - Bun", () => Bun.wrapAnsi(coloredLong, 80));
});
// Hard wrap benchmarks
summary(() => {
bench("Hard wrap long - npm", () => wrapAnsi(longText, 80, hardOpts));
bench("Hard wrap long - Bun", () => Bun.wrapAnsi(longText, 80, hardOpts));
});
summary(() => {
bench("Hard wrap colored - npm", () => wrapAnsi(coloredLong, 80, hardOpts));
bench("Hard wrap colored - Bun", () => Bun.wrapAnsi(coloredLong, 80, hardOpts));
});
// Unicode benchmarks
summary(() => {
bench("Japanese (full-width) - npm", () => wrapAnsi(japaneseText, 40));
bench("Japanese (full-width) - Bun", () => Bun.wrapAnsi(japaneseText, 40));
});
summary(() => {
bench("Emoji text - npm", () => wrapAnsi(emojiText, 30));
bench("Emoji text - Bun", () => Bun.wrapAnsi(emojiText, 30));
});
// Hyperlink benchmarks
summary(() => {
bench("Hyperlink (OSC 8) - npm", () => wrapAnsi(hyperlinkText, 40));
bench("Hyperlink (OSC 8) - Bun", () => Bun.wrapAnsi(hyperlinkText, 40));
});
// No trim option
summary(() => {
bench("No trim long - npm", () => wrapAnsi(longText, 80, noTrimOpts));
bench("No trim long - Bun", () => Bun.wrapAnsi(longText, 80, noTrimOpts));
});
await run();
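
Note: `Bun.wrapAnsi` is called above with the same `(text, columns, options)` shape as the npm `wrap-ansi` package, and only the `hard` and `trim` options are exercised. A minimal side-by-side, guarded so it also runs where the built-in is absent:

```js
import wrapAnsi from "wrap-ansi";

const text = "The quick \u001B[31mbrown fox\u001B[39m jumped over the lazy dog.";

console.log(wrapAnsi(text, 20));       // npm implementation
if (typeof Bun !== "undefined" && typeof Bun.wrapAnsi === "function") {
  console.log(Bun.wrapAnsi(text, 20)); // Bun built-in, same call shape
}
```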

View File

@@ -18,22 +18,6 @@ const OperatingSystem = @import("src/env.zig").OperatingSystem;
const pathRel = fs.path.relative;
/// When updating this, make sure to adjust SetupZig.cmake
const recommended_zig_version = "0.14.0";
// comptime {
// if (!std.mem.eql(u8, builtin.zig_version_string, recommended_zig_version)) {
// @compileError(
// "" ++
// "Bun requires Zig version " ++ recommended_zig_version ++ ", but you have " ++
// builtin.zig_version_string ++ ". This is automatically configured via Bun's " ++
// "CMake setup. You likely meant to run `bun run build`. If you are trying to " ++
// "upgrade the Zig compiler, edit ZIG_COMMIT in cmake/tools/SetupZig.cmake or " ++
// "comment this error out.",
// );
// }
// }
const zero_sha = "0000000000000000000000000000000000000000";
const BunBuildOptions = struct {
@@ -48,7 +32,9 @@ const BunBuildOptions = struct {
/// enable debug logs in release builds
enable_logs: bool = false,
enable_asan: bool,
enable_fuzzilli: bool,
enable_valgrind: bool,
enable_tinycc: bool,
use_mimalloc: bool,
tracy_callstack_depth: u16,
reported_nodejs_version: Version,
@@ -97,9 +83,11 @@ const BunBuildOptions = struct {
opts.addOption(bool, "baseline", this.isBaseline());
opts.addOption(bool, "enable_logs", this.enable_logs);
opts.addOption(bool, "enable_asan", this.enable_asan);
opts.addOption(bool, "enable_fuzzilli", this.enable_fuzzilli);
opts.addOption(bool, "enable_valgrind", this.enable_valgrind);
opts.addOption(bool, "enable_tinycc", this.enable_tinycc);
opts.addOption(bool, "use_mimalloc", this.use_mimalloc);
opts.addOption([]const u8, "reported_nodejs_version", b.fmt("{}", .{this.reported_nodejs_version}));
opts.addOption([]const u8, "reported_nodejs_version", b.fmt("{f}", .{this.reported_nodejs_version}));
opts.addOption(bool, "zig_self_hosted_backend", this.no_llvm);
opts.addOption(bool, "override_no_export_cpp_apis", this.override_no_export_cpp_apis);
@@ -134,8 +122,8 @@ pub fn getOSVersionMin(os: OperatingSystem) ?Target.Query.OsVersion {
pub fn getOSGlibCVersion(os: OperatingSystem) ?Version {
return switch (os) {
// Compiling with a newer glibc than this will break certain cloud environments.
.linux => .{ .major = 2, .minor = 27, .patch = 0 },
// Compiling with a newer glibc than this will break certain cloud environments. See symbols.test.ts.
.linux => .{ .major = 2, .minor = 26, .patch = 0 },
else => null,
};
@@ -271,7 +259,9 @@ pub fn build(b: *Build) !void {
.tracy_callstack_depth = b.option(u16, "tracy_callstack_depth", "") orelse 10,
.enable_logs = b.option(bool, "enable_logs", "Enable logs in release") orelse false,
.enable_asan = b.option(bool, "enable_asan", "Enable asan") orelse false,
.enable_fuzzilli = b.option(bool, "enable_fuzzilli", "Enable fuzzilli instrumentation") orelse false,
.enable_valgrind = b.option(bool, "enable_valgrind", "Enable valgrind") orelse false,
.enable_tinycc = b.option(bool, "enable_tinycc", "Enable TinyCC for FFI JIT compilation") orelse true,
.use_mimalloc = b.option(bool, "use_mimalloc", "Use mimalloc as default allocator") orelse false,
.llvm_codegen_threads = b.option(u32, "llvm_codegen_threads", "Number of threads to use for LLVM codegen") orelse 1,
};
@@ -290,14 +280,16 @@ pub fn build(b: *Build) !void {
var o = build_options;
var unit_tests = b.addTest(.{
.name = "bun-test",
.optimize = build_options.optimize,
.root_source_file = b.path("src/unit_test.zig"),
.test_runner = .{ .path = b.path("src/main_test.zig"), .mode = .simple },
.target = build_options.target,
.root_module = b.createModule(.{
.optimize = build_options.optimize,
.root_source_file = b.path("src/unit_test.zig"),
.target = build_options.target,
.omit_frame_pointer = false,
.strip = false,
}),
.use_llvm = !build_options.no_llvm,
.use_lld = if (build_options.os == .mac) false else !build_options.no_llvm,
.omit_frame_pointer = false,
.strip = false,
});
configureObj(b, &o, unit_tests);
// Setting `linker_allow_shlib_undefined` causes the linker to ignore
@@ -331,6 +323,7 @@ pub fn build(b: *Build) !void {
var step = b.step("check", "Check for semantic analysis errors");
var bun_check_obj = addBunObject(b, &build_options);
bun_check_obj.generated_bin = null;
// bun_check_obj.use_llvm = false;
step.dependOn(&bun_check_obj.step);
// The default install step will run zig build check. This is so ZLS
@@ -352,6 +345,7 @@ pub fn build(b: *Build) !void {
const step = b.step("check-debug", "Check for semantic analysis errors on some platforms");
addMultiCheck(b, step, build_options, &.{
.{ .os = .windows, .arch = .x86_64 },
.{ .os = .windows, .arch = .aarch64 },
.{ .os = .mac, .arch = .aarch64 },
.{ .os = .linux, .arch = .x86_64 },
}, &.{.Debug});
@@ -362,6 +356,7 @@ pub fn build(b: *Build) !void {
const step = b.step("check-all", "Check for semantic analysis errors on all supported platforms");
addMultiCheck(b, step, build_options, &.{
.{ .os = .windows, .arch = .x86_64 },
.{ .os = .windows, .arch = .aarch64 },
.{ .os = .mac, .arch = .x86_64 },
.{ .os = .mac, .arch = .aarch64 },
.{ .os = .linux, .arch = .x86_64 },
@@ -376,6 +371,7 @@ pub fn build(b: *Build) !void {
const step = b.step("check-all-debug", "Check for semantic analysis errors on all supported platforms in debug mode");
addMultiCheck(b, step, build_options, &.{
.{ .os = .windows, .arch = .x86_64 },
.{ .os = .windows, .arch = .aarch64 },
.{ .os = .mac, .arch = .x86_64 },
.{ .os = .mac, .arch = .aarch64 },
.{ .os = .linux, .arch = .x86_64 },
@@ -390,12 +386,14 @@ pub fn build(b: *Build) !void {
const step = b.step("check-windows", "Check for semantic analysis errors on Windows");
addMultiCheck(b, step, build_options, &.{
.{ .os = .windows, .arch = .x86_64 },
.{ .os = .windows, .arch = .aarch64 },
}, &.{ .Debug, .ReleaseFast });
}
{
const step = b.step("check-windows-debug", "Check for semantic analysis errors on Windows");
addMultiCheck(b, step, build_options, &.{
.{ .os = .windows, .arch = .x86_64 },
.{ .os = .windows, .arch = .aarch64 },
}, &.{.Debug});
}
{
@@ -432,6 +430,7 @@ pub fn build(b: *Build) !void {
const step = b.step("translate-c", "Copy generated translated-c-headers.zig to zig-out");
for ([_]TargetDescription{
.{ .os = .windows, .arch = .x86_64 },
.{ .os = .windows, .arch = .aarch64 },
.{ .os = .mac, .arch = .x86_64 },
.{ .os = .mac, .arch = .aarch64 },
.{ .os = .linux, .arch = .x86_64 },
@@ -503,6 +502,8 @@ fn addMultiCheck(
.no_llvm = root_build_options.no_llvm,
.enable_asan = root_build_options.enable_asan,
.enable_valgrind = root_build_options.enable_valgrind,
.enable_tinycc = root_build_options.enable_tinycc,
.enable_fuzzilli = root_build_options.enable_fuzzilli,
.use_mimalloc = root_build_options.use_mimalloc,
.override_no_export_cpp_apis = root_build_options.override_no_export_cpp_apis,
};
@@ -616,15 +617,22 @@ fn configureObj(b: *Build, opts: *BunBuildOptions, obj: *Compile) void {
obj.llvm_codegen_threads = opts.llvm_codegen_threads orelse 0;
}
obj.no_link_obj = true;
obj.no_link_obj = opts.os != .windows and !opts.no_llvm;
if (opts.enable_asan and !enableFastBuild(b)) {
if (@hasField(Build.Module, "sanitize_address")) {
if (opts.enable_fuzzilli) {
obj.sanitize_coverage_trace_pc_guard = true;
}
obj.root_module.sanitize_address = true;
} else {
const fail_step = b.addFail("asan is not supported on this platform");
obj.step.dependOn(&fail_step.step);
}
} else if (opts.enable_fuzzilli) {
const fail_step = b.addFail("fuzzilli requires asan");
obj.step.dependOn(&fail_step.step);
}
obj.bundle_compiler_rt = false;
obj.bundle_ubsan_rt = false;
@@ -779,6 +787,13 @@ fn addInternalImports(b: *Build, mod: *Module, opts: *BunBuildOptions) void {
mod.addImport("cpp", cppImport);
cppImport.addImport("bun", mod);
}
{
const ciInfoImport = b.createModule(.{
.root_source_file = (std.Build.LazyPath{ .cwd_relative = opts.codegen_path }).path(b, "ci_info.zig"),
});
mod.addImport("ci_info", ciInfoImport);
ciInfoImport.addImport("bun", mod);
}
inline for (.{
.{ .import = "completions-bash", .file = b.path("completions/bun.bash") },
.{ .import = "completions-zsh", .file = b.path("completions/bun.zsh") },
@@ -804,7 +819,7 @@ fn addInternalImports(b: *Build, mod: *Module, opts: *BunBuildOptions) void {
fn propagateImports(source_mod: *Module) !void {
var seen = std.AutoHashMap(*Module, void).init(source_mod.owner.graph.arena);
defer seen.deinit();
var queue = std.ArrayList(*Module).init(source_mod.owner.graph.arena);
var queue = std.array_list.Managed(*Module).init(source_mod.owner.graph.arena);
defer queue.deinit();
try queue.appendSlice(source_mod.import_table.values());
while (queue.pop()) |mod| {

View File

@@ -1,5 +1,6 @@
{
"lockfileVersion": 1,
"configVersion": 1,
"workspaces": {
"": {
"name": "bun",
@@ -31,16 +32,11 @@
"dependencies": {
"@types/node": "*",
},
"devDependencies": {
"@types/react": "^19",
},
"peerDependencies": {
"@types/react": "^19",
},
},
},
"overrides": {
"@types/bun": "workspace:packages/@types/bun",
"@types/node": "25.0.0",
"bun-types": "workspace:packages/bun-types",
},
"packages": {
@@ -90,13 +86,13 @@
"@esbuild/win32-x64": ["@esbuild/win32-x64@0.21.5", "", { "os": "win32", "cpu": "x64" }, "sha512-tQd/1efJuzPC6rCFwEvLtci/xNFcTZknmXs98FYDfGE4wP9ClFV98nyKrzJKVPMhdDnjzLhdUyMX4PsQAPjwIw=="],
"@lezer/common": ["@lezer/common@1.2.3", "", {}, "sha512-w7ojc8ejBqr2REPsWxJjrMFsA/ysDCFICn8zEOR9mrqzOu2amhITYuLD8ag6XZf0CFXDrhKqw7+tW8cX66NaDA=="],
"@lezer/common": ["@lezer/common@1.3.0", "", {}, "sha512-L9X8uHCYU310o99L3/MpJKYxPzXPOS7S0NmBaM7UO/x2Kb2WbmMLSkfvdr1KxRIFYOpbY0Jhn7CfLSUDzL8arQ=="],
"@lezer/cpp": ["@lezer/cpp@1.1.3", "", { "dependencies": { "@lezer/common": "^1.2.0", "@lezer/highlight": "^1.0.0", "@lezer/lr": "^1.0.0" } }, "sha512-ykYvuFQKGsRi6IcE+/hCSGUhb/I4WPjd3ELhEblm2wS2cOznDFzO+ubK2c+ioysOnlZ3EduV+MVQFCPzAIoY3w=="],
"@lezer/highlight": ["@lezer/highlight@1.2.1", "", { "dependencies": { "@lezer/common": "^1.0.0" } }, "sha512-Z5duk4RN/3zuVO7Jq0pGLJ3qynpxUVsh7IbUbGj88+uV2ApSAn6kWg2au3iJb+0Zi7kKtqffIESgNcRXWZWmSA=="],
"@lezer/highlight": ["@lezer/highlight@1.2.3", "", { "dependencies": { "@lezer/common": "^1.3.0" } }, "sha512-qXdH7UqTvGfdVBINrgKhDsVTJTxactNNxLk7+UMwZhU13lMHaOBlJe9Vqp907ya56Y3+ed2tlqzys7jDkTmW0g=="],
"@lezer/lr": ["@lezer/lr@1.4.2", "", { "dependencies": { "@lezer/common": "^1.0.0" } }, "sha512-pu0K1jCIdnQ12aWNaAVU5bzi7Bd1w54J3ECgANPmYLtQKP0HBj2cE/5coBD66MT10xbtIuUr7tg0Shbsvk0mDA=="],
"@lezer/lr": ["@lezer/lr@1.4.3", "", { "dependencies": { "@lezer/common": "^1.0.0" } }, "sha512-yenN5SqAxAPv/qMnpWW0AT7l+SxVrgG+u0tNsRQWqbrz66HIl8DnEbBObvy21J5K7+I1v7gsAnlE2VQ5yYVSeA=="],
"@octokit/app": ["@octokit/app@14.1.0", "", { "dependencies": { "@octokit/auth-app": "^6.0.0", "@octokit/auth-unauthenticated": "^5.0.0", "@octokit/core": "^5.0.0", "@octokit/oauth-app": "^6.0.0", "@octokit/plugin-paginate-rest": "^9.0.0", "@octokit/types": "^12.0.0", "@octokit/webhooks": "^12.0.4" } }, "sha512-g3uEsGOQCBl1+W1rgfwoRFUIR6PtvB2T1E4RpygeUU5LrLvlOqcxrt5lfykIeRpUPpupreGJUYl70fqMDXdTpw=="],
@@ -150,7 +146,7 @@
"@sentry/types": ["@sentry/types@7.120.4", "", {}, "sha512-cUq2hSSe6/qrU6oZsEP4InMI5VVdD86aypE+ENrQ6eZEVLTCYm1w6XhW1NvIu3UuWh7gZec4a9J7AFpYxki88Q=="],
"@types/aws-lambda": ["@types/aws-lambda@8.10.152", "", {}, "sha512-soT/c2gYBnT5ygwiHPmd9a1bftj462NWVk2tKCc1PYHSIacB2UwbTS2zYG4jzag1mRDuzg/OjtxQjQ2NKRB6Rw=="],
"@types/aws-lambda": ["@types/aws-lambda@8.10.159", "", {}, "sha512-SAP22WSGNN12OQ8PlCzGzRCZ7QDCwI85dQZbmpz7+mAk+L7j+wI7qnvmdKh+o7A5LaOp6QnOZ2NJphAZQTTHQg=="],
"@types/btoa-lite": ["@types/btoa-lite@1.0.2", "", {}, "sha512-ZYbcE2x7yrvNFJiU7xJGrpF/ihpkM7zKgw8bha3LNJSesvTtUNxbpzaT7WXBIryf6jovisrxTBvymxMeLLj1Mg=="],
@@ -160,9 +156,7 @@
"@types/ms": ["@types/ms@2.1.0", "", {}, "sha512-GsCCIZDE/p3i96vtEqx+7dBUGXrc7zeSK3wwPHIaRThS+9OhWIXRqzs4d6k1SVU8g91DrNRWxWUGhp5KXQb2VA=="],
"@types/node": ["@types/node@24.2.1", "", { "dependencies": { "undici-types": "~7.10.0" } }, "sha512-DRh5K+ka5eJic8CjH7td8QpYEV6Zo10gfRkjHCO3weqZHWDtAaSTFtl4+VMqOJ4N5jcuhZ9/l+yy8rVgw7BQeQ=="],
"@types/react": ["@types/react@19.1.10", "", { "dependencies": { "csstype": "^3.0.2" } }, "sha512-EhBeSYX0Y6ye8pNebpKrwFJq7BoQ8J5SO6NlvNwwHjSj6adXJViPQrKlsyPw7hLBLvckEMO1yxeGdR82YBBlDg=="],
"@types/node": ["@types/node@25.0.0", "", { "dependencies": { "undici-types": "~7.16.0" } }, "sha512-rl78HwuZlaDIUSeUKkmogkhebA+8K1Hy7tddZuJ3D0xV8pZSfsYGTsliGUol1JPzu9EKnTxPC4L1fiWouStRew=="],
"aggregate-error": ["aggregate-error@3.1.0", "", { "dependencies": { "clean-stack": "^2.0.0", "indent-string": "^4.0.0" } }, "sha512-4I7Td01quW/RpocfNayFdFVk1qSuoh0E7JrbRJ16nH01HhKFQ88INq9Sd+nd72zqRySlr9BmDA8xlEJ6vJMrYA=="],
@@ -192,11 +186,9 @@
"constant-case": ["constant-case@3.0.4", "", { "dependencies": { "no-case": "^3.0.4", "tslib": "^2.0.3", "upper-case": "^2.0.2" } }, "sha512-I2hSBi7Vvs7BEuJDr5dDHfzb/Ruj3FyvFyh7KLilAjNQw3Be+xgqUBA2W6scVEcL0hL1dwPRtIqEPVUCKkSsyQ=="],
"csstype": ["csstype@3.1.3", "", {}, "sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw=="],
"deprecation": ["deprecation@2.3.1", "", {}, "sha512-xmHIy4F3scKVwMsQ4WnVaS8bHOx0DmVwRywosKhaILI0ywMDWPtBSku2HNxRvF7jtwDRsoEwYQSfbxj8b7RlJQ=="],
"detect-libc": ["detect-libc@2.0.4", "", {}, "sha512-3UDv+G9CsCKO1WKMGw9fwq/SWJYbI0c5Y7LU1AXYoDdbhE2AHQ6N6Nb34sG8Fj7T5APy8qXDCKuuIHd1BR0tVA=="],
"detect-libc": ["detect-libc@2.1.2", "", {}, "sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ=="],
"dot-case": ["dot-case@3.0.4", "", { "dependencies": { "no-case": "^3.0.4", "tslib": "^2.0.3" } }, "sha512-Kv5nKlh6yRrdrGvxeJ2e5y2eRUpkUosIW4A2AS38zwSz27zu7ufDwQPi5Jhs3XAlGNetl3bmnGhQsMtkKJnj3w=="],
@@ -220,27 +212,29 @@
"jws": ["jws@3.2.2", "", { "dependencies": { "jwa": "^1.4.1", "safe-buffer": "^5.0.1" } }, "sha512-YHlZCB6lMTllWDtSPHz/ZXTsi8S00usEV6v1tjq8tOUZzw7DpSDWVXjXDre6ed1w/pd495ODpHZYSdkRTsa0HA=="],
"lightningcss": ["lightningcss@1.30.1", "", { "dependencies": { "detect-libc": "^2.0.3" }, "optionalDependencies": { "lightningcss-darwin-arm64": "1.30.1", "lightningcss-darwin-x64": "1.30.1", "lightningcss-freebsd-x64": "1.30.1", "lightningcss-linux-arm-gnueabihf": "1.30.1", "lightningcss-linux-arm64-gnu": "1.30.1", "lightningcss-linux-arm64-musl": "1.30.1", "lightningcss-linux-x64-gnu": "1.30.1", "lightningcss-linux-x64-musl": "1.30.1", "lightningcss-win32-arm64-msvc": "1.30.1", "lightningcss-win32-x64-msvc": "1.30.1" } }, "sha512-xi6IyHML+c9+Q3W0S4fCQJOym42pyurFiJUHEcEyHS0CeKzia4yZDEsLlqOFykxOdHpNy0NmvVO31vcSqAxJCg=="],
"lightningcss": ["lightningcss@1.30.2", "", { "dependencies": { "detect-libc": "^2.0.3" }, "optionalDependencies": { "lightningcss-android-arm64": "1.30.2", "lightningcss-darwin-arm64": "1.30.2", "lightningcss-darwin-x64": "1.30.2", "lightningcss-freebsd-x64": "1.30.2", "lightningcss-linux-arm-gnueabihf": "1.30.2", "lightningcss-linux-arm64-gnu": "1.30.2", "lightningcss-linux-arm64-musl": "1.30.2", "lightningcss-linux-x64-gnu": "1.30.2", "lightningcss-linux-x64-musl": "1.30.2", "lightningcss-win32-arm64-msvc": "1.30.2", "lightningcss-win32-x64-msvc": "1.30.2" } }, "sha512-utfs7Pr5uJyyvDETitgsaqSyjCb2qNRAtuqUeWIAKztsOYdcACf2KtARYXg2pSvhkt+9NfoaNY7fxjl6nuMjIQ=="],
"lightningcss-darwin-arm64": ["lightningcss-darwin-arm64@1.30.1", "", { "os": "darwin", "cpu": "arm64" }, "sha512-c8JK7hyE65X1MHMN+Viq9n11RRC7hgin3HhYKhrMyaXflk5GVplZ60IxyoVtzILeKr+xAJwg6zK6sjTBJ0FKYQ=="],
"lightningcss-android-arm64": ["lightningcss-android-arm64@1.30.2", "", { "os": "android", "cpu": "arm64" }, "sha512-BH9sEdOCahSgmkVhBLeU7Hc9DWeZ1Eb6wNS6Da8igvUwAe0sqROHddIlvU06q3WyXVEOYDZ6ykBZQnjTbmo4+A=="],
"lightningcss-darwin-x64": ["lightningcss-darwin-x64@1.30.1", "", { "os": "darwin", "cpu": "x64" }, "sha512-k1EvjakfumAQoTfcXUcHQZhSpLlkAuEkdMBsI/ivWw9hL+7FtilQc0Cy3hrx0AAQrVtQAbMI7YjCgYgvn37PzA=="],
"lightningcss-darwin-arm64": ["lightningcss-darwin-arm64@1.30.2", "", { "os": "darwin", "cpu": "arm64" }, "sha512-ylTcDJBN3Hp21TdhRT5zBOIi73P6/W0qwvlFEk22fkdXchtNTOU4Qc37SkzV+EKYxLouZ6M4LG9NfZ1qkhhBWA=="],
"lightningcss-freebsd-x64": ["lightningcss-freebsd-x64@1.30.1", "", { "os": "freebsd", "cpu": "x64" }, "sha512-kmW6UGCGg2PcyUE59K5r0kWfKPAVy4SltVeut+umLCFoJ53RdCUWxcRDzO1eTaxf/7Q2H7LTquFHPL5R+Gjyig=="],
"lightningcss-darwin-x64": ["lightningcss-darwin-x64@1.30.2", "", { "os": "darwin", "cpu": "x64" }, "sha512-oBZgKchomuDYxr7ilwLcyms6BCyLn0z8J0+ZZmfpjwg9fRVZIR5/GMXd7r9RH94iDhld3UmSjBM6nXWM2TfZTQ=="],
"lightningcss-linux-arm-gnueabihf": ["lightningcss-linux-arm-gnueabihf@1.30.1", "", { "os": "linux", "cpu": "arm" }, "sha512-MjxUShl1v8pit+6D/zSPq9S9dQ2NPFSQwGvxBCYaBYLPlCWuPh9/t1MRS8iUaR8i+a6w7aps+B4N0S1TYP/R+Q=="],
"lightningcss-freebsd-x64": ["lightningcss-freebsd-x64@1.30.2", "", { "os": "freebsd", "cpu": "x64" }, "sha512-c2bH6xTrf4BDpK8MoGG4Bd6zAMZDAXS569UxCAGcA7IKbHNMlhGQ89eRmvpIUGfKWNVdbhSbkQaWhEoMGmGslA=="],
"lightningcss-linux-arm64-gnu": ["lightningcss-linux-arm64-gnu@1.30.1", "", { "os": "linux", "cpu": "arm64" }, "sha512-gB72maP8rmrKsnKYy8XUuXi/4OctJiuQjcuqWNlJQ6jZiWqtPvqFziskH3hnajfvKB27ynbVCucKSm2rkQp4Bw=="],
"lightningcss-linux-arm-gnueabihf": ["lightningcss-linux-arm-gnueabihf@1.30.2", "", { "os": "linux", "cpu": "arm" }, "sha512-eVdpxh4wYcm0PofJIZVuYuLiqBIakQ9uFZmipf6LF/HRj5Bgm0eb3qL/mr1smyXIS1twwOxNWndd8z0E374hiA=="],
"lightningcss-linux-arm64-musl": ["lightningcss-linux-arm64-musl@1.30.1", "", { "os": "linux", "cpu": "arm64" }, "sha512-jmUQVx4331m6LIX+0wUhBbmMX7TCfjF5FoOH6SD1CttzuYlGNVpA7QnrmLxrsub43ClTINfGSYyHe2HWeLl5CQ=="],
"lightningcss-linux-arm64-gnu": ["lightningcss-linux-arm64-gnu@1.30.2", "", { "os": "linux", "cpu": "arm64" }, "sha512-UK65WJAbwIJbiBFXpxrbTNArtfuznvxAJw4Q2ZGlU8kPeDIWEX1dg3rn2veBVUylA2Ezg89ktszWbaQnxD/e3A=="],
"lightningcss-linux-x64-gnu": ["lightningcss-linux-x64-gnu@1.30.1", "", { "os": "linux", "cpu": "x64" }, "sha512-piWx3z4wN8J8z3+O5kO74+yr6ze/dKmPnI7vLqfSqI8bccaTGY5xiSGVIJBDd5K5BHlvVLpUB3S2YCfelyJ1bw=="],
"lightningcss-linux-arm64-musl": ["lightningcss-linux-arm64-musl@1.30.2", "", { "os": "linux", "cpu": "arm64" }, "sha512-5Vh9dGeblpTxWHpOx8iauV02popZDsCYMPIgiuw97OJ5uaDsL86cnqSFs5LZkG3ghHoX5isLgWzMs+eD1YzrnA=="],
"lightningcss-linux-x64-musl": ["lightningcss-linux-x64-musl@1.30.1", "", { "os": "linux", "cpu": "x64" }, "sha512-rRomAK7eIkL+tHY0YPxbc5Dra2gXlI63HL+v1Pdi1a3sC+tJTcFrHX+E86sulgAXeI7rSzDYhPSeHHjqFhqfeQ=="],
"lightningcss-linux-x64-gnu": ["lightningcss-linux-x64-gnu@1.30.2", "", { "os": "linux", "cpu": "x64" }, "sha512-Cfd46gdmj1vQ+lR6VRTTadNHu6ALuw2pKR9lYq4FnhvgBc4zWY1EtZcAc6EffShbb1MFrIPfLDXD6Xprbnni4w=="],
"lightningcss-win32-arm64-msvc": ["lightningcss-win32-arm64-msvc@1.30.1", "", { "os": "win32", "cpu": "arm64" }, "sha512-mSL4rqPi4iXq5YVqzSsJgMVFENoa4nGTT/GjO2c0Yl9OuQfPsIfncvLrEW6RbbB24WtZ3xP/2CCmI3tNkNV4oA=="],
"lightningcss-linux-x64-musl": ["lightningcss-linux-x64-musl@1.30.2", "", { "os": "linux", "cpu": "x64" }, "sha512-XJaLUUFXb6/QG2lGIW6aIk6jKdtjtcffUT0NKvIqhSBY3hh9Ch+1LCeH80dR9q9LBjG3ewbDjnumefsLsP6aiA=="],
"lightningcss-win32-x64-msvc": ["lightningcss-win32-x64-msvc@1.30.1", "", { "os": "win32", "cpu": "x64" }, "sha512-PVqXh48wh4T53F/1CCu8PIPCxLzWyCnn/9T5W1Jpmdy5h9Cwd+0YQS6/LwhHXSafuc61/xg9Lv5OrCby6a++jg=="],
"lightningcss-win32-arm64-msvc": ["lightningcss-win32-arm64-msvc@1.30.2", "", { "os": "win32", "cpu": "arm64" }, "sha512-FZn+vaj7zLv//D/192WFFVA0RgHawIcHqLX9xuWiQt7P0PtdFEVaxgF9rjM/IRYHQXNnk61/H/gb2Ei+kUQ4xQ=="],
"lightningcss-win32-x64-msvc": ["lightningcss-win32-x64-msvc@1.30.2", "", { "os": "win32", "cpu": "x64" }, "sha512-5g1yc73p+iAkid5phb4oVFMB45417DkRevRbt/El/gKXJk4jid+vPFF/AXbxn05Aky8PapwzZrdJShv5C0avjw=="],
"lodash.includes": ["lodash.includes@4.3.0", "", {}, "sha512-W3Bx6mdkRTGtlJISOvVD/lbqjTlPPUDTMnlXZFnVwi9NKJ6tiAk6LVdlhZMm17VZisqhKcgzpO5Wz91PCt5b0w=="],
@@ -296,7 +290,7 @@
"scheduler": ["scheduler@0.23.2", "", { "dependencies": { "loose-envify": "^1.1.0" } }, "sha512-UOShsPwz7NrMUqhR6t0hWjFduvOzbtv7toDH1/hIrfRNIDBnnBWd0CwJTGvTpngVlmwGCdP9/Zl/tVrDqcuYzQ=="],
"semver": ["semver@7.7.2", "", { "bin": { "semver": "bin/semver.js" } }, "sha512-RF0Fw+rO5AMf9MAyaRXI4AV0Ulj5lMHqVxxdSgiVbixSCXoEmmX/jk0CuJw4+3SqroYO9VoUh+HcuJivvtJemA=="],
"semver": ["semver@7.7.3", "", { "bin": { "semver": "bin/semver.js" } }, "sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q=="],
"sentence-case": ["sentence-case@3.0.4", "", { "dependencies": { "no-case": "^3.0.4", "tslib": "^2.0.3", "upper-case-first": "^2.0.2" } }, "sha512-8LS0JInaQMCRoQ7YUytAo/xUu5W2XnQxV2HI/6uM6U7CITS1RqPElr30V6uIqyMKM9lJGRVFy5/4CuzcixNYSg=="],
@@ -312,7 +306,7 @@
"uglify-js": ["uglify-js@3.19.3", "", { "bin": { "uglifyjs": "bin/uglifyjs" } }, "sha512-v3Xu+yuwBXisp6QYTcH4UbH+xYJXqnq2m/LtQVWKWzYc1iehYnLixoQDN9FH6/j9/oybfd6W9Ghwkl8+UMKTKQ=="],
"undici-types": ["undici-types@7.10.0", "", {}, "sha512-t5Fy/nfn+14LuOc2KNYg75vZqClpAiqscVvMygNnlsHBFpSXdJaYtXMcdNLpl/Qvc3P2cB3s6lOV51nqsFq4ag=="],
"undici-types": ["undici-types@7.16.0", "", {}, "sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw=="],
"universal-github-app-jwt": ["universal-github-app-jwt@1.2.0", "", { "dependencies": { "@types/jsonwebtoken": "^9.0.0", "jsonwebtoken": "^9.0.2" } }, "sha512-dncpMpnsKBk0eetwfN8D8OUHGfiDhhJ+mtsbMl+7PfW7mYjiH8LIcqRmYMtzYLgSh47HjfdBtrBwIQ/gizKR3g=="],


@@ -10,4 +10,4 @@ preload = "./test/preload.ts"
[install]
linker = "isolated"
minimumReleaseAge = 1
minimumReleaseAge = 259200 # three days


@@ -21,6 +21,10 @@ endforeach()
if(CMAKE_SYSTEM_PROCESSOR MATCHES "arm|ARM|arm64|ARM64|aarch64|AARCH64")
if(APPLE)
register_compiler_flags(-mcpu=apple-m1)
elseif(WIN32)
# Windows ARM64: use /clang: prefix for clang-cl, skip for MSVC cl.exe subprojects
# These flags are only understood by clang-cl, not MSVC cl.exe
register_compiler_flags(/clang:-march=armv8-a+crc /clang:-mtune=ampere1)
else()
register_compiler_flags(-march=armv8-a+crc -mtune=ampere1)
endif()
@@ -51,6 +55,23 @@ if(ENABLE_ASAN)
)
endif()
if(ENABLE_FUZZILLI)
register_compiler_flags(
DESCRIPTION "Enable coverage instrumentation for fuzzing"
-fsanitize-coverage=trace-pc-guard
)
register_linker_flags(
DESCRIPTION "Link coverage instrumentation"
-fsanitize-coverage=trace-pc-guard
)
register_compiler_flags(
DESCRIPTION "Enable fuzzilli-specific code"
-DFUZZILLI_ENABLED
)
endif()
# --- Optimization level ---
if(DEBUG)
register_compiler_flags(
@@ -225,10 +246,17 @@ if(UNIX)
)
endif()
register_compiler_flags(
DESCRIPTION "Set C/C++ error limit"
-ferror-limit=${ERROR_LIMIT}
)
if(WIN32)
register_compiler_flags(
DESCRIPTION "Set C/C++ error limit"
/clang:-ferror-limit=${ERROR_LIMIT}
)
else()
register_compiler_flags(
DESCRIPTION "Set C/C++ error limit"
-ferror-limit=${ERROR_LIMIT}
)
endif()
# --- LTO ---
if(ENABLE_LTO)


@@ -106,9 +106,9 @@ else()
endif()
if(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "arm64|ARM64|aarch64|AARCH64")
set(HOST_OS "aarch64")
set(HOST_ARCH "aarch64")
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "x86_64|X86_64|x64|X64|amd64|AMD64")
set(HOST_OS "x64")
set(HOST_ARCH "x64")
else()
unsupported(CMAKE_HOST_SYSTEM_PROCESSOR)
endif()
@@ -125,7 +125,8 @@ setx(CWD ${CMAKE_SOURCE_DIR})
setx(BUILD_PATH ${CMAKE_BINARY_DIR})
optionx(CACHE_PATH FILEPATH "The path to the cache directory" DEFAULT ${BUILD_PATH}/cache)
optionx(CACHE_STRATEGY "read-write|read-only|none" "The strategy to use for caching" DEFAULT "read-write")
optionx(CACHE_STRATEGY "auto|distributed|local|none" "The strategy to use for caching" DEFAULT
"auto")
optionx(CI BOOL "If CI is enabled" DEFAULT OFF)
optionx(ENABLE_ANALYSIS BOOL "If static analysis targets should be enabled" DEFAULT OFF)
@@ -141,9 +142,39 @@ optionx(TMP_PATH FILEPATH "The path to the temporary directory" DEFAULT ${BUILD_
# --- Helper functions ---
# list_filter_out_regex()
#
# Description:
# Filters out elements from a list that match a regex pattern.
#
# Arguments:
# list - The list of strings to traverse
# pattern - The regex pattern to filter out
# touched - A variable to set if any items were removed
function(list_filter_out_regex list pattern touched)
set(result_list "${${list}}")
set(keep_list)
set(was_modified OFF)
foreach(line IN LISTS result_list)
if(line MATCHES "${pattern}")
set(was_modified ON)
else()
list(APPEND keep_list ${line})
endif()
endforeach()
set(${list} "${keep_list}" PARENT_SCOPE)
set(${touched} ${was_modified} PARENT_SCOPE)
endfunction()
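
A minimal usage sketch of the new `list_filter_out_regex()` helper, for reference only (the variable names below are hypothetical and not part of the diff):

```cmake
# Hypothetical example: drop every FOO=... entry from MY_ENV and record whether anything changed.
set(MY_ENV "FOO=1;BAR=2;FOO=3")
list_filter_out_regex(MY_ENV "^FOO=" MY_ENV_MODIFIED)
# MY_ENV is now "BAR=2"; MY_ENV_MODIFIED is ON because two items were removed.
```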
# setenv()
# Description:
# Sets an environment variable during the build step, and writes it to a .env file.
#
# See Also:
# unsetenv()
#
# Arguments:
# variable string - The variable to set
# value string - The value to set the variable to
@@ -156,13 +187,7 @@ function(setenv variable value)
if(EXISTS ${ENV_PATH})
file(STRINGS ${ENV_PATH} ENV_FILE ENCODING UTF-8)
foreach(line ${ENV_FILE})
if(line MATCHES "^${variable}=")
list(REMOVE_ITEM ENV_FILE ${line})
set(ENV_MODIFIED ON)
endif()
endforeach()
list_filter_out_regex(ENV_FILE "^${variable}=" ENV_MODIFIED)
if(ENV_MODIFIED)
list(APPEND ENV_FILE "${variable}=${value}")
@@ -178,6 +203,28 @@ function(setenv variable value)
message(STATUS "Set ENV ${variable}: ${value}")
endfunction()
# See setenv()
# Description:
# Exact opposite of setenv().
# Arguments:
# variable string - The variable to unset.
# See Also:
# setenv()
function(unsetenv variable)
set(ENV_PATH ${BUILD_PATH}/.env)
if(NOT EXISTS ${ENV_PATH})
return()
endif()
file(STRINGS ${ENV_PATH} ENV_FILE ENCODING UTF-8)
list_filter_out_regex(ENV_FILE "^${variable}=" ENV_MODIFIED)
if(ENV_MODIFIED)
list(JOIN ENV_FILE "\n" ENV_FILE)
file(WRITE ${ENV_PATH} ${ENV_FILE})
endif()
endfunction()
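
A short sketch of how the `setenv()`/`unsetenv()` pair is meant to be used together, assuming they behave as documented above (the value shown is hypothetical):

```cmake
# Hypothetical example: persist a variable to ${BUILD_PATH}/.env, then remove it again.
setenv(SCCACHE_LOG "debug")   # appends or updates the SCCACHE_LOG=debug line in .env
unsetenv(SCCACHE_LOG)         # strips the SCCACHE_LOG= line from .env again, if present
```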
# satisfies_range()
# Description:
# Check if a version satisfies a version range or list of ranges
@@ -386,6 +433,33 @@ function(register_command)
list(APPEND CMD_EFFECTIVE_DEPENDS ${CMD_EXECUTABLE})
endif()
# SKIP_CODEGEN: Skip commands that use BUN_EXECUTABLE if all outputs exist
# This is used for Windows ARM64 builds where x64 bun crashes under emulation
if(SKIP_CODEGEN AND CMD_EXECUTABLE STREQUAL "${BUN_EXECUTABLE}")
set(ALL_OUTPUTS_EXIST TRUE)
foreach(output ${CMD_OUTPUTS})
if(NOT EXISTS ${output})
set(ALL_OUTPUTS_EXIST FALSE)
break()
endif()
endforeach()
if(ALL_OUTPUTS_EXIST AND CMD_OUTPUTS)
message(STATUS "SKIP_CODEGEN: Skipping ${CMD_TARGET} (outputs exist)")
if(CMD_TARGET)
add_custom_target(${CMD_TARGET})
endif()
return()
elseif(NOT CMD_OUTPUTS)
message(STATUS "SKIP_CODEGEN: Skipping ${CMD_TARGET} (no outputs)")
if(CMD_TARGET)
add_custom_target(${CMD_TARGET})
endif()
return()
else()
message(FATAL_ERROR "SKIP_CODEGEN: Cannot skip ${CMD_TARGET} - missing outputs. Run codegen on x64 first.")
endif()
endif()
foreach(target ${CMD_TARGETS})
if(target MATCHES "/|\\\\")
message(FATAL_ERROR "register_command: TARGETS contains \"${target}\", if it's a path add it to SOURCES instead")
@@ -603,6 +677,7 @@ function(register_bun_install)
${NPM_CWD}
COMMAND
${BUN_EXECUTABLE}
${BUN_FLAGS}
install
--frozen-lockfile
SOURCES
@@ -710,7 +785,7 @@ function(register_cmake_command)
set(MAKE_EFFECTIVE_ARGS -B${MAKE_BUILD_PATH} ${CMAKE_ARGS})
set(setFlags GENERATOR BUILD_TYPE)
set(appendFlags C_FLAGS CXX_FLAGS LINKER_FLAGS)
set(appendFlags C_FLAGS CXX_FLAGS LINKER_FLAGS STATIC_LINKER_FLAGS EXE_LINKER_FLAGS SHARED_LINKER_FLAGS MODULE_LINKER_FLAGS)
set(specialFlags POSITION_INDEPENDENT_CODE)
set(flags ${setFlags} ${appendFlags} ${specialFlags})
@@ -756,6 +831,14 @@ function(register_cmake_command)
list(APPEND MAKE_EFFECTIVE_ARGS "-DCMAKE_${flag}=${MAKE_${flag}}")
endforeach()
# Workaround for CMake 4.1.0 bug: Force correct machine type for Windows ARM64
# Use toolchain file and set CMP0197 policy to prevent duplicate /machine: flags
if(WIN32 AND CMAKE_SYSTEM_PROCESSOR MATCHES "ARM64|aarch64|AARCH64")
list(APPEND MAKE_EFFECTIVE_ARGS "-DCMAKE_TOOLCHAIN_FILE=${CWD}/cmake/toolchains/windows-aarch64.cmake")
list(APPEND MAKE_EFFECTIVE_ARGS "-DCMAKE_POLICY_DEFAULT_CMP0197=NEW")
list(APPEND MAKE_EFFECTIVE_ARGS "-DCMAKE_PROJECT_INCLUDE=${CWD}/cmake/arm64-static-lib-fix.cmake")
endif()
if(DEFINED FRESH)
list(APPEND MAKE_EFFECTIVE_ARGS --fresh)
endif()


@@ -4,6 +4,7 @@ endif()
optionx(BUN_LINK_ONLY BOOL "If only the linking step should be built" DEFAULT OFF)
optionx(BUN_CPP_ONLY BOOL "If only the C++ part of Bun should be built" DEFAULT OFF)
optionx(SKIP_CODEGEN BOOL "Skip JavaScript codegen (for Windows ARM64 debug)" DEFAULT OFF)
optionx(BUILDKITE BOOL "If Buildkite is enabled" DEFAULT OFF)
optionx(GITHUB_ACTIONS BOOL "If GitHub Actions is enabled" DEFAULT OFF)
@@ -49,7 +50,7 @@ else()
message(FATAL_ERROR "Unsupported operating system: ${CMAKE_SYSTEM_NAME}")
endif()
if(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|arm64|arm")
if(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|arm64|ARM64")
setx(ARCH "aarch64")
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64|x86_64|x64|AMD64")
setx(ARCH "x64")
@@ -57,6 +58,18 @@ else()
message(FATAL_ERROR "Unsupported architecture: ${CMAKE_SYSTEM_PROCESSOR}")
endif()
# CMake 4.0+ policy CMP0197 controls how MSVC machine type flags are handled
# Setting to NEW prevents duplicate /machine: flags being added to linker commands
if(WIN32 AND ARCH STREQUAL "aarch64")
set(CMAKE_POLICY_DEFAULT_CMP0197 NEW)
set(CMAKE_MSVC_CMP0197 NEW)
# Set linker flags for exe/shared linking
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /machine:ARM64")
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} /machine:ARM64")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} /machine:ARM64")
set(CMAKE_STATIC_LINKER_FLAGS "${CMAKE_STATIC_LINKER_FLAGS} /machine:ARM64")
endif()
# Windows Code Signing Option
if(WIN32)
optionx(ENABLE_WINDOWS_CODESIGNING BOOL "Enable Windows code signing with DigiCert KeyLocker" DEFAULT OFF)
@@ -127,6 +140,8 @@ if (NOT ENABLE_ASAN)
set(ENABLE_ZIG_ASAN OFF)
endif()
optionx(ENABLE_FUZZILLI BOOL "If fuzzilli support should be enabled" DEFAULT OFF)
if(RELEASE AND LINUX AND CI AND NOT ENABLE_ASSERTIONS AND NOT ENABLE_ASAN)
set(DEFAULT_LTO ON)
else()
@@ -197,6 +212,16 @@ optionx(USE_WEBKIT_ICU BOOL "Use the ICU libraries from WebKit" DEFAULT ${DEFAUL
optionx(ERROR_LIMIT STRING "Maximum number of errors to show when compiling C++ code" DEFAULT "100")
# TinyCC is used for FFI JIT compilation
# Disable on Windows ARM64 where it's not yet supported
if(WIN32 AND ARCH STREQUAL "aarch64")
set(DEFAULT_ENABLE_TINYCC OFF)
else()
set(DEFAULT_ENABLE_TINYCC ON)
endif()
optionx(ENABLE_TINYCC BOOL "Enable TinyCC for FFI JIT compilation" DEFAULT ${DEFAULT_ENABLE_TINYCC})
# This is not an `option` because setting this variable to OFF is experimental
# and unsupported. This replaces the `use_mimalloc` variable previously in
# bun.zig, and enables C++ code to also be aware of the option.


@@ -34,26 +34,6 @@ register_command(
ALWAYS_RUN
)
if(GIT_CHANGED_SOURCES)
set(CLANG_FORMAT_CHANGED_SOURCES)
foreach(source ${CLANG_FORMAT_SOURCES})
list(FIND GIT_CHANGED_SOURCES ${source} index)
if(NOT ${index} EQUAL -1)
list(APPEND CLANG_FORMAT_CHANGED_SOURCES ${source})
endif()
endforeach()
endif()
if(CLANG_FORMAT_CHANGED_SOURCES)
set(CLANG_FORMAT_DIFF_COMMAND ${CLANG_FORMAT_PROGRAM}
-i # edits files in-place
--verbose
${CLANG_FORMAT_CHANGED_SOURCES}
)
else()
set(CLANG_FORMAT_DIFF_COMMAND ${CMAKE_COMMAND} -E echo "No changed files for clang-format")
endif()
register_command(
TARGET
clang-format-diff


@@ -3,7 +3,7 @@
set(CLANG_TIDY_SOURCES ${BUN_C_SOURCES} ${BUN_CXX_SOURCES})
set(CLANG_TIDY_COMMAND ${CLANG_TIDY_PROGRAM}
-p ${BUILD_PATH}
-p ${BUILD_PATH}
--config-file=${CWD}/.clang-tidy
)
@@ -40,27 +40,6 @@ register_command(
ALWAYS_RUN
)
if(GIT_CHANGED_SOURCES)
set(CLANG_TIDY_CHANGED_SOURCES)
foreach(source ${CLANG_TIDY_SOURCES})
list(FIND GIT_CHANGED_SOURCES ${source} index)
if(NOT ${index} EQUAL -1)
list(APPEND CLANG_TIDY_CHANGED_SOURCES ${source})
endif()
endforeach()
endif()
if(CLANG_TIDY_CHANGED_SOURCES)
set(CLANG_TIDY_DIFF_COMMAND ${CLANG_TIDY_PROGRAM}
${CLANG_TIDY_CHANGED_SOURCES}
--fix
--fix-errors
--fix-notes
)
else()
set(CLANG_TIDY_DIFF_COMMAND ${CMAKE_COMMAND} -E echo "No changed files for clang-tidy")
endif()
register_command(
TARGET
clang-tidy-diff


@@ -92,26 +92,6 @@ register_command(
ALWAYS_RUN
)
if(GIT_CHANGED_SOURCES)
set(PRETTIER_CHANGED_SOURCES)
foreach(source ${PRETTIER_SOURCES})
list(FIND GIT_CHANGED_SOURCES ${source} index)
if(NOT ${index} EQUAL -1)
list(APPEND PRETTIER_CHANGED_SOURCES ${source})
endif()
endforeach()
endif()
if(PRETTIER_CHANGED_SOURCES)
set(PRETTIER_DIFF_COMMAND ${PRETTIER_COMMAND}
--write
--plugin=prettier-plugin-organize-imports
${PRETTIER_CHANGED_SOURCES}
)
else()
set(PRETTIER_DIFF_COMMAND ${CMAKE_COMMAND} -E echo "No changed files for prettier")
endif()
register_command(
TARGET
prettier-diff


@@ -25,25 +25,6 @@ register_command(
ALWAYS_RUN
)
if(GIT_CHANGED_SOURCES)
set(ZIG_FORMAT_CHANGED_SOURCES)
foreach(source ${ZIG_FORMAT_SOURCES})
list(FIND GIT_CHANGED_SOURCES ${source} index)
if(NOT ${index} EQUAL -1)
list(APPEND ZIG_FORMAT_CHANGED_SOURCES ${source})
endif()
endforeach()
endif()
if(ZIG_FORMAT_CHANGED_SOURCES)
set(ZIG_FORMAT_DIFF_COMMAND ${ZIG_EXECUTABLE}
fmt
${ZIG_FORMAT_CHANGED_SOURCES}
)
else()
set(ZIG_FORMAT_DIFF_COMMAND ${CMAKE_COMMAND} -E echo "No changed files for zig-format")
endif()
register_command(
TARGET
zig-format-diff


@@ -0,0 +1,8 @@
# This file is included after project() via CMAKE_PROJECT_INCLUDE
# It fixes the static library creation command to use ARM64 machine type
if(WIN32 AND CMAKE_SYSTEM_PROCESSOR STREQUAL "aarch64")
# Override the static library creation commands to avoid spurious /machine:x64 flags
set(CMAKE_C_CREATE_STATIC_LIBRARY "<CMAKE_AR> /nologo /machine:ARM64 /out:<TARGET> <OBJECTS>" CACHE STRING "" FORCE)
set(CMAKE_CXX_CREATE_STATIC_LIBRARY "<CMAKE_AR> /nologo /machine:ARM64 /out:<TARGET> <OBJECTS>" CACHE STRING "" FORCE)
endif()


@@ -21,7 +21,12 @@ if(NOT DEFINED CMAKE_HOST_SYSTEM_PROCESSOR)
endif()
if(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "arm64|ARM64|aarch64|AARCH64")
set(ZIG_ARCH "aarch64")
# Windows ARM64 can run x86_64 via emulation, and no native ARM64 Zig build exists yet
if(CMAKE_HOST_WIN32)
set(ZIG_ARCH "x86_64")
else()
set(ZIG_ARCH "aarch64")
endif()
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "amd64|AMD64|x86_64|X86_64|x64|X64")
set(ZIG_ARCH "x86_64")
else()


@@ -0,0 +1,34 @@
@echo off
setlocal enabledelayedexpansion
REM Wrapper for llvm-lib that strips conflicting /machine:x64 flag for ARM64 builds
REM This is a workaround for CMake 4.1.0 bug
REM Find llvm-lib.exe - check LLVM_LIB env var, then PATH, then known locations
if defined LLVM_LIB (
set "LLVM_LIB_EXE=!LLVM_LIB!"
) else (
where llvm-lib.exe >nul 2>&1
if !ERRORLEVEL! equ 0 (
for /f "delims=" %%i in ('where llvm-lib.exe') do set "LLVM_LIB_EXE=%%i"
) else if exist "C:\Program Files\LLVM\bin\llvm-lib.exe" (
set "LLVM_LIB_EXE=C:\Program Files\LLVM\bin\llvm-lib.exe"
) else (
echo Error: Cannot find llvm-lib.exe. Set LLVM_LIB environment variable or add LLVM to PATH.
exit /b 1
)
)
set "ARGS="
for %%a in (%*) do (
set "ARG=%%a"
if /i "!ARG!"=="/machine:x64" (
REM Skip this argument
) else (
set "ARGS=!ARGS! %%a"
)
)
"!LLVM_LIB_EXE!" %ARGS%
exit /b %ERRORLEVEL%


@@ -0,0 +1,18 @@
# Wrapper for llvm-lib that strips conflicting /machine:x64 flag for ARM64 builds
# This is a workaround for CMake 4.1.0 bug where both /machine:ARM64 and /machine:x64 are added
# Find llvm-lib.exe - check LLVM_LIB env var, then PATH, then known locations
if ($env:LLVM_LIB) {
$llvmLib = $env:LLVM_LIB
} elseif (Get-Command llvm-lib.exe -ErrorAction SilentlyContinue) {
$llvmLib = (Get-Command llvm-lib.exe).Source
} elseif (Test-Path "C:\Program Files\LLVM\bin\llvm-lib.exe") {
$llvmLib = "C:\Program Files\LLVM\bin\llvm-lib.exe"
} else {
Write-Error "Cannot find llvm-lib.exe. Set LLVM_LIB environment variable or add LLVM to PATH."
exit 1
}
$filteredArgs = $args | Where-Object { $_ -ne "/machine:x64" }
& $llvmLib @filteredArgs
exit $LASTEXITCODE


@@ -0,0 +1,34 @@
@echo off
setlocal enabledelayedexpansion
REM Wrapper for llvm-lib that strips conflicting /machine:x64 flag for ARM64 builds
REM This is a workaround for CMake 4.1.0 bug
REM Find llvm-lib.exe - check LLVM_LIB env var, then PATH, then known locations
if defined LLVM_LIB (
set "LLVM_LIB_EXE=!LLVM_LIB!"
) else (
where llvm-lib.exe >nul 2>&1
if !ERRORLEVEL! equ 0 (
for /f "delims=" %%i in ('where llvm-lib.exe') do set "LLVM_LIB_EXE=%%i"
) else if exist "C:\Program Files\LLVM\bin\llvm-lib.exe" (
set "LLVM_LIB_EXE=C:\Program Files\LLVM\bin\llvm-lib.exe"
) else (
echo Error: Cannot find llvm-lib.exe. Set LLVM_LIB environment variable or add LLVM to PATH.
exit /b 1
)
)
set NEWARGS=
for %%a in (%*) do (
set "ARG=%%a"
if /i "!ARG!"=="/machine:x64" (
REM Skip /machine:x64 argument
) else (
set "NEWARGS=!NEWARGS! %%a"
)
)
"!LLVM_LIB_EXE!" %NEWARGS%
exit /b %ERRORLEVEL%


@@ -4,7 +4,7 @@ register_repository(
REPOSITORY
oven-sh/boringssl
COMMIT
f1ffd9e83d4f5c28a9c70d73f9a4e6fcf310062f
4f4f5ef8ebc6e23cbf393428f0ab1b526773f7ac
)
register_cmake_command(


@@ -57,13 +57,17 @@ set(BUN_DEPENDENCIES
LolHtml
Lshpack
Mimalloc
TinyCC
Zlib
LibArchive # must be loaded after zlib
HdrHistogram # must be loaded after zlib
Zstd
)
# TinyCC is optional - disabled on Windows ARM64 where it's not supported
if(ENABLE_TINYCC)
list(APPEND BUN_DEPENDENCIES TinyCC)
endif()
include(CloneZstd)
# --- Codegen ---
@@ -185,7 +189,7 @@ register_command(
CWD
${BUN_NODE_FALLBACKS_SOURCE}
COMMAND
${BUN_EXECUTABLE} run build-fallbacks
${BUN_EXECUTABLE} ${BUN_FLAGS} run build-fallbacks
${BUN_NODE_FALLBACKS_OUTPUT}
${BUN_NODE_FALLBACKS_SOURCES}
SOURCES
@@ -206,7 +210,7 @@ register_command(
CWD
${BUN_NODE_FALLBACKS_SOURCE}
COMMAND
${BUN_EXECUTABLE} build
${BUN_EXECUTABLE} ${BUN_FLAGS} build
${BUN_NODE_FALLBACKS_SOURCE}/node_modules/react-refresh/cjs/react-refresh-runtime.development.js
--outfile=${BUN_REACT_REFRESH_OUTPUT}
--target=browser
@@ -243,6 +247,7 @@ register_command(
"Generating ErrorCode.{zig,h}"
COMMAND
${BUN_EXECUTABLE}
${BUN_FLAGS}
run
${BUN_ERROR_CODE_SCRIPT}
${CODEGEN_PATH}
@@ -278,6 +283,7 @@ register_command(
"Generating ZigGeneratedClasses.{zig,cpp,h}"
COMMAND
${BUN_EXECUTABLE}
${BUN_FLAGS}
run
${BUN_ZIG_GENERATED_CLASSES_SCRIPT}
${BUN_ZIG_GENERATED_CLASSES_SOURCES}
@@ -317,6 +323,10 @@ set(BUN_CPP_OUTPUTS
${CODEGEN_PATH}/cpp.zig
)
set(BUN_CI_INFO_OUTPUTS
${CODEGEN_PATH}/ci_info.zig
)
register_command(
TARGET
bun-cppbind
@@ -324,6 +334,7 @@ register_command(
"Generating C++ --> Zig bindings"
COMMAND
${BUN_EXECUTABLE}
${BUN_FLAGS}
${CWD}/src/codegen/cppbind.ts
${CWD}/src
${CODEGEN_PATH}
@@ -336,23 +347,50 @@ register_command(
register_command(
TARGET
bun-js-modules
bun-ci-info
COMMENT
"Generating JavaScript modules"
"Generating CI info"
COMMAND
${BUN_EXECUTABLE}
run
${BUN_JAVASCRIPT_CODEGEN_SCRIPT}
--debug=${DEBUG}
${BUILD_PATH}
${BUN_FLAGS}
${CWD}/src/codegen/ci_info.ts
${CODEGEN_PATH}/ci_info.zig
SOURCES
${BUN_JAVASCRIPT_SOURCES}
${BUN_JAVASCRIPT_CODEGEN_SOURCES}
${BUN_JAVASCRIPT_CODEGEN_SCRIPT}
OUTPUTS
${BUN_JAVASCRIPT_OUTPUTS}
${BUN_CI_INFO_OUTPUTS}
)
if(SKIP_CODEGEN)
# Skip JavaScript codegen - useful for Windows ARM64 debug builds where bun crashes
message(STATUS "SKIP_CODEGEN is ON - skipping bun-js-modules codegen")
foreach(output ${BUN_JAVASCRIPT_OUTPUTS})
if(NOT EXISTS ${output})
message(FATAL_ERROR "SKIP_CODEGEN is ON but ${output} does not exist. Run codegen manually first.")
endif()
endforeach()
else()
register_command(
TARGET
bun-js-modules
COMMENT
"Generating JavaScript modules"
COMMAND
${BUN_EXECUTABLE}
${BUN_FLAGS}
run
${BUN_JAVASCRIPT_CODEGEN_SCRIPT}
--debug=${DEBUG}
${BUILD_PATH}
SOURCES
${BUN_JAVASCRIPT_SOURCES}
${BUN_JAVASCRIPT_CODEGEN_SOURCES}
${BUN_JAVASCRIPT_CODEGEN_SCRIPT}
OUTPUTS
${BUN_JAVASCRIPT_OUTPUTS}
)
endif()
set(BUN_BAKE_RUNTIME_CODEGEN_SCRIPT ${CWD}/src/codegen/bake-codegen.ts)
absolute_sources(BUN_BAKE_RUNTIME_SOURCES ${CWD}/cmake/sources/BakeRuntimeSources.txt)
@@ -373,6 +411,7 @@ register_command(
"Bundling Bake Runtime"
COMMAND
${BUN_EXECUTABLE}
${BUN_FLAGS}
run
${BUN_BAKE_RUNTIME_CODEGEN_SCRIPT}
--debug=${DEBUG}
@@ -396,16 +435,13 @@ string(REPLACE ";" "," BUN_BINDGENV2_SOURCES_COMMA_SEPARATED
"${BUN_BINDGENV2_SOURCES}")
execute_process(
COMMAND ${BUN_EXECUTABLE} run ${BUN_BINDGENV2_SCRIPT}
COMMAND ${BUN_EXECUTABLE} ${BUN_FLAGS} run ${BUN_BINDGENV2_SCRIPT}
--command=list-outputs
--sources=${BUN_BINDGENV2_SOURCES_COMMA_SEPARATED}
--codegen-path=${CODEGEN_PATH}
RESULT_VARIABLE bindgen_result
OUTPUT_VARIABLE bindgen_outputs
COMMAND_ERROR_IS_FATAL ANY
)
if(${bindgen_result})
message(FATAL_ERROR "bindgenv2/script.ts exited with non-zero status")
endif()
foreach(output IN LISTS bindgen_outputs)
if(output MATCHES "\.cpp$")
list(APPEND BUN_BINDGENV2_CPP_OUTPUTS ${output})
@@ -422,7 +458,7 @@ register_command(
COMMENT
"Generating bindings (v2)"
COMMAND
${BUN_EXECUTABLE} run ${BUN_BINDGENV2_SCRIPT}
${BUN_EXECUTABLE} ${BUN_FLAGS} run ${BUN_BINDGENV2_SCRIPT}
--command=generate
--codegen-path=${CODEGEN_PATH}
--sources=${BUN_BINDGENV2_SOURCES_COMMA_SEPARATED}
@@ -453,6 +489,7 @@ register_command(
"Processing \".bind.ts\" files"
COMMAND
${BUN_EXECUTABLE}
${BUN_FLAGS}
run
${BUN_BINDGEN_SCRIPT}
--debug=${DEBUG}
@@ -485,6 +522,7 @@ register_command(
"Generating JSSink.{cpp,h}"
COMMAND
${BUN_EXECUTABLE}
${BUN_FLAGS}
run
${BUN_JS_SINK_SCRIPT}
${CODEGEN_PATH}
@@ -557,6 +595,7 @@ foreach(i RANGE 0 ${BUN_OBJECT_LUT_SOURCES_MAX_INDEX})
${BUN_OBJECT_LUT_SOURCE}
COMMAND
${BUN_EXECUTABLE}
${BUN_FLAGS}
run
${BUN_OBJECT_LUT_SCRIPT}
${BUN_OBJECT_LUT_SOURCE}
@@ -612,6 +651,7 @@ set(BUN_ZIG_GENERATED_SOURCES
${BUN_ZIG_GENERATED_CLASSES_OUTPUTS}
${BUN_JAVASCRIPT_OUTPUTS}
${BUN_CPP_OUTPUTS}
${BUN_CI_INFO_OUTPUTS}
${BUN_BINDGENV2_ZIG_OUTPUTS}
)
@@ -639,6 +679,10 @@ endif()
if(CMAKE_SYSTEM_PROCESSOR MATCHES "arm|ARM|arm64|ARM64|aarch64|AARCH64")
if(APPLE)
set(ZIG_CPU "apple_m1")
elseif(WIN32)
# Windows ARM64: use a specific CPU with NEON support
# Zig running under x64 emulation would detect wrong CPU with "native"
set(ZIG_CPU "cortex_a76")
else()
set(ZIG_CPU "native")
endif()
@@ -675,7 +719,9 @@ register_command(
-Dcpu=${ZIG_CPU}
-Denable_logs=$<IF:$<BOOL:${ENABLE_LOGS}>,true,false>
-Denable_asan=$<IF:$<BOOL:${ENABLE_ZIG_ASAN}>,true,false>
-Denable_fuzzilli=$<IF:$<BOOL:${ENABLE_FUZZILLI}>,true,false>
-Denable_valgrind=$<IF:$<BOOL:${ENABLE_VALGRIND}>,true,false>
-Denable_tinycc=$<IF:$<BOOL:${ENABLE_TINYCC}>,true,false>
-Duse_mimalloc=$<IF:$<BOOL:${USE_MIMALLOC_AS_DEFAULT_ALLOCATOR}>,true,false>
-Dllvm_codegen_threads=${LLVM_ZIG_CODEGEN_THREADS}
-Dversion=${VERSION}
@@ -851,6 +897,7 @@ target_include_directories(${bun} PRIVATE
${CODEGEN_PATH}
${VENDOR_PATH}
${VENDOR_PATH}/picohttpparser
${VENDOR_PATH}/zlib
${NODEJS_HEADERS_PATH}/include
${NODEJS_HEADERS_PATH}/include/node
)
@@ -1178,6 +1225,29 @@ set_target_properties(${bun} PROPERTIES LINK_DEPENDS ${BUN_SYMBOLS_PATH})
include(SetupWebKit)
if(BUN_LINK_ONLY)
register_command(
TARGET
${bun}
TARGET_PHASE
POST_BUILD
COMMENT
"Uploading link metadata"
COMMAND
${CMAKE_COMMAND} -E env
BUN_VERSION=${VERSION}
WEBKIT_DOWNLOAD_URL=${WEBKIT_DOWNLOAD_URL}
WEBKIT_VERSION=${WEBKIT_VERSION}
ZIG_COMMIT=${ZIG_COMMIT}
${BUN_EXECUTABLE} ${BUN_FLAGS} ${CWD}/scripts/create-link-metadata.mjs ${BUILD_PATH} ${bun}
SOURCES
${BUN_ZIG_OUTPUT}
${BUN_CPP_OUTPUT}
ARTIFACTS
${BUILD_PATH}/link-metadata.json
)
endif()
if(WIN32)
if(DEBUG)
target_link_libraries(${bun} PRIVATE
@@ -1268,6 +1338,9 @@ if(WIN32)
wsock32 # ws2_32 required by TransmitFile aka sendfile on windows
delayimp.lib
)
# Required for static ICU linkage - without this, ICU headers expect DLL linkage
# which causes ABI mismatch and crashes (STATUS_STACK_BUFFER_OVERRUN)
target_compile_definitions(${bun} PRIVATE U_STATIC_IMPLEMENTATION)
endif()
# --- Packaging ---


@@ -4,7 +4,7 @@ register_repository(
REPOSITORY
c-ares/c-ares
COMMIT
d3a507e920e7af18a5efb7f9f1d8044ed4750013
3ac47ee46edd8ea40370222f91613fc16c434853
)
register_cmake_command(


@@ -20,6 +20,15 @@ set(HIGHWAY_CMAKE_ARGS
-DHWY_ENABLE_INSTALL=OFF
)
# On Windows ARM64 with clang-cl, the __ARM_NEON macro isn't defined by default
# but NEON intrinsics are supported. Define it so Highway can detect NEON support.
if(WIN32 AND CMAKE_SYSTEM_PROCESSOR MATCHES "ARM64|aarch64|AARCH64")
list(APPEND HIGHWAY_CMAKE_ARGS
-DCMAKE_C_FLAGS=-D__ARM_NEON=1
-DCMAKE_CXX_FLAGS=-D__ARM_NEON=1
)
endif()
register_cmake_command(
TARGET
highway


@@ -33,6 +33,37 @@ if (NOT WIN32)
set(RUSTFLAGS "-Cpanic=abort-Cdebuginfo=0-Cforce-unwind-tables=no-Copt-level=s")
endif()
# On Windows, ensure MSVC link.exe is used instead of Git's link.exe
set(LOLHTML_ENV
CARGO_TERM_COLOR=always
CARGO_TERM_VERBOSE=true
CARGO_TERM_DIAGNOSTIC=true
CARGO_ENCODED_RUSTFLAGS=${RUSTFLAGS}
CARGO_HOME=${CARGO_HOME}
RUSTUP_HOME=${RUSTUP_HOME}
)
if(WIN32)
# On Windows, tell Rust to use MSVC link.exe directly via the target-specific linker env var.
# This avoids Git's /usr/bin/link being found first in PATH.
# Find the MSVC link.exe from Visual Studio installation
file(GLOB MSVC_VERSIONS "C:/Program Files/Microsoft Visual Studio/2022/*/VC/Tools/MSVC/*")
if(MSVC_VERSIONS)
list(GET MSVC_VERSIONS -1 MSVC_LATEST) # Get the latest version
if(CMAKE_SYSTEM_PROCESSOR MATCHES "ARM64|aarch64")
set(MSVC_LINK_PATH "${MSVC_LATEST}/bin/HostARM64/arm64/link.exe")
set(CARGO_LINKER_VAR "CARGO_TARGET_AARCH64_PC_WINDOWS_MSVC_LINKER")
else()
set(MSVC_LINK_PATH "${MSVC_LATEST}/bin/Hostx64/x64/link.exe")
set(CARGO_LINKER_VAR "CARGO_TARGET_X86_64_PC_WINDOWS_MSVC_LINKER")
endif()
if(EXISTS "${MSVC_LINK_PATH}")
list(APPEND LOLHTML_ENV "${CARGO_LINKER_VAR}=${MSVC_LINK_PATH}")
message(STATUS "lolhtml: Using MSVC link.exe: ${MSVC_LINK_PATH}")
endif()
endif()
endif()
register_command(
TARGET
lolhtml
@@ -45,12 +76,7 @@ register_command(
ARTIFACTS
${LOLHTML_LIBRARY}
ENVIRONMENT
CARGO_TERM_COLOR=always
CARGO_TERM_VERBOSE=true
CARGO_TERM_DIAGNOSTIC=true
CARGO_ENCODED_RUSTFLAGS=${RUSTFLAGS}
CARGO_HOME=${CARGO_HOME}
RUSTUP_HOME=${RUSTUP_HOME}
${LOLHTML_ENV}
)
target_link_libraries(${bun} PRIVATE ${LOLHTML_LIBRARY})


@@ -4,7 +4,7 @@ register_repository(
REPOSITORY
oven-sh/tinycc
COMMIT
29985a3b59898861442fa3b43f663fc1af2591d7
12882eee073cfe5c7621bcfadf679e1372d4537b
)
register_cmake_command(


@@ -0,0 +1,20 @@
set(CMAKE_SYSTEM_NAME Windows)
set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER_WORKS ON)
set(CMAKE_CXX_COMPILER_WORKS ON)
# Force ARM64 architecture ID - this is what CMake uses to determine /machine: flag
set(MSVC_C_ARCHITECTURE_ID ARM64 CACHE INTERNAL "")
set(MSVC_CXX_ARCHITECTURE_ID ARM64 CACHE INTERNAL "")
# CMake 4.0+ policy CMP0197 controls how MSVC machine type flags are handled
set(CMAKE_POLICY_DEFAULT_CMP0197 NEW CACHE INTERNAL "")
# Clear any inherited static linker flags that might have wrong machine types
set(CMAKE_STATIC_LINKER_FLAGS "" CACHE STRING "" FORCE)
# Use wrapper script for llvm-lib that strips /machine:x64 flags
# This works around CMake 4.1.0 bug where both ARM64 and x64 machine flags are added
get_filename_component(_TOOLCHAIN_DIR "${CMAKE_CURRENT_LIST_DIR}" DIRECTORY)
set(CMAKE_AR "${_TOOLCHAIN_DIR}/scripts/llvm-lib-wrapper.bat" CACHE FILEPATH "" FORCE)


@@ -48,6 +48,17 @@ if(NOT BUILDKITE_BUILD_STATUS EQUAL 0)
endif()
file(READ ${BUILDKITE_BUILD_PATH}/build.json BUILDKITE_BUILD)
# CMake's string(JSON ...) interprets escape sequences like \n, \r, \t.
# We need to escape these specific sequences while preserving valid JSON escapes like \" and \\.
# Strategy: Use a unique placeholder to protect \\ sequences, escape \n/\r/\t, then restore \\.
# This prevents \\n (literal backslash + n) from being corrupted to \\\n.
set(BKSLASH_PLACEHOLDER "___BKSLASH_PLACEHOLDER_7f3a9b2c___")
string(REPLACE "\\\\" "${BKSLASH_PLACEHOLDER}" BUILDKITE_BUILD "${BUILDKITE_BUILD}")
string(REPLACE "\\n" "\\\\n" BUILDKITE_BUILD "${BUILDKITE_BUILD}")
string(REPLACE "\\r" "\\\\r" BUILDKITE_BUILD "${BUILDKITE_BUILD}")
string(REPLACE "\\t" "\\\\t" BUILDKITE_BUILD "${BUILDKITE_BUILD}")
string(REPLACE "${BKSLASH_PLACEHOLDER}" "\\\\" BUILDKITE_BUILD "${BUILDKITE_BUILD}")
string(JSON BUILDKITE_BUILD_UUID GET ${BUILDKITE_BUILD} id)
string(JSON BUILDKITE_JOBS GET ${BUILDKITE_BUILD} jobs)
string(JSON BUILDKITE_JOBS_COUNT LENGTH ${BUILDKITE_JOBS})
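
To make the escaping strategy above concrete, here is a small standalone sketch (the sample value and the shorter placeholder name are hypothetical, not taken from the diff) showing how the placeholder pass keeps a JSON-escaped backslash such as `C:\\new` intact, while a genuine `\n` escape elsewhere would still be doubled:

```cmake
# Hypothetical sample: the raw JSON text C:\\new (an escaped backslash followed by "new").
set(SAMPLE "C:\\\\new")
# Shield literal \\ first, so its second backslash cannot pair with the "n" that follows it.
string(REPLACE "\\\\" "___BKSLASH_PLACEHOLDER___" SAMPLE "${SAMPLE}")  # -> C:___BKSLASH_PLACEHOLDER___new
# The \n -> \\n pass now finds nothing to change here; a real \n escape elsewhere would be doubled.
string(REPLACE "\\n" "\\\\n" SAMPLE "${SAMPLE}")
# Restore the protected backslashes: SAMPLE is C:\\new again, uncorrupted.
string(REPLACE "___BKSLASH_PLACEHOLDER___" "\\\\" SAMPLE "${SAMPLE}")
```

Without the shielding step, the `\n` replacement would see the second backslash of `\\` followed by `n` and corrupt the value to `C:\\\new`, which is exactly the failure mode the comment describes.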


@@ -17,6 +17,14 @@ if (NOT CI)
set(BUN_EXECUTABLE ${BUN_EXECUTABLE} CACHE FILEPATH "Bun executable" FORCE)
endif()
# On Windows ARM64, we need to add --smol flag to avoid crashes when running
# x64 bun under WoW64 emulation
if(WIN32 AND ARCH STREQUAL "aarch64")
set(BUN_FLAGS "--smol" CACHE STRING "Extra flags for bun executable")
else()
set(BUN_FLAGS "" CACHE STRING "Extra flags for bun executable")
endif()
# If this is not set, some advanced features are not checked.
# https://github.com/oven-sh/bun/blob/cd7f6a1589db7f1e39dc4e3f4a17234afbe7826c/src/bun.js/javascript.zig#L1069-L1072
setenv(BUN_GARBAGE_COLLECTOR_LEVEL 1)


@@ -5,18 +5,12 @@ if(NOT ENABLE_CCACHE OR CACHE_STRATEGY STREQUAL "none")
return()
endif()
if (CI AND NOT APPLE)
setenv(CCACHE_DISABLE 1)
return()
endif()
find_command(
VARIABLE
CCACHE_PROGRAM
COMMAND
ccache
REQUIRED
${CI}
)
if(NOT CCACHE_PROGRAM)


@@ -4,41 +4,9 @@ find_command(
COMMAND
git
REQUIRED
OFF
${CI}
)
if(NOT GIT_PROGRAM)
return()
endif()
set(GIT_DIFF_COMMAND ${GIT_PROGRAM} diff --no-color --name-only --diff-filter=AMCR origin/main HEAD)
execute_process(
COMMAND
${GIT_DIFF_COMMAND}
WORKING_DIRECTORY
${CWD}
OUTPUT_STRIP_TRAILING_WHITESPACE
OUTPUT_VARIABLE
GIT_DIFF
ERROR_STRIP_TRAILING_WHITESPACE
ERROR_VARIABLE
GIT_DIFF_ERROR
RESULT_VARIABLE
GIT_DIFF_RESULT
)
if(NOT GIT_DIFF_RESULT EQUAL 0)
message(WARNING "Command failed: ${GIT_DIFF_COMMAND} ${GIT_DIFF_ERROR}")
return()
endif()
string(REPLACE "\n" ";" GIT_CHANGED_SOURCES "${GIT_DIFF}")
if(CI)
set(GIT_CHANGED_SOURCES "${GIT_CHANGED_SOURCES}")
message(STATUS "Set GIT_CHANGED_SOURCES: ${GIT_CHANGED_SOURCES}")
endif()
list(TRANSFORM GIT_CHANGED_SOURCES PREPEND ${CWD}/)
list(LENGTH GIT_CHANGED_SOURCES GIT_CHANGED_SOURCES_COUNT)


@@ -12,7 +12,13 @@ if(NOT ENABLE_LLVM)
return()
endif()
set(DEFAULT_LLVM_VERSION "19.1.7")
# LLVM 21 is required for Windows ARM64 (first version with ARM64 Windows builds)
# Other platforms use LLVM 19.1.7
if(WIN32 AND CMAKE_SYSTEM_PROCESSOR MATCHES "ARM64|aarch64|AARCH64")
set(DEFAULT_LLVM_VERSION "21.1.8")
else()
set(DEFAULT_LLVM_VERSION "19.1.7")
endif()
optionx(LLVM_VERSION STRING "The version of LLVM to use" DEFAULT ${DEFAULT_LLVM_VERSION})


@@ -1,90 +0,0 @@
if(CACHE_STRATEGY STREQUAL "none")
return()
endif()
function(check_aws_credentials OUT_VAR)
set(HAS_CREDENTIALS FALSE)
if(DEFINED ENV{AWS_ACCESS_KEY_ID} AND DEFINED ENV{AWS_SECRET_ACCESS_KEY})
set(HAS_CREDENTIALS TRUE)
message(NOTICE
"sccache: Using AWS credentials found in environment variables")
endif()
# Check for ~/.aws directory since sccache may use that.
if(NOT HAS_CREDENTIALS)
if(WIN32)
set(AWS_CONFIG_DIR "$ENV{USERPROFILE}/.aws")
else()
set(AWS_CONFIG_DIR "$ENV{HOME}/.aws")
endif()
if(EXISTS "${AWS_CONFIG_DIR}/credentials")
set(HAS_CREDENTIALS TRUE)
message(NOTICE
"sccache: Using AWS credentials found in ${AWS_CONFIG_DIR}/credentials")
endif()
endif()
set(${OUT_VAR} ${HAS_CREDENTIALS} PARENT_SCOPE)
endfunction()
function(check_running_in_ci OUT_VAR)
set(IS_CI FALSE)
# Query EC2 instance metadata service to check if running on buildkite-agent
# The IP address 169.254.169.254 is a well-known link-local address for querying EC2 instance
# metadata:
# https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
execute_process(
COMMAND curl -s -m 0.5 http://169.254.169.254/latest/meta-data/tags/instance/Service
OUTPUT_VARIABLE METADATA_OUTPUT
ERROR_VARIABLE METADATA_ERROR
RESULT_VARIABLE METADATA_RESULT
OUTPUT_STRIP_TRAILING_WHITESPACE
ERROR_QUIET
)
# Check if the request succeeded and returned exactly "buildkite-agent"
if(METADATA_RESULT EQUAL 0 AND METADATA_OUTPUT STREQUAL "buildkite-agent")
set(IS_CI TRUE)
endif()
set(${OUT_VAR} ${IS_CI} PARENT_SCOPE)
endfunction()
check_running_in_ci(IS_IN_CI)
find_command(VARIABLE SCCACHE_PROGRAM COMMAND sccache REQUIRED ${IS_IN_CI})
if(NOT SCCACHE_PROGRAM)
message(WARNING "sccache not found. Your builds will be slower.")
return()
endif()
set(SCCACHE_ARGS CMAKE_C_COMPILER_LAUNCHER CMAKE_CXX_COMPILER_LAUNCHER)
foreach(arg ${SCCACHE_ARGS})
setx(${arg} ${SCCACHE_PROGRAM})
list(APPEND CMAKE_ARGS -D${arg}=${${arg}})
endforeach()
# Configure S3 bucket for distributed caching
setenv(SCCACHE_BUCKET "bun-build-sccache-store")
setenv(SCCACHE_REGION "us-west-1")
setenv(SCCACHE_DIR "${CACHE_PATH}/sccache")
# Handle credentials based on cache strategy
if (CACHE_STRATEGY STREQUAL "read-only")
setenv(SCCACHE_S3_NO_CREDENTIALS "1")
message(STATUS "sccache configured in read-only mode.")
else()
# Check for AWS credentials and enable anonymous access if needed
check_aws_credentials(HAS_AWS_CREDENTIALS)
if(NOT IS_IN_CI AND NOT HAS_AWS_CREDENTIALS)
setenv(SCCACHE_S3_NO_CREDENTIALS "1")
message(NOTICE "sccache: No AWS credentials found, enabling anonymous S3 "
"access. Writing to the cache will be disabled.")
endif()
endif()
setenv(SCCACHE_LOG "info")
message(STATUS "sccache configured for bun-build-sccache-store (us-west-1).")


@@ -2,10 +2,14 @@ option(WEBKIT_VERSION "The version of WebKit to use")
option(WEBKIT_LOCAL "If a local version of WebKit should be used instead of downloading")
if(NOT WEBKIT_VERSION)
set(WEBKIT_VERSION 6d0f3aac0b817cc01a846b3754b21271adedac12)
set(WEBKIT_VERSION 87c6cde57dd1d2a82bbc9caf500f70f8a7c1f249)
endif()
# Use preview build URL for Windows ARM64 until the fix is merged to main
set(WEBKIT_PREVIEW_PR 140)
string(SUBSTRING ${WEBKIT_VERSION} 0 16 WEBKIT_VERSION_PREFIX)
string(SUBSTRING ${WEBKIT_VERSION} 0 8 WEBKIT_VERSION_SHORT)
if(WEBKIT_LOCAL)
set(DEFAULT_WEBKIT_PATH ${VENDOR_PATH}/WebKit/WebKitBuild/${CMAKE_BUILD_TYPE})
@@ -28,13 +32,30 @@ if(WEBKIT_LOCAL)
# make jsc-compile-debug jsc-copy-headers
include_directories(
${WEBKIT_PATH}
${WEBKIT_PATH}/JavaScriptCore/Headers
${WEBKIT_PATH}/JavaScriptCore/Headers/JavaScriptCore
${WEBKIT_PATH}/JavaScriptCore/PrivateHeaders
${WEBKIT_PATH}/bmalloc/Headers
${WEBKIT_PATH}/WTF/Headers
${WEBKIT_PATH}/JavaScriptCore/DerivedSources/inspector
${WEBKIT_PATH}/JavaScriptCore/PrivateHeaders/JavaScriptCore
${WEBKIT_PATH}/JavaScriptCore/DerivedSources/inspector
)
# On Windows, add ICU include path from vcpkg
if(WIN32)
# Auto-detect vcpkg triplet
set(VCPKG_ARM64_PATH ${VENDOR_PATH}/WebKit/vcpkg_installed/arm64-windows-static)
set(VCPKG_X64_PATH ${VENDOR_PATH}/WebKit/vcpkg_installed/x64-windows-static)
if(EXISTS ${VCPKG_ARM64_PATH})
set(VCPKG_ICU_PATH ${VCPKG_ARM64_PATH})
else()
set(VCPKG_ICU_PATH ${VCPKG_X64_PATH})
endif()
if(EXISTS ${VCPKG_ICU_PATH}/include)
include_directories(${VCPKG_ICU_PATH}/include)
message(STATUS "Using ICU from vcpkg: ${VCPKG_ICU_PATH}/include")
endif()
endif()
endif()
# After this point, only prebuilt WebKit is supported
@@ -51,7 +72,7 @@ else()
message(FATAL_ERROR "Unsupported operating system: ${CMAKE_SYSTEM_NAME}")
endif()
if(CMAKE_SYSTEM_PROCESSOR MATCHES "arm64|aarch64")
if(CMAKE_SYSTEM_PROCESSOR MATCHES "arm64|ARM64|aarch64|AARCH64")
set(WEBKIT_ARCH "arm64")
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64|x86_64|x64|AMD64")
set(WEBKIT_ARCH "amd64")
@@ -80,7 +101,14 @@ endif()
setx(WEBKIT_NAME bun-webkit-${WEBKIT_OS}-${WEBKIT_ARCH}${WEBKIT_SUFFIX})
set(WEBKIT_FILENAME ${WEBKIT_NAME}.tar.gz)
setx(WEBKIT_DOWNLOAD_URL https://github.com/oven-sh/WebKit/releases/download/autobuild-${WEBKIT_VERSION}/${WEBKIT_FILENAME})
if(WEBKIT_VERSION MATCHES "^autobuild-")
set(WEBKIT_TAG ${WEBKIT_VERSION})
else()
set(WEBKIT_TAG autobuild-${WEBKIT_VERSION})
endif()
setx(WEBKIT_DOWNLOAD_URL https://github.com/oven-sh/WebKit/releases/download/${WEBKIT_TAG}/${WEBKIT_FILENAME})
if(EXISTS ${WEBKIT_PATH}/package.json)
file(READ ${WEBKIT_PATH}/package.json WEBKIT_PACKAGE_JSON)
@@ -90,7 +118,14 @@ if(EXISTS ${WEBKIT_PATH}/package.json)
endif()
endif()
file(DOWNLOAD ${WEBKIT_DOWNLOAD_URL} ${CACHE_PATH}/${WEBKIT_FILENAME} SHOW_PROGRESS)
file(
DOWNLOAD ${WEBKIT_DOWNLOAD_URL} ${CACHE_PATH}/${WEBKIT_FILENAME} SHOW_PROGRESS
STATUS WEBKIT_DOWNLOAD_STATUS
)
if(NOT "${WEBKIT_DOWNLOAD_STATUS}" MATCHES "^0;")
message(FATAL_ERROR "Failed to download WebKit: ${WEBKIT_DOWNLOAD_STATUS}")
endif()
file(ARCHIVE_EXTRACT INPUT ${CACHE_PATH}/${WEBKIT_FILENAME} DESTINATION ${CACHE_PATH} TOUCH)
file(REMOVE ${CACHE_PATH}/${WEBKIT_FILENAME})
file(REMOVE_RECURSE ${WEBKIT_PATH})


@@ -1,4 +1,4 @@
if(CMAKE_SYSTEM_PROCESSOR MATCHES "arm64|aarch64")
if(CMAKE_SYSTEM_PROCESSOR MATCHES "arm64|ARM64|aarch64|AARCH64")
set(DEFAULT_ZIG_ARCH "aarch64")
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64|x86_64|x64|AMD64")
set(DEFAULT_ZIG_ARCH "x86_64")
@@ -20,7 +20,7 @@ else()
unsupported(CMAKE_SYSTEM_NAME)
endif()
set(ZIG_COMMIT "55fdbfa0c86be86b68d43a4ba761e6909eb0d7b2")
set(ZIG_COMMIT "c1423ff3fc7064635773a4a4616c5bf986eb00fe")
optionx(ZIG_TARGET STRING "The zig target to use" DEFAULT ${DEFAULT_ZIG_TARGET})
if(CMAKE_BUILD_TYPE STREQUAL "Release")
@@ -55,13 +55,7 @@ optionx(ZIG_OBJECT_FORMAT "obj|bc" "Output file format for Zig object files" DEF
optionx(ZIG_LOCAL_CACHE_DIR FILEPATH "The path to local the zig cache directory" DEFAULT ${CACHE_PATH}/zig/local)
optionx(ZIG_GLOBAL_CACHE_DIR FILEPATH "The path to the global zig cache directory" DEFAULT ${CACHE_PATH}/zig/global)
if(CI)
set(ZIG_COMPILER_SAFE_DEFAULT ON)
else()
set(ZIG_COMPILER_SAFE_DEFAULT OFF)
endif()
optionx(ZIG_COMPILER_SAFE BOOL "Download a ReleaseSafe build of the Zig compiler." DEFAULT ${ZIG_COMPILER_SAFE_DEFAULT})
optionx(ZIG_COMPILER_SAFE BOOL "Download a ReleaseSafe build of the Zig compiler." DEFAULT ${CI})
setenv(ZIG_LOCAL_CACHE_DIR ${ZIG_LOCAL_CACHE_DIR})
setenv(ZIG_GLOBAL_CACHE_DIR ${ZIG_GLOBAL_CACHE_DIR})


@@ -35,8 +35,8 @@ end
set -l bun_install_boolean_flags yarn production optional development no-save dry-run force no-cache silent verbose global
set -l bun_install_boolean_flags_descriptions "Write a yarn.lock file (yarn v1)" "Don't install devDependencies" "Add dependency to optionalDependencies" "Add dependency to devDependencies" "Don't update package.json or save a lockfile" "Don't install anything" "Always request the latest versions from the registry & reinstall all dependencies" "Ignore manifest cache entirely" "Don't output anything" "Excessively verbose logging" "Use global folder"
set -l bun_builtin_cmds_without_run dev create help bun upgrade discord install remove add init pm x
set -l bun_builtin_cmds_accepting_flags create help bun upgrade discord run init link unlink pm x
set -l bun_builtin_cmds_without_run dev create help bun upgrade discord install remove add update init pm x
set -l bun_builtin_cmds_accepting_flags create help bun upgrade discord run init link unlink pm x update
function __bun_complete_bins_scripts --inherit-variable bun_builtin_cmds_without_run -d "Emit bun completions for bins and scripts"
# Do nothing if we already have a builtin subcommand,
@@ -148,14 +148,14 @@ complete -c bun \
for i in (seq (count $bun_install_boolean_flags))
complete -c bun \
-n "__fish_seen_subcommand_from install add remove" -l "$bun_install_boolean_flags[$i]" -d "$bun_install_boolean_flags_descriptions[$i]"
-n "__fish_seen_subcommand_from install add remove update" -l "$bun_install_boolean_flags[$i]" -d "$bun_install_boolean_flags_descriptions[$i]"
end
complete -c bun \
-n "__fish_seen_subcommand_from install add remove" -l 'cwd' -d 'Change working directory'
-n "__fish_seen_subcommand_from install add remove update" -l 'cwd' -d 'Change working directory'
complete -c bun \
-n "__fish_seen_subcommand_from install add remove" -l 'cache-dir' -d 'Choose a cache directory (default: $HOME/.bun/install/cache)'
-n "__fish_seen_subcommand_from install add remove update" -l 'cache-dir' -d 'Choose a cache directory (default: $HOME/.bun/install/cache)'
complete -c bun \
-n "__fish_seen_subcommand_from add" -d 'Popular' -a '(__fish__get_bun_packages)'
@@ -183,4 +183,5 @@ complete -c bun -n "__fish_use_subcommand" -a "unlink" -d "Unregister a local np
complete -c bun -n "__fish_use_subcommand" -a "pm" -d "Additional package management utilities" -f
complete -c bun -n "__fish_use_subcommand" -a "x" -d "Execute a package binary, installing if needed" -f
complete -c bun -n "__fish_use_subcommand" -a "outdated" -d "Display the latest versions of outdated dependencies" -f
complete -c bun -n "__fish_use_subcommand" -a "update" -d "Update dependencies to their latest versions" -f
complete -c bun -n "__fish_use_subcommand" -a "publish" -d "Publish your package from local to npm" -f


@@ -1,4 +1,4 @@
FROM debian:bookworm-slim AS build
FROM debian:trixie-slim AS build
# https://github.com/oven-sh/bun/releases
ARG BUN_VERSION=latest
@@ -55,7 +55,7 @@ RUN apt-get update -qq \
&& which bun \
&& bun --version
FROM debian:bookworm-slim
FROM debian:trixie-slim
# Disable the runtime transpiler cache by default inside Docker containers.
# On ephemeral containers, the cache is not useful


@@ -1,4 +1,4 @@
FROM debian:bookworm-slim AS build
FROM debian:trixie-slim AS build
# https://github.com/oven-sh/bun/releases
ARG BUN_VERSION=latest
@@ -56,7 +56,7 @@ RUN apt-get update -qq \
&& rm -f "bun-linux-$build.zip" SHASUMS256.txt.asc SHASUMS256.txt \
&& chmod +x /usr/local/bin/bun
FROM debian:bookworm
FROM debian:trixie
COPY docker-entrypoint.sh /usr/local/bin
COPY --from=build /usr/local/bin/bun /usr/local/bin/bun


@@ -1,4 +1,4 @@
FROM debian:bookworm-slim AS build
FROM debian:trixie-slim AS build
# https://github.com/oven-sh/bun/releases
ARG BUN_VERSION=latest
@@ -55,7 +55,7 @@ RUN apt-get update -qq \
&& which bun \
&& bun --version
FROM gcr.io/distroless/base-nossl-debian11
FROM gcr.io/distroless/base-nossl-debian13
# Disable the runtime transpiler cache by default inside Docker containers.
# On ephemeral containers, the cache is not useful
@@ -71,6 +71,7 @@ ENV PATH "${PATH}:/usr/local/bun-node-fallback-bin"
# Temporarily use the `build`-stage image binaries to create a symlink:
RUN --mount=type=bind,from=build,source=/usr/bin,target=/usr/bin \
--mount=type=bind,from=build,source=/etc/alternatives/which,target=/etc/alternatives/which \
--mount=type=bind,from=build,source=/bin,target=/bin \
--mount=type=bind,from=build,source=/usr/lib,target=/usr/lib \
--mount=type=bind,from=build,source=/lib,target=/lib \


@@ -34,7 +34,7 @@ By default, Bun's CSS bundler targets the following browsers:
The CSS Nesting specification allows you to write more concise and intuitive stylesheets by nesting selectors inside one another. Instead of repeating parent selectors across your CSS file, you can write child styles directly within their parent blocks.
```css title="styles.css" icon="file-code"
```scss title="styles.css" icon="file-code"
/* With nesting */
.card {
background: white;
@@ -72,7 +72,7 @@ Bun's CSS bundler automatically converts this nested syntax into traditional fla
You can also nest media queries and other at-rules inside selectors, eliminating the need to repeat selector patterns:
```css title="styles.css" icon="file-code"
```scss title="styles.css" icon="file-code"
.responsive-element {
display: block;
@@ -100,7 +100,7 @@ This compiles to:
The `color-mix()` function gives you an easy way to blend two colors together according to a specified ratio in a chosen color space. This powerful feature lets you create color variations without manually calculating the resulting values.
```css title="styles.css" icon="file-code"
```scss title="styles.css" icon="file-code"
.button {
/* Mix blue and red in the RGB color space with a 30/70 proportion */
background-color: color-mix(in srgb, blue 30%, red);


@@ -65,6 +65,7 @@ In Bun's CLI, simple boolean flags like `--minify` do not accept an argument. Ot
| `--chunk-names` | `--chunk-naming` | Renamed for consistency with naming in JS API |
| `--color` | n/a | Always enabled |
| `--drop` | `--drop` | |
| n/a | `--feature` | Bun-specific. Enable feature flags for compile-time dead-code elimination via `import { feature } from "bun:bundle"` |
| `--entry-names` | `--entry-naming` | Renamed for consistency with naming in JS API |
| `--global-name` | n/a | Not applicable, Bun does not support `iife` output at this time |
| `--ignore-annotations` | `--ignore-dce-annotations` | |
@@ -231,23 +232,67 @@ const myPlugin: BunPlugin = {
### onResolve
<Tabs>
<Tab title="options">- 🟢 `filter` - 🟢 `namespace`</Tab>
<Tab title="options">
- 🟢 `filter`
- 🟢 `namespace`
</Tab>
<Tab title="arguments">
- 🟢 `path` - 🟢 `importer` - 🔴 `namespace` - 🔴 `resolveDir` - 🔴 `kind` - 🔴 `pluginData`
- 🟢 `path`
- 🟢 `importer`
- 🔴 `namespace`
- 🔴 `resolveDir`
- 🔴 `kind`
- 🔴 `pluginData`
</Tab>
<Tab title="results">
- 🟢 `namespace` - 🟢 `path` - 🔴 `errors` - 🔴 `external` - 🔴 `pluginData` - 🔴 `pluginName` - 🔴 `sideEffects` -
🔴 `suffix` - 🔴 `warnings` - 🔴 `watchDirs` - 🔴 `watchFiles`
- 🟢 `namespace`
- 🟢 `path`
- 🔴 `errors`
- 🔴 `external`
- 🔴 `pluginData`
- 🔴 `pluginName`
- 🔴 `sideEffects`
- 🔴 `suffix`
- 🔴 `warnings`
- 🔴 `watchDirs`
- 🔴 `watchFiles`
</Tab>
</Tabs>
### onLoad
<Tabs>
<Tab title="options">- 🟢 `filter` - 🟢 `namespace`</Tab>
<Tab title="arguments">- 🟢 `path` - 🔴 `namespace` - 🔴 `suffix` - 🔴 `pluginData`</Tab>
<Tab title="options">
- 🟢 `filter`
- 🟢 `namespace`
</Tab>
<Tab title="arguments">
- 🟢 `path`
- 🔴 `namespace`
- 🔴 `suffix`
- 🔴 `pluginData`
</Tab>
<Tab title="results">
- 🟢 `contents` - 🟢 `loader` - 🔴 `errors` - 🔴 `pluginData` - 🔴 `pluginName` - 🔴 `resolveDir` - 🔴 `warnings` -
🔴 `watchDirs` - 🔴 `watchFiles`
- 🟢 `contents`
- 🟢 `loader`
- 🔴 `errors`
- 🔴 `pluginData`
- 🔴 `pluginName`
- 🔴 `resolveDir`
- 🔴 `warnings`
- 🔴 `watchDirs`
- 🔴 `watchFiles`
</Tab>
</Tabs>

File diff suppressed because it is too large


@@ -427,8 +427,8 @@ This will allow you to use TailwindCSS utility classes in your HTML and CSS file
<!doctype html>
<html>
<head>
<link rel="stylesheet" href="tailwindcss" />
<!-- [!code ++] -->
<link rel="stylesheet" href="tailwindcss" />
</head>
<!-- the rest of your HTML... -->
</html>
@@ -448,8 +448,8 @@ Alternatively, you can import TailwindCSS in your CSS file:
<!doctype html>
<html>
<head>
<link rel="stylesheet" href="./style.css" />
<!-- [!code ++] -->
<link rel="stylesheet" href="./style.css" />
</head>
<!-- the rest of your HTML... -->
</html>
@@ -492,6 +492,28 @@ Bun will lazily resolve and load each plugin and use them to bundle your routes.
the CLI.
</Note>
## Inline Environment Variables
Bun can replace `process.env.*` references in your frontend JavaScript and TypeScript with their actual values at build time. Configure the `env` option in your `bunfig.toml`:
```toml title="bunfig.toml" icon="settings"
[serve.static]
env = "PUBLIC_*" # only inline env vars starting with PUBLIC_ (recommended)
# env = "inline" # inline all environment variables
# env = "disable" # disable env var replacement (default)
```
<Note>
This only works with literal `process.env.FOO` references, not `import.meta.env` or indirect access like `const env =
process.env; env.FOO`.
If an environment variable is not set, you may see runtime errors like `ReferenceError: process
is not defined` in the browser.
</Note>
See the [HTML & static sites documentation](/bundler/html-static#inline-environment-variables) for more details on build-time configuration and examples.
## How It Works
Bun uses `HTMLRewriter` to scan for `<script>` and `<link>` tags in HTML files, uses them as entrypoints for Bun's bundler, generates an optimized bundle for the JavaScript/TypeScript/TSX/JSX and CSS files, and serves the result.
@@ -632,7 +654,7 @@ const server = serve({
console.log(`🚀 Server running on ${server.url}`);
```
```html title="public/index.html"
```html title="public/index.html" icon="file-code"
<!DOCTYPE html>
<html>
<head>
@@ -757,7 +779,7 @@ export function App() {
}
```
```css title="src/styles.css"
```css title="src/styles.css" icon="file-code"
* {
margin: 0;
padding: 0;
@@ -999,7 +1021,7 @@ CMD ["bun", "index.js"]
### Environment Variables
```bash title=".env.production" icon="file-code"
```ini title=".env.production" icon="file-code"
NODE_ENV=production
PORT=3000
DATABASE_URL=postgresql://user:pass@localhost:5432/myapp

View File

@@ -9,7 +9,7 @@ Hot Module Replacement (HMR) allows you to update modules in a running applicati
## `import.meta.hot` API Reference
Bun implements a client-side HMR API modeled after [Vite's `import.meta.hot` API](https://vitejs.dev/guide/api-hmr.html). It can be checked for with `if (import.meta.hot)`, tree-shaking it in production.
Bun implements a client-side HMR API modeled after [Vite's `import.meta.hot` API](https://vite.dev/guide/api-hmr). It can be checked for with `if (import.meta.hot)`, tree-shaking it in production.
```ts title="index.ts" icon="/icons/typescript.svg"
if (import.meta.hot) {
@@ -144,7 +144,7 @@ Indicates that multiple dependencies' modules can be accepted. This variant acce
`import.meta.hot.data` maintains state between module instances during hot replacement, enabling data transfer from previous to new versions. When `import.meta.hot.data` is written into, Bun will also mark this module as capable of self-accepting (equivalent of calling `import.meta.hot.accept()`).
```jsx title="index.ts" icon="/icons/typescript.svg"
```tsx title="index.tsx" icon="/icons/typescript.svg"
import { createRoot } from "react-dom/client";
import { App } from "./app";


@@ -25,7 +25,7 @@ bun ./index.html
```
```
Bun v1.2.20
Bun v1.3.3
ready in 6.62ms
→ http://localhost:3000/
Press h + Enter to show shortcuts
@@ -51,7 +51,7 @@ bun index.html
```
```
Bun v1.2.20
Bun v1.3.3
ready in 6.62ms
→ http://localhost:3000/
Press h + Enter to show shortcuts
@@ -81,7 +81,7 @@ bun ./index.html ./about.html
```
```txt
Bun v1.2.20
Bun v1.3.3
ready in 6.62ms
→ http://localhost:3000/
Routes:
@@ -104,7 +104,7 @@ bun ./**/*.html
```
```
Bun v1.2.20
Bun v1.3.3
ready in 6.62ms
→ http://localhost:3000/
Routes:
@@ -122,7 +122,7 @@ bun ./index.html ./about/index.html ./about/foo/index.html
```
```
Bun v1.2.20
Bun v1.3.3
ready in 6.62ms
→ http://localhost:3000/
Routes:
@@ -164,7 +164,7 @@ For example:
}
```
```css abc.css
```css abc.css icon="file-code"
body {
background-color: red;
}
@@ -174,7 +174,7 @@ body {
This outputs:
```css
```css styles.css icon="file-code"
body {
background-color: red;
}
@@ -237,17 +237,118 @@ Then, reference TailwindCSS in your HTML via `<link>` tag, `@import` in CSS, or
<Tabs>
<Tab title="index.html">
```html title="index.html" icon="file-code"
{/* Reference TailwindCSS in your HTML */}
<!-- Reference TailwindCSS in your HTML -->
<link rel="stylesheet" href="tailwindcss" />
```
</Tab>
<Tab title="styles.css">
```css title="styles.css" icon="file-code"
@import "tailwindcss";
```
</Tab>
<Tab title="app.ts">
```ts title="app.ts" icon="/icons/typescript.svg"
import "tailwindcss";
```
</Tab>
<Tab title="styles.css">```css title="styles.css" icon="file-code" @import "tailwindcss"; ```</Tab>
<Tab title="app.ts">```ts title="app.ts" icon="/icons/typescript.svg" import "tailwindcss"; ```</Tab>
</Tabs>
<Info>Only one of those are necessary, not all three.</Info>
## Inline environment variables
Bun can replace `process.env.*` references in your JavaScript and TypeScript with their actual values at build time. This is useful for injecting configuration like API URLs or feature flags into your frontend code.
### Dev server (runtime)
To inline environment variables when using `bun ./index.html`, configure the `env` option in your `bunfig.toml`:
```toml title="bunfig.toml" icon="settings"
[serve.static]
env = "PUBLIC_*" # only inline env vars starting with PUBLIC_ (recommended)
# env = "inline" # inline all environment variables
# env = "disable" # disable env var replacement (default)
```
<Note>
This only works with literal `process.env.FOO` references, not `import.meta.env` or indirect access like `const env =
process.env; env.FOO`.
If an environment variable is not set, you may see runtime errors like `ReferenceError: process
is not defined` in the browser.
</Note>
Then run the dev server:
```bash terminal icon="terminal"
PUBLIC_API_URL=https://api.example.com bun ./index.html
```
### Build for production
When building static HTML for production, use the `env` option to inline environment variables:
<Tabs>
<Tab title="CLI">
```bash terminal icon="terminal"
# Inline all environment variables
bun build ./index.html --outdir=dist --env=inline
# Only inline env vars with a specific prefix (recommended)
bun build ./index.html --outdir=dist --env=PUBLIC_*
```
</Tab>
<Tab title="API">
```ts title="build.ts" icon="/icons/typescript.svg"
// Inline all environment variables
await Bun.build({
entrypoints: ["./index.html"],
outdir: "./dist",
env: "inline", // [!code highlight]
});
// Only inline env vars with a specific prefix (recommended)
await Bun.build({
entrypoints: ["./index.html"],
outdir: "./dist",
env: "PUBLIC_*", // [!code highlight]
});
```
</Tab>
</Tabs>
### Example
Given this source file:
```ts title="app.ts" icon="/icons/typescript.svg"
const apiUrl = process.env.PUBLIC_API_URL;
console.log(`API URL: ${apiUrl}`);
```
And running with `PUBLIC_API_URL=https://api.example.com`:
```bash terminal icon="terminal"
PUBLIC_API_URL=https://api.example.com bun build ./index.html --outdir=dist --env=PUBLIC_*
```
The bundled output will contain:
```js title="dist/app.js" icon="/icons/javascript.svg"
const apiUrl = "https://api.example.com";
console.log(`API URL: ${apiUrl}`);
```
## Echo console logs from browser to terminal
Bun's dev server supports streaming console logs from the browser to the terminal.
@@ -259,7 +360,7 @@ bun ./index.html --console
```
```
Bun v1.2.20
Bun v1.3.3
ready in 6.62ms
→ http://localhost:3000/
Press h + Enter to show shortcuts
@@ -371,7 +472,8 @@ All paths are resolved relative to your HTML file, making it easy to organize yo
- Need more configuration options for things like asset handling
- Need a way to configure CORS, headers, etc.
If you want to submit a PR, most of the code is [here](https://github.com/oven-sh/bun/blob/main/src/bun.js/api/bun/html-rewriter.ts). You could even copy paste that file into your project and use it as a starting point.
{/* todo: find the correct link to link to as this 404's and there isn't any similar files */}
{/* If you want to submit a PR, most of the code is [here](https://github.com/oven-sh/bun/blob/main/src/bun.js/api/bun/html-rewriter.ts). You could even copy paste that file into your project and use it as a starting point. */}
</Warning>


@@ -106,7 +106,7 @@ For each file specified in `entrypoints`, Bun will generate a new bundle. This b
The contents of `out/index.js` will look something like this:
```ts title="out/index.js" icon="/icons/javascript.svg"
```js title="out/index.js" icon="/icons/javascript.svg"
// out/index.js
// ...
// ~20k lines of code
@@ -160,8 +160,12 @@ Like the Bun runtime, the bundler supports an array of file types out of the box
| ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `.js` `.jsx` `.cjs` `.mjs` `.mts` `.cts` `.ts` `.tsx` | Uses Bun's built-in transpiler to parse the file and transpile TypeScript/JSX syntax to vanilla JavaScript. The bundler executes a set of default transforms including dead code elimination and tree shaking. At the moment Bun does not attempt to down-convert syntax; if you use recently ECMAScript syntax, that will be reflected in the bundled code. |
| `.json` | JSON files are parsed and inlined into the bundle as a JavaScript object.<br/><br/>`js<br/>import pkg from "./package.json";<br/>pkg.name; // => "my-package"<br/>` |
| `.jsonc` | JSON with comments. Files are parsed and inlined into the bundle as a JavaScript object.<br/><br/>`js<br/>import config from "./config.jsonc";<br/>config.name; // => "my-config"<br/>` |
| `.toml` | TOML files are parsed and inlined into the bundle as a JavaScript object.<br/><br/>`js<br/>import config from "./bunfig.toml";<br/>config.logLevel; // => "debug"<br/>` |
| `.yaml` `.yml` | YAML files are parsed and inlined into the bundle as a JavaScript object.<br/><br/>`js<br/>import config from "./config.yaml";<br/>config.name; // => "my-app"<br/>` |
| `.txt` | The contents of the text file are read and inlined into the bundle as a string.<br/><br/>`js<br/>import contents from "./file.txt";<br/>console.log(contents); // => "Hello, world!"<br/>` |
| `.html` | HTML files are processed and any referenced assets (scripts, stylesheets, images) are bundled. |
| `.css` | CSS files are bundled together into a single `.css` file in the output directory. |
| `.node` `.wasm` | These files are supported by the Bun runtime, but during bundling they are treated as assets. |
### Assets
@@ -216,6 +220,78 @@ An array of paths corresponding to the entrypoints of our application. One bundl
</Tab>
</Tabs>
### files
A map of file paths to their contents for in-memory bundling. This allows you to bundle virtual files that don't exist on disk, or override the contents of files that do exist. This option is only available in the JavaScript API.
File contents can be provided as a `string`, `Blob`, `TypedArray`, or `ArrayBuffer`.
#### Bundle entirely from memory
You can bundle code without any files on disk by providing all sources via `files`:
```ts title="build.ts" icon="/icons/typescript.svg"
const result = await Bun.build({
  entrypoints: ["/app/index.ts"],
  files: {
    "/app/index.ts": `
      import { greet } from "./greet.ts";
      console.log(greet("World"));
    `,
    "/app/greet.ts": `
      export function greet(name: string) {
        return "Hello, " + name + "!";
      }
    `,
  },
});

const output = await result.outputs[0].text();
console.log(output);
```
When all entrypoints are in the `files` map, the current working directory is used as the root.
#### Override files on disk
In-memory files take priority over files on disk. This lets you override specific files while keeping the rest of your codebase unchanged:
```ts title="build.ts" icon="/icons/typescript.svg"
// Assume ./src/config.ts exists on disk with development settings
await Bun.build({
  entrypoints: ["./src/index.ts"],
  files: {
    // Override config.ts with production values
    "./src/config.ts": `
      export const API_URL = "https://api.production.com";
      export const DEBUG = false;
    `,
  },
  outdir: "./dist",
});
```
#### Mix disk and virtual files
Real files on disk can import virtual files, and virtual files can import real files:
```ts title="build.ts" icon="/icons/typescript.svg"
// ./src/index.ts exists on disk and imports "./generated.ts"
await Bun.build({
  entrypoints: ["./src/index.ts"],
  files: {
    // Provide a virtual file that index.ts imports
    "./src/generated.ts": `
      export const BUILD_ID = "${crypto.randomUUID()}";
      export const BUILD_TIME = ${Date.now()};
    `,
  },
  outdir: "./dist",
});
```
This is useful for code generation, injecting build-time constants, or testing with mock modules.
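Because file contents can also be a `Blob`, `TypedArray`, or `ArrayBuffer` rather than a string, you can pass bytes you already have in memory directly. A minimal sketch, using hypothetical `/app` paths:
```ts title="build.ts" icon="/icons/typescript.svg"
// Sketch only: the virtual module below is supplied as a Uint8Array rather
// than a string; a Blob or ArrayBuffer would work the same way. The /app
// paths are hypothetical.
const answerSource = new TextEncoder().encode(`export const answer = 42;`);

await Bun.build({
  entrypoints: ["/app/index.ts"],
  files: {
    "/app/index.ts": `import { answer } from "./answer.ts"; console.log(answer);`,
    "/app/answer.ts": answerSource, // bytes instead of a string
  },
});
```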
### outdir
The directory where output files will be written.
@@ -523,7 +599,7 @@ Injects environment variables into the bundled output by converting `process.env
For the input below:
```ts title="input.js" icon="/icons/javascript.svg"
```js title="input.js" icon="/icons/javascript.svg"
// input.js
console.log(process.env.FOO);
console.log(process.env.BAZ);
@@ -531,7 +607,7 @@ console.log(process.env.BAZ);
The generated bundle will contain the following code:
```ts title="output.js" icon="/icons/javascript.svg"
```js title="output.js" icon="/icons/javascript.svg"
// output.js
console.log("bar");
console.log("123");
@@ -576,7 +652,7 @@ console.log(process.env.BAZ);
The generated bundle will contain the following code:
```ts title="output.js" icon="/icons/javascript.svg"
```js title="output.js" icon="/icons/javascript.svg"
console.log(process.env.FOO);
console.log("https://acme.com");
console.log(process.env.BAZ);
@@ -718,7 +794,7 @@ Normally, bundling `index.tsx` would generate a bundle containing the entire sou
The generated bundle will look something like this:
```ts title="out/index.js" icon="/icons/javascript.svg"
```js title="out/index.js" icon="/icons/javascript.svg"
import { z } from "zod";
// ...
@@ -1022,7 +1098,7 @@ Setting `publicPath` will prefix all file paths with the specified value.
The output file would now look something like this.
```ts title="out/index.js" icon="/icons/javascript.svg"
```js title="out/index.js" icon="/icons/javascript.svg"
var logo = "https://cdn.example.com/logo-a7305bdef.svg";
```
@@ -1137,6 +1213,157 @@ Remove function calls from a bundle. For example, `--drop=console` will remove a
</Tab>
</Tabs>
### features
Enable compile-time feature flags for dead-code elimination. This provides a way to conditionally include or exclude code paths at bundle time using `import { feature } from "bun:bundle"`.
```ts title="app.ts" icon="/icons/typescript.svg"
import { feature } from "bun:bundle";
if (feature("PREMIUM")) {
// Only included when PREMIUM flag is enabled
initPremiumFeatures();
}
if (feature("DEBUG")) {
// Only included when DEBUG flag is enabled
console.log("Debug mode");
}
```
<Tabs>
<Tab title="JavaScript">
```ts title="build.ts" icon="/icons/typescript.svg"
await Bun.build({
entrypoints: ['./app.ts'],
outdir: './out',
features: ["PREMIUM"], // PREMIUM=true, DEBUG=false
})
```
</Tab>
<Tab title="CLI">
```bash terminal icon="terminal"
bun build ./app.ts --outdir ./out --feature PREMIUM
```
</Tab>
</Tabs>
The `feature()` function is replaced with `true` or `false` at bundle time. Combined with minification, unreachable code is eliminated:
```ts title="Input" icon="/icons/typescript.svg"
import { feature } from "bun:bundle";
const mode = feature("PREMIUM") ? "premium" : "free";
```
```js title="Output (with --feature PREMIUM --minify)" icon="/icons/javascript.svg"
var mode = "premium";
```
```js title="Output (without --feature PREMIUM, with --minify)" icon="/icons/javascript.svg"
var mode = "free";
```
**Key behaviors:**
- `feature()` requires a string literal argument — dynamic values are not supported
- The `bun:bundle` import is completely removed from the output
- Works with `bun build`, `bun run`, and `bun test`
- Multiple flags can be enabled: `--feature FLAG_A --feature FLAG_B`
- For type safety, augment the `Registry` interface to restrict `feature()` to known flags (see below)
**Use cases:**
- Platform-specific code (`feature("SERVER")` vs `feature("CLIENT")`)
- Environment-based features (`feature("DEVELOPMENT")`)
- Gradual feature rollouts
- A/B testing variants
- Paid tier features
**Type safety:** By default, `feature()` accepts any string. To get autocomplete and catch typos at compile time, create an `env.d.ts` file (or add to an existing `.d.ts`) and augment the `Registry` interface:
```ts title="env.d.ts" icon="/icons/typescript.svg"
declare module "bun:bundle" {
  interface Registry {
    features: "DEBUG" | "PREMIUM" | "BETA_FEATURES";
  }
}
```
Ensure the file is included in your `tsconfig.json` (e.g., `"include": ["src", "env.d.ts"]`). Now `feature()` only accepts those flags, and invalid strings like `feature("TYPO")` become type errors.
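For instance, with the `Registry` augmentation above in place, call sites stay the same but only the declared flags type-check (a sketch, not additional API):
```ts title="app.ts" icon="/icons/typescript.svg"
import { feature } from "bun:bundle";

// "PREMIUM" is declared in the Registry above, so this call type-checks
// and is still replaced with true/false at bundle time.
if (feature("PREMIUM")) {
  console.log("premium build");
}

// feature("TYPO"); // now a compile-time type error
```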
### metafile
Generate metadata about the build in a structured format. The metafile contains information about all input files, output files, their sizes, imports, and exports. This is useful for:
- **Bundle analysis**: Understand what's contributing to bundle size
- **Visualization**: Feed into tools like [esbuild's bundle analyzer](https://esbuild.github.io/analyze/) or other visualization tools
- **Dependency tracking**: See the full import graph of your application
- **CI integration**: Track bundle size changes over time
<Tabs>
<Tab title="JavaScript">
```ts title="build.ts" icon="/icons/typescript.svg"
const result = await Bun.build({
  entrypoints: ['./src/index.ts'],
  outdir: './dist',
  metafile: true,
});

if (result.metafile) {
  // Analyze inputs
  for (const [path, meta] of Object.entries(result.metafile.inputs)) {
    console.log(`${path}: ${meta.bytes} bytes`);
  }

  // Analyze outputs
  for (const [path, meta] of Object.entries(result.metafile.outputs)) {
    console.log(`${path}: ${meta.bytes} bytes`);
  }

  // Save for external analysis tools
  await Bun.write('./dist/meta.json', JSON.stringify(result.metafile));
}
```
</Tab>
<Tab title="CLI">
```bash terminal icon="terminal"
bun build ./src/index.ts --outdir ./dist --metafile ./dist/meta.json
```
</Tab>
</Tabs>
The metafile structure contains:
```ts
interface BuildMetafile {
  inputs: {
    [path: string]: {
      bytes: number;
      imports: Array<{
        path: string;
        kind: ImportKind;
        original?: string; // Original specifier before resolution
        external?: boolean;
      }>;
      format?: "esm" | "cjs" | "json" | "css";
    };
  };
  outputs: {
    [path: string]: {
      bytes: number;
      inputs: {
        [path: string]: { bytesInOutput: number };
      };
      imports: Array<{ path: string; kind: ImportKind }>;
      exports: string[];
      entryPoint?: string;
      cssBundle?: string; // Associated CSS file for JS entry points
    };
  };
}
```
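As an illustration of how `outputs[...].inputs` and `bytesInOutput` fit together, here is a small sketch (not an official helper) that lists the top contributors to each output file:
```ts title="analyze-metafile.ts" icon="/icons/typescript.svg"
// Sketch only: rank the inputs that contribute the most bytes to each output.
const result = await Bun.build({
  entrypoints: ["./src/index.ts"],
  outdir: "./dist",
  metafile: true,
});

if (result.metafile) {
  for (const [outputPath, output] of Object.entries(result.metafile.outputs)) {
    const topInputs = Object.entries(output.inputs)
      .sort(([, a], [, b]) => b.bytesInOutput - a.bytesInOutput)
      .slice(0, 5); // five largest contributors

    console.log(`${outputPath} (${output.bytes} bytes)`);
    for (const [inputPath, { bytesInOutput }] of topInputs) {
      console.log(`  ${inputPath}: ${bytesInOutput} bytes`);
    }
  }
}
```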
## Outputs
The `Bun.build` function returns a `Promise<BuildOutput>`, defined as:
@@ -1146,6 +1373,7 @@ interface BuildOutput {
outputs: BuildArtifact[];
success: boolean;
logs: Array<object>; // see docs for details
metafile?: BuildMetafile; // only when metafile: true
}
interface BuildArtifact extends Blob {
@@ -1352,10 +1580,12 @@ interface BuildConfig {
* JSX configuration object for controlling JSX transform behavior
*/
jsx?: {
runtime?: "automatic" | "classic";
importSource?: string;
factory?: string;
fragment?: string;
importSource?: string;
runtime?: "automatic" | "classic";
sideEffects?: boolean;
development?: boolean;
};
naming?:
| string
@@ -1372,7 +1602,7 @@ interface BuildConfig {
publicPath?: string;
define?: Record<string, string>;
loader?: { [k in string]: Loader };
sourcemap?: "none" | "linked" | "inline" | "external" | "linked" | boolean; // default: "none", true -> "inline"
sourcemap?: "none" | "linked" | "inline" | "external" | boolean; // default: "none", true -> "inline"
/**
* package.json `exports` conditions used when resolving imports
*
@@ -1439,13 +1669,20 @@ interface BuildConfig {
drop?: string[];
/**
* When set to `true`, the returned promise rejects with an AggregateError when a build failure happens.
* When set to `false`, the `success` property of the returned object will be `false` when a build failure happens.
* - When set to `true`, the returned promise rejects with an AggregateError when a build failure happens.
* - When set to `false`, returns a {@link BuildOutput} with `{success: false}`
*
* This defaults to `false` in Bun 1.1 and will change to `true` in Bun 1.2
* as most usage of `Bun.build` forgets to check for errors.
* @default true
*/
throw?: boolean;
/**
* Custom tsconfig.json file path to use for path resolution.
* Equivalent to `--tsconfig-override` in the CLI.
*/
tsconfig?: string;
outdir?: string;
}
interface BuildOutput {
@@ -1462,7 +1699,21 @@ interface BuildArtifact extends Blob {
sourcemap: BuildArtifact | null;
}
type Loader = "js" | "jsx" | "ts" | "tsx" | "json" | "toml" | "file" | "napi" | "wasm" | "text";
type Loader =
| "js"
| "jsx"
| "ts"
| "tsx"
| "css"
| "json"
| "jsonc"
| "toml"
| "yaml"
| "text"
| "file"
| "napi"
| "wasm"
| "html";
interface BuildOutput {
outputs: BuildArtifact[];


@@ -7,14 +7,16 @@ The Bun bundler implements a set of default loaders out of the box.
> As a rule of thumb: **the bundler and the runtime both support the same set of file types out of the box.**
`.js` `.cjs` `.mjs` `.mts` `.cts` `.ts` `.tsx` `.jsx` `.toml` `.json` `.txt` `.wasm` `.node` `.html`
`.js` `.cjs` `.mjs` `.mts` `.cts` `.ts` `.tsx` `.jsx` `.css` `.json` `.jsonc` `.toml` `.yaml` `.yml` `.txt` `.wasm` `.node` `.html` `.sh`
Bun uses the file extension to determine which built-in loader should be used to parse the file. Every loader has a name, such as `js`, `tsx`, or `json`. These names are used when building plugins that extend Bun with custom loaders.
You can explicitly specify which loader to use using the `'loader'` import attribute.
You can explicitly specify which loader to use using the `'type'` import attribute.
```ts title="index.ts" icon="/icons/typescript.svg"
import my_toml from "./my_file" with { loader: "toml" };
import my_toml from "./my_file" with { type: "toml" };
// or with dynamic imports
const { default: my_toml } = await import("./my_file", { with: { type: "toml" } });
```
## Built-in loaders
@@ -85,7 +87,7 @@ If a `.json` file is passed as an entrypoint to the bundler, it will be converte
}
```
```ts Output
```js Output
export default {
name: "John Doe",
age: 35,
@@ -97,7 +99,32 @@ export default {
---
### toml
### `jsonc`
**JSON with Comments loader.** Default for `.jsonc`.
JSONC (JSON with Comments) files can be directly imported. Bun will parse them, stripping out comments and trailing commas.
```js
import config from "./config.jsonc";
console.log(config);
```
During bundling, the parsed JSONC is inlined into the bundle as a JavaScript object, identical to the `json` loader.
```js
var config = {
option: "value",
};
```
<Note>
Bun automatically uses the `jsonc` loader for `tsconfig.json`, `jsconfig.json`, `package.json`, and `bun.lock` files.
</Note>
---
### `toml`
**TOML loader.** Default for `.toml`.
@@ -131,7 +158,7 @@ age = 35
email = "johndoe@example.com"
```
```ts Output
```js Output
export default {
name: "John Doe",
age: 35,
@@ -143,7 +170,53 @@ export default {
---
### text
### `yaml`
**YAML loader.** Default for `.yaml` and `.yml`.
YAML files can be directly imported. Bun will parse them with its fast native YAML parser.
```js
import config from "./config.yaml";
console.log(config);
// via import attribute:
import data from "./data.txt" with { type: "yaml" };
```
During bundling, the parsed YAML is inlined into the bundle as a JavaScript object.
```js
var config = {
name: "my-app",
version: "1.0.0",
// ...other fields
};
```
If a `.yaml` or `.yml` file is passed as an entrypoint, it will be converted to a `.js` module that `export default`s the parsed object.
<CodeGroup>
```yaml Input
name: John Doe
age: 35
email: johndoe@example.com
```
```js Output
export default {
name: "John Doe",
age: 35,
email: "johndoe@example.com",
};
```
</CodeGroup>
---
### `text`
**Text loader.** Default for `.txt`.
@@ -173,7 +246,7 @@ If a `.txt` file is passed as an entrypoint, it will be converted to a `.js` mod
Hello, world!
```
```ts Output
```js Output
export default "Hello, world!";
```
@@ -181,7 +254,7 @@ export default "Hello, world!";
---
### napi
### `napi`
**Native addon loader.** Default for `.node`.
@@ -196,7 +269,7 @@ console.log(addon);
---
### sqlite
### `sqlite`
**SQLite loader.** Requires `with { "type": "sqlite" }` import attribute.
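For example, a minimal sketch of an import using that attribute, assuming `./mydb.sqlite` exists with a `users` table and that the default export can be queried like a `bun:sqlite` `Database`:
```ts title="index.ts" icon="/icons/typescript.svg"
// Sketch only: ./mydb.sqlite and its users table are assumed to exist.
import db from "./mydb.sqlite" with { "type": "sqlite" };

console.log(db.query("SELECT * FROM users LIMIT 1").get());
```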
@@ -226,7 +299,9 @@ Otherwise, the database to embed is copied into the `outdir` with a hashed filen
---
### html
### `html`
**HTML loader.** Default for `.html`.
The `html` loader processes HTML files and bundles any referenced assets. It will:
@@ -237,7 +312,7 @@ The `html` loader processes HTML files and bundles any referenced assets. It wil
For example, given this HTML file:
```html title="src/index.html"
```html title="src/index.html" icon="file-code"
<!DOCTYPE html>
<html>
<body>
@@ -250,7 +325,7 @@ For example, given this HTML file:
It will output a new HTML file with the bundled assets:
```html title="dist/index.html"
```html title="dist/index.html" icon="file-code"
<!DOCTYPE html>
<html>
<body>
@@ -301,7 +376,27 @@ The `html` loader behaves differently depending on how it's used:
---
### sh
### `css`
**CSS loader.** Default for `.css`.
CSS files can be directly imported. The bundler will parse and bundle CSS files, handling `@import` statements and `url()` references.
```js
import "./styles.css";
```
During bundling, all imported CSS files are bundled together into a single `.css` file in the output directory.
```css
.my-class {
background: url("./image.png");
}
```
---
### `sh`
**Bun Shell loader.** Default for `.sh` files.
@@ -313,7 +408,7 @@ bun run ./script.sh
---
### file
### `file`
**File loader.** Default for all unrecognized file types.


@@ -87,7 +87,7 @@ macro();
When shipping a library containing a macro to npm or another package registry, use the `"macro"` export condition to provide a special version of your package exclusively for the macro environment.
```json title="package.json" icon="file-code"
```json title="package.json" icon="file-json"
{
"name": "my-package",
"exports": {

File diff suppressed because it is too large.


@@ -42,7 +42,21 @@ type PluginBuilder = {
config: BuildConfig;
};
type Loader = "js" | "jsx" | "ts" | "tsx" | "css" | "json" | "toml";
type Loader =
| "js"
| "jsx"
| "ts"
| "tsx"
| "json"
| "jsonc"
| "toml"
| "yaml"
| "file"
| "napi"
| "wasm"
| "text"
| "css"
| "html";
```
## Usage


@@ -121,6 +121,7 @@
"/runtime/file-io",
"/runtime/streams",
"/runtime/binary-data",
"/runtime/archive",
"/runtime/sql",
"/runtime/sqlite",
"/runtime/s3",
@@ -188,7 +189,7 @@
{
"group": "Publishing & Analysis",
"icon": "upload",
"pages": ["/pm/cli/publish", "/pm/cli/outdated", "/pm/cli/why", "/pm/cli/audit"]
"pages": ["/pm/cli/publish", "/pm/cli/outdated", "/pm/cli/why", "/pm/cli/audit", "/pm/cli/info"]
},
{
"group": "Workspace Management",
@@ -326,6 +327,7 @@
"group": "Utilities",
"icon": "wrench",
"pages": [
"/guides/util/upgrade",
"/guides/util/detect-bun",
"/guides/util/version",
"/guides/util/hash-a-password",
@@ -354,7 +356,7 @@
"/guides/ecosystem/discordjs",
"/guides/ecosystem/docker",
"/guides/ecosystem/drizzle",
"/guides/ecosystem/edgedb",
"/guides/ecosystem/gel",
"/guides/ecosystem/elysia",
"/guides/ecosystem/express",
"/guides/ecosystem/hono",
@@ -369,6 +371,7 @@
"/guides/ecosystem/qwik",
"/guides/ecosystem/react",
"/guides/ecosystem/remix",
"/guides/ecosystem/tanstack-start",
"/guides/ecosystem/sentry",
"/guides/ecosystem/solidstart",
"/guides/ecosystem/ssr-react",
@@ -463,6 +466,7 @@
"/guides/test/update-snapshots",
"/guides/test/coverage",
"/guides/test/coverage-threshold",
"/guides/test/concurrent-test-glob",
"/guides/test/skip-tests",
"/guides/test/todo-tests",
"/guides/test/timeout",


@@ -4,13 +4,9 @@ description: Share feedback, bug reports, and feature requests
mode: center
---
import Feedback from "/snippets/cli/feedback.mdx";
Whether you've found a bug, have a performance issue, or just want to suggest an improvement, here's how you can open a helpful issue:
<Callout icon="discord">
For general questions, please join our [Discord](https://discord.com/invite/CXdq2DP29u).
</Callout>
<Callout icon="discord">For general questions, please join our [Discord](https://bun.com/discord).</Callout>
## Reporting Issues
@@ -56,9 +52,7 @@ Whether you've found a bug, have a performance issue, or just want to suggest an
<Note>
- For MacOS and Linux: copy the output of `uname -mprs`
- For Windows: copy the output of this command in the powershell console:
```powershell
"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"
```
`"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"`
</Note>
</Step>
@@ -79,7 +73,3 @@ echo "please document X" | bun feedback --email you@example.com
```
You can provide feedback as text arguments, file paths, or piped input.
---
<Feedback />


@@ -26,4 +26,4 @@ const regularArr = Array.from(uintArr);
---
See [Docs > API > Binary Data](https://bun.com/docs/api/binary-data#conversion) for complete documentation on manipulating binary data with Bun.
See [Docs > API > Binary Data](/runtime/binary-data#conversion) for complete documentation on manipulating binary data with Bun.


@@ -23,4 +23,4 @@ blob.type; // => "application/octet-stream"
---
See [Docs > API > Binary Data](https://bun.com/docs/api/binary-data#conversion) for complete documentation on manipulating binary data with Bun.
See [Docs > API > Binary Data](/runtime/binary-data#conversion) for complete documentation on manipulating binary data with Bun.


@@ -24,4 +24,4 @@ const nodeBuffer = Buffer.from(arrBuffer, 0, 16); // view first 16 bytes
---
See [Docs > API > Binary Data](https://bun.com/docs/api/binary-data#conversion) for complete documentation on manipulating binary data with Bun.
See [Docs > API > Binary Data](/runtime/binary-data#conversion) for complete documentation on manipulating binary data with Bun.


@@ -14,4 +14,4 @@ const str = decoder.decode(buf);
---
See [Docs > API > Binary Data](https://bun.com/docs/api/binary-data#conversion) for complete documentation on manipulating binary data with Bun.
See [Docs > API > Binary Data](/runtime/binary-data#conversion) for complete documentation on manipulating binary data with Bun.

Some files were not shown because too many files have changed in this diff.