The render() callback API is bottom-up (exit-only), so listItem callbacks
previously couldn't know their index within the parent list, whether the
parent was ordered, or their nesting depth — forcing consumers to do
string-sentinel workarounds and post-process the output with regex.
This plumbs through the info the parser already has:
- listItem meta: {index, depth, ordered, start, checked}
- list meta: {ordered, start, depth}
Implementation:
- Track block_type + child_index in JsCallbackRenderer.StackEntry
- Compute depth by walking stack for ul/ol ancestors in createBlockMeta
- Parent list's ordered/start read from stack[len-2]
Also adds BunMarkdownMeta.{cpp,h} with cached JSC Structures for the
high-frequency meta objects (listItem, list, th/td, link) — putDirectOffset
instead of N×put() keeps them monomorphic and essentially free (~0.7ns/obj).
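The meta shapes above could be consumed like this. This is a hypothetical callback helper, not Bun's actual render() API; it assumes `index` is zero-based within the parent list and `depth` starts at 0 for top-level lists:

```js
// Illustrative only: rebuild a list-item marker from the new
// {index, depth, ordered, start} listItem meta described above.
// Assumes index is zero-based and depth 0 means top level.
function listItemMarker(meta) {
  const marker = meta.ordered ? `${meta.start + meta.index}.` : "-";
  return "  ".repeat(meta.depth) + marker;
}

listItemMarker({ index: 2, depth: 0, ordered: true, start: 1 }); // "3."
listItemMarker({ index: 0, depth: 1, ordered: false, start: 1 }); // "  -"
```

Before this change, none of these fields were visible to the callback, so a helper like this was impossible without string sentinels.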
## What does this PR do?
Fixes encoding corruption when using Bun.file().stat() or
Bun.file().delete() with file paths containing non-ASCII characters
(e.g., German umlauts, Japanese characters, emoji).
### The Bug
When calling Bun.file().stat() or Bun.file().delete() with paths
containing non-ASCII UTF-8 characters, the path was being corrupted due
to double-encoding:
- UTF-8 bytes were being treated as Latin1 by ZigString.init()
- When converting to ZigString.Slice, the Latin1-to-UTF-8 conversion
would encode the bytes again
- Result: paths like "über.txt" became "Ã¼ber.txt" (mojibake)
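The corruption is reproducible in a few lines; this is a sketch of the encoding mistake itself, not of Bun's internals:

```js
// Start with a path containing a non-ASCII character.
const original = "über.txt"; // "ü" is UTF-8 bytes 0xC3 0xBC
const utf8Bytes = Buffer.from(original, "utf8");

// Bug step 1: the UTF-8 bytes are misread as Latin-1, so each byte
// becomes its own character: 0xC3 0xBC reads as "Ã¼".
const misread = utf8Bytes.toString("latin1");

// Bug step 2: the Latin-1 string is then re-encoded to UTF-8,
// producing a double-encoded path on disk.
misread; // "Ã¼ber.txt", the mojibake from the report
```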
### The Fix
Changed ZigString.init() to ZigString.fromUTF8() in two locations:
- src/bun.js/webcore/Blob.zig (getStat function)
- src/bun.js/webcore/blob/Store.zig (unlink function)
The fromUTF8() function marks the string as UTF-8 if it contains
non-ASCII characters, preventing the double-encoding issue.
## How did you verify your code works?
- Added comprehensive test coverage in
test/regression/issue/utf8-path-encoding.test.ts
- Tests verify Bun.file().stat() and Bun.file().delete() work correctly
with:
- German umlauts (ä, ö, ü)
- Japanese characters
- Emoji
- Mixed special characters
- Tests compare Bun results against Node.js fs module to ensure
consistency
Fixes #26647
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Fix a latent GC safety issue in `NodeVMModule` and
`NodeVMSyntheticModule` where `WriteBarrier` fields were initialized
with the regular constructor before `finishCreation()` was called on the
cell.
## Problem
The regular `WriteBarrier(vm, owner, value)` constructor calls
`vm.writeBarrier(owner, value)`, which assumes the owner cell is already
fully constructed and markable by the GC. However, in C++ member
initializer lists, `finishCreation()` has not been called yet, so the
cell is not in a valid state for the GC to process.
Two fields were affected:
- `NodeVMModule::m_moduleWrapper` (`NodeVMModule.cpp:157`)
- `NodeVMSyntheticModule::m_syntheticEvaluationSteps`
(`NodeVMSyntheticModule.h:57`)
If a GC cycle happens to run during construction (between the member
initializer list and `finishCreation()`), this could lead to a crash.
The bug is timing-dependent and difficult to reproduce, but the code is
clearly incorrect.
## Fix
Use `WriteBarrierEarlyInit` instead, which stores the pointer without
issuing a write barrier. This is the standard JSC pattern — used
extensively in upstream WebKit (`JSObject`, `Structure`, `CodeBlock`,
`JSBoundFunction`, `JSPromiseReaction`, etc.) and already used by the
adjacent `m_context` field in the same constructor.
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## What does this PR do?
The `bun_shim_impl.exe` used for `node_modules/.bin/*` on Windows was
hardcoded to build for x86_64 (nehalem), even when building bun for
Windows ARM64. This meant every package binary invocation on ARM64 ran
under x64 emulation.
This PR makes `WindowsShim.create()` use the bun build's target arch —
building a native aarch64 shim when building `bun.exe` for aarch64.
## Why is this safe?
The shim source (`src/install/windows-shim/bun_shim_impl.zig`) contains
no arch-specific code:
- Uses only NTDLL (`NtCreateFile`, `NtReadFile`, `NtClose`,
`RtlExitUserProcess`) and kernel32 (`CreateProcessW`,
`WaitForSingleObject`, etc.)
- Uses `std.os.windows.teb()` which already has a native aarch64
implementation (`mov %[ptr], x18`)
- No inline assembly, no x86-specific intrinsics
Verified the ARM64 shim compiles cleanly to a 12.5 KB PE32+ AArch64
binary.
## How did you verify your code works?
- Built the ReleaseFast shim for `aarch64-windows` locally — compiles
without errors
- CI will verify the embedded build path on Windows ARM64
Note: the `zig build windows-shim` standalone helper currently fails on
Debug builds due to a pre-existing `fmtUtf16Le` format string issue —
this is unrelated to this change (same error on x64) and does not affect
the ReleaseFast shim that gets embedded into bun.exe.
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Fixes `bun pack` and `bun publish` using a stale `version` (and
`name`) when lifecycle scripts (`prepublishOnly`, `prepack`, `prepare`)
modify `package.json` during execution.
- After lifecycle scripts run and `package.json` is re-read from disk,
now also refreshes the `package_name` and `package_version` variables
from the updated JSON.
- Previously, the version was captured once before scripts ran and never
updated, causing the tarball filename and publish registry metadata
(`dist-tags`, `versions` keys, `_id`, `_attachments`) to use the
original version instead of the script-modified one.
## Test plan
- [x] Added pack test: verifies tarball filename uses updated version
when `prepack` script modifies `package.json`
- [x] Added publish test: verifies package is published under the
updated version when `prepublishOnly` modifies it
- [x] Verified new test fails with `USE_SYSTEM_BUN=1` (unfixed bun) and
passes with `bun bd test` (fixed build)
- [x] Existing pack tests (basic, shasum, workspace) still pass
Closes #17195
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
Fixes #17294
- Empty string arguments (`""`, `''`, `${''}`) were silently dropped
instead of being passed as arguments
- This affected commands like `ssh-keygen -N ""` where the empty
passphrase argument was lost
### Root causes fixed
1. **`appendBunStr`**: empty interpolated values (`${''}`) now emit
literal `""` in the script text so the lexer sees an empty quoted string
2. **Single quote lexing**: added `break_word` calls matching double
quote behavior, so `''` produces proper `SingleQuotedText` tokens
(previously quote context was lost entirely)
3. **`isImmediatelyEscapedQuote`**: now handles `''` in addition to `""`
4. **New `quoted_empty` AST atom**: preserves empty quoted strings
through parsing into expansion
5. **`pushCurrentOut`**: no longer drops empty results when the word
contained quoted empty content
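A toy word-splitter shows why "the word contained quoted content" must be tracked separately from "the word has text". This is a sketch of the concept, not Bun's shell lexer:

```js
// Splits a command line on spaces, honoring quotes. The key detail:
// a word must be emitted if it contained a quoted region, even when
// the resulting text is empty, so `-N ""` keeps its empty argument.
function splitArgs(line) {
  const args = [];
  let current = "";
  let sawQuote = false; // the analogue of the quoted_empty atom
  let quote = null;
  for (const ch of line) {
    if (quote) {
      if (ch === quote) quote = null;
      else current += ch;
    } else if (ch === '"' || ch === "'") {
      quote = ch;
      sawQuote = true; // remember the word had quotes, even if empty
    } else if (ch === " ") {
      if (current !== "" || sawQuote) args.push(current);
      current = "";
      sawQuote = false;
    } else {
      current += ch;
    }
  }
  if (current !== "" || sawQuote) args.push(current);
  return args;
}

splitArgs('ssh-keygen -N ""'); // ["ssh-keygen", "-N", ""]
```

Dropping the `sawQuote` flag reproduces the bug: the empty argument vanishes.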
## Test plan
- [x] New regression test `test/regression/issue/17294.test.ts` with 6
test cases covering interpolation, double quotes, single quotes,
multiple empty strings, and mixed args
- [x] Test passes with `bun bd test` and fails with `USE_SYSTEM_BUN=1`
- [x] Updated `test/js/bun/shell/lex.test.ts` expectation for
single-quoted text (now correctly tagged as `SingleQuotedText` instead
of `Text`)
- [x] All existing shell tests pass (lex, parse, brace, exec, bunshell)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fix 100% CPU spin in the Linux inotify file watcher caused by
`read_ptr` never being reset to `null` after consuming remaining events
from an overflowed buffer
## Root Cause
In `INotifyWatcher.read()`, when a single `read()` syscall returns more
than 128 inotify events, `read_ptr` is set to save the buffer position
so the remaining events can be returned on the next call. However, after
those remaining events were fully consumed, `read_ptr` was **never reset
to `null`**. This caused every subsequent call to `read()` to:
1. Enter the `if (this.read_ptr)` branch, skipping the actual `read()`
syscall
2. Re-parse the same stale byte buffer from the saved offset
3. Return the same events repeatedly in an infinite hot loop at 100% CPU
The fix is a single line: `this.read_ptr = null;` after the remaining
events are consumed.
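The saved-position pattern and the one-line fix, sketched in JavaScript (Bun's real code is Zig operating on raw inotify bytes):

```js
// A reader that returns at most `limit` events per call, saving its
// position when one batch of input yields more than that.
class BatchedReader {
  constructor(source, limit) {
    this.source = source; // function returning the next array of events
    this.limit = limit;
    this.pending = null;  // the analogue of read_ptr
  }
  read() {
    if (this.pending) {
      const out = this.pending.slice(0, this.limit);
      this.pending = this.pending.slice(this.limit);
      // THE FIX: without resetting to null here, every later read()
      // re-serves the same stale slice forever (the 100% CPU spin).
      if (this.pending.length === 0) this.pending = null;
      return out;
    }
    const events = this.source();
    if (events.length > this.limit) {
      this.pending = events.slice(this.limit);
      return events.slice(0, this.limit);
    }
    return events;
  }
}
```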
## Test plan
- [x] Added regression test `test/regression/issue/27667.test.ts` that
exercises the watcher under high event load
- [x] Test passes with `bun bd test test/regression/issue/27667.test.ts`
Note: The exact overflow condition (>128 events in a single `read()`)
depends on kernel timing and is hard to trigger deterministically in
tests, but the bug is clear from code inspection — `read_ptr` is set on
line 206 but never cleared anywhere.
Closes #27667
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
Fixes two error handling bugs in `ReadFileUV` (identified in [PR #26633
review](https://github.com/oven-sh/bun/pull/26633)):
- **`onRead()` fd leak**: The error path at line 772 called
`this.finalize()` directly instead of `this.onFinish()`, bypassing
`doClose()` and leaking the open file descriptor. Every other error path
correctly goes through `onFinish()` → `doClose()` → `finalize()`.
- **`queueRead()` use-after-free**: The OOM catch block for non-regular
file buffer expansion called `this.onFinish()` but was missing a
`return`, causing execution to fall through to `this.remainingBuffer()`
on freed memory.
## Test plan
- [x] `bun run zig:check-windows` passes (all 25/25 build steps
succeeded)
- These are Windows-only code paths (`ReadFileUV` uses libuv), so they
only affect Windows builds
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary
- Fix a heap read-after-free vulnerability in `Buffer.indexOf`,
`Buffer.lastIndexOf`, and `Buffer.includes` where the raw `typedVector`
pointer was cached before calling `toNumber()` on the `byteOffset`
argument. A user-supplied `valueOf()` callback could call
`ArrayBuffer.prototype.transfer()` to detach the underlying
`ArrayBuffer`, freeing the original memory, causing these methods to
scan freed heap data.
- The fix defers fetching the `typedVector` pointer until after
`toNumber()` completes, adds a detachment check, and throws a
`TypeError` if the buffer was detached.
## Test plan
- [x] New test file `test/js/node/buffer-indexOf-detach.test.ts` with 7
test cases:
- `indexOf` throws `TypeError` when buffer detached via `valueOf`
- `lastIndexOf` throws `TypeError` when buffer detached via `valueOf`
- `includes` throws `TypeError` when buffer detached via `valueOf`
- `indexOf` with string value throws `TypeError` when buffer detached
- `indexOf` with Buffer value throws `TypeError` when buffer detached
- Normal `indexOf`/`lastIndexOf`/`includes` functionality still works
- `indexOf` with non-detaching `valueOf` still works correctly
- [x] All 7 tests pass with `bun bd test`
- [x] The 5 detachment tests fail with `USE_SYSTEM_BUN=1` (confirming
they test the fix)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
Closes #24374.
- **`getPeerCertificate()`** now returns `{}` when a TLS handle exists
but no peer certificate is available (matching Node.js behavior).
Previously it returned `undefined`, which caused `checkServerIdentity()`
to crash during TLS handshakes.
- **Added null guard** in both TLS handshake code paths in `net.ts`
before calling `checkServerIdentity()`, as a defensive measure against
`null` (returned when no handle exists).
- This fixes the `TypeError: Cannot destructure property 'subject' from
null or undefined value` crash reported when connecting to MongoDB Atlas
clusters.
## Test plan
- [x] New regression test in `test/regression/issue/24374.test.ts`
verifying:
- `getPeerCertificate()` returns `{}` (not `undefined`) when no client
cert is presented
- `getPeerCertificate()` returns `null` when no handle is available
- `checkServerIdentity()` does not crash with an empty cert object
- TLS handshake with `checkServerIdentity` callback works without
crashing
- [x] Verified test fails with system Bun (2 of 4 tests fail) and passes
with debug build (4/4 pass)
- [x] Existing TLS tests (`node-tls-cert.test.ts`) continue to pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fixed heap buffer overflow in `path.resolve`, `path.relative`, and
`path.toNamespacedPath` that occurred when the CWD length + path
argument lengths exceeded `PATH_SIZE` (4096 on Linux, 1024 on macOS)
- The work buffers were sized for user-provided paths but didn't account
for the CWD that `resolvePosixT`/`resolveWindowsT` prepends when all
paths are relative
- This caused non-deterministic segfaults (often in a later unrelated
allocation like `path.normalize`) because path string data overwrote
mimalloc's internal heap metadata
Crash report:
https://bun.report/1.3.10/M_189a8aa9mgEuhogC_23m3oBmvsvoBm/uvtB________________A23231jqD9h19td/view
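The sizing argument reduces to a small capacity formula (a sketch; the real work buffers live in Bun's Zig path code):

```js
// The worst-case output of resolve() with all-relative arguments is
// the CWD plus every argument joined by separators, so a fixed work
// buffer must be sized for cwd + args, not for the args alone.
function requiredCapacity(cwd, parts) {
  return cwd.length + parts.reduce((total, p) => total + p.length + 1, 0);
}

// With a ~3000-byte CWD and ~2000 bytes of arguments, the old
// 4096-byte buffer (PATH_SIZE on Linux) is exceeded.
requiredCapacity("x".repeat(3000), ["a".repeat(2000)]) > 4096; // true
```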
## Test plan
- [x] New test: `test/js/node/path/resolve-long-cwd.test.ts` — creates a
deep directory (~3000 bytes) and calls `resolve`/`relative` with long
relative paths, verifying correct output
- [x] Existing path tests pass (`bun bd test test/js/node/path/` — 119
pass, 0 fail)
- [x] Confirmed the bug triggers a bounds-check panic in debug builds
before the fix: `panic: index out of bounds: index 5041, len 4098`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary
- Implement RFC 6455 §4.2.2 compliant validation of the
`Sec-WebSocket-Accept` header during WebSocket upgrade handshake
- Previously the client checked that the header was present and
non-empty but did not verify the value matched
`base64(SHA-1(Sec-WebSocket-Key + GUID))`
- This hardens the handshake against intermediaries that may incorrectly
serve stale or mismatched 101 responses, ensuring the client only
proceeds when the server has actually acknowledged this specific upgrade
request
- The `mismatch_websocket_accept_header` error code was already defined
but never emitted — it is now used
## Test plan
- [x] New test: `websocket-accept-header-validation.test.ts`
- Verifies incorrect `Sec-WebSocket-Accept` value causes connection
failure
- Verifies correct `Sec-WebSocket-Accept` value allows successful
connection
- [x] Test validated with `USE_SYSTEM_BUN=1` (fails on rejection test,
confirming the fix is necessary)
- [x] All existing WebSocket tests continue to pass (short-read,
subprotocol, close-fragmented, pong-fragmented, permessage-deflate)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fixes `import Bun from 'bun'` being `undefined` when bundled with
`--bytecode` flag
- The CJS lowering pass was incorrectly adding `.default` property
access to `globalThis.Bun`, which doesn't have a `default` property
- Also skips unnecessary `__toESM` wrapper inclusion for built-in `bun`
module imports
## Root Cause
When `--bytecode` is used, the bundler wraps output in CJS format. For
external modules, the linker sets up a `namespace_alias` with `alias:
"default"` so the printer generates property access like
`require_module().default`. The printer correctly replaces the `bun`
import source with `globalThis.Bun`, but was still appending `.default`
— producing `import_bun.default` instead of just `import_bun`.
## Fix
Three targeted changes:
1. **`scanImportsAndExports.zig`**: Skip CJS interop wrapping
(`__toESM`) for `bun` tagged imports since `globalThis.Bun` doesn't need
it
2. **`LinkerContext.zig`**: Pass `import_record_index` through to
`namespace_alias` so the printer can identify the import source
3. **`js_printer.zig`**: Skip the `.default` property access for `bun`
tagged import records, while preserving named property accesses (e.g.
`.serve`)
## Test plan
- [x] `bun bd test test/regression/issue/20670.test.ts` — 4 pass
(default, aliased, combined, namespace imports)
- [x] Tests fail with `USE_SYSTEM_BUN=1` confirming they test the actual
fix
- [x] `bun bd test test/bundler/bundler_bun.test.ts` — 10 pass
- [x] `bun bd test test/bundler/bundler_cjs.test.ts` — 23 pass
- [x] `bun bd test test/bundler/bundler_edgecase.test.ts` — 99 pass
Closes #20670
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fixes `.npmrc` auth token matching to consider both host **and**
pathname when matching tokens to registries
- Previously, when multiple auth tokens existed for the same host but
different paths, the last token would incorrectly be used for all
registries
- Now, `//somehost.com/org1/npm/registry/:_authToken=jwt1` correctly
only applies to registry URLs with path `/org1/npm/registry/`
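The matching rule can be sketched as host equality plus longest path-prefix match (illustrative JavaScript, not Bun's actual implementation):

```js
// tokens: map from .npmrc-style keys ("//host/path/") to auth tokens.
// Pick the entry whose host matches and whose path is the longest
// prefix of the registry URL's path.
function matchToken(registryUrl, tokens) {
  const url = new URL(registryUrl);
  let best = null;
  let bestLen = -1;
  for (const [key, token] of Object.entries(tokens)) {
    const keyUrl = new URL("https:" + key);
    if (keyUrl.host !== url.host) continue;
    if (!url.pathname.startsWith(keyUrl.pathname)) continue;
    if (keyUrl.pathname.length > bestLen) {
      best = token;
      bestLen = keyUrl.pathname.length;
    }
  }
  return best;
}

matchToken("https://somehost.com/org1/npm/registry/", {
  "//somehost.com/org1/npm/registry/": "jwt1",
  "//somehost.com/org2/npm/registry/": "jwt2",
}); // "jwt1"
```

Matching on host alone, as before the fix, would let whichever entry was processed last win for both registries.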
## Test plan
- [x] Added unit tests that verify correct token is matched for
same-host different-path scenarios
- [x] Verified tests fail with system Bun (pre-fix) and pass with debug
build (post-fix)
- [x] Ran existing npmrc tests to ensure no regression
Fixes #26350
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Add specific error handling for `MissingPackageJSON` errors from
`file:` dependency resolution
- Show the actual missing path instead of generic "package.json not
found" message
Fixes #26338
## What was the problem?
When a `file:` dependency's target path doesn't exist (e.g., due to a
stale lockfile referencing an old path), Bun was showing:
```
error: Bun could not find a package.json file to install from
note: Run "bun init" to initialize a project
```
This error was misleading because:
1. The root `package.json` exists
2. The actual issue is a missing `file:` dependency path
3. Running `bun init` won't help
## What is the fix?
Now when a `file:` dependency path doesn't exist, Bun shows:
```
error: Could not find package.json for "file:../packages/@scope/dep" dependency "dep"
```
This clearly identifies:
- The dependency that failed
- The exact path that was not found
## Test plan
- [x] Added regression test `test/regression/issue/26338.test.ts`
- [x] Verified test fails with system Bun (before fix)
- [x] Verified test passes with the fix
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fixes `bun bun.lockb | head` showing an internal error message instead
of exiting silently
## Test plan
- Added regression test in `test/regression/issue/05828.test.ts`
- Verified the test fails with the current release version
- Verified the test passes with the fix
Fixes #5828
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Fixes the bug where pressing 'A' to select all packages in `bun update
-i` would show "Selected X packages to update" but then immediately show
"No packages selected for update"
- The issue was that packages where `current_version == update_version`
(already at the highest version within their semver constraint) but
`current_version != latest_version` were silently skipped
- The fix applies the same logic as the spacebar handler: when selecting
all packages with 'A', automatically set `use_latest=true` for packages
that need it
## Test plan
- [x] Verified test fails with system Bun (demonstrates the bug exists)
- [x] Verified test passes with fixed debug build
- [x] Existing `update_interactive_install.test.ts` tests pass
- [x] New regression test added in `test/regression/issue/26657.test.ts`
Fixes #26657
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Adds `--reporter=dots` to `bun test` invocations in
`scripts/runner.node.mjs` to reduce CI log output volume
- The dots reporter prints a single character per test (`.` for pass,
`X` for fail) instead of full test names
- When JUnit reporting is also enabled, both reporters stack correctly —
dots goes to stdout, JUnit writes to file
## Test plan
- [ ] Verify CI builds run with dots reporter and produce smaller logs
- [ ] Verify JUnit reports are still generated correctly when `--junit`
is enabled
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## What does this PR do?
Fixes#25658 — `structuredClone()` throwing `DataCloneError` on objects
created via `napi_create_object`, which works in Node.js.
## Root cause
Bun's `napi_create_object` creates a `NapiPrototype` (a
`JSDestructibleObject` subclass with an inline `napiRef` field for fast
`napi_wrap`/`napi_unwrap`), not a plain `JSFinalObject`.
WebKit's `SerializedScriptValue` serializer checks:
```cpp
if (inObject->classInfo() != JSFinalObject::info())
return SerializationReturnCode::DataCloneError;
```
`NapiPrototype` has its own `ClassInfo`, so the check fails.
Node.js uses `v8::Object::New()` which creates a truly plain V8 object —
V8's serializer clones it fine.
## Fix
Add `NapiPrototype::info()` to the serializer's allowlist. The inline
`napiRef` C++ field is invisible to property enumeration (serializer
uses `getOwnPropertyNames` with `PrivateSymbolMode::Exclude`), so cloned
objects correctly contain only JS properties — matching Node.js, where
`napi_unwrap` on a clone also fails.
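The serializer's plain-object behavior is observable from JS: `structuredClone` accepts objects with custom prototypes but flattens them to plain objects, keeping only own data properties. That is exactly the category `napi_create_object` results now fall into:

```js
class Widget {
  constructor() {
    this.size = 3;
  }
}

const clone = structuredClone(new Widget());

clone.size;              // 3 (own data properties survive)
clone instanceof Widget; // false (the custom prototype does not)
Object.getPrototypeOf(clone) === Object.prototype; // true
```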
## How did you verify your code works?
- Added `test_napi_create_object_structured_clone` using
`checkSameOutput` which runs on both Node and Bun and asserts identical
output
- Verified test **fails** with `USE_SYSTEM_BUN=1` (throws
`DataCloneError`)
- Verified test **passes** with `bun bd test`
- Verified existing `test/js/web/workers/structured-clone.test.ts` still
passes (118 tests)
## Summary
- `setLocalWindowSize()` updated the internal connection window size but
never sent a `WINDOW_UPDATE` frame for stream 0 (connection level) to
the peer
- Per RFC 9113 Section 6.9, `INITIAL_WINDOW_SIZE` only applies to
stream-level windows; the connection-level window must be updated
explicitly via `WINDOW_UPDATE`
- This caused HTTP/2 streams to stall after receiving 65,535 bytes (the
default connection window), even when `setLocalWindowSize()` was called
with a larger value
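What the fix has to put on the wire is a 13-byte frame; building one for stream 0 is mechanical. A sketch following RFC 9113 Section 6.9:

```js
// HTTP/2 WINDOW_UPDATE: 9-byte frame header (24-bit length = 4,
// type = 0x8, flags = 0, 32-bit stream id) followed by a 31-bit
// window size increment.
function windowUpdateFrame(streamId, increment) {
  const frame = Buffer.alloc(13);
  frame.writeUIntBE(4, 0, 3);       // payload length: 4 bytes
  frame.writeUInt8(0x8, 3);         // frame type: WINDOW_UPDATE
  frame.writeUInt8(0, 4);           // flags: none defined for this type
  frame.writeUInt32BE(streamId, 5); // stream 0 = connection-level window
  frame.writeUInt32BE(increment, 9);
  return frame;
}

// Grow the connection-level window by 1 MiB:
windowUpdateFrame(0, 1024 * 1024);
```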
## Test plan
- [x] Added regression test `test/regression/issue/26915.test.ts` that
streams 256 KB over HTTP/2 with an enlarged window
- [x] Verified test **fails** with system bun (times out at 65,535
bytes)
- [x] Verified test **passes** with debug build containing the fix
Closes #26915
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fix `checkExecSyncError` creating a self-referencing cycle where
`err.error === err`, which causes `JSON.stringify(err)` to throw
- After `ObjectAssign(err, ret)` copies all properties from `ret` onto
`err` (which IS `ret.error`), delete `err.error` to break the cycle
- Add regression tests for both `execFileSync` and `execSync`
Closes #26844
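The cycle and the fix are easy to reproduce in plain JavaScript (a sketch of the pattern, not the exact bun source):

```js
// checkExecSyncError builds an Error, then copies the spawnSync result
// onto it. Since ret.error is that very Error, the copy creates
// err.error === err.
const err = new Error("Command failed");
const ret = { status: 1, error: err };
Object.assign(err, ret);

err.error === err; // true: a self-reference
let threw = false;
try {
  JSON.stringify(err); // TypeError: circular structure
} catch {
  threw = true;
}

// The fix: break the cycle after copying.
delete err.error;
JSON.stringify(err); // '{"status":1}' (Error's own fields are non-enumerable)
```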
## Test plan
- [x] `USE_SYSTEM_BUN=1 bun test test/regression/issue/26844.test.ts`
fails (confirms bug exists in system bun)
- [x] `bun bd test test/regression/issue/26844.test.ts` passes (confirms
fix works)
- [ ] CI passes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fixes `bun run --filter` and `bun run --workspaces` failing when
`NODE` env var points to a non-existent file
- The `getNodePath` function now validates that the `NODE` or
`npm_node_execpath` env var points to an executable file before using it
- If the env var path doesn't exist, falls back to searching PATH for
node, then creates bun's node symlink if no system node is found
## Test plan
- [x] Added regression test for issue #26207
- [x] Verified existing workspaces and filter tests still pass
- [x] Verified `bun run --workspaces` works when NODE env is invalid
- [x] Verified `bun run --filter` works when NODE env is invalid
- [x] Verified scripts with `#!/usr/bin/env node` shebang work correctly
Fixes #26207
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Fixed `ReadableStream` with `type: "direct"` incorrectly calling the
user's `cancel` callback on normal stream completion
- The `close` callback in `readDirectStream` was unconditionally
invoking `underlyingSource.cancel(reason)` whenever the stream was
finalized, even on successful completion where `reason` is `undefined`
- Now the `cancel` callback is only invoked when there is a truthy
`reason` (an actual error), matching expected ReadableStream behavior
Closes #17175
## Test plan
- [x] Added regression test `test/regression/issue/17175.test.ts` with 3
test cases:
- Direct stream cancel not called on normal completion (single write +
flush)
- Direct stream cancel not called with multiple writes
- Direct stream cancel not called with explicit `close()`
- [x] Verified all 3 tests fail with system Bun (`USE_SYSTEM_BUN=1`) and
pass with debug build
- [x] Verified existing `serve-direct-readable-stream.test.ts` still
passes
- [x] Verified existing direct stream tests in `serve.test.ts` still
pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- `trimUnusedImports` was ignored by `Bun.Transpiler.scanImports()` and
`Bun.Transpiler.scan()`, only working for `transformSync()`
- The `namedImportsToJS` helper now checks the `is_unused` flag on
import records when `trimUnusedImports` is enabled
- Also fixes a pre-existing array sizing bug where skipped records left
holes/undefined entries in the result array
Closes #13251
## Test plan
- [x] Added regression test `test/regression/issue/13251.test.ts`
covering:
- `scanImports()` with `trimUnusedImports: true` — unused named,
default, and namespace imports are trimmed
- `scan()` with `trimUnusedImports: true` — unused imports are trimmed
from the result
- Used imports, bare/side-effect imports, and type-only imports behave
correctly
- `trimUnusedImports: false` preserves all imports (no regression)
- [x] Test fails with system Bun (`USE_SYSTEM_BUN=1`) and passes with
debug build (`bun bd test`)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix `bun test` object diffs silently dropping properties with empty
string keys (`""`)
- Fix `console.log` also dropping empty string keys when formatting
objects
- `Identifier::isEmpty()` was returning `true` for both null identifiers
and empty string identifiers, causing `forEachPropertyOrdered` and
`forEachPropertyImpl` to skip `""` keys entirely
## Root Cause
In `src/bun.js/bindings/bindings.cpp`, two property iteration functions
used `property.isEmpty()` to skip invalid identifiers. However,
`WTF::String::isEmpty()` returns `true` for zero-length strings (like
`""`), not just null strings. This meant legitimate empty string
property keys were silently dropped during object formatting.
The fix replaces `property.isEmpty()` with `property.isNull()` to only
skip null identifiers, and removes a redundant `key.len == 0` check.
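From JavaScript's point of view, `""` is an ordinary property key, which is why dropping it was observable in both diffs and `console.log`:

```js
// An empty string is a valid, distinct property key.
const obj = { "": "empty-key value", a: 1 };

Object.keys(obj); // ["", "a"]
"" in obj;        // true
obj[""];          // "empty-key value"

// Before the fix, Bun's property iteration treated the "" key like a
// null (invalid) identifier and skipped it while formatting.
```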
## Test plan
- [x] Added regression test `test/regression/issue/18028.test.ts` with 6
test cases
- [x] Verified test passes with `bun bd test`
- [x] Verified subprocess diff test fails with `USE_SYSTEM_BUN=1`
(unfixed bun)
Closes #18028
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Remove unconditional `SO_REUSEADDR` from `bsd_create_udp_socket()` in
uSockets, which was applied to all UDP sockets when `port != 0`
- This makes `SO_REUSEADDR` correctly gated behind the `reuseAddr: true`
option in `dgram.createSocket()`, matching Node.js behavior
- Without this fix, multiple processes could silently bind to the same
UDP port and `addMembership()` would never throw `EADDRINUSE`
## Test plan
- [x] New regression test in `test/regression/issue/24157.test.ts`
verifies:
- Second process gets `EADDRINUSE` when binding to an in-use UDP port
without `reuseAddr`
- Both processes can bind and `addMembership()` succeeds with
`reuseAddr: true`
- [x] Test fails with system bun (`USE_SYSTEM_BUN=1`), passes with debug
build
- [x] Manual verification with the exact reproduction from the issue
Closes #24157
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fixed `downloadStream()` in `src/s3/client.zig` to correctly construct
a Range header when `offset == 0` and a `size` is specified
- The same bug was previously fixed for `downloadSlice()` (used by
`.text()`/`.arrayBuffer()`) in PR #16400, but the fix was never applied
to `downloadStream()`
- Without this fix, `s3file.slice(0, N).stream()` ignores the slice
range and downloads the entire file
## Root Cause
In the `downloadStream()` function, when building the HTTP Range header,
there was an early return `if (offset == 0) break :brk null` inside the
`if (size)` block. This caused the Range header to be omitted when
offset was 0, even when a size was specified — meaning `.slice(0,
1070).stream()` would download the entire file.
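The buggy versus fixed control flow, reduced to a helper (a JavaScript sketch; the real code is Zig in `src/s3/client.zig`):

```js
// Build an HTTP Range header value for a slice of an object.
// The bug: bailing out whenever offset === 0, even when a size was
// given, so .slice(0, N).stream() downloaded the whole file.
function rangeHeader(offset, size) {
  if (size !== undefined) {
    // fixed: offset 0 with a size still needs "bytes=0-(size-1)"
    return `bytes=${offset}-${offset + size - 1}`;
  }
  if (offset > 0) return `bytes=${offset}-`;
  return null; // whole object: no Range header needed
}

rangeHeader(0, 1070);      // "bytes=0-1069"
rangeHeader(100, 50);      // "bytes=100-149"
rangeHeader(0, undefined); // null
```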
## Test plan
- [x] Added regression test at `test/regression/issue/27272.test.ts`
- [x] Added `.slice().stream()` tests (offset 0 and non-zero offset) to
`test/js/bun/s3/s3.test.ts`
- [x] Debug build compiles successfully
- [x] Existing S3 tests still pass
- [ ] CI S3 tests (requires credentials)
Closes #27272
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fixed builtin `rm` returning exit code 0 instead of 1 when a file
doesn't exist and `.quiet()` or `.text()` is used
- The `next()` function had a hardcoded `done(0)` that ignored the
stored exit code, only affecting the quiet/pipe code path
- Also fixed `onShellRmTaskDone` to use POSIX-conventional exit code 1
instead of raw errno values
Closes #18161
## Test plan
- [x] New regression test `test/regression/issue/18161.test.ts`
covering:
- `.quiet()` throws on `rm` failure
- `.nothrow().quiet()` returns non-zero exit code
- `.text()` throws on `rm` failure
- `.quiet()` returns 0 on successful `rm`
- Exit code matches between quiet and non-quiet modes
- [x] Verified test fails with `USE_SYSTEM_BUN=1` (2 of 5 tests fail)
- [x] Verified test passes with `bun bd test`
- [x] Existing `throw.test.ts` tests pass
- [x] Existing `bunshell.test.ts` tests have no new failures (4
pre-existing quiet subprocess timeout failures)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Adds the missing `contentEncoding` property to the `S3Options`
TypeScript type definition in `packages/bun-types/s3.d.ts`
- The runtime support for `contentEncoding` was added in PR #26149 but
the type definitions were not updated, causing TypeScript errors and
missing IDE autocompletion
Closes #27328
## Test plan
- [x] Verify `contentEncoding` appears in `S3Options` interface
alongside `contentDisposition`
- [x] Existing runtime tests in `test/js/bun/s3/s3.test.ts` already
cover the runtime behavior
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- A failed `crypto.createPrivateKey()` call (e.g. on an encrypted PEM
without a passphrase) left stale errors on the OpenSSL error queue,
causing subsequent unrelated `createPrivateKey()` calls to fail with
"Passphrase required for encrypted key"
- Added `ClearErrorOnReturn` at the top of `TryParsePrivateKey()` in
`ncrypto.cpp` to ensure the OpenSSL error queue is clean on both entry
and exit
- This is consistent with how other functions like
`TryParsePublicKeyInner()` and `writePrivateKey()` already handle the
error queue
Closes #27445
## Test plan
- [x] Added regression test `test/regression/issue/27445.test.ts` that
reproduces the exact scenario from the issue
- [x] Verified test fails with system bun (`USE_SYSTEM_BUN=1`)
- [x] Verified test passes with debug build (`bun bd test`)
- [x] Ran full `crypto.key-objects.test.ts` suite (83 pass, 0 fail)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fix CRLF injection vulnerability in
`ServerResponse.prototype.writeEarlyHints` where non-`link` header keys
and values were concatenated into the raw HTTP 103 response without
validation
- Add `validateHeaderName` and `validateHeaderValue` checks (same ones
used by `setHeader`) before writing headers to the socket
- Add tests verifying that CRLF injection in both header names and
values is rejected, and that valid headers still work
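As a sanity check, the validators this fix reuses are public `node:http` APIs, and they reject CR/LF in both header names and values:

```typescript
import { validateHeaderName, validateHeaderValue } from "node:http";

// A header name containing CRLF must be rejected (ERR_INVALID_HTTP_TOKEN)
let nameThrew = false;
try {
  validateHeaderName("X-Evil\r\nInjected: 1");
} catch {
  nameThrew = true;
}

// A header value containing CRLF must be rejected (ERR_INVALID_CHAR)
let valueThrew = false;
try {
  validateHeaderValue("X-Ok", "a\r\nSet-Cookie: pwn");
} catch {
  valueThrew = true;
}
```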
## Test plan
- [x] `bun bd test test/js/node/http/early-hints-crlf-injection.test.ts`
— 3/3 pass
- [x] `USE_SYSTEM_BUN=1 bun test
test/js/node/http/early-hints-crlf-injection.test.ts` — 2/3 fail
(confirms test catches the vulnerability)
- [x] Existing `test-http-early-hints-invalid-argument.js` still passes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix incorrect description of bcrypt cost parameter in hashing docs:
`log10` → `log2`
- The bcrypt cost factor is a power-of-2 exponent (cost=10 means 2^10 =
1,024 rounds), confirmed by `src/bun.js/api/crypto/PasswordObject.zig`
which uses the value as `rounds_log` with valid range 4–31
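For instance, the round count doubles with each cost increment:

```typescript
// bcrypt's cost is a log2 work factor, not log10: iterations = 2^cost
const iterations = (cost: number) => 2 ** cost;
// cost=10 → 1,024 rounds; the valid range 4–31 spans 16 to 2,147,483,648 rounds
```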
## Test plan
- Documentation-only change, no code behavior affected
Fixes #27474
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Fix signed `char` comparison bug in uWS `HttpParser.h` that caused
bytes >= 0x80 to be stripped from HTTP header values on platforms where
`char` is signed (x86_64 Linux, x86_64 macOS, ARM macOS)
- Replace `< 33` comparison with explicit checks for SP and HTAB per RFC
9110 Section 5.5
- This also fixes a potential request smuggling vector where
`Transfer-Encoding: \xffchunked\xff` would be interpreted as
`Transfer-Encoding: chunked`
Closes #8893
## Root Cause
In `packages/bun-uws/src/HttpParser.h`, the `getHeaders` function
trimmed whitespace from header values using:
```cpp
while (headers->value.back() < 33) { ... }
```
`std::string_view::back()` returns `char`, which is **signed** on
x86_64. Bytes 0x80-0xFF are negative values (-128 to -1), all less than
33, so they were incorrectly stripped as whitespace.
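The effect is easy to reproduce with a TypeScript stand-in for the signed-`char` comparison:

```typescript
// Reinterpret a byte as a signed 8-bit value, like `char` on x86_64
const asSignedChar = (b: number) => (b << 24) >> 24;

// Buggy check: bytes 0x80-0xFF become negative, so they compare < 33
const buggyIsWs = (b: number) => asSignedChar(b) < 33;

// Fixed check: only SP and HTAB count as OWS per RFC 9110 §5.5
const fixedIsWs = (b: number) => b === 0x20 || b === 0x09;
```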
## Fix
Replace the numeric comparison with explicit OWS character checks:
```cpp
while (headers->value.back() == ' ' || headers->value.back() == '\t') { ... }
```
This matches RFC 9110 Section 5.5 which defines OWS (Optional White
Space) as only SP and HTAB.
## Test plan
- [x] Added regression test `test/regression/issue/08893.test.ts` that
sends raw HTTP requests with 0xFF bytes in header values and verifies
they are preserved
- [x] Added test that SP/HTAB trimming still works correctly
- [x] `bun bd test test/regression/issue/08893.test.ts` passes (2/2
tests)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fixed CSS bundler incorrectly mapping all four logical border-radius
properties (`border-start-start-radius`, `border-start-end-radius`,
`border-end-start-radius`, `border-end-end-radius`) to
`border-top-left-radius`/`border-top-right-radius` when values contain
`var()` references
- The `unparsed` branch in `BorderRadiusHandler.logicalProperty()`
hardcoded property IDs instead of using the `ltr`/`rtl` comptime
parameters that are already correctly passed to the function
Closes #27458
## Test plan
- [x] Added regression test in `test/regression/issue/27458.test.ts`
- [x] Test verifies all four distinct physical properties are present in
output
- [x] Test verifies each physical property appears exactly once per LTR
block
- [x] Test fails on system bun (`USE_SYSTEM_BUN=1`), passes on debug
build
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- When `bun build --compile` bundles HTML with JS/CSS assets, the client
transpiler's `public_path` was set to `""` (empty), which caused
`cheapPrefixNormalizer` to produce relative URLs like `./chunk-abc.js`
- These relative URLs break when HTML is served from nested routes (e.g.
`/foo/`) because the browser resolves them relative to the current path
(producing `/foo/chunk-abc.js` instead of `/chunk-abc.js`)
- Changed the client transpiler's `public_path` from `""` to `"/"` so
assets use absolute root-relative URLs like `/chunk-abc.js`
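The browser's resolution behavior can be seen directly with the WHATWG `URL` API:

```typescript
// Relative URLs resolve against the directory of the current page URL
const rel = new URL("./chunk-abc.js", "https://example.com/foo/").pathname;
// → "/foo/chunk-abc.js" — wrong when the chunk is served from the root

// Root-relative URLs always resolve from the origin root
const abs = new URL("/chunk-abc.js", "https://example.com/foo/").pathname;
// → "/chunk-abc.js" — correct regardless of the serving path
```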
Closes #27465
## Test plan
- [ ] Regression test in `test/regression/issue/27465.test.ts` verifies
compiled HTML serves assets with absolute URLs from nested routes
- [ ] Existing `test/bundler/bundler_html.test.ts` tests pass (21/21)
- [ ] Existing `test/bundler/html-import-manifest.test.ts` tests pass
(3/3)
- [ ] Regular `bun build` (without `--compile`) still uses relative `./`
paths as expected for static file serving
- [ ] Dev mode (`bun run`) continues to use `/_bun/client/` absolute
paths
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix `PathLike.fromBunString` to validate path length using
`str.utf8ByteLength()` instead of `str.length()` (UTF-16 code units),
since paths are stored as UTF-8 in a fixed-size `PathBuffer`
- Add defensive bounds check in `sliceZWithForceCopy` before `@memcpy`
to guard against oversized slices
- Add tests covering multi-byte UTF-8 paths (CJK, accented, emoji
characters) that exceed the buffer capacity
## Background
`PathLike.fromBunString` validated path length using `str.length()`,
which returns UTF-16 code units. However, the path is ultimately stored
as UTF-8 in a fixed-size `PathBuffer` (`[MAX_PATH_BYTES]u8`, 4096 bytes
on Linux). Multi-byte UTF-8 characters (CJK, accented, emoji) use 2-4
bytes per character, so a string that passes the UTF-16 length check
could exceed the buffer capacity after UTF-8 encoding. For example, 2000
CJK characters (U+4E00) = 2000 UTF-16 code units (passes the 4096 check)
but 6000 UTF-8 bytes (overflows the buffer).
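The mismatch is easy to demonstrate:

```typescript
// 2000 CJK chars pass a UTF-16 length check but overflow a 4096-byte UTF-8 buffer
const path = "\u4E00".repeat(2000); // U+4E00 encodes to 3 bytes in UTF-8

const utf16Units = path.length;                          // what str.length() measured
const utf8Bytes = new TextEncoder().encode(path).length; // what the PathBuffer stores
```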
## Test plan
- [x] `bun bd test test/js/node/fs/fs-path-length.test.ts` — 8/8 tests
pass
- [x] `USE_SYSTEM_BUN=1 bun test test/js/node/fs/fs-path-length.test.ts`
— crashes with segfault (confirms the issue exists pre-fix)
- [x] Tests cover sync APIs (openSync, readFileSync, statSync,
realpathSync), async APIs (promises.readFile, promises.stat), 2-byte
(é), 3-byte (一), and 4-byte (😀) UTF-8 characters
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- **SetupBuildkite.cmake**: The Buildkite API returns build JSON that
includes commit messages. Semicolons in commit messages (e.g. `fix:
handle edge cases; add tests`) cause CMake's unquoted `string(JSON ...
GET ${VAR} ...)` calls to split the JSON at every `;` (CMake's list
separator), resulting in garbled arguments and `FATAL_ERROR`. Fix:
escape semicolons before parsing and quote all JSON variable expansions.
- **Globals.cmake**: The `optionx()` macro reads environment variables
with `$ENV{...}` without quoting, so any env var containing a semicolon
would be split into a CMake list. Fix: quote the expansion.
- **Options.cmake / BuildBun.cmake**: Quote `REVISION`, `VERSION`,
`CODEGEN_PATH`, and other variables passed as `-D` flags to the Zig
build, preventing semicolons or special characters from splitting them.
- **utils.mjs `toYaml()`**: The YAML serializer only quoted strings
containing `:`, `#`, `'`, `"`, `\n`, `*`. Missing: `;`, `[`, `]`, `{`,
`}`, `&`, `!`, `|`, `>`, `%`, `@`, `` ` ``, `,`, `\`. Also now properly
escapes backslashes before quotes in double-quoted YAML strings.
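A minimal sketch of the expanded quoting rule (hypothetical helper names — the real serializer lives in `utils.mjs`):

```typescript
// Characters that force a string to be double-quoted in YAML output.
// Union of the old set (:, #, ', ", \n, *) and the newly added set.
const SPECIAL = /[:#'"\n*;\[\]{}&!|>%@`,\\]/;

function toYamlString(s: string): string {
  if (!SPECIAL.test(s)) return s; // plain scalars stay unquoted
  // Escape backslashes before quotes so the escapes themselves survive
  return `"${s.replace(/\\/g, "\\\\").replace(/"/g, '\\"')}"`;
}
```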
## Test plan
- [x] `bun bd` builds successfully with these changes
- [x] Verified `toYaml()` correctly quotes strings with special
characters and leaves plain strings unquoted
- [ ] CI should pass with a commit message containing semicolons (this
PR's own CI run serves as a test)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary
- **Fixed use-after-free**: `TransformTask.deinit()` was calling
`tsconfig.deinit()` on a pointer shallow-copied from
`JSTranspiler.config.tsconfig`. After the first async `transform()`
completed, the JSTranspiler's tsconfig became a dangling pointer —
subsequent `transform()`/`transformSync()` calls would read freed
memory, and a second async `transform()` would double-free.
- **Fixed memory leak**: `JSTranspiler.deinit()` never freed
`config.tsconfig`, so sync-only usage leaked the `TSConfigJSON`
allocation.
- **Root cause**: No ownership protocol — both `TransformTask` and
`JSTranspiler` believed they owned the tsconfig pointer. Fix assigns
sole ownership to `JSTranspiler`.
### Trigger scenario
```js
const transpiler = new Bun.Transpiler({
  tsconfig: JSON.stringify({
    compilerOptions: { experimentalDecorators: true, jsx: "react" }
  }),
});
await transpiler.transform("const x = 1;"); // TransformTask.deinit frees tsconfig
await transpiler.transform("const y = 2;"); // use-after-free + double-free
```
### Changes
| File | Change |
|------|--------|
| `src/bun.js/api/JSTranspiler.zig` | Remove `tsconfig.deinit()` from
`TransformTask.deinit` (borrower) |
| `src/bun.js/api/JSTranspiler.zig` | Add `tsconfig.deinit()` to
`JSTranspiler.deinit` (owner) |
| `test/js/bun/transpiler/transpiler-tsconfig-uaf.test.ts` | Regression
test: multiple async/sync transforms with tsconfig |
## Test plan
- [x] `bun bd test
test/js/bun/transpiler/transpiler-tsconfig-uaf.test.ts` — 3/3 pass
- [ ] CI passes on all platforms
> **Note**: The use-after-free is non-deterministic — mimalloc may not
immediately reuse freed memory, so the old code may not always crash.
Under ASan the original code would reliably fail.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
### What does this PR do?
Fixes a non-deterministic install bug where transitive peer dependencies
were left unresolved when all manifest loads happened synchronously
(warm cache + fresh resolution).
### The bug
When an auto-installed peer dependency creates a NEW package via
`fromNPM`, that package's own peer deps get re-queued to
`peer_dependencies` during `drainDependencyList` (with
`install_peer=false`). The subsequent `waitForPeers` call was guarded
only by `pendingTaskCount() > 0` — so if all manifest loads were
synchronous (e.g., npmjs.org manifests cached with valid `max-age=300`),
the wait loop was skipped and those transitive peers stayed unresolved
(`invalid_package_id`).
**Observable symptom** with `--linker=isolated`: a package's peer dep
symlink missing from its `.bun` entry. Example:
- Root has `react@18.2.0`, `@testing-library/react` (peers on
`react-dom`)
- `react-dom@18.3.1` (auto-installed peer) has `peerDependencies: {
react: "^18.3.1" }`
- Cold cache install: `react-dom` entry has `react` symlink ✓
- Warm cache + no lockfile: `react-dom` entry **missing** `react`
symlink ✗
The saved `bun.lock` was identical in both cases — only the in-memory
lockfile differed.
### The fix
Check `peer_dependencies.readableLength() > 0` alongside
`pendingTaskCount() > 0` when deciding whether to call `waitForPeers`.
The `waitForPeers` loop already handles draining the peer queue (its
`isDone` checks both conditions and calls `processPeerDependencyList`
each tick).
### How did you verify your code works?
- Added test in `isolated-install.test.ts` that proxies Verdaccio to
inject `Cache-Control: max-age=300` headers (replicating npmjs.org
behavior). Test uses packages where the transitive peer's constraint
doesn't match what's already in the lockfile.
- Test **fails** with `USE_SYSTEM_BUN=1` (bun 1.3.10 has this bug)
- Test **passes** with this fix
- All 29 tests in `isolated-install.test.ts` pass
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary
- Add the internal sentinel byte to the `SPECIAL_CHARS` table so
user-supplied strings containing it are stored out-of-band rather than
appended raw to the pre-lex script buffer
- Fix `validateJSObjRefIdx` to bounds-check against the actual `jsobjs`
array length instead of only checking `maxInt(u32)`, matching the
existing behavior of `validateJSStringRefIdx`
- Add defense-in-depth bounds checks in `Cmd.zig` and `Builtin.zig`
before indexing into the `jsobjs` array
## Test plan
- [x] New test file `test/js/bun/shell/shell-sentinel-hardening.test.ts`
covering:
- Strings containing the sentinel byte round-trip correctly through
shell interpolation
- Sentinel byte followed by digits in strings are properly handled
- Sentinel byte in redirect position does not cause out-of-bounds access
- `$.escape` properly handles the sentinel byte
- Plain `\x08` in strings is handled correctly
- Strings matching internal reference patterns are treated as literals
- [x] Existing shell tests pass (340/340 non-todo tests, 4 pre-existing
ASAN-only failures)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fix typo `"unicode-renage"` → `"unicode-range"` in the font-face
property name matching that caused the `unicode-range` property to never
be recognized and fall through to `CustomProperty` parsing, which
corrupted the `U+` prefix syntax
- Fix incorrect Rust-to-Zig translation in `toHexDigit()` where `(b |
0x20) -% ('a' +% 10)` should be `((b | 0x20) -% 'a') +| 10`, causing hex
letters A-F to be rejected as invalid
Together these two bugs meant all `unicode-range` values in `@font-face`
rules were mangled (e.g., `U+0000-00FF` → `U0-0FF`), causing browsers to
silently ignore the entire `@font-face` rule and fonts to not load.
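The arithmetic bug in `toHexDigit()` can be checked directly (TypeScript stand-in; the actual code is Zig, where `-%` wraps and `+|` saturates):

```typescript
// Buggy translation: (b | 0x20) -% ('a' +% 10) — wraps like u8 arithmetic
const buggy = (b: number) => ((b | 0x20) - (0x61 + 10)) & 0xff;

// Corrected: ((b | 0x20) -% 'a') +| 10
const fixed = (b: number) => ((b | 0x20) - 0x61) + 10;

const A = "A".charCodeAt(0); // 0x41
// fixed(A) yields 10, a valid hex digit (≤ 15);
// buggy(A) yields 246, so A-F were rejected as invalid
```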
Closes #27598
## Test plan
- [x] Added regression tests in `test/regression/issue/27598.test.ts`
covering single values, ranges, wildcards, hex letters, and
comma-separated lists
- [x] All 5 new tests fail with system bun (`USE_SYSTEM_BUN=1`) and pass
with the fix (`bun bd test`)
- [x] All 1028 existing CSS tests pass (`test/js/bun/css/css.test.ts`)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary
- Removed the `(run && target === "node")` condition from CLI backend
selection in `test/bundler/expectBundled.ts`
- This condition unnecessarily forced the CLI backend (which spawns a
`bun build` subprocess) for bundler tests with `target: "node"` and a
`run` step
- On resource-constrained CI runners (Alpine aarch64), this subprocess
sometimes hangs, causing 90s timeouts (`cjs/__toESM_target_node`)
- The `Bun.build()` API fully supports `target: "node"` and produces
identical output without subprocess overhead
## Test plan
- [x] `bun bd test test/bundler/bundler_cjs.test.ts` — all 23 tests pass
(including previously flaky `__toESM_target_node`)
- [x] `target: "node"` tests in `test/bundler/esbuild/default.test.ts` —
all pass with API backend
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary
- Remove the `latin1_is_now_ascii` feature flag from
`src/feature_flags.zig` (was always `false`)
- Remove two dead `if (comptime bun.FeatureFlags.latin1_is_now_ascii)`
branches in `src/string/immutable/unicode.zig` (`allocateLatin1IntoUTF8`
and `copyLatin1IntoUTF8StopOnNonASCII`)
## Test plan
- No behavioral change — the flag was always `false`, so the removed
code was unreachable dead code
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary
- Fix `Bun.Transpiler` API ignoring `experimentalDecorators: true` from
tsconfig, always emitting TC39-style decorators instead of legacy
TypeScript decorators
- Add missing `experimental_decorators` and `emit_decorator_metadata`
fields to `ParseOptions` in both sync (`getParseResult`) and async
(`TransformTask.run`) code paths in `JSTranspiler.zig`
Fixes #27575
## Root Cause
The `ParseOptions` structs constructed in
`src/bun.js/api/JSTranspiler.zig` did not include the
`experimental_decorators` and `emit_decorator_metadata` fields from the
parsed tsconfig. Since `ParseOptions` defaults both to `false`, the
`Bun.Transpiler` API always emitted TC39 decorators regardless of
tsconfig settings. Other code paths (bundler via `ParseTask.zig`,
runtime via `ModuleLoader.zig`) already propagated these fields
correctly.
## Test plan
- [x] New regression test `test/regression/issue/27575.test.ts`
verifying:
- `transformSync` emits legacy decorators with `experimentalDecorators:
true`
- `transformSync` emits decorator metadata with `emitDecoratorMetadata:
true`
- `transformSync` emits TC39 decorators when `experimentalDecorators` is
not set
- `transform` (async) emits legacy decorators with
`experimentalDecorators: true`
- [x] Test fails with system Bun (3/4 fail), passes with debug build
(4/4 pass)
- [x] Existing decorator test suites all pass (54 tests across 3 suites)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary
- Fix `chunkSize()` in `ChunkedEncoding.h` to return `uint64_t` instead
of `unsigned int`, preventing silent truncation of large chunk size
values
- Update `decChunkSize()` parameter and local variables to use
`uint64_t` consistently
- This ensures the existing `STATE_SIZE_OVERFLOW` check (bits 56-59)
actually works, as it was previously dead code due to the 32-bit
truncation
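The truncation is the same one JavaScript's `ToUint32` performs, which makes it easy to illustrate (the actual code is C++ in `ChunkedEncoding.h`):

```typescript
// A hex chunk size just over 2^32, e.g. "100000001\r\n" in a chunked body
const chunkSize = 0x1_0000_0001; // 4294967297

// Storing it in a 32-bit unsigned int keeps only the low 32 bits
const truncated = chunkSize >>> 0;
// truncated is 1: the old code would read a 1-byte "chunk" instead of
// ever reaching the STATE_SIZE_OVERFLOW check
```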
## Test plan
- [x] `bun bd test test/js/bun/http/request-smuggling.test.ts` — all 17
tests pass
- [x] New test "large chunk size exceeding 32 bits does not produce
empty body" verifies correct behavior for chunk sizes > 2^32
- [x] New test "rejects extremely large chunk size hex values" verifies
overflow detection works
- [x] New test "accepts valid chunk sizes within normal range" verifies
no regression for normal usage
- [x] Verified new test fails with `USE_SYSTEM_BUN=1` (old code) and
passes with `bun bd test` (fixed code)
- [x] Pre-existing tests unaffected
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- When `emitDecoratorMetadata: true` is set in tsconfig.json without
`experimentalDecorators: true`, Bun now correctly uses legacy decorator
semantics instead of TC39 standard decorator lowering
- `emitDecoratorMetadata` only makes sense with TypeScript's legacy
decorator system (`reflect-metadata`), so its presence implies the user
expects legacy decorator behavior
- This fixes a regression from ce715b5a0f where NestJS, TypeORM,
Angular, and other legacy-decorator frameworks would crash with
`descriptor.value` undefined
Closes #27526
## Test plan
- [x] Added regression test `test/regression/issue/27526.test.ts` with
two cases:
- Legacy decorators work when `emitDecoratorMetadata: true` but
`experimentalDecorators` is absent
- TC39 standard decorators still work when neither option is set
- [x] Verified test fails with system Bun (`USE_SYSTEM_BUN=1`) and
passes with debug build (`bun bd test`)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Fixes #11297 — flaky Windows-only data loss when reading from subprocess
pipes.
### Root Cause
A Windows kernel race in libuv's `uv__pipe_read_data`. When libuv's read
loop fills the user buffer exactly (`bytes_read == buf.len`), it loops
back for more. If the pipe is empty at that moment:
1. `ReadFile` returns `ERROR_IO_PENDING`
2. `CancelIoEx` is called to abort it
3. `GetOverlappedResult(TRUE)` blocks waiting for the cancellation
**The race**: if the child process writes to the pipe *during* step 3,
the Windows kernel can drain that data from the pipe buffer to satisfy
the pending read, then **discard it** when applying the cancellation.
The read reports 0 bytes transferred, the user buffer is unmodified, but
the data is gone from the pipe — **silent data loss**.
### Why Bun specifically?
Both Bun parent and Bun child use mimalloc. When the child reads stdin
with `ensureUnusedCapacity(65536)`, mimalloc rounds to **74468 bytes**.
The child then writes 74468-byte chunks to stdout. When the parent reads
with the same mimalloc-rounded 74468-byte buffer, **every read fills the
buffer exactly** → loop → `CancelIoEx` → race.
Node.js doesn't hit this because its allocator returns exactly 65536
bytes, so reads never align with writes.
Verified by writing chunks of exact sizes:
- `chunkSize=74467` → **20/20 OK**
- `chunkSize=74468` → **12/20 FAIL**
- `chunkSize=74469` → **20/20 OK**
### The Fix
Before looping, use `PeekNamedPipe` to verify data is actually
available. If not, break and queue a zero-read — the `CancelIoEx` path
is never entered. This replaces the [ReadFile → Cancel → Wait] sequence
with a single `PeekNamedPipe` when the pipe is empty, so it's actually
*faster* in the buggy case.
### Test Results
| | Before | After |
|---|---|---|
| `11297.test.ts` (30×) | flaky | 30/30 pass |
| `spawn-stdin-readable-stream.test.ts` | pass | pass |
| `spawn.test.ts` | pass | pass |
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
Fixes #27521
- In dev server (bake) mode, `patchImportRecordSourceIndices`
intentionally skips saving `source_index` on JS import records. This
caused the `is_export_star_target` flag to never be set on export-star
targets, because the flag-setting code at `bundle_v2.zig:4212-4220`
relied on `source_index` being valid.
- Without this flag, the barrel optimizer could incorrectly defer
submodules of packages that are targets of `export *` from another
barrel (e.g., `@tanstack/query-core` being star-exported from
`@tanstack/react-query`), leading to classes like `QueryClient` becoming
`undefined` and causing `TypeError: Right-hand side of 'instanceof' is
not an object`.
- The fix falls back to resolving the source index via
`pathToSourceIndexMap` when `source_index` is not set, matching the same
pattern already used in `scheduleBarrelDeferredImports`.
## Test plan
- [x] Added dev server test `barrel optimization: export star target not
deferred (#27521)` that verifies `instanceof` works correctly through an
`export *` chain with barrel-optimized packages
- [x] All 17 `test/bake/dev/bundle.test.ts` tests pass
- [x] All 45 `test/bundler/bundler_barrel.test.ts` tests pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Problem
When building with `--compile --bytecode --format=esm`, imports marked
`is_unused` by barrel optimization are recorded in bytecode ModuleInfo
as external dependencies, causing runtime crashes.
Regression from #26892 (barrel import optimization).
## Repro
```js
// node_modules/fakelib/package.json
{"name":"fakelib","type":"module","sideEffects":false,"main":"./index.js"}
// node_modules/fakelib/index.js
import Unused from './sub/unused.js';
import { used } from './sub/used.js';
export { Unused, used };
// entry.ts
import { used } from 'fakelib'
console.log(used())
```
```sh
bun build entry.ts --compile --format=esm --bytecode --outfile=out
./out
# error: Cannot find module './sub/unused.js' from '/$bunfs/root/out'
```
Affects real packages like `diff@8.x` (`"sideEffects": false` barrel
with many re-exports).
## Root cause
1. `barrel_imports.zig` marks unused re-export imports as `is_unused =
true`. Never resolved → `source_index` stays invalid.
2. `convertStmtsForChunk` correctly drops these via
`shouldRemoveImportExportStmt` (checks `is_unused`) → emitted JS is
fine.
3. But `postProcessJSChunk.zig` section 2 scans the **original** AST
parts to populate bytecode ModuleInfo. It finds the `s_import` with
`!record.source_index.isValid()` and records the relative specifier as
an external module dependency.
4. At runtime, JSC reads ModuleInfo and tries to resolve
`./sub/unused.js` from `$bunfs/root/` → fails.
Debug build assertion catches this as `Imports different between
parseFromSourceCode and fallbackParse`.
## Fix
Skip `is_unused` records when scanning for external imports in
ModuleInfo generation (they're not in the emitted code).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
The `getConsoleConstructor` custom getter in `ZigGlobalObject.cpp` was
missing a `return` after `throwException`, causing execution to continue
with an invalid result when `profiledCall` threw an exception.
## Problem
When `console.Console` is accessed in a near-stack-overflow condition,
`profiledCall` to `createConsoleConstructor` throws a
`StackOverflowError`. The exception was correctly caught and re-thrown
via `throwException`, but execution fell through to:
1. `console->putDirect(vm, property, result, 0)` — caching the
invalid/empty `result` as the `Console` property
2. `return JSValue::encode(result)` — returning the invalid value
This meant that after stack recovery, `console.Console` would
permanently return `undefined` instead of the `Console` constructor,
because `putDirect` had already cached the bad value.
## Fix
Add `return {};` after `throwException(globalObject, scope,
returnedException.get())` to properly propagate the exception without
caching an invalid result.
I audited all other `if (returnedException)` patterns in the codebase (9
total) and confirmed this was the only instance missing a return.
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
### What does this PR do?
Fixes two bugs in the `X509Certificate` constructor that prevented
subclassing. Both appear to be copy-paste artifacts from
`NodeVMScript.cpp`.
**Bug 1: `X509Certificate.prototype` was `undefined`**
`JSX509CertificateConstructor::finishCreation()` received a `prototype`
argument but never used it:
```cpp
// Before
void JSX509CertificateConstructor::finishCreation(VM& vm, JSGlobalObject* globalObject, JSObject* prototype)
{
    Base::finishCreation(vm, 1, "X509Certificate"_s, PropertyAdditionMode::WithStructureTransition);
    // missing: putDirectWithoutTransition(vm, vm.propertyNames->prototype, prototype, ...);
}
```
As a result, `X509Certificate.prototype` was `undefined`, and `class Foo
extends X509Certificate {}` threw immediately:
```
TypeError: The value of the superclass's prototype property is not an object or null.
```
**Bug 2: Wrong Structure used for subclass instances**
When `newTarget !== X509Certificate` (subclassing), the code used
`vm.Script`'s structure instead:
```cpp
// Before
structure = InternalFunction::createSubclassStructure(globalObject, newTarget.getObject(),
    functionGlobalObject->NodeVMScriptStructure()); // ← wrong class
```
This would have caused subclass instances to inherit from
`vm.Script.prototype`. (Unreachable in practice because of Bug 1, but
would surface once that was fixed.)
Also fixed the error message on line 163 which still said `"Class
constructor Script cannot be invoked without 'new'"`.
### Fix
```cpp
void JSX509CertificateConstructor::finishCreation(VM& vm, JSGlobalObject* globalObject, JSObject* prototype)
{
    Base::finishCreation(vm, 1, "X509Certificate"_s, PropertyAdditionMode::WithStructureTransition);
    putDirectWithoutTransition(vm, vm.propertyNames->prototype, prototype,
        PropertyAttribute::DontEnum | PropertyAttribute::DontDelete | PropertyAttribute::ReadOnly);
}
```
```cpp
structure = InternalFunction::createSubclassStructure(globalObject, newTarget.getObject(),
    functionGlobalObject->m_JSX509CertificateClassStructure.get(functionGlobalObject));
```
### How did you verify your code works?
Added regression tests in `test/js/node/crypto/x509-subclass.test.ts`:
- `X509Certificate.prototype` is defined and has expected methods
- Instances have correct prototype chain
- `class MyX509 extends X509Certificate {}` works
- Subclass instances are NOT `instanceof vm.Script`
- Subclass getters (`.subject`, `.issuer`, etc.) work correctly
**Before fix: 6/6 tests fail**
```
(fail) X509Certificate > constructor has .prototype property
(fail) X509Certificate > prototype has expected methods
(fail) X509Certificate > instance uses correct prototype
(fail) X509Certificate > can be subclassed
TypeError: The value of the superclass's prototype property is not an object or null.
(fail) X509Certificate > subclass prototype chain is correct
(fail) X509Certificate > subclass instance accesses X509 getters correctly
```
**After fix: 6/6 tests pass**
Also verified `test/js/node/test/parallel/test-crypto-x509.js` still
passes.
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Cap the number of user-supplied headers written into the fixed-size
`shared_request_headers_buf` (256 entries) in `buildRequest` to prevent
out-of-bounds writes when a `fetch()` call includes more headers than
the buffer can hold.
- Six slots are reserved for default headers (Connection, User-Agent,
Accept, Host, Accept-Encoding, Content-Length/Transfer-Encoding),
leaving 250 for user-supplied headers. Excess headers are silently
dropped while their semantic flags (e.g. `override_host_header`) are
still processed correctly.
- Added tests verifying that fetch with 300 headers completes without
crashing and that 250 custom headers are all delivered correctly.
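The capping arithmetic can be sketched in a few lines (illustrative JS; the real change is in the Zig `buildRequest` code, and the names below are hypothetical):

```js
// Hypothetical sketch of the header cap, not Bun's actual source.
const BUF_CAPACITY = 256; // fixed-size shared_request_headers_buf
const RESERVED = 6;       // Connection, User-Agent, Accept, Host,
                          // Accept-Encoding, Content-Length/Transfer-Encoding
const MAX_USER_HEADERS = BUF_CAPACITY - RESERVED; // 250

function countWrittenHeaders(userHeaderCount) {
  // Excess entries are silently dropped rather than written past the
  // end of the fixed-size buffer.
  return Math.min(userHeaderCount, MAX_USER_HEADERS);
}
```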
## Test plan
- [x] `bun bd test test/js/bun/http/fetch-header-count-limit.test.ts`
passes (2 tests)
- [ ] CI passes on all platforms
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Tighten bounds checks in `Buffer.prototype.compare` to properly
validate `targetEnd` and `sourceEnd` against their respective buffer
lengths unconditionally, matching Node.js semantics
- Previously certain combinations of start/end offset values could
bypass the range validation due to conjunctive check conditions; now end
values are checked first, then zero-length ranges return early, then
start values are validated
- Add comprehensive test coverage for Buffer.compare bounds edge cases
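The check ordering described above can be sketched as follows (illustrative JS, not the actual implementation; the function name and error text are assumptions):

```js
// Hypothetical sketch of the validation order: ends first, then
// zero-length early return, then starts.
function validateCompareRanges(target, source, tStart, tEnd, sStart, sEnd) {
  // 1. End offsets are validated unconditionally against buffer lengths.
  if (tEnd > target.length || sEnd > source.length) {
    throw new RangeError("out of range index");
  }
  // 2. Zero-length ranges on both sides compare equal; return early.
  if (tStart >= tEnd && sStart >= sEnd) return 0;
  // 3. Start offsets are validated last.
  if (tStart > tEnd || sStart > sEnd) {
    throw new RangeError("out of range index");
  }
  return undefined; // caller proceeds to the byte-wise comparison
}
```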
## Test plan
- [x] `bun bd test test/js/node/buffer-compare-bounds.test.ts` — 13/13
pass
- [x] `bun bd test/js/node/test/parallel/test-buffer-compare-offset.js`
— passes cleanly
- [x] `bun bd test/js/node/test/parallel/test-buffer-compare.js` —
passes cleanly
- [x] `bun bd test test/js/node/buffer.test.js` — 457/457 pass
- [x] `USE_SYSTEM_BUN=1 bun test
test/js/node/buffer-compare-bounds.test.ts` — 2 failures confirm tests
catch the issue
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- `_joinAbsStringBuf` used a fixed `MAX_PATH_BYTES * 2` stack buffer for
its scratch space; joining a long user-provided import specifier with
tsconfig `baseUrl` / `node_modules` / source dir would overflow it and
panic in `normalizeStringGenericTZ`.
- Replace the fixed buffer with a `JoinScratch` stack-fallback allocator
sized to the input (zero-alloc for normal paths, heap-alloc only for
pathological inputs).
- Add `absBufChecked` / `joinAbsStringBufChecked` that return `null`
when the normalized result would still exceed the destination buffer,
and switch resolver call sites that handle user-controlled specifiers to
use it (treating overflow as not-found).
- Guard `ESModule.Package.parseSubpath`, tsconfig `paths` wildcard
concat, and the VM cache-bust join against overflow.
## Test plan
- [ ] `bun bd test test/js/bun/resolve/resolve-error.test.ts` — new test
exercises bare packages via tsconfig baseUrl, tsconfig `paths`
wildcards, relative paths, and long paths with `..` normalization, none
of which should crash
- [ ] CI green on all platforms (Windows path joining also changed)
🤖 Generated with [Claude Code](https://claude.com/claude-code) (0%
31-shotted by claude)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
- Tightens hex digit validation in the chunked Transfer-Encoding parser
(`ChunkedEncoding.h`) to strictly accept only RFC 9110 HEXDIG characters
(`0-9`, `a-f`, `A-F`)
- The previous implementation used arithmetic range checks that were
slightly too permissive, accepting certain non-HEXDIG ASCII characters
(e.g. those between `'9'` and `'A'` or after `'F'`/`'f'` in the ASCII
table)
- This aligns Bun's chunked parser with other strict HTTP
implementations (nginx, Apache, HAProxy, Node.js) and ensures consistent
chunk size interpretation across all servers in a proxy chain
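The difference can be sketched in a few lines (illustrative JS; Bun's actual parser is C++ in `ChunkedEncoding.h`, and the permissive variant below is only an example of the failure class, not the original code):

```js
// Strict RFC 9110 HEXDIG: exactly 0-9, a-f, A-F.
function isStrictHexDigit(c) {
  return (c >= "0" && c <= "9") || (c >= "a" && c <= "f") || (c >= "A" && c <= "F");
}

// A buggy range check of the kind described above: a single wide ASCII
// range also accepts ':' through '@' (ASCII 58-64) and 'G' through '_',
// which sit between the digit and letter ranges.
function isPermissiveHexDigit(c) {
  return c >= "0" && c <= "f"; // accepts ':', '@', 'G', etc.
}
```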
## Test plan
- [x] Added 30 new tests in `test/js/bun/http/request-smuggling.test.ts`
covering:
- Valid hex digits (`0-9`, `a-f`, `A-F`) are accepted
- Multi-digit hex chunk sizes work correctly
- Invalid characters `G`, `g`, `Z`, `z`, `x`, `X` are rejected
- ASCII 58-64 characters (`:`, `;`, `<`, `=`, `>`, `?`, `@`) between '9' and
'A' are rejected
- Other special characters (`!`, `#`, `$`, `%`, `^`, `&`, `*`, etc.) are
rejected
- [x] Verified new tests fail on system bun (before fix) and pass on
debug build (after fix)
- [x] All 44 tests in `request-smuggling.test.ts` pass (14 existing + 30
new)
- [x] No regressions in existing chunked encoding tests
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Write a null terminator to the headers array when the empty-headers
early return path is taken in `getHeaders()` (`HttpParser.h:723`). This
matches the existing behavior on the normal parsing path (line 796) and
ensures headers are properly isolated between pipelined requests.
- Without this, when a pipelined request has no headers (request line
immediately followed by `\r\n\r\n`), stale header `string_view`s from
the previous request on the same connection could remain in the reused
`HttpRequest` object's headers array.
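The bug shape can be illustrated in a few lines (JS sketch; the real code is the C++ `getHeaders()` in `HttpParser.h`, and this function is only a model of it):

```js
// The headers array is reused across pipelined requests on the same
// connection, so every exit path must write a terminator, or stale
// entries from the previous request survive.
function getHeaders(rawHeaders, reusedArray) {
  let n = 0;
  for (const h of rawHeaders) reusedArray[n++] = h;
  reusedArray[n] = null; // the terminator the empty-headers path was missing
  return n;
}
```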
## Test plan
- [x] Added 3 new pipelined request header isolation tests to
`test/js/bun/http/request-smuggling.test.ts`
- [x] Verified new test (`pipelined headerless request is rejected and
does not inherit stale content-length`) **fails without the fix** and
**passes with the fix**
- [x] All 17 tests in the request-smuggling test file pass
- [x] `bun bd test test/js/bun/http/request-smuggling.test.ts` — all
pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## What does this PR do?
Moves Windows code signing from an inline CMake `POST_BUILD` step to a
dedicated Buildkite step (`windows-sign`) that runs on an x64 agent
after all Windows builds complete.
### Why
DigiCert `smctl` is x64-only and silently fails under ARM64 emulation.
With the old inline approach, ARM64 builds were never signed (`ci.mjs`
skipped it with `target.arch !== "aarch64"`). Now that we're shipping
Windows ARM64, we need all Windows binaries signed.
### How it works
```
windows-x64-build-bun ──────────┐
windows-x64-baseline-build-bun ─┼─→ windows-sign (x64 agent) ─→ release
windows-aarch64-build-bun ──────┘
```
The `windows-sign` step:
1. Downloads all 6 Windows zips (x64, x64-baseline, aarch64 × {release,
profile})
2. Extracts each, signs the exe with smctl, re-packs
3. Re-uploads with the **same filenames**
4. `upload-release.sh` pins Windows artifact downloads to `--step
windows-sign` to guarantee signed zips are released
### When signing runs
- On `main` with non-canary builds (normal release path)
- When `[sign windows]` is in the commit message (for testing on a
branch — **this PR uses it**)
Canary builds are never signed (DigiCert charges per signature).
### Cleanup
- Removed `ENABLE_WINDOWS_CODESIGNING` CMake option
- Removed inline `POST_BUILD` signing from `BuildBun.cmake`
- Removed SM_* secret fetching from `scripts/build.mjs`
- Replaced `sign-windows.ps1` (2-exe signer) with
`sign-windows-artifacts.ps1` (batch zip signer)
### Testing
The commit message contains `[sign windows]` so this PR's CI should run
the sign step. Will verify:
- All 6 zips are downloaded, signed, re-uploaded
- `Get-AuthenticodeSignature` verification passes for each exe
- smctl healthcheck works on the x64 test agent
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
`sendBuffer()` was writing directly to `this.tcp` (the raw socket),
which is **detached** in proxy tunnel mode (`wss://` through HTTP
CONNECT proxy). This caused unencrypted WebSocket frame data to be
written to a detached socket, corrupting the connection and causing
immediate disconnection (close code 1006).
## Root Cause
The fast path in `writeString` → `enqueueEncodedBytes` correctly checks
for `proxy_tunnel` and routes through `tunnel.write()`. But the slow
path (`sendData` → `sendDataUncompressed` → `sendBuffer`), taken when
there is backpressure or the data needs to be buffered in `send_buffer`,
bypassed the tunnel entirely.
Under **bidirectional traffic** (simultaneous reads and writes),
backpressure builds up and pushes writes through the `sendBuffer` path,
killing the connection within seconds.
## Reproduction
The bug manifests when using `wss://` through an HTTP CONNECT proxy with
bidirectional WebSocket traffic. Specifically this was causing constant
disconnections in Claude Code when using Bun's native WebSocket client —
the ping/pong keepalive mechanism never received pong responses because
the connection died before they could arrive.
Key conditions:
- `wss://` (TLS) through an HTTP CONNECT proxy (uses
`WebSocketProxyTunnel`)
- Bidirectional traffic (client writes AND receives data simultaneously)
- Works fine without proxy, or without TLS, or with read-only traffic
## Fix
- `sendBuffer`: Check for `proxy_tunnel` and route writes through
`tunnel.write()` instead of `this.tcp.write()`
- `sendDataUncompressed` (2 locations): Guard debug assertions
(`isShutdown`/`isClosed`/`isEstablished`) with `proxy_tunnel == null`
since they crash on the detached socket
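The routing change amounts to the following (illustrative JS sketch of the Zig fix; field names like `proxyTunnel` are hypothetical):

```js
// Prefer the proxy tunnel when one exists: in CONNECT-tunnel mode the
// raw TCP socket is detached, and writing to it corrupts the connection.
function sendBuffer(ws, bytes) {
  if (ws.proxyTunnel !== null) {
    return ws.proxyTunnel.write(bytes); // TLS-wrapped tunnel path
  }
  return ws.tcp.write(bytes); // direct (non-proxied) path
}
```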
## Test
Added `test-ws-bidir-proxy.test.ts` which sets up a TLS WebSocket
server, HTTP CONNECT proxy, and a client that does simultaneous
bidirectional traffic with ping/pong. Before this fix, the test fails
with close code 1006 after ~13 messages and 0 pongs. After the fix, it
completes with 9+ pongs and 140+ messages.
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
Fixes #27431.
- fixes a Windows standalone executable crash when
`compile.autoloadDotenv = false`, a `.env` file exists in the runtime
cwd, and the executable spawns a `Worker`
- gives worker startup its own cloned `DotEnv.Loader` before
`configureDefines()`, so dotenv loading does not mutate env state owned
by another thread
- aligns worker startup with other Bun runtime paths by wiring
`resolver.env_loader = transpiler.env`
- extracts standalone runtime flag propagation into
`applyStandaloneRuntimeFlags(...)` so main and worker startup share the
same env/tsconfig/package.json behavior
- adds regression coverage in `test/regression/issue/27431.test.ts` and
bundler coverage in `test/bundler/bundler_compile_autoload.test.ts`
### How did you verify your code works?
- reproduced the original crash with `bun test
regression/issue/27431.test.ts` on stock `1.3.10-canary.104`; the test
fails on unpatched Bun
- rebuilt `build/debug/bun-debug.exe` with this patch and ran
`build/debug/bun-debug.exe test regression/issue/27431.test.ts`; the
test passes on the patched build
- manually validated the minimal repro from
`https://github.com/Hona/bun1310-minimal-repro` against the patched
`bun-debug.exe`; the standalone executable no longer crashes and still
keeps dotenv disabled (`process.env` does not pick up `.env`)
## Summary
- Adds `bun repl -e <script>` / `-p <script>` for non-interactive
evaluation using REPL semantics (object literal wrapping, declaration
hoisting), draining the event loop before exit. Returns exit code 1 on
error.
- Adds `docs/runtime/repl.mdx` documenting the interactive REPL
(commands, keybindings, special variables, top-level await, imports) and
the new non-interactive mode.
- Updates bash/fish/zsh completions for the `repl` subcommand and its
flags.
## Test plan
- [x] `bun bd test test/js/bun/repl/repl.test.ts` — all 103 tests pass
(20 new)
- [x] `USE_SYSTEM_BUN=1 bun test` — new tests fail (validates they test
new behavior)
- [x] `bun run zig:check-all` — compiles on all platforms
- [x] `bash -n` / `fish -n` / `zsh -n` syntax checks on completion files
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Remove `shutdown()` calls on subprocess stdio socketpair file
descriptors that were causing Python asyncio-based MCP servers to break
## Root Cause
Bun uses `SOCK_STREAM` socketpairs for subprocess stdio pipes. After
creating each socketpair, it called `shutdown(SHUT_WR)` on the parent's
read end (for stdout/stderr) and `shutdown(SHUT_RD)` on the parent's
write end (for stdin) to make them unidirectional.
On `SOCK_STREAM` sockets, `shutdown(fd, SHUT_WR)` sends a **FIN** to the
peer. Python's `asyncio.connect_write_pipe()` registers an `EPOLLIN`
watcher on the write pipe fd to detect peer closure. The FIN from
`shutdown()` triggers an immediate `EPOLLIN` event, causing asyncio to
interpret it as "connection closed" and tear down the write transport —
even though the pipe should remain open.
This broke **all Python MCP servers** using the `model_context_protocol`
SDK (which uses `connect_write_pipe()` in its stdio transport) whenever
they took more than a few seconds to initialize. Node.js does not have
this issue because it does not call `shutdown()` on its socketpairs.
## Fix
Remove the `shutdown()` calls entirely. The socketpairs are already used
unidirectionally by convention, and the `shutdown()` calls provided no
functional benefit while causing compatibility issues with any program
that polls its stdio fds for readability/writability events.
## Test plan
- [x] Added regression test
`test/js/bun/spawn/spawn-socketpair-shutdown.test.ts` with 3 test cases:
- Subprocess stdout pipe stays writable after idle delay
- Python asyncio `connect_write_pipe` works correctly with idle period
- Subprocess stdin pipe stays readable for child after idle delay
- [x] Verified test fails on system bun (without fix) and passes on
debug build (with fix)
- [x] Verified existing spawn tests still pass
(`spawn-streaming-stdout`, `spawn-stdin-readable-stream`)
- [x] Verified original bug report repro script works with the fix
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- **Enable keepalive for custom TLS configs (mTLS):** Previously, all
connections using custom TLS configurations (client certificates, custom
CA, etc.) had `disable_keepalive=true` forced, causing a new TCP+TLS
handshake on every request. This removes that restriction and properly
tracks SSL contexts per connection.
- **Intern SSLConfig with reference counting:** Identical TLS
configurations are now deduplicated via a global registry
(`SSLConfig.GlobalRegistry`), enabling O(1) pointer-equality lookups
instead of O(n) content comparisons. Uses `ThreadSafeRefCount` for safe
lifetime management across threads.
- **Bounded SSL context cache with LRU eviction:** The custom SSL
context map in `HTTPThread` is now bounded (max 60 entries, 30-minute
TTL) with proper cleanup of both SSL contexts and their associated
SSLConfig references when evicted.
- **Correct keepalive pool isolation:** Pooled sockets now track their
`ssl_config` (with refcount) and `owner` context, ensuring connections
are only reused when the TLS configuration matches exactly, and sockets
return to the correct pool on release.
Fixes #27358
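A minimal sketch of the interning idea (JS model; the real `GlobalRegistry` is thread-safe Zig in `SSLConfig.zig`, keyed by a content hash, and the class below is an assumption for illustration):

```js
// Identical configs intern to one shared object, so equality checks
// become O(1) pointer comparisons; refcounts bound the registry's size.
class ConfigRegistry {
  #entries = new Map(); // contentKey -> { config, refs }

  intern(contentKey, makeConfig) {
    let e = this.#entries.get(contentKey);
    if (!e) {
      e = { config: makeConfig(), refs: 0 };
      this.#entries.set(contentKey, e);
    }
    e.refs++;
    return e.config; // same key -> same object -> pointer equality
  }

  release(contentKey) {
    const e = this.#entries.get(contentKey);
    if (e && --e.refs === 0) this.#entries.delete(contentKey);
  }
}
```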
## Changed files
- `src/bun.js/api/server/SSLConfig.zig` — ref counting, content hashing,
GlobalRegistry interning
- `src/bun.js/webcore/fetch.zig` — intern SSLConfig on creation, deref
on cleanup
- `src/http.zig` — `custom_ssl_ctx` field, `getSslCtx()` helper, updated
all callback sites
- `src/http/HTTPContext.zig` — `ssl_config`/`owner` on PooledSocket,
pointer-equality matching
- `src/http/HTTPThread.zig` — `SslContextCacheEntry` with timestamps,
TTL + LRU eviction
## Test plan
- [x] `test/regression/issue/27358.test.ts` — verifies keepalive
connection reuse with custom TLS and isolation between different configs
- [x] `test/js/bun/http/tls-keepalive.test.ts` — comprehensive tests:
keepalive reuse, config isolation, stress test (50 sequential requests),
keepalive-disabled control
- [x] `test/js/bun/http/tls-keepalive-leak-fixture.js` — memory leak
detection fixture (50k requests with same config, 200 requests with
distinct configs)
## Changelog
<!-- CHANGELOG:START -->
Fixed a bug where HTTP connections using custom TLS configurations
(mTLS, custom CA certificates) could not reuse keepalive connections,
causing a new TCP+TLS handshake for every request and leaking SSL
contexts. Custom TLS connections now properly participate in keepalive
pooling with correct isolation between different configurations.
<!-- CHANGELOG:END -->
🤖 Generated with [Claude Code](https://claude.com/claude-code) (0%
16-shotted by claude-opus-4-6, 3 memories recalled)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## What
Fixes a null pointer dereference crash in the `seq` shell builtin when
called with only flags and no numeric arguments.
## Reproduction
```js
await Bun.$`seq -w` // crash
await Bun.$`seq -s ,` // crash
await Bun.$`seq -t .` // crash
```
```
panic(main thread): attempt to use null value
src/shell/builtin/seq.zig:47:31
```
Also crashes release builds (segfault).
## Root cause
The flag-parsing loop at line 17 consumes all arguments. When the user
passes only flags, the iterator is exhausted after the loop exits. Line
47 then calls `iter.next().?` which panics on `null`.
The existing `args.len == 0` check on line 14 only catches the case
where no args are passed at all — it does not cover the case where all
args are consumed as flags.
## Fix
Changed `.?` to `orelse return this.fail(usageString)`, matching the
behavior when `seq` is called with zero arguments.
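The control flow can be rendered in JS (a simplified model: the actual fix is the Zig change `iter.next() orelse return this.fail(...)` in `seq.zig`, and this sketch ignores flags that consume a value):

```js
// After the flag loop consumes all arguments, the next read must handle
// exhaustion gracefully instead of unwrapping null.
function firstNumericArg(args, fail) {
  let i = 0;
  while (i < args.length && args[i].startsWith("-")) i++; // flag loop
  if (i === args.length) return fail(); // was `iter.next().?` -> panic on null
  return args[i];
}
```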
## Summary
- Bind the `stream.finished` callback with `AsyncLocalStorage.bind()`
before wrapping with `once()`, matching [Node.js
behavior](https://github.com/nodejs/node/blob/main/lib/internal/streams/end-of-stream.js#L70).
Without this, the async context active when `finished()` is called is
lost by the time the callback fires.
Closes #27428
## Test plan
- [x] Added regression test `test/regression/issue/27428.test.ts` that
spawns an HTTP server using `stream.finished` inside
`AsyncLocalStorage.run()` and verifies the store is preserved in the
callback
- [x] Verified test fails with system bun (`USE_SYSTEM_BUN=1`) and
passes with the debug build
- [x] Existing stream finished tests (`test-stream-finished.js`,
`test-stream-end-of-streams.js`, `test-http-outgoing-finished.js`,
`test-http-client-finished.js`) continue to pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
This PR implements a native Zig REPL for Bun with full TUI (Text User
Interface) support, providing a modern and feature-rich interactive
experience.
### Features
- **Syntax highlighting** using `QuickAndDirtySyntaxHighlighter` for
colorized JavaScript code
- **Full line editing** with Emacs-style keybindings:
- `Ctrl+A/E` - Move to start/end of line
- `Ctrl+B/F` - Move backward/forward one character
- `Ctrl+K/U` - Kill to end/start of line
- `Ctrl+W` - Delete word backward
- `Ctrl+L` - Clear screen
- Arrow keys for cursor movement
- **Persistent history** with file storage (`~/.bun_repl_history`)
- Up/Down arrow for history navigation
- `Ctrl+P/N` also works for history
- **Tab completion** for properties and commands
- **Multi-line input support** with automatic continuation detection
- **REPL commands**: `.help`, `.exit`, `.clear`, `.load`, `.save`,
`.editor`
- **Special variables**:
- `_` - Contains the result of the last expression
- `_error` - Contains the last error that occurred
- **Result formatting** with `util.inspect` integration
- **replMode transforms** for proper REPL semantics:
- Expression result capture via `{ value: expr }` wrapper
- Variable hoisting for persistence across REPL lines (`const`/`let` →
`var`)
- Function and class declaration hoisting
- Top-level await support with async IIFE wrapper
- Object literal detection (no parentheses needed for `{ a: 1 }`)
### Implementation
The REPL is implemented in pure Zig (`src/repl.zig`) with C++ bindings
for JSC integration:
- Uses raw terminal mode for character-by-character input
- Integrates with Bun's existing `VirtualMachine` for JavaScript
evaluation
- Uses the parser with `repl_mode=true` to apply REPL-specific AST
transforms
- Provides access to all Bun globals (`Bun`, `Buffer`, `console`,
`process`, etc.)
### Files Changed
- `src/repl.zig` - Main REPL implementation (~1500 lines)
- `src/cli/repl_command.zig` - CLI entry point
- `src/bun.js/bindings/bindings.cpp` - C++ REPL functions
- `src/bun.js/bindings/headers.h` - C++ declarations
- `src/ast/repl_transforms.zig` - REPL-specific AST transforms
(cherry-picked from jarred/repl-mode)
- `test/js/bun/repl/repl.test.ts` - Comprehensive tests
## Test Plan
- [x] Run `bun bd test test/js/bun/repl/repl.test.ts` - 27 tests pass
- [x] Manual testing of interactive features:
- Basic expression evaluation
- Special variables `_` and `_error`
- History navigation
- Tab completion
- Multi-line input
- REPL commands
- Top-level await
- Variable persistence
- [x] Verified REPL starts without downloading packages (fixes #26058)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- The `fuzzilli('FUZZILLI_PRINT', ...)` native function handler in
FuzzilliREPRL.cpp called `fdopen(103, "w")` on every invocation and
passed the result directly to `fprintf()` without a NULL check
- When running the debug-fuzz binary outside the REPRL harness (where fd
103 is not open), `fdopen()` returns NULL, and `fprintf(NULL, ...)`
causes a SIGSEGV crash
- Fix: make the `FILE*` static (so `fdopen` is called once, avoiding fd
leaks) and guard `fprintf`/`fflush` behind a NULL check
## Crash reproduction
```js
// The crash is triggered when the Fuzzilli explore framework calls
// fuzzilli('FUZZILLI_PRINT', ...) on the native fuzzilli() function
// while running outside the REPRL harness (fd 103 not open).
// Minimal reproduction:
fuzzilli('FUZZILLI_PRINT', 'hello');
```
Running the above with the debug-fuzz binary (which registers the native
`fuzzilli` function) causes SIGSEGV in `__vfprintf_internal` due to NULL
FILE*.
## Test plan
- [x] Verified crash reproduces 10/10 times with the original binary
- [x] Verified 0/10 crashes with the fixed binary
- [x] Fix is trivially correct: static FILE* + NULL guard
Co-authored-by: Alistair Smith <alistair@anthropic.com>
## Summary
- When using `bun build --compile --sourcemap=external`, the `.map`
files were embedded in the executable but never written to disk. This
fix writes them next to the compiled executable.
- With `--splitting` enabled, multiple chunks each produce their own
sourcemap. Previously all would overwrite a single `{outfile}.map`; now
each `.map` file preserves its chunk-specific name (e.g.,
`chunk-XXXXX.js.map`).
- Fixes both the JavaScript API (`Bun.build`) and CLI (`bun build`) code
paths.
## Test plan
- [x] `bun bd test test/bundler/bun-build-compile-sourcemap.test.ts` —
all 8 tests pass
- [x] New test: `compile with sourcemap: external writes .map file to
disk` — verifies a single `.map` file is written and contains valid
sourcemap JSON
- [x] New test: `compile without sourcemap does not write .map file` —
verifies no `.map` file appears without the flag
- [x] New test: `compile with splitting and external sourcemap writes
multiple .map files` — verifies each chunk gets its own uniquely-named
`.map` file on disk
- [x] Verified new tests fail with `USE_SYSTEM_BUN=1` (confirming they
test the fix, not pre-existing behavior)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Fixes https://github.com/oven-sh/bun/issues/21201. Ran into this today
during a migration from node->bun. TLDR if you set `Transfer-Encoding:
chunked` via `res.writeHead()`, Bun sends the header twice because two
layers independently add it:
1. `writeFetchHeadersToUWSResponse` (NodeHTTP.cpp) writes the user's
headers to the response buffer
2. uWS's `HttpResponse::write()` auto-inserts `Transfer-Encoding:
chunked` since no flag indicates it was already set
Added a `HTTP_WROTE_TRANSFER_ENCODING_HEADER` flag copying the pattern
for `Content-Length` and `Date` deduping, checked before auto-insertion
in `write()`, `flushHeaders()`, and `sendTerminatingChunk()`.
### How did you verify your code works?
Minimal repro:
```js
import http from "node:http";
import net from "node:net";

http.createServer((_, res) => {
  res.writeHead(200, { "Transfer-Encoding": "chunked" });
  res.write("ok");
}).listen(0, "127.0.0.1", function () {
  const s = net.createConnection(this.address().port, "127.0.0.1", () =>
    s.write("GET / HTTP/1.1\r\nHost: x\r\n\r\n"));
  s.on("data", (d) => {
    console.log(d.toString().split("\r\n\r\n")[0]);
    s.destroy();
    this.close();
  });
});
```
Node working (header shown once):
```
$ node --version
v22.21.1
$ node /Users/mamps/code/labs/conway/tmp/bun-duplicate-te.mjs
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Date: Tue, 24 Feb 2026 06:14:50 GMT
Connection: keep-alive
Keep-Alive: timeout=5
```
Bun bug (duplicate header):
```
$ bun --version
1.3.9
$ bun bun-duplicate-te.mjs
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Date: Tue, 24 Feb 2026 06:13:55 GMT
Transfer-Encoding: chunked
```
Bun fixed:
```
$ ./build/debug/bun-debug --version
1.3.10-debug
$ BUN_DEBUG_QUIET_LOGS=1 ./build/debug/bun-debug /tmp/bun-duplicate-te.mjs
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Date: Tue, 24 Feb 2026 06:15:53 GMT
```
## Summary
- `MSG_NOSIGNAL` is only valid for send operations (`send`, `sendto`,
`sendmsg`), not receive operations (`recv`, `recvfrom`, `recvmsg`).
Passing it to `recvfrom` causes `EINVAL` in strict environments like
gVisor (Google Cloud Run).
- Split the shared `socket_flags_nonblock` constant into
`recv_flags_nonblock` (`MSG_DONTWAIT` only) and `send_flags_nonblock`
(`MSG_DONTWAIT | MSG_NOSIGNAL`).
Closes #27389
## Test plan
- [x] Added regression test `test/regression/issue/27389.test.ts` that
exercises the socket recv path
- [x] Debug build passes (`bun bd test
test/regression/issue/27389.test.ts`)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Replace deprecated `bun create @tanstack/start@latest` with `bunx
@tanstack/cli create` in the TanStack Start guide
- The `@tanstack/create-start` package now prints a runtime deprecation
warning: *"@tanstack/create-start is deprecated. Use `tanstack create`
or `npx @tanstack/cli create` instead."*
Fixes#27374
## Test plan
- [x] Verified `bunx @tanstack/cli create --help` works correctly
- [x] Verified `bunx @tanstack/create-start@latest` shows the
deprecation warning
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Alistair Smith <alistair@anthropic.com>
## Summary
- Add `Bun.generateHeapSnapshot("v8", "arraybuffer")` which returns the
heap snapshot as an `ArrayBuffer` instead of a `string`
- Avoids potential integer overflow crashes in `WTF::String` when heap
snapshots approach max uint32 length
- Eliminates the overhead of creating a JavaScript string for large
snapshots
- The `ArrayBuffer` contains UTF-8 encoded JSON that can be written
directly to a file or decoded with `TextDecoder`
- Updates TypeScript types in `bun.d.ts` with the new overload
- Adds tests for the new `"arraybuffer"` encoding option
Depends on oven-sh/WebKit#158 for the
`BunV8HeapSnapshotBuilder::jsonBytes()` method.
## Test plan
- [x] `Bun.generateHeapSnapshot("v8", "arraybuffer")` returns a valid
`ArrayBuffer`
- [x] Decoded ArrayBuffer produces valid JSON parseable as a V8 heap
snapshot
- [x] Existing `Bun.generateHeapSnapshot("v8")` string output still
works
- [x] Tests fail with `USE_SYSTEM_BUN=1` (system bun doesn't support the
new argument)
- [x] Tests pass with the debug-local build
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
# JavaScriptCore Upstream Changes: 1b6f54d1c872..upstream/main
**Date range**: Feb 5 -- Feb 11, 2026
**Total JSC commits**: 38
---
## 1. New Features
### WebAssembly JSPI (JavaScript Promise Integration)
**Commit**: `53e97afd3421` -- [JSC] JSPI Implementation
**Files changed**: 59 files, +3879/-148 lines
Full implementation of the [WebAssembly JSPI
proposal](https://github.com/WebAssembly/js-promise-integration). Adds
two new APIs:
- `WebAssembly.promising(wasmFun)` -- wraps a wasm function so it
returns a Promise
- `new WebAssembly.Suspending(jsFun)` -- wraps a JS function so wasm can
suspend on its result
Controlled by `useJSPI` feature flag (disabled by default). This is a
large change that introduces:
- New classes: `EvacuatedStack`, `PinballCompletion`, `JSPIContext`,
`WebAssemblySuspending`, `WebAssemblyPromising`,
`JSWebAssemblySuspendError`
- New `JSPIContext` struct added to `VM` (field `topJSPIContext`)
- New `pinballCompletionStructure` in VM
- New `WebAssemblySuspendError` and `WebAssemblySuspending` lazy class
structures in JSGlobalObject
- New methods on VM: `addEvacuatedStackSlice`,
`removeEvacuatedStackSlice`, `gatherEvacuatedStackRoots`
- New iso subspace: `pinballCompletionSpace`
**Follow-up**: `f67441012b90` -- JSPI cages should only generate JIT
code when JITCage is enabled
### WebAssembly Memory64 in BBQ Tier
**Commit**: `7b98e4b17594` -- Add support for Memory64 in BBQ Tier
Extends Memory64 support from the interpreter to the BBQ JIT tier,
allowing wasm modules to address >4GB of memory.
### WebAssembly Multi-Memory (instantiation only)
**Commit**: `bdf26416947e` -- Support instantiating multiple wasm
memories
Adds support for instantiating multiple memories in wasm modules (but
not executing instructions that use memories other than index 0). Gated
behind `useWasmMultiMemory` flag.
### LOL (Lightweight Optimizing Language) JIT -- New Tier (In Progress)
A new JIT compilation tier "LOL" is being built incrementally across
multiple commits:
- `dd56e0f5b991` -- Add get_argument/argument_count bytecodes
- `fb43184a0a77` -- Add to_primitive-like bytecodes
- `f52fa4a30c76` -- Add switch/throw bytecodes
- `7fd04c82d291` -- Add support for this/prototype bytecodes
- `11127b8a61e0` -- Add support for op_ret
These commits add new files under `Source/JavaScriptCore/lol/`
(LOLJIT.cpp, LOLJIT.h, LOLRegisterAllocator.h). This appears to be a new
lightweight JIT tier between Baseline and DFG.
---
## 2. Performance Improvements
### Deep Rope String Slicing -- 168x faster
**Commit**: `93f2fd68619b` -- [JSC] Limit rope traversal depth in
`tryJSSubstringImpl`
Limits rope traversal depth to 16 in `tryJSSubstringImpl`. Deep rope
strings from repeated concatenation (`s += 'A'`) previously caused
O(n^2) behavior. When the depth limit is exceeded, it falls back to
`resolveRope` to flatten the string.
This directly addresses a performance issue reported against Bun where
it was significantly slower than Node.js.
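The shape of the fix can be sketched with a toy rope (all names here — `Rope`, `try_substring`, `flatten` — are illustrative stand-ins; the real JSC code operates on `JSRopeString`):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

// Hypothetical rope node: either a leaf holding bytes or a concat of two ropes.
typedef struct Rope {
    struct Rope *left, *right;  // non-NULL only for concat nodes
    const char *bytes;          // non-NULL only for leaves
    size_t len;
} Rope;

enum { MAX_ROPE_DEPTH = 16 };

// Flatten the whole rope into a contiguous buffer (the resolveRope fallback).
static void flatten(const Rope *r, char **out) {
    if (r->bytes) { memcpy(*out, r->bytes, r->len); *out += r->len; return; }
    flatten(r->left, out);
    flatten(r->right, out);
}

// Try to copy [start, start+n) by walking the tree; give up past MAX_ROPE_DEPTH
// so deep left-leaning ropes from repeated `s += 'A'` don't cost O(depth) per slice.
static int try_substring(const Rope *r, size_t start, size_t n, char *dst, int depth) {
    if (depth > MAX_ROPE_DEPTH) return 0;  // too deep: caller should flatten first
    if (r->bytes) { memcpy(dst, r->bytes + start, n); return 1; }
    size_t llen = r->left->len;
    if (start + n <= llen) return try_substring(r->left, start, n, dst, depth + 1);
    if (start >= llen) return try_substring(r->right, start - llen, n, dst, depth + 1);
    size_t ln = llen - start;
    return try_substring(r->left, start, ln, dst, depth + 1)
        && try_substring(r->right, 0, n - ln, dst + ln, depth + 1);
}
```

When `try_substring` bails out, flattening once turns every later slice into a plain `memcpy`, which is where the quadratic behavior disappears.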
### String#endsWith DFG/FTL Optimization -- up to 10.5x faster
**Commit**: `901865149859` -- [JSC] Optimize `String#endsWith` in
DFG/FTL
Adds a new DFG/FTL intrinsic `StringPrototypeEndsWithIntrinsic` for
`String.prototype.endsWith`. Constant folding case is 10.5x faster;
general case is 1.45x faster.
### RegExp Flag Getters in DFG/FTL and IC -- 1.6x faster
**Commit**: `1fe86a244d00` -- [JSC] Handle RegExp flag getters in
DFG/FTL and IC
Adds DFG/FTL and inline cache support for RegExp flag property getters
(`.global`, `.ignoreCase`, `.multiline`, `.dotAll`, `.sticky`,
`.unicode`, `.unicodeSets`, `.hasIndices`).
### String#@@iterator as NewInternalFieldObject in DFG/FTL
**Commit**: `44cba41d8eef` -- [JSC] Handle `String#@@iterator` as
`NewInternalFieldObject` in DFG/FTL
Optimizes string iterator creation in DFG/FTL by treating it as a
`NewInternalFieldObject`, enabling allocation sinking.
### ArithMod(Int52) in DFG/FTL
**Commit**: `49a21e7ff327` -- [JSC] Add ArithMod(Int52Use, Int52Use) in
DFG / FTL
**Follow-ups**: `f96be6066671` (constant folding fix), `c8f283248f3f`
(NaNOrInfinity fix)
Adds Int52 modulo support in DFG/FTL, avoiding expensive `fmod` double
operations when inputs are integer-like doubles.
### ArithDiv(Int52) in DFG/FTL -- REVERTED
**Commit**: `582cb0306b7c` -- [JSC] Add ArithDiv(Int52Use, Int52Use) in
DFG / FTL
**Revert**: `2e967edd1dc0` -- Reverted due to JetStream3 regressions
### Intl formatToParts Pre-built Structure -- up to 1.15x faster
**Commit**: `a5dd9753d23c` -- [JSC] Optimize Intl formatToParts methods
with pre-built Structure
Optimizes Intl's `formatToParts` methods by using pre-built Structures
for the returned part objects.
### JSBigInt Inline Storage
**Commit**: `dbc50284d4cf` -- [JSC] Inline storage into JSBigInt
**Related**: `304cf0713b9d` -- Remove JSBigInt::rightTrim and
JSBigInt::tryRightTrim
Major structural change to JSBigInt: digits are now stored inline
(trailing storage) instead of via a separate `CagedBarrierPtr`. This
eliminates the separate allocation and pointer indirection.
`tryRightTrim` and `rightTrim` removed; `createFrom` renamed to
`tryCreateFrom` with span-based API.
### WYHash Always Enabled for Strings
**Commit**: `14ba1421ca08` -- [JSC] Always use WYHash for strings
Previously, WYHash was gated by a length threshold (disabled for short
strings on weaker devices). Now it's enabled regardless of string length.
### JIT Worklist Thread Count: 3 -> 4
**Commit**: `453c578cadf6` -- [JSC] Bump JIT worklist thread number from
3 to 4
### Greedy Register Allocator Improvements
**Commit**: `b642ae17aade` -- [JSC] Proactively coalesce spill slots
during register allocation
**Commit**: `a78705db7c08` -- [JSC] GreedyRegAlloc: amortize cost of
clearing the visited set in the evict loop
---
## 3. Bug Fixes
### RegExp (YARR) Fixes
- `dea6808af40e` -- **Fix stale captures in FixedCount groups in
MatchOnly mode**: `.test()` was not clearing captures between FixedCount
iterations, causing stale values visible to backreferences.
- `437e137889ea` -- **Fix infinite loop in JIT for non-greedy
backreference to zero-width capture**: A non-greedy backreference like
`\1*?` could loop forever when the referenced capture was undefined or
empty.
- `d4f884d21c0e` -- **Fix backtracking in NestedAlternativeEnd**: Wrong
jump target was used when backtracking from NestedAlternativeEnd.
### ARM64 Cached Immediate Fix
**Commit**: `8396ad321ad0` -- [JSC] Fix edge case issue of cached imm in
ARM64
TrustedImm32(-1) was not zero-extended when cached, causing wrong reuse
when followed by TrustedImm64(-1).
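A minimal numeric sketch of why the cached value was wrong: a 32-bit `-1` zero-extends to `0xFFFFFFFF`, which is not the same register contents as a 64-bit `-1`, so a cache keyed only on the source value must also record the width (the `materialize_*` helpers are illustrative, not the actual MacroAssembler API):

```c
#include <assert.h>
#include <stdint.h>

// TrustedImm32(-1) materializes the 32-bit pattern 0xFFFFFFFF; on ARM64 a
// 32-bit mov zero-extends it into the full 64-bit register.
uint64_t materialize_imm32(int32_t v) { return (uint32_t)v; }

// TrustedImm64(-1) sets all 64 bits.
uint64_t materialize_imm64(int64_t v) { return (uint64_t)v; }
```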
### Wasm Fixes
- `cea8513d43ef` -- **Fix RefCast/RefTest folding in B3**: The condition
was inverted, causing wrong cast results.
- `629632da96e1` -- **Fix debugger races with idle and active VMs in
stop-the-world**
- `e5391ad90f47` -- **Stack check on IPInt->BBQ loop OSR entry**
- `fd15e0d70ab1` -- **Use FrameTracer in wasm table_init operations**
### Structure Heap Alignment Assert Fix
**Commit**: `47b52526fd42` -- [JSC] Fix startOfStructureHeap alignment
RELEASE_ASSERT
---
## 4. Bun-side Changes
- Updated `WEBKIT_VERSION` to `f03492d0636f`
- Updated `SerializedScriptValue.cpp`: replaced removed
`JSBigInt::tryRightTrim` with `JSBigInt::tryCreateFrom` span-based API
- Fixed `CloneSerializer`/`CloneDeserializer` inheritance (`private` ->
`public CloneBase`)
- Explicitly disabled `ENABLE_MEDIA_SOURCE`, `ENABLE_MEDIA_STREAM`,
`ENABLE_WEB_RTC` in `build.ts` and `SetupWebKit.cmake` for JSCOnly port
---
## 5. Summary by Component
| Component | Commits | Key Changes |
|-----------|---------|-------------|
| **WASM** | 9 | JSPI implementation, Memory64 BBQ, multi-memory,
RefCast fix, stack checks, debugger races |
| **DFG/FTL** | 6 | String#endsWith opt, RegExp flags opt,
String@@iterator opt, ArithMod(Int52) |
| **YARR (RegExp)** | 3 | Stale captures fix, infinite loop fix,
NestedAlternativeEnd backtracking fix |
| **B3/Air** | 3 | Spill slot coalescing, generational set for eviction,
RefCast/RefTest fix |
| **LOL JIT** | 5 | New JIT tier being built incrementally |
| **Runtime** | 6 | BigInt inline storage, WYHash always-on, Intl
formatToParts opt, ARM64 imm fix, rope depth limit |
---------
Co-authored-by: Sosuke Suzuki <sosuke@bun.com>
## Summary
- `GetFinalPathNameByHandleW` returns a value `>= buffer size` when the
path doesn't fit, but 4 out of 5 call sites only checked for `== 0` (the
error case). This could cause out-of-bounds reads/writes when Windows
paths exceed the buffer capacity — e.g. deeply nested `node_modules` or
`\\?\` extended paths.
- Also fixed `PackageInstall.zig:1264` passing `dest_buf.len` (a
`PathBuffer` = 4096 bytes) instead of `wbuf.len` (a `WPathBuffer` =
32767 u16 elements) as the buffer size parameter to the API.
| File | Line | Issue |
|------|------|-------|
| `src/windows.zig` | 3419 | `GetFinalPathNameByHandle` wrapper missing
`>= len` check |
| `src/install/PackageInstall.zig` | 538 | OOB write at
`state.buf[dest_path_length]` |
| `src/install/PackageInstall.zig` | 562 | OOB slice
`state.buf2[0..cache_path_length]` |
| `src/install/PackageInstall.zig` | 1261 | Wrong buffer size param +
missing bounds check |
| `src/bun.js/node/node_fs.zig` | 6578 | OOB slice in symlink path
resolution during `copyFile` |
| `src/install/isolated_install/Installer.zig` | 556 | `setLength` with
unchecked length |
The sole correct call site (`src/sys.zig:4062`) already had the proper
`>= len` check and was used as the reference pattern.
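The reference pattern can be sketched against a stand-in that mimics the API's return convention (the `fake_get_final_path` helper is hypothetical; real code calls `GetFinalPathNameByHandleW`, which returns the characters written on success and the required size, including the NUL, when the buffer is too small):

```c
#include <assert.h>
#include <string.h>
#include <wchar.h>

// Stand-in for GetFinalPathNameByHandleW's return convention: on success the
// return value is the number of characters written (excluding the NUL); if the
// buffer is too small it is the required size in characters (including the NUL).
static unsigned long fake_get_final_path(const wchar_t *path, wchar_t *buf,
                                         unsigned long buf_len) {
    unsigned long needed = (unsigned long)wcslen(path);
    if (needed + 1 > buf_len) return needed + 1;  // too small: required size
    memcpy(buf, path, (needed + 1) * sizeof(wchar_t));
    return needed;
}

// The correct call-site pattern: treat both 0 and >= buf_len as failure.
static int get_path_checked(const wchar_t *path, wchar_t *buf, unsigned long buf_len) {
    unsigned long ret = fake_get_final_path(path, buf, buf_len);
    if (ret == 0 || ret >= buf_len) return -1;  // error or truncated
    return (int)ret;
}
```

Checking only `ret == 0` leaves the truncation case looking like success, which is exactly the bug at the four fixed call sites.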
## Test plan
- [x] `bun run zig:check-windows` passes (x86_64 + aarch64, Debug +
ReleaseFast)
- [ ] Windows CI passes
- [ ] Test with deeply nested `node_modules` paths on Windows (>260
chars)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Adds a unified baseline verification script
(`scripts/verify-baseline.ts`) that combines basic CPU instruction
checks and JIT stress test fixtures into a single step.
**Changes:**
- New TypeScript script replaces separate `verify-baseline-cpu.sh` and
`verify-jit-stress-qemu.sh` CI steps
- Adds Windows x64 baseline verification using Intel SDE v9.58 with
Nehalem emulation
- Linux continues to use QEMU (Nehalem for x64, Cortex-A53 for aarch64)
- SDE violations are detected by checking output for `SDE-ERROR`
messages rather than exit codes, avoiding false positives from
application errors
- JIT stress fixtures now run on every build instead of only when WebKit
changes
**Platform support:**
| Platform | Emulator | CPU Model |
|----------|----------|-----------|
| Linux x64 baseline | QEMU | Nehalem (SSE4.2, no AVX) |
| Linux aarch64 | QEMU | Cortex-A53 (no LSE/SVE) |
| Windows x64 baseline | Intel SDE | Nehalem (no AVX) |
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary
Fixes #27063
On Windows, when `Bun.spawn` fails (e.g., ENOENT for a nonexistent
executable), pipes that were already initialized with `uv_pipe_init`
were being freed directly with `allocator.destroy()` without first
calling `uv_close()`. This left dangling pointers in libuv's
`handle_queue` linked list, corrupting it. Subsequent spawn calls would
crash with a segfault when inserting new handles into the corrupted
list.
Three sites were freeing pipe handles without `uv_close`:
- **`process.zig` `Stdio.deinit()`**: When spawn failed,
already-initialized pipes were freed without `uv_close()`. Now uses
`closePipeAndDestroy()` which checks `pipe.loop` to determine if the
pipe was registered with the event loop.
- **`process.zig` `spawnProcessWindows` IPC handling**: Unsupported IPC
pipes in stdin/stdout/stderr were freed directly. Now uses the same safe
close-then-destroy pattern.
- **`source.zig` `openPipe()`**: If `pipe.open(fd)` failed after
`pipe.init()` succeeded, the pipe was destroyed directly. Now calls
`uv_close()` with a callback that frees the memory.
Additionally, pipe allocations in `stdio.zig` are now zero-initialized
so that the `loop` field is reliably `null` before `uv_pipe_init`,
enabling the init detection in `deinit`.
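A toy model of the corruption and the close-then-destroy fix, using a simplified intrusive queue in place of libuv's `handle_queue` (the real `uv_close` is asynchronous and frees in its callback; this sketch collapses that into one step):

```c
#include <assert.h>
#include <stdlib.h>

// Minimal stand-in for libuv's handle bookkeeping: each initialized handle is
// linked into the loop's intrusive handle_queue; `loop` is NULL until init —
// which is why zero-initializing the allocation matters.
typedef struct Handle {
    struct Handle *prev, *next;
    struct Loop *loop;  // NULL => never initialized, not in the queue
} Handle;

typedef struct Loop { Handle head; } Loop;

static void handle_init(Loop *loop, Handle *h) {
    h->loop = loop;
    h->next = loop->head.next;
    h->prev = &loop->head;
    loop->head.next->prev = h;
    loop->head.next = h;
}

// The safe pattern from the fix: only unlink (what uv_close does) if the
// handle was actually registered, i.e. loop != NULL; only then free it.
// Freeing without the unlink leaves a dangling node in the queue.
static void close_and_destroy(Handle *h) {
    if (h->loop) {                 // initialized: remove from the queue first
        h->prev->next = h->next;
        h->next->prev = h->prev;
    }
    free(h);
}
```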
## Test plan
- [x] Added regression test `test/regression/issue/27063.test.ts` that
spawns nonexistent executables repeatedly and verifies a valid spawn
still works afterward
- [x] Verified existing spawn tests pass (`exit-code.test.ts`,
`spawnSync.test.ts` — timing-related pre-existing flakes only)
- [x] Debug build compiles successfully
- [ ] Windows CI should verify the fix prevents the segfault
---------
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fixes `bun install` producing incomplete `node_modules` on NFS, FUSE,
and some bind mount filesystems
- On these filesystems, `getdents64` returns `DT_UNKNOWN` for `d_type`
instead of `DT_DIR`/`DT_REG`
- The directory walker was silently skipping these entries, causing
missing files (e.g., 484 instead of 1070 for `@sinclair/typebox`)
- When an entry has unknown kind, we now fall back to `fstatat()` to
resolve the actual file type
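The fallback can be sketched in C, assuming a POSIX/Linux environment (the Zig implementation differs in detail but follows the same shape):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <dirent.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

typedef enum { KIND_FILE, KIND_DIR, KIND_OTHER } Kind;

// Classify a directory entry, falling back to fstatat() when the filesystem
// reports DT_UNKNOWN (as NFS, FUSE, and some bind mounts commonly do).
static Kind entry_kind(int dirfd, const struct dirent *e) {
    if (e->d_type == DT_REG) return KIND_FILE;
    if (e->d_type == DT_DIR) return KIND_DIR;
    if (e->d_type == DT_UNKNOWN) {
        struct stat st;
        if (fstatat(dirfd, e->d_name, &st, AT_SYMLINK_NOFOLLOW) != 0)
            return KIND_OTHER;
        if (S_ISREG(st.st_mode)) return KIND_FILE;
        if (S_ISDIR(st.st_mode)) return KIND_DIR;
    }
    return KIND_OTHER;
}
```

The pre-fix behavior corresponds to the `DT_UNKNOWN` branch being absent: unknown entries fall through to "other" and get skipped.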
## Test plan
- [x] Reproduced with Docker NFS environment: npm installs 1071 files,
bun installs only 484
- [x] Verified fix: bun-debug now installs 1070 files (matching npm
minus `.package-lock.json`)
- [x] Second install from cache also works correctly (1070 files)
- [x] `bun run zig:check-all` passes on all 16 platform targets
- [ ] CI passes
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary
Fixes a use-after-free in bindgen v1 generated C++ bindings that causes
`"switch on corrupt value"` panics in `String.deref` on Windows. This is
a top crash (500+ reports across v1.3.3–v1.3.9), predominantly affecting
standalone executables.
## Root Cause
`Bun::toString(WTF::String&)` copies the raw `StringImpl*` pointer
**without adding a reference**. For optional string arguments with
defaults and dictionary string fields, the generated code declares
`WTF::String` inside an `if` block, but the resulting `BunString`
outlives it:
```cpp
BunString argStr;
if (!arg.isUndefinedOrNull()) {
WTF::String wtfString_0 = WebCore::convert<IDLDOMString>(...);
argStr = Bun::toString(wtfString_0); // copies pointer, no ref
} // ← wtfString_0 destroyed here, drops ref → StringImpl may be freed
// argStr now holds a dangling pointer to freed memory
```
When the freed memory is reused, `String.deref()` reads garbage for the
tag field → `"switch on corrupt value"` panic.
### Why it was Windows-only / elevated recently
- The mimalloc v3 update (shipped in v1.3.7/v1.3.8) changed heap reuse
patterns on Windows, causing freed memory to be overwritten more
aggressively — turning a latent UAF into a frequent crash
- The mimalloc v3 revert in v1.3.9 reduced crash frequency back to
baseline but did not fix the underlying bug
- A [previous fix](https://github.com/oven-sh/bun/pull/26717) was
reverted due to unrelated CI failures
## Fix
Hoist all `WTF::String` temporaries to the same scope as the Zig
dispatch call, so they stay alive until the `BunString` values are
consumed:
1. **Function string arguments**: `WTF::String` is declared at the top
of the generated function, before any `if` blocks for optional arguments
2. **Dictionary string fields**: The `convert*` function accepts
`WTF::String&` references owned by the caller, so the string data
outlives the `convert*` function and remains valid through the dispatch
call
This approach is exception-safe — `WTF::String` destructors handle
cleanup automatically on all exit paths (normal return,
`RETURN_IF_EXCEPTION`, etc.) with no leaked refs.
### Difference from the previous fix
The [previous fix](https://github.com/oven-sh/bun/pull/26717) hoisted
`WTF::String` for function arguments but kept dictionary field temps
**inside** the `convert*` function. This left dictionary string fields
as use-after-return — `result->encoding` would be a dangling pointer
after `convert*` returned. This fix correctly passes `WTF::String&` refs
from the dispatch scope through to the `convert*` function.
### Affected call sites
Only 2 call sites have the vulnerable pattern (`DOMString` +
`.default(...)`):
- `Bun.stringWidth()` — `str: t.DOMString.default("")`
- `os.userInfo()` — `encoding: t.DOMString.default("")` in
`UserInfoOptions` dictionary
Note: bindgen v2 is not affected — it uses `releaseImpl().leakRef()`
which transfers ownership correctly.
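The ownership rule behind the fix can be modeled with a toy refcounted string: a non-owning view (playing the role of `BunString`) must not outlive the owning handle (playing `WTF::String`), so the owner is hoisted to the scope where the view is consumed. All names below are stand-ins, not the real WTF types:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

// Toy refcounted buffer standing in for WTF::StringImpl.
typedef struct { int refs; char *data; } Impl;

static Impl *impl_create(const char *s) {
    Impl *i = malloc(sizeof *i);
    i->refs = 1;
    i->data = malloc(strlen(s) + 1);
    strcpy(i->data, s);
    return i;
}
static void impl_deref(Impl *i) {
    if (--i->refs == 0) { free(i->data); free(i); }
}

// Owning handle (like WTF::String): holds a reference for its lifetime.
typedef struct { Impl *impl; } String;
// Non-owning view (like the BunString from Bun::toString): copies the
// pointer WITHOUT taking a reference — the bug's precondition.
typedef struct { Impl *impl; } View;

static View view_of(String *s) { View v = { s->impl }; return v; }
```

In the buggy codegen the `String` lived inside an inner `if` block, so its single reference was dropped before the `View` was read; hoisting it to the dispatch scope keeps the refcount nonzero across every use of the view.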
## Summary
- HTTPS/URL tarball dependencies were not having their integrity hash
stored in the lockfile, allowing a malicious server to change the
tarball contents without detection on subsequent installs
- Now computes a sha512 hash from the downloaded tarball bytes during
extraction and stores it in both the binary lockfile and text bun.lock
- The hash is verified on re-download, matching the behavior of npm
registry packages
- Old lockfiles without integrity hashes continue to work (backward
compatible)
## Changes
- `src/install/integrity.zig`: Added `Integrity.forBytes()` to compute
sha512 from raw bytes
- `src/install/install.zig`: Added `integrity` field to `ExtractData`
struct
- `src/install/PackageManagerTask.zig`: Compute hash from tarball bytes
for both remote and local tarball tasks
- `src/install/PackageManager/processDependencyList.zig`: Set
`package.meta.integrity` from computed hash
- `src/install/lockfile/bun.lock.zig`: Serialize/deserialize integrity
for `remote_tarball` and `local_tarball` types
## Test plan
- [x] Integrity hash is stored in text lockfile for tarball URL deps
- [x] Integrity hash is consistent/deterministic across reinstalls
- [x] Integrity mismatch (changed tarball content) causes install
failure
- [x] Old lockfiles without integrity still install successfully
(backward compat)
- [x] Fresh installs produce integrity hashes
- [x] All 12 existing tarball tests pass (no regressions)
- [x] Tests fail with `USE_SYSTEM_BUN=1` (confirms fix is effective)
Fixes GHSA-jfhr-4v9p-9rm4
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fix null pointer dereference in `Listener.getsockname()` when called
without an object argument (or with a non-object argument)
- `getsockname()` wrote properties directly into its first argument via
`.put()`, which calls `getObject()` in C++ — this returns null for
non-object values like `undefined`, causing a crash at
`BunString.cpp:942`
- Now validates the argument is an object first; if not, creates a new
empty object, writes properties into it, and returns it
## Crash reproduction
```js
const listener = Bun.listen({
hostname: "localhost",
port: 0,
socket: { data() {} },
});
listener.getsockname(); // Segfault - null pointer dereference
```
## Test plan
- [x] Added `test/js/bun/http/listener-getsockname.test.ts` with tests
for calling `getsockname()` with no argument, with an object argument,
and with a non-object argument
- [x] Verified test crashes with system bun and passes with patched
build
- [x] Verified original fuzzer reproduction no longer crashes
---------
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
- Override Zig's default `cryptoRandomSeed` in `std_options` to use
BoringSSL's `RAND_bytes` (via `bun.csprng`) instead of the `getrandom()`
syscall
- On Linux kernels < 3.17 (e.g. Synology NAS with kernel 3.10), the
`getrandom` syscall doesn't exist and returns `ENOSYS`, causing Zig's
stdlib to panic with `"getrandom() failed to provide entropy"`
- BoringSSL already handles this gracefully by falling back to
`/dev/urandom`
## Details
Bun already uses BoringSSL's `RAND_bytes` for all its own cryptographic
random needs (`bun.csprng`). However, Zig's standard library
`std.crypto.random` uses a separate code path that calls the `getrandom`
syscall directly, with no fallback for `ENOSYS`.
Zig's `std.Options` struct provides a `cryptoRandomSeed` override for
exactly this purpose. This PR sets it to `bun.csprng` in both
`src/main.zig` and `src/main_test.zig`.
## Test plan
- [x] `bun bd` compiles successfully
- [x] `crypto.getRandomValues()`, `crypto.randomUUID()`, and
`require("crypto").randomFillSync()` all work correctly
- Cannot write a meaningful automated regression test since reproducing
requires a Linux kernel < 3.17
Closes #27279
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Crash
Integer overflow panic in `getArgv` when `Bun.spawn`/`Bun.spawnSync`
receives an array with `.length` near u32 max (e.g. 4294967295).
## Reproduction
```js
const arr = ["echo", "hello"];
Object.defineProperty(arr, "length", { value: 4294967295 });
Bun.spawnSync(arr);
```
## Root Cause
`JSArrayIterator.len` is a `u32` derived from the JS array's `.length`
property. In `getArgv`, the expression `cmds_array.len + 2` (for argv0 +
null terminator) overflows `u32` arithmetic when `len` is close to `u32`
max. This causes a panic in debug builds and a segfault in release
builds. Additionally, the validation checks (`isEmptyOrUndefinedOrNull`
and `len == 0`) were placed after the overflowing `initCapacity` call,
so they couldn't prevent the crash.
## Fix
- Move validation checks before the `initCapacity` call
- Add a length check rejecting arrays with length > `u32 max - 2`
- Widen `cmds_array.len` to `usize` before adding 2 to prevent overflow
- Use `try argv.append()` instead of `appendAssumeCapacity` for safety
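The overflow and the widen-first fix, sketched numerically (the `argv_capacity` helper is illustrative, not the actual Zig code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

// Adding the argv0 + NULL-terminator slots in 32-bit arithmetic wraps around
// when the JS-supplied length is near UINT32_MAX; rejecting huge lengths and
// widening to size_t before the add avoids the wrap (on 64-bit targets).
static int argv_capacity(uint32_t js_len, size_t *out) {
    if (js_len > UINT32_MAX - 2) return -1;  // reject absurd lengths up front
    *out = (size_t)js_len + 2;               // widen before adding
    return 0;
}
```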
## Verification
- Reproduction no longer crashes (throws a clean "cmd array is too
large" error)
- Normal `Bun.spawn`/`Bun.spawnSync` usage unaffected
- Added regression test at
`test/js/bun/spawn/spawn-large-array-length.test.ts`
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Same issue as #27191 (FetchTasklet), but in `S3DownloadStreamWrapper`
(`src/s3/client.zig`).
When a streaming S3 download body is cancelled via `reader.cancel()`,
`S3DownloadStreamWrapper.readable_stream_ref` (a `ReadableStream.Strong`
GC root) was never released. The S3 download continued in the
background, and the Strong ref prevented GC of the ReadableStream —
leaking memory until the download eventually completed.
## Root Cause
`ByteStream.onCancel()` cleaned up its own state but **did not notify
the `S3DownloadStreamWrapper`**. The wrapper only called `deinit()`
(which releases the Strong ref) when `has_more == false` — i.e., when
the S3 download fully completed. If the user cancelled the stream
mid-download, the Strong ref was held until the entire file finished
downloading in the background.
This is the exact same pattern that was fixed for `FetchTasklet` in
#27191.
## Fix
- Register a `cancel_handler` on the `ByteStream.Source` that releases
`readable_stream_ref` when the stream is cancelled. The download
callback will see `readable_stream_ref.get()` return `null` and skip
data delivery until the download finishes and `deinit()` cleans up the
remaining resources.
- Add `clearStreamCancelHandler()` in `deinit()` to null the
`cancel_handler`/`cancel_ctx` on the `ByteStream.Source`, preventing
use-after-free when the wrapper is freed before `cancel()` is called
(e.g., download completes normally).
## Test
Added `test/js/bun/s3/s3-stream-cancel-leak.test.ts` — uses a raw TCP
server (`Bun.listen`) that mocks an S3 GET response: sends one HTTP
chunk then keeps the connection open. Client streams 30 times via
`s3.file().stream()`, reads one chunk, cancels, then asserts
`heapStats().objectTypeCounts.ReadableStream` does not accumulate.
Before the fix, all 30 ReadableStreams leaked; after the fix, 0 leak.
## Summary
- When `require()` loads an ESM module (`.mjs`) that throws during
evaluation, the module was removed from `requireMap` but left in the ESM
registry (`Loader.registry`) in a partially-initialized state
- A subsequent `import()` of the same module would find this corrupt
entry and throw `ReferenceError: Cannot access 'foo' before
initialization` instead of re-throwing the original evaluation error
- Fix by also deleting the module from `Loader.registry` in both
`overridableRequire` and `requireESMFromHijackedExtension` when ESM
evaluation fails, allowing `import()` to re-evaluate from scratch
Closes #27287
## Test plan
- [x] Added regression test in `test/regression/issue/27287.test.ts`
- [x] Verified test fails with system bun (`USE_SYSTEM_BUN=1`)
- [x] Verified test passes with `bun bd test`
- [x] Manual verification: `require()` correctly throws original error,
`import()` now re-throws the same error instead of `ReferenceError`
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Alistair Smith <alistair@anthropic.com>
## Summary
- `DeadSocket` only contains a `u8` field (alignment 1), so the linker
could place `DeadSocket.dead_socket` at a non-8-byte-aligned address
- When `markTaggedSocketAsDead` creates a tagged pointer embedding this
address and passes it through `bun.cast(**anyopaque, ...)`, the
`@alignCast` panics with "incorrect alignment" because the bottom bits
of the tagged value come from the unaligned address
- Fix: add `align(@alignOf(usize))` to the `dead_socket` variable
declaration
This only manifested on stable (non-canary) Windows ARM64 builds because
the binary layout differs when `ci_assert` is false, shifting the static
variable to a non-aligned address. Canary builds happened to place it at
an aligned address by coincidence.
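The invariant can be sketched in C: tagging only works when the sentinel's address has zero low bits, which an explicit alignment attribute guarantees regardless of where the linker places it (names and the tag value are illustrative):

```c
#include <assert.h>
#include <stdint.h>

// A 1-byte sentinel object: without an alignment attribute the linker is free
// to place it at any address, so its low bits are not guaranteed to be zero.
// The fix is the explicit pointer-sized alignment below.
static _Alignas(sizeof(void *)) unsigned char dead_socket;

enum { DEAD_TAG = 1 };  // hypothetical tag stored in the low pointer bits

static uintptr_t tag_dead(void) { return (uintptr_t)&dead_socket | DEAD_TAG; }

// Recover the original pointer by masking off the low (alignment) bits —
// only valid if the sentinel really is pointer-aligned.
static void *untag(uintptr_t p) {
    return (void *)(p & ~(uintptr_t)(sizeof(void *) - 1));
}
```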
## Test plan
- [x] Verified `fetch('https://example.com')` no longer crashes on
Windows ARM64 stable build (ENABLE_CANARY=OFF)
- [x] Verified 5 sequential HTTPS fetches complete successfully
- [x] Verified the fix is a single-line change with no behavioral side
effects
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Bun's builtin `echo` only supported the `-n` flag. The `-e` and `-E`
flags were treated as literal text, causing `echo -e password` to output
`-e password` instead of `password`. This broke common patterns like
`echo -e $password | sudo -S ...`.
- Added full `-e` (enable backslash escapes) and `-E` (disable backslash
escapes) flag support, matching bash behavior including combined flags
like `-ne`, `-en`, `-eE`, `-Ee`.
- Supported escape sequences: `\\`, `\a`, `\b`, `\c`, `\e`/`\E`, `\f`,
`\n`, `\r`, `\t`, `\v`, `\0nnn` (octal), `\xHH` (hex).
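As a rough sketch of the escape decoding (illustrative only — the real implementation is in Zig and also handles `\c` truncation, combined flags, and more edge cases around digit-less `\x`):

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>
#include <string.h>

// Decode a subset of bash `echo -e` escapes into out; returns bytes written.
static size_t decode_escapes(const char *s, char *out) {
    size_t n = 0;
    while (*s) {
        if (*s != '\\') { out[n++] = *s++; continue; }
        s++;  // skip the backslash
        switch (*s) {
        case 'n': out[n++] = '\n'; s++; break;
        case 't': out[n++] = '\t'; s++; break;
        case 'r': out[n++] = '\r'; s++; break;
        case 'a': out[n++] = '\a'; s++; break;
        case 'b': out[n++] = '\b'; s++; break;
        case 'f': out[n++] = '\f'; s++; break;
        case 'v': out[n++] = '\v'; s++; break;
        case 'e': case 'E': out[n++] = '\033'; s++; break;
        case '\\': out[n++] = '\\'; s++; break;
        case 'x': {  // \xHH: up to two hex digits (digit-less case simplified)
            s++;
            int v = 0, i = 0;
            while (i < 2 && isxdigit((unsigned char)*s)) {
                v = v * 16 + (isdigit((unsigned char)*s)
                                  ? *s - '0'
                                  : tolower((unsigned char)*s) - 'a' + 10);
                s++; i++;
            }
            out[n++] = (char)v;
            break;
        }
        case '0': {  // \0nnn: up to three octal digits
            s++;
            int v = 0, i = 0;
            while (i < 3 && *s >= '0' && *s <= '7') { v = v * 8 + (*s - '0'); s++; i++; }
            out[n++] = (char)v;
            break;
        }
        default: out[n++] = '\\'; break;  // unknown escape: keep the backslash
        }
    }
    out[n] = '\0';
    return n;
}
```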
Closes #17405
## Test plan
- [x] Added 22 tests in `test/regression/issue/17405.test.ts` covering
all escape sequences, flag combinations, and the original issue scenario
- [x] Verified tests fail with system bun (19/22 fail) and pass with
debug build (22/22 pass)
- [x] Verified existing shell tests (`bunshell.test.ts`) still pass —
all 27 echo-related tests pass, no regressions
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- `FileSystemRouter.Route.parse()` was caching file descriptors in the
global entry cache (`entry.cache.fd`). When `Bun.build()` later closed
these fds during `ParseTask`, the cache still referenced them.
Subsequent `Bun.build()` calls would find these stale fds, pass them to
`readFileWithAllocator`, and `seekTo(0)` would fail with EBADF (errno
9).
- The fix ensures `Route.parse` always closes the file it opens for
`getFdPath` instead of caching it in the shared entry. The fd was only
used to resolve the absolute path via `getFdPath`, so caching was
unnecessary and harmful.
Closes #18242
## Test plan
- [x] Added regression test `test/regression/issue/18242.test.ts` that
creates a `FileSystemRouter` and runs `Bun.build()` three times
sequentially
- [x] Test passes with `bun bd test test/regression/issue/18242.test.ts`
- [x] Test fails with `USE_SYSTEM_BUN=1 bun test
test/regression/issue/18242.test.ts` (system bun v1.3.9)
- [x] Verified 5 sequential builds work correctly after the fix
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fixes `db.close(true)` throwing "database is locked" after using
`db.transaction()`
- The `getController` function creates prepared statements via
`db.prepare()` which bypasses the query cache, so they were never
finalized during `close()`
- `close()` now explicitly finalizes any cached transaction controller
statements before calling `sqlite3_close()`
Fixes #14709
## Test plan
- [x] New regression tests in `test/regression/issue/14709.test.ts`
covering:
- Basic `close(true)` after `transaction()`
- `close(true)` after transaction with actual work
- `using` declaration (calls `close(true)` via `Symbol.dispose`)
- Multiple transaction types (deferred, immediate, exclusive)
- Nested transactions
- [x] All new tests fail with system bun (`USE_SYSTEM_BUN=1`) and pass
with debug build
- [x] Existing SQLite test suite (`test/js/bun/sqlite/sqlite.test.js`)
passes with no regressions
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fixes the CSS bundler incorrectly removing a `:root` selector when it
appears before an `@property` at-rule and another `:root` exists after
it
- The deduplication logic in `CssRuleList.minify()` was merging style
rules across non-style rule boundaries (like `@property`), which changes
CSS semantics
- Clears the `style_rules` deduplication map when a non-style rule is
appended, preventing merges across these boundaries
## Test plan
- [x] Added regression test in `test/regression/issue/27117.test.ts`
- [x] Verified test fails with system bun (`USE_SYSTEM_BUN=1`) —
reproduces the bug
- [x] Verified test passes with debug build (`bun bd test`)
- [x] Verified adjacent `:root` rules (without intervening at-rules) are
still correctly merged
- [x] All existing CSS bundler tests pass
(`test/bundler/esbuild/css.test.ts` — 53 tests)
- [x] All CSS modules tests pass (`test/bundler/css/css-modules.test.ts`
— 3 tests)
Closes #27117
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fixed `srcBytesAtCursor()` and `cursorPos()` in the shell lexer's
unicode path (`ShellCharIter(.wtf8)`) to use `self.src.cursor.i` instead
of `self.src.iter.i`, which was permanently stuck at 0
- Fixed `bumpCursorAscii()` to properly decode the codepoint at the new
cursor position instead of storing the last digit character, which
caused the wrong character to be returned on the next read
## Root Cause
When the shell template literal source contained multi-byte UTF-8
characters (e.g., `Í`, `€`), the `LexerUnicode` path was used. In this
path, `srcBytesAtCursor()` and `cursorPos()` referenced
`self.src.iter.i` — the `CodepointIterator`'s internal field that is
never modified (the `next()` method takes `*const Iterator`). This
meant:
1. `srcBytesAtCursor()` always returned bytes from position 0 (the start
of the source)
2. `looksLikeJSStringRef()` checked for `__bunstr_` at position 0
instead of the current cursor position, failing to match
3. The `\x08__bunstr_N` reference was passed through as literal text
into the shell output
This only occurred when **both** conditions were met:
- An interpolated value contained a space (triggering
`needsEscapeBunstr` → stored as `__bunstr_N` ref)
- A subsequent value contained multi-byte UTF-8 (triggering
`LexerUnicode` instead of `LexerAscii`)
Closes #17244
## Test plan
- [x] Added regression tests in `test/regression/issue/17244.test.ts`
- [x] Verified tests fail with `USE_SYSTEM_BUN=1` (system bun 1.3.9)
- [x] Verified tests pass with `bun bd test`
- [x] Ran existing shell tests (`bunshell.test.ts`,
`bunshell-default.test.ts`, `bunshell-instance.test.ts`) — no
regressions
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- When bundling CSS with `@layer` declarations (e.g. `@layer one;`)
followed by `@import` rules with `layer()`, the bundler left the bare
`@layer` statements and `@import` lines in the output even though their
content was already inlined into `@layer` blocks
- The fix adds `.layer_statement` to the leading-rule filter in
`prepareCssAstsForChunk`, which already stripped `@import` and
`.ignored` rules but missed `@layer` statement rules
Closes #20546
## Test plan
- [x] New regression test in `test/regression/issue/20546.test.ts`
covers both separate `@layer` statements and comma syntax
- [x] Test fails with system bun (`USE_SYSTEM_BUN=1`) confirming the bug
- [x] Test passes with debug build (`bun bd test`)
- [x] All 53 existing CSS bundler tests pass
(`test/bundler/esbuild/css.test.ts`)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fixes `bun build` producing syntactically invalid JavaScript
(`Promise.resolve().then(() => )`) for unused dynamic imports like `void
import("./dep.ts")` or bare `import("./dep.ts")` expression statements
- `exports_ref` was cleared for unused results, but the `.then(() =>
...)` wrapper was still emitted, leaving the arrow function body empty.
The fix skips the `.then()` wrapper entirely when there's nothing to
execute inside the callback, producing just `Promise.resolve()`
- The bug only affected cases where the import result was unused —
`const x = import(...)`, `await import(...)`, and `.then()` chains were
already correct
Closes #24709
## Test plan
- [x] Added regression test in `test/regression/issue/24709.test.ts`
that validates both `void import()` and bare `import()` statement cases
- [x] Verified test fails with system bun (reproduces the bug) and
passes with debug build (fix works)
- [x] Verified used dynamic imports (`const m = await import(...)`)
still produce correct `.then(() => exports)` output
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Speculative fix for
[BUN-Q81](https://bun-p9.sentry.io/issues/BUN-Q81)
BUN-Q81 is a long-standing `SlotVisitor::drain` segfault during GC
marking (150 occurrences since July 2025, across v1.1.10 through
v1.3.10). A full audit of the codebase for GC safety issues found three
bugs:
### 1. `JSCommonJSModule::m_overriddenCompile` not visited in
`visitChildren`
`m_overriddenCompile` is a `WriteBarrier<Unknown>` that stores the
overridden `module._compile` function (used by `ts-node`, `pirates`,
`@swc-node/register`, etc.). It was the only WriteBarrier field in the
class not visited by `visitChildrenImpl`, making it invisible to the GC.
The pointed-to function could be prematurely collected, and subsequent
GC marking would follow the dangling WriteBarrier pointer into freed
memory.
**This is the strongest candidate for BUN-Q81.**
### 2. `JSSQLStatement::userPrototype` — wrong owner in
`WriteBarrier::set()`
```cpp
// Before (wrong):
castedThis->userPrototype.set(vm, classObject, prototype.getObject());
// After (correct):
castedThis->userPrototype.set(vm, castedThis, prototype.getObject());
```
The owner parameter must be the object containing the WriteBarrier so
the GC's remembered set is updated correctly. All other `.set()` calls
in the same file correctly use `castedThis`.
### 3. `NodeVMSpecialSandbox` — missing `visitChildren` entirely
`NodeVMSpecialSandbox` has a `WriteBarrier<NodeVMGlobalObject>
m_parentGlobal` member but had no `visitChildren` implementation. Added
the standard boilerplate.
## Summary
When a streaming HTTP response body is cancelled via `reader.cancel()`
or `body.cancel()`, `FetchTasklet.readable_stream_ref` (a
`ReadableStream.Strong` GC root) was never released. This caused
ReadableStream objects, associated Promises, and Uint8Array buffers to
be retained indefinitely — leaking ~260KB per cancelled streaming
request.
## Root Cause
`ByteStream.onCancel()` cleaned up its own state (`done = true`, buffer
freed, pending promise resolved) but **did not notify the
FetchTasklet**. The Strong ref was only released when:
- `has_more` became `false` (HTTP response fully received) — but the
server may keep the connection open
- `Bun__FetchResponse_finalize` — but this checks
`readable_stream_ref.held.has()` and **skips cleanup when the Strong ref
is set** (line 958)
This created a circular dependency: the Strong ref prevented GC, and the
finalizer skipped cleanup because the Strong ref existed.
## Fix
Add a `cancel_handler` callback to `NewSource` (`ReadableStream.zig`)
that propagates cancel events to the data producer. `FetchTasklet`
registers this callback via `Body.PendingValue.onStreamCancelled`. When
the stream is cancelled, the handler calls
`ignoreRemainingResponseBody()` to release the Strong ref, stop
processing further HTTP data, and unref the event loop.
To prevent use-after-free when `FetchTasklet` is freed before `cancel()`
is called (e.g., HTTP response completes normally, then user cancels the
orphaned stream), `clearStreamCancelHandler()` nulls the
`cancel_handler` on the `ByteStream.Source` at all 3 sites where
`readable_stream_ref` is released.
## Test
Added `test/js/web/fetch/fetch-stream-cancel-leak.test.ts` — uses a raw
TCP server (`Bun.listen`) that sends one HTTP chunk then keeps the
connection open. Client fetches 30 times, reads one chunk, cancels, then
asserts `heapStats().objectTypeCounts.ReadableStream` does not
accumulate. Before the fix, all 30 ReadableStreams leaked; after the
fix, 0 leak.
## Summary
- Pre-convert strings to JSValues using `MarkedArgumentBuffer` before
entering `ObjectInitializationScope` in `JSC__JSObject__putRecord` and
`JSC__JSValue__putRecord`, since `jsString()` allocates GC cells which
is not allowed inside the scope
- Remove unused `ObjectInitializationScope` declaration in
`JSSQLStatement.cpp`'s `initializeColumnNames`
## Test plan
- [ ] Verify `bun bd test` passes for existing tests that exercise
`putRecord` paths (e.g., HTTP header handling, SQLite column names)
- [ ] Run with `BUN_JSC_validateExceptionChecks=1` to confirm no
exception scope violations
## Changelog
<!-- CHANGELOG:START -->
<!-- CHANGELOG:END -->
🤖 Generated with [Claude Code](https://claude.com/claude-code) (0%
9-shotted by claude-opus-4-6)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Removes an unnecessary 64KB `path_buffer_pool` allocation in
`PathLike.sliceZWithForceCopy` on Windows for paths that already have a
drive letter
- For drive-letter paths (e.g. `C:\foo\bar`),
`resolveCWDWithExternalBufZ` just does a memcpy, so the intermediate
buffer is unnecessary — we can pass the input slice directly to
`normalizeBuf`
- Eliminates an OOM crash path where `ObjectPool.get()` would panic via
`catch unreachable` when the allocator fails
## Test plan
- [ ] Verify Windows CI passes (this code path is Windows-only)
- [ ] Verify node:fs operations with absolute Windows paths still work
correctly
- [ ] Monitor BUN-Z4V crash reports after deployment to confirm fix
## Context
Speculative fix for BUN-Z4V (124 occurrences on Windows) showing `Panic:
attempt to unwrap error: OutOfMemory` in `sliceZWithForceCopy` →
`path_buffer_pool.get()` → `allocBytesWithAlignment`. We have not been
able to reproduce the crash locally, but the code analysis shows the
allocation is unnecessary for the drive-letter path case.
## Changelog
<!-- CHANGELOG:START -->
Fixed a crash on Windows (`OutOfMemory` panic) in `node:fs` path
handling when the system is under memory pressure.
<!-- CHANGELOG:END -->
🤖 Generated with [Claude Code](https://claude.com/claude-code) (0%
8-shotted by claude-opus-4-6)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
- Fix standalone HTML mode (`--compile --target=browser`) placing
bundled JS as `<script type="module">` before `</body>`, which broke
execution order with existing inline body scripts
- Move bundled JS into `<head>` as a classic `<script>` (not
`type="module"`) so it executes synchronously before inline body
scripts, preserving the original script execution order
- Remove the now-unnecessary `addBodyTags()` function and associated
body script injection paths
Fixes #27113
## Test plan
- [x] Added regression test `test/regression/issue/27113.test.ts` that
verifies head scripts appear before `</head>` and don't use
`type="module"`
- [x] Updated existing standalone HTML tests in
`test/bundler/standalone.test.ts` to reflect the change from `<script
type="module">` to classic `<script>`
- [x] All 18 standalone tests pass (`bun bd test
test/bundler/standalone.test.ts`)
- [x] Regression test passes (`bun bd test
test/regression/issue/27113.test.ts`)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Sets up CI for Windows ARM64.
### How did you verify your code works?
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Fix infinite loop in `Bun.stripANSI()` when input contains control
characters in the `0x10-0x1F` range that are not ANSI escape introducers
(e.g. `0x16` SYN, `0x19` EM)
- The SIMD fast-path in `findEscapeCharacter` matched the broad range
`0x10-0x1F` / `0x90-0x9F`, but `consumeANSI` only handles a subset of
those characters. When `consumeANSI` returned the same pointer for an
unrecognized byte, the main loop in `stripANSI` never advanced, causing
a hang in release builds and an assertion failure in debug builds.
- Fix verifies SIMD candidates through `isEscapeCharacter()` before
returning, matching the behavior the scalar fallback path already had
## Test plan
- [x] Added regression test in `test/regression/issue/27014.test.ts`
with 4 test cases
- [x] Verified test hangs with system bun (v1.3.9) confirming the bug
- [x] All 4 new tests pass with debug build
- [x] All 265 existing `stripANSI.test.ts` tests pass with debug build
Closes #27014
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Adds self-contained HTML output mode: `--compile --target=browser`
(CLI) or `compile: true, target: "browser"` (`Bun.build()` API)
- Produces HTML files with all JS, CSS, and assets inlined directly:
`<script src="...">` → inline `<script>`, `<link rel="stylesheet">` →
inline `<style>`, asset references → `data:` URIs
- All entrypoints must be `.html` files when using `--compile
--target=browser`
- Validates: errors if any entrypoints aren't HTML, or if `--splitting`
is used
- Useful for distributing `.html` files that work via `file://` URLs
without needing a web server or worrying about CORS restrictions
## Test plan
- [x] Added `test/bundler/standalone.test.ts` covering:
- Basic JS inlining into HTML
- CSS inlining into HTML
- Combined JS + CSS inlining
- Asset inlining as data URIs
- CSS `url()` references inlined as data URIs
- Validation: non-HTML entrypoints rejected
- Validation: mixed HTML/non-HTML entrypoints rejected
- Validation: splitting rejected
- `Bun.build()` API with `compile: true, target: "browser"`
- CLI `--compile --target=browser`
- Minification works with compile+browser
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
When Error.captureStackTrace(e, fn) is called with a function that isn't
in the call stack, all frames are filtered out and e.stack should return
just the error name and message (e.g. "Error: test"), matching Node.js
behavior. Previously Bun returned undefined because:
1. The empty frame vector replaced the original stack frames via
setStackFrames(), but the lazy stack accessor was only installed when
hasMaterializedErrorInfo() was true (i.e. stack was previously
accessed). When it wasn't, JSC's internal materialization saw the
empty/null frames and produced no stack property at all.
2. The custom lazy getter returned undefined when stackTrace was
nullptr, instead of computing the error name+message string with zero
frames.
Fix: always force materialization before replacing frames, always
install the custom lazy accessor, and handle nullptr stackTrace in the
getter by computing the error string with an empty frame list.
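The expected behavior can be reproduced directly (it holds on Node/V8 as well): when the filter function is not in the call stack, every frame is dropped and `stack` is just the error name and message.

```typescript
// Minimal reproduction of the Node.js behavior this fix matches: a filter
// function that is never called is not in the call stack, so all frames
// are filtered out and only the header string remains.
function neverCalled(): void {}

const e = new Error("test");
Error.captureStackTrace(e, neverCalled);
console.log(e.stack); // "Error: test" — no frames, not undefined
```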
### How did you verify your code works?
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Speculative fix for
[BUN-1K54](https://bun-p9.sentry.io/issues/7260165386/) — a segfault in
`JSC__JSValue__fromEntries` with 238 occurrences on Windows x86_64 (Bun
1.3.9).
## Problem
`JSC__JSValue__fromEntries` wraps its property insertion loop inside a
`JSC::ObjectInitializationScope`. This scope is designed for fast,
allocation-free object initialization using `putDirectOffset`. However,
the code uses `putDirect` (which can trigger structure transitions)
along with `toJSStringGC` and `toIdentifier` (which allocate on the GC
heap).
In **debug builds**, `ObjectInitializationScope` includes `AssertNoGC`
and `DisallowVMEntry` guards that would catch this misuse immediately.
In **release builds**, the scope is essentially a no-op (only emits a
`mutatorFence` on destruction), so GC can silently trigger during the
loop. When this happens, the GC may encounter partially-initialized
object slots containing garbage values, leading to a segfault.
## Fix
- Remove the `ObjectInitializationScope` block, since `putDirect` with
allocating helpers is incompatible with its contract.
- Add `RETURN_IF_EXCEPTION` checks inside each loop iteration to
properly propagate exceptions (e.g., OOM during string allocation).
## Test
Added a regression test that creates a `FileSystemRouter` with 128
routes and accesses `router.routes` under GC pressure. Verified via
temporary `fprintf` logging that the test exercises the modified
`JSC__JSValue__fromEntries` code path with `initialCapacity=128,
clone=true`.
Note: The original crash is a GC timing issue that cannot be
deterministically reproduced in a test. This test validates correctness
of the code path rather than reproducing the specific crash.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
Fixes #12117, #24118, #25948
When a TCP socket is upgraded to TLS via `tls.connect({ socket })`,
`upgradeTLS()` creates **two** `TLSSocket` structs — a TLS wrapper and a
raw TCP wrapper. Both are `markActive()`'d and `ref()`'d. On close, uws
fires `onClose` through the **TLS context only**, so the TLS wrapper is
properly cleaned up, but the raw TCP wrapper's `onClose` never fires.
Its `has_pending_activity` stays `true` forever and its `ref_count` is
never decremented, **leaking one raw `TLSSocket` per upgrade cycle**.
This affects any code using the `tls.connect({ socket })` "starttls"
pattern:
- **MongoDB Node.js driver** — SDAM heartbeat connections cycle TLS
upgrades every ~10s, causing unbounded memory growth in production
- **mysql2** TLS upgrade path
- Any custom starttls implementation
### The fix
Adds a `defer` block in `NewWrappedHandler(true).onClose` that cleans up
the raw TCP socket when the TLS socket closes:
```zig
defer {
    if (!this.tcp.socket.isDetached()) {
        this.tcp.socket.detach();
        this.tcp.has_pending_activity.store(false, .release);
        this.tcp.deref();
    }
}
```
- **`isDetached()` guard** — skips cleanup if the raw socket was already
closed through another code path (e.g., JS-side `handle.close()`)
- **`socket.detach()`** — marks `InternalSocket` as `.detached` so
`isClosed()` returns `true` safely (the underlying `us_socket_t` is
freed when uws closes the TLS context)
- **`has_pending_activity.store(false)`** — allows JSC GC to collect the
raw socket's JS wrapper
- **`deref()`** — balances the `ref()` from `upgradeTLS`; the remaining
ref is the implicit one from JSC (`ref_count.init() == 1`). When GC
later calls `finalize()` → `deref()`, ref_count hits 0 and `deinit()`
runs the full cleanup chain (markInactive, handlers, poll_ref,
socket_context)
`markInactive()` is intentionally **not** called in the defer — it must
run inside `deinit()` to avoid double-freeing the handlers struct.
### Why Node.js doesn't have this bug
Node.js implements TLS upgrades purely in JavaScript/C++ with OpenSSL,
where the TLS wrapper takes ownership of the underlying socket. There is
no separate "raw socket wrapper" that needs independent cleanup.
## Test Results
### Regression test
```
$ bun test test/js/node/tls/node-tls-upgrade-leak.test.ts
1 pass, 0 fail
```
Creates 20 TCP→TLS upgrade cycles, closes all connections, runs GC,
asserts `TLSSocket` count stays below 10.
### Existing TLS test suite (all passing)
```
node-tls-upgrade.test.ts 1 pass
node-tls-connect.test.ts 24 pass, 1 skip
node-tls-server.test.ts 21 pass
node-tls-cert.test.ts 25 pass, 3 todo
renegotiation.test.ts 6 pass
```
### MongoDB TLS scenario (patched Bun, 4 minutes)
```
Baseline: RSS=282.4 MB | Heap Used=26.4 MB
Check #4: RSS=166.7 MB | Heap Used=24.2 MB — No TLSSocket growth. RSS DECREASED.
```
## Test plan
- [x] New regression test passes (`node-tls-upgrade-leak.test.ts`)
- [x] All existing TLS tests pass (upgrade, connect, server, cert,
renegotiation)
- [x] MongoDB TLS scenario shows zero `TLSSocket` accumulation
- [x] Node.js control confirms leak is Bun-specific
- [ ] CI passes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
### What does this PR do?
### How did you verify your code works?
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix `<link rel="manifest" href="./manifest.json" />` (and similar
non-JS/CSS URL assets) resulting in 404s when using `Bun.build` with
HTML entrypoints
- The HTML scanner correctly identifies these as `ImportKind.url`
imports, but the bundler was assigning the extension-based loader (e.g.
`.json`) which parses the file instead of copying it as a static asset
- Force the `.file` loader for `ImportKind.url` imports when the
resolved loader wouldn't `shouldCopyForBundling()` and isn't JS/CSS/HTML
(which have their own handling)
## Test plan
- [x] Added `html/manifest-json` test: verifies manifest.json is copied
as hashed asset and HTML href is rewritten
- [x] Added `html/xml-asset` test: verifies `.webmanifest` files are
also handled correctly
- [x] All 20 HTML bundler tests pass (`bun bd test
test/bundler/bundler_html.test.ts`)
- [x] New tests fail on system bun (`USE_SYSTEM_BUN=1`) confirming they
validate the fix
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix `OutgoingMessage.setHeaders()` incorrectly throwing
`ERR_HTTP_HEADERS_SENT` on brand new `ClientRequest` instances
- The guard condition `this[headerStateSymbol] !==
NodeHTTPHeaderState.none` failed when `headerStateSymbol` was
`undefined` (since `ClientRequest` doesn't call the `OutgoingMessage`
constructor), and was also stricter than Node.js which only checks
`this._header`
- Align the check with the working `setHeader()` (singular) method: only
throw when `_header` is set or `headerStateSymbol` equals `sent`
Closes #27049
## Test plan
- [x] New regression test `test/regression/issue/27049.test.ts` covers:
- `ClientRequest.setHeaders()` with `Headers` object
- `ClientRequest.setHeaders()` with `Map`
- `ServerResponse.setHeaders()` before headers are sent
- [x] Test fails with system bun (`USE_SYSTEM_BUN=1`)
- [x] Test passes with debug build (`bun bd test`)
- [x] Existing header-related tests in `node-http.test.ts` still pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Compute SHA-512 hash of GitHub tarball bytes during extraction and
store in `bun.lock`
- Verify the hash on subsequent installs when re-downloading, rejecting
tampered tarballs
- Automatically upgrade old lockfiles without integrity by computing and
persisting the hash
- Maintain backward compatibility with old lockfile format (no integrity
field)
Fixes GHSA-pfwx-36v6-832x
## Lockfile format change
```
Before: ["pkg@github:user/repo#ref", {}, "resolved-commit"]
After: ["pkg@github:user/repo#ref", {}, "resolved-commit", "sha512-..."]
```
The integrity field is optional for backward compatibility. Old
lockfiles are automatically upgraded when the tarball is re-downloaded.
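A sketch of the compute/verify round-trip, assuming the SRI-style `sha512-<base64>` format shown in the lockfile example (helper names are illustrative, not Bun's internals):

```typescript
import { createHash } from "node:crypto";

// Compute an SRI-style integrity string over the raw tarball bytes.
function computeIntegrity(tarball: Uint8Array): string {
  return "sha512-" + createHash("sha512").update(tarball).digest("base64");
}

// On re-download, the stored integrity must match the freshly computed one;
// a mismatch means the tarball was tampered with and is rejected.
function verifyIntegrity(tarball: Uint8Array, expected: string): boolean {
  return computeIntegrity(tarball) === expected;
}
```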
## Test plan
- [x] Fresh install stores SHA-512 integrity hash in lockfile
- [x] Re-install with matching hash succeeds
- [x] Re-install with mismatched hash rejects the tarball
- [x] Old lockfile without integrity is auto-upgraded with hash on
re-download
- [x] Cache hits still work without re-downloading
- [x] Existing GitHub dependency tests pass (10/10)
- [x] Existing git resolution snapshot test passes
- [x] Yarn migration snapshot tests pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fixes the esbuild migration guide (`docs/bundler/esbuild.mdx`) which
incorrectly stated that `onStart`, `onEnd`, `onDispose`, and `resolve`
were all unimplemented. `onStart` and `onEnd` **are** implemented — only
`onDispose` and `resolve` remain unimplemented.
- Adds missing `onEnd()` documentation section to both
`docs/bundler/plugins.mdx` and `docs/runtime/plugins.mdx`, including
type signature, description, and usage examples.
- Adds `onEnd` to the type reference overview and lifecycle hooks list
in both plugin docs.
Fixes #27083
## Test plan
- Documentation-only change — no code changes.
- Verified the `onEnd` implementation exists in
`src/js/builtins/BundlerPlugin.ts` and matches the documented API.
- Verified `onStart` implementation exists and is fully functional.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- When `http.ClientRequest.write()` was called more than once (streaming
data in chunks), Bun was stripping the explicitly-set `Content-Length`
header and switching to `Transfer-Encoding: chunked`. Node.js preserves
`Content-Length` in all cases when it's explicitly set by the user.
- This caused real-world failures (e.g. Vercel CLI file uploads) where
large binary files streamed via multiple `write()` calls had their
Content-Length stripped, causing server-side "invalid file size" errors.
- The fix preserves the user's explicit `Content-Length` for streaming
request bodies and skips chunked transfer encoding framing when
`Content-Length` is set.
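The framing decision can be sketched as follows (an assumed simplification for illustration, not Bun's actual `writeRequestData`): with a user-supplied `Content-Length` the chunk bytes go out as-is, otherwise each write is wrapped in chunked transfer-encoding framing.

```typescript
// Illustrative only: when Content-Length is explicit, the body length is
// already declared in the headers, so no per-chunk framing is emitted.
function frameChunk(chunk: Buffer, hasExplicitContentLength: boolean): Buffer {
  if (hasExplicitContentLength) {
    return chunk; // raw bytes; server uses Content-Length to find the end
  }
  // Chunked framing: hex size, CRLF, payload, CRLF.
  const size = Buffer.from(chunk.length.toString(16) + "\r\n");
  return Buffer.concat([size, chunk, Buffer.from("\r\n")]);
}

console.log(frameChunk(Buffer.from("abc"), true).toString());  // "abc"
console.log(frameChunk(Buffer.from("abc"), false).toString()); // "3\r\nabc\r\n"
```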
Closes #27061
Closes #26976
## Changes
- **`src/http.zig`**: When a streaming request body has an explicit
`Content-Length` header set by the user, use that instead of adding
`Transfer-Encoding: chunked`. Added
`is_streaming_request_body_with_content_length` flag to track this.
- **`src/bun.js/webcore/fetch/FetchTasklet.zig`**: Skip chunked transfer
encoding framing (`writeRequestData`) and the chunked terminator
(`writeEndRequest`) when the request has an explicit `Content-Length`.
- **`test/regression/issue/27061.test.ts`**: Regression test covering
multiple write patterns (2x write, write+end(data), 3x write) plus
validation that chunked encoding is still used when no `Content-Length`
is set.
## Test plan
- [x] New regression test passes with `bun bd test
test/regression/issue/27061.test.ts`
- [x] Test fails with `USE_SYSTEM_BUN=1` (confirms the bug exists in
current release)
- [x] Existing `test/js/node/http/` tests pass (no regressions)
- [x] Fetch file upload tests pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### Problem
The bundler's `__toESM` helper creates a new getter-wrapped proxy object
every time a CJS
module is imported. In a large app, a popular dependency like React can
be imported 600+
times — each creating a fresh object with ~44 getter properties. This
produces ~27K
unnecessary `GetterSetter` objects, ~25K closures, and ~25K
`JSLexicalEnvironment` scope
objects at startup.
Additionally, `__export` and `__exportValue` use `var`-scoped loop
variables captured by
setter closures, meaning all setters incorrectly reference the last
iterated key (a latent
bug).
### Changes
1. **`__toESM`: add WeakMap cache** — deduplicate repeated wrappings of
the same CJS
module. Two caches (one per `isNodeMode` value) to handle both import
modes correctly.
2. **Replace closures with `.bind()`** — `() => obj[key]` becomes
`__accessProp.bind(obj,
key)`. BoundFunction is cheaper than Function + JSLexicalEnvironment,
and frees the for-in
`JSPropertyNameEnumerator` from the closure scope.
3. **Fix var-scoping bug in `__export`/`__exportValue`** — setter
closures captured a
shared `var name` and would all modify the last iterated key. `.bind()`
eagerly captures
the correct key per iteration.
4. **`__toCommonJS`: `.map()` → `for..of`** — eliminates throwaway array
allocation.
5. **`__reExport`: single `getOwnPropertyNames` call** — was calling it
twice when
`secondTarget` was provided.
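The var-scoping bug and the `.bind()` fix reduce to a few lines of plain JavaScript (an illustrative reduction, not Bun's actual `__export` helper):

```typescript
// Buggy shape: a `var`-scoped loop variable captured by accessor closures
// means every accessor reads the *last* iterated key.
function exportAllBuggy(target: any, source: any): void {
  for (var name in source) {
    Object.defineProperty(target, name, {
      get: () => source[name], // shared `name` — wrong for all but the last key
      enumerable: true,
    });
  }
}

function accessProp(obj: any, key: string): any {
  return obj[key];
}

// Fixed shape: .bind() eagerly captures the correct key per iteration, and a
// BoundFunction is cheaper than a closure plus its scope object.
function exportAllFixed(target: any, source: any): void {
  for (var name in source) {
    Object.defineProperty(target, name, {
      get: accessProp.bind(undefined, source, name),
      enumerable: true,
    });
  }
}

const buggy: any = {};
const fixed: any = {};
exportAllBuggy(buggy, { a: 1, b: 2 });
exportAllFixed(fixed, { a: 1, b: 2 });
console.log(buggy.a, buggy.b); // 2 2 — every getter reads "b"
console.log(fixed.a, fixed.b); // 1 2 — correct
```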
### Impact (measured on a ~23MB single-bundle app with 600+ React
imports)
| Metric | Before | After | Delta |
|--------|--------|-------|-------|
| **Total objects** | 745,985 | 664,001 | **-81,984 (-11%)** |
| **Heap size** | 115 MB | 111 MB | **-4 MB** |
| GetterSetter | 34,625 | 13,428 | -21,197 (-61%) |
| Function | 221,302 | 197,024 | -24,278 (-11%) |
| JSLexicalEnvironment | 70,101 | 44,633 | -25,468 (-36%) |
| Structure | 40,254 | 39,762 | -492 |
…int.generate()
`ServerEntryPoint.source` defaults to `undefined`, and accessing its
`.contents` or `.path.text` fields before `generate()` has been called
causes a segfault. This happens when `bun:main` is resolved in contexts
where `entry_point.generate()` is skipped (HTML entry points) or never
called (test runner).
Add a `generated` flag to `ServerEntryPoint` and guard both access
sites:
- `getHardcodedModule()` in ModuleLoader.zig (returns null instead of
crashing)
- `_resolve()` in VirtualMachine.zig (falls through to normal
resolution)
### What does this PR do?
### How did you verify your code works?
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
The SIMD fast path in `findEscapeCharacter` matches bytes in `0x10-0x1F`
and `0x90-0x9F`, but `consumeANSI` only handles actual ANSI escape
introducers (`0x1b`, `0x9b`, `0x9d`, `0x90`, `0x98`, `0x9e`, `0x9f`).
For non-ANSI bytes like `0x16` (SYN), `consumeANSI` returns the same
pointer it received, causing an infinite loop in release builds and an
assertion failure in debug builds.
When a SIMD false positive is detected (`consumeANSI` returns the same
pointer), the byte is preserved in the output and scanning advances past
it.
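A miniature of the scan loop illustrates both the hang and the fix (names and structure are illustrative, not Bun's Zig implementation, and real code consumes the full escape sequence rather than one byte):

```typescript
// Actual ANSI escape introducers handled by the consumer.
const ESCAPE_INTRODUCERS = new Set([0x1b, 0x90, 0x98, 0x9b, 0x9d, 0x9e, 0x9f]);

function isEscapeCharacter(byte: number): boolean {
  return ESCAPE_INTRODUCERS.has(byte);
}

// Broad SIMD-style candidate ranges: 0x10-0x1F and 0x90-0x9F.
function isCandidate(byte: number): boolean {
  return (byte >= 0x10 && byte <= 0x1f) || (byte >= 0x90 && byte <= 0x9f);
}

function stripEscapes(bytes: number[]): number[] {
  const out: number[] = [];
  let i = 0;
  while (i < bytes.length) {
    // The fix: verify broad SIMD candidates before treating them as escapes.
    if (isCandidate(bytes[i]) && isEscapeCharacter(bytes[i])) {
      i++; // consume the introducer (real code consumes the whole sequence)
      continue;
    }
    out.push(bytes[i]);
    i++; // always advance, so bytes like 0x16 (SYN) cannot hang the loop
  }
  return out;
}

console.log(stripEscapes([0x41, 0x16, 0x42])); // [65, 22, 66] — SYN preserved
```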
Fixes #27014
## Summary
- Fixes HTTP response splitting vulnerability where `res.statusMessage`
could contain CRLF characters that were written directly to the socket,
allowing injection of arbitrary HTTP headers and response body
- Adds native-layer validation in `NodeHTTPResponse.zig` `writeHead()`
to reject status messages containing control characters (matching
Node.js's `checkInvalidHeaderChar` behavior)
- The `writeHead(code, msg)` API already validated via JS-side
`checkInvalidHeaderChar`, but direct property assignment
(`res.statusMessage = userInput`) followed by `res.end()` or
`res.write()` bypassed all validation
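The guard is the same class of check as Node's `checkInvalidHeaderChar`; a JS-level sketch of what the native validation rejects (illustrative, not the Zig code):

```typescript
// Mirrors Node's invalid-header-char class: anything outside horizontal
// tab, printable ASCII, and 0x80-0xFF is rejected — which covers CR/LF.
const invalidStatusMessageChar = /[^\t\x20-\x7e\x80-\xff]/;

function assertValidStatusMessage(msg: string): void {
  if (invalidStatusMessageChar.test(msg)) {
    throw new TypeError("Invalid character in statusMessage.");
  }
}

assertValidStatusMessage("OK"); // fine
// assertValidStatusMessage("OK\r\nSet-Cookie: admin=true"); // throws TypeError
```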
## Test plan
- [x] Verified vulnerability is reproducible: attacker can inject
`Set-Cookie` headers via `res.statusMessage = "OK\r\nSet-Cookie:
admin=true"`
- [x] Verified fix throws `ERR_INVALID_CHAR` TypeError when CRLF is
present in status message
- [x] Added 4 new tests covering: property assignment + `res.end()`,
property assignment + `res.write()`, explicit `writeHead()` rejection,
and valid status message passthrough
- [x] Tests fail with `USE_SYSTEM_BUN=1` (confirming they detect the
vulnerability) and pass with `bun bd test`
- [x] Existing Node.js compat test
`test-http-status-reason-invalid-chars.js` still passes
- [x] All 14 HTTP security tests pass
- [x] Full `node-http.test.ts` suite passes (77 pass, 1 pre-existing
skip, 1 pre-existing proxy failure)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- **HTMLRewriter test**: Replaced `fetch("https://www.example.com/")`
with a local HTTP content server, eliminating dependency on external
HTTPS and system CA certificates
- **HTTPS agent test**: Replaced `https.request("https://example.com/")`
with a local TLS server using self-signed certs from harness
- **Expired certs**: Regenerated self-signed certificates in
`test/js/bun/http/fixtures/` and `test/regression/fixtures/` (were
expired since Sep 2024, now valid until 2036)
## Root cause
Tests fetching external HTTPS URLs (`https://example.com`,
`https://www.example.com`) fail on CI environments (Alpine Linux,
Windows) that lack system CA certificate bundles, producing
`UNABLE_TO_GET_ISSUER_CERT_LOCALLY` errors. This affected:
- `test/js/workerd/html-rewriter.test.js` (HTMLRewriter: async
replacement using fetch + Bun.serve)
- `test/js/bun/test/parallel/test-https-should-work-when-sending-request-with-agent-false.ts`
## Test plan
- [x] `bun bd test test/js/workerd/html-rewriter.test.js` - 41 pass, 0
fail
- [x] `bun bd test test/js/workerd/html-rewriter.test.js -t "async
replacement using fetch"` - 1 pass
- [x] `bun bd test/js/bun/test/parallel/test-https-should-work-when-sending-request-with-agent-false.ts` exits 0
- [x] `bun bd test test/js/bun/http/serve-listen.test.ts` - 27 pass, 0
fail (uses renewed certs)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fixes use-after-free (ASAN use-after-poison) when shell builtins
(`ls`, `touch`, `mkdir`, `cp`) run inside command substitution `$(...)`
and encounter errors (e.g., permission denied)
- The `output_waiting` and `output_done` counters in the builtin exec
state went out of sync because `output_waiting` was only incremented for
async IO operations, while `output_done` was always incremented
- In command substitution, stdout is `.pipe` (sync) and stderr is `.fd`
(async), so a single OutputTask completing both writes could satisfy the
done condition while another OutputTask's async stderr write was still
pending in the IOWriter
The fix moves `output_waiting += 1` before the `needsIO()` check in all
four affected builtins so it's always incremented, matching
`output_done`.
## Test plan
- [x] `echo $(ls /tmp/*)` — no ASAN errors (previously crashed with
use-after-poison)
- [x] `echo $(touch /root/a /root/b ...)` — no ASAN errors
- [x] `echo $(mkdir /root/a /root/b ...)` — no ASAN errors
- [x] `ls /nonexistent` — still reports error and exits with code 1
- [x] `echo $(ls /tmp)` — still captures output correctly
- [x] Existing shell test suite: 292 pass, 52 fail (pre-existing), 83
todo — no regressions
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
### What does this PR do?
The `sendPong` fix alone wasn't sufficient. The bug only manifests with
**wss:// through HTTP proxy** (not ws://), because only that path uses
`initWithTunnel` with a detached socket.
**Two bugs were found and fixed:**
1. **`sendPong`/`sendCloseWithBody` socket checks**
(`src/http/websocket_client.zig`): Replaced `socket.isClosed() or
socket.isShutdown()` with `!this.hasTCP()` as originally proposed. Also
guarded `shutdownRead()` against detached sockets.
2. **Spurious 1006 during clean close** (`src/http/websocket_client.zig`
+ `WebSocketProxyTunnel.zig`): When `sendCloseWithBody` calls
`clearData()`, it shuts down the proxy tunnel. The tunnel's `onClose`
callback was calling `ws.fail(ErrorCode.ended)` which dispatched a 1006
abrupt close, overriding the clean 1000 close already in progress. Fixed
by adding `tunnel.clearConnectedWebSocket()` before `tunnel.shutdown()`
so the callback is a no-op.
### How did you verify your code works?
- `USE_SYSTEM_BUN=1`: Fails with `Unexpected close code: 1006`
- `bun bd test`: Passes with clean 1000 close
- Full proxy test suite: 25 pass, 4 skip, 0 fail
- Related fragmented/close tests: all passing
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## What
Replace two-step `Vector<uint8_t>` zero-initialization + `memcpy` with
direct `std::span` copy construction in the `structuredClone` Int32 and
Double array fast paths.
## Why
The previous code allocated a zero-filled buffer and then immediately
overwrote it with `memcpy`. By constructing the `Vector` directly from a
`std::span`, we eliminate the redundant zero-fill and reduce the
operation to a single copy.
### Before
```cpp
Vector<uint8_t> buffer(byteSize, 0);
memcpy(buffer.mutableSpan().data(), data, byteSize);
```
### After
```cpp
Vector<uint8_t> buffer(std::span<const uint8_t> { reinterpret_cast<const uint8_t*>(data), byteSize });
```
## Test
`bun bd test test/js/web/structured-clone-fastpath.test.ts` — 91/92 pass
(1 pre-existing flaky memory test unrelated to this change).
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
- Defer resolution of dynamic `import()` of unknown `node:` modules
(like `node:sqlite`) to runtime instead of failing at transpile time
- Fix use-after-poison in `addResolveError` by always duping `line_text`
from the source so Location data outlives the arena
Fixes #25707
## Root cause
When a CJS file is `require()`d, Bun's linker eagerly resolves all
import records, including dynamic `import()` expressions. For unknown
`node:` prefixed modules, `whenModuleNotFound` was only deferring
`.require` and `.require_resolve` to runtime — `.dynamic` imports fell
through to the error path, causing the entire `require()` to fail.
This broke Next.js builds with turbopack + `cacheComponents: true` +
Better Auth, because Kysely's dialect detection code uses
`import("node:sqlite")` inside a try/catch that gracefully handles the
module not being available.
Additionally, when the resolve error was generated, the
`Location.line_text` was a slice into arena-allocated source contents.
The arena is reset before `processFetchLog` processes the error, causing
a use-after-poison when `Location.clone` tries to dupe the freed memory.
## Test plan
- [x] New regression test in `test/regression/issue/25707.test.ts` with
3 cases:
- CJS require of file with `import("node:sqlite")` inside try/catch
(turbopack pattern)
- CJS require of file with bare `import("node:sqlite")` (no try/catch)
- Runtime error produces correct `ERR_UNKNOWN_BUILTIN_MODULE` code
- [x] Test fails with `USE_SYSTEM_BUN=1` (system bun v1.3.9)
- [x] Test passes with `bun bd test`
- [x] No ASAN use-after-poison crash on debug build
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fix out-of-bounds read in the INI parser's `prepareStr` function when
a multi-byte UTF-8 lead byte appears at the end of a value with
insufficient continuation bytes
- Fix undefined behavior when bare continuation bytes (0x80-0xBF) cause
`utf8ByteSequenceLength` to return 0, hitting an `unreachable` branch
(UB in ReleaseFast builds)
- Add bounds checking before accessing `val[i+1]`, `val[i+2]`,
`val[i+3]` in both escaped and non-escaped code paths
The vulnerability could be triggered by a crafted `.npmrc` file
containing truncated UTF-8 sequences. In release builds, this could
cause OOB heap reads (potential info leak) or undefined behavior.
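The bounds-checking logic can be illustrated in TypeScript (the real fix is in the Zig INI parser; the helper names here are hypothetical): derive the expected sequence length from the lead byte, return 0 for bare continuation bytes instead of assuming an unreachable branch, and verify the full sequence fits before touching `val[i+1]`..`val[i+3]`.

```typescript
// Illustrative sketch of safe UTF-8 lead-byte handling.
function utf8SequenceLength(lead: number): number {
  if (lead < 0x80) return 1;
  if (lead >= 0xc2 && lead <= 0xdf) return 2;
  if (lead >= 0xe0 && lead <= 0xef) return 3;
  if (lead >= 0xf0 && lead <= 0xf4) return 4;
  // Bare continuation byte (0x80-0xBF) or invalid lead: the caller
  // must handle this, not treat it as unreachable.
  return 0;
}

// True only if a complete sequence starting at `i` fits inside `bytes`.
function hasCompleteSequence(bytes: Uint8Array, i: number): boolean {
  const len = utf8SequenceLength(bytes[i]);
  return len > 0 && i + len <= bytes.length;
}
```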
## Test plan
- [x] Added 9 tests covering truncated 2/3/4-byte sequences, bare
continuation bytes, and escaped contexts
- [x] All 52 INI parser tests pass (`bun bd test
test/js/bun/ini/ini.test.ts`)
- [x] No regressions in npmrc tests (failures are pre-existing Verdaccio
connectivity issues)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Reject null bytes in `username`, `password`, `database`, and `path`
connection parameters for both PostgreSQL and MySQL to prevent wire
protocol parameter injection
- Both the Postgres and MySQL wire protocols use null-terminated strings
in their startup/handshake messages, so embedded null bytes in these
fields act as field terminators, allowing injection of arbitrary
protocol parameters (e.g. `search_path` for schema hijacking)
- The fix validates these fields immediately after UTF-8 conversion and
throws `InvalidArguments` error with a clear message if null bytes are
found
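The validation amounts to the following check, sketched in TypeScript (the actual implementation is in Zig at the JS-to-Zig boundary; the function name is hypothetical): any embedded NUL would act as a field terminator in the null-terminated wire format, so reject it before any bytes are sent.

```typescript
// Minimal sketch of the boundary validation described above.
function assertNoNullBytes(name: string, value: string): void {
  if (value.includes("\0")) {
    // Embedded NUL would terminate the field early on the wire,
    // letting the remainder be parsed as extra protocol parameters.
    throw new TypeError(`${name} must not contain null bytes`);
  }
}
```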
## Test plan
- [x] New test
`test/regression/issue/postgres-null-byte-injection.test.ts` verifies:
- Null bytes in username are rejected with an error before any data is
sent
- Null bytes in database are rejected with an error before any data is
sent
- Null bytes in password are rejected with an error before any data is
sent
- Normal connections without null bytes still work correctly
- [x] Test verified to fail with `USE_SYSTEM_BUN=1` (unfixed bun) and
pass with `bun bd test` (fixed build)
- [x] Existing SQL tests pass (`adapter-env-var-precedence.test.ts`,
`postgres-stringbuilder-assertion-aggressive.test.ts`)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix double-free in `Bun.plugin.clearAll()` by setting `virtualModules
= nullptr` after `delete`
- In `jsFunctionBunPluginClear` (`BunPlugin.cpp:956`), `delete
global->onLoadPlugins.virtualModules` freed the pointer without
nullifying it. When the `OnLoad` destructor later runs (during Worker
termination or VM destruction), it checks `if (virtualModules)` — the
dangling non-null pointer passes the check and is deleted again,
corrupting the heap allocator.
## Test plan
- [ ] New test
`test/regression/issue/plugin-clearall-double-free.test.ts` spawns a
subprocess that registers a virtual module, calls
`Bun.plugin.clearAll()`, and exits with `BUN_DESTRUCT_VM_ON_EXIT=1` to
trigger the destructor path
- [ ] Verified the test fails on the system bun (pre-fix) with `pas
panic: deallocation did fail ... Alloc bit not set`
- [ ] Verified the test passes with the debug build (post-fix)
- [ ] Existing plugin tests (`test/js/bun/plugin/plugins.test.ts`) all
pass (29/29)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fix WebSocket client pong frame handler to properly handle payloads
split across TCP segments, preventing frame desync that could cause
protocol confusion
- Add missing RFC 6455 Section 5.5 validation: control frame payloads
must not exceed 125 bytes (pong handler lacked this check, unlike ping
and close handlers)
## Details
The pong handler (lines 652-663) had two issues:
1. **Frame desync on fragmented delivery**: When a pong payload was
split across TCP segments (`data.len < receive_body_remain`), the
handler consumed only the available bytes but unconditionally reset
`receive_state = .need_header` and `receive_body_remain = 0`. The
remaining payload bytes in the next TCP delivery were then
misinterpreted as WebSocket frame headers.
2. **Missing payload length validation**: Unlike the ping handler (line
615) and close handler (line 680), the pong handler did not validate the
7-bit payload length against the RFC 6455 limit of 125 bytes for control
frames.
The fix models the pong handler after the existing ping handler pattern:
track partial delivery state with a `pong_received` boolean, buffer
incoming data into `ping_frame_bytes`, and only reset to `need_header`
after the complete payload has been consumed.
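The buffering behavior can be modeled in TypeScript (an illustrative state machine, not the Zig code; field names are hypothetical): partial payload bytes are accumulated, and only bytes past the complete payload may be handed back to the header parser.

```typescript
// Model of partial control-frame payload consumption.
type PongState = {
  buffered: number[];
  remaining: number; // bytes of pong payload still expected
};

// Returns the leftover bytes that belong to the NEXT frame, or [] if
// the payload is still incomplete (resetting to need_header early
// would misread payload bytes as frame headers).
function consumePongBytes(state: PongState, data: number[]): number[] {
  const take = Math.min(data.length, state.remaining);
  state.buffered.push(...data.slice(0, take));
  state.remaining -= take;
  return state.remaining === 0 ? data.slice(take) : [];
}
```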
## Test plan
- [x] New test `websocket-pong-fragmented.test.ts` verifies:
- Fragmented pong delivery (50-byte payload split into 2+48 bytes) does
not cause frame desync, and a subsequent text frame is received
correctly
- Pong frames with >125 byte payloads are rejected as invalid control
frames
- [x] Test fails with `USE_SYSTEM_BUN=1` (reproduces the bug) and passes
with `bun bd test`
- [x] Existing WebSocket tests pass: `websocket-client.test.ts`,
`websocket-close-fragmented.test.ts`,
`websocket-client-short-read.test.ts`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fixes HTTP header injection vulnerability in S3 client where
user-controlled options (`contentDisposition`, `contentEncoding`,
`type`) were passed to HTTP headers without CRLF validation
- Adds input validation at the JS-to-Zig boundary in
`src/s3/credentials.zig` that throws a `TypeError` if `\r` or `\n`
characters are detected
- An attacker could previously inject arbitrary headers (e.g.
`X-Amz-Security-Token`) by embedding `\r\n` in these string fields
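The check is conceptually this, sketched in TypeScript (the real validation is in `src/s3/credentials.zig`; the helper name is hypothetical): reject CR or LF anywhere in a user-supplied value before it is interpolated into request headers.

```typescript
// Sketch of the CRLF boundary validation described above.
function assertNoCRLF(field: string, value: string): void {
  if (/[\r\n]/.test(value)) {
    throw new TypeError(`Invalid ${field}: must not contain \\r or \\n`);
  }
}
```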
## Test plan
- [x] Added `test/regression/issue/s3-header-injection.test.ts` with 6
tests:
- CRLF in `contentDisposition` throws
- CRLF in `contentEncoding` throws
- CRLF in `type` (content-type) throws
- Lone CR in `contentDisposition` throws
- Lone LF in `contentDisposition` throws
- Valid `contentDisposition` without CRLF still works correctly
- [x] Tests fail with `USE_SYSTEM_BUN=1` (confirming vulnerability
exists in current release)
- [x] Tests pass with `bun bd test` (confirming fix works)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix path traversal vulnerability in tarball directory extraction on
POSIX systems where `mkdiratZ` used the un-normalized `pathname` (raw
from tarball) instead of the normalized `path` variable, allowing `../`
components to escape the extraction root via kernel path resolution
- The Windows directory creation, symlink creation, and file creation
code paths already correctly used the normalized path — only the two
POSIX `mkdiratZ` calls were affected (lines 463 and 469)
- `bun install` is not affected because npm mode skips directory
entries; affected callers include `bun create`, GitHub tarball
extraction, and `compile_target`
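The property the fix restores can be illustrated in TypeScript (a hypothetical helper, not the Zig code; POSIX paths assumed): once the entry path is normalized against the extraction root, `../` components can no longer escape it at kernel path resolution time.

```typescript
import path from "node:path";

// Illustrative check: does a tarball entry path, after normalization,
// stay inside the extraction root?
function resolvesInsideRoot(root: string, entry: string): boolean {
  const resolved = path.resolve(root, entry);
  return resolved === root || resolved.startsWith(root + path.sep);
}
```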
## Test plan
- [x] Added regression test that crafts a tarball with
`safe_dir/../../escaped_dir/` directory entry and verifies it cannot
create directories outside the extraction root
- [x] Verified test **fails** with system bun (vulnerable) and
**passes** with debug build (fixed)
- [x] Full `archive.test.ts` suite passes (99/99 tests)
- [x] `symlink-path-traversal.test.ts` continues to pass (3/3 tests)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Replace `bun.strings.eqlLong` with BoringSSL's `CRYPTO_memcmp` for
SCRAM-SHA-256 server signature verification in the PostgreSQL client
- The previous comparison (`eqlLong`) returned early on the first
mismatching byte, potentially leaking information about the expected
server signature via timing side-channel
- `CRYPTO_memcmp` is already used elsewhere in the codebase for
constant-time comparisons (CSRF tokens, `crypto.timingSafeEqual`,
KeyObject comparison)
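Why this matters can be sketched in TypeScript (a conceptual model — the actual change swaps `eqlLong` for BoringSSL's `CRYPTO_memcmp` in Zig): fold the XOR of every byte pair so the loop never exits early on a mismatch, making the comparison time independent of where the signatures differ.

```typescript
// Conceptual constant-time comparison: no early return on mismatch.
function constantTimeEqual(a: Uint8Array, b: Uint8Array): boolean {
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) diff |= a[i] ^ b[i];
  return diff === 0;
}
```

In JS code, Node and Bun expose this same primitive as `crypto.timingSafeEqual`.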
## Test plan
- [x] `bun bd` compiles successfully
- [ ] Existing SCRAM-SHA-256 integration tests in
`test/js/sql/sql.test.ts` pass (require Docker/PostgreSQL)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fixes #26918 — segfault at address `0x28189480080` caused by
use-after-free in the shell interpreter
- When `setupIOBeforeRun()` fails (e.g., stdout handle unavailable on
Windows), the `runFromJS` error path called `deinitFromExec()` which
directly freed the GC-managed interpreter object with
`allocator.destroy(this)`. When the GC later swept and called
`deinitFromFinalizer()` on the already-freed memory, it caused a
segfault.
- Replaced `deinitFromExec()` with `derefRootShellAndIOIfNeeded(true)`
which properly cleans up runtime resources (IO handles, shell
environment) while leaving final object destruction to the GC finalizer
— matching the pattern already used in `finish()`.
## Test plan
- [x] Added regression test in `test/regression/issue/26918.test.ts`
that verifies the shell interpreter handles closed stdout gracefully
without crashing
- [x] Test passes with `bun bd test test/regression/issue/26918.test.ts`
- [ ] The actual crash is primarily reproducible on Windows where stdout
handles can be truly unavailable — CI Windows tests should validate the
fix
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- When `--bail` caused an early exit after a test failure, the JUnit
reporter output file (`--reporter-outfile`) was never written because
`Global.exit()` was called before the normal completion path
- Extracted the JUnit write logic into a `writeJUnitReportIfNeeded()`
method on `CommandLineReporter` and call it in both bail exit paths
(test failure and unhandled rejection) as well as the normal completion
path
Closes #26851
## Test plan
- [x] Added regression test `test/regression/issue/26851.test.ts` with
two cases:
- Single failing test file with `--bail` produces JUnit XML output
- Multiple test files where bail triggers on second file still writes
the report
- [x] Verified test fails with system bun (`USE_SYSTEM_BUN=1`)
- [x] Verified test passes with `bun bd test`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Add `expectMessageEventually(value)` to the bake test harness `Client`
class — waits for a specific message to appear, draining any
intermediate messages that arrived before it
- Rewrite "hmr handles rapid consecutive edits" test to use raw
`Bun.write` + sleep for intermediate edits and `expectMessageEventually`
for the final assertion, avoiding flaky failures when HMR batches
updates non-deterministically across platforms
Fixes flaky failure on Windows where an extra "render 10" message
arrived after `expectMessage` consumed its expected messages but before
client disposal.
## Test plan
- [x] `bun bd test test/bake/dev-and-prod.test.ts` — all 12 tests pass
- [x] Ran the specific test multiple times to confirm no flakiness
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Alistair Smith <alistair@anthropic.com>
## Problem
The bundler's number renamer was mangling `.name` properties on crypto
class prototype methods and constructors:
- `hash.update.name` → `"update2"` instead of `"update"`
- `verify.verify.name` → `"verify2"` instead of `"verify"`
- `cipher.update.name` → `"update3"` instead of `"update"`
- `crypto.Hash.name` → `"Hash2"` instead of `"Hash"`
### Root causes
1. **Named function expressions on prototypes** collided with other
bindings after scope flattening (e.g. `Verify.prototype.verify =
function verify(...)` collided with the imported `verify`)
2. **Block-scoped constructor declarations** (`Hash`, `Hmac`) got
renamed when the bundler hoisted them out of block scope
3. **Shared function declarations** in the Cipher/Decipher block all got
numeric suffixes (`update3`, `final2`, `setAutoPadding2`, etc.)
## Fix
- Use `Object.assign` with object literals for prototype methods —
object literal property keys correctly infer `.name` and aren't subject
to the renamer
- Remove unnecessary block scopes around `Hash` and `Hmac` constructors
so they stay at module level and don't get renamed
- Inline `Cipheriv` methods and copy references to `Decipheriv`
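The `.name` inference the fix relies on is observable directly: methods defined via object-literal keys get their key as `.name`, and because there is no top-level binding, the bundler's number renamer has nothing to collide with and suffix.

```typescript
// Object-literal method keys infer `.name` from the property key.
const proto: Record<string, Function> = {};
Object.assign(proto, {
  update(data: unknown) {
    return data;
  },
});
// proto.update.name is "update" — there is no standalone `update`
// function binding for the renamer to rename to `update2`.
```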
## Tests
Added comprehensive `.name` tests for all crypto classes: Hash, Hmac,
Sign, Verify, Cipheriv, Decipheriv, DiffieHellman, ECDH, plus factory
functions and constructor names.
## What does this PR do?
Fixes the write loop in `StandaloneModuleGraph.inject()` for POSIX
targets (the `else` branch handling ELF/Linux standalone binaries) to
pass `remain` instead of `bytes` to `Syscall.write()`.
## Problem
The write loop that appends the bundled module graph to the end of the
executable uses a standard partial-write retry pattern, but passes the
full `bytes` buffer on every iteration instead of the remaining portion:
```zig
var remain = bytes;
while (remain.len > 0) {
switch (Syscall.write(cloned_executable_fd, bytes)) { // bug: should be 'remain'
.result => |written| remain = remain[written..],
...
}
}
```
If a partial write occurs, the next iteration re-writes from the start
of the buffer instead of continuing where it left off, corrupting the
output binary. The analogous read loop elsewhere in the same file
already correctly uses `remain`.
## Fix
One-character change: `bytes` → `remain` in the `Syscall.write()` call.
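The corrected retry pattern, modeled in TypeScript (the actual fix is the one-character Zig change above; `writeSome` is a hypothetical partial-write primitive standing in for the syscall): each iteration must write the *remaining* slice, never the full buffer.

```typescript
// Standard partial-write retry loop, with the fix applied.
function writeAll(
  writeSome: (chunk: Uint8Array) => number, // returns bytes written
  bytes: Uint8Array,
): void {
  let remain = bytes;
  while (remain.length > 0) {
    const written = writeSome(remain); // pass `remain`, not `bytes`
    remain = remain.subarray(written);
  }
}
```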
## How did you verify your code works?
- `bun bd` compiles successfully
- `bun bd test test/bundler/bun-build-compile.test.ts` — 4/4 pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Alistair Smith <alistair@anthropic.com>
## What does this PR do?
Fixes `bun build --compile` producing an all-zeros binary when the
output directory is on a different filesystem than the temp directory.
This is common in Docker containers, Gitea runners, and other
environments using overlayfs.
## Problem
When `inject()` finishes writing the modified executable to the temp
file, the file descriptor's offset is at EOF. If the subsequent
`renameat()` to the output path fails with `EXDEV` (cross-device — the
temp file and output dir are on different filesystems), the code falls
back to `copyFileZSlowWithHandle()`, which:
1. Calls `fallocate()` to pre-allocate the output file to the correct
size (filled with zeros)
2. Calls `bun.copyFile(in_handle, out_handle)` — but `in_handle`'s
offset is at EOF
3. `copy_file_range` / `sendfile` / `read` all use the current file
offset (EOF), read 0 bytes, and return immediately
4. Result: output file is the correct size but entirely zeros
This explains user reports of `bun build --compile
--target=bun-darwin-arm64` producing invalid binaries that `file`
identifies as "data" rather than a Mach-O executable.
## Fix
Seek the input fd to offset 0 in `copyFileZSlowWithHandle` before
calling `bun.copyFile`.
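The underlying hazard can be demonstrated with Node's fs APIs (an illustrative repro, not the Zig fix): reads on an fd advance a shared offset, so a copy that starts after the fd has been written to EOF reads 0 bytes unless the offset is reset first.

```typescript
import fs from "node:fs";
import os from "node:os";
import path from "node:path";

const file = path.join(fs.mkdtempSync(path.join(os.tmpdir(), "seek-")), "exe");
const fd = fs.openSync(file, "w+");
fs.writeSync(fd, Buffer.from("payload")); // fd offset is now at EOF

const buf = Buffer.alloc(7);
// position null = use the fd's current offset (EOF): reads 0 bytes.
const atEof = fs.readSync(fd, buf, 0, 7, null);
// Explicit position 0 is the moral equivalent of the fix's seek-to-0.
const fromStart = fs.readSync(fd, buf, 0, 7, 0);
fs.closeSync(fd);
```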
## How did you verify your code works?
- `bun bd` compiles successfully
- `bun bd test test/bundler/bun-build-compile.test.ts` — 6/6 pass
- Added tests that verify compiled binaries have valid executable
headers and produce correct output
- Manually verified cross-compilation: `bun build --compile
--target=bun-darwin-arm64` produces a valid Mach-O binary
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Update inline error snapshots in valkey reliability tests to match
Redis 8's changed error message format
- Redis 8 (`redis:8-alpine` in our test Docker container) no longer
appends `, with args beginning with: ` when an unknown command has no
arguments
## Root cause
Redis commit
[`25f780b6`](https://github.com/redis/redis/commit/25f780b662)
([PR #14690](https://github.com/redis/redis/pull/14690)) changed
`commandCheckExistence()` in `src/server.c` to only append `, with args
beginning with: ` when there are actual arguments (`c->argc >= 2`).
Previously it was always appended, producing a dangling `, with args
beginning with: ` even with zero arguments.
## Changes
- `test/js/valkey/reliability/protocol-handling.test.ts`: Updated
`SYNTAX-ERROR` snapshot (no args case)
- `test/js/valkey/reliability/error-handling.test.ts`: Updated
`undefined` and `123` snapshots (no args cases)
## Test plan
- [ ] Verify `protocol-handling.test.ts` passes in CI (was failing on
every attempt as shown in #26869 / build #36831)
- [ ] Verify `error-handling.test.ts` passes in CI
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
#### (Copies commits from #26447)
## Summary
- Add a global `--retry <N>` flag to `bun test` that sets a default
retry count for all tests (overridable by per-test `{ retry: N }`). Also
configurable via `[test] retry = N` in bunfig.toml.
- When a test passes after one or more retries, the JUnit XML reporter
emits a separate `<testcase>` entry for each failed attempt (with
`<failure>`), followed by the final passing `<testcase>`. This gives
flaky test detection tools per-attempt timing and result data using
standard JUnit XML that all CI systems can parse.
## Test plan
- `bun bd test test/js/junit-reporter/junit.test.js` — verifies separate
`<testcase>` entries appear in JUnit XML for tests that pass after retry
- `bun bd test test/cli/test/retry-flag.test.ts` — verifies the
`--retry` CLI flag applies a default retry count to all tests
## Changelog
<!-- CHANGELOG:START -->
- Added `--retry <N>` flag to `bun test` to set a default retry count
for all tests
- Added `[test] retry` option to bunfig.toml
- JUnit XML reporter now emits separate `<testcase>` entries for each
retry attempt, providing CI visibility into flaky tests
<!-- CHANGELOG:END -->
---------
Co-authored-by: Chris Lloyd <chrislloyd@anthropic.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
Fixes "Unknown HMR script" error during rapid consecutive edits in HMR
## Test plan
- [x] Basic consecutive HMR edits work correctly
---------
Co-authored-by: Alistair Smith <alistair@anthropic.com>
## Summary
- Convert `file:` URL strings to filesystem paths via
`Bun.fileURLToPath()` in the JS layer for both `fs.watch` and
`fs.watchFile`/`fs.unwatchFile`
- Handles percent-decoding (e.g. `%20` → space) and proper URL parsing,
which the previous naive `slice[6..]` stripping in Zig could not do
- Zig-level `file://` stripping is left unchanged; the JS layer now
resolves file URLs before they reach native code
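The conversion now performed in the JS layer is equivalent to Node's `url.fileURLToPath` (shown here; the PR uses `Bun.fileURLToPath`): it percent-decodes and parses the URL properly, which naive `slice[6..]` prefix stripping cannot do.

```typescript
import { fileURLToPath } from "node:url";

// `%20` is decoded to a space; on POSIX this yields "/tmp/my dir/app.txt".
const watched = fileURLToPath("file:///tmp/my%20dir/app.txt");
```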
## Test plan
- [x] New test: `fs.watch` with `file:` URL string containing
`%20`-encoded spaces
- [x] New test: `fs.watchFile` with `file:` URL string containing
`%20`-encoded spaces
- [x] Both tests fail with `USE_SYSTEM_BUN=1` and pass with `bun bd
test`
- [x] Existing `fs.watch` "should work with url" test (URL object) still
passes
- [x] Full `fs.watchFile` suite passes (6 pass, 1 skip, 0 fail)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Implement complete lowering of TC39 stage-3 standard ES decorators
(the non-legacy variant used when tsconfig has no
`experimentalDecorators`).
- Passes all 147 esbuild decorator tests and 22 additional Bun-specific
tests (191 total, 0 failures).
- Supports method, getter, setter, field, auto-accessor, private member,
and class decorators in both statement and expression positions, with
proper evaluation order, class binding semantics, and decorator
metadata.
Fixes #4122. Fixes #20206. Fixes #14529. Fixes #6051.
## What's implemented
| Feature | Details |
|---|---|
| Method/getter/setter decorators | Static and instance, public and private |
| Field decorators | Initializer replacement + extra initializers via `__runInitializers` |
| Auto-accessor (`accessor` keyword) | Lowered to WeakMap storage + getter/setter pair |
| Private member decorators | WeakMap/WeakSet lowering with `__privateGet`/`__privateSet` |
| Class decorators | Statement and expression positions |
| Class expression decorators | Comma-expression lowering (no IIFE) |
| Decorator metadata | `Symbol.metadata` support via `__decoratorMetadata` |
| Evaluation order | All decorator expressions + computed keys evaluated in source order per TC39 spec |
| Class binding semantics | Separate inner/outer class name bindings (element vs class decorator closures) |
| Static block extraction | `this` replaced with class name ref when moved to suffix |
| Computed property keys | Pre-evaluated into temp variables for correct ordering |
## Runtime helpers
Added to `src/runtime.js` and registered in `src/runtime.zig`:
- `__decoratorStart(base)` — creates decorator context array
- `__decorateElement(array, flags, name, decorators, target, extra)` —
applies decorators to a class element
- `__decoratorMetadata(array, target)` — sets `Symbol.metadata` on the
class
- `__runInitializers(array, flags, self, value)` — runs
initializer/extra-initializer arrays
## Test plan
- [x] `bun bd test
test/bundler/transpiler/es-decorators-esbuild.test.ts` — **147/147
pass** (esbuild's full decorator test suite)
- [x] `bun bd test test/bundler/transpiler/es-decorators.test.ts` —
**22/22 pass**
- [x] `bun bd test test/bundler/transpiler/decorators.test.ts` — **22/22
pass** (legacy decorators still work)
- [x] E2E runtime verification of method, field, accessor, class,
private, and expression decorators
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
## Summary
Adds a **DenseArray fast path** for `structuredClone` / `postMessage`
that completely skips byte-buffer serialization when an
`ArrayWithContiguous` array contains **flat objects** (whose property
values are only primitives or strings).
This builds on #26814 which added fast paths for Int32/Double/Contiguous
arrays of primitives and strings. The main remaining slow case was
**arrays of objects** — the most common real-world pattern (e.g.
`[{name: "Alice", age: 30}, {name: "Bob", age: 25}]`). Previously, these
fell back to the full serialization path because the Contiguous fast
path rejected non-string cell elements.
## How it works
### Serialization
The existing Contiguous array handler is extended to recognize object
elements that pass `isObjectFastPathCandidate` (FinalObject, no
getters/setters, no indexed properties, all enumerable). For qualifying
objects, properties are collected into a `SimpleCloneableObject` struct
(reusing the existing `SimpleInMemoryPropertyTableEntry` type). The
result is stored as a `FixedVector<DenseArrayElement>` where
`DenseArrayElement = std::variant<JSValue, String,
SimpleCloneableObject>`.
If no object elements are found, the existing `SimpleArray` path is used
(no regression).
### Deserialization
A **Structure cache** avoids repeated Structure transitions when the
array contains many same-shape objects (the common case). The first
object is built via `constructEmptyObject` + `putDirect`, and its final
Structure + Identifiers are cached. Subsequent objects with matching
property names are created directly with `JSFinalObject::create(vm,
cachedStructure)`, skipping all transitions.
Safety guards:
- Cache is only used when property count AND all property names match
- Cache is disabled when `outOfLineCapacity() > 0` (properties exceed
`maxInlineCapacity`), since `JSFinalObject::create` cannot allocate a
butterfly
### Fallback conditions
| Condition | Behavior |
|-----------|----------|
| Elements are only primitives/strings | SimpleArray (existing) |
| Elements include `isObjectFastPathCandidate` objects | **DenseArray (NEW)** |
| Object property value is an object/array | Fallback to normal path |
| Elements include Date, RegExp, Map, Set, ArrayBuffer, etc. | Fallback to normal path |
| Array has holes | Fallback to normal path |
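The workload this fast path targets is the ordinary array-of-flat-objects clone — behavior is unchanged, only the internal serialization route differs (runnable in Node or Bun):

```typescript
// An array of same-shape flat objects: the case the DenseArray path
// (and its Structure cache) is designed to accelerate.
const people = [
  { name: "Alice", age: 30 },
  { name: "Bob", age: 25 },
];
const cloned = structuredClone(people);
```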
## Benchmarks
Apple M4 Max, release build vs system Bun v1.3.8 and Node.js v24.12:
| Benchmark | Node.js v24.12 | Bun v1.3.8 | **This PR** | vs Bun | vs Node |
|-----------|----------------|------------|-------------|--------|---------|
| `[10 objects]` | 2.83 µs | 2.72 µs | **1.56 µs** | **1.7x** | **1.8x** |
| `[100 objects]` | 24.51 µs | 25.98 µs | **14.11 µs** | **1.8x** | **1.7x** |
## Test coverage
28 new edge-case tests covering:
- **Property value variants**: empty objects, special numbers (NaN,
Infinity, -0), null/undefined values, empty string keys, boolean-only
values, numeric string keys
- **Structure cache correctness**: alternating shapes, objects
interleaved with primitives, >maxInlineCapacity properties (100+), 1000
same-shape objects (stress test), repeated clone independence
- **Fallback correctness**: array property values, nested objects,
Date/RegExp/Map/Set/ArrayBuffer elements, getters, non-enumerable
properties, `Object.create(null)`, class instances
- **Frozen/sealed**: clones are mutable regardless of source
- **postMessage via MessageChannel**: mixed arrays with objects, empty
object arrays
## Changed files
- `src/bun.js/bindings/webcore/SerializedScriptValue.h` —
`SimpleCloneableObject`, `DenseArrayElement`, `FastPath::DenseArray`,
factory/constructor/member
- `src/bun.js/bindings/webcore/SerializedScriptValue.cpp` — serialize,
deserialize, `computeMemoryCost`
- `test/js/web/structured-clone-fastpath.test.ts` — 28 new tests
- `bench/snippets/structuredClone.mjs` — object array benchmarks
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Follow-up to #26819 ([review
comment](https://github.com/oven-sh/bun/pull/26819#discussion_r2781484939)).
Fixes `Buffer.slice()` / `Buffer.subarray()` on resizable `ArrayBuffer`
/ growable `SharedArrayBuffer` to return a **fixed-length view** instead
of a length-tracking view.
## Problem
The resizable/growable branch was passing `std::nullopt` to
`JSUint8Array::create()`, which creates a length-tracking view. When the
underlying buffer grows, the sliced view's length would incorrectly
expand:
```js
const rab = new ArrayBuffer(10, { maxByteLength: 20 });
const buf = Buffer.from(rab);
const sliced = buf.slice(0, 5);
sliced.length; // 5
rab.resize(20);
sliced.length; // was 10 (wrong), now 5 (correct)
```
Node.js specifies that `Buffer.slice()` always returns a fixed-length
view (verified on Node.js v22).
## Fix
Replace `std::nullopt` with `newLength` in the
`isResizableOrGrowableShared()` branch of
`jsBufferPrototypeFunction_sliceBody`.
## Test
Added a regression test that creates a `Buffer` from a resizable
`ArrayBuffer`, slices it, resizes the buffer, and verifies the slice
length doesn't change.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
### What does this PR do?
Instead of calling event_loop.wakeup() (which writes to the eventfd)
when there are pending immediate tasks, use a zero timeout in
getTimeout() so epoll/kqueue returns immediately. This avoids the
overhead of the eventfd write/read cycle on each setImmediate iteration.
On Windows, continue to call .wakeup() since that's cheap for libuv.
Verified with strace: system bun makes ~44k eventfd writes for a 5s
setImmediate loop, while this change makes 0.
### How did you verify your code works?
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Fixes the remaining kqueue filter comparison bug in
`packages/bun-usockets/src/eventing/epoll_kqueue.c` that caused
excessive CPU usage with network requests on macOS:
- **`us_loop_run_bun_tick` filter comparison (line 302-303):** kqueue
filter values (`EVFILT_READ=-1`, `EVFILT_WRITE=-2`) were compared using
bitwise AND (`&`) instead of equality (`==`). Since these are signed
negative integers (not bitmasks), `(-2) & (-1)` = `-2` (truthy), meaning
every `EVFILT_WRITE` event was also misidentified as `EVFILT_READ`. This
was already fixed in `us_loop_run` (by PR #25475) but the same bug
remained in `us_loop_run_bun_tick`, which is the primary event loop
function used by Bun.
This is a macOS-only issue (Linux uses epoll, which is unaffected).
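The arithmetic behind the bug reproduces in any language: kqueue filter constants are small negative integers, not bitmask flags, so testing membership with `&` misclassifies every `EVFILT_WRITE` event as `EVFILT_READ`.

```typescript
const EVFILT_READ: number = -1;
const EVFILT_WRITE: number = -2;

// Buggy check: in two's complement, (-2) & (-1) === -2, which is truthy,
// so a write event also "matches" the read filter.
const buggyLooksLikeRead = (EVFILT_WRITE & EVFILT_READ) !== 0;

// Correct check: plain equality.
const correctIsRead = EVFILT_WRITE === EVFILT_READ;
```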
Closes #26811
## Test plan
- [x] Added regression test at `test/regression/issue/26811.test.ts`
that makes concurrent HTTPS POST requests
- [x] Test passes with `bun bd test test/regression/issue/26811.test.ts`
- [ ] Manual verification on macOS: run the reporter's [repro
script](https://gist.github.com/jkoppel/d26732574dfcdcc6bfc4958596054d2e)
and confirm CPU usage stays low
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Switch both eventfd wakeup sites (Zig IO watcher loop and usockets
async) to edge-triggered (`EPOLLET`) epoll mode, eliminating unnecessary
`read()` syscalls on every event loop wakeup
- Add `EAGAIN`/`EINTR` overflow handling in `us_internal_async_wakeup`,
matching libuv's approach ([commit
`e5cb1d3d`](https://github.com/libuv/libuv/commit/e5cb1d3d))
With edge-triggered mode, each `write()` to the eventfd produces a new
edge event regardless of the current counter value, so draining the
counter via `read()` is unnecessary. The counter will never overflow in
practice (~18 quintillion wakeups), but overflow handling is included
defensively.
### Files changed
- **`src/io/io.zig`** — Add `EPOLL.ET` to eventfd registration, replace
drain `read()` with `continue`
- **`packages/bun-usockets/src/eventing/epoll_kqueue.c`** — Set
`leave_poll_ready = 1` for async callbacks, upgrade to `EPOLLET` via
`EPOLL_CTL_MOD`, add `EAGAIN`/`EINTR` handling in wakeup write
## Test plan
- [x] Verified with `strace -f -e trace=read,eventfd2` that eventfd
reads are fully eliminated after the change (0 reads on the eventfd fd)
- [x] Confirmed remaining 8-byte reads in traces are timerfd reads
(legitimate, required)
- [x] Stress tested with 50 concurrent async tasks (1000 total
`Bun.sleep(1)` iterations) — all completed correctly
- [x] `LinuxWaker.wait()` (used by `BundleThread` as a blocking sleep)
is intentionally unchanged
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
Add a fast path for `structuredClone` and `postMessage` when the root
value is a dense array of primitives or strings. This bypasses the full
`CloneSerializer`/`CloneDeserializer` machinery by keeping data in
native C++ structures instead of serializing to a byte stream.
**Important:** This optimization only applies when the root value passed
to `structuredClone()` / `postMessage()` is an array. Nested arrays
within objects still go through the normal serialization path.
## Implementation
Three tiers of array fast paths, checked in order:
| Tier | Indexing Type | Strategy | Applies When |
|------|--------------|----------|--------------|
| **Tier 1** | `ArrayWithInt32` | `memcpy` butterfly data | Dense int32 array, no holes, no named properties |
| **Tier 2** | `ArrayWithDouble` | `memcpy` butterfly data | Dense double array, no holes, no named properties |
| **Tier 3** | `ArrayWithContiguous` | Copy elements into `FixedVector<variant<JSValue, String>>` | Dense array of primitives/strings, no holes, no named properties |
All tiers fall through to the normal serialization path when:
- The array has holes that must forward to the prototype
- The array has named properties (e.g., `arr.foo = "bar"`) — checked via
`structure->maxOffset() != invalidOffset`
- Elements contain non-primitive, non-string values (objects, arrays,
etc.)
- The context requires wire-format serialization (storage, cross-process
transfer)
### Deserialization
- **Tier 1/2:** Allocate a new `Butterfly` via `vm.auxiliarySpace()`,
`memcpy` data back, create array with `JSArray::createWithButterfly()`.
Falls back to normal deserialization if `isHavingABadTime` (forced
ArrayStorage mode).
- **Tier 3:** Pre-convert elements to `JSValue` (including `jsString()`
allocation), then use `JSArray::tryCreateUninitializedRestricted()` +
`initializeIndex()`.
## Benchmarks
Apple M4 Max, comparing system Bun 1.3.8 vs this branch (release build):
| Benchmark | Before | After | Speedup |
|-----------|--------|-------|---------|
| `structuredClone([10 numbers])` | 308.71 ns | 40.38 ns | **7.6x** |
| `structuredClone([100 numbers])` | 1.62 µs | 86.87 ns | **18.7x** |
| `structuredClone([1000 numbers])` | 13.79 µs | 544.56 ns | **25.3x** |
| `structuredClone([10 strings])` | 642.38 ns | 307.38 ns | **2.1x** |
| `structuredClone([100 strings])` | 5.67 µs | 2.57 µs | **2.2x** |
| `structuredClone([10 mixed])` | 446.32 ns | 198.35 ns | **2.3x** |
| `structuredClone(nested array)` | 1.84 µs | 1.79 µs | 1.0x (not eligible) |
| `structuredClone({a: 123})` | 95.98 ns | 100.07 ns | 1.0x (no regression) |
Int32 arrays see the largest gains (up to 25x) since they use a direct
`memcpy` of butterfly memory. String/mixed arrays see ~2x improvement.
No performance regression on non-eligible inputs.
## Bug Fix
Also fixes a correctness bug where arrays with named properties (e.g.,
`arr.foo = "bar"`) would lose those properties when going through the
array fast path. Added a `structure->maxOffset() != invalidOffset` guard
to fall back to normal serialization for such arrays.
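The named-property case the guard protects can be seen with the standard API (per the HTML structured clone algorithm, own enumerable properties on arrays are copied):

```javascript
const arr = [1, 2, 3];
arr.foo = "bar"; // named property on an otherwise dense int32 array

const clone = structuredClone(arr);
console.log(clone);     // [ 1, 2, 3 ]
console.log(clone.foo); // "bar" — must survive, so the fast path is skipped
```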
Fixed a minor double-counting issue in `computeMemoryCost` where
`JSValue` elements in `SimpleArray` were counted both by `byteSize()`
and individually.
## Test Plan
38 tests in `test/js/web/structured-clone-fastpath.test.ts` covering:
- Basic array types: empty, numbers, strings, mixed primitives, special
numbers (`-0`, `NaN`, `Infinity`)
- Large arrays (10,000 elements)
- Tier 2: double arrays, Int32→Double transition
- Deep clone independence verification
- Named properties on Int32, Double, and Contiguous arrays
- `postMessage` via `MessageChannel` for Int32, Double, and mixed arrays
- Edge cases: frozen/sealed arrays, deleted elements (holes), `length`
extension, single-element arrays
- Prototype modification (custom prototype, indexed prototype properties
with holes)
- `Array` subclass identity loss (per spec)
- `undefined`-only and `null`-only arrays
- Multiple independent clones from the same source
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Add `minify.unwrapCJSToESM` JS API option and `--unwrap-cjs-to-esm` CLI
flag to force CJS-to-ESM conversion for specific packages, eliminating
the `__commonJS` wrapper. Supports wildcard patterns (e.g. `"@scope/*"`).
User entries extend the default React family list.
Also removes the react/react-dom version check that gated conversion,
and fixes `packageName()` to handle scoped packages (`@scope/pkg`).
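A minimal sketch of the two behaviors described above — scoped-package name extraction and wildcard matching (hypothetical JavaScript; the real implementation is in Zig, and these helper names are illustrative):

```javascript
// Scoped packages keep two path segments ("@scope/pkg"); others keep one.
function packageName(importPath) {
  const parts = importPath.split("/");
  return importPath.startsWith("@") && parts.length >= 2
    ? `${parts[0]}/${parts[1]}`
    : parts[0];
}

// Wildcard patterns like "@scope/*" match any package under the scope.
function matchesPattern(name, pattern) {
  return pattern.endsWith("*")
    ? name.startsWith(pattern.slice(0, -1))
    : name === pattern;
}

console.log(packageName("react/jsx-runtime"));        // "react"
console.log(packageName("@scope/pkg/dist/index.js")); // "@scope/pkg"
console.log(matchesPattern("@scope/pkg", "@scope/*")); // true
```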
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
- Fix crash ("Pure virtual function called!") when WebSocket client
receives binary data with `binaryType = "blob"` and no event listener
attached
- Add missing `incPendingActivityCount()` call before `postTask` in the
Blob case of `didReceiveBinaryData`
- Add regression test for issue #26669
## Root Cause
The Blob case in `didReceiveBinaryData` (WebSocket.cpp:1324-1331) was
calling `decPendingActivityCount()` inside the `postTask` callback
without a matching `incPendingActivityCount()` beforehand. This bug was
introduced in #21471 when Blob support was added.
The ArrayBuffer and NodeBuffer cases correctly call
`incPendingActivityCount()` before `postTask`, but the Blob case was
missing this call.
## Test plan
- [x] New regression test verifies WebSocket with `binaryType = "blob"`
doesn't crash on ping frames
- [x] `bun bd test test/regression/issue/26669.test.ts` passes
Fixes #26669

🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Ciro Spaciari MacBook <ciro@anthropic.com>
## Summary
`Bun.stringWidth` was incorrectly treating Thai SARA AA (U+0E32), SARA
AM (U+0E33), and their Lao equivalents (U+0EB2, U+0EB3) as zero-width
characters.
## Root Cause
In `src/string/immutable/visible.zig`, the range check for Thai/Lao
combining marks was too broad:
- Thai: `0xe31 <= cp <= 0xe3a` included U+0E32 and U+0E33
- Lao: `0xeb1 <= cp <= 0xebc` included U+0EB2 and U+0EB3
According to Unicode (UCD Grapheme_Break property), these are **spacing
vowels** (Grapheme_Base), not combining marks.
## Changes
- **`src/string/immutable/visible.zig`**: Exclude U+0E32, U+0E33,
U+0EB2, U+0EB3 from zero-width ranges
- **`test/js/bun/util/stringWidth.test.ts`**: Add tests for Thai and Lao
spacing vowels
## Before/After
| Character | Before | After |
|-----------|--------|-------|
| `\u0E32` (SARA AA) | 0 | 1 |
| `\u0E33` (SARA AM) | 0 | 1 |
| `คำ` (common Thai word) | 1 | 2 |
| `\u0EB2` (Lao AA) | 0 | 1 |
| `\u0EB3` (Lao AM) | 0 | 1 |
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
…ments (#26717)"
This reverts commit 315e822866.
### What does this PR do?
### How did you verify your code works?
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
Enables the `net.Server → Http2SecureServer` connection upgrade pattern
used by libraries like
[http2-wrapper](https://github.com/szmarczak/http2-wrapper),
[crawlee](https://github.com/apify/crawlee), and custom HTTP/2 proxy
servers. This pattern works by accepting raw TCP connections on a
`net.Server` and forwarding them to an `Http2SecureServer` via
`h2Server.emit('connection', rawSocket)`.
#### Bug fixes
**SSLWrapper use-after-free (Zig)**
Two use-after-free bugs in `ssl_wrapper.zig` are fixed:
1. **`flush()` stale pointer** — `flush()` captured the `ssl` pointer
*before* calling `handleTraffic()`, which can trigger a close callback
that frees the SSL object via `deinit`. The pointer was then used after
being freed. Fix: read `this.ssl` *after* `handleTraffic()` returns.
2. **`handleReading()` null dereference** — `handleReading()` called
`triggerCloseCallback()` after `triggerDataCallback()` without checking
whether the data callback had already closed the connection. This led to
a null function pointer dereference. Fix: check `this.ssl == null ||
this.flags.closed_notified` before calling the close callback.
### How did you verify your code works?
- Added **13 in-process tests** (`node-http2-upgrade.test.mts`) covering
the `net.Server → Http2SecureServer` upgrade path:
- GET/POST requests through upgraded connections
- Sequential requests sharing a single H2 session
- `session` event emission
- Concurrent clients with independent sessions
- Socket close ordering (rawSocket first vs session first) — no crash
- ALPN protocol negotiation (`h2`)
- Varied status codes (200, 302, 404)
- Client disconnect mid-response (stream destroyed early)
- Three independent clients producing three distinct sessions
- Tests use `node:test` + `node:assert` and **pass in both Bun and
Node.js**
- Ported `test-http2-socket-close.js` from the Node.js test suite,
verifying no segfault when the raw socket is destroyed before the H2
session is closed
---------
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Fixes a bug where sequential HTTP requests with proxy-style absolute
URLs (e.g. `GET http://example.com/path HTTP/1.1`) hang on the 2nd+
request when using keep-alive connections.
## Root Cause
In `packages/bun-uws/src/HttpParser.h`, the parser was treating
proxy-style absolute URLs identically to `CONNECT` method requests —
setting `isConnectRequest = true` and entering tunnel mode. This flag
was never reset between requests on the same keep-alive connection, so
the 2nd+ request was swallowed as raw tunnel data instead of being
parsed as HTTP.
## Fix
3-line change in `HttpParser.h:569`:
- **`isConnect`**: Now only matches actual `CONNECT` method requests
(removed `isHTTPorHTTPSPrefixForProxies` from the condition)
- **`isProxyStyleURL`**: New variable that detects `http://`/`https://`
prefixes and accepts them as valid request targets — without triggering
tunnel mode
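The corrected classification, sketched in JavaScript (variable names follow the PR; the real code is C++ in `HttpParser.h`):

```javascript
function classifyTarget(method, target) {
  // Only a real CONNECT request should enter tunnel mode.
  const isConnect = method === "CONNECT";
  // Proxy-style absolute URLs are valid request targets,
  // but must NOT trigger tunnel mode.
  const isProxyStyleURL =
    target.startsWith("http://") || target.startsWith("https://");
  return { isConnect, isProxyStyleURL };
}

console.log(classifyTarget("GET", "http://example.com/path"));
// { isConnect: false, isProxyStyleURL: true } — parsed as HTTP, no tunnel
console.log(classifyTarget("CONNECT", "example.com:443"));
// { isConnect: true, isProxyStyleURL: false } — tunnel mode
```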
## Who was affected
- Any Bun HTTP server (`Bun.serve()` or `node:http createServer`)
receiving proxy-style requests on keep-alive connections
- HTTP proxy servers built with Bun could only handle one request per
connection
- Bun's own HTTP client making sequential requests through an HTTP proxy
backed by a Bun server
## Test
Added `test/js/node/http/node-http-proxy-url.test.ts` with 3 test cases:
1. Sequential GET requests with absolute URL paths
2. Sequential POST requests with absolute URL paths
3. Mixed normal and proxy-style URLs
Tests run under both Node.js and Bun for compatibility verification.
- ❌ Fails with system bun (2/3 tests timeout on 2nd request)
- ✅ Passes with debug build (3/3 tests pass)
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Fix a use-after-free bug in the bindgen code generator where string
arguments with default values would have their underlying WTF::String
destroyed before the BunString was used
- The issue occurred because for optional string parameters with
defaults, a WTF::String was created inside an `if` block, converted to
BunString, then the if block closed and destroyed the WTF::String while
the BunString was still in use
- This manifested as a segfault in `Bun.stringWidth()` and potentially
other functions using optional string arguments
## Details
The crash stack trace showed:
```
Segmentation fault at address 0x31244B0F0
visible.zig:888: string.immutable.visible.visible.visibleUTF16WidthFn
BunObject.zig:1371: bindgen_BunObject_dispatchStringWidth1
GeneratedBindings.cpp:242: bindgen_BunObject_jsStringWidth
```
The generated code before this fix looked like:
```cpp
BunString argStr;
if (!arg0.value().isUndefinedOrNull()) {
    WTF::String wtfString_0 = WebCore::convert<...>(...);
    argStr = Bun::toString(wtfString_0);
} // <-- wtfString_0 destroyed here!
// ... argStr used later, pointing to freed memory
```
The fix declares the WTF::String holder outside the if block:
```cpp
BunString argStr;
WTF::String wtfStringHolder_0; // Lives until function returns
if (!arg0.value().isUndefinedOrNull()) {
    wtfStringHolder_0 = WebCore::convert<...>(...);
}
if (!wtfStringHolder_0.isEmpty()) argStr = Bun::toString(wtfStringHolder_0);
// argStr now points to valid memory
```
This fix applies to both:
- Direct string function arguments with defaults (e.g.,
`t.DOMString.default("")`)
- Dictionary fields with string defaults
## Test plan
- [x] Existing `stringWidth.test.ts` tests pass (105 tests)
- [x] Manual testing with GC stress shows no crashes
- [x] `os.userInfo()` with encoding option works correctly
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Update LLVM version references across build scripts, Dockerfiles, CI,
Nix configs, and documentation
- Fix LLVM 21 `-Wcharacter-conversion` errors in WebKit bindings:
- `EncodingTables.h`: pragma for intentional char32_t/char16_t
comparisons
- `TextCodecCJK.cpp`: widen `gb18030AsymmetricEncode` param to char32_t
- `URLPatternParser`: widen `isValidNameCodepoint` param to char32_t,
cast for `startsWith`
- Fix `__libcpp_verbose_abort` noexcept mismatch (LLVM 21 uses
`_NOEXCEPT`)
- Fix dangling pointer in `BunJSCModule.h` (`toCString` temporary
lifetime)
- Remove `useMathSumPreciseMethod` (removed upstream in JSC)
**Before merging:** Merge https://github.com/oven-sh/WebKit/pull/153
first, then update `WEBKIT_VERSION` in `cmake/tools/SetupWebKit.cmake`
to point to the merged commit.
## Test plan
- [ ] Build bun debug on macOS with LLVM 21
- [ ] Build bun on Linux (glibc)
- [ ] Build bun on Linux (musl)
- [ ] Build bun on Windows
- [ ] Run test suite
Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Add `[Symbol.dispose]` to mock function prototype, aliased to
`mockRestore`
- Enables `using spy = spyOn(obj, "method")` to auto-restore when
leaving scope
- Works for both `spyOn()` and `mock()`
Addresses #6040 — gives users a clean way to scope spy lifetimes instead
of manually calling `mockRestore()` or relying on `afterEach`.
### Example
```ts
import { spyOn, expect, test } from "bun:test";
test("auto-restores spy", () => {
const obj = { method: () => "original" };
{
using spy = spyOn(obj, "method").mockReturnValue("mocked");
expect(obj.method()).toBe("mocked");
}
// automatically restored
expect(obj.method()).toBe("original");
});
```
## Test plan
- `bun bd test test/js/bun/test/mock-disposable.test.ts` — 3 tests pass
- Verified tests fail with `USE_SYSTEM_BUN=1`
## Summary
- Make `defaultRemainingRunsUntilSkipReleaseAccess` configurable at
runtime instead of a compile-time constant
- Add `BUN_GC_RUNS_UNTIL_SKIP_RELEASE_ACCESS` environment variable to
control how many idle event loop iterations pass before skipping JSC
heap `releaseAccess` calls in `onBeforeWait`
- Default remains 10, matching the previous hardcoded value
## Test plan
- [ ] Verify default behavior is unchanged (no env var set, value is 10)
- [ ] Verify `BUN_GC_RUNS_UNTIL_SKIP_RELEASE_ACCESS=0` causes release
access to be skipped every iteration
- [ ] Verify `BUN_GC_RUNS_UNTIL_SKIP_RELEASE_ACCESS=100` delays skipping
for 100 idle iterations
- [ ] Verify negative values are ignored (default is preserved)
- [ ] Verify non-numeric values are ignored (default is preserved)
## Changelog
<!-- CHANGELOG:START -->
<!-- CHANGELOG:END -->
🤖 Generated with [Claude Code](https://claude.com/claude-code) (0%
3-shotted by claude)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
This is a tiny optimization that skips creating and dispatching an Event
object when `AbortSignal.abort()` is called with no registered
listeners.
## Changes
When there are no listeners (no `addEventListener` or `onabort`), we now
check `hasEventListeners()` before creating the Event, avoiding:
- Event object allocation (~112 bytes)
- EventPath creation
- dispatchEvent overhead (hash map lookups, method calls)
## Performance
Improvement for the no-listener case:
- **~6% faster** in micro-benchmarks
- ~16ms saved per 1M `abort()` calls (271ms → 255ms)
| Case | Before | After | Improvement |
|------|--------|-------|-------------|
| no listener | 271 ms | 255 ms | ~6% |
| with listener | 368 ms | 370 ms | (same) |
## Why this is safe
The optimization has no observable side effects because:
- `dispatchEvent` is called from C++, not observable via JS
monkey-patching
- Without listeners, no code can obtain a reference to the Event object
- All internal state (`aborted`, `reason`) is set correctly regardless
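The observable contract, which holds with or without the fast path:

```javascript
const controller = new AbortController();
// No listeners registered — the fast path may skip Event allocation entirely.
controller.abort(new Error("why"));

// Internal state is still set correctly.
console.log(controller.signal.aborted);        // true
console.log(controller.signal.reason.message); // "why"
```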
## Test Plan
- Existing AbortController/AbortSignal tests pass
- Added mitata benchmark: `bench/snippets/abort-signal.mjs`
## Summary
- Fix path normalization for "." on Windows where `normalizeStringBuf`
was incorrectly stripping it to an empty string
- This caused `existsSync('.')`, `statSync('.')`, and other fs
operations to fail on Windows
## Test plan
- Added regression test `test/regression/issue/26631.test.ts` that tests
`existsSync`, `exists`, `statSync`, and `stat` for both `.` and `..`
paths
- All tests pass locally with `bun bd test
test/regression/issue/26631.test.ts`
- Verified code compiles on all platforms with `bun run zig:check-all`
Fixes #26631

🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Add validation to require `--compile` when using ESM bytecode
- Update documentation to clarify ESM bytecode requirements
## Why
ESM module resolution is two-phase: (1) analyze imports/exports, (2)
evaluate. Without `--compile`, there's no `module_info` embedded, so JSC
must still parse the file for module analysis even with bytecode -
causing a double-parse deopt.
## Changes
- **CLI**: Error when `--bytecode --format=esm` is used without
`--compile`
- **JS API**: Error when `bytecode: true, format: 'esm'` is used without
`compile: true`
- **Docs**: Update bytecode.mdx, executables.mdx, index.mdx to clarify
requirements
- **Types**: Update JSDoc for bytecode option in bun.d.ts
## Test plan
```bash
# Should error
bun build ./test.js --bytecode --format=esm --outdir=./out
# error: ESM bytecode requires --compile. Use --format=cjs for bytecode without --compile.
# Should work
bun build ./test.js --bytecode --format=esm --compile --outfile=./mytest
bun build ./test.js --bytecode --format=cjs --outdir=./out
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Adds `--cpu-prof-interval` to configure the CPU profiler sampling
interval in microseconds (default: 1000), matching Node.js's
`--cpu-prof-interval` flag.
```sh
bun --cpu-prof --cpu-prof-interval 500 index.js
```
- Parsed as `u32`, truncated to `c_int` when passed to JSC's
`SamplingProfiler::setTimingInterval`
- Invalid values silently fall back to the default (1000μs)
- Warns if used without `--cpu-prof` or `--cpu-prof-md`
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
Fixes #26625
This fixes a segmentation fault that occurred on Windows x64 when the GC
finalizer tried to free shell interpreter resources that were already
partially freed during normal shell completion.
- Added explicit `cleanup_state` enum to track resource ownership state
- `needs_full_cleanup`: Nothing cleaned up yet, finalizer must clean
everything
- `runtime_cleaned`: `finish()` already cleaned IO/shell, finalizer
skips those
- Early return in `#derefRootShellAndIOIfNeeded()` when already cleaned
- Explicit state-based cleanup in `deinitFromFinalizer()`
The vulnerability existed on all platforms but was most reliably
triggered on Windows with high GC pressure (many concurrent shell
commands).
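The state machine can be sketched as a toy model (hypothetical JavaScript; the real code is Zig, and the method names here are illustrative):

```javascript
function makeInterp() {
  return {
    cleanupState: "needs_full_cleanup",
    ioRefs: 1, // refcount on IO/shell resources
    finish() {
      // Normal shell completion: release IO/shell and record that fact.
      this.ioRefs -= 1;
      this.cleanupState = "runtime_cleaned";
    },
    finalize() {
      // GC finalizer: only release IO/shell if finish() never ran.
      if (this.cleanupState !== "runtime_cleaned") this.ioRefs -= 1;
      this.cleanupState = "finalized";
    },
  };
}

const interp = makeInterp();
interp.finish();   // shell completes normally
interp.finalize(); // GC finalizer runs later
console.log(interp.ioRefs); // 0 — not -1: no double release
```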
## Test plan
- [x] Build passes (`bun bd`)
- [x] New regression test added (`test/regression/issue/26625.test.ts`)
- [x] Existing shell tests pass (same 4 pre-existing failures, no new
failures)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- `bun run build:local` now handles everything: configuring JSC,
building JSC, and building Bun in a single command on all platforms
(macOS, Linux, Windows). Previously required manually running `bun run
jsc:build:debug`, deleting a duplicate `InspectorProtocolObjects.h`
header, and then running the Bun build separately.
- Incremental JSC rebuilds: JSC is built via `add_custom_target` that
delegates to JSC's inner Ninja, which tracks WebKit source file changes
and only rebuilds what changed. `ninja -Cbuild/debug-local` also works
after the first build.
- Cross-platform support:
- macOS: Uses system ICU automatically
- Linux: Uses system ICU via find_package instead of requiring bundled
static libs
- Windows: Builds ICU from source automatically (only when libs don't
already exist), sets up static CRT and ICU naming conventions
### Changes
- cmake/tools/SetupWebKit.cmake: Replace the old WEBKIT_LOCAL block
(which just set include paths and assumed JSC was pre-built) with full
JSC configure + build integration for all platforms
- cmake/targets/BuildBun.cmake: Add jsc as a build dependency, use
system ICU on Linux for local builds, handle bmalloc linking for local
builds
- CONTRIBUTING.md / docs/project/contributing.mdx: Simplify "Building
WebKit locally" docs from ~15 lines of manual steps to 3 lines
## Test plan
- [x] macOS arm64: clean build, incremental rebuild, WebKit source
change rebuild
- [x] Windows x64: clean build with ICU, incremental rebuild with ICU
skip
- [x] Linux x64: build with system ICU via find_package
- [x] No duplicate InspectorProtocolObjects.h errors
- [x] build/debug-local/bun-debug --version works
Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Extract NO_PROXY checking logic from getHttpProxyFor into a reusable
isNoProxy method on the env Loader. This allows both fetch() and
WebSocket to check NO_PROXY even when a proxy is explicitly provided via
the proxy option (not just via http_proxy env var).
Changes:
- env_loader.zig: Extract isNoProxy() from getHttpProxyFor()
- FetchTasklet.zig: Check isNoProxy() before using explicit proxy
- WebSocket.cpp: Check Bun__isNoProxy() before using explicit proxy
- virtual_machine_exports.zig: Export Bun__isNoProxy for C++ access
- Add NO_PROXY tests for both fetch and WebSocket proxy paths
### How did you verify your code works?
Tests
---------
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
## Summary
- Fix type definition for `Socket.reload()` to match runtime behavior
- The runtime expects `{ socket: handler }` but types previously
accepted just `handler`
## Test plan
- [x] Added regression test `test/regression/issue/26290.test.ts`
- [x] Verified test passes with `bun bd test`
Fixes #26290

🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Alistair Smith <hi@alistair.sh>
## Summary
- Adds missing SIMD variants to the `Build.Target` TypeScript type
- The runtime accepts targets like `bun-linux-x64-modern` but TypeScript
was rejecting them
- Generalized the type to use `${Architecture}` template where possible
## Test plan
- [x] Added regression test in `test/regression/issue/26247.test.ts`
that validates all valid target combinations type-check correctly
- [x] Verified with `bun bd test test/regression/issue/26247.test.ts`
Fixes #26247

🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Alistair Smith <hi@alistair.sh>
## Summary
- Adds missing `bun-linux-x64-baseline` and `bun-linux-x64-modern`
compile target types
- These targets are supported by the Bun CLI but were missing from the
TypeScript type definitions
## Changes
Added `bun-linux-x64-${SIMD}` to the `CompileTarget` type union, which
expands to:
- `bun-linux-x64-baseline`
- `bun-linux-x64-modern`
## Test plan
- [x] TypeScript should now accept `target: 'bun-linux-x64-modern'`
without type errors
Closes #26247

🤖 Generated with [Claude Code](https://claude.com/claude-code)
2026-01-30 12:21:11 -08:00
537 changed files with 40393 additions and 4523 deletions
body: 'This PR has been closed because it was flagged as AI slop.\n\nMany AI-generated PRs are fine, but this one was identified as having one or more of the following issues:\n- Fails to verify the problem actually exists\n- Fails to test that the fix works\n- Makes incorrect assumptions about the codebase\n- Submits changes that are incomplete or misleading\n\nIf you believe this was done in error, please leave a comment explaining why.'
});
await github.rest.pulls.update({
owner: context.repo.owner,
repo: context.repo.repo,
pull_number: context.payload.pull_request.number,
title: 'ai slop',
body: 'This PR has been marked as AI slop and the description has been updated to avoid confusion or misleading reviewers.\n\nMany AI PRs are fine, but sometimes they submit a PR too early, fail to test if the problem is real, fail to reproduce the problem, or fail to test that the problem is fixed. If you think this PR is not AI slop, please leave a comment.',
@@ -90,17 +90,17 @@ Our build scripts will automatically detect and use `ccache` if available. You c
## Install LLVM
Bun requires LLVM 19 (`clang` is part of LLVM). This version requirement is to match WebKit (precompiled), as mismatching versions will cause memory allocation failures at runtime. In most cases, you can install LLVM through your system package manager:
Bun requires LLVM 21.1.8 (`clang` is part of LLVM). This version is enforced by the build system — mismatching versions will cause memory allocation failures at runtime. In most cases, you can install LLVM through your system package manager:
{% codetabs group="os" %}
```bash#macOS (Homebrew)
$ brew install llvm@19
$ brew install llvm@21
```
```bash#Ubuntu/Debian
$ # LLVM has an automatic installation script that is compatible with all versions of Ubuntu
# Build bun with the local JSC build — this automatically configures and builds JSC
$ bun run build:local
```
Using `bun run build:local` will build Bun in the `./build/debug-local` directory (instead of `./build/debug`), you'll have to change a couple of places to use this new directory:
`bun run build:local` handles everything: configuring JSC, building JSC, and building Bun. On subsequent runs, JSC will incrementally rebuild if any WebKit sources changed. `ninja -Cbuild/debug-local` also works after the first build, and will build Bun+JSC.
The build output goes to `./build/debug-local` (instead of `./build/debug`), so you'll need to update a couple of places:
- The first line in [`src/js/builtins.d.ts`](/src/js/builtins.d.ts)
- The `CompilationDatabase` line in [`.clangd` config](/.clangd) should be `CompilationDatabase: build/debug-local`
@@ -281,7 +276,7 @@ Note that the WebKit folder, including build artifacts, is 8GB+ in size.
If you are using a JSC debug build and using VScode, make sure to run the `C/C++: Select a Configuration` command to configure intellisense to find the debug headers.
Note that if you change make changes to our [WebKit fork](https://github.com/oven-sh/WebKit), you will also have to change [`SetupWebKit.cmake`](/cmake/tools/SetupWebKit.cmake) to point to the commit hash.
Note that if you make changes to our [WebKit fork](https://github.com/oven-sh/WebKit), you will also have to change [`SetupWebKit.cmake`](/cmake/tools/SetupWebKit.cmake) to point to the commit hash.
## Troubleshooting
@@ -304,7 +299,7 @@ The issue may manifest when initially running `bun setup` as Clang being unable
// Simulates ink's output.ts sliceAnsi(line, from, to) call in the render loop.
// Each line is colored and gets clipped to the viewport width.
const logLine = `${bold("[2024-01-15 12:34:56]")} ${red("ERROR")} Connection to ${link("https://api.example.com", "api.example.com")} timed out after 30s (attempt 3/5)`;
"description":"Configure auto-install behavior. One of \"auto\" (default, auto-installs when no node_modules), \"fallback\" (missing packages only), \"force\" (always).",
"hasValue":true,
"valueType":"val",
"choices":[
"auto"
],
"choices":["auto"],
"required":false,
"multiple":false
},
@@ -3827,12 +3781,7 @@
"description":"Set the default order of DNS lookup results. Valid orders: verbatim (default), ipv4first, ipv6first",
set -l bun_install_boolean_flags yarn production optional development no-save dry-run force no-cache silent verbose global
set -l bun_install_boolean_flags_descriptions "Write a yarn.lock file (yarn v1)" "Don't install devDependencies" "Add dependency to optionalDependencies" "Add dependency to devDependencies" "Don't update package.json or save a lockfile" "Don't install anything" "Always request the latest versions from the registry & reinstall all dependencies" "Ignore manifest cache entirely" "Don't output anything" "Excessively verbose logging" "Use global folder"
set -l bun_builtin_cmds_without_run dev create help bun upgrade discord install remove add update init pm x
set -l bun_builtin_cmds_without_run dev create help bun upgrade discord install remove add update init pm x repl
set -l bun_builtin_cmds_accepting_flags create help bun upgrade discord run init link unlink pm x update
function __bun_complete_bins_scripts --inherit-variable bun_builtin_cmds_without_run -d "Emit bun completions for bins and scripts"
@@ -185,3 +185,12 @@ complete -c bun -n "__fish_use_subcommand" -a "x" -d "Execute a package binary,
complete -c bun -n "__fish_use_subcommand" -a "outdated" -d "Display the latest versions of outdated dependencies" -f
complete -c bun -n "__fish_use_subcommand" -a "update" -d "Update dependencies to their latest versions" -f
complete -c bun -n "__fish_use_subcommand" -a "publish" -d "Publish your package from local to npm" -f
complete -c bun -n "__fish_use_subcommand" -a "repl" -d "Start a REPL session with Bun" -f
complete -c bun -n "__fish_seen_subcommand_from repl" -s "e" -l "eval" -r -d "Evaluate argument as a script, then exit" -f
complete -c bun -n "__fish_seen_subcommand_from repl" -s "p" -l "print" -r -d "Evaluate argument as a script, print the result, then exit" -f
complete -c bun -n "__fish_seen_subcommand_from repl" -s "r" -l "preload" -r -d "Import a module before other modules are loaded"
complete -c bun -n "__fish_seen_subcommand_from repl" -l "smol" -d "Use less memory, but run garbage collection more often" -f
complete -c bun -n "__fish_seen_subcommand_from repl" -s "c" -l "config" -r -d "Specify path to Bun config file"
complete -c bun -n "__fish_seen_subcommand_from repl" -l "cwd" -r -d "Absolute path to resolve files & entry points from"
complete -c bun -n "__fish_seen_subcommand_from repl" -l "env-file" -r -d "Load environment variables from the specified file(s)"
complete -c bun -n "__fish_seen_subcommand_from repl" -l "no-env-file" -d "Disable automatic loading of .env files" -f
@@ -7,9 +7,9 @@ Bytecode caching is a build-time optimization that dramatically improves applica
## Usage
### Basic usage
### Basic usage (CommonJS)
Enable bytecode caching with the `--bytecode` flag:
Enable bytecode caching with the `--bytecode` flag. Without `--format`, this defaults to CommonJS:
```bash terminal icon="terminal"
bun build ./index.ts --target=bun --bytecode --outdir=./dist
@@ -17,7 +17,7 @@ bun build ./index.ts --target=bun --bytecode --outdir=./dist
This generates two files:
- `dist/index.js` - Your bundled JavaScript
- `dist/index.js` - Your bundled JavaScript (CommonJS)
- `dist/index.jsc` - The bytecode cache file
At runtime, Bun automatically detects and uses the `.jsc` file:
@@ -28,14 +28,24 @@ bun ./dist/index.js # Automatically uses index.jsc
### With standalone executables
When creating executables with `--compile`, bytecode is embedded into the binary:
When creating executables with `--compile`, bytecode is embedded into the binary. Both ESM and CommonJS formats are supported:
```bash terminal icon="terminal"
# ESM (requires --compile)
bun build ./cli.ts --compile --bytecode --format=esm --outfile=mycli
# CommonJS (works with or without --compile)
bun build ./cli.ts --compile --bytecode --outfile=mycli
```
The resulting executable contains both the code and bytecode, giving you maximum performance in a single file.
### ESM bytecode
ESM bytecode requires `--compile` because Bun embeds module metadata (import/export information) in the compiled binary. This metadata allows the JavaScript engine to skip parsing entirely at runtime.
Without `--compile`, ESM bytecode would still require parsing the source to analyze module dependencies—defeating the purpose of bytecode caching.
### Combining with other optimizations
Bytecode works great with minification and source maps:
@@ -90,35 +100,9 @@ Larger applications benefit more because they have more code to parse.
- ❌ **Code that runs once**
- ❌ **Development builds**
- ❌ **Size-constrained environments**
- ❌ **Code with top-level await** (not supported)
## Limitations
### CommonJS only
Bytecode caching currently works with CommonJS output format. Bun's bundler automatically converts most ESM code to CommonJS, but **top-level await** is the exception:
```js
// This prevents bytecode caching
const data = await fetch("https://api.example.com");
export default data;
```
**Why**: Top-level await requires async module evaluation, which can't be represented in CommonJS. The module graph becomes asynchronous, and the CommonJS wrapper function model breaks down.
**Workaround**: Move async initialization into a function:
```js
async function init() {
  const data = await fetch("https://api.example.com");
  return data;
}
export default init;
```
Now the module exports a function that the consumer can await when needed.
### Version compatibility
Bytecode is **not portable across Bun versions**. The bytecode format is tied to JavaScriptCore's internal representation, which changes between versions.
@@ -236,8 +220,6 @@ It's normal for it to log a cache miss multiple times since Bun doesn't curre
- Compressing `.jsc` files for network transfer (gzip/brotli)
- Evaluating if the startup performance gain is worth the size increase
**Top-level await**: Not supported. Refactor to use async initialization functions.
## What is bytecode?
When you run JavaScript, the JavaScript engine doesn't execute your source code directly. Instead, it goes through several steps:
The builder object provides some methods for hooking into parts of the bundling process. Bun implements `onResolve` and `onLoad`; it does not yet implement the esbuild hooks `onStart`, `onEnd`, and `onDispose`, and `resolve` utilities. `initialOptions` is partially implemented, being read-only and only having a subset of esbuild's options; use `config` (same thing but with Bun's `BuildConfig` format) instead.
The builder object provides some methods for hooking into parts of the bundling process. Bun implements `onStart`, `onEnd`, `onResolve`, and `onLoad`. It does not yet implement the esbuild hooks `onDispose` and `resolve`. `initialOptions` is partially implemented, being read-only and only having a subset of esbuild's options; use `config` (same thing but with Bun's `BuildConfig` format) instead.
Bytecode compilation moves parsing overhead for large input files from runtime to bundle time. Your app starts faster, in exchange for making the `bun build` command a little slower. It doesn't obscure source code.
<Warning>
**Experimental:** Bytecode compilation is an experimental feature. Only `cjs` format is supported (which means no
top-level-await). Let us know if you run into any issues!
</Warning>
<Note>Bytecode compilation supports both `cjs` and `esm` formats when used with `--compile`.</Note>
### What do these flags do?
@@ -1187,7 +1209,8 @@ Currently, the `--compile` flag can only accept a single entrypoint at a time an
- `--outdir` — use `outfile` instead (except when using with `--splitting`).
- `--public-path`
- `--target=node` or `--target=browser`
- `--target=node`
- `--target=browser` (without HTML entrypoints — see [Standalone HTML](/bundler/standalone-html) for `--compile --target=browser` with `.html` files)
- `--no-bundle` - we always bundle everything into the executable.
@@ -481,6 +481,16 @@ All paths are resolved relative to your HTML file, making it easy to organize yo
This is a small wrapper around Bun's support for HTML imports in JavaScript.
## Standalone HTML
You can bundle your entire frontend into a **single self-contained `.html` file** with no external dependencies using `--compile --target=browser`. All JavaScript, CSS, and images are inlined directly into the HTML.
```bash terminal icon="terminal"
bun build --compile --target=browser ./index.html --outdir=dist
```
Learn more in the [Standalone HTML docs](/bundler/standalone-html).
## Adding a backend to your frontend
To add a backend to your frontend, you can use the "routes" option in `Bun.serve`.
Ensure the file is included in your `tsconfig.json` (e.g., `"include": ["src", "env.d.ts"]`). Now `feature()` only accepts those flags, and invalid strings like `feature("TYPO")` become type errors.
### optimizeImports
Skip parsing unused submodules of barrel files (re-export index files). When you import only a few named exports from a large library, normally the bundler parses every file the barrel re-exports. With `optimizeImports`, only the submodules you actually use are parsed.
For example, `import { Button } from 'antd'` normally parses all ~3000 modules that `antd/index.js` re-exports. With `optimizeImports: ['antd']`, only the `Button` submodule is parsed.
This works for **pure barrel files** — files where every named export is a re-export (`export { X } from './x'`). If a barrel file has any local exports (`export const foo = ...`), or if any importer uses `import *`, all submodules are loaded.
`export *` re-exports are always loaded (never deferred) to avoid circular resolution issues. Only named re-exports (`export { X } from './x'`) that aren't used by any importer are deferred.
**Automatic mode:** Packages with `"sideEffects": false` in their `package.json` get barrel optimization automatically — no `optimizeImports` config needed. Use `optimizeImports` for packages that don't have this field.
**Plugins:** Resolve and load plugins work correctly with barrel optimization. Deferred submodules go through the plugin pipeline when they are eventually loaded.
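A minimal sketch of a build config using this option (the `optimizeImports` field and the `antd` example come from the description above; the entrypoint path is illustrative):

```ts
// Hypothetical build config enabling barrel-file optimization.
const buildConfig = {
  entrypoints: ["./src/index.ts"],
  outdir: "./dist",
  // Only parse the antd submodules that are actually imported,
  // instead of all ~3000 modules its barrel file re-exports.
  optimizeImports: ["antd"],
};
// In Bun: await Bun.build(buildConfig);
```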
### metafile
Generate metadata about the build in a structured format. The metafile contains information about all input files, output files, their sizes, imports, and exports. This is useful for:
The `bytecode: boolean` option can be used to generate bytecode for any JavaScript/TypeScript entrypoints. This can greatly improve startup times for large applications. Only supported for `"cjs"` format, only supports `"target": "bun"` and dependent on a matching version of Bun. This adds a corresponding `.jsc` file for each entrypoint.
The `bytecode: boolean` option can be used to generate bytecode for any JavaScript/TypeScript entrypoints. This can greatly improve startup times for large applications. Requires `"target": "bun"` and is dependent on a matching version of Bun.
- **CommonJS**: Works with or without `compile: true`. Generates a `.jsc` file alongside each entrypoint.
- **ESM**: Requires `compile: true`. Bytecode and module metadata are embedded in the standalone executable.
Without an explicit `format`, bytecode defaults to CommonJS.
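As a sketch in Bun's JavaScript API, the CommonJS case looks like this (the `bytecode` and `target` fields are the documented ones; the entrypoint path is illustrative):

```ts
// CJS bytecode bundle: works without compile, emits a .jsc file
// next to each entrypoint.
const config = {
  entrypoints: ["./index.ts"],
  outdir: "./dist",
  target: "bun",  // bytecode requires target "bun"
  bytecode: true, // without an explicit format, this implies CommonJS
};
// In Bun: await Bun.build(config) writes dist/index.js + dist/index.jsc.
```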
Registers a callback to be run after the bundle is complete. The callback receives the [`BuildOutput`](/docs/bundler#outputs) object containing the build results, including output files and any build messages.
description: Bundle a single-page app into a single self-contained .html file with no external dependencies
---
Bun can bundle your entire frontend into a **single `.html` file** with zero external dependencies. JavaScript, TypeScript, JSX, CSS, images, fonts, videos, WASM — everything gets inlined into one file.
```bash terminal icon="terminal"
bun build --compile --target=browser ./index.html --outdir=dist
```
The output is a completely self-contained HTML document. No relative paths. No external files. No server required. Just one `.html` file that works anywhere a browser can open it.
## One file. Upload anywhere. It just works.
The output is a single `.html` file you can put anywhere:
- **Upload it to S3** or any static file host — no directory structure to maintain, just one file
- **Double-click it from your desktop** — it opens in the browser and works offline, no localhost server needed
- **Embed it in your webview** — No need to deal with relative files
- **Insert it in an `<iframe>`** — embed interactive content in another page with a single file URL
- **Serve it from anywhere** — any HTTP server, CDN, or file share. One file, zero configuration.
There's nothing to install, no `node_modules` to deploy, no build artifacts to coordinate, no relative paths to think about. The entire app — framework code, stylesheets, images, everything — lives in that one file.
## Truly one file
Normally, distributing a web page means managing a folder of assets — the HTML, the JavaScript bundles, the CSS files, the images. Move the HTML without the rest and everything breaks. Browsers have tried to solve this before: Safari's `.webarchive` and `.mhtml` are supposed to save a page as a single file, but in practice they unpack into a folder of loose files on your computer — defeating the purpose.
Standalone HTML is different. The output is a plain `.html` file. Not an archive. Not a folder. One file, with everything inside it. Every image, every font, every line of CSS and JavaScript is embedded directly in the HTML using standard `<style>` tags, `<script>` tags, and `data:` URIs. Any browser can open it. Any server can host it. Any file host can store it.
This makes it practical to distribute web pages the same way you'd distribute a PDF — as a single file you can move, copy, upload, or share without worrying about broken paths or missing assets.
bun build --compile --target=browser ./index.html --outdir=dist
```
Open `dist/index.html` — the React app works with no server.
## Everything gets inlined
Bun inlines every local asset it finds in your HTML. If it has a relative path, it gets embedded into the output file. This isn't limited to images and stylesheets — it works with any file type.
| JS `import "./styles.css"` | Merged into the `<style>` tag |
Images, fonts, WASM binaries, videos, audio files, SVGs — any file referenced by a relative path gets base64-encoded into a `data:` URI and embedded directly in the HTML. The MIME type is automatically detected from the file extension.
External URLs (like CDN links or absolute URLs) are left untouched.
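As an illustration of the transform (not Bun's actual implementation), inlining a local asset amounts to base64-encoding its bytes and prefixing the detected MIME type:

```ts
// Turn raw file bytes into a base64 `data:` URI, as described above.
// The MIME type would normally come from the file extension.
function toDataUri(bytes: Uint8Array, mime: string): string {
  const b64 = Buffer.from(bytes).toString("base64");
  return `data:${mime};base64,${b64}`;
}
```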
## Using with React
React apps work out of the box. Bun handles JSX transpilation and npm package resolution automatically.
[Server-Sent Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events) let you push a stream of text events to the browser over a single HTTP response. The client consumes them via [`EventSource`](https://developer.mozilla.org/en-US/docs/Web/API/EventSource).
In Bun, you can implement an SSE endpoint by returning a `Response` whose body is a streaming source and setting the `Content-Type` header to `text/event-stream`.
<Note>
`Bun.serve` closes idle connections after **10 seconds** by default. A quiet SSE stream counts as idle, so the
examples below call `server.timeout(req, 0)` to disable the timeout for the stream. See
[`idleTimeout`](/runtime/http/server#idletimeout) for details.
</Note>
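The frames themselves are plain text: one or more `data:`-prefixed lines, terminated by a blank line. A small formatter sketch (field names per the SSE wire format; the helper itself is illustrative):

```ts
// Format a payload as a Server-Sent Events frame. Multi-line payloads
// become multiple `data:` lines; a blank line terminates the event.
function sseFrame(data: string, event?: string): string {
  const dataLines = data
    .split("\n")
    .map((line) => `data: ${line}`)
    .join("\n");
  return (event ? `event: ${event}\n` : "") + dataLines + "\n\n";
}
```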
## Using an async generator
In Bun, `new Response` accepts an async generator function directly. This is usually the simplest way to write an SSE endpoint — each `yield` flushes a chunk to the client, and if the client disconnects, the generator's `finally` block runs so you can clean up.
```ts server.ts icon="/icons/typescript.svg"
Bun.serve({
  port: 3000,
  routes: {
    "/events": (req, server) => {
      // SSE streams are often quiet between events. By default,
      // Bun.serve closes connections after 10 seconds of inactivity.
      // Disable the idle timeout for this request so the stream
      // stays open indefinitely.
      server.timeout(req, 0);
      return new Response(
        async function* () {
          yield `data: connected at ${Date.now()}\n\n`;
          // Emit a tick every 5 seconds until the client disconnects.
          // When the client goes away, the generator is returned
          // (cancelled) and this loop stops automatically.
          while (true) {
            await Bun.sleep(5000);
            yield `data: tick ${Date.now()}\n\n`;
          }
        },
        {
          headers: {
            "Content-Type": "text/event-stream",
            "Cache-Control": "no-cache",
          },
        },
      );
    },
  },
});
```
## Using a `ReadableStream`
If your events originate from callbacks — message brokers, timers, external pushes — rather than a linear `await` flow, a `ReadableStream` often fits better. When the client disconnects, Bun calls the stream's `cancel()` method automatically, so you can release any resources you set up in `start()`.
```ts server.ts icon="/icons/typescript.svg"
Bun.serve({
  port: 3000,
  routes: {
    "/events": (req, server) => {
      server.timeout(req, 0);
      let timer: Timer;
      const stream = new ReadableStream({
        start(controller) {
          controller.enqueue(`data: connected at ${Date.now()}\n\n`);
@@ -95,17 +95,17 @@ Our build scripts will automatically detect and use `ccache` if available. You c
## Install LLVM
Bun requires LLVM 19 (`clang` is part of LLVM). This version requirement is to match WebKit (precompiled), as mismatching versions will cause memory allocation failures at runtime. In most cases, you can install LLVM through your system package manager:
Bun requires LLVM 21.1.8 (`clang` is part of LLVM). This version is enforced by the build system — mismatching versions will cause memory allocation failures at runtime. In most cases, you can install LLVM through your system package manager:
<CodeGroup>
```bash macOS (Homebrew)
brew install llvm@19
brew install llvm@21
```
```bash Ubuntu/Debian
# LLVM has an automatic installation script that is compatible with all versions of Ubuntu
# Build bun with the local JSC build — this automatically configures and builds JSC
bun run build:local
```
Using `bun run build:local` will build Bun in the `./build/debug-local` directory (instead of `./build/debug`), you'll have to change a couple of places to use this new directory:
`bun run build:local` handles everything: configuring JSC, building JSC, and building Bun. On subsequent runs, JSC will incrementally rebuild if any WebKit sources changed. `ninja -Cbuild/debug-local` also works after the first build, and will build Bun+JSC.
The build output goes to `./build/debug-local` (instead of `./build/debug`), so you'll need to update a couple of places:
- The first line in `src/js/builtins.d.ts`
- The `CompilationDatabase` line in `.clangd` config should be `CompilationDatabase: build/debug-local`
@@ -288,7 +283,7 @@ Note that the WebKit folder, including build artifacts, is 8GB+ in size.
If you are using a JSC debug build with VS Code, run the `C/C++: Select a Configuration` command to configure IntelliSense to find the debug headers.
Note that if you change make changes to our [WebKit fork](https://github.com/oven-sh/WebKit), you will also have to change `SetupWebKit.cmake` to point to the commit hash.
Note that if you make changes to our [WebKit fork](https://github.com/oven-sh/WebKit), you will also have to change `SetupWebKit.cmake` to point to the commit hash.
## Troubleshooting
@@ -314,7 +309,7 @@ The issue may manifest when initially running `bun setup` as Clang being unable
@@ -298,6 +298,17 @@ This is useful for catching flaky tests or non-deterministic behavior. Each test
The `--rerun-each` CLI flag will override this setting when specified.
### `test.retry`
Default retry count for all tests. Failed tests will be retried up to this many times. Per-test `{ retry: N }` overrides this value. Default `0` (no retries).
```toml title="bunfig.toml" icon="settings"
[test]
retry = 3
```
The `--retry` CLI flag will override this setting when specified.
### `test.concurrentTestGlob`
Specify a glob pattern to automatically run matching test files with concurrent test execution enabled. Test files matching this pattern will behave as if the `--concurrent` flag was passed, running all tests within those files concurrently.
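For example (a sketch — the glob pattern here is illustrative):

```toml title="bunfig.toml" icon="settings"
[test]
# Files matching this glob run as if --concurrent was passed
concurrentTestGlob = "test/concurrent/**/*.test.ts"
```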
@@ -171,12 +171,14 @@ Unlike unix domain sockets, abstract namespace sockets are not bound to the file
## idleTimeout
To configure the idle timeout, set the `idleTimeout` field in Bun.serve.
By default, `Bun.serve` closes connections after **10 seconds** of inactivity. A connection is considered idle when there is no data being sent or received — this includes in-flight requests where your handler is still running but hasn't written any bytes to the response yet. Browsers and `fetch()` clients will see this as a connection reset.
To configure this, set the `idleTimeout` field (in seconds). The maximum value is `255`, and `0` disables the timeout entirely.
```ts
Bun.serve({
  // 10 seconds:
  idleTimeout: 10,
  // 30 seconds (default is 10)
  idleTimeout: 30,
  fetch(req) {
    return new Response("Bun!");
@@ -184,7 +186,11 @@ Bun.serve({
});
```
This is the maximum amount of time a connection is allowed to be idle before the server closes it. A connection is idling if there is no data sent or received.
<Note>
**Streaming & Server-Sent Events** — The idle timer applies while a response is being streamed. If your stream goes
quiet for longer than `idleTimeout`, the connection will be closed mid-response. For long-lived streams, disable the
timeout for that request with [`server.timeout(req, 0)`](#server-timeout-request-seconds).
</Note>
---
@@ -296,12 +302,12 @@ This is useful for development and hot reloading. Only `fetch`, `error`, and `ro
### `server.timeout(Request, seconds)`
Set a custom idle timeout for individual requests:
Override the idle timeout for an individual request. Pass `0` to disable the timeout entirely for that request.
```ts
const server = Bun.serve({
  async fetch(req, server) {
    // Set 60 second timeout for this request
    // Give this request up to 60 seconds of inactivity instead of the default 10
    server.timeout(req, 60);
    // If they take longer than 60 seconds to send the body, the request will be aborted
@@ -312,7 +318,28 @@ const server = Bun.serve({
});
```
Pass `0` to disable the timeout for a request.
This is the recommended way to keep long-lived streaming responses (like Server-Sent Events) alive without raising the global `idleTimeout` for every request:
```ts
Bun.serve({
  routes: {
    "/events": (req, server) => {
      // Disable the idle timeout for this streaming response.
      // Otherwise the connection will be closed if no bytes
      // are sent for 10 seconds (the default idleTimeout).
      server.timeout(req, 0);
      return new Response(
        async function* () {
          yield "data: hello\n\n";
          // events can arrive sporadically without the connection being killed
description: "An interactive JavaScript and TypeScript REPL with syntax highlighting, history, and tab completion"
---
`bun repl` starts an interactive Read-Eval-Print Loop (REPL) for evaluating JavaScript and TypeScript expressions. It's useful for quickly testing code snippets, exploring APIs, and debugging.
```sh terminal icon="terminal"
bun repl
```
```txt
Welcome to Bun v1.3.3
Type .copy [code] to copy to clipboard. .help for more info.
> 1 + 1
2
> const greeting = "Hello, Bun!"
undefined
> greeting
'Hello, Bun!'
```
---
## Features
- **TypeScript & JSX** — Write TypeScript and JSX directly. Bun transpiles everything on the fly.
- **Top-level `await`** — Await promises directly at the prompt without wrapping in an async function.
- **Syntax highlighting** — Input is highlighted as you type.
- **Persistent history** — History is saved to `~/.bun_repl_history` and persists across sessions.
- **Tab completion** — Press `Tab` to complete property names and REPL commands.
- **Multi-line input** — Unclosed brackets, braces, and parentheses automatically continue on the next line.
- **Node.js globals** — `require`, `module`, `__dirname`, and `__filename` are available, resolved relative to your current working directory.
---
## Special variables
The REPL exposes two special variables that update after each evaluation.
Just like Bun's runtime, you can use either `require` or `import` in the REPL and it Just Works — mix ESM and CommonJS freely at the prompt. Module resolution uses the same rules as `bun run`, so you can import from `node_modules`, relative paths, or `node:` builtins.
```txt
> import { z } from "zod"
undefined
> const path = require("path")
undefined
> z.string().parse(path.join("/tmp", "file.txt"))
'/tmp/file.txt'
```
Declarations persist for the rest of the session, and `const`/`let` can be redeclared across evaluations (unlike in regular scripts) so you can re-run `import` and `require` statements while iterating.
---
## Multi-line input
When you press `Enter` on a line with unclosed brackets, braces, or parentheses, the REPL automatically continues on the next line. The prompt changes to `...` to indicate continuation.
```txt
> function add(a, b) {
... return a + b;
... }
undefined
> add(2, 3)
5
```
For longer multi-line entries, use `.editor` to enter editor mode, which buffers all input until you press `Ctrl+D`.
---
## REPL commands
Type `.help` at the prompt to see all available REPL commands.
| Shortcut | Action |
| --- | --- |
| `Ctrl+B` / `Ctrl+F` | Move backward/forward one character |
| `Alt+B` / `Alt+F` | Move backward/forward one word |
| `Ctrl+U` | Delete to start of line |
| `Ctrl+K` | Delete to end of line |
| `Ctrl+W` | Delete word backward |
| `Ctrl+D` | Delete character (or exit if line is empty) |
| `Ctrl+L` | Clear screen |
| `Ctrl+T` | Swap the two characters before the cursor |
| `Up` / `Down` | Navigate history |
| `Tab` | Auto-complete |
| `Ctrl+C` | Cancel current input (press twice on empty line to exit) |
---
## History
REPL history is automatically saved to `~/.bun_repl_history` (up to 1000 entries) and loaded at the start of each session. Use `Up`/`Down` to navigate.
To export your history to a different file, use `.save`:
```txt
> .save ./my-session.txt
```
---
## Non-interactive mode
Use `-e` / `--eval` to evaluate a script with REPL semantics and exit. Use `-p` / `--print` to additionally print the result.
```sh terminal icon="terminal"
bun repl -e "const x: number = 42; console.log(x)"
# 42
bun repl -p "await fetch('https://example.com').then(r => r.status)"
# 200
bun repl -p "{ a: 1, b: 2 }"
# { a: 1, b: 2 }
```
This uses the same transforms as the interactive REPL, so a bare object literal like `{ a: 1 }` is treated as an object expression instead of a block statement. The process exits after the event loop drains (pending timers and I/O complete first). On error, the process exits with code `1`.
Default retry count for all tests. Failed tests will be retried up to this many times. Per-test `{ retry: N }` overrides this value. Default `0` (no retries).
```toml title="bunfig.toml" icon="settings"
[test]
retry = 3
```
The `--retry` CLI flag will override this setting when specified.
#### rerunEach
Re-run each test file multiple times to identify flaky tests:
Use the `--retry` flag to automatically retry failed tests up to a given number of times. If a test fails and then passes on a subsequent attempt, it is reported as passing.
```sh terminal icon="terminal"
bun test --retry 3
```
Per-test `{ retry: N }` overrides the global `--retry` value:
```ts
// Uses the global --retry value
test("uses global retry", () => {
/* ... */
});
// Overrides --retry with its own value
test("custom retry", { retry: 1 }, () => {
/* ... */
});
```
You can also set this in `bunfig.toml`:
```toml title="bunfig.toml" icon="settings"
[test]
retry = 3
```
## Rerun tests
Use the `--rerun-each` flag to run each test multiple times. This is useful for detecting flaky or non-deterministic test failures.
# Use LLVM_VERSION_MAJOR from environment or default to 21
LLVM_VERSION="${LLVM_VERSION_MAJOR:-19}"
LLVM_VERSION="${LLVM_VERSION_MAJOR:-21}"
# Ensure we have the specific clang-format version
CLANG_FORMAT="clang-format-${LLVM_VERSION}"