### What does this PR do?
The `sendPong` fix alone wasn't sufficient. The bug only manifests with
**wss:// through an HTTP proxy** (not ws://), because only that path uses
`initWithTunnel` with a detached socket.
**Two bugs were found and fixed:**
1. **`sendPong`/`sendCloseWithBody` socket checks**
(`src/http/websocket_client.zig`): Replaced `socket.isClosed() or
socket.isShutdown()` with `!this.hasTCP()` as originally proposed. Also
guarded `shutdownRead()` against detached sockets.
2. **Spurious 1006 during clean close** (`src/http/websocket_client.zig`
+ `WebSocketProxyTunnel.zig`): When `sendCloseWithBody` calls
`clearData()`, it shuts down the proxy tunnel. The tunnel's `onClose`
callback was calling `ws.fail(ErrorCode.ended)`, which dispatched an abrupt
1006 close, overriding the clean 1000 close already in progress. Fixed
by adding `tunnel.clearConnectedWebSocket()` before `tunnel.shutdown()`
so the callback is a no-op.
### How did you verify your code works?
- `USE_SYSTEM_BUN=1`: Fails with `Unexpected close code: 1006`
- `bun bd test`: Passes with clean 1000 close
- Full proxy test suite: 25 pass, 4 skip, 0 fail
- Related fragmented/close tests: all passing
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Defer resolution of dynamic `import()` of unknown `node:` modules
(like `node:sqlite`) to runtime instead of failing at transpile time
- Fix use-after-poison in `addResolveError` by always duping `line_text`
from the source so Location data outlives the arena
Fixes #25707
## Root cause
When a CJS file is `require()`d, Bun's linker eagerly resolves all
import records, including dynamic `import()` expressions. For unknown
`node:` prefixed modules, `whenModuleNotFound` was only deferring
`.require` and `.require_resolve` to runtime — `.dynamic` imports fell
through to the error path, causing the entire `require()` to fail.
This broke Next.js builds with turbopack + `cacheComponents: true` +
Better Auth, because Kysely's dialect detection code uses
`import("node:sqlite")` inside a try/catch that gracefully handles the
module not being available.
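The pattern looks roughly like this (a minimal sketch of the dialect-detection shape, not Kysely's actual code); with this fix it resolves at runtime inside the try/catch instead of failing the whole `require()` at transpile time:
```ts
// Hypothetical minimal shape of the pattern described above.
async function hasNodeSqlite(): Promise<boolean> {
  try {
    // Unknown node: builtin: deferred to runtime now, rather than rejected at transpile time.
    await import("node:sqlite");
    return true;
  } catch {
    return false; // gracefully handle the module not being available
  }
}
```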
Additionally, when the resolve error was generated, the
`Location.line_text` was a slice into arena-allocated source contents.
The arena is reset before `processFetchLog` processes the error, causing
a use-after-poison when `Location.clone` tries to dupe the freed memory.
## Test plan
- [x] New regression test in `test/regression/issue/25707.test.ts` with
3 cases:
- CJS require of file with `import("node:sqlite")` inside try/catch
(turbopack pattern)
- CJS require of file with bare `import("node:sqlite")` (no try/catch)
- Runtime error produces correct `ERR_UNKNOWN_BUILTIN_MODULE` code
- [x] Test fails with `USE_SYSTEM_BUN=1` (system bun v1.3.9)
- [x] Test passes with `bun bd test`
- [x] No ASAN use-after-poison crash on debug build
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fix out-of-bounds read in the INI parser's `prepareStr` function when
a multi-byte UTF-8 lead byte appears at the end of a value with
insufficient continuation bytes
- Fix undefined behavior when bare continuation bytes (0x80-0xBF) cause
`utf8ByteSequenceLength` to return 0, hitting an `unreachable` branch
(UB in ReleaseFast builds)
- Add bounds checking before accessing `val[i+1]`, `val[i+2]`,
`val[i+3]` in both escaped and non-escaped code paths
The vulnerability could be triggered by a crafted `.npmrc` file
containing truncated UTF-8 sequences. In release builds, this could
cause OOB heap reads (potential info leak) or undefined behavior.
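A TypeScript sketch of the bounds-check idea (the actual fix is in the Zig `prepareStr` implementation; names here are illustrative):
```ts
// Illustrative only: derive the UTF-8 sequence length from the lead byte and verify
// all continuation bytes exist before reading val[i+1]..val[i+3].
function utf8SequenceLength(lead: number): number {
  if (lead < 0x80) return 1;
  if (lead >= 0xc2 && lead <= 0xdf) return 2;
  if (lead >= 0xe0 && lead <= 0xef) return 3;
  if (lead >= 0xf0 && lead <= 0xf4) return 4;
  return 0; // bare continuation byte (0x80-0xBF) or invalid lead: never "unreachable"
}

function canReadFullSequence(val: Uint8Array, i: number): boolean {
  const len = utf8SequenceLength(val[i]);
  return len > 0 && i + len <= val.length; // truncated sequence at end of value -> false
}
```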
## Test plan
- [x] Added 9 tests covering truncated 2/3/4-byte sequences, bare
continuation bytes, and escaped contexts
- [x] All 52 INI parser tests pass (`bun bd test
test/js/bun/ini/ini.test.ts`)
- [x] No regressions in npmrc tests (failures are pre-existing Verdaccio
connectivity issues)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Reject null bytes in `username`, `password`, `database`, and `path`
connection parameters for both PostgreSQL and MySQL to prevent wire
protocol parameter injection
- Both the Postgres and MySQL wire protocols use null-terminated strings
in their startup/handshake messages, so embedded null bytes in these
fields act as field terminators, allowing injection of arbitrary
protocol parameters (e.g. `search_path` for schema hijacking)
- The fix validates these fields immediately after UTF-8 conversion and
throws `InvalidArguments` error with a clear message if null bytes are
found
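A minimal sketch of the idea in TypeScript (the real validation runs in Zig right after UTF-8 conversion and throws `InvalidArguments`):
```ts
// Illustrative check: both wire protocols write these fields as NUL-terminated strings,
// so an embedded "\0" would terminate the field early and smuggle in extra parameters
// (e.g. "user\0search_path=evil").
function assertNoNullBytes(field: string, value: string): void {
  if (value.includes("\0")) {
    throw new Error(`${field} must not contain null bytes`); // real code uses InvalidArguments
  }
}

for (const [field, value] of Object.entries({ username: "app", password: "s3cret", database: "main" })) {
  assertNoNullBytes(field, value);
}
```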
## Test plan
- [x] New test
`test/regression/issue/postgres-null-byte-injection.test.ts` verifies:
- Null bytes in username are rejected with an error before any data is
sent
- Null bytes in database are rejected with an error before any data is
sent
- Null bytes in password are rejected with an error before any data is
sent
- Normal connections without null bytes still work correctly
- [x] Test verified to fail with `USE_SYSTEM_BUN=1` (unfixed bun) and
pass with `bun bd test` (fixed build)
- [x] Existing SQL tests pass (`adapter-env-var-precedence.test.ts`,
`postgres-stringbuilder-assertion-aggressive.test.ts`)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix WebSocket client pong frame handler to properly handle payloads
split across TCP segments, preventing frame desync that could cause
protocol confusion
- Add missing RFC 6455 Section 5.5 validation: control frame payloads
must not exceed 125 bytes (pong handler lacked this check, unlike ping
and close handlers)
## Details
The pong handler (lines 652-663) had two issues:
1. **Frame desync on fragmented delivery**: When a pong payload was
split across TCP segments (`data.len < receive_body_remain`), the
handler consumed only the available bytes but unconditionally reset
`receive_state = .need_header` and `receive_body_remain = 0`. The
remaining payload bytes in the next TCP delivery were then
misinterpreted as WebSocket frame headers.
2. **Missing payload length validation**: Unlike the ping handler (line
615) and close handler (line 680), the pong handler did not validate the
7-bit payload length against the RFC 6455 limit of 125 bytes for control
frames.
The fix models the pong handler after the existing ping handler pattern:
track partial delivery state with a `pong_received` boolean, buffer
incoming data into `ping_frame_bytes`, and only reset to `need_header`
after the complete payload has been consumed.
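A TypeScript sketch of that state handling (field names mirror the description, not the exact Zig code):
```ts
// Buffer partial pong payloads and only return to header parsing once the whole
// payload has arrived; also enforce the RFC 6455 125-byte limit for control frames.
interface PongState {
  pongReceived: boolean;
  pingFrameBytes: Uint8Array; // shared control-frame buffer
  received: number;
  receiveBodyRemain: number;
}

function onPongPayloadLength(len: number): void {
  if (len > 125) throw new Error("invalid control frame"); // the check the fix adds
}

function onPongBytes(state: PongState, data: Uint8Array): "need_header" | "need_body" {
  const toCopy = Math.min(data.length, state.receiveBodyRemain);
  state.pingFrameBytes.set(data.subarray(0, toCopy), state.received);
  state.received += toCopy;
  state.receiveBodyRemain -= toCopy;
  if (state.receiveBodyRemain > 0) {
    state.pongReceived = true;
    return "need_body"; // wait for the next TCP segment; do not reset yet
  }
  state.pongReceived = false;
  state.received = 0;
  return "need_header"; // reset only after the full payload was consumed
}
```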
## Test plan
- [x] New test `websocket-pong-fragmented.test.ts` verifies:
- Fragmented pong delivery (50-byte payload split into 2+48 bytes) does
not cause frame desync, and a subsequent text frame is received
correctly
- Pong frames with >125 byte payloads are rejected as invalid control
frames
- [x] Test fails with `USE_SYSTEM_BUN=1` (reproduces the bug) and passes
with `bun bd test`
- [x] Existing WebSocket tests pass: `websocket-client.test.ts`,
`websocket-close-fragmented.test.ts`,
`websocket-client-short-read.test.ts`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fixes HTTP header injection vulnerability in S3 client where
user-controlled options (`contentDisposition`, `contentEncoding`,
`type`) were passed to HTTP headers without CRLF validation
- Adds input validation at the JS-to-Zig boundary in
`src/s3/credentials.zig` that throws a `TypeError` if `\r` or `\n`
characters are detected
- An attacker could previously inject arbitrary headers (e.g.
`X-Amz-Security-Token`) by embedding `\r\n` in these string fields
## Test plan
- [x] Added `test/regression/issue/s3-header-injection.test.ts` with 6
tests:
- CRLF in `contentDisposition` throws
- CRLF in `contentEncoding` throws
- CRLF in `type` (content-type) throws
- Lone CR in `contentDisposition` throws
- Lone LF in `contentDisposition` throws
- Valid `contentDisposition` without CRLF still works correctly
- [x] Tests fail with `USE_SYSTEM_BUN=1` (confirming vulnerability
exists in current release)
- [x] Tests pass with `bun bd test` (confirming fix works)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix path traversal vulnerability in tarball directory extraction on
POSIX systems where `mkdiratZ` used the un-normalized `pathname` (raw
from tarball) instead of the normalized `path` variable, allowing `../`
components to escape the extraction root via kernel path resolution
- The Windows directory creation, symlink creation, and file creation
code paths already correctly used the normalized path — only the two
POSIX `mkdiratZ` calls were affected (lines 463 and 469)
- `bun install` is not affected because npm mode skips directory
entries; affected callers include `bun create`, GitHub tarball
extraction, and `compile_target`
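A sketch of the containment rule in TypeScript (the actual fix simply makes the POSIX `mkdiratZ` calls use the already-normalized `path` instead of the raw `pathname`):
```ts
// Illustrative: a directory entry must resolve inside the extraction root after
// "../" components are collapsed.
import path from "node:path";

function staysInsideRoot(root: string, entryName: string): boolean {
  const resolved = path.resolve(root, entryName);
  return resolved === root || resolved.startsWith(root + path.sep);
}

staysInsideRoot("/tmp/extract", "safe_dir/nested/");            // true
staysInsideRoot("/tmp/extract", "safe_dir/../../escaped_dir/"); // false: must not be created
```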
## Test plan
- [x] Added regression test that crafts a tarball with
`safe_dir/../../escaped_dir/` directory entry and verifies it cannot
create directories outside the extraction root
- [x] Verified test **fails** with system bun (vulnerable) and
**passes** with debug build (fixed)
- [x] Full `archive.test.ts` suite passes (99/99 tests)
- [x] `symlink-path-traversal.test.ts` continues to pass (3/3 tests)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- When `--bail` caused an early exit after a test failure, the JUnit
reporter output file (`--reporter-outfile`) was never written because
`Global.exit()` was called before the normal completion path
- Extracted the JUnit write logic into a `writeJUnitReportIfNeeded()`
method on `CommandLineReporter` and call it in both bail exit paths
(test failure and unhandled rejection) as well as the normal completion
path
Closes #26851
## Test plan
- [x] Added regression test `test/regression/issue/26851.test.ts` with
two cases:
- Single failing test file with `--bail` produces JUnit XML output
- Multiple test files where bail triggers on second file still writes
the report
- [x] Verified test fails with system bun (`USE_SYSTEM_BUN=1`)
- [x] Verified test passes with `bun bd test`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Add `expectMessageEventually(value)` to the bake test harness `Client`
class — waits for a specific message to appear, draining any
intermediate messages that arrived before it
- Rewrite "hmr handles rapid consecutive edits" test to use raw
`Bun.write` + sleep for intermediate edits and `expectMessageEventually`
for the final assertion, avoiding flaky failures when HMR batches
updates non-deterministically across platforms
Fixes a flaky failure on Windows where an extra "render 10" message
arrived after `expectMessage` consumed its expected messages but before
client disposal.
## Test plan
- [x] `bun bd test test/bake/dev-and-prod.test.ts` — all 12 tests pass
- [x] Ran the specific test multiple times to confirm no flakiness
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Alistair Smith <alistair@anthropic.com>
## Problem
The bundler's number renamer was mangling `.name` properties on crypto
class prototype methods and constructors:
- `hash.update.name` → `"update2"` instead of `"update"`
- `verify.verify.name` → `"verify2"` instead of `"verify"`
- `cipher.update.name` → `"update3"` instead of `"update"`
- `crypto.Hash.name` → `"Hash2"` instead of `"Hash"`
### Root causes
1. **Named function expressions on prototypes** collided with other
bindings after scope flattening (e.g. `Verify.prototype.verify =
function verify(...)` collided with the imported `verify`)
2. **Block-scoped constructor declarations** (`Hash`, `Hmac`) got
renamed when the bundler hoisted them out of block scope
3. **Shared function declarations** in the Cipher/Decipher block all got
numeric suffixes (`update3`, `final2`, `setAutoPadding2`, etc.)
## Fix
- Use `Object.assign` with object literals for prototype methods —
object literal property keys correctly infer `.name` and aren't subject
to the renamer
- Remove unnecessary block scopes around `Hash` and `Hmac` constructors
so they stay at module level and don't get renamed
- Inline `Cipheriv` methods and copy references to `Decipheriv`
## Tests
Added comprehensive `.name` tests for all crypto classes: Hash, Hmac,
Sign, Verify, Cipheriv, Decipheriv, DiffieHellman, ECDH, plus factory
functions and constructor names.
## What does this PR do?
Fixes `bun build --compile` producing an all-zeros binary when the
output directory is on a different filesystem than the temp directory.
This is common in Docker containers, Gitea runners, and other
environments using overlayfs.
## Problem
When `inject()` finishes writing the modified executable to the temp
file, the file descriptor's offset is at EOF. If the subsequent
`renameat()` to the output path fails with `EXDEV` (cross-device — the
temp file and output dir are on different filesystems), the code falls
back to `copyFileZSlowWithHandle()`, which:
1. Calls `fallocate()` to pre-allocate the output file to the correct
size (filled with zeros)
2. Calls `bun.copyFile(in_handle, out_handle)` — but `in_handle`'s
offset is at EOF
3. `copy_file_range` / `sendfile` / `read` all use the current file
offset (EOF), read 0 bytes, and return immediately
4. Result: output file is the correct size but entirely zeros
This explains user reports of `bun build --compile
--target=bun-darwin-arm64` producing invalid binaries that `file`
identifies as "data" rather than a Mach-O executable.
## Fix
Seek the input fd to offset 0 in `copyFileZSlowWithHandle` before
calling `bun.copyFile`.
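The same failure mode is easy to see with Node-style fd APIs (illustration only; the actual fix is a single seek in `copyFileZSlowWithHandle`):
```ts
// After writing, the fd's offset sits at EOF; a copy that uses the current offset reads 0 bytes.
import { openSync, writeSync, readSync, closeSync } from "node:fs";

const fd = openSync("/tmp/compile-demo.bin", "w+");
writeSync(fd, Buffer.from("executable bytes"));

const buf = Buffer.alloc(64);
console.log(readSync(fd, buf, 0, buf.length, null)); // 0: reads from the current offset (EOF)
console.log(readSync(fd, buf, 0, buf.length, 0));    // 16: reads the data after seeking back to 0
closeSync(fd);
```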
## How did you verify your code works?
- `bun bd` compiles successfully
- `bun bd test test/bundler/bun-build-compile.test.ts` — 6/6 pass
- Added tests that verify compiled binaries have valid executable
headers and produce correct output
- Manually verified cross-compilation: `bun build --compile
--target=bun-darwin-arm64` produces a valid Mach-O binary
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Update inline error snapshots in valkey reliability tests to match
Redis 8's changed error message format
- Redis 8 (`redis:8-alpine` in our test Docker container) no longer
appends `, with args beginning with: ` when an unknown command has no
arguments
## Root cause
Redis commit
[`25f780b6`](25f780b662)
([PR #14690](https://github.com/redis/redis/pull/14690)) changed
`commandCheckExistence()` in `src/server.c` to only append `, with args
beginning with: ` when there are actual arguments (`c->argc >= 2`).
Previously it was always appended, producing a dangling `, with args
beginning with: ` even with zero arguments.
## Changes
- `test/js/valkey/reliability/protocol-handling.test.ts`: Updated
`SYNTAX-ERROR` snapshot (no args case)
- `test/js/valkey/reliability/error-handling.test.ts`: Updated
`undefined` and `123` snapshots (no args cases)
## Test plan
- [ ] Verify `protocol-handling.test.ts` passes in CI (was failing on
every attempt as shown in #26869 / build #36831)
- [ ] Verify `error-handling.test.ts` passes in CI
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
#### (Copies commits from #26447)
## Summary
- Add a global `--retry <N>` flag to `bun test` that sets a default
retry count for all tests (overridable by per-test `{ retry: N }`). Also
configurable via `[test] retry = N` in bunfig.toml.
- When a test passes after one or more retries, the JUnit XML reporter
emits a separate `<testcase>` entry for each failed attempt (with
`<failure>`), followed by the final passing `<testcase>`. This gives
flaky test detection tools per-attempt timing and result data using
standard JUnit XML that all CI systems can parse.
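Per-test options still take precedence over the global flag; a sketch using the `bun:test` options-object form (treat the exact argument position as an assumption):
```ts
import { test, expect } from "bun:test";

// With `bun test --retry 2`, this test overrides the default and retries up to 5 times.
test(
  "occasionally flaky",
  () => {
    expect(Math.random()).toBeGreaterThanOrEqual(0);
  },
  { retry: 5 },
);
```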
## Test plan
- `bun bd test test/js/junit-reporter/junit.test.js` — verifies separate
`<testcase>` entries appear in JUnit XML for tests that pass after retry
- `bun bd test test/cli/test/retry-flag.test.ts` — verifies the
`--retry` CLI flag applies a default retry count to all tests
## Changelog
<!-- CHANGELOG:START -->
- Added `--retry <N>` flag to `bun test` to set a default retry count
for all tests
- Added `[test] retry` option to bunfig.toml
- JUnit XML reporter now emits separate `<testcase>` entries for each
retry attempt, providing CI visibility into flaky tests
<!-- CHANGELOG:END -->
---------
Co-authored-by: Chris Lloyd <chrislloyd@anthropic.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
Fixes "Unknown HMR script" error during rapid consecutive edits in HMR
## Test plan
- [x] Basic consecutive HMR edits work correctly
---------
Co-authored-by: Alistair Smith <alistair@anthropic.com>
## Summary
- Convert `file:` URL strings to filesystem paths via
`Bun.fileURLToPath()` in the JS layer for both `fs.watch` and
`fs.watchFile`/`fs.unwatchFile`
- Handles percent-decoding (e.g. `%20` → space) and proper URL parsing,
which the previous naive `slice[6..]` stripping in Zig could not do
- Zig-level `file://` stripping is left unchanged; the JS layer now
resolves file URLs before they reach native code
## Test plan
- [x] New test: `fs.watch` with `file:` URL string containing
`%20`-encoded spaces
- [x] New test: `fs.watchFile` with `file:` URL string containing
`%20`-encoded spaces
- [x] Both tests fail with `USE_SYSTEM_BUN=1` and pass with `bun bd
test`
- [x] Existing `fs.watch` "should work with url" test (URL object) still
passes
- [x] Full `fs.watchFile` suite passes (6 pass, 1 skip, 0 fail)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Implement complete lowering of TC39 stage-3 standard ES decorators
(the non-legacy variant used when tsconfig has no
`experimentalDecorators`).
- Passes all 147 esbuild decorator tests and 22 additional Bun-specific
tests (191 total, 0 failures).
- Supports method, getter, setter, field, auto-accessor, private member,
and class decorators in both statement and expression positions, with
proper evaluation order, class binding semantics, and decorator
metadata.
Fixes #4122, fixes #20206, fixes #14529, fixes #6051
## What's implemented
| Feature | Details |
|---|---|
| Method/getter/setter decorators | Static and instance, public and private |
| Field decorators | Initializer replacement + extra initializers via `__runInitializers` |
| Auto-accessor (`accessor` keyword) | Lowered to WeakMap storage + getter/setter pair |
| Private member decorators | WeakMap/WeakSet lowering with `__privateGet`/`__privateSet` |
| Class decorators | Statement and expression positions |
| Class expression decorators | Comma-expression lowering (no IIFE) |
| Decorator metadata | `Symbol.metadata` support via `__decoratorMetadata` |
| Evaluation order | All decorator expressions + computed keys evaluated in source order per TC39 spec |
| Class binding semantics | Separate inner/outer class name bindings (element vs class decorator closures) |
| Static block extraction | `this` replaced with class name ref when moved to suffix |
| Computed property keys | Pre-evaluated into temp variables for correct ordering |
## Runtime helpers
Added to `src/runtime.js` and registered in `src/runtime.zig`:
- `__decoratorStart(base)` — creates decorator context array
- `__decorateElement(array, flags, name, decorators, target, extra)` —
applies decorators to a class element
- `__decoratorMetadata(array, target)` — sets `Symbol.metadata` on the
class
- `__runInitializers(array, flags, self, value)` — runs
initializer/extra-initializer arrays
## Test plan
- [x] `bun bd test
test/bundler/transpiler/es-decorators-esbuild.test.ts` — **147/147
pass** (esbuild's full decorator test suite)
- [x] `bun bd test test/bundler/transpiler/es-decorators.test.ts` —
**22/22 pass**
- [x] `bun bd test test/bundler/transpiler/decorators.test.ts` — **22/22
pass** (legacy decorators still work)
- [x] E2E runtime verification of method, field, accessor, class,
private, and expression decorators
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
## Summary
Adds a **DenseArray fast path** for `structuredClone` / `postMessage`
that completely skips byte-buffer serialization when an
`ArrayWithContiguous` array contains **flat objects** (whose property
values are only primitives or strings).
This builds on #26814 which added fast paths for Int32/Double/Contiguous
arrays of primitives and strings. The main remaining slow case was
**arrays of objects** — the most common real-world pattern (e.g.
`[{name: "Alice", age: 30}, {name: "Bob", age: 25}]`). Previously, these
fell back to the full serialization path because the Contiguous fast
path rejected non-string cell elements.
## How it works
### Serialization
The existing Contiguous array handler is extended to recognize object
elements that pass `isObjectFastPathCandidate` (FinalObject, no
getters/setters, no indexed properties, all enumerable). For qualifying
objects, properties are collected into a `SimpleCloneableObject` struct
(reusing the existing `SimpleInMemoryPropertyTableEntry` type). The
result is stored as a `FixedVector<DenseArrayElement>` where
`DenseArrayElement = std::variant<JSValue, String,
SimpleCloneableObject>`.
If no object elements are found, the existing `SimpleArray` path is used
(no regression).
### Deserialization
A **Structure cache** avoids repeated Structure transitions when the
array contains many same-shape objects (the common case). The first
object is built via `constructEmptyObject` + `putDirect`, and its final
Structure + Identifiers are cached. Subsequent objects with matching
property names are created directly with `JSFinalObject::create(vm,
cachedStructure)`, skipping all transitions.
Safety guards:
- Cache is only used when property count AND all property names match
- Cache is disabled when `outOfLineCapacity() > 0` (properties exceed
`maxInlineCapacity`), since `JSFinalObject::create` cannot allocate a
butterfly
### Fallback conditions
| Condition | Behavior |
|-----------|----------|
| Elements are only primitives/strings | SimpleArray (existing) |
| Elements include `isObjectFastPathCandidate` objects | **DenseArray (NEW)** |
| Object property value is an object/array | Fallback to normal path |
| Elements include Date, RegExp, Map, Set, ArrayBuffer, etc. | Fallback to normal path |
| Array has holes | Fallback to normal path |
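For example, an input of this shape now takes the DenseArray path (illustrative):
```ts
// Flat objects whose property values are only primitives or strings qualify;
// adding a nested object/array to any element would fall back to the normal path.
const rows = Array.from({ length: 100 }, (_, i) => ({ name: `user${i}`, age: 20 + (i % 50), active: i % 2 === 0 }));
const copy = structuredClone(rows);
copy[0].name = "changed";
console.log(rows[0].name); // "user0": the clone is independent of the source
```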
## Benchmarks
Apple M4 Max, release build vs system Bun v1.3.8 and Node.js v24.12:
| Benchmark | Node.js v24.12 | Bun v1.3.8 | **This PR** | vs Bun | vs Node |
|-----------|---------------|------------|-------------|--------|---------|
| `[10 objects]` | 2.83 µs | 2.72 µs | **1.56 µs** | **1.7x** | **1.8x** |
| `[100 objects]` | 24.51 µs | 25.98 µs | **14.11 µs** | **1.8x** | **1.7x** |
## Test coverage
28 new edge-case tests covering:
- **Property value variants**: empty objects, special numbers (NaN,
Infinity, -0), null/undefined values, empty string keys, boolean-only
values, numeric string keys
- **Structure cache correctness**: alternating shapes, objects
interleaved with primitives, >maxInlineCapacity properties (100+), 1000
same-shape objects (stress test), repeated clone independence
- **Fallback correctness**: array property values, nested objects,
Date/RegExp/Map/Set/ArrayBuffer elements, getters, non-enumerable
properties, `Object.create(null)`, class instances
- **Frozen/sealed**: clones are mutable regardless of source
- **postMessage via MessageChannel**: mixed arrays with objects, empty
object arrays
## Changed files
- `src/bun.js/bindings/webcore/SerializedScriptValue.h` —
`SimpleCloneableObject`, `DenseArrayElement`, `FastPath::DenseArray`,
factory/constructor/member
- `src/bun.js/bindings/webcore/SerializedScriptValue.cpp` — serialize,
deserialize, `computeMemoryCost`
- `test/js/web/structured-clone-fastpath.test.ts` — 28 new tests
- `bench/snippets/structuredClone.mjs` — object array benchmarks
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Follow-up to #26819 ([review
comment](https://github.com/oven-sh/bun/pull/26819#discussion_r2781484939)).
Fixes `Buffer.slice()` / `Buffer.subarray()` on resizable `ArrayBuffer`
/ growable `SharedArrayBuffer` to return a **fixed-length view** instead
of a length-tracking view.
## Problem
The resizable/growable branch was passing `std::nullopt` to
`JSUint8Array::create()`, which creates a length-tracking view. When the
underlying buffer grows, the sliced view's length would incorrectly
expand:
```js
const rab = new ArrayBuffer(10, { maxByteLength: 20 });
const buf = Buffer.from(rab);
const sliced = buf.slice(0, 5);
sliced.length; // 5
rab.resize(20);
sliced.length; // was 10 (wrong), now 5 (correct)
```
Node.js specifies that `Buffer.slice()` always returns a fixed-length
view (verified on Node.js v22).
## Fix
Replace `std::nullopt` with `newLength` in the
`isResizableOrGrowableShared()` branch of
`jsBufferPrototypeFunction_sliceBody`.
## Test
Added a regression test that creates a `Buffer` from a resizable
`ArrayBuffer`, slices it, resizes the buffer, and verifies the slice
length doesn't change.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
Fixes the remaining kqueue filter comparison bug in
`packages/bun-usockets/src/eventing/epoll_kqueue.c` that caused
excessive CPU usage with network requests on macOS:
- **`us_loop_run_bun_tick` filter comparison (line 302-303):** kqueue
filter values (`EVFILT_READ=-1`, `EVFILT_WRITE=-2`) were compared using
bitwise AND (`&`) instead of equality (`==`). Since these are signed
negative integers (not bitmasks), `(-2) & (-1)` = `-2` (truthy), meaning
every `EVFILT_WRITE` event was also misidentified as `EVFILT_READ`. This
was already fixed in `us_loop_run` (by PR #25475) but the same bug
remained in `us_loop_run_bun_tick`, which is the primary event loop
function used by Bun.
This is a macOS-only issue (Linux uses epoll, which is unaffected).
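The arithmetic behind the bug, shown in TypeScript for illustration:
```ts
// EVFILT_READ and EVFILT_WRITE are signed filter identifiers, not bit flags.
const EVFILT_READ = -1;
const EVFILT_WRITE = -2;

console.log(EVFILT_WRITE & EVFILT_READ);   // -2, which is truthy: a write event "matches" the read branch
console.log(EVFILT_WRITE === EVFILT_READ); // false: equality is the correct comparison
```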
Closes #26811
## Test plan
- [x] Added regression test at `test/regression/issue/26811.test.ts`
that makes concurrent HTTPS POST requests
- [x] Test passes with `bun bd test test/regression/issue/26811.test.ts`
- [ ] Manual verification on macOS: run the reporter's [repro
script](https://gist.github.com/jkoppel/d26732574dfcdcc6bfc4958596054d2e)
and confirm CPU usage stays low
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
Add a fast path for `structuredClone` and `postMessage` when the root
value is a dense array of primitives or strings. This bypasses the full
`CloneSerializer`/`CloneDeserializer` machinery by keeping data in
native C++ structures instead of serializing to a byte stream.
**Important:** This optimization only applies when the root value passed
to `structuredClone()` / `postMessage()` is an array. Nested arrays
within objects still go through the normal serialization path.
## Implementation
Three tiers of array fast paths, checked in order:
| Tier | Indexing Type | Strategy | Applies When |
|------|--------------|----------|--------------|
| **Tier 1** | `ArrayWithInt32` | `memcpy` butterfly data | Dense int32 array, no holes, no named properties |
| **Tier 2** | `ArrayWithDouble` | `memcpy` butterfly data | Dense double array, no holes, no named properties |
| **Tier 3** | `ArrayWithContiguous` | Copy elements into `FixedVector<variant<JSValue, String>>` | Dense array of primitives/strings, no holes, no named properties |
All tiers fall through to the normal serialization path when:
- The array has holes that must forward to the prototype
- The array has named properties (e.g., `arr.foo = "bar"`) — checked via
`structure->maxOffset() != invalidOffset`
- Elements contain non-primitive, non-string values (objects, arrays,
etc.)
- The context requires wire-format serialization (storage, cross-process
transfer)
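A few illustrative inputs, per the conditions above:
```ts
// Which path each root value takes (illustrative only).
structuredClone([1, 2, 3]);            // Tier 1: dense int32 array (memcpy)
structuredClone([1.5, 2.5, 3.5]);      // Tier 2: dense double array (memcpy)
structuredClone(["a", 1, null, true]); // Tier 3: primitives/strings
structuredClone({ list: [1, 2, 3] });  // not eligible: root value is not an array

const arr = [1, 2] as number[] & { foo?: string };
arr.foo = "bar";
structuredClone(arr);                  // falls back so the named property survives
```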
### Deserialization
- **Tier 1/2:** Allocate a new `Butterfly` via `vm.auxiliarySpace()`,
`memcpy` data back, create array with `JSArray::createWithButterfly()`.
Falls back to normal deserialization if `isHavingABadTime` (forced
ArrayStorage mode).
- **Tier 3:** Pre-convert elements to `JSValue` (including `jsString()`
allocation), then use `JSArray::tryCreateUninitializedRestricted()` +
`initializeIndex()`.
## Benchmarks
Apple M4 Max, comparing system Bun 1.3.8 vs this branch (release build):
| Benchmark | Before | After | Speedup |
|-----------|--------|-------|---------|
| `structuredClone([10 numbers])` | 308.71 ns | 40.38 ns | **7.6x** |
| `structuredClone([100 numbers])` | 1.62 µs | 86.87 ns | **18.7x** |
| `structuredClone([1000 numbers])` | 13.79 µs | 544.56 ns | **25.3x** |
| `structuredClone([10 strings])` | 642.38 ns | 307.38 ns | **2.1x** |
| `structuredClone([100 strings])` | 5.67 µs | 2.57 µs | **2.2x** |
| `structuredClone([10 mixed])` | 446.32 ns | 198.35 ns | **2.3x** |
| `structuredClone(nested array)` | 1.84 µs | 1.79 µs | 1.0x (not eligible) |
| `structuredClone({a: 123})` | 95.98 ns | 100.07 ns | 1.0x (no regression) |
Int32 arrays see the largest gains (up to 25x) since they use a direct
`memcpy` of butterfly memory. String/mixed arrays see ~2x improvement.
No performance regression on non-eligible inputs.
## Bug Fix
Also fixes a correctness bug where arrays with named properties (e.g.,
`arr.foo = "bar"`) would lose those properties when going through the
array fast path. Added a `structure->maxOffset() != invalidOffset` guard
to fall back to normal serialization for such arrays.
Fixed a minor double-counting issue in `computeMemoryCost` where
`JSValue` elements in `SimpleArray` were counted both by `byteSize()`
and individually.
## Test Plan
38 tests in `test/js/web/structured-clone-fastpath.test.ts` covering:
- Basic array types: empty, numbers, strings, mixed primitives, special
numbers (`-0`, `NaN`, `Infinity`)
- Large arrays (10,000 elements)
- Tier 2: double arrays, Int32→Double transition
- Deep clone independence verification
- Named properties on Int32, Double, and Contiguous arrays
- `postMessage` via `MessageChannel` for Int32, Double, and mixed arrays
- Edge cases: frozen/sealed arrays, deleted elements (holes), `length`
extension, single-element arrays
- Prototype modification (custom prototype, indexed prototype properties
with holes)
- `Array` subclass identity loss (per spec)
- `undefined`-only and `null`-only arrays
- Multiple independent clones from the same source
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Add `minify.unwrapCJSToESM` JS API option and `--unwrap-cjs-to-esm` CLI
flag to force CJS-to-ESM conversion for specific packages, eliminating
the `__commonJS` wrapper. Supports wildcard patterns (e.g. `"@scope/*"`).
User entries extend the default React family list.
Also removes the react/react-dom version check that gated conversion,
and fixes `packageName()` to handle scoped packages (`@scope/pkg`).
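A hedged sketch of possible usage (the option's exact value shape is an assumption based on the description above):
```ts
// Hypothetical Bun.build usage; the pattern strings follow the wildcard support
// described above ("@scope/*"), and user entries extend the default React family list.
await Bun.build({
  entrypoints: ["./src/index.ts"],
  outdir: "./dist",
  minify: {
    unwrapCJSToESM: ["lodash", "@scope/*"],
  },
});
```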
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
- Fix crash ("Pure virtual function called!") when WebSocket client
receives binary data with `binaryType = "blob"` and no event listener
attached
- Add missing `incPendingActivityCount()` call before `postTask` in the
Blob case of `didReceiveBinaryData`
- Add regression test for issue #26669
## Root Cause
The Blob case in `didReceiveBinaryData` (WebSocket.cpp:1324-1331) was
calling `decPendingActivityCount()` inside the `postTask` callback
without a matching `incPendingActivityCount()` beforehand. This bug was
introduced in #21471 when Blob support was added.
The ArrayBuffer and NodeBuffer cases correctly call
`incPendingActivityCount()` before `postTask`, but the Blob case was
missing this call.
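Minimal shape of the scenario (illustrative):
```ts
// A client with binaryType = "blob" and no listeners attached; binary data arriving
// from the server previously triggered the crash described above (the regression
// test exercises this with ping frames).
const ws = new WebSocket("ws://localhost:8080");
ws.binaryType = "blob";
// intentionally no "message" handler registered
```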
## Test plan
- [x] New regression test verifies WebSocket with `binaryType = "blob"`
doesn't crash on ping frames
- [x] `bun bd test test/regression/issue/26669.test.ts` passes
Fixes #26669
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Ciro Spaciari MacBook <ciro@anthropic.com>
## Summary
`Bun.stringWidth` was incorrectly treating Thai SARA AA (U+0E32), SARA
AM (U+0E33), and their Lao equivalents (U+0EB2, U+0EB3) as zero-width
characters.
## Root Cause
In `src/string/immutable/visible.zig`, the range check for Thai/Lao
combining marks was too broad:
- Thai: `0xe31 <= cp <= 0xe3a` included U+0E32 and U+0E33
- Lao: `0xeb1 <= cp <= 0xebc` included U+0EB2 and U+0EB3
According to Unicode (UCD Grapheme_Break property), these are **spacing
vowels** (Grapheme_Base), not combining marks.
## Changes
- **`src/string/immutable/visible.zig`**: Exclude U+0E32, U+0E33,
U+0EB2, U+0EB3 from zero-width ranges
- **`test/js/bun/util/stringWidth.test.ts`**: Add tests for Thai and Lao
spacing vowels
## Before/After
| Character | Before | After |
|-----------|--------|-------|
| `\u0E32` (SARA AA) | 0 | 1 |
| `\u0E33` (SARA AM) | 0 | 1 |
| `คำ` (common Thai word) | 1 | 2 |
| `\u0EB2` (Lao AA) | 0 | 1 |
| `\u0EB3` (Lao AM) | 0 | 1 |
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
Enables the `net.Server → Http2SecureServer` connection upgrade pattern
used by libraries like
[http2-wrapper](https://github.com/szmarczak/http2-wrapper),
[crawlee](https://github.com/apify/crawlee), and custom HTTP/2 proxy
servers. This pattern works by accepting raw TCP connections on a
`net.Server` and forwarding them to an `Http2SecureServer` via
`h2Server.emit('connection', rawSocket)`.
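A condensed sketch of that pattern (certificate loading and error handling omitted):
```ts
import net from "node:net";
import http2 from "node:http2";
import { readFileSync } from "node:fs";

// The HTTP/2 TLS server never listens on a port itself.
const h2 = http2.createSecureServer({
  key: readFileSync("key.pem"),
  cert: readFileSync("cert.pem"),
});
h2.on("stream", (stream) => {
  stream.respond({ ":status": 200 });
  stream.end("ok");
});

// Raw TCP connections are accepted separately and handed to the H2 server.
const tcp = net.createServer((rawSocket) => {
  h2.emit("connection", rawSocket);
});
tcp.listen(8443);
```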
#### Bug fixes
**SSLWrapper use-after-free (Zig)**
Two use-after-free bugs in `ssl_wrapper.zig` are fixed:
1. **`flush()` stale pointer** — `flush()` captured the `ssl` pointer
*before* calling `handleTraffic()`, which can trigger a close callback
that frees the SSL object via `deinit`. The pointer was then used after
being freed. Fix: read `this.ssl` *after* `handleTraffic()` returns.
2. **`handleReading()` null dereference** — `handleReading()` called
`triggerCloseCallback()` after `triggerDataCallback()` without checking
whether the data callback had already closed the connection. This led to
a null function pointer dereference. Fix: check `this.ssl == null ||
this.flags.closed_notified` before calling the close callback.
### How did you verify your code works?
- Added **13 in-process tests** (`node-http2-upgrade.test.mts`) covering
the `net.Server → Http2SecureServer` upgrade path:
- GET/POST requests through upgraded connections
- Sequential requests sharing a single H2 session
- `session` event emission
- Concurrent clients with independent sessions
- Socket close ordering (rawSocket first vs session first) — no crash
- ALPN protocol negotiation (`h2`)
- Varied status codes (200, 302, 404)
- Client disconnect mid-response (stream destroyed early)
- Three independent clients producing three distinct sessions
- Tests use `node:test` + `node:assert` and **pass in both Bun and
Node.js**
- Ported `test-http2-socket-close.js` from the Node.js test suite,
verifying no segfault when the raw socket is destroyed before the H2
session is closed
---------
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Fixes a bug where sequential HTTP requests with proxy-style absolute
URLs (e.g. `GET http://example.com/path HTTP/1.1`) hang on the 2nd+
request when using keep-alive connections.
## Root Cause
In `packages/bun-uws/src/HttpParser.h`, the parser was treating
proxy-style absolute URLs identically to `CONNECT` method requests —
setting `isConnectRequest = true` and entering tunnel mode. This flag
was never reset between requests on the same keep-alive connection, so
the 2nd+ request was swallowed as raw tunnel data instead of being
parsed as HTTP.
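The failing scenario, illustrated with a raw keep-alive connection (host and port are placeholders):
```ts
import net from "node:net";

const request = (path: string) =>
  `GET http://example.com${path} HTTP/1.1\r\nHost: example.com\r\nConnection: keep-alive\r\n\r\n`;

const socket = net.connect(3000, "127.0.0.1", () => {
  socket.write(request("/first"));  // parsed fine
  socket.write(request("/second")); // previously swallowed as tunnel data; now parsed as HTTP
});
socket.on("data", (chunk) => console.log(chunk.toString()));
```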
## Fix
3-line change in `HttpParser.h:569`:
- **`isConnect`**: Now only matches actual `CONNECT` method requests
(removed `isHTTPorHTTPSPrefixForProxies` from the condition)
- **`isProxyStyleURL`**: New variable that detects `http://`/`https://`
prefixes and accepts them as valid request targets — without triggering
tunnel mode
## Who was affected
- Any Bun HTTP server (`Bun.serve()` or `node:http createServer`)
receiving proxy-style requests on keep-alive connections
- HTTP proxy servers built with Bun could only handle one request per
connection
- Bun's own HTTP client making sequential requests through an HTTP proxy
backed by a Bun server
## Test
Added `test/js/node/http/node-http-proxy-url.test.ts` with 3 test cases:
1. Sequential GET requests with absolute URL paths
2. Sequential POST requests with absolute URL paths
3. Mixed normal and proxy-style URLs
Tests run under both Node.js and Bun for compatibility verification.
- ❌ Fails with system bun (2/3 tests timeout on 2nd request)
- ✅ Passes with debug build (3/3 tests pass)
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Update LLVM version references across build scripts, Dockerfiles, CI,
Nix configs, and documentation
- Fix LLVM 21 `-Wcharacter-conversion` errors in WebKit bindings:
- `EncodingTables.h`: pragma for intentional char32_t/char16_t
comparisons
- `TextCodecCJK.cpp`: widen `gb18030AsymmetricEncode` param to char32_t
- `URLPatternParser`: widen `isValidNameCodepoint` param to char32_t,
cast for `startsWith`
- Fix `__libcpp_verbose_abort` noexcept mismatch (LLVM 21 uses
`_NOEXCEPT`)
- Fix dangling pointer in `BunJSCModule.h` (`toCString` temporary
lifetime)
- Remove `useMathSumPreciseMethod` (removed upstream in JSC)
**Before merging:** Merge https://github.com/oven-sh/WebKit/pull/153
first, then update `WEBKIT_VERSION` in `cmake/tools/SetupWebKit.cmake`
to point to the merged commit.
## Test plan
- [ ] Build bun debug on macOS with LLVM 21
- [ ] Build bun on Linux (glibc)
- [ ] Build bun on Linux (musl)
- [ ] Build bun on Windows
- [ ] Run test suite
Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Add `[Symbol.dispose]` to mock function prototype, aliased to
`mockRestore`
- Enables `using spy = spyOn(obj, "method")` to auto-restore when
leaving scope
- Works for both `spyOn()` and `mock()`
Addresses #6040 — gives users a clean way to scope spy lifetimes instead
of manually calling `mockRestore()` or relying on `afterEach`.
### Example
```ts
import { spyOn, expect, test } from "bun:test";
test("auto-restores spy", () => {
const obj = { method: () => "original" };
{
using spy = spyOn(obj, "method").mockReturnValue("mocked");
expect(obj.method()).toBe("mocked");
}
// automatically restored
expect(obj.method()).toBe("original");
});
```
## Test plan
- `bun bd test test/js/bun/test/mock-disposable.test.ts` — 3 tests pass
- Verified tests fail with `USE_SYSTEM_BUN=1`
## Summary
- Fix path normalization for "." on Windows where `normalizeStringBuf`
was incorrectly stripping it to an empty string
- This caused `existsSync('.')`, `statSync('.')`, and other fs
operations to fail on Windows
## Test plan
- Added regression test `test/regression/issue/26631.test.ts` that tests
`existsSync`, `exists`, `statSync`, and `stat` for both `.` and `..`
paths
- All tests pass locally with `bun bd test
test/regression/issue/26631.test.ts`
- Verified code compiles on all platforms with `bun run zig:check-all`
Fixes #26631
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
### What does this PR do?
Extract the NO_PROXY checking logic from `getHttpProxyFor` into a reusable
`isNoProxy` method on the env `Loader`. This allows both `fetch()` and
WebSocket to check NO_PROXY even when a proxy is explicitly provided via
the `proxy` option (not just via the `http_proxy` env var).
Changes:
- `env_loader.zig`: Extract `isNoProxy()` from `getHttpProxyFor()`
- `FetchTasklet.zig`: Check `isNoProxy()` before using the explicit proxy
- `WebSocket.cpp`: Check `Bun__isNoProxy()` before using the explicit proxy
- `virtual_machine_exports.zig`: Export `Bun__isNoProxy` for C++ access
- Add NO_PROXY tests for both fetch and WebSocket proxy paths
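The intended behavior, illustratively:
```ts
// An explicitly configured proxy is now skipped for hosts matched by NO_PROXY.
process.env.NO_PROXY = "internal.example.com";
await fetch("https://internal.example.com/health", {
  proxy: "http://proxy.example.com:8080", // ignored because the host matches NO_PROXY
});
```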
### How did you verify your code works?
Tests
---------
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
## Summary
- Fix type definition for `Socket.reload()` to match runtime behavior
- The runtime expects `{ socket: handler }` but types previously
accepted just `handler`
## Test plan
- [x] Added regression test `test/regression/issue/26290.test.ts`
- [x] Verified test passes with `bun bd test`
Fixes #26290
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Alistair Smith <hi@alistair.sh>
## Summary
- Adds missing SIMD variants to the `Build.Target` TypeScript type
- The runtime accepts targets like `bun-linux-x64-modern` but TypeScript
was rejecting them
- Generalized the type to use `${Architecture}` template where possible
## Test plan
- [x] Added regression test in `test/regression/issue/26247.test.ts`
that validates all valid target combinations type-check correctly
- [x] Verified with `bun bd test test/regression/issue/26247.test.ts`
Fixes #26247
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Alistair Smith <hi@alistair.sh>
### What does this PR do?
### How did you verify your code works?
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
## Summary
- Improve handling of fragmented chunk data in the HTTP parser
- Add test coverage for edge cases
## Test plan
- [x] New tests pass
- [x] Existing tests pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
### What does this PR do?
Fixes https://github.com/oven-sh/bun/issues/26597
### How did you verify your code works?
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Adds `bun run --parallel` and `bun run --sequential` — new flags for
running multiple package.json scripts concurrently or sequentially with
Foreman-style prefixed output. Includes full `--filter`/`--workspaces`
integration for running scripts across workspace packages.
### Usage
```bash
# Run "build" and "test" concurrently from the current package.json
bun run --parallel build test
# Run "build" and "test" sequentially with prefixed output
bun run --sequential build test
# Glob-matched script names
bun run --parallel "build:*"
# Run "build" in all workspace packages concurrently
bun run --parallel --filter '*' build
# Run "build" in all workspace packages sequentially
bun run --sequential --workspaces build
# Glob-matched scripts across all packages
bun run --parallel --filter '*' "build:*"
# Multiple scripts across all packages
bun run --parallel --filter '*' build lint test
# Continue running even if one package fails
bun run --parallel --no-exit-on-error --filter '*' test
# Skip packages missing the script
bun run --parallel --workspaces --if-present build
```
## How it works
### Output format
Each script's stdout/stderr is prefixed with a colored, padded label:
```
build | compiling...
test | running suite...
lint | checking files...
```
### Label format
- **Without `--filter`/`--workspaces`**: labels are just the script name
→ `build | output`
- **With `--filter`/`--workspaces`**: labels are `package:script` →
`pkg-a:build | output`
- **Fallback**: if a package.json has no `name` field, the relative path
from the workspace root is used (e.g., `packages/my-pkg:build`)
### Execution model
- **`--parallel`**: all scripts start immediately, output is interleaved
with prefixes
- **`--sequential`**: scripts run one at a time in order, each waiting
for the previous to finish
- **Pre/post scripts** (`prebuild`/`postbuild`) are grouped with their
main script and run in dependency order within each group
- By default, a failure kills all remaining scripts.
`--no-exit-on-error` lets all scripts finish.
### Workspace integration
The workspace branch in `multi_run.zig` uses a two-pass approach for
deterministic ordering:
1. **Collect**: iterate workspace packages using
`FilterArg.PackageFilterIterator` (same infrastructure as
`filter_run.zig`), filtering with `FilterArg.FilterSet`, collecting
matched packages with their scripts, PATH, and cwd.
2. **Sort**: sort matched packages by name (tiebreak by directory path)
for deterministic ordering — filesystem iteration order from the glob
walker is nondeterministic.
3. **Build configs**: for each sorted package, expand script names
(including globs like `build:*`) against that package's scripts map,
creating `ScriptConfig` entries with `pkg:script` labels and per-package
cwd/PATH.
### Behavioral consistency with `filter_run.zig`
| Behavior | `filter_run.zig` | `multi_run.zig` (this PR) |
|----------|-------------------|---------------------------|
| `--workspaces` skips root package | Yes | Yes |
| `--workspaces` errors on missing script | Yes | Yes |
| `--if-present` silently skips missing | Yes | Yes |
| `--filter` without `--workspaces` includes root | Yes (if matches) | Yes (if matches) |
| Pre/post script chains | Per-package | Per-package |
| Per-package cwd | Yes | Yes |
| Per-package PATH (`node_modules/.bin`) | Yes | Yes |
### Key implementation details
- Each workspace package script runs in its own package directory with
its own `node_modules/.bin` PATH
- `dirpath` from the glob walker is duped to avoid use-after-free when
the iterator's arena is freed between patterns
- `addScriptConfigs` takes an optional `label_prefix` parameter — `null`
for single-package mode, package name for workspace mode
- `MultiRunProcessHandle` is registered in the `ProcessExitHandler`
tagged pointer union in `process.zig`
## Files changed
| File | Change |
|------|--------|
| `src/cli/multi_run.zig` | New file: process management, output routing, workspace integration, dependency ordering |
| `src/cli.zig` | Dispatch to `MultiRun.run()` for `--parallel`/`--sequential`, new context fields |
| `src/cli/Arguments.zig` | Parse `--parallel`, `--sequential`, `--no-exit-on-error` flags |
| `src/bun.js/api/bun/process.zig` | Register `MultiRunProcessHandle` in `ProcessExitHandler` tagged pointer union |
| `test/cli/run/multi-run.test.ts` | 118 tests (102 core + 16 workspace integration) |
| `docs/pm/filter.mdx` | Document `--parallel`/`--sequential` + `--filter`/`--workspaces` combination |
| `docs/snippets/cli/run.mdx` | Add `--parallel`, `--sequential`, `--no-exit-on-error` parameter docs |
## Test plan
All 118 tests pass with debug build (`bun bd test
test/cli/run/multi-run.test.ts`). The 16 new workspace tests all fail
with system bun (`USE_SYSTEM_BUN=1`), confirming they test new
functionality.
### Workspace integration tests (16 tests)
1. `--parallel --filter='*'` runs script in all packages
2. `--parallel --filter='pkg-a'` runs only in matching package
3. `--parallel --workspaces` matches all workspace packages
4. `--parallel --filter='*'` with glob expands per-package scripts
5. `--sequential --filter='*'` runs in sequence (deterministic order)
6. Workspace + failure aborts other scripts
7. Workspace + `--no-exit-on-error` lets all finish
8. `--workspaces` skips root package
9. Each workspace script runs in its own package directory (cwd
verification)
10. Multiple script names across workspaces (`build` + `test`)
11. Pre/post scripts work per workspace package
12. `--filter` skips packages without the script (no error)
13. `--workspaces` errors when a package is missing the script
14. `--workspaces --if-present` skips missing scripts silently
15. Labels are padded correctly across workspace packages
16. Package without `name` field uses relative path as label
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
When a `SyntheticModule` callback was wrapped in an `AsyncContextFrame`
on the main globalObject (where async context tracking is enabled),
evaluating it on a `NodeVMGlobalObject` would crash because the tracking
flag wasn't propagated.
`AsyncContextFrame::call` checks `isAsyncContextTrackingEnabled()` to
decide whether to unwrap the frame — without the flag, it takes the fast
path and tries to call the `AsyncContextFrame` wrapper directly, which
is not callable.
The async context data (`m_asyncContextData`) was already shared between
parent and `NodeVMGlobalObject`, but the tracking flag was missing. This
adds propagation of `isAsyncContextTrackingEnabled` alongside the data.
**Repro:** `react-email` v5.2.5 preview server crashes when rendering a
template because it imports `node:async_hooks` (enabling async context
tracking) and uses `node:vm` `SyntheticModule` for module evaluation.
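A stripped-down sketch of that combination (standard `node:vm` module API; the exact usage in react-email differs):
```ts
import vm from "node:vm";
import { AsyncLocalStorage } from "node:async_hooks";

// Using AsyncLocalStorage turns on async context tracking on the main global,
// which wraps later callbacks in an AsyncContextFrame.
new AsyncLocalStorage().run({}, () => {});

const context = vm.createContext({});
const mod = new vm.SyntheticModule(
  ["answer"],
  function (this: vm.SyntheticModule) {
    this.setExport("answer", 42);
  },
  { context },
);
await mod.link(() => {
  throw new Error("synthetic module has no imports");
});
await mod.evaluate(); // previously crashed: the wrapped callback was invoked as-is on the vm global
```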
Fixes #26540
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Adds `--metafile-md` CLI option to `bun build` that generates a
markdown visualization of the module graph
- Designed to help Claude and other LLMs analyze bundle composition,
identify bloat, and understand dependency chains
- Reuses existing metafile JSON generation code as a post-processing
step
## Features
The generated markdown includes:
1. **Quick Summary** - Module counts, sizes, ESM/CJS breakdown,
output/input ratio
2. **Largest Input Files** - Sorted by size to identify potential bloat
3. **Entry Point Analysis** - Shows bundle size, exports, CSS bundles,
and bundled modules
4. **Dependency Chains** - Most commonly imported modules and reverse
dependencies
5. **Full Module Graph** - Complete import/export info for each module
6. **Raw Data for Searching** - Grep-friendly markers in code blocks:
- `[MODULE:]`, `[SIZE:]`, `[IMPORT:]`, `[IMPORTED_BY:]`
- `[ENTRY:]`, `[EXTERNAL:]`, `[NODE_MODULES:]`
## Usage
```bash
# Default filename (meta.md)
bun build entry.js --metafile-md --outdir=dist
# Custom filename
bun build entry.js --metafile-md=analysis.md --outdir=dist
# Both JSON and markdown
bun build entry.js --metafile=meta.json --metafile-md=meta.md --outdir=dist
```
## Example Output
See sample output: https://gist.github.com/example (will add)
## Test plan
- [x] Test default filename (`meta.md`)
- [x] Test custom filename
- [x] Test both `--metafile` and `--metafile-md` together
- [x] Test summary metrics
- [x] Test module format info (ESM/CJS)
- [x] Test external imports
- [x] Test exports list
- [x] Test bundled modules table
- [x] Test CSS bundle reference
- [x] Test import kinds (static, dynamic, require)
- [x] Test commonly imported modules
- [x] Test largest files sorting (bloat analysis)
- [x] Test output/input ratio
- [x] Test grep-friendly raw data section
- [x] Test entry point markers
- [x] Test external import markers
- [x] Test node_modules markers
All 17 new tests pass.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
## Summary
- Removes the `#!/bin/sh` shebang from placeholder `bin/bun.exe` and
`bin/bunx.exe` scripts in the npm package
- Fixes `npm i -g bun` being completely broken on Windows since v1.3.7
## Problem
PR #26259 added a `#!/bin/sh` shebang to the placeholder scripts to show
a helpful error when postinstall hasn't run. However, npm's `cmd-shim`
reads shebangs to generate `.ps1`/`.cmd` wrappers **before** postinstall
runs, and bakes the interpreter path into them. On Windows, the wrappers
referenced `/bin/sh` which doesn't exist, causing:
```
& "/bin/sh$exe" "$basedir/node_modules/bun/bin/bun.exe" $args
~~~~~~~~~~~~~
The term '/bin/sh.exe' is not recognized...
```
Even after postinstall successfully replaced the placeholder with the
real binary, the stale wrappers still tried to invoke `/bin/sh`.
## Fix
Remove the shebang. Without it, `cmd-shim` generates a direct invocation
wrapper that works after postinstall replaces the placeholder. On Unix,
bash/zsh still execute shebang-less files as shell scripts via ENOEXEC
fallback, so the helpful error message is preserved.
## Test plan
- [x] `bun bd test test/regression/issue/24329.test.ts` passes (2/2
tests)
- Manually verify `npm i -g bun` works on Windows
Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Refactors `tls-sql.test.ts` to use `describeWithContainer` with a
local Docker container instead of external Neon secrets
- Updates `postgres_tls` service to build from Dockerfile (fixes SSL key
permission issues)
- Fixes pg_hba.conf to allow local socket connections for init scripts
## Test plan
- [x] Verified tests pass locally with `bun bd test
test/js/sql/tls-sql.test.ts` (30 tests pass)
- [ ] CI passes on x64 Linux (arm64 Docker tests are currently disabled)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- `napi_typeof` was returning `napi_object` for `AsyncContextFrame`
values, which are internally callable JSObjects
- Native addons that check callback types (e.g. encore.dev's runtime)
would fail with `expect Function, got: Object` and panic
- Added a `jsDynamicCast<AsyncContextFrame*>` check before the final
`napi_object` fallback to correctly report these values as
`napi_function`
Closes #25933
## Test plan
- [x] Verify encore.dev + supertokens reproduction from the issue no
longer panics
- [ ] Existing napi tests continue to pass
Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fixed inverted logic in `canReceiveData` function in HTTP/2 stream
state handling
- Added gRPC streaming tests to verify correct behavior
## Problem
The `canReceiveData` function had completely inverted logic that
reported incorrect `remoteClose` status:
| Stream State | Before (Wrong) | After (Correct) |
|--------------|----------------|-----------------|
| OPEN | `false` (can't receive) | `true` (can receive) |
| HALF_CLOSED_LOCAL | `false` (can't receive) | `true` (can receive from remote) |
| HALF_CLOSED_REMOTE | `true` (can receive) | `false` (remote closed) |
| CLOSED | `true` (can receive) | `false` (stream done) |
Per RFC 7540 Section 5.1:
- In `HALF_CLOSED_LOCAL` state, the local endpoint has sent END_STREAM
but can still **receive** data from the remote peer
- In `HALF_CLOSED_REMOTE` state, the remote endpoint has sent END_STREAM
so no more data will be received
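In TypeScript terms, the corrected predicate is (a sketch; the real code lives in Bun's HTTP/2 stream state handling):
```ts
type StreamState = "OPEN" | "HALF_CLOSED_LOCAL" | "HALF_CLOSED_REMOTE" | "CLOSED";

// Data can still arrive until the *remote* peer has sent END_STREAM.
function canReceiveData(state: StreamState): boolean {
  return state === "OPEN" || state === "HALF_CLOSED_LOCAL";
}
```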
## Test plan
- [x] Added gRPC streaming tests covering unary, server streaming,
client streaming, and bidirectional streaming calls
- [x] Verified HTTP/2 test suite passes (same or fewer failures than
before)
- [x] Verified gRPC test suite improves (7 failures vs 9 failures before
+ 2 errors)
Closes #20875
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Fix ``$`...`.cwd(".")`` causing an ENOENT error with a path ending in
"undefined"
- The same fix applies to `.cwd("")` and `.cwd("./")`
- Falls back to `process.cwd()` when `defaultCwd` is undefined
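For example (previously each of these produced a cwd path ending in "undefined" and an ENOENT):
```ts
import { $ } from "bun";

await $`pwd`.cwd(".");  // now resolves to process.cwd()
await $`pwd`.cwd("");
await $`pwd`.cwd("./");
```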
Closes #26460
## Test plan
- [x] Added regression test in `test/regression/issue/26460.test.ts`
- [x] Verified test fails with `USE_SYSTEM_BUN=1` (reproduces the bug)
- [x] Verified test passes with `bun bd test` (fix works)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
Fix a bug in `appendOptionsEnv` where bare flags (no `=`) that aren't
the last option get a trailing space appended, causing the argument
parser to not recognize them.
For example, `BUN_OPTIONS="--cpu-prof --cpu-prof-dir=profiles"` would
parse `--cpu-prof` as `"--cpu-prof "` (trailing space), so CPU profiling
was never enabled.
## Root Cause
When `appendOptionsEnv` encounters a `--flag` followed by whitespace, it
advances past the whitespace looking for a possible quoted value (e.g.
`--flag "quoted"`). If no quote is found and there's no `=`, it falls
through without resetting `j`, so the emitted argument includes the
trailing whitespace.
## Fix
Save `end_of_flag = j` after scanning the flag name. Add an `else`
branch that resets `j = end_of_flag` when no value (quote or `=`) is
found after the whitespace. This is a 3-line change.
Also fixes a separate bug in `BunCPUProfiler.zig` where `--cpu-prof-dir`
with an absolute path would hit a debug assertion (`path.append` on an
already-rooted path with an absolute input). Changed to `path.join`
which handles both relative and absolute paths correctly.
## Tests
- `test/cli/env/bun-options.test.ts`: Two new tests verifying
`--cpu-prof --cpu-prof-dir=<abs-path>` produces a `.cpuprofile` file,
for both normal and standalone compiled executables.