Commit Graph

4597 Commits

Author SHA1 Message Date
Dylan Conway
c19dcb3181 fix(shell): reject non-finite seq args and handle empty condexpr args (#26993)
## Summary
- **`seq inf` / `seq nan` / `seq -inf` hang**: `std.fmt.parseFloat`
accepts non-finite float values like `inf`, `nan`, `-inf`, but the loop
`while (current <= this._end)` never terminates when `_end` is infinity.
Now rejects non-finite values after parsing.
- **`[[ -d "" ]]` out-of-bounds panic**: Empty string expansion produces
no args in the args list, but `doStat()` unconditionally accesses
`args.items[0]`. Now checks `args.items.len == 0` before calling
`doStat()` and returns exit code 1 (path doesn't exist).

## Test plan
- [x] `seq inf`, `seq nan`, `seq -inf` return exit code 1 with "invalid
argument" instead of hanging (sketched below)
- [x] `[[ -d "" ]]` and `[[ -f "" ]]` return exit code 1 instead of
panicking
- [x] `seq 3` still works normally (produces 1, 2, 3)
- [x] `[[ -d /tmp ]]`, `[[ -f /etc/hostname ]]` still work correctly
- [x] Tests pass with `bun bd test`, seq tests fail with
`USE_SYSTEM_BUN=1`
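
A minimal sketch of the kind of regression check described above, assuming Bun's `$` shell API and `bun:test` (the actual test file and names in the PR may differ):

```ts
import { expect, test } from "bun:test";
import { $ } from "bun";

test("seq rejects non-finite arguments", async () => {
  for (const arg of ["inf", "nan", "-inf"]) {
    // Before the fix this would hang; now it should exit 1 with "invalid argument".
    const result = await $`seq ${arg}`.nothrow().quiet();
    expect(result.exitCode).toBe(1);
  }
});

test('[[ -d "" ]] exits 1 instead of panicking', async () => {
  const result = await $`[[ -d "" ]]`.nothrow();
  expect(result.exitCode).toBe(1);
});
```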

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 18:01:40 -08:00
Jarred Sumner
c57af9df38 Fix websocket proxy ping crash (#26995)
### What does this PR do?

The `sendPong` fix alone wasn't sufficient. The bug only manifests with
**wss:// through HTTP proxy** (not ws://), because only that path uses
`initWithTunnel` with a detached socket.

**Two bugs were found and fixed:**

1. **`sendPong`/`sendCloseWithBody` socket checks**
(`src/http/websocket_client.zig`): Replaced `socket.isClosed() or
socket.isShutdown()` with `!this.hasTCP()` as originally proposed. Also
guarded `shutdownRead()` against detached sockets.

2. **Spurious 1006 during clean close** (`src/http/websocket_client.zig`
+ `WebSocketProxyTunnel.zig`): When `sendCloseWithBody` calls
`clearData()`, it shuts down the proxy tunnel. The tunnel's `onClose`
callback was calling `ws.fail(ErrorCode.ended)` which dispatched a 1006
abrupt close, overriding the clean 1000 close already in progress. Fixed
by adding `tunnel.clearConnectedWebSocket()` before `tunnel.shutdown()`
so the callback is a no-op.

### How did you verify your code works?

- `USE_SYSTEM_BUN=1`: Fails with `Unexpected close code: 1006`
- `bun bd test`: Passes with clean 1000 close
- Full proxy test suite: 25 pass, 4 skip, 0 fail
- Related fragmented/close tests: all passing

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-13 14:49:40 -08:00
robobun
7afead629c fix(linker): defer dynamic import() of unknown node: modules to runtime (#26981)
## Summary

- Defer resolution of dynamic `import()` of unknown `node:` modules
(like `node:sqlite`) to runtime instead of failing at transpile time
- Fix use-after-poison in `addResolveError` by always duping `line_text`
from the source so Location data outlives the arena

Fixes #25707

## Root cause

When a CJS file is `require()`d, Bun's linker eagerly resolves all
import records, including dynamic `import()` expressions. For unknown
`node:` prefixed modules, `whenModuleNotFound` was only deferring
`.require` and `.require_resolve` to runtime — `.dynamic` imports fell
through to the error path, causing the entire `require()` to fail.

This broke Next.js builds with turbopack + `cacheComponents: true` +
Better Auth, because Kysely's dialect detection code uses
`import("node:sqlite")` inside a try/catch that gracefully handles the
module not being available.
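
A hedged sketch of the pattern that previously failed at transpile time (the function name is illustrative; the real regression test lives in `test/regression/issue/25707.test.ts`):

```ts
// Kysely-style dialect detection: the containing file is require()d as CommonJS,
// and the dynamic import of an unknown node: builtin is now deferred to runtime.
async function detectNodeSqlite(): Promise<boolean> {
  try {
    await import("node:sqlite"); // resolution no longer fails during transpilation
    return true;
  } catch {
    // At runtime, an unavailable builtin surfaces as ERR_UNKNOWN_BUILTIN_MODULE
    // and is handled gracefully here.
    return false;
  }
}
```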

Additionally, when the resolve error was generated, the
`Location.line_text` was a slice into arena-allocated source contents.
The arena is reset before `processFetchLog` processes the error, causing
a use-after-poison when `Location.clone` tries to dupe the freed memory.

## Test plan

- [x] New regression test in `test/regression/issue/25707.test.ts` with
3 cases:
- CJS require of file with `import("node:sqlite")` inside try/catch
(turbopack pattern)
  - CJS require of file with bare `import("node:sqlite")` (no try/catch)
  - Runtime error produces correct `ERR_UNKNOWN_BUILTIN_MODULE` code
- [x] Test fails with `USE_SYSTEM_BUN=1` (system bun v1.3.9)
- [x] Test passes with `bun bd test`
- [x] No ASAN use-after-poison crash on debug build


🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-02-12 14:47:31 -08:00
robobun
7a801fcf93 fix(ini): prevent OOB read and UB on truncated/invalid UTF-8 in INI parser (#26947)
## Summary

- Fix out-of-bounds read in the INI parser's `prepareStr` function when
a multi-byte UTF-8 lead byte appears at the end of a value with
insufficient continuation bytes
- Fix undefined behavior when bare continuation bytes (0x80-0xBF) cause
`utf8ByteSequenceLength` to return 0, hitting an `unreachable` branch
(UB in ReleaseFast builds)
- Add bounds checking before accessing `val[i+1]`, `val[i+2]`,
`val[i+3]` in both escaped and non-escaped code paths

The vulnerability could be triggered by a crafted `.npmrc` file
containing truncated UTF-8 sequences. In release builds, this could
cause OOB heap reads (potential info leak) or undefined behavior.

## Test plan

- [x] Added 9 tests covering truncated 2/3/4-byte sequences, bare
continuation bytes, and escaped contexts
- [x] All 52 INI parser tests pass (`bun bd test
test/js/bun/ini/ini.test.ts`)
- [x] No regressions in npmrc tests (failures are pre-existing Verdaccio
connectivity issues)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-12 00:28:44 -08:00
robobun
44541eb574 fix(sql): reject null bytes in connection parameters to prevent protocol injection (#26952)
## Summary

- Reject null bytes in `username`, `password`, `database`, and `path`
connection parameters for both PostgreSQL and MySQL to prevent wire
protocol parameter injection
- Both the Postgres and MySQL wire protocols use null-terminated strings
in their startup/handshake messages, so embedded null bytes in these
fields act as field terminators, allowing injection of arbitrary
protocol parameters (e.g. `search_path` for schema hijacking)
- The fix validates these fields immediately after UTF-8 conversion and
throws an `InvalidArguments` error with a clear message if null bytes are
found (see the sketch after this list)
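
A minimal sketch of that validation, in TypeScript for illustration (the real check lives in Zig and throws `InvalidArguments`):

```ts
// Both wire protocols use NUL-terminated strings in their startup/handshake
// messages, so an embedded "\0" would terminate the field early and let the
// remaining bytes be parsed as extra protocol parameters.
function assertNoNullBytes(name: string, value: string): void {
  if (value.includes("\0")) {
    throw new TypeError(`${name} must not contain null bytes`);
  }
}

assertNoNullBytes("username", "app_user"); // ok
// assertNoNullBytes("username", "app\0search_path=evil"); // throws
```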

## Test plan

- [x] New test
`test/regression/issue/postgres-null-byte-injection.test.ts` verifies:
- Null bytes in username are rejected with an error before any data is
sent
- Null bytes in database are rejected with an error before any data is
sent
- Null bytes in password are rejected with an error before any data is
sent
  - Normal connections without null bytes still work correctly
- [x] Test verified to fail with `USE_SYSTEM_BUN=1` (unfixed bun) and
pass with `bun bd test` (fixed build)
- [x] Existing SQL tests pass (`adapter-env-var-precedence.test.ts`,
`postgres-stringbuilder-assertion-aggressive.test.ts`)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-12 00:27:00 -08:00
robobun
a68393926b fix(ws): handle fragmented pong frames and validate control frame size (#26944)
## Summary

- Fix WebSocket client pong frame handler to properly handle payloads
split across TCP segments, preventing frame desync that could cause
protocol confusion
- Add missing RFC 6455 Section 5.5 validation: control frame payloads
must not exceed 125 bytes (pong handler lacked this check, unlike ping
and close handlers)

## Details

The pong handler (lines 652-663) had two issues:

1. **Frame desync on fragmented delivery**: When a pong payload was
split across TCP segments (`data.len < receive_body_remain`), the
handler consumed only the available bytes but unconditionally reset
`receive_state = .need_header` and `receive_body_remain = 0`. The
remaining payload bytes in the next TCP delivery were then
misinterpreted as WebSocket frame headers.

2. **Missing payload length validation**: Unlike the ping handler (line
615) and close handler (line 680), the pong handler did not validate the
7-bit payload length against the RFC 6455 limit of 125 bytes for control
frames.

The fix models the pong handler after the existing ping handler pattern:
track partial delivery state with a `pong_received` boolean, buffer
incoming data into `ping_frame_bytes`, and only reset to `need_header`
after the complete payload has been consumed.
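
As a language-neutral illustration of that buffering rule (TypeScript here; the real code is Zig):

```ts
// Hedged sketch: only return to "need_header" once the full control-frame
// payload has been consumed, even if it arrives across multiple TCP reads.
type State = { mode: "need_header" | "pong_payload"; remaining: number; buffered: number[] };

function consumePongBytes(state: State, data: Uint8Array): State {
  const take = Math.min(data.length, state.remaining);
  state.buffered.push(...data.subarray(0, take));
  state.remaining -= take;
  // Before the fix, the state was reset here unconditionally, so leftover
  // payload bytes in the next TCP read were misread as a new frame header.
  if (state.remaining === 0) {
    return { mode: "need_header", remaining: 0, buffered: state.buffered };
  }
  return state;
}
```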

## Test plan

- [x] New test `websocket-pong-fragmented.test.ts` verifies:
- Fragmented pong delivery (50-byte payload split into 2+48 bytes) does
not cause frame desync, and a subsequent text frame is received
correctly
- Pong frames with >125 byte payloads are rejected as invalid control
frames
- [x] Test fails with `USE_SYSTEM_BUN=1` (reproduces the bug) and passes
with `bun bd test`
- [x] Existing WebSocket tests pass: `websocket-client.test.ts`,
`websocket-close-fragmented.test.ts`,
`websocket-client-short-read.test.ts`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-11 23:12:28 -08:00
robobun
e8a5f23385 fix(s3): reject CRLF characters in header values to prevent header injection (#26942)
## Summary

- Fixes HTTP header injection vulnerability in S3 client where
user-controlled options (`contentDisposition`, `contentEncoding`,
`type`) were passed to HTTP headers without CRLF validation
- Adds input validation at the JS-to-Zig boundary in
`src/s3/credentials.zig` that throws a `TypeError` if `\r` or `\n`
characters are detected
- An attacker could previously inject arbitrary headers (e.g.
`X-Amz-Security-Token`) by embedding `\r\n` in these string fields

## Test plan

- [x] Added `test/regression/issue/s3-header-injection.test.ts` with 6
tests:
  - CRLF in `contentDisposition` throws
  - CRLF in `contentEncoding` throws
  - CRLF in `type` (content-type) throws
  - Lone CR in `contentDisposition` throws
  - Lone LF in `contentDisposition` throws
  - Valid `contentDisposition` without CRLF still works correctly
- [x] Tests fail with `USE_SYSTEM_BUN=1` (confirming vulnerability
exists in current release)
- [x] Tests pass with `bun bd test` (confirming fix works)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-11 23:02:39 -08:00
robobun
16b3e7cde7 fix(libarchive): use normalized path in mkdiratZ to prevent directory traversal (#26956)
## Summary

- Fix path traversal vulnerability in tarball directory extraction on
POSIX systems where `mkdiratZ` used the un-normalized `pathname` (raw
from tarball) instead of the normalized `path` variable, allowing `../`
components to escape the extraction root via kernel path resolution
- The Windows directory creation, symlink creation, and file creation
code paths already correctly used the normalized path — only the two
POSIX `mkdiratZ` calls were affected (lines 463 and 469)
- `bun install` is not affected because npm mode skips directory
entries; affected callers include `bun create`, GitHub tarball
extraction, and `compile_target`

## Test plan

- [x] Added regression test that crafts a tarball with
`safe_dir/../../escaped_dir/` directory entry and verifies it cannot
create directories outside the extraction root
- [x] Verified test **fails** with system bun (vulnerable) and
**passes** with debug build (fixed)
- [x] Full `archive.test.ts` suite passes (99/99 tests)
- [x] `symlink-path-traversal.test.ts` continues to pass (3/3 tests)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-11 22:47:41 -08:00
robobun
3e792d0d2e fix(test): write JUnit reporter outfile when --bail triggers early exit (#26852)
## Summary
- When `--bail` caused an early exit after a test failure, the JUnit
reporter output file (`--reporter-outfile`) was never written because
`Global.exit()` was called before the normal completion path
- Extracted the JUnit write logic into a `writeJUnitReportIfNeeded()`
method on `CommandLineReporter` and call it in both bail exit paths
(test failure and unhandled rejection) as well as the normal completion
path

Closes #26851

## Test plan
- [x] Added regression test `test/regression/issue/26851.test.ts` with
two cases:
  - Single failing test file with `--bail` produces JUnit XML output
- Multiple test files where bail triggers on second file still writes
the report
- [x] Verified test fails with system bun (`USE_SYSTEM_BUN=1`)
- [x] Verified test passes with `bun bd test`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-11 17:41:45 -08:00
robobun
b7d505b6c1 deflake: make HMR rapid edits test event-driven (#26890)
## Summary
- Add `expectMessageEventually(value)` to the bake test harness `Client`
class — waits for a specific message to appear, draining any
intermediate messages that arrived before it
- Rewrite "hmr handles rapid consecutive edits" test to use raw
`Bun.write` + sleep for intermediate edits and `expectMessageEventually`
for the final assertion, avoiding flaky failures when HMR batches
updates non-deterministically across platforms

Fixes flaky failure on Windows where an extra "render 10" message
arrived after `expectMessage` consumed its expected messages but before
client disposal.

## Test plan
- [x] `bun bd test test/bake/dev-and-prod.test.ts` — all 12 tests pass
- [x] Ran the specific test multiple times to confirm no flakiness

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Alistair Smith <alistair@anthropic.com>
2026-02-11 16:05:25 -08:00
Dylan Conway
50e478dcdc fix(crypto): correct method and constructor names mangled by number renamer (#26876)
## Problem

The bundler's number renamer was mangling `.name` properties on crypto
class prototype methods and constructors:

- `hash.update.name` → `"update2"` instead of `"update"`
- `verify.verify.name` → `"verify2"` instead of `"verify"`
- `cipher.update.name` → `"update3"` instead of `"update"`
- `crypto.Hash.name` → `"Hash2"` instead of `"Hash"`

### Root causes

1. **Named function expressions on prototypes** collided with other
bindings after scope flattening (e.g. `Verify.prototype.verify =
function verify(...)` collided with the imported `verify`)
2. **Block-scoped constructor declarations** (`Hash`, `Hmac`) got
renamed when the bundler hoisted them out of block scope
3. **Shared function declarations** in the Cipher/Decipher block all got
numeric suffixes (`update3`, `final2`, `setAutoPadding2`, etc.)

## Fix

- Use `Object.assign` with object literals for prototype methods —
object literal property keys correctly infer `.name` and aren't subject
to the renamer (see the sketch after this list)
- Remove unnecessary block scopes around `Hash` and `Hmac` constructors
so they stay at module level and don't get renamed
- Inline `Cipheriv` methods and copy references to `Decipheriv`
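
A quick illustration of why the `Object.assign` form survives bundling (`Hash` here is a stand-in, not the actual crypto class):

```ts
class Hash {}

// Object literal method keys infer `.name` from the key itself, and the key is
// a property name rather than a binding, so the number renamer never touches it.
Object.assign(Hash.prototype, {
  update(_data: unknown) {
    return this;
  },
});

console.log((Hash.prototype as { update: Function }).update.name); // "update"
```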

## Tests

Added comprehensive `.name` tests for all crypto classes: Hash, Hmac,
Sign, Verify, Cipheriv, Decipheriv, DiffieHellman, ECDH, plus factory
functions and constructor names.
2026-02-10 23:06:22 -08:00
robobun
ba6e84fecd fix(compile): seek to start of file before EXDEV cross-device copy (#26883)
## What does this PR do?

Fixes `bun build --compile` producing an all-zeros binary when the
output directory is on a different filesystem than the temp directory.
This is common in Docker containers, Gitea runners, and other
environments using overlayfs.

## Problem

When `inject()` finishes writing the modified executable to the temp
file, the file descriptor's offset is at EOF. If the subsequent
`renameat()` to the output path fails with `EXDEV` (cross-device — the
temp file and output dir are on different filesystems), the code falls
back to `copyFileZSlowWithHandle()`, which:

1. Calls `fallocate()` to pre-allocate the output file to the correct
size (filled with zeros)
2. Calls `bun.copyFile(in_handle, out_handle)` — but `in_handle`'s
offset is at EOF
3. `copy_file_range` / `sendfile` / `read` all use the current file
offset (EOF), read 0 bytes, and return immediately
4. Result: output file is the correct size but entirely zeros

This explains user reports of `bun build --compile
--target=bun-darwin-arm64` producing invalid binaries that `file`
identifies as "data" rather than a Mach-O executable.

## Fix

Seek the input fd to offset 0 in `copyFileZSlowWithHandle` before
calling `bun.copyFile`.

## How did you verify your code works?

- `bun bd` compiles successfully
- `bun bd test test/bundler/bun-build-compile.test.ts` — 6/6 pass
- Added tests that verify compiled binaries have valid executable
headers and produce correct output
- Manually verified cross-compilation: `bun build --compile
--target=bun-darwin-arm64` produces a valid Mach-O binary

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-10 22:32:31 -08:00
robobun
4c205486d6 fix(test): update valkey test snapshots for Redis 8 error message format (#26878)
## Summary
- Update inline error snapshots in valkey reliability tests to match
Redis 8's changed error message format
- Redis 8 (`redis:8-alpine` in our test Docker container) no longer
appends `, with args beginning with: ` when an unknown command has no
arguments

## Root cause
Redis commit
[`25f780b6`](25f780b662)
([PR #14690](https://github.com/redis/redis/pull/14690)) changed
`commandCheckExistence()` in `src/server.c` to only append `, with args
beginning with: ` when there are actual arguments (`c->argc >= 2`).
Previously it was always appended, producing a dangling `, with args
beginning with: ` even with zero arguments.

## Changes
- `test/js/valkey/reliability/protocol-handling.test.ts`: Updated
`SYNTAX-ERROR` snapshot (no args case)
- `test/js/valkey/reliability/error-handling.test.ts`: Updated
`undefined` and `123` snapshots (no args cases)

## Test plan
- [ ] Verify `protocol-handling.test.ts` passes in CI (was failing on
every attempt as shown in #26869 / build #36831)
- [ ] Verify `error-handling.test.ts` passes in CI

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-10 16:30:55 -08:00
Alistair Smith
099b5e430c feat(test): add --retry flag and emit separate testcase entries for retries in JUnit XML (#26866)
#### (Copies commits from #26447)

## Summary
- Add a global `--retry <N>` flag to `bun test` that sets a default
retry count for all tests (overridable by per-test `{ retry: N }`). Also
configurable via `[test] retry = N` in bunfig.toml.
- When a test passes after one or more retries, the JUnit XML reporter
emits a separate `<testcase>` entry for each failed attempt (with
`<failure>`), followed by the final passing `<testcase>`. This gives
flaky test detection tools per-attempt timing and result data using
standard JUnit XML that all CI systems can parse.

## Test plan
- `bun bd test test/js/junit-reporter/junit.test.js` — verifies separate
`<testcase>` entries appear in JUnit XML for tests that pass after retry
- `bun bd test test/cli/test/retry-flag.test.ts` — verifies the
`--retry` CLI flag applies a default retry count to all tests

## Changelog
<!-- CHANGELOG:START -->
- Added `--retry <N>` flag to `bun test` to set a default retry count
for all tests
- Added `[test] retry` option to bunfig.toml
- JUnit XML reporter now emits separate `<testcase>` entries for each
retry attempt, providing CI visibility into flaky tests
<!-- CHANGELOG:END -->

---------

Co-authored-by: Chris Lloyd <chrislloyd@anthropic.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-10 10:58:21 -08:00
Przemysław Rekucki
746771d495 fix(bake): queue pending HMR scripts for reuse (#19736) (#23995)
## Summary

Fixes "Unknown HMR script" error during rapid consecutive edits in HMR

## Test plan
- [x] Basic consecutive HMR edits work correctly

---------

Co-authored-by: Alistair Smith <alistair@anthropic.com>
2026-02-10 01:53:07 -08:00
robobun
ead4a8c45d fix(fs): properly handle file: URL strings in fs.watch and fs.watchFile (#26859)
## Summary
- Convert `file:` URL strings to filesystem paths via
`Bun.fileURLToPath()` in the JS layer for both `fs.watch` and
`fs.watchFile`/`fs.unwatchFile`
- Handles percent-decoding (e.g. `%20` → space) and proper URL parsing,
which the previous naive `slice[6..]` stripping in Zig could not do
- Zig-level `file://` stripping is left unchanged; the JS layer now
resolves file URLs before they reach native code

## Test plan
- [x] New test: `fs.watch` with `file:` URL string containing
`%20`-encoded spaces
- [x] New test: `fs.watchFile` with `file:` URL string containing
`%20`-encoded spaces
- [x] Both tests fail with `USE_SYSTEM_BUN=1` and pass with `bun bd
test`
- [x] Existing `fs.watch` "should work with url" test (URL object) still
passes
- [x] Full `fs.watchFile` suite passes (6 pass, 1 skip, 0 fail)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-09 22:22:22 -08:00
Jarred Sumner
ce715b5a0f feat(transpiler): implement TC39 standard ES decorators lowering (#26436)
## Summary

- Implement complete lowering of TC39 stage-3 standard ES decorators
(the non-legacy variant used when tsconfig has no
`experimentalDecorators`).
- Passes all 147 esbuild decorator tests and 22 additional Bun-specific
tests (191 total, 0 failures).
- Supports method, getter, setter, field, auto-accessor, private member,
and class decorators in both statement and expression positions, with
proper evaluation order, class binding semantics, and decorator
metadata.

Fixes #4122
Fixes #20206
Fixes #14529
Fixes #6051
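
For reference, a minimal example of the standard decorator syntax this lowering handles (the `logged` decorator is illustrative, not part of the PR):

```ts
function logged(value: (...args: any[]) => any, context: ClassMethodDecoratorContext) {
  // Replace the method with a wrapper; `context.name` comes from the TC39 decorator context.
  return function (this: unknown, ...args: any[]) {
    console.log(`calling ${String(context.name)}`);
    return value.apply(this, args);
  };
}

class Greeter {
  accessor count = 0; // auto-accessor, lowered to WeakMap-backed get/set

  @logged
  greet(name: string) {
    this.count++;
    return `hello ${name}`;
  }
}

new Greeter().greet("bun"); // logs "calling greet"
```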

## What's implemented

| Feature | Details |
|---|---|
| Method/getter/setter decorators | Static and instance, public and private |
| Field decorators | Initializer replacement + extra initializers via `__runInitializers` |
| Auto-accessor (`accessor` keyword) | Lowered to WeakMap storage + getter/setter pair |
| Private member decorators | WeakMap/WeakSet lowering with `__privateGet`/`__privateSet` |
| Class decorators | Statement and expression positions |
| Class expression decorators | Comma-expression lowering (no IIFE) |
| Decorator metadata | `Symbol.metadata` support via `__decoratorMetadata` |
| Evaluation order | All decorator expressions + computed keys evaluated in source order per TC39 spec |
| Class binding semantics | Separate inner/outer class name bindings (element vs class decorator closures) |
| Static block extraction | `this` replaced with class name ref when moved to suffix |
| Computed property keys | Pre-evaluated into temp variables for correct ordering |

## Runtime helpers

Added to `src/runtime.js` and registered in `src/runtime.zig`:
- `__decoratorStart(base)` — creates decorator context array
- `__decorateElement(array, flags, name, decorators, target, extra)` —
applies decorators to a class element
- `__decoratorMetadata(array, target)` — sets `Symbol.metadata` on the
class
- `__runInitializers(array, flags, self, value)` — runs
initializer/extra-initializer arrays

## Test plan

- [x] `bun bd test
test/bundler/transpiler/es-decorators-esbuild.test.ts` — **147/147
pass** (esbuild's full decorator test suite)
- [x] `bun bd test test/bundler/transpiler/es-decorators.test.ts` —
**22/22 pass**
- [x] `bun bd test test/bundler/transpiler/decorators.test.ts` — **22/22
pass** (legacy decorators still work)
- [x] E2E runtime verification of method, field, accessor, class,
private, and expression decorators

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2026-02-09 22:03:54 -08:00
SUZUKI Sosuke
fa78d2b408 perf(structuredClone): add fast path for arrays containing simple objects (#26818)
## Summary

Adds a **DenseArray fast path** for `structuredClone` / `postMessage`
that completely skips byte-buffer serialization when an
`ArrayWithContiguous` array contains **flat objects** (whose property
values are only primitives or strings).

This builds on #26814 which added fast paths for Int32/Double/Contiguous
arrays of primitives and strings. The main remaining slow case was
**arrays of objects** — the most common real-world pattern (e.g.
`[{name: "Alice", age: 30}, {name: "Bob", age: 25}]`). Previously, these
fell back to the full serialization path because the Contiguous fast
path rejected non-string cell elements.

## How it works

### Serialization
The existing Contiguous array handler is extended to recognize object
elements that pass `isObjectFastPathCandidate` (FinalObject, no
getters/setters, no indexed properties, all enumerable). For qualifying
objects, properties are collected into a `SimpleCloneableObject` struct
(reusing the existing `SimpleInMemoryPropertyTableEntry` type). The
result is stored as a `FixedVector<DenseArrayElement>` where
`DenseArrayElement = std::variant<JSValue, String,
SimpleCloneableObject>`.

If no object elements are found, the existing `SimpleArray` path is used
(no regression).

### Deserialization
A **Structure cache** avoids repeated Structure transitions when the
array contains many same-shape objects (the common case). The first
object is built via `constructEmptyObject` + `putDirect`, and its final
Structure + Identifiers are cached. Subsequent objects with matching
property names are created directly with `JSFinalObject::create(vm,
cachedStructure)`, skipping all transitions.

Safety guards:
- Cache is only used when property count AND all property names match
- Cache is disabled when `outOfLineCapacity() > 0` (properties exceed
`maxInlineCapacity`), since `JSFinalObject::create` cannot allocate a
butterfly

### Fallback conditions

| Condition | Behavior |
|-----------|----------|
| Elements are only primitives/strings | SimpleArray (existing) |
| Elements include `isObjectFastPathCandidate` objects | **DenseArray (NEW)** |
| Object property value is an object/array | Fallback to normal path |
| Elements include Date, RegExp, Map, Set, ArrayBuffer, etc. | Fallback to normal path |
| Array has holes | Fallback to normal path |

## Benchmarks

Apple M4 Max, release build vs system Bun v1.3.8 and Node.js v24.12:

| Benchmark | Node.js v24.12 | Bun v1.3.8 | **This PR** | vs Bun | vs Node |
|-----------|---------------|------------|-------------|--------|---------|
| `[10 objects]` | 2.83 µs | 2.72 µs | **1.56 µs** | **1.7x** | **1.8x** |
| `[100 objects]` | 24.51 µs | 25.98 µs | **14.11 µs** | **1.8x** | **1.7x** |

## Test coverage

28 new edge-case tests covering:
- **Property value variants**: empty objects, special numbers (NaN,
Infinity, -0), null/undefined values, empty string keys, boolean-only
values, numeric string keys
- **Structure cache correctness**: alternating shapes, objects
interleaved with primitives, >maxInlineCapacity properties (100+), 1000
same-shape objects (stress test), repeated clone independence
- **Fallback correctness**: array property values, nested objects,
Date/RegExp/Map/Set/ArrayBuffer elements, getters, non-enumerable
properties, `Object.create(null)`, class instances
- **Frozen/sealed**: clones are mutable regardless of source
- **postMessage via MessageChannel**: mixed arrays with objects, empty
object arrays

## Changed files

- `src/bun.js/bindings/webcore/SerializedScriptValue.h` —
`SimpleCloneableObject`, `DenseArrayElement`, `FastPath::DenseArray`,
factory/constructor/member
- `src/bun.js/bindings/webcore/SerializedScriptValue.cpp` — serialize,
deserialize, `computeMemoryCost`
- `test/js/web/structured-clone-fastpath.test.ts` — 28 new tests
- `bench/snippets/structuredClone.mjs` — object array benchmarks

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-09 21:54:41 -08:00
SUZUKI Sosuke
b7475d8768 fix(buffer): return fixed-length view from slice on resizable ArrayBuffer (#26822)
## Summary

Follow-up to #26819 ([review
comment](https://github.com/oven-sh/bun/pull/26819#discussion_r2781484939)).
Fixes `Buffer.slice()` / `Buffer.subarray()` on resizable `ArrayBuffer`
/ growable `SharedArrayBuffer` to return a **fixed-length view** instead
of a length-tracking view.

## Problem

The resizable/growable branch was passing `std::nullopt` to
`JSUint8Array::create()`, which creates a length-tracking view. When the
underlying buffer grows, the sliced view's length would incorrectly
expand:

```js
const rab = new ArrayBuffer(10, { maxByteLength: 20 });
const buf = Buffer.from(rab);
const sliced = buf.slice(0, 5);
sliced.length; // 5

rab.resize(20);
sliced.length; // was 10 (wrong), now 5 (correct)
```

Node.js specifies that `Buffer.slice()` always returns a fixed-length
view (verified on Node.js v22).

## Fix

Replace `std::nullopt` with `newLength` in the
`isResizableOrGrowableShared()` branch of
`jsBufferPrototypeFunction_sliceBody`.

## Test

Added a regression test that creates a `Buffer` from a resizable
`ArrayBuffer`, slices it, resizes the buffer, and verifies the slice
length doesn't change.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 04:48:20 -08:00
SUZUKI Sosuke
9484218ba4 perf(buffer): move Buffer.slice/subarray to native C++ with int32 fast path (#26819)
## Summary

Move `Buffer.slice()` / `Buffer.subarray()` from a JS builtin to a
native C++ implementation, eliminating the `adjustOffset` closure
allocation and JS→C++ constructor overhead on every call. Additionally,
add an int32 fast path that skips `toNumber()` (which can invoke
`valueOf`/`Symbol.toPrimitive`) when arguments are already int32—the
common case for calls like `buf.slice(0, 10)`.

## Changes

- **`src/bun.js/bindings/JSBuffer.cpp`**: Add
`jsBufferPrototypeFunction_sliceBody` with `adjustSliceOffsetInt32` /
`adjustSliceOffsetDouble` helpers. Update prototype hash table entries
from `BuiltinGeneratorType` to `NativeFunctionType` for both `slice` and
`subarray`.
- **`src/js/builtins/JSBufferPrototype.ts`**: Remove the JS `slice`
function (was lines 667–687).
- **`bench/snippets/buffer-slice.mjs`**: Add mitata benchmark.

## Benchmark (Apple M4 Max)

| Benchmark | Before (v1.3.8) | After | Speedup |
|---|---|---|---|
| `Buffer(64).slice()` | 27.19 ns | **14.56 ns** | **1.87x** |
| `Buffer(1024).slice()` | 27.84 ns | **14.62 ns** | **1.90x** |
| `Buffer(1M).slice()` | 29.20 ns | **14.89 ns** | **1.96x** |
| `Buffer(64).slice(10)` | 30.26 ns | **16.01 ns** | **1.89x** |
| `Buffer(1024).slice(10, 100)` | 30.92 ns | **18.32 ns** | **1.69x** |
| `Buffer(1024).slice(-100, -10)` | 28.82 ns | **17.37 ns** | **1.66x** |
| `Buffer(1024).subarray(10, 100)` | 28.67 ns | **16.32 ns** | **1.76x** |

**~1.7–1.9x faster** across all cases. All 449 buffer tests pass.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 01:46:33 -08:00
robobun
2a5e8ef38c fix(kqueue): fix incorrect filter comparison causing excessive CPU on macOS (#26812)
## Summary

Fixes the remaining kqueue filter comparison bug in
`packages/bun-usockets/src/eventing/epoll_kqueue.c` that caused
excessive CPU usage with network requests on macOS:

- **`us_loop_run_bun_tick` filter comparison (line 302-303):** kqueue
filter values (`EVFILT_READ=-1`, `EVFILT_WRITE=-2`) were compared using
bitwise AND (`&`) instead of equality (`==`). Since these are signed
negative integers (not bitmasks), `(-2) & (-1)` = `-2` (truthy), meaning
every `EVFILT_WRITE` event was also misidentified as `EVFILT_READ`. This
was already fixed in `us_loop_run` (by PR #25475) but the same bug
remained in `us_loop_run_bun_tick`, which is the primary event loop
function used by Bun.

This is a macOS-only issue (Linux uses epoll, which is unaffected).
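
The arithmetic in question, sketched with JavaScript semantics (same two's-complement behavior as the C code):

```ts
const EVFILT_READ = -1;
const EVFILT_WRITE = -2;

// Bitwise AND of the two negative filter values is still non-zero (truthy):
console.log(EVFILT_WRITE & EVFILT_READ);   // -2, so `filter & EVFILT_READ` misfires on write events
console.log(EVFILT_WRITE === EVFILT_READ); // false — equality is the correct check
```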

Closes #26811

## Test plan

- [x] Added regression test at `test/regression/issue/26811.test.ts`
that makes concurrent HTTPS POST requests
- [x] Test passes with `bun bd test test/regression/issue/26811.test.ts`
- [ ] Manual verification on macOS: run the reporter's [repro
script](https://gist.github.com/jkoppel/d26732574dfcdcc6bfc4958596054d2e)
and confirm CPU usage stays low

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-02-09 00:52:17 -08:00
SUZUKI Sosuke
0f43ea9bec perf(structuredClone): add fast path for root-level dense arrays (#26814)
## Summary

Add a fast path for `structuredClone` and `postMessage` when the root
value is a dense array of primitives or strings. This bypasses the full
`CloneSerializer`/`CloneDeserializer` machinery by keeping data in
native C++ structures instead of serializing to a byte stream.

**Important:** This optimization only applies when the root value passed
to `structuredClone()` / `postMessage()` is an array. Nested arrays
within objects still go through the normal serialization path.

## Implementation

Three tiers of array fast paths, checked in order:

| Tier | Indexing Type | Strategy | Applies When |
|------|--------------|----------|--------------|
| **Tier 1** | `ArrayWithInt32` | `memcpy` butterfly data | Dense int32 array, no holes, no named properties |
| **Tier 2** | `ArrayWithDouble` | `memcpy` butterfly data | Dense double array, no holes, no named properties |
| **Tier 3** | `ArrayWithContiguous` | Copy elements into `FixedVector<variant<JSValue, String>>` | Dense array of primitives/strings, no holes, no named properties |

All tiers fall through to the normal serialization path when:
- The array has holes that must forward to the prototype
- The array has named properties (e.g., `arr.foo = "bar"`) — checked via
`structure->maxOffset() != invalidOffset`
- Elements contain non-primitive, non-string values (objects, arrays,
etc.)
- The context requires wire-format serialization (storage, cross-process
transfer)

### Deserialization

- **Tier 1/2:** Allocate a new `Butterfly` via `vm.auxiliarySpace()`,
`memcpy` data back, create array with `JSArray::createWithButterfly()`.
Falls back to normal deserialization if `isHavingABadTime` (forced
ArrayStorage mode).
- **Tier 3:** Pre-convert elements to `JSValue` (including `jsString()`
allocation), then use `JSArray::tryCreateUninitializedRestricted()` +
`initializeIndex()`.

## Benchmarks

Apple M4 Max, comparing system Bun 1.3.8 vs this branch (release build):

| Benchmark | Before | After | Speedup |
|-----------|--------|-------|---------|
| `structuredClone([10 numbers])` | 308.71 ns | 40.38 ns | **7.6x** |
| `structuredClone([100 numbers])` | 1.62 µs | 86.87 ns | **18.7x** |
| `structuredClone([1000 numbers])` | 13.79 µs | 544.56 ns | **25.3x** |
| `structuredClone([10 strings])` | 642.38 ns | 307.38 ns | **2.1x** |
| `structuredClone([100 strings])` | 5.67 µs | 2.57 µs | **2.2x** |
| `structuredClone([10 mixed])` | 446.32 ns | 198.35 ns | **2.3x** |
| `structuredClone(nested array)` | 1.84 µs | 1.79 µs | 1.0x (not eligible) |
| `structuredClone({a: 123})` | 95.98 ns | 100.07 ns | 1.0x (no regression) |

Int32 arrays see the largest gains (up to 25x) since they use a direct
`memcpy` of butterfly memory. String/mixed arrays see ~2x improvement.
No performance regression on non-eligible inputs.

## Bug Fix

Also fixes a correctness bug where arrays with named properties (e.g.,
`arr.foo = "bar"`) would lose those properties when going through the
array fast path. Added a `structure->maxOffset() != invalidOffset` guard
to fall back to normal serialization for such arrays.
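
A small check of the named-property behavior (sketch):

```ts
const arr = [1, 2, 3] as number[] & { foo?: string };
arr.foo = "bar";

const clone = structuredClone(arr);
console.log(clone.foo);  // "bar" — such arrays now fall back to full serialization
console.log([...clone]); // [1, 2, 3]
```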

Fixed a minor double-counting issue in `computeMemoryCost` where
`JSValue` elements in `SimpleArray` were counted both by `byteSize()`
and individually.

## Test Plan

38 tests in `test/js/web/structured-clone-fastpath.test.ts` covering:

- Basic array types: empty, numbers, strings, mixed primitives, special
numbers (`-0`, `NaN`, `Infinity`)
- Large arrays (10,000 elements)
- Tier 2: double arrays, Int32→Double transition
- Deep clone independence verification
- Named properties on Int32, Double, and Contiguous arrays
- `postMessage` via `MessageChannel` for Int32, Double, and mixed arrays
- Edge cases: frozen/sealed arrays, deleted elements (holes), `length`
extension, single-element arrays
- Prototype modification (custom prototype, indexed prototype properties
with holes)
- `Array` subclass identity loss (per spec)
- `undefined`-only and `null`-only arrays
- Multiple independent clones from the same source

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-08 21:36:59 -08:00
Jarred Sumner
0889897a1c Revert "feat(bundler): add configurable CJS→ESM unwrapping via unwrapCJSToESM"
This reverts commit e3c25260ed.
2026-02-08 19:49:26 -08:00
Jarred Sumner
e3c25260ed feat(bundler): add configurable CJS→ESM unwrapping via unwrapCJSToESM
Add `minify.unwrapCJSToESM` JS API option and `--unwrap-cjs-to-esm` CLI
flag to force CJS-to-ESM conversion for specific packages, eliminating
the `__commonJS` wrapper. Supports wildcard patterns (e.g. `"@scope/*"`).
User entries extend the default React family list.

Also removes the react/react-dom version check that gated conversion,
and fixes `packageName()` to handle scoped packages (`@scope/pkg`).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-08 01:32:10 -08:00
Alistair Smith
1bded85718 types: Enable --splitting with compile (#26796)
### What does this PR do?

Enables --splitting with compile

### How did you verify your code works?

Bun types integration test fixture updates
2026-02-07 13:39:18 -08:00
Dylan Conway
cf6cdbbbad Revert "Mimalloc v3 update (#26379)" (#26783)
This reverts commit c63415c9c9.

### What does this PR do?

### How did you verify your code works?
2026-02-06 18:05:17 -08:00
robobun
89d2b1cd0b fix(websocket): add missing incPendingActivityCount() in blob binaryType case (#26670)
## Summary

- Fix crash ("Pure virtual function called!") when WebSocket client
receives binary data with `binaryType = "blob"` and no event listener
attached
- Add missing `incPendingActivityCount()` call before `postTask` in the
Blob case of `didReceiveBinaryData`
- Add regression test for issue #26669

## Root Cause

The Blob case in `didReceiveBinaryData` (WebSocket.cpp:1324-1331) was
calling `decPendingActivityCount()` inside the `postTask` callback
without a matching `incPendingActivityCount()` beforehand. This bug was
introduced in #21471 when Blob support was added.

The ArrayBuffer and NodeBuffer cases correctly call
`incPendingActivityCount()` before `postTask`, but the Blob case was
missing this call.

## Test plan

- [x] New regression test verifies WebSocket with `binaryType = "blob"`
doesn't crash on ping frames
- [x] `bun bd test test/regression/issue/26669.test.ts` passes

Fixes #26669

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Ciro Spaciari MacBook <ciro@anthropic.com>
2026-02-05 20:39:19 -08:00
SUZUKI Sosuke
0e386c4168 fix(stringWidth): correct width for Thai/Lao spacing vowels (#26728)
## Summary

`Bun.stringWidth` was incorrectly treating Thai SARA AA (U+0E32), SARA
AM (U+0E33), and their Lao equivalents (U+0EB2, U+0EB3) as zero-width
characters.

## Root Cause

In `src/string/immutable/visible.zig`, the range check for Thai/Lao
combining marks was too broad:
- Thai: `0xe31 <= cp <= 0xe3a` included U+0E32 and U+0E33
- Lao: `0xeb1 <= cp <= 0xebc` included U+0EB2 and U+0EB3

According to Unicode (UCD Grapheme_Break property), these are **spacing
vowels** (Grapheme_Base), not combining marks.

## Changes

- **`src/string/immutable/visible.zig`**: Exclude U+0E32, U+0E33,
U+0EB2, U+0EB3 from zero-width ranges
- **`test/js/bun/util/stringWidth.test.ts`**: Add tests for Thai and Lao
spacing vowels

## Before/After

| Character | Before | After |
|-----------|--------|-------|
| `\u0E32` (SARA AA) | 0 | 1 |
| `\u0E33` (SARA AM) | 0 | 1 |
| `คำ` (common Thai word) | 1 | 2 |
| `\u0EB2` (Lao AA) | 0 | 1 |
| `\u0EB3` (Lao AM) | 0 | 1 |
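
A quick check of the corrected behavior (sketch):

```ts
console.log(Bun.stringWidth("\u0E32")); // 1 — Thai SARA AA is a spacing vowel
console.log(Bun.stringWidth("\u0E31")); // 0 — MAI HAN-AKAT stays zero-width (combining mark)
console.log(Bun.stringWidth("คำ"));     // 2
```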

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-05 17:31:15 -08:00
Alistair Smith
e5cd034e9a Define seed in crc32 types (#26754)
### What does this PR do?

Fixes #26711 

### How did you verify your code works?

bun-types.test.ts integration test
2026-02-05 06:41:25 -08:00
Ciro Spaciari
0ad562d3bd fix(http2) Fix SSLWrapper and allow injecting connections in Http2SecureServer (#26539)
### What does this PR do?

Enables the `net.Server → Http2SecureServer` connection upgrade pattern
used by libraries like
[http2-wrapper](https://github.com/szmarczak/http2-wrapper),
[crawlee](https://github.com/apify/crawlee), and custom HTTP/2 proxy
servers. This pattern works by accepting raw TCP connections on a
`net.Server` and forwarding them to an `Http2SecureServer` via
`h2Server.emit('connection', rawSocket)`.
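
A hedged sketch of that upgrade pattern (TLS key/cert paths are placeholders):

```ts
import { readFileSync } from "node:fs";
import net from "node:net";
import http2 from "node:http2";

const key = readFileSync("server-key.pem");   // placeholder paths
const cert = readFileSync("server-cert.pem");

const h2 = http2.createSecureServer({ key, cert }, (req, res) => {
  res.end("ok");
});

// Accept raw TCP connections and hand them to the HTTP/2 server, which then
// performs the TLS handshake and h2 session setup on the injected socket.
const tcp = net.createServer((rawSocket) => {
  h2.emit("connection", rawSocket);
});

tcp.listen(0);
```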

#### Bug fixes

**SSLWrapper use-after-free (Zig)**

Two use-after-free bugs in `ssl_wrapper.zig` are fixed:

1. **`flush()` stale pointer** — `flush()` captured the `ssl` pointer
*before* calling `handleTraffic()`, which can trigger a close callback
that frees the SSL object via `deinit`. The pointer was then used after
being freed. Fix: read `this.ssl` *after* `handleTraffic()` returns.

2. **`handleReading()` null dereference** — `handleReading()` called
`triggerCloseCallback()` after `triggerDataCallback()` without checking
whether the data callback had already closed the connection. This led to
a null function pointer dereference. Fix: check `this.ssl == null ||
this.flags.closed_notified` before calling the close callback.

### How did you verify your code works?

- Added **13 in-process tests** (`node-http2-upgrade.test.mts`) covering
the `net.Server → Http2SecureServer` upgrade path:
  - GET/POST requests through upgraded connections
  - Sequential requests sharing a single H2 session
  - `session` event emission
  - Concurrent clients with independent sessions
  - Socket close ordering (rawSocket first vs session first) — no crash
  - ALPN protocol negotiation (`h2`)
  - Varied status codes (200, 302, 404)
  - Client disconnect mid-response (stream destroyed early)
  - Three independent clients producing three distinct sessions
- Tests use `node:test` + `node:assert` and **pass in both Bun and
Node.js**
- Ported `test-http2-socket-close.js` from the Node.js test suite,
verifying no segfault when the raw socket is destroyed before the H2
session is closed

---------

Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-04 19:23:29 -08:00
Ciro Spaciari
63a323a511 fix(http): don't enter tunnel mode for proxy-style absolute URLs in request line (#26737)
## Summary

Fixes a bug where sequential HTTP requests with proxy-style absolute
URLs (e.g. `GET http://example.com/path HTTP/1.1`) hang on the 2nd+
request when using keep-alive connections.

## Root Cause

In `packages/bun-uws/src/HttpParser.h`, the parser was treating
proxy-style absolute URLs identically to `CONNECT` method requests —
setting `isConnectRequest = true` and entering tunnel mode. This flag
was never reset between requests on the same keep-alive connection, so
the 2nd+ request was swallowed as raw tunnel data instead of being
parsed as HTTP.

## Fix

3-line change in `HttpParser.h:569`:
- **`isConnect`**: Now only matches actual `CONNECT` method requests
(removed `isHTTPorHTTPSPrefixForProxies` from the condition)
- **`isProxyStyleURL`**: New variable that detects `http://`/`https://`
prefixes and accepts them as valid request targets — without triggering
tunnel mode

## Who was affected

- Any Bun HTTP server (`Bun.serve()` or `node:http createServer`)
receiving proxy-style requests on keep-alive connections
- HTTP proxy servers built with Bun could only handle one request per
connection
- Bun's own HTTP client making sequential requests through an HTTP proxy
backed by a Bun server

## Test

Added `test/js/node/http/node-http-proxy-url.test.ts` with 3 test cases:
1. Sequential GET requests with absolute URL paths
2. Sequential POST requests with absolute URL paths
3. Mixed normal and proxy-style URLs

Tests run under both Node.js and Bun for compatibility verification.

- Fails with system bun (2/3 tests timeout on 2nd request)
- Passes with debug build (3/3 tests pass)

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-04 19:23:18 -08:00
Dylan Conway
41de7a3bfb [publish images] Upgrade LLVM toolchain from 19.1.7 to 21.1.8 (#26667)
## Summary
- Update LLVM version references across build scripts, Dockerfiles, CI,
Nix configs, and documentation
- Fix LLVM 21 `-Wcharacter-conversion` errors in WebKit bindings:
- `EncodingTables.h`: pragma for intentional char32_t/char16_t
comparisons
- `TextCodecCJK.cpp`: widen `gb18030AsymmetricEncode` param to char32_t
- `URLPatternParser`: widen `isValidNameCodepoint` param to char32_t,
cast for `startsWith`
- Fix `__libcpp_verbose_abort` noexcept mismatch (LLVM 21 uses
`_NOEXCEPT`)
- Fix dangling pointer in `BunJSCModule.h` (`toCString` temporary
lifetime)
- Remove `useMathSumPreciseMethod` (removed upstream in JSC)

**Before merging:** Merge https://github.com/oven-sh/WebKit/pull/153
first, then update `WEBKIT_VERSION` in `cmake/tools/SetupWebKit.cmake`
to point to the merged commit.

## Test plan
- [ ] Build bun debug on macOS with LLVM 21
- [ ] Build bun on Linux (glibc)
- [ ] Build bun on Linux (musl)
- [ ] Build bun on Windows
- [ ] Run test suite

Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-02 23:12:21 -08:00
Chris Lloyd
d23312d3f6 feat(test): add Symbol.dispose support to mock/spyOn (#26692)
## Summary

- Add `[Symbol.dispose]` to mock function prototype, aliased to
`mockRestore`
- Enables `using spy = spyOn(obj, "method")` to auto-restore when
leaving scope
- Works for both `spyOn()` and `mock()`

Addresses #6040 — gives users a clean way to scope spy lifetimes instead
of manually calling `mockRestore()` or relying on `afterEach`.

### Example

```ts
import { spyOn, expect, test } from "bun:test";

test("auto-restores spy", () => {
  const obj = { method: () => "original" };

  {
    using spy = spyOn(obj, "method").mockReturnValue("mocked");
    expect(obj.method()).toBe("mocked");
  }

  // automatically restored
  expect(obj.method()).toBe("original");
});
```

## Test plan

- `bun bd test test/js/bun/test/mock-disposable.test.ts` — 3 tests pass
- Verified tests fail with `USE_SYSTEM_BUN=1`
2026-02-02 17:39:36 -08:00
robobun
ddefa11070 fix(fs): handle '.' path normalization on Windows (#26634)
## Summary
- Fix path normalization for "." on Windows where `normalizeStringBuf`
was incorrectly stripping it to an empty string
- This caused `existsSync('.')`, `statSync('.')`, and other fs
operations to fail on Windows

## Test plan
- Added regression test `test/regression/issue/26631.test.ts` that tests
`existsSync`, `exists`, `statSync`, and `stat` for both `.` and `..`
paths
- All tests pass locally with `bun bd test
test/regression/issue/26631.test.ts`
- Verified code compiles on all platforms with `bun run zig:check-all`

Fixes #26631

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 00:33:59 -08:00
Ciro Spaciari
a14a89ca95 fix(proxy): respect NO_PROXY for explicit proxy options in fetch and ws (#26608)
### What does this PR do?

Extract NO_PROXY checking logic from getHttpProxyFor into a reusable
isNoProxy method on the env Loader. This allows both fetch() and
WebSocket to check NO_PROXY even when a proxy is explicitly provided via
the proxy option (not just via http_proxy env var).

Changes:
- env_loader.zig: Extract isNoProxy() from getHttpProxyFor()
- FetchTasklet.zig: Check isNoProxy() before using explicit proxy
- WebSocket.cpp: Check Bun__isNoProxy() before using explicit proxy
- virtual_machine_exports.zig: Export Bun__isNoProxy for C++ access
- Add NO_PROXY tests for both fetch and WebSocket proxy paths

### How did you verify your code works?
Tests

---------

Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
2026-01-30 16:20:45 -08:00
robobun
a5246344fa fix(types): Socket.reload() now correctly expects { socket: handler } (#26291)
## Summary
- Fix type definition for `Socket.reload()` to match runtime behavior
- The runtime expects `{ socket: handler }` but types previously
accepted just `handler`

## Test plan
- [x] Added regression test `test/regression/issue/26290.test.ts`
- [x] Verified test passes with `bun bd test`

Fixes #26290

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Alistair Smith <hi@alistair.sh>
2026-01-30 13:23:04 -08:00
robobun
f648483fe7 fix(types): add missing SIMD variants to Bun.Build.CompileTarget type (#26248)
## Summary

- Adds missing SIMD variants to the `Build.Target` TypeScript type
- The runtime accepts targets like `bun-linux-x64-modern` but TypeScript
was rejecting them
- Generalized the type to use `${Architecture}` template where possible

## Test plan

- [x] Added regression test in `test/regression/issue/26247.test.ts`
that validates all valid target combinations type-check correctly
- [x] Verified with `bun bd test test/regression/issue/26247.test.ts`

Fixes #26247

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Alistair Smith <hi@alistair.sh>
2026-01-30 13:13:28 -08:00
Alistair Smith
71ce550cfa esm bytecode (#26402)
### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
2026-01-30 01:38:45 -08:00
robobun
8f61adf494 Harden chunked encoding parser (#26594)
## Summary
- Improve handling of fragmented chunk data in the HTTP parser
- Add test coverage for edge cases

## Test plan
- [x] New tests pass
- [x] Existing tests pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 01:18:39 -08:00
Dylan Conway
b4b7cc6d78 fix multi-run.test.ts on windows (#26590)
### What does this PR do?

fixes https://github.com/oven-sh/bun/issues/26597

### How did you verify your code works?

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-29 23:35:53 -08:00
Jarred Sumner
bb4d5b9af5 feat(cli/run): add --parallel and --sequential for running multiple scripts with workspace support (#26551)
## Summary

Adds `bun run --parallel` and `bun run --sequential` — new flags for
running multiple package.json scripts concurrently or sequentially with
Foreman-style prefixed output. Includes full `--filter`/`--workspaces`
integration for running scripts across workspace packages.

### Usage

```bash
# Run "build" and "test" concurrently from the current package.json
bun run --parallel build test

# Run "build" and "test" sequentially with prefixed output
bun run --sequential build test

# Glob-matched script names
bun run --parallel "build:*"

# Run "build" in all workspace packages concurrently
bun run --parallel --filter '*' build

# Run "build" in all workspace packages sequentially
bun run --sequential --workspaces build

# Glob-matched scripts across all packages
bun run --parallel --filter '*' "build:*"

# Multiple scripts across all packages
bun run --parallel --filter '*' build lint test

# Continue running even if one package fails
bun run --parallel --no-exit-on-error --filter '*' test

# Skip packages missing the script
bun run --parallel --workspaces --if-present build
```

## How it works

### Output format

Each script's stdout/stderr is prefixed with a colored, padded label:

```
build | compiling...
test  | running suite...
lint  | checking files...
```

### Label format

- **Without `--filter`/`--workspaces`**: labels are just the script name
→ `build | output`
- **With `--filter`/`--workspaces`**: labels are `package:script` →
`pkg-a:build | output`
- **Fallback**: if a package.json has no `name` field, the relative path
from the workspace root is used (e.g., `packages/my-pkg:build`)

### Execution model

- **`--parallel`**: all scripts start immediately, output is interleaved
with prefixes
- **`--sequential`**: scripts run one at a time in order, each waiting
for the previous to finish
- **Pre/post scripts** (`prebuild`/`postbuild`) are grouped with their
main script and run in dependency order within each group
- By default, a failure kills all remaining scripts.
`--no-exit-on-error` lets all scripts finish.

### Workspace integration

The workspace branch in `multi_run.zig` uses a two-pass approach for
deterministic ordering:

1. **Collect**: iterate workspace packages using
`FilterArg.PackageFilterIterator` (same infrastructure as
`filter_run.zig`), filtering with `FilterArg.FilterSet`, collecting
matched packages with their scripts, PATH, and cwd.
2. **Sort**: sort matched packages by name (tiebreak by directory path)
for deterministic ordering — filesystem iteration order from the glob
walker is nondeterministic.
3. **Build configs**: for each sorted package, expand script names
(including globs like `build:*`) against that package's scripts map,
creating `ScriptConfig` entries with `pkg:script` labels and per-package
cwd/PATH.

### Behavioral consistency with `filter_run.zig`

| Behavior | `filter_run.zig` | `multi_run.zig` (this PR) |
|----------|-------------------|---------------------------|
| `--workspaces` skips root package | Yes | Yes |
| `--workspaces` errors on missing script | Yes | Yes |
| `--if-present` silently skips missing | Yes | Yes |
| `--filter` without `--workspaces` includes root | Yes (if matches) | Yes (if matches) |
| Pre/post script chains | Per-package | Per-package |
| Per-package cwd | Yes | Yes |
| Per-package PATH (`node_modules/.bin`) | Yes | Yes |

### Key implementation details

- Each workspace package script runs in its own package directory with
its own `node_modules/.bin` PATH
- `dirpath` from the glob walker is duped to avoid use-after-free when
the iterator's arena is freed between patterns
- `addScriptConfigs` takes an optional `label_prefix` parameter — `null`
for single-package mode, package name for workspace mode
- `MultiRunProcessHandle` is registered in the `ProcessExitHandler`
tagged pointer union in `process.zig`

## Files changed

| File | Change |
|------|--------|
| `src/cli/multi_run.zig` | New file: process management, output routing, workspace integration, dependency ordering |
| `src/cli.zig` | Dispatch to `MultiRun.run()` for `--parallel`/`--sequential`, new context fields |
| `src/cli/Arguments.zig` | Parse `--parallel`, `--sequential`, `--no-exit-on-error` flags |
| `src/bun.js/api/bun/process.zig` | Register `MultiRunProcessHandle` in `ProcessExitHandler` tagged pointer union |
| `test/cli/run/multi-run.test.ts` | 118 tests (102 core + 16 workspace integration) |
| `docs/pm/filter.mdx` | Document `--parallel`/`--sequential` + `--filter`/`--workspaces` combination |
| `docs/snippets/cli/run.mdx` | Add `--parallel`, `--sequential`, `--no-exit-on-error` parameter docs |

## Test plan

All 118 tests pass with debug build (`bun bd test
test/cli/run/multi-run.test.ts`). The 16 new workspace tests all fail
with system bun (`USE_SYSTEM_BUN=1`), confirming they test new
functionality.

### Workspace integration tests (16 tests)

1. `--parallel --filter='*'` runs script in all packages
2. `--parallel --filter='pkg-a'` runs only in matching package
3. `--parallel --workspaces` matches all workspace packages
4. `--parallel --filter='*'` with glob expands per-package scripts
5. `--sequential --filter='*'` runs in sequence (deterministic order)
6. Workspace + failure aborts other scripts
7. Workspace + `--no-exit-on-error` lets all finish
8. `--workspaces` skips root package
9. Each workspace script runs in its own package directory (cwd
verification)
10. Multiple script names across workspaces (`build` + `test`)
11. Pre/post scripts work per workspace package
12. `--filter` skips packages without the script (no error)
13. `--workspaces` errors when a package is missing the script
14. `--workspaces --if-present` skips missing scripts silently
15. Labels are padded correctly across workspace packages
16. Package without `name` field uses relative path as label

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2026-01-29 20:20:39 -08:00
Dylan Conway
fc4624c672 fix(node:vm): propagate async context tracking flag to NodeVMGlobalObject (#26542)
When a `SyntheticModule` callback was wrapped in an `AsyncContextFrame`
on the main globalObject (where async context tracking is enabled),
evaluating it on a `NodeVMGlobalObject` would crash because the tracking
flag wasn't propagated.

`AsyncContextFrame::call` checks `isAsyncContextTrackingEnabled()` to
decide whether to unwrap the frame — without the flag, it takes the fast
path and tries to call the `AsyncContextFrame` wrapper directly, which
is not callable.

The async context data (`m_asyncContextData`) was already shared between
parent and `NodeVMGlobalObject`, but the tracking flag was missing. This
adds propagation of `isAsyncContextTrackingEnabled` alongside the data.

**Repro:** `react-email` v5.2.5 preview server crashes when rendering a
template because it imports `node:async_hooks` (enabling async context
tracking) and uses `node:vm` `SyntheticModule` for module evaluation.
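
A minimal repro sketch (hypothetical, adapted from the description above; the react-email case exercises the same path):

```ts
import { AsyncLocalStorage } from "node:async_hooks";
import vm from "node:vm";

// Per the description, using node:async_hooks enables async context tracking
// on the main global, which wraps later callbacks in an AsyncContextFrame.
const als = new AsyncLocalStorage();

await als.run({ requestId: 1 }, async () => {
  const mod = new vm.SyntheticModule(
    ["default"],
    function () {
      // Evaluated on the NodeVMGlobalObject; before the fix, invoking the
      // AsyncContextFrame-wrapped callback here crashed.
      this.setExport("default", 42);
    },
    { context: vm.createContext({}) },
  );
  await mod.link(() => {
    throw new Error("synthetic module has no imports");
  });
  await mod.evaluate();
});
```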

Fixes #26540

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-28 21:40:55 -08:00
Jarred Sumner
1bfe5c6b37 feat(md): Zig markdown parser with Bun.markdown API (#26440)
## Summary

- Port md4c (CommonMark-compliant markdown parser) from C to Zig under
`src/md/`
- Three output modes:
  - `Bun.markdown.html(input, options?)` — render to HTML string
  - `Bun.markdown.render(input, callbacks?)` — render with custom
    callbacks for each element
  - `Bun.markdown.react(input, options?)` — render to a React Fragment
    element, directly usable as a component return value
- React element creation uses a cached JSC Structure with
`putDirectOffset` for fast allocation
- Component overrides in `react()`: pass tag names as options keys to
replace default HTML elements with custom components
- GFM extensions: tables, strikethrough, task lists, permissive
autolinks, disallowed raw HTML tag filter
- Wire up `.md` as a bundler loader (via explicit `{ type: "md" }`)

## JavaScript API

### `Bun.markdown.html(input, options?)`

Renders markdown to an HTML string:

```js
const html = Bun.markdown.html("# Hello **world**");
// "<h1>Hello <strong>world</strong></h1>\n"

Bun.markdown.html("## Hello", { headingIds: true });
// '<h2 id="hello">Hello</h2>\n'
```

### `Bun.markdown.render(input, callbacks?)`

Renders markdown with custom JavaScript callbacks for each element. Each
callback receives children as a string and optional metadata, and
returns a string:

```js
// Custom HTML with classes
const html = Bun.markdown.render("# Title\n\nHello **world**", {
  heading: (children, { level }) => `<h${level} class="title">${children}</h${level}>`,
  paragraph: (children) => `<p>${children}</p>`,
  strong: (children) => `<b>${children}</b>`,
});

// ANSI terminal output
const ansi = Bun.markdown.render("# Hello\n\n**bold**", {
  heading: (children) => `\x1b[1;4m${children}\x1b[0m\n`,
  paragraph: (children) => children + "\n",
  strong: (children) => `\x1b[1m${children}\x1b[22m`,
});

// Strip all formatting
const text = Bun.markdown.render("# Hello **world**", {
  heading: (children) => children,
  paragraph: (children) => children,
  strong: (children) => children,
});
// "Hello world"

// Return null to omit elements
const result = Bun.markdown.render("# Title\n\n![logo](img.png)\n\nHello", {
  image: () => null,
  heading: (children) => children,
  paragraph: (children) => children + "\n",
});
// "Title\nHello\n"
```

Parser options can be included alongside callbacks:

```js
Bun.markdown.render("Visit www.example.com", {
  link: (children, { href }) => `[${children}](${href})`,
  paragraph: (children) => children,
  permissiveAutolinks: true,
});
```

### `Bun.markdown.react(input, options?)`

Returns a React Fragment element — use it directly as a component return
value:

```tsx
// Use as a component
function Markdown({ text }: { text: string }) {
  return Bun.markdown.react(text);
}

// With custom components
function Heading({ children }: { children: React.ReactNode }) {
  return <h1 className="title">{children}</h1>;
}
const element = Bun.markdown.react("# Hello", { h1: Heading });

// Server-side rendering
import { renderToString } from "react-dom/server";
const html = renderToString(Bun.markdown.react("# Hello **world**"));
// "<h1>Hello <strong>world</strong></h1>"
```

#### React 18 and older

By default, `react()` uses `Symbol.for('react.transitional.element')` as
the `$$typeof` symbol, which is what React 19 expects. For React 18 and
older, pass `reactVersion: 18`:

```tsx
const el = Bun.markdown.react("# Hello", { reactVersion: 18 });
```

### Component Overrides

Tag names can be overridden in `react()`:

```tsx
Bun.markdown.react(input, {
  h1: MyHeading,      // block elements
  p: CustomParagraph,
  a: CustomLink,      // inline elements
  img: CustomImage,
  pre: CodeBlock,
  // ... h1-h6, p, blockquote, ul, ol, li, pre, hr, html,
  //     table, thead, tbody, tr, th, td,
  //     em, strong, a, img, code, del, math, u, br
});
```

Boolean values are ignored (not treated as overrides), so parser options
like `{ strikethrough: true }` don't conflict with component overrides.

### Options

```js
Bun.markdown.html(input, {
  tables: true,              // GFM tables (default: true)
  strikethrough: true,       // ~~deleted~~ (default: true)
  tasklists: true,           // - [x] items (default: true)
  headingIds: true,          // Generate id attributes on headings
  autolinkHeadings: true,    // Wrap heading content in <a> tags
  tagFilter: false,          // GFM disallowed HTML tags
  wikiLinks: false,          // [[wiki]] links
  latexMath: false,          // $inline$ and $$display$$
  underline: false,          // __underline__ (instead of <strong>)
  // ... and more
});
```

## Architecture

### Parser (`src/md/`)

The parser is split into focused modules using Zig's delegation pattern:

| Module | Purpose |
|--------|---------|
| `parser.zig` | Core `Parser` struct, state, and re-exported method delegation |
| `blocks.zig` | Block-level parsing: document processing, line analysis, block start/end |
| `containers.zig` | Container management: blockquotes, lists, list items |
| `inlines.zig` | Inline parsing: emphasis, code spans, HTML tags, entities |
| `links.zig` | Link/image resolution, reference links, autolink rendering |
| `autolinks.zig` | Permissive autolink detection (www, url, email) |
| `line_analysis.zig` | Line classification: headings, fences, HTML blocks, tables |
| `ref_defs.zig` | Reference definition parsing and lookup |
| `render_blocks.zig` | Block rendering dispatch (code, HTML, table blocks) |
| `html_renderer.zig` | HTML renderer implementing `Renderer` VTable |
| `types.zig` | Shared types: `Renderer` VTable, `BlockType`, `SpanType`, `TextType`, etc. |

### Renderer Abstraction

Parsing is decoupled from output via a `Renderer` VTable interface:

```zig
pub const Renderer = struct {
    ptr: *anyopaque,
    vtable: *const VTable,

    pub const VTable = struct {
        enterBlock: *const fn (...) void,
        leaveBlock: *const fn (...) void,
        enterSpan:  *const fn (...) void,
        leaveSpan:  *const fn (...) void,
        text:       *const fn (...) void,
    };
};
```

Four renderers are implemented:
- **`HtmlRenderer`** (`src/md/html_renderer.zig`) — produces HTML string
output
- **`JsCallbackRenderer`** (`src/bun.js/api/MarkdownObject.zig`) — calls
JS callbacks for each element, accumulates string output
- **`ParseRenderer`** (`src/bun.js/api/MarkdownObject.zig`) — builds
React element AST with `MarkedArgumentBuffer` for GC safety
- **`JSReactElement`** (`src/bun.js/bindings/JSReactElement.cpp`) — C++
fast path for React element creation using cached JSC Structure +
`putDirectOffset`

## Test plan

- [x] 792 spec tests pass (CommonMark, GFM tables, strikethrough,
tasklists, permissive autolinks, GFM tag filter, wiki links, coverage,
regressions)
- [x] 114 API tests pass (`html()`, `render()`, `react()`,
`renderToString` integration, component overrides)
- [x] 58 GFM compatibility tests pass

```
bun bd test test/js/bun/md/md-spec.test.ts       # 792 pass
bun bd test test/js/bun/md/md-render-api.test.ts  # 114 pass
bun bd test test/js/bun/md/gfm-compat.test.ts     # 58 pass
```

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
Co-authored-by: SUZUKI Sosuke <sosuke@bun.com>
Co-authored-by: robobun <robobun@oven.sh>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Kirill Markelov <kerusha.chubko@gmail.com>
Co-authored-by: Ciro Spaciari <ciro.spaciari@gmail.com>
Co-authored-by: Alistair Smith <hi@alistair.sh>
2026-01-28 20:24:02 -08:00
robobun
aded701d1d feat(build): add --metafile-md CLI option for LLM-friendly bundle analysis (#26441)
## Summary

- Adds `--metafile-md` CLI option to `bun build` that generates a
markdown visualization of the module graph
- Designed to help Claude and other LLMs analyze bundle composition,
identify bloat, and understand dependency chains
- Reuses existing metafile JSON generation code as a post-processing
step

## Features

The generated markdown includes:

1. **Quick Summary** - Module counts, sizes, ESM/CJS breakdown,
output/input ratio
2. **Largest Input Files** - Sorted by size to identify potential bloat
3. **Entry Point Analysis** - Shows bundle size, exports, CSS bundles,
and bundled modules
4. **Dependency Chains** - Most commonly imported modules and reverse
dependencies
5. **Full Module Graph** - Complete import/export info for each module
6. **Raw Data for Searching** - Grep-friendly markers in code blocks:
   - `[MODULE:]`, `[SIZE:]`, `[IMPORT:]`, `[IMPORTED_BY:]`
   - `[ENTRY:]`, `[EXTERNAL:]`, `[NODE_MODULES:]`

## Usage

```bash
# Default filename (meta.md)
bun build entry.js --metafile-md --outdir=dist

# Custom filename
bun build entry.js --metafile-md=analysis.md --outdir=dist

# Both JSON and markdown
bun build entry.js --metafile=meta.json --metafile-md=meta.md --outdir=dist
```
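
Once the markdown is generated, the grep-friendly markers can also be scanned programmatically. A small sketch (the marker names are the ones listed above; the exact line layout inside the raw-data code blocks is an assumption):

```ts
// Pull [MODULE:] and [SIZE:] lines out of the generated meta.md (default filename).
const md = await Bun.file("dist/meta.md").text();
const moduleLines = md
  .split("\n")
  .filter(line => line.includes("[MODULE:") || line.includes("[SIZE:"));
console.log(moduleLines.slice(0, 20).join("\n"));
```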

## Example Output

See sample output: https://gist.github.com/example (will add)

## Test plan

- [x] Test default filename (`meta.md`)
- [x] Test custom filename
- [x] Test both `--metafile` and `--metafile-md` together
- [x] Test summary metrics
- [x] Test module format info (ESM/CJS)
- [x] Test external imports
- [x] Test exports list
- [x] Test bundled modules table
- [x] Test CSS bundle reference
- [x] Test import kinds (static, dynamic, require)
- [x] Test commonly imported modules
- [x] Test largest files sorting (bloat analysis)
- [x] Test output/input ratio
- [x] Test grep-friendly raw data section
- [x] Test entry point markers
- [x] Test external import markers
- [x] Test node_modules markers

All 17 new tests pass.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2026-01-28 18:01:39 -08:00
Dylan Conway
7ebfdf97a8 fix(npm): remove shebang from placeholder scripts to fix npm i -g bun on Windows (#26517)
## Summary
- Removes the `#!/bin/sh` shebang from placeholder `bin/bun.exe` and
`bin/bunx.exe` scripts in the npm package
- Fixes `npm i -g bun` being completely broken on Windows since v1.3.7

## Problem
PR #26259 added a `#!/bin/sh` shebang to the placeholder scripts to show
a helpful error when postinstall hasn't run. However, npm's `cmd-shim`
reads shebangs to generate `.ps1`/`.cmd` wrappers **before** postinstall
runs, and bakes the interpreter path into them. On Windows, the wrappers
referenced `/bin/sh` which doesn't exist, causing:

```
& "/bin/sh$exe"  "$basedir/node_modules/bun/bin/bun.exe" $args
   ~~~~~~~~~~~~~
The term '/bin/sh.exe' is not recognized...
```

Even after postinstall successfully replaced the placeholder with the
real binary, the stale wrappers still tried to invoke `/bin/sh`.

## Fix
Remove the shebang. Without it, `cmd-shim` generates a direct invocation
wrapper that works after postinstall replaces the placeholder. On Unix,
bash/zsh still execute shebang-less files as shell scripts via ENOEXEC
fallback, so the helpful error message is preserved.

## Test plan
- [x] `bun bd test test/regression/issue/24329.test.ts` passes (2/2
tests)
- Manually verify `npm i -g bun` works on Windows

Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-28 00:00:50 -08:00
robobun
a394063a7d refactor(test): use container-based postgres_tls for TLS SQL tests (#26518)
## Summary
- Refactors `tls-sql.test.ts` to use `describeWithContainer` with a
local Docker container instead of external Neon secrets
- Updates `postgres_tls` service to build from Dockerfile (fixes SSL key
permission issues)
- Fixes pg_hba.conf to allow local socket connections for init scripts

## Test plan
- [x] Verified tests pass locally with `bun bd test
test/js/sql/tls-sql.test.ts` (30 tests pass)
- [ ] CI passes on x64 Linux (arm64 Docker tests are currently disabled)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 23:32:39 -08:00
Dylan Conway
2f510724a9 fix(napi): return napi_function for AsyncContextFrame in napi_typeof (#26511)
## Summary
- `napi_typeof` was returning `napi_object` for `AsyncContextFrame`
values, which are internally callable JSObjects
- Native addons that check callback types (e.g. encore.dev's runtime)
would fail with `expect Function, got: Object` and panic
- Added a `jsDynamicCast<AsyncContextFrame*>` check before the final
`napi_object` fallback to correctly report these values as
`napi_function`

Closes #25933

## Test plan
- [x] Verify encore.dev + supertokens reproduction from the issue no
longer panics
- [ ] Existing napi tests continue to pass

Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-27 13:35:15 -08:00
robobun
9a16f4c345 fix(http2): correct canReceiveData logic per RFC 7540 (#26491)
## Summary

- Fixed inverted logic in the `canReceiveData` function in HTTP/2 stream
state handling
- Added gRPC streaming tests to verify correct behavior

## Problem

The `canReceiveData` function had completely inverted logic that
reported incorrect `remoteClose` status:

| Stream State | Before (Wrong) | After (Correct) |
|--------------|----------------|-----------------|
| OPEN | `false` (can't receive) | `true` (can receive) |
| HALF_CLOSED_LOCAL | `false` (can't receive) | `true` (can receive from remote) |
| HALF_CLOSED_REMOTE | `true` (can receive) | `false` (remote closed) |
| CLOSED | `true` (can receive) | `false` (stream done) |

Per RFC 7540 Section 5.1:
- In `HALF_CLOSED_LOCAL` state, the local endpoint has sent END_STREAM
but can still **receive** data from the remote peer
- In `HALF_CLOSED_REMOTE` state, the remote endpoint has sent END_STREAM
so no more data will be received
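
Restating the corrected predicate in TypeScript pseudocode (the actual change is in Bun's Zig HTTP/2 stream code; this only mirrors the table above):

```ts
// State names follow RFC 7540 §5.1.
type StreamState = "OPEN" | "HALF_CLOSED_LOCAL" | "HALF_CLOSED_REMOTE" | "CLOSED";

function canReceiveData(state: StreamState): boolean {
  switch (state) {
    case "OPEN":
    case "HALF_CLOSED_LOCAL": // we sent END_STREAM, but the peer can still send DATA
      return true;
    case "HALF_CLOSED_REMOTE": // the peer already sent END_STREAM
    case "CLOSED":
      return false;
  }
}
```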

## Test plan

- [x] Added gRPC streaming tests covering unary, server streaming,
client streaming, and bidirectional streaming calls
- [x] Verified HTTP/2 test suite passes (same or fewer failures than
before)
- [x] Verified the gRPC test suite improves (7 failures, down from 9
failures plus 2 errors before)

Closes #20875

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 10:29:34 -08:00
robobun
ba426210c2 fix(shell): handle ".", "", "./" in cwd() by using process.cwd() (#26461)
## Summary
- Fix `.cwd(".")` on Bun Shell commands causing an ENOENT error with a
path ending in "undefined"
- The same fix applies to `.cwd("")` and `.cwd("./")`
- Falls back to `process.cwd()` when `defaultCwd` is undefined
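
A quick before/after illustration (the command and directory here are arbitrary):

```ts
import { $ } from "bun";

// Previously, `.cwd(".")` resolved against an undefined defaultCwd and the
// spawn failed with ENOENT on a path ending in "undefined". Now "." (and ""
// and "./") fall back to process.cwd().
const out = await $`pwd`.cwd(".").text();
console.log(out.trim()); // prints process.cwd()
```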

Closes #26460

## Test plan
- [x] Added regression test in `test/regression/issue/26460.test.ts`
- [x] Verified test fails with `USE_SYSTEM_BUN=1` (reproduces the bug)
- [x] Verified test passes with `bun bd test` (fix works)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 16:04:45 -08:00
Dylan Conway
bd63fb9ef6 fix: BUN_OPTIONS bare flags getting trailing whitespace (#26464)
## Summary

Fix a bug in `appendOptionsEnv` where bare flags (no `=`) that aren't
the last option get a trailing space appended, causing the argument
parser to not recognize them.

For example, `BUN_OPTIONS="--cpu-prof --cpu-prof-dir=profiles"` would
parse `--cpu-prof` as `"--cpu-prof "` (trailing space), so CPU profiling
was never enabled.

## Root Cause

When `appendOptionsEnv` encounters a `--flag` followed by whitespace, it
advances past the whitespace looking for a possible quoted value (e.g.
`--flag "quoted"`). If no quote is found and there's no `=`, it falls
through without resetting `j`, so the emitted argument includes the
trailing whitespace.

## Fix

Save `end_of_flag = j` after scanning the flag name. Add an `else`
branch that resets `j = end_of_flag` when no value (quote or `=`) is
found after the whitespace. This is a 3-line change.
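
A TypeScript model of the corrected scan (illustrative only; the real routine is the Zig `appendOptionsEnv` and also handles quoted values after a flag):

```ts
// Minimal model of splitting BUN_OPTIONS into arguments. It only covers the
// bare-flag / --flag=value cases that matter for the trailing-space bug.
function splitBunOptions(env: string): string[] {
  const args: string[] = [];
  let i = 0;
  while (i < env.length) {
    while (i < env.length && env[i] === " ") i++; // skip separators
    if (i >= env.length) break;
    const start = i;
    while (i < env.length && env[i] !== " ") i++; // scan one option
    // Before the fix, scanning ahead for a possible quoted value could leave
    // the cursor past the whitespace, emitting "--cpu-prof " with a trailing
    // space; slicing at the end of the option keeps the flag intact.
    args.push(env.slice(start, i));
  }
  return args;
}

console.log(splitBunOptions("--cpu-prof --cpu-prof-dir=profiles"));
// ["--cpu-prof", "--cpu-prof-dir=profiles"]
```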

Also fixes a separate bug in `BunCPUProfiler.zig` where `--cpu-prof-dir`
with an absolute path would hit a debug assertion (`path.append` on an
already-rooted path with an absolute input). Changed to `path.join`
which handles both relative and absolute paths correctly.

## Tests

- `test/cli/env/bun-options.test.ts`: Two new tests verifying
`--cpu-prof --cpu-prof-dir=<abs-path>` produces a `.cpuprofile` file,
for both normal and standalone compiled executables.
2026-01-26 14:02:36 -08:00