Rewrites the 11 case-changing utility methods (camelCase, pascalCase,
snakeCase, etc.) from Zig to C++, eliminating two unnecessary
allocations and transcoding steps.
The Zig implementation converted every JS string to UTF-8 via
bunstr.toUTF8(allocator), processed codepoints, then converted back
via bun.String.cloneUTF8(result_bytes) — two unnecessary allocations
+ transcoding for every call.
Move to C++ and work directly with the JSC string's native encoding
(Latin1 or UTF-16) using StringView, StringBuilder, and ICU — same
pattern as stripANSI.cpp.
- New: CaseChange.cpp + CaseChange.h with the case-change algorithm
templated on Latin1Character/UChar
- Wiring: 11 functions registered directly in bunObjectTable as C++
host functions
- Cleanup: Deleted string_case.zig and all Zig/C++ bridge wiring
(icu_toUpper/icu_toLower wrappers)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add Bun.camelCase, pascalCase, snakeCase, kebabCase, constantCase,
dotCase, capitalCase, trainCase, pathCase, sentenceCase, and noCase
matching the change-case npm package. Uses ICU for full Unicode support
and bun.strings.UnsignedCodepointIterator for codepoint iteration.
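Illustrative usage (outputs mirror the change-case package's conventions; exact edge-case behavior is defined by the implementation):
```ts
// Hypothetical inputs/outputs, following change-case conventions.
Bun.camelCase("hello world");    // "helloWorld"
Bun.pascalCase("hello world");   // "HelloWorld"
Bun.snakeCase("helloWorld");     // "hello_world"
Bun.kebabCase("Hello World");    // "hello-world"
Bun.constantCase("hello world"); // "HELLO_WORLD"
Bun.sentenceCase("helloWorld");  // "Hello world"
```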
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ments (#26717)"
This reverts commit 315e822866.
### What does this PR do?
### How did you verify your code works?
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
Enables the `net.Server → Http2SecureServer` connection upgrade pattern
used by libraries like
[http2-wrapper](https://github.com/szmarczak/http2-wrapper),
[crawlee](https://github.com/apify/crawlee), and custom HTTP/2 proxy
servers. This pattern works by accepting raw TCP connections on a
`net.Server` and forwarding them to an `Http2SecureServer` via
`h2Server.emit('connection', rawSocket)`.
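A minimal sketch of that pattern (certificate paths and port are placeholders):
```ts
import net from "node:net";
import http2 from "node:http2";
import { readFileSync } from "node:fs";

const h2Server = http2.createSecureServer({
  key: readFileSync("key.pem"),
  cert: readFileSync("cert.pem"),
});
h2Server.on("stream", stream => {
  stream.respond({ ":status": 200 });
  stream.end("ok");
});

// Accept raw TCP connections and forward them to the HTTP/2 server.
const tcpServer = net.createServer(rawSocket => {
  h2Server.emit("connection", rawSocket);
});
tcpServer.listen(8443);
```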
#### Bug fixes
**SSLWrapper use-after-free (Zig)**
Two use-after-free bugs in `ssl_wrapper.zig` are fixed:
1. **`flush()` stale pointer** — `flush()` captured the `ssl` pointer
*before* calling `handleTraffic()`, which can trigger a close callback
that frees the SSL object via `deinit`. The pointer was then used after
being freed. Fix: read `this.ssl` *after* `handleTraffic()` returns.
2. **`handleReading()` null dereference** — `handleReading()` called
`triggerCloseCallback()` after `triggerDataCallback()` without checking
whether the data callback had already closed the connection. This led to
a null function pointer dereference. Fix: check `this.ssl == null ||
this.flags.closed_notified` before calling the close callback.
### How did you verify your code works?
- Added **13 in-process tests** (`node-http2-upgrade.test.mts`) covering
the `net.Server → Http2SecureServer` upgrade path:
- GET/POST requests through upgraded connections
- Sequential requests sharing a single H2 session
- `session` event emission
- Concurrent clients with independent sessions
- Socket close ordering (rawSocket first vs session first) — no crash
- ALPN protocol negotiation (`h2`)
- Varied status codes (200, 302, 404)
- Client disconnect mid-response (stream destroyed early)
- Three independent clients producing three distinct sessions
- Tests use `node:test` + `node:assert` and **pass in both Bun and
Node.js**
- Ported `test-http2-socket-close.js` from the Node.js test suite,
verifying no segfault when the raw socket is destroyed before the H2
session is closed
---------
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Fixes a bug where sequential HTTP requests with proxy-style absolute
URLs (e.g. `GET http://example.com/path HTTP/1.1`) hang on the 2nd+
request when using keep-alive connections.
## Root Cause
In `packages/bun-uws/src/HttpParser.h`, the parser was treating
proxy-style absolute URLs identically to `CONNECT` method requests —
setting `isConnectRequest = true` and entering tunnel mode. This flag
was never reset between requests on the same keep-alive connection, so
the 2nd+ request was swallowed as raw tunnel data instead of being
parsed as HTTP.
## Fix
3-line change in `HttpParser.h:569`:
- **`isConnect`**: Now only matches actual `CONNECT` method requests
(removed `isHTTPorHTTPSPrefixForProxies` from the condition)
- **`isProxyStyleURL`**: New variable that detects `http://`/`https://`
prefixes and accepts them as valid request targets — without triggering
tunnel mode
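A sketch of the previously-hanging pattern (not the exact regression test; host and response body are illustrative):
```ts
import { serve, connect } from "bun";

const server = serve({
  port: 0,
  fetch(req) {
    return new Response("ok: " + req.url);
  },
});

const socket = await connect({
  hostname: "localhost",
  port: server.port,
  socket: {
    data(_sock, chunk) {
      console.write(chunk.toString());
    },
  },
});

const request =
  "GET http://example.com/path HTTP/1.1\r\nHost: example.com\r\nConnection: keep-alive\r\n\r\n";
socket.write(request); // 1st request: always parsed
socket.write(request); // 2nd request: previously swallowed as tunnel data and never answered
```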
## Who was affected
- Any Bun HTTP server (`Bun.serve()` or `node:http createServer`)
receiving proxy-style requests on keep-alive connections
- HTTP proxy servers built with Bun could only handle one request per
connection
- Bun's own HTTP client making sequential requests through an HTTP proxy
backed by a Bun server
## Test
Added `test/js/node/http/node-http-proxy-url.test.ts` with 3 test cases:
1. Sequential GET requests with absolute URL paths
2. Sequential POST requests with absolute URL paths
3. Mixed normal and proxy-style URLs
Tests run under both Node.js and Bun for compatibility verification.
- ❌ Fails with system bun (2/3 tests timeout on 2nd request)
- ✅ Passes with debug build (3/3 tests pass)
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Fix a use-after-free bug in the bindgen code generator where string
arguments with default values would have their underlying WTF::String
destroyed before the BunString was used
- The issue occurred because for optional string parameters with
defaults, a WTF::String was created inside an `if` block, converted to
BunString, then the if block closed and destroyed the WTF::String while
the BunString was still in use
- This manifested as a segfault in `Bun.stringWidth()` and potentially
other functions using optional string arguments
## Details
The crash stack trace showed:
```
Segmentation fault at address 0x31244B0F0
visible.zig:888: string.immutable.visible.visible.visibleUTF16WidthFn
BunObject.zig:1371: bindgen_BunObject_dispatchStringWidth1
GeneratedBindings.cpp:242: bindgen_BunObject_jsStringWidth
```
The generated code before this fix looked like:
```cpp
BunString argStr;
if (!arg0.value().isUndefinedOrNull()) {
    WTF::String wtfString_0 = WebCore::convert<...>(...);
    argStr = Bun::toString(wtfString_0);
} // <-- wtfString_0 destroyed here!
// ... argStr used later, pointing to freed memory
```
The fix declares the WTF::String holder outside the if block:
```cpp
BunString argStr;
WTF::String wtfStringHolder_0; // Lives until function returns
if (!arg0.value().isUndefinedOrNull()) {
    wtfStringHolder_0 = WebCore::convert<...>(...);
}
if (!wtfStringHolder_0.isEmpty()) argStr = Bun::toString(wtfStringHolder_0);
// argStr now points to valid memory
```
This fix applies to both:
- Direct string function arguments with defaults (e.g.,
`t.DOMString.default("")`)
- Dictionary fields with string defaults
## Test plan
- [x] Existing `stringWidth.test.ts` tests pass (105 tests)
- [x] Manual testing with GC stress shows no crashes
- [x] `os.userInfo()` with encoding option works correctly
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Update LLVM version references across build scripts, Dockerfiles, CI,
Nix configs, and documentation
- Fix LLVM 21 `-Wcharacter-conversion` errors in WebKit bindings:
- `EncodingTables.h`: pragma for intentional char32_t/char16_t
comparisons
- `TextCodecCJK.cpp`: widen `gb18030AsymmetricEncode` param to char32_t
- `URLPatternParser`: widen `isValidNameCodepoint` param to char32_t,
cast for `startsWith`
- Fix `__libcpp_verbose_abort` noexcept mismatch (LLVM 21 uses
`_NOEXCEPT`)
- Fix dangling pointer in `BunJSCModule.h` (`toCString` temporary
lifetime)
- Remove `useMathSumPreciseMethod` (removed upstream in JSC)
**Before merging:** Merge https://github.com/oven-sh/WebKit/pull/153
first, then update `WEBKIT_VERSION` in `cmake/tools/SetupWebKit.cmake`
to point to the merged commit.
## Test plan
- [ ] Build bun debug on macOS with LLVM 21
- [ ] Build bun on Linux (glibc)
- [ ] Build bun on Linux (musl)
- [ ] Build bun on Windows
- [ ] Run test suite
Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Add `[Symbol.dispose]` to mock function prototype, aliased to
`mockRestore`
- Enables `using spy = spyOn(obj, "method")` to auto-restore when
leaving scope
- Works for both `spyOn()` and `mock()`
Addresses #6040 — gives users a clean way to scope spy lifetimes instead
of manually calling `mockRestore()` or relying on `afterEach`.
### Example
```ts
import { spyOn, expect, test } from "bun:test";
test("auto-restores spy", () => {
const obj = { method: () => "original" };
{
using spy = spyOn(obj, "method").mockReturnValue("mocked");
expect(obj.method()).toBe("mocked");
}
// automatically restored
expect(obj.method()).toBe("original");
});
```
## Test plan
- `bun bd test test/js/bun/test/mock-disposable.test.ts` — 3 tests pass
- Verified tests fail with `USE_SYSTEM_BUN=1`
## Summary
- Make `defaultRemainingRunsUntilSkipReleaseAccess` configurable at
runtime instead of a compile-time constant
- Add `BUN_GC_RUNS_UNTIL_SKIP_RELEASE_ACCESS` environment variable to
control how many idle event loop iterations pass before skipping JSC
heap `releaseAccess` calls in `onBeforeWait`
- Default remains 10, matching the previous hardcoded value
## Test plan
- [ ] Verify default behavior is unchanged (no env var set, value is 10)
- [ ] Verify `BUN_GC_RUNS_UNTIL_SKIP_RELEASE_ACCESS=0` causes release
access to be skipped every iteration
- [ ] Verify `BUN_GC_RUNS_UNTIL_SKIP_RELEASE_ACCESS=100` delays skipping
for 100 idle iterations
- [ ] Verify negative values are ignored (default is preserved)
- [ ] Verify non-numeric values are ignored (default is preserved)
## Changelog
<!-- CHANGELOG:START -->
<!-- CHANGELOG:END -->
🤖 Generated with [Claude Code](https://claude.com/claude-code) (0%
3-shotted by claude)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
This is a tiny optimization that skips creating and dispatching an Event
object when `AbortSignal.abort()` is called with no registered
listeners.
## Changes
When there are no listeners (no `addEventListener` or `onabort`), we now
check `hasEventListeners()` before creating the Event, avoiding:
- Event object allocation (~112 bytes)
- EventPath creation
- dispatchEvent overhead (hash map lookups, method calls)
## Performance
Improvement for the no-listener case:
- **~6% faster** in micro-benchmarks
- ~16ms saved per 1M `abort()` calls (271ms → 255ms)
| Case | Before | After | Improvement |
|------|--------|-------|-------------|
| no listener | 271 ms | 255 ms | ~6% |
| with listener | 368 ms | 370 ms | (same) |
## Why this is safe
The optimization has no observable side effects because:
- `dispatchEvent` is called from C++, not observable via JS
monkey-patching
- Without listeners, no code can obtain a reference to the Event object
- All internal state (`aborted`, `reason`) is set correctly regardless
## Test Plan
- Existing AbortController/AbortSignal tests pass
- Added mitata benchmark: `bench/snippets/abort-signal.mjs`
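A minimal sketch of the shape of that benchmark (assumes the mitata API; not the exact file contents):
```ts
import { bench, run } from "mitata";

bench("abort() with no listeners", () => {
  const controller = new AbortController();
  controller.abort(); // fast path: no Event object is created or dispatched
});

bench("abort() with a listener", () => {
  const controller = new AbortController();
  controller.signal.addEventListener("abort", () => {});
  controller.abort(); // still allocates and dispatches the Event
});

await run();
```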
## Summary
- Fix path normalization for "." on Windows where `normalizeStringBuf`
was incorrectly stripping it to an empty string
- This caused `existsSync('.')`, `statSync('.')`, and other fs
operations to fail on Windows
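The failing calls, for illustration:
```ts
import { existsSync, statSync } from "node:fs";

// Previously failed on Windows because "." was normalized to an empty string.
console.log(existsSync("."));              // true
console.log(statSync(".").isDirectory());  // true
console.log(existsSync(".."));             // true
```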
## Test plan
- Added regression test `test/regression/issue/26631.test.ts` that tests
`existsSync`, `exists`, `statSync`, and `stat` for both `.` and `..`
paths
- All tests pass locally with `bun bd test
test/regression/issue/26631.test.ts`
- Verified code compiles on all platforms with `bun run zig:check-all`
Fixes #26631
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Add validation to require `--compile` when using ESM bytecode
- Update documentation to clarify ESM bytecode requirements
## Why
ESM module resolution is two-phase: (1) analyze imports/exports, (2)
evaluate. Without `--compile`, there's no `module_info` embedded, so JSC
must still parse the file for module analysis even with bytecode -
causing a double-parse deopt.
## Changes
- **CLI**: Error when `--bytecode --format=esm` is used without
`--compile`
- **JS API**: Error when `bytecode: true, format: 'esm'` is used without
`compile: true`
- **Docs**: Update bytecode.mdx, executables.mdx, index.mdx to clarify
requirements
- **Types**: Update JSDoc for bytecode option in bun.d.ts
## Test plan
```bash
# Should error
bun build ./test.js --bytecode --format=esm --outdir=./out
# error: ESM bytecode requires --compile. Use --format=cjs for bytecode without --compile.
# Should work
bun build ./test.js --bytecode --format=esm --compile --outfile=./mytest
bun build ./test.js --bytecode --format=cjs --outdir=./out
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Adds `--cpu-prof-interval` to configure the CPU profiler sampling
interval in microseconds (default: 1000), matching Node.js's
`--cpu-prof-interval` flag.
```sh
bun --cpu-prof --cpu-prof-interval 500 index.js
```
- Parsed as `u32`, truncated to `c_int` when passed to JSC's
`SamplingProfiler::setTimingInterval`
- Invalid values silently fall back to the default (1000μs)
- Warns if used without `--cpu-prof` or `--cpu-prof-md`
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
Fixes #26625
This fixes a segmentation fault that occurred on Windows x64 when the GC
finalizer tried to free shell interpreter resources that were already
partially freed during normal shell completion.
- Added explicit `cleanup_state` enum to track resource ownership state
- `needs_full_cleanup`: Nothing cleaned up yet, finalizer must clean
everything
- `runtime_cleaned`: `finish()` already cleaned IO/shell, finalizer
skips those
- Early return in `#derefRootShellAndIOIfNeeded()` when already cleaned
- Explicit state-based cleanup in `deinitFromFinalizer()`
The vulnerability existed on all platforms but was most reliably
triggered on Windows with high GC pressure (many concurrent shell
commands).
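A rough sketch of the kind of stress that triggered it (not the exact regression test):
```ts
import { $ } from "bun";

// Many short-lived shell interpreters finishing while GC runs aggressively;
// previously the finalizer could free resources that finish() had already released.
await Promise.all(Array.from({ length: 100 }, () => $`echo hi`.quiet()));
Bun.gc(true); // force a synchronous collection
```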
## Test plan
- [x] Build passes (`bun bd`)
- [x] New regression test added (`test/regression/issue/26625.test.ts`)
- [x] Existing shell tests pass (same 4 pre-existing failures, no new
failures)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- `bun run build:local` now handles everything: configuring JSC,
building JSC, and building Bun in a single command on all platforms
(macOS, Linux, Windows). Previously required manually running `bun run
jsc:build:debug`, deleting a duplicate `InspectorProtocolObjects.h`
header, and then running the Bun build separately.
- Incremental JSC rebuilds: JSC is built via `add_custom_target` that
delegates to JSC's inner Ninja, which tracks WebKit source file changes
and only rebuilds what changed. `ninja -Cbuild/debug-local` also works
after the first build.
- Cross-platform support:
- macOS: Uses system ICU automatically
- Linux: Uses system ICU via find_package instead of requiring bundled
static libs
- Windows: Builds ICU from source automatically (only when libs don't
already exist), sets up static CRT and ICU naming conventions
### Changes
- cmake/tools/SetupWebKit.cmake: Replace the old WEBKIT_LOCAL block
(which just set include paths and assumed JSC was pre-built) with full
JSC configure + build integration for all platforms
- cmake/targets/BuildBun.cmake: Add jsc as a build dependency, use
system ICU on Linux for local builds, handle bmalloc linking for local
builds
- CONTRIBUTING.md / docs/project/contributing.mdx: Simplify "Building
WebKit locally" docs from ~15 lines of manual steps to 3 lines
## Test plan
- [x] macOS arm64: clean build, incremental rebuild, WebKit source
change rebuild
- [x] Windows x64: clean build with ICU, incremental rebuild with ICU
skip
- [x] Linux x64: build with system ICU via find_package
- [x] No duplicate InspectorProtocolObjects.h errors
- [x] build/debug-local/bun-debug --version works
Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Extract NO_PROXY checking logic from getHttpProxyFor into a reusable
isNoProxy method on the env Loader. This allows both fetch() and
WebSocket to check NO_PROXY even when a proxy is explicitly provided via
the proxy option (not just via http_proxy env var).
Changes:
- env_loader.zig: Extract isNoProxy() from getHttpProxyFor()
- FetchTasklet.zig: Check isNoProxy() before using explicit proxy
- WebSocket.cpp: Check Bun__isNoProxy() before using explicit proxy
- virtual_machine_exports.zig: Export Bun__isNoProxy for C++ access
- Add NO_PROXY tests for both fetch and WebSocket proxy paths
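For illustration (host names are placeholders):
```ts
// Run with e.g. NO_PROXY=internal.example.com
// Before this change the explicit proxy option was always honored; now a NO_PROXY
// match bypasses it (the same check applies to WebSocket's proxy option).
const res = await fetch("http://internal.example.com/health", {
  proxy: "http://corporate-proxy:8080",
});
console.log(res.status);
```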
### How did you verify your code works?
Tests
---------
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
## Summary
- Fix type definition for `Socket.reload()` to match runtime behavior
- The runtime expects `{ socket: handler }` but types previously
accepted just `handler`
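A hedged illustration of the runtime shape (hostname, port, and handlers are placeholders):
```ts
import { connect } from "bun";

const socket = await connect({
  hostname: "localhost",
  port: 4000,
  socket: {
    data(sock, chunk) {},
  },
});

// The runtime (and now the types) expects the handlers wrapped in a `socket` key:
socket.reload({
  socket: {
    data(sock, chunk) {
      // replacement handler
    },
  },
});
```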
## Test plan
- [x] Added regression test `test/regression/issue/26290.test.ts`
- [x] Verified test passes with `bun bd test`
Fixes #26290
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Alistair Smith <hi@alistair.sh>
## Summary
- Adds missing SIMD variants to the `Build.Target` TypeScript type
- The runtime accepts targets like `bun-linux-x64-modern` but TypeScript
was rejecting them
- Generalized the type to use `${Architecture}` template where possible
## Test plan
- [x] Added regression test in `test/regression/issue/26247.test.ts`
that validates all valid target combinations type-check correctly
- [x] Verified with `bun bd test test/regression/issue/26247.test.ts`
Fixes #26247
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Alistair Smith <hi@alistair.sh>
## Summary
- Adds missing `bun-linux-x64-baseline` and `bun-linux-x64-modern`
compile target types
- These targets are supported by the Bun CLI but were missing from the
TypeScript type definitions
## Changes
Added `bun-linux-x64-${SIMD}` to the `CompileTarget` type union, which
expands to:
- `bun-linux-x64-baseline`
- `bun-linux-x64-modern`
## Test plan
- [x] TypeScript should now accept `target: 'bun-linux-x64-modern'`
without type errors
Closes #26247
🤖 Generated with [Claude Code](https://claude.com/claude-code)
### What does this PR do?
### How did you verify your code works?
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
## Summary
- Improve handling of fragmented chunk data in the HTTP parser
- Add test coverage for edge cases
## Test plan
- [x] New tests pass
- [x] Existing tests pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
### What does this PR do?
Fixes https://github.com/oven-sh/bun/issues/26597
### How did you verify your code works?
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Add a CI step that runs JSC JIT stress tests under QEMU when
`SetupWebKit.cmake` is modified. This complements #26571 (basic baseline
CPU verification) by also testing JIT-generated code.
## Motivation
PR #26571 added QEMU-based verification that catches illegal
instructions in:
- Startup code
- Static initialization
- Basic interpreter execution
However, JIT compilers (DFG, FTL, Wasm BBQ/OMG) generate code at runtime
that could emit AVX or LSE instructions even if the compiled binary
doesn't. The JSC stress tests from #26380 exercise all JIT tiers through
hot loops that trigger tier-up.
## How it works
1. Detects if `cmake/tools/SetupWebKit.cmake` is modified in the PR
2. If WebKit changes are detected, runs `verify-jit-stress-qemu.sh`
after the build
3. Executes all 78 JIT stress test fixtures under QEMU with restricted
CPU features:
- x64: `qemu-x86_64 -cpu Nehalem` (SSE4.2, no AVX)
- aarch64: `qemu-aarch64 -cpu cortex-a53` (ARMv8.0-A, no LSE)
4. Any SIGILL from JIT-generated code fails the build
## Platforms tested
| Target | CPU Model | What it catches |
|---|---|---|
| `linux-x64-baseline` | Nehalem | JIT emitting AVX/AVX2/AVX512 |
| `linux-x64-musl-baseline` | Nehalem | JIT emitting AVX/AVX2/AVX512 |
| `linux-aarch64` | Cortex-A53 | JIT emitting LSE atomics, SVE |
| `linux-aarch64-musl` | Cortex-A53 | JIT emitting LSE atomics, SVE |
## Timeout
The step has a 30-minute timeout since QEMU emulation is ~10-50x slower
than native. This only runs on WebKit update PRs, so it won't affect
most CI runs.
## Refs
- #26380 - Added JSC JIT stress tests
- #26571 - Added basic QEMU baseline verification
## Summary
Adds `bun run --parallel` and `bun run --sequential` — new flags for
running multiple package.json scripts concurrently or sequentially with
Foreman-style prefixed output. Includes full `--filter`/`--workspaces`
integration for running scripts across workspace packages.
### Usage
```bash
# Run "build" and "test" concurrently from the current package.json
bun run --parallel build test
# Run "build" and "test" sequentially with prefixed output
bun run --sequential build test
# Glob-matched script names
bun run --parallel "build:*"
# Run "build" in all workspace packages concurrently
bun run --parallel --filter '*' build
# Run "build" in all workspace packages sequentially
bun run --sequential --workspaces build
# Glob-matched scripts across all packages
bun run --parallel --filter '*' "build:*"
# Multiple scripts across all packages
bun run --parallel --filter '*' build lint test
# Continue running even if one package fails
bun run --parallel --no-exit-on-error --filter '*' test
# Skip packages missing the script
bun run --parallel --workspaces --if-present build
```
## How it works
### Output format
Each script's stdout/stderr is prefixed with a colored, padded label:
```
build | compiling...
test | running suite...
lint | checking files...
```
### Label format
- **Without `--filter`/`--workspaces`**: labels are just the script name
→ `build | output`
- **With `--filter`/`--workspaces`**: labels are `package:script` →
`pkg-a:build | output`
- **Fallback**: if a package.json has no `name` field, the relative path
from the workspace root is used (e.g., `packages/my-pkg:build`)
### Execution model
- **`--parallel`**: all scripts start immediately, output is interleaved
with prefixes
- **`--sequential`**: scripts run one at a time in order, each waiting
for the previous to finish
- **Pre/post scripts** (`prebuild`/`postbuild`) are grouped with their
main script and run in dependency order within each group
- By default, a failure kills all remaining scripts.
`--no-exit-on-error` lets all scripts finish.
### Workspace integration
The workspace branch in `multi_run.zig` uses a two-pass approach for
deterministic ordering:
1. **Collect**: iterate workspace packages using
`FilterArg.PackageFilterIterator` (same infrastructure as
`filter_run.zig`), filtering with `FilterArg.FilterSet`, collecting
matched packages with their scripts, PATH, and cwd.
2. **Sort**: sort matched packages by name (tiebreak by directory path)
for deterministic ordering — filesystem iteration order from the glob
walker is nondeterministic.
3. **Build configs**: for each sorted package, expand script names
(including globs like `build:*`) against that package's scripts map,
creating `ScriptConfig` entries with `pkg:script` labels and per-package
cwd/PATH.
### Behavioral consistency with `filter_run.zig`
| Behavior | `filter_run.zig` | `multi_run.zig` (this PR) |
|----------|-------------------|---------------------------|
| `--workspaces` skips root package | Yes | Yes |
| `--workspaces` errors on missing script | Yes | Yes |
| `--if-present` silently skips missing | Yes | Yes |
| `--filter` without `--workspaces` includes root | Yes (if matches) | Yes (if matches) |
| Pre/post script chains | Per-package | Per-package |
| Per-package cwd | Yes | Yes |
| Per-package PATH (`node_modules/.bin`) | Yes | Yes |
### Key implementation details
- Each workspace package script runs in its own package directory with
its own `node_modules/.bin` PATH
- `dirpath` from the glob walker is duped to avoid use-after-free when
the iterator's arena is freed between patterns
- `addScriptConfigs` takes an optional `label_prefix` parameter — `null`
for single-package mode, package name for workspace mode
- `MultiRunProcessHandle` is registered in the `ProcessExitHandler`
tagged pointer union in `process.zig`
## Files changed
| File | Change |
|------|--------|
| `src/cli/multi_run.zig` | New file: process management, output routing, workspace integration, dependency ordering |
| `src/cli.zig` | Dispatch to `MultiRun.run()` for `--parallel`/`--sequential`, new context fields |
| `src/cli/Arguments.zig` | Parse `--parallel`, `--sequential`, `--no-exit-on-error` flags |
| `src/bun.js/api/bun/process.zig` | Register `MultiRunProcessHandle` in `ProcessExitHandler` tagged pointer union |
| `test/cli/run/multi-run.test.ts` | 118 tests (102 core + 16 workspace integration) |
| `docs/pm/filter.mdx` | Document `--parallel`/`--sequential` + `--filter`/`--workspaces` combination |
| `docs/snippets/cli/run.mdx` | Add `--parallel`, `--sequential`, `--no-exit-on-error` parameter docs |
## Test plan
All 118 tests pass with debug build (`bun bd test
test/cli/run/multi-run.test.ts`). The 16 new workspace tests all fail
with system bun (`USE_SYSTEM_BUN=1`), confirming they test new
functionality.
### Workspace integration tests (16 tests)
1. `--parallel --filter='*'` runs script in all packages
2. `--parallel --filter='pkg-a'` runs only in matching package
3. `--parallel --workspaces` matches all workspace packages
4. `--parallel --filter='*'` with glob expands per-package scripts
5. `--sequential --filter='*'` runs in sequence (deterministic order)
6. Workspace + failure aborts other scripts
7. Workspace + `--no-exit-on-error` lets all finish
8. `--workspaces` skips root package
9. Each workspace script runs in its own package directory (cwd
verification)
10. Multiple script names across workspaces (`build` + `test`)
11. Pre/post scripts work per workspace package
12. `--filter` skips packages without the script (no error)
13. `--workspaces` errors when a package is missing the script
14. `--workspaces --if-present` skips missing scripts silently
15. Labels are padded correctly across workspace packages
16. Package without `name` field uses relative path as label
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
Fixes illegal instruction (SIGILL) crashes on ARMv8.0 aarch64 CPUs
(Cortex-A53, Raspberry Pi 4, AWS a1 instances).
## Root cause
Upstream mimalloc force-enables `MI_OPT_ARCH` on arm64, which adds
`-march=armv8.1-a` and emits LSE atomic instructions (`casa`, `swpa`,
`ldaddl`). These are not available on ARMv8.0 CPUs.
## Fix
- Pass `MI_NO_OPT_ARCH=ON` to mimalloc on aarch64 (has priority over
`MI_OPT_ARCH` in mimalloc's CMake)
- Update WebKit to autobuild-596e48e22e3a1090e5b802744a7938088b1ea860
which explicitly passes `-march` flags to the WebKit build
## Verification
Includes QEMU-based baseline CPU verification CI steps (#26571) that
catch these regressions automatically.
## Summary
Add CI steps that verify baseline builds don't use CPU instructions
beyond their target. Uses QEMU user-mode emulation with restricted CPU
features — any illegal instruction causes SIGILL and fails the build.
## Platforms verified
| Build Target | QEMU Command | What it catches |
|---|---|---|
| `linux-x64-baseline` (glibc) | `qemu-x86_64 -cpu Nehalem` | AVX, AVX2, AVX512 |
| `linux-x64-musl-baseline` | `qemu-x86_64 -cpu Nehalem` | AVX, AVX2, AVX512 |
| `linux-aarch64` (glibc) | `qemu-aarch64 -cpu cortex-a35` | LSE atomics, SVE, dotprod |
| `linux-aarch64-musl` | `qemu-aarch64 -cpu cortex-a35` | LSE atomics, SVE, dotprod |
## How it works
Each verify step:
1. Downloads the built binary artifact from the `build-bun` step
2. Installs `qemu-user-static` on-the-fly (dnf/apk/apt-get)
3. Runs two smoke tests under QEMU with restricted CPU features:
- `bun --version` — validates startup, linker, static init code
- `bun -e eval` — validates JSC initialization and basic execution
4. Hard fails on SIGILL (exit code 132)
The verify step runs in the build group after `build-bun`, with a
5-minute timeout.
## Known issue this will surface
**mimalloc on aarch64**: Built with `MI_OPT_ARCH=ON` which adds
`-march=armv8.1-a`, enabling LSE atomics. This will SIGILL on
Cortex-A35/A53 CPUs. The aarch64 verify steps are expected to fail
initially, confirming the test catches real issues. Fix can be done
separately in `cmake/targets/BuildMimalloc.cmake`.
When a `SyntheticModule` callback was wrapped in an `AsyncContextFrame`
on the main globalObject (where async context tracking is enabled),
evaluating it on a `NodeVMGlobalObject` would crash because the tracking
flag wasn't propagated.
`AsyncContextFrame::call` checks `isAsyncContextTrackingEnabled()` to
decide whether to unwrap the frame — without the flag, it takes the fast
path and tries to call the `AsyncContextFrame` wrapper directly, which
is not callable.
The async context data (`m_asyncContextData`) was already shared between
parent and `NodeVMGlobalObject`, but the tracking flag was missing. This
adds propagation of `isAsyncContextTrackingEnabled` alongside the data.
**Repro:** `react-email` v5.2.5 preview server crashes when rendering a
template because it imports `node:async_hooks` (enabling async context
tracking) and uses `node:vm` `SyntheticModule` for module evaluation.
Fixes #26540
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Adds `--metafile-md` CLI option to `bun build` that generates a
markdown visualization of the module graph
- Designed to help Claude and other LLMs analyze bundle composition,
identify bloat, and understand dependency chains
- Reuses existing metafile JSON generation code as a post-processing
step
## Features
The generated markdown includes:
1. **Quick Summary** - Module counts, sizes, ESM/CJS breakdown,
output/input ratio
2. **Largest Input Files** - Sorted by size to identify potential bloat
3. **Entry Point Analysis** - Shows bundle size, exports, CSS bundles,
and bundled modules
4. **Dependency Chains** - Most commonly imported modules and reverse
dependencies
5. **Full Module Graph** - Complete import/export info for each module
6. **Raw Data for Searching** - Grep-friendly markers in code blocks:
- `[MODULE:]`, `[SIZE:]`, `[IMPORT:]`, `[IMPORTED_BY:]`
- `[ENTRY:]`, `[EXTERNAL:]`, `[NODE_MODULES:]`
## Usage
```bash
# Default filename (meta.md)
bun build entry.js --metafile-md --outdir=dist
# Custom filename
bun build entry.js --metafile-md=analysis.md --outdir=dist
# Both JSON and markdown
bun build entry.js --metafile=meta.json --metafile-md=meta.md --outdir=dist
```
## Example Output
See sample output: https://gist.github.com/example (will add)
## Test plan
- [x] Test default filename (`meta.md`)
- [x] Test custom filename
- [x] Test both `--metafile` and `--metafile-md` together
- [x] Test summary metrics
- [x] Test module format info (ESM/CJS)
- [x] Test external imports
- [x] Test exports list
- [x] Test bundled modules table
- [x] Test CSS bundle reference
- [x] Test import kinds (static, dynamic, require)
- [x] Test commonly imported modules
- [x] Test largest files sorting (bloat analysis)
- [x] Test output/input ratio
- [x] Test grep-friendly raw data section
- [x] Test entry point markers
- [x] Test external import markers
- [x] Test node_modules markers
All 17 new tests pass.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
## Summary
- Removes the `#!/bin/sh` shebang from placeholder `bin/bun.exe` and
`bin/bunx.exe` scripts in the npm package
- Fixes `npm i -g bun` being completely broken on Windows since v1.3.7
## Problem
PR #26259 added a `#!/bin/sh` shebang to the placeholder scripts to show
a helpful error when postinstall hasn't run. However, npm's `cmd-shim`
reads shebangs to generate `.ps1`/`.cmd` wrappers **before** postinstall
runs, and bakes the interpreter path into them. On Windows, the wrappers
referenced `/bin/sh` which doesn't exist, causing:
```
& "/bin/sh$exe" "$basedir/node_modules/bun/bin/bun.exe" $args
~~~~~~~~~~~~~
The term '/bin/sh.exe' is not recognized...
```
Even after postinstall successfully replaced the placeholder with the
real binary, the stale wrappers still tried to invoke `/bin/sh`.
## Fix
Remove the shebang. Without it, `cmd-shim` generates a direct invocation
wrapper that works after postinstall replaces the placeholder. On Unix,
bash/zsh still execute shebang-less files as shell scripts via ENOEXEC
fallback, so the helpful error message is preserved.
## Test plan
- [x] `bun bd test test/regression/issue/24329.test.ts` passes (2/2
tests)
- Manually verify `npm i -g bun` works on Windows
Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Updates oven-sh/mimalloc bun-dev3 branch to latest upstream
microsoft/mimalloc dev3 (ffa38ab8)
- Merged 12 new commits from upstream
### Key upstream changes included:
- fix re-initialization of threads on macOS
- add lock for sub-pagemap allocations
- fix peak commit stat
- fix use of continue in bitmap find_and_clear (fixes rare case of not
finding space while it exists)
## Test plan
- [ ] CI passes
- [ ] Memory allocation tests pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Refactors `tls-sql.test.ts` to use `describeWithContainer` with a
local Docker container instead of external Neon secrets
- Updates `postgres_tls` service to build from Dockerfile (fixes SSL key
permission issues)
- Fixes pg_hba.conf to allow local socket connections for init scripts
## Test plan
- [x] Verified tests pass locally with `bun bd test
test/js/sql/tls-sql.test.ts` (30 tests pass)
- [ ] CI passes on x64 Linux (arm64 Docker tests are currently disabled)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- `napi_typeof` was returning `napi_object` for `AsyncContextFrame`
values, which are internally callable JSObjects
- Native addons that check callback types (e.g. encore.dev's runtime)
would fail with `expect Function, got: Object` and panic
- Added a `jsDynamicCast<AsyncContextFrame*>` check before the final
`napi_object` fallback to correctly report these values as
`napi_function`
Closes #25933
## Test plan
- [x] Verify encore.dev + supertokens reproduction from the issue no
longer panics
- [ ] Existing napi tests continue to pass
Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fixed inverted logic in `canReceiveData` function in HTTP/2 stream
state handling
- Added gRPC streaming tests to verify correct behavior
## Problem
The `canReceiveData` function had completely inverted logic that
reported incorrect `remoteClose` status:
| Stream State | Before (Wrong) | After (Correct) |
|--------------|----------------|-----------------|
| OPEN | `false` (can't receive) | `true` (can receive) |
| HALF_CLOSED_LOCAL | `false` (can't receive) | `true` (can receive from remote) |
| HALF_CLOSED_REMOTE | `true` (can receive) | `false` (remote closed) |
| CLOSED | `true` (can receive) | `false` (stream done) |
Per RFC 7540 Section 5.1:
- In `HALF_CLOSED_LOCAL` state, the local endpoint has sent END_STREAM
but can still **receive** data from the remote peer
- In `HALF_CLOSED_REMOTE` state, the remote endpoint has sent END_STREAM
so no more data will be received
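In code form, the corrected check is roughly (illustrative sketch; the real implementation lives in Bun's HTTP/2 stream state handling):
```ts
type StreamState = "OPEN" | "HALF_CLOSED_LOCAL" | "HALF_CLOSED_REMOTE" | "CLOSED";

function canReceiveData(state: StreamState): boolean {
  switch (state) {
    case "OPEN":
    case "HALF_CLOSED_LOCAL": // we sent END_STREAM, but the peer may still send data
      return true;
    case "HALF_CLOSED_REMOTE": // peer sent END_STREAM: nothing more to receive
    case "CLOSED":
      return false;
  }
}
```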
## Test plan
- [x] Added gRPC streaming tests covering unary, server streaming,
client streaming, and bidirectional streaming calls
- [x] Verified HTTP/2 test suite passes (same or fewer failures than
before)
- [x] Verified gRPC test suite improves (7 failures vs 9 failures before
+ 2 errors)
Closes #20875
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Fix `$`...`.cwd(".")` causing ENOENT error with path ending in
"undefined"
- The same fix applies to `.cwd("")` and `.cwd("./")`
- Falls back to `process.cwd()` when `defaultCwd` is undefined
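For illustration:
```ts
import { $ } from "bun";

// Before the fix, "." (and "" / "./") resolved against an undefined defaultCwd,
// producing a path ending in "undefined" and failing with ENOENT.
const out = await $`pwd`.cwd(".").text();
console.log(out.trim()); // the current working directory
```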
Closes #26460
## Test plan
- [x] Added regression test in `test/regression/issue/26460.test.ts`
- [x] Verified test fails with `USE_SYSTEM_BUN=1` (reproduces the bug)
- [x] Verified test passes with `bun bd test` (fix works)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
Fix a bug in `appendOptionsEnv` where bare flags (no `=`) that aren't
the last option get a trailing space appended, causing the argument
parser to not recognize them.
For example, `BUN_OPTIONS="--cpu-prof --cpu-prof-dir=profiles"` would
parse `--cpu-prof` as `"--cpu-prof "` (trailing space), so CPU profiling
was never enabled.
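For illustration, the resulting argument split before and after the fix:
```ts
// Illustrative only: how BUN_OPTIONS="--cpu-prof --cpu-prof-dir=profiles" was split.
const before = ["--cpu-prof ", "--cpu-prof-dir=profiles"]; // trailing space, flag not recognized
const after = ["--cpu-prof", "--cpu-prof-dir=profiles"];   // bare flag emitted cleanly
```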
## Root Cause
When `appendOptionsEnv` encounters a `--flag` followed by whitespace, it
advances past the whitespace looking for a possible quoted value (e.g.
`--flag "quoted"`). If no quote is found and there's no `=`, it falls
through without resetting `j`, so the emitted argument includes the
trailing whitespace.
## Fix
Save `end_of_flag = j` after scanning the flag name. Add an `else`
branch that resets `j = end_of_flag` when no value (quote or `=`) is
found after the whitespace. This is a 3-line change.
Also fixes a separate bug in `BunCPUProfiler.zig` where `--cpu-prof-dir`
with an absolute path would hit a debug assertion (`path.append` on an
already-rooted path with an absolute input). Changed to `path.join`
which handles both relative and absolute paths correctly.
## Tests
- `test/cli/env/bun-options.test.ts`: Two new tests verifying
`--cpu-prof --cpu-prof-dir=<abs-path>` produces a `.cpuprofile` file,
for both normal and standalone compiled executables.