## Summary
- Disable HTTP keep-alive when a proxy returns a 407 (Proxy
Authentication Required) status code
- This prevents subsequent requests from trying to reuse a connection
that the proxy server has closed
- Refactor proxy tests to use `describe.concurrent` and async
`Bun.spawn` patterns
## Test plan
- [x] Added test `simultaneous proxy auth failures should not hang` that
verifies multiple concurrent requests with invalid proxy credentials
complete without hanging (see the sketch below)
- [x] Existing proxy tests pass
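For reference, a minimal sketch of the scenario the test exercises (the proxy URL and credentials are placeholders, not the test's actual fixtures):
```ts
// Fire several requests at once through a proxy that rejects the credentials.
// Before the fix, later requests could hang trying to reuse the connection
// the proxy closed after its 407 response; now they all settle.
const results = await Promise.allSettled(
  Array.from({ length: 8 }, () =>
    fetch("http://example.com/", {
      proxy: "http://bad-user:bad-pass@127.0.0.1:8080",
    }),
  ),
);
console.log(results.map((r) => r.status)); // all settled, none hung
```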
🤖 Generated with [Claude Code](https://claude.ai/code)
## Summary
- **SQLClient.cpp**: Fix bug where `RETURN_IF_EXCEPTION` after
`JSONParse` would never trigger on JSON parse failure since `JSONParse`
doesn't throw
- **BunString.cpp**: Simplify by using `JSONParseWithException` instead
of manually checking for empty result and throwing
## Details
`JSC::JSONParse` returns an empty `JSValue` on failure **without
throwing an exception**. This means that `RETURN_IF_EXCEPTION(scope,
{})` will never catch JSON parsing errors when used after `JSONParse`.
Before this fix in `SQLClient.cpp`:
```cpp
JSC::JSValue json = JSC::JSONParse(globalObject, str);
RETURN_IF_EXCEPTION(scope, {}); // This never triggers on parse failure!
return json; // Returns empty JSValue
```
This could cause issues when parsing invalid JSON data from SQL
databases (e.g., PostgreSQL's JSON/JSONB columns).
`JSONParseWithException` properly throws a `SyntaxError` exception that
`RETURN_IF_EXCEPTION` can catch.
## Test plan
- [x] Build succeeds with `bun bd`
- The changes follow the same pattern used in `ModuleLoader.cpp` and
`BunObject.cpp`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Aligns Bun's temp directory resolution with Node.js's `os.tmpdir()`
behavior
- Checks `TMPDIR`, `TMP`, and `TEMP` environment variables in order
(matching Node.js; see the sketch below)
- Uses `bun.once` for lazy initialization instead of mutable static
state
- Removes `setTempdir` function and simplifies the API to use
`RealFS.tmpdirPath()` directly
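To sanity-check the precedence order, a quick sketch (hypothetical, not part of the PR) that spawns a child process with exactly one of the variables set:
```ts
import { spawnSync } from "node:child_process";

// On POSIX, each run should print /custom-tmp, since TMPDIR, TMP, and TEMP
// are consulted in that order before falling back to the OS default.
for (const name of ["TMPDIR", "TMP", "TEMP"]) {
  const { stdout } = spawnSync(
    process.execPath,
    ["-e", "console.log(require('os').tmpdir())"],
    { env: { [name]: "/custom-tmp", PATH: process.env.PATH ?? "" }, encoding: "utf8" },
  );
  console.log(name, "->", stdout.trim());
}
```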
## Test plan
- [ ] Verify temp directory resolution matches Node.js behavior
- [ ] Test with various environment variable configurations
- [ ] Ensure existing tests pass with `bun bd test`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Without this change, building with `-DWEBKIT_LOCAL=ON` fails with:
```
/work/bun/src/bun.js/bindings/BunObject.cpp:12:10: fatal error: 'JavaScriptCore/JSBase.h' file not found
12 | #include <JavaScriptCore/JSBase.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
```
This happens because the directory structure differs between downloaded
and local WebKit.
Downloaded WebKit:
```
build/debug/cache/webkit-6d0f3aac0b817cc0/
└── include/
└── JavaScriptCore/
└── JSBase.h ← Direct path
```
Local WebKit:
```
vendor/WebKit/WebKitBuild/Debug/
└── JavaScriptCore/Headers/
└── JavaScriptCore/
└── JSBase.h ← Nested path
```
The include paths are thus configured differently for each build type.
For Remote WebKit (when `WEBKIT_LOCAL=OFF`):
- SetupWebKit.cmake line 22 sets: `WEBKIT_INCLUDE_PATH = ${WEBKIT_PATH}/include`
- BuildBun.cmake line 1253 adds: `include_directories(${WEBKIT_INCLUDE_PATH})`
- This resolves to: `build/debug/cache/webkit-6d0f3aac0b817cc0/include/`
- So `#include <JavaScriptCore/JSBase.h>` finds the file at
`include/JavaScriptCore/JSBase.h` ✅
For Local WebKit (when `WEBKIT_LOCAL=ON`):
- The original code only added:
`${WEBKIT_PATH}/JavaScriptCore/Headers/JavaScriptCore`
- This resolves to:
`vendor/WebKit/WebKitBuild/Debug/JavaScriptCore/Headers/JavaScriptCore/`
- So `#include <JavaScriptCore/JSBase.h>` fails because there's no
`JavaScriptCore/` subdirectory at that level ❌
- The fix adds: `${WEBKIT_PATH}/JavaScriptCore/Headers`
- Now the include path includes:
`vendor/WebKit/WebKitBuild/Debug/JavaScriptCore/Headers/`
- So `#include <JavaScriptCore/JSBase.h>` finds the file at
`Headers/JavaScriptCore/JSBase.h` ✅
### How did you verify your code works?
Built locally.
Co-authored-by: Carl Smedstad <carsme@archlinux.org>
### What does this PR do?
In CMake, failure to execute a process places a message string in the
RESULT_VARIABLE. If that string starts with 'No', as in 'No such file or
directory', CMake interprets it as the boolean false and does not halt
the build. The new code uses `execute_process`'s built-in
`COMMAND_ERROR_IS_FATAL ANY` option to halt the build on any failure, so
the script now halts correctly when that error occurs. This could also
be fixed by quoting, but we might as well use the CMake feature.
I encountered this error when I improperly defined `BUN_EXECUTABLE`.
### How did you verify your code works?
<details>
<summary>Ran the build with invalid executable path:</summary>
```
$ node ./scripts/build.mjs \
-GNinja \
-DCMAKE_BUILD_TYPE=Debug \
-B build/debug \
--log-level=NOTICE \
-DBUN_EXECUTABLE="foo" \
-DNPM_EXECUTABLE="$(which npm)" \
-DZIG_EXECUTABLE="$(which zig)" \
-DENABLE_ASAN=OFF
Globbed 1971 sources [178.21ms]
CMake Configure
$ cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug -B /work/bun/build/debug --log-level NOTICE -DBUN_EXECUTABLE=foo -DNPM_EXECUTABLE=/bin/npm -DZIG_EXECUTABLE=/bin/zig -DENABLE_ASAN=OFF -S /work/bun -DCACHE_STRATEGY=auto
sccache: Using local cache strategy.
```
</details>
Result:
```
CMake Error at cmake/targets/BuildBun.cmake:422 (execute_process):
execute_process failed command indexes:
1: "Abnormal exit with child return code: no such file or directory"
Call Stack (most recent call first):
CMakeLists.txt:66 (include)
```
### What does this PR do?
Before this change, downloading WebKit would fail silently on, e.g., a
connection or TLS certificate issue. I experienced this when trying to
build Bun in an overly-restrictive sandbox.
After this change, the failure reason will be visible.
### How did you verify your code works?
<details>
<summary>Brought down my network interface and ran the build:</summary>
```sh
$ node ./scripts/build.mjs \
-GNinja \
-DCMAKE_BUILD_TYPE=Debug \
-B build/debug \
--log-level=NOTICE \
-DBUN_EXECUTABLE="$(which node)" \
-DNPM_EXECUTABLE="$(which npm)" \
-DZIG_EXECUTABLE="$(which zig)" \
-DENABLE_ASAN=OFF
Globbed 1971 sources [180.67ms]
CMake Configure
$ cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug -B /work/bun/build/debug --log-level NOTICE -DBUN_EXECUTABLE=/bin/node -DNPM_EXECUTABLE=/bin/npm -DZIG_EXECUTABLE=/bin/zig -DENABLE_ASAN=OFF -S /work/bun -DCACHE_STRATEGY=auto
sccache: Using local cache strategy.
...
```
</details>
Result:
```
CMake Error at cmake/tools/SetupWebKit.cmake:99 (message):
Failed to download WebKit: 6;"Could not resolve hostname"
Call Stack (most recent call first):
cmake/targets/BuildBun.cmake:1204 (include)
CMakeLists.txt:66 (include)
```
## Summary
Fixes #20821
When `bun build --compile` was used with 8 or more embedded files, the
compiled binary would silently fail to execute any code (exit code 0, no
output).
**Root cause:** Chunks were sorted alphabetically by their `entry_bits`
key bytes. For entry point 0, the key's first byte has bit 0 set (byte
pattern `0x01`), but for entry point 8, bit 8 falls into a second byte
(pattern `0x00, 0x01`). Alphabetically, `0x00 < 0x01`, so entry point
8's chunk sorted before entry point 0's.
This caused the wrong entry point to be identified as the main entry,
resulting in asset wrapper code being executed instead of the user's
code.
**Fix:** Custom sort that ensures `entry_point_id=0` (the main entry
point) always sorts first, with remaining chunks sorted alphabetically
for determinism.
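In TypeScript terms, the new ordering rule looks roughly like this (illustrative names; the real sort lives in the Zig bundler):
```ts
interface Chunk {
  entryPointId: number;
  entryBitsKey: Uint8Array; // bytes derived from entry_bits
}

function compareChunks(a: Chunk, b: Chunk): number {
  // The main entry point (id 0) always sorts first.
  if ((a.entryPointId === 0) !== (b.entryPointId === 0)) {
    return a.entryPointId === 0 ? -1 : 1;
  }
  // Remaining chunks keep byte-wise (alphabetical) order for determinism.
  const n = Math.min(a.entryBitsKey.length, b.entryBitsKey.length);
  for (let i = 0; i < n; i++) {
    const d = a.entryBitsKey[i] - b.entryBitsKey[i];
    if (d !== 0) return d;
  }
  return a.entryBitsKey.length - b.entryBitsKey.length;
}
```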
## Test plan
- Added regression test `compile/ManyEmbeddedFiles` that embeds 8 files
and verifies the main entry point runs correctly
- Verified manually with reproduction case from issue
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
It is standard practice to send HTTP/2 WINDOW_UPDATE frames once half
the window is used up:
- [nghttp2](https://en.wikipedia.org/wiki/Nghttp2#HTTP/2_implementation)
- [Go](https://github.com/golang/net/blob/master/http2/flow.go#L31)
- [Rust
h2](https://github.com/hyperium/h2/blob/master/src/proto/streams/flow_control.rs#L17)
The current behaviour of Bun is to send a window update only once the
client has sent exactly 65535 bytes. This leads to an interruption in
throughput while the window update is in flight.
This is not just a performance concern, however: some clients will not
handle it well, because if the sender stops even 1 byte before 65535,
you have a deadlock. I came across this issue because the Rust `hyper`
crate always reserves an additional 1 byte from the connection window
for each HTTP/2 stream
(https://github.com/hyperium/hyper/blob/master/src/proto/h2/mod.rs#L122).
This means that with two concurrent requests, the client treats the
window as exhausted when it actually has a byte remaining, creating a
sequential dependency between the requests that can deadlock if they
depend on running concurrently. I accept this is not a problem with Bun,
but it's a happy accident that we can resolve such off-by-one issues by
increasing the window size once it is 50% utilized.
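In pseudocode, the 50% rule amounts to something like this (an illustrative sketch, not Bun's actual internals):
```ts
const INITIAL_WINDOW = 65535;
let consumed = 0;

function onDataFrame(length: number, sendWindowUpdate: (delta: number) => void) {
  consumed += length;
  // Top the window back up once half of it is used, instead of waiting
  // for the peer to exhaust all 65535 bytes (update lands at 32767+).
  if (consumed >= INITIAL_WINDOW / 2) {
    sendWindowUpdate(consumed);
    consumed = 0;
  }
}
```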
### How did you verify your code works?
Using Wireshark, Bun debug logging, and client logging, I can observe
that the window updates are now sent after 32767 bytes. This also
resolves the h2 crate client issue.
### What does this PR do?
- I was trying to understand how SEA bundling works in Bun and noticed
that the size of the `Offsets` struct is hard-coded here to 32. It
should use the computed size to be future-proof against changes in the
schema.
### How did you verify your code works?
- I didn't. Can add tests if this is not covered by existing tests.
ChatGPT agreed with me though. =)
## Summary
This PR fixes a memory leak that occurs when `fetch()` is called with a
`ReadableStream` body. The ReadableStream objects were not being
properly released, causing them to accumulate in memory.
## Problem
When using `fetch()` with a ReadableStream body:
```javascript
const stream = new ReadableStream({
start(controller) {
controller.enqueue(new TextEncoder().encode("data"));
controller.close();
}
});
await fetch(url, { method: "POST", body: stream });
```
The ReadableStream objects leak because `FetchTasklet.clearData()` has a
conditional that prevents `detach()` from being called on ReadableStream
request bodies after streaming has started.
### Root Cause
The problematic condition in `clearData()`:
```zig
if (this.request_body != .ReadableStream or this.is_waiting_request_stream_start) {
this.request_body.detach();
}
```
After `startRequestStream()` is called:
- `is_waiting_request_stream_start` becomes `false`
- `request_body` is still `.ReadableStream`
- The condition evaluates to `(false or false) = false`
- `detach()` is skipped → **memory leak**
### Why the Original Code Was Wrong
The original code appears to assume that when `startRequestStream()` is
called, ownership of the Strong reference is transferred to
`ResumableSink`. However, this is incorrect:
1. `startRequestStream()` creates a **new independent** Strong reference
in `ResumableSink` (see `ResumableSink.zig:119`)
2. The FetchTasklet's original reference is **not transferred** - it
becomes redundant
3. Strong references in Bun are independent - calling `deinit()` on one
does not affect the other
## Solution
Remove the conditional and always call `detach()`:
```zig
// Always detach request_body regardless of type.
// When request_body is a ReadableStream, startRequestStream() creates
// an independent Strong reference in ResumableSink, so FetchTasklet's
// reference becomes redundant and must be released to avoid leaks.
this.request_body.detach();
```
### Safety Analysis
This change is safe because:
1. **Strong references are independent**: Each Strong reference
maintains its own ref count. Detaching FetchTasklet's reference doesn't
affect ResumableSink's reference
2. **Idempotency**: `detach()` is safe to call on already-detached
references
3. **Timing**: `clearData()` is only called from `deinit()` after
streaming has completed (ref_count = 0)
4. **No UAF risk**: `deinit()` only runs when ref_count reaches 0, which
means all streaming operations have completed
## Test Results
Before fix (with system Bun):
```
Expected: <= 100
Received: 501 (Request objects leaked)
Received: 1002 (ReadableStream objects leaked)
```
After fix:
```
6 pass
0 fail
```
## Test Coverage
Added comprehensive tests in
`test/js/web/fetch/fetch-cyclic-reference.test.ts` covering:
- Response stream leaks with cyclic references
- Streaming response body leaks
- Request body stream leaks with cyclic references
- ReadableStream body leaks (no cyclic reference needed to reproduce)
- Concurrent fetch operations with cyclic references
---------
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Include the Windows module definition file (`symbols.def`) in
`link-metadata.json`, similar to how
`symbols.txt` and `symbols.dyn` are already included for macOS and Linux.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
- Introduced `requestPayer` option in S3-related functions and
structures to handle Requester Pays buckets.
- Updated S3 client methods to accept and propagate the `requestPayer`
flag.
- Enhanced documentation for the `requestPayer` option in the S3 type
definitions.
- Adjusted existing S3 operations to utilize the `requestPayer`
parameter where applicable, ensuring compatibility with AWS S3's
Requester Pays feature.
- Ensured that the new functionality is integrated into multipart
uploads and simple requests.
### What does this PR do?
This change allows users to specify whether they are willing to pay for
data transfer costs when accessing objects in Requester Pays buckets,
improving flexibility and compliance with AWS S3's billing model.
This closes #25499
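A hedged usage sketch (the option name comes from this PR's description; its exact placement in the options object may differ, and credentials are assumed to come from the environment):
```ts
import { S3Client } from "bun";

const s3 = new S3Client({
  bucket: "hl-mainnet-evm-blocks",
  requestPayer: true, // acknowledge that the requester pays transfer costs
});

// Without requestPayer: true, a Requester Pays bucket rejects access.
const bytes = await s3.file("0/0/1.rmp.lz4").arrayBuffer();
```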
### How did you verify your code works?
I have added a new test file to verify this functionality, and all my
tests pass.
I also tested this against an actual S3 bucket which can only be
accessed if the requester pays. I can confirm that it's accessible when
`requestPayer` is `true`, and that the default of `false` does not allow
access.
An example bucket is here: s3://hl-mainnet-evm-blocks/0/0/1.rmp.lz4
(my use case is indexing [hyperliquid block
data](https://hyperliquid.gitbook.io/hyperliquid-docs/for-developers/hyperevm/raw-hyperevm-block-data)
which is stored in S3, and I want to use Bun to index it faster)
---------
Co-authored-by: Alistair Smith <hi@alistair.sh>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Ciro Spaciari <ciro.spaciari@gmail.com>
### What does this PR do?
The `sql()` helper now filters out `undefined` values in INSERT
statements instead of converting them to `NULL`. This allows columns
with `DEFAULT` values to use their defaults when `undefined` is passed,
rather than being overridden with `NULL`.
**Before:** `sql({ foo: undefined, id: "123" })` in INSERT would
generate `(foo, id) VALUES (NULL, "123")`, causing NOT NULL constraint
violations even when the column has a DEFAULT.
**After:** `sql({ foo: undefined, id: "123" })` in INSERT generates
`(id) VALUES ("123")`, omitting the undefined column entirely and
letting the database use the DEFAULT value.
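A sketch of the new behavior with Bun's `sql` helper (table and column names are illustrative):
```ts
import { sql } from "bun";

// `foo` has a DEFAULT in the schema. Passing undefined now omits the
// column entirely, so the database default applies instead of NULL.
await sql`INSERT INTO things ${sql({ foo: undefined, id: "123" })}`;
// generates: INSERT INTO things (id) VALUES ('123')
```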
Also fixes a data loss bug in bulk inserts where columns were determined
only from the first item - now all items are checked, so values in later
items aren't silently dropped.
Fixes #25829
### How did you verify your code works?
- Added regression test for #25829 (NOT NULL column with DEFAULT)
- Added tests for bulk insert with mixed undefined patterns which is the
data loss scenario
## Summary
- Fix CMake JSON parsing error when Buildkite API returns commit
messages with newlines
CMake's `file(READ ...)` reads the file with literal newlines intact,
which breaks `string(JSON ...)` when those newlines occur inside string
values (e.g., commit messages from the Buildkite API).
Use `file(STRINGS ...)` to read line-by-line, then join the lines with
the `\n` escape sequence to produce valid JSON escape sequences while
avoiding literal newlines.
## Test plan
- Verify CMake configure succeeds when Buildkite build has commit
messages with newlines
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix O(n²) performance bug in JSON mode IPC when receiving large
messages that arrive in chunks
- Add `JsonIncomingBuffer` wrapper that tracks newline positions to
avoid re-scanning
- Each byte is now scanned exactly once (on arrival or when preceding
message is consumed)
## Problem
When data arrives in chunks in JSON mode, `decodeIPCMessage` was calling
`indexOfChar(data, '\n')` on the ENTIRE accumulated buffer every time.
For a 10MB message arriving in 160 chunks of 64KB:
- Chunk 1: scan 64KB
- Chunk 2: scan 128KB
- Chunk 3: scan 192KB
- ...
- Chunk 160: scan 10MB
Total: ~800MB scanned for one 10MB message.
## Solution
Introduced a `JsonIncomingBuffer` struct that:
1. Tracks `newline_pos: ?u32` - position of known upcoming newline (if
any)
2. On `append(bytes)`: Only scans new chunk for `\n` if no position is
cached
3. On `consume(bytes)`: Updates or re-scans as needed after message
processing
This ensures O(n) scanning instead of O(n²).
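A TypeScript model of the idea (the real implementation is the Zig `JsonIncomingBuffer`; this sketch models only the newline caching):
```ts
class JsonLineBuffer {
  private data = Buffer.alloc(0);
  private newlinePos: number | null = null; // cached position of the next '\n'

  append(bytes: Buffer): void {
    const base = this.data.length;
    this.data = Buffer.concat([this.data, bytes]);
    // Scan only the new chunk, and only when no newline is already cached.
    if (this.newlinePos === null) {
      const i = bytes.indexOf(0x0a);
      if (i !== -1) this.newlinePos = base + i;
    }
  }

  // Returns the next complete line (without '\n'), or null if incomplete.
  next(): Buffer | null {
    if (this.newlinePos === null) return null;
    const line = this.data.subarray(0, this.newlinePos);
    this.data = this.data.subarray(this.newlinePos + 1);
    // Bytes after a cached newline were never scanned at append time,
    // so scan the remainder exactly once now.
    const i = this.data.indexOf(0x0a);
    this.newlinePos = i === -1 ? null : i;
    return line;
  }
}
```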
## Test plan
- [x] `bun run zig:check-all` passes (all platforms compile)
- [x] `bun bd test test/js/bun/spawn/spawn.ipc.test.ts` - 4 tests pass
- [x] `bun bd test test/js/node/child_process/child_process_ipc.test.js`
- 1 test pass
- [x] `bun bd test test/js/bun/spawn/bun-ipc-inherit.test.ts` - 1 test
pass
- [x] `bun bd test test/js/bun/spawn/spawn.ipc.bun-node.test.ts` - 1
test pass
- [x] `bun bd test test/js/bun/spawn/spawn.ipc.node-bun.test.ts` - 1
test pass
- [x] `bun bd test
test/js/node/child_process/child_process_ipc_large_disconnect.test.js` -
1 test pass
- [x] Manual verification with `child-process-send-cb-more.js` (32KB
messages)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
Optimize `Buffer.indexOf` and `Buffer.includes` by replacing
`std::search` with Highway SIMD-optimized functions:
- **Single-byte search**: `highway_index_of_char` - SIMD-accelerated
character search
- **Multi-byte search**: `highway_memmem` - SIMD-accelerated substring
search
These Highway functions are already used throughout Bun's codebase for
fast string searching (URL parsing, JS lexer, etc.) and provide
significant speedups for pattern matching in large buffers.
### Changes
```cpp
// Before: std::search (scalar)
auto it = std::search(haystack.begin(), haystack.end(), needle.begin(), needle.end());
// After: SIMD-optimized
if (valueLength == 1) {
size_t result = highway_index_of_char(haystackPtr, haystackLen, valuePtr[0]);
} else {
void* result = highway_memmem(haystackPtr, haystackLen, valuePtr, valueLength);
}
```
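A rough way to eyeball the effect from JavaScript (a hypothetical micro-benchmark sketch; numbers vary by machine and are not from this PR):
```ts
// Worst case for a scalar search: the needle sits at the very end.
const haystack = Buffer.alloc(16 * 1024 * 1024, "a");
haystack.write("needle", haystack.length - 6);

const start = Bun.nanoseconds();
for (let i = 0; i < 100; i++) haystack.indexOf("needle");
console.log(`${(Bun.nanoseconds() - start) / 1e6} ms for 100 searches`);
```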
## Test plan
- [x] Debug build compiles successfully
- [x] Basic functionality verified (`indexOf`, `includes` with single
and multi-byte patterns)
- [ ] Existing Buffer tests pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Fixes #25716
Adds support for a `reactFastRefresh: boolean` option in the `Bun.build`
JavaScript API, matching the existing `--react-fast-refresh` CLI flag.
```ts
const result = await Bun.build({
reactFastRefresh: true,
entrypoints: ["src/App.tsx"],
});
```
When enabled, the bundler adds React Fast Refresh transform code
(`$RefreshReg$`, `$RefreshSig$`) to the output.
## Summary
Apply the same optimization technique from PR #25717 (Response.json) to
other APIs that use JSON.stringify internally:
- **IPC message serialization** (`ipc.zig`) - used for inter-process
communication
- **console.log with %j format** (`ConsoleObject.zig`) - commonly used
for debugging
- **PostgreSQL JSON/JSONB types** (`PostgresRequest.zig`) - database
operations
- **MySQL JSON type** (`MySQLTypes.zig`) - database operations
- **Jest %j/%o format specifiers** (`jest.zig`) - test output formatting
- **Transpiler tsconfig/macros** (`JSTranspiler.zig`) - build
configuration
### Root Cause
When calling `JSONStringify(globalObject, value, 0)`, the space
parameter `0` becomes `jsNumber(0)`, which is NOT `undefined`. This
causes JSC's FastStringifier (SIMD-optimized) to bail out:
```cpp
// In WebKit's JSONObject.cpp FastStringifier::stringify()
if (!space.isUndefined()) {
logOutcome("space"_s);
return { }; // Bail out to slow path
}
```
Using `jsonStringifyFast`, which passes `jsUndefined()`, triggers the
fast path.
### Expected Performance Improvement
Based on PR #25717 results, these changes should provide ~3x speedup for
JSON serialization in the affected APIs.
## Test plan
- [x] Debug build compiles successfully
- [x] Basic functionality verified (IPC, console.log %j, Response.json)
- [x] Existing tests pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Add maximum decompressed message size limit to WebSocket client
deflate handling
- Add test coverage for decompression limits
## Test plan
- Run `bun test
test/js/web/websocket/websocket-permessage-deflate-edge-cases.test.ts`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Closes #8254
Fixes a data corruption bug in `Bun.write()` where files larger than 2GB
would have chunks skipped resulting in corrupted output with missing
data.
The `doWriteLoop` had an issue where it would effectively apply the
offset twice for every 2GB chunk:
- it first sliced the buffer by `total_written`:
`remain = remain[@min(this.total_written, remain.len)..]`
- it would then also increment `bytes_blob.offset`:
`this.bytes_blob.offset += @truncate(wrote)`

but because `sharedView()` already applies the blob offset (`slice_ = slice_[this.offset..]`), the offset ended up doubled.
In a local reproduction writing a 16GB file with each 2GB chunk filled with incrementing values `[1, 2, 3, 4, 5, 6, 7, 8]`, the buggy version produced `[1, 3, 5, 7, …]`, skipping every other chunk.
The fix is to simply remove the redundant manual offset and rely only on `total_written` to track write progress.
### What does this PR do?
Enables `CHECK_REF_COUNTED_LIFECYCLE` in WebKit
(https://github.com/oven-sh/WebKit/pull/121)
See also
a978fae619
#### `CHECK_REF_COUNTED_LIFECYCLE`?
A compile-time macro that enables lifecycle validation for
reference-counted objects in debug builds.
**Definition**
```cpp
#if ASSERT_ENABLED || ENABLE(SECURITY_ASSERTIONS)
#define CHECK_REF_COUNTED_LIFECYCLE 1
#else
#define CHECK_REF_COUNTED_LIFECYCLE 0
#endif
```
**Purpose**
Detects three categories of bugs:
1. Missing adoption - objects stored in `RefPtr` without using `adoptRef()`
2. Ref during destruction - `ref()` called while the destructor is running
(causes dangling pointers)
3. Thread safety violations - unsafe ref/deref across threads
**Implementation**
When enabled, `RefCountDebugger` adds two tracking flags:
- `m_deletionHasBegun` - set when the destructor starts
- `m_adoptionIsRequired` - cleared when `adoptRef()` is called
These flags are checked on every `ref()`/`deref()` call, with assertions
failing on violations.
**Motivation**
Refactored debug code into a separate `RefCountDebugger` class to:
- Improve readability of core refcount logic
- Eliminate duplicate code across `RefCounted`, `ThreadSafeRefCounted`, etc.
- Simplify adding new refcount classes
**Overhead**
Zero in release builds - the flags and checks are compiled out entirely.
### How did you verify your code works?
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Fix performance regression where `Response.json()` was 2-3x slower
than `JSON.stringify() + new Response()`
- Root cause: The existing code called `JSC::JSONStringify` with
`indent=0`, which internally passes `jsNumber(0)` as the space
parameter. This bypasses WebKit's FastStringifier optimization.
- Fix: Add a new `jsonStringifyFast` binding that passes `jsUndefined()`
for the space parameter, triggering JSC's FastStringifier
(SIMD-optimized) code path.
## Root Cause Analysis
In WebKit's `JSONObject.cpp`, the `stringify()` function has this logic:
```cpp
static NEVER_INLINE String stringify(JSGlobalObject& globalObject, JSValue value, JSValue replacer, JSValue space)
{
// ...
if (String result = FastStringifier<Latin1Character, BufferMode::StaticBuffer>::stringify(globalObject, value, replacer, space, failureReason); !result.isNull())
return result;
// Falls back to slow Stringifier...
}
```
And `FastStringifier::stringify()` checks:
```cpp
if (!space.isUndefined()) {
logOutcome("space"_s);
return { }; // Bail out to slow path
}
```
So when we called `JSONStringify(globalObject, value, (unsigned)0)`, it
converted to `jsNumber(0)` which is NOT `undefined`, causing
FastStringifier to bail out.
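A simple way to reproduce the comparison (a sketch; the payload shape is an assumption, while the numbers below are the PR's measurements):
```ts
const payload = { items: Array.from({ length: 100 }, (_, i) => ({ i, name: `item-${i}` })) };

console.time("Response.json()");
for (let i = 0; i < 100_000; i++) Response.json(payload);
console.timeEnd("Response.json()");

console.time("JSON.stringify() + Response()");
for (let i = 0; i < 100_000; i++) new Response(JSON.stringify(payload));
console.timeEnd("JSON.stringify() + Response()");
```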
## Performance Results
### Before (3.5x slower than manual approach)
```
Response.json(): 2415ms
JSON.stringify() + Response(): 689ms
Ratio: 3.50x
```
### After (parity with manual approach)
```
Response.json(): ~700ms
JSON.stringify() + Response(): ~700ms
Ratio: ~1.09x
```
## Test plan
- [x] Existing `Response.json()` tests pass
(`test/regression/issue/21257.test.ts`)
- [x] Response tests pass (`test/js/web/fetch/response.test.ts`)
- [x] Manual verification that output is correct for various JSON inputs
Fixes #25693

🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Sosuke Suzuki <sosuke@bun.com>
## What does this PR do?
Switch `Bun.hash.crc32` to use `zlib`'s CRC32 implementation. Bun
already links `zlib`, which provides highly optimized,
hardware-accelerated CRC32. Because `zlib.crc32` takes a 32-bit length,
chunk large inputs to avoid truncation/regressions on buffers >4GB.
Note: This was tried before (PR #12164 by Jarred), which switched CRC32
to zlib for speed. This proposal keeps that approach and adds explicit
chunking to avoid the >4GB length pitfall.
**Problem:** `Bun.hash.crc32` is a significant outlier in
microbenchmarks compared to other hash functions (about 21x slower than
`zlib.crc32` in a 1MB test on M1).
**Root cause:** `Bun.hash.crc32` uses Zig's `std.hash.Crc32`
implementation, which is software-only and does not leverage hardware
acceleration (e.g., `PCLMULQDQ` on x86 or `CRC32` instructions on ARM).
**Implementation:**
https://github.com/oven-sh/bun/blob/main/src/bun.js/api/HashObject.zig
```zig
pub const crc32 = hashWrap(struct {
pub fn hash(seed: u32, bytes: []const u8) u32 {
// zlib takes a 32-bit length, so chunk large inputs to avoid truncation.
var crc: u64 = seed;
var offset: usize = 0;
while (offset < bytes.len) {
const remaining = bytes.len - offset;
const max_len: usize = std.math.maxInt(u32);
const chunk_len: u32 = if (remaining > max_len) @intCast(max_len) else @intCast(remaining);
crc = bun.zlib.crc32(crc, bytes.ptr + offset, chunk_len);
offset += chunk_len;
}
return @intCast(crc);
}
});
```
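A quick cross-check sketch (assumes `node:zlib` exposes `crc32` in Bun, as recent Node releases do):
```ts
import { crc32 } from "node:zlib";

const data = Buffer.alloc(1024 * 1024, 7);
// Both implementations should agree on the checksum.
console.log(Bun.hash.crc32(data) === crc32(data)); // expect: true
```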
### How did you verify your code works?
**Benchmark (1MB payload):**
- **Before:** Bun 1.3.5 `Bun.hash.crc32` = 2,644,444 ns/op vs
`zlib.crc32` = 124,324 ns/op (~21x slower)
- **After (local bun-debug):** `Bun.hash.crc32` = 360,591 ns/op vs
`zlib.crc32` = 359,069 ns/op (~1.0x), results match
## Test environment
- **Machine:** MacBook Pro 13" (M1, 2020)
- **OS:** macOS 15.7.3
- **Baseline Bun:** 1.3.5
- **After Bun:** local `bun-debug` (build/debug)
## Summary
- Reject null bytes in command-line arguments passed to `Bun.spawn` and
`Bun.spawnSync`
- Reject null bytes in environment variable keys and values
- Reject null bytes in shell (`$`) template literal arguments
This prevents null byte injection attacks (CWE-158) where null bytes in
strings could cause unintended truncation when passed to the OS,
potentially allowing attackers to bypass file extension validation or
create files with unexpected names.
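The checks the tests add look roughly like this (a sketch; the exact error type and message are assumptions):
```ts
import { expect, test } from "bun:test";

test("spawn rejects null bytes in argv", () => {
  // A null byte would otherwise truncate the argument at the OS boundary.
  expect(() => Bun.spawnSync({ cmd: ["echo", "payload\0.txt"] })).toThrow();
});
```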
## Test plan
- [x] Added tests in `test/js/bun/spawn/null-byte-injection.test.ts`
- [x] Tests pass with debug build: `bun bd test
test/js/bun/spawn/null-byte-injection.test.ts`
- [x] Tests fail with system Bun (confirming the fix works)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fixes dead code elimination producing invalid syntax like `{ ...a, x:
}` when simplifying empty objects in spread contexts
- The issue was that `simplifyUnusedExpr` and `joinAllWithCommaCallback`
could return `E.Missing` instead of `null` to indicate "no side effects"
- Added checks to return `null` when the result is `E.Missing`
Fixes #25609
## Test plan
- [x] Added regression test that fails on v1.3.5 and passes with fix
- [x] `bun bd test test/regression/issue/25609.test.ts` passes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Remove sccache support entirely, use ccache only
- Missing ccache no longer fails the build (just skips caching)
- Remove S3 distributed cache support
## Changes
- Remove `cmake/tools/SetupSccache.cmake` and S3 distributed cache
support
- Simplify `CMakeLists.txt` to only use ccache
- Update `SetupCcache.cmake` to not fail when ccache is missing
- Replace sccache with ccache in bootstrap scripts (sh, ps1)
- Update `.buildkite/Dockerfile` to install ccache instead of sccache
- Update `flake.nix` and `shell.nix` to use ccache
- Update documentation (CONTRIBUTING.md, contributing.mdx,
building-windows.mdx)
- Remove `scripts/build-cache/` directory (was only for sccache S3
access)
## Test plan
- [x] Build completes successfully with `bun bd`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
- Add a test that is broken before the code changes, and fix a previous
test by making the script in the dependency take a bit of time to
execute. Without the `setTimeout` in the tests, race conditions meant it
always succeeded. I tried adding a test combining both tests, with
dependencies `dep0` and `larger-than-8-char`, but with the same timeout
it succeeds.
- Fix the use case added, by using the correct buffer for
`Dependency.name`; otherwise it gets garbage when the package name is
longer than 8 characters. This should fix #12203
### How did you verify your code works?
Undid the code changes to verify the new test fails, then confirmed it
passes again with the changes applied.
- `WTFMove` → `WTF::move` / `std::move`: replaced the `WTFMove()` macro
with `WTF::move()` for WTF types, `std::move()` for std types
- `SortedArrayMap` removed: replaced with if-else chains in
EventFactory.cpp, JSCryptoKeyUsage.cpp
- `Wasm::Memory::create` signature changed: removed the VM parameter
- URLPattern allocation: changed from `WTF_MAKE_ISO_ALLOCATED` to
`WTF_MAKE_TZONE_ALLOCATED`
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Migrate Cursor rules to Claude Code skills format
- Add 4 new skills for development guidance:
- `writing-dev-server-tests`: HMR/dev server test guidance
- `implementing-jsc-classes-cpp`: C++ JSC class implementation
- `implementing-jsc-classes-zig`: Zig JSC bindings generator
- `writing-bundler-tests`: bundler test guidance with itBundled
- Remove all `.cursor/rules/` files
## Test plan
- [x] Skills follow Claude Code skill authoring guidelines
- [x] Each skill has proper YAML frontmatter with name and description
- [x] Skills are concise and actionable
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>