## Summary
This PR fixes a memory leak that occurs when `fetch()` is called with a
`ReadableStream` body. The ReadableStream objects were not being
properly released, causing them to accumulate in memory.
## Problem
When using `fetch()` with a ReadableStream body:
```javascript
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("data"));
    controller.close();
  },
});
await fetch(url, { method: "POST", body: stream });
```
The ReadableStream objects leak because `FetchTasklet.clearData()` has a
conditional that prevents `detach()` from being called on ReadableStream
request bodies after streaming has started.
### Root Cause
The problematic condition in `clearData()`:
```zig
if (this.request_body != .ReadableStream or this.is_waiting_request_stream_start) {
this.request_body.detach();
}
```
After `startRequestStream()` is called:
- `is_waiting_request_stream_start` becomes `false`
- `request_body` is still `.ReadableStream`
- The condition evaluates to `(false or false) = false`
- `detach()` is skipped → **memory leak**
### Why the Original Code Was Wrong
The original code appears to assume that when `startRequestStream()` is
called, ownership of the Strong reference is transferred to
`ResumableSink`. However, this is incorrect:
1. `startRequestStream()` creates a **new independent** Strong reference
in `ResumableSink` (see `ResumableSink.zig:119`)
2. The FetchTasklet's original reference is **not transferred** - it
becomes redundant
3. Strong references in Bun are independent - calling `deinit()` on one
does not affect the other
## Solution
Remove the conditional and always call `detach()`:
```zig
// Always detach request_body regardless of type.
// When request_body is a ReadableStream, startRequestStream() creates
// an independent Strong reference in ResumableSink, so FetchTasklet's
// reference becomes redundant and must be released to avoid leaks.
this.request_body.detach();
```
### Safety Analysis
This change is safe because:
1. **Strong references are independent**: Each Strong reference
maintains its own ref count. Detaching FetchTasklet's reference doesn't
affect ResumableSink's reference
2. **Idempotency**: `detach()` is safe to call on already-detached
references
3. **Timing**: `clearData()` is only called from `deinit()` after
streaming has completed (ref_count = 0)
4. **No UAF risk**: `deinit()` only runs when ref_count reaches 0, which
means all streaming operations have completed
## Test Results
Before fix (with system Bun):
```
Expected: <= 100
Received: 501 (Request objects leaked)
Received: 1002 (ReadableStream objects leaked)
```
After fix:
```
6 pass
0 fail
```
## Test Coverage
Added comprehensive tests in
`test/js/web/fetch/fetch-cyclic-reference.test.ts` covering:
- Response stream leaks with cyclic references
- Streaming response body leaks
- Request body stream leaks with cyclic references
- ReadableStream body leaks (no cyclic reference needed to reproduce)
- Concurrent fetch operations with cyclic references
---------
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Include the Windows module definition file (`symbols.def`) in
`link-metadata.json`, similar to how `symbols.txt` and `symbols.dyn` are
already included for macOS and Linux.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
- Introduced `requestPayer` option in S3-related functions and
structures to handle Requester Pays buckets.
- Updated S3 client methods to accept and propagate the `requestPayer`
flag.
- Enhanced documentation for the `requestPayer` option in the S3 type
definitions.
- Adjusted existing S3 operations to utilize the `requestPayer`
parameter where applicable, ensuring compatibility with AWS S3's
Requester Pays feature.
- Ensured that the new functionality is integrated into multipart
uploads and simple requests.
### What does this PR do?
This change allows users to specify whether they are willing to pay for
data transfer costs when accessing objects in Requester Pays buckets,
improving flexibility and compliance with AWS S3's billing model.
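For illustration, a rough usage sketch; the option placement follows this description, and the region and credentials are assumptions:
```ts
// Hedged sketch: reading from a Requester Pays bucket with the new option.
import { S3Client } from "bun";

const client = new S3Client({
  bucket: "hl-mainnet-evm-blocks", // the example bucket mentioned below
  region: "us-east-1",             // assumption; use the bucket's actual region
});

// Opt in to paying the data-transfer costs for this request.
const file = client.file("0/0/1.rmp.lz4", { requestPayer: true });
const bytes = await file.arrayBuffer();
```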
This closes #25499
### How did you verify your code works?
I have added a new test file to verify this functionality, and all my
tests pass.
I also tested this against an actual S3 bucket which can only be
accessed if the requester pays. I can confirm that it's accessible when
`requestPayer` is `true`, and that the default of `false` does not allow
access.
An example bucket is here: s3://hl-mainnet-evm-blocks/0/0/1.rmp.lz4
(my use case is indexing [hyperliquid block
data](https://hyperliquid.gitbook.io/hyperliquid-docs/for-developers/hyperevm/raw-hyperevm-block-data)
which is stored in S3, and I want to use Bun to index it faster)
---------
Co-authored-by: Alistair Smith <hi@alistair.sh>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Ciro Spaciari <ciro.spaciari@gmail.com>
### What does this PR do?
The `sql()` helper now filters out `undefined` values in INSERT
statements instead of converting them to `NULL`. This allows columns
with `DEFAULT` values to use their defaults when `undefined` is passed,
rather than being overridden with `NULL`.
**Before:** `sql({ foo: undefined, id: "123" })` in INSERT would
generate `(foo, id) VALUES (NULL, "123")`, causing NOT NULL constraint
violations even when the column has a DEFAULT.
**After:** `sql({ foo: undefined, id: "123" })` in INSERT generates
`(id) VALUES ("123")`, omitting the undefined column entirely and
letting the database use the DEFAULT value.
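A minimal sketch of the new behavior, assuming a `users` table whose `foo` column has a `DEFAULT`:
```ts
import { sql } from "bun";

const row = { id: "123", foo: undefined };

// Before: INSERT INTO users (foo, id) VALUES (NULL, '123')  -- NOT NULL violation
// After:  INSERT INTO users (id) VALUES ('123')             -- foo falls back to its DEFAULT
await sql`INSERT INTO users ${sql(row)}`;
```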
Also fixes a data loss bug in bulk inserts where columns were determined
only from the first item - now all items are checked, so values in later
items aren't silently dropped.
Fixes #25829
### How did you verify your code works?
- Added regression test for #25829 (NOT NULL column with DEFAULT)
- Added tests for bulk inserts with mixed `undefined` patterns, which is
the data loss scenario
## Summary
- Fix CMake JSON parsing error when Buildkite API returns commit
messages with newlines
CMake's `file(READ ...)` reads files with literal newlines, which breaks
`string(JSON ...)` when the JSON contains escape sequences like `\n` in
string values (e.g., commit messages from Buildkite API).
Use `file(STRINGS ...)` to read line-by-line, then join with `\n` to
preserve valid JSON escape sequences while avoiding literal newlines.
## Test plan
- Verify CMake configure succeeds when Buildkite build has commit
messages with newlines
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix O(n²) performance bug in JSON mode IPC when receiving large
messages that arrive in chunks
- Add `JsonIncomingBuffer` wrapper that tracks newline positions to
avoid re-scanning
- Each byte is now scanned exactly once (on arrival or when preceding
message is consumed)
## Problem
When data arrives in chunks in JSON mode, `decodeIPCMessage` was calling
`indexOfChar(data, '\n')` on the ENTIRE accumulated buffer every time.
For a 10MB message arriving in 160 chunks of 64KB:
- Chunk 1: scan 64KB
- Chunk 2: scan 128KB
- Chunk 3: scan 192KB
- ...
- Chunk 160: scan 10MB
Total: ~800MB scanned for one 10MB message.
## Solution
Introduced a `JsonIncomingBuffer` struct that:
1. Tracks `newline_pos: ?u32` - position of known upcoming newline (if
any)
2. On `append(bytes)`: Only scans new chunk for `\n` if no position is
cached
3. On `consume(bytes)`: Updates or re-scans as needed after message
processing
This ensures O(n) scanning instead of O(n²).
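The real implementation is the Zig `JsonIncomingBuffer`; the TypeScript sketch below only illustrates the bookkeeping idea, and the names are illustrative:
```ts
class NewlineTrackingBuffer {
  private data = "";
  private newlinePos: number | null = null; // cached position of the next '\n', if known

  append(chunk: string) {
    // Only scan the newly arrived bytes, and only if no newline is already cached.
    if (this.newlinePos === null) {
      const i = chunk.indexOf("\n");
      if (i !== -1) this.newlinePos = this.data.length + i;
    }
    this.data += chunk;
  }

  /** Pop the next complete message, or return null if none has fully arrived. */
  next(): string | null {
    if (this.newlinePos === null) return null;
    const message = this.data.slice(0, this.newlinePos);
    this.data = this.data.slice(this.newlinePos + 1);
    // Scan the remaining bytes once, now that the preceding message is consumed.
    const i = this.data.indexOf("\n");
    this.newlinePos = i === -1 ? null : i;
    return message;
  }
}
```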
## Test plan
- [x] `bun run zig:check-all` passes (all platforms compile)
- [x] `bun bd test test/js/bun/spawn/spawn.ipc.test.ts` - 4 tests pass
- [x] `bun bd test test/js/node/child_process/child_process_ipc.test.js`
- 1 test pass
- [x] `bun bd test test/js/bun/spawn/bun-ipc-inherit.test.ts` - 1 test
pass
- [x] `bun bd test test/js/bun/spawn/spawn.ipc.bun-node.test.ts` - 1
test pass
- [x] `bun bd test test/js/bun/spawn/spawn.ipc.node-bun.test.ts` - 1
test pass
- [x] `bun bd test
test/js/node/child_process/child_process_ipc_large_disconnect.test.js` -
1 test pass
- [x] Manual verification with `child-process-send-cb-more.js` (32KB
messages)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
Optimize `Buffer.indexOf` and `Buffer.includes` by replacing
`std::search` with Highway SIMD-optimized functions:
- **Single-byte search**: `highway_index_of_char` - SIMD-accelerated
character search
- **Multi-byte search**: `highway_memmem` - SIMD-accelerated substring
search
These Highway functions are already used throughout Bun's codebase for
fast string searching (URL parsing, JS lexer, etc.) and provide
significant speedups for pattern matching in large buffers.
### Changes
```cpp
// Before: std::search (scalar)
auto it = std::search(haystack.begin(), haystack.end(), needle.begin(), needle.end());
// After: SIMD-optimized
if (valueLength == 1) {
size_t result = highway_index_of_char(haystackPtr, haystackLen, valuePtr[0]);
} else {
void* result = highway_memmem(haystackPtr, haystackLen, valuePtr, valueLength);
}
```
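From JavaScript the behavior is unchanged; a quick usage sketch of the affected calls:
```ts
// Usage is unchanged; these calls now use the SIMD-backed search described above.
const haystack = Buffer.alloc(16 * 1024 * 1024, "a");
haystack.write("needle", haystack.length - 6);

haystack.indexOf("n");       // single-byte search → highway_index_of_char
haystack.indexOf("needle");  // multi-byte search  → highway_memmem
haystack.includes("needle"); // includes() shares the same path
```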
## Test plan
- [x] Debug build compiles successfully
- [x] Basic functionality verified (`indexOf`, `includes` with single
and multi-byte patterns)
- [ ] Existing Buffer tests pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Fixes #25716
Adds support for a `reactFastRefresh: boolean` option in the `Bun.build`
JavaScript API, matching the existing `--react-fast-refresh` CLI flag.
```ts
const result = await Bun.build({
reactFastRefresh: true,
entrypoints: ["src/App.tsx"],
});
```
When enabled, the bundler adds React Fast Refresh transform code
(`$RefreshReg$`, `$RefreshSig$`) to the output.
## Summary
Apply the same optimization technique from PR #25717 (Response.json) to
other APIs that use JSON.stringify internally:
- **IPC message serialization** (`ipc.zig`) - used for inter-process
communication
- **console.log with %j format** (`ConsoleObject.zig`) - commonly used
for debugging
- **PostgreSQL JSON/JSONB types** (`PostgresRequest.zig`) - database
operations
- **MySQL JSON type** (`MySQLTypes.zig`) - database operations
- **Jest %j/%o format specifiers** (`jest.zig`) - test output formatting
- **Transpiler tsconfig/macros** (`JSTranspiler.zig`) - build
configuration
### Root Cause
When calling `JSONStringify(globalObject, value, 0)`, the space
parameter `0` becomes `jsNumber(0)`, which is NOT `undefined`. This
causes JSC's FastStringifier (SIMD-optimized) to bail out:
```cpp
// In WebKit's JSONObject.cpp FastStringifier::stringify()
if (!space.isUndefined()) {
logOutcome("space"_s);
return { }; // Bail out to slow path
}
```
Using `jsonStringifyFast` which passes `jsUndefined()` triggers the fast
path.
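The same constraint is visible from JavaScript, which is a handy way to sanity-check the fast-path assumption:
```ts
const value = { hello: "world", n: 42 };

JSON.stringify(value);               // space is undefined → eligible for FastStringifier
JSON.stringify(value, undefined, 0); // space is the number 0 → bails to the slow path
```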
### Expected Performance Improvement
Based on PR #25717 results, these changes should provide ~3x speedup for
JSON serialization in the affected APIs.
## Test plan
- [x] Debug build compiles successfully
- [x] Basic functionality verified (IPC, console.log %j, Response.json)
- [x] Existing tests pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Add maximum decompressed message size limit to WebSocket client
deflate handling
- Add test coverage for decompression limits
## Test plan
- Run `bun test
test/js/web/websocket/websocket-permessage-deflate-edge-cases.test.ts`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Closes #8254
Fixes a data corruption bug in `Bun.write()` where files larger than 2GB
would have chunks skipped resulting in corrupted output with missing
data.
The `doWriteLoop` had an issue where it would essentially end up
applying the offset twice for every 2GB chunk:
- it first sliced the buffer by `total_written`:
`remain = remain[@min(this.total_written, remain.len)..]`
- it would then also increment `bytes_blob.offset`:
`this.bytes_blob.offset += @truncate(wrote)`
but because `sharedView()` already applies the blob offset (`slice_ = slice_[this.offset..]`), the offset ended up being doubled.
In a local reproduction writing a 16GB file with each 2GB chunk filled with incrementing values `[1, 2, 3, 4, 5, 6, 7, 8]`, the buggy version produced: `[1, 3, 5, 7, …]`, skipping every other chunk.
The fix is to simply remove the redundant manual offset and rely only on `total_written` to track write progress.
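A rough sketch of that reproduction (hedged: it needs roughly 16 GB of free memory and a runtime that allows 2 GB typed arrays):
```ts
const GB = 1024 ** 3;
const chunks: Uint8Array[] = [];
for (let i = 1; i <= 8; i++) {
  chunks.push(new Uint8Array(2 * GB).fill(i)); // chunk i is filled with the value i
}
await Bun.write("huge.bin", new Blob(chunks));
// Before the fix, reading the file back showed only chunks 1, 3, 5, 7:
// every other 2 GB chunk was silently skipped.
```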
### What does this PR do?
Enables `CHECK_REF_COUNTED_LIFECYCLE` in WebKit
(https://github.com/oven-sh/WebKit/pull/121)
See also
a978fae619
#### `CHECK_REF_COUNTED_LIFECYCLE`?
A compile-time macro that enables lifecycle validation for
reference-counted objects in debug builds.
**Definition**
```cpp
#if ASSERT_ENABLED || ENABLE(SECURITY_ASSERTIONS)
#define CHECK_REF_COUNTED_LIFECYCLE 1
#else
#define CHECK_REF_COUNTED_LIFECYCLE 0
#endif
```
**Purpose**
Detects three categories of bugs:
1. Missing adoption - Objects stored in `RefPtr` without using `adoptRef()`
2. Ref during destruction - `ref()` called while the destructor is running
(causes dangling pointers)
3. Thread safety violations - Unsafe ref/deref across threads
**Implementation**
When enabled, `RefCountDebugger` adds two tracking flags:
- `m_deletionHasBegun` - Set when the destructor starts
- `m_adoptionIsRequired` - Cleared when `adoptRef()` is called
These flags are checked on every `ref()`/`deref()` call, with assertions
failing on violations.
**Motivation**
Refactored debug code into a separate `RefCountDebugger` class to:
- Improve readability of the core refcount logic
- Eliminate duplicate code across `RefCounted`, `ThreadSafeRefCounted`, etc.
- Simplify adding new refcount classes
**Overhead**
Zero in release builds - the flags and checks are compiled out entirely.
### How did you verify your code works?
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Fix performance regression where `Response.json()` was 2-3x slower
than `JSON.stringify() + new Response()`
- Root cause: The existing code called `JSC::JSONStringify` with
`indent=0`, which internally passes `jsNumber(0)` as the space
parameter. This bypasses WebKit's FastStringifier optimization.
- Fix: Add a new `jsonStringifyFast` binding that passes `jsUndefined()`
for the space parameter, triggering JSC's FastStringifier
(SIMD-optimized) code path.
## Root Cause Analysis
In WebKit's `JSONObject.cpp`, the `stringify()` function has this logic:
```cpp
static NEVER_INLINE String stringify(JSGlobalObject& globalObject, JSValue value, JSValue replacer, JSValue space)
{
// ...
if (String result = FastStringifier<Latin1Character, BufferMode::StaticBuffer>::stringify(globalObject, value, replacer, space, failureReason); !result.isNull())
return result;
// Falls back to slow Stringifier...
}
```
And `FastStringifier::stringify()` checks:
```cpp
if (!space.isUndefined()) {
logOutcome("space"_s);
return { }; // Bail out to slow path
}
```
So when we called `JSONStringify(globalObject, value, (unsigned)0)`, it
converted to `jsNumber(0)` which is NOT `undefined`, causing
FastStringifier to bail out.
## Performance Results
### Before (3.5x slower than manual approach)
```
Response.json(): 2415ms
JSON.stringify() + Response(): 689ms
Ratio: 3.50x
```
### After (parity with manual approach)
```
Response.json(): ~700ms
JSON.stringify() + Response(): ~700ms
Ratio: ~1.09x
```
## Test plan
- [x] Existing `Response.json()` tests pass
(`test/regression/issue/21257.test.ts`)
- [x] Response tests pass (`test/js/web/fetch/response.test.ts`)
- [x] Manual verification that output is correct for various JSON inputs
Fixes #25693
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Sosuke Suzuki <sosuke@bun.com>
## What does this PR do?
Switch `Bun.hash.crc32` to use `zlib`'s CRC32 implementation. Bun
already links `zlib`, which provides highly optimized,
hardware-accelerated CRC32. Because `zlib.crc32` takes a 32-bit length,
chunk large inputs to avoid truncation/regressions on buffers >4GB.
Note: This was tried before (PR #12164 by Jarred), which switched CRC32
to zlib for speed. This proposal keeps that approach and adds explicit
chunking to avoid the >4GB length pitfall.
**Problem:** `Bun.hash.crc32` is a significant outlier in
microbenchmarks compared to other hash functions (about 21x slower than
`zlib.crc32` in a 1MB test on M1).
**Root cause:** `Bun.hash.crc32` uses Zig's `std.hash.Crc32`
implementation, which is software-only and does not leverage hardware
acceleration (e.g., `PCLMULQDQ` on x86 or `CRC32` instructions on ARM).
**Implementation:**
https://github.com/oven-sh/bun/blob/main/src/bun.js/api/HashObject.zig
```zig
pub const crc32 = hashWrap(struct {
pub fn hash(seed: u32, bytes: []const u8) u32 {
// zlib takes a 32-bit length, so chunk large inputs to avoid truncation.
var crc: u64 = seed;
var offset: usize = 0;
while (offset < bytes.len) {
const remaining = bytes.len - offset;
const max_len: usize = std.math.maxInt(u32);
const chunk_len: u32 = if (remaining > max_len) @intCast(max_len) else @intCast(remaining);
crc = bun.zlib.crc32(crc, bytes.ptr + offset, chunk_len);
offset += chunk_len;
}
return @intCast(crc);
}
});
```
### How did you verify your code works?
**Benchmark (1MB payload):**
- **Before:** Bun 1.3.5 `Bun.hash.crc32` = 2,644,444 ns/op vs
`zlib.crc32` = 124,324 ns/op (~21x slower)
- **After (local bun-debug):** `Bun.hash.crc32` = 360,591 ns/op vs
`zlib.crc32` = 359,069 ns/op (~1.0x), results match
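A rough sketch of the benchmark shape behind those numbers (the actual harness isn't included here; `crc32` from `node:zlib` is assumed available, as in the comparison above):
```ts
import { crc32 } from "node:zlib";

const payload = new Uint8Array(1024 * 1024).fill(97); // 1 MB

console.time("Bun.hash.crc32");
for (let i = 0; i < 1000; i++) Bun.hash.crc32(payload);
console.timeEnd("Bun.hash.crc32");

console.time("zlib.crc32");
for (let i = 0; i < 1000; i++) crc32(payload);
console.timeEnd("zlib.crc32");
```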
## Test environment
- **Machine:** MacBook Pro 13" (M1, 2020)
- **OS:** macOS 15.7.3
- **Baseline Bun:** 1.3.5
- **After Bun:** local `bun-debug` (build/debug)
## Summary
- Reject null bytes in command-line arguments passed to `Bun.spawn` and
`Bun.spawnSync`
- Reject null bytes in environment variable keys and values
- Reject null bytes in shell (`$`) template literal arguments
This prevents null byte injection attacks (CWE-158) where null bytes in
strings could cause unintended truncation when passed to the OS,
potentially allowing attackers to bypass file extension validation or
create files with unexpected names.
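A hedged sketch of the new behavior; the exact error type and message are not specified here, only that null bytes are rejected instead of silently truncated:
```ts
try {
  Bun.spawnSync({ cmd: ["touch", "file.txt\0.exe"] }); // could previously create "file.txt"
} catch (err) {
  console.error("argv rejected:", err);
}

try {
  Bun.spawnSync({ cmd: ["env"], env: { SAFE_PATH: "/usr/bin\0/evil" } });
} catch (err) {
  console.error("env rejected:", err);
}
```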
## Test plan
- [x] Added tests in `test/js/bun/spawn/null-byte-injection.test.ts`
- [x] Tests pass with debug build: `bun bd test
test/js/bun/spawn/null-byte-injection.test.ts`
- [x] Tests fail with system Bun (confirming the fix works)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fixes dead code elimination producing invalid syntax like `{ ...a, x:
}` when simplifying empty objects in spread contexts
- The issue was that `simplifyUnusedExpr` and `joinAllWithCommaCallback`
could return `E.Missing` instead of `null` to indicate "no side effects"
- Added checks to return `null` when the result is `E.Missing`
Fixes #25609
## Test plan
- [x] Added regression test that fails on v1.3.5 and passes with fix
- [x] `bun bd test test/regression/issue/25609.test.ts` passes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Remove sccache support entirely, use ccache only
- Missing ccache no longer fails the build (just skips caching)
- Remove S3 distributed cache support
## Changes
- Remove `cmake/tools/SetupSccache.cmake` and S3 distributed cache
support
- Simplify `CMakeLists.txt` to only use ccache
- Update `SetupCcache.cmake` to not fail when ccache is missing
- Replace sccache with ccache in bootstrap scripts (sh, ps1)
- Update `.buildkite/Dockerfile` to install ccache instead of sccache
- Update `flake.nix` and `shell.nix` to use ccache
- Update documentation (CONTRIBUTING.md, contributing.mdx,
building-windows.mdx)
- Remove `scripts/build-cache/` directory (was only for sccache S3
access)
## Test plan
- [x] Build completes successfully with `bun bd`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
- Adds a test that is broken before the code changes, and fixes the
previous test so that the script in the dependency takes a bit of time to
be executed. Without the `setTimeout` in the tests, race conditions mean
they always pass. I tried adding a test combining both cases, with
dependencies `dep0` and `larger-than-8-char`, but if the timeout is the
same it passes.
- Fixes the new use case by using the correct buffer for
`Dependency.name`; otherwise it contains garbage when the package name is
longer than 8 characters. This should fix #12203
### How did you verify your code works?
Undo the changes in the code to verify the new test fails and check it
again after adding the changes in the code.
- `WTFMove` → `WTF::move` / `std::move`: Replaced the `WTFMove()` macro with
the `WTF::move()` function for WTF types and `std::move()` for std types
- `SortedArrayMap` removed: Replaced with if-else chains in
`EventFactory.cpp` and `JSCryptoKeyUsage.cpp`
- `Wasm::Memory::create` signature changed: Removed the VM parameter
- `URLPattern` allocation: Changed from `WTF_MAKE_ISO_ALLOCATED` to
`WTF_MAKE_TZONE_ALLOCATED`
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Migrate Cursor rules to Claude Code skills format
- Add 4 new skills for development guidance:
- `writing-dev-server-tests`: HMR/dev server test guidance
- `implementing-jsc-classes-cpp`: C++ JSC class implementation
- `implementing-jsc-classes-zig`: Zig JSC bindings generator
- `writing-bundler-tests`: bundler test guidance with itBundled
- Remove all `.cursor/rules/` files
## Test plan
- [x] Skills follow Claude Code skill authoring guidelines
- [x] Each skill has proper YAML frontmatter with name and description
- [x] Skills are concise and actionable
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fixed documentation that incorrectly claimed you could use `[env]` as
a TOML section to set environment variables directly
- The `env` option in bunfig.toml only controls whether automatic `.env`
file loading is disabled (via `env = false`)
- Updated to show the correct approaches: using preload scripts or
`.env` files with `--env-file` (a minimal sketch of the preload approach
follows)
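For readers of this summary, a minimal sketch of the preload approach; the file names are illustrative:
```ts
// env.ts — referenced from bunfig.toml via `preload = ["./env.ts"]`.
// Preload scripts run before the entrypoint, so values set here are visible to it.
process.env.MY_FLAG ??= "1";
process.env.API_URL ??= "http://localhost:3000";
```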
## Test plan
- Documentation-only change, no code changes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix LLVM installation on Debian Trixie (13) by using the unstable
repository from apt.llvm.org
The `llvm.sh` script doesn't automatically detect that Debian Trixie
needs to use the unstable repository. This is because trixie's `VERSION`
is `13 (trixie)` rather than `testing`, and apt.llvm.org doesn't have a
dedicated trixie repository.
Without this fix, the LLVM installation falls back to Debian's main
repository packages, which don't include `libclang-rt-19-dev` (the
compiler-rt sanitizer libraries) by default. This causes builds with
ASan (AddressSanitizer) to fail with:
```
ld.lld: error: cannot open /usr/lib/llvm-19/lib/clang/19/lib/x86_64-pc-linux-gnu/libclang_rt.asan.a: No such file or directory
```
This was breaking the [Daily Docker
Build](https://github.com/oven-sh/bun-development-docker-image/actions/runs/20437290601)
in the bun-development-docker-image repo.
## Test plan
- [ ] Wait for the PR CI to pass
- [ ] After merging, the next Daily Docker Build should succeed
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Add `MemfdFlags` enum to replace raw integer flags for `memfd_create`,
providing semantic clarity for different use cases (`executable`,
`non_executable`, `cross_process`)
- Add support for `MFD_EXEC` and `MFD_NOEXEC_SEAL` flags (Linux 6.3+)
with automatic fallback to older kernel flags when `EINVAL` is returned
- Use memfd + `/proc/self/fd/{fd}` path for loading embedded `.node`
files in standalone builds, avoiding disk writes entirely on Linux
## Test plan
- [ ] Verify standalone builds with embedded `.node` files work on Linux
- [ ] Verify fallback works on older kernels (pre-6.3)
- [ ] Verify subprocess stdio memfd still works correctly
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix segmentation fault in `bun create` when using `--no-install` with
a template that has a `bun-create.postinstall` task starting with "bun "
- The bug was caused by unconditionally slicing `argv[2..]` which
created an empty array when `npm_client` was null
- Added check for `npm_client != null` before slicing
## Reproduction
```bash
# Create template with bun-create.postinstall
mkdir -p ~/.bun-create/test-template
echo '{"name":"test","bun-create":{"postinstall":"bun install"}}' > ~/.bun-create/test-template/package.json
# This would crash before the fix
bun create test-template /tmp/my-app --no-install
```
## Test plan
- [x] Verified the reproduction case crashes before the fix
- [x] Verified the reproduction case works after the fix
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Fix out-of-bounds access when parsing `NO_PROXY` environment variable
with empty entries
- Empty entries (e.g., `"localhost, , example.com"`) would cause a panic
when checking if the host starts with a dot
- Skip empty entries after trimming whitespace
fixes BUN-110G
fixes BUN-128V
## Test plan
- [x] Verify `NO_PROXY="localhost, , example.com"` no longer crashes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix use-after-free crash in async zstd compression, scrypt, and
JSTranspiler operations
- When `StringOrBuffer.fromJSMaybeAsync` is called with `is_async=true`,
the buffer's JSValue is now protected from garbage collection
- Previously, the buffer could be GC'd while a worker thread was still
accessing it, causing segfaults in zstd's `HIST_count_simple` and
similar functions
Fixes BUN-167Z
## Changes
- `fromJSMaybeAsync`: Call `protect()` on buffer when `is_async=true`
- `fromJSWithEncodingMaybeAsync`: Same protection for the early return
path
- `Scrypt`: Fix cleanup to use `deinitAndUnprotect()` for async path,
add missing `deinit()` in sync path
- `JSTranspiler`: Use new protection mechanism instead of manual
`protect()`/`unprotect()` calls
- Simplify `createOnJSThread` signatures to not return errors (OOM is
handled internally)
- Update all callers to use renamed/simplified APIs
## Test plan
- [x] Code review of all callsites to verify correct protect/unprotect
pairing
- [ ] Run existing zstd tests
- [ ] Run existing scrypt tests
- [ ] Run existing transpiler tests
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Add periodic memory reclamation for IPC buffers after processing
messages
- Fix missing `deref()` on `bun.String` created from `cmd` property in
`handleIPCMessage`
- Add `reclaimMemory()` function to shrink incoming buffer and send
queue when they exceed 2MB capacity
- Track message count to trigger memory reclamation every 256 messages
The incoming `ByteList` buffer and send queue `ArrayList` would grow but
never shrink, causing memory accumulation during sustained IPC
messaging.
## Test plan
- [x] Added regression tests in
`test/js/bun/spawn/spawn-ipc-memory.test.ts`
- [x] Existing IPC tests pass (`spawn.ipc.test.ts`)
- [x] Existing cluster tests pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Closes #25505. This adjusts the byte length check in
`DataCell.fromBytes` to 12 bytes instead of 16, as zero-dimensional
arrays will have a shorter preamble.
### How did you verify your code works?
Test suite passes, and I've added a new test that fails in the main
branch but passes with this change. The issue only seems to crop up when
a connection is _reused_, which is curious.
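For context, a hedged sketch of the scenario; the query is illustrative, and the key parts are a zero-dimensional (empty) array coming back over a reused connection:
```ts
import { sql } from "bun";

// Warm up the connection, then reuse it for the query that returns an empty array.
await sql`SELECT 1`;
const [row] = await sql`SELECT ARRAY[]::integer[] AS empty`;
console.log(row.empty); // [] — previously this could trip the 16-byte preamble check
```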
## Summary
Fix several memory leaks in the compression libraries:
- **NativeBrotli/NativeZstd reset()** - Each call to `reset()` allocated
a new encoder/decoder without freeing the previous one
- **NativeBrotli/NativeZstd init() error paths** - If `setParams()`
failed after `stream.init()` succeeded, the instance was leaked
- **NativeZstd init()** - If `setPledgedSrcSize()` failed after context
creation, the context was leaked
- **ZlibCompressorArrayList** - After `deflateInit2_()` succeeded, if
`ensureTotalCapacityPrecise()` failed with OOM, zlib internal state was
never freed
- **NativeBrotli close()** - Now sets state to null to prevent potential
double-free (defensive)
- **LibdeflateState** - Added `deinit()` for API consistency
## Test plan
- [x] Added regression test that calls `reset()` 100k times and measures
memory growth
- [x] Test shows memory growth dropped from ~600MB to ~10MB for Brotli
- [x] Verified no double-frees by tracing code paths
- [x] Existing zlib tests pass (except pre-existing timeout in debug
build)
Before fix (system bun 1.3.3):
```
Memory growth after 100000 reset() calls: 624.38 MB (BrotliCompress)
Memory growth after 100000 reset() calls: 540.63 MB (BrotliDecompress)
```
After fix:
```
Memory growth after 100000 reset() calls: 11.84 MB (BrotliCompress)
Memory growth after 100000 reset() calls: 0.16 MB (BrotliDecompress)
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Add missing `UV_UNKNOWN` and `UV_EAI_*` error code mappings to the
`errno()` function in `ReturnCode`
- Fixes panic "integer does not fit in destination type" on Windows when
libuv returns unmapped error codes
- Speculative fix for BUN-131E
## Root Cause
The `errno()` function was missing mappings for `UV_UNKNOWN` (-4094) and
all `UV_EAI_*` address info errors (-3000 to -3014). When libuv returned
these codes, the switch fell through to `else => null`, and the caller
at `sys_uv.zig:317` assumed success and tried to cast the negative
return code to `usize`, causing a panic.
This was triggered in `readFileWithOptions` -> `preadv` when:
- Memory-mapped file operations encounter exceptions (file
modified/truncated by another process, network drive issues)
- Windows returns error codes that libuv cannot map to standard errno
values
## Crash Report
```
Bun v1.3.5 (1e86ceb) on windows x86_64baseline []
panic: integer does not fit in destination type
sys_uv.zig:294: preadv
node_fs.zig:5039: readFileWithOptions
```
## Test plan
- [ ] This fix prevents a panic, converting it to a proper error.
Testing would require triggering `UV_UNKNOWN` from libuv, which is
difficult to do reliably (requires memory-mapped file exceptions or
unusual Windows errors).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>