## Summary
Fixes Clang 19 detection in the Nix flake environment by explicitly
setting the CMake compiler environment variables in the shellHook.
## Problem
When using `nix develop` or `nix print-dev-env`, CMake was unable to
detect the Clang 19 compiler because the `CMAKE_C_COMPILER` and
`CMAKE_CXX_COMPILER` environment variables were not being set, even
though the compiler was available in the environment.
The `shell.nix` file correctly sets these variables (lines 80-87), but
`flake.nix` was missing them.
## Solution
Updated `flake.nix` shellHook to export the same compiler environment
variables as `shell.nix`:
- `CC`, `CXX`, `AR`, `RANLIB`
- `CMAKE_C_COMPILER`, `CMAKE_CXX_COMPILER`, `CMAKE_AR`, `CMAKE_RANLIB`
This ensures consistent compiler detection across both Nix entry points
(`nix develop` with flakes and `nix-shell` with shell.nix).
## Testing
Verified that all compiler variables are now properly set:
```bash
nix develop --accept-flake-config --impure --command bash -c 'echo "CMAKE_C_COMPILER=$CMAKE_C_COMPILER"'
# Output: CMAKE_C_COMPILER=/nix/store/.../clang-wrapper-19.1.7/bin/clang
```
Also tested with the profile workflow:
```bash
nix develop --accept-flake-config --impure --profile ./dev-profile --command true
eval "$(nix print-dev-env ./dev-profile --accept-flake-config --impure)"
echo "CMAKE_C_COMPILER=$CMAKE_C_COMPILER"
# Output: CMAKE_C_COMPILER=/nix/store/.../clang
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
This PR fixes incorrect WriteBarrier initialization patterns throughout
the Bun codebase where `.set()` or `.setEarlyValue()` was being called
in the constructor body or in `finishCreation()`. According to
JavaScriptCore's `WriteBarrier.h`, WriteBarriers that are initialized
during construction should use the `WriteBarrierEarlyInit` constructor
in the initializer list to avoid triggering unnecessary write barriers.
## Changes
The following classes were updated to properly initialize WriteBarrier
fields:
1. **InternalModuleRegistry** - Initialize internal fields in the
constructor using `setWithoutWriteBarrier()` instead of calling `.set()`
in `finishCreation()`
2. **AsyncContextFrame** - Pass callback and context to constructor and
use `WriteBarrierEarlyInit`
3. **JSCommonJSModule** - Pass id, filename, dirname to constructor and
use `WriteBarrierEarlyInit`
4. **JSMockImplementation** - Pass initial values to constructor and use
`WriteBarrierEarlyInit`
5. **JSConnectionsList** - Pass connection sets to constructor and use
`WriteBarrierEarlyInit`
6. **JSBunRequest** - Pass params to constructor and initialize both
`m_params` and `m_cookies` using `WriteBarrierEarlyInit`
7. **JSNodeHTTPServerSocket** - Initialize `currentResponseObject` using
`WriteBarrierEarlyInit` instead of calling `setEarlyValue()`
## Why This Matters
From JavaScriptCore's `WriteBarrier.h`:
```cpp
enum WriteBarrierEarlyInitTag { WriteBarrierEarlyInit };
// Constructor for early initialization during object construction
WriteBarrier(T* value, WriteBarrierEarlyInitTag)
{
    this->setWithoutWriteBarrier(value);
}
```
Using `WriteBarrierEarlyInit` during construction:
- Avoids triggering write barriers when they're not needed
- Is the correct pattern for initializing WriteBarriers before the
object is fully constructed
- Aligns with JavaScriptCore best practices
## Testing
- ✅ Full debug build completes successfully
- ✅ Basic functionality tested (CommonJS modules, HTTP requests, Node
HTTP servers)
- ✅ No regressions observed
## Note on generate-classes.ts
The code generator (`generate-classes.ts`) does not need updates because
generated classes intentionally leave WriteBarrier fields (callbacks,
cached fields, values) uninitialized. They start with
default-constructed WriteBarriers and are populated later by Zig code
via setter functions, which is the correct pattern for those fields.
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Fixes test/regression/issue/23316-long-path-spawn.test.ts
The problem was that ``await Bun.write(join(deepPath, "test.js"),
`console.log("hello");`);`` failed because the path was too long, but it
failed before `refConcurrently` was called and still called
`unrefConcurrently` after failing. So when the subprocess spawned, it
didn't ref.
---------
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
### What does this PR do?
Fixes `bun -p "process.stderr.write('Hello' +
String.fromCharCode(0xd800))"`.
Also fixes a potential index out of bounds when there are many invalid
sequences.
This also affects `TextEncoder`.
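For reference, WHATWG encoding requires lone surrogates to be replaced
with U+FFFD when encoding to UTF-8, so a minimal illustration of the
fixed behavior (not the actual regression test) looks like this:
```ts
const bytes = new TextEncoder().encode("Hello" + String.fromCharCode(0xd800));
// The lone surrogate becomes U+FFFD (0xEF 0xBF 0xBD) instead of producing
// invalid UTF-8 or reading out of bounds.
console.log(bytes); // Uint8Array [72, 101, 108, 108, 111, 239, 191, 189]
```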
### How did you verify your code works?
Added tests for edge cases
---------
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Provides a Nix flake as an alternative to `scripts/bootstrap.sh` for
setting up the Bun development environment.
## What's included:
- **flake.nix**: Full development environment with all dependencies from
bootstrap.sh
  - LLVM 19, CMake 3.30+, Node.js 24, Rust, Go
  - Build tools: ninja, ccache, pkg-config, make
  - Chromium dependencies for Puppeteer testing
  - gdb for core dump debugging
- **shell.nix**: Simple wrapper for `nix-shell` usage
- **cmake/CompilerFlags.cmake**: Nix compatibility fixes
  - Disable zstd debug compression (Nix's LLVM is not built with zstd)
  - Set `_FORTIFY_SOURCE=0` for `-O0` debug builds
  - Downgrade the `_FORTIFY_SOURCE` warning so it is not treated as an error
## Usage:
```bash
nix-shell
export CMAKE_SYSTEM_PROCESSOR=$(uname -m)
bun bd
```
## Verified working:
✅ Successfully compiles Bun debug build
✅ Binary tested: `./build/debug/bun-debug --version` → 1.2.24-debug
✅ All dependencies from bootstrap.sh included
## Advantages:
- Fully isolated (no sudo required)
- 100% reproducible dependency versions
- Fast setup with binary caching
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Calls `uncork()` after flushing response headers to ensure data is sent
as soon as possible, improving responsiveness.
This behavior still works correctly even without the explicit `uncork()`
call, due to the deferred uncork logic implemented here:
6e3359dd16/packages/bun-uws/src/Loop.h (L57-L64)
A test already covers this scenario in
`test/js/node/test/parallel/test-http-flush-response-headers.js`.
### How did you verify your code works?
CI
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
Fixes tests failing because of example.com
### How did you verify your code works?
CI
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Fixes #23333, Fixes #13978
### What does this PR do?
### How did you verify your code works?
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: pfg <pfg@pfg.pw>
Co-authored-by: Zack Radisic <zack@theradisic.com>
## What does this PR do?
Bumps Bun version from 1.2.24 to 1.3.0, marking the start of the 1.3.x
release series.
## Changes
- **`package.json`**: Updated version from `1.2.24` to `1.3.0`
- **`LATEST`**: Updated from `1.2.23` to `1.3.0` (used by installation
scripts)
- **`test/bundler/bundler_bun.test.ts`**: Updated version check to
include `1.3.x` so export conditions tests continue to run
## Verification
✅ Debug build successful showing version `1.3.0-debug`
✅ All platforms compile successfully via `bun run zig:check-all` (49/49
steps)
✅ Bundler tests pass with updated version check
## Additional Notes
- CI workflow Bun versions (e.g., `1.2.3`, `1.2.0` in
`.github/workflows/release.yml`) are intentionally left unchanged; these
are pinned versions used to run the release tooling, not the version
being released
- Docker images use `ARG BUN_VERSION` passed at build time and don't
need updates
- The actual release version comes from git tags via `${{
env.BUN_VERSION }}`
---
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Fixes #23380; this is a use case for the `--only` flag that I missed.
Adds back the `--only` flag. When running `bun test` on a full test
suite without this flag, a `test.only()` runs only that test within its
own file, but all tests from other files still run. With this flag, it
will not run tests from other files.
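For illustration, with a hypothetical file like the one below, `bun test
--only` runs just the focused test and skips everything else, including
tests in other files:
```ts
import { test, expect } from "bun:test";

test.only("focused", () => {
  expect(1 + 1).toBe(2);
});

test("not focused", () => {
  // Without --only: skipped in this file, but tests in other files still run.
  // With --only: nothing outside a .only() runs anywhere in the suite.
});
```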
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: pfg <pfg@pfg.pw>
## Summary
Modernizes `test/bundler/bundler_promiseall_deadcode.test.ts` to use the
`itBundled` test helper instead of manual temp directory creation and
spawning. This makes the test more concise, maintainable, and consistent
with other bundler tests.
## Changes
- Replace `tempDirWithFiles` + manual `Bun.spawn` with `itBundled`
(sketched after this list)
- Use `files` object for test fixtures instead of creating a temp
directory
- Use `onAfterBundle` callback for bundled output assertions
- Use `run.validate` for runtime stderr validation
- Use `run.partialStdout` for stdout verification
- Preserve all original test assertions and behavior
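The migrated shape looks roughly like this (a sketch only: the import
path, fixture contents, and exact option shapes are assumptions, not the
real test's code):
```ts
import { itBundled } from "../expectBundled"; // import path assumed

itBundled("bundler/promiseAllDeadCode", {
  files: {
    // fixture contents assumed
    "/entry.ts": `console.log("expected output");`,
  },
  onAfterBundle(api) {
    // bundled-output assertions (__esm / __promiseAll presence checks) go here
  },
  run: {
    validate: true, // runtime stderr validation; exact shape assumed
    partialStdout: "expected output",
  },
});
```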
## Test Results
All 3 tests pass with identical functional behavior:
```
3 pass
0 fail
2 snapshots, 23 expect() calls
Ran 3 tests across 1 file. [8.95s]
```
## Verification
All original assertions are preserved:
- ✅ Build success validation
- ✅ Bundled output snapshots (updated paths to match itBundled format)
- ✅ `__esm` and `__promiseAll` presence/absence checks
- ✅ Runtime execution validation (exit code 0)
- ✅ Runtime stderr validation (no async syntax errors)
- ✅ Runtime stdout validation (contains expected output)
The test is now more concise (407 insertions vs 514 deletions) while
maintaining full test coverage.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Makes isolated installs the default install strategy for projects with
workspaces in Bun v1.3.
Also fixes creating patches with `bun patch` and `--linker isolated`.
Fixes #22693
### How did you verify your code works?
Added tests for node_modules renaming and for `bun patch` with isolated installs.
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Alternative to #15596 that now only impacts `bun run` for the same cwd.
This does not affect `bunx` ([even though, according to the code, it
should load
it](7830e15650/src/cli.zig (L2597-L2628))),
and it isn't as fancy as `bun install`, which makes sure to check the
bunfig in the `package.json` directory.
This shouldn't have any performance impact because it was already
loading the file; it now just loads it earlier so the `run.bun` option
can be used.
Fixes #11445 (as well as fixes #15484, fixes #15483, fixes #17064)
---------
Co-authored-by: pfg <pfg@pfg.pw>
## Summary
This PR improves the correctness of bin linking by atomically
normalizing `\r\n` to `\n` in shebang lines.
### Changes
- **Refactored shebang normalization in `src/install/bin.zig`**:
- Extracted logic into separate `tryNormalizeShebang` function
- Changed from in-place file modification to atomic file replacement
- Reads the entire file, writes a temporary file with the corrected
shebang, then atomically renames it over the original
- Properly cleans up temporary files on errors
- **Added test coverage**:
- New test file `test/cli/install/shebang-normalize.test.ts` verifies
CRLF normalization works correctly
- Modified an existing test in `bun-link.test.ts` to use a Python script
with a CRLF shebang
### Why
The previous implementation modified files in-place by seeking to the
`\r` position and overwriting with `\n`. This could potentially corrupt
files if interrupted mid-write. The new atomic approach ensures file
integrity by writing to a temporary file first, then renaming it to
replace the original.
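A TypeScript sketch of that atomic-replace pattern (illustrative only;
the real implementation is `tryNormalizeShebang` in
`src/install/bin.zig`):
```ts
import { readFileSync, writeFileSync, renameSync, unlinkSync } from "node:fs";

function normalizeShebang(path: string): void {
  const contents = readFileSync(path, "utf8");
  // Only rewrite when the shebang line ends with CRLF.
  if (!contents.startsWith("#!") || !/^#![^\r\n]*\r\n/.test(contents)) return;
  const fixed = contents.replace(/^(#![^\r\n]*)\r\n/, "$1\n");
  const tmp = path + ".tmp";
  try {
    writeFileSync(tmp, fixed);
    // Atomic on POSIX: readers see the old or the new file, never a partial write.
    renameSync(tmp, path);
  } catch (err) {
    try { unlinkSync(tmp); } catch {} // clean up the temporary file on error
    throw err;
  }
}
```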
## Test plan
- ✅ `bun bd test test/cli/install/shebang-normalize.test.ts` - passes
- ✅ Verified bins with CRLF shebangs are normalized to LF during linking
- ✅ Code compiles successfully
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
### What does this PR do?
The BeforeOpen code is no longer necessary since we now have the
`setOnSocketUpgraded` callback, and we should NOT convert a websocket to
a response. Also makes sure that no closed socket is passed to
`JSNodeHTTPServerSocket`, and moves `isIdle` into `AsyncSocketData` to
be more reliable (works for both websockets and normal sockets).
### How did you verify your code works?
CI
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Fixed a race condition where calling `pause()` followed by `resume()` on
`process.stdin` would prevent data from being received, causing the
process to exit immediately instead of listening for input.
## Root Cause
The issue was in the pause/resume event handling logic in
`ProcessObjectInternals.ts`:
1. When `pause()` is called, the "pause" event handler schedules a
`disown()` call for the next tick
2. When `resume()` is called immediately after, it calls `own()` to
acquire a stream reader
3. On the next tick, the scheduled `disown()` from step 1 executes and
incorrectly releases the reader that was just acquired in step 2
This race condition left the stream without a reader, so no data could
be received.
## Solution
Added a `pendingDisown` flag (sketched below) that:
- Gets set to `true` when scheduling a disown operation
- Gets cleared to `false` when `own()` is called (during resume)
- Prevents the scheduled disown from executing if it has been cancelled
by a subsequent `own()` call
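A minimal sketch of the cancellation pattern (illustrative names, not
the actual `ProcessObjectInternals.ts` code):
```ts
let pendingDisown = false;

function own() {
  pendingDisown = false; // cancels any disown scheduled before this resume()
  // ...acquire the stream reader...
}

function scheduleDisown() {
  pendingDisown = true;
  process.nextTick(() => {
    if (!pendingDisown) return; // own() ran in between; keep the reader
    pendingDisown = false;
    // ...release the stream reader...
  });
}
```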
## Test Plan
- [x] Added regression tests in
`test/regression/issue/stdin-pause-resume.test.ts`
- [x] Verified fix with original reproduction case
- [x] Existing stdin/tty tests still pass
(`tty-readstream-ref-unref.test.ts`,
`tty-reopen-after-stdin-eof.test.ts`)
## Reproduction
Before this fix, the following code would exit immediately:
```ts
process.stdin.on("data", chunk => {
  process.stdout.write(chunk);
});
process.stdin.pause();
process.stdin.resume();
```
After the fix, it correctly waits for and processes input.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Fixes data loss when reading large amounts of data from subprocess pipes
on Windows, a regression introduced by the libuv 1.51.0 upgrade in
commit e3783c244f.
### The Problem
When piping large data through a subprocess on Windows (e.g.,
`process.stdin.pipe(process.stdout)`), Bun randomly loses ~73KB of data
out of 1MB, receiving only ~974KB instead of the full 1048576 bytes.
The subprocess correctly receives all 1MB on stdin, but the parent
process loses data when reading from the subprocess stdout.
### Root Cause Analysis
#### libuv 1.51.0 Change
The libuv 1.51.0 upgrade (commit
[libuv/libuv@727ee723](727ee7237e))
changed Windows pipe reading behavior:
**Before:** libuv would call `PeekNamedPipe` to check available bytes,
then read exactly that amount.
**After:** libuv attempts immediate non-blocking reads (up to 65536
bytes) before falling back to async reads. If less data is available
than requested, it returns what's available and signals `more=0`,
causing the read loop to break.
This optimization introduces **0-byte reads** when data isn't
immediately available, which are delivered to Bun's read callback.
#### The Race Condition
When Bun's `WindowsBufferedReader` called `onRead(.drained)` for these
0-byte reads, it created a race condition. Debug logs clearly show the
issue:
**Error case (log.txt):**
```
Line 79-80: onStreamRead = 0 (drained)
Line 81: filesink closes (stdin closes)
Line 85: onStreamRead = 6024 ← Should be 74468!
Line 89: onStreamRead = -4095 (EOF)
```
**Success case (success.log.txt):**
```
Line 79-80: onStreamRead = 0 (drained)
Line 81: filesink closes (stdin closes)
Line 85: onStreamRead = 74468 ← Full chunk!
Line 89-90: onStreamRead = 0 (drained)
Line 91: onStreamRead = 6024
Line 95: onStreamRead = -4095 (EOF)
```
When stdin closes while a 0-byte drained read is pending, the next read
returns truncated data (6024 bytes instead of 74468 bytes).
### The Fix
Two changes to `WindowsBufferedReader` in `src/io/PipeReader.zig`:
#### 1. Ignore 0-byte reads (lines 937-940)
Don't call `onRead(.drained)` for 0-byte reads. Just return and let
libuv queue the next read. This prevents the race condition that causes
truncated reads.
```zig
0 => {
    // With libuv 1.51.0+, calling onRead(.drained) here causes a race condition
    // where subsequent reads return truncated data. Just ignore 0-byte reads.
    return;
},
```
#### 2. Defer `has_inflight_read` flag clearing (lines 827-839)
Clear the flag **after** the read callback completes, not before. This
prevents libuv from starting a new overlapped read operation while we're
still processing the current data buffer, which could cause memory
corruption per the libuv commit message:
> "Starting a new read after uv_read_cb returns causes memory corruption
on the OVERLAPPED read_req if uv_read_stop+uv_read_start was called
during the callback"
```zig
const result = onReadChunkFn(this.parent, buf, hasMore);
// Clear has_inflight_read after the callback completes
this.flags.has_inflight_read = false;
return result;
```
### How to Test
Run the modified test in
`test/js/bun/spawn/spawn-stdin-readable-stream.test.ts`:
```js
test("ReadableStream with very large chunked data", async () => {
const chunkSize = 64 * 1024; // 64KB chunks
const numChunks = 16; // 1MB total
const chunk = Buffer.alloc(chunkSize, "x");
const stream = new ReadableStream({
pull(controller) {
if (pushedChunks < numChunks) {
controller.enqueue(chunk);
pushedChunks++;
} else {
controller.close();
}
},
});
await using proc = spawn({
cmd: [bunExe(), "-e", `
let length = 0;
process.stdin.on('data', (data) => length += data.length);
process.once('beforeExit', () => console.error(length));
process.stdin.pipe(process.stdout)
`],
stdin: stream,
stdout: "pipe",
env: bunEnv,
});
const text = await proc.stdout.text();
expect(text.length).toBe(chunkSize * numChunks); // Should be 1048576
});
```
**Before fix:** Randomly fails with ~974KB instead of 1MB
**After fix:** Consistently passes with full 1MB
Run ~100 times to verify the race condition is fixed.
### Related Issues
This may also fix #23071 (Windows scripts hanging), though that issue
needs separate verification.
### Why Draft?
Marking as draft for Windows testing by the team. The fix is based on
detailed debug log analysis showing the exact race condition, but needs
verification on Windows CI.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
When a snapshot is created in CI without `--update-snapshots`, the error
message now displays the received value that was being snapshotted. This
helps developers understand what value triggered the error.
## Changes
- Modified the `SnapshotCreationNotAllowedInCI` error message in
`src/bun.js/test/expect.zig` to include the received value using the
same formatting pattern as other expect error messages
## Before
```
Snapshot creation is not allowed in CI environments unless --update-snapshots is used
If this is not a CI environment, set the environment variable CI=false to force allow.
```
## After
```
Snapshot creation is not allowed in CI environments unless --update-snapshots is used
If this is not a CI environment, set the environment variable CI=false to force allow.
Received: <formatted value>
```
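For example, a hypothetical test like this now surfaces the value when
it fails in CI:
```ts
import { test, expect } from "bun:test";

test("new snapshot in CI", () => {
  // With CI=true and no --update-snapshots, this fails with the message
  // above, including `Received: { hello: "world" }`.
  expect({ hello: "world" }).toMatchSnapshot();
});
```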
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Breaking changes:
- bun:test: disallow creating snapshots or using `.only()` in CI
  - for users: hopefully this should only reveal existing bugs in tests,
not cause failures.
- general: enable calling unhandled rejection handlers for
`ErrorBuilder.reject()`
  - for users: this might reveal some unhandled rejections that were not
visible before.
## Summary
- Clone `bunEnv` and delete `AGENTS` property in `beforeAll`
- Replace all `bunEnv` references with `testEnv` in test spawns
- Prevents parent process's `AGENTS` env var from leaking into tests
## Problem
The `claudecode-flag` test was using `bunEnv` directly, which includes
`...process.env`. When running in environments like Claude Code where
`AGENTS` may be set, this variable would leak into the test child
processes and potentially affect test behavior.
## Solution
Created a `testEnv` clone in `beforeAll` that explicitly deletes
`AGENTS`, ensuring consistent test behavior regardless of the parent
process's environment.
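The setup looks roughly like this (a sketch; `bunEnv` comes from the
test harness, and the import path is assumed):
```ts
import { beforeAll } from "bun:test";
import { bunEnv } from "harness";

let testEnv: Record<string, string | undefined>;

beforeAll(() => {
  testEnv = { ...bunEnv };
  delete testEnv.AGENTS; // keep the parent's AGENTS out of spawned children
});
```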
## Test plan
- [x] Test passes without `AGENTS` set
- [x] Test passes with `AGENTS=1` set in parent environment
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
This PR upgrades the `react-shadcn` template:
- Upgrades to the new Tailwind v4 styles and components
- Updates the example components to use the new ones
- Removes the unused form component
- Fixes some a11y issues with the example component
### How did you verify your code works?
- Ran `bun build` to test if the template builds with no errors.
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Fixes #23206
When using `test.each` with object syntax and `$variable` interpolation,
string values were being quoted (e.g., `"apple"` instead of `apple`).
This didn't match `%s` formatting or Jest's behavior.
## Changes
- Modified `formatLabel` in `src/bun.js/test/jest.zig` to check if the
value is a primitive string and use `toString()` instead of the
formatter with `quote_strings=true`
- Added regression test in `test/regression/issue/23206.test.ts`
## Example
**Before:**
```
test.each([
  { name: "apple" },
  { name: "banana" }
])("fruit #%# is $name", fruit => {
  // Test names were:
  // "fruit #0 is "apple""
  // "fruit #1 is "banana""
});
```
**After:**
```
test.each([
  { name: "apple" },
  { name: "banana" }
])("fruit #%# is $name", fruit => {
  // Test names are now:
  // "fruit #0 is apple"
  // "fruit #1 is banana"
});
```
## Test plan
- [x] Added regression test that verifies both `%s` and `$name` syntax
produce consistent output
- [x] Tested with `AGENT=0` - all tests pass
- [x] Verified other primitive types (numbers, booleans) still format
correctly
- [x] Verified complex objects still use proper formatting
This matches Jest's behavior after their fix:
https://github.com/jestjs/jest/issues/7689
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: pfg <pfg@pfg.pw>
### What does this PR do?
Fixes #23314, where `zlib.zstdCompress()` created data that caused an
out-of-memory error when decompressed with `Bun.zstdDecompressSync()`.
#### 1. `zlib.zstdCompress()` now sets `pledgedSrcSize`
The async convenience method now automatically sets the `pledgedSrcSize`
option to the input buffer size. This ensures the compressed frame
includes the content size in the header, making sync and async
compression produce identical output.
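For reference, the option can also be passed explicitly (a minimal
example; the option shape follows the Node.js docs cited below):
```ts
import * as zlib from "node:zlib";

const input = Buffer.from("hello world");
// Pledging the source size up front makes zstd record the content size in
// the frame header, so the decompressor can pre-allocate exactly.
zlib.zstdCompress(input, { pledgedSrcSize: input.byteLength }, (err, compressed) => {
  if (err) throw err;
  console.log(Bun.zstdDecompressSync(compressed).toString()); // "hello world"
});
```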
**Node.js compatibility**: `pledgedSrcSize` is a documented Node.js
option:
-
[`vendor/node/doc/api/zlib.md:754-758`](https://github.com/oven-sh/bun/blob/main/vendor/node/doc/api/zlib.md#L754-L758)
-
[`vendor/node/lib/zlib.js:893`](https://github.com/oven-sh/bun/blob/main/vendor/node/lib/zlib.js#L893)
-
[`vendor/node/src/node_zlib.cc:890-904`](https://github.com/oven-sh/bun/blob/main/vendor/node/src/node_zlib.cc#L890-L904)
#### 2. Added `bun.zstd.decompressAlloc()` - centralized safe
decompression
Created a new function in `src/deps/zstd.zig` that handles decompression
in one place, with automatic safety features (sketched below):
- **Handles unknown content sizes**: Automatically switches to streaming
decompression when the zstd frame doesn't include content size (e.g.,
from streams without `pledgedSrcSize`)
- **16MB safety limit**: For security, if the reported decompressed size
exceeds 16MB, streaming decompression is used instead of blindly
trusting the header
- **Fast path for small files**: Still uses efficient pre-allocation for
files < 16MB with known sizes
This centralized fix automatically protects:
- `Bun.zstdDecompressSync()` / `Bun.zstdDecompress()`
- `StandaloneModuleGraph` source map decompression
- Any other code using `bun.zstd` decompression
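The decision logic, sketched in TypeScript (the real implementation is
Zig in `src/deps/zstd.zig`; the helpers here are stand-ins passed as
parameters so the sketch is self-contained):
```ts
const SAFE_PREALLOC_LIMIT = 16 * 1024 * 1024; // 16MB

function decompressAlloc(
  frame: Uint8Array,
  getFrameContentSize: (f: Uint8Array) => number | null, // null = size unknown
  decompressStreaming: (f: Uint8Array) => Uint8Array,
  decompressPreallocated: (f: Uint8Array, size: number) => Uint8Array,
): Uint8Array {
  const contentSize = getFrameContentSize(frame);
  if (contentSize === null || contentSize > SAFE_PREALLOC_LIMIT) {
    // Unknown or suspiciously large: stream and grow the output instead of
    // trusting a header a malicious frame could have inflated.
    return decompressStreaming(frame);
  }
  // Fast path: pre-allocate exactly contentSize bytes for small known sizes.
  return decompressPreallocated(frame, contentSize);
}
```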
### How did you verify your code works?
**Before:**
```typescript
const input = "hello world";
// Async compression
const compressed = await new Promise((resolve, reject) => {
  zlib.zstdCompress(input, (err, result) => {
    if (err) reject(err);
    else resolve(result);
  });
});
// This would fail with "Out of memory"
const decompressed = Bun.zstdDecompressSync(compressed);
```
**Error**: `RangeError: Out of memory` (tried to allocate UINT64_MAX
bytes)
**After:**
```typescript
const input = "hello world";
// Async compression (now includes content size)
const compressed = await new Promise((resolve, reject) => {
  zlib.zstdCompress(input, (err, result) => {
    if (err) reject(err);
    else resolve(result);
  });
});
// ✅ Works! Falls back to streaming decompression if needed
const decompressed = Bun.zstdDecompressSync(compressed);
console.log(decompressed.toString()); // "hello world"
```
**Tests:**
- ✅ All existing tests pass
- ✅ New regression tests for async/sync compression compatibility
(`test/regression/issue/23314/zstd-async-compress.test.ts`)
- ✅ Test for large (>16MB) decompression using streaming
(`test/regression/issue/23314/zstd-large-decompression.test.ts`)
- ✅ Test for various input sizes and types
(`test/regression/issue/23314/zstd-large-input.test.ts`)
**Security:**
The 16MB safety limit protects against malicious zstd frames that claim
huge decompressed sizes in the header, preventing potential OOM attacks.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Fixes #12095
Manually confirmed to fix the issue, but it would be better to have an
automated test comparing the default reporter's output with the lcov
reporter's output.
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: pfg <pfg@pfg.pw>
### What does this PR do?
Matches Node.js behavior.
Fixes #20975
### How did you verify your code works?
Verified manually and added a test
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>