Added the required root.h include at the top of V8StackTraceIterator.cpp
following Bun's standard pattern for C++ source files.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Fixed a potential integer underflow bug when extracting the function name
from stack trace lines. The previous code used `openingParentheses - 1`
without checking if `openingParentheses` was 0, which could cause an
unsigned integer underflow.
Changes:
- Check if openingParentheses is 0 before subtracting
- Only trim trailing space if the character is actually a space
- Use safe substring bounds to prevent out-of-bounds reads
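For illustration, a minimal TypeScript sketch of the bounds-safe
extraction (the actual fix is in C++; the function and variable names
here are hypothetical):
```ts
// Hypothetical sketch: extract a function name from "name (file:line:col)".
function functionName(line: string): string {
  const open = line.indexOf("(");
  if (open <= 0) return ""; // no "(", or nothing before it: avoids underflow
  let end = open;
  if (line[end - 1] === " ") end -= 1; // only trim if it really is a space
  return line.slice(0, end); // slice clamps bounds: no out-of-range read
}
```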
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Extracted the V8StackTraceIterator class from bindings.cpp into
dedicated V8StackTraceIterator.h and V8StackTraceIterator.cpp files.
Changes:
- Created V8StackTraceIterator.h with class declaration
- Created V8StackTraceIterator.cpp with implementation
- Moved class into Bun namespace
- Updated bindings.cpp to include the new header
- Removed class definition and helper function from bindings.cpp
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
### What does this PR do?
Fixes the crash handler failing to capture and display stack traces on
Linux ARM64 systems.
**Before:**
```
============================================================
panic(main thread): cast causes pointer to be null
```
No stack trace shown.
**After:**
```
============================================================
panic(main thread): cast causes pointer to be null
bun.js.api.FFIObject.Reader.u8
/workspace/bun/src/bun.js/api/FFIObject.zig:67:41
bun.js.jsc.host_fn.toJSHostCall__anon_2545765
/workspace/bun/src/bun.js/jsc/host_fn.zig:93:5
```
Full stack trace with source locations.
#### Root Cause
- Zig's `std.debug.captureStackTrace` uses `StackIterator.init()` which
falls back to frame pointer-based unwinding when no context is provided
- Frame pointer-based unwinding doesn't work reliably on ARM64, even
with `-fno-omit-frame-pointer` enabled
- This resulted in 0 frames being captured (`trace.index == 0`)
#### Changes
1. **Use glibc's backtrace() on Linux**: On Linux with glibc (not musl),
always use glibc's `backtrace()` function instead of Zig's
StackIterator. glibc's implementation properly uses DWARF unwinding
information from `.eh_frame` sections.
2. **Skip crash handler frames**: After capturing with `backtrace()`,
find the desired `begin_addr` in the trace (within a 128-byte
tolerance) and filter out crash-handler internal frames for cleaner
output. If `begin_addr` is not found, use the complete backtrace.
3. **Preserve existing behavior**:
- Non-debug builds: Use WTF printer (fast, no external deps)
- Debug builds: Fall through to llvm-symbolizer (detailed source info)
### How did you verify your code works?
Reproduced the crash:
```bash
bun-debug --print 'Bun.FFI.read.u8(0)'
```
Verified that:
- ✅ Stack traces now appear on Linux ARM64 with proper source locations
- ✅ Crash handler frames are properly filtered out
- ✅ llvm-symbolizer integration works for debug builds
- ✅ WTF printer is used for release builds
- ✅ When begin_addr is not found, complete backtrace is used
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## What does this PR do?
Fixes a race condition where multiple threads could attempt to
initialize JavaScriptCore concurrently when the bundler's thread pool
processes files with macros.
Fixes #23540
## How did you verify your code works?
Reproduced the segfault with the Brisa project build and verified the
fix resolves it:
```bash
git clone https://github.com/brisa-build/brisa
cd brisa
bun install
bun run build
```
Before the fix: Segmentation fault with assertion failure
After the fix: Build proceeds without crashing
## Root Cause
The previous implementation used a simple boolean flag `has_loaded_jsc`
without synchronization. When multiple bundler threads tried to execute
macros simultaneously, they could race through the initialization check
before `JSC::initialize()` finished finalizing options on another
thread.
This caused crashes with:
```
ASSERTION FAILED: g_jscConfig.options.allowUnfinalizedAccess || g_jscConfig.options.isFinalized
vendor/WebKit/Source/JavaScriptCore/runtime/Options.h(146) : static OptionsStorage::Bool &JSC::Options::forceTrapAwareStackChecks()
```
## The Fix
Replace the boolean flag with `std::call_once`, which provides:
- Thread-safe initialization
- Guaranteed exactly-once execution
- Proper memory barriers to ensure visibility across threads
The initialization code is now wrapped in a lambda passed to
`std::call_once`, capturing the necessary parameters (`evalMode`,
`envp`, `envc`, `onCrash`).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Adds `BUN_WATCHER_TRACE` environment variable that logs all file watcher
events to a JSON file for debugging. When set, the watcher appends
detailed event information to the specified file path.
## Motivation
Debugging watch-related issues (especially with `bun --watch` and `bun
--hot`) can be difficult without visibility into what the watcher is
actually seeing. This feature provides detailed trace logs showing
exactly which files are being watched and what events are triggered.
## Implementation
- **Isolated module** (`src/watcher/WatcherTrace.zig`) - All trace logic
in separate file
- **No locking needed** - Watcher runs on its own thread, no mutex
required
- **Append-only mode** - Traces persist across multiple runs for easier
debugging
- **Silent errors** - Won't break functionality if trace file can't be
created
- **JSON format** - Easy to parse and analyze
## Usage
```bash
BUN_WATCHER_TRACE=/tmp/watch.log bun --watch script.js
BUN_WATCHER_TRACE=/tmp/hot.log bun --hot server.ts
```
## JSON Output Format
Each line is a JSON object with:
```json
{
  "timestamp": 1760280923269,
  "index": 0,
  "path": "/path/to/watched/file.js",
  "delete": false,
  "write": true,
  "rename": false,
  "metadata": false,
  "move_to": false,
  "changed_files": ["script.js"]
}
```
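Since the file is newline-delimited JSON (one object per line, as
described above), a small sketch for analyzing a trace is shown below;
the `/tmp/watch.log` path is just the example from the usage section:
```ts
import { readFileSync } from "node:fs";

// Parse a BUN_WATCHER_TRACE file: one JSON object per line.
const events = readFileSync("/tmp/watch.log", "utf8")
  .split("\n")
  .filter(line => line.trim().length > 0)
  .map(line => JSON.parse(line));

// Example: print every path that received a write event.
for (const event of events) {
  if (event.write) console.log(event.timestamp, event.path);
}
```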
## Testing
All tests use stdout streaming to wait for actual reloads (no
sleeps/timeouts):
- Tests with `--watch` flag
- Tests with `fs.watch` API
- Tests that trace file appends across multiple runs
- Tests validation of JSON format and event details
```
✅ 4 pass
❌ 0 fail
📊 52 expect() calls
```
## Files Changed
- `src/Watcher.zig` - Minimal integration with WatcherTrace module
- `src/watcher/WatcherTrace.zig` - New isolated trace implementation
- `src/watcher/KEventWatcher.zig` - Calls writeTraceEvents before
onFileUpdate
- `src/watcher/INotifyWatcher.zig` - Calls writeTraceEvents before
onFileUpdate
- `src/watcher/WindowsWatcher.zig` - Calls writeTraceEvents before
onFileUpdate
- `test/cli/watch/watcher-trace.test.ts` - Comprehensive tests
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
This PR implements support for `localAddress` and `localPort` options in
TCP connections, allowing users to bind outgoing connections to a
specific local IP address and port.
This addresses issue #6888 and implements Node.js-compatible behavior
for these options.
## Changes
### C Layer (uSockets)
- **`bsd.c`**: Modified `bsd_create_connect_socket()` to accept a
`local_addr` parameter and call `bind()` before `connect()` when a local
address is specified
- **`context.c`**: Updated `us_socket_context_connect()` and
`start_connections()` to parse and pass local address parameters through
the connection flow
- **`libusockets.h`**: Updated public API signatures to include
`local_host` and `local_port` parameters
- **`internal.h`**: Added `local_host` and `local_port` fields to
`us_connecting_socket_t` structure
- **`openssl.c`**: Updated SSL connection function to match the new
signature
### Zig Layer
- **`SocketContext.zig`**: Updated `connect()` method to accept and pass
through `local_host` and `local_port` parameters
- **`socket.zig`**: Modified `connectAnon()` to handle local address
binding, including IPv6 bracket removal and proper memory management
- **`Handlers.zig`**: Added `localAddress` and `localPort` fields to
`SocketConfig` and implemented parsing from JavaScript options
- **`Listener.zig`**: Updated connection structures to store and pass
local binding information
- **`socket.zig` (bun.js/api/bun)**: Modified `doConnect()` to extract
and pass local address options
- Updated all other call sites (HTTP, MySQL, PostgreSQL, Valkey) to pass
`null, 0` for backward compatibility
### JavaScript Layer
- **`net.ts`**: Enabled `localAddress` and `localPort` support by
passing these options to `doConnect()` and removing TODO comments
### Tests
- **`06888-localaddress.test.ts`**: Added comprehensive tests covering:
- IPv4 local address binding
- IPv4 local address and port binding
- IPv6 local address binding (loopback)
- Backward compatibility (connections without local address)
## Test Results
All tests pass successfully:
```
✓ TCP socket can bind to localAddress - IPv4
✓ TCP socket can bind to localAddress and localPort - IPv4
✓ TCP socket can bind to localAddress - IPv6 loopback
✓ TCP socket without localAddress works normally
4 pass, 0 fail
```
## API Usage
```typescript
import net from "net";
// Connect with a specific local address
const client = net.createConnection({
  host: "example.com",
  port: 80,
  localAddress: "192.168.1.100", // Bind to this local IP
  localPort: 0, // Let system assign port (optional)
});
```
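Continuing the example above, the Node-compatible socket exposes the
bound local endpoint once connected, which is a quick way to confirm
the binding took effect:
```typescript
client.on("connect", () => {
  // Reports the bound local endpoint: "192.168.1.100" and the
  // system-assigned port (since localPort was 0).
  console.log(client.localAddress, client.localPort);
});
```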
## Implementation Details
The implementation follows the same flow as Node.js:
1. JavaScript options are parsed in `Handlers.zig`
2. Local address/port are stored in the connection configuration
3. The Zig layer processes and passes them to the C layer
4. The C layer parses the local address and calls `bind()` before
`connect()`
5. Both IPv4 and IPv6 addresses are supported
Memory management is handled properly throughout the stack, with
appropriate allocation/deallocation at each layer.
Closes #6888
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Makes sure strings are double-quoted when they start with flow
indicators and `:`.
Fixes #23502
### How did you verify your code works?
Added tests for each indicator in flow and block context
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Adds missing error handling for directory-entry errors.
The code was missing a check for `.err`.
### How did you verify your code works?
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Fixes #23474
## Summary
When `request.cookies.set()` is called before `server.upgrade()`, the
cookies are now properly included in the WebSocket upgrade response
headers.
## Problem
Previously, cookies set on the request via `req.cookies.set()` were only
written for regular HTTP responses but were ignored during WebSocket
upgrades. Users had to manually pass cookies via the `headers` option:
```js
server.upgrade(req, {
  headers: {
    "Set-Cookie": `SessionId=${sessionId}`,
  },
});
```
## Solution
Modified `src/bun.js/api/server.zig` to check for and write cookies to
the WebSocket upgrade response after the "101 Switching Protocols"
status is set but before the actual upgrade is performed.
The fix handles both cases:
- When `upgrade()` is called without custom headers
- When `upgrade()` is called with custom headers
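A minimal sketch of the now-working pattern (assuming a `routes`
handler, where `req.cookies` is available; the route path and cookie
name are illustrative):
```ts
import { serve } from "bun";

const server = serve({
  port: 3000,
  routes: {
    "/ws": (req, server) => {
      // Cookies set here are now written to the 101 response headers.
      req.cookies.set("SessionId", crypto.randomUUID());
      if (server.upgrade(req)) return;
      return new Response("Upgrade failed", { status: 400 });
    },
  },
  websocket: {
    message(ws, msg) {
      ws.send(msg);
    },
  },
});
```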
## Testing
Added comprehensive regression tests in
`test/regression/issue/23474.test.ts` that:
- Verify cookies are set in the upgrade response without custom headers
- Verify cookies are set in the upgrade response with custom headers
- Use `Promise.withResolvers()` for efficient async handling (no
arbitrary timeouts)
Tests confirmed:
- ❌ Fail with system bun v1.2.23 (without fix)
- ✅ Pass with debug build v1.3.0 (with fix)
## Manual verification
```bash
curl -i -H "Connection: Upgrade" -H "Upgrade: websocket" \
-H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
-H "Sec-WebSocket-Version: 13" \
http://localhost:3000/ws
```
Response now includes:
```
HTTP/1.1 101 Switching Protocols
Set-Cookie: test=123; Path=/; SameSite=Lax
Upgrade: websocket
Connection: Upgrade
...
```
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
This PR fixes incorrect WriteBarrier initialization patterns throughout
the Bun codebase where `.set()` or `.setEarlyValue()` was being called
in the constructor body or in `finishCreation()`. According to
JavaScriptCore's `WriteBarrier.h`, WriteBarriers that are initialized
during construction should use the `WriteBarrierEarlyInit` constructor
in the initializer list to avoid triggering unnecessary write barriers.
## Changes
The following classes were updated to properly initialize WriteBarrier
fields:
1. **InternalModuleRegistry** - Initialize internal fields in
constructor using `setWithoutWriteBarrier()` instead of calling `.set()`
in `finishCreation()`
2. **AsyncContextFrame** - Pass callback and context to constructor and
use `WriteBarrierEarlyInit`
3. **JSCommonJSModule** - Pass id, filename, dirname to constructor and
use `WriteBarrierEarlyInit`
4. **JSMockImplementation** - Pass initial values to constructor and use
`WriteBarrierEarlyInit`
5. **JSConnectionsList** - Pass connection sets to constructor and use
`WriteBarrierEarlyInit`
6. **JSBunRequest** - Pass params to constructor and initialize both
`m_params` and `m_cookies` using `WriteBarrierEarlyInit`
7. **JSNodeHTTPServerSocket** - Initialize `currentResponseObject` using
`WriteBarrierEarlyInit` instead of calling `setEarlyValue()`
## Why This Matters
From JavaScriptCore's `WriteBarrier.h`:
```cpp
enum WriteBarrierEarlyInitTag { WriteBarrierEarlyInit };
// Constructor for early initialization during object construction
WriteBarrier(T* value, WriteBarrierEarlyInitTag)
{
    this->setWithoutWriteBarrier(value);
}
```
Using `WriteBarrierEarlyInit` during construction:
- Avoids triggering write barriers when they're not needed
- Is the correct pattern for initializing WriteBarriers before the
object is fully constructed
- Aligns with JavaScriptCore best practices
## Testing
- ✅ Full debug build completes successfully
- ✅ Basic functionality tested (CommonJS modules, HTTP requests, Node
HTTP servers)
- ✅ No regressions observed
## Note on generate-classes.ts
The code generator (`generate-classes.ts`) does not need updates because
generated classes intentionally leave WriteBarrier fields (callbacks,
cached fields, values) uninitialized. They start with
default-constructed WriteBarriers and are populated later by Zig code
via setter functions, which is the correct pattern for those fields.
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Fixes test\regression\issue\23316-long-path-spawn.test.ts
The problem was that ``await Bun.write(join(deepPath, "test.js"),
`console.log("hello");`);`` was failing because the path was too long,
but it failed before refConcurrently was called and still called
unrefConcurrently after failing. So when the subprocess spawned, it
didn't ref.
---------
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
### What does this PR do?
Fixes `bun -p "process.stderr.write('Hello' +
String.fromCharCode(0xd800))"`.
Also fixes potential index out of bounds if there are many invalid
sequences.
This also affects `TextEncoder`.
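For reference, a small check of the expected WHATWG behavior: a lone
surrogate cannot be encoded as UTF-8 and must become U+FFFD (bytes
`0xEF 0xBF 0xBD`) instead of crashing the writer:
```ts
const lone = "Hello" + String.fromCharCode(0xd800);

// TextEncoder replaces the unpaired surrogate with U+FFFD.
const bytes = new TextEncoder().encode(lone);
console.log(bytes.slice(-3)); // Uint8Array [ 239, 191, 189 ]

process.stderr.write(lone); // previously crashed under `bun -p`
```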
### How did you verify your code works?
Added tests for edge cases
---------
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
### What does this PR do?
Calls `uncork()` after flushing response headers to ensure data is sent
as soon as possible, improving responsiveness.
This behavior still works correctly even without the explicit `uncork()`
call, due to the deferred uncork logic implemented here:
6e3359dd16/packages/bun-uws/src/Loop.h (L57-L64)
A test already covers this scenario in
`test/js/node/test/parallel/test-http-flush-response-headers.js`.
### How did you verify your code works?
CI
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Fixes #23333, Fixes #13978
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: pfg <pfg@pfg.pw>
Co-authored-by: Zack Radisic <zack@theradisic.com>
## What does this PR do?
Bumps Bun version from 1.2.24 to 1.3.0, marking the start of the 1.3.x
release series.
## Changes
- **`package.json`**: Updated version from `1.2.24` to `1.3.0`
- **`LATEST`**: Updated from `1.2.23` to `1.3.0` (used by installation
scripts)
- **`test/bundler/bundler_bun.test.ts`**: Updated version check to
include `1.3.x` so export conditions tests continue to run
## Verification
✅ Debug build successful showing version `1.3.0-debug`
✅ All platforms compile successfully via `bun run zig:check-all` (49/49
steps)
✅ Bundler tests pass with updated version check
## Additional Notes
- CI workflow Bun versions (e.g., `1.2.3`, `1.2.0` in
`.github/workflows/release.yml`) are intentionally left unchanged -
these are pinned versions used to run the release tooling, not the
version being released
- Docker images use `ARG BUN_VERSION` passed at build time and don't
need updates
- The actual release version comes from git tags via `${{
env.BUN_VERSION }}`
---
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Fixes #23380 - this is a use case for the `--only` flag that I missed.
Adds back the `--only` flag. When running `bun test` on a full test
suite without this flag, a `test.only` runs only that test within its
file, but all tests from other files still run. With this flag, tests
from other files will not run either, as sketched below.
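A sketch of the semantics (hypothetical file and test names):
```ts
// a.test.ts
import { test, expect } from "bun:test";

test.only("focused", () => {
  expect(1 + 1).toBe(2); // runs with or without --only
});

test("sibling in the same file", () => {
  // skipped either way, because a .only exists in this file
});

// b.test.ts
// test("other file", ...) runs under plain `bun test`,
// but is skipped under `bun test --only`.
```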
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: pfg <pfg@pfg.pw>
### What does this PR do?
Makes isolated installs the default install strategy for projects with
workspaces in Bun v1.3.
Also fixes creating patches with `bun patch` and `--linker isolated`.
Fixes #22693
### How did you verify your code works?
Added tests for node_modules renaming and for `bun patch` with
isolated installs.
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Alternative to #15596; this now only impacts `bun run` for the same
cwd dir. It does not affect `bunx` ([even though, according to the
code, it should load
it](7830e15650/src/cli.zig (L2597-L2628))),
and it isn't as fancy as `bun install`, which makes sure to check the
bunfig in the `package.json` dir.
This shouldn't have any performance impact because the file was
already being loaded; it's just loaded earlier now so the `run.bun`
option can be used.
Fixes #11445 (as well as fixes #15484, fixes #15483, fixes #17064)
---------
Co-authored-by: pfg <pfg@pfg.pw>
## Summary
This PR improves the correctness of bin linking by atomically
normalizing `\r\n` to `\n` in shebang lines when linking bins.
### Changes
- **Refactored shebang normalization in `src/install/bin.zig`**:
- Extracted logic into separate `tryNormalizeShebang` function
- Changed from in-place file modification to atomic file replacement
- Reads entire file, creates temporary file with corrected shebang, then
atomically renames
- Properly cleans up temporary files on errors
- **Added test coverage**:
- New test file `test/cli/install/shebang-normalize.test.ts` verifies
CRLF normalization works correctly
- Modified existing test in `bun-link.test.ts` to use Python script with
CRLF shebang
### Why
The previous implementation modified files in-place by seeking to the
`\r` position and overwriting with `\n`. This could potentially corrupt
files if interrupted mid-write. The new atomic approach ensures file
integrity by writing to a temporary file first, then renaming it to
replace the original.
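As an illustration of the write-then-rename pattern (a TypeScript
sketch, not the Zig implementation; `normalizeShebang` and the `.tmp`
suffix are hypothetical):
```ts
import { readFileSync, writeFileSync, renameSync, unlinkSync } from "node:fs";

// Sketch: normalize a CRLF shebang atomically by writing the corrected
// content to a temp file, then renaming it over the original, so readers
// never observe a half-written file.
function normalizeShebang(path: string) {
  const content = readFileSync(path, "utf8");
  const fixed = content.replace(/^(#![^\r\n]*)\r\n/, "$1\n");
  if (fixed === content) return;
  const tmp = path + ".tmp";
  try {
    writeFileSync(tmp, fixed);
    renameSync(tmp, path); // atomic replacement on POSIX filesystems
  } catch (err) {
    try { unlinkSync(tmp); } catch {} // clean up the temp file on error
    throw err;
  }
}
```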
## Test plan
- ✅ `bun bd test test/cli/install/shebang-normalize.test.ts` - passes
- ✅ Verified bins with CRLF shebangs are normalized to LF during linking
- ✅ Code compiles successfully
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
### What does this PR do?
The BeforeOpen code is no longer necessary since we now have the
`setOnSocketUpgraded` callback, and we should NOT convert a websocket
to a response. Also makes sure that no closed socket is passed to
`JSNodeHTTPServerSocket`, and moves `isIdle` into `AsyncSocketData` to
be more reliable (it works for both websocket and normal sockets).
### How did you verify your code works?
CI
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Fixed a race condition where calling `pause()` followed by `resume()` on
`process.stdin` would prevent data from being received, causing the
process to exit immediately instead of listening for input.
## Root Cause
The issue was in the pause/resume event handling logic in
`ProcessObjectInternals.ts`:
1. When `pause()` is called, the "pause" event handler schedules a
`disown()` call for the next tick
2. When `resume()` is called immediately after, it calls `own()` to
acquire a stream reader
3. On the next tick, the scheduled `disown()` from step 1 executes and
incorrectly releases the reader that was just acquired in step 2
This race condition left the stream without a reader, so no data could
be received.
## Solution
Added a `pendingDisown` flag that:
- Gets set to `true` when scheduling a disown operation
- Gets cleared to `false` when `own()` is called (during resume)
- Prevents the scheduled disown from executing if it has been cancelled
by a subsequent `own()` call
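Distilled to its essence, the pattern looks like this (a sketch, not
the actual `ProcessObjectInternals.ts` code; the function names are
illustrative):
```ts
let pendingDisown = false;

function onPause() {
  pendingDisown = true;
  process.nextTick(() => {
    // Skip the disown if a resume() re-acquired the reader in between.
    if (pendingDisown) disownStreamReader();
  });
}

function onResume() {
  pendingDisown = false; // cancels any scheduled disown
  ownStreamReader();
}

declare function ownStreamReader(): void; // acquires the stream reader
declare function disownStreamReader(): void; // releases the stream reader
```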
## Test Plan
- [x] Added regression tests in
`test/regression/issue/stdin-pause-resume.test.ts`
- [x] Verified fix with original reproduction case
- [x] Existing stdin/tty tests still pass
(`tty-readstream-ref-unref.test.ts`,
`tty-reopen-after-stdin-eof.test.ts`)
## Reproduction
Before this fix, the following code would exit immediately:
```ts
process.stdin.on("data", chunk => {
  process.stdout.write(chunk);
});
process.stdin.pause();
process.stdin.resume();
```
After the fix, it correctly waits for and processes input.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Fixes data loss when reading large amounts of data from subprocess pipes
on Windows, a regression introduced by the libuv 1.51.0 upgrade in
commit e3783c244f.
### The Problem
When piping large data through a subprocess on Windows (e.g.,
`process.stdin.pipe(process.stdout)`), Bun randomly loses ~73KB of data
out of 1MB, receiving only ~974KB instead of the full 1048576 bytes.
The subprocess correctly receives all 1MB on stdin, but the parent
process loses data when reading from the subprocess stdout.
### Root Cause Analysis
#### libuv 1.51.0 Change
The libuv 1.51.0 upgrade (commit
[libuv/libuv@727ee723](727ee7237e))
changed Windows pipe reading behavior:
**Before:** libuv would call `PeekNamedPipe` to check available bytes,
then read exactly that amount.
**After:** libuv attempts immediate non-blocking reads (up to 65536
bytes) before falling back to async reads. If less data is available
than requested, it returns what's available and signals `more=0`,
causing the read loop to break.
This optimization introduces **0-byte reads** when data isn't
immediately available, which are delivered to Bun's read callback.
#### The Race Condition
When Bun's `WindowsBufferedReader` called `onRead(.drained)` for these
0-byte reads, it created a race condition. Debug logs clearly show the
issue:
**Error case (log.txt):**
```
Line 79-80: onStreamRead = 0 (drained)
Line 81: filesink closes (stdin closes)
Line 85: onStreamRead = 6024 ← Should be 74468!
Line 89: onStreamRead = -4095 (EOF)
```
**Success case (success.log.txt):**
```
Line 79-80: onStreamRead = 0 (drained)
Line 81: filesink closes (stdin closes)
Line 85: onStreamRead = 74468 ← Full chunk!
Line 89-90: onStreamRead = 0 (drained)
Line 91: onStreamRead = 6024
Line 95: onStreamRead = -4095 (EOF)
```
When stdin closes while a 0-byte drained read is pending, the next read
returns truncated data (6024 bytes instead of 74468 bytes).
### The Fix
Two changes to `WindowsBufferedReader` in `src/io/PipeReader.zig`:
#### 1. Ignore 0-byte reads (lines 937-940)
Don't call `onRead(.drained)` for 0-byte reads. Just return and let
libuv queue the next read. This prevents the race condition that causes
truncated reads.
```zig
0 => {
    // With libuv 1.51.0+, calling onRead(.drained) here causes a race condition
    // where subsequent reads return truncated data. Just ignore 0-byte reads.
    return;
},
```
#### 2. Defer `has_inflight_read` flag clearing (lines 827-839)
Clear the flag **after** the read callback completes, not before. This
prevents libuv from starting a new overlapped read operation while we're
still processing the current data buffer, which could cause memory
corruption per the libuv commit message:
> "Starting a new read after uv_read_cb returns causes memory corruption
on the OVERLAPPED read_req if uv_read_stop+uv_read_start was called
during the callback"
```zig
const result = onReadChunkFn(this.parent, buf, hasMore);
// Clear has_inflight_read after the callback completes
this.flags.has_inflight_read = false;
return result;
```
### How to Test
Run the modified test in
`test/js/bun/spawn/spawn-stdin-readable-stream.test.ts`:
```js
test("ReadableStream with very large chunked data", async () => {
const chunkSize = 64 * 1024; // 64KB chunks
const numChunks = 16; // 1MB total
const chunk = Buffer.alloc(chunkSize, "x");
const stream = new ReadableStream({
pull(controller) {
if (pushedChunks < numChunks) {
controller.enqueue(chunk);
pushedChunks++;
} else {
controller.close();
}
},
});
await using proc = spawn({
cmd: [bunExe(), "-e", `
let length = 0;
process.stdin.on('data', (data) => length += data.length);
process.once('beforeExit', () => console.error(length));
process.stdin.pipe(process.stdout)
`],
stdin: stream,
stdout: "pipe",
env: bunEnv,
});
const text = await proc.stdout.text();
expect(text.length).toBe(chunkSize * numChunks); // Should be 1048576
});
```
**Before fix:** Randomly fails with ~974KB instead of 1MB
**After fix:** Consistently passes with full 1MB
Run ~100 times to verify the race condition is fixed.
### Related Issues
This may also fix #23071 (Windows scripts hanging), though that issue
needs separate verification.
### Why Draft?
Marking as draft for Windows testing by the team. The fix is based on
detailed debug log analysis showing the exact race condition, but needs
verification on Windows CI.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
When a snapshot is created in CI without `--update-snapshots`, the
error message now displays the received value that was about to be
snapshotted. This helps developers understand what value triggered the
error.
## Changes
- Modified the `SnapshotCreationNotAllowedInCI` error message in
`src/bun.js/test/expect.zig` to include the received value using the
same formatting pattern as other expect error messages
## Before
```
Snapshot creation is not allowed in CI environments unless --update-snapshots is used
If this is not a CI environment, set the environment variable CI=false to force allow.
```
## After
```
Snapshot creation is not allowed in CI environments unless --update-snapshots is used
If this is not a CI environment, set the environment variable CI=false to force allow.
Received: <formatted value>
```
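For example, a test like the following (hypothetical) now reports the
object it tried to snapshot:
```ts
import { test, expect } from "bun:test";

test("new snapshot in CI", () => {
  // In CI without --update-snapshots, this fails, and the error now
  // includes the formatted { id: 1, name: "widget" } value.
  expect({ id: 1, name: "widget" }).toMatchSnapshot();
});
```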
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Breaking changes:
- bun:test: disallow creating snapshots or using `.only()` in CI
  - For users: hopefully this should only reveal existing bugs in
    tests, not cause failures.
- general: enable calling unhandled rejection handlers for
  `ErrorBuilder.reject()`
  - For users: this might reveal some unhandled rejections that were
    not visible before.
### What does this PR do?
This PR upgrades the `react-shadcn` template:
- Upgrades to the new Tailwind v4 styles and components
- Updates the example components to use the new ones
- Removes the unused form component
- Fixes some a11y issues with the example component
### How did you verify your code works?
- Ran `bun build` to test if the template builds with no errors.
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Fixes #23206
When using `test.each` with object syntax and `$variable` interpolation,
string values were being quoted (e.g., `"apple"` instead of `apple`).
This didn't match the behavior of `%s` formatting or Jest's behavior.
## Changes
- Modified `formatLabel` in `src/bun.js/test/jest.zig` to check if the
value is a primitive string and use `toString()` instead of the
formatter with `quote_strings=true`
- Added regression test in `test/regression/issue/23206.test.ts`
## Example
**Before:**
```
test.each([
  { name: "apple" },
  { name: "banana" }
])("fruit #%# is $name", fruit => {
  // Test names were:
  // "fruit #0 is "apple""
  // "fruit #1 is "banana""
});
```
**After:**
```
test.each([
  { name: "apple" },
  { name: "banana" }
])("fruit #%# is $name", fruit => {
  // Test names are now:
  // "fruit #0 is apple"
  // "fruit #1 is banana"
});
```
## Test plan
- [x] Added regression test that verifies both `%s` and `$name` syntax
produce consistent output
- [x] Tested with `AGENT=0` - all tests pass
- [x] Verified other primitive types (numbers, booleans) still format
correctly
- [x] Verified complex objects still use proper formatting
This matches Jest's behavior after their fix:
https://github.com/jestjs/jest/issues/7689
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: pfg <pfg@pfg.pw>
### What does this PR do?
Fixes#23314 where `zlib.zstdCompress()` created data that caused an
out-of-memory error when decompressed with `Bun.zstdDecompressSync()`.
#### 1. `zlib.zstdCompress()` now sets `pledgedSrcSize`
The async convenience method now automatically sets the `pledgedSrcSize`
option to the input buffer size. This ensures the compressed frame
includes the content size in the header, making sync and async
compression produce identical output.
**Node.js compatibility**: `pledgedSrcSize` is a documented Node.js
option:
-
[`vendor/node/doc/api/zlib.md:754-758`](https://github.com/oven-sh/bun/blob/main/vendor/node/doc/api/zlib.md#L754-L758)
-
[`vendor/node/lib/zlib.js:893`](https://github.com/oven-sh/bun/blob/main/vendor/node/lib/zlib.js#L893)
-
[`vendor/node/src/node_zlib.cc:890-904`](https://github.com/oven-sh/bun/blob/main/vendor/node/src/node_zlib.cc#L890-L904)
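For callers where the fix cannot infer the size (e.g., streaming
input), the same documented option can be passed explicitly. A small
sketch of doing so by hand:
```typescript
import zlib from "node:zlib";

const input = Buffer.from("hello world");

// Pledging the source size embeds the content size in the zstd frame
// header, so decompressors can pre-allocate instead of guessing.
zlib.zstdCompress(input, { pledgedSrcSize: input.byteLength }, (err, compressed) => {
  if (err) throw err;
  console.log(Bun.zstdDecompressSync(compressed).toString()); // "hello world"
});
```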
#### 2. Added `bun.zstd.decompressAlloc()` - centralized safe
decompression
Created a new function in `src/deps/zstd.zig` that handles decompression
in one place with automatic safety features:
- **Handles unknown content sizes**: Automatically switches to streaming
decompression when the zstd frame doesn't include content size (e.g.,
from streams without `pledgedSrcSize`)
- **16MB safety limit**: For security, if the reported decompressed size
exceeds 16MB, streaming decompression is used instead of blindly
trusting the header
- **Fast path for small files**: Still uses efficient pre-allocation for
files < 16MB with known sizes
This centralized fix automatically protects:
- `Bun.zstdDecompressSync()` / `Bun.zstdDecompress()`
- `StandaloneModuleGraph` source map decompression
- Any other code using `bun.zstd` decompression
### How did you verify your code works?
**Before:**
```typescript
const input = "hello world";
// Async compression
const compressed = await new Promise((resolve, reject) => {
  zlib.zstdCompress(input, (err, result) => {
    if (err) reject(err);
    else resolve(result);
  });
});
// This would fail with "Out of memory"
const decompressed = Bun.zstdDecompressSync(compressed);
```
**Error**: `RangeError: Out of memory` (tried to allocate UINT64_MAX
bytes)
**After:**
```typescript
const input = "hello world";
// Async compression (now includes content size)
const compressed = await new Promise((resolve, reject) => {
zlib.zstdCompress(input, (err, result) => {
if (err) reject(err);
else resolve(result);
});
});
// ✅ Works! Falls back to streaming decompression if needed
const decompressed = Bun.zstdDecompressSync(compressed);
console.log(decompressed.toString()); // "hello world"
```
**Tests:**
- ✅ All existing tests pass
- ✅ New regression tests for async/sync compression compatibility
(`test/regression/issue/23314/zstd-async-compress.test.ts`)
- ✅ Test for large (>16MB) decompression using streaming
(`test/regression/issue/23314/zstd-large-decompression.test.ts`)
- ✅ Test for various input sizes and types
(`test/regression/issue/23314/zstd-large-input.test.ts`)
**Security:**
The 16MB safety limit protects against malicious zstd frames that claim
huge decompressed sizes in the header, preventing potential OOM attacks.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Fixes #12095
Manually confirmed the fix for this case, but it would be better to
have an automated test comparing the default reporter output with the
lcov reporter output.
output.
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: pfg <pfg@pfg.pw>
### What does this PR do?
Matches Node.js behavior.
Fixes #20975
### How did you verify your code works?
Verified manually and added a test
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>