Commit Graph

13778 Commits

Author SHA1 Message Date
Jarred Sumner
b0a6feca57 Update no-validate-leaksan.txt bun-v1.3.0 2025-10-10 04:35:12 -07:00
robobun
012a2bab92 Fix Clang 19 detection in Nix flake by setting CMAKE compiler variables (#23445)
## Summary

Fixes Clang 19 detection in the Nix flake environment by explicitly
setting CMAKE compiler environment variables in the shellHook.

## Problem

When using `nix develop` or `nix print-dev-env`, CMake was unable to
detect the Clang 19 compiler because the `CMAKE_C_COMPILER` and
`CMAKE_CXX_COMPILER` environment variables were not being set, even
though the compiler was available in the environment.

The `shell.nix` file correctly sets these variables (lines 80-87), but
`flake.nix` was missing them.

## Solution

Updated `flake.nix` shellHook to export the same compiler environment
variables as `shell.nix`:
- `CC`, `CXX`, `AR`, `RANLIB` 
- `CMAKE_C_COMPILER`, `CMAKE_CXX_COMPILER`, `CMAKE_AR`, `CMAKE_RANLIB`

This ensures consistent compiler detection across both Nix entry points
(`nix develop` with flakes and `nix-shell` with shell.nix).

## Testing

Verified that all compiler variables are now properly set:
```bash
nix develop --accept-flake-config --impure --command bash -c 'echo "CMAKE_C_COMPILER=$CMAKE_C_COMPILER"'
# Output: CMAKE_C_COMPILER=/nix/store/.../clang-wrapper-19.1.7/bin/clang
```

Also tested with the profile workflow:
```bash
nix develop --accept-flake-config --impure --profile ./dev-profile --command true
eval "$(nix print-dev-env ./dev-profile --accept-flake-config --impure)"
echo "CMAKE_C_COMPILER=$CMAKE_C_COMPILER"
# Output: CMAKE_C_COMPILER=/nix/store/.../clang
```

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-10 04:11:29 -07:00
robobun
8197a5f7af Fix WriteBarrier initialization to use WriteBarrierEarlyInit (#23435)
## Summary

This PR fixes incorrect WriteBarrier initialization patterns throughout
the Bun codebase where `.set()` or `.setEarlyValue()` was being called
in the constructor body or in `finishCreation()`. According to
JavaScriptCore's `WriteBarrier.h`, WriteBarriers that are initialized
during construction should use the `WriteBarrierEarlyInit` constructor
in the initializer list to avoid triggering unnecessary write barriers.

## Changes

The following classes were updated to properly initialize WriteBarrier
fields:

1. **InternalModuleRegistry** - Initialize internal fields in
constructor using `setWithoutWriteBarrier()` instead of calling `.set()`
in `finishCreation()`
2. **AsyncContextFrame** - Pass callback and context to constructor and
use `WriteBarrierEarlyInit`
3. **JSCommonJSModule** - Pass id, filename, dirname to constructor and
use `WriteBarrierEarlyInit`
4. **JSMockImplementation** - Pass initial values to constructor and use
`WriteBarrierEarlyInit`
5. **JSConnectionsList** - Pass connection sets to constructor and use
`WriteBarrierEarlyInit`
6. **JSBunRequest** - Pass params to constructor and initialize both
`m_params` and `m_cookies` using `WriteBarrierEarlyInit`
7. **JSNodeHTTPServerSocket** - Initialize `currentResponseObject` using
`WriteBarrierEarlyInit` instead of calling `setEarlyValue()`

## Why This Matters

From JavaScriptCore's `WriteBarrier.h`:

```cpp
enum WriteBarrierEarlyInitTag { WriteBarrierEarlyInit };

// Constructor for early initialization during object construction
WriteBarrier(T* value, WriteBarrierEarlyInitTag)
{
    this->setWithoutWriteBarrier(value);
}
```

Using `WriteBarrierEarlyInit` during construction:
- Avoids triggering write barriers when they're not needed
- Is the correct pattern for initializing WriteBarriers before the
object is fully constructed
- Aligns with JavaScriptCore best practices

## Testing

- ✅ Full debug build completes successfully
- ✅ Basic functionality tested (CommonJS modules, HTTP requests, Node HTTP servers)
- ✅ No regressions observed

## Note on generate-classes.ts

The code generator (`generate-classes.ts`) does not need updates because
generated classes intentionally leave WriteBarrier fields (callbacks,
cached fields, values) uninitialized. They start with
default-constructed WriteBarriers and are populated later by Zig code
via setter functions, which is the correct pattern for those fields.

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-10 03:53:04 -07:00
pfg
c50db1dbfb fix unpaired deref when write_file fails due to nametoolong (#23438)
Fixes test\regression\issue\23316-long-path-spawn.test.ts

The problem was that ``await Bun.write(join(deepPath, "test.js"), `console.log("hello");`);`` was failing because the name was too long, but it failed before `refConcurrently` was called and still called `unrefConcurrently` after failing. So when the subprocess spawned, it didn't ref.

---------

Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-10-10 03:48:47 -07:00
Dylan Conway
312a86fd43 fix writing UTF-16 with a trailing unpaired surrogate to process.stdout/stderr (#23444)
### What does this PR do?
Fixes `bun -p "process.stderr.write('Hello' +
String.fromCharCode(0xd800))"`.

Also fixes potential index out of bounds if there are many invalid
sequences.

This also affects `TextEncoder`.
### How did you verify your code works?
Added tests for edge cases.
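
For reference, here is a small illustration of the edge case (not taken from the PR's tests); per the WHATWG encoding rules, a lone surrogate is encoded as the replacement character U+FFFD rather than crashing or producing out-of-bounds reads:

```ts
const input = "Hello" + String.fromCharCode(0xd800); // trailing unpaired surrogate

const bytes = new TextEncoder().encode(input);
console.log(bytes.slice(-3)); // last three bytes: 239, 191, 189 (UTF-8 for U+FFFD)

process.stderr.write(input); // previously failed; now writes "Hello" followed by U+FFFD
```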

---------

Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-10-10 03:48:04 -07:00
robobun
8826b4f5f5 Fix WTFTimer issues with Atomics.waitAsync (#23442)
## Summary

Fixes two critical issues in `WTFTimer` when `Atomics.waitAsync` creates
multiple timer instances.

## Problems

### 1. Use-After-Free in `WTFTimer.fire()`

**Location:** `/workspace/bun/src/bun.js/api/Timer/WTFTimer.zig:70-82`

```zig
pub fn fire(this: *WTFTimer, _: *const bun.timespec, _: *VirtualMachine) EventLoopTimer.Arm {
    this.event_loop_timer.state = .FIRED;
    this.imminent.store(null, .seq_cst);
    this.runWithoutRemoving();  // ← Callback might destroy `this`
    return if (this.repeat)     // ← UAF: accessing freed memory
        .{ .rearm = this.event_loop_timer.next }
    else
        .disarm;
}
```

When `Atomics.waitAsync` creates a `DispatchTimer` with a timeout, the
timer fires and the callback destroys `this`, but we continue to access
it.

### 2. Imminent Pointer Corruption

**Location:** `/workspace/bun/src/bun.js/api/Timer/WTFTimer.zig:36-42`

```zig
pub fn update(this: *WTFTimer, seconds: f64, repeat: bool) void {
    // Multiple WTFTimers unconditionally overwrite the shared imminent pointer
    this.imminent.store(if (seconds == 0) this else null, .seq_cst);
    // ...
}
```

All `WTFTimer` instances share the same
`vm.eventLoop().imminent_gc_timer` atomic pointer. When multiple timers
are created (GC timer + Atomics.waitAsync timers), they stomp on each
other's imminent state.

## Solutions

### 1. UAF Fix

Read `this.repeat` and `this.event_loop_timer.next` **before** calling
`runWithoutRemoving()`:

```zig
const should_repeat = this.repeat;
const next_time = this.event_loop_timer.next;
this.runWithoutRemoving();
return if (should_repeat)
    .{ .rearm = next_time }
else
    .disarm;
```

### 2. Imminent Pointer Fix

Use compare-and-swap to only set imminent if it's null, and only clear
it if this timer was the one that set it:

```zig
if (seconds == 0) {
    _ = this.imminent.cmpxchgStrong(null, this, .seq_cst, .seq_cst);
    return;
} else {
    _ = this.imminent.cmpxchgStrong(this, null, .seq_cst, .seq_cst);
}
```

## Test Plan

Added regression test at
`test/regression/issue/atomics-waitasync-wtftimer-uaf.test.ts`:

```javascript
const buffer = new SharedArrayBuffer(16);
const view = new Int32Array(buffer);
Atomics.store(view, 0, 0);

const result = Atomics.waitAsync(view, 0, 0, 10);
setTimeout(() => {
  console.log("hi");
}, 100);
```

**Before:** Crashes with UAF under ASAN  
**After:** Runs cleanly

All existing atomics tests pass.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-10-10 03:47:38 -07:00
robobun
f65e280521 Add Nix flake for development environment (#23406)
Provides a Nix flake as an alternative to `scripts/bootstrap.sh` for
setting up the Bun development environment.

## What's included:

- **flake.nix**: Full development environment with all dependencies from
bootstrap.sh
  - LLVM 19, CMake 3.30+, Node.js 24, Rust, Go
  - Build tools: ninja, ccache, pkg-config, make
  - Chromium dependencies for Puppeteer testing
  - gdb for core dump debugging

- **shell.nix**: Simple wrapper for `nix-shell` usage

- **cmake/CompilerFlags.cmake**: Nix compatibility fixes
  - Disable zstd debug compression (Nix's LLVM not built with zstd)
  - Set _FORTIFY_SOURCE=0 for -O0 debug builds
  - Downgrade _FORTIFY_SOURCE warning to not error

## Usage:

```bash
nix-shell
export CMAKE_SYSTEM_PROCESSOR=$(uname -m)
bun bd
```

## Verified working:
- ✅ Successfully compiles Bun debug build
- ✅ Binary tested: `./build/debug/bun-debug --version` → 1.2.24-debug
- ✅ All dependencies from bootstrap.sh included

## Advantages:
- Fully isolated (no sudo required)
- 100% reproducible dependency versions  
- Fast setup with binary caching

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-10 02:13:28 -07:00
Jarred Sumner
a686b9fc39 Update package.json 2025-10-09 23:39:37 -07:00
Zack Radisic
a3cf974752 Update bun-plugin-tailwind (#23396)
### What does this PR do?

---------

Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-10-09 23:39:04 -07:00
Jarred Sumner
de366cfe4b Doc updates 2025-10-09 23:37:10 -07:00
Jarred Sumner
98d1a9d110 Add some missing docs 2025-10-09 23:28:03 -07:00
Ciro Spaciari
3395774c8c improve(node:http): uncork after flushing headers to ensure data is sent immediately (#23413)
### What does this PR do?
Calls `uncork()` after flushing response headers to ensure data is sent
as soon as possible, improving responsiveness.
This behavior still works correctly even without the explicit `uncork()`
call, due to the deferred uncork logic implemented here:

6e3359dd16/packages/bun-uws/src/Loop.h (L57-L64)

A test already covers this scenario in
`test/js/node/test/parallel/test-http-flush-response-headers.js`.
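
For context, a minimal `node:http` sketch of the scenario that test exercises (illustrative, not the code under test): the client should observe the headers as soon as they are flushed, well before the body arrives.

```ts
import http from "node:http";

const server = http.createServer((req, res) => {
  res.writeHead(200, { "content-type": "text/plain" });
  res.flushHeaders(); // headers should reach the client immediately...
  setTimeout(() => res.end("body"), 1000); // ...even though the body comes much later
});

server.listen(0);
```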


### How did you verify your code works?
CI

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-10-09 22:56:49 -07:00
robobun
086eb73fe7 deps: update elysia to 1.4.10 (#23422)
## What does this PR do?

Updates the vendored Elysia version from 1.4.6 to 1.4.10.

## Changelog

Compare: https://github.com/elysiajs/elysia/compare/1.4.6...1.4.10

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-09 19:49:33 -07:00
Ciro Spaciari
979b69b673 fix(CI) (#23418)
### What does this PR do?
fix tests failing because of example.com
### How did you verify your code works?
CI

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-10-09 19:11:08 -07:00
Jarred Sumner
b3cfaab07f Fix: after pausing stdin, a subprocess should be able to read from stdin (#23341)
Fixes #23333, Fixes #13978

### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: pfg <pfg@pfg.pw>
Co-authored-by: Zack Radisic <zack@theradisic.com>
2025-10-09 19:04:41 -07:00
Meghan Denny
8574d62da0 Update LATEST 2025-10-09 18:28:48 -07:00
robobun
6e3359dd16 Bump version to 1.3.0 (#23401)
## What does this PR do?

Bumps Bun version from 1.2.24 to 1.3.0, marking the start of the 1.3.x
release series.

## Changes

- **`package.json`**: Updated version from `1.2.24` to `1.3.0`
- **`LATEST`**: Updated from `1.2.23` to `1.3.0` (used by installation
scripts)
- **`test/bundler/bundler_bun.test.ts`**: Updated version check to
include `1.3.x` so export conditions tests continue to run

## Verification

- ✅ Debug build successful showing version `1.3.0-debug`
- ✅ All platforms compile successfully via `bun run zig:check-all` (49/49 steps)
- ✅ Bundler tests pass with updated version check

## Additional Notes

- CI workflow Bun versions (e.g., `1.2.3`, `1.2.0` in
`.github/workflows/release.yml`) are intentionally left unchanged -
these are pinned versions used to run the release tooling, not the
version being released
- Docker images use `ARG BUN_VERSION` passed at build time and don't
need updates
- The actual release version comes from git tags via `${{
env.BUN_VERSION }}`

---

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-10-09 06:30:35 -07:00
Dylan Conway
3f0d16d181 fix (node:zlib): use runCallback for calling the error callback (#23397)
### What does this PR do?
Fixes debug panic running `zlib/zlib.test.js`
### How did you verify your code works?
Manually
2025-10-09 04:01:45 -07:00
pfg
46cf50dee5 Fix 23382 (unicode object key printed as 'key" in snapshot instead of "key") (#23390)
Fixes #23382

Breaking change because any existing snapshots that have unicode keys
will need to be regenerated
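
A minimal example of the kind of snapshot affected (illustrative):

```ts
import { test, expect } from "bun:test";

test("unicode object key", () => {
  // The "héllo" key used to be written to the snapshot file as 'héllo" (mismatched quotes);
  // it is now quoted consistently as "héllo".
  expect({ héllo: 1 }).toMatchSnapshot();
});
```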
2025-10-08 21:25:24 -07:00
Dylan Conway
d17134f851 fix build 2025-10-08 18:34:57 -07:00
robobun
f6f7e66a2c Add back --only flag to test runner (#23385)
Fixes #23380 - this is a use-case for the `--only` flag that I missed

Adds back the `--only` flag. When running `bun test` on a full test suite without this flag, a `.only()` test runs alone within its own file, but tests from every other file still run. With this flag, tests from other files are not run either.
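
A small sketch of the difference (hypothetical test files):

```ts
// a.test.ts
import { test, expect } from "bun:test";

test.only("focused", () => {
  expect(1 + 1).toBe(2);
});

// b.test.ts (a separate file in the same suite)
// test("unrelated", () => { ... });
//
// `bun test`        → "focused" runs; "unrelated" in b.test.ts still runs.
// `bun test --only` → only "focused" runs; b.test.ts's tests are skipped as well.
```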

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: pfg <pfg@pfg.pw>
2025-10-08 18:26:56 -07:00
robobun
6875cc3f7b test: refactor bundler_promiseall_deadcode to use itBundled helper (#23370)
## Summary

Modernizes `test/bundler/bundler_promiseall_deadcode.test.ts` to use the
`itBundled` test helper instead of manual temp directory creation and
spawning. This makes the test more concise, maintainable, and consistent
with other bundler tests.

## Changes

- Replace `tempDirWithFiles` + manual `Bun.spawn` with `itBundled`
- Use `files` object for test fixtures instead of creating a temp
directory
- Use `onAfterBundle` callback for bundled output assertions
- Use `run.validate` for runtime stderr validation
- Use `run.partialStdout` for stdout verification
- Preserve all original test assertions and behavior

## Test Results

All 3 tests pass with identical functional behavior:

```
 3 pass
 0 fail
 2 snapshots, 23 expect() calls
Ran 3 tests across 1 file. [8.95s]
```

## Verification

All original assertions are preserved:
- ✅ Build success validation
- ✅ Bundled output snapshots (updated paths to match itBundled format)
- ✅ `__esm` and `__promiseAll` presence/absence checks
- ✅ Runtime execution validation (exit code 0)
- ✅ Runtime stderr validation (no async syntax errors)
- ✅ Runtime stdout validation (contains expected output)

The test is now more concise (407 insertions vs 514 deletions) while
maintaining full test coverage.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-08 18:25:37 -07:00
Dylan Conway
0601eb0007 Make --linker=isolated the default for bun install (#23311)
### What does this PR do?
Makes isolated installs the default install strategy for projects with
workspaces in Bun v1.3.

Also fixes creating patches with `bun patch` and `--linker isolated`

Fixes #22693

### How did you verify your code works?
Added tests for node_modules renaming and for `bun patch` with isolated installs.

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-08 18:00:38 -07:00
Michael H
767e03ef24 load local bunfig.toml for bun run earlier (for run.bun option) (#16664)
Alternative to #15596; this now only impacts `bun run` for the same cwd dir. It does not affect `bunx` ([even though according to the code it should load it](7830e15650/src/cli.zig (L2597-L2628))), and isn't as fancy as `bun install`, which makes sure to check the bunfig in the `package.json` dir.

This shouldn't cause any performance issues because the file was already being loaded; it's just loaded earlier now so the `run.bun` option can take effect.


Fixes #11445, (as well as fixes #15484, fixes #15483, fixes #17064)

---------

Co-authored-by: pfg <pfg@pfg.pw>
2025-10-08 12:13:06 -07:00
robobun
93910f34da Fix bin linking to atomically normalize CRLF in shebang lines (#23360)
## Summary

This PR improves the correctness of bin linking by atomically normalizing `\r\n` to `\n` in shebang lines.

### Changes

- **Refactored shebang normalization in `src/install/bin.zig`**:
  - Extracted logic into separate `tryNormalizeShebang` function
  - Changed from in-place file modification to atomic file replacement
  - Reads entire file, creates temporary file with corrected shebang, then atomically renames
  - Properly cleans up temporary files on errors

- **Added test coverage**:
  - New test file `test/cli/install/shebang-normalize.test.ts` verifies CRLF normalization works correctly
  - Modified existing test in `bun-link.test.ts` to use Python script with CRLF shebang

### Why

The previous implementation modified files in-place by seeking to the
`\r` position and overwriting with `\n`. This could potentially corrupt
files if interrupted mid-write. The new atomic approach ensures file
integrity by writing to a temporary file first, then renaming it to
replace the original.
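
As a rough sketch of that atomic approach (illustrative TypeScript with hypothetical names; the real implementation is the Zig `tryNormalizeShebang` in `src/install/bin.zig`, and error cleanup plus permission handling are omitted here):

```ts
import { readFileSync, writeFileSync, renameSync } from "node:fs";

function normalizeShebang(path: string): void {
  const contents = readFileSync(path, "utf8");
  const firstLineEnd = contents.indexOf("\n");
  // Only touch files whose shebang line ends in \r\n.
  if (!contents.startsWith("#!") || firstLineEnd < 1 || contents[firstLineEnd - 1] !== "\r") return;

  const fixed = contents.slice(0, firstLineEnd - 1) + contents.slice(firstLineEnd);
  const tmp = path + ".tmp";
  writeFileSync(tmp, fixed); // write the corrected copy first...
  renameSync(tmp, path);     // ...then atomically replace the original
}
```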

## Test plan

- ✅ `bun bd test test/cli/install/shebang-normalize.test.ts` - passes
- ✅ Verified bins with CRLF shebangs are normalized to LF during linking
- ✅ Code compiles successfully

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-10-08 01:51:25 -07:00
Jarred Sumner
8e27087853 Update no-validate-leaksan.txt 2025-10-08 01:50:37 -07:00
Jarred Sumner
3c5565184e 4 fewer values for the global object to visit (#23368)
### What does this PR do?

### How did you verify your code works?
2025-10-08 01:14:00 -07:00
Meghan Denny
7d10e57422 test: fix claudecode-flag.test.ts (#23367) 2025-10-08 00:54:37 -07:00
Jarred Sumner
562b79c57f Deflake test/js/web/fetch/request-cyclic-reference.test.ts test/js/web/fetch/response-cyclic-reference.test.ts 2025-10-08 00:31:52 -07:00
Ciro Spaciari
625e537f5d fix(NodeHTTP) remove unneeded code, add more safety measures against raw_response after upgrade/close (#23348)
### What does this PR do?
The BeforeOpen code is not necessary since we now have the `setOnSocketUpgraded` callback, and we should NOT convert a websocket to a response. Also make sure that no closed socket is passed to `JSNodeHTTPServerSocket`, and move isIdle into AsyncSocketData so it is more reliable (works for both websocket and normal sockets).

### How did you verify your code works?
CI

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-10-07 22:35:08 -07:00
robobun
d3ff6a5e35 Fix process.stdin not receiving data after pause/resume (#23362)
## Summary

Fixed a race condition where calling `pause()` followed by `resume()` on
`process.stdin` would prevent data from being received, causing the
process to exit immediately instead of listening for input.

## Root Cause

The issue was in the pause/resume event handling logic in
`ProcessObjectInternals.ts`:

1. When `pause()` is called, the "pause" event handler schedules a
`disown()` call for the next tick
2. When `resume()` is called immediately after, it calls `own()` to
acquire a stream reader
3. On the next tick, the scheduled `disown()` from step 1 executes and
incorrectly releases the reader that was just acquired in step 2

This race condition left the stream without a reader, so no data could
be received.

## Solution

Added a `pendingDisown` flag that:
- Gets set to `true` when scheduling a disown operation
- Gets cleared to `false` when `own()` is called (during resume)
- Prevents the scheduled disown from executing if it has been cancelled
by a subsequent `own()` call
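
In pseudocode, the guard described above looks roughly like this (hypothetical names; the real logic lives in `ProcessObjectInternals.ts`):

```ts
let pendingDisown = false;

function onPause(disown: () => void) {
  pendingDisown = true;
  process.nextTick(() => {
    // Only release the reader if resume() hasn't re-acquired it in the meantime.
    if (pendingDisown) disown();
  });
}

function own(acquireReader: () => void) {
  pendingDisown = false; // cancel a disown scheduled by an earlier pause()
  acquireReader();
}
```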

## Test Plan

- [x] Added regression tests in
`test/regression/issue/stdin-pause-resume.test.ts`
- [x] Verified fix with original reproduction case
- [x] Existing stdin/tty tests still pass
(`tty-readstream-ref-unref.test.ts`,
`tty-reopen-after-stdin-eof.test.ts`)

## Reproduction

Before this fix, the following code would exit immediately:
```ts
process.stdin.on("data", chunk => {
  process.stdout.write(chunk);
});
process.stdin.pause();
process.stdin.resume();
```

After the fix, it correctly waits for and processes input.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-07 22:04:40 -07:00
Dylan Conway
3143c9216c Update security scanner test snapshots (#23361)
### What does this PR do?

### How did you verify your code works?
2025-10-07 20:11:07 -07:00
Jarred Sumner
c0660674fb Fix outdated doc 2025-10-07 20:08:57 -07:00
robobun
5f1ca176cd fix(windows): prevent data loss in pipe reads after libuv 1.51.0 upgrade (#23340)
### What does this PR do?

Fixes data loss when reading large amounts of data from subprocess pipes
on Windows, a regression introduced by the libuv 1.51.0 upgrade in
commit e3783c244f.

### The Problem

When piping large data through a subprocess on Windows (e.g.,
`process.stdin.pipe(process.stdout)`), Bun randomly loses ~73KB of data
out of 1MB, receiving only ~974KB instead of the full 1048576 bytes.

The subprocess correctly receives all 1MB on stdin, but the parent
process loses data when reading from the subprocess stdout.

### Root Cause Analysis

#### libuv 1.51.0 Change

The libuv 1.51.0 upgrade (commit
[libuv/libuv@727ee723](727ee7237e))
changed Windows pipe reading behavior:

**Before:** libuv would call `PeekNamedPipe` to check available bytes,
then read exactly that amount.

**After:** libuv attempts immediate non-blocking reads (up to 65536
bytes) before falling back to async reads. If less data is available
than requested, it returns what's available and signals `more=0`,
causing the read loop to break.

This optimization introduces **0-byte reads** when data isn't
immediately available, which are delivered to Bun's read callback.

#### The Race Condition

When Bun's `WindowsBufferedReader` called `onRead(.drained)` for these
0-byte reads, it created a race condition. Debug logs clearly show the
issue:

**Error case (log.txt):**
```
Line 79-80: onStreamRead = 0 (drained)
Line 81:    filesink closes (stdin closes)
Line 85:    onStreamRead = 6024        ← Should be 74468!
Line 89:    onStreamRead = -4095 (EOF)
```

**Success case (success.log.txt):**
```
Line 79-80: onStreamRead = 0 (drained)
Line 81:    filesink closes (stdin closes)
Line 85:    onStreamRead = 74468       ← Full chunk!
Line 89-90: onStreamRead = 0 (drained)
Line 91:    onStreamRead = 6024
Line 95:    onStreamRead = -4095 (EOF)
```

When stdin closes while a 0-byte drained read is pending, the next read
returns truncated data (6024 bytes instead of 74468 bytes).

### The Fix

Two changes to `WindowsBufferedReader` in `src/io/PipeReader.zig`:

#### 1. Ignore 0-byte reads (line 937-940)

Don't call `onRead(.drained)` for 0-byte reads. Just return and let
libuv queue the next read. This prevents the race condition that causes
truncated reads.

```zig
0 => {
    // With libuv 1.51.0+, calling onRead(.drained) here causes a race condition
    // where subsequent reads return truncated data. Just ignore 0-byte reads.
    return;
},
```

#### 2. Defer `has_inflight_read` flag clearing (line 827-839)

Clear the flag **after** the read callback completes, not before. This
prevents libuv from starting a new overlapped read operation while we're
still processing the current data buffer, which could cause memory
corruption per the libuv commit message:

> "Starting a new read after uv_read_cb returns causes memory corruption
on the OVERLAPPED read_req if uv_read_stop+uv_read_start was called
during the callback"

```zig
const result = onReadChunkFn(this.parent, buf, hasMore);
// Clear has_inflight_read after the callback completes
this.flags.has_inflight_read = false;
return result;
```

### How to Test

Run the modified test in
`test/js/bun/spawn/spawn-stdin-readable-stream.test.ts`:

```js
test("ReadableStream with very large chunked data", async () => {
  const chunkSize = 64 * 1024; // 64KB chunks
  const numChunks = 16; // 1MB total
  const chunk = Buffer.alloc(chunkSize, "x");

  const stream = new ReadableStream({
    pull(controller) {
      if (pushedChunks < numChunks) {
        controller.enqueue(chunk);
        pushedChunks++;
      } else {
        controller.close();
      }
    },
  });

  await using proc = spawn({
    cmd: [bunExe(), "-e", `
      let length = 0;
      process.stdin.on('data', (data) => length += data.length);
      process.once('beforeExit', () => console.error(length));
      process.stdin.pipe(process.stdout)
    `],
    stdin: stream,
    stdout: "pipe",
    env: bunEnv,
  });

  const text = await proc.stdout.text();
  expect(text.length).toBe(chunkSize * numChunks); // Should be 1048576
});
```

**Before fix:** Randomly fails with ~974KB instead of 1MB  
**After fix:** Consistently passes with full 1MB

Run ~100 times to verify the race condition is fixed.

### Related Issues

This may also fix #23071 (Windows scripts hanging), though that issue
needs separate verification.

### Why Draft?

Marking as draft for Windows testing by the team. The fix is based on
detailed debug log analysis showing the exact race condition, but needs
verification on Windows CI.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-10-07 18:33:34 -07:00
Alistair Smith
de6ea7375a types: Add (passing) regression test for #5396 2025-10-07 16:32:42 -07:00
robobun
ed0d932a6d test: show received value in snapshot CI error message (#23352)
## Summary

When a snapshot is created in CI without `--update-snapshots`, the error
message now displays the received value that was attempting to be
snapshotted. This helps developers understand what value triggered the
error.

## Changes

- Modified the `SnapshotCreationNotAllowedInCI` error message in
`src/bun.js/test/expect.zig` to include the received value using the
same formatting pattern as other expect error messages

## Before

```
Snapshot creation is not allowed in CI environments unless --update-snapshots is used
If this is not a CI environment, set the environment variable CI=false to force allow.
```

## After

```
Snapshot creation is not allowed in CI environments unless --update-snapshots is used
If this is not a CI environment, set the environment variable CI=false to force allow.

Received: <formatted value>
```

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-07 16:27:32 -07:00
Alistair Smith
95ebe828fa types: slight simplification of test.each def 2025-10-07 16:24:10 -07:00
Alistair Smith
b289828de2 [@types/bun]: Flatten non-wide types in test.each() (#23354) 2025-10-07 16:13:12 -07:00
robobun
92ec83a92a fix(windows): handle UV_ENOTCONN gracefully when spawning with long cwd (#23344) 2025-10-07 15:32:31 -07:00
pfg
5e8feca98b Enable breaking_changes_1_3 (#23308)
Breaking changes:

- bun:test: disallow creating snapshots or using `.only()` in CI
  - for users: hopefully this should only reveal existing bugs in tests, not cause failures.
- general: enable calling unhandled rejection handlers for `ErrorBuilder.reject()`
  - for users: this might reveal some unhandled rejections that were not visible before.
2025-10-07 12:07:29 -07:00
Ciro Spaciari
bcbba97807 refactor(Response) isolate body usage (#23313) 2025-10-07 08:17:31 -07:00
robobun
6c6849cbf5 fix: prevent crash when workspace includes "./" or ".\" (#23337) 2025-10-07 08:17:13 -07:00
robobun
420d51985b fix(test): prevent AGENTS env var from affecting claudecode-flag test (#23331)
## Summary

- Clone `bunEnv` and delete `AGENTS` property in `beforeAll`
- Replace all `bunEnv` references with `testEnv` in test spawns
- Prevents parent process's `AGENTS` env var from leaking into tests

## Problem

The `claudecode-flag` test was using `bunEnv` directly, which includes
`...process.env`. When running in environments like Claude Code where
`AGENTS` may be set, this variable would leak into the test child
processes and potentially affect test behavior.

## Solution

Created a `testEnv` clone in `beforeAll` that explicitly deletes
`AGENTS`, ensuring consistent test behavior regardless of the parent
process's environment.
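
Roughly, the setup looks like this (a sketch reconstructed from the description above; the actual test file may differ):

```ts
import { beforeAll } from "bun:test";
import { bunEnv } from "harness";

let testEnv: Record<string, string | undefined>;

beforeAll(() => {
  testEnv = { ...bunEnv };
  delete testEnv.AGENTS; // keep a parent-process AGENTS variable from leaking into spawned tests
});

// ...spawns in the tests then use `env: testEnv` instead of `env: bunEnv`.
```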

## Test plan

- [x] Test passes without `AGENTS` set
- [x] Test passes with `AGENTS=1` set in parent environment

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-07 03:35:36 -07:00
shadcn
3077081646 feat: upgrade react-shadcn (#23328)
### What does this PR do?

This PR upgrades the `react-shadcn` template:
- Upgrades to the new Tailwind v4 styles and components
- Updates the example components to use the new ones.
- Removed unused form component
- Fixed some a11y issues with the example component.

### How did you verify your code works?

- Ran `bun build` to test if the template builds with no errors.

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-10-07 01:49:19 -07:00
robobun
0b7aed1d0d fix(test): remove quotes from string variables in test.each (#23244)
## Summary
Fixes #23206

When using `test.each` with object syntax and `$variable` interpolation,
string values were being quoted (e.g., `"apple"` instead of `apple`).
This didn't match the behavior of `%s` formatting or Jest's behavior.

## Changes
- Modified `formatLabel` in `src/bun.js/test/jest.zig` to check if the
value is a primitive string and use `toString()` instead of the
formatter with `quote_strings=true`
- Added regression test in `test/regression/issue/23206.test.ts`

## Example

**Before:**
```
test.each([
  { name: "apple" },
  { name: "banana" }
])("fruit #%# is $name", fruit => {
  // Test names were:
  // "fruit #0 is "apple""
  // "fruit #1 is "banana""
});
```

**After:**
```
test.each([
  { name: "apple" },
  { name: "banana" }
])("fruit #%# is $name", fruit => {
  // Test names are now:
  // "fruit #0 is apple"
  // "fruit #1 is banana"
});
```

## Test plan
- [x] Added regression test that verifies both `%s` and `$name` syntax
produce consistent output
- [x] Tested with `AGENT=0` - all tests pass
- [x] Verified other primitive types (numbers, booleans) still format
correctly
- [x] Verified complex objects still use proper formatting

This matches Jest's behavior after their fix:
https://github.com/jestjs/jest/issues/7689

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: pfg <pfg@pfg.pw>
2025-10-06 20:19:25 -07:00
Jarred Sumner
680e668efd Update CLAUDE.md 2025-10-06 20:03:46 -07:00
robobun
d273f7fdde fix: zstd decompression with async-compressed data (#23314) (#23317)
### What does this PR do?

Fixes #23314 where `zlib.zstdCompress()` created data that caused an
out-of-memory error when decompressed with `Bun.zstdDecompressSync()`.

#### 1. `zlib.zstdCompress()` now sets `pledgedSrcSize`

The async convenience method now automatically sets the `pledgedSrcSize`
option to the input buffer size. This ensures the compressed frame
includes the content size in the header, making sync and async
compression produce identical output.

**Node.js compatibility**: `pledgedSrcSize` is a documented Node.js
option:
-
[`vendor/node/doc/api/zlib.md:754-758`](https://github.com/oven-sh/bun/blob/main/vendor/node/doc/api/zlib.md#L754-L758)
-
[`vendor/node/lib/zlib.js:893`](https://github.com/oven-sh/bun/blob/main/vendor/node/lib/zlib.js#L893)
-
[`vendor/node/src/node_zlib.cc:890-904`](https://github.com/oven-sh/bun/blob/main/vendor/node/src/node_zlib.cc#L890-L904)

#### 2. Added `bun.zstd.decompressAlloc()` - centralized safe
decompression

Created a new function in `src/deps/zstd.zig` that handles decompression
in one place with automatic safety features:

- **Handles unknown content sizes**: Automatically switches to streaming
decompression when the zstd frame doesn't include content size (e.g.,
from streams without `pledgedSrcSize`)
- **16MB safety limit**: For security, if the reported decompressed size
exceeds 16MB, streaming decompression is used instead of blindly
trusting the header
- **Fast path for small files**: Still uses efficient pre-allocation for
files < 16MB with known sizes

This centralized fix automatically protects:
- `Bun.zstdDecompressSync()` / `Bun.zstdDecompress()`
- `StandaloneModuleGraph` source map decompression
- Any other code using `bun.zstd` decompression
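
In pseudocode, the decision `decompressAlloc()` makes looks roughly like this (illustrative TypeScript with hypothetical helper names; the real code is Zig in `src/deps/zstd.zig`):

```ts
// Hypothetical helpers standing in for the zstd calls the Zig code uses:
declare function readFrameContentSize(frame: Uint8Array): number | null;
declare function decompressStreaming(frame: Uint8Array): Uint8Array;
declare function decompressIntoBuffer(frame: Uint8Array, out: Uint8Array): Uint8Array;

const SAFE_PREALLOC_LIMIT = 16 * 1024 * 1024; // 16MB

function decompressAlloc(frame: Uint8Array): Uint8Array {
  const reportedSize = readFrameContentSize(frame); // size from the frame header, or null if unknown

  if (reportedSize === null || reportedSize > SAFE_PREALLOC_LIMIT) {
    // Unknown or suspiciously large size: fall back to streaming instead of trusting the header.
    return decompressStreaming(frame);
  }

  // Fast path: pre-allocate exactly the reported size for small frames with known sizes.
  return decompressIntoBuffer(frame, new Uint8Array(reportedSize));
}
```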

### How did you verify your code works?

**Before:**
```typescript
const input = "hello world";

// Async compression
const compressed = await new Promise((resolve, reject) => {
  zlib.zstdCompress(input, (err, result) => {
    if (err) reject(err);
    else resolve(result);
  });
});

// This would fail with "Out of memory"
const decompressed = Bun.zstdDecompressSync(compressed);
```
**Error**: `RangeError: Out of memory` (tried to allocate UINT64_MAX
bytes)

**After:**
```typescript
const input = "hello world";

// Async compression (now includes content size)
const compressed = await new Promise((resolve, reject) => {
  zlib.zstdCompress(input, (err, result) => {
    if (err) reject(err);
    else resolve(result);
  });
});

// ✅ Works! Falls back to streaming decompression if needed
const decompressed = Bun.zstdDecompressSync(compressed);
console.log(decompressed.toString()); // "hello world"
```

**Tests:**
- ✅ All existing tests pass
- ✅ New regression tests for async/sync compression compatibility (`test/regression/issue/23314/zstd-async-compress.test.ts`)
- ✅ Test for large (>16MB) decompression using streaming (`test/regression/issue/23314/zstd-large-decompression.test.ts`)
- ✅ Test for various input sizes and types (`test/regression/issue/23314/zstd-large-input.test.ts`)

**Security:**
The 16MB safety limit protects against malicious zstd frames that claim
huge decompressed sizes in the header, preventing potential OOM attacks.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-06 19:56:40 -07:00
robobun
d92d2e5770 Add bunfig.toml support for test randomize, seed, and rerunEach options (#23286)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Alistair Smith <hi@alistair.sh>
2025-10-06 19:48:16 -07:00
robobun
85f89a100e fix(test): lcov reporter now counts only executable lines (#23320)
Fixes #12095

Manually confirmed to fix the case, but it would be better to have an
automated test to compare default reporter output with lcov reporter
output.

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: pfg <pfg@pfg.pw>
2025-10-06 19:47:24 -07:00
Dylan Conway
5b51d421da fix(node:path): reverse iterate path.resolve arguments, and stop on absolute (#23293)
### What does this PR do?
Matches node behavior.

Fixes #20975
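
Concretely, Node (and now Bun) processes `path.resolve()` arguments from right to left and stops once an absolute path has been built, so earlier arguments are never consulted:

```ts
import path from "node:path";

// "dir/file.txt" is resolved against "/base"; once the result is absolute,
// "/ignored" is never looked at.
console.log(path.resolve("/ignored", "/base", "dir", "file.txt"));
// => "/base/dir/file.txt" (on POSIX)
```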
### How did you verify your code works?
Manually and added a test

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-10-06 19:47:05 -07:00