fix(windows): prevent data loss in pipe reads after libuv 1.51.0 upgrade (#23340)
robobun · 5f1ca176cd
Files: test/regression/issue/11297/11297.fixture.ts
### What does this PR do?

Fixes data loss when reading large amounts of data from subprocess pipes
on Windows, a regression introduced by the libuv 1.51.0 upgrade in
commit e3783c244f.

### The Problem

When piping large data through a subprocess on Windows (e.g.,
`process.stdin.pipe(process.stdout)`), Bun intermittently loses ~73KB of
data out of 1MB, receiving only ~974KB instead of the full 1048576 bytes.

The subprocess correctly receives all 1MB on stdin, but the parent
process loses data when reading from the subprocess stdout.

### Root Cause Analysis

#### libuv 1.51.0 Change

The libuv 1.51.0 upgrade (commit
[libuv/libuv@727ee723](https://github.com/libuv/libuv/commit/727ee7237e))
changed Windows pipe reading behavior:

**Before:** libuv would call `PeekNamedPipe` to check available bytes,
then read exactly that amount.

**After:** libuv attempts immediate non-blocking reads (up to 65536
bytes) before falling back to async reads. If less data is available
than requested, it returns what's available and signals `more=0`,
causing the read loop to break.

This optimization introduces **0-byte reads** when data isn't
immediately available, which are delivered to Bun's read callback.
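
For intuition, here is a toy TypeScript model of the two strategies. This is
not libuv's code; `peekAvailable` and `readUpTo` are hypothetical stand-ins
for `PeekNamedPipe` and a non-blocking read:

```ts
// Hypothetical pipe interface, for illustration only.
interface Pipe {
  peekAvailable(): number; // bytes currently buffered (PeekNamedPipe stand-in)
  readUpTo(max: number): Uint8Array; // returns 0..max bytes without blocking
}

// Pre-1.51.0: peek first, then read exactly the amount known to be buffered.
// A read is only issued once data exists, so the callback never sees 0 bytes.
function readOld(pipe: Pipe, onRead: (chunk: Uint8Array) => void): void {
  const available = pipe.peekAvailable();
  if (available > 0) onRead(pipe.readUpTo(available));
}

// 1.51.0+: eagerly read up to 64 KiB before falling back to an async read.
// If nothing is buffered yet, the callback observes a 0-byte chunk.
function readNew(pipe: Pipe, onRead: (chunk: Uint8Array) => void): void {
  onRead(pipe.readUpTo(65536)); // chunk.length may be 0
}
```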

#### The Race Condition

When Bun's `WindowsBufferedReader` called `onRead(.drained)` for these
0-byte reads, it created a race condition. Debug logs clearly show the
issue:

**Error case (log.txt):**
```
Line 79-80: onStreamRead = 0 (drained)
Line 81:    filesink closes (stdin closes)
Line 85:    onStreamRead = 6024        ← Should be 74468!
Line 89:    onStreamRead = -4095 (EOF)
```

**Success case (success.log.txt):**
```
Line 79-80: onStreamRead = 0 (drained)
Line 81:    filesink closes (stdin closes)
Line 85:    onStreamRead = 74468       ← Full chunk!
Line 89-90: onStreamRead = 0 (drained)
Line 91:    onStreamRead = 6024
Line 95:    onStreamRead = -4095 (EOF)
```

When stdin closes while a 0-byte drained read is pending, the next read
returns truncated data (6024 bytes instead of 74468 bytes).

### The Fix

Two changes to `WindowsBufferedReader` in `src/io/PipeReader.zig`:

#### 1. Ignore 0-byte reads (lines 937-940)

Don't call `onRead(.drained)` for 0-byte reads. Just return and let
libuv queue the next read. This prevents the race condition that causes
truncated reads.

```zig
0 => {
    // With libuv 1.51.0+, calling onRead(.drained) here causes a race condition
    // where subsequent reads return truncated data. Just ignore 0-byte reads.
    return;
},
```
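
Dropping the event is safe because a 0-byte read carries no payload; actual
end-of-stream is still delivered as a separate `UV_EOF` (-4095) read, as the
logs above show, so EOF handling is unaffected.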

#### 2. Defer `has_inflight_read` flag clearing (lines 827-839)

Clear the flag **after** the read callback completes, not before. This
prevents libuv from starting a new overlapped read operation while we're
still processing the current data buffer, which could cause memory
corruption per the libuv commit message:

> "Starting a new read after uv_read_cb returns causes memory corruption
on the OVERLAPPED read_req if uv_read_stop+uv_read_start was called
during the callback"

```zig
const result = onReadChunkFn(this.parent, buf, hasMore);
// Clear has_inflight_read after the callback completes
this.flags.has_inflight_read = false;
return result;
```

### How to Test

Run the modified test in
`test/js/bun/spawn/spawn-stdin-readable-stream.test.ts`:

```js
test("ReadableStream with very large chunked data", async () => {
  const chunkSize = 64 * 1024; // 64KB chunks
  const numChunks = 16; // 1MB total
  const chunk = Buffer.alloc(chunkSize, "x");

  const stream = new ReadableStream({
    pull(controller) {
      if (pushedChunks < numChunks) {
        controller.enqueue(chunk);
        pushedChunks++;
      } else {
        controller.close();
      }
    },
  });

  await using proc = spawn({
    cmd: [bunExe(), "-e", `
      let length = 0;
      process.stdin.on('data', (data) => length += data.length);
      process.once('beforeExit', () => console.error(length));
      process.stdin.pipe(process.stdout)
    `],
    stdin: stream,
    stdout: "pipe",
    env: bunEnv,
  });

  const text = await proc.stdout.text();
  expect(text.length).toBe(chunkSize * numChunks); // Should be 1048576
});
```

**Before fix:** Randomly fails with ~974KB instead of 1MB  
**After fix:** Consistently passes with full 1MB

Run ~100 times to verify the race condition is fixed.
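
A minimal driver sketch for that, assuming `bun` is on `PATH` and the working
directory is the repo root:

```ts
// Rerun the test file repeatedly; an intermittent failure surfaces as a
// non-zero exit code from `bun test`.
import { spawnSync } from "bun";

for (let i = 1; i <= 100; i++) {
  const run = spawnSync({
    cmd: ["bun", "test", "test/js/bun/spawn/spawn-stdin-readable-stream.test.ts"],
    stdout: "inherit",
    stderr: "inherit",
  });
  if (run.exitCode !== 0) {
    throw new Error(`run #${i} failed with exit code ${run.exitCode}`);
  }
}
console.log("all 100 runs passed");
```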

### Related Issues

This may also fix #23071 (Windows scripts hanging), though that issue
needs separate verification.

### Why Draft?

Marking as draft for Windows testing by the team. The fix is based on
detailed debug log analysis showing the exact race condition, but needs
verification on Windows CI.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-10-07 18:33:34 -07:00

The regression fixture, `test/regression/issue/11297/11297.fixture.ts` (TypeScript):

```ts
// Regression fixture for issue 11297: pipe 10 x 1 MiB through a subprocess
// that echoes stdin to stdout, and verify the parent reads every byte back.
import { bunExe } from "harness";

const string = Buffer.alloc(1024 * 1024, "zombo.com\n").toString();
process.exitCode = 1;

const proc = Bun.spawn({
  cmd: [
    bunExe(),
    "-e",
    `
      let length = 0;
      process.stdin.on('data', (data) => length += data.length);
      process.once('beforeExit', () => console.error(length));
      process.stdin.pipe(process.stdout)
    `,
  ],
  stdio: ["pipe", "pipe", "inherit"],
});

const writer = (async function () {
  console.time("Sent " + string.length + " bytes x 10");
  for (let i = 0; i < 10; i += 1) {
    // TODO: investigate if the need for this "await" is a bug.
    // I believe FileSink should be buffering internally.
    //
    // To reproduce:
    //
    // 1. Remove "await" from proc.stdin.write(string) (keep the .end() await)
    // 2. Run `hyperfine "bun test/regression/issue/011297.fixture.ts"` (or run this many times on macOS.)
    //
    await proc.stdin.write(string);
  }
  await proc.stdin.end();
  console.timeEnd("Sent " + string.length + " bytes x 10");
})();

const reader = (async function () {
  console.time("Read " + string.length + " bytes x 10");
  const chunks = [];
  for await (const chunk of proc.stdout) {
    chunks.push(chunk);
  }
  console.timeEnd("Read " + string.length + " bytes x 10");
  return chunks;
})();

// The writer's resolved value is unused; Promise.all only ensures it finished.
const [chunks, exitCode] = await Promise.all([reader, proc.exited, writer]);
const combined = Buffer.concat(chunks).toString().trim();

if (combined !== string.repeat(10)) {
  await Bun.write("a.txt", string.repeat(10));
  await Bun.write("b.txt", combined);
  throw new Error(`string mismatch!
exit code: ${exitCode}
hash:
input ${Bun.SHA1.hash(string.repeat(10), "hex")}
output: ${Bun.SHA1.hash(combined, "hex")}
length:
input ${string.length * 10}
output: ${combined.length}
`);
}
if (exitCode !== 0) {
  throw new Error("process exited with non-zero code");
}
process.exitCode = 0;
```