mirror of
https://github.com/oven-sh/bun
synced 2026-02-25 11:07:19 +01:00
## Summary

When a streaming HTTP response body is cancelled via `reader.cancel()` or `body.cancel()`, `FetchTasklet.readable_stream_ref` (a `ReadableStream.Strong` GC root) was never released. This caused ReadableStream objects, their associated Promises, and Uint8Array buffers to be retained indefinitely, leaking ~260KB per cancelled streaming request.

## Root Cause

`ByteStream.onCancel()` cleaned up its own state (`done = true`, buffer freed, pending promise resolved) but **did not notify the FetchTasklet**. The Strong ref was only released when:

- `has_more` became `false` (HTTP response fully received), but the server may keep the connection open
- `Bun__FetchResponse_finalize` ran, but this checks `readable_stream_ref.held.has()` and **skips cleanup when the Strong ref is set** (line 958)

This created a circular dependency: the Strong ref prevented GC, and the finalizer skipped cleanup because the Strong ref existed.

## Fix

Add a `cancel_handler` callback to `NewSource` (`ReadableStream.zig`) that propagates cancel events to the data producer. `FetchTasklet` registers this callback via `Body.PendingValue.onStreamCancelled`. When the stream is cancelled, the handler calls `ignoreRemainingResponseBody()` to release the Strong ref, stop processing further HTTP data, and unref the event loop.

To prevent a use-after-free when the `FetchTasklet` is freed before `cancel()` is called (e.g., the HTTP response completes normally, then the user cancels the orphaned stream), `clearStreamCancelHandler()` nulls the `cancel_handler` on the `ByteStream.Source` at all three sites where `readable_stream_ref` is released.

## Test

Added `test/js/web/fetch/fetch-stream-cancel-leak.test.ts`, which uses a raw TCP server (`Bun.listen`) that sends one HTTP chunk and then keeps the connection open. The client fetches 30 times, reads one chunk, cancels, then asserts that `heapStats().objectTypeCounts.ReadableStream` does not accumulate. Before the fix, all 30 ReadableStreams leaked; after the fix, none do.
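To illustrate the mechanism the fix relies on, here is a minimal sketch using the web-standard ReadableStream API (not Bun's internal Zig types): the underlying source's `cancel()` callback is the hook through which a data producer learns the consumer gave up, analogous to the `cancel_handler` added to `NewSource`. All names below are illustrative.

```typescript
// A producer that would leak if it never observed cancellation.
let producerCleanedUp = false;

const stream = new ReadableStream<Uint8Array>({
  pull(controller) {
    // Producer keeps enqueuing chunks while the consumer reads.
    controller.enqueue(new Uint8Array(1024));
  },
  cancel(_reason) {
    // Without this hook, the producer never learns the consumer is gone,
    // and any strong references it holds (the analogue of
    // readable_stream_ref) would be retained indefinitely.
    producerCleanedUp = true;
  },
});

const reader = stream.getReader();
await reader.read();          // consume one chunk
await reader.cancel("done");  // fires the underlying source's cancel()

console.log(producerCleanedUp); // true
```

The bug was precisely that `ByteStream.onCancel()` handled its own half of this handshake but never forwarded the event to the producer side.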
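The use-after-free guard can also be sketched in plain TypeScript. This is a hypothetical model, not Bun's code: a `Source` holds a nullable cancel callback, and the producer must null it on teardown, mirroring what `clearStreamCancelHandler()` does at the three release sites.

```typescript
// Hypothetical model of the handler-clearing pattern.
class Source {
  cancelHandler: (() => void) | null = null;
  cancel() {
    // Guard: the producer may already have been torn down.
    this.cancelHandler?.();
  }
}

class Producer {
  freed = false;
  constructor(private source: Source) {
    source.cancelHandler = () => this.onCancel();
  }
  onCancel() {
    if (this.freed) throw new Error("use-after-free!");
  }
  destroy() {
    this.freed = true;
    // Without this line, a late cancel() would call into freed state.
    this.source.cancelHandler = null;
  }
}

const src = new Source();
const producer = new Producer(src);
producer.destroy(); // producer freed first (response completed normally)
src.cancel();       // late cancel from the orphaned stream: safe no-op
console.log("no crash");
```

The key invariant is that whoever releases the producer also severs the callback edge pointing back at it, so a stale stream can never call into freed memory.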