Commit Graph

332 Commits

robobun
525315cf39 fix: include cookies in WebSocket upgrade response (#23499)
Fixes #23474

## Summary

When `request.cookies.set()` is called before `server.upgrade()`, the
cookies are now properly included in the WebSocket upgrade response
headers.

## Problem

Previously, cookies set on the request via `req.cookies.set()` were only
written for regular HTTP responses but were ignored during WebSocket
upgrades. Users had to manually pass cookies via the `headers` option:

```js
server.upgrade(req, {
  headers: {
    "Set-Cookie": `SessionId=${sessionId}`,
  },
});
```

## Solution

Modified `src/bun.js/api/server.zig` to check for and write cookies to
the WebSocket upgrade response after the "101 Switching Protocols"
status is set but before the actual upgrade is performed.

The fix handles both cases:
- When `upgrade()` is called without custom headers
- When `upgrade()` is called with custom headers

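The new behavior in a minimal sketch (the handler shape and cookie value are illustrative, and it assumes the request object exposes the `cookies` map described above):

```ts
const server = Bun.serve({
  fetch(req, server) {
    // Cookies set on the request before upgrading are now written into the
    // "101 Switching Protocols" response automatically.
    req.cookies.set("SessionId", crypto.randomUUID());
    if (server.upgrade(req)) return; // Set-Cookie is included in the upgrade response
    return new Response("Expected a WebSocket upgrade", { status: 400 });
  },
  websocket: {
    message(ws, message) {
      ws.send(message); // simple echo so the sketch is runnable end to end
    },
  },
});
```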
## Testing

Added comprehensive regression tests in
`test/regression/issue/23474.test.ts` that:
- Verify cookies are set in the upgrade response without custom headers
- Verify cookies are set in the upgrade response with custom headers
- Use `Promise.withResolvers()` for efficient async handling (no
arbitrary timeouts)

Tests confirmed:
- Fail with system bun v1.2.23 (without fix)
- Pass with debug build v1.3.0 (with fix)

## Manual verification

```bash
curl -i -H "Connection: Upgrade" -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  -H "Sec-WebSocket-Version: 13" \
  http://localhost:3000/ws
```

Response now includes:
```
HTTP/1.1 101 Switching Protocols
Set-Cookie: test=123; Path=/; SameSite=Lax
Upgrade: websocket
Connection: Upgrade
...
```

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-11 13:34:06 -07:00
pfg
c50db1dbfb fix unpaired deref when write_file fails due to nametoolong (#23438)
Fixes test\regression\issue\23316-long-path-spawn.test.ts

The problem was that ``await Bun.write(join(deepPath, "test.js"), `console.log("hello");`);`` failed because the path name was too long. The failure happened before `refConcurrently` was called, but `unrefConcurrently` was still called afterwards, leaving the ref/unref pair unbalanced, so when the subprocess spawned it didn't hold a ref.

---------

Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-10-10 03:48:47 -07:00
robobun
8826b4f5f5 Fix WTFTimer issues with Atomics.waitAsync (#23442)
## Summary

Fixes two critical issues in `WTFTimer` when `Atomics.waitAsync` creates
multiple timer instances.

## Problems

### 1. Use-After-Free in `WTFTimer.fire()`

**Location:** `/workspace/bun/src/bun.js/api/Timer/WTFTimer.zig:70-82`

```zig
pub fn fire(this: *WTFTimer, _: *const bun.timespec, _: *VirtualMachine) EventLoopTimer.Arm {
    this.event_loop_timer.state = .FIRED;
    this.imminent.store(null, .seq_cst);
    this.runWithoutRemoving();  // ← Callback might destroy `this`
    return if (this.repeat)     // ← UAF: accessing freed memory
        .{ .rearm = this.event_loop_timer.next }
    else
        .disarm;
}
```

When `Atomics.waitAsync` creates a `DispatchTimer` with a timeout, the
timer fires and the callback destroys `this`, but we continue to access
it.

### 2. Imminent Pointer Corruption

**Location:** `/workspace/bun/src/bun.js/api/Timer/WTFTimer.zig:36-42`

```zig
pub fn update(this: *WTFTimer, seconds: f64, repeat: bool) void {
    // Multiple WTFTimers unconditionally overwrite the shared imminent pointer
    this.imminent.store(if (seconds == 0) this else null, .seq_cst);
    // ...
}
```

All `WTFTimer` instances share the same
`vm.eventLoop().imminent_gc_timer` atomic pointer. When multiple timers
are created (GC timer + Atomics.waitAsync timers), they stomp on each
other's imminent state.

## Solutions

### 1. UAF Fix

Read `this.repeat` and `this.event_loop_timer.next` **before** calling
`runWithoutRemoving()`:

```zig
const should_repeat = this.repeat;
const next_time = this.event_loop_timer.next;
this.runWithoutRemoving();
return if (should_repeat)
    .{ .rearm = next_time }
else
    .disarm;
```

### 2. Imminent Pointer Fix

Use compare-and-swap to only set imminent if it's null, and only clear
it if this timer was the one that set it:

```zig
if (seconds == 0) {
    _ = this.imminent.cmpxchgStrong(null, this, .seq_cst, .seq_cst);
    return;
} else {
    _ = this.imminent.cmpxchgStrong(this, null, .seq_cst, .seq_cst);
}
```

## Test Plan

Added regression test at
`test/regression/issue/atomics-waitasync-wtftimer-uaf.test.ts`:

```javascript
const buffer = new SharedArrayBuffer(16);
const view = new Int32Array(buffer);
Atomics.store(view, 0, 0);

const result = Atomics.waitAsync(view, 0, 0, 10);
setTimeout(() => {
  console.log("hi");
}, 100);
```

**Before:** Crashes with UAF under ASAN  
**After:** Runs cleanly

All existing atomics tests pass.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-10-10 03:47:38 -07:00
Jarred Sumner
b3cfaab07f Fix: after pausing stdin, a subprocess should be able to read from stdin (#23341)
Fixes #23333, Fixes #13978

### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: pfg <pfg@pfg.pw>
Co-authored-by: Zack Radisic <zack@theradisic.com>
2025-10-09 19:04:41 -07:00
pfg
46cf50dee5 Fix 23382 (unicode object key printed as 'key" in snapshot instead of "key") (#23390)
Fixes #23382

Breaking change because any existing snapshots that have unicode keys
will need to be regenerated
2025-10-08 21:25:24 -07:00
Dylan Conway
0601eb0007 Make --linker=isolated the default for bun install (#23311)
### What does this PR do?
Makes isolated installs the default install strategy for projects with
workspaces in Bun v1.3.

Also fixes creating patches with `bun patch` and `--linker isolated`

Fixes #22693

### How did you verify your code works?
Added tests for node_modules renaming and `bun patch` with isolated installs.

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-08 18:00:38 -07:00
robobun
d3ff6a5e35 Fix process.stdin not receiving data after pause/resume (#23362)
## Summary

Fixed a race condition where calling `pause()` followed by `resume()` on
`process.stdin` would prevent data from being received, causing the
process to exit immediately instead of listening for input.

## Root Cause

The issue was in the pause/resume event handling logic in
`ProcessObjectInternals.ts`:

1. When `pause()` is called, the "pause" event handler schedules a
`disown()` call for the next tick
2. When `resume()` is called immediately after, it calls `own()` to
acquire a stream reader
3. On the next tick, the scheduled `disown()` from step 1 executes and
incorrectly releases the reader that was just acquired in step 2

This race condition left the stream without a reader, so no data could
be received.

## Solution

Added a `pendingDisown` flag that:
- Gets set to `true` when scheduling a disown operation
- Gets cleared to `false` when `own()` is called (during resume)
- Prevents the scheduled disown from executing if it has been cancelled
by a subsequent `own()` call

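A simplified sketch of that guard (the function names and `nextTick` scheduling are illustrative, not the exact internals of `ProcessObjectInternals.ts`):

```ts
let pendingDisown = false;
let reader: ReadableStreamDefaultReader | undefined;

function own(stream: ReadableStream) {
  // Cancel any disown scheduled by an earlier pause(); keep the reader acquired here.
  pendingDisown = false;
  reader ??= stream.getReader();
}

function scheduleDisown() {
  pendingDisown = true;
  process.nextTick(() => {
    // If resume() called own() in the meantime, the scheduled disown is cancelled.
    if (!pendingDisown) return;
    pendingDisown = false;
    reader?.releaseLock();
    reader = undefined;
  });
}
```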
## Test Plan

- [x] Added regression tests in
`test/regression/issue/stdin-pause-resume.test.ts`
- [x] Verified fix with original reproduction case
- [x] Existing stdin/tty tests still pass
(`tty-readstream-ref-unref.test.ts`,
`tty-reopen-after-stdin-eof.test.ts`)

## Reproduction

Before this fix, the following code would exit immediately:
```ts
process.stdin.on("data", chunk => {
  process.stdout.write(chunk);
});
process.stdin.pause();
process.stdin.resume();
```

After the fix, it correctly waits for and processes input.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-07 22:04:40 -07:00
robobun
5f1ca176cd fix(windows): prevent data loss in pipe reads after libuv 1.51.0 upgrade (#23340)
### What does this PR do?

Fixes data loss when reading large amounts of data from subprocess pipes
on Windows, a regression introduced by the libuv 1.51.0 upgrade in
commit e3783c244f.

### The Problem

When piping large data through a subprocess on Windows (e.g.,
`process.stdin.pipe(process.stdout)`), Bun randomly loses ~73KB of data
out of 1MB, receiving only ~974KB instead of the full 1048576 bytes.

The subprocess correctly receives all 1MB on stdin, but the parent
process loses data when reading from the subprocess stdout.

### Root Cause Analysis

#### libuv 1.51.0 Change

The libuv 1.51.0 upgrade (commit
[libuv/libuv@727ee723](727ee7237e))
changed Windows pipe reading behavior:

**Before:** libuv would call `PeekNamedPipe` to check available bytes,
then read exactly that amount.

**After:** libuv attempts immediate non-blocking reads (up to 65536
bytes) before falling back to async reads. If less data is available
than requested, it returns what's available and signals `more=0`,
causing the read loop to break.

This optimization introduces **0-byte reads** when data isn't
immediately available, which are delivered to Bun's read callback.

#### The Race Condition

When Bun's `WindowsBufferedReader` called `onRead(.drained)` for these
0-byte reads, it created a race condition. Debug logs clearly show the
issue:

**Error case (log.txt):**
```
Line 79-80: onStreamRead = 0 (drained)
Line 81:    filesink closes (stdin closes)
Line 85:    onStreamRead = 6024        ← Should be 74468!
Line 89:    onStreamRead = -4095 (EOF)
```

**Success case (success.log.txt):**
```
Line 79-80: onStreamRead = 0 (drained)
Line 81:    filesink closes (stdin closes)
Line 85:    onStreamRead = 74468       ← Full chunk!
Line 89-90: onStreamRead = 0 (drained)
Line 91:    onStreamRead = 6024
Line 95:    onStreamRead = -4095 (EOF)
```

When stdin closes while a 0-byte drained read is pending, the next read
returns truncated data (6024 bytes instead of 74468 bytes).

### The Fix

Two changes to `WindowsBufferedReader` in `src/io/PipeReader.zig`:

#### 1. Ignore 0-byte reads (line 937-940)

Don't call `onRead(.drained)` for 0-byte reads. Just return and let
libuv queue the next read. This prevents the race condition that causes
truncated reads.

```zig
0 => {
    // With libuv 1.51.0+, calling onRead(.drained) here causes a race condition
    // where subsequent reads return truncated data. Just ignore 0-byte reads.
    return;
},
```

#### 2. Defer `has_inflight_read` flag clearing (line 827-839)

Clear the flag **after** the read callback completes, not before. This
prevents libuv from starting a new overlapped read operation while we're
still processing the current data buffer, which could cause memory
corruption per the libuv commit message:

> "Starting a new read after uv_read_cb returns causes memory corruption
on the OVERLAPPED read_req if uv_read_stop+uv_read_start was called
during the callback"

```zig
const result = onReadChunkFn(this.parent, buf, hasMore);
// Clear has_inflight_read after the callback completes
this.flags.has_inflight_read = false;
return result;
```

### How to Test

Run the modified test in
`test/js/bun/spawn/spawn-stdin-readable-stream.test.ts`:

```js
test("ReadableStream with very large chunked data", async () => {
  const chunkSize = 64 * 1024; // 64KB chunks
  const numChunks = 16; // 1MB total
  const chunk = Buffer.alloc(chunkSize, "x");

  const stream = new ReadableStream({
    pull(controller) {
      if (pushedChunks < numChunks) {
        controller.enqueue(chunk);
        pushedChunks++;
      } else {
        controller.close();
      }
    },
  });

  await using proc = spawn({
    cmd: [bunExe(), "-e", `
      let length = 0;
      process.stdin.on('data', (data) => length += data.length);
      process.once('beforeExit', () => console.error(length));
      process.stdin.pipe(process.stdout)
    `],
    stdin: stream,
    stdout: "pipe",
    env: bunEnv,
  });

  const text = await proc.stdout.text();
  expect(text.length).toBe(chunkSize * numChunks); // Should be 1048576
});
```

**Before fix:** Randomly fails with ~974KB instead of 1MB  
**After fix:** Consistently passes with full 1MB

Run ~100 times to verify the race condition is fixed.

### Related Issues

This may also fix #23071 (Windows scripts hanging), though that issue
needs separate verification.

### Why Draft?

Marking as draft for Windows testing by the team. The fix is based on
detailed debug log analysis showing the exact race condition, but needs
verification on Windows CI.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-10-07 18:33:34 -07:00
robobun
92ec83a92a fix(windows): handle UV_ENOTCONN gracefully when spawning with long cwd (#23344) 2025-10-07 15:32:31 -07:00
pfg
5e8feca98b Enable breaking_changes_1_3 (#23308)
Breaking changes:

- bun:test: disallow creating snapshots or using `.only()` in CI
  - for users: hopefully this should only reveal existing bugs in tests, not cause failures.
- general: enable calling unhandled rejection handlers for `ErrorBuilder.reject()`
  - for users: this might reveal some unhandled rejections that were not visible before.
2025-10-07 12:07:29 -07:00
robobun
d273f7fdde fix: zstd decompression with async-compressed data (#23314) (#23317)
### What does this PR do?

Fixes #23314 where `zlib.zstdCompress()` created data that caused an
out-of-memory error when decompressed with `Bun.zstdDecompressSync()`.

#### 1. `zlib.zstdCompress()` now sets `pledgedSrcSize`

The async convenience method now automatically sets the `pledgedSrcSize`
option to the input buffer size. This ensures the compressed frame
includes the content size in the header, making sync and async
compression produce identical output.

**Node.js compatibility**: `pledgedSrcSize` is a documented Node.js
option:
-
[`vendor/node/doc/api/zlib.md:754-758`](https://github.com/oven-sh/bun/blob/main/vendor/node/doc/api/zlib.md#L754-L758)
-
[`vendor/node/lib/zlib.js:893`](https://github.com/oven-sh/bun/blob/main/vendor/node/lib/zlib.js#L893)
-
[`vendor/node/src/node_zlib.cc:890-904`](https://github.com/oven-sh/bun/blob/main/vendor/node/src/node_zlib.cc#L890-L904)

#### 2. Added `bun.zstd.decompressAlloc()` - centralized safe
decompression

Created a new function in `src/deps/zstd.zig` that handles decompression
in one place with automatic safety features:

- **Handles unknown content sizes**: Automatically switches to streaming
decompression when the zstd frame doesn't include content size (e.g.,
from streams without `pledgedSrcSize`)
- **16MB safety limit**: For security, if the reported decompressed size
exceeds 16MB, streaming decompression is used instead of blindly
trusting the header
- **Fast path for small files**: Still uses efficient pre-allocation for
files < 16MB with known sizes

This centralized fix automatically protects:
- `Bun.zstdDecompressSync()` / `Bun.zstdDecompress()`
- `StandaloneModuleGraph` source map decompression
- Any other code using `bun.zstd` decompression

### How did you verify your code works?

**Before:**
```typescript
const input = "hello world";

// Async compression
const compressed = await new Promise((resolve, reject) => {
  zlib.zstdCompress(input, (err, result) => {
    if (err) reject(err);
    else resolve(result);
  });
});

// This would fail with "Out of memory"
const decompressed = Bun.zstdDecompressSync(compressed);
```
**Error**: `RangeError: Out of memory` (tried to allocate UINT64_MAX
bytes)

**After:**
```typescript
const input = "hello world";

// Async compression (now includes content size)
const compressed = await new Promise((resolve, reject) => {
  zlib.zstdCompress(input, (err, result) => {
    if (err) reject(err);
    else resolve(result);
  });
});

//  Works! Falls back to streaming decompression if needed
const decompressed = Bun.zstdDecompressSync(compressed);
console.log(decompressed.toString()); // "hello world"
```

**Tests:**
-  All existing tests pass
-  New regression tests for async/sync compression compatibility
(`test/regression/issue/23314/zstd-async-compress.test.ts`)
-  Test for large (>16MB) decompression using streaming
(`test/regression/issue/23314/zstd-large-decompression.test.ts`)
-  Test for various input sizes and types
(`test/regression/issue/23314/zstd-large-input.test.ts`)

**Security:**
The 16MB safety limit protects against malicious zstd frames that claim
huge decompressed sizes in the header, preventing potential OOM attacks.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-06 19:56:40 -07:00
robobun
5fca74a979 fix(sourcemap): escape tab characters in filenames for JSON (#23298)
## What does this PR do?

Fixes #22003 by escaping tab characters in filenames when generating
sourcemap JSON.

When a filename contained a tab character (e.g., `file\ttab.js`), the
sourcemap JSON would contain a **literal tab byte** instead of the
escaped `\t`, producing invalid JSON that caused `error:
InvalidSourceMap`.

The root cause was in `src/bun.js/bindings/highway_strings.cpp` where
the scalar fallback path had:
```cpp
if (char_ >= 127 || (char_ < 0x20 && char_ != 0x09) || ...)
```

This **exempted tab characters** (0x09) from being detected as needing
escape, while the SIMD path correctly detected them. The fix removes the
`&& char_ != 0x09` exemption so both paths consistently escape tabs.

## How did you verify your code works?

Added regression test in `test/regression/issue/22003.test.ts` that:
- Creates a file with a tab character in its filename
- Builds it with sourcemap generation
- Verifies the sourcemap is valid JSON
- Checks that the tab is escaped as `\t` (not a literal byte)

The test **fails on system bun** (produces invalid JSON with literal
tab):
```bash
USE_SYSTEM_BUN=1 bun test test/regression/issue/22003.test.ts
# error: JSON Parse error: Unterminated string
```

The test **passes with the fix** (tab properly escaped):
```bash
bun bd test test/regression/issue/22003.test.ts
# ✓ 1 pass
```

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-06 19:28:48 -07:00
robobun
fc9db832dc Fix bindings package compatibility by preventing empty stack trace filenames (#22106) 2025-10-06 16:22:24 -07:00
Dylan Conway
166c8ff4f0 fix loaders used by Module._extensions (#23291)
### What does this PR do?
Three things:
- JSCommonJSExtensions.cpp `onAssign` was returning out of sync numbers
instead of `BunLoaderTypeJS`/`BunLoaderTypeNAPI`/...
- `bun.schema.api.Loader._none` was 255 instead of 254 like
`BunLoaderTypeNone`
- `Bun__transpileFile` used `bun.options.Loader.Optional` instead of
`bun.schema.api.Loader`. `bun.options.Loader` does not have a type kept
in sync in C++.
### How did you verify your code works?
Added tests that make sure the correct loader is used for modules
required with custom _extensions functions
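
For context, a rough sketch of the kind of hook this affects (the `.custom` extension and the required file are hypothetical):

```ts
import Module from "node:module";

// CommonJS extension hooks are a legacy Node.js API; the fix ensures the loader
// passed through Bun's native code stays in sync for modules required this way.
Module._extensions[".custom"] = function (mod, filename) {
  // Delegate to the built-in .js loader for the actual compilation.
  Module._extensions[".js"](mod, filename);
};

require("./example.custom"); // hypothetical file resolved through the hook above
```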
2025-10-06 06:40:15 -07:00
Dylan Conway
b81018707d fix(parser): unused arrays with no side effects (#23288)
### What does this PR do?
Fixes a bug since Bun v1.0.15: `var f = ([1, 2], "hi");`
Fixes a regression since Bun v1.2.22: `var f = (new Array([1, 2]),
"hi");`

Fixes #23287
### How did you verify your code works?
Added a test
2025-10-06 04:44:05 -07:00
Dylan Conway
1c363f0ad0 fix(parser): typeof minification regression (#23280)
### What does this PR do?
In Bun v1.2.22 a minification for `typeof x === "undefined"` → `typeof x > "u"` was added. This introduced a regression causing `return (typeof x !== "undefined", false)` to minify to invalid syntax when `--minify-syntax` is enabled (this is also enabled for transpilation at runtime).

This PR fixes the regression, making sure `return (typeof x !== "undefined", false);` minifies correctly to `return !1;`.
fixes #21137
### How did you verify your code works?
Added a regression test.
2025-10-06 00:39:08 -07:00
Dylan Conway
d292dcad26 fix(parser): typescript module parsing bug (#23284)
### What does this PR do?
A bug in our TypeScript parser caused `module.foo = foo` to parse as a TypeScript namespace. If the assignment didn't end with a semicolon and a statement followed on the next line, it produced a syntax error. Example:

```ts
module.foo = foo
foo.foo = foo
```

fixes #22929 
fixes #22883

### How did you verify your code works?
Added a regression test
2025-10-06 00:37:29 -07:00
robobun
f0295ce0a5 Fix bunfig.toml parsing with UTF-8 BOM (#23276)
Fixes #23275

### What does this PR do?

This PR fixes a bug where `bunfig.toml` files starting with a UTF-8 BOM
(byte order mark, `U+FEFF` or bytes `0xEF 0xBB 0xBF`) would fail to
parse with an "Unexpected" error.

The fix uses Bun's existing `File.toSource()` function with
`convert_bom: true` option when loading config files. This properly
detects and strips the BOM before parsing, matching the behavior of
other file readers in Bun (like the JavaScript lexer which treats
`0xFEFF` as whitespace).

**Changes:**
- Modified `src/cli/Arguments.zig` to use `bun.sys.File.toSource()` with
BOM conversion instead of manually reading the file
- Simplified the config loading code by removing intermediate file
handle and buffer logic

### How did you verify your code works?

Added comprehensive regression tests in
`test/regression/issue/23275.test.ts` that verify:
1.  `bunfig.toml` with UTF-8 BOM parses correctly without errors
2.  `bunfig.toml` without BOM still works (regression test)
3.  `bunfig.toml` with BOM and actual config content parses the content
correctly

All three tests pass with the debug build:
```
 3 pass
 0 fail
 11 expect() calls
Ran 3 tests across 1 file. [6.41s]
```

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-10-05 17:22:37 -07:00
robobun
a9b383bac5 fix(crypto): hkdf callback should pass null (not undefined) on success (#23216)
## Summary
- Fixed crypto.hkdf callback to pass `null` instead of `undefined` for
the error parameter on success
- Added regression test to verify the fix

## Details

Fixes #23211

Node.js convention requires crypto callbacks to receive `null` as the
error parameter on success, but Bun was passing `undefined`. This caused
compatibility issues with code that relies on strict null checks (e.g.,
[matter.js](fdbec2cf88/packages/general/src/crypto/NodeJsStyleCrypto.ts (L169))).

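A small sketch of the Node-style contract being fixed (the parameters are illustrative):

```ts
import { hkdf } from "node:crypto";

hkdf("sha256", "input-key-material", "salt", "info", 32, (err, derivedKey) => {
  // After this fix, err is strictly null on success (not undefined), so code
  // using `err !== null` checks behaves the same as in Node.js.
  if (err !== null) throw err;
  console.log(Buffer.from(derivedKey).toString("hex"));
});
```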
### Changes
- Updated `CryptoHkdf.cpp` to pass `jsNull()` instead of `jsUndefined()`
for the error parameter in the success callback
- Added regression test in `test/regression/issue/23211.test.ts`

## Test plan
- [x] Added regression test that verifies callback receives `null` on
success
- [x] Test passes with the fix
- [x] Ran existing crypto tests (no failures)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2025-10-03 15:55:57 -07:00
robobun
c8cb7713fc Fix Windows crash in process.title when console title is empty (#23184)
## Summary

Fixes a segmentation fault on Windows 11 when accessing `process.title`
in certain scenarios (e.g., when fetching system information or making
Discord webhook requests).

## Root Cause

The crash occurred in libuv's `uv_get_process_title()` at `util.c:413`
in the `strlen()` call. The issue is that `uv__get_process_title()`
could return success (0) but leave `process_title` as NULL in edge cases
where:

1. `GetConsoleTitleW()` returns an empty string
2. `uv__convert_utf16_to_utf8()` succeeds but doesn't allocate memory
for the empty string
3. The subsequent `assert(process_title)` doesn't catch this in release
builds
4. `strlen(process_title)` crashes with a null pointer dereference

## Changes

Added defensive checks in `BunProcess.cpp`:
1. Initialize the title buffer to an empty string before calling
`uv_get_process_title()`
2. Check if the buffer is empty after the call returns
3. Fall back to "bun" if the title is empty or the call fails

## Testing

Added regression test in `test/regression/issue/23183.test.ts` that
verifies:
- `process.title` doesn't crash when accessed
- Returns a valid string (either the console title or "bun")

Fixes #23183

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-03 02:54:23 -07:00
pfg
613aea1787 run beforeAll before the first file and afterAll after the last file (#23113)
Fixes #23066

Reverts the breaking change to this order made in #22534
2025-09-30 21:47:31 -07:00
pfg
1fb9be3880 Re-enable afterAll inside a test (#23110)
Fixes #23064, Fixes #23077

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-30 19:41:35 -07:00
robobun
5a12175cb0 Fix V8StackTraceIterator to handle frames without parentheses (#23034)
## Summary
- Fixes V8StackTraceIterator terminating early when encountering stack
frames without parentheses (e.g., "at unknown")
- Ensures complete stack traces are shown by `Bun.inspect()` even after
`error.stack` property is accessed
- Addresses the root cause that PR #23022 was working around

## Problem
When the V8StackTraceIterator encountered a stack frame line without
parentheses (like `at unknown`), it would:
1. Set `offset = stack.length()` 
2. Return `false`, terminating the entire iteration
3. Only parse frames before the "unknown" frame

This caused `Bun.inspect()` to show incomplete stack traces after the
`error.stack` property was accessed, as documented in issue discussions
around PR #23022.

## Solution
Changed the parser to continue iterating through subsequent frames
instead of terminating. Frames without parentheses are now treated as
having a source URL but no function name or location info.

## Test plan
- [x] Added regression test
`test/regression/issue/23022-stack-trace-iterator.test.ts`
- [x] Verified existing stack trace tests still pass
- [x] Manually tested with Node.js stream errors that trigger this code
path

### Before fix:
```
error: Socket is closed
 code: "ERR_SOCKET_CLOSED"

      at node:net:1322:32
```

### After fix:
```
error: Socket is closed
 code: "ERR_SOCKET_CLOSED"

      at unknown:1:1
      at _write (node:net:1322:32)
      at writeOrBuffer (internal:streams/writable:381:18)
      at internal:streams/writable:334:16
      at testStackTrace (/workspace/bun/test.js:8:10)
      ...
```

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-27 00:55:54 -07:00
Dylan Conway
8102e80f88 fix(build): Promise.all() async module dependencies (#22704)
### What does this PR do?
Currently bundling and running projects with cyclic async module
dependencies will hang due to module promises never resolving. This PR
unblocks these projects by outputting `await Promise.all` with these
dependencies.

Before (will hang with bun, or error with unsettled top level await with
node):
```js
var __esm = (fn, res) => () => (fn && (res = fn((fn = 0))), res);

var init_mod3 = __esm(async () => {
  await init_mod1();
});

var init_mod2 = __esm(async () => {
  await init_mod1();
});

var init_mod1 = __esm(async () => {
  await init_mod2();
  await init_mod3();
});

await init_mod1();
```

After:
```js
var __esm = (fn, res) => () => (fn && (res = fn((fn = 0))), res);
var __promiseAll = Promise.all.bind(Promise);

var init_mod3 = __esm(async () => {
  await init_mod1();
});

var init_mod2 = __esm(async () => {
  await init_mod1();
});

var init_mod1 = __esm(async () => {
  await __promiseAll([init_mod2(), init_mod3()]);
});

await init_mod1();
```

### How did you verify your code works?
Manually and tests

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-26 22:21:00 -07:00
robobun
a329da97f4 Fix server stability issue with oversized requests (#22701)
## Summary
Improves server stability when handling certain request edge cases.

## Test plan
- Added regression test in `test/regression/issue/22353.test.ts`
- Test verifies server continues operating normally after handling edge
case requests
- All existing HTTP server tests pass

Fixes #22353

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-26 04:59:07 -07:00
Filip Stevanovic
f45900d7e6 fix(fetch): print request body for application/x-www-form-urlencoded in curl logs (#22849)
### What does this PR do?

Fixes an issue where fetch requests with `Content-Type: application/x-www-form-urlencoded` would not include the request body in curl logs when `BUN_CONFIG_VERBOSE_FETCH=curl` is enabled.

Previously, only JSON and text-based content types were recognized as safe to print in the curl formatter. This change updates the allow-list to also handle `application/x-www-form-urlencoded`, ensuring bodies for common form submissions are shown in the logs.

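A minimal repro sketch (the URL is a placeholder); run with `BUN_CONFIG_VERBOSE_FETCH=curl` set and the logged curl command now includes a `--data-raw` section for this body:

```ts
const body = new URLSearchParams({ user: "alice", action: "login" }).toString();

await fetch("https://example.com/submit", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body,
});
```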
### How did you verify your code works?

- added `Content-Type: application/x-www-form-urlencoded` to a fetch
request and confirmed that `BUN_CONFIG_VERBOSE_FETCH=curl` now outputs a
`--data-raw` section with the encoded body
- verified the fix against the reproduction script provided in issue
#12042
 - created and ran a regression test
- checked that existing content types (JSON, text, etc.) continue to
print correctly

fixes #12042
2025-09-26 03:54:41 -07:00
robobun
5457d76bcb Fix double-free in createArgv function (#22978)
## Summary
- Fixed a double-free bug in the `createArgv` function in
`node_process.zig`

## Details
The `createArgv` function had two `defer allocator.free(args)`
statements:
- One on line 164 
- Another on line 192 (now removed)

This would cause the same memory to be freed twice when the function
returned, leading to undefined behavior.

Fixes #22975

## Test plan
The existing process.argv tests should continue to pass with this fix.

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-25 23:52:56 -07:00
robobun
fee28ca66f Fix dns.resolve callback parameters to match Node.js behavior (#22814)
## Summary
- Fixed `dns.resolve()` callback to pass 2 parameters instead of 3,
matching Node.js
- Fixed `dns.promises.resolve()` to return array of strings for A/AAAA
records instead of objects
- Added comprehensive regression tests

## What was wrong?

The `dns.resolve()` callback was incorrectly passing 3 parameters
`(error, hostname, results)` instead of Node.js's 2 parameters `(error,
results)`. Additionally, `dns.promises.resolve()` was returning objects
with `{address, family}` instead of plain string arrays for A/AAAA
records.

## How this fixes it

1. Removed the extra `hostname` parameter from the callback in
`dns.resolve()` for A/AAAA records
2. Changed promise version to use `promisifyResolveX(false)` instead of
`promisifyLookup()` to return string arrays
3. Applied same fixes to the `Resolver` class methods

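A sketch of the corrected shapes (the hostname is a placeholder):

```ts
import dns from "node:dns";

// Callback form: two parameters (error, addresses), with no extra hostname argument.
dns.resolve("example.com", "A", (err, addresses) => {
  if (err) throw err;
  console.log(addresses); // an array of plain strings, e.g. ["93.184.216.34"]
});

// Promise form: A/AAAA records now resolve to string arrays, not {address, family} objects.
const records = await dns.promises.resolve("example.com", "A");
console.log(typeof records[0]); // "string"
```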
## Test plan
- Added regression test `test/regression/issue/22712.test.ts` with 6
test cases
- All tests pass with the fix
- Verified existing DNS tests still pass

Fixes #22712

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-24 18:29:15 -07:00
robobun
57b93f6ea3 Fix panic when macros return collections with 3+ arrays/objects (#22827)
## Summary

Fixes #22656, #11730, and #7116

Fixes a panic that occurred when macros returned collections containing
three or more arrays or objects.

## Problem

The issue was caused by hash table resizing during recursive processing.
When `this.run()` was called recursively to process nested
arrays/objects, it could add more entries to the `visited` map,
triggering a resize. This would invalidate the `_entry.value_ptr`
pointer obtained from `getOrPut`, leading to memory corruption and
crashes.

## Solution

The fix ensures we handle hash table resizing safely:

1. Use `getOrPut` to reserve an entry and store a placeholder
2. Process all children (which may trigger hash table resizing)
3. Create the final expression with all data
4. Use `put` to update the entry (safe even after resizing)

This approach is applied consistently to both arrays and objects.

## Verification

All three issues have been tested and verified as fixed:

###  #22656 - "Panic when returning collections with three or more
arrays or objects"
- **Before**: `panic(main thread): switch on corrupt value`
- **After**: Works correctly

###  #11730 - "Constructing deep objects in macros causes segfaults"
- **Before**: `Segmentation fault at address 0x8` with deep nested
structures
- **After**: Handles deep nesting without crashes

###  #7116 - "[macro] crash with large complex array"
- **Before**: Crashes with objects containing 50+ properties (hash table
stress)
- **After**: Processes large complex arrays successfully

## Test Plan

Added comprehensive regression tests that cover:
- Collections with 3+ arrays
- Collections with 3+ objects
- Deeply nested structures (5+ levels)
- Objects with many properties (50+) to stress hash table operations
- Mixed collections of arrays and objects

All tests pass with the fix applied.

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-24 18:25:39 -07:00
robobun
fcd628424a Fix YAML.parse to throw SyntaxError instead of BuildMessage (#22924)
YAML.parse now throws SyntaxError for invalid syntax matching JSON.parse
behavior

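A minimal illustration, assuming the `Bun.YAML.parse` entry point:

```ts
try {
  Bun.YAML.parse("{ a: [1, 2"); // unterminated flow collection
} catch (e) {
  console.log(e instanceof SyntaxError); // true; previously a BuildMessage
}
```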
---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2025-09-24 16:29:05 -07:00
pfg
526686fdc9 Prevent test.only and snapshot updates in CI (#21811)
This is feature flagged and will not activate until Bun 1.3

- Makes `test.only()` throw an error in CI
- Unless `--update-snapshots` is passed:
  - Makes `expect.toMatchSnapshot()` throw an error instead of adding a new snapshot in CI
  - Makes `expect.toMatchInlineSnapshot()` throw an error instead of filling in the snapshot value in CI

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-24 15:19:16 -07:00
robobun
e555702653 Fix infinite recursion when error.stack is a circular reference (#22863)
## Summary

This PR fixes infinite recursion and stack overflow crashes when error
objects have circular references in their properties, particularly when
`error.stack = error`.

### The Problem
When an error object's stack property references itself or creates a
circular reference chain, Bun would enter infinite recursion and crash.
Common patterns that triggered this:
```javascript
const error = new Error();
error.stack = error;  // Crash!
console.log(error);

// Or circular cause chains:
error1.cause = error2;
error2.cause = error1;  // Crash!
```

### The Solution
Added proper circular reference detection at three levels:

1. **C++ bindings layer** (`bindings.cpp`): Skip processing if `stack`
property equals the error object itself
2. **VirtualMachine layer** (`VirtualMachine.zig`): Track visited errors
when printing error instances and their causes
3. **ConsoleObject layer** (`ConsoleObject.zig`): Properly coordinate
visited map between formatters

Circular references are now safely detected and printed as `[Circular]`
instead of causing crashes.

## Test plan

Added comprehensive tests in
`test/regression/issue/circular-error-stack.test.ts`:
-  `error.stack = error` circular reference
-  Nested circular references via error properties  
-  Circular cause chains (`error1.cause = error2; error2.cause =
error1`)

All tests pass:
```
bun test circular-error-stack.test.ts
✓ error with circular stack reference should not cause infinite recursion
✓ error with nested circular references should not cause infinite recursion  
✓ error with circular reference in cause chain
```

Manual testing:
```javascript
// Before: Stack overflow crash
// After: Prints error normally
const error = new Error("Test");
error.stack = error;
console.log(error);  // error: Test
```

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-22 22:22:58 -07:00
pfg
d2201eb1fe Rewrite test/describe, add test.concurrent (#22534)
# bun test

Fixes #8768, Fixes #14624, Fixes #20100, Fixes #19875, Fixes #14135,
Fixes #20980, Fixes #21830, Fixes #5738, Fixes #19758, Fixes #12782,
Fixes #5585, Fixes #9548, might fix #5996

# New features:

## Concurrent tests

Concurrent tests allow running multiple async tests at the same time.

```ts
// concurrent.test.ts
test.concurrent("this takes a while 1", async () => {
  await Bun.sleep(1000);
});
test.concurrent("this takes a while 2", async () => {
  await Bun.sleep(1000);
});
test.concurrent("this takes a while 3", async () => {
  await Bun.sleep(1000);
});
```

Without `.concurrent`, this test file takes 3 seconds to run because
each one has to wait for the one before it to finish before it can
start.

With `.concurrent`, this file takes 1 second because all three sleeps
can run at once.

```
$> bun-after test concurrent
concurrent.test.js:
✓ this takes a while 1 [1005.36ms]
✓ this takes a while 2 [1012.51ms]
✓ this takes a while 3 [1013.15ms]

 3 pass
 0 fail
Ran 3 tests across 1 file. [1081.00ms]
```

To run all tests as concurrent, pass the `--concurrent` flag when
running tests.

Limitations:

- concurrent tests cannot attribute `expect()` call counts to the test,
meaning `expect.assertions()` does not function
- concurrent tests cannot use `toMatchSnapshot`. `toMatchInlineSnapshot`
is still supported.
- `beforeAll`/`afterAll` will never be executed concurrently.
`beforeEach`/`afterEach` will.

## Chaining

Chaining multiple describe/test qualifiers is now allowed. Previously,
it would fail.

```ts
// chaining-test-qualifiers.test.ts
test.failing.each([1, 2, 3])("each %i", async i => {
  throw new Error(i);
});
```

```
$> bun-after test chaining-test-qualifiers
a.test.js:
✓ each 1
✓ each 2
✓ each 3
```

# Breaking changes:

## Describe ordering

Previously, describe callbacks were called immediately. Now, they are
deferred until the outer callback has finished running. The previous
order matched Jest. The new order is similar to Vitest, but does not
match exactly.

```ts
// describe-ordering.test.ts
describe("outer", () => {
  console.log("outer before");
  describe("inner", () => {
    console.log("inner");
  });
  console.log("outer after");
});
```

Before, this would print

```
$> bun-before test describe-ordering
outer before
inner
outer after
```

Now, this will print

```
$> bun-after test describe-ordering
outer before
outer after
inner
```

## Test ordering

Describes are no longer always called before tests. They are now in
order.

```ts
// test-ordering.test.ts
test("one", () => {});
describe("scope", () => {
  test("two", () => {});
});
test("three", () => {});
```

Before, this would print

```
$> bun-before test test-ordering
✓ scope > two
✓ one
✓ three
```

Now, this will print

```
$> bun-after test test-ordering
✓ one
✓ scope > two
✓ three
```

## Preload hooks

Previously, beforeAll in a preload ran before the first file and
afterAll ran after the last file. Now, beforeAll will run at the start
of each file and afterAll will run at the end of each file. This
behaviour matches Jest and Vitest.

```ts
// preload.ts
beforeAll(() => console.log("preload: beforeAll"));
afterAll(() => console.log("preload: afterAll"));
```

```ts
// preload-ordering-1.test.ts
test("demonstration file 1", () => {});
```

```ts
// preload-ordering-2.test.ts
test("demonstration file 2", () => {});
```

```
$> bun-before test --preload=./preload preload-ordering
preload-ordering-1.test.ts:
preload: beforeAll
✓ demonstration file 1

preload-ordering-2.test.ts:
✓ demonstration file 2
preload: afterAll
```

```
$> bun-after test --preload=./preload preload-ordering
preload-ordering-1.test.ts:
preload: beforeAll
✓ demonstration file 1
preload: afterAll

preload-ordering-2.test.ts:
preload: beforeAll
✓ demonstration file 2
preload: afterAll
```

## Describe failures

Current behaviour is that when an error is thrown inside a describe
callback, none of the tests declared there will run. Now, describes
declared inside will also not run. The new behaviour matches the
behaviour of Jest and Vitest.

```ts
// describe-failures.test.ts
describe("erroring describe", () => {
  test("this test does not run because its describe failed", () => {
    expect(true).toBe(true);
  });
  describe("inner describe", () => {
    console.log("does the inner describe callback get called?");
    test("does the inner test run?", () => {
      expect(true).toBe(true);
    });
  });
  throw new Error("uh oh!");
});
```

Before, the inner describe callback would be called and the inner test
would run, although the outer test would not:

```
$> bun-before test describe-failures
describe-failures.test.ts:
does the inner describe callback get called?

# Unhandled error between tests
-------------------------------
11 |   throw new Error("uh oh!");
             ^
error: uh oh!
-------------------------------

✓ erroring describe > inner describe > does the inner test run?

 1 pass
 0 fail
 1 error
 1 expect() calls
Ran 1 test across 1 file.
Exited with code [1]
```

Now, the inner describe callback is not called at all.

```
$> bun-after test describe-failures
describe-failures.test.ts:

# Unhandled error between tests
-------------------------------
11 |   throw new Error("uh oh!");
             ^
error: uh oh!
-------------------------------


 0 pass
 0 fail
 1 error
Ran 0 tests across 1 file.
Exited with code [1]
```

## Hook failures

Previously, a beforeAll failure would skip subsequent beforeAll()s, the
test, and the afterAll. Now, a beforeAll failure skips any subsequent
beforeAll()s and the test, but not the afterAll.

```js
beforeAll(() => {
  throw new Error("before all: uh oh!");
});
test("my test", () => {
  console.log("my test");
});
afterAll(() => console.log("after all"));
```

```
$> bun-before test hook-failures
Error: before all: uh oh!

$> bun-after test hook-failures
Error: before all: uh oh!
after all
```

Previously, an async beforeEach failure would still allow the test to
run. Now, an async beforeEach failure will prevent the test from running

```js
beforeEach(async () => {
  await 0;
  throw "uh oh!";
});
it("the test", async () => {
  console.log("does the test run?");
});
```

```
$> bun-before test async-beforeeach-failure
does the test run?
error: uh oh!
uh oh!
✗ the test

$> bun-after test async-beforeeach-failure
error: uh oh!
uh oh!
✗ the test
```

## Hook timeouts

Hooks will now time out, and can have their timeout configured in an
options parameter

```js
beforeAll(async () => {
  await Bun.sleep(1000);
}, 500);
test("my test", () => {
  console.log("ran my test");
});
```

```
$> bun-before test hook-timeouts
ran my test
Ran 1 test across 1 file. [1011.00ms]

$> bun-after test hook-timeouts
✗ my test [501.15ms]
  ^ a beforeEach/afterEach hook timed out for this test.
```

## Hook execution order

beforeAll will now execute before the tests in the scope, rather than
immediately when it is called.

```ts
describe("d1", () => {
  beforeAll(() => {
    console.log("<d1>");
  });
  test("test", () => {
    console.log("  test");
  });
  afterAll(() => {
    console.log("</d1>");
  });
});
describe("d2", () => {
  beforeAll(() => {
    console.log("<d2>");
  });
  test("test", () => {
    console.log("  test");
  });
  afterAll(() => {
    console.log("</d2>");
  });
});
```

```
$> bun-before test ./beforeall-ordering.test.ts
<d1>
<d2>
  test
</d1>
  test
</d2>

$> bun-after test ./beforeall-ordering.test.ts
<d1>
  test
</d1>
<d2>
  test
</d2>
```

## test inside test

test() inside test() now errors rather than silently failing. Support
for this may be added in the future.

```ts
test("outer", () => {
    console.log("outer");
    test("inner", () => {
        console.log("inner");
    });
});
```

```
$> bun-before test
outer
✓ outer [0.06ms]

 1 pass
 0 fail
Ran 1 test across 1 file. [8.00ms]

$> bun-after test
outer
1 | test("outer", () => {
2 |     console.log("outer");
3 |     test("inner", () => {
        ^
error: Cannot call test() inside a test. Call it inside describe() instead.
✗ outer [0.71ms]

 0 pass
 1 fail
```

## afterAll inside test

afterAll inside a test is no longer allowed

```ts
test("test 1", () => {
  afterAll(() => console.log("afterAll"));
  console.log("test 1");
});
test("test 2", () => {
  console.log("test 2");
});
```

```
$> bun-before
test 1
✓ test 1 [0.05ms]
test 2
✓ test 2
afterAll

$> bun-after
error: Cannot call afterAll() inside a test. Call it inside describe() instead.
✗ test 1 [1.00ms]
test 2
✓ test 2 [0.20ms]
```

## Only inside only

Previously, an outer 'describe.only' would run all tests inside it even
if there was an inner 'test.only'. Now, only the innermost only tests
are executed.

```ts
describe.only("outer", () => {
    test("one", () => console.log("should not run"));
    test.only("two", () => console.log("should run"));
});
```

```
$> bun-before test
should not run
should run

$> bun-after test
should run
```

With no inner only, the outer only will still run all tests:

```ts
describe.only("outer", () => {
    test("test 1", () => console.log("test 1 runs"));
    test("test 2", () => console.log("test 2 runs"));
});
```

# Potential follow-up work

- [ ] for concurrent tests, display headers before console.log messages
saying which test it is for
  - this will need async context or similar
- refActiveExecutionEntry should also be able to know the current test
even in test.concurrent
- [ ] `test("rerun me", () => { console.log("run one time!"); });`
`--rerun-each=3` <- this runs the first and third time but not the
second time. fix.
- [ ] should cache the JSValue created from DoneCallback.callAsFunction
- [ ] implement retry and rerun params for tests.
- [ ] Remove finalizer on ScopeFunctions.zig by storing the data in 3
jsvalues passed in bind rather than using a custom class. We should also
migrate off of the ClassGenerator for ScopeFunctions
- [ ] support concurrent limit, how many concurrent tests are allowed to
run at a time. ie `--concurrent-limit=25`
- [ ] flag to run tests in random order
- [ ] `test.failing` should have its own style, in the same way a passing
`test.todo` is marked as 'todo' instead of 'passing'. right now
it's `✓` which is confusing.
- [ ] remove all instances of bun.jsc.Jest.Jest.current
  - [ ] test options should be in BunTestRoot
- [ ] we will need one global still, stored in the globalobject/vm/?.
but it should not be a Jest instance.
- [ ] consider allowing test() inside test(), as well as afterEach and
afterAll. could even allow describe() too. to do this we would switch
from indices to pointers and they would be in a linked list. they would
be allocated in memorypools for perf/locality. some special
consideration is needed for making sure repeated tests lose their
temporary items. this could also improve memory usage somewhat.
- [ ] consider using a jsc Bound Function rather than CallbackWithArgs.
bound functions allow adding arguments and they are only one value for
GC instead of many. and this removes our unnecessary three copies.
- [ ] eliminate Strong.Safe. we should be using a C++ class instead.
- [ ] consider modifying the junit reporter to print the whole describe
tree at the end instead of trying to output as test results come in. and
move it into its own file.
- [ ] expect_call_count/expect_assertions is confusing. rename to
`expect_calls`, `assert_expect_calls`. or something.
- [ ] Should make line_no be an enum with a none option and a function
to get if line numbers are enabled
- [ ] looks like we don't need to use file_id anymore (remove
`bun.jsc.Jest.Jest.runner.?.getOrPutFile(file_path).file_id;`, store the
file path directly)
- [ ] 'dot' test reporter like vitest?
- [ ] `test.failing.if(false)` errors because it can't replace mode
'failing' with mode 'skip'. this should probably be allowed instead.
- [ ] trigger timeout termination exception for `while(true) {}`
- [ ] clean up unused callbacks. as soon as we advance to the next
execution group, we can fully clean out the previous one. sometimes
within an execution sequence we can do the same.
  - clean by swapping held values with undefined
- [ ] structure cache for performance for donecallback/scopefunctions
- [ ] consider migrating CallbackWithArgs to be a bound function. the
length of the bound function can exclude the specified args.
- [ ] setting both result and maybe_skip is not ideal, maybe there
should be a function to do both at once?
- [ ] try using a linked list rather than arraylist for describe/test
children, see how it affects performance
- [ ] consider a memory pool for describescope/executionentry. test if
it improves performance.
- [ ] consider making RefDataValue methods return the reason for failure
rather than ?value. that way we can improve error messages. the reason
could be a string or it could be a defined error set
- [ ] instead of 'description orelse (unnamed)', let's have description
default to 'unnamed' and not free it if it === the global that defines
that
- [ ] Add a phase before ordering results that inherits properties to
the parents. (eg inherit only from the child and inherit has_callback
from the child. and has_callback can be on describe/test individually
rather than on base). then we won't have that happening in an init()
function (terrible!)
- [ ] this test was incidentally passing because resolves.pass() wasn't
waiting for promise
  ```
  test("fetching with Request object - issue #1527", async () => {
    const server = createServer((req, res) => {
      res.end();
    }).listen(0);
    try {
      await once(server, "listening");

      const body = JSON.stringify({ foo: "bar" });
      const request = new Request(`http://localhost:${server.address().port}`, {
        method: "POST",
        body,
      });

      expect(fetch(request)).resolves.pass();
    } finally {
      server.closeAllConnections();
    }
  });
  ```
- [ ] the error "expect.assertions() is not supported in the describe
phase, in concurrent tests, between tests, or after test execution has
completed" is not very good. we should be able to identify which of
those it is and print the right error for the context
- [ ] consider: instead of storing weak pointers to BunTest, we can
instead give the instance an id and check that it is correct when
getting the current bun test instance from the ref
- [ ] auto_killer: add three layers of auto_killer:
  - preload (includes file & test)
  - file (includes test)
  - test
- that way at the end of the test, we kill the test processes. at the
end of the file, we kill the file processes. at the end of all, we kill
anything remaining.

AsyncLocalStorage

- store active_id & refdatavalue. active_id is a replacement for the
above weak pointers thing. refdatavalue is for determining which test it
is. this probably fits in 2×u64
- use for auto_killer so timeouts can kill even in concurrent tests

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-20 00:35:42 -07:00
Jarred Sumner
9661af5049 Make the error better when you do console.log(\" (#22787)
### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-20 00:29:15 -07:00
Dylan Conway
d3d68f45fd fix(bundler): minify Array constructor with ternary regression (#22803)
### What does this PR do?
Fixes accessing the wrong union field.

Resolves BUN-WQF
### How did you verify your code works?
Added a regression test

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-19 17:32:12 -07:00
robobun
661deb8eaf Fix MessagePort communication after transfer to Worker (#22638)
## Summary

Fixes #22635 - MessagePort communication fails after being transferred
to a Worker thread.
Fixes https://github.com/oven-sh/bun/issues/22636

The issue was that `MessagePort::addEventListener()` only called
`start()` for attribute listeners (like `onmessage = ...`) but not for
regular event listeners added via `addEventListener()` or the Node.js
EventEmitter wrapper (`.on('message', ...)`).

## Changes
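A repro sketch of the now-working pattern using `node:worker_threads` (the worker file name is hypothetical):

```ts
import { MessageChannel, Worker } from "node:worker_threads";

const { port1, port2 } = new MessageChannel();
const worker = new Worker(new URL("./worker.js", import.meta.url));
worker.postMessage(port2, [port2]); // transfer one end of the channel into the worker
port1.postMessage("ping");

// worker.js (hypothetical):
//   import { parentPort } from "node:worker_threads";
//   parentPort.once("message", port => {
//     // Listening on the transferred port with .on()/addEventListener now delivers
//     // messages; previously only the onmessage setter would start delivery.
//     port.on("message", msg => console.log("worker got:", msg));
//   });
```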

- Modified `MessagePort::addEventListener()` to call `start()` for all
message event listeners, not just attribute listeners
- Added regression test for issue #22635

## Test Plan

- [x] Regression test added and passing
- [x] Original reproduction case from issue #22635 now works correctly
- [x] Existing MessagePort tests still pass

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-17 03:33:14 -07:00
robobun
dd9d1530da Fix crash when plugin onResolve returns undefined (#22670)
## Summary
Fixes #22199

When a plugin's `onResolve` handler returns `undefined` or `null`, Bun
should continue to the next plugin or use default resolution. However,
the code was crashing with a segmentation fault.

## The Bug
The crash occurred when:
1. A plugin's `onResolve` handler returned `undefined` (especially from
an async function as a fulfilled promise)
2. The code extracted the promise result but didn't check if it was
undefined before expecting it to be an object
3. This caused an improper exception to be thrown, leading to a crash

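A minimal repro sketch of the previously-crashing shape (the plugin name and filter are illustrative):

```ts
import { plugin } from "bun";

plugin({
  name: "noop-resolver",
  setup(build) {
    // An async onResolve that resolves to undefined should simply fall through to
    // the next plugin or to default resolution; previously this segfaulted.
    build.onResolve({ filter: /.*/ }, async () => {
      return undefined;
    });
  },
});

await import("./some-module.ts"); // hypothetical module, resolved normally
```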
## The Fix
1. **Main fix**: Added a check for `undefined/null` after extracting the
result from a fulfilled promise, allowing the code to continue to the
next plugin
2. **Promise rejection fix**: Changed rejected promise handling to
return the promise itself instead of throwing an exception (which was
causing hangs)
3. **Exception handling**: Standardized exception throwing throughout
the file to use the proper `throwException` pattern

## Test Plan
Added comprehensive regression tests in
`test/regression/issue/22199.test.ts` that verify:
-  Async function returning `undefined` doesn't crash
-  Async function returning `null` doesn't crash  
-  Sync function returning `undefined` doesn't crash
-  Async function throwing an error properly shows the error

All tests:
- **Fail (crash) with release Bun**: Segmentation fault
- **Pass with this fix**: All test cases pass

## Verification
```bash
# Crashes without the fix
bun test test/regression/issue/22199.test.ts  

# Passes with the fix
bun bd test test/regression/issue/22199.test.ts
```

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-15 23:37:10 -07:00
robobun
0351bd5f28 Fix zstd decompression truncation for multi-frame responses (#22680)
## Summary

Fixes #20053

When a server sends zstd-compressed data with chunked transfer encoding,
each chunk may be compressed as a separate zstd frame. Previously, Bun's
zstd decompressor would stop after the first frame, causing responses to
be truncated at 16KB.

## The Fix

The fix modifies the zstd decompressor (`src/deps/zstd.zig`) to continue
decompression when a frame completes but input data remains. When
`ZSTD_decompressStream` returns 0 (frame complete), we now check if
there's more input data and reinitialize the decompressor to handle the
next frame.
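
A rough sketch of the multi-frame scenario described above, written against
`Bun.serve` and `fetch`; using `Bun.zstdCompressSync` to produce each frame
is an assumption here (any zstd encoder would do):

```ts
// Each enqueued chunk is a complete, independently compressed zstd frame.
const frame1 = Bun.zstdCompressSync(Buffer.alloc(32 * 1024, "a")); // assumption: Bun.zstdCompressSync
const frame2 = Bun.zstdCompressSync(Buffer.alloc(32 * 1024, "b"));

const server = Bun.serve({
  port: 0,
  fetch() {
    const body = new ReadableStream({
      start(controller) {
        controller.enqueue(frame1);
        controller.enqueue(frame2);
        controller.close();
      },
    });
    // Streaming the body produces a chunked, zstd-encoded response.
    return new Response(body, { headers: { "Content-Encoding": "zstd" } });
  },
});

// Before the fix, decoding stopped after the first frame; now both frames
// are decompressed and concatenated.
const text = await fetch(`http://localhost:${server.port}/`).then((r) => r.text());
console.log(text.length); // expected: 65536
server.stop();
```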

## Testing

Added regression tests in `test/regression/issue/20053.test.ts` that:
1. Test multi-frame zstd decompression where two frames need to be
concatenated
2. Simulate the exact Hono + compression middleware scenario from the
original issue

Both tests fail without the fix (truncating at 16KB) and pass with the
fix.

## Verification

```bash
# Without fix (regular bun):
$ bun test test/regression/issue/20053.test.ts
 0 pass
 2 fail

# With fix (debug build):
$ bun bd test test/regression/issue/20053.test.ts  
 2 pass
 0 fail
```

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-15 14:24:03 -07:00
Jarred Sumner
6bafe2602e Fix Windows shell crash with && operator and external commands (#22651)
## What does this PR do?

Fixes https://github.com/oven-sh/bun/issues/22650
Fixes https://github.com/oven-sh/bun/issues/22615
Fixes https://github.com/oven-sh/bun/issues/22603
Fixes https://github.com/oven-sh/bun/issues/22602

Fixes a crash that occurred when running shell commands through `bun
run` (package.json scripts) on Windows that use the `&&` operator
followed by an external command.

### The Problem

The minimal reproduction was:
```bash
bun exec 'echo && node --version'
```

This would crash with: `panic(main thread): attempt to use null value`

### Root Causes

Two issues were causing the crash:

1. **Missing top_level_dir**: When `runPackageScriptForeground` creates
a MiniEventLoop for running package scripts, it wasn't setting the
`top_level_dir` field. This caused a null pointer dereference when the
shell tried to access it.

2. **MovableIfWindowsFd handling**: After PR #21800 introduced
`MovableIfWindowsFd` to handle file descriptor ownership on Windows, the
`IOWriter.fd` could be moved to libuv, leaving it null. When the shell
tried to spawn an external command after a `&&` operator, it would crash
trying to access this null fd.

### The Fix

1. Set `mini.top_level_dir = cwd` after initializing the MiniEventLoop
in `run_command.zig`
2. In `IO.zig`, when the fd has been moved to libuv (is null), use
`.inherit` for stdio instead of trying to pass the null fd

### How did you verify your code works?

- Added a regression test that reproduces the issue
- Verified the test fails without the fix and passes with it
- Tested the minimal reproduction command directly
- The fix correctly allows both commands in the `&&` chain to execute

```bash
# Before fix: crashes
> bun exec 'echo test && node --version'
panic(main thread): attempt to use null value

# After fix: works correctly
> bun exec 'echo test && node --version'
test
v22.4.1
```
<sub>
also probably fixes #22615 and fixes #22603 and fixes #22602
</sub>

---------

Co-authored-by: Zack Radisic <zack@theradisic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-14 04:14:48 -07:00
robobun
9b97dd11e2 Fix TTY reopening after stdin EOF (#22591)
## Summary
- Fixes ENXIO error when reopening `/dev/tty` after stdin reaches EOF
- Fixes ESPIPE error when reading from reopened TTY streams  
- Adds ref/unref methods to tty.ReadStream for socket-like behavior
- Enables TUI applications that read piped input then switch to
interactive TTY mode

## The Problem
TUI applications and interactive CLI tools have a pattern where they:
1. Read piped input as initial data: `echo "data" | tui-app`
2. After stdin ends, reopen `/dev/tty` for interactive session
3. Use the TTY for interactive input/output

This didn't work in Bun due to missing functionality:
- **ESPIPE error**: TTY ReadStreams incorrectly had `pos=0`, causing the
`pread()` syscall to be used, which fails on character devices
- **Missing methods**: tty.ReadStream lacked ref/unref methods that TUI
apps expect for socket-like behavior
- **Hardcoded isTTY**: tty.ReadStream always set `isTTY = true` even for
non-TTY file descriptors

## The Solution
1. **Fix ReadStream position**: For fd-based streams (like TTY), don't
default `start` to 0. This keeps `pos` undefined, ensuring `read()`
syscall is used instead of `pread()`.

2. **Add ref/unref methods**: Implement ref/unref on tty.ReadStream
prototype to match Node.js socket-like behavior, allowing TUI apps to
control event loop behavior.

3. **Dynamic isTTY check**: Use `isatty(fd)` to properly detect if the
file descriptor is actually a TTY.
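
A sketch of the TUI pattern this enables (the raw-mode handling and key
codes are illustrative):

```ts
import { openSync } from "node:fs";
import * as tty from "node:tty";

// 1. Drain piped stdin first: `echo "data" | bun run tui.ts`
const piped: Buffer[] = [];
for await (const chunk of process.stdin) piped.push(chunk as Buffer);
console.log("initial data:", Buffer.concat(piped).toString().trim());

// 2. After stdin EOF, reopen the controlling terminal for interactive input.
//    Previously this path hit ENXIO on open and ESPIPE on read (pread with pos=0).
const fd = openSync("/dev/tty", "r");
const input = new tty.ReadStream(fd);
input.setRawMode(true);
input.ref(); // ref/unref now exist, matching Node's socket-like behavior
input.on("data", (key: Buffer) => {
  if (key[0] === 3) process.exit(0); // Ctrl+C
  process.stdout.write(`key: ${JSON.stringify(key.toString())}\n`);
});
```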

## Test Results
```bash
$ bun test test/regression/issue/tty-reopen-after-stdin-eof.test.ts
✓ can reopen /dev/tty after stdin EOF for interactive session
✓ TTY ReadStream should not set position for character devices

$ bun test test/regression/issue/tty-readstream-ref-unref.test.ts
✓ tty.ReadStream should have ref/unref methods when opened on /dev/tty
✓ tty.ReadStream ref/unref should behave like Node.js

$ bun test test/regression/issue/tui-app-tty-pattern.test.ts
✓ TUI app pattern: read piped stdin then reopen /dev/tty
✓ tty.ReadStream handles non-TTY file descriptors correctly
```

## Compatibility
Tested against Node.js v24.3.0 - our behavior now matches:
-  Can reopen `/dev/tty` after stdin EOF
-  TTY ReadStream has `pos: undefined` and `start: undefined`
-  tty.ReadStream has ref/unref methods for socket-like behavior
-  `isTTY` is properly determined using `isatty(fd)`

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-13 01:00:57 -07:00
robobun
9907c2e9fa fix(patch): add bounds checking to prevent segfault during patch application (#21939)
## Summary

- Fixes segmentation fault when applying patches with out-of-bounds line
numbers
- Adds comprehensive bounds checking in patch application logic
- Includes regression tests to prevent future issues

## Problem

Previously, malformed patches with line numbers beyond file bounds could
cause segmentation faults by attempting to access memory beyond
allocated array bounds in `addManyAt()` and `replaceRange()` calls.

## Solution

Added bounds validation at four key points in `src/patch.zig`:

1. **Hunk start position validation** (line 283-286) - Ensures hunk
starts within file bounds
2. **Context line validation** (line 294-297) - Validates context lines
exist within bounds
3. **Insertion position validation** (line 302-305) - Checks insertion
position is valid
4. **Deletion range validation** (line 317-320) - Ensures deletion range
is within bounds

All bounds violations now return `EINVAL` error gracefully instead of
crashing.
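
A sketch of the kind of malformed patch that previously crashed
`bun install` (the package name and hunk offsets are made up):

```ts
// The hunk header claims changes start at line 999 of a file that is only a few lines long.
const malformedPatch = `--- a/index.js
+++ b/index.js
@@ -999,3 +999,4 @@
 context line
+injected line
 another context line
`;
await Bun.write("patches/left-pad@1.3.0.patch", malformedPatch);

// With package.json declaring:
//   "patchedDependencies": { "left-pad@1.3.0": "patches/left-pad@1.3.0.patch" }
// `bun install` now rejects the out-of-bounds hunk with EINVAL instead of segfaulting.
```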

## Test Coverage

Added comprehensive regression tests in
`test/regression/issue/patch-bounds-check.test.ts`:

-  Out-of-bounds insertion attempts
-  Out-of-bounds deletion attempts  
-  Out-of-bounds context line validation
-  Valid patch application (positive test case)

Tests verify that `bun install` completes gracefully when encountering
malformed patches, with no crashes or memory corruption.

## Test Results

```
bun test v1.2.21
 Bounds checking working: bun install completed gracefully despite malformed patch
 Bounds checking working: bun install completed gracefully despite deletion beyond bounds
 Bounds checking working: bun install completed gracefully despite context lines beyond bounds

 4 pass
 0 fail
 22 expect() calls
Ran 4 tests across 1 file. [4.70s]
```

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Zack Radisic <56137411+zackradisic@users.noreply.github.com>
2025-09-12 23:44:48 -07:00
robobun
09c56c8ba8 Fix PostgreSQL StringBuilder assertion failure with empty error messages (#22558)
## Summary
- Fixed a debug build assertion failure in PostgreSQL error handling
when all error message fields are empty
- Added safety check before calling `StringBuilder.allocatedSlice()` to
handle zero-length messages
- Added regression test to prevent future occurrences

## The Problem
When PostgreSQL sends an error response with completely empty message
fields, the `ErrorResponse.toJS` function would:
1. Calculate `b.cap` but end up with `b.len = 0` (no actual content)
2. Call `b.allocatedSlice()[0..b.len]` unconditionally
3. Trigger an assertion in `StringBuilder.allocatedSlice()` that
requires `cap > 0`

This only affected debug builds since the assertion is compiled out in
release builds.

## The Fix
Check if `b.len > 0` before calling `allocatedSlice()`. If there's no
content, use an empty string instead.

## Test Plan
- [x] Added regression test that triggers the exact crash scenario
- [x] Verified test crashes without the fix (debug build)
- [x] Verified test passes with the fix
- [x] Confirmed release builds were not affected

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-10 18:47:50 -07:00
Alistair Smith
e63608fced Fix: Make SQL connection string parsing more sensible (#22260)
This PR makes connection string parsing more sensible in Bun.SQL,
without breaking the default fallback of postgres

Added some tests checking for connection string precedence

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Ciro Spaciari <ciro.spaciari@gmail.com>
2025-09-08 20:59:24 -07:00
robobun
594b03c275 Fix: Socket.write() fails with Uint8Array (#22482)
## Summary
- Fixes #22481 where `Socket.write()` was throwing
"Stream._isArrayBufferView is not a function" when passed a Uint8Array
- The helper methods were being added to the wrong Stream export
- Now adds them directly to the Stream constructor in
`internal/streams/legacy.ts` where they're actually used
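
A minimal sketch of the failing call using `node:net` (the payload bytes
are arbitrary):

```ts
import net from "node:net";

const server = net.createServer((socket) => {
  socket.on("data", (d) => console.log("server got:", d.toString()));
  socket.end();
});

server.listen(0, () => {
  const { port } = server.address() as net.AddressInfo;
  const client = net.connect(port, "127.0.0.1", () => {
    // Previously this threw "Stream._isArrayBufferView is not a function".
    client.write(new Uint8Array([104, 105])); // "hi"
    client.end();
    server.close();
  });
});
```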

## Test plan
- Added regression test in `test/regression/issue/22481.test.ts`
- Test verifies that sockets can write Uint8Array, Buffer, and other
TypedArray views
- All tests pass with the fix

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-08 14:12:00 -07:00
robobun
63c4d8f68f Fix TypeScript syntax not working with 'ts' loader in BunPlugin (#22460)
## Summary

Fixes #12548 - TypeScript syntax doesn't work in BunPlugin when using
`loader: 'ts'`

## The Problem

When creating a virtual module with `build.module()` and specifying
`loader: 'ts'`, TypeScript syntax like `import { type TSchema }` would
fail to parse with errors like:

```
error: Expected "}" but found "TSchema"
error: Expected "from" but found "}"
```

The same code worked fine when using `loader: 'tsx'`, indicating the
TypeScript parser wasn't being configured correctly for `.ts` files.

## Root Cause

The bug was caused by an enum value mismatch between C++ and Zig:

### Before (Incorrect)
- **C++ (`headers-handwritten.h`)**: `jsx=0, js=1, ts=2, tsx=3, ...`
- **Zig API (`api/schema.zig`)**: `jsx=1, js=2, ts=3, tsx=4, ...`  
- **Zig Internal (`options.zig`)**: `jsx=0, js=1, ts=2, tsx=3, ...`

When a plugin returned `loader: 'ts'`, the C++ code correctly parsed the
string "ts" and set `BunLoaderTypeTS=2`. However, when this value was
passed to Zig's `Bun__transpileVirtualModule` function (which expects
`api.Loader`), the value `2` was interpreted as `api.Loader.js` instead
of `api.Loader.ts`, causing the TypeScript parser to not be enabled.

### Design Context

The codebase has two loader enum systems by design:
- **`api.Loader`**: External API interface used for C++/Zig
communication
- **`options.Loader`**: Internal representation used within Zig

The conversion between them happens via `options.Loader.fromAPI()` and
`.toAPI()` functions. The C++ layer should use `api.Loader` values since
that's what the interface functions expect.

## The Fix

1. **Aligned enum values**: Updated the `BunLoaderType` constants in
`headers-handwritten.h` to match the values in `api/schema.zig`,
ensuring C++ and Zig agree on the enum values
2. **Removed unnecessary assertion**: Removed the assertion that
`plugin_runner` must be non-null for virtual modules, as it's not
actually required for modules created via `build.module()`
3. **Added regression test**: Created comprehensive test in
`test/regression/issue/12548.test.ts` that verifies TypeScript syntax
works correctly with the `'ts'` loader

## Testing

### New Tests Pass
-  `test/regression/issue/12548.test.ts` - 2 tests verifying TypeScript
type imports work with `'ts'` loader

### Existing Tests Still Pass
-  `test/js/bun/plugin/plugins.test.ts` - 28 pass
-  `test/bundler/bundler_plugin.test.ts` - 52 pass  
-  `test/bundler/bundler_loader.test.ts` - 27 pass
-  `test/bundler/esbuild/loader.test.ts` - 10 pass
-  `test/bundler/bundler_plugin_chain.test.ts` - 13 pass

### Manual Verification
```javascript
// This now works correctly with loader: 'ts'
Bun.plugin({
  setup(build) {
    build.module('hi', () => ({
      contents: "import { type TSchema } from '@sinclair/typebox'",
      loader: 'ts',  //  Works now (previously failed)
    }))
  },
})
```

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-07 21:43:38 -07:00
robobun
5b842ade1d Fix cookie.isExpired() returning false for Unix epoch (#22478)
## Summary
Fixes #22475 

`cookie.isExpired()` was incorrectly returning `false` for cookies with
`Expires` set to Unix epoch (Thu, 01 Jan 1970 00:00:00 GMT).

## The Problem
The bug had two parts:

1. **In `Cookie::isExpired()`**: The condition `m_expires < 1`
incorrectly treated Unix epoch (0) as a session cookie instead of an
expired cookie.

2. **In `Cookie::parse()`**: When parsing date strings that evaluate to
0 (Unix epoch), the code used implicit boolean conversion which treated
0 as false, preventing the expires value from being set.

## The Fix
- Removed the `m_expires < 1` check from `isExpired()`, keeping only the
check for `emptyExpiresAtValue` to identify session cookies
- Fixed date parsing to use `std::isfinite()` instead of implicit
boolean conversion, properly handling Unix epoch (0)

## Test Plan
- Added regression test in `test/regression/issue/22475.test.ts`
covering Unix epoch and edge cases
- All existing cookie tests pass (`bun bd test test/js/bun/cookie/`)
- Manually tested the reported issue from #22475

```javascript
const cookies = [
  'a=; Expires=Thu, 01 Jan 1970 00:00:00 GMT',
  'b=; Expires=Thu, 01 Jan 1970 00:00:01 GMT'
];

for (const _cookie of cookies) {
  const cookie = new Bun.Cookie(_cookie);
  console.log(cookie.name, cookie.expires, cookie.isExpired());
}
```

Now correctly outputs:
```
a 1970-01-01T00:00:00.000Z true
b 1970-01-01T00:00:01.000Z true
```

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-07 17:42:09 -07:00
robobun
40b310c208 Fix child_process stdio properties not enumerable for Object.assign() compatibility (#22322)
## Summary

Fixes compatibility issue with Node.js libraries that use
`Object.assign(promise, childProcess)` pattern, specifically `tinyspawn`
(used by `youtube-dl-exec`).

## Problem

In Node.js, child process stdio properties (`stdin`, `stdout`, `stderr`,
`stdio`) are enumerable own properties that can be copied by
`Object.assign()`. In Bun, they were non-enumerable getters on the
prototype, causing `Object.assign()` to fail copying them.

This broke libraries like:
- `tinyspawn` - uses `Object.assign(promise, childProcess)` to merge
properties
- `youtube-dl-exec` - depends on tinyspawn internally
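
A sketch of the tinyspawn-style merge that depends on these being
enumerable own properties (the command is arbitrary):

```ts
import { spawn } from "node:child_process";

const child = spawn("echo", ["hello"]);
const promise = new Promise<number>((resolve, reject) => {
  child.once("exit", (code) => resolve(code ?? 0));
  child.once("error", reject);
});

// Object.assign only copies enumerable own properties, so stdin/stdout/stderr/stdio
// must be own enumerable properties on `child` for them to land on `merged`.
const merged = Object.assign(promise, child);
console.log(typeof merged.stdout?.pipe); // "function" now that the properties are copied
```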

## Solution

Make stdio properties enumerable own properties during spawn while
preserving:
-  Lazy initialization (streams created only when accessed)
-  Original getter functionality and caching
-  Performance (minimal overhead)

## Testing

- Added comprehensive regression tests
- Verified compatibility with `tinyspawn` and `youtube-dl-exec`
- Existing child_process tests still pass

## Related

- Fixes: https://github.com/microlinkhq/youtube-dl-exec/issues/246

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-06 01:40:36 -07:00
robobun
d5431fcfe6 Fix Windows compilation issues with embedded resources and relative paths (#22365)
## Summary
- Fixed embedded resource path resolution when using
`Bun.build({compile: true})` API for Windows targets
- Fixed relative path handling for `--outfile` parameter in compilation

## Details

This PR fixes two regressions introduced after v1.2.19 in the
`Bun.build({compile})` feature:

### 1. Embedded Resource Path Issue
When using `Bun.build({compile: true})`, the module prefix wasn't being
set to the target-specific base path, causing embedded resources to fail
with "ENOENT: no such file or directory" errors on Windows (e.g.,
`B:/~BUN/root/` paths).

**Fix**: Ensure the target-specific base path is used as the module
prefix in `doCompilation`, matching the behavior of the CLI build
command.

### 2. PE Metadata with Relative Paths
When using relative paths with `--outfile` (e.g.,
`--outfile=forward/slash` or `--outfile=back\\slash`), the compilation
would fail with "FailedToLoadExecutable" error.

**Fix**: Ensure relative paths are properly converted to absolute paths
before PE metadata operations.

## Test Plan
- [x] Tested `Bun.build({compile: true})` with embedded resources
- [x] Tested relative path handling with nested directories
- [x] Verified compiled executables run correctly

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Zack Radisic <zack@theradisic.com>
2025-09-04 18:17:14 -07:00
robobun
72490281e5 fix: handle empty chunked gzip responses correctly (#22360)
## Summary
Fixes #18413 - Empty chunked gzip responses were causing `Decompression
error: ShortRead`

## The Issue
When a server sends an empty response with `Content-Encoding: gzip` and
`Transfer-Encoding: chunked`, Bun was throwing a `ShortRead` error. This
occurred because the code was checking if `avail_in == 0` (no input
data) and immediately returning an error, without attempting to
decompress what could be a valid empty gzip stream.

## The Fix
Instead of checking `avail_in == 0` before calling `inflate()`, we now:
1. Always call `inflate()` even when `avail_in == 0` 
2. Check the return code from `inflate()`
3. If it returns `BufError` with `avail_in == 0`, then we truly need
more data and return `ShortRead`
4. If it returns `StreamEnd`, it was a valid empty gzip stream and we
finish successfully

This approach correctly distinguishes between "no data yet" and "valid
empty gzip stream".
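
A sketch of the empty-body case end to end (server and client in one
script; the port is chosen by the OS):

```ts
// An empty gzip stream is still ~20 bytes of header and trailer.
const emptyGzip = Bun.gzipSync(new Uint8Array(0));

const server = Bun.serve({
  port: 0,
  fetch() {
    const body = new ReadableStream({
      start(controller) {
        controller.enqueue(emptyGzip);
        controller.close();
      },
    });
    // Streaming the body yields a chunked, gzip-encoded response.
    return new Response(body, { headers: { "Content-Encoding": "gzip" } });
  },
});

// Previously this rejected with "Decompression error: ShortRead"; now it resolves to "".
const text = await fetch(`http://localhost:${server.port}/`).then((r) => r.text());
console.log(JSON.stringify(text)); // ""
server.stop();
```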

## Why This Works
- A valid empty gzip stream still has headers and trailers (~20 bytes)
- The zlib `inflate()` function can handle empty streams correctly  
- `BufError` with `avail_in == 0` specifically means "need more input
data"

## Test Plan
 Added regression test in `test/regression/issue/18413.test.ts`
covering:
- Empty chunked gzip response
- Empty non-chunked gzip response  
- Empty chunked response without gzip

 Verified all existing gzip-related tests still pass
 Tested with the original failing case from the issue

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Ciro Spaciari <ciro.spaciari@gmail.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-09-03 18:57:39 -07:00
robobun
f78d197523 Fix crypto.verify() with null/undefined algorithm for RSA keys (#22331)
## Summary
Fixes #11029 - `crypto.verify()` now correctly handles null/undefined
algorithm parameter for RSA keys, matching Node.js behavior.

## Problem
When calling `crypto.verify()` with a null or undefined algorithm
parameter, Bun was throwing an error:
```
error: error:06000077:public key routines:OPENSSL_internal:NO_DEFAULT_DIGEST
```

## Root Cause
The issue stems from the difference between OpenSSL (used by Node.js)
and BoringSSL (used by Bun):
- **OpenSSL v3**: Automatically provides SHA256 as the default digest
for RSA keys when NULL is passed
- **BoringSSL**: Returns an error when NULL digest is passed for RSA
keys

## Solution
This fix explicitly sets SHA256 as the default digest for RSA keys when
no algorithm is specified, achieving OpenSSL-compatible behavior.

## OpenSSL v3 Source Code Analysis

I traced through the OpenSSL v3 source code to understand exactly how it
handles null digests:

### 1. Entry Point (`crypto/evp/m_sigver.c`)
When `EVP_DigestSignInit` or `EVP_DigestVerifyInit` is called with NULL
digest:
```c
// Lines 215-220 in do_sigver_init function
if (mdname == NULL && !reinit) {
    if (evp_keymgmt_util_get_deflt_digest_name(tmp_keymgmt, provkey,
                                               locmdname,
                                               sizeof(locmdname)) > 0) {
        mdname = canon_mdname(locmdname);
    }
}
```

### 2. Default Digest Query (`crypto/evp/keymgmt_lib.c`)
```c
// Lines 533-571 in evp_keymgmt_util_get_deflt_digest_name
params[0] = OSSL_PARAM_construct_utf8_string(OSSL_PKEY_PARAM_DEFAULT_DIGEST,
                                            mddefault, sizeof(mddefault));
if (!evp_keymgmt_get_params(keymgmt, keydata, params))
    return 0;
```

### 3. RSA Provider Implementation
(`providers/implementations/keymgmt/rsa_kmgmt.c`)
```c
// Line 54: Define the default
#define RSA_DEFAULT_MD "SHA256"

// Lines 351-355: Return it for RSA keys
if ((p = OSSL_PARAM_locate(params, OSSL_PKEY_PARAM_DEFAULT_DIGEST)) != NULL
    && (rsa_type != RSA_FLAG_TYPE_RSASSAPSS
        || ossl_rsa_pss_params_30_is_unrestricted(pss_params))) {
    if (!OSSL_PARAM_set_utf8_string(p, RSA_DEFAULT_MD))
        return 0;
}
```

## Implementation Details

The fix includes extensive documentation in the source code explaining:
- The OpenSSL v3 mechanism with specific file paths and line numbers
- Why BoringSSL behaves differently
- Why Ed25519/Ed448 keys are handled differently (they don't need a
digest)

## Test Plan
 Added comprehensive regression test in
`test/regression/issue/11029-crypto-verify-null-algorithm.test.ts`
 Tests cover:
  - RSA keys with null/undefined algorithm
  - Ed25519 keys with null algorithm  
  - Cross-verification between null and explicit SHA256
  - `createVerify()` compatibility
 All tests pass and behavior matches Node.js

## Verification
```bash
# Test with Bun
bun test test/regression/issue/11029-crypto-verify-null-algorithm.test.ts

# Compare with Node.js behavior
node -e "const crypto = require('crypto'); 
const {publicKey, privateKey} = crypto.generateKeyPairSync('rsa', {modulusLength: 2048});
const data = Buffer.from('test');
const sig = crypto.sign(null, data, privateKey);
console.log('Node.js verify with null:', crypto.verify(null, data, publicKey, sig));"
```

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-02 23:30:52 -07:00