Compare commits


27 Commits

Author SHA1 Message Date
Claude Bot
232db80457 fix(test): remove exitCode assertion since bunRun throws on failure
bunRun() already throws if the process exits non-zero, so it only
returns { stdout, stderr } on success. The exitCode property doesn't
exist in the return value.

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-23 08:51:36 +00:00
Claude Bot
793fdf15ba fix: revert flushQueue() changes, keep only updateReferenceType fix
Address PR review feedback:

- Revert all flushQueue() catch {} replacements back to queue.advance()
  in handleAuth, caching_sha2 success, checkIfPreparedStatementIsDone,
  and handlePreparedStatement ERROR paths. These are called during
  readAndProcessData, and onData's defer already calls
  registerAutoFlusher() which handles flushing. Using flushQueue()
  directly would break batching by forcing immediate socket writes.

- The real fix is the updateReferenceType() call in enqueueRequest(),
  which re-refs the poll_ref when a new query is enqueued on an idle
  connection whose poll_ref was previously unrefed.

- Fix test fixture to support TLS by reading CA_PATH from env
- Pass CA_PATH in test invocation for MySQL TLS container
- Add exitCode assertion to sequential queries test

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-23 08:45:05 +00:00
Claude Bot
8188893044 fix(sql): prevent MySQL sequential queries from hanging on TCP connections
Two related bugs caused sequential `await sql.unsafe()` calls to hang
forever on remote MySQL connections (with real TCP latency):

1. `enqueueRequest()` did not call `updateReferenceType()`, so the
   poll_ref remained unrefed after a connection went idle. Without the
   poll ref, the event loop would not wait for the socket response.

2. Several code paths called `queue.advance()` without flushing the
   write buffer, so queued queries would never be sent to the server.
   Replace these with `flushQueue()` which calls both `advance()` and
   `flushData()`. The affected paths are: handleAuth OK, handleAuth
   caching_sha2 success, checkIfPreparedStatementIsDone, and
   handlePreparedStatement ERROR.

Closes #27362
Closes #26235
Closes #24130
Closes #27102

Co-Authored-By: Claude <noreply@anthropic.com>
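The poll-ref behavior in (1) can be sketched with a hypothetical mock — `PollRef`, `Connection`, and `updateReferenceType` here are illustrative stand-ins, not Bun's actual Zig types:

```js
// Hypothetical sketch of the poll_ref bug: the event loop only waits on a
// socket while its poll ref is refed. Going idle unrefs it; enqueueing a new
// query must re-ref it, or the loop never wakes for the response.
class PollRef {
  constructor() { this.refd = true; }
  ref() { this.refd = true; }
  unref() { this.refd = false; }
}

class Connection {
  constructor() {
    this.pollRef = new PollRef();
    this.queue = [];
  }
  onIdle() {
    this.pollRef.unref(); // connection idle: let the event loop exit
  }
  enqueueRequest(query) {
    this.queue.push(query);
    this.updateReferenceType(); // the fix: re-ref when new work arrives
  }
  updateReferenceType() {
    if (this.queue.length > 0) this.pollRef.ref();
  }
}
```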
2026-02-23 04:26:59 +00:00
Dylan Conway
cb3c39be23 ci: add Intel SDE baseline verification for Windows, unify baseline checks (#27121)
Adds a unified baseline verification script
(`scripts/verify-baseline.ts`) that combines basic CPU instruction
checks and JIT stress test fixtures into a single step.

**Changes:**
- New TypeScript script replaces separate `verify-baseline-cpu.sh` and
`verify-jit-stress-qemu.sh` CI steps
- Adds Windows x64 baseline verification using Intel SDE v9.58 with
Nehalem emulation
- Linux continues to use QEMU (Nehalem for x64, Cortex-A53 for aarch64)
- SDE violations are detected by checking output for `SDE-ERROR`
messages rather than exit codes, avoiding false positives from
application errors
- JIT stress fixtures now run on every build instead of only when WebKit
changes

**Platform support:**
| Platform | Emulator | CPU Model |
|----------|----------|-----------|
| Linux x64 baseline | QEMU | Nehalem (SSE4.2, no AVX) |
| Linux aarch64 | QEMU | Cortex-A53 (no LSE/SVE) |
| Windows x64 baseline | Intel SDE | Nehalem (no AVX) |

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
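The SDE-ERROR detection described above can be sketched as a hypothetical helper (not the actual `scripts/verify-baseline.ts`):

```js
// Under SDE, a nonzero exit code may just be an application error, so the
// baseline check scans the emulator output for SDE-ERROR messages instead
// of trusting exit codes.
function classifySdeRun({ exitCode, output }) {
  if (output.includes("SDE-ERROR")) return "baseline-violation";
  return exitCode === 0 ? "ok" : "app-error";
}
```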
2026-02-21 14:24:21 -08:00
robobun
bc98025d93 fix(spawn): close libuv pipes before freeing to prevent handle queue corruption (#27064)
## Summary

Fixes #27063

On Windows, when `Bun.spawn` fails (e.g., ENOENT for a nonexistent
executable), pipes that were already initialized with `uv_pipe_init`
were being freed directly with `allocator.destroy()` without first
calling `uv_close()`. This left dangling pointers in libuv's
`handle_queue` linked list, corrupting it. Subsequent spawn calls would
crash with a segfault when inserting new handles into the corrupted
list.

Three sites were freeing pipe handles without `uv_close`:

- **`process.zig` `Stdio.deinit()`**: When spawn failed,
already-initialized pipes were freed without `uv_close()`. Now uses
`closePipeAndDestroy()` which checks `pipe.loop` to determine if the
pipe was registered with the event loop.
- **`process.zig` `spawnProcessWindows` IPC handling**: Unsupported IPC
pipes in stdin/stdout/stderr were freed directly. Now uses the same safe
close-then-destroy pattern.
- **`source.zig` `openPipe()`**: If `pipe.open(fd)` failed after
`pipe.init()` succeeded, the pipe was destroyed directly. Now calls
`uv_close()` with a callback that frees the memory.

Additionally, pipe allocations in `stdio.zig` are now zero-initialized
so that the `loop` field is reliably `null` before `uv_pipe_init`,
enabling the init detection in `deinit`.

## Test plan

- [x] Added regression test `test/regression/issue/27063.test.ts` that
spawns nonexistent executables repeatedly and verifies a valid spawn
still works afterward
- [x] Verified existing spawn tests pass (`exit-code.test.ts`,
`spawnSync.test.ts` — timing-related pre-existing flakes only)
- [x] Debug build compiles successfully
- [ ] Windows CI should verify the fix prevents the segfault


🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
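The close-before-free rule can be shown in miniature with a hypothetical mock — `Loop`, `Pipe`, and `closePipeAndDestroy` stand in for the real libuv handles, not its actual API:

```js
// A loop keeps a queue of registered handles. Freeing a registered handle
// without closing it leaves a dangling entry in that queue.
class Loop {
  constructor() { this.handleQueue = new Set(); }
}

class Pipe {
  init(loop) { this.loop = loop; loop.handleQueue.add(this); }
  close(onClosed) {
    this.loop.handleQueue.delete(this); // deregister before freeing
    this.loop = null;
    onClosed(this);
  }
}

function closePipeAndDestroy(pipe, destroy) {
  if (pipe.loop) pipe.close(destroy); // registered with a loop: close first
  else destroy(pipe);                 // never initialized: free directly
}
```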
2026-02-21 14:04:25 -08:00
Jarred Sumner
b371bf9420 fix(install): resolve DT_UNKNOWN entries on NFS/FUSE filesystems (#27008)
## Summary

- Fixes `bun install` producing incomplete `node_modules` on NFS, FUSE,
and some bind mount filesystems
- On these filesystems, `getdents64` returns `DT_UNKNOWN` for `d_type`
instead of `DT_DIR`/`DT_REG`
- The directory walker was silently skipping these entries, causing
missing files (e.g., 484 instead of 1070 for `@sinclair/typebox`)
- When an entry has unknown kind, we now fall back to `fstatat()` to
resolve the actual file type

## Test plan

- [x] Reproduced with Docker NFS environment: npm installs 1071 files,
bun installs only 484
- [x] Verified fix: bun-debug now installs 1070 files (matching npm
minus `.package-lock.json`)
- [x] Second install from cache also works correctly (1070 files)
- [x] `bun run zig:check-all` passes on all 16 platform targets
- [ ] CI passes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-02-21 02:18:47 -08:00
Jarred Sumner
e6ec92244c fix(bindgen): hoist WTF::String temps to dispatch scope to prevent use-after-free (#27324)
## Summary

Fixes a use-after-free in bindgen v1 generated C++ bindings that causes
`"switch on corrupt value"` panics in `String.deref` on Windows. This is
a top crash (500+ reports across v1.3.3–v1.3.9), predominantly affecting
standalone executables.

## Root Cause

`Bun::toString(WTF::String&)` copies the raw `StringImpl*` pointer
**without adding a reference**. For optional string arguments with
defaults and dictionary string fields, the generated code declares
`WTF::String` inside an `if` block, but the resulting `BunString`
outlives it:

```cpp
BunString argStr;
if (!arg.isUndefinedOrNull()) {
    WTF::String wtfString_0 = WebCore::convert<IDLDOMString>(...);
    argStr = Bun::toString(wtfString_0);  // copies pointer, no ref
}  // ← wtfString_0 destroyed here, drops ref → StringImpl may be freed
// argStr now holds a dangling pointer to freed memory
```

When the freed memory is reused, `String.deref()` reads garbage for the
tag field → `"switch on corrupt value"` panic.

### Why it was Windows-only / elevated recently

- The mimalloc v3 update (shipped in v1.3.7/v1.3.8) changed heap reuse
patterns on Windows, causing freed memory to be overwritten more
aggressively — turning a latent UAF into a frequent crash
- The mimalloc v3 revert in v1.3.9 reduced crash frequency back to
baseline but did not fix the underlying bug
- A [previous fix](https://github.com/oven-sh/bun/pull/26717) was
reverted due to unrelated CI failures

## Fix

Hoist all `WTF::String` temporaries to the same scope as the Zig
dispatch call, so they stay alive until the `BunString` values are
consumed:

1. **Function string arguments**: `WTF::String` is declared at the top
of the generated function, before any `if` blocks for optional arguments
2. **Dictionary string fields**: The `convert*` function accepts
`WTF::String&` references owned by the caller, so the string data
outlives the `convert*` function and remains valid through the dispatch
call

This approach is exception-safe — `WTF::String` destructors handle
cleanup automatically on all exit paths (normal return,
`RETURN_IF_EXCEPTION`, etc.) with no leaked refs.

### Difference from the previous fix

The [previous fix](https://github.com/oven-sh/bun/pull/26717) hoisted
`WTF::String` for function arguments but kept dictionary field temps
**inside** the `convert*` function. This left dictionary string fields
as use-after-return — `result->encoding` would be a dangling pointer
after `convert*` returned. This fix correctly passes `WTF::String&` refs
from the dispatch scope through to the `convert*` function.

### Affected call sites

Only 2 call sites have the vulnerable pattern (`DOMString` +
`.default(...)`):
- `Bun.stringWidth()` — `str: t.DOMString.default("")`  
- `os.userInfo()` — `encoding: t.DOMString.default("")` in
`UserInfoOptions` dictionary

Note: bindgen v2 is not affected — it uses `releaseImpl().leakRef()`
which transfers ownership correctly.
2026-02-21 02:15:52 -08:00
Dylan Conway
b509acb533 Revert "fix: clean up ESM registry when require() of ESM module fails… (#27325)
… (#27288)"

This reverts commit 21c3439bb4.
2026-02-21 01:14:27 -08:00
robobun
ede635b8a9 fix(install): store tarball integrity hash in lockfile for HTTPS dependencies (#27018)
## Summary
- HTTPS/URL tarball dependencies were not having their integrity hash
stored in the lockfile, allowing a malicious server to change the
tarball contents without detection on subsequent installs
- Now computes a sha512 hash from the downloaded tarball bytes during
extraction and stores it in both the binary lockfile and text bun.lock
- The hash is verified on re-download, matching the behavior of npm
registry packages
- Old lockfiles without integrity hashes continue to work (backward
compatible)

## Changes
- `src/install/integrity.zig`: Added `Integrity.forBytes()` to compute
sha512 from raw bytes
- `src/install/install.zig`: Added `integrity` field to `ExtractData`
struct
- `src/install/PackageManagerTask.zig`: Compute hash from tarball bytes
for both remote and local tarball tasks
- `src/install/PackageManager/processDependencyList.zig`: Set
`package.meta.integrity` from computed hash
- `src/install/lockfile/bun.lock.zig`: Serialize/deserialize integrity
for `remote_tarball` and `local_tarball` types

## Test plan
- [x] Integrity hash is stored in text lockfile for tarball URL deps
- [x] Integrity hash is consistent/deterministic across reinstalls
- [x] Integrity mismatch (changed tarball content) causes install
failure
- [x] Old lockfiles without integrity still install successfully
(backward compat)
- [x] Fresh installs produce integrity hashes
- [x] All 12 existing tarball tests pass (no regressions)
- [x] Tests fail with `USE_SYSTEM_BUN=1` (confirms fix is effective)

Fixes GHSA-jfhr-4v9p-9rm4

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-02-20 23:52:45 -08:00
Jarred Sumner
ebb3730166 Update ci.mjs 2026-02-20 23:17:53 -08:00
robobun
76ceb26e0a fix(socket): prevent null deref in Listener.getsockname (#27303)
## Summary
- Fix null pointer dereference in `Listener.getsockname()` when called
without an object argument (or with a non-object argument)
- `getsockname()` wrote properties directly into its first argument via
`.put()`, which calls `getObject()` in C++ — this returns null for
non-object values like `undefined`, causing a crash at
`BunString.cpp:942`
- Now validates the argument is an object first; if not, creates a new
empty object, writes properties into it, and returns it

## Crash reproduction
```js
const listener = Bun.listen({
    hostname: "localhost",
    port: 0,
    socket: { data() {} },
});
listener.getsockname(); // Segfault - null pointer dereference
```

## Test plan
- [x] Added `test/js/bun/http/listener-getsockname.test.ts` with tests
for calling `getsockname()` with no argument, with an object argument,
and with a non-object argument
- [x] Verified test crashes with system bun and passes with patched
build
- [x] Verified original fuzzer reproduction no longer crashes

---------

Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
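The fix's validate-or-allocate pattern, as a hedged JS sketch (not the actual Zig implementation, and the address values are placeholders):

```js
// Write address info into the caller's object if one was provided; otherwise
// allocate a fresh object instead of dereferencing a non-object argument.
function getsockname(out) {
  const target = (typeof out === "object" && out !== null) ? out : {};
  target.family = "IPv4";
  target.address = "127.0.0.1";
  target.port = 0;
  return target;
}
```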
2026-02-20 22:50:44 -08:00
robobun
06f26e5f01 fix: use BoringSSL for std.crypto random seed to support older Linux kernels (#27282)
## Summary

- Override Zig's default `cryptoRandomSeed` in `std_options` to use
BoringSSL's `RAND_bytes` (via `bun.csprng`) instead of the `getrandom()`
syscall
- On Linux kernels < 3.17 (e.g. Synology NAS with kernel 3.10), the
`getrandom` syscall doesn't exist and returns `ENOSYS`, causing Zig's
stdlib to panic with `"getrandom() failed to provide entropy"`
- BoringSSL already handles this gracefully by falling back to
`/dev/urandom`

## Details

Bun already uses BoringSSL's `RAND_bytes` for all its own cryptographic
random needs (`bun.csprng`). However, Zig's standard library
`std.crypto.random` uses a separate code path that calls the `getrandom`
syscall directly, with no fallback for `ENOSYS`.

Zig's `std.Options` struct provides a `cryptoRandomSeed` override for
exactly this purpose. This PR sets it to `bun.csprng` in both
`src/main.zig` and `src/main_test.zig`.

## Test plan

- [x] `bun bd` compiles successfully
- [x] `crypto.getRandomValues()`, `crypto.randomUUID()`, and
`require("crypto").randomFillSync()` all work correctly
- Cannot write a meaningful automated regression test since reproducing
requires a Linux kernel < 3.17

Closes #27279

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-02-20 21:13:51 -08:00
robobun
9f5970938f fix(spawn): prevent integer overflow in getArgv with large array length (#27316)
## Crash
Integer overflow panic in `getArgv` when `Bun.spawn`/`Bun.spawnSync`
receives an array with `.length` near u32 max (e.g. 4294967295).

## Reproduction
```js
const arr = ["echo", "hello"];
Object.defineProperty(arr, "length", { value: 4294967295 });
Bun.spawnSync(arr);
```

## Root Cause
`JSArrayIterator.len` is a `u32` derived from the JS array's `.length`
property. In `getArgv`, the expression `cmds_array.len + 2` (for argv0 +
null terminator) overflows `u32` arithmetic when `len` is close to `u32`
max. This causes a panic in debug builds and a segfault in release
builds. Additionally, the validation checks (`isEmptyOrUndefinedOrNull`
and `len == 0`) were placed after the overflowing `initCapacity` call,
so they couldn't prevent the crash.

## Fix
- Move validation checks before the `initCapacity` call
- Add a length check rejecting arrays with length > `u32 max - 2`
- Widen `cmds_array.len` to `usize` before adding 2 to prevent overflow
- Use `try argv.append()` instead of `appendAssumeCapacity` for safety

## Verification
- Reproduction no longer crashes (throws a clean "cmd array is too large"
error)
- Normal `Bun.spawn`/`Bun.spawnSync` usage unaffected
- Added regression test at
`test/js/bun/spawn/spawn-large-array-length.test.ts`

Co-authored-by: Claude <noreply@anthropic.com>
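The overflow-safe capacity check can be sketched as follows — hypothetical names, since the real code does this in Zig with `u32`/`usize`:

```js
const U32_MAX = 2 ** 32 - 1;

// Validate before allocating: reject lengths that would overflow
// len + 2 (argv0 + null terminator) in 32-bit arithmetic, then do the
// addition in a wider type (plain JS numbers here, usize in the real fix).
function argvCapacity(len) {
  if (len === 0) throw new Error("cmd array cannot be empty");
  if (len > U32_MAX - 2) throw new Error("cmd array is too large");
  return len + 2;
}
```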
2026-02-20 20:24:22 -08:00
Jarred Sumner
c01a5e08be Bring back the slop label 2026-02-20 17:54:21 -08:00
SUZUKI Sosuke
e9db16c257 fix: release ReadableStream Strong ref on S3 download stream cancel (#27277)
## Summary

Same issue as #27191 (FetchTasklet), but in `S3DownloadStreamWrapper`
(`src/s3/client.zig`).

When a streaming S3 download body is cancelled via `reader.cancel()`,
`S3DownloadStreamWrapper.readable_stream_ref` (a `ReadableStream.Strong`
GC root) was never released. The S3 download continued in the
background, and the Strong ref prevented GC of the ReadableStream —
leaking memory until the download eventually completed.

## Root Cause

`ByteStream.onCancel()` cleaned up its own state but **did not notify
the `S3DownloadStreamWrapper`**. The wrapper only called `deinit()`
(which releases the Strong ref) when `has_more == false` — i.e., when
the S3 download fully completed. If the user cancelled the stream
mid-download, the Strong ref was held until the entire file finished
downloading in the background.

This is the exact same pattern that was fixed for `FetchTasklet` in
#27191.

## Fix

- Register a `cancel_handler` on the `ByteStream.Source` that releases
`readable_stream_ref` when the stream is cancelled. The download
callback will see `readable_stream_ref.get()` return `null` and skip
data delivery until the download finishes and `deinit()` cleans up the
remaining resources.
- Add `clearStreamCancelHandler()` in `deinit()` to null the
`cancel_handler`/`cancel_ctx` on the `ByteStream.Source`, preventing
use-after-free when the wrapper is freed before `cancel()` is called
(e.g., download completes normally).

## Test

Added `test/js/bun/s3/s3-stream-cancel-leak.test.ts` — uses a raw TCP
server (`Bun.listen`) that mocks an S3 GET response: sends one HTTP
chunk then keeps the connection open. Client streams 30 times via
`s3.file().stream()`, reads one chunk, cancels, then asserts
`heapStats().objectTypeCounts.ReadableStream` does not accumulate.
Before the fix, all 30 ReadableStreams leaked; after the fix, 0 leak.
2026-02-20 17:49:01 -08:00
robobun
21c3439bb4 fix: clean up ESM registry when require() of ESM module fails (#27288)
## Summary

- When `require()` loads an ESM module (`.mjs`) that throws during
evaluation, the module was removed from `requireMap` but left in the ESM
registry (`Loader.registry`) in a partially-initialized state
- A subsequent `import()` of the same module would find this corrupt
entry and throw `ReferenceError: Cannot access 'foo' before
initialization` instead of re-throwing the original evaluation error
- Fix by also deleting the module from `Loader.registry` in both
`overridableRequire` and `requireESMFromHijackedExtension` when ESM
evaluation fails, allowing `import()` to re-evaluate from scratch

Closes #27287

## Test plan

- [x] Added regression test in `test/regression/issue/27287.test.ts`
- [x] Verified test fails with system bun (`USE_SYSTEM_BUN=1`)
- [x] Verified test passes with `bun bd test`
- [x] Manual verification: `require()` correctly throws original error,
`import()` now re-throws the same error instead of `ReferenceError`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Alistair Smith <alistair@anthropic.com>
2026-02-20 17:08:50 -08:00
Dylan Conway
7baf50f379 fix(http): align DeadSocket to prevent crash on Windows ARM64 stable builds (#27290)
## Summary

- `DeadSocket` only contains a `u8` field (alignment 1), so the linker
could place `DeadSocket.dead_socket` at a non-8-byte-aligned address
- When `markTaggedSocketAsDead` creates a tagged pointer embedding this
address and passes it through `bun.cast(**anyopaque, ...)`, the
`@alignCast` panics with "incorrect alignment" because the bottom bits
of the tagged value come from the unaligned address
- Fix: add `align(@alignOf(usize))` to the `dead_socket` variable
declaration

This only manifested on stable (non-canary) Windows ARM64 builds because
the binary layout differs when `ci_assert` is false, shifting the static
variable to a non-aligned address. Canary builds happened to place it at
an aligned address by coincidence.

## Test plan

- [x] Verified `fetch('https://example.com')` no longer crashes on
Windows ARM64 stable build (ENABLE_CANARY=OFF)
- [x] Verified 5 sequential HTTPS fetches complete successfully
- [x] Verified the fix is a single-line change with no behavioral side
effects

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-02-20 13:51:24 -08:00
robobun
76754a8ead fix(shell): support -e and -E flags in builtin echo (#27144)
## Summary

- Bun's builtin `echo` only supported the `-n` flag. The `-e` and `-E`
flags were treated as literal text, causing `echo -e password` to output
`-e password` instead of `password`. This broke common patterns like
`echo -e $password | sudo -S ...`.
- Added full `-e` (enable backslash escapes) and `-E` (disable backslash
escapes) flag support, matching bash behavior including combined flags
like `-ne`, `-en`, `-eE`, `-Ee`.
- Supported escape sequences: `\\`, `\a`, `\b`, `\c`, `\e`/`\E`, `\f`,
`\n`, `\r`, `\t`, `\v`, `\0nnn` (octal), `\xHH` (hex).

Closes #17405

## Test plan

- [x] Added 22 tests in `test/regression/issue/17405.test.ts` covering
all escape sequences, flag combinations, and the original issue scenario
- [x] Verified tests fail with system bun (19/22 fail) and pass with
debug build (22/22 pass)
- [x] Verified existing shell tests (`bunshell.test.ts`) still pass —
all 27 echo-related tests pass, no regressions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
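A minimal JS sketch of the escape decoding — this covers only the named escapes and `\c`; the actual implementation also handles `\0nnn` octal and `\xHH` hex:

```js
// Decode echo -e style backslash escapes. "\c" truncates the output,
// matching bash's builtin echo.
function decodeEchoEscapes(input) {
  const named = { "\\": "\\", a: "\x07", b: "\b", e: "\x1b", E: "\x1b",
                  f: "\f", n: "\n", r: "\r", t: "\t", v: "\v" };
  let out = "";
  for (let i = 0; i < input.length; i++) {
    if (input[i] !== "\\" || i + 1 === input.length) { out += input[i]; continue; }
    const c = input[++i];
    if (c === "c") return out;               // \c: suppress everything after
    out += c in named ? named[c] : "\\" + c; // unknown escapes pass through
  }
  return out;
}
```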
2026-02-19 20:21:49 -08:00
robobun
ecd4e680eb fix(router): don't cache file descriptors in Route.parse to prevent stale fd reuse (#27164)
## Summary
- `FileSystemRouter.Route.parse()` was caching file descriptors in the
global entry cache (`entry.cache.fd`). When `Bun.build()` later closed
these fds during `ParseTask`, the cache still referenced them.
Subsequent `Bun.build()` calls would find these stale fds, pass them to
`readFileWithAllocator`, and `seekTo(0)` would fail with EBADF (errno
9).
- The fix ensures `Route.parse` always closes the file it opens for
`getFdPath` instead of caching it in the shared entry. The fd was only
used to resolve the absolute path via `getFdPath`, so caching was
unnecessary and harmful.

Closes #18242

## Test plan
- [x] Added regression test `test/regression/issue/18242.test.ts` that
creates a `FileSystemRouter` and runs `Bun.build()` three times
sequentially
- [x] Test passes with `bun bd test test/regression/issue/18242.test.ts`
- [x] Test fails with `USE_SYSTEM_BUN=1 bun test
test/regression/issue/18242.test.ts` (system bun v1.3.9)
- [x] Verified 5 sequential builds work correctly after the fix

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-02-19 20:15:11 -08:00
robobun
044bb00382 fix(sqlite): finalize transaction statements on close() to prevent "database is locked" (#27202)
## Summary

- Fixes `db.close(true)` throwing "database is locked" after using
`db.transaction()`
- The `getController` function creates prepared statements via
`db.prepare()` which bypasses the query cache, so they were never
finalized during `close()`
- `close()` now explicitly finalizes any cached transaction controller
statements before calling `sqlite3_close()`

Fixes #14709

## Test plan

- [x] New regression tests in `test/regression/issue/14709.test.ts`
covering:
  - Basic `close(true)` after `transaction()`
  - `close(true)` after transaction with actual work
  - `using` declaration (calls `close(true)` via `Symbol.dispose`)
  - Multiple transaction types (deferred, immediate, exclusive)
  - Nested transactions
- [x] All new tests fail with system bun (`USE_SYSTEM_BUN=1`) and pass
with debug build
- [x] Existing SQLite test suite (`test/js/bun/sqlite/sqlite.test.js`)
passes with no regressions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-19 20:11:02 -08:00
robobun
655aab845d fix(css): prevent style rule deduplication across @property boundaries (#27119)
## Summary

- Fixes the CSS bundler incorrectly removing a `:root` selector when it
appears before an `@property` at-rule and another `:root` exists after
it
- The deduplication logic in `CssRuleList.minify()` was merging style
rules across non-style rule boundaries (like `@property`), which changes
CSS semantics
- Clears the `style_rules` deduplication map when a non-style rule is
appended, preventing merges across these boundaries

## Test plan

- [x] Added regression test in `test/regression/issue/27117.test.ts`
- [x] Verified test fails with system bun (`USE_SYSTEM_BUN=1`) —
reproduces the bug
- [x] Verified test passes with debug build (`bun bd test`)
- [x] Verified adjacent `:root` rules (without intervening at-rules) are
still correctly merged
- [x] All existing CSS bundler tests pass
(`test/bundler/esbuild/css.test.ts` — 53 tests)
- [x] All CSS modules tests pass (`test/bundler/css/css-modules.test.ts`
— 3 tests)

Closes #27117

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-19 20:03:18 -08:00
robobun
4141ef1edf fix(shell): fix unicode cursor tracking causing __bunstr_N leak in output (#27226)
## Summary
- Fixed `srcBytesAtCursor()` and `cursorPos()` in the shell lexer's
unicode path (`ShellCharIter(.wtf8)`) to use `self.src.cursor.i` instead
of `self.src.iter.i`, which was permanently stuck at 0
- Fixed `bumpCursorAscii()` to properly decode the codepoint at the new
cursor position instead of storing the last digit character, which
caused the wrong character to be returned on the next read

## Root Cause
When the shell template literal source contained multi-byte UTF-8
characters (e.g., `Í`, `€`), the `LexerUnicode` path was used. In this
path, `srcBytesAtCursor()` and `cursorPos()` referenced
`self.src.iter.i` — the `CodepointIterator`'s internal field that is
never modified (the `next()` method takes `*const Iterator`). This
meant:

1. `srcBytesAtCursor()` always returned bytes from position 0 (the start
of the source)
2. `looksLikeJSStringRef()` checked for `__bunstr_` at position 0
instead of the current cursor position, failing to match
3. The `\x08__bunstr_N` reference was passed through as literal text
into the shell output

This only occurred when **both** conditions were met:
- An interpolated value contained a space (triggering
`needsEscapeBunstr` → stored as `__bunstr_N` ref)
- A subsequent value contained multi-byte UTF-8 (triggering
`LexerUnicode` instead of `LexerAscii`)

Closes #17244

## Test plan
- [x] Added regression tests in `test/regression/issue/17244.test.ts`
- [x] Verified tests fail with `USE_SYSTEM_BUN=1` (system bun 1.3.9)
- [x] Verified tests pass with `bun bd test`
- [x] Ran existing shell tests (`bunshell.test.ts`,
`bunshell-default.test.ts`, `bunshell-instance.test.ts`) — no
regressions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
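The stale-index bug can be miniaturized in JS — a hypothetical sketch where `iter.i` and `cursor.i` mirror the Zig fields, not the real lexer:

```js
// The iterator's own index (iter.i) is never advanced by next(); only the
// cursor passed to next() moves. Reading the source at iter.i therefore
// always reads from position 0.
function makeSource(text) {
  return { iter: { i: 0 }, cursor: { i: 0 }, text };
}
function advance(src) { src.cursor.i += 1; } // iter.i stays 0, as with *const next()

function srcBytesAtCursorBuggy(src) { return src.text.slice(src.iter.i); }
function srcBytesAtCursorFixed(src) { return src.text.slice(src.cursor.i); }
```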
2026-02-19 19:38:33 -08:00
Jarred Sumner
e57593759f Update no-validate-leaksan.txt 2026-02-19 16:35:56 -08:00
robobun
e7cf4b77ba fix(css): strip leading @layer declarations from bundled CSS output (#27131)
## Summary
- When bundling CSS with `@layer` declarations (e.g. `@layer one;`)
followed by `@import` rules with `layer()`, the bundler left the bare
`@layer` statements and `@import` lines in the output even though their
content was already inlined into `@layer` blocks
- The fix adds `.layer_statement` to the leading-rule filter in
`prepareCssAstsForChunk`, which already stripped `@import` and
`.ignored` rules but missed `@layer` statement rules

Closes #20546

## Test plan
- [x] New regression test in `test/regression/issue/20546.test.ts`
covers both separate `@layer` statements and comma syntax
- [x] Test fails with system bun (`USE_SYSTEM_BUN=1`) confirming the bug
- [x] Test passes with debug build (`bun bd test`)
- [x] All 53 existing CSS bundler tests pass
(`test/bundler/esbuild/css.test.ts`)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-19 12:31:47 -08:00
robobun
2e5e21015f fix(bundler): emit valid JS for unused dynamic imports (#27176)
## Summary

- Fixes `bun build` producing syntactically invalid JavaScript
(`Promise.resolve().then(() => )`) for unused dynamic imports like `void
import("./dep.ts")` or bare `import("./dep.ts")` expression statements
- When `exports_ref` is cleared for unused results but the `.then(() =>
...)` wrapper was still emitted, the arrow function body was empty. Now
skips the `.then()` wrapper entirely when there's nothing to execute
inside the callback, producing just `Promise.resolve()`
- The bug only affected cases where the import result was unused —
`const x = import(...)`, `await import(...)`, and `.then()` chains were
already correct

Closes #24709

## Test plan

- [x] Added regression test in `test/regression/issue/24709.test.ts`
that validates both `void import()` and bare `import()` statement cases
- [x] Verified test fails with system bun (reproduces the bug) and
passes with debug build (fix works)
- [x] Verified used dynamic imports (`const m = await import(...)`)
still produce correct `.then(() => exports)` output

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
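The emission decision, as a tiny sketch — `lowerDynamicImport` is hypothetical; the real change is in the bundler's printer:

```js
// When the import result is unused, skip the .then() wrapper entirely:
// emitting ".then(() => )" with an empty arrow body is a syntax error.
function lowerDynamicImport(resultUsed) {
  return resultUsed
    ? "Promise.resolve().then(() => exports)"
    : "Promise.resolve()";
}
```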
2026-02-19 12:30:15 -08:00
SUZUKI Sosuke
b04303cb23 fix(gc): fix three GC safety issues (speculative fix for BUN-Q81) (#27190)
## Speculative fix for
[BUN-Q81](https://bun-p9.sentry.io/issues/BUN-Q81)

BUN-Q81 is a long-standing `SlotVisitor::drain` segfault during GC
marking (150 occurrences since July 2025, across v1.1.10 through
v1.3.10). A full audit of the codebase for GC safety issues found three
bugs:

### 1. `JSCommonJSModule::m_overriddenCompile` not visited in
`visitChildren`

`m_overriddenCompile` is a `WriteBarrier<Unknown>` that stores the
overridden `module._compile` function (used by `ts-node`, `pirates`,
`@swc-node/register`, etc.). It was the only WriteBarrier field in the
class not visited by `visitChildrenImpl`, making it invisible to the GC.
The pointed-to function could be prematurely collected, and subsequent
GC marking would follow the dangling WriteBarrier pointer into freed
memory.

**This is the strongest candidate for BUN-Q81.**

### 2. `JSSQLStatement::userPrototype` — wrong owner in
`WriteBarrier::set()`

```cpp
// Before (wrong):
castedThis->userPrototype.set(vm, classObject, prototype.getObject());
// After (correct):
castedThis->userPrototype.set(vm, castedThis, prototype.getObject());
```

The owner parameter must be the object containing the WriteBarrier so
the GC's remembered set is updated correctly. All other `.set()` calls
in the same file correctly use `castedThis`.

### 3. `NodeVMSpecialSandbox` — missing `visitChildren` entirely

`NodeVMSpecialSandbox` has a `WriteBarrier<NodeVMGlobalObject>
m_parentGlobal` member but had no `visitChildren` implementation. Added
the standard boilerplate.
2026-02-19 12:28:49 -08:00
SUZUKI Sosuke
b6eaa96e56 fix: release ReadableStream Strong ref on fetch body cancel (#27191)
## Summary

When a streaming HTTP response body is cancelled via `reader.cancel()`
or `body.cancel()`, `FetchTasklet.readable_stream_ref` (a
`ReadableStream.Strong` GC root) was never released. This caused
ReadableStream objects, associated Promises, and Uint8Array buffers to
be retained indefinitely — leaking ~260KB per cancelled streaming
request.

## Root Cause

`ByteStream.onCancel()` cleaned up its own state (`done = true`, buffer
freed, pending promise resolved) but **did not notify the
FetchTasklet**. The Strong ref was only released when:
- `has_more` became `false` (HTTP response fully received) — but the
server may keep the connection open
- `Bun__FetchResponse_finalize` — but this checks
`readable_stream_ref.held.has()` and **skips cleanup when the Strong ref
is set** (line 958)

This created a circular dependency: the Strong ref prevented GC, and the
finalizer skipped cleanup because the Strong ref existed.

## Fix

Add a `cancel_handler` callback to `NewSource` (`ReadableStream.zig`)
that propagates cancel events to the data producer. `FetchTasklet`
registers this callback via `Body.PendingValue.onStreamCancelled`. When
the stream is cancelled, the handler calls
`ignoreRemainingResponseBody()` to release the Strong ref, stop
processing further HTTP data, and unref the event loop.

To prevent use-after-free when `FetchTasklet` is freed before `cancel()`
is called (e.g., HTTP response completes normally, then user cancels the
orphaned stream), `clearStreamCancelHandler()` nulls the
`cancel_handler` on the `ByteStream.Source` at all 3 sites where
`readable_stream_ref` is released.

## Test

Added `test/js/web/fetch/fetch-stream-cancel-leak.test.ts` — uses a raw
TCP server (`Bun.listen`) that sends one HTTP chunk then keeps the
connection open. Client fetches 30 times, reads one chunk, cancels, then
asserts `heapStats().objectTypeCounts.ReadableStream` does not
accumulate. Before the fix, all 30 ReadableStreams leaked; after the
fix, 0 leak.
2026-02-19 12:22:43 -08:00
74 changed files with 2474 additions and 211 deletions

View File

@@ -593,8 +593,35 @@ function getTargetTriplet(platform) {
*/
function needsBaselineVerification(platform) {
const { os, arch, baseline } = platform;
if (os !== "linux") return false;
return (arch === "x64" && baseline) || arch === "aarch64";
if (os === "linux") return (arch === "x64" && baseline) || arch === "aarch64";
if (os === "windows") return arch === "x64" && baseline;
return false;
}
/**
* Returns the emulator binary name for the given platform.
* Linux uses QEMU user-mode; Windows uses Intel SDE.
* @param {Platform} platform
* @returns {string}
*/
function getEmulatorBinary(platform) {
const { os, arch } = platform;
if (os === "windows") return "sde-external/sde.exe";
if (arch === "aarch64") return "qemu-aarch64-static";
return "qemu-x86_64-static";
}
const SDE_VERSION = "9.58.0-2025-06-16";
const SDE_URL = `https://downloadmirror.intel.com/859732/sde-external-${SDE_VERSION}-win.tar.xz`;
/**
 * @param {PipelineOptions} options
 * @returns {boolean}
*/
function hasWebKitChanges(options) {
const { changedFiles = [] } = options;
return changedFiles.some(file => file.includes("SetupWebKit.cmake"));
}
/**
@@ -603,9 +630,31 @@ function needsBaselineVerification(platform) {
* @returns {Step}
*/
function getVerifyBaselineStep(platform, options) {
const { arch } = platform;
const { os } = platform;
const targetKey = getTargetKey(platform);
const archArg = arch === "x64" ? "x64" : "aarch64";
const triplet = getTargetTriplet(platform);
const emulator = getEmulatorBinary(platform);
const jitStressFlag = hasWebKitChanges(options) ? " --jit-stress" : "";
const setupCommands =
os === "windows"
? [
`echo Downloading build artifacts...`,
`buildkite-agent artifact download ${triplet}.zip . --step ${targetKey}-build-bun`,
`echo Extracting ${triplet}.zip...`,
`tar -xf ${triplet}.zip`,
`echo Downloading Intel SDE...`,
`curl.exe -fsSL -o sde.tar.xz "${SDE_URL}"`,
`echo Extracting Intel SDE...`,
`7z x -y sde.tar.xz`,
`7z x -y sde.tar`,
`ren sde-external-${SDE_VERSION}-win sde-external`,
]
: [
`buildkite-agent artifact download '*.zip' . --step ${targetKey}-build-bun`,
`unzip -o '${triplet}.zip'`,
`chmod +x ${triplet}/bun`,
];
return {
key: `${targetKey}-verify-baseline`,
@@ -614,57 +663,10 @@ function getVerifyBaselineStep(platform, options) {
agents: getLinkBunAgent(platform, options),
retry: getRetry(),
cancel_on_build_failing: isMergeQueue(),
timeout_in_minutes: 5,
timeout_in_minutes: hasWebKitChanges(options) ? 30 : 10,
command: [
`buildkite-agent artifact download '*.zip' . --step ${targetKey}-build-bun`,
`unzip -o '${getTargetTriplet(platform)}.zip'`,
`unzip -o '${getTargetTriplet(platform)}-profile.zip'`,
`chmod +x ${getTargetTriplet(platform)}/bun ${getTargetTriplet(platform)}-profile/bun-profile`,
`./scripts/verify-baseline-cpu.sh --arch ${archArg} --binary ${getTargetTriplet(platform)}/bun`,
`./scripts/verify-baseline-cpu.sh --arch ${archArg} --binary ${getTargetTriplet(platform)}-profile/bun-profile`,
],
};
}
/**
* Returns true if the PR modifies SetupWebKit.cmake (WebKit version changes).
* JIT stress tests under QEMU should run when WebKit is updated to catch
* JIT-generated code that uses unsupported CPU instructions.
* @param {PipelineOptions} options
* @returns {boolean}
*/
function hasWebKitChanges(options) {
const { changedFiles = [] } = options;
return changedFiles.some(file => file.includes("SetupWebKit.cmake"));
}
/**
* Returns a step that runs JSC JIT stress tests under QEMU.
* This verifies that JIT-compiled code doesn't use CPU instructions
* beyond the baseline target (no AVX on x64, no LSE on aarch64).
* @param {Platform} platform
* @param {PipelineOptions} options
* @returns {Step}
*/
function getJitStressTestStep(platform, options) {
const { arch } = platform;
const targetKey = getTargetKey(platform);
const archArg = arch === "x64" ? "x64" : "aarch64";
return {
key: `${targetKey}-jit-stress-qemu`,
label: `${getTargetLabel(platform)} - jit-stress-qemu`,
depends_on: [`${targetKey}-build-bun`],
agents: getLinkBunAgent(platform, options),
retry: getRetry(),
cancel_on_build_failing: isMergeQueue(),
// JIT stress tests are slow under QEMU emulation
timeout_in_minutes: 30,
command: [
`buildkite-agent artifact download '*.zip' . --step ${targetKey}-build-bun`,
`unzip -o '${getTargetTriplet(platform)}.zip'`,
`chmod +x ${getTargetTriplet(platform)}/bun`,
`./scripts/verify-jit-stress-qemu.sh --arch ${archArg} --binary ${getTargetTriplet(platform)}/bun`,
...setupCommands,
`bun scripts/verify-baseline.ts --binary ${triplet}/${os === "windows" ? "bun.exe" : "bun"} --emulator ${emulator}${jitStressFlag}`,
],
};
}
@@ -1264,10 +1266,6 @@ async function getPipeline(options = {}) {
if (needsBaselineVerification(target)) {
steps.push(getVerifyBaselineStep(target, options));
// Run JIT stress tests under QEMU when WebKit is updated
if (hasWebKitChanges(options)) {
steps.push(getJitStressTestStep(target, options));
}
}
return getStepWithDependsOn(
@@ -1349,6 +1347,10 @@ async function main() {
{ headers: { Authorization: `Bearer ${getSecret("GITHUB_TOKEN")}` } },
);
const doc = await res.json();
if (!Array.isArray(doc)) {
console.error(`-> page ${i}, unexpected response:`, JSON.stringify(doc));
break;
}
console.log(`-> page ${i}, found ${doc.length} items`);
if (doc.length === 0) break;
for (const { filename, status } of doc) {
@@ -1363,7 +1365,7 @@ async function main() {
} catch (e) {
console.error(e);
}
if (allFiles.every(filename => filename.startsWith("docs/"))) {
if (allFiles.length > 0 && allFiles.every(filename => filename.startsWith("docs/"))) {
console.log(`- PR is only docs, skipping tests!`);
return;
}
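The `allFiles.length > 0` guard added above matters because `Array.prototype.every` is vacuously true on an empty array. A minimal illustration (the `isDocsOnly` helper name is hypothetical):

```javascript
// [].every() returns true for any predicate (vacuous truth), so without the
// length check an empty changed-files list would be classified as docs-only
// and the pipeline would skip all tests.
const isDocsOnly = files => files.length > 0 && files.every(f => f.startsWith("docs/"));

console.log([].every(f => f.startsWith("docs/")));   // true — the vacuous case
console.log(isDocsOnly([]));                         // false
console.log(isDocsOnly(["docs/a.md"]));              // true
console.log(isDocsOnly(["docs/a.md", "src/x.zig"])); // false
```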

.github/workflows/on-slop.yml vendored Normal file
View File

@@ -0,0 +1,33 @@
name: Close AI Slop PRs
on:
pull_request_target:
types: [labeled]
jobs:
on-slop:
runs-on: ubuntu-latest
if: github.event.label.name == 'slop' && github.repository == 'oven-sh/bun'
permissions:
issues: write
pull-requests: write
steps:
- name: Comment and close PR
uses: actions/github-script@v7
with:
script: |
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.payload.pull_request.number,
body: 'This PR has been closed because it was flagged as AI slop.\n\nMany AI-generated PRs are fine, but this one was identified as having one or more of the following issues:\n- Fails to verify the problem actually exists\n- Fails to test that the fix works\n- Makes incorrect assumptions about the codebase\n- Submits changes that are incomplete or misleading\n\nIf you believe this was done in error, please leave a comment explaining why.'
});
await github.rest.pulls.update({
owner: context.repo.owner,
repo: context.repo.repo,
pull_number: context.payload.pull_request.number,
title: 'ai slop',
body: 'This PR has been marked as AI slop and the description has been updated to avoid confusion or misleading reviewers.\n\nMany AI PRs are fine, but sometimes they submit a PR too early, fail to test if the problem is real, fail to reproduce the problem, or fail to test that the problem is fixed. If you think this PR is not AI slop, please leave a comment.',
state: 'closed'
});

View File

@@ -13,6 +13,11 @@ else()
set(LSHPACK_INCLUDES .)
endif()
# Suppress all warnings from vendored lshpack on Windows (clang-cl)
if(WIN32)
set(LSHPACK_CMAKE_ARGS "-DCMAKE_C_FLAGS=-w")
endif()
register_cmake_command(
TARGET
lshpack
@@ -28,6 +33,7 @@ register_cmake_command(
# _lshpack_enc_get_static_name in libls-hpack.a(lshpack.c.o)
# _update_hash in libls-hpack.a(lshpack.c.o)
-DCMAKE_BUILD_TYPE=Release
${LSHPACK_CMAKE_ARGS}
INCLUDES
${LSHPACK_INCLUDES}
)

View File

@@ -79,12 +79,22 @@ endif()
if(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|arm64|ARM64|AARCH64" AND NOT APPLE)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_NO_OPT_ARCH=ON)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_OPT_SIMD=ON)
list(APPEND MIMALLOC_CMAKE_ARGS "-DCMAKE_C_FLAGS=-moutline-atomics")
if(NOT WIN32)
list(APPEND MIMALLOC_CMAKE_ARGS "-DCMAKE_C_FLAGS=-moutline-atomics")
endif()
elseif(NOT ENABLE_BASELINE)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_OPT_ARCH=ON)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_OPT_SIMD=ON)
endif()
# Suppress all warnings from mimalloc on Windows — it's vendored C code compiled
# as C++ (MI_USE_CXX=ON) which triggers many clang-cl warnings (-Wold-style-cast,
# -Wzero-as-null-pointer-constant, -Wc++98-compat-pedantic, etc.)
if(WIN32)
list(APPEND MIMALLOC_CMAKE_ARGS "-DCMAKE_C_FLAGS=-w")
list(APPEND MIMALLOC_CMAKE_ARGS "-DCMAKE_CXX_FLAGS=-w")
endif()
if(WIN32)
if(DEBUG)
set(MIMALLOC_LIBRARY mimalloc-static-debug)

View File

@@ -7,9 +7,16 @@ register_repository(
12882eee073cfe5c7621bcfadf679e1372d4537b
)
# Suppress all warnings from vendored tinycc on Windows (clang-cl)
if(WIN32)
set(TINYCC_CMAKE_ARGS "-DCMAKE_C_FLAGS=-w")
endif()
register_cmake_command(
TARGET
tinycc
ARGS
${TINYCC_CMAKE_ARGS}
LIBRARIES
tcc
)

scripts/verify-baseline.ts Normal file
View File

@@ -0,0 +1,233 @@
// Verify that a Bun binary doesn't use CPU instructions beyond its baseline target.
//
// Detects the platform and chooses the appropriate emulator:
// Linux x64: QEMU with Nehalem CPU (no AVX)
// Linux arm64: QEMU with Cortex-A53 (no LSE/SVE)
// Windows x64: Intel SDE with -nhm (no AVX)
//
// Usage:
// bun scripts/verify-baseline.ts --binary ./bun --emulator /usr/bin/qemu-x86_64
// bun scripts/verify-baseline.ts --binary ./bun.exe --emulator ./sde.exe
import { readdirSync } from "node:fs";
import { basename, dirname, join, resolve } from "node:path";
const { parseArgs } = require("node:util");
const { values } = parseArgs({
args: process.argv.slice(2),
options: {
binary: { type: "string" },
emulator: { type: "string" },
"jit-stress": { type: "boolean", default: false },
},
strict: true,
});
const binary = resolve(values.binary!);
function resolveEmulator(name: string): string {
const found = Bun.which(name);
if (found) return found;
// Try without -static suffix (e.g. qemu-aarch64 instead of qemu-aarch64-static)
if (name.endsWith("-static")) {
const fallback = Bun.which(name.slice(0, -"-static".length));
if (fallback) return fallback;
}
// Last resort: resolve as a relative path (e.g. sde-external/sde.exe)
return resolve(name);
}
const emulatorPath = resolveEmulator(values.emulator!);
const scriptDir = dirname(import.meta.path);
const repoRoot = resolve(scriptDir, "..");
const fixturesDir = join(repoRoot, "test", "js", "bun", "jsc-stress", "fixtures");
const wasmFixturesDir = join(fixturesDir, "wasm");
const preloadPath = join(repoRoot, "test", "js", "bun", "jsc-stress", "preload.js");
// Platform detection
const isWindows = process.platform === "win32";
const isAarch64 = process.arch === "arm64";
// SDE outputs this when a chip-check violation occurs
const SDE_VIOLATION_PATTERN = /SDE-ERROR:.*not valid for specified chip/i;
// Configure emulator based on platform
const config = isWindows
? {
runnerCmd: [emulatorPath, "-nhm", "--"],
cpuDesc: "Nehalem (SSE4.2, no AVX/AVX2/AVX512)",
// SDE must run from its own directory for Pin DLL resolution
cwd: dirname(emulatorPath),
}
: isAarch64
? {
runnerCmd: [emulatorPath, "-cpu", "cortex-a53"],
cpuDesc: "Cortex-A53 (ARMv8.0-A+CRC, no LSE/SVE)",
cwd: undefined,
}
: {
runnerCmd: [emulatorPath, "-cpu", "Nehalem"],
cpuDesc: "Nehalem (SSE4.2, no AVX/AVX2/AVX512)",
cwd: undefined,
};
function isInstructionViolation(exitCode: number, output: string): boolean {
if (isWindows) return SDE_VIOLATION_PATTERN.test(output);
return exitCode === 132; // 128 + SIGILL (signal 4)
}
console.log(`--- Verifying ${basename(binary)} on ${config.cpuDesc}`);
console.log(` Binary: ${binary}`);
console.log(` Emulator: ${config.runnerCmd.join(" ")}`);
console.log();
let instructionFailures = 0;
let otherFailures = 0;
let passed = 0;
const failedTests: string[] = [];
interface RunTestOptions {
cwd?: string;
/** Tee output live to the console while still capturing it for analysis */
live?: boolean;
}
/** Read a stream, write each chunk to a writable, and return the full text. */
async function teeStream(stream: ReadableStream<Uint8Array>, output: NodeJS.WriteStream): Promise<string> {
const chunks: Uint8Array[] = [];
for await (const chunk of stream) {
chunks.push(chunk);
output.write(chunk);
}
return Buffer.concat(chunks).toString();
}
async function runTest(label: string, binaryArgs: string[], options?: RunTestOptions): Promise<boolean> {
console.log(`+++ ${label}`);
const start = performance.now();
const live = options?.live ?? false;
const proc = Bun.spawn([...config.runnerCmd, binary, ...binaryArgs], {
// config.cwd takes priority — SDE on Windows must run from its own directory for Pin DLL resolution
cwd: config.cwd ?? options?.cwd,
stdout: "pipe",
stderr: "pipe",
});
let stdout: string;
let stderr: string;
if (live) {
[stdout, stderr] = await Promise.all([
teeStream(proc.stdout as ReadableStream<Uint8Array>, process.stdout),
teeStream(proc.stderr as ReadableStream<Uint8Array>, process.stderr),
proc.exited,
]);
} else {
[stdout, stderr] = await Promise.all([
new Response(proc.stdout).text(),
new Response(proc.stderr).text(),
proc.exited,
]);
}
const exitCode = proc.exitCode!;
const elapsed = ((performance.now() - start) / 1000).toFixed(1);
const output = stdout + "\n" + stderr;
if (exitCode === 0) {
if (!live && stdout.trim()) console.log(stdout.trim());
console.log(` PASS (${elapsed}s)`);
passed++;
return true;
}
if (isInstructionViolation(exitCode, output)) {
if (!live && output.trim()) console.log(output.trim());
console.log();
console.log(` FAIL: CPU instruction violation detected (${elapsed}s)`);
if (isAarch64) {
console.log(" The aarch64 build targets Cortex-A53 (ARMv8.0-A+CRC).");
console.log(" LSE atomics, SVE, and dotprod instructions are not allowed.");
} else {
console.log(" The baseline x64 build targets Nehalem (SSE4.2).");
console.log(" AVX, AVX2, and AVX512 instructions are not allowed.");
}
instructionFailures++;
failedTests.push(label);
} else {
if (!live && output.trim()) console.log(output.trim());
console.log(` WARN: exit code ${exitCode} (${elapsed}s, not a CPU instruction issue)`);
otherFailures++;
}
return false;
}
// Phase 1: SIMD code path verification (always runs)
const simdTestPath = join(repoRoot, "test", "js", "bun", "jsc-stress", "fixtures", "simd-baseline.test.ts");
await runTest("SIMD baseline tests", ["test", simdTestPath], { live: true });
// Phase 2: JIT stress fixtures (only with --jit-stress, e.g. on WebKit changes)
if (values["jit-stress"]) {
const jsFixtures = readdirSync(fixturesDir)
.filter(f => f.endsWith(".js"))
.sort();
console.log();
console.log(`--- JS fixtures (DFG/FTL) — ${jsFixtures.length} tests`);
for (let i = 0; i < jsFixtures.length; i++) {
const fixture = jsFixtures[i];
await runTest(`[${i + 1}/${jsFixtures.length}] ${fixture}`, ["--preload", preloadPath, join(fixturesDir, fixture)]);
}
const wasmFixtures = readdirSync(wasmFixturesDir)
.filter(f => f.endsWith(".js"))
.sort();
console.log();
console.log(`--- Wasm fixtures (BBQ/OMG) — ${wasmFixtures.length} tests`);
for (let i = 0; i < wasmFixtures.length; i++) {
const fixture = wasmFixtures[i];
await runTest(
`[${i + 1}/${wasmFixtures.length}] ${fixture}`,
["--preload", preloadPath, join(wasmFixturesDir, fixture)],
{ cwd: wasmFixturesDir },
);
}
} else {
console.log();
console.log("--- Skipping JIT stress fixtures (pass --jit-stress to enable)");
}
// Summary
console.log();
console.log("--- Summary");
console.log(` Passed: ${passed}`);
console.log(` Instruction failures: ${instructionFailures}`);
console.log(` Other failures: ${otherFailures} (warnings, not CPU instruction issues)`);
console.log();
if (instructionFailures > 0) {
console.error(" FAILED: Code uses unsupported CPU instructions.");
// Report to Buildkite annotations tab
const platform = isWindows ? "Windows x64" : isAarch64 ? "Linux aarch64" : "Linux x64";
const annotation = [
`<details>`,
`<summary>CPU instruction violation on ${platform} — ${instructionFailures} failed</summary>`,
`<p>The baseline build uses instructions not available on <code>${config.cpuDesc}</code>.</p>`,
`<ul>${failedTests.map(t => `<li><code>${t}</code></li>`).join("")}</ul>`,
`</details>`,
].join("\n");
Bun.spawnSync(["buildkite-agent", "annotate", "--append", "--style", "error", "--context", "verify-baseline"], {
stdin: new Blob([annotation]),
});
process.exit(1);
}
if (otherFailures > 0) {
console.log(" Some tests failed for reasons unrelated to CPU instructions.");
}
console.log(` All baseline verification passed on ${config.cpuDesc}.`);
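The `exitCode === 132` check in `isInstructionViolation` above relies on the POSIX convention that a process killed by signal N is reported with exit status 128 + N. A minimal sketch (the `isSigillExit` helper name is hypothetical):

```javascript
// POSIX shells report a process killed by signal N with exit status 128 + N.
// SIGILL (illegal instruction) is signal 4, so an emulated binary that hits
// an instruction unsupported by the baseline CPU exits with 128 + 4 = 132.
const SIGILL = 4;
const isSigillExit = exitCode => exitCode === 128 + SIGILL;

console.log(isSigillExit(132)); // true  — CPU instruction violation
console.log(isSigillExit(1));   // false — ordinary test failure
```

Windows has no equivalent signal convention, which is why the SDE path matches on the `SDE-ERROR:` output pattern instead.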

View File

@@ -21,16 +21,3 @@ export const gc = fn({
},
ret: t.usize,
});
export const StringWidthOptions = t.dictionary({
countAnsiEscapeCodes: t.boolean.default(false),
ambiguousIsNarrow: t.boolean.default(true),
});
export const stringWidth = fn({
args: {
str: t.DOMString.default(""),
opts: StringWidthOptions.default({}),
},
ret: t.usize,
});

View File

@@ -34,6 +34,7 @@ pub const BunObject = struct {
pub const sha = toJSCallback(host_fn.wrapStaticMethod(Crypto.SHA512_256, "hash_", true));
pub const shellEscape = toJSCallback(Bun.shellEscape);
pub const shrink = toJSCallback(Bun.shrink);
pub const stringWidth = toJSCallback(Bun.stringWidth);
pub const sleepSync = toJSCallback(Bun.sleepSync);
pub const spawn = toJSCallback(host_fn.wrapStaticMethod(api.Subprocess, "spawn", false));
pub const spawnSync = toJSCallback(host_fn.wrapStaticMethod(api.Subprocess, "spawnSync", false));
@@ -179,6 +180,7 @@ pub const BunObject = struct {
@export(&BunObject.sha, .{ .name = callbackName("sha") });
@export(&BunObject.shellEscape, .{ .name = callbackName("shellEscape") });
@export(&BunObject.shrink, .{ .name = callbackName("shrink") });
@export(&BunObject.stringWidth, .{ .name = callbackName("stringWidth") });
@export(&BunObject.sleepSync, .{ .name = callbackName("sleepSync") });
@export(&BunObject.spawn, .{ .name = callbackName("spawn") });
@export(&BunObject.spawnSync, .{ .name = callbackName("spawnSync") });
@@ -1382,14 +1384,8 @@ pub fn getUnsafe(globalThis: *jsc.JSGlobalObject, _: *jsc.JSObject) jsc.JSValue
return UnsafeObject.create(globalThis);
}
pub fn stringWidth(str: bun.String, opts: gen.StringWidthOptions) usize {
if (str.length() == 0)
return 0;
if (opts.count_ansi_escape_codes)
return str.visibleWidth(!opts.ambiguous_is_narrow);
return str.visibleWidthExcludeANSIColors(!opts.ambiguous_is_narrow);
pub fn stringWidth(globalObject: *jsc.JSGlobalObject, callFrame: *jsc.CallFrame) bun.JSError!jsc.JSValue {
return bun.String.jsGetStringWidth(globalObject, callFrame);
}
/// EnvironmentVariables is runtime defined.

View File

@@ -50,23 +50,28 @@ fn getArgv0(globalThis: *jsc.JSGlobalObject, PATH: []const u8, cwd: []const u8,
/// `argv` for `Bun.spawn` & `Bun.spawnSync`
fn getArgv(globalThis: *jsc.JSGlobalObject, args: JSValue, PATH: []const u8, cwd: []const u8, argv0: *?[*:0]const u8, allocator: std.mem.Allocator, argv: *std.array_list.Managed(?[*:0]const u8)) bun.JSError!void {
var cmds_array = try args.arrayIterator(globalThis);
// + 1 for argv0
// + 1 for null terminator
argv.* = try @TypeOf(argv.*).initCapacity(allocator, cmds_array.len + 2);
if (args.isEmptyOrUndefinedOrNull()) {
return globalThis.throwInvalidArguments("cmd must be an array of strings", .{});
}
var cmds_array = try args.arrayIterator(globalThis);
if (cmds_array.len == 0) {
return globalThis.throwInvalidArguments("cmd must not be empty", .{});
}
if (cmds_array.len > std.math.maxInt(u32) - 2) {
return globalThis.throwInvalidArguments("cmd array is too large", .{});
}
// + 1 for argv0
// + 1 for null terminator
argv.* = try @TypeOf(argv.*).initCapacity(allocator, @as(usize, cmds_array.len) + 2);
const argv0_result = try getArgv0(globalThis, PATH, cwd, argv0.*, (try cmds_array.next()).?, allocator);
argv0.* = argv0_result.argv0.ptr;
argv.appendAssumeCapacity(argv0_result.arg0.ptr);
try argv.append(argv0_result.arg0.ptr);
var arg_index: usize = 1;
while (try cmds_array.next()) |value| {
@@ -78,7 +83,7 @@ fn getArgv(globalThis: *jsc.JSGlobalObject, args: JSValue, PATH: []const u8, cwd
return globalThis.ERR(.INVALID_ARG_VALUE, "The argument 'args[{d}]' must be a string without null bytes. Received \"{f}\"", .{ arg_index, arg.toZigString() }).throw();
}
argv.appendAssumeCapacity(try arg.toOwnedSliceZ(allocator));
try argv.append(try arg.toOwnedSliceZ(allocator));
arg_index += 1;
}

View File

@@ -1087,8 +1087,10 @@ pub const WindowsSpawnOptions = struct {
dup2: struct { out: bun.jsc.Subprocess.StdioKind, to: bun.jsc.Subprocess.StdioKind },
pub fn deinit(this: *const Stdio) void {
if (this.* == .buffer) {
this.buffer.closeAndDestroy();
switch (this.*) {
.buffer => |pipe| pipe.closeAndDestroy(),
.ipc => |pipe| pipe.closeAndDestroy(),
else => {},
}
}
};
@@ -1629,9 +1631,10 @@ pub fn spawnProcessWindows(
stdio.flags = uv.UV_INHERIT_FD;
stdio.data.fd = fd_i;
},
.ipc => |my_pipe| {
// ipc option inside stdin, stderr or stdout are not supported
bun.default_allocator.destroy(my_pipe);
.ipc => {
// ipc option inside stdin, stderr or stdout is not supported.
// Don't free the pipe here — the caller owns it and will
// clean it up via WindowsSpawnOptions.deinit().
stdio.flags = uv.UV_IGNORE;
},
.ignore => {
@@ -1829,7 +1832,7 @@ pub const sync = struct {
.ignore => .ignore,
.buffer => .{
.buffer = if (Environment.isWindows)
bun.handleOom(bun.default_allocator.create(bun.windows.libuv.Pipe)),
bun.new(bun.windows.libuv.Pipe, std.mem.zeroes(bun.windows.libuv.Pipe)),
},
};
}

View File

@@ -851,6 +851,9 @@ pub fn getsockname(this: *Listener, globalThis: *jsc.JSGlobalObject, callFrame:
}
const out = callFrame.argumentsAsArray(1)[0];
if (!out.isObject()) {
return globalThis.throwInvalidArguments("Expected object", .{});
}
const socket = this.listener.uws;
var buf: [64]u8 = [_]u8{0} ** 64;

View File

@@ -187,7 +187,7 @@ pub fn create(globalThis: *jsc.JSGlobalObject, socket: SocketType) *WindowsNamed
});
// named_pipe owns the pipe (PipeWriter owns the pipe and will close and deinit it)
this.named_pipe = uws.WindowsNamedPipe.from(bun.handleOom(bun.default_allocator.create(uv.Pipe)), .{
this.named_pipe = uws.WindowsNamedPipe.from(bun.new(uv.Pipe, std.mem.zeroes(uv.Pipe)), .{
.ctx = this,
.ref_ctx = @ptrCast(&WindowsNamedPipeContext.ref),
.deref_ctx = @ptrCast(&WindowsNamedPipeContext.deref),
@@ -288,6 +288,8 @@ pub fn deinit(this: *WindowsNamedPipeContext) void {
bun.destroy(this);
}
const std = @import("std");
const bun = @import("bun");
const Output = bun.Output;
const jsc = bun.jsc;

View File

@@ -235,10 +235,10 @@ pub const Stdio = union(enum) {
return .{ .err = .blob_used_as_out };
}
break :brk .{ .buffer = bun.handleOom(bun.default_allocator.create(uv.Pipe)) };
break :brk .{ .buffer = createZeroedPipe() };
},
.ipc => .{ .ipc = bun.handleOom(bun.default_allocator.create(uv.Pipe)) },
.capture, .pipe, .array_buffer, .readable_stream => .{ .buffer = bun.handleOom(bun.default_allocator.create(uv.Pipe)) },
.ipc => .{ .ipc = createZeroedPipe() },
.capture, .pipe, .array_buffer, .readable_stream => .{ .buffer = createZeroedPipe() },
.fd => |fd| .{ .pipe = fd },
.dup2 => .{ .dup2 = .{ .out = stdio.dup2.out, .to = stdio.dup2.to } },
.path => |pathlike| .{ .path = pathlike.slice() },
@@ -487,12 +487,18 @@ pub const Stdio = union(enum) {
}
};
/// Allocate a zero-initialized uv.Pipe. Zero-init ensures `pipe.loop` is null
/// for pipes that never reach `uv_pipe_init`, so `closeAndDestroy` can tell
/// whether `uv_close` is needed.
fn createZeroedPipe() *uv.Pipe {
return bun.new(uv.Pipe, std.mem.zeroes(uv.Pipe));
}
const std = @import("std");
const bun = @import("bun");
const Environment = bun.Environment;
const Output = bun.Output;
const default_allocator = bun.default_allocator;
const uv = bun.windows.libuv;
const jsc = bun.jsc;

View File

@@ -995,7 +995,7 @@ JSC_DEFINE_HOST_FUNCTION(functionFileURLToPath, (JSC::JSGlobalObject * globalObj
stderr BunObject_lazyPropCb_wrap_stderr DontDelete|PropertyCallback
stdin BunObject_lazyPropCb_wrap_stdin DontDelete|PropertyCallback
stdout BunObject_lazyPropCb_wrap_stdout DontDelete|PropertyCallback
stringWidth Generated::BunObject::jsStringWidth DontDelete|Function 2
stringWidth BunObject_callback_stringWidth DontDelete|Function 2
stripANSI jsFunctionBunStripANSI DontDelete|Function 1
wrapAnsi jsFunctionBunWrapAnsi DontDelete|Function 3
Terminal BunObject_lazyPropCb_wrap_Terminal DontDelete|PropertyCallback

View File

@@ -1164,6 +1164,7 @@ void JSCommonJSModule::visitChildrenImpl(JSCell* cell, Visitor& visitor)
visitor.appendHidden(thisObject->m_dirname);
visitor.appendHidden(thisObject->m_paths);
visitor.appendHidden(thisObject->m_overriddenParent);
visitor.appendHidden(thisObject->m_overriddenCompile);
visitor.appendHidden(thisObject->m_childrenValue);
visitor.appendValues(thisObject->m_children.begin(), thisObject->m_children.size());
}

View File

@@ -703,6 +703,17 @@ void NodeVMSpecialSandbox::finishCreation(VM& vm)
const JSC::ClassInfo NodeVMSpecialSandbox::s_info = { "NodeVMSpecialSandbox"_s, &Base::s_info, nullptr, nullptr, CREATE_METHOD_TABLE(NodeVMSpecialSandbox) };
template<typename Visitor>
void NodeVMSpecialSandbox::visitChildrenImpl(JSCell* cell, Visitor& visitor)
{
auto* thisObject = jsCast<NodeVMSpecialSandbox*>(cell);
ASSERT_GC_OBJECT_INHERITS(thisObject, info());
Base::visitChildren(thisObject, visitor);
visitor.append(thisObject->m_parentGlobal);
}
DEFINE_VISIT_CHILDREN(NodeVMSpecialSandbox);
NodeVMGlobalObject::NodeVMGlobalObject(JSC::VM& vm, JSC::Structure* structure, NodeVMContextOptions contextOptions, JSValue importer)
: Base(vm, structure, &globalObjectMethodTable())
, m_dynamicImportCallback(vm, this, importer)

View File

@@ -85,6 +85,7 @@ public:
static NodeVMSpecialSandbox* create(VM& vm, Structure* structure, NodeVMGlobalObject* globalObject);
DECLARE_INFO;
DECLARE_VISIT_CHILDREN;
template<typename, JSC::SubspaceAccess mode> static JSC::GCClient::IsoSubspace* subspaceFor(JSC::VM& vm);
static Structure* createStructure(JSC::VM& vm, JSC::JSGlobalObject* globalObject, JSC::JSValue prototype);

View File

@@ -2020,7 +2020,7 @@ JSC_DEFINE_HOST_FUNCTION(jsSQLStatementSetPrototypeFunction, (JSGlobalObject * l
return {};
}
castedThis->userPrototype.set(vm, classObject, prototype.getObject());
castedThis->userPrototype.set(vm, castedThis, prototype.getObject());
// Force the prototypes to be re-created
if (castedThis->version_db) {

View File

@@ -31,8 +31,17 @@
#include <utility>
#include <vector>
#ifdef _WIN32
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wmicrosoft-include"
#endif
#define v8 real_v8
#define private public
#include "node/v8.h"
#undef private
#undef v8
#ifdef _WIN32
#pragma clang diagnostic pop
#endif

View File

@@ -923,9 +923,9 @@ pub const SendQueue = struct {
pub fn windowsConfigureClient(this: *SendQueue, pipe_fd: bun.FileDescriptor) !void {
log("configureClient", .{});
const ipc_pipe = bun.handleOom(bun.default_allocator.create(uv.Pipe));
const ipc_pipe = bun.new(uv.Pipe, std.mem.zeroes(uv.Pipe));
ipc_pipe.init(uv.Loop.get(), true).unwrap() catch |err| {
bun.default_allocator.destroy(ipc_pipe);
bun.destroy(ipc_pipe);
return err;
};
ipc_pipe.open(pipe_fd).unwrap() catch |err| {

View File

@@ -75,6 +75,7 @@ pub const PendingValue = struct {
onStartBuffering: ?*const fn (ctx: *anyopaque) void = null,
onStartStreaming: ?*const fn (ctx: *anyopaque) jsc.WebCore.DrainResult = null,
onReadableStreamAvailable: ?*const fn (ctx: *anyopaque, globalThis: *jsc.JSGlobalObject, readable: jsc.WebCore.ReadableStream) void = null,
onStreamCancelled: ?*const fn (ctx: ?*anyopaque) void = null,
size_hint: Blob.SizeType = 0,
deinit: bool = false,
@@ -495,6 +496,13 @@ pub const Value = union(Tag) {
.globalThis = globalThis,
});
if (locked.onStreamCancelled) |onCancelled| {
if (locked.task) |task| {
reader.cancel_handler = onCancelled;
reader.cancel_ctx = task;
}
}
reader.context.setup();
if (drain_result == .estimated_size) {

View File

@@ -442,6 +442,8 @@ pub fn NewSource(
close_handler: ?*const fn (?*anyopaque) void = null,
close_ctx: ?*anyopaque = null,
close_jsvalue: jsc.Strong.Optional = .empty,
cancel_handler: ?*const fn (?*anyopaque) void = null,
cancel_ctx: ?*anyopaque = null,
globalThis: *JSGlobalObject = undefined,
this_jsvalue: jsc.JSValue = .zero,
is_closed: bool = false,
@@ -493,6 +495,10 @@ pub fn NewSource(
this.cancelled = true;
onCancel(&this.context);
if (this.cancel_handler) |handler| {
this.cancel_handler = null;
handler(this.cancel_ctx);
}
}
pub fn onClose(this: *This) void {

View File

@@ -231,6 +231,7 @@ pub const FetchTasklet = struct {
response.unref();
}
this.clearStreamCancelHandler();
this.readable_stream_ref.deinit();
this.scheduled_response_buffer.deinit();
@@ -363,6 +364,7 @@ pub const FetchTasklet = struct {
bun.default_allocator,
);
} else {
this.clearStreamCancelHandler();
var prev = this.readable_stream_ref;
this.readable_stream_ref = .{};
defer prev.deinit();
@@ -865,6 +867,25 @@ pub const FetchTasklet = struct {
};
}
/// Clear the cancel_handler on the ByteStream.Source to prevent use-after-free.
/// Must be called before releasing readable_stream_ref, while the Strong ref
/// still keeps the ReadableStream (and thus the ByteStream.Source) alive.
fn clearStreamCancelHandler(this: *FetchTasklet) void {
if (this.readable_stream_ref.get(this.global_this)) |readable| {
if (readable.ptr == .Bytes) {
const source = readable.ptr.Bytes.parent();
source.cancel_handler = null;
source.cancel_ctx = null;
}
}
}
fn onStreamCancelledCallback(ctx: ?*anyopaque) void {
const this = bun.cast(*FetchTasklet, ctx.?);
if (this.ignore_data) return;
this.ignoreRemainingResponseBody();
}
fn toBodyValue(this: *FetchTasklet) Body.Value {
if (this.getAbortError()) |err| {
return .{ .Error = err };
@@ -877,6 +898,7 @@ pub const FetchTasklet = struct {
.global = this.global_this,
.onStartStreaming = FetchTasklet.onStartStreamingHTTPResponseBodyCallback,
.onReadableStreamAvailable = FetchTasklet.onReadableStreamAvailable,
.onStreamCancelled = FetchTasklet.onStreamCancelledCallback,
},
};
return response;
@@ -930,7 +952,8 @@ pub const FetchTasklet = struct {
// we should not keep the process alive if we are ignoring the body
const vm = this.javascript_vm;
this.poll_ref.unref(vm);
// clean any remaining refereces
// clean any remaining references
this.clearStreamCancelHandler();
this.readable_stream_ref.deinit();
this.response.deinit();

View File

@@ -154,10 +154,8 @@ fn prepareCssAstsForChunkImpl(c: *LinkerContext, chunk: *Chunk, allocator: std.m
filter: {
// Filter out "@charset", "@import", and leading "@layer" rules
// TODO: we are doing simple version rn, only @import
for (ast.rules.v.items, 0..) |*rule, ruleidx| {
// if ((rule.* == .import and import_records[source_index.get()].at(rule.import.import_record_idx).flags.is_internal) or rule.* == .ignored) {} else {
if (rule.* == .import or rule.* == .ignored) {} else {
if (rule.* == .import or rule.* == .ignored or rule.* == .layer_statement) {} else {
// It's okay to do this because AST is allocated into arena
const reslice = ast.rules.v.items[ruleidx..];
ast.rules.v = .{

View File

@@ -564,8 +564,8 @@ pub fn runScriptsWithFilter(ctx: Command.Context) !noreturn {
.config = script,
.options = .{
.stdin = .ignore,
.stdout = if (Environment.isPosix) .buffer else .{ .buffer = try bun.default_allocator.create(bun.windows.libuv.Pipe) },
.stderr = if (Environment.isPosix) .buffer else .{ .buffer = try bun.default_allocator.create(bun.windows.libuv.Pipe) },
.stdout = if (Environment.isPosix) .buffer else .{ .buffer = bun.new(bun.windows.libuv.Pipe, std.mem.zeroes(bun.windows.libuv.Pipe)) },
.stderr = if (Environment.isPosix) .buffer else .{ .buffer = bun.new(bun.windows.libuv.Pipe, std.mem.zeroes(bun.windows.libuv.Pipe)) },
.cwd = std.fs.path.dirname(script.package_json_path) orelse "",
.windows = if (Environment.isWindows) .{ .loop = bun.jsc.EventLoopHandle.init(event_loop) },
.stream = true,

View File

@@ -762,8 +762,8 @@ pub fn run(ctx: Command.Context) !noreturn {
.color_idx = color_idx,
.options = .{
.stdin = .ignore,
.stdout = if (Environment.isPosix) .buffer else .{ .buffer = try bun.default_allocator.create(bun.windows.libuv.Pipe) },
.stderr = if (Environment.isPosix) .buffer else .{ .buffer = try bun.default_allocator.create(bun.windows.libuv.Pipe) },
.stdout = if (Environment.isPosix) .buffer else .{ .buffer = bun.new(bun.windows.libuv.Pipe, std.mem.zeroes(bun.windows.libuv.Pipe)) },
.stderr = if (Environment.isPosix) .buffer else .{ .buffer = bun.new(bun.windows.libuv.Pipe, std.mem.zeroes(bun.windows.libuv.Pipe)) },
.cwd = config.cwd,
.windows = if (Environment.isWindows) .{ .loop = bun.jsc.EventLoopHandle.init(event_loop) },
.stream = true,

View File

@@ -142,6 +142,36 @@ function resolveComplexArgumentStrategy(
}
}
function collectStringTemps(args: readonly { type: TypeImpl }[]): Map<number, string[]> {
const result = new Map<number, string[]>();
// Use the same JS argument indexing as the arg processing loop:
// skip virtual args entirely, increment for everything else.
let jsArgIdx = 0;
for (const arg of args) {
const type = arg.type;
if (type.isVirtualArgument()) continue;
if (type.isIgnoredUndefinedType()) {
jsArgIdx++;
continue;
}
if (type.isStringType()) {
result.set(jsArgIdx, [cpp.nextTemporaryName("wtfString")]);
} else if (type.kind === "dictionary") {
const temps: string[] = [];
for (const field of type.data as DictionaryField[]) {
if (field.type.isStringType()) {
temps.push(cpp.nextTemporaryName("wtfString"));
}
}
if (temps.length > 0) {
result.set(jsArgIdx, temps);
}
}
jsArgIdx++;
}
return result;
}
function emitCppCallToVariant(name: string, variant: Variant, dispatchFunctionName: string) {
cpp.line(`auto& vm = JSC::getVM(global);`);
cpp.line(`auto throwScope = DECLARE_THROW_SCOPE(vm);`);
@@ -157,6 +187,16 @@ function emitCppCallToVariant(name: string, variant: Variant, dispatchFunctionNa
communicationStruct.emitCpp(cppInternal, communicationStruct.name());
}
// Hoist all WTF::String temps to the dispatch scope so they outlive any
// BunString values that reference them. Bun::toString() does not ref the
// StringImpl, so the WTF::String must stay alive until the dispatch call.
const hoistedTemps = collectStringTemps(variant.args);
for (const temps of hoistedTemps.values()) {
for (const temp of temps) {
cpp.line(`WTF::String ${temp};`);
}
}
let i = 0;
for (const arg of variant.args) {
const type = arg.type;
@@ -201,6 +241,8 @@ function emitCppCallToVariant(name: string, variant: Variant, dispatchFunctionNa
/** If the final representation may include null */
const isNullable = type.flags.optional && !("default" in type.flags);
const argTemps = hoistedTemps.get(i);
if (isOptionalToUser) {
if (needDeclare) {
addHeaderForType(type);
@@ -215,7 +257,8 @@ function emitCppCallToVariant(name: string, variant: Variant, dispatchFunctionNa
cpp.line(`if (!${jsValueRef}.${isUndefinedOrNull}()) {`);
}
cpp.indent();
emitConvertValue(storageLocation, arg.type, jsValueRef, exceptionContext, "assign");
const hoistedTemp = type.isStringType() ? argTemps?.[0] : undefined;
emitConvertValue(storageLocation, arg.type, jsValueRef, exceptionContext, "assign", hoistedTemp, argTemps);
cpp.dedent();
if ("default" in type.flags) {
cpp.line(`} else {`);
@@ -229,7 +272,16 @@ function emitCppCallToVariant(name: string, variant: Variant, dispatchFunctionNa
}
cpp.line(`}`);
} else {
emitConvertValue(storageLocation, arg.type, jsValueRef, exceptionContext, needDeclare ? "declare" : "assign");
const hoistedTemp = type.isStringType() ? argTemps?.[0] : undefined;
emitConvertValue(
storageLocation,
arg.type,
jsValueRef,
exceptionContext,
needDeclare ? "declare" : "assign",
hoistedTemp,
argTemps,
);
}
i += 1;
@@ -424,6 +476,8 @@ function emitConvertValue(
jsValueRef: string,
exceptionContext: ExceptionContext,
decl: "declare" | "assign",
hoistedTemp?: string,
dictStringTemps?: string[],
) {
if (decl === "declare") {
addHeaderForType(type);
@@ -473,8 +527,12 @@ function emitConvertValue(
case "USVString":
case "DOMString":
case "ByteString": {
const temp = cpp.nextTemporaryName("wtfString");
cpp.line(`WTF::String ${temp} = WebCore::convert<WebCore::IDL${type.kind}>(*global, ${jsValueRef});`);
const temp = hoistedTemp ?? cpp.nextTemporaryName("wtfString");
if (hoistedTemp) {
cpp.line(`${temp} = WebCore::convert<WebCore::IDL${type.kind}>(*global, ${jsValueRef});`);
} else {
cpp.line(`WTF::String ${temp} = WebCore::convert<WebCore::IDL${type.kind}>(*global, ${jsValueRef});`);
}
cpp.line(`RETURN_IF_EXCEPTION(throwScope, {});`);
if (decl === "declare") {
@@ -484,8 +542,12 @@ function emitConvertValue(
break;
}
case "UTF8String": {
const temp = cpp.nextTemporaryName("wtfString");
cpp.line(`WTF::String ${temp} = WebCore::convert<WebCore::IDLDOMString>(*global, ${jsValueRef});`);
const temp = hoistedTemp ?? cpp.nextTemporaryName("wtfString");
if (hoistedTemp) {
cpp.line(`${temp} = WebCore::convert<WebCore::IDLDOMString>(*global, ${jsValueRef});`);
} else {
cpp.line(`WTF::String ${temp} = WebCore::convert<WebCore::IDLDOMString>(*global, ${jsValueRef});`);
}
cpp.line(`RETURN_IF_EXCEPTION(throwScope, {});`);
if (decl === "declare") {
@@ -498,7 +560,13 @@ function emitConvertValue(
if (decl === "declare") {
cpp.line(`${type.cppName()} ${storageLocation};`);
}
cpp.line(`auto did_convert = convert${type.cppInternalName()}(&${storageLocation}, global, ${jsValueRef});`);
if (dictStringTemps && dictStringTemps.length > 0) {
cpp.line(
`auto did_convert = convert${type.cppInternalName()}(&${storageLocation}, global, ${jsValueRef}, ${dictStringTemps.join(", ")});`,
);
} else {
cpp.line(`auto did_convert = convert${type.cppInternalName()}(&${storageLocation}, global, ${jsValueRef});`);
}
cpp.line(`RETURN_IF_EXCEPTION(throwScope, {});`);
cpp.line(`if (!did_convert) return {};`);
break;
@@ -582,10 +650,22 @@ function emitConvertDictionaryFunction(type: TypeImpl) {
addHeaderForType(type);
// Build WTF::String& params for string fields (caller owns the storage).
const stringFieldParams: string[] = [];
for (const field of fields) {
if (field.type.isStringType()) {
stringFieldParams.push(`WTF::String& ${field.key}_str`);
}
}
cpp.line(`// Internal dictionary parse for ${type.name()}`);
cpp.line(
`bool convert${type.cppInternalName()}(${type.cppName()}* result, JSC::JSGlobalObject* global, JSC::JSValue value) {`,
);
const params = [
`${type.cppName()}* result`,
`JSC::JSGlobalObject* global`,
`JSC::JSValue value`,
...stringFieldParams,
];
cpp.line(`bool convert${type.cppInternalName()}(${params.join(", ")}) {`);
cpp.indent();
cpp.line(`auto& vm = JSC::getVM(global);`);
@@ -610,9 +690,12 @@ function emitConvertDictionaryFunction(type: TypeImpl) {
);
cpp.line(` RETURN_IF_EXCEPTION(throwScope, false);`);
cpp.line(`}`);
// For string fields, use the caller-owned WTF::String& ref so the
// string data outlives this function and the BunString in the result.
const hoistedTemp = fieldType.isStringType() ? `${key}_str` : undefined;
cpp.line(`if (!propValue.isUndefined()) {`);
cpp.indent();
emitConvertValue(`result->${key}`, fieldType, "propValue", { type: "none" }, "assign");
emitConvertValue(`result->${key}`, fieldType, "propValue", { type: "none" }, "assign", hoistedTemp);
cpp.dedent();
cpp.line(`} else {`);
cpp.indent();

View File

@@ -452,6 +452,14 @@ pub fn CssRuleList(comptime AtRule: type) type {
}
bun.handleOom(rules.append(context.allocator, rule.*));
moved_rule = true;
// Non-style rules (e.g. @property, @keyframes) act as a barrier for
// style rule deduplication. We cannot safely merge identical style rules
// across such boundaries because the intervening at-rule may affect how
// the declarations are interpreted (e.g. @property defines a custom
// property that a :root rule above may set differently than one below).
style_rules.clearRetainingCapacity();
}
// MISSING SHIT HERE

View File

@@ -1415,15 +1415,23 @@ pub const Pipe = extern struct {
return @ptrCast(this);
}
/// Close the pipe handle and then free it in the close callback.
/// Use this when a pipe has been init'd but needs to be destroyed
/// (e.g. when open() fails after init() succeeded).
/// Close the pipe handle (if needed) and then free it.
/// Handles all states: never-initialized (loop == null), already closing,
/// or active. After uv_pipe_init the handle is in the event loop's
/// handle_queue; freeing without uv_close corrupts that list.
pub fn closeAndDestroy(this: *@This()) void {
this.close(&onCloseDestroy);
if (this.loop == null) {
// Never initialized — safe to free directly.
bun.destroy(this);
} else if (!this.isClosing()) {
// Initialized and not yet closing — must uv_close first.
this.close(&onCloseDestroy);
}
// else: already closing — the pending close callback owns the lifetime.
}
fn onCloseDestroy(handle: *@This()) callconv(.c) void {
bun.default_allocator.destroy(handle);
bun.destroy(handle);
}
};
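The three states the new closeAndDestroy() distinguishes can be modeled directly. A hedged sketch assuming a libuv-like asynchronous close (names illustrative, not libuv's real API):

```python
# Model of closeAndDestroy(): never initialized -> free now; active -> schedule
# an async close whose callback frees; already closing -> do nothing, the
# pending close callback owns the lifetime.
class Pipe:
    def __init__(self):
        self.loop = None          # set by init(); None means "never initialized"
        self.closing = False
        self.destroyed = False
        self.pending_close_cb = None

    def init(self, loop):
        self.loop = loop

    def close(self, cb):
        # uv_close is asynchronous: the callback runs later, on the loop.
        self.closing = True
        self.pending_close_cb = cb

    def close_and_destroy(self):
        if self.loop is None:
            self.destroyed = True                 # never in the handle queue
        elif not self.closing:
            self.close(lambda h: setattr(h, "destroyed", True))
        # else: already closing; the pending callback will free the handle

    def run_pending_close(self):
        # Stand-in for one turn of the event loop delivering close callbacks.
        if self.pending_close_cb:
            cb, self.pending_close_cb = self.pending_close_cb, None
            cb(self)
```

Freeing an initialized handle without the close step would correspond to corrupting the loop's handle_queue, which is exactly what the real fix avoids.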
const union_unnamed_416 = extern union {

View File

@@ -480,7 +480,9 @@ pub fn NewHTTPContext(comptime ssl: bool) type {
const DeadSocket = struct {
garbage: u8 = 0,
pub var dead_socket: DeadSocket = .{};
/// Must be aligned to `@alignOf(usize)` so that tagged pointer values
/// embedding this address pass the `@alignCast` in `bun.cast`.
pub var dead_socket: DeadSocket align(@alignOf(usize)) = .{};
};
var dead_socket = &DeadSocket.dead_socket;
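The alignment requirement exists because tagged pointers stash their tag in the low bits of an address, which works only if every legal address is a multiple of the alignment (so those bits are guaranteed zero). A small Python illustration of the arithmetic:

```python
# Why the sentinel must be usize-aligned: with 8-byte alignment the low
# 3 bits of any valid address are zero, so they are free to hold a tag.
ALIGN = 8                  # @alignOf(usize) on 64-bit targets (assumption)
TAG_MASK = ALIGN - 1       # low bits available for the tag

def tag_ptr(addr: int, tag: int) -> int:
    # A misaligned address has nonzero low bits; tagging it would corrupt it,
    # which is what the @alignCast in bun.cast guards against.
    assert addr % ALIGN == 0, "address must be aligned"
    assert 0 <= tag <= TAG_MASK
    return addr | tag

def untag_ptr(value: int) -> tuple[int, int]:
    return value & ~TAG_MASK, value & TAG_MASK
```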

View File

@@ -383,6 +383,7 @@ pub const PackageInstall = struct {
&[_]bun.OSPathSlice{},
&[_]bun.OSPathSlice{},
) catch |err| return Result.fail(err, .opening_cache_dir, @errorReturnTrace());
walker_.resolve_unknown_entry_types = true;
defer walker_.deinit();
const FileCopier = struct {
@@ -520,6 +521,7 @@ pub const PackageInstall = struct {
else
&[_]bun.OSPathSlice{},
) catch |err| bun.handleOom(err);
state.walker.resolve_unknown_entry_types = true;
if (!Environment.isWindows) {
state.subdir = destbase.makeOpenPath(bun.span(destpath), .{

View File

@@ -1011,6 +1011,7 @@ pub const PackageInstaller = struct {
.local_tarball => {
this.manager.enqueueTarballForReading(
dependency_id,
package_id,
alias.slice(this.lockfile.buffers.string_bytes.items),
resolution,
context,

View File

@@ -126,6 +126,7 @@ pub fn enqueueTarballForDownload(
pub fn enqueueTarballForReading(
this: *PackageManager,
dependency_id: DependencyID,
package_id: PackageID,
alias: string,
resolution: *const Resolution,
task_context: TaskCallbackContext,
@@ -144,6 +145,8 @@ pub fn enqueueTarballForReading(
if (task_queue.found_existing) return;
const integrity = this.lockfile.packages.items(.meta)[package_id].integrity;
this.task_batch.push(ThreadPool.Batch.from(enqueueLocalTarball(
this,
task_id,
@@ -151,6 +154,7 @@ pub fn enqueueTarballForReading(
alias,
path,
resolution.*,
integrity,
)));
}
@@ -1133,6 +1137,7 @@ pub fn enqueueDependencyWithMainAndSuccessFn(
this.lockfile.str(&dependency.name),
url,
res,
.{},
)));
},
.remote => {
@@ -1294,6 +1299,7 @@ fn enqueueLocalTarball(
name: string,
path: string,
resolution: Resolution,
integrity: Integrity,
) *ThreadPool.Task {
var task = this.preallocated_resolve_tasks.get();
task.* = Task{
@@ -1313,6 +1319,7 @@ fn enqueueLocalTarball(
.cache_dir = this.getCacheDirectory(),
.temp_dir = this.getTemporaryDirectory().handle,
.dependency_id = dependency_id,
.integrity = integrity,
.url = strings.StringOrTinyString.initAppendIfNeeded(
path,
*FileSystem.FilenameStore,
@@ -1886,6 +1893,7 @@ const DependencyID = bun.install.DependencyID;
const ExtractTarball = bun.install.ExtractTarball;
const Features = bun.install.Features;
const FolderResolution = bun.install.FolderResolution;
const Integrity = bun.install.Integrity;
const Npm = bun.install.Npm;
const PackageID = bun.install.PackageID;
const PackageNameHash = bun.install.PackageNameHash;

View File

@@ -193,6 +193,9 @@ pub fn processExtractedTarballPackage(
};
package.meta.setHasInstallScript(has_scripts);
if (data.integrity.tag.isSupported()) {
package.meta.integrity = data.integrity;
}
package = manager.lockfile.appendPackage(package) catch unreachable;
package_id.* = package.meta.id;

View File

@@ -25,21 +25,23 @@ pub inline fn run(this: *const ExtractTarball, log: *logger.Log, bytes: []const
}
var result = try this.extract(log, bytes);
// Compute and store SHA-512 integrity hash for GitHub tarballs so the
// lockfile can pin the exact tarball content. On subsequent installs the
// hash stored in the lockfile is forwarded via this.integrity and verified
// Compute and store SHA-512 integrity hash for GitHub / URL / local tarballs
// so the lockfile can pin the exact tarball content. On subsequent installs
// the hash stored in the lockfile is forwarded via this.integrity and verified
// above, preventing a compromised server from silently swapping the tarball.
if (this.resolution.tag == .github) {
if (this.integrity.tag.isSupported()) {
// Re-installing with an existing lockfile: integrity was already
// verified above, propagate the known value to ExtractData so that
// the lockfile keeps it on re-serialisation.
result.integrity = this.integrity;
} else {
// First install (no integrity in the lockfile yet): compute it.
result.integrity = .{ .tag = .sha512 };
Crypto.SHA512.hash(bytes, result.integrity.value[0..Crypto.SHA512.digest]);
}
switch (this.resolution.tag) {
.github, .remote_tarball, .local_tarball => {
if (this.integrity.tag.isSupported()) {
// Re-installing with an existing lockfile: integrity was already
// verified above, propagate the known value to ExtractData so that
// the lockfile keeps it on re-serialisation.
result.integrity = this.integrity;
} else {
// First install (no integrity in the lockfile yet): compute it.
result.integrity = Integrity.forBytes(bytes);
}
},
else => {},
}
return result;
@@ -566,7 +568,6 @@ const string = []const u8;
const Npm = @import("./npm.zig");
const std = @import("std");
const Crypto = @import("../sha.zig").Hashers;
const FileSystem = @import("../fs.zig").FileSystem;
const Integrity = @import("./integrity.zig").Integrity;
const Resolution = @import("./resolution.zig").Resolution;

View File

@@ -209,6 +209,9 @@ pub const ExtractData = struct {
path: string = "",
buf: []u8 = "",
} = null,
/// Integrity hash computed from the raw tarball bytes.
/// Used for HTTPS/local tarball dependencies where the hash
/// is not available from a registry manifest.
integrity: Integrity = .{},
};
@@ -270,9 +273,9 @@ pub const ExternalStringList = external.ExternalStringList;
pub const ExternalStringMap = external.ExternalStringMap;
pub const VersionSlice = external.VersionSlice;
pub const Integrity = @import("./integrity.zig").Integrity;
pub const Dependency = @import("./dependency.zig");
pub const Behavior = @import("./dependency.zig").Behavior;
pub const Integrity = @import("./integrity.zig").Integrity;
pub const Lockfile = @import("./lockfile.zig");
pub const PatchedDep = Lockfile.PatchedDep;

View File

@@ -180,6 +180,14 @@ pub const Integrity = extern struct {
}
}
/// Compute a sha512 integrity hash from raw bytes (e.g. a downloaded tarball).
pub fn forBytes(bytes: []const u8) Integrity {
const len = std.crypto.hash.sha2.Sha512.digest_length;
var value: [digest_buf_len]u8 = empty_digest_buf;
Crypto.SHA512.hash(bytes, value[0..len]);
return .{ .tag = .sha512, .value = value };
}
pub fn verify(this: *const Integrity, bytes: []const u8) bool {
return @call(bun.callmod_inline, verifyByTag, .{ this.tag, bytes, &this.value });
}
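Integrity.forBytes() above amounts to hashing the raw tarball bytes and pinning the digest in the lockfile. A rough Python equivalent using the npm-style SRI text encoding (the exact encoding Bun stores is its own; the `sha512-` prefix here is an assumption for illustration):

```python
# Compute and verify a sha512 integrity value over raw tarball bytes,
# mirroring forBytes() plus the verify-on-reinstall step.
import base64
import hashlib

def integrity_for_bytes(data: bytes) -> str:
    digest = hashlib.sha512(data).digest()
    return "sha512-" + base64.b64encode(digest).decode("ascii")

def verify(integrity: str, data: bytes) -> bool:
    # On subsequent installs the stored hash is recomputed and compared,
    # so a server cannot silently swap the tarball.
    return integrity == integrity_for_bytes(data)
```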

View File

@@ -1082,6 +1082,7 @@ pub fn installIsolatedPackages(
.local_tarball => {
manager.enqueueTarballForReading(
dep_id,
pkg_id,
dep.name.slice(string_buf),
&pkg_res,
ctx,

View File

@@ -12,12 +12,16 @@ pub const FileCopier = struct {
return .{
.src_path = src_path,
.dest_subpath = dest_subpath,
.walker = try .walk(
src_dir,
bun.default_allocator,
&.{},
skip_dirnames,
),
.walker = walker: {
var w = try Walker.walk(
src_dir,
bun.default_allocator,
&.{},
skip_dirnames,
);
w.resolve_unknown_entry_types = true;
break :walker w;
},
};
}

View File

@@ -15,12 +15,16 @@ pub fn init(
.src_dir = folder_dir,
.src = src,
.dest = dest,
.walker = try .walk(
folder_dir,
bun.default_allocator,
&.{},
skip_dirnames,
),
.walker = walker: {
var w = try Walker.walk(
folder_dir,
bun.default_allocator,
&.{},
skip_dirnames,
);
w.resolve_unknown_entry_types = true;
break :walker w;
},
};
}

View File

@@ -187,8 +187,8 @@ pub const LifecycleScriptSubprocess = struct {
null,
};
if (Environment.isWindows) {
this.stdout.source = .{ .pipe = bun.handleOom(bun.default_allocator.create(uv.Pipe)) };
this.stderr.source = .{ .pipe = bun.handleOom(bun.default_allocator.create(uv.Pipe)) };
this.stdout.source = .{ .pipe = bun.new(uv.Pipe, std.mem.zeroes(uv.Pipe)) };
this.stderr.source = .{ .pipe = bun.new(uv.Pipe, std.mem.zeroes(uv.Pipe)) };
}
const spawn_options = bun.spawn.SpawnOptions{
.stdin = if (this.foreground)

View File

@@ -535,7 +535,13 @@ pub const Stringifier = struct {
&path_buf,
);
try writer.writeByte(']');
if (pkg_meta.integrity.tag.isSupported()) {
try writer.print(", \"{f}\"]", .{
pkg_meta.integrity,
});
} else {
try writer.writeByte(']');
}
},
.remote_tarball => {
try writer.print("[\"{f}@{f}\", ", .{
@@ -558,7 +564,13 @@ pub const Stringifier = struct {
&path_buf,
);
try writer.writeByte(']');
if (pkg_meta.integrity.tag.isSupported()) {
try writer.print(", \"{f}\"]", .{
pkg_meta.integrity,
});
} else {
try writer.writeByte(']');
}
},
.symlink => {
try writer.print("[\"{f}@link:{f}\", ", .{
@@ -1876,6 +1888,16 @@ pub fn parseIntoBinaryLockfile(
pkg.meta.integrity = Integrity.parse(integrity_str);
},
.local_tarball, .remote_tarball => {
// integrity is optional for tarball deps (backward compat)
if (i < pkg_info.len) {
const integrity_expr = pkg_info.at(i);
if (integrity_expr.asString(allocator)) |integrity_str| {
pkg.meta.integrity = Integrity.parse(integrity_str);
i += 1;
}
}
},
inline .git, .github => |tag| {
// .bun-tag
if (i >= pkg_info.len) {
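The backward-compat rule for tarball entries above (consume a trailing integrity string only when one is present, so old lockfiles without it still parse) can be sketched as follows; the list layout is illustrative, not Bun's actual lockfile schema:

```python
# Parse a tarball package entry whose trailing integrity field is optional.
def parse_tarball_entry(pkg_info: list):
    i = 0
    name_and_path = pkg_info[i]
    i += 1
    integrity = None
    # Only advance past the integrity slot if it exists and is a string,
    # mirroring the `if (i < pkg_info.len)` guard in the diff.
    if i < len(pkg_info) and isinstance(pkg_info[i], str):
        integrity = pkg_info[i]
        i += 1
    return name_and_path, integrity
```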

View File

@@ -1200,7 +1200,7 @@ pub const WindowsBufferedReader = struct {
fn onPipeClose(handle: *uv.Pipe) callconv(.c) void {
const this = bun.cast(*uv.Pipe, handle.data);
bun.default_allocator.destroy(this);
bun.destroy(this);
}
fn onTTYClose(handle: *uv.uv_tty_t) callconv(.c) void {

View File

@@ -210,11 +210,11 @@ pub const Source = union(enum) {
pub fn openPipe(loop: *uv.Loop, fd: bun.FileDescriptor) bun.sys.Maybe(*Source.Pipe) {
log("openPipe (fd = {f})", .{fd});
const pipe = bun.default_allocator.create(Source.Pipe) catch |err| bun.handleOom(err);
const pipe = bun.new(Source.Pipe, std.mem.zeroes(Source.Pipe));
// we should never init using IPC here see ipc.zig
switch (pipe.init(loop, false)) {
.err => |err| {
bun.default_allocator.destroy(pipe);
bun.destroy(pipe);
return .{ .err = err };
},
else => {},

View File

@@ -499,6 +499,23 @@ class Database implements SqliteTypes.Database {
close(throwOnError = false) {
this.clearQueryCache();
// Finalize any prepared statements created by db.transaction()
if (controllers) {
const controller = controllers.get(this);
if (controller) {
controllers.delete(this);
const seen = new Set();
for (const ctrl of [controller.default, controller.deferred, controller.immediate, controller.exclusive]) {
if (!ctrl) continue;
for (const stmt of [ctrl.begin, ctrl.commit, ctrl.rollback, ctrl.savepoint, ctrl.release, ctrl.rollbackTo]) {
if (stmt && !seen.has(stmt)) {
seen.add(stmt);
stmt.finalize?.();
}
}
}
}
}
this.#hasClosed = true;
return SQL.close(this.#handle, throwOnError);
}
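The shape of that cleanup is worth noting: the four transaction controllers can share prepared statements, so a seen-set guarantees each statement is finalized exactly once. A small Python sketch of the dedupe (structure illustrative, not the sqlite module's API):

```python
# Finalize each unique statement once across all controllers; None slots
# (missing controllers or statements) are skipped.
def finalize_controller_statements(controllers: dict):
    seen = set()
    finalized = []
    for ctrl in controllers.values():
        if not ctrl:
            continue
        for stmt in ctrl.values():
            if stmt is not None and id(stmt) not in seen:
                seen.add(id(stmt))
                finalized.append(stmt)
    return finalized
```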

View File

@@ -1708,13 +1708,19 @@ fn NewPrinter(
}
// Internal "require()" or "import()"
const has_side_effects = meta.wrapper_ref.isValid() or
meta.exports_ref.isValid() or
meta.was_unwrapped_require or
p.options.input_files_for_dev_server != null;
if (record.kind == .dynamic) {
p.printSpaceBeforeIdentifier();
p.print("Promise.resolve()");
level = p.printDotThenPrefix();
if (has_side_effects) {
level = p.printDotThenPrefix();
}
}
defer if (record.kind == .dynamic) p.printDotThenSuffix();
defer if (record.kind == .dynamic and has_side_effects) p.printDotThenSuffix();
// Make sure the comma operator is properly wrapped
const wrap_comma_operator = meta.exports_ref.isValid() and

View File

@@ -1,6 +1,10 @@
pub const panic = _bun.crash_handler.panic;
pub const std_options = std.Options{
.enable_segfault_handler = false,
// Use BoringSSL's RAND_bytes instead of the default getrandom() syscall.
// BoringSSL falls back to /dev/urandom on older kernels (< 3.17) where
// the getrandom syscall doesn't exist, avoiding a panic on ENOSYS.
.cryptoRandomSeed = _bun.csprng,
};
pub const io_mode = .blocking;
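The fallback rationale in that comment can be sketched in Python, which exposes the same syscall: prefer getrandom(), but fall back to /dev/urandom when the kernel predates it rather than aborting (os.getrandom exists on Linux with Python 3.6+; this is a sketch of the idea, not Bun's code path):

```python
# Prefer the getrandom() syscall; fall back to /dev/urandom on ENOSYS
# (kernel < 3.17) or when the platform lacks the call entirely.
import errno
import os

def crypto_random(n: int) -> bytes:
    try:
        return os.getrandom(n)
    except AttributeError:
        pass                        # os.getrandom missing on this platform
    except OSError as e:
        if e.errno != errno.ENOSYS:
            raise                   # only "syscall not implemented" falls back
    with open("/dev/urandom", "rb") as f:
        return f.read(n)
```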

View File

@@ -7,6 +7,7 @@ const Environment = bun.Environment;
pub const panic = recover.panic;
pub const std_options = std.Options{
.enable_segfault_handler = false,
.cryptoRandomSeed = bun.csprng,
};
pub const io_mode = .blocking;

View File

@@ -731,23 +731,22 @@ pub const Route = struct {
if (abs_path_str.len == 0) {
var file: std.fs.File = undefined;
var needs_close = false;
var needs_close = true;
defer if (needs_close) file.close();
if (entry.cache.fd.unwrapValid()) |valid| {
file = valid.stdFile();
needs_close = false;
} else {
var parts = [_]string{ entry.dir, entry.base() };
abs_path_str = FileSystem.instance.absBuf(&parts, &route_file_buf);
route_file_buf[abs_path_str.len] = 0;
const buf = route_file_buf[0..abs_path_str.len :0];
file = std.fs.openFileAbsoluteZ(buf, .{ .mode = .read_only }) catch |err| {
needs_close = false;
log.addErrorFmt(null, Logger.Loc.Empty, allocator, "{s} opening route: {s}", .{ @errorName(err), abs_path_str }) catch unreachable;
return null;
};
FileSystem.setMaxFd(file.handle);
needs_close = FileSystem.instance.fs.needToCloseFiles();
if (!needs_close) entry.cache.fd = .fromStdFile(file);
}
const _abs = bun.getFdPath(.fromStdFile(file), &route_file_buf) catch |err| {

View File

@@ -675,18 +675,53 @@ pub fn readableStream(
}
}
/// Clear the cancel_handler on the ByteStream.Source to prevent use-after-free.
/// Must be called before releasing readable_stream_ref.
fn clearStreamCancelHandler(self: *@This()) void {
if (self.readable_stream_ref.get(self.global)) |readable| {
if (readable.ptr == .Bytes) {
const source = readable.ptr.Bytes.parent();
source.cancel_handler = null;
source.cancel_ctx = null;
}
}
}
pub fn deinit(self: *@This()) void {
self.clearStreamCancelHandler();
self.readable_stream_ref.deinit();
bun.default_allocator.free(self.path);
bun.destroy(self);
}
fn onStreamCancelled(ctx: ?*anyopaque) void {
const self: *@This() = @ptrCast(@alignCast(ctx.?));
// Release the Strong ref so the ReadableStream can be GC'd.
// The download may still be in progress, but the callback will
// see readable_stream_ref.get() return null and skip data delivery.
// When the download finishes (has_more == false), deinit() will
// clean up the remaining resources.
self.readable_stream_ref.deinit();
}
pub fn opaqueCallback(chunk: bun.MutableString, has_more: bool, err: ?Error.S3Error, opaque_self: *anyopaque) void {
const self: *@This() = @ptrCast(@alignCast(opaque_self));
callback(chunk, has_more, err, self) catch {}; // TODO: properly propagate exception upwards
}
};
const wrapper = S3DownloadStreamWrapper.new(.{
.readable_stream_ref = jsc.WebCore.ReadableStream.Strong.init(.{
.ptr = .{ .Bytes = &reader.context },
.value = readable_value,
}, globalThis),
.path = bun.handleOom(bun.default_allocator.dupe(u8, path)),
.global = globalThis,
});
reader.cancel_handler = S3DownloadStreamWrapper.onStreamCancelled;
reader.cancel_ctx = wrapper;
downloadStream(
this,
path,
@@ -695,14 +730,7 @@ pub fn readableStream(
proxy_url,
request_payer,
S3DownloadStreamWrapper.opaqueCallback,
S3DownloadStreamWrapper.new(.{
.readable_stream_ref = jsc.WebCore.ReadableStream.Strong.init(.{
.ptr = .{ .Bytes = &reader.context },
.value = readable_value,
}, globalThis),
.path = bun.handleOom(bun.default_allocator.dupe(u8, path)),
.global = globalThis,
}),
wrapper,
);
return readable_value;
}

View File

@@ -12,27 +12,79 @@ state: union(enum) {
pub fn start(this: *Echo) Yield {
var args = this.bltn().argsSlice();
const no_newline = args.len >= 1 and std.mem.eql(u8, bun.sliceTo(args[0], 0), "-n");
args = args[if (no_newline) 1 else 0..];
// Parse flags: echo accepts -n, -e, -E in any combination.
// Flag parsing stops at the first arg that doesn't start with '-'
// or that contains an invalid flag character.
var no_newline = false;
var escape_sequences = false;
var flags_done = false;
var args_start: usize = 0;
for (args) |arg| {
if (flags_done) break;
const flag = std.mem.span(arg);
if (flag.len < 2 or flag[0] != '-') {
flags_done = true;
break;
}
// Validate all characters are valid echo flags
var valid = true;
for (flag[1..]) |c| {
switch (c) {
'n', 'e', 'E' => {},
else => {
valid = false;
break;
},
}
}
if (!valid) {
flags_done = true;
break;
}
// Apply flags (last -e/-E wins)
for (flag[1..]) |c| {
switch (c) {
'n' => no_newline = true,
'e' => escape_sequences = true,
'E' => escape_sequences = false,
else => unreachable,
}
}
args_start += 1;
}
args = args[args_start..];
const args_len = args.len;
var has_leading_newline: bool = false;
var stop_output = false;
// TODO: Should flush buffer after it gets to a certain size
for (args, 0..) |arg, i| {
if (stop_output) break;
const thearg = std.mem.span(arg);
if (i < args_len - 1) {
bun.handleOom(this.output.appendSlice(thearg));
bun.handleOom(this.output.append(' '));
const is_last = i == args_len - 1;
if (escape_sequences) {
stop_output = appendWithEscapes(&this.output, thearg);
} else {
if (thearg.len > 0 and thearg[thearg.len - 1] == '\n') {
has_leading_newline = true;
if (is_last) {
if (thearg.len > 0 and thearg[thearg.len - 1] == '\n') {
has_leading_newline = true;
}
bun.handleOom(this.output.appendSlice(bun.strings.trimSubsequentLeadingChars(thearg, '\n')));
} else {
bun.handleOom(this.output.appendSlice(thearg));
}
bun.handleOom(this.output.appendSlice(bun.strings.trimSubsequentLeadingChars(thearg, '\n')));
}
if (!stop_output and !is_last) {
bun.handleOom(this.output.append(' '));
}
}
if (!has_leading_newline and !no_newline) bun.handleOom(this.output.append('\n'));
if (!stop_output and !has_leading_newline and !no_newline) bun.handleOom(this.output.append('\n'));
if (this.bltn().stdout.needsIO()) |safeguard| {
this.state = .waiting;
@@ -43,6 +95,109 @@ pub fn start(this: *Echo) Yield {
return this.bltn().done(0);
}
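The flag loop above can be mirrored compactly in Python (the builtin itself is Zig; this sketch just restates the rules: -n/-e/-E combine, the last -e/-E wins, and parsing stops at the first non-flag or invalid-flag argument):

```python
# Parse echo's flags, returning (no_newline, escapes_enabled, remaining_args).
def parse_echo_flags(args: list[str]):
    no_newline = False
    escapes = False
    i = 0
    for arg in args:
        # A flag needs '-' plus at least one character, all drawn from {n,e,E}.
        if len(arg) < 2 or arg[0] != "-" or any(c not in "neE" for c in arg[1:]):
            break
        for c in arg[1:]:
            if c == "n":
                no_newline = True
            elif c == "e":
                escapes = True
            else:  # 'E' turns escape interpretation back off (last one wins)
                escapes = False
        i += 1
    return no_newline, escapes, args[i:]
```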
/// Appends `input` to `output`, interpreting backslash escape sequences.
/// Returns true if a \c escape was encountered (meaning stop all output).
fn appendWithEscapes(output: *std.array_list.Managed(u8), input: []const u8) bool {
var i: usize = 0;
while (i < input.len) {
if (input[i] == '\\' and i + 1 < input.len) {
switch (input[i + 1]) {
'\\' => {
bun.handleOom(output.append('\\'));
i += 2;
},
'a' => {
bun.handleOom(output.append('\x07'));
i += 2;
},
'b' => {
bun.handleOom(output.append('\x08'));
i += 2;
},
'c' => {
// \c: produce no further output
return true;
},
'e', 'E' => {
bun.handleOom(output.append('\x1b'));
i += 2;
},
'f' => {
bun.handleOom(output.append('\x0c'));
i += 2;
},
'n' => {
bun.handleOom(output.append('\n'));
i += 2;
},
'r' => {
bun.handleOom(output.append('\r'));
i += 2;
},
't' => {
bun.handleOom(output.append('\t'));
i += 2;
},
'v' => {
bun.handleOom(output.append('\x0b'));
i += 2;
},
'0' => {
// \0nnn: octal value (up to 3 octal digits)
i += 2; // skip \0
var val: u8 = 0;
var digits: usize = 0;
while (digits < 3 and i < input.len and input[i] >= '0' and input[i] <= '7') {
val = val *% 8 +% (input[i] - '0');
i += 1;
digits += 1;
}
bun.handleOom(output.append(val));
},
'x' => {
// \xHH: hex value (up to 2 hex digits)
i += 2; // skip \x
var val: u8 = 0;
var digits: usize = 0;
while (digits < 2 and i < input.len) {
const hex_val = hexDigitValue(input[i]);
if (hex_val) |hv| {
val = val *% 16 +% hv;
i += 1;
digits += 1;
} else {
break;
}
}
if (digits > 0) {
bun.handleOom(output.append(val));
} else {
// No valid hex digits: output \x literally
bun.handleOom(output.appendSlice("\\x"));
}
},
else => {
// Unknown escape: output backslash and the character as-is
bun.handleOom(output.append('\\'));
bun.handleOom(output.append(input[i + 1]));
i += 2;
},
}
} else {
bun.handleOom(output.append(input[i]));
i += 1;
}
}
return false;
}
fn hexDigitValue(c: u8) ?u8 {
if (c >= '0' and c <= '9') return c - '0';
if (c >= 'a' and c <= 'f') return c - 'a' + 10;
if (c >= 'A' and c <= 'F') return c - 'A' + 10;
return null;
}
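A compact Python rendering of appendWithEscapes() makes the semantics easy to test: it returns the expanded text plus a stop flag mirroring the \c "produce no further output" behavior.

```python
# Interpret echo -e escape sequences; returns (text, stop_output).
def interpret_escapes(s: str) -> tuple[str, bool]:
    simple = {"\\": "\\", "a": "\x07", "b": "\x08", "e": "\x1b", "E": "\x1b",
              "f": "\x0c", "n": "\n", "r": "\r", "t": "\t", "v": "\x0b"}
    out, i = [], 0
    while i < len(s):
        if s[i] == "\\" and i + 1 < len(s):
            c = s[i + 1]
            if c == "c":
                return "".join(out), True          # \c: stop all output
            if c in simple:
                out.append(simple[c])
                i += 2
            elif c == "0":
                i += 2                             # \0nnn: up to 3 octal digits
                val, digits = 0, 0
                while digits < 3 and i < len(s) and s[i] in "01234567":
                    val = (val * 8 + int(s[i])) % 256   # wraps like Zig's *%
                    i += 1
                    digits += 1
                out.append(chr(val))
            elif c == "x":
                i += 2                             # \xHH: up to 2 hex digits
                val, digits = 0, 0
                while digits < 2 and i < len(s) and s[i] in "0123456789abcdefABCDEF":
                    val = val * 16 + int(s[i], 16)
                    i += 1
                    digits += 1
                if digits:
                    out.append(chr(val))
                else:
                    out.append("\\x")              # no hex digits: literal \x
            else:
                out.append("\\" + c)               # unknown escape: keep as-is
                i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out), False
```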
pub fn onIOWriterChunk(this: *Echo, _: usize, e: ?jsc.SystemError) Yield {
if (comptime bun.Environment.allow_assert) {
assert(this.state == .waiting or this.state == .waiting_write_err);

View File

@@ -3251,11 +3251,15 @@ pub fn NewLexer(comptime encoding: StringEncoding) type {
self.chars.current = .{ .char = cur_ascii_char };
return;
}
// Set the cursor to decode the codepoint at new_idx.
// Use width=0 so that nextCursor (which computes pos = width + i)
// starts reading from exactly new_idx.
self.chars.src.cursor = CodepointIterator.Cursor{
.i = @intCast(new_idx),
.c = cur_ascii_char,
.width = 1,
.c = 0,
.width = 0,
};
SrcUnicode.nextCursor(&self.chars.src.iter, &self.chars.src.cursor);
self.chars.src.next_cursor = self.chars.src.cursor;
SrcUnicode.nextCursor(&self.chars.src.iter, &self.chars.src.next_cursor);
if (prev_ascii_char) |pc| self.chars.prev = .{ .char = pc };
@@ -3602,13 +3606,13 @@ pub fn ShellCharIter(comptime encoding: StringEncoding) type {
return bytes[self.src.i..];
}
if (self.src.iter.i >= bytes.len) return "";
return bytes[self.src.iter.i..];
if (self.src.cursor.i >= bytes.len) return "";
return bytes[self.src.cursor.i..];
}
pub fn cursorPos(self: *@This()) usize {
if (comptime encoding == .ascii) return self.src.i;
return self.src.iter.i;
return self.src.cursor.i;
}
pub fn eat(self: *@This()) ?InputChar {

View File

@@ -155,6 +155,7 @@ pub fn enqueueRequest(this: *@This(), item: *JSMySQLQuery) void {
this.#connection.enqueueRequest(item);
this.resetConnectionTimeout();
this.registerAutoFlusher();
this.updateReferenceType();
}
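The one-line updateReferenceType() call is the heart of the hang fix: an event loop only waits on a socket while something refs it, so enqueueing onto an idle (previously unrefed) connection must re-ref. A toy model of that invariant (names illustrative):

```python
# Model of the poll_ref invariant: ref while requests are pending, unref
# when idle. Forgetting the re-ref on enqueue is the bug the diff fixes.
class PollRef:
    def __init__(self):
        self.refcount = 0

    def update_reference_type(self, has_pending_requests: bool):
        self.refcount = 1 if has_pending_requests else 0


class Connection:
    def __init__(self):
        self.poll_ref = PollRef()
        self.queue = []

    def enqueue_request(self, req):
        self.queue.append(req)
        # Without this call, a connection that went idle stays unrefed and
        # the event loop exits before the server's response arrives.
        self.poll_ref.update_reference_type(bool(self.queue))

    def finish_all(self):
        self.queue.clear()
        self.poll_ref.update_reference_type(False)
```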
pub fn close(this: *@This()) void {

View File

@@ -1093,15 +1093,41 @@ pub const String = extern struct {
extern fn JSC__createRangeError(*jsc.JSGlobalObject, str: *const String) jsc.JSValue;
pub fn jsGetStringWidth(globalObject: *jsc.JSGlobalObject, callFrame: *jsc.CallFrame) bun.JSError!jsc.JSValue {
-const argument = callFrame.argument(0);
-const str = try argument.toJSString(globalObject);
-const view = str.view(globalObject);
+const args = callFrame.argumentsAsArray(2);
+const argument = args[0];
+const opts_val = args[1];
+if (argument == .zero or argument.isUndefined()) {
+    return .jsNumber(@as(i32, 0));
+}
+const js_str = try argument.toJSString(globalObject);
+const view = js_str.view(globalObject);
if (view.isEmpty()) {
return .jsNumber(@as(i32, 0));
}
-const width = bun.String.init(view).visibleWidth(false);
+const str = bun.String.init(view);
+// Parse options: { countAnsiEscapeCodes?: bool, ambiguousIsNarrow?: bool }
+var count_ansi: bool = false;
+var ambiguous_is_narrow: bool = true;
+if (opts_val.isObject()) {
+    if (try opts_val.getTruthyComptime(globalObject, "countAnsiEscapeCodes")) |v| {
+        count_ansi = v.toBoolean();
+    }
+    if (try opts_val.getTruthyComptime(globalObject, "ambiguousIsNarrow")) |v| {
+        ambiguous_is_narrow = v.toBoolean();
+    }
+}
+const width = if (count_ansi)
+    str.visibleWidth(!ambiguous_is_narrow)
+else
+    str.visibleWidthExcludeANSIColors(!ambiguous_is_narrow);
return .jsNumber(width);
}

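The option handling above can be sketched in plain TypeScript. This is an assumed simplification for illustration only: the real implementation measures width with Unicode East Asian width tables via `visibleWidth`, not `.length`, and `visibleWidthSketch` is a hypothetical name.

```typescript
// Matches SGR color sequences like "\x1b[31m" (a simplification of what the
// native visibleWidthExcludeANSIColors path skips).
const ANSI_COLOR = /\x1b\[[0-9;]*m/g;

function visibleWidthSketch(s: string, opts: { countAnsiEscapeCodes?: boolean } = {}): number {
  // When countAnsiEscapeCodes is false (the default), escape sequences
  // contribute zero width; otherwise every code unit is counted.
  const visible = opts.countAnsiEscapeCodes ? s : s.replace(ANSI_COLOR, "");
  return visible.length;
}

console.log(visibleWidthSketch("\x1b[31mred\x1b[0m")); // 3
```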
@@ -6,6 +6,7 @@ skip_filenames: []const u64 = &[_]u64{},
skip_dirnames: []const u64 = &[_]u64{},
skip_all: []const u64 = &[_]u64{},
seed: u64 = 0,
resolve_unknown_entry_types: bool = false,
const NameBufferList = std.array_list.Managed(bun.OSPathChar);
@@ -38,7 +39,22 @@ pub fn next(self: *Walker) bun.sys.Maybe(?WalkerEntry) {
.err => |err| return .initErr(err),
.result => |res| {
if (res) |base| {
-switch (base.kind) {
+// Some filesystems (NFS, FUSE, bind mounts) don't provide
+// d_type and return DT_UNKNOWN. Optionally resolve via
+// fstatat so callers get accurate types for recursion.
+// This only affects POSIX; Windows always provides types.
+const kind: @TypeOf(base.kind) = if (comptime !Environment.isWindows)
+    (if (base.kind == .unknown and self.resolve_unknown_entry_types) brk: {
+        const dir_fd = top.iter.iter.dir;
+        break :brk switch (bun.sys.lstatat(dir_fd, base.name.sliceAssumeZ())) {
+            .result => |stat_buf| bun.sys.kindFromMode(stat_buf.mode),
+            .err => continue, // skip entries we can't stat
+        };
+    } else base.kind)
+else
+    base.kind;
+switch (kind) {
.directory => {
if (std.mem.indexOfScalar(
u64,
@@ -78,7 +94,7 @@ pub fn next(self: *Walker) bun.sys.Maybe(?WalkerEntry) {
const cur_len = self.name_buffer.items.len;
bun.handleOom(self.name_buffer.append(0));
-if (base.kind == .directory) {
+if (kind == .directory) {
const new_dir = switch (bun.openDirForIterationOSPath(top.iter.iter.dir, base.name.slice())) {
.result => |fd| fd,
.err => |err| return .initErr(err),
@@ -95,7 +111,7 @@ pub fn next(self: *Walker) bun.sys.Maybe(?WalkerEntry) {
.dir = top.iter.iter.dir,
.basename = self.name_buffer.items[dirname_len..cur_len :0],
.path = self.name_buffer.items[0..cur_len :0],
-.kind = base.kind,
+.kind = kind,
});
} else {
var item = self.stack.pop().?;

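The comment in the hunk above describes falling back to a stat call when `readdir` reports an unknown entry type. A minimal Node-style sketch of the same idea (the Zig code uses `lstatat` relative to the directory fd; `kindOf` is a hypothetical helper name):

```typescript
import { lstatSync, mkdtempSync, writeFileSync, mkdirSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// When the dirent type is unknown (DT_UNKNOWN), resolve it with lstat so a
// walker can still decide whether to recurse into the entry.
function kindOf(dir: string, name: string): "directory" | "file" | "other" {
  const st = lstatSync(join(dir, name));
  return st.isDirectory() ? "directory" : st.isFile() ? "file" : "other";
}

// Demo on a throwaway directory:
const dir = mkdtempSync(join(tmpdir(), "walker-"));
writeFileSync(join(dir, "a.txt"), "");
mkdirSync(join(dir, "sub"));
console.log(kindOf(dir, "a.txt")); // file
console.log(kindOf(dir, "sub")); // directory
```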
@@ -13,7 +13,7 @@ exports[`should write plaintext lockfiles 1`] = `
},
},
"packages": {
-"dummy-package": ["bar@./bar-0.0.2.tgz", {}],
+"dummy-package": ["bar@./bar-0.0.2.tgz", {}, "sha512-DXWxn8qZ4n87XMJjwZUdYPnsrl8Ntz66PudFoxDVkaPEkZBBzENAKsJPgbBacD782W8RwD/v4mjwVyqlPpQ59w=="],
}
}
"

@@ -0,0 +1,491 @@
import { file, spawn } from "bun";
import { afterAll, beforeAll, describe, expect, it, setDefaultTimeout } from "bun:test";
import { rm, writeFile } from "fs/promises";
import { bunExe, bunEnv as env, readdirSorted } from "harness";
import { join } from "path";
import {
createTestContext,
destroyTestContext,
dummyAfterAll,
dummyBeforeAll,
dummyRegistryForContext,
setContextHandler,
type TestContext,
} from "./dummy.registry";
beforeAll(() => {
setDefaultTimeout(1000 * 60 * 5);
dummyBeforeAll();
});
afterAll(dummyAfterAll);
// Helper function that sets up test context and ensures cleanup
async function withContext(
opts: { linker?: "hoisted" | "isolated" } | undefined,
fn: (ctx: TestContext) => Promise<void>,
): Promise<void> {
const ctx = await createTestContext(opts ? { linker: opts.linker! } : undefined);
try {
await fn(ctx);
} finally {
destroyTestContext(ctx);
}
}
// Default context options for most tests
const defaultOpts = { linker: "hoisted" as const };
describe.concurrent("tarball integrity", () => {
it("should store integrity hash for tarball URL in text lockfile", async () => {
await withContext(defaultOpts, async ctx => {
const urls: string[] = [];
setContextHandler(ctx, dummyRegistryForContext(ctx, urls));
await writeFile(
join(ctx.package_dir, "package.json"),
JSON.stringify({
name: "foo",
version: "0.0.1",
dependencies: {
baz: `${ctx.registry_url}baz-0.0.3.tgz`,
},
}),
);
await using proc = spawn({
cmd: [bunExe(), "install", "--save-text-lockfile"],
cwd: ctx.package_dir,
stdout: "pipe",
stdin: "pipe",
stderr: "pipe",
env,
});
const err = await proc.stderr.text();
expect(err).toContain("Saved lockfile");
expect(await proc.exited).toBe(0);
// Read the text lockfile and verify integrity hash is present for the tarball package
const lockContent = await file(join(ctx.package_dir, "bun.lock")).text();
// bun.lock uses trailing commas (not strict JSON), so match with regex
expect(lockContent).toMatch(/"baz":\s*\[.*"sha512-[A-Za-z0-9+/]+=*"\]/s);
});
});
it("should store integrity hash for local tarball in text lockfile", async () => {
await withContext(defaultOpts, async ctx => {
const urls: string[] = [];
setContextHandler(ctx, dummyRegistryForContext(ctx, urls));
await writeFile(
join(ctx.package_dir, "package.json"),
JSON.stringify({
name: "foo",
version: "0.0.1",
dependencies: {
baz: join(import.meta.dir, "baz-0.0.3.tgz"),
},
}),
);
await using proc = spawn({
cmd: [bunExe(), "install", "--save-text-lockfile"],
cwd: ctx.package_dir,
stdout: "pipe",
stdin: "pipe",
stderr: "pipe",
env,
});
const err = await proc.stderr.text();
expect(err).toContain("Saved lockfile");
expect(await proc.exited).toBe(0);
// Read the text lockfile and verify integrity hash is present for the local tarball package
const lockContent = await file(join(ctx.package_dir, "bun.lock")).text();
expect(lockContent).toMatch(/"baz":\s*\[.*"sha512-[A-Za-z0-9+/]+=*"\]/s);
});
});
it("should store consistent integrity hash for tarball URL across reinstalls", async () => {
await withContext(defaultOpts, async ctx => {
const urls: string[] = [];
setContextHandler(ctx, dummyRegistryForContext(ctx, urls));
await writeFile(
join(ctx.package_dir, "package.json"),
JSON.stringify({
name: "foo",
version: "0.0.1",
dependencies: {
baz: `${ctx.registry_url}baz-0.0.3.tgz`,
},
}),
);
// First install to generate lockfile with integrity
{
await using proc = spawn({
cmd: [bunExe(), "install", "--save-text-lockfile"],
cwd: ctx.package_dir,
stdout: "pipe",
stderr: "pipe",
env,
});
const err = await proc.stderr.text();
expect(err).toContain("Saved lockfile");
expect(await proc.exited).toBe(0);
}
// Read and verify integrity hash exists
const lockContent1 = await file(join(ctx.package_dir, "bun.lock")).text();
const integrityMatch1 = lockContent1.match(/"(sha512-[A-Za-z0-9+/]+=*)"/);
expect(integrityMatch1).not.toBeNull();
const integrity1 = integrityMatch1![1];
// Delete lockfile and node_modules, reinstall from scratch
await rm(join(ctx.package_dir, "bun.lock"), { force: true });
await rm(join(ctx.package_dir, "node_modules"), { recursive: true, force: true });
{
await using proc = spawn({
cmd: [bunExe(), "install", "--save-text-lockfile"],
cwd: ctx.package_dir,
stdout: "pipe",
stderr: "pipe",
env,
});
const err = await proc.stderr.text();
expect(err).toContain("Saved lockfile");
expect(await proc.exited).toBe(0);
}
// Verify the same integrity hash was computed
const lockContent2 = await file(join(ctx.package_dir, "bun.lock")).text();
const integrityMatch2 = lockContent2.match(/"(sha512-[A-Za-z0-9+/]+=*)"/);
expect(integrityMatch2).not.toBeNull();
expect(integrityMatch2![1]).toBe(integrity1);
});
});
it("should fail integrity check when tarball URL content changes", async () => {
await withContext(defaultOpts, async ctx => {
// Serve baz-0.0.3.tgz on first install, then baz-0.0.5.tgz (different content) on second
let requestCount = 0;
setContextHandler(ctx, async request => {
const url = request.url;
if (url.endsWith(".tgz")) {
requestCount++;
// First request: serve baz-0.0.3.tgz, subsequent: serve baz-0.0.5.tgz (different content)
const tgzFile = requestCount <= 1 ? "baz-0.0.3.tgz" : "baz-0.0.5.tgz";
return new Response(file(join(import.meta.dir, tgzFile)));
}
return new Response("Not found", { status: 404 });
});
await writeFile(
join(ctx.package_dir, "package.json"),
JSON.stringify({
name: "foo",
version: "0.0.1",
dependencies: {
baz: `${ctx.registry_url}baz-0.0.3.tgz`,
},
}),
);
// First install - succeeds, stores integrity hash
{
await using proc = spawn({
cmd: [bunExe(), "install", "--save-text-lockfile"],
cwd: ctx.package_dir,
stdout: "pipe",
stderr: "pipe",
env,
});
const err = await proc.stderr.text();
expect(err).toContain("Saved lockfile");
expect(await proc.exited).toBe(0);
}
// Verify integrity hash was stored
const lockContent = await file(join(ctx.package_dir, "bun.lock")).text();
expect(lockContent).toMatch(/"sha512-[A-Za-z0-9+/]+=*"/);
// Remove node_modules to force re-download
await rm(join(ctx.package_dir, "node_modules"), { recursive: true, force: true });
// Second install - server now returns different tarball, integrity should fail
{
await using proc = spawn({
cmd: [bunExe(), "install"],
cwd: ctx.package_dir,
stdout: "pipe",
stderr: "pipe",
env,
});
const err = await proc.stderr.text();
const out = await proc.stdout.text();
expect(err + out).toContain("Integrity check failed");
expect(await proc.exited).toBe(1);
}
});
});
it("should install successfully from text lockfile without integrity hash (backward compat)", async () => {
await withContext(defaultOpts, async ctx => {
const urls: string[] = [];
setContextHandler(ctx, dummyRegistryForContext(ctx, urls));
// Write a text lockfile WITHOUT integrity hash (old format / backward compat)
await writeFile(
join(ctx.package_dir, "package.json"),
JSON.stringify({
name: "foo",
version: "0.0.1",
dependencies: {
baz: `${ctx.registry_url}baz-0.0.3.tgz`,
},
}),
);
await writeFile(
join(ctx.package_dir, "bun.lock"),
JSON.stringify({
lockfileVersion: 1,
configVersion: 1,
workspaces: {
"": {
name: "foo",
dependencies: {
baz: `${ctx.registry_url}baz-0.0.3.tgz`,
},
},
},
packages: {
baz: [`baz@${ctx.registry_url}baz-0.0.3.tgz`, { bin: { "baz-run": "index.js" } }],
},
}),
);
// Install with the old-format lockfile - should succeed without errors
await using proc = spawn({
cmd: [bunExe(), "install"],
cwd: ctx.package_dir,
stdout: "pipe",
stderr: "pipe",
env,
});
const err = await proc.stderr.text();
const out = await proc.stdout.text();
// Should not contain any integrity-related errors
expect(err).not.toContain("Integrity check failed");
expect(err).not.toContain("error:");
expect(await proc.exited).toBe(0);
// Package should be installed
expect(await readdirSorted(join(ctx.package_dir, "node_modules", "baz"))).toEqual(["index.js", "package.json"]);
});
});
it("should add integrity hash to lockfile when re-resolving tarball dep", async () => {
await withContext(defaultOpts, async ctx => {
const urls: string[] = [];
setContextHandler(ctx, dummyRegistryForContext(ctx, urls));
await writeFile(
join(ctx.package_dir, "package.json"),
JSON.stringify({
name: "foo",
version: "0.0.1",
dependencies: {
baz: `${ctx.registry_url}baz-0.0.3.tgz`,
},
}),
);
// Fresh install (no existing lockfile) should produce integrity hash
await using proc = spawn({
cmd: [bunExe(), "install", "--save-text-lockfile"],
cwd: ctx.package_dir,
stdout: "pipe",
stderr: "pipe",
env,
});
const err = await proc.stderr.text();
expect(err).toContain("Saved lockfile");
expect(await proc.exited).toBe(0);
// The newly generated lockfile should have the integrity hash
const lockContent = await file(join(ctx.package_dir, "bun.lock")).text();
expect(lockContent).toMatch(/"baz":\s*\[.*"sha512-[A-Za-z0-9+/]+=*"\]/s);
});
});
it("should store consistent integrity hash for local tarball across reinstalls", async () => {
await withContext(defaultOpts, async ctx => {
const urls: string[] = [];
setContextHandler(ctx, dummyRegistryForContext(ctx, urls));
await writeFile(
join(ctx.package_dir, "package.json"),
JSON.stringify({
name: "foo",
version: "0.0.1",
dependencies: {
baz: join(import.meta.dir, "baz-0.0.3.tgz"),
},
}),
);
// First install
{
await using proc = spawn({
cmd: [bunExe(), "install", "--save-text-lockfile"],
cwd: ctx.package_dir,
stdout: "pipe",
stderr: "pipe",
env,
});
const err = await proc.stderr.text();
expect(err).toContain("Saved lockfile");
expect(await proc.exited).toBe(0);
}
const lockContent1 = await file(join(ctx.package_dir, "bun.lock")).text();
const integrityMatch1 = lockContent1.match(/"(sha512-[A-Za-z0-9+/]+=*)"/);
expect(integrityMatch1).not.toBeNull();
const integrity1 = integrityMatch1![1];
// Delete lockfile and node_modules, reinstall
await rm(join(ctx.package_dir, "bun.lock"), { force: true });
await rm(join(ctx.package_dir, "node_modules"), { recursive: true, force: true });
{
await using proc = spawn({
cmd: [bunExe(), "install", "--save-text-lockfile"],
cwd: ctx.package_dir,
stdout: "pipe",
stderr: "pipe",
env,
});
const err = await proc.stderr.text();
expect(err).toContain("Saved lockfile");
expect(await proc.exited).toBe(0);
}
const lockContent2 = await file(join(ctx.package_dir, "bun.lock")).text();
const integrityMatch2 = lockContent2.match(/"(sha512-[A-Za-z0-9+/]+=*)"/);
expect(integrityMatch2).not.toBeNull();
expect(integrityMatch2![1]).toBe(integrity1);
});
});
it("should produce same integrity hash for same tarball via URL and local path", async () => {
await withContext(defaultOpts, async ctx => {
const urls: string[] = [];
setContextHandler(ctx, dummyRegistryForContext(ctx, urls));
// Install via URL
await writeFile(
join(ctx.package_dir, "package.json"),
JSON.stringify({
name: "foo",
version: "0.0.1",
dependencies: {
baz: `${ctx.registry_url}baz-0.0.3.tgz`,
},
}),
);
{
await using proc = spawn({
cmd: [bunExe(), "install", "--save-text-lockfile"],
cwd: ctx.package_dir,
stdout: "pipe",
stderr: "pipe",
env,
});
expect(await proc.exited).toBe(0);
}
const lockContent1 = await file(join(ctx.package_dir, "bun.lock")).text();
const integrityMatch1 = lockContent1.match(/"(sha512-[A-Za-z0-9+/]+=*)"/);
expect(integrityMatch1).not.toBeNull();
const urlIntegrity = integrityMatch1![1];
// Clean up
await rm(join(ctx.package_dir, "bun.lock"), { force: true });
await rm(join(ctx.package_dir, "node_modules"), { recursive: true, force: true });
// Install via local path
await writeFile(
join(ctx.package_dir, "package.json"),
JSON.stringify({
name: "foo",
version: "0.0.1",
dependencies: {
baz: join(import.meta.dir, "baz-0.0.3.tgz"),
},
}),
);
{
await using proc = spawn({
cmd: [bunExe(), "install", "--save-text-lockfile"],
cwd: ctx.package_dir,
stdout: "pipe",
stderr: "pipe",
env,
});
expect(await proc.exited).toBe(0);
}
const lockContent2 = await file(join(ctx.package_dir, "bun.lock")).text();
const integrityMatch2 = lockContent2.match(/"(sha512-[A-Za-z0-9+/]+=*)"/);
expect(integrityMatch2).not.toBeNull();
expect(integrityMatch2![1]).toBe(urlIntegrity);
});
});
it("should install successfully from text lockfile without integrity hash for local tarball (backward compat)", async () => {
await withContext(defaultOpts, async ctx => {
const urls: string[] = [];
setContextHandler(ctx, dummyRegistryForContext(ctx, urls));
const tgzPath = join(import.meta.dir, "baz-0.0.3.tgz");
await writeFile(
join(ctx.package_dir, "package.json"),
JSON.stringify({
name: "foo",
version: "0.0.1",
dependencies: {
baz: tgzPath,
},
}),
);
await writeFile(
join(ctx.package_dir, "bun.lock"),
JSON.stringify({
lockfileVersion: 1,
configVersion: 1,
workspaces: {
"": {
name: "foo",
dependencies: {
baz: tgzPath,
},
},
},
packages: {
baz: [`baz@${tgzPath}`, { bin: { "baz-run": "index.js" } }],
},
}),
);
await using proc = spawn({
cmd: [bunExe(), "install"],
cwd: ctx.package_dir,
stdout: "pipe",
stderr: "pipe",
env,
});
const err = await proc.stderr.text();
expect(err).not.toContain("Integrity check failed");
expect(err).not.toContain("error:");
expect(await proc.exited).toBe(0);
expect(await readdirSorted(join(ctx.package_dir, "node_modules", "baz"))).toEqual(["index.js", "package.json"]);
});
});
});

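The `sha512-...` strings these tests match are npm-style Subresource Integrity values: the base64-encoded SHA-512 digest of the tarball bytes. A small sketch of how such a value is produced (`sriSha512` is a hypothetical helper, not Bun's internal API):

```typescript
import { createHash } from "node:crypto";

// Compute an npm-lockfile-style integrity string for a blob of bytes.
// The same bytes always produce the same string, which is why the tests
// expect identical hashes across reinstalls and across URL vs local path.
function sriSha512(data: Buffer): string {
  return "sha512-" + createHash("sha512").update(data).digest("base64");
}

console.log(sriSha512(Buffer.from("example tarball bytes")));
```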
@@ -68,9 +68,13 @@ it("should not print anything to stderr when running bun.lockb", async () => {
});
const stdoutOutput = await stdout.text();
-expect(stdoutOutput).toBe(
-    `# THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY.\n# yarn lockfile v1\n# bun ./bun.lockb --hash: 8B7A1C2DA8966A48-f4830e6e283fffe9-DE5BD0E91FD9910F-f0bf88071b3f7ec9\n\n\n\"bar@file:./bar-0.0.2.tgz\":\n version \"./bar-0.0.2.tgz\"\n resolved \"./bar-0.0.2.tgz\"\n`,
-);
+expect(stdoutOutput).toContain("# THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY.");
+expect(stdoutOutput).toContain("# yarn lockfile v1");
+expect(stdoutOutput).toContain("# bun ./bun.lockb --hash:");
+expect(stdoutOutput).toContain('"bar@file:./bar-0.0.2.tgz":');
+expect(stdoutOutput).toContain(' version "./bar-0.0.2.tgz"');
+expect(stdoutOutput).toContain(' resolved "./bar-0.0.2.tgz"');
+expect(stdoutOutput).toContain(" integrity sha512-");
const stderrOutput = await stderr.text();
expect(stderrOutput).toBe("");

@@ -0,0 +1,49 @@
import { expect, test } from "bun:test";
test("Listener.getsockname works with an object argument", () => {
using listener = Bun.listen({
hostname: "localhost",
port: 0,
socket: {
data() {},
},
});
const out: Record<string, unknown> = {};
const result = listener.getsockname(out);
expect(result).toBeUndefined(); // returns undefined, populates object in-place
expect(out).toEqual(
expect.objectContaining({
family: expect.any(String),
address: expect.any(String),
port: expect.any(Number),
}),
);
});
test("Listener.getsockname throws with non-object argument", () => {
using listener = Bun.listen({
hostname: "localhost",
port: 0,
socket: {
data() {},
},
});
expect(() => (listener as any).getsockname(123)).toThrow();
expect(() => (listener as any).getsockname("foo")).toThrow();
});
test("Listener.getsockname throws with no argument", () => {
using listener = Bun.listen({
hostname: "localhost",
port: 0,
socket: {
data() {},
},
});
// Previously crashed with null pointer dereference in BunString.cpp
// when called without an object argument. Now it should throw a TypeError.
expect(() => (listener as any).getsockname()).toThrow();
});

@@ -0,0 +1,203 @@
// Exercises Bun's SIMD code paths to verify the baseline binary doesn't
// emit instructions beyond its CPU target (no AVX on x64, no LSE/SVE on aarch64).
//
// Each test uses inputs large enough to hit vectorized fast paths (>= 16 bytes
// for @Vector(16, u8), >= 64 bytes for wider paths) and validates correctness
// to catch both SIGILL and miscompilation from wrong instruction lowering.
import { describe, expect, test } from "bun:test";
// Use Buffer.alloc instead of "x".repeat() — repeat is slow in debug JSC builds.
const ascii256 = Buffer.alloc(256, "a").toString();
const ascii1k = Buffer.alloc(1024, "x").toString();
describe("escapeHTML — @Vector(16, u8) gated by enableSIMD", () => {
test("clean passthrough", () => {
expect(Bun.escapeHTML(ascii256)).toBe(ascii256);
});
test("ampersand in middle", () => {
const input = ascii256 + "&" + ascii256;
const escaped = Bun.escapeHTML(input);
expect(escaped).toContain("&amp;");
// The raw "&" should have been replaced — only "&amp;" should remain
expect(escaped.replaceAll("&amp;", "").includes("&")).toBe(false);
});
test("all special chars", () => {
const input = '<div class="test">' + ascii256 + "</div>";
const escaped = Bun.escapeHTML(input);
expect(escaped).toContain("&lt;");
expect(escaped).toContain("&gt;");
expect(escaped).toContain("&quot;");
});
});
describe("stringWidth — @Vector(16, u8) ungated", () => {
test("ascii", () => {
expect(Bun.stringWidth(ascii256)).toBe(256);
});
test("empty", () => {
expect(Bun.stringWidth("")).toBe(0);
});
test("tabs", () => {
expect(Bun.stringWidth(Buffer.alloc(32, "\t").toString())).toBe(0);
});
test("mixed printable and zero-width", () => {
const mixed = "hello" + "\x00".repeat(64) + "world";
expect(Bun.stringWidth(mixed)).toBe(10);
});
});
describe("Buffer hex encoding — @Vector(16, u8) gated by enableSIMD", () => {
test.each([16, 32, 64, 128, 256])("size %d", size => {
const buf = Buffer.alloc(size, 0xab);
const hex = buf.toString("hex");
expect(hex.length).toBe(size * 2);
expect(hex).toBe("ab".repeat(size));
});
test("all byte values", () => {
const varied = Buffer.alloc(256);
for (let i = 0; i < 256; i++) varied[i] = i;
const hex = varied.toString("hex");
expect(hex).toStartWith("000102030405");
expect(hex).toEndWith("fdfeff");
});
});
describe("base64 — simdutf runtime dispatch", () => {
test("ascii roundtrip", () => {
const encoded = btoa(ascii1k);
expect(atob(encoded)).toBe(ascii1k);
});
test("binary roundtrip", () => {
const binary = String.fromCharCode(...Array.from({ length: 256 }, (_, i) => i));
expect(atob(btoa(binary))).toBe(binary);
});
});
describe("TextEncoder/TextDecoder — simdutf runtime dispatch", () => {
const encoder = new TextEncoder();
const decoder = new TextDecoder();
test("ascii roundtrip", () => {
const bytes = encoder.encode(ascii1k);
expect(bytes.length).toBe(1024);
expect(decoder.decode(bytes)).toBe(ascii1k);
});
test("mixed ascii + multibyte", () => {
const mixed = ascii256 + "\u00e9\u00e9\u00e9" + ascii256 + "\u2603\u2603" + ascii256;
expect(decoder.decode(encoder.encode(mixed))).toBe(mixed);
});
test("emoji surrogate pairs", () => {
const emoji = "\u{1F600}".repeat(64);
expect(decoder.decode(encoder.encode(emoji))).toBe(emoji);
});
});
describe("decodeURIComponent — SIMD % scanning", () => {
test("clean passthrough", () => {
const clean = Buffer.alloc(256, "a").toString();
expect(decodeURIComponent(clean)).toBe(clean);
});
test("encoded at various positions", () => {
const input = "a".repeat(128) + "%20" + "b".repeat(128) + "%21";
expect(decodeURIComponent(input)).toBe("a".repeat(128) + " " + "b".repeat(128) + "!");
});
test("heavy utf8 encoding", () => {
const input = Array.from({ length: 64 }, () => "%C3%A9").join("");
expect(decodeURIComponent(input)).toBe("\u00e9".repeat(64));
});
});
describe("URL parsing — Highway indexOfChar/indexOfAny", () => {
test("long URL with all components", () => {
const longPath = "/" + "segment/".repeat(32) + "end";
const url = new URL("https://user:pass@example.com:8080" + longPath + "?key=value&foo=bar#section");
expect(url.protocol).toBe("https:");
expect(url.hostname).toBe("example.com");
expect(url.port).toBe("8080");
expect(url.pathname).toBe(longPath);
expect(url.search).toBe("?key=value&foo=bar");
expect(url.hash).toBe("#section");
});
});
describe("JSON — JS lexer SIMD string scanning", () => {
test("large object roundtrip", () => {
const obj: Record<string, string> = {};
for (let i = 0; i < 100; i++) {
obj["key_" + Buffer.alloc(32, "a").toString() + "_" + i] = "value_" + Buffer.alloc(64, "b").toString() + "_" + i;
}
const parsed = JSON.parse(JSON.stringify(obj));
expect(Object.keys(parsed).length).toBe(100);
expect(parsed["key_" + Buffer.alloc(32, "a").toString() + "_0"]).toBe(
"value_" + Buffer.alloc(64, "b").toString() + "_0",
);
});
test("string with escape sequences", () => {
const original = { msg: 'quote"here\nand\ttab' + Buffer.alloc(256, "x").toString() };
const reparsed = JSON.parse(JSON.stringify(original));
expect(reparsed.msg).toBe(original.msg);
});
});
describe("HTTP parsing — llhttp SSE4.2 PCMPESTRI", () => {
test("long headers", async () => {
const longHeaderValue = Buffer.alloc(512, "v").toString();
using server = Bun.serve({
port: 0,
fetch(req) {
return new Response(req.headers.get("X-Test-Header") || "missing");
},
});
const resp = await fetch(`http://localhost:${server.port}/` + "path/".repeat(20), {
headers: {
"X-Test-Header": longHeaderValue,
"X-Header-A": Buffer.alloc(64, "a").toString(),
"X-Header-B": Buffer.alloc(64, "b").toString(),
"X-Header-C": Buffer.alloc(64, "c").toString(),
"Accept": "application/json",
"Accept-Language": "en-US,en;q=0.9,fr;q=0.8,de;q=0.7",
},
});
expect(await resp.text()).toBe(longHeaderValue);
});
});
describe("Latin-1 to UTF-8 — @Vector(16, u8) ungated", () => {
test("full byte range", () => {
const latin1Bytes = Buffer.alloc(256);
for (let i = 0; i < 256; i++) latin1Bytes[i] = i;
const latin1Str = latin1Bytes.toString("latin1");
const utf8Buf = Buffer.from(latin1Str, "utf-8");
expect(utf8Buf.length).toBeGreaterThan(256);
expect(utf8Buf.toString("utf-8").length).toBe(256);
});
});
describe("String search — Highway memMem/indexOfChar", () => {
test("indexOf long string", () => {
const haystack = Buffer.alloc(1000, "a").toString() + "needle" + Buffer.alloc(1000, "b").toString();
expect(haystack.indexOf("needle")).toBe(1000);
expect(haystack.indexOf("missing")).toBe(-1);
expect(haystack.lastIndexOf("needle")).toBe(1000);
});
test("includes long string", () => {
const haystack = Buffer.alloc(1000, "a").toString() + "needle" + Buffer.alloc(1000, "b").toString();
expect(haystack.includes("needle")).toBe(true);
expect(haystack.includes("missing")).toBe(false);
});
});

@@ -0,0 +1,89 @@
import { S3Client } from "bun";
import { heapStats } from "bun:jsc";
import { expect, test } from "bun:test";
// Test that ReadableStream objects from cancelled S3 download streams are properly GC'd.
//
// When a streaming S3 download body is cancelled mid-stream, S3DownloadStreamWrapper's
// readable_stream_ref (a Strong GC root) is not released because:
// 1. ByteStream.onCancel() doesn't notify the S3DownloadStreamWrapper
// 2. The S3 download continues in the background, so has_more never becomes false
// 3. The Strong ref prevents GC of the ReadableStream
//
// This is the same pattern as the FetchTasklet stream cancel leak.
test("ReadableStream from S3 stream() should be GC'd after reader.cancel()", async () => {
// Use a raw TCP server to mock an S3 GET response.
// The server sends one HTTP chunk immediately, then keeps the connection open
// to simulate a large file download in progress.
using server = Bun.listen({
port: 0,
hostname: "127.0.0.1",
socket: {
data(socket) {
// Respond to any incoming request with a chunked 200 OK
socket.write(
"HTTP/1.1 200 OK\r\n" +
"Transfer-Encoding: chunked\r\n" +
"Connection: keep-alive\r\n" +
"Content-Type: application/octet-stream\r\n" +
"\r\n" +
"400\r\n" +
Buffer.alloc(0x400, "x").toString() +
"\r\n",
);
// Don't send terminal chunk "0\r\n\r\n" — keep connection open
},
open() {},
close() {},
error() {},
},
});
const s3 = new S3Client({
accessKeyId: "test",
secretAccessKey: "test",
endpoint: `http://127.0.0.1:${server.port}`,
bucket: "test",
});
const N = 30;
// Warmup: ensure JIT, lazy init, and connection pool are warmed up
for (let i = 0; i < 5; i++) {
const file = s3.file(`warmup-${i}.bin`);
const stream = file.stream();
const reader = stream.getReader();
await reader.read();
await reader.cancel();
}
Bun.gc(true);
await Bun.sleep(10);
Bun.gc(true);
const baseline = heapStats().objectTypeCounts.ReadableStream ?? 0;
// Main test: stream, read one chunk, cancel, repeat N times
for (let i = 0; i < N; i++) {
const file = s3.file(`test-${i}.bin`);
const stream = file.stream();
const reader = stream.getReader();
await reader.read();
await reader.cancel();
}
// Allow finalizers to run, then GC aggressively
Bun.gc(true);
await Bun.sleep(50);
Bun.gc(true);
await Bun.sleep(50);
Bun.gc(true);
const after = heapStats().objectTypeCounts.ReadableStream ?? 0;
const leaked = after - baseline;
// With the bug: leaked ≈ N (each cancelled stream's Strong ref prevents GC)
// When fixed: leaked should be near 0 (Strong ref released on cancel)
expect(leaked).toBeLessThanOrEqual(5);
});

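The mock servers in these tests hand-write HTTP/1.1 chunked framing: each chunk is `<size-in-hex>\r\n<data>\r\n`, and a terminal `0\r\n\r\n` ends the body (deliberately never sent here, so the download stays "in progress"). A sketch of that framing as a helper (`encodeChunk` is a hypothetical name):

```typescript
// Frame one HTTP/1.1 chunked-transfer chunk: hex length, CRLF, data, CRLF.
function encodeChunk(data: string): string {
  return data.length.toString(16) + "\r\n" + data + "\r\n";
}

// 0x400 bytes of payload produce the "400\r\n" prefix the mock servers write.
console.log(encodeChunk("x".repeat(0x400)).startsWith("400\r\n")); // true
```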
@@ -61,11 +61,7 @@ describe("echo error handling", async () => {
});
describe("echo special cases", async () => {
-TestBuilder.command`echo -n -n hello`
-    .exitCode(0)
-    .stdout("-n hello")
-    .stderr("")
-    .runAsTest("-n flag with -n as argument");
+TestBuilder.command`echo -n -n hello`.exitCode(0).stdout("hello").stderr("").runAsTest("-n flag with -n as argument");
TestBuilder.command`echo -- -n hello`
.exitCode(0)

@@ -0,0 +1,25 @@
import { describe, expect, test } from "bun:test";
describe("spawn with spoofed array length", () => {
test("Bun.spawnSync throws on array with length near u32 max", () => {
const arr = ["echo", "hello"];
Object.defineProperty(arr, "length", { value: 4294967295 });
expect(() => {
Bun.spawnSync(arr);
}).toThrow(/cmd array is too large/);
});
test("Bun.spawn throws on array with length near u32 max", () => {
const arr = ["echo", "hello"];
Object.defineProperty(arr, "length", { value: 4294967295 });
expect(() => {
Bun.spawn(arr);
}).toThrow(/cmd array is too large/);
});
test("Bun.spawnSync still works with normal arrays", () => {
const result = Bun.spawnSync(["echo", "hello"]);
expect(result.stdout.toString().trim()).toBe("hello");
expect(result.exitCode).toBe(0);
});
});

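These tests rely on the fact that a JS array's `length` can be spoofed far beyond its real element count, which is why native code must bound-check it before allocating. A minimal sketch of the spoofing itself:

```typescript
// An array's length property is writable via defineProperty, up to the spec
// maximum of 2^32 - 1. Raising it does not create elements, only holes.
const arr: string[] = ["echo", "hello"];
Object.defineProperty(arr, "length", { value: 4294967295 });

console.log(arr.length); // 4294967295
console.log(arr[0]); // "echo"
console.log(arr[2]); // undefined (a hole, not a real element)
```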
@@ -0,0 +1,34 @@
// Fixture for issue #27362: Sequential await sql.unsafe() calls should not hang.
// Without the fix, the process hangs on the 3rd-4th sequential call because the
// poll_ref is not re-refed when a new query is enqueued on an idle connection.
const tls = process.env.CA_PATH ? { ca: Bun.file(process.env.CA_PATH) } : undefined;
const sql = new Bun.SQL({
url: process.env.MYSQL_URL,
tls,
max: 1,
idleTimeout: 100,
maxLifetime: 100,
connectionTimeout: 100,
});
// Warmup / establish connection
await sql`SELECT 1`;
// Sequential queries - these would hang without the fix
const r1 = await sql.unsafe("SELECT 1 as v");
console.log("query1:", r1[0].v);
const r2 = await sql.unsafe("SELECT 2 as v");
console.log("query2:", r2[0].v);
const r3 = await sql.unsafe("SELECT 3 as v");
console.log("query3:", r3[0].v);
const r4 = await sql.unsafe("SELECT 4 as v");
console.log("query4:", r4[0].v);
const r5 = await sql.unsafe("SELECT 5 as v");
console.log("query5:", r5[0].v);
console.log("all queries completed");
// process should exit with code 0

@@ -63,6 +63,15 @@ if (isDockerEnabled()) {
});
expect(stderr).toBe("");
});
test("sequential queries should not hang (#27362)", async () => {
const { stdout, stderr } = bunRun(path.join(import.meta.dir, "sql-mysql-sequential-fixture.ts"), {
...bunEnv,
MYSQL_URL: getOptions().url,
CA_PATH: image.name === "MySQL with TLS" ? path.join(import.meta.dir, "mysql-tls", "ssl", "ca.pem") : "",
});
expect(stderr).toBe("");
expect(stdout).toContain("all queries completed");
});
test("should return lastInsertRowid and affectedRows", async () => {
await using db = new SQL({ ...getOptions(), max: 1, idleTimeout: 5 });
using sql = await db.reserve();
@@ -483,9 +492,7 @@ if (isDockerEnabled()) {
test("Binary", async () => {
const random_name = ("t_" + Bun.randomUUIDv7("hex").replaceAll("-", "")).toLowerCase();
await sql`CREATE TEMPORARY TABLE ${sql(random_name)} (a binary(1), b varbinary(1), c blob)`;
-const values = [
-    { a: Buffer.from([1]), b: Buffer.from([2]), c: Buffer.from([3]) },
-];
+const values = [{ a: Buffer.from([1]), b: Buffer.from([2]), c: Buffer.from([3]) }];
await sql`INSERT INTO ${sql(random_name)} ${sql(values)}`;
const results = await sql`select * from ${sql(random_name)}`;
// return buffers
@@ -497,7 +504,7 @@ if (isDockerEnabled()) {
expect(results2[0].a).toEqual(Buffer.from([1]));
expect(results2[0].b).toEqual(Buffer.from([2]));
expect(results2[0].c).toEqual(Buffer.from([3]));
-})
+});
test("bulk insert nested sql()", async () => {
await using sql = new SQL({ ...getOptions(), max: 1 });

@@ -0,0 +1,140 @@
import { heapStats } from "bun:jsc";
import { expect, test } from "bun:test";
// Test that ReadableStream objects from cancelled fetch responses are properly GC'd.
//
// When a streaming HTTP response body is cancelled mid-stream, FetchTasklet's
// readable_stream_ref (a Strong GC root) is not released because:
// 1. ByteStream.onCancel() doesn't notify the FetchTasklet
// 2. The HTTP connection stays open, so has_more never becomes false
// 3. Bun__FetchResponse_finalize sees the Strong ref and skips cleanup
//
// This creates a circular dependency where the Strong ref prevents GC,
// and the GC finalizer skips cleanup because the Strong ref exists.
test("ReadableStream from fetch should be GC'd after reader.cancel()", async () => {
// Use a raw TCP server to avoid server-side JS ReadableStream objects
// that would add noise to objectTypeCounts.
// The server sends one HTTP chunk immediately, then keeps the connection open.
using server = Bun.listen({
port: 0,
hostname: "127.0.0.1",
socket: {
data(socket) {
socket.write(
"HTTP/1.1 200 OK\r\n" +
"Transfer-Encoding: chunked\r\n" +
"Connection: keep-alive\r\n" +
"\r\n" +
"400\r\n" +
Buffer.alloc(0x400, "x").toString() +
"\r\n",
);
// Don't send terminal chunk "0\r\n\r\n" — keep connection open
},
open() {},
close() {},
error() {},
},
});
const url = `http://127.0.0.1:${server.port}/`;
const N = 30;
// Warmup: ensure JIT, lazy init, and connection pool are warmed up
for (let i = 0; i < 5; i++) {
const response = await fetch(url);
const reader = response.body!.getReader();
await reader.read();
await reader.cancel();
}
Bun.gc(true);
await Bun.sleep(10);
Bun.gc(true);
const baseline = heapStats().objectTypeCounts.ReadableStream ?? 0;
// Main test: fetch, read one chunk, cancel, repeat N times
for (let i = 0; i < N; i++) {
const response = await fetch(url);
const reader = response.body!.getReader();
await reader.read();
await reader.cancel();
}
// Allow finalizers to run, then GC aggressively
Bun.gc(true);
await Bun.sleep(50);
Bun.gc(true);
await Bun.sleep(50);
Bun.gc(true);
const after = heapStats().objectTypeCounts.ReadableStream ?? 0;
const leaked = after - baseline;
// With the bug: leaked ≈ N (each cancelled stream's Strong ref prevents GC)
// When fixed: leaked should be near 0 (Strong ref released on cancel)
expect(leaked).toBeLessThanOrEqual(5);
});
test("ReadableStream from fetch should be GC'd after body.cancel()", async () => {
using server = Bun.listen({
port: 0,
hostname: "127.0.0.1",
socket: {
data(socket) {
socket.write(
"HTTP/1.1 200 OK\r\n" +
"Transfer-Encoding: chunked\r\n" +
"Connection: keep-alive\r\n" +
"\r\n" +
"400\r\n" +
Buffer.alloc(0x400, "x").toString() +
"\r\n",
);
},
open() {},
close() {},
error() {},
},
});
const url = `http://127.0.0.1:${server.port}/`;
const N = 30;
// Warmup
for (let i = 0; i < 5; i++) {
const response = await fetch(url);
const reader = response.body!.getReader();
await reader.read();
reader.releaseLock();
await response.body!.cancel();
}
Bun.gc(true);
await Bun.sleep(10);
Bun.gc(true);
const baseline = heapStats().objectTypeCounts.ReadableStream ?? 0;
// Main test: fetch, read, releaseLock, cancel body directly
for (let i = 0; i < N; i++) {
const response = await fetch(url);
const reader = response.body!.getReader();
await reader.read();
reader.releaseLock();
await response.body!.cancel();
}
Bun.gc(true);
await Bun.sleep(50);
Bun.gc(true);
await Bun.sleep(50);
Bun.gc(true);
const after = heapStats().objectTypeCounts.ReadableStream ?? 0;
const leaked = after - baseline;
expect(leaked).toBeLessThanOrEqual(5);
});
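Both GC tests above share the same measurement pattern: warm up, take a baseline object count, run N iterations, GC aggressively, and compare. A factored helper might look like the sketch below — the function name and the injected `count`/`gc` hooks are illustrative, not part of the diff (in Bun, `gc` would be `() => Bun.gc(true)` and `count` would read `heapStats().objectTypeCounts.ReadableStream ?? 0`):

```typescript
// Hypothetical helper: measures how many objects of a tracked type remain
// alive after N iterations of an async action. The gc/count hooks are
// injected so the pattern itself is runtime-agnostic.
async function measureLeaked(
  action: () => Promise<void>,
  count: () => number,
  gc: () => void,
  { warmup = 5, iterations = 30 } = {},
): Promise<number> {
  // Warmup: let JIT, lazy init, and connection pools settle first.
  for (let i = 0; i < warmup; i++) await action();
  gc();
  const baseline = count();
  for (let i = 0; i < iterations; i++) await action();
  gc();
  return count() - baseline;
}
```

With the bug described above, this returns a value near `iterations`; when fixed, it should stay near 0 (a small tolerance such as `<= 5` absorbs GC timing noise).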

View File

@@ -430,3 +430,5 @@ test/js/bun/test/parallel/test-http-should-not-accept-untrusted-certificates.ts
# Need to run the event loop once more to ensure sockets close
test/js/node/test/parallel/test-https-localaddress-bind-error.js
test/js/node/test/parallel/test-crypto-op-during-process-exit.js
test/js/third_party/prisma/prisma.test.ts

View File

@@ -0,0 +1,51 @@
import { Database } from "bun:sqlite";
import { expect, test } from "bun:test";
test("db.close(true) works after db.transaction()", () => {
const db = new Database(":memory:");
db.transaction(() => {})();
expect(() => db.close(true)).not.toThrow();
});
test("db.close(true) works after db.transaction() with actual work", () => {
const db = new Database(":memory:");
db.run("CREATE TABLE test (id INTEGER PRIMARY KEY, value TEXT)");
const insert = db.transaction((items: string[]) => {
const stmt = db.query("INSERT INTO test (value) VALUES (?)");
for (const item of items) {
stmt.run(item);
}
});
insert(["a", "b", "c"]);
expect(db.query("SELECT COUNT(*) as count FROM test").get()).toEqual({ count: 3 });
expect(() => db.close(true)).not.toThrow();
});
test("using declaration works with db.transaction()", () => {
using db = new Database(":memory:");
db.transaction(() => {})();
// Symbol.dispose calls close(true), should not throw
});
test("db.close(true) works after multiple transaction types", () => {
const db = new Database(":memory:");
db.transaction(() => {})();
db.transaction(() => {}).deferred();
db.transaction(() => {}).immediate();
db.transaction(() => {}).exclusive();
expect(() => db.close(true)).not.toThrow();
});
test("db.close(true) works after nested transactions", () => {
const db = new Database(":memory:");
db.run("CREATE TABLE test (id INTEGER PRIMARY KEY)");
const outer = db.transaction(() => {
db.run("INSERT INTO test (id) VALUES (1)");
const inner = db.transaction(() => {
db.run("INSERT INTO test (id) VALUES (2)");
});
inner();
});
outer();
expect(() => db.close(true)).not.toThrow();
});

View File

@@ -0,0 +1,43 @@
import { $ } from "bun";
import { expect, test } from "bun:test";
// https://github.com/oven-sh/bun/issues/17244
// Shell template literals leaked __bunstr_N when the first interpolated value
// contained a space and a subsequent value contained a multi-byte UTF-8 character.
test("shell interpolation with space and multi-byte UTF-8", async () => {
const a = " ";
const b = "Í";
const result = await $`echo ${a} ${b}`.text();
expect(result.trim()).toBe("Í");
expect(result).not.toContain("__bunstr");
});
test("shell interpolation with trailing-space string and 2-byte UTF-8", async () => {
const a = "a ";
const b = "Í";
const result = await $`echo ${a} ${b}`.text();
// "a " (with trailing space preserved) + " " (template separator) + "Í"
expect(result.trim()).toBe("a Í");
expect(result).not.toContain("__bunstr");
});
test("shell interpolation with space and 3-byte UTF-8", async () => {
const a = " ";
const b = "€";
const result = await $`echo ${a} ${b}`.text();
expect(result.trim()).toBe("€");
expect(result).not.toContain("__bunstr");
});
test("shell interpolation with embedded space and multi-byte UTF-8", async () => {
const a = "a b";
const b = "Í";
const result = await $`echo ${a} ${b}`.text();
expect(result.trim()).toBe("a b Í");
expect(result).not.toContain("__bunstr");
});

View File

@@ -0,0 +1,117 @@
import { $ } from "bun";
import { describe, expect, test } from "bun:test";
describe("echo -e flag support", () => {
test("echo -e does not output -e as literal text", async () => {
const result = await $`echo -e hello`.text();
expect(result).toBe("hello\n");
});
test("echo -e interprets backslash-n", async () => {
const result = await $`echo -e ${"hello\\nworld"}`.text();
expect(result).toBe("hello\nworld\n");
});
test("echo -e interprets backslash-t", async () => {
const result = await $`echo -e ${"hello\\tworld"}`.text();
expect(result).toBe("hello\tworld\n");
});
test("echo -e interprets backslash-backslash", async () => {
const result = await $`echo -e ${"hello\\\\world"}`.text();
expect(result).toBe("hello\\world\n");
});
test("echo -e interprets \\a (bell)", async () => {
const result = await $`echo -e ${"\\a"}`.text();
expect(result).toBe("\x07\n");
});
test("echo -e interprets \\b (backspace)", async () => {
const result = await $`echo -e ${"a\\bb"}`.text();
expect(result).toBe("a\bb\n");
});
test("echo -e interprets \\r (carriage return)", async () => {
const result = await $`echo -e ${"hello\\rworld"}`.text();
expect(result).toBe("hello\rworld\n");
});
test("echo -e interprets \\f (form feed)", async () => {
const result = await $`echo -e ${"\\f"}`.text();
expect(result).toBe("\f\n");
});
test("echo -e interprets \\v (vertical tab)", async () => {
const result = await $`echo -e ${"\\v"}`.text();
expect(result).toBe("\v\n");
});
test("echo -e interprets \\0nnn (octal)", async () => {
// \0101 = 'A' (65 decimal)
const result = await $`echo -e ${"\\0101"}`.text();
expect(result).toBe("A\n");
});
test("echo -e interprets \\xHH (hex)", async () => {
// \x41 = 'A'
const result = await $`echo -e ${"\\x41\\x42\\x43"}`.text();
expect(result).toBe("ABC\n");
});
test("echo -e \\c stops output", async () => {
const result = await $`echo -e ${"hello\\cworld"}`.text();
expect(result).toBe("hello");
});
test("echo -e with \\e (escape character)", async () => {
const result = await $`echo -e ${"\\e"}`.text();
expect(result).toBe("\x1b\n");
});
test("echo -E disables escape interpretation", async () => {
const result = await $`echo -E ${"hello\\nworld"}`.text();
expect(result).toBe("hello\\nworld\n");
});
test("echo -eE (last wins: -E disables)", async () => {
const result = await $`echo -eE ${"hello\\tworld"}`.text();
expect(result).toBe("hello\\tworld\n");
});
test("echo -Ee (last wins: -e enables)", async () => {
const result = await $`echo -Ee ${"hello\\tworld"}`.text();
expect(result).toBe("hello\tworld\n");
});
test("echo -ne (no newline + escapes)", async () => {
const result = await $`echo -ne ${"hello\\tworld"}`.text();
expect(result).toBe("hello\tworld");
});
test("echo -en (same as -ne)", async () => {
const result = await $`echo -en ${"hello\\tworld"}`.text();
expect(result).toBe("hello\tworld");
});
test("echo -n still works (no newline)", async () => {
const result = await $`echo -n hello`.text();
expect(result).toBe("hello");
});
test("echo with invalid flag outputs literally", async () => {
const result = await $`echo -x hello`.text();
expect(result).toBe("-x hello\n");
});
test("echo -e piped to cat (original issue scenario)", async () => {
const pw = "mypassword";
const result = await $`echo -e ${pw} | cat`.text();
expect(result).toBe("mypassword\n");
});
test("echo without -e still works normally", async () => {
const result = await $`echo hello world`.text();
expect(result).toBe("hello world\n");
});
});
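The tests above pin down the escape semantics `echo -e` must implement. A reference sketch of that processing is below — this is a hypothetical illustration of the behavior the tests assert, not Bun's actual implementation (which lives in the native shell), and the function name is made up:

```typescript
// Hypothetical sketch of `echo -e` escape interpretation, matching the
// behaviors asserted above: \n \t \\ \a \b \r \f \v \e, \0nnn (octal),
// \xHH (hex), and \c (stop output). Unknown escapes pass through literally.
function processEchoEscapes(input: string): string {
  let out = "";
  for (let i = 0; i < input.length; i++) {
    if (input[i] !== "\\" || i + 1 >= input.length) {
      out += input[i];
      continue;
    }
    const next = input[++i];
    switch (next) {
      case "n": out += "\n"; break;
      case "t": out += "\t"; break;
      case "\\": out += "\\"; break;
      case "a": out += "\x07"; break;
      case "b": out += "\b"; break;
      case "r": out += "\r"; break;
      case "f": out += "\f"; break;
      case "v": out += "\v"; break;
      case "e": out += "\x1b"; break;
      case "c": return out; // \c suppresses everything after it
      case "0": { // \0nnn: up to three octal digits
        let digits = "";
        while (digits.length < 3 && /[0-7]/.test(input[i + 1] ?? "")) digits += input[++i];
        out += String.fromCharCode(parseInt(digits || "0", 8));
        break;
      }
      case "x": { // \xHH: up to two hex digits
        let digits = "";
        while (digits.length < 2 && /[0-9a-fA-F]/.test(input[i + 1] ?? "")) digits += input[++i];
        out += digits ? String.fromCharCode(parseInt(digits, 16)) : "\\x";
        break;
      }
      default: out += "\\" + next;
    }
  }
  return out;
}
```

The `-e`/`-E` flags then simply gate whether this function runs at all, with the last flag winning, as the `-eE`/`-Ee` tests check.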

View File

@@ -0,0 +1,62 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe, tempDir } from "harness";
test("Bun.build works multiple times after FileSystemRouter is created", async () => {
using dir = tempDir("issue-18242", {
"pages/index.ts": `console.log("Hello via Bun!");`,
"build.ts": `
import path from "path";
const PAGES_DIR = path.resolve(process.cwd(), "pages");
const srcRouter = new Bun.FileSystemRouter({
dir: PAGES_DIR,
style: "nextjs",
});
const entrypoints = Object.values(srcRouter.routes);
const result1 = await Bun.build({
entrypoints,
outdir: "dist/browser",
});
const result2 = await Bun.build({
entrypoints,
outdir: "dist/bun",
target: "bun",
});
const result3 = await Bun.build({
entrypoints,
outdir: "dist/third",
});
console.log(JSON.stringify({
build1: result1.success,
build2: result2.success,
build3: result3.success,
build2Logs: result2.logs.map(String),
build3Logs: result3.logs.map(String),
}));
`,
});
await using proc = Bun.spawn({
cmd: [bunExe(), "build.ts"],
env: bunEnv,
cwd: String(dir),
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
const result = JSON.parse(stdout.trim());
expect(result.build1).toBe(true);
expect(result.build2).toBe(true);
expect(result.build3).toBe(true);
expect(result.build2Logs).toEqual([]);
expect(result.build3Logs).toEqual([]);
expect(exitCode).toBe(0);
});

View File

@@ -0,0 +1,97 @@
import { describe, expect, test } from "bun:test";
import { bunEnv, bunExe, tempDir } from "harness";
describe("issue #20546 - CSS @layer declarations should be stripped from source files", () => {
test("separate @layer statements with @import layer()", async () => {
using dir = tempDir("css-layer-20546", {
"main.css": /* css */ `
@layer one;
@layer two;
@layer three;
@import url('./a.css') layer(one);
@import url('./b.css') layer(two);
@import url('./c.css') layer(three);
`,
"a.css": /* css */ `body { margin: 0; }`,
"b.css": /* css */ `h1 { font-family: sans-serif; }`,
"c.css": /* css */ `.text-centered { text-align: center; }`,
});
await using proc = Bun.spawn({
cmd: [bunExe(), "build", "./main.css", "--outdir=out"],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(stderr).toBe("");
const outCss = await Bun.file(`${dir}/out/main.css`).text();
// @layer declarations should appear at the top (hoisted or as part of the layer blocks)
// @import statements should NOT appear in the output (they've been inlined)
expect(outCss).not.toContain("@import");
// The bare @layer declarations should not be duplicated at the bottom
// They should either be hoisted to the top or removed entirely since
// the layer blocks establish the same ordering
const layerOneStatements = outCss.match(/@layer one;/g);
const layerTwoStatements = outCss.match(/@layer two;/g);
const layerThreeStatements = outCss.match(/@layer three;/g);
// Each @layer declaration should appear at most once (hoisted)
expect((layerOneStatements ?? []).length).toBeLessThanOrEqual(1);
expect((layerTwoStatements ?? []).length).toBeLessThanOrEqual(1);
expect((layerThreeStatements ?? []).length).toBeLessThanOrEqual(1);
// The actual layer block content should be present
expect(outCss).toContain("margin: 0");
expect(outCss).toContain("font-family: sans-serif");
expect(outCss).toContain("text-align: center");
expect(exitCode).toBe(0);
});
test("comma syntax @layer statement with @import layer()", async () => {
using dir = tempDir("css-layer-20546-comma", {
"main.css": /* css */ `
@layer one, two, three;
@import url('./a.css') layer(one);
@import url('./b.css') layer(two);
@import url('./c.css') layer(three);
`,
"a.css": /* css */ `body { margin: 0; }`,
"b.css": /* css */ `h1 { font-family: sans-serif; }`,
"c.css": /* css */ `.text-centered { text-align: center; }`,
});
await using proc = Bun.spawn({
cmd: [bunExe(), "build", "./main.css", "--outdir=out"],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(stderr).toBe("");
const outCss = await Bun.file(`${dir}/out/main.css`).text();
// @import statements should NOT appear in the output
expect(outCss).not.toContain("@import");
// The actual layer block content should be present
expect(outCss).toContain("margin: 0");
expect(outCss).toContain("font-family: sans-serif");
expect(outCss).toContain("text-align: center");
expect(exitCode).toBe(0);
});
});

View File

@@ -0,0 +1,49 @@
import { expect, test } from "bun:test";
import { tempDir } from "harness";
test("bun build produces valid JS for unused dynamic imports", async () => {
using dir = tempDir("issue-24709", {
"void-import.ts": `
export function main() {
void import("./dep.ts");
}
`,
"bare-import.ts": `
export function main() {
import("./dep.ts");
}
`,
"dep.ts": `export const x = 1;`,
});
const transpiler = new Bun.Transpiler();
// Test void import("...")
{
const result = await Bun.build({
entrypoints: [`${dir}/void-import.ts`],
});
expect(result.success).toBe(true);
const output = await result.outputs[0].text();
// The output must not contain an empty arrow function body like "() => )"
expect(output).not.toContain("() => )");
// Validate the output is syntactically valid JS by scanning it
expect(() => transpiler.scanImports(output)).not.toThrow();
}
// Test bare import("...")
{
const result = await Bun.build({
entrypoints: [`${dir}/bare-import.ts`],
});
expect(result.success).toBe(true);
const output = await result.outputs[0].text();
expect(output).not.toContain("() => )");
expect(() => transpiler.scanImports(output)).not.toThrow();
}
});

View File

@@ -0,0 +1,39 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe, tempDir } from "harness";
test("CSS bundler should not drop :root rule before @property", async () => {
using dir = tempDir("css-property-root-dedup", {
"input.css": `:root {
--bar: 1;
}
@property --foo {
syntax: "<number>";
inherits: true;
initial-value: 0;
}
:root {
--baz: 2;
}
`,
});
await using proc = Bun.spawn({
cmd: [bunExe(), "build", "input.css", "--outdir", "out"],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
const output = await Bun.file(`${dir}/out/input.css`).text();
// Both :root blocks must be preserved — they cannot be merged across the @property boundary
expect(output).toContain("--bar: 1");
expect(output).toContain("--baz: 2");
expect(output).toContain("@property --foo");
expect(exitCode).toBe(0);
});