Compare commits


8 Commits

Author SHA1 Message Date
autofix-ci[bot]
053637d55f [autofix.ci] apply automated fixes 2026-02-11 16:24:28 +00:00
Claude Bot
b746919e38 fix(web): give File a distinct prototype from Blob
Previously `File.prototype === Blob.prototype` was `true`, causing
`new File(...).constructor.name` to return `"Blob"` instead of `"File"`.
This creates a separate `FilePrototype` that inherits from
`Blob.prototype`, fixing the prototype chain per the spec.

Closes #26899

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-11 16:22:22 +00:00
SUZUKI Sosuke
b7475d8768 fix(buffer): return fixed-length view from slice on resizable ArrayBuffer (#26822)
## Summary

Follow-up to #26819 ([review
comment](https://github.com/oven-sh/bun/pull/26819#discussion_r2781484939)).
Fixes `Buffer.slice()` / `Buffer.subarray()` on resizable `ArrayBuffer`
/ growable `SharedArrayBuffer` to return a **fixed-length view** instead
of a length-tracking view.

## Problem

The resizable/growable branch was passing `std::nullopt` to
`JSUint8Array::create()`, which creates a length-tracking view. When the
underlying buffer grows, the sliced view's length would incorrectly
expand:

```js
const rab = new ArrayBuffer(10, { maxByteLength: 20 });
const buf = Buffer.from(rab);
const sliced = buf.slice(0, 5);
sliced.length; // 5

rab.resize(20);
sliced.length; // was 10 (wrong), now 5 (correct)
```

Node.js specifies that `Buffer.slice()` always returns a fixed-length
view (verified on Node.js v22).

## Fix

Replace `std::nullopt` with `newLength` in the
`isResizableOrGrowableShared()` branch of
`jsBufferPrototypeFunction_sliceBody`.

## Test

Added a regression test that creates a `Buffer` from a resizable
`ArrayBuffer`, slices it, resizes the buffer, and verifies the slice
length doesn't change.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 04:48:20 -08:00
Jarred Sumner
4494170f74 perf(event_loop): avoid eventfd wakeup for setImmediate on POSIX (#26821)
### What does this PR do?

Instead of calling event_loop.wakeup() (which writes to the eventfd)
when there are pending immediate tasks, use a zero timeout in
getTimeout() so epoll/kqueue returns immediately. This avoids the
overhead of the eventfd write/read cycle on each setImmediate iteration.

On Windows, continue to call .wakeup() since that's cheap for libuv.

Verified with strace: system bun makes ~44k eventfd writes for a 5s
setImmediate loop, while this change makes 0.
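As an illustrative sketch (not the PR's actual benchmark), the workload being optimized is a chain of `setImmediate` callbacks; with this change, each iteration on POSIX re-enters epoll/kqueue with a zero timeout instead of paying an eventfd write/read cycle:

```js
// Sketch of the hot path: a tight setImmediate chain. Before this change,
// each iteration triggered an eventfd write to wake the poller; now
// getTimeout() returns a zero timeout so the poller returns immediately.
let iterations = 0;
const ITERATIONS = 10_000;

function tick() {
  if (++iterations < ITERATIONS) {
    setImmediate(tick); // queues an immediate task; no eventfd wakeup needed on POSIX
  } else {
    console.log(`completed ${iterations} setImmediate iterations`);
  }
}
setImmediate(tick);
```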


---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-09 04:47:52 -08:00
SUZUKI Sosuke
9484218ba4 perf(buffer): move Buffer.slice/subarray to native C++ with int32 fast path (#26819)
## Summary

Move `Buffer.slice()` / `Buffer.subarray()` from a JS builtin to a
native C++ implementation, eliminating the `adjustOffset` closure
allocation and JS→C++ constructor overhead on every call. Additionally,
add an int32 fast path that skips `toNumber()` (which can invoke
`valueOf`/`Symbol.toPrimitive`) when arguments are already int32—the
common case for calls like `buf.slice(0, 10)`.
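The coercion the fast path skips is observable when an argument carries user code; a small sketch (the `tricky` object is hypothetical, not from the PR):

```js
const buf = Buffer.from("hello world");

// int32 arguments take the fast path: no toNumber(), no user code runs.
console.log(buf.slice(0, 5).toString()); // "hello"

// A non-int32 argument goes through toNumber(), which can invoke valueOf().
let valueOfCalls = 0;
const tricky = {
  valueOf() {
    valueOfCalls++;
    return 5;
  },
};
const sliced = buf.slice(0, tricky);
console.log(sliced.toString()); // "hello"
console.log(valueOfCalls); // 1 — coercion observably ran user code
```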

## Changes

- **`src/bun.js/bindings/JSBuffer.cpp`**: Add
`jsBufferPrototypeFunction_sliceBody` with `adjustSliceOffsetInt32` /
`adjustSliceOffsetDouble` helpers. Update prototype hash table entries
from `BuiltinGeneratorType` to `NativeFunctionType` for both `slice` and
`subarray`.
- **`src/js/builtins/JSBufferPrototype.ts`**: Remove the JS `slice`
function (was lines 667–687).
- **`bench/snippets/buffer-slice.mjs`**: Add mitata benchmark.

## Benchmark (Apple M4 Max)

| Benchmark | Before (v1.3.8) | After | Speedup |
|---|---|---|---|
| `Buffer(64).slice()` | 27.19 ns | **14.56 ns** | **1.87x** |
| `Buffer(1024).slice()` | 27.84 ns | **14.62 ns** | **1.90x** |
| `Buffer(1M).slice()` | 29.20 ns | **14.89 ns** | **1.96x** |
| `Buffer(64).slice(10)` | 30.26 ns | **16.01 ns** | **1.89x** |
| `Buffer(1024).slice(10, 100)` | 30.92 ns | **18.32 ns** | **1.69x** |
| `Buffer(1024).slice(-100, -10)` | 28.82 ns | **17.37 ns** | **1.66x** |
| `Buffer(1024).subarray(10, 100)` | 28.67 ns | **16.32 ns** | **1.76x** |

**~1.7–1.9x faster** across all cases. All 449 buffer tests pass.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 01:46:33 -08:00
robobun
2a5e8ef38c fix(kqueue): fix incorrect filter comparison causing excessive CPU on macOS (#26812)
## Summary

Fixes the remaining kqueue filter comparison bug in
`packages/bun-usockets/src/eventing/epoll_kqueue.c` that caused
excessive CPU usage with network requests on macOS:

- **`us_loop_run_bun_tick` filter comparison (line 302-303):** kqueue
filter values (`EVFILT_READ=-1`, `EVFILT_WRITE=-2`) were compared using
bitwise AND (`&`) instead of equality (`==`). Since these are signed
negative integers (not bitmasks), `(-2) & (-1)` = `-2` (truthy), meaning
every `EVFILT_WRITE` event was also misidentified as `EVFILT_READ`. This
was already fixed in `us_loop_run` (by PR #25475) but the same bug
remained in `us_loop_run_bun_tick`, which is the primary event loop
function used by Bun.

This is a macOS-only issue (Linux uses epoll, which is unaffected).
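The arithmetic behind the bug can be reproduced directly (JavaScript bitwise operators work on 32-bit signed integers, matching the C behavior):

```js
// kqueue filter values are small negative integers, not bitmask flags.
const EVFILT_READ = -1;  // all bits set in two's complement
const EVFILT_WRITE = -2; // all bits set except the lowest

const filter = EVFILT_WRITE; // a pure write event

// Buggy check: bitwise AND of two negative integers is non-zero.
console.log(filter & EVFILT_READ); // -2 — truthy, so WRITE is misread as READ
console.log((filter & EVFILT_READ) !== 0); // true

// Correct check: equality.
console.log(filter === EVFILT_READ); // false
console.log(filter === EVFILT_WRITE); // true
```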

Closes #26811

## Test plan

- [x] Added regression test at `test/regression/issue/26811.test.ts`
that makes concurrent HTTPS POST requests
- [x] Test passes with `bun bd test test/regression/issue/26811.test.ts`
- [ ] Manual verification on macOS: run the reporter's [repro
script](https://gist.github.com/jkoppel/d26732574dfcdcc6bfc4958596054d2e)
and confirm CPU usage stays low

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-02-09 00:52:17 -08:00
robobun
a84f12b816 Use edge-triggered epoll for eventfd wakeups (#26815)
## Summary

- Switch both eventfd wakeup sites (Zig IO watcher loop and usockets
async) to edge-triggered (`EPOLLET`) epoll mode, eliminating unnecessary
`read()` syscalls on every event loop wakeup
- Add `EAGAIN`/`EINTR` overflow handling in `us_internal_async_wakeup`,
matching libuv's approach ([commit
`e5cb1d3d`](https://github.com/libuv/libuv/commit/e5cb1d3d))

With edge-triggered mode, each `write()` to the eventfd produces a new
edge event regardless of the current counter value, so draining the
counter via `read()` is unnecessary. The counter will never overflow in
practice (~18 quintillion wakeups), but overflow handling is included
defensively.

### Files changed

- **`src/io/io.zig`** — Add `EPOLL.ET` to eventfd registration, replace
drain `read()` with `continue`
- **`packages/bun-usockets/src/eventing/epoll_kqueue.c`** — Set
`leave_poll_ready = 1` for async callbacks, upgrade to `EPOLLET` via
`EPOLL_CTL_MOD`, add `EAGAIN`/`EINTR` handling in wakeup write

## Test plan

- [x] Verified with `strace -f -e trace=read,eventfd2` that eventfd
reads are fully eliminated after the change (0 reads on the eventfd fd)
- [x] Confirmed remaining 8-byte reads in traces are timerfd reads
(legitimate, required)
- [x] Stress tested with 50 concurrent async tasks (1000 total
`Bun.sleep(1)` iterations) — all completed correctly
- [x] `LinuxWaker.wait()` (used by `BundleThread` as a blocking sleep)
is intentionally unchanged

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-02-09 00:36:30 -08:00
SUZUKI Sosuke
0f43ea9bec perf(structuredClone): add fast path for root-level dense arrays (#26814)
## Summary

Add a fast path for `structuredClone` and `postMessage` when the root
value is a dense array of primitives or strings. This bypasses the full
`CloneSerializer`/`CloneDeserializer` machinery by keeping data in
native C++ structures instead of serializing to a byte stream.

**Important:** This optimization only applies when the root value passed
to `structuredClone()` / `postMessage()` is an array. Nested arrays
within objects still go through the normal serialization path.

## Implementation

Three tiers of array fast paths, checked in order:

| Tier | Indexing Type | Strategy | Applies When |
|------|--------------|----------|--------------|
| **Tier 1** | `ArrayWithInt32` | `memcpy` butterfly data | Dense int32 array, no holes, no named properties |
| **Tier 2** | `ArrayWithDouble` | `memcpy` butterfly data | Dense double array, no holes, no named properties |
| **Tier 3** | `ArrayWithContiguous` | Copy elements into `FixedVector<variant<JSValue, String>>` | Dense array of primitives/strings, no holes, no named properties |

All tiers fall through to the normal serialization path when:
- The array has holes that must forward to the prototype
- The array has named properties (e.g., `arr.foo = "bar"`) — checked via
`structure->maxOffset() != invalidOffset`
- Elements contain non-primitive, non-string values (objects, arrays,
etc.)
- The context requires wire-format serialization (storage, cross-process
transfer)

### Deserialization

- **Tier 1/2:** Allocate a new `Butterfly` via `vm.auxiliarySpace()`,
`memcpy` data back, create array with `JSArray::createWithButterfly()`.
Falls back to normal deserialization if `isHavingABadTime` (forced
ArrayStorage mode).
- **Tier 3:** Pre-convert elements to `JSValue` (including `jsString()`
allocation), then use `JSArray::tryCreateUninitializedRestricted()` +
`initializeIndex()`.

## Benchmarks

Apple M4 Max, comparing system Bun 1.3.8 vs this branch (release build):

| Benchmark | Before | After | Speedup |
|-----------|--------|-------|---------|
| `structuredClone([10 numbers])` | 308.71 ns | 40.38 ns | **7.6x** |
| `structuredClone([100 numbers])` | 1.62 µs | 86.87 ns | **18.7x** |
| `structuredClone([1000 numbers])` | 13.79 µs | 544.56 ns | **25.3x** |
| `structuredClone([10 strings])` | 642.38 ns | 307.38 ns | **2.1x** |
| `structuredClone([100 strings])` | 5.67 µs | 2.57 µs | **2.2x** |
| `structuredClone([10 mixed])` | 446.32 ns | 198.35 ns | **2.3x** |
| `structuredClone(nested array)` | 1.84 µs | 1.79 µs | 1.0x (not eligible) |
| `structuredClone({a: 123})` | 95.98 ns | 100.07 ns | 1.0x (no regression) |

Int32 arrays see the largest gains (up to 25x) since they use a direct
`memcpy` of butterfly memory. String/mixed arrays see ~2x improvement.
No performance regression on non-eligible inputs.

## Bug Fix

Also fixes a correctness bug where arrays with named properties (e.g.,
`arr.foo = "bar"`) would lose those properties when going through the
array fast path. Added a `structure->maxOffset() != invalidOffset` guard
to fall back to normal serialization for such arrays.

Fixed a minor double-counting issue in `computeMemoryCost` where
`JSValue` elements in `SimpleArray` were counted both by `byteSize()`
and individually.

## Test Plan

38 tests in `test/js/web/structured-clone-fastpath.test.ts` covering:

- Basic array types: empty, numbers, strings, mixed primitives, special
numbers (`-0`, `NaN`, `Infinity`)
- Large arrays (10,000 elements)
- Tier 2: double arrays, Int32→Double transition
- Deep clone independence verification
- Named properties on Int32, Double, and Contiguous arrays
- `postMessage` via `MessageChannel` for Int32, Double, and mixed arrays
- Edge cases: frozen/sealed arrays, deleted elements (holes), `length`
extension, single-element arrays
- Prototype modification (custom prototype, indexed prototype properties
with holes)
- `Array` subclass identity loss (per spec)
- `undefined`-only and `null`-only arrays
- Multiple independent clones from the same source

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-08 21:36:59 -08:00
23 changed files with 1148 additions and 158 deletions

View File

@@ -0,0 +1,38 @@
// @runtime bun,node
import { bench, group, run } from "../runner.mjs";
const small = Buffer.alloc(64, 0x42);
const medium = Buffer.alloc(1024, 0x42);
const large = Buffer.alloc(1024 * 1024, 0x42);
group("slice - no args", () => {
bench("Buffer(64).slice()", () => small.slice());
bench("Buffer(1024).slice()", () => medium.slice());
bench("Buffer(1M).slice()", () => large.slice());
});
group("slice - one int arg", () => {
bench("Buffer(64).slice(10)", () => small.slice(10));
bench("Buffer(1024).slice(10)", () => medium.slice(10));
bench("Buffer(1M).slice(1024)", () => large.slice(1024));
});
group("slice - two int args", () => {
bench("Buffer(64).slice(10, 50)", () => small.slice(10, 50));
bench("Buffer(1024).slice(10, 100)", () => medium.slice(10, 100));
bench("Buffer(1M).slice(1024, 4096)", () => large.slice(1024, 4096));
});
group("slice - negative args", () => {
bench("Buffer(64).slice(-10)", () => small.slice(-10));
bench("Buffer(1024).slice(-100, -10)", () => medium.slice(-100, -10));
bench("Buffer(1M).slice(-4096, -1024)", () => large.slice(-4096, -1024));
});
group("subarray - two int args", () => {
bench("Buffer(64).subarray(10, 50)", () => small.subarray(10, 50));
bench("Buffer(1024).subarray(10, 100)", () => medium.subarray(10, 100));
bench("Buffer(1M).subarray(1024, 4096)", () => large.subarray(1024, 4096));
});
await run();

View File

@@ -33,7 +33,23 @@ var testArray = [
import { bench, run } from "../runner.mjs";
bench("structuredClone(array)", () => structuredClone(testArray));
bench("structuredClone(nested array)", () => structuredClone(testArray));
bench("structuredClone(123)", () => structuredClone(123));
bench("structuredClone({a: 123})", () => structuredClone({ a: 123 }));
// Array fast path targets
var numbersSmall = Array.from({ length: 10 }, (_, i) => i);
var numbersMedium = Array.from({ length: 100 }, (_, i) => i);
var numbersLarge = Array.from({ length: 1000 }, (_, i) => i);
var stringsSmall = Array.from({ length: 10 }, (_, i) => `item-${i}`);
var stringsMedium = Array.from({ length: 100 }, (_, i) => `item-${i}`);
var mixed = [1, "hello", true, null, undefined, 3.14, "world", false, 42, "test"];
bench("structuredClone([10 numbers])", () => structuredClone(numbersSmall));
bench("structuredClone([100 numbers])", () => structuredClone(numbersMedium));
bench("structuredClone([1000 numbers])", () => structuredClone(numbersLarge));
bench("structuredClone([10 strings])", () => structuredClone(stringsSmall));
bench("structuredClone([100 strings])", () => structuredClone(stringsMedium));
bench("structuredClone([10 mixed])", () => structuredClone(mixed));
await run();

View File

@@ -188,6 +188,103 @@ struct us_loop_t *us_create_loop(void *hint, void (*wakeup_cb)(struct us_loop_t
return loop;
}
/* Shared dispatch loop for both us_loop_run and us_loop_run_bun_tick */
static void us_internal_dispatch_ready_polls(struct us_loop_t *loop) {
#ifdef LIBUS_USE_EPOLL
for (loop->current_ready_poll = 0; loop->current_ready_poll < loop->num_ready_polls; loop->current_ready_poll++) {
struct us_poll_t *poll = GET_READY_POLL(loop, loop->current_ready_poll);
if (LIKELY(poll)) {
if (CLEAR_POINTER_TAG(poll) != poll) {
Bun__internal_dispatch_ready_poll(loop, poll);
continue;
}
int events = loop->ready_polls[loop->current_ready_poll].events;
const int error = events & EPOLLERR;
const int eof = events & EPOLLHUP;
events &= us_poll_events(poll);
if (events || error || eof) {
us_internal_dispatch_ready_poll(poll, error, eof, events);
}
}
}
#else
/* Kqueue delivers each filter (READ, WRITE, TIMER, etc.) as a separate kevent,
* so the same fd/poll can appear twice in ready_polls. We coalesce them into a
* single set of flags per poll before dispatching, matching epoll's behavior
* where each fd appears once with a combined bitmask. */
struct kevent_flags {
uint8_t readable : 1;
uint8_t writable : 1;
uint8_t error : 1;
uint8_t eof : 1;
uint8_t skip : 1;
uint8_t _pad : 3;
};
_Static_assert(sizeof(struct kevent_flags) == 1, "kevent_flags must be 1 byte");
struct kevent_flags coalesced[LIBUS_MAX_READY_POLLS]; /* no zeroing needed — every index is written in the first pass */
/* First pass: decode kevents and coalesce same-poll entries */
for (int i = 0; i < loop->num_ready_polls; i++) {
struct us_poll_t *poll = GET_READY_POLL(loop, i);
if (!poll || CLEAR_POINTER_TAG(poll) != poll) {
coalesced[i] = (struct kevent_flags){ .skip = 1 };
continue;
}
const int16_t filter = loop->ready_polls[i].filter;
const uint16_t flags = loop->ready_polls[i].flags;
struct kevent_flags bits = {
.readable = (filter == EVFILT_READ || filter == EVFILT_TIMER || filter == EVFILT_MACHPORT),
.writable = (filter == EVFILT_WRITE),
.error = !!(flags & EV_ERROR),
.eof = !!(flags & EV_EOF),
};
/* Look backward for a prior entry with the same poll to coalesce into.
* Kqueue returns at most 2 kevents per fd (READ + WRITE). */
int merged = 0;
for (int j = i - 1; j >= 0; j--) {
if (!coalesced[j].skip && GET_READY_POLL(loop, j) == poll) {
coalesced[j].readable |= bits.readable;
coalesced[j].writable |= bits.writable;
coalesced[j].error |= bits.error;
coalesced[j].eof |= bits.eof;
coalesced[i] = (struct kevent_flags){ .skip = 1 };
merged = 1;
break;
}
}
if (!merged) {
coalesced[i] = bits;
}
}
/* Second pass: dispatch everything in order — tagged pointers and coalesced events */
for (loop->current_ready_poll = 0; loop->current_ready_poll < loop->num_ready_polls; loop->current_ready_poll++) {
struct us_poll_t *poll = GET_READY_POLL(loop, loop->current_ready_poll);
if (!poll) continue;
/* Tagged pointers (FilePoll) go through Bun's own dispatch */
if (CLEAR_POINTER_TAG(poll) != poll) {
Bun__internal_dispatch_ready_poll(loop, poll);
continue;
}
struct kevent_flags bits = coalesced[loop->current_ready_poll];
if (bits.skip) continue;
int events = (bits.readable ? LIBUS_SOCKET_READABLE : 0)
| (bits.writable ? LIBUS_SOCKET_WRITABLE : 0);
events &= us_poll_events(poll);
if (events || bits.error || bits.eof) {
us_internal_dispatch_ready_poll(poll, bits.error, bits.eof, events);
}
}
#endif
}
void us_loop_run(struct us_loop_t *loop) {
us_loop_integrate(loop);
@@ -205,41 +302,7 @@ void us_loop_run(struct us_loop_t *loop) {
} while (IS_EINTR(loop->num_ready_polls));
#endif
/* Iterate ready polls, dispatching them by type */
for (loop->current_ready_poll = 0; loop->current_ready_poll < loop->num_ready_polls; loop->current_ready_poll++) {
struct us_poll_t *poll = GET_READY_POLL(loop, loop->current_ready_poll);
/* Any ready poll marked with nullptr will be ignored */
if (LIKELY(poll)) {
if (CLEAR_POINTER_TAG(poll) != poll) {
Bun__internal_dispatch_ready_poll(loop, poll);
continue;
}
#ifdef LIBUS_USE_EPOLL
int events = loop->ready_polls[loop->current_ready_poll].events;
const int error = events & EPOLLERR;
const int eof = events & EPOLLHUP;
#else
const struct kevent64_s* current_kevent = &loop->ready_polls[loop->current_ready_poll];
const int16_t filter = current_kevent->filter;
const uint16_t flags = current_kevent->flags;
const uint32_t fflags = current_kevent->fflags;
// > Multiple events which trigger the filter do not result in multiple kevents being placed on the kqueue
// > Instead, the filter will aggregate the events into a single kevent struct
// Note: EV_ERROR only sets the error in data as part of changelist. Not in this call!
int events = 0
| ((filter == EVFILT_READ) ? LIBUS_SOCKET_READABLE : 0)
| ((filter == EVFILT_WRITE) ? LIBUS_SOCKET_WRITABLE : 0);
const int error = (flags & (EV_ERROR)) ? ((int)fflags || 1) : 0;
const int eof = (flags & (EV_EOF));
#endif
/* Always filter all polls by what they actually poll for (callback polls always poll for readable) */
events &= us_poll_events(poll);
if (events || error || eof) {
us_internal_dispatch_ready_poll(poll, error, eof, events);
}
}
}
us_internal_dispatch_ready_polls(loop);
/* Emit post callback */
us_internal_loop_post(loop);
@@ -263,57 +326,33 @@ void us_loop_run_bun_tick(struct us_loop_t *loop, const struct timespec* timeout
/* Emit pre callback */
us_internal_loop_pre(loop);
if (loop->data.jsc_vm)
const unsigned int had_wakeups = __atomic_exchange_n(&loop->pending_wakeups, 0, __ATOMIC_ACQUIRE);
const int will_idle_inside_event_loop = had_wakeups == 0 && (!timeout || (timeout->tv_nsec != 0 || timeout->tv_sec != 0));
if (will_idle_inside_event_loop && loop->data.jsc_vm)
Bun__JSC_onBeforeWait(loop->data.jsc_vm);
/* Fetch ready polls */
#ifdef LIBUS_USE_EPOLL
/* A zero timespec already has a fast path in ep_poll (fs/eventpoll.c):
* it sets timed_out=1 (line 1952) and returns before any scheduler
* interaction (line 1975). No equivalent of KEVENT_FLAG_IMMEDIATE needed. */
loop->num_ready_polls = bun_epoll_pwait2(loop->fd, loop->ready_polls, 1024, timeout);
#else
do {
loop->num_ready_polls = kevent64(loop->fd, NULL, 0, loop->ready_polls, 1024, 0, timeout);
loop->num_ready_polls = kevent64(loop->fd, NULL, 0, loop->ready_polls, 1024,
/* When we won't idle (pending wakeups or zero timeout), use KEVENT_FLAG_IMMEDIATE.
* In XNU's kqueue_scan (bsd/kern/kern_event.c):
* - KEVENT_FLAG_IMMEDIATE: returns immediately after kqueue_process() (line 8031)
* - Zero timespec without the flag: falls through to assert_wait_deadline (line 8039)
* and thread_block (line 8048), doing a full context switch cycle (~14us) even
* though the deadline is already in the past. */
will_idle_inside_event_loop ? 0 : KEVENT_FLAG_IMMEDIATE,
timeout);
} while (IS_EINTR(loop->num_ready_polls));
#endif
/* Iterate ready polls, dispatching them by type */
for (loop->current_ready_poll = 0; loop->current_ready_poll < loop->num_ready_polls; loop->current_ready_poll++) {
struct us_poll_t *poll = GET_READY_POLL(loop, loop->current_ready_poll);
/* Any ready poll marked with nullptr will be ignored */
if (LIKELY(poll)) {
if (CLEAR_POINTER_TAG(poll) != poll) {
Bun__internal_dispatch_ready_poll(loop, poll);
continue;
}
#ifdef LIBUS_USE_EPOLL
int events = loop->ready_polls[loop->current_ready_poll].events;
const int error = events & EPOLLERR;
const int eof = events & EPOLLHUP;
#else
const struct kevent64_s* current_kevent = &loop->ready_polls[loop->current_ready_poll];
const int16_t filter = current_kevent->filter;
const uint16_t flags = current_kevent->flags;
const uint32_t fflags = current_kevent->fflags;
// > Multiple events which trigger the filter do not result in multiple kevents being placed on the kqueue
// > Instead, the filter will aggregate the events into a single kevent struct
int events = 0
| ((filter & EVFILT_READ) ? LIBUS_SOCKET_READABLE : 0)
| ((filter & EVFILT_WRITE) ? LIBUS_SOCKET_WRITABLE : 0);
// Note: EV_ERROR only sets the error in data as part of changelist. Not in this call!
const int error = (flags & (EV_ERROR)) ? ((int)fflags || 1) : 0;
const int eof = (flags & (EV_EOF));
#endif
/* Always filter all polls by what they actually poll for (callback polls always poll for readable) */
events &= us_poll_events(poll);
if (events || error || eof) {
us_internal_dispatch_ready_poll(poll, error, eof, events);
}
}
}
us_internal_dispatch_ready_polls(loop);
/* Emit post callback */
us_internal_loop_post(loop);
@@ -613,7 +652,7 @@ struct us_internal_async *us_internal_create_async(struct us_loop_t *loop, int f
struct us_internal_callback_t *cb = (struct us_internal_callback_t *) p;
cb->loop = loop;
cb->cb_expects_the_loop = 1;
cb->leave_poll_ready = 0;
cb->leave_poll_ready = 1; /* Edge-triggered: skip reading eventfd on wakeup */
return (struct us_internal_async *) cb;
}
@@ -635,12 +674,28 @@ void us_internal_async_set(struct us_internal_async *a, void (*cb)(struct us_int
internal_cb->cb = (void (*)(struct us_internal_callback_t *)) cb;
us_poll_start((struct us_poll_t *) a, internal_cb->loop, LIBUS_SOCKET_READABLE);
#ifdef LIBUS_USE_EPOLL
/* Upgrade to edge-triggered to avoid reading the eventfd on each wakeup */
struct epoll_event event;
event.events = EPOLLIN | EPOLLET;
event.data.ptr = (struct us_poll_t *) a;
epoll_ctl(internal_cb->loop->fd, EPOLL_CTL_MOD,
us_poll_fd((struct us_poll_t *) a), &event);
#endif
}
void us_internal_async_wakeup(struct us_internal_async *a) {
uint64_t one = 1;
int written = write(us_poll_fd((struct us_poll_t *) a), &one, 8);
(void)written;
int fd = us_poll_fd((struct us_poll_t *) a);
uint64_t val;
for (val = 1; ; val = 1) {
if (write(fd, &val, 8) >= 0) return;
if (errno == EINTR) continue;
if (errno == EAGAIN) {
/* Counter overflow — drain and retry */
if (read(fd, &val, 8) > 0 || errno == EAGAIN || errno == EINTR) continue;
}
break;
}
}
#else

View File

@@ -54,6 +54,10 @@ struct us_loop_t {
/* Number of polls owned by bun */
unsigned int bun_polls;
/* Incremented atomically by wakeup(), swapped to 0 before epoll/kqueue.
* If non-zero, the event loop will return immediately so we can skip the GC safepoint. */
unsigned int pending_wakeups;
/* The list of ready polls */
#ifdef LIBUS_USE_EPOLL
alignas(LIBUS_EXT_ALIGNMENT) struct epoll_event ready_polls[1024];

View File

@@ -93,6 +93,9 @@ void us_internal_loop_data_free(struct us_loop_t *loop) {
}
void us_wakeup_loop(struct us_loop_t *loop) {
#ifndef LIBUS_USE_LIBUV
__atomic_fetch_add(&loop->pending_wakeups, 1, __ATOMIC_RELEASE);
#endif
us_internal_async_wakeup(loop->data.wakeup_async);
}
@@ -393,8 +396,12 @@ void us_internal_dispatch_ready_poll(struct us_poll_t *p, int error, int eof, in
if (events & LIBUS_SOCKET_WRITABLE && !error) {
s->flags.last_write_failed = 0;
#ifdef LIBUS_USE_KQUEUE
/* Kqueue is one-shot so is not writable anymore */
p->state.poll_type = us_internal_poll_type(p) | ((events & LIBUS_SOCKET_READABLE) ? POLL_TYPE_POLLING_IN : 0);
/* Kqueue EVFILT_WRITE is one-shot so the filter is removed after delivery.
* Clear POLLING_OUT to reflect this.
* Keep POLLING_IN from the poll's own state, NOT from `events`: kqueue delivers
* each filter as a separate kevent, so a pure EVFILT_WRITE event won't have
* LIBUS_SOCKET_READABLE set even though the socket is still registered for reads. */
p->state.poll_type = us_internal_poll_type(p) | (p->state.poll_type & POLL_TYPE_POLLING_IN);
#endif
s = s->context->on_writable(s);
@@ -412,7 +419,7 @@ void us_internal_dispatch_ready_poll(struct us_poll_t *p, int error, int eof, in
us_poll_change(&s->p, loop, us_poll_events(&s->p) & LIBUS_SOCKET_READABLE);
} else {
#ifdef LIBUS_USE_KQUEUE
/* Kqueue one-shot writable needs to be re-enabled */
/* Kqueue one-shot writable needs to be re-registered */
us_poll_change(&s->p, loop, us_poll_events(&s->p) | LIBUS_SOCKET_WRITABLE);
#endif
}

View File

@@ -1139,35 +1139,13 @@ export fn Bun__runVirtualModule(globalObject: *JSGlobalObject, specifier_ptr: *c
fn getHardcodedModule(jsc_vm: *VirtualMachine, specifier: bun.String, hardcoded: HardcodedModule) ?ResolvedSource {
analytics.Features.builtin_modules.insert(hardcoded);
return switch (hardcoded) {
.@"bun:main" => {
// For standalone executables with bytecode, look up the entry point
// in the module graph to attach cached bytecode.
if (jsc_vm.standalone_module_graph) |graph| {
const entry_file = graph.entryPoint();
if (entry_file.bytecode.len > 0) {
return .{
.source_code = entry_file.toWTFString(),
.specifier = specifier,
.source_url = specifier,
.bytecode_origin_path = if (entry_file.bytecode_origin_path.len > 0) bun.String.fromBytes(entry_file.bytecode_origin_path) else bun.String.empty,
.source_code_needs_deref = false,
.bytecode_cache = entry_file.bytecode.ptr,
.bytecode_cache_size = entry_file.bytecode.len,
.module_info = if (entry_file.module_info.len > 0)
analyze_transpiled_module.ModuleInfoDeserialized.createFromCachedRecord(entry_file.module_info, bun.default_allocator)
else
null,
.is_commonjs_module = entry_file.module_format == .cjs,
};
}
}
return .{
.source_code = bun.String.cloneUTF8(jsc_vm.entry_point.source.contents),
.specifier = specifier,
.source_url = specifier,
.tag = .esm,
.source_code_needs_deref = true,
};
.@"bun:main" => .{
.allocator = null,
.source_code = bun.String.cloneUTF8(jsc_vm.entry_point.source.contents),
.specifier = specifier,
.source_url = specifier,
.tag = .esm,
.source_code_needs_deref = true,
},
.@"bun:internal-for-testing" => {
if (!Environment.isDebug) {

View File

@@ -245,6 +245,16 @@ pub const All = struct {
}
pub fn getTimeout(this: *All, spec: *timespec, vm: *VirtualMachine) bool {
// On POSIX, if there are pending immediate tasks, use a zero timeout
// so epoll/kqueue returns immediately without the overhead of writing
// to the eventfd via wakeup().
if (comptime Environment.isPosix) {
if (vm.event_loop.immediate_tasks.items.len > 0) {
spec.* = .{ .nsec = 0, .sec = 0 };
return true;
}
}
var maybe_now: ?timespec = null;
while (this.timers.peek()) |min| {
const now = maybe_now orelse now: {

View File

@@ -119,6 +119,7 @@ JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_swap16);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_swap32);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_swap64);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_toString);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_slice);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_write);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_writeBigInt64LE);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_writeBigInt64BE);
@@ -1879,6 +1880,103 @@ bool inline parseArrayIndex(JSC::ThrowScope& scope, JSC::JSGlobalObject* globalO
return true;
}
static ALWAYS_INLINE size_t adjustSliceOffsetInt32(int32_t offset, size_t length)
{
if (offset < 0) {
int64_t adjusted = static_cast<int64_t>(offset) + static_cast<int64_t>(length);
return adjusted > 0 ? static_cast<size_t>(adjusted) : 0;
}
return static_cast<size_t>(offset) < length ? static_cast<size_t>(offset) : length;
}
static ALWAYS_INLINE size_t adjustSliceOffsetDouble(double offset, size_t length)
{
if (std::isnan(offset)) {
return 0;
}
offset = std::trunc(offset);
if (offset == 0) {
return 0;
} else if (offset < 0) {
double adjusted = offset + static_cast<double>(length);
return adjusted > 0 ? static_cast<size_t>(adjusted) : 0;
} else {
return offset < static_cast<double>(length) ? static_cast<size_t>(offset) : length;
}
}
static JSC::EncodedJSValue jsBufferPrototypeFunction_sliceBody(JSC::JSGlobalObject* lexicalGlobalObject, JSC::CallFrame* callFrame, typename IDLOperation<JSArrayBufferView>::ClassParameter castedThis)
{
auto& vm = JSC::getVM(lexicalGlobalObject);
auto throwScope = DECLARE_THROW_SCOPE(vm);
auto* globalObject = defaultGlobalObject(lexicalGlobalObject);
size_t byteLength = castedThis->byteLength();
size_t byteOffset = castedThis->byteOffset();
size_t startOffset = 0;
size_t endOffset = byteLength;
unsigned argCount = callFrame->argumentCount();
if (argCount > 0) {
JSValue startArg = callFrame->uncheckedArgument(0);
if (startArg.isInt32()) {
startOffset = adjustSliceOffsetInt32(startArg.asInt32(), byteLength);
} else if (!startArg.isUndefined()) {
double startD = startArg.toNumber(lexicalGlobalObject);
RETURN_IF_EXCEPTION(throwScope, {});
startOffset = adjustSliceOffsetDouble(startD, byteLength);
}
}
if (argCount > 1) {
JSValue endArg = callFrame->uncheckedArgument(1);
if (endArg.isInt32()) {
endOffset = adjustSliceOffsetInt32(endArg.asInt32(), byteLength);
} else if (!endArg.isUndefined()) {
double endD = endArg.toNumber(lexicalGlobalObject);
RETURN_IF_EXCEPTION(throwScope, {});
endOffset = adjustSliceOffsetDouble(endD, byteLength);
}
}
size_t newLength = endOffset > startOffset ? endOffset - startOffset : 0;
if (castedThis->isDetached()) [[unlikely]] {
throwVMTypeError(lexicalGlobalObject, throwScope, "Buffer is detached"_s);
return {};
}
RefPtr<ArrayBuffer> buffer = castedThis->possiblySharedBuffer();
if (!buffer) {
throwOutOfMemoryError(globalObject, throwScope);
return {};
}
if (castedThis->isResizableOrGrowableShared()) {
auto* subclassStructure = globalObject->JSResizableOrGrowableSharedBufferSubclassStructure();
auto* uint8Array = JSC::JSUint8Array::create(lexicalGlobalObject, subclassStructure, WTF::move(buffer), byteOffset + startOffset, newLength);
RETURN_IF_EXCEPTION(throwScope, {});
if (!uint8Array) [[unlikely]] {
throwOutOfMemoryError(globalObject, throwScope);
return {};
}
RELEASE_AND_RETURN(throwScope, JSC::JSValue::encode(uint8Array));
}
auto* subclassStructure = globalObject->JSBufferSubclassStructure();
auto* uint8Array = JSC::JSUint8Array::create(lexicalGlobalObject, subclassStructure, WTF::move(buffer), byteOffset + startOffset, newLength);
RETURN_IF_EXCEPTION(throwScope, {});
if (!uint8Array) [[unlikely]] {
throwOutOfMemoryError(globalObject, throwScope);
return {};
}
RELEASE_AND_RETURN(throwScope, JSC::JSValue::encode(uint8Array));
}
// https://github.com/nodejs/node/blob/v22.9.0/lib/buffer.js#L834
// using byteLength and byte offsets here is intentional
static JSC::EncodedJSValue jsBufferPrototypeFunction_toStringBody(JSC::JSGlobalObject* lexicalGlobalObject, JSC::CallFrame* callFrame, typename IDLOperation<JSArrayBufferView>::ClassParameter castedThis)
@@ -2430,6 +2528,11 @@ JSC_DEFINE_HOST_FUNCTION(jsBufferPrototypeFunction_swap64, (JSGlobalObject * lex
return IDLOperation<JSArrayBufferView>::call<jsBufferPrototypeFunction_swap64Body>(*lexicalGlobalObject, *callFrame, "swap64");
}
JSC_DEFINE_HOST_FUNCTION(jsBufferPrototypeFunction_slice, (JSGlobalObject * lexicalGlobalObject, CallFrame* callFrame))
{
return IDLOperation<JSArrayBufferView>::call<jsBufferPrototypeFunction_sliceBody>(*lexicalGlobalObject, *callFrame, "slice");
}
JSC_DEFINE_HOST_FUNCTION(jsBufferPrototypeFunction_toString, (JSGlobalObject * lexicalGlobalObject, CallFrame* callFrame))
{
return IDLOperation<JSArrayBufferView>::call<jsBufferPrototypeFunction_toStringBody>(*lexicalGlobalObject, *callFrame, "toString");
@@ -2711,8 +2814,8 @@ static const HashTableValue JSBufferPrototypeTableValues[]
{ "readUIntBE"_s, static_cast<unsigned>(JSC::PropertyAttribute::Builtin), NoIntrinsic, { HashTableValue::BuiltinGeneratorType, jsBufferPrototypeReadUIntBECodeGenerator, 1 } },
{ "readUIntLE"_s, static_cast<unsigned>(JSC::PropertyAttribute::Builtin), NoIntrinsic, { HashTableValue::BuiltinGeneratorType, jsBufferPrototypeReadUIntLECodeGenerator, 1 } },
{ "slice"_s, static_cast<unsigned>(JSC::PropertyAttribute::Builtin), NoIntrinsic, { HashTableValue::BuiltinGeneratorType, jsBufferPrototypeSliceCodeGenerator, 2 } },
{ "subarray"_s, static_cast<unsigned>(JSC::PropertyAttribute::Builtin), NoIntrinsic, { HashTableValue::BuiltinGeneratorType, jsBufferPrototypeSliceCodeGenerator, 2 } },
{ "slice"_s, static_cast<unsigned>(JSC::PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsBufferPrototypeFunction_slice, 2 } },
{ "subarray"_s, static_cast<unsigned>(JSC::PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsBufferPrototypeFunction_slice, 2 } },
{ "swap16"_s, static_cast<unsigned>(JSC::PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsBufferPrototypeFunction_swap16, 0 } },
{ "swap32"_s, static_cast<unsigned>(JSC::PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsBufferPrototypeFunction_swap32, 0 } },
{ "swap64"_s, static_cast<unsigned>(JSC::PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsBufferPrototypeFunction_swap64, 0 } },

View File

@@ -10,8 +10,56 @@ using namespace JSC;
extern "C" SYSV_ABI void* JSDOMFile__construct(JSC::JSGlobalObject*, JSC::CallFrame* callframe);
extern "C" SYSV_ABI bool JSDOMFile__hasInstance(EncodedJSValue, JSC::JSGlobalObject*, EncodedJSValue);
// TODO: make this inherit from JSBlob instead of InternalFunction
// That will let us remove this hack for [Symbol.hasInstance] and fix the prototype chain.
// File.prototype inherits from Blob.prototype per the spec.
// This gives File instances all Blob methods while having a distinct prototype
// with constructor === File and [Symbol.toStringTag] === "File".
class JSDOMFilePrototype final : public JSC::JSNonFinalObject {
using Base = JSC::JSNonFinalObject;
public:
static constexpr unsigned StructureFlags = Base::StructureFlags;
static JSDOMFilePrototype* create(JSC::VM& vm, JSC::JSGlobalObject* globalObject, JSC::Structure* structure)
{
JSDOMFilePrototype* prototype = new (NotNull, JSC::allocateCell<JSDOMFilePrototype>(vm)) JSDOMFilePrototype(vm, structure);
prototype->finishCreation(vm, globalObject);
return prototype;
}
DECLARE_INFO;
static JSC::Structure* createStructure(JSC::VM& vm, JSC::JSGlobalObject* globalObject, JSC::JSValue prototype)
{
auto* structure = JSC::Structure::create(vm, globalObject, prototype, JSC::TypeInfo(JSC::ObjectType, StructureFlags), info());
structure->setMayBePrototype(true);
return structure;
}
template<typename CellType, JSC::SubspaceAccess>
static JSC::GCClient::IsoSubspace* subspaceFor(JSC::VM& vm)
{
STATIC_ASSERT_ISO_SUBSPACE_SHARABLE(JSDOMFilePrototype, Base);
return &vm.plainObjectSpace();
}
protected:
JSDOMFilePrototype(JSC::VM& vm, JSC::Structure* structure)
: Base(vm, structure)
{
}
void finishCreation(JSC::VM& vm, JSC::JSGlobalObject* globalObject)
{
Base::finishCreation(vm);
// Set [Symbol.toStringTag] = "File" so Object.prototype.toString.call(file) === "[object File]"
this->putDirectWithoutTransition(vm, vm.propertyNames->toStringTagSymbol,
jsNontrivialString(vm, "File"_s),
JSC::PropertyAttribute::DontEnum | JSC::PropertyAttribute::ReadOnly);
}
};
const JSC::ClassInfo JSDOMFilePrototype::s_info = { "File"_s, &Base::s_info, nullptr, nullptr, CREATE_METHOD_TABLE(JSDOMFilePrototype) };
class JSDOMFile : public JSC::InternalFunction {
using Base = JSC::InternalFunction;
@@ -40,15 +88,20 @@ public:
Base::finishCreation(vm, 2, "File"_s);
}
static JSDOMFile* create(JSC::VM& vm, JSGlobalObject* globalObject)
static JSDOMFile* create(JSC::VM& vm, JSGlobalObject* globalObject, JSC::JSObject* filePrototype)
{
auto* zigGlobal = defaultGlobalObject(globalObject);
auto structure = createStructure(vm, globalObject, zigGlobal->functionPrototype());
auto* object = new (NotNull, JSC::allocateCell<JSDOMFile>(vm)) JSDOMFile(vm, structure);
object->finishCreation(vm);
// This is not quite right. But we'll fix it if someone files an issue about it.
object->putDirect(vm, vm.propertyNames->prototype, zigGlobal->JSBlobPrototype(), JSC::PropertyAttribute::DontEnum | JSC::PropertyAttribute::DontDelete | JSC::PropertyAttribute::ReadOnly | 0);
// Set File.prototype to the distinct FilePrototype object (which inherits from Blob.prototype).
object->putDirect(vm, vm.propertyNames->prototype, filePrototype,
JSC::PropertyAttribute::DontEnum | JSC::PropertyAttribute::DontDelete | JSC::PropertyAttribute::ReadOnly);
// Set FilePrototype.constructor = File
filePrototype->putDirect(vm, vm.propertyNames->constructor, object,
static_cast<unsigned>(JSC::PropertyAttribute::DontEnum));
return object;
}
@@ -69,7 +122,7 @@ public:
auto& vm = JSC::getVM(globalObject);
JSObject* newTarget = asObject(callFrame->newTarget());
auto* constructor = globalObject->JSDOMFileConstructor();
Structure* structure = globalObject->JSBlobStructure();
Structure* structure = globalObject->JSFileStructure();
if (constructor != newTarget) {
auto scope = DECLARE_THROW_SCOPE(vm);
@@ -77,7 +130,7 @@ public:
// ShadowRealm functions belong to a different global object.
getFunctionRealm(lexicalGlobalObject, newTarget));
RETURN_IF_EXCEPTION(scope, {});
structure = InternalFunction::createSubclassStructure(lexicalGlobalObject, newTarget, functionGlobalObject->JSBlobStructure());
structure = InternalFunction::createSubclassStructure(lexicalGlobalObject, newTarget, functionGlobalObject->JSFileStructure());
RETURN_IF_EXCEPTION(scope, {});
}
@@ -103,9 +156,30 @@ const JSC::ClassInfo JSDOMFile::s_info = { "File"_s, &Base::s_info, nullptr, nul
namespace Bun {
JSC::Structure* createJSFileStructure(JSC::VM& vm, JSC::JSGlobalObject* globalObject)
{
auto* zigGlobal = defaultGlobalObject(globalObject);
JSC::JSObject* blobPrototype = zigGlobal->JSBlobPrototype();
// Create FilePrototype with [[Prototype]] = Blob.prototype
auto* protoStructure = JSDOMFilePrototype::createStructure(vm, globalObject, blobPrototype);
auto* filePrototype = JSDOMFilePrototype::create(vm, globalObject, protoStructure);
// Create the structure for File instances: [[Prototype]] = FilePrototype
return JSC::Structure::create(vm, globalObject, filePrototype,
JSC::TypeInfo(static_cast<JSC::JSType>(0b11101110), WebCore::JSBlob::StructureFlags),
WebCore::JSBlob::info(), NonArray);
}
JSC::JSObject* createJSDOMFileConstructor(JSC::VM& vm, JSC::JSGlobalObject* globalObject)
{
return JSDOMFile::create(vm, globalObject);
auto* zigGlobal = defaultGlobalObject(globalObject);
// Get the File instance structure - its prototype is the FilePrototype we need
auto* fileStructure = zigGlobal->JSFileStructure();
auto* filePrototype = fileStructure->storedPrototypeObject();
return JSDOMFile::create(vm, globalObject, filePrototype);
}
}

View File

@@ -4,4 +4,5 @@
namespace Bun {
JSC::JSObject* createJSDOMFileConstructor(JSC::VM&, JSC::JSGlobalObject*);
JSC::Structure* createJSFileStructure(JSC::VM&, JSC::JSGlobalObject*);
}

View File

@@ -1805,6 +1805,11 @@ void GlobalObject::finishCreation(VM& vm)
init.set(CustomGetterSetter::create(init.vm, errorInstanceLazyStackCustomGetter, errorInstanceLazyStackCustomSetter));
});
m_JSFileStructure.initLater(
[](const Initializer<Structure>& init) {
init.set(Bun::createJSFileStructure(init.vm, init.owner));
});
m_JSDOMFileConstructor.initLater(
[](const Initializer<JSObject>& init) {
JSObject* fileConstructor = Bun::createJSDOMFileConstructor(init.vm, init.owner);

View File

@@ -610,6 +610,7 @@ public:
V(private, LazyPropertyOfGlobalObject<Structure>, m_importMetaBakeObjectStructure) \
V(private, LazyPropertyOfGlobalObject<Structure>, m_asyncBoundFunctionStructure) \
V(public, LazyPropertyOfGlobalObject<JSC::JSObject>, m_JSDOMFileConstructor) \
V(public, LazyPropertyOfGlobalObject<Structure>, m_JSFileStructure) \
V(public, LazyPropertyOfGlobalObject<JSC::JSObject>, m_JSMIMEParamsConstructor) \
V(public, LazyPropertyOfGlobalObject<JSC::JSObject>, m_JSMIMETypeConstructor) \
\
@@ -712,6 +713,7 @@ public:
JSObject* cryptoObject() const { return m_cryptoObject.getInitializedOnMainThread(this); }
JSObject* JSDOMFileConstructor() const { return m_JSDOMFileConstructor.getInitializedOnMainThread(this); }
JSC::Structure* JSFileStructure() const { return m_JSFileStructure.getInitializedOnMainThread(this); }
JSMap* nodeWorkerEnvironmentData() { return m_nodeWorkerEnvironmentData.get(); }
void setNodeWorkerEnvironmentData(JSMap* data);

View File

@@ -78,6 +78,9 @@
#include <JavaScriptCore/ArrayBuffer.h>
#include <JavaScriptCore/JSArrayBufferView.h>
#include <JavaScriptCore/JSCInlines.h>
#include <JavaScriptCore/JSArrayInlines.h>
#include <JavaScriptCore/ButterflyInlines.h>
#include <JavaScriptCore/ObjectInitializationScope.h>
#include <JavaScriptCore/JSDataView.h>
#include <JavaScriptCore/JSMapInlines.h>
#include <JavaScriptCore/JSMapIterator.h>
@@ -5574,6 +5577,13 @@ SerializedScriptValue::SerializedScriptValue(WTF::FixedVector<SimpleInMemoryProp
m_memoryCost = computeMemoryCost();
}
SerializedScriptValue::SerializedScriptValue(WTF::FixedVector<SimpleCloneableValue>&& elements)
: m_simpleArrayElements(WTF::move(elements))
, m_fastPath(FastPath::SimpleArray)
{
m_memoryCost = computeMemoryCost();
}
SerializedScriptValue::SerializedScriptValue(const String& fastPathString)
: m_fastPathString(fastPathString)
, m_fastPath(FastPath::String)
@@ -5581,6 +5591,14 @@ SerializedScriptValue::SerializedScriptValue(const String& fastPathString)
m_memoryCost = computeMemoryCost();
}
SerializedScriptValue::SerializedScriptValue(Vector<uint8_t>&& butterflyData, uint32_t length, FastPath fastPath)
: m_arrayButterflyData(WTF::move(butterflyData))
, m_arrayLength(length)
, m_fastPath(fastPath)
{
m_memoryCost = computeMemoryCost();
}
size_t SerializedScriptValue::computeMemoryCost() const
{
size_t cost = m_data.size();
@@ -5652,6 +5670,19 @@ size_t SerializedScriptValue::computeMemoryCost() const
}
}
break;
case FastPath::SimpleArray:
cost += m_simpleArrayElements.byteSize();
for (const auto& elem : m_simpleArrayElements) {
std::visit(WTF::makeVisitor(
[&](JSC::JSValue) { /* already included in byteSize() */ },
[&](const String& s) { cost += s.sizeInBytes(); }),
elem);
}
break;
case FastPath::Int32Array:
case FastPath::DoubleArray:
cost += m_arrayButterflyData.size();
break;
case FastPath::None:
break;
@@ -5843,7 +5874,9 @@ ExceptionOr<Ref<SerializedScriptValue>> SerializedScriptValue::create(JSGlobalOb
if (canUseFastPath) {
bool canUseStringFastPath = false;
bool canUseObjectFastPath = false;
bool canUseArrayFastPath = false;
JSObject* object = nullptr;
JSArray* array = nullptr;
Structure* structure = nullptr;
if (value.isCell()) {
auto* cell = value.asCell();
@@ -5853,7 +5886,10 @@ ExceptionOr<Ref<SerializedScriptValue>> SerializedScriptValue::create(JSGlobalOb
object = cell->getObject();
structure = object->structure();
if (isObjectFastPathCandidate(structure)) {
if (auto* jsArray = jsDynamicCast<JSArray*>(object)) {
canUseArrayFastPath = true;
array = jsArray;
} else if (isObjectFastPathCandidate(structure)) {
canUseObjectFastPath = true;
}
}
@@ -5866,6 +5902,84 @@ ExceptionOr<Ref<SerializedScriptValue>> SerializedScriptValue::create(JSGlobalOb
return SerializedScriptValue::createStringFastPath(stringValue);
}
if (canUseArrayFastPath) {
ASSERT(array != nullptr);
// Arrays with named properties (e.g. arr.foo = "bar") cannot use fast path
// as we only copy indexed elements. maxOffset == invalidOffset means no named properties.
if (structure->maxOffset() != invalidOffset)
canUseArrayFastPath = false;
}
if (canUseArrayFastPath) {
ASSERT(array != nullptr);
unsigned length = array->length();
auto arrayType = array->indexingType();
// Tier 1/2: Int32 / Double butterfly memcpy fast path
if ((arrayType == ArrayWithInt32 || arrayType == ArrayWithDouble)
&& length <= array->butterfly()->vectorLength()
&& !array->structure()->holesMustForwardToPrototype(array)) {
if (arrayType == ArrayWithInt32) {
auto* data = array->butterfly()->contiguous().data();
if (!containsHole(data, length)) {
size_t byteSize = sizeof(JSValue) * length;
Vector<uint8_t> buffer(byteSize, 0);
memcpy(buffer.mutableSpan().data(), data, byteSize);
return SerializedScriptValue::createInt32ArrayFastPath(WTF::move(buffer), length);
}
} else {
auto* data = array->butterfly()->contiguousDouble().data();
if (!containsHole(data, length)) {
size_t byteSize = sizeof(double) * length;
Vector<uint8_t> buffer(byteSize, 0);
memcpy(buffer.mutableSpan().data(), data, byteSize);
return SerializedScriptValue::createDoubleArrayFastPath(WTF::move(buffer), length);
}
}
// Holes present → fall through to normal path
}
// Tier 3: Contiguous array with butterfly direct access
if (arrayType == ArrayWithContiguous
&& length <= array->butterfly()->vectorLength()
&& !array->structure()->holesMustForwardToPrototype(array)) {
auto* data = array->butterfly()->contiguous().data();
WTF::Vector<SimpleCloneableValue> elements;
elements.reserveInitialCapacity(length);
bool ok = true;
for (unsigned i = 0; i < length; i++) {
JSValue elem = data[i].get();
if (!elem) {
ok = false;
break;
}
if (elem.isCell()) {
if (!elem.isString()) {
ok = false;
break;
}
auto* str = asString(elem);
String strValue = str->value(&lexicalGlobalObject);
RETURN_IF_EXCEPTION(scope, Exception { ExistingExceptionError });
elements.append(Bun::toCrossThreadShareable(strValue));
} else {
elements.append(elem);
}
}
if (ok) {
return SerializedScriptValue::createArrayFastPath(
WTF::FixedVector<SimpleCloneableValue>(WTF::move(elements)));
}
}
// ArrayStorage / Undecided / holes forwarding → fall through to normal serialization path
}
if (canUseObjectFastPath) {
ASSERT(object != nullptr);
@@ -6142,6 +6256,21 @@ Ref<SerializedScriptValue> SerializedScriptValue::createObjectFastPath(WTF::Fixe
return adoptRef(*new SerializedScriptValue(WTF::move(object)));
}
Ref<SerializedScriptValue> SerializedScriptValue::createArrayFastPath(WTF::FixedVector<SimpleCloneableValue>&& elements)
{
return adoptRef(*new SerializedScriptValue(WTF::move(elements)));
}
Ref<SerializedScriptValue> SerializedScriptValue::createInt32ArrayFastPath(Vector<uint8_t>&& data, uint32_t length)
{
return adoptRef(*new SerializedScriptValue(WTF::move(data), length, FastPath::Int32Array));
}
Ref<SerializedScriptValue> SerializedScriptValue::createDoubleArrayFastPath(Vector<uint8_t>&& data, uint32_t length)
{
return adoptRef(*new SerializedScriptValue(WTF::move(data), length, FastPath::DoubleArray));
}
RefPtr<SerializedScriptValue> SerializedScriptValue::create(JSContextRef originContext, JSValueRef apiValue, JSValueRef* exception)
{
JSGlobalObject* lexicalGlobalObject = toJS(originContext);
@@ -6288,6 +6417,78 @@ JSValue SerializedScriptValue::deserialize(JSGlobalObject& lexicalGlobalObject,
return object;
}
case FastPath::SimpleArray: {
unsigned length = m_simpleArrayElements.size();
// Pre-convert all elements to JSValues (including creating JSStrings)
// before entering ObjectInitializationScope, since jsString() allocates
// GC cells which is not allowed inside the initialization scope.
MarkedArgumentBuffer values;
values.ensureCapacity(length);
for (unsigned i = 0; i < length; i++) {
JSValue elemValue = std::visit(
WTF::makeVisitor(
[](JSValue v) -> JSValue { return v; },
[&](const String& s) -> JSValue { return jsString(vm, s); }),
m_simpleArrayElements[i]);
values.append(elemValue);
}
Structure* resultStructure = globalObject->arrayStructureForIndexingTypeDuringAllocation(ArrayWithContiguous);
ObjectInitializationScope initScope(vm);
JSArray* resultArray = JSArray::tryCreateUninitializedRestricted(initScope, resultStructure, length);
if (!resultArray) [[unlikely]] {
if (didFail)
*didFail = true;
return {};
}
for (unsigned i = 0; i < length; i++)
resultArray->initializeIndex(initScope, i, values.at(i));
if (didFail)
*didFail = false;
return resultArray;
}
case FastPath::Int32Array:
case FastPath::DoubleArray: {
IndexingType arrayType = (m_fastPath == FastPath::Int32Array) ? ArrayWithInt32 : ArrayWithDouble;
Structure* resultStructure = globalObject->arrayStructureForIndexingTypeDuringAllocation(arrayType);
if (hasAnyArrayStorage(resultStructure->indexingType())) [[unlikely]]
break; // isHavingABadTime → fall through to normal deserialization
unsigned outOfLineStorage = resultStructure->outOfLineCapacity();
unsigned vectorLength = Butterfly::optimalContiguousVectorLength(resultStructure, m_arrayLength);
void* memory = vm.auxiliarySpace().allocate(
vm,
Butterfly::totalSize(0, outOfLineStorage, true, vectorLength * sizeof(EncodedJSValue)),
nullptr, AllocationFailureMode::ReturnNull);
if (!memory) [[unlikely]] {
if (didFail)
*didFail = true;
return {};
}
Butterfly* butterfly = Butterfly::fromBase(memory, 0, outOfLineStorage);
butterfly->setVectorLength(vectorLength);
butterfly->setPublicLength(m_arrayLength);
if (m_fastPath == FastPath::DoubleArray)
memcpy(butterfly->contiguousDouble().data(), m_arrayButterflyData.span().data(), m_arrayButterflyData.size());
else
memcpy(butterfly->contiguous().data(), m_arrayButterflyData.span().data(), m_arrayButterflyData.size());
// Clear unused tail slots with hole values
Butterfly::clearRange(arrayType, butterfly, m_arrayLength, vectorLength);
JSArray* resultArray = JSArray::createWithButterfly(vm, nullptr, resultStructure, butterfly);
if (didFail)
*didFail = false;
return resultArray;
}
case FastPath::None: {
break;
}

View File

@@ -60,15 +60,12 @@ class MemoryHandle;
namespace WebCore {
// Shared value type for fast path cloning: primitives (JSValue) or strings.
using SimpleCloneableValue = std::variant<JSC::JSValue, WTF::String>;
class SimpleInMemoryPropertyTableEntry {
public:
// Only:
// - String
// - Number
// - Boolean
// - Null
// - Undefined
using Value = std::variant<JSC::JSValue, WTF::String>;
using Value = SimpleCloneableValue;
WTF::String propertyName;
Value value;
@@ -78,6 +75,9 @@ enum class FastPath : uint8_t {
None,
String,
SimpleObject,
SimpleArray,
Int32Array,
DoubleArray,
};
#if ENABLE(OFFSCREEN_CANVAS_IN_WORKERS)
@@ -129,6 +129,13 @@ public:
// Fast path for postMessage with simple objects
static Ref<SerializedScriptValue> createObjectFastPath(WTF::FixedVector<SimpleInMemoryPropertyTableEntry>&& object);
// Fast path for postMessage with dense arrays of primitives/strings
static Ref<SerializedScriptValue> createArrayFastPath(WTF::FixedVector<SimpleCloneableValue>&& elements);
// Fast path for postMessage with dense Int32/Double arrays (butterfly memcpy)
static Ref<SerializedScriptValue> createInt32ArrayFastPath(Vector<uint8_t>&& butterflyData, uint32_t length);
static Ref<SerializedScriptValue> createDoubleArrayFastPath(Vector<uint8_t>&& butterflyData, uint32_t length);
static Ref<SerializedScriptValue> nullValue();
WEBCORE_EXPORT JSC::JSValue deserialize(JSC::JSGlobalObject&, JSC::JSGlobalObject*, SerializationErrorMode = SerializationErrorMode::Throwing, bool* didFail = nullptr);
@@ -231,6 +238,9 @@ private:
// Constructor for string fast path
explicit SerializedScriptValue(const String& fastPathString);
explicit SerializedScriptValue(WTF::FixedVector<SimpleInMemoryPropertyTableEntry>&& object);
explicit SerializedScriptValue(WTF::FixedVector<SimpleCloneableValue>&& elements);
// Constructor for Int32Array/DoubleArray butterfly memcpy fast path
SerializedScriptValue(Vector<uint8_t>&& butterflyData, uint32_t length, FastPath fastPath);
size_t computeMemoryCost() const;
@@ -260,6 +270,13 @@ private:
size_t m_memoryCost { 0 };
FixedVector<SimpleInMemoryPropertyTableEntry> m_simpleInMemoryPropertyTable {};
// m_simpleArrayElements and m_arrayButterflyData/m_arrayLength are mutually exclusive:
// SimpleArray uses m_simpleArrayElements; Int32Array/DoubleArray use m_arrayButterflyData + m_arrayLength.
FixedVector<SimpleCloneableValue> m_simpleArrayElements {};
// Int32Array / DoubleArray fast path: raw butterfly data
Vector<uint8_t> m_arrayButterflyData {};
uint32_t m_arrayLength { 0 };
};
template<class Encoder>

View File

@@ -351,11 +351,13 @@ pub fn autoTick(this: *EventLoop) void {
const ctx = this.virtual_machine;
this.tickImmediateTasks(ctx);
if (comptime Environment.isPosix) {
if (comptime Environment.isWindows) {
if (this.immediate_tasks.items.len > 0) {
this.wakeup();
}
}
// On POSIX, pending immediates are handled via an immediate timeout in
// getTimeout() instead of writing to the eventfd, avoiding that overhead.
if (comptime Environment.isPosix) {
// Some tasks need to keep the event loop alive for one more tick.
@@ -438,11 +440,13 @@ pub fn autoTickActive(this: *EventLoop) void {
var ctx = this.virtual_machine;
this.tickImmediateTasks(ctx);
if (comptime Environment.isPosix) {
if (comptime Environment.isWindows) {
if (this.immediate_tasks.items.len > 0) {
this.wakeup();
}
}
// On POSIX, pending immediates are handled via an immediate timeout in
// getTimeout() instead of writing to the eventfd, avoiding that overhead.
if (comptime Environment.isPosix) {
const pending_unref = ctx.pending_unref_counter;

View File

@@ -16,6 +16,10 @@ pub const PosixLoop = extern struct {
/// Number of polls owned by Bun
active: u32 = 0,
/// Incremented atomically by wakeup(), swapped to 0 before epoll/kqueue.
/// If non-zero, the event loop will return immediately so we can skip the GC safepoint.
pending_wakeups: u32 = 0,
/// The list of ready polls
ready_polls: [1024]EventType align(16),

View File

@@ -34,7 +34,7 @@ pub const Loop = struct {
{
var epoll = std.mem.zeroes(std.os.linux.epoll_event);
epoll.events = std.os.linux.EPOLL.IN | std.os.linux.EPOLL.ERR | std.os.linux.EPOLL.HUP;
epoll.events = std.os.linux.EPOLL.IN | std.os.linux.EPOLL.ET | std.os.linux.EPOLL.ERR | std.os.linux.EPOLL.HUP;
epoll.data.ptr = @intFromPtr(&loop);
const rc = std.os.linux.epoll_ctl(loop.epoll_fd.cast(), std.os.linux.EPOLL.CTL_ADD, loop.waker.getFd().cast(), &epoll);
@@ -165,9 +165,8 @@ pub const Loop = struct {
const pollable: Pollable = Pollable.from(event.data.u64);
if (pollable.tag() == .empty) {
if (event.data.ptr == @intFromPtr(&loop)) {
// this is the event poll, lets read it
var bytes: [8]u8 = undefined;
_ = bun.sys.read(loop.fd(), &bytes);
// Edge-triggered: no need to read the eventfd counter
continue;
}
}
_ = Poll.onUpdateEpoll(pollable.poll(), pollable.tag(), event);

View File

@@ -664,28 +664,6 @@ export function toJSON(this: BufferExt) {
return { type, data };
}
export function slice(this: BufferExt, start, end) {
var { buffer, byteOffset, byteLength } = this;
function adjustOffset(offset, length) {
// Use Math.trunc() to convert offset to an integer value that can be larger
// than an Int32. Hence, don't use offset | 0 or similar techniques.
offset = Math.trunc(offset);
if (offset === 0 || offset !== offset) {
return 0;
} else if (offset < 0) {
offset += length;
return offset > 0 ? offset : 0;
} else {
return offset < length ? offset : length;
}
}
var start_ = adjustOffset(start, byteLength);
var end_ = end !== undefined ? adjustOffset(end, byteLength) : byteLength;
return new $Buffer(buffer, byteOffset + start_, end_ > start_ ? end_ - start_ : 0);
}
$getter;
export function parent(this: BufferExt) {
return $isObject(this) && this instanceof $Buffer ? this.buffer : undefined;

View File

@@ -0,0 +1,94 @@
// Regression test for kqueue filter comparison bug (macOS).
//
// On kqueue, EVFILT_READ (-1) and EVFILT_WRITE (-2) are negative integers. The old
// code used bitwise AND to identify filters:
//
// events |= (filter & EVFILT_READ) ? READABLE : 0
// events |= (filter & EVFILT_WRITE) ? WRITABLE : 0
//
// Since all negative numbers AND'd with -1 or -2 produce truthy values, EVERY kqueue
// event was misidentified as BOTH readable AND writable. This caused the drain handler
// to fire spuriously on every readable event and vice versa.
//
// The fix uses equality comparison (filter == EVFILT_READ), plus coalescing duplicate
// kevents for the same fd (kqueue returns separate events per filter) into a single
// dispatch with combined flags — matching epoll's single-entry-per-fd behavior.
//
// This test creates unix socket connections with small buffers to force partial writes
// (which registers EVFILT_WRITE). The client sends pings on each data callback, causing
// EVFILT_READ events on the server. With the bug, each EVFILT_READ also triggers drain,
// giving a drain/data ratio of ~2.0. With the fix, the ratio is ~1.0.
//
// Example output:
// system bun (bug): data: 38970 drain: 77940 ratio: 2.0
// fixed bun: data: 52965 drain: 52965 ratio: 1.0
import { setSocketOptions } from "bun:internal-for-testing";
const CHUNK = Buffer.alloc(64 * 1024, "x");
const PING = Buffer.from("p");
const sockPath = `kqueue-bench-${process.pid}.sock`;
let drainCalls = 0;
let dataCalls = 0;
const server = Bun.listen({
unix: sockPath,
socket: {
open(socket) {
setSocketOptions(socket, 1, 512);
setSocketOptions(socket, 2, 512);
socket.write(CHUNK);
},
data() {
dataCalls++;
},
drain(socket) {
drainCalls++;
socket.write(CHUNK);
},
close() {},
error() {},
},
});
const clients = [];
for (let i = 0; i < 10; i++) {
clients.push(
await Bun.connect({
unix: sockPath,
socket: {
open(socket) {
setSocketOptions(socket, 1, 512);
setSocketOptions(socket, 2, 512);
},
data(socket) {
socket.write(PING);
},
drain() {},
close() {},
error() {},
},
}),
);
}
await Bun.sleep(50);
drainCalls = 0;
dataCalls = 0;
await Bun.sleep(100);
const ratio = dataCalls > 0 ? drainCalls / dataCalls : 0;
console.log(`data: ${dataCalls} drain: ${drainCalls} ratio: ${ratio.toFixed(1)}`);
for (const c of clients) c.end();
server.stop(true);
try {
require("fs").unlinkSync(sockPath);
} catch {}
if (dataCalls === 0 || drainCalls === 0) {
console.error("test invalid: no data or drain callbacks fired");
process.exit(1);
}
process.exit(ratio < 1.5 ? 0 : 1);

View File

@@ -339,6 +339,10 @@ describe.concurrent("socket", () => {
expect([fileURLToPath(new URL("./socket-huge-fixture.js", import.meta.url))]).toRun();
}, 60_000);
it.skipIf(isWindows)("kqueue should not dispatch spurious drain events on readable", async () => {
expect([fileURLToPath(new URL("./kqueue-filter-coalesce-fixture.ts", import.meta.url))]).toRun();
});
it("it should not crash when getting a ReferenceError on client socket open", async () => {
using server = Bun.serve({
port: 0,

View File

@@ -887,6 +887,68 @@ for (let withOverridenBufferWrite of [false, true]) {
expect(f[1]).toBe(0x6f);
});
it("slice() with fractional offsets truncates toward zero", () => {
const buf = Buffer.from([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]);
// -0.1 should truncate to 0, not -1
const a = buf.slice(-0.1);
expect(a.length).toBe(10);
expect(a[0]).toBe(0);
// -1.9 should truncate to -1, not -2
const b = buf.slice(-1.9);
expect(b.length).toBe(1);
expect(b[0]).toBe(9);
// 1.9 should truncate to 1
const c = buf.slice(1.9, 4.1);
expect(c.length).toBe(3);
expect(c[0]).toBe(1);
expect(c[1]).toBe(2);
expect(c[2]).toBe(3);
// NaN should be treated as 0
const d = buf.slice(NaN, NaN);
expect(d.length).toBe(0);
const e = buf.slice(NaN);
expect(e.length).toBe(10);
});
it("slice() on detached buffer throws TypeError", () => {
const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab);
// Detach the ArrayBuffer by transferring it
structuredClone(ab, { transfer: [ab] });
expect(() => buf.slice(0, 5)).toThrow(TypeError);
});
it("subarray() on detached buffer throws TypeError", () => {
const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab);
structuredClone(ab, { transfer: [ab] });
expect(() => buf.subarray(0, 5)).toThrow(TypeError);
});
it("slice() on resizable ArrayBuffer returns fixed-length view", () => {
const rab = new ArrayBuffer(10, { maxByteLength: 20 });
const buf = Buffer.from(rab);
buf[0] = 1;
buf[1] = 2;
buf[2] = 3;
buf[3] = 4;
buf[4] = 5;
const sliced = buf.slice(0, 5);
expect(sliced.length).toBe(5);
expect(sliced[0]).toBe(1);
expect(sliced[4]).toBe(5);
// Growing the buffer should NOT change the slice length
rab.resize(20);
expect(sliced.length).toBe(5);
});
function forEachUnicode(label, test) {
["ucs2", "ucs-2", "utf16le", "utf-16le"].forEach(encoding =>
it(`${label} (${encoding})`, test.bind(null, encoding)),

View File

@@ -90,6 +90,273 @@ describe("Structured Clone Fast Path", () => {
expect(delta).toBeLessThan(1024 * 1024);
});
// === Array fast path tests ===
test("structuredClone should work with empty array", () => {
expect(structuredClone([])).toEqual([]);
});
test("structuredClone should work with array of numbers", () => {
const input = [1, 2, 3, 4, 5];
expect(structuredClone(input)).toEqual(input);
});
test("structuredClone should work with array of strings", () => {
const input = ["hello", "world", ""];
expect(structuredClone(input)).toEqual(input);
});
test("structuredClone should work with array of mixed primitives", () => {
const input = [1, "hello", true, false, null, undefined, 3.14];
const cloned = structuredClone(input);
expect(cloned).toEqual(input);
});
test("structuredClone should work with array of special numbers", () => {
const cloned = structuredClone([-0, NaN, Infinity, -Infinity]);
expect(Object.is(cloned[0], -0)).toBe(true);
expect(cloned[1]).toBeNaN();
expect(cloned[2]).toBe(Infinity);
expect(cloned[3]).toBe(-Infinity);
});
test("structuredClone should work with large array of numbers", () => {
const input = Array.from({ length: 10000 }, (_, i) => i);
expect(structuredClone(input)).toEqual(input);
});
test("structuredClone should fallback for arrays with nested objects", () => {
const input = [{ a: 1 }, { b: 2 }];
expect(structuredClone(input)).toEqual(input);
});
test("structuredClone should fallback for arrays with holes", () => {
const input = [1, , 3]; // sparse
const cloned = structuredClone(input);
// structured clone preserves holes; indexing a hole yields undefined
expect(cloned[0]).toBe(1);
expect(cloned[1]).toBe(undefined);
expect(cloned[2]).toBe(3);
});
test("structuredClone should work with array of doubles", () => {
const input = [1.5, 2.7, 3.14, 0.1 + 0.2];
const cloned = structuredClone(input);
expect(cloned).toEqual(input);
});
test("structuredClone creates independent copy of array", () => {
const input = [1, 2, 3];
const cloned = structuredClone(input);
cloned[0] = 999;
expect(input[0]).toBe(1);
});
test("structuredClone should preserve named properties on arrays", () => {
const input: any = [1, 2, 3];
input.foo = "bar";
const cloned = structuredClone(input);
expect(cloned.foo).toBe("bar");
expect(Array.from(cloned)).toEqual([1, 2, 3]);
});
test("postMessage should work with array fast path", async () => {
const { port1, port2 } = new MessageChannel();
const input = [1, 2, 3, "hello", true];
const { promise, resolve } = Promise.withResolvers();
port2.onmessage = (e: MessageEvent) => resolve(e.data);
port1.postMessage(input);
const result = await promise;
expect(result).toEqual(input);
port1.close();
port2.close();
});
// === Edge case tests ===
test("structuredClone of frozen array should produce a non-frozen clone", () => {
const input = Object.freeze([1, 2, 3]);
const cloned = structuredClone(input);
expect(cloned).toEqual([1, 2, 3]);
expect(Object.isFrozen(cloned)).toBe(false);
cloned[0] = 999;
expect(cloned[0]).toBe(999);
});
test("structuredClone of sealed array should produce a non-sealed clone", () => {
const input = Object.seal([1, 2, 3]);
const cloned = structuredClone(input);
expect(cloned).toEqual([1, 2, 3]);
expect(Object.isSealed(cloned)).toBe(false);
cloned.push(4);
expect(cloned).toEqual([1, 2, 3, 4]);
});
test("structuredClone of array with deleted element (hole via delete)", () => {
const input = [1, 2, 3];
delete (input as any)[1];
const cloned = structuredClone(input);
expect(cloned[0]).toBe(1);
expect(cloned[1]).toBe(undefined);
expect(cloned[2]).toBe(3);
expect(1 in cloned).toBe(false); // holes remain holes after structuredClone
});
test("structuredClone of array with length > actual elements", () => {
const input = [1, 2, 3];
input.length = 6;
const cloned = structuredClone(input);
expect(cloned.length).toBe(6);
expect(cloned[0]).toBe(1);
expect(cloned[1]).toBe(2);
expect(cloned[2]).toBe(3);
expect(cloned[3]).toBe(undefined);
});
test("structuredClone of single element arrays", () => {
expect(structuredClone([42])).toEqual([42]);
expect(structuredClone([3.14])).toEqual([3.14]);
expect(structuredClone(["hello"])).toEqual(["hello"]);
expect(structuredClone([true])).toEqual([true]);
expect(structuredClone([null])).toEqual([null]);
});
test("structuredClone of array with named properties on Int32 array", () => {
const input: any = [1, 2, 3]; // Int32 indexing
input.name = "test";
input.count = 42;
const cloned = structuredClone(input);
expect(cloned.name).toBe("test");
expect(cloned.count).toBe(42);
expect(Array.from(cloned)).toEqual([1, 2, 3]);
});
test("structuredClone of array with named properties on Double array", () => {
const input: any = [1.1, 2.2, 3.3]; // Double indexing
input.label = "doubles";
const cloned = structuredClone(input);
expect(cloned.label).toBe("doubles");
expect(Array.from(cloned)).toEqual([1.1, 2.2, 3.3]);
});
test("structuredClone of array that transitions Int32 to Double", () => {
const input = [1, 2, 3]; // starts as Int32
input.push(4.5); // transitions to Double
const cloned = structuredClone(input);
expect(cloned).toEqual([1, 2, 3, 4.5]);
});
test("structuredClone of array with modified prototype", () => {
const input = [1, 2, 3];
Object.setPrototypeOf(input, {
customMethod() {
return 42;
},
});
const cloned = structuredClone(input);
// Clone should have standard Array prototype, not the custom one
expect(Array.from(cloned)).toEqual([1, 2, 3]);
expect(cloned).toBeInstanceOf(Array);
expect((cloned as any).customMethod).toBeUndefined();
});
test("structuredClone of array with prototype indexed properties and holes", () => {
const proto = Object.create(Array.prototype);
proto[1] = "from proto";
const input = new Array(3);
Object.setPrototypeOf(input, proto);
input[0] = "a";
input[2] = "c";
// structuredClone only copies own properties; prototype values are not included
const cloned = structuredClone(input);
expect(cloned[0]).toBe("a");
expect(1 in cloned).toBe(false); // hole, not "from proto"
expect(cloned[2]).toBe("c");
expect(cloned).toBeInstanceOf(Array);
});
test("postMessage with Int32 array via MessageChannel", async () => {
const { port1, port2 } = new MessageChannel();
const input = [10, 20, 30, 40, 50];
const { promise, resolve } = Promise.withResolvers();
port2.onmessage = (e: MessageEvent) => resolve(e.data);
port1.postMessage(input);
const result = await promise;
expect(result).toEqual(input);
port1.close();
port2.close();
});
test("postMessage with Double array via MessageChannel", async () => {
const { port1, port2 } = new MessageChannel();
const input = [1.1, 2.2, 3.3];
const { promise, resolve } = Promise.withResolvers();
port2.onmessage = (e: MessageEvent) => resolve(e.data);
port1.postMessage(input);
const result = await promise;
expect(result).toEqual(input);
port1.close();
port2.close();
});
test("structuredClone of array multiple times produces independent copies", () => {
const input = [1, 2, 3];
const clones = Array.from({ length: 10 }, () => structuredClone(input));
clones[0][0] = 999;
clones[5][1] = 888;
// All other clones and the original should be unaffected
expect(input).toEqual([1, 2, 3]);
for (let i = 1; i < 10; i++) {
if (i === 5) {
expect(clones[i]).toEqual([1, 888, 3]);
} else {
expect(clones[i]).toEqual([1, 2, 3]);
}
}
});
test("structuredClone of Array subclass loses subclass identity", () => {
class MyArray extends Array {
customProp = "hello";
sum() {
return this.reduce((a: number, b: number) => a + b, 0);
}
}
const input = new MyArray(1, 2, 3);
input.customProp = "world";
const cloned = structuredClone(input);
// structuredClone spec: result is a plain Array, not a subclass
expect(Array.from(cloned)).toEqual([1, 2, 3]);
expect(cloned).toBeInstanceOf(Array);
expect((cloned as any).sum).toBeUndefined();
});
test("structuredClone of array with only undefined values", () => {
const input = [undefined, undefined, undefined];
const cloned = structuredClone(input);
expect(cloned).toEqual([undefined, undefined, undefined]);
expect(cloned.length).toBe(3);
// Ensure they are actual values, not holes
expect(0 in cloned).toBe(true);
expect(1 in cloned).toBe(true);
expect(2 in cloned).toBe(true);
});
test("structuredClone of array with only null values", () => {
const input = [null, null, null];
const cloned = structuredClone(input);
expect(cloned).toEqual([null, null, null]);
});
test("structuredClone of dense double array preserves -0 and NaN", () => {
const input = [-0, NaN, -0, NaN];
const cloned = structuredClone(input);
expect(Object.is(cloned[0], -0)).toBe(true);
expect(cloned[1]).toBeNaN();
expect(Object.is(cloned[2], -0)).toBe(true);
expect(cloned[3]).toBeNaN();
});
test("structuredClone on object with simple properties can exceed JSFinalObject::maxInlineCapacity", () => {
let largeValue = {};
for (let i = 0; i < 100; i++) {

@@ -0,0 +1,67 @@
import { expect, test } from "bun:test";
// https://github.com/oven-sh/bun/issues/26899
// File.prototype should be distinct from Blob.prototype
test("File.prototype !== Blob.prototype", () => {
expect(File.prototype).not.toBe(Blob.prototype);
});
test("File.prototype inherits from Blob.prototype", () => {
expect(Object.getPrototypeOf(File.prototype)).toBe(Blob.prototype);
});
test("new File(...).constructor.name === 'File'", () => {
const file = new File(["hello"], "hello.txt");
expect(file.constructor.name).toBe("File");
});
test("new File(...).constructor === File", () => {
const file = new File(["hello"], "hello.txt");
expect(file.constructor).toBe(File);
});
test("new File(...).constructor !== Blob", () => {
const file = new File(["hello"], "hello.txt");
expect(file.constructor).not.toBe(Blob);
});
test("Object.prototype.toString.call(file) === '[object File]'", () => {
const file = new File(["hello"], "hello.txt");
expect(Object.prototype.toString.call(file)).toBe("[object File]");
});
test("file instanceof File", () => {
const file = new File(["hello"], "hello.txt");
expect(file instanceof File).toBe(true);
});
test("file instanceof Blob", () => {
const file = new File(["hello"], "hello.txt");
expect(file instanceof Blob).toBe(true);
});
test("blob is not instanceof File", () => {
const blob = new Blob(["hello"]);
expect(blob instanceof File).toBe(false);
});
test("File instances have Blob methods", () => {
const file = new File(["hello"], "hello.txt");
expect(typeof file.text).toBe("function");
expect(typeof file.arrayBuffer).toBe("function");
expect(typeof file.slice).toBe("function");
expect(typeof file.stream).toBe("function");
});
test("File name and lastModified work", () => {
const file = new File(["hello"], "hello.txt", { lastModified: 12345 });
expect(file.name).toBe("hello.txt");
expect(file.lastModified).toBe(12345);
});
test("File.prototype has correct Symbol.toStringTag", () => {
const desc = Object.getOwnPropertyDescriptor(File.prototype, Symbol.toStringTag);
expect(desc).toBeDefined();
expect(desc!.value).toBe("File");
});