Compare commits

...

13 Commits

Author SHA1 Message Date
Dylan Conway
e0b8d46e65 test bytecode fix 2026-02-09 14:07:58 -08:00
SUZUKI Sosuke
b7475d8768 fix(buffer): return fixed-length view from slice on resizable ArrayBuffer (#26822)
## Summary

Follow-up to #26819 ([review
comment](https://github.com/oven-sh/bun/pull/26819#discussion_r2781484939)).
Fixes `Buffer.slice()` / `Buffer.subarray()` on resizable `ArrayBuffer`
/ growable `SharedArrayBuffer` to return a **fixed-length view** instead
of a length-tracking view.

## Problem

The resizable/growable branch was passing `std::nullopt` to
`JSUint8Array::create()`, which creates a length-tracking view. When the
underlying buffer grows, the sliced view's length would incorrectly
expand:

```js
const rab = new ArrayBuffer(10, { maxByteLength: 20 });
const buf = Buffer.from(rab);
const sliced = buf.slice(0, 5);
sliced.length; // 5

rab.resize(20);
sliced.length; // was 10 (wrong), now 5 (correct)
```

Node.js always returns a fixed-length view from `Buffer.slice()` (verified on Node.js v22).

## Fix

Replace `std::nullopt` with `newLength` in the
`isResizableOrGrowableShared()` branch of
`jsBufferPrototypeFunction_sliceBody`.

## Test

Added a regression test that creates a `Buffer` from a resizable
`ArrayBuffer`, slices it, resizes the buffer, and verifies the slice
length doesn't change.
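
A minimal sketch of what that test could look like, using `bun:test` (illustrative, not the exact file added by the PR):

```js
import { test, expect } from "bun:test";

test("Buffer.slice over a resizable ArrayBuffer stays fixed-length", () => {
  const rab = new ArrayBuffer(10, { maxByteLength: 20 });
  const buf = Buffer.from(rab);
  const sliced = buf.slice(0, 5);
  expect(sliced.length).toBe(5);

  rab.resize(20); // grow the underlying buffer
  expect(sliced.length).toBe(5); // fixed-length view is unaffected by the resize
});
```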

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 04:48:20 -08:00
Jarred Sumner
4494170f74 perf(event_loop): avoid eventfd wakeup for setImmediate on POSIX (#26821)
### What does this PR do?

Instead of calling event_loop.wakeup() (which writes to the eventfd)
when there are pending immediate tasks, use a zero timeout in
getTimeout() so epoll/kqueue returns immediately. This avoids the
overhead of the eventfd write/read cycle on each setImmediate iteration.

On Windows, continue to call .wakeup() since that's cheap for libuv.

Verified with strace: system bun makes ~44k eventfd writes for a 5s
setImmediate loop, while this change makes 0.
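
For reference, a minimal sketch of the kind of 5-second `setImmediate` loop used for that comparison (a hypothetical reproduction, not the exact script from the PR):

```js
// Each iteration schedules the next immediate; before this change, every
// iteration also paid for an eventfd write via wakeup().
const start = Date.now();
let count = 0;
function tick() {
  count++;
  if (Date.now() - start < 5000) {
    setImmediate(tick);
  } else {
    console.log(`ran ${count} immediates in 5s`);
  }
}
setImmediate(tick);
```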


### How did you verify your code works?

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-09 04:47:52 -08:00
SUZUKI Sosuke
9484218ba4 perf(buffer): move Buffer.slice/subarray to native C++ with int32 fast path (#26819)
## Summary

Move `Buffer.slice()` / `Buffer.subarray()` from a JS builtin to a
native C++ implementation, eliminating the `adjustOffset` closure
allocation and JS→C++ constructor overhead on every call. Additionally,
add an int32 fast path that skips `toNumber()` (which can invoke
`valueOf`/`Symbol.toPrimitive`) when arguments are already int32—the
common case for calls like `buf.slice(0, 10)`.
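
For example, a non-int32 argument forces the slower coercion path because `toNumber()` can run user code (illustrative sketch):

```js
const buf = Buffer.alloc(16);

// Both arguments are already int32, so the fast path skips toNumber() entirely.
buf.slice(0, 10);

// An object argument must be coerced, which can invoke user code:
const start = { valueOf() { console.log("valueOf called"); return 4; } };
buf.slice(start, 10); // logs "valueOf called"
```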

## Changes

- **`src/bun.js/bindings/JSBuffer.cpp`**: Add
`jsBufferPrototypeFunction_sliceBody` with `adjustSliceOffsetInt32` /
`adjustSliceOffsetDouble` helpers. Update prototype hash table entries
from `BuiltinGeneratorType` to `NativeFunctionType` for both `slice` and
`subarray`.
- **`src/js/builtins/JSBufferPrototype.ts`**: Remove the JS `slice`
function (was lines 667–687).
- **`bench/snippets/buffer-slice.mjs`**: Add mitata benchmark.

## Benchmark (Apple M4 Max)

| Benchmark | Before (v1.3.8) | After | Speedup |
|---|---|---|---|
| `Buffer(64).slice()` | 27.19 ns | **14.56 ns** | **1.87x** |
| `Buffer(1024).slice()` | 27.84 ns | **14.62 ns** | **1.90x** |
| `Buffer(1M).slice()` | 29.20 ns | **14.89 ns** | **1.96x** |
| `Buffer(64).slice(10)` | 30.26 ns | **16.01 ns** | **1.89x** |
| `Buffer(1024).slice(10, 100)` | 30.92 ns | **18.32 ns** | **1.69x** |
| `Buffer(1024).slice(-100, -10)` | 28.82 ns | **17.37 ns** | **1.66x** |
| `Buffer(1024).subarray(10, 100)` | 28.67 ns | **16.32 ns** | **1.76x** |

**~1.7–1.9x faster** across all cases. All 449 buffer tests pass.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 01:46:33 -08:00
robobun
2a5e8ef38c fix(kqueue): fix incorrect filter comparison causing excessive CPU on macOS (#26812)
## Summary

Fixes the remaining kqueue filter comparison bug in
`packages/bun-usockets/src/eventing/epoll_kqueue.c` that caused
excessive CPU usage with network requests on macOS:

- **`us_loop_run_bun_tick` filter comparison (lines 302-303):** kqueue
filter values (`EVFILT_READ=-1`, `EVFILT_WRITE=-2`) were compared using
bitwise AND (`&`) instead of equality (`==`). Since these are signed
negative integers (not bitmasks), `(-2) & (-1)` = `-2` (truthy), meaning
every `EVFILT_WRITE` event was also misidentified as `EVFILT_READ`. This
was already fixed in `us_loop_run` (by PR #25475) but the same bug
remained in `us_loop_run_bun_tick`, which is the primary event loop
function used by Bun.

This is a macOS-only issue (Linux uses epoll, which is unaffected).
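
A tiny illustration of why the bitwise check misfires, written in plain JavaScript since its bitwise operators use the same 32-bit two's-complement semantics:

```js
const EVFILT_READ = -1;  // kqueue filter constants are signed IDs, not bit flags
const EVFILT_WRITE = -2;

// Buggy comparison: AND-ing two negative IDs is almost always non-zero.
console.log(EVFILT_WRITE & EVFILT_READ); // -2, which is truthy, so WRITE looks like READ

// Correct comparison:
console.log(EVFILT_WRITE === EVFILT_READ); // false
```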

Closes #26811

## Test plan

- [x] Added regression test at `test/regression/issue/26811.test.ts`
that makes concurrent HTTPS POST requests
- [x] Test passes with `bun bd test test/regression/issue/26811.test.ts`
- [ ] Manual verification on macOS: run the reporter's [repro
script](https://gist.github.com/jkoppel/d26732574dfcdcc6bfc4958596054d2e)
and confirm CPU usage stays low

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-02-09 00:52:17 -08:00
robobun
a84f12b816 Use edge-triggered epoll for eventfd wakeups (#26815)
## Summary

- Switch both eventfd wakeup sites (Zig IO watcher loop and usockets
async) to edge-triggered (`EPOLLET`) epoll mode, eliminating unnecessary
`read()` syscalls on every event loop wakeup
- Add `EAGAIN`/`EINTR` overflow handling in `us_internal_async_wakeup`,
matching libuv's approach ([commit
`e5cb1d3d`](https://github.com/libuv/libuv/commit/e5cb1d3d))

With edge-triggered mode, each `write()` to the eventfd produces a new
edge event regardless of the current counter value, so draining the
counter via `read()` is unnecessary. The counter will never overflow in
practice (~18 quintillion wakeups), but overflow handling is included
defensively.

### Files changed

- **`src/io/io.zig`** — Add `EPOLL.ET` to eventfd registration, replace
drain `read()` with `continue`
- **`packages/bun-usockets/src/eventing/epoll_kqueue.c`** — Set
`leave_poll_ready = 1` for async callbacks, upgrade to `EPOLLET` via
`EPOLL_CTL_MOD`, add `EAGAIN`/`EINTR` handling in wakeup write

## Test plan

- [x] Verified with `strace -f -e trace=read,eventfd2` that eventfd
reads are fully eliminated after the change (0 reads on the eventfd fd)
- [x] Confirmed remaining 8-byte reads in traces are timerfd reads
(legitimate, required)
- [x] Stress tested with 50 concurrent async tasks (1000 total
`Bun.sleep(1)` iterations) — all completed correctly
- [x] `LinuxWaker.wait()` (used by `BundleThread` as a blocking sleep)
is intentionally unchanged

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-02-09 00:36:30 -08:00
SUZUKI Sosuke
0f43ea9bec perf(structuredClone): add fast path for root-level dense arrays (#26814)
## Summary

Add a fast path for `structuredClone` and `postMessage` when the root
value is a dense array of primitives or strings. This bypasses the full
`CloneSerializer`/`CloneDeserializer` machinery by keeping data in
native C++ structures instead of serializing to a byte stream.

**Important:** This optimization only applies when the root value passed
to `structuredClone()` / `postMessage()` is an array. Nested arrays
within objects still go through the normal serialization path.
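
For example, under that rule:

```js
// Root value is a dense array of primitives/strings: eligible for the fast path
// described in this PR.
structuredClone([1, 2.5, "three", true, null]);

// Root value is an object, so the nested array inside it goes through the
// normal serialization path.
structuredClone({ items: [1, 2.5, "three", true, null] });
```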

## Implementation

Three tiers of array fast paths, checked in order:

| Tier | Indexing Type | Strategy | Applies When |
|------|--------------|----------|--------------|
| **Tier 1** | `ArrayWithInt32` | `memcpy` butterfly data | Dense int32 array, no holes, no named properties |
| **Tier 2** | `ArrayWithDouble` | `memcpy` butterfly data | Dense double array, no holes, no named properties |
| **Tier 3** | `ArrayWithContiguous` | Copy elements into `FixedVector<variant<JSValue, String>>` | Dense array of primitives/strings, no holes, no named properties |

All tiers fall through to the normal serialization path when:
- The array has holes that must forward to the prototype
- The array has named properties (e.g., `arr.foo = "bar"`) — checked via
`structure->maxOffset() != invalidOffset`
- Elements contain non-primitive, non-string values (objects, arrays,
etc.)
- The context requires wire-format serialization (storage, cross-process
transfer)

### Deserialization

- **Tier 1/2:** Allocate a new `Butterfly` via `vm.auxiliarySpace()`,
`memcpy` data back, create array with `JSArray::createWithButterfly()`.
Falls back to normal deserialization if `isHavingABadTime` (forced
ArrayStorage mode).
- **Tier 3:** Pre-convert elements to `JSValue` (including `jsString()`
allocation), then use `JSArray::tryCreateUninitializedRestricted()` +
`initializeIndex()`.

## Benchmarks

Apple M4 Max, comparing system Bun 1.3.8 vs this branch (release build):

| Benchmark | Before | After | Speedup |
|-----------|--------|-------|---------|
| `structuredClone([10 numbers])` | 308.71 ns | 40.38 ns | **7.6x** |
| `structuredClone([100 numbers])` | 1.62 µs | 86.87 ns | **18.7x** |
| `structuredClone([1000 numbers])` | 13.79 µs | 544.56 ns | **25.3x** |
| `structuredClone([10 strings])` | 642.38 ns | 307.38 ns | **2.1x** |
| `structuredClone([100 strings])` | 5.67 µs | 2.57 µs | **2.2x** |
| `structuredClone([10 mixed])` | 446.32 ns | 198.35 ns | **2.3x** |
| `structuredClone(nested array)` | 1.84 µs | 1.79 µs | 1.0x (not eligible) |
| `structuredClone({a: 123})` | 95.98 ns | 100.07 ns | 1.0x (no regression) |

Int32 arrays see the largest gains (up to 25x) since they use a direct
`memcpy` of butterfly memory. String/mixed arrays see ~2x improvement.
No performance regression on non-eligible inputs.

## Bug Fix

Also fixes a correctness bug where arrays with named properties (e.g.,
`arr.foo = "bar"`) would lose those properties when going through the
array fast path. Added a `structure->maxOffset() != invalidOffset` guard
to fall back to normal serialization for such arrays.

Fixed a minor double-counting issue in `computeMemoryCost` where
`JSValue` elements in `SimpleArray` were counted both by `byteSize()`
and individually.

## Test Plan

38 tests in `test/js/web/structured-clone-fastpath.test.ts` covering:

- Basic array types: empty, numbers, strings, mixed primitives, special
numbers (`-0`, `NaN`, `Infinity`)
- Large arrays (10,000 elements)
- Tier 2: double arrays, Int32→Double transition
- Deep clone independence verification
- Named properties on Int32, Double, and Contiguous arrays
- `postMessage` via `MessageChannel` for Int32, Double, and mixed arrays
- Edge cases: frozen/sealed arrays, deleted elements (holes), `length`
extension, single-element arrays
- Prototype modification (custom prototype, indexed prototype properties
with holes)
- `Array` subclass identity loss (per spec)
- `undefined`-only and `null`-only arrays
- Multiple independent clones from the same source

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-08 21:36:59 -08:00
Jarred Sumner
0889897a1c Revert "feat(bundler): add configurable CJS→ESM unwrapping via unwrapCJSToESM"
This reverts commit e3c25260ed.
2026-02-08 19:49:26 -08:00
Jarred Sumner
68f2ea4b95 Fix release script 2026-02-08 01:39:10 -08:00
Jarred Sumner
d4ebfd9771 Bump 2026-02-08 01:32:25 -08:00
Jarred Sumner
e3c25260ed feat(bundler): add configurable CJS→ESM unwrapping via unwrapCJSToESM
Add `minify.unwrapCJSToESM` JS API option and `--unwrap-cjs-to-esm` CLI
flag to force CJS-to-ESM conversion for specific packages, eliminating
the `__commonJS` wrapper. Supports wildcard patterns (e.g. `"@scope/*"`).
User entries extend the default React family list.
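
A sketch of how the JS API option described above might be used (inferred from this commit message only; the exact accepted shape is not shown here):

```js
await Bun.build({
  entrypoints: ["./src/index.tsx"],
  outdir: "./dist",
  minify: {
    // Force CJS-to-ESM unwrapping for these packages; wildcard patterns are
    // supported and user entries extend the default React family list.
    unwrapCJSToESM: ["classnames", "@scope/*"],
  },
});
```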

Also removes the react/react-dom version check that gated conversion,
and fixes `packageName()` to handle scoped packages (`@scope/pkg`).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-08 01:32:10 -08:00
Alistair Smith
1bded85718 types: Enable --splitting with compile (#26796)
### What does this PR do?

Enables `--splitting` when used with `compile`.
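
A minimal sketch of the configuration the updated types now accept (illustrative values):

```js
await Bun.build({
  entrypoints: ["./app.js"],
  splitting: true, // previously typed as `never` whenever `compile` was set
  compile: { target: "linux-x64" },
  outfile: "./my-app",
});
```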

### How did you verify your code works?

Bun types integration test fixture updates
2026-02-07 13:39:18 -08:00
Dylan Conway
cf6cdbbbad Revert "Mimalloc v3 update (#26379)" (#26783)
This reverts commit c63415c9c9.

### What does this PR do?

### How did you verify your code works?
2026-02-06 18:05:17 -08:00
27 changed files with 1131 additions and 238 deletions

LATEST
View File

@@ -1 +1 @@
1.3.8
1.3.9

View File

@@ -0,0 +1,38 @@
// @runtime bun,node
import { bench, group, run } from "../runner.mjs";
const small = Buffer.alloc(64, 0x42);
const medium = Buffer.alloc(1024, 0x42);
const large = Buffer.alloc(1024 * 1024, 0x42);
group("slice - no args", () => {
bench("Buffer(64).slice()", () => small.slice());
bench("Buffer(1024).slice()", () => medium.slice());
bench("Buffer(1M).slice()", () => large.slice());
});
group("slice - one int arg", () => {
bench("Buffer(64).slice(10)", () => small.slice(10));
bench("Buffer(1024).slice(10)", () => medium.slice(10));
bench("Buffer(1M).slice(1024)", () => large.slice(1024));
});
group("slice - two int args", () => {
bench("Buffer(64).slice(10, 50)", () => small.slice(10, 50));
bench("Buffer(1024).slice(10, 100)", () => medium.slice(10, 100));
bench("Buffer(1M).slice(1024, 4096)", () => large.slice(1024, 4096));
});
group("slice - negative args", () => {
bench("Buffer(64).slice(-10)", () => small.slice(-10));
bench("Buffer(1024).slice(-100, -10)", () => medium.slice(-100, -10));
bench("Buffer(1M).slice(-4096, -1024)", () => large.slice(-4096, -1024));
});
group("subarray - two int args", () => {
bench("Buffer(64).subarray(10, 50)", () => small.subarray(10, 50));
bench("Buffer(1024).subarray(10, 100)", () => medium.subarray(10, 100));
bench("Buffer(1M).subarray(1024, 4096)", () => large.subarray(1024, 4096));
});
await run();

View File

@@ -33,7 +33,23 @@ var testArray = [
import { bench, run } from "../runner.mjs";
bench("structuredClone(array)", () => structuredClone(testArray));
bench("structuredClone(nested array)", () => structuredClone(testArray));
bench("structuredClone(123)", () => structuredClone(123));
bench("structuredClone({a: 123})", () => structuredClone({ a: 123 }));
// Array fast path targets
var numbersSmall = Array.from({ length: 10 }, (_, i) => i);
var numbersMedium = Array.from({ length: 100 }, (_, i) => i);
var numbersLarge = Array.from({ length: 1000 }, (_, i) => i);
var stringsSmall = Array.from({ length: 10 }, (_, i) => `item-${i}`);
var stringsMedium = Array.from({ length: 100 }, (_, i) => `item-${i}`);
var mixed = [1, "hello", true, null, undefined, 3.14, "world", false, 42, "test"];
bench("structuredClone([10 numbers])", () => structuredClone(numbersSmall));
bench("structuredClone([100 numbers])", () => structuredClone(numbersMedium));
bench("structuredClone([1000 numbers])", () => structuredClone(numbersLarge));
bench("structuredClone([10 strings])", () => structuredClone(stringsSmall));
bench("structuredClone([100 strings])", () => structuredClone(stringsMedium));
bench("structuredClone([10 mixed])", () => structuredClone(mixed));
await run();

View File

@@ -4,7 +4,7 @@ register_repository(
REPOSITORY
oven-sh/mimalloc
COMMIT
ffa38ab8ac914f9eb7af75c1f8ad457643dc14f2
1beadf9651a7bfdec6b5367c380ecc3fe1c40d1a
)
set(MIMALLOC_CMAKE_ARGS
@@ -14,7 +14,7 @@ set(MIMALLOC_CMAKE_ARGS
-DMI_BUILD_TESTS=OFF
-DMI_USE_CXX=ON
-DMI_SKIP_COLLECT_ON_EXIT=ON
# ```
# mimalloc_allow_large_os_pages=0 BUN_PORT=3004 mem bun http-hello.js
# Started development server: http://localhost:3004
@@ -51,7 +51,7 @@ if(ENABLE_ASAN)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_DEBUG_UBSAN=ON)
elseif(APPLE OR LINUX)
if(APPLE)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_OVERRIDE=OFF)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_OVERRIDE=OFF)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_OSX_ZONE=OFF)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_OSX_INTERPOSE=OFF)
else()
@@ -87,9 +87,9 @@ endif()
if(WIN32)
if(DEBUG)
set(MIMALLOC_LIBRARY mimalloc-debug)
set(MIMALLOC_LIBRARY mimalloc-static-debug)
else()
set(MIMALLOC_LIBRARY mimalloc)
set(MIMALLOC_LIBRARY mimalloc-static)
endif()
elseif(DEBUG)
if (ENABLE_ASAN)

View File

@@ -6,7 +6,7 @@ option(WEBKIT_LOCAL "If a local version of WebKit should be used instead of down
option(WEBKIT_BUILD_TYPE "The build type for local WebKit (defaults to CMAKE_BUILD_TYPE)")
if(NOT WEBKIT_VERSION)
set(WEBKIT_VERSION 8af7958ff0e2a4787569edf64641a1ae7cfe074a)
set(WEBKIT_VERSION autobuild-preview-pr-157-68c51d5a)
endif()
# Use preview build URL for Windows ARM64 until the fix is merged to main

View File

@@ -1,7 +1,7 @@
{
"private": true,
"name": "bun",
"version": "1.3.9",
"version": "1.3.10",
"workspaces": [
"./packages/bun-types",
"./packages/@types/bun"

View File

@@ -95,12 +95,12 @@ export const platforms: Platform[] = [
bin: "bun-windows-x64-baseline",
exe: "bin/bun.exe",
},
{
os: "win32",
arch: "arm64",
bin: "bun-windows-aarch64",
exe: "bin/bun.exe",
},
// {
// os: "win32",
// arch: "arm64",
// bin: "bun-windows-aarch64",
// exe: "bin/bun.exe",
// },
];
export const supportedPlatforms: Platform[] = platforms

View File

@@ -2445,7 +2445,12 @@ declare module "bun" {
/**
* @see [Bun.build API docs](https://bun.com/docs/bundler#api)
*/
interface BuildConfigBase {
interface BuildConfig {
/**
* Enable code splitting
*/
splitting?: boolean;
/**
* List of entrypoints, usually file paths
*/
@@ -2774,6 +2779,33 @@ declare module "bun" {
metafile?: boolean;
outdir?: string;
/**
* Create a standalone executable
*
* When `true`, creates an executable for the current platform.
* When a target string, creates an executable for that platform.
*
* @example
* ```ts
* // Create executable for current platform
* await Bun.build({
* entrypoints: ['./app.js'],
* compile: {
* target: 'linux-x64',
* },
* outfile: './my-app'
* });
*
* // Cross-compile for Linux x64
* await Bun.build({
* entrypoints: ['./app.js'],
* compile: 'linux-x64',
* outfile: './my-app'
* });
* ```
*/
compile?: boolean | Bun.Build.CompileTarget | CompileBuildOptions;
}
interface CompileBuildOptions {
@@ -2832,57 +2864,6 @@ declare module "bun" {
};
}
// Compile build config - uses outfile for executable output
interface CompileBuildConfig extends BuildConfigBase {
/**
* Create a standalone executable
*
* When `true`, creates an executable for the current platform.
* When a target string, creates an executable for that platform.
*
* @example
* ```ts
* // Create executable for current platform
* await Bun.build({
* entrypoints: ['./app.js'],
* compile: {
* target: 'linux-x64',
* },
* outfile: './my-app'
* });
*
* // Cross-compile for Linux x64
* await Bun.build({
* entrypoints: ['./app.js'],
* compile: 'linux-x64',
* outfile: './my-app'
* });
* ```
*/
compile: boolean | Bun.Build.CompileTarget | CompileBuildOptions;
/**
* Splitting is not currently supported with `.compile`
*/
splitting?: never;
}
interface NormalBuildConfig extends BuildConfigBase {
/**
* Enable code splitting
*
* This does not currently work with {@link CompileBuildConfig.compile `compile`}
*
* @default true
*/
splitting?: boolean;
}
/**
* @see [Bun.build API docs](https://bun.com/docs/bundler#api)
*/
type BuildConfig = CompileBuildConfig | NormalBuildConfig;
/**
* Hash and verify passwords using argon2 or bcrypt
*

View File

@@ -188,6 +188,103 @@ struct us_loop_t *us_create_loop(void *hint, void (*wakeup_cb)(struct us_loop_t
return loop;
}
/* Shared dispatch loop for both us_loop_run and us_loop_run_bun_tick */
static void us_internal_dispatch_ready_polls(struct us_loop_t *loop) {
#ifdef LIBUS_USE_EPOLL
for (loop->current_ready_poll = 0; loop->current_ready_poll < loop->num_ready_polls; loop->current_ready_poll++) {
struct us_poll_t *poll = GET_READY_POLL(loop, loop->current_ready_poll);
if (LIKELY(poll)) {
if (CLEAR_POINTER_TAG(poll) != poll) {
Bun__internal_dispatch_ready_poll(loop, poll);
continue;
}
int events = loop->ready_polls[loop->current_ready_poll].events;
const int error = events & EPOLLERR;
const int eof = events & EPOLLHUP;
events &= us_poll_events(poll);
if (events || error || eof) {
us_internal_dispatch_ready_poll(poll, error, eof, events);
}
}
}
#else
/* Kqueue delivers each filter (READ, WRITE, TIMER, etc.) as a separate kevent,
* so the same fd/poll can appear twice in ready_polls. We coalesce them into a
* single set of flags per poll before dispatching, matching epoll's behavior
* where each fd appears once with a combined bitmask. */
struct kevent_flags {
uint8_t readable : 1;
uint8_t writable : 1;
uint8_t error : 1;
uint8_t eof : 1;
uint8_t skip : 1;
uint8_t _pad : 3;
};
_Static_assert(sizeof(struct kevent_flags) == 1, "kevent_flags must be 1 byte");
struct kevent_flags coalesced[LIBUS_MAX_READY_POLLS]; /* no zeroing needed — every index is written in the first pass */
/* First pass: decode kevents and coalesce same-poll entries */
for (int i = 0; i < loop->num_ready_polls; i++) {
struct us_poll_t *poll = GET_READY_POLL(loop, i);
if (!poll || CLEAR_POINTER_TAG(poll) != poll) {
coalesced[i] = (struct kevent_flags){ .skip = 1 };
continue;
}
const int16_t filter = loop->ready_polls[i].filter;
const uint16_t flags = loop->ready_polls[i].flags;
struct kevent_flags bits = {
.readable = (filter == EVFILT_READ || filter == EVFILT_TIMER || filter == EVFILT_MACHPORT),
.writable = (filter == EVFILT_WRITE),
.error = !!(flags & EV_ERROR),
.eof = !!(flags & EV_EOF),
};
/* Look backward for a prior entry with the same poll to coalesce into.
* Kqueue returns at most 2 kevents per fd (READ + WRITE). */
int merged = 0;
for (int j = i - 1; j >= 0; j--) {
if (!coalesced[j].skip && GET_READY_POLL(loop, j) == poll) {
coalesced[j].readable |= bits.readable;
coalesced[j].writable |= bits.writable;
coalesced[j].error |= bits.error;
coalesced[j].eof |= bits.eof;
coalesced[i] = (struct kevent_flags){ .skip = 1 };
merged = 1;
break;
}
}
if (!merged) {
coalesced[i] = bits;
}
}
/* Second pass: dispatch everything in order — tagged pointers and coalesced events */
for (loop->current_ready_poll = 0; loop->current_ready_poll < loop->num_ready_polls; loop->current_ready_poll++) {
struct us_poll_t *poll = GET_READY_POLL(loop, loop->current_ready_poll);
if (!poll) continue;
/* Tagged pointers (FilePoll) go through Bun's own dispatch */
if (CLEAR_POINTER_TAG(poll) != poll) {
Bun__internal_dispatch_ready_poll(loop, poll);
continue;
}
struct kevent_flags bits = coalesced[loop->current_ready_poll];
if (bits.skip) continue;
int events = (bits.readable ? LIBUS_SOCKET_READABLE : 0)
| (bits.writable ? LIBUS_SOCKET_WRITABLE : 0);
events &= us_poll_events(poll);
if (events || bits.error || bits.eof) {
us_internal_dispatch_ready_poll(poll, bits.error, bits.eof, events);
}
}
#endif
}
void us_loop_run(struct us_loop_t *loop) {
us_loop_integrate(loop);
@@ -205,41 +302,7 @@ void us_loop_run(struct us_loop_t *loop) {
} while (IS_EINTR(loop->num_ready_polls));
#endif
/* Iterate ready polls, dispatching them by type */
for (loop->current_ready_poll = 0; loop->current_ready_poll < loop->num_ready_polls; loop->current_ready_poll++) {
struct us_poll_t *poll = GET_READY_POLL(loop, loop->current_ready_poll);
/* Any ready poll marked with nullptr will be ignored */
if (LIKELY(poll)) {
if (CLEAR_POINTER_TAG(poll) != poll) {
Bun__internal_dispatch_ready_poll(loop, poll);
continue;
}
#ifdef LIBUS_USE_EPOLL
int events = loop->ready_polls[loop->current_ready_poll].events;
const int error = events & EPOLLERR;
const int eof = events & EPOLLHUP;
#else
const struct kevent64_s* current_kevent = &loop->ready_polls[loop->current_ready_poll];
const int16_t filter = current_kevent->filter;
const uint16_t flags = current_kevent->flags;
const uint32_t fflags = current_kevent->fflags;
// > Multiple events which trigger the filter do not result in multiple kevents being placed on the kqueue
// > Instead, the filter will aggregate the events into a single kevent struct
// Note: EV_ERROR only sets the error in data as part of changelist. Not in this call!
int events = 0
| ((filter == EVFILT_READ) ? LIBUS_SOCKET_READABLE : 0)
| ((filter == EVFILT_WRITE) ? LIBUS_SOCKET_WRITABLE : 0);
const int error = (flags & (EV_ERROR)) ? ((int)fflags || 1) : 0;
const int eof = (flags & (EV_EOF));
#endif
/* Always filter all polls by what they actually poll for (callback polls always poll for readable) */
events &= us_poll_events(poll);
if (events || error || eof) {
us_internal_dispatch_ready_poll(poll, error, eof, events);
}
}
}
us_internal_dispatch_ready_polls(loop);
/* Emit post callback */
us_internal_loop_post(loop);
@@ -263,57 +326,33 @@ void us_loop_run_bun_tick(struct us_loop_t *loop, const struct timespec* timeout
/* Emit pre callback */
us_internal_loop_pre(loop);
if (loop->data.jsc_vm)
const unsigned int had_wakeups = __atomic_exchange_n(&loop->pending_wakeups, 0, __ATOMIC_ACQUIRE);
const int will_idle_inside_event_loop = had_wakeups == 0 && (!timeout || (timeout->tv_nsec != 0 || timeout->tv_sec != 0));
if (will_idle_inside_event_loop && loop->data.jsc_vm)
Bun__JSC_onBeforeWait(loop->data.jsc_vm);
/* Fetch ready polls */
#ifdef LIBUS_USE_EPOLL
/* A zero timespec already has a fast path in ep_poll (fs/eventpoll.c):
* it sets timed_out=1 (line 1952) and returns before any scheduler
* interaction (line 1975). No equivalent of KEVENT_FLAG_IMMEDIATE needed. */
loop->num_ready_polls = bun_epoll_pwait2(loop->fd, loop->ready_polls, 1024, timeout);
#else
do {
loop->num_ready_polls = kevent64(loop->fd, NULL, 0, loop->ready_polls, 1024, 0, timeout);
loop->num_ready_polls = kevent64(loop->fd, NULL, 0, loop->ready_polls, 1024,
/* When we won't idle (pending wakeups or zero timeout), use KEVENT_FLAG_IMMEDIATE.
* In XNU's kqueue_scan (bsd/kern/kern_event.c):
* - KEVENT_FLAG_IMMEDIATE: returns immediately after kqueue_process() (line 8031)
* - Zero timespec without the flag: falls through to assert_wait_deadline (line 8039)
* and thread_block (line 8048), doing a full context switch cycle (~14us) even
* though the deadline is already in the past. */
will_idle_inside_event_loop ? 0 : KEVENT_FLAG_IMMEDIATE,
timeout);
} while (IS_EINTR(loop->num_ready_polls));
#endif
/* Iterate ready polls, dispatching them by type */
for (loop->current_ready_poll = 0; loop->current_ready_poll < loop->num_ready_polls; loop->current_ready_poll++) {
struct us_poll_t *poll = GET_READY_POLL(loop, loop->current_ready_poll);
/* Any ready poll marked with nullptr will be ignored */
if (LIKELY(poll)) {
if (CLEAR_POINTER_TAG(poll) != poll) {
Bun__internal_dispatch_ready_poll(loop, poll);
continue;
}
#ifdef LIBUS_USE_EPOLL
int events = loop->ready_polls[loop->current_ready_poll].events;
const int error = events & EPOLLERR;
const int eof = events & EPOLLHUP;
#else
const struct kevent64_s* current_kevent = &loop->ready_polls[loop->current_ready_poll];
const int16_t filter = current_kevent->filter;
const uint16_t flags = current_kevent->flags;
const uint32_t fflags = current_kevent->fflags;
// > Multiple events which trigger the filter do not result in multiple kevents being placed on the kqueue
// > Instead, the filter will aggregate the events into a single kevent struct
int events = 0
| ((filter & EVFILT_READ) ? LIBUS_SOCKET_READABLE : 0)
| ((filter & EVFILT_WRITE) ? LIBUS_SOCKET_WRITABLE : 0);
// Note: EV_ERROR only sets the error in data as part of changelist. Not in this call!
const int error = (flags & (EV_ERROR)) ? ((int)fflags || 1) : 0;
const int eof = (flags & (EV_EOF));
#endif
/* Always filter all polls by what they actually poll for (callback polls always poll for readable) */
events &= us_poll_events(poll);
if (events || error || eof) {
us_internal_dispatch_ready_poll(poll, error, eof, events);
}
}
}
us_internal_dispatch_ready_polls(loop);
/* Emit post callback */
us_internal_loop_post(loop);
@@ -613,7 +652,7 @@ struct us_internal_async *us_internal_create_async(struct us_loop_t *loop, int f
struct us_internal_callback_t *cb = (struct us_internal_callback_t *) p;
cb->loop = loop;
cb->cb_expects_the_loop = 1;
cb->leave_poll_ready = 0;
cb->leave_poll_ready = 1; /* Edge-triggered: skip reading eventfd on wakeup */
return (struct us_internal_async *) cb;
}
@@ -635,12 +674,28 @@ void us_internal_async_set(struct us_internal_async *a, void (*cb)(struct us_int
internal_cb->cb = (void (*)(struct us_internal_callback_t *)) cb;
us_poll_start((struct us_poll_t *) a, internal_cb->loop, LIBUS_SOCKET_READABLE);
#ifdef LIBUS_USE_EPOLL
/* Upgrade to edge-triggered to avoid reading the eventfd on each wakeup */
struct epoll_event event;
event.events = EPOLLIN | EPOLLET;
event.data.ptr = (struct us_poll_t *) a;
epoll_ctl(internal_cb->loop->fd, EPOLL_CTL_MOD,
us_poll_fd((struct us_poll_t *) a), &event);
#endif
}
void us_internal_async_wakeup(struct us_internal_async *a) {
uint64_t one = 1;
int written = write(us_poll_fd((struct us_poll_t *) a), &one, 8);
(void)written;
int fd = us_poll_fd((struct us_poll_t *) a);
uint64_t val;
for (val = 1; ; val = 1) {
if (write(fd, &val, 8) >= 0) return;
if (errno == EINTR) continue;
if (errno == EAGAIN) {
/* Counter overflow — drain and retry */
if (read(fd, &val, 8) > 0 || errno == EAGAIN || errno == EINTR) continue;
}
break;
}
}
#else

View File

@@ -54,6 +54,10 @@ struct us_loop_t {
/* Number of polls owned by bun */
unsigned int bun_polls;
/* Incremented atomically by wakeup(), swapped to 0 before epoll/kqueue.
* If non-zero, the event loop will return immediately so we can skip the GC safepoint. */
unsigned int pending_wakeups;
/* The list of ready polls */
#ifdef LIBUS_USE_EPOLL
alignas(LIBUS_EXT_ALIGNMENT) struct epoll_event ready_polls[1024];

View File

@@ -93,6 +93,9 @@ void us_internal_loop_data_free(struct us_loop_t *loop) {
}
void us_wakeup_loop(struct us_loop_t *loop) {
#ifndef LIBUS_USE_LIBUV
__atomic_fetch_add(&loop->pending_wakeups, 1, __ATOMIC_RELEASE);
#endif
us_internal_async_wakeup(loop->data.wakeup_async);
}
@@ -393,8 +396,12 @@ void us_internal_dispatch_ready_poll(struct us_poll_t *p, int error, int eof, in
if (events & LIBUS_SOCKET_WRITABLE && !error) {
s->flags.last_write_failed = 0;
#ifdef LIBUS_USE_KQUEUE
/* Kqueue is one-shot so is not writable anymore */
p->state.poll_type = us_internal_poll_type(p) | ((events & LIBUS_SOCKET_READABLE) ? POLL_TYPE_POLLING_IN : 0);
/* Kqueue EVFILT_WRITE is one-shot so the filter is removed after delivery.
* Clear POLLING_OUT to reflect this.
* Keep POLLING_IN from the poll's own state, NOT from `events`: kqueue delivers
* each filter as a separate kevent, so a pure EVFILT_WRITE event won't have
* LIBUS_SOCKET_READABLE set even though the socket is still registered for reads. */
p->state.poll_type = us_internal_poll_type(p) | (p->state.poll_type & POLL_TYPE_POLLING_IN);
#endif
s = s->context->on_writable(s);
@@ -412,7 +419,7 @@ void us_internal_dispatch_ready_poll(struct us_poll_t *p, int error, int eof, in
us_poll_change(&s->p, loop, us_poll_events(&s->p) & LIBUS_SOCKET_READABLE);
} else {
#ifdef LIBUS_USE_KQUEUE
/* Kqueue one-shot writable needs to be re-enabled */
/* Kqueue one-shot writable needs to be re-registered */
us_poll_change(&s->p, loop, us_poll_events(&s->p) | LIBUS_SOCKET_WRITABLE);
#endif
}

View File

@@ -2,10 +2,7 @@
const Self = @This();
const safety_checks = bun.Environment.isDebug or bun.Environment.enable_asan;
#heap: *mimalloc.Heap,
thread_id: if (safety_checks) std.Thread.Id else void,
#heap: if (safety_checks) Owned(*DebugHeap) else *mimalloc.Heap,
/// Uses the default thread-local heap. This type is zero-sized.
///
@@ -23,18 +20,18 @@ pub const Default = struct {
///
/// This type is a `GenericAllocator`; see `src/allocators.zig`.
pub const Borrowed = struct {
#heap: *mimalloc.Heap,
#heap: BorrowedHeap,
pub fn allocator(self: Borrowed) std.mem.Allocator {
return .{ .ptr = self.#heap, .vtable = c_allocator_vtable };
return .{ .ptr = self.#heap, .vtable = &c_allocator_vtable };
}
pub fn getDefault() Borrowed {
return .{ .#heap = mimalloc.mi_heap_main() };
return .{ .#heap = getThreadHeap() };
}
pub fn gc(self: Borrowed) void {
mimalloc.mi_heap_collect(self.#heap, false);
mimalloc.mi_heap_collect(self.getMimallocHeap(), false);
}
pub fn helpCatchMemoryIssues(self: Borrowed) void {
@@ -44,17 +41,30 @@ pub const Borrowed = struct {
}
}
pub fn ownsPtr(self: Borrowed, ptr: *const anyopaque) bool {
return mimalloc.mi_heap_check_owned(self.getMimallocHeap(), ptr);
}
fn fromOpaque(ptr: *anyopaque) Borrowed {
return .{ .#heap = @ptrCast(@alignCast(ptr)) };
}
fn getMimallocHeap(self: Borrowed) *mimalloc.Heap {
return if (comptime safety_checks) self.#heap.inner else self.#heap;
}
fn assertThreadLock(self: Borrowed) void {
if (comptime safety_checks) self.#heap.thread_lock.assertLocked();
}
fn alignedAlloc(self: Borrowed, len: usize, alignment: Alignment) ?[*]u8 {
log("Malloc: {d}\n", .{len});
const heap = self.getMimallocHeap();
const ptr: ?*anyopaque = if (mimalloc.mustUseAlignedAlloc(alignment))
mimalloc.mi_heap_malloc_aligned(self.#heap, len, alignment.toByteUnits())
mimalloc.mi_heap_malloc_aligned(heap, len, alignment.toByteUnits())
else
mimalloc.mi_heap_malloc(self.#heap, len);
mimalloc.mi_heap_malloc(heap, len);
if (comptime bun.Environment.isDebug) {
const usable = mimalloc.mi_malloc_usable_size(ptr);
@@ -79,17 +89,42 @@ pub const Borrowed = struct {
}
};
const BorrowedHeap = if (safety_checks) *DebugHeap else *mimalloc.Heap;
const DebugHeap = struct {
inner: *mimalloc.Heap,
thread_lock: bun.safety.ThreadLock,
pub const deinit = void;
};
threadlocal var thread_heap: if (safety_checks) ?DebugHeap else void = if (safety_checks) null;
fn getThreadHeap() BorrowedHeap {
if (comptime !safety_checks) return mimalloc.mi_heap_get_default();
if (thread_heap == null) {
thread_heap = .{
.inner = mimalloc.mi_heap_get_default(),
.thread_lock = .initLocked(),
};
}
return &thread_heap.?;
}
const log = bun.Output.scoped(.mimalloc, .hidden);
pub fn allocator(self: Self) std.mem.Allocator {
self.assertThreadOwnership();
return self.borrow().allocator();
}
pub fn borrow(self: Self) Borrowed {
return .{ .#heap = self.#heap };
return .{ .#heap = if (comptime safety_checks) self.#heap.get() else self.#heap };
}
/// Internally, mimalloc calls mi_heap_get_default()
/// to get the default heap.
/// It uses pthread_getspecific to do that.
/// We can save those extra calls if we just do it once in here
pub fn getThreadLocalDefault() std.mem.Allocator {
if (bun.Environment.enable_asan) return bun.default_allocator;
return Borrowed.getDefault().allocator();
@@ -122,15 +157,22 @@ pub fn dumpStats(_: Self) void {
}
pub fn deinit(self: *Self) void {
mimalloc.mi_heap_destroy(self.#heap);
const mimalloc_heap = self.borrow().getMimallocHeap();
if (comptime safety_checks) {
self.#heap.deinit();
}
mimalloc.mi_heap_destroy(mimalloc_heap);
self.* = undefined;
}
pub fn init() Self {
return .{
.#heap = mimalloc.mi_heap_new() orelse bun.outOfMemory(),
.thread_id = if (safety_checks) std.Thread.getCurrentId() else {},
};
const mimalloc_heap = mimalloc.mi_heap_new() orelse bun.outOfMemory();
if (comptime !safety_checks) return .{ .#heap = mimalloc_heap };
const heap: Owned(*DebugHeap) = .new(.{
.inner = mimalloc_heap,
.thread_lock = .initLocked(),
});
return .{ .#heap = heap };
}
pub fn gc(self: Self) void {
@@ -141,16 +183,8 @@ pub fn helpCatchMemoryIssues(self: Self) void {
self.borrow().helpCatchMemoryIssues();
}
fn assertThreadOwnership(self: Self) void {
if (comptime safety_checks) {
const current_thread = std.Thread.getCurrentId();
if (current_thread != self.thread_id) {
std.debug.panic(
"MimallocArena used from wrong thread: arena belongs to thread {d}, but current thread is {d}",
.{ self.thread_id, current_thread },
);
}
}
pub fn ownsPtr(self: Self, ptr: *const anyopaque) bool {
return self.borrow().ownsPtr(ptr);
}
fn alignedAllocSize(ptr: [*]u8) usize {
@@ -159,10 +193,13 @@ fn alignedAllocSize(ptr: [*]u8) usize {
fn vtable_alloc(ptr: *anyopaque, len: usize, alignment: Alignment, _: usize) ?[*]u8 {
const self: Borrowed = .fromOpaque(ptr);
self.assertThreadLock();
return self.alignedAlloc(len, alignment);
}
fn vtable_resize(_: *anyopaque, buf: []u8, _: Alignment, new_len: usize, _: usize) bool {
fn vtable_resize(ptr: *anyopaque, buf: []u8, _: Alignment, new_len: usize, _: usize) bool {
const self: Borrowed = .fromOpaque(ptr);
self.assertThreadLock();
return mimalloc.mi_expand(buf.ptr, new_len) != null;
}
@@ -186,17 +223,39 @@ fn vtable_free(
}
}
/// Attempt to expand or shrink memory, allowing relocation.
///
/// `memory.len` must equal the length requested from the most recent
/// successful call to `alloc`, `resize`, or `remap`. `alignment` must
/// equal the same value that was passed as the `alignment` parameter to
/// the original `alloc` call.
///
/// A non-`null` return value indicates the resize was successful. The
/// allocation may have same address, or may have been relocated. In either
/// case, the allocation now has size of `new_len`. A `null` return value
/// indicates that the resize would be equivalent to allocating new memory,
/// copying the bytes from the old memory, and then freeing the old memory.
/// In such case, it is more efficient for the caller to perform the copy.
///
/// `new_len` must be greater than zero.
///
/// `ret_addr` is optionally provided as the first return address of the
/// allocation call stack. If the value is `0` it means no return address
/// has been provided.
fn vtable_remap(ptr: *anyopaque, buf: []u8, alignment: Alignment, new_len: usize, _: usize) ?[*]u8 {
const self: Borrowed = .fromOpaque(ptr);
const value = mimalloc.mi_heap_realloc_aligned(self.#heap, buf.ptr, new_len, alignment.toByteUnits());
self.assertThreadLock();
const heap = self.getMimallocHeap();
const aligned_size = alignment.toByteUnits();
const value = mimalloc.mi_heap_realloc_aligned(heap, buf.ptr, new_len, aligned_size);
return @ptrCast(value);
}
pub fn isInstance(alloc: std.mem.Allocator) bool {
return alloc.vtable == c_allocator_vtable;
return alloc.vtable == &c_allocator_vtable;
}
const c_allocator_vtable = &std.mem.Allocator.VTable{
const c_allocator_vtable = std.mem.Allocator.VTable{
.alloc = vtable_alloc,
.resize = vtable_resize,
.remap = vtable_remap,
@@ -209,3 +268,5 @@ const Alignment = std.mem.Alignment;
const bun = @import("bun");
const assert = bun.assert;
const mimalloc = bun.mimalloc;
const Owned = bun.ptr.Owned;
const safety_checks = bun.Environment.ci_assert;

View File

@@ -60,29 +60,17 @@ pub const Heap = opaque {
return mi_heap_realloc(self, p, newsize);
}
pub fn isOwned(self: *Heap, p: ?*const anyopaque) bool {
return mi_heap_contains(self, p);
pub fn isOwned(self: *Heap, p: ?*anyopaque) bool {
return mi_heap_check_owned(self, p);
}
};
pub extern fn mi_heap_new() ?*Heap;
pub extern fn mi_heap_delete(heap: *Heap) void;
pub extern fn mi_heap_destroy(heap: *Heap) void;
pub extern fn mi_heap_set_default(heap: *Heap) *Heap;
pub extern fn mi_heap_get_default() *Heap;
pub extern fn mi_heap_get_backing() *Heap;
pub extern fn mi_heap_collect(heap: *Heap, force: bool) void;
pub extern fn mi_heap_main() *Heap;
// Thread-local heap (theap) API - new in mimalloc v3
pub const THeap = opaque {};
pub extern fn mi_theap_get_default() *THeap;
pub extern fn mi_theap_set_default(theap: *THeap) *THeap;
pub extern fn mi_theap_collect(theap: *THeap, force: bool) void;
pub extern fn mi_theap_malloc(theap: *THeap, size: usize) ?*anyopaque;
pub extern fn mi_theap_zalloc(theap: *THeap, size: usize) ?*anyopaque;
pub extern fn mi_theap_calloc(theap: *THeap, count: usize, size: usize) ?*anyopaque;
pub extern fn mi_theap_malloc_small(theap: *THeap, size: usize) ?*anyopaque;
pub extern fn mi_theap_malloc_aligned(theap: *THeap, size: usize, alignment: usize) ?*anyopaque;
pub extern fn mi_theap_realloc(theap: *THeap, p: ?*anyopaque, newsize: usize) ?*anyopaque;
pub extern fn mi_theap_destroy(theap: *THeap) void;
pub extern fn mi_heap_theap(heap: *Heap) *THeap;
pub extern fn mi_heap_malloc(heap: *Heap, size: usize) ?*anyopaque;
pub extern fn mi_heap_zalloc(heap: *Heap, size: usize) ?*anyopaque;
pub extern fn mi_heap_calloc(heap: *Heap, count: usize, size: usize) ?*anyopaque;
@@ -114,7 +102,8 @@ pub extern fn mi_heap_rezalloc_aligned(heap: *Heap, p: ?*anyopaque, newsize: usi
pub extern fn mi_heap_rezalloc_aligned_at(heap: *Heap, p: ?*anyopaque, newsize: usize, alignment: usize, offset: usize) ?*anyopaque;
pub extern fn mi_heap_recalloc_aligned(heap: *Heap, p: ?*anyopaque, newcount: usize, size: usize, alignment: usize) ?*anyopaque;
pub extern fn mi_heap_recalloc_aligned_at(heap: *Heap, p: ?*anyopaque, newcount: usize, size: usize, alignment: usize, offset: usize) ?*anyopaque;
pub extern fn mi_heap_contains(heap: *const Heap, p: ?*const anyopaque) bool;
pub extern fn mi_heap_contains_block(heap: *Heap, p: *const anyopaque) bool;
pub extern fn mi_heap_check_owned(heap: *Heap, p: *const anyopaque) bool;
pub extern fn mi_check_owned(p: ?*const anyopaque) bool;
pub const struct_mi_heap_area_s = extern struct {
blocks: ?*anyopaque,

View File

@@ -245,6 +245,16 @@ pub const All = struct {
}
pub fn getTimeout(this: *All, spec: *timespec, vm: *VirtualMachine) bool {
// On POSIX, if there are pending immediate tasks, use a zero timeout
// so epoll/kqueue returns immediately without the overhead of writing
// to the eventfd via wakeup().
if (comptime Environment.isPosix) {
if (vm.event_loop.immediate_tasks.items.len > 0) {
spec.* = .{ .nsec = 0, .sec = 0 };
return true;
}
}
var maybe_now: ?timespec = null;
while (this.timers.peek()) |min| {
const now = maybe_now orelse now: {

View File

@@ -119,6 +119,7 @@ JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_swap16);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_swap32);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_swap64);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_toString);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_slice);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_write);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_writeBigInt64LE);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_writeBigInt64BE);
@@ -1879,6 +1880,103 @@ bool inline parseArrayIndex(JSC::ThrowScope& scope, JSC::JSGlobalObject* globalO
return true;
}
static ALWAYS_INLINE size_t adjustSliceOffsetInt32(int32_t offset, size_t length)
{
if (offset < 0) {
int64_t adjusted = static_cast<int64_t>(offset) + static_cast<int64_t>(length);
return adjusted > 0 ? static_cast<size_t>(adjusted) : 0;
}
return static_cast<size_t>(offset) < length ? static_cast<size_t>(offset) : length;
}
static ALWAYS_INLINE size_t adjustSliceOffsetDouble(double offset, size_t length)
{
if (std::isnan(offset)) {
return 0;
}
offset = std::trunc(offset);
if (offset == 0) {
return 0;
} else if (offset < 0) {
double adjusted = offset + static_cast<double>(length);
return adjusted > 0 ? static_cast<size_t>(adjusted) : 0;
} else {
return offset < static_cast<double>(length) ? static_cast<size_t>(offset) : length;
}
}
static JSC::EncodedJSValue jsBufferPrototypeFunction_sliceBody(JSC::JSGlobalObject* lexicalGlobalObject, JSC::CallFrame* callFrame, typename IDLOperation<JSArrayBufferView>::ClassParameter castedThis)
{
auto& vm = JSC::getVM(lexicalGlobalObject);
auto throwScope = DECLARE_THROW_SCOPE(vm);
auto* globalObject = defaultGlobalObject(lexicalGlobalObject);
size_t byteLength = castedThis->byteLength();
size_t byteOffset = castedThis->byteOffset();
size_t startOffset = 0;
size_t endOffset = byteLength;
unsigned argCount = callFrame->argumentCount();
if (argCount > 0) {
JSValue startArg = callFrame->uncheckedArgument(0);
if (startArg.isInt32()) {
startOffset = adjustSliceOffsetInt32(startArg.asInt32(), byteLength);
} else if (!startArg.isUndefined()) {
double startD = startArg.toNumber(lexicalGlobalObject);
RETURN_IF_EXCEPTION(throwScope, {});
startOffset = adjustSliceOffsetDouble(startD, byteLength);
}
}
if (argCount > 1) {
JSValue endArg = callFrame->uncheckedArgument(1);
if (endArg.isInt32()) {
endOffset = adjustSliceOffsetInt32(endArg.asInt32(), byteLength);
} else if (!endArg.isUndefined()) {
double endD = endArg.toNumber(lexicalGlobalObject);
RETURN_IF_EXCEPTION(throwScope, {});
endOffset = adjustSliceOffsetDouble(endD, byteLength);
}
}
size_t newLength = endOffset > startOffset ? endOffset - startOffset : 0;
if (castedThis->isDetached()) [[unlikely]] {
throwVMTypeError(lexicalGlobalObject, throwScope, "Buffer is detached"_s);
return {};
}
RefPtr<ArrayBuffer> buffer = castedThis->possiblySharedBuffer();
if (!buffer) {
throwOutOfMemoryError(globalObject, throwScope);
return {};
}
if (castedThis->isResizableOrGrowableShared()) {
auto* subclassStructure = globalObject->JSResizableOrGrowableSharedBufferSubclassStructure();
auto* uint8Array = JSC::JSUint8Array::create(lexicalGlobalObject, subclassStructure, WTF::move(buffer), byteOffset + startOffset, newLength);
RETURN_IF_EXCEPTION(throwScope, {});
if (!uint8Array) [[unlikely]] {
throwOutOfMemoryError(globalObject, throwScope);
return {};
}
RELEASE_AND_RETURN(throwScope, JSC::JSValue::encode(uint8Array));
}
auto* subclassStructure = globalObject->JSBufferSubclassStructure();
auto* uint8Array = JSC::JSUint8Array::create(lexicalGlobalObject, subclassStructure, WTF::move(buffer), byteOffset + startOffset, newLength);
RETURN_IF_EXCEPTION(throwScope, {});
if (!uint8Array) [[unlikely]] {
throwOutOfMemoryError(globalObject, throwScope);
return {};
}
RELEASE_AND_RETURN(throwScope, JSC::JSValue::encode(uint8Array));
}
// https://github.com/nodejs/node/blob/v22.9.0/lib/buffer.js#L834
// using byteLength and byte offsets here is intentional
static JSC::EncodedJSValue jsBufferPrototypeFunction_toStringBody(JSC::JSGlobalObject* lexicalGlobalObject, JSC::CallFrame* callFrame, typename IDLOperation<JSArrayBufferView>::ClassParameter castedThis)
@@ -2430,6 +2528,11 @@ JSC_DEFINE_HOST_FUNCTION(jsBufferPrototypeFunction_swap64, (JSGlobalObject * lex
return IDLOperation<JSArrayBufferView>::call<jsBufferPrototypeFunction_swap64Body>(*lexicalGlobalObject, *callFrame, "swap64");
}
JSC_DEFINE_HOST_FUNCTION(jsBufferPrototypeFunction_slice, (JSGlobalObject * lexicalGlobalObject, CallFrame* callFrame))
{
return IDLOperation<JSArrayBufferView>::call<jsBufferPrototypeFunction_sliceBody>(*lexicalGlobalObject, *callFrame, "slice");
}
JSC_DEFINE_HOST_FUNCTION(jsBufferPrototypeFunction_toString, (JSGlobalObject * lexicalGlobalObject, CallFrame* callFrame))
{
return IDLOperation<JSArrayBufferView>::call<jsBufferPrototypeFunction_toStringBody>(*lexicalGlobalObject, *callFrame, "toString");
@@ -2711,8 +2814,8 @@ static const HashTableValue JSBufferPrototypeTableValues[]
{ "readUIntBE"_s, static_cast<unsigned>(JSC::PropertyAttribute::Builtin), NoIntrinsic, { HashTableValue::BuiltinGeneratorType, jsBufferPrototypeReadUIntBECodeGenerator, 1 } },
{ "readUIntLE"_s, static_cast<unsigned>(JSC::PropertyAttribute::Builtin), NoIntrinsic, { HashTableValue::BuiltinGeneratorType, jsBufferPrototypeReadUIntLECodeGenerator, 1 } },
{ "slice"_s, static_cast<unsigned>(JSC::PropertyAttribute::Builtin), NoIntrinsic, { HashTableValue::BuiltinGeneratorType, jsBufferPrototypeSliceCodeGenerator, 2 } },
{ "subarray"_s, static_cast<unsigned>(JSC::PropertyAttribute::Builtin), NoIntrinsic, { HashTableValue::BuiltinGeneratorType, jsBufferPrototypeSliceCodeGenerator, 2 } },
{ "slice"_s, static_cast<unsigned>(JSC::PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsBufferPrototypeFunction_slice, 2 } },
{ "subarray"_s, static_cast<unsigned>(JSC::PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsBufferPrototypeFunction_slice, 2 } },
{ "swap16"_s, static_cast<unsigned>(JSC::PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsBufferPrototypeFunction_swap16, 0 } },
{ "swap32"_s, static_cast<unsigned>(JSC::PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsBufferPrototypeFunction_swap32, 0 } },
{ "swap64"_s, static_cast<unsigned>(JSC::PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsBufferPrototypeFunction_swap64, 0 } },

View File

@@ -78,6 +78,9 @@
#include <JavaScriptCore/ArrayBuffer.h>
#include <JavaScriptCore/JSArrayBufferView.h>
#include <JavaScriptCore/JSCInlines.h>
#include <JavaScriptCore/JSArrayInlines.h>
#include <JavaScriptCore/ButterflyInlines.h>
#include <JavaScriptCore/ObjectInitializationScope.h>
#include <JavaScriptCore/JSDataView.h>
#include <JavaScriptCore/JSMapInlines.h>
#include <JavaScriptCore/JSMapIterator.h>
@@ -5574,6 +5577,13 @@ SerializedScriptValue::SerializedScriptValue(WTF::FixedVector<SimpleInMemoryProp
m_memoryCost = computeMemoryCost();
}
SerializedScriptValue::SerializedScriptValue(WTF::FixedVector<SimpleCloneableValue>&& elements)
: m_simpleArrayElements(WTF::move(elements))
, m_fastPath(FastPath::SimpleArray)
{
m_memoryCost = computeMemoryCost();
}
SerializedScriptValue::SerializedScriptValue(const String& fastPathString)
: m_fastPathString(fastPathString)
, m_fastPath(FastPath::String)
@@ -5581,6 +5591,14 @@ SerializedScriptValue::SerializedScriptValue(const String& fastPathString)
m_memoryCost = computeMemoryCost();
}
SerializedScriptValue::SerializedScriptValue(Vector<uint8_t>&& butterflyData, uint32_t length, FastPath fastPath)
: m_arrayButterflyData(WTF::move(butterflyData))
, m_arrayLength(length)
, m_fastPath(fastPath)
{
m_memoryCost = computeMemoryCost();
}
size_t SerializedScriptValue::computeMemoryCost() const
{
size_t cost = m_data.size();
@@ -5652,6 +5670,19 @@ size_t SerializedScriptValue::computeMemoryCost() const
}
}
break;
case FastPath::SimpleArray:
cost += m_simpleArrayElements.byteSize();
for (const auto& elem : m_simpleArrayElements) {
std::visit(WTF::makeVisitor(
[&](JSC::JSValue) { /* already included in byteSize() */ },
[&](const String& s) { cost += s.sizeInBytes(); }),
elem);
}
break;
case FastPath::Int32Array:
case FastPath::DoubleArray:
cost += m_arrayButterflyData.size();
break;
case FastPath::None:
break;
@@ -5843,7 +5874,9 @@ ExceptionOr<Ref<SerializedScriptValue>> SerializedScriptValue::create(JSGlobalOb
if (canUseFastPath) {
bool canUseStringFastPath = false;
bool canUseObjectFastPath = false;
bool canUseArrayFastPath = false;
JSObject* object = nullptr;
JSArray* array = nullptr;
Structure* structure = nullptr;
if (value.isCell()) {
auto* cell = value.asCell();
@@ -5853,7 +5886,10 @@ ExceptionOr<Ref<SerializedScriptValue>> SerializedScriptValue::create(JSGlobalOb
object = cell->getObject();
structure = object->structure();
if (isObjectFastPathCandidate(structure)) {
if (auto* jsArray = jsDynamicCast<JSArray*>(object)) {
canUseArrayFastPath = true;
array = jsArray;
} else if (isObjectFastPathCandidate(structure)) {
canUseObjectFastPath = true;
}
}
@@ -5866,6 +5902,84 @@ ExceptionOr<Ref<SerializedScriptValue>> SerializedScriptValue::create(JSGlobalOb
return SerializedScriptValue::createStringFastPath(stringValue);
}
if (canUseArrayFastPath) {
ASSERT(array != nullptr);
// Arrays with named properties (e.g. arr.foo = "bar") cannot use fast path
// as we only copy indexed elements. maxOffset == invalidOffset means no named properties.
if (structure->maxOffset() != invalidOffset)
canUseArrayFastPath = false;
}
if (canUseArrayFastPath) {
ASSERT(array != nullptr);
unsigned length = array->length();
auto arrayType = array->indexingType();
// Tier 1/2: Int32 / Double butterfly memcpy fast path
if ((arrayType == ArrayWithInt32 || arrayType == ArrayWithDouble)
&& length <= array->butterfly()->vectorLength()
&& !array->structure()->holesMustForwardToPrototype(array)) {
if (arrayType == ArrayWithInt32) {
auto* data = array->butterfly()->contiguous().data();
if (!containsHole(data, length)) {
size_t byteSize = sizeof(JSValue) * length;
Vector<uint8_t> buffer(byteSize, 0);
memcpy(buffer.mutableSpan().data(), data, byteSize);
return SerializedScriptValue::createInt32ArrayFastPath(WTF::move(buffer), length);
}
} else {
auto* data = array->butterfly()->contiguousDouble().data();
if (!containsHole(data, length)) {
size_t byteSize = sizeof(double) * length;
Vector<uint8_t> buffer(byteSize, 0);
memcpy(buffer.mutableSpan().data(), data, byteSize);
return SerializedScriptValue::createDoubleArrayFastPath(WTF::move(buffer), length);
}
}
// Holes present → fall through to normal path
}
// Tier 3: Contiguous array with butterfly direct access
if (arrayType == ArrayWithContiguous
&& length <= array->butterfly()->vectorLength()
&& !array->structure()->holesMustForwardToPrototype(array)) {
auto* data = array->butterfly()->contiguous().data();
WTF::Vector<SimpleCloneableValue> elements;
elements.reserveInitialCapacity(length);
bool ok = true;
for (unsigned i = 0; i < length; i++) {
JSValue elem = data[i].get();
if (!elem) {
ok = false;
break;
}
if (elem.isCell()) {
if (!elem.isString()) {
ok = false;
break;
}
auto* str = asString(elem);
String strValue = str->value(&lexicalGlobalObject);
RETURN_IF_EXCEPTION(scope, Exception { ExistingExceptionError });
elements.append(Bun::toCrossThreadShareable(strValue));
} else {
elements.append(elem);
}
}
if (ok) {
return SerializedScriptValue::createArrayFastPath(
WTF::FixedVector<SimpleCloneableValue>(WTF::move(elements)));
}
}
// ArrayStorage / Undecided / holes forwarding → fall through to normal serialization path
}
if (canUseObjectFastPath) {
ASSERT(object != nullptr);
@@ -6142,6 +6256,21 @@ Ref<SerializedScriptValue> SerializedScriptValue::createObjectFastPath(WTF::Fixe
return adoptRef(*new SerializedScriptValue(WTF::move(object)));
}
Ref<SerializedScriptValue> SerializedScriptValue::createArrayFastPath(WTF::FixedVector<SimpleCloneableValue>&& elements)
{
return adoptRef(*new SerializedScriptValue(WTF::move(elements)));
}
Ref<SerializedScriptValue> SerializedScriptValue::createInt32ArrayFastPath(Vector<uint8_t>&& data, uint32_t length)
{
return adoptRef(*new SerializedScriptValue(WTF::move(data), length, FastPath::Int32Array));
}
Ref<SerializedScriptValue> SerializedScriptValue::createDoubleArrayFastPath(Vector<uint8_t>&& data, uint32_t length)
{
return adoptRef(*new SerializedScriptValue(WTF::move(data), length, FastPath::DoubleArray));
}
RefPtr<SerializedScriptValue> SerializedScriptValue::create(JSContextRef originContext, JSValueRef apiValue, JSValueRef* exception)
{
JSGlobalObject* lexicalGlobalObject = toJS(originContext);
@@ -6288,6 +6417,78 @@ JSValue SerializedScriptValue::deserialize(JSGlobalObject& lexicalGlobalObject,
return object;
}
case FastPath::SimpleArray: {
unsigned length = m_simpleArrayElements.size();
// Pre-convert all elements to JSValues (including creating JSStrings)
// before entering ObjectInitializationScope, since jsString() allocates
// GC cells which is not allowed inside the initialization scope.
MarkedArgumentBuffer values;
values.ensureCapacity(length);
for (unsigned i = 0; i < length; i++) {
JSValue elemValue = std::visit(
WTF::makeVisitor(
[](JSValue v) -> JSValue { return v; },
[&](const String& s) -> JSValue { return jsString(vm, s); }),
m_simpleArrayElements[i]);
values.append(elemValue);
}
Structure* resultStructure = globalObject->arrayStructureForIndexingTypeDuringAllocation(ArrayWithContiguous);
ObjectInitializationScope initScope(vm);
JSArray* resultArray = JSArray::tryCreateUninitializedRestricted(initScope, resultStructure, length);
if (!resultArray) [[unlikely]] {
if (didFail)
*didFail = true;
return {};
}
for (unsigned i = 0; i < length; i++)
resultArray->initializeIndex(initScope, i, values.at(i));
if (didFail)
*didFail = false;
return resultArray;
}
case FastPath::Int32Array:
case FastPath::DoubleArray: {
IndexingType arrayType = (m_fastPath == FastPath::Int32Array) ? ArrayWithInt32 : ArrayWithDouble;
Structure* resultStructure = globalObject->arrayStructureForIndexingTypeDuringAllocation(arrayType);
if (hasAnyArrayStorage(resultStructure->indexingType())) [[unlikely]]
break; // isHavingABadTime → fall through to normal deserialization
unsigned outOfLineStorage = resultStructure->outOfLineCapacity();
unsigned vectorLength = Butterfly::optimalContiguousVectorLength(resultStructure, m_arrayLength);
void* memory = vm.auxiliarySpace().allocate(
vm,
Butterfly::totalSize(0, outOfLineStorage, true, vectorLength * sizeof(EncodedJSValue)),
nullptr, AllocationFailureMode::ReturnNull);
if (!memory) [[unlikely]] {
if (didFail)
*didFail = true;
return {};
}
Butterfly* butterfly = Butterfly::fromBase(memory, 0, outOfLineStorage);
butterfly->setVectorLength(vectorLength);
butterfly->setPublicLength(m_arrayLength);
if (m_fastPath == FastPath::DoubleArray)
memcpy(butterfly->contiguousDouble().data(), m_arrayButterflyData.span().data(), m_arrayButterflyData.size());
else
memcpy(butterfly->contiguous().data(), m_arrayButterflyData.span().data(), m_arrayButterflyData.size());
// Clear unused tail slots with hole values
Butterfly::clearRange(arrayType, butterfly, m_arrayLength, vectorLength);
JSArray* resultArray = JSArray::createWithButterfly(vm, nullptr, resultStructure, butterfly);
if (didFail)
*didFail = false;
return resultArray;
}
case FastPath::None: {
break;
}

View File

@@ -60,15 +60,12 @@ class MemoryHandle;
namespace WebCore {
// Shared value type for fast path cloning: primitives (JSValue) or strings.
using SimpleCloneableValue = std::variant<JSC::JSValue, WTF::String>;
class SimpleInMemoryPropertyTableEntry {
public:
// Only:
// - String
// - Number
// - Boolean
// - Null
// - Undefined
using Value = std::variant<JSC::JSValue, WTF::String>;
using Value = SimpleCloneableValue;
WTF::String propertyName;
Value value;
@@ -78,6 +75,9 @@ enum class FastPath : uint8_t {
None,
String,
SimpleObject,
SimpleArray,
Int32Array,
DoubleArray,
};
#if ENABLE(OFFSCREEN_CANVAS_IN_WORKERS)
@@ -129,6 +129,13 @@ public:
// Fast path for postMessage with simple objects
static Ref<SerializedScriptValue> createObjectFastPath(WTF::FixedVector<SimpleInMemoryPropertyTableEntry>&& object);
// Fast path for postMessage with dense arrays of primitives/strings
static Ref<SerializedScriptValue> createArrayFastPath(WTF::FixedVector<SimpleCloneableValue>&& elements);
// Fast path for postMessage with dense Int32/Double arrays (butterfly memcpy)
static Ref<SerializedScriptValue> createInt32ArrayFastPath(Vector<uint8_t>&& butterflyData, uint32_t length);
static Ref<SerializedScriptValue> createDoubleArrayFastPath(Vector<uint8_t>&& butterflyData, uint32_t length);
static Ref<SerializedScriptValue> nullValue();
WEBCORE_EXPORT JSC::JSValue deserialize(JSC::JSGlobalObject&, JSC::JSGlobalObject*, SerializationErrorMode = SerializationErrorMode::Throwing, bool* didFail = nullptr);
@@ -231,6 +238,9 @@ private:
// Constructor for string fast path
explicit SerializedScriptValue(const String& fastPathString);
explicit SerializedScriptValue(WTF::FixedVector<SimpleInMemoryPropertyTableEntry>&& object);
explicit SerializedScriptValue(WTF::FixedVector<SimpleCloneableValue>&& elements);
// Constructor for Int32Array/DoubleArray butterfly memcpy fast path
SerializedScriptValue(Vector<uint8_t>&& butterflyData, uint32_t length, FastPath fastPath);
size_t computeMemoryCost() const;
@@ -260,6 +270,13 @@ private:
size_t m_memoryCost { 0 };
FixedVector<SimpleInMemoryPropertyTableEntry> m_simpleInMemoryPropertyTable {};
// m_simpleArrayElements and m_arrayButterflyData/m_arrayLength are mutually exclusive:
// SimpleArray uses m_simpleArrayElements; Int32Array/DoubleArray use m_arrayButterflyData + m_arrayLength.
FixedVector<SimpleCloneableValue> m_simpleArrayElements {};
// Int32Array / DoubleArray fast path: raw butterfly data
Vector<uint8_t> m_arrayButterflyData {};
uint32_t m_arrayLength { 0 };
};
template<class Encoder>

View File

@@ -351,11 +351,13 @@ pub fn autoTick(this: *EventLoop) void {
const ctx = this.virtual_machine;
this.tickImmediateTasks(ctx);
if (comptime Environment.isPosix) {
if (comptime Environment.isWindows) {
if (this.immediate_tasks.items.len > 0) {
this.wakeup();
}
}
// On POSIX, pending immediates are handled via an immediate timeout in
// getTimeout() instead of writing to the eventfd, avoiding that overhead.
if (comptime Environment.isPosix) {
// Some tasks need to keep the event loop alive for one more tick.
@@ -438,11 +440,13 @@ pub fn autoTickActive(this: *EventLoop) void {
var ctx = this.virtual_machine;
this.tickImmediateTasks(ctx);
if (comptime Environment.isPosix) {
if (comptime Environment.isWindows) {
if (this.immediate_tasks.items.len > 0) {
this.wakeup();
}
}
// On POSIX, pending immediates are handled via an immediate timeout in
// getTimeout() instead of writing to the eventfd, avoiding that overhead.
if (comptime Environment.isPosix) {
const pending_unref = ctx.pending_unref_counter;
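For context, a tiny script of the kind that exercises this path: each setImmediate iteration previously paid an eventfd write/read, while with the zero-timeout approach the epoll/kqueue call simply returns immediately. This is only a hypothetical micro-benchmark, not part of the change:

```ts
// Chain one million immediates; on POSIX this loop no longer touches the eventfd.
let remaining = 1_000_000;
const start = performance.now();
function tick() {
  if (--remaining > 0) {
    setImmediate(tick);
  } else {
    console.log(`done in ${(performance.now() - start).toFixed(1)}ms`);
  }
}
setImmediate(tick);
```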

View File

@@ -16,6 +16,10 @@ pub const PosixLoop = extern struct {
/// Number of polls owned by Bun
active: u32 = 0,
/// Incremented atomically by wakeup(), swapped to 0 before epoll/kqueue.
/// If non-zero, the event loop will return immediately so we can skip the GC safepoint.
pending_wakeups: u32 = 0,
/// The list of ready polls
ready_polls: [1024]EventType align(16),

View File

@@ -34,7 +34,7 @@ pub const Loop = struct {
{
var epoll = std.mem.zeroes(std.os.linux.epoll_event);
epoll.events = std.os.linux.EPOLL.IN | std.os.linux.EPOLL.ERR | std.os.linux.EPOLL.HUP;
epoll.events = std.os.linux.EPOLL.IN | std.os.linux.EPOLL.ET | std.os.linux.EPOLL.ERR | std.os.linux.EPOLL.HUP;
epoll.data.ptr = @intFromPtr(&loop);
const rc = std.os.linux.epoll_ctl(loop.epoll_fd.cast(), std.os.linux.EPOLL.CTL_ADD, loop.waker.getFd().cast(), &epoll);
@@ -165,9 +165,8 @@ pub const Loop = struct {
const pollable: Pollable = Pollable.from(event.data.u64);
if (pollable.tag() == .empty) {
if (event.data.ptr == @intFromPtr(&loop)) {
// this is the event poll, lets read it
var bytes: [8]u8 = undefined;
_ = bun.sys.read(loop.fd(), &bytes);
// Edge-triggered: no need to read the eventfd counter
continue;
}
}
_ = Poll.onUpdateEpoll(pollable.poll(), pollable.tag(), event);

View File

@@ -664,28 +664,6 @@ export function toJSON(this: BufferExt) {
return { type, data };
}
export function slice(this: BufferExt, start, end) {
var { buffer, byteOffset, byteLength } = this;
function adjustOffset(offset, length) {
// Use Math.trunc() to convert offset to an integer value that can be larger
// than an Int32. Hence, don't use offset | 0 or similar techniques.
offset = Math.trunc(offset);
if (offset === 0 || offset !== offset) {
return 0;
} else if (offset < 0) {
offset += length;
return offset > 0 ? offset : 0;
} else {
return offset < length ? offset : length;
}
}
var start_ = adjustOffset(start, byteLength);
var end_ = end !== undefined ? adjustOffset(end, byteLength) : byteLength;
return new $Buffer(buffer, byteOffset + start_, end_ > start_ ? end_ - start_ : 0);
}
$getter;
export function parent(this: BufferExt) {
return $isObject(this) && this instanceof $Buffer ? this.buffer : undefined;
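The native replacement has to preserve the offset-adjustment semantics of the deleted JS helper: truncate toward zero, count negative offsets from the end, treat NaN as 0, and clamp to [0, length]. A plain TypeScript sketch of that logic, kept only as documentation of the expected behavior (adjustOffset here is illustrative, not the actual C++ helper):

```ts
// Mirrors the removed adjustOffset(): truncation toward zero, negative offsets
// counted from the end, NaN coerced to 0, results clamped to [0, length].
function adjustOffset(offset: number, length: number): number {
  offset = Math.trunc(offset);
  if (offset === 0 || Number.isNaN(offset)) return 0;
  if (offset < 0) return Math.max(offset + length, 0);
  return Math.min(offset, length);
}

adjustOffset(-0.1, 10); // 0 (not -1)
adjustOffset(-1.9, 10); // 9 (truncates to -1, then wraps from the end)
adjustOffset(4.1, 10);  // 4
adjustOffset(NaN, 10);  // 0
```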

View File

@@ -19,7 +19,6 @@ expectAssignable<Bun.Build.CompileTarget>("bun-windows-x64-modern");
Bun.build({
entrypoints: ["hey"],
splitting: false,
// @ts-expect-error Currently not supported
compile: {},
});

View File

@@ -0,0 +1,94 @@
// Regression test for kqueue filter comparison bug (macOS).
//
// On kqueue, EVFILT_READ (-1) and EVFILT_WRITE (-2) are negative integers. The old
// code used bitwise AND to identify filters:
//
// events |= (filter & EVFILT_READ) ? READABLE : 0
// events |= (filter & EVFILT_WRITE) ? WRITABLE : 0
//
// Since all negative numbers AND'd with -1 or -2 produce truthy values, EVERY kqueue
// event was misidentified as BOTH readable AND writable. This caused the drain handler
// to fire spuriously on every readable event and vice versa.
//
// The fix uses equality comparison (filter == EVFILT_READ), plus coalescing duplicate
// kevents for the same fd (kqueue returns separate events per filter) into a single
// dispatch with combined flags — matching epoll's single-entry-per-fd behavior.
//
// This test creates unix socket connections with small buffers to force partial writes
// (which registers EVFILT_WRITE). The client sends pings on each data callback, causing
// EVFILT_READ events on the server. With the bug, each EVFILT_READ also triggers drain,
// giving a drain/data ratio of ~2.0. With the fix, the ratio is ~1.0.
//
// Example output:
// system bun (bug): data: 38970 drain: 77940 ratio: 2.0
// fixed bun: data: 52965 drain: 52965 ratio: 1.0
import { setSocketOptions } from "bun:internal-for-testing";
const CHUNK = Buffer.alloc(64 * 1024, "x");
const PING = Buffer.from("p");
const sockPath = `kqueue-bench-${process.pid}.sock`;
let drainCalls = 0;
let dataCalls = 0;
const server = Bun.listen({
unix: sockPath,
socket: {
open(socket) {
setSocketOptions(socket, 1, 512);
setSocketOptions(socket, 2, 512);
socket.write(CHUNK);
},
data() {
dataCalls++;
},
drain(socket) {
drainCalls++;
socket.write(CHUNK);
},
close() {},
error() {},
},
});
const clients = [];
for (let i = 0; i < 10; i++) {
clients.push(
await Bun.connect({
unix: sockPath,
socket: {
open(socket) {
setSocketOptions(socket, 1, 512);
setSocketOptions(socket, 2, 512);
},
data(socket) {
socket.write(PING);
},
drain() {},
close() {},
error() {},
},
}),
);
}
await Bun.sleep(50);
drainCalls = 0;
dataCalls = 0;
await Bun.sleep(100);
const ratio = dataCalls > 0 ? drainCalls / dataCalls : 0;
console.log(`data: ${dataCalls} drain: ${drainCalls} ratio: ${ratio.toFixed(1)}`);
for (const c of clients) c.end();
server.stop(true);
try {
require("fs").unlinkSync(sockPath);
} catch {}
if (dataCalls === 0 || drainCalls === 0) {
console.error("test invalid: no data or drain callbacks fired");
process.exit(1);
}
process.exit(ratio < 1.5 ? 0 : 1);
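The arithmetic behind the misclassification is easy to reproduce in a few lines. A standalone sketch, using the kqueue filter constants quoted in the comment at the top of this fixture:

```ts
const EVFILT_READ = -1;
const EVFILT_WRITE = -2;

for (const filter of [EVFILT_READ, EVFILT_WRITE]) {
  // Buggy check: ANDing two negative numbers always keeps the sign bit set,
  // so both conditions are truthy for both filters.
  const buggyReadable = (filter & EVFILT_READ) !== 0;  // true for both
  const buggyWritable = (filter & EVFILT_WRITE) !== 0; // true for both

  // Fixed check: compare the filter value directly.
  const readable = filter === EVFILT_READ;
  const writable = filter === EVFILT_WRITE;

  console.log({ filter, buggyReadable, buggyWritable, readable, writable });
}
```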

View File

@@ -339,6 +339,10 @@ describe.concurrent("socket", () => {
expect([fileURLToPath(new URL("./socket-huge-fixture.js", import.meta.url))]).toRun();
}, 60_000);
it.skipIf(isWindows)("kqueue should not dispatch spurious drain events on readable", async () => {
expect([fileURLToPath(new URL("./kqueue-filter-coalesce-fixture.ts", import.meta.url))]).toRun();
});
it("it should not crash when getting a ReferenceError on client socket open", async () => {
using server = Bun.serve({
port: 0,

View File

@@ -68,6 +68,6 @@ describe("static initializers", () => {
expect(
bunInitializers.length,
`Do not add static initializers to Bun. Static initializers are called when Bun starts up, regardless of whether you use the variables or not. This makes Bun slower.`,
).toBe(process.arch === "arm64" ? 2 : 3);
).toBe(process.arch === "arm64" ? 1 : 2);
});
});

View File

@@ -887,6 +887,68 @@ for (let withOverridenBufferWrite of [false, true]) {
expect(f[1]).toBe(0x6f);
});
it("slice() with fractional offsets truncates toward zero", () => {
const buf = Buffer.from([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]);
// -0.1 should truncate to 0, not -1
const a = buf.slice(-0.1);
expect(a.length).toBe(10);
expect(a[0]).toBe(0);
// -1.9 should truncate to -1, not -2
const b = buf.slice(-1.9);
expect(b.length).toBe(1);
expect(b[0]).toBe(9);
// 1.9 should truncate to 1
const c = buf.slice(1.9, 4.1);
expect(c.length).toBe(3);
expect(c[0]).toBe(1);
expect(c[1]).toBe(2);
expect(c[2]).toBe(3);
// NaN should be treated as 0
const d = buf.slice(NaN, NaN);
expect(d.length).toBe(0);
const e = buf.slice(NaN);
expect(e.length).toBe(10);
});
it("slice() on detached buffer throws TypeError", () => {
const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab);
// Detach the ArrayBuffer by transferring it
structuredClone(ab, { transfer: [ab] });
expect(() => buf.slice(0, 5)).toThrow(TypeError);
});
it("subarray() on detached buffer throws TypeError", () => {
const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab);
structuredClone(ab, { transfer: [ab] });
expect(() => buf.subarray(0, 5)).toThrow(TypeError);
});
it("slice() on resizable ArrayBuffer returns fixed-length view", () => {
const rab = new ArrayBuffer(10, { maxByteLength: 20 });
const buf = Buffer.from(rab);
buf[0] = 1;
buf[1] = 2;
buf[2] = 3;
buf[3] = 4;
buf[4] = 5;
const sliced = buf.slice(0, 5);
expect(sliced.length).toBe(5);
expect(sliced[0]).toBe(1);
expect(sliced[4]).toBe(5);
// Growing the buffer should NOT change the slice length
rab.resize(20);
expect(sliced.length).toBe(5);
});
function forEachUnicode(label, test) {
["ucs2", "ucs-2", "utf16le", "utf-16le"].forEach(encoding =>
it(`${label} (${encoding})`, test.bind(null, encoding)),

View File

@@ -90,6 +90,273 @@ describe("Structured Clone Fast Path", () => {
expect(delta).toBeLessThan(1024 * 1024);
});
// === Array fast path tests ===
test("structuredClone should work with empty array", () => {
expect(structuredClone([])).toEqual([]);
});
test("structuredClone should work with array of numbers", () => {
const input = [1, 2, 3, 4, 5];
expect(structuredClone(input)).toEqual(input);
});
test("structuredClone should work with array of strings", () => {
const input = ["hello", "world", ""];
expect(structuredClone(input)).toEqual(input);
});
test("structuredClone should work with array of mixed primitives", () => {
const input = [1, "hello", true, false, null, undefined, 3.14];
const cloned = structuredClone(input);
expect(cloned).toEqual(input);
});
test("structuredClone should work with array of special numbers", () => {
const cloned = structuredClone([-0, NaN, Infinity, -Infinity]);
expect(Object.is(cloned[0], -0)).toBe(true);
expect(cloned[1]).toBeNaN();
expect(cloned[2]).toBe(Infinity);
expect(cloned[3]).toBe(-Infinity);
});
test("structuredClone should work with large array of numbers", () => {
const input = Array.from({ length: 10000 }, (_, i) => i);
expect(structuredClone(input)).toEqual(input);
});
test("structuredClone should fallback for arrays with nested objects", () => {
const input = [{ a: 1 }, { b: 2 }];
expect(structuredClone(input)).toEqual(input);
});
test("structuredClone should fallback for arrays with holes", () => {
const input = [1, , 3]; // sparse
const cloned = structuredClone(input);
// structured clone algorithm: the hole itself is not copied, but reading the index yields undefined
expect(cloned[0]).toBe(1);
expect(cloned[1]).toBe(undefined);
expect(cloned[2]).toBe(3);
});
test("structuredClone should work with array of doubles", () => {
const input = [1.5, 2.7, 3.14, 0.1 + 0.2];
const cloned = structuredClone(input);
expect(cloned).toEqual(input);
});
test("structuredClone creates independent copy of array", () => {
const input = [1, 2, 3];
const cloned = structuredClone(input);
cloned[0] = 999;
expect(input[0]).toBe(1);
});
test("structuredClone should preserve named properties on arrays", () => {
const input: any = [1, 2, 3];
input.foo = "bar";
const cloned = structuredClone(input);
expect(cloned.foo).toBe("bar");
expect(Array.from(cloned)).toEqual([1, 2, 3]);
});
test("postMessage should work with array fast path", async () => {
const { port1, port2 } = new MessageChannel();
const input = [1, 2, 3, "hello", true];
const { promise, resolve } = Promise.withResolvers();
port2.onmessage = (e: MessageEvent) => resolve(e.data);
port1.postMessage(input);
const result = await promise;
expect(result).toEqual(input);
port1.close();
port2.close();
});
// === Edge case tests ===
test("structuredClone of frozen array should produce a non-frozen clone", () => {
const input = Object.freeze([1, 2, 3]);
const cloned = structuredClone(input);
expect(cloned).toEqual([1, 2, 3]);
expect(Object.isFrozen(cloned)).toBe(false);
cloned[0] = 999;
expect(cloned[0]).toBe(999);
});
test("structuredClone of sealed array should produce a non-sealed clone", () => {
const input = Object.seal([1, 2, 3]);
const cloned = structuredClone(input);
expect(cloned).toEqual([1, 2, 3]);
expect(Object.isSealed(cloned)).toBe(false);
cloned.push(4);
expect(cloned).toEqual([1, 2, 3, 4]);
});
test("structuredClone of array with deleted element (hole via delete)", () => {
const input = [1, 2, 3];
delete (input as any)[1];
const cloned = structuredClone(input);
expect(cloned[0]).toBe(1);
expect(cloned[1]).toBe(undefined);
expect(cloned[2]).toBe(3);
expect(1 in cloned).toBe(false); // holes remain holes after structuredClone
});
test("structuredClone of array with length > actual elements", () => {
const input = [1, 2, 3];
input.length = 6;
const cloned = structuredClone(input);
expect(cloned.length).toBe(6);
expect(cloned[0]).toBe(1);
expect(cloned[1]).toBe(2);
expect(cloned[2]).toBe(3);
expect(cloned[3]).toBe(undefined);
});
test("structuredClone of single element arrays", () => {
expect(structuredClone([42])).toEqual([42]);
expect(structuredClone([3.14])).toEqual([3.14]);
expect(structuredClone(["hello"])).toEqual(["hello"]);
expect(structuredClone([true])).toEqual([true]);
expect(structuredClone([null])).toEqual([null]);
});
test("structuredClone of array with named properties on Int32 array", () => {
const input: any = [1, 2, 3]; // Int32 indexing
input.name = "test";
input.count = 42;
const cloned = structuredClone(input);
expect(cloned.name).toBe("test");
expect(cloned.count).toBe(42);
expect(Array.from(cloned)).toEqual([1, 2, 3]);
});
test("structuredClone of array with named properties on Double array", () => {
const input: any = [1.1, 2.2, 3.3]; // Double indexing
input.label = "doubles";
const cloned = structuredClone(input);
expect(cloned.label).toBe("doubles");
expect(Array.from(cloned)).toEqual([1.1, 2.2, 3.3]);
});
test("structuredClone of array that transitions Int32 to Double", () => {
const input = [1, 2, 3]; // starts as Int32
input.push(4.5); // transitions to Double
const cloned = structuredClone(input);
expect(cloned).toEqual([1, 2, 3, 4.5]);
});
test("structuredClone of array with modified prototype", () => {
const input = [1, 2, 3];
Object.setPrototypeOf(input, {
customMethod() {
return 42;
},
});
const cloned = structuredClone(input);
// Clone should have standard Array prototype, not the custom one
expect(Array.from(cloned)).toEqual([1, 2, 3]);
expect(cloned).toBeInstanceOf(Array);
expect((cloned as any).customMethod).toBeUndefined();
});
test("structuredClone of array with prototype indexed properties and holes", () => {
const proto = Object.create(Array.prototype);
proto[1] = "from proto";
const input = new Array(3);
Object.setPrototypeOf(input, proto);
input[0] = "a";
input[2] = "c";
// structuredClone only copies own properties; prototype values are not included
const cloned = structuredClone(input);
expect(cloned[0]).toBe("a");
expect(1 in cloned).toBe(false); // hole, not "from proto"
expect(cloned[2]).toBe("c");
expect(cloned).toBeInstanceOf(Array);
});
test("postMessage with Int32 array via MessageChannel", async () => {
const { port1, port2 } = new MessageChannel();
const input = [10, 20, 30, 40, 50];
const { promise, resolve } = Promise.withResolvers();
port2.onmessage = (e: MessageEvent) => resolve(e.data);
port1.postMessage(input);
const result = await promise;
expect(result).toEqual(input);
port1.close();
port2.close();
});
test("postMessage with Double array via MessageChannel", async () => {
const { port1, port2 } = new MessageChannel();
const input = [1.1, 2.2, 3.3];
const { promise, resolve } = Promise.withResolvers();
port2.onmessage = (e: MessageEvent) => resolve(e.data);
port1.postMessage(input);
const result = await promise;
expect(result).toEqual(input);
port1.close();
port2.close();
});
test("structuredClone of array multiple times produces independent copies", () => {
const input = [1, 2, 3];
const clones = Array.from({ length: 10 }, () => structuredClone(input));
clones[0][0] = 999;
clones[5][1] = 888;
// All other clones and the original should be unaffected
expect(input).toEqual([1, 2, 3]);
for (let i = 1; i < 10; i++) {
if (i === 5) {
expect(clones[i]).toEqual([1, 888, 3]);
} else {
expect(clones[i]).toEqual([1, 2, 3]);
}
}
});
test("structuredClone of Array subclass loses subclass identity", () => {
class MyArray extends Array {
customProp = "hello";
sum() {
return this.reduce((a: number, b: number) => a + b, 0);
}
}
const input = new MyArray(1, 2, 3);
input.customProp = "world";
const cloned = structuredClone(input);
// structuredClone spec: result is a plain Array, not a subclass
expect(Array.from(cloned)).toEqual([1, 2, 3]);
expect(cloned).toBeInstanceOf(Array);
expect((cloned as any).sum).toBeUndefined();
});
test("structuredClone of array with only undefined values", () => {
const input = [undefined, undefined, undefined];
const cloned = structuredClone(input);
expect(cloned).toEqual([undefined, undefined, undefined]);
expect(cloned.length).toBe(3);
// Ensure they are actual values, not holes
expect(0 in cloned).toBe(true);
expect(1 in cloned).toBe(true);
expect(2 in cloned).toBe(true);
});
test("structuredClone of array with only null values", () => {
const input = [null, null, null];
const cloned = structuredClone(input);
expect(cloned).toEqual([null, null, null]);
});
test("structuredClone of dense double array preserves -0 and NaN", () => {
const input = [-0, NaN, -0, NaN];
const cloned = structuredClone(input);
expect(Object.is(cloned[0], -0)).toBe(true);
expect(cloned[1]).toBeNaN();
expect(Object.is(cloned[2], -0)).toBe(true);
expect(cloned[3]).toBeNaN();
});
test("structuredClone on object with simple properties can exceed JSFinalObject::maxInlineCapacity", () => {
let largeValue = {};
for (let i = 0; i < 100; i++) {