Compare commits


27 Commits

Author SHA1 Message Date
Dylan Conway
e0b8d46e65 test bytecode fix 2026-02-09 14:07:58 -08:00
SUZUKI Sosuke
b7475d8768 fix(buffer): return fixed-length view from slice on resizable ArrayBuffer (#26822)
## Summary

Follow-up to #26819 ([review
comment](https://github.com/oven-sh/bun/pull/26819#discussion_r2781484939)).
Fixes `Buffer.slice()` / `Buffer.subarray()` on resizable `ArrayBuffer`
/ growable `SharedArrayBuffer` to return a **fixed-length view** instead
of a length-tracking view.

## Problem

The resizable/growable branch was passing `std::nullopt` to
`JSUint8Array::create()`, which creates a length-tracking view. When the
underlying buffer grows, the sliced view's length would incorrectly
expand:

```js
const rab = new ArrayBuffer(10, { maxByteLength: 20 });
const buf = Buffer.from(rab);
const sliced = buf.slice(0, 5);
sliced.length; // 5

rab.resize(20);
sliced.length; // was 10 (wrong), now 5 (correct)
```

Node.js specifies that `Buffer.slice()` always returns a fixed-length
view (verified on Node.js v22).

## Fix

Replace `std::nullopt` with `newLength` in the
`isResizableOrGrowableShared()` branch of
`jsBufferPrototypeFunction_sliceBody`.

## Test

Added a regression test that creates a `Buffer` from a resizable
`ArrayBuffer`, slices it, resizes the buffer, and verifies the slice
length doesn't change.
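
For illustration, a minimal sketch of what that regression test can look like (assuming `bun:test`; the test name is illustrative):

```ts
import { expect, test } from "bun:test";
import { Buffer } from "node:buffer";

test("Buffer.slice on a resizable ArrayBuffer returns a fixed-length view", () => {
  const rab = new ArrayBuffer(10, { maxByteLength: 20 });
  const buf = Buffer.from(rab);
  const sliced = buf.slice(0, 5);
  expect(sliced.length).toBe(5);

  rab.resize(20);
  expect(sliced.length).toBe(5); // must not track the resized buffer
});
```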

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 04:48:20 -08:00
Jarred Sumner
4494170f74 perf(event_loop): avoid eventfd wakeup for setImmediate on POSIX (#26821)
### What does this PR do?

Instead of calling event_loop.wakeup() (which writes to the eventfd)
when there are pending immediate tasks, use a zero timeout in
getTimeout() so epoll/kqueue returns immediately. This avoids the
overhead of the eventfd write/read cycle on each setImmediate iteration.

On Windows, continue to call .wakeup() since that's cheap for libuv.

Verified with strace: system bun makes ~44k eventfd writes for a 5s
setImmediate loop, while this change makes 0.
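
For reference, a sketch of the kind of workload behind those numbers: a setImmediate loop that runs for roughly 5 seconds (duration and structure are illustrative):

```ts
const start = Date.now();

function tick() {
  // Each iteration schedules the next immediate; with this change the POSIX
  // event loop uses a zero poll timeout instead of an eventfd write per iteration.
  if (Date.now() - start < 5_000) setImmediate(tick);
}

tick();
```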


### How did you verify your code works?

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-09 04:47:52 -08:00
SUZUKI Sosuke
9484218ba4 perf(buffer): move Buffer.slice/subarray to native C++ with int32 fast path (#26819)
## Summary

Move `Buffer.slice()` / `Buffer.subarray()` from a JS builtin to a
native C++ implementation, eliminating the `adjustOffset` closure
allocation and JS→C++ constructor overhead on every call. Additionally,
add an int32 fast path that skips `toNumber()` (which can invoke
`valueOf`/`Symbol.toPrimitive`) when arguments are already int32—the
common case for calls like `buf.slice(0, 10)`.
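
A small illustration of the two paths, based on the description above (the fallback behavior follows the existing coercion semantics):

```ts
import { Buffer } from "node:buffer";

const buf = Buffer.alloc(1024);

// int32 arguments: hit the fast path, no toNumber() / valueOf lookup.
buf.slice(0, 10);
buf.subarray(10, 100);

// Non-int32 arguments: still correct, but fall back to the slower coercion
// path, which may invoke valueOf / Symbol.toPrimitive.
buf.slice({ valueOf: () => 0 } as unknown as number, 10.5);
```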

## Changes

- **`src/bun.js/bindings/JSBuffer.cpp`**: Add
`jsBufferPrototypeFunction_sliceBody` with `adjustSliceOffsetInt32` /
`adjustSliceOffsetDouble` helpers. Update prototype hash table entries
from `BuiltinGeneratorType` to `NativeFunctionType` for both `slice` and
`subarray`.
- **`src/js/builtins/JSBufferPrototype.ts`**: Remove the JS `slice`
function (was lines 667–687).
- **`bench/snippets/buffer-slice.mjs`**: Add mitata benchmark.

## Benchmark (Apple M4 Max)

| Benchmark | Before (v1.3.8) | After | Speedup |
|---|---|---|---|
| `Buffer(64).slice()` | 27.19 ns | **14.56 ns** | **1.87x** |
| `Buffer(1024).slice()` | 27.84 ns | **14.62 ns** | **1.90x** |
| `Buffer(1M).slice()` | 29.20 ns | **14.89 ns** | **1.96x** |
| `Buffer(64).slice(10)` | 30.26 ns | **16.01 ns** | **1.89x** |
| `Buffer(1024).slice(10, 100)` | 30.92 ns | **18.32 ns** | **1.69x** |
| `Buffer(1024).slice(-100, -10)` | 28.82 ns | **17.37 ns** | **1.66x** |
| `Buffer(1024).subarray(10, 100)` | 28.67 ns | **16.32 ns** | **1.76x** |

**~1.7–1.9x faster** across all cases. All 449 buffer tests pass.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 01:46:33 -08:00
robobun
2a5e8ef38c fix(kqueue): fix incorrect filter comparison causing excessive CPU on macOS (#26812)
## Summary

Fixes the remaining kqueue filter comparison bug in
`packages/bun-usockets/src/eventing/epoll_kqueue.c` that caused
excessive CPU usage with network requests on macOS:

- **`us_loop_run_bun_tick` filter comparison (line 302-303):** kqueue
filter values (`EVFILT_READ=-1`, `EVFILT_WRITE=-2`) were compared using
bitwise AND (`&`) instead of equality (`==`). Since these are signed
negative integers (not bitmasks), `(-2) & (-1)` = `-2` (truthy), meaning
every `EVFILT_WRITE` event was also misidentified as `EVFILT_READ`. This
was already fixed in `us_loop_run` (by PR #25475) but the same bug
remained in `us_loop_run_bun_tick`, which is the primary event loop
function used by Bun.

This is a macOS-only issue (Linux uses epoll, which is unaffected).
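
For reference, the difference between the two comparisons in plain numbers:

```ts
const EVFILT_READ = -1;
const EVFILT_WRITE = -2;

// Bitwise AND of two negative filter values is still negative, hence truthy,
// so an EVFILT_WRITE event also matched the EVFILT_READ branch.
EVFILT_WRITE & EVFILT_READ; // -2 (truthy)

// Equality, as used by the fix, distinguishes them correctly.
EVFILT_WRITE === EVFILT_READ; // false
```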

Closes #26811

## Test plan

- [x] Added regression test at `test/regression/issue/26811.test.ts`
that makes concurrent HTTPS POST requests
- [x] Test passes with `bun bd test test/regression/issue/26811.test.ts`
- [ ] Manual verification on macOS: run the reporter's [repro
script](https://gist.github.com/jkoppel/d26732574dfcdcc6bfc4958596054d2e)
and confirm CPU usage stays low

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-02-09 00:52:17 -08:00
robobun
a84f12b816 Use edge-triggered epoll for eventfd wakeups (#26815)
## Summary

- Switch both eventfd wakeup sites (Zig IO watcher loop and usockets
async) to edge-triggered (`EPOLLET`) epoll mode, eliminating unnecessary
`read()` syscalls on every event loop wakeup
- Add `EAGAIN`/`EINTR` overflow handling in `us_internal_async_wakeup`,
matching libuv's approach ([commit
`e5cb1d3d`](https://github.com/libuv/libuv/commit/e5cb1d3d))

With edge-triggered mode, each `write()` to the eventfd produces a new
edge event regardless of the current counter value, so draining the
counter via `read()` is unnecessary. The counter will never overflow in
practice (~18 quintillion wakeups), but overflow handling is included
defensively.

### Files changed

- **`src/io/io.zig`** — Add `EPOLL.ET` to eventfd registration, replace
drain `read()` with `continue`
- **`packages/bun-usockets/src/eventing/epoll_kqueue.c`** — Set
`leave_poll_ready = 1` for async callbacks, upgrade to `EPOLLET` via
`EPOLL_CTL_MOD`, add `EAGAIN`/`EINTR` handling in wakeup write

## Test plan

- [x] Verified with `strace -f -e trace=read,eventfd2` that eventfd
reads are fully eliminated after the change (0 reads on the eventfd fd)
- [x] Confirmed remaining 8-byte reads in traces are timerfd reads
(legitimate, required)
- [x] Stress tested with 50 concurrent async tasks (1000 total
`Bun.sleep(1)` iterations) — all completed correctly
- [x] `LinuxWaker.wait()` (used by `BundleThread` as a blocking sleep)
is intentionally unchanged
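
A sketch of the stress test described above, assuming an even split of the 1000 `Bun.sleep(1)` iterations across the 50 tasks:

```ts
const TASKS = 50;
const ITERATIONS_PER_TASK = 20; // 50 * 20 = 1000 total sleeps

await Promise.all(
  Array.from({ length: TASKS }, async () => {
    for (let i = 0; i < ITERATIONS_PER_TASK; i++) {
      await Bun.sleep(1);
    }
  }),
);

console.log("all tasks completed");
```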

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-02-09 00:36:30 -08:00
SUZUKI Sosuke
0f43ea9bec perf(structuredClone): add fast path for root-level dense arrays (#26814)
## Summary

Add a fast path for `structuredClone` and `postMessage` when the root
value is a dense array of primitives or strings. This bypasses the full
`CloneSerializer`/`CloneDeserializer` machinery by keeping data in
native C++ structures instead of serializing to a byte stream.

**Important:** This optimization only applies when the root value passed
to `structuredClone()` / `postMessage()` is an array. Nested arrays
within objects still go through the normal serialization path.
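
A minimal illustration of that eligibility rule:

```ts
// Root value is a dense array of primitives/strings: eligible for the fast path.
structuredClone([1, 2, 3, "four"]);

// Root value is an object; the nested array still goes through the normal
// serialization path.
structuredClone({ items: [1, 2, 3, "four"] });
```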

## Implementation

Three tiers of array fast paths, checked in order:

| Tier | Indexing Type | Strategy | Applies When |
|------|--------------|----------|--------------|
| **Tier 1** | `ArrayWithInt32` | `memcpy` butterfly data | Dense int32 array, no holes, no named properties |
| **Tier 2** | `ArrayWithDouble` | `memcpy` butterfly data | Dense double array, no holes, no named properties |
| **Tier 3** | `ArrayWithContiguous` | Copy elements into `FixedVector<variant<JSValue, String>>` | Dense array of primitives/strings, no holes, no named properties |

All tiers fall through to the normal serialization path when:
- The array has holes that must forward to the prototype
- The array has named properties (e.g., `arr.foo = "bar"`) — checked via
`structure->maxOffset() != invalidOffset`
- Elements contain non-primitive, non-string values (objects, arrays,
etc.)
- The context requires wire-format serialization (storage, cross-process
transfer)

### Deserialization

- **Tier 1/2:** Allocate a new `Butterfly` via `vm.auxiliarySpace()`,
`memcpy` data back, create array with `JSArray::createWithButterfly()`.
Falls back to normal deserialization if `isHavingABadTime` (forced
ArrayStorage mode).
- **Tier 3:** Pre-convert elements to `JSValue` (including `jsString()`
allocation), then use `JSArray::tryCreateUninitializedRestricted()` +
`initializeIndex()`.

## Benchmarks

Apple M4 Max, comparing system Bun 1.3.8 vs this branch (release build):

| Benchmark | Before | After | Speedup |
|-----------|--------|-------|---------|
| `structuredClone([10 numbers])` | 308.71 ns | 40.38 ns | **7.6x** |
| `structuredClone([100 numbers])` | 1.62 µs | 86.87 ns | **18.7x** |
| `structuredClone([1000 numbers])` | 13.79 µs | 544.56 ns | **25.3x** |
| `structuredClone([10 strings])` | 642.38 ns | 307.38 ns | **2.1x** |
| `structuredClone([100 strings])` | 5.67 µs | 2.57 µs | **2.2x** |
| `structuredClone([10 mixed])` | 446.32 ns | 198.35 ns | **2.3x** |
| `structuredClone(nested array)` | 1.84 µs | 1.79 µs | 1.0x (not eligible) |
| `structuredClone({a: 123})` | 95.98 ns | 100.07 ns | 1.0x (no regression) |

Int32 arrays see the largest gains (up to 25x) since they use a direct
`memcpy` of butterfly memory. String/mixed arrays see ~2x improvement.
No performance regression on non-eligible inputs.

## Bug Fix

Also fixes a correctness bug where arrays with named properties (e.g.,
`arr.foo = "bar"`) would lose those properties when going through the
array fast path. Added a `structure->maxOffset() != invalidOffset` guard
to fall back to normal serialization for such arrays.
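
A quick example of the behavior the guard preserves:

```ts
const arr: number[] & { foo?: string } = [1, 2, 3];
arr.foo = "bar";

const clone = structuredClone(arr);
clone.foo; // "bar" (the named property forces the normal serialization path)
```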

Fixed a minor double-counting issue in `computeMemoryCost` where
`JSValue` elements in `SimpleArray` were counted both by `byteSize()`
and individually.

## Test Plan

38 tests in `test/js/web/structured-clone-fastpath.test.ts` covering:

- Basic array types: empty, numbers, strings, mixed primitives, special
numbers (`-0`, `NaN`, `Infinity`)
- Large arrays (10,000 elements)
- Tier 2: double arrays, Int32→Double transition
- Deep clone independence verification
- Named properties on Int32, Double, and Contiguous arrays
- `postMessage` via `MessageChannel` for Int32, Double, and mixed arrays
- Edge cases: frozen/sealed arrays, deleted elements (holes), `length`
extension, single-element arrays
- Prototype modification (custom prototype, indexed prototype properties
with holes)
- `Array` subclass identity loss (per spec)
- `undefined`-only and `null`-only arrays
- Multiple independent clones from the same source

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-08 21:36:59 -08:00
Jarred Sumner
0889897a1c Revert "feat(bundler): add configurable CJS→ESM unwrapping via unwrapCJSToESM"
This reverts commit e3c25260ed.
2026-02-08 19:49:26 -08:00
Jarred Sumner
68f2ea4b95 Fix release script 2026-02-08 01:39:10 -08:00
Jarred Sumner
d4ebfd9771 Bump 2026-02-08 01:32:25 -08:00
Jarred Sumner
e3c25260ed feat(bundler): add configurable CJS→ESM unwrapping via unwrapCJSToESM
Add `minify.unwrapCJSToESM` JS API option and `--unwrap-cjs-to-esm` CLI
flag to force CJS-to-ESM conversion for specific packages, eliminating
the `__commonJS` wrapper. Supports wildcard patterns (e.g. `"@scope/*"`).
User entries extend the default React family list.

Also removes the react/react-dom version check that gated conversion,
and fixes `packageName()` to handle scoped packages (`@scope/pkg`).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-08 01:32:10 -08:00
Alistair Smith
1bded85718 types: Enable --splitting with compile (#26796)
### What does this PR do?

Enables --splitting with compile
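
A sketch of a config the updated types now accept (entrypoint and output names are illustrative):

```ts
// `splitting` was previously typed as `never` whenever `compile` was set.
await Bun.build({
  entrypoints: ["./app.ts"],
  compile: true,
  splitting: true,
  outfile: "./my-app",
});
```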

### How did you verify your code works?

Bun types integration test fixture updates
2026-02-07 13:39:18 -08:00
Dylan Conway
cf6cdbbbad Revert "Mimalloc v3 update (#26379)" (#26783)
This reverts commit c63415c9c9.

### What does this PR do?

### How did you verify your code works?
2026-02-06 18:05:17 -08:00
robobun
89d2b1cd0b fix(websocket): add missing incPendingActivityCount() in blob binaryType case (#26670)
## Summary

- Fix crash ("Pure virtual function called!") when WebSocket client
receives binary data with `binaryType = "blob"` and no event listener
attached
- Add missing `incPendingActivityCount()` call before `postTask` in the
Blob case of `didReceiveBinaryData`
- Add regression test for issue #26669

## Root Cause

The Blob case in `didReceiveBinaryData` (WebSocket.cpp:1324-1331) was
calling `decPendingActivityCount()` inside the `postTask` callback
without a matching `incPendingActivityCount()` beforehand. This bug was
introduced in #21471 when Blob support was added.

The ArrayBuffer and NodeBuffer cases correctly call
`incPendingActivityCount()` before `postTask`, but the Blob case was
missing this call.
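
A sketch of the crashing scenario, using `Bun.serve` to produce binary frames (port and payload are illustrative):

```ts
const server = Bun.serve({
  port: 0,
  fetch(req, srv) {
    if (srv.upgrade(req)) return;
    return new Response("expected a WebSocket upgrade", { status: 400 });
  },
  websocket: {
    open(ws) {
      ws.send(new Uint8Array([1, 2, 3])); // binary data from the server
    },
    message() {},
  },
});

const ws = new WebSocket(`ws://localhost:${server.port}`);
ws.binaryType = "blob";
// No "message" listener attached: this combination previously crashed with
// "Pure virtual function called!".
```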

## Test plan

- [x] New regression test verifies WebSocket with `binaryType = "blob"`
doesn't crash on ping frames
- [x] `bun bd test test/regression/issue/26669.test.ts` passes

Fixes #26669

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Ciro Spaciari MacBook <ciro@anthropic.com>
2026-02-05 20:39:19 -08:00
Jarred Sumner
2019a1b11d Bump WebKit 2026-02-05 20:09:39 -08:00
SUZUKI Sosuke
6c70ce2485 Update WebKit to 7bc2f97e28353062bb54776ce01e4c2ff24c35cc (#26769)
### What does this PR do?

### How did you verify your code works?
2026-02-05 17:58:30 -08:00
SUZUKI Sosuke
0e386c4168 fix(stringWidth): correct width for Thai/Lao spacing vowels (#26728)
## Summary

`Bun.stringWidth` was incorrectly treating Thai SARA AA (U+0E32), SARA
AM (U+0E33), and their Lao equivalents (U+0EB2, U+0EB3) as zero-width
characters.

## Root Cause

In `src/string/immutable/visible.zig`, the range check for Thai/Lao
combining marks was too broad:
- Thai: `0xe31 <= cp <= 0xe3a` included U+0E32 and U+0E33
- Lao: `0xeb1 <= cp <= 0xebc` included U+0EB2 and U+0EB3

According to Unicode (UCD Grapheme_Break property), these are **spacing
vowels** (Grapheme_Base), not combining marks.

## Changes

- **`src/string/immutable/visible.zig`**: Exclude U+0E32, U+0E33,
U+0EB2, U+0EB3 from zero-width ranges
- **`test/js/bun/util/stringWidth.test.ts`**: Add tests for Thai and Lao
spacing vowels

## Before/After

| Character | Before | After |
|-----------|--------|-------|
| `\u0E32` (SARA AA) | 0 | 1 |
| `\u0E33` (SARA AM) | 0 | 1 |
| `คำ` (common Thai word) | 1 | 2 |
| `\u0EB2` (Lao AA) | 0 | 1 |
| `\u0EB3` (Lao AM) | 0 | 1 |
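
For reference, the expected `Bun.stringWidth` results after this change (values from the table above):

```ts
Bun.stringWidth("\u0E32"); // 1 (Thai SARA AA; was 0)
Bun.stringWidth("\u0E33"); // 1 (Thai SARA AM; was 0)
Bun.stringWidth("คำ");     // 2 (common Thai word; was 1)
Bun.stringWidth("\u0EB2"); // 1 (Lao AA; was 0)
Bun.stringWidth("\u0EB3"); // 1 (Lao AM; was 0)
```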

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-05 17:31:15 -08:00
Alistair Smith
e5cd034e9a Define seed in crc32 types (#26754)
### What does this PR do?

Fixes #26711 

### How did you verify your code works?

bun-types.test.ts integration test
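
A sketch of the call shape the updated declaration permits (assuming the runtime already accepts the seed argument, which this change surfaces in the types):

```ts
const unseeded = Bun.hash.crc32("hello world");
const seeded = Bun.hash.crc32("hello world", 1234); // previously a type error
```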
2026-02-05 06:41:25 -08:00
Dylan Conway
45b9d1baba Revert "fix(bindgen): prevent use-after-free for optional string argu… (#26742)
…ments (#26717)"

This reverts commit 315e822866.

### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-04 19:38:12 -08:00
Ciro Spaciari
0ad562d3bd fix(http2) Fix SSLWrapper and allow injecting connections in Http2SecureServer (#26539)
### What does this PR do?

Enables the `net.Server → Http2SecureServer` connection upgrade pattern
used by libraries like
[http2-wrapper](https://github.com/szmarczak/http2-wrapper),
[crawlee](https://github.com/apify/crawlee), and custom HTTP/2 proxy
servers. This pattern works by accepting raw TCP connections on a
`net.Server` and forwarding them to an `Http2SecureServer` via
`h2Server.emit('connection', rawSocket)`.
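
A minimal sketch of that upgrade pattern (TLS key/cert paths and the port are placeholders):

```ts
import fs from "node:fs";
import net from "node:net";
import http2 from "node:http2";

const h2Server = http2.createSecureServer({
  key: fs.readFileSync("./key.pem"),
  cert: fs.readFileSync("./cert.pem"),
});

h2Server.on("stream", stream => {
  stream.respond({ ":status": 200 });
  stream.end("ok");
});

// Accept raw TCP connections on a net.Server and forward them to the
// Http2SecureServer, as http2-wrapper and similar libraries do.
const tcpServer = net.createServer(rawSocket => {
  h2Server.emit("connection", rawSocket);
});

tcpServer.listen(8443);
```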

#### Bug fixes

**SSLWrapper use-after-free (Zig)**

Two use-after-free bugs in `ssl_wrapper.zig` are fixed:

1. **`flush()` stale pointer** — `flush()` captured the `ssl` pointer
*before* calling `handleTraffic()`, which can trigger a close callback
that frees the SSL object via `deinit`. The pointer was then used after
being freed. Fix: read `this.ssl` *after* `handleTraffic()` returns.

2. **`handleReading()` null dereference** — `handleReading()` called
`triggerCloseCallback()` after `triggerDataCallback()` without checking
whether the data callback had already closed the connection. This led to
a null function pointer dereference. Fix: check `this.ssl == null ||
this.flags.closed_notified` before calling the close callback.

### How did you verify your code works?

- Added **13 in-process tests** (`node-http2-upgrade.test.mts`) covering
the `net.Server → Http2SecureServer` upgrade path:
  - GET/POST requests through upgraded connections
  - Sequential requests sharing a single H2 session
  - `session` event emission
  - Concurrent clients with independent sessions
  - Socket close ordering (rawSocket first vs session first) — no crash
  - ALPN protocol negotiation (`h2`)
  - Varied status codes (200, 302, 404)
  - Client disconnect mid-response (stream destroyed early)
  - Three independent clients producing three distinct sessions
- Tests use `node:test` + `node:assert` and **pass in both Bun and
Node.js**
- Ported `test-http2-socket-close.js` from the Node.js test suite,
verifying no segfault when the raw socket is destroyed before the H2
session is closed

---------

Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-04 19:23:29 -08:00
Ciro Spaciari
63a323a511 fix(http): don't enter tunnel mode for proxy-style absolute URLs in request line (#26737)
## Summary

Fixes a bug where sequential HTTP requests with proxy-style absolute
URLs (e.g. `GET http://example.com/path HTTP/1.1`) hang on the 2nd+
request when using keep-alive connections.
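
A sketch of the failing scenario: two proxy-style requests sent over one keep-alive connection to a Bun server (host and port are illustrative):

```ts
import net from "node:net";

const server = Bun.serve({
  port: 0,
  fetch: () => new Response("ok"),
});

const socket = net.connect(server.port, "127.0.0.1", () => {
  const request =
    "GET http://example.com/path HTTP/1.1\r\n" +
    "Host: example.com\r\n" +
    "Connection: keep-alive\r\n\r\n";
  socket.write(request);
  socket.write(request); // before the fix, this request never received a response
});

socket.on("data", chunk => process.stdout.write(chunk));
```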

## Root Cause

In `packages/bun-uws/src/HttpParser.h`, the parser was treating
proxy-style absolute URLs identically to `CONNECT` method requests —
setting `isConnectRequest = true` and entering tunnel mode. This flag
was never reset between requests on the same keep-alive connection, so
the 2nd+ request was swallowed as raw tunnel data instead of being
parsed as HTTP.

## Fix

3-line change in `HttpParser.h:569`:
- **`isConnect`**: Now only matches actual `CONNECT` method requests
(removed `isHTTPorHTTPSPrefixForProxies` from the condition)
- **`isProxyStyleURL`**: New variable that detects `http://`/`https://`
prefixes and accepts them as valid request targets — without triggering
tunnel mode

## Who was affected

- Any Bun HTTP server (`Bun.serve()` or `node:http createServer`)
receiving proxy-style requests on keep-alive connections
- HTTP proxy servers built with Bun could only handle one request per
connection
- Bun's own HTTP client making sequential requests through an HTTP proxy
backed by a Bun server

## Test

Added `test/js/node/http/node-http-proxy-url.test.ts` with 3 test cases:
1. Sequential GET requests with absolute URL paths
2. Sequential POST requests with absolute URL paths
3. Mixed normal and proxy-style URLs

Tests run under both Node.js and Bun for compatibility verification.

- Fails with system bun (2/3 tests timeout on 2nd request)
- Passes with debug build (3/3 tests pass)

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-04 19:23:18 -08:00
星星魔法术
af76296637 fix(docs): update runtime/markdown page Callout component (#26729)
### What does this PR do?
Fixes #26727: the Page Not Found bug on the runtime/markdown docs page.
### How did you verify your code works?
I ran the development server:
```bash
mint dev
```

<img width="1287" height="823" alt="Markdown"
src="https://github.com/user-attachments/assets/555716b4-1aee-46bd-b066-1e00986b3923"
/>
2026-02-03 22:59:12 -08:00
Dylan Conway
d1047c2cf1 fix ci (#26703)
### What does this PR do?

### How did you verify your code works?
2026-02-03 22:18:40 -08:00
robobun
315e822866 fix(bindgen): prevent use-after-free for optional string arguments (#26717)
## Summary
- Fix a use-after-free bug in the bindgen code generator where string
arguments with default values would have their underlying WTF::String
destroyed before the BunString was used
- The issue occurred because for optional string parameters with
defaults, a WTF::String was created inside an `if` block, converted to
BunString, then the if block closed and destroyed the WTF::String while
the BunString was still in use
- This manifested as a segfault in `Bun.stringWidth()` and potentially
other functions using optional string arguments

## Details

The crash stack trace showed:
```
Segmentation fault at address 0x31244B0F0
visible.zig:888: string.immutable.visible.visible.visibleUTF16WidthFn
BunObject.zig:1371: bindgen_BunObject_dispatchStringWidth1
GeneratedBindings.cpp:242: bindgen_BunObject_jsStringWidth
```

The generated code before this fix looked like:
```cpp
BunString argStr;
if (!arg0.value().isUndefinedOrNull()) {
    WTF::String wtfString_0 = WebCore::convert<...>(...);
    argStr = Bun::toString(wtfString_0);
}  // <-- wtfString_0 destroyed here!
// ... argStr used later, pointing to freed memory
```

The fix declares the WTF::String holder outside the if block:
```cpp
BunString argStr;
WTF::String wtfStringHolder_0;  // Lives until function returns
if (!arg0.value().isUndefinedOrNull()) {
    wtfStringHolder_0 = WebCore::convert<...>(...);
}
if (!wtfStringHolder_0.isEmpty()) argStr = Bun::toString(wtfStringHolder_0);
// argStr now points to valid memory
```

This fix applies to both:
- Direct string function arguments with defaults (e.g.,
`t.DOMString.default("")`)
- Dictionary fields with string defaults

## Test plan
- [x] Existing `stringWidth.test.ts` tests pass (105 tests)
- [x] Manual testing with GC stress shows no crashes
- [x] `os.userInfo()` with encoding option works correctly

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 17:44:13 -08:00
Andrew Johnston
7f498a2e07 chore: improve markdown to html perf (#26644)
### What does this PR do?

I was looking at the [recent
support](https://github.com/oven-sh/bun/pull/26440) for markdown and did
some benchmarking against
[bindings](https://github.com/just-js/lo/blob/main/lib/md4c/api.js) I
created for my `lo` runtime to `md4c`. In some cases, Bun is quite a bit
slower, so I did a bit of digging and came up with this change. It uses
`indexOfAny`, which should utilise `SIMD` where it's available to scan
ahead in the payload for characters that need escaping.
In
[benchmarks](https://gist.github.com/billywhizz/397f7929a8920c826c072139b695bb68#file-results-md)
I have done, this results in anywhere from `3%` to `~15%` improvement in
throughput. The bigger the payload and the more space between entities,
the bigger the gain as far as I can tell, which would make sense.

### How did you verify your code works?

It passes `test/js/bun/md/*.test.ts` running locally. Only tested on
macOS. Can test on Linux, but I assume that will happen in CI anyway?

## main


![bun-main](https://github.com/user-attachments/assets/8b173b34-1f20-4e52-bb67-bb8b7e5658f3)

## patched


![bun-patch](https://github.com/user-attachments/assets/26bb600c-234c-4903-8f70-32f167481156)
2026-02-03 00:51:25 -08:00
Dylan Conway
5d4b1821f3 [publish images] test windows aarch64 CI (#26701)
### What does this PR do?

### How did you verify your code works?
2026-02-02 23:25:13 -08:00
Dylan Conway
41de7a3bfb [publish images] Upgrade LLVM toolchain from 19.1.7 to 21.1.8 (#26667)
## Summary
- Update LLVM version references across build scripts, Dockerfiles, CI,
Nix configs, and documentation
- Fix LLVM 21 `-Wcharacter-conversion` errors in WebKit bindings:
- `EncodingTables.h`: pragma for intentional char32_t/char16_t
comparisons
- `TextCodecCJK.cpp`: widen `gb18030AsymmetricEncode` param to char32_t
- `URLPatternParser`: widen `isValidNameCodepoint` param to char32_t,
cast for `startsWith`
- Fix `__libcpp_verbose_abort` noexcept mismatch (LLVM 21 uses
`_NOEXCEPT`)
- Fix dangling pointer in `BunJSCModule.h` (`toCString` temporary
lifetime)
- Remove `useMathSumPreciseMethod` (removed upstream in JSC)

**Before merging:** Merge https://github.com/oven-sh/WebKit/pull/153
first, then update `WEBKIT_VERSION` in `cmake/tools/SetupWebKit.cmake`
to point to the merged commit.

## Test plan
- [ ] Build bun debug on macOS with LLVM 21
- [ ] Build bun on Linux (glibc)
- [ ] Build bun on Linux (musl)
- [ ] Build bun on Windows
- [ ] Run test suite

Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-02 23:12:21 -08:00
82 changed files with 2706 additions and 468 deletions

View File

@@ -1,5 +1,5 @@
ARG LLVM_VERSION="19"
ARG REPORTED_LLVM_VERSION="19.1.7"
ARG LLVM_VERSION="21"
ARG REPORTED_LLVM_VERSION="21.1.8"
ARG OLD_BUN_VERSION="1.1.38"
ARG BUILDKITE_AGENT_TAGS="queue=linux,os=linux,arch=${TARGETARCH}"

View File

@@ -109,12 +109,12 @@ const buildPlatforms = [
{ os: "linux", arch: "x64", distro: "amazonlinux", release: "2023", features: ["docker"] },
{ os: "linux", arch: "x64", baseline: true, distro: "amazonlinux", release: "2023", features: ["docker"] },
{ os: "linux", arch: "x64", profile: "asan", distro: "amazonlinux", release: "2023", features: ["docker"] },
{ os: "linux", arch: "aarch64", abi: "musl", distro: "alpine", release: "3.22" },
{ os: "linux", arch: "x64", abi: "musl", distro: "alpine", release: "3.22" },
{ os: "linux", arch: "x64", abi: "musl", baseline: true, distro: "alpine", release: "3.22" },
{ os: "linux", arch: "aarch64", abi: "musl", distro: "alpine", release: "3.23" },
{ os: "linux", arch: "x64", abi: "musl", distro: "alpine", release: "3.23" },
{ os: "linux", arch: "x64", abi: "musl", baseline: true, distro: "alpine", release: "3.23" },
{ os: "windows", arch: "x64", release: "2019" },
{ os: "windows", arch: "x64", baseline: true, release: "2019" },
// TODO: Enable when Windows ARM64 CI runners are ready
// TODO: Re-enable when Windows ARM64 VS component installation is resolved on Buildkite runners
// { os: "windows", arch: "aarch64", release: "2019" },
];
@@ -133,9 +133,9 @@ const testPlatforms = [
{ os: "linux", arch: "aarch64", distro: "ubuntu", release: "25.04", tier: "latest" },
{ os: "linux", arch: "x64", distro: "ubuntu", release: "25.04", tier: "latest" },
{ os: "linux", arch: "x64", baseline: true, distro: "ubuntu", release: "25.04", tier: "latest" },
{ os: "linux", arch: "aarch64", abi: "musl", distro: "alpine", release: "3.22", tier: "latest" },
{ os: "linux", arch: "x64", abi: "musl", distro: "alpine", release: "3.22", tier: "latest" },
{ os: "linux", arch: "x64", abi: "musl", baseline: true, distro: "alpine", release: "3.22", tier: "latest" },
{ os: "linux", arch: "aarch64", abi: "musl", distro: "alpine", release: "3.23", tier: "latest" },
{ os: "linux", arch: "x64", abi: "musl", distro: "alpine", release: "3.23", tier: "latest" },
{ os: "linux", arch: "x64", abi: "musl", baseline: true, distro: "alpine", release: "3.23", tier: "latest" },
{ os: "windows", arch: "x64", release: "2019", tier: "oldest" },
{ os: "windows", arch: "x64", release: "2019", baseline: true, tier: "oldest" },
// TODO: Enable when Windows ARM64 CI runners are ready
@@ -304,6 +304,13 @@ function getCppAgent(platform, options) {
};
}
// Cross-compile Windows ARM64 from x64 runners
if (os === "windows" && arch === "aarch64") {
return getEc2Agent({ ...platform, arch: "x64" }, options, {
instanceType: "c7i.4xlarge",
});
}
return getEc2Agent(platform, options, {
instanceType: arch === "aarch64" ? "c8g.4xlarge" : "c7i.4xlarge",
});
@@ -326,8 +333,10 @@ function getLinkBunAgent(platform, options) {
}
if (os === "windows") {
return getEc2Agent(platform, options, {
instanceType: arch === "aarch64" ? "r8g.large" : "r7i.large",
// Cross-compile Windows ARM64 from x64 runners
const agentPlatform = arch === "aarch64" ? { ...platform, arch: "x64" } : platform;
return getEc2Agent(agentPlatform, options, {
instanceType: "r7i.large",
});
}
@@ -345,7 +354,7 @@ function getZigPlatform() {
arch: "aarch64",
abi: "musl",
distro: "alpine",
release: "3.22",
release: "3.23",
};
}
@@ -456,6 +465,17 @@ function getBuildCommand(target, options, label) {
return `bun run build:${buildProfile}`;
}
/**
* Get extra flags needed when cross-compiling Windows ARM64 from x64.
* Applied to C++ and link steps (not Zig, which has its own toolchain handling).
*/
function getWindowsArm64CrossFlags(target) {
if (target.os === "windows" && target.arch === "aarch64") {
return " --toolchain windows-aarch64";
}
return "";
}
/**
* @param {Platform} platform
* @param {PipelineOptions} options
@@ -463,6 +483,8 @@ function getBuildCommand(target, options, label) {
*/
function getBuildCppStep(platform, options) {
const command = getBuildCommand(platform, options);
const crossFlags = getWindowsArm64CrossFlags(platform);
return {
key: `${getTargetKey(platform)}-build-cpp`,
label: `${getTargetLabel(platform)} - build-cpp`,
@@ -476,7 +498,7 @@ function getBuildCppStep(platform, options) {
// We used to build the C++ dependencies and bun in separate steps.
// However, as long as the zig build takes longer than both sequentially,
// it's cheaper to run them in the same step. Can be revisited in the future.
command: [`${command} --target bun`, `${command} --target dependencies`],
command: [`${command}${crossFlags} --target bun`, `${command}${crossFlags} --target dependencies`],
};
}
@@ -533,7 +555,7 @@ function getLinkBunStep(platform, options) {
ASAN_OPTIONS: "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=0",
...getBuildEnv(platform, options),
},
command: `${getBuildCommand(platform, options, "build-bun")} --target bun`,
command: `${getBuildCommand(platform, options, "build-bun")}${getWindowsArm64CrossFlags(platform)} --target bun`,
};
}
@@ -1179,6 +1201,8 @@ async function getPipeline(options = {}) {
buildImages || publishImages
? [...buildPlatforms, ...testPlatforms]
.filter(({ os }) => os !== "darwin")
// Windows ARM64 cross-compiles from x64 runners, no separate image needed
.filter(({ os, arch }) => !(os === "windows" && arch === "aarch64"))
.map(platform => [getImageKey(platform), platform])
: [],
);

View File

@@ -219,9 +219,8 @@ function create_release() {
bun-windows-x64-profile.zip
bun-windows-x64-baseline.zip
bun-windows-x64-baseline-profile.zip
# TODO: Enable when Windows ARM64 CI runners are ready
# bun-windows-aarch64.zip
# bun-windows-aarch64-profile.zip
bun-windows-aarch64.zip
bun-windows-aarch64-profile.zip
)
function upload_artifact() {

View File

@@ -33,8 +33,8 @@ The workflow runs all three formatters simultaneously:
#### 3. Tool Installation
##### Clang-format-19
- Installs ONLY `clang-format-19` package (not the entire LLVM toolchain)
##### Clang-format-21
- Installs ONLY `clang-format-21` package (not the entire LLVM toolchain)
- Uses `--no-install-recommends --no-install-suggests` to skip unnecessary packages
- Quiet installation with `-qq` and `-o=Dpkg::Use-Pty=0`

View File

@@ -10,8 +10,8 @@ on:
merge_group:
env:
BUN_VERSION: "1.3.2"
LLVM_VERSION: "19.1.7"
LLVM_VERSION_MAJOR: "19"
LLVM_VERSION: "21.1.8"
LLVM_VERSION_MAJOR: "21"
jobs:
autofix:

View File

@@ -35,7 +35,7 @@ $ sudo pacman -S base-devel cmake git go libiconv libtool make ninja pkg-config
```
```bash#Fedora
$ sudo dnf install cargo clang19 llvm19 lld19 cmake git golang libtool ninja-build pkg-config rustc ruby libatomic-static libstdc++-static sed unzip which libicu-devel 'perl(Math::BigInt)'
$ sudo dnf install cargo clang21 llvm21 lld21 cmake git golang libtool ninja-build pkg-config rustc ruby libatomic-static libstdc++-static sed unzip which libicu-devel 'perl(Math::BigInt)'
```
```bash#openSUSE Tumbleweed
@@ -90,17 +90,17 @@ Our build scripts will automatically detect and use `ccache` if available. You c
## Install LLVM
Bun requires LLVM 19 (`clang` is part of LLVM). This version requirement is to match WebKit (precompiled), as mismatching versions will cause memory allocation failures at runtime. In most cases, you can install LLVM through your system package manager:
Bun requires LLVM 21.1.8 (`clang` is part of LLVM). This version is enforced by the build system — mismatching versions will cause memory allocation failures at runtime. In most cases, you can install LLVM through your system package manager:
{% codetabs group="os" %}
```bash#macOS (Homebrew)
$ brew install llvm@19
$ brew install llvm@21
```
```bash#Ubuntu/Debian
$ # LLVM has an automatic installation script that is compatible with all versions of Ubuntu
$ wget https://apt.llvm.org/llvm.sh -O - | sudo bash -s -- 19 all
$ wget https://apt.llvm.org/llvm.sh -O - | sudo bash -s -- 21 all
```
```bash#Arch
@@ -112,17 +112,17 @@ $ sudo dnf install llvm clang lld-devel
```
```bash#openSUSE Tumbleweed
$ sudo zypper install clang19 lld19 llvm19
$ sudo zypper install clang21 lld21 llvm21
```
{% /codetabs %}
If none of the above solutions apply, you will have to install it [manually](https://github.com/llvm/llvm-project/releases/tag/llvmorg-19.1.7).
If none of the above solutions apply, you will have to install it [manually](https://github.com/llvm/llvm-project/releases/tag/llvmorg-21.1.8).
Make sure Clang/LLVM 19 is in your path:
Make sure Clang/LLVM 21 is in your path:
```bash
$ which clang-19
$ which clang-21
```
If not, run this to manually add it:
@@ -131,13 +131,13 @@ If not, run this to manually add it:
```bash#macOS (Homebrew)
# use fish_add_path if you're using fish
# use path+="$(brew --prefix llvm@19)/bin" if you are using zsh
$ export PATH="$(brew --prefix llvm@19)/bin:$PATH"
# use path+="$(brew --prefix llvm@21)/bin" if you are using zsh
$ export PATH="$(brew --prefix llvm@21)/bin:$PATH"
```
```bash#Arch
# use fish_add_path if you're using fish
$ export PATH="$PATH:/usr/lib/llvm19/bin"
$ export PATH="$PATH:/usr/lib/llvm21/bin"
```
{% /codetabs %}
@@ -299,7 +299,7 @@ The issue may manifest when initially running `bun setup` as Clang being unable
```
The C++ compiler
"/usr/bin/clang++-19"
"/usr/bin/clang++-21"
is not able to compile a simple test program.
```

2
LATEST
View File

@@ -1 +1 @@
1.3.8
1.3.9

View File

@@ -0,0 +1,38 @@
// @runtime bun,node
import { bench, group, run } from "../runner.mjs";
const small = Buffer.alloc(64, 0x42);
const medium = Buffer.alloc(1024, 0x42);
const large = Buffer.alloc(1024 * 1024, 0x42);
group("slice - no args", () => {
bench("Buffer(64).slice()", () => small.slice());
bench("Buffer(1024).slice()", () => medium.slice());
bench("Buffer(1M).slice()", () => large.slice());
});
group("slice - one int arg", () => {
bench("Buffer(64).slice(10)", () => small.slice(10));
bench("Buffer(1024).slice(10)", () => medium.slice(10));
bench("Buffer(1M).slice(1024)", () => large.slice(1024));
});
group("slice - two int args", () => {
bench("Buffer(64).slice(10, 50)", () => small.slice(10, 50));
bench("Buffer(1024).slice(10, 100)", () => medium.slice(10, 100));
bench("Buffer(1M).slice(1024, 4096)", () => large.slice(1024, 4096));
});
group("slice - negative args", () => {
bench("Buffer(64).slice(-10)", () => small.slice(-10));
bench("Buffer(1024).slice(-100, -10)", () => medium.slice(-100, -10));
bench("Buffer(1M).slice(-4096, -1024)", () => large.slice(-4096, -1024));
});
group("subarray - two int args", () => {
bench("Buffer(64).subarray(10, 50)", () => small.subarray(10, 50));
bench("Buffer(1024).subarray(10, 100)", () => medium.subarray(10, 100));
bench("Buffer(1M).subarray(1024, 4096)", () => large.subarray(1024, 4096));
});
await run();

View File

@@ -33,7 +33,23 @@ var testArray = [
import { bench, run } from "../runner.mjs";
bench("structuredClone(array)", () => structuredClone(testArray));
bench("structuredClone(nested array)", () => structuredClone(testArray));
bench("structuredClone(123)", () => structuredClone(123));
bench("structuredClone({a: 123})", () => structuredClone({ a: 123 }));
// Array fast path targets
var numbersSmall = Array.from({ length: 10 }, (_, i) => i);
var numbersMedium = Array.from({ length: 100 }, (_, i) => i);
var numbersLarge = Array.from({ length: 1000 }, (_, i) => i);
var stringsSmall = Array.from({ length: 10 }, (_, i) => `item-${i}`);
var stringsMedium = Array.from({ length: 100 }, (_, i) => `item-${i}`);
var mixed = [1, "hello", true, null, undefined, 3.14, "world", false, 42, "test"];
bench("structuredClone([10 numbers])", () => structuredClone(numbersSmall));
bench("structuredClone([100 numbers])", () => structuredClone(numbersMedium));
bench("structuredClone([1000 numbers])", () => structuredClone(numbersLarge));
bench("structuredClone([10 strings])", () => structuredClone(stringsSmall));
bench("structuredClone([100 strings])", () => structuredClone(stringsMedium));
bench("structuredClone([10 mixed])", () => structuredClone(mixed));
await run();

View File

@@ -7,6 +7,13 @@ register_repository(
4f4f5ef8ebc6e23cbf393428f0ab1b526773f7ac
)
set(BORINGSSL_CMAKE_ARGS -DBUILD_SHARED_LIBS=OFF)
# Disable ASM on Windows ARM64 to avoid mixing non-ARM object files into ARM64 libs
if(WIN32 AND CMAKE_SYSTEM_PROCESSOR MATCHES "ARM64|aarch64|AARCH64")
list(APPEND BORINGSSL_CMAKE_ARGS -DOPENSSL_NO_ASM=1)
endif()
register_cmake_command(
TARGET
boringssl
@@ -15,7 +22,7 @@ register_cmake_command(
ssl
decrepit
ARGS
-DBUILD_SHARED_LIBS=OFF
${BORINGSSL_CMAKE_ARGS}
INCLUDES
include
)

View File

@@ -1016,6 +1016,7 @@ if(NOT WIN32)
-Wno-unused-function
-Wno-c++23-lambda-attributes
-Wno-nullability-completeness
-Wno-character-conversion
-Werror
)
else()
@@ -1033,6 +1034,7 @@ if(NOT WIN32)
-Werror=sometimes-uninitialized
-Wno-c++23-lambda-attributes
-Wno-nullability-completeness
-Wno-character-conversion
-Werror
)
@@ -1061,6 +1063,7 @@ else()
-Wno-inconsistent-dllimport
-Wno-incompatible-pointer-types
-Wno-deprecated-declarations
-Wno-character-conversion
)
endif()
@@ -1136,6 +1139,15 @@ if(LINUX)
-Wl,--wrap=pow
-Wl,--wrap=powf
)
# Disable LTO for workaround-missing-symbols.cpp to prevent LLD 21 from emitting
# glibc versioned symbol names (e.g. exp@GLIBC_2.17) from .symver directives into
# the .lto_discard assembler directive, which fails to parse the '@' character.
if(ENABLE_LTO)
set_source_files_properties(${CWD}/src/bun.js/bindings/workaround-missing-symbols.cpp
PROPERTIES COMPILE_OPTIONS "-fno-lto"
)
endif()
endif()
if(NOT ABI STREQUAL "musl")
@@ -1445,6 +1457,8 @@ if(NOT BUN_CPP_ONLY)
# ==856230==See https://github.com/google/sanitizers/issues/856 for possible workarounds.
# the linked issue refers to very old kernels but this still happens to us on modern ones.
# disabling ASLR to run the binary works around it
# Skip post-build test/features when cross-compiling (can't run the target binary on the host)
if(NOT CMAKE_CROSSCOMPILING)
set(TEST_BUN_COMMAND_BASE ${BUILD_PATH}/${bunExe} --revision)
set(TEST_BUN_COMMAND_ENV_WRAP
${CMAKE_COMMAND} -E env BUN_DEBUG_QUIET_LOGS=1)
@@ -1493,6 +1507,7 @@ if(NOT BUN_CPP_ONLY)
${BUILD_PATH}/features.json
)
endif()
endif() # NOT CMAKE_CROSSCOMPILING
if(CMAKE_HOST_APPLE AND bunStrip)
register_command(
@@ -1539,7 +1554,10 @@ if(NOT BUN_CPP_ONLY)
string(REPLACE bun ${bunTriplet} bunPath ${bun})
endif()
set(bunFiles ${bunExe} features.json)
set(bunFiles ${bunExe})
if(NOT CMAKE_CROSSCOMPILING)
list(APPEND bunFiles features.json)
endif()
if(WIN32)
list(APPEND bunFiles ${bun}.pdb)
elseif(APPLE)

View File

@@ -26,6 +26,12 @@ if(RELEASE)
list(APPEND LOLHTML_BUILD_ARGS --release)
endif()
# Cross-compilation: tell cargo to target ARM64
if(WIN32 AND CMAKE_SYSTEM_PROCESSOR MATCHES "ARM64|aarch64|AARCH64")
list(APPEND LOLHTML_BUILD_ARGS --target aarch64-pc-windows-msvc)
set(LOLHTML_LIBRARY ${LOLHTML_BUILD_PATH}/aarch64-pc-windows-msvc/${LOLHTML_BUILD_TYPE}/${CMAKE_STATIC_LIBRARY_PREFIX}lolhtml${CMAKE_STATIC_LIBRARY_SUFFIX})
endif()
# Windows requires unwind tables, apparently.
if (NOT WIN32)
# The encoded escape sequences are intentional. They're how you delimit multiple arguments in a single environment variable.
@@ -51,11 +57,18 @@ if(WIN32)
if(MSVC_VERSIONS)
list(GET MSVC_VERSIONS -1 MSVC_LATEST) # Get the latest version
if(CMAKE_SYSTEM_PROCESSOR MATCHES "ARM64|aarch64")
set(MSVC_LINK_PATH "${MSVC_LATEST}/bin/HostARM64/arm64/link.exe")
# Use Hostx64/arm64 for cross-compilation from x64, fall back to native
if(EXISTS "${MSVC_LATEST}/bin/Hostx64/arm64/link.exe")
set(MSVC_LINK_PATH "${MSVC_LATEST}/bin/Hostx64/arm64/link.exe")
else()
set(MSVC_LINK_PATH "${MSVC_LATEST}/bin/HostARM64/arm64/link.exe")
endif()
set(CARGO_LINKER_VAR "CARGO_TARGET_AARCH64_PC_WINDOWS_MSVC_LINKER")
set(MSVC_LIB_ARCH "arm64")
else()
set(MSVC_LINK_PATH "${MSVC_LATEST}/bin/Hostx64/x64/link.exe")
set(CARGO_LINKER_VAR "CARGO_TARGET_X86_64_PC_WINDOWS_MSVC_LINKER")
set(MSVC_LIB_ARCH "x64")
endif()
if(EXISTS "${MSVC_LINK_PATH}")
list(APPEND LOLHTML_ENV "${CARGO_LINKER_VAR}=${MSVC_LINK_PATH}")

View File

@@ -4,7 +4,7 @@ register_repository(
REPOSITORY
oven-sh/mimalloc
COMMIT
ffa38ab8ac914f9eb7af75c1f8ad457643dc14f2
1beadf9651a7bfdec6b5367c380ecc3fe1c40d1a
)
set(MIMALLOC_CMAKE_ARGS
@@ -14,7 +14,7 @@ set(MIMALLOC_CMAKE_ARGS
-DMI_BUILD_TESTS=OFF
-DMI_USE_CXX=ON
-DMI_SKIP_COLLECT_ON_EXIT=ON
# ```
# mimalloc_allow_large_os_pages=0 BUN_PORT=3004 mem bun http-hello.js
# Started development server: http://localhost:3004
@@ -51,7 +51,7 @@ if(ENABLE_ASAN)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_DEBUG_UBSAN=ON)
elseif(APPLE OR LINUX)
if(APPLE)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_OVERRIDE=OFF)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_OVERRIDE=OFF)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_OSX_ZONE=OFF)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_OSX_INTERPOSE=OFF)
else()
@@ -87,9 +87,9 @@ endif()
if(WIN32)
if(DEBUG)
set(MIMALLOC_LIBRARY mimalloc-debug)
set(MIMALLOC_LIBRARY mimalloc-static-debug)
else()
set(MIMALLOC_LIBRARY mimalloc)
set(MIMALLOC_LIBRARY mimalloc-static)
endif()
elseif(DEBUG)
if (ENABLE_ASAN)

View File

@@ -3,18 +3,35 @@ set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER_WORKS ON)
set(CMAKE_CXX_COMPILER_WORKS ON)
set(CMAKE_CROSSCOMPILING ON)
# Force ARM64 architecture ID - this is what CMake uses to determine /machine: flag
set(MSVC_C_ARCHITECTURE_ID ARM64 CACHE INTERNAL "")
set(MSVC_CXX_ARCHITECTURE_ID ARM64 CACHE INTERNAL "")
# The rest only applies when building on Windows (C++ and link steps).
# The Zig step runs on Linux and only needs CMAKE_SYSTEM_NAME/PROCESSOR above.
if(CMAKE_HOST_SYSTEM_NAME STREQUAL "Windows")
# CMake 4.0+ policy CMP0197 controls how MSVC machine type flags are handled
set(CMAKE_POLICY_DEFAULT_CMP0197 NEW CACHE INTERNAL "")
# Ensure clang/clang-cl targets Windows ARM64 (otherwise ARM64-specific flags like
# -march=armv8-a are rejected as x86-only).
set(CMAKE_C_COMPILER_TARGET aarch64-pc-windows-msvc CACHE STRING "" FORCE)
set(CMAKE_CXX_COMPILER_TARGET aarch64-pc-windows-msvc CACHE STRING "" FORCE)
# Clear any inherited static linker flags that might have wrong machine types
set(CMAKE_STATIC_LINKER_FLAGS "" CACHE STRING "" FORCE)
# ARM64 has lock-free atomics (highway's FindAtomics check can't run ARM64 test binary on x64)
set(ATOMICS_LOCK_FREE_INSTRUCTIONS TRUE CACHE BOOL "" FORCE)
set(HAVE_CXX_ATOMICS_WITHOUT_LIB TRUE CACHE BOOL "" FORCE)
set(HAVE_CXX_ATOMICS64_WITHOUT_LIB TRUE CACHE BOOL "" FORCE)
# Use wrapper script for llvm-lib that strips /machine:x64 flags
# This works around CMake 4.1.0 bug where both ARM64 and x64 machine flags are added
get_filename_component(_TOOLCHAIN_DIR "${CMAKE_CURRENT_LIST_DIR}" DIRECTORY)
set(CMAKE_AR "${_TOOLCHAIN_DIR}/scripts/llvm-lib-wrapper.bat" CACHE FILEPATH "" FORCE)
# Force ARM64 architecture ID - this is what CMake uses to determine /machine: flag
set(MSVC_C_ARCHITECTURE_ID ARM64 CACHE INTERNAL "")
set(MSVC_CXX_ARCHITECTURE_ID ARM64 CACHE INTERNAL "")
# CMake 4.0+ policy CMP0197 controls how MSVC machine type flags are handled
set(CMAKE_POLICY_DEFAULT_CMP0197 NEW CACHE INTERNAL "")
# Clear any inherited static linker flags that might have wrong machine types
set(CMAKE_STATIC_LINKER_FLAGS "" CACHE STRING "" FORCE)
# Use wrapper script for llvm-lib that strips /machine:x64 flags
# This works around CMake 4.1.0 bug where both ARM64 and x64 machine flags are added
get_filename_component(_TOOLCHAIN_DIR "${CMAKE_CURRENT_LIST_DIR}" DIRECTORY)
set(CMAKE_AR "${_TOOLCHAIN_DIR}/scripts/llvm-lib-wrapper.bat" CACHE FILEPATH "" FORCE)
endif()

View File

@@ -12,13 +12,7 @@ if(NOT ENABLE_LLVM)
return()
endif()
# LLVM 21 is required for Windows ARM64 (first version with ARM64 Windows builds)
# Other platforms use LLVM 19.1.7
if(WIN32 AND CMAKE_SYSTEM_PROCESSOR MATCHES "ARM64|aarch64|AARCH64")
set(DEFAULT_LLVM_VERSION "21.1.8")
else()
set(DEFAULT_LLVM_VERSION "19.1.7")
endif()
set(DEFAULT_LLVM_VERSION "21.1.8")
optionx(LLVM_VERSION STRING "The version of LLVM to use" DEFAULT ${DEFAULT_LLVM_VERSION})
@@ -27,6 +21,8 @@ if(USE_LLVM_VERSION)
set(LLVM_VERSION_MAJOR ${CMAKE_MATCH_1})
set(LLVM_VERSION_MINOR ${CMAKE_MATCH_2})
set(LLVM_VERSION_PATCH ${CMAKE_MATCH_3})
# Accept any LLVM version within the same major.minor range (e.g. Alpine 3.23 ships 21.1.2)
set(LLVM_VERSION_RANGE ">=${LLVM_VERSION_MAJOR}.${LLVM_VERSION_MINOR}.0 <${LLVM_VERSION_MAJOR}.${LLVM_VERSION_MINOR}.99")
endif()
set(LLVM_PATHS)
@@ -54,6 +50,11 @@ if(APPLE)
list(APPEND LLVM_PATHS ${HOMEBREW_PREFIX}/opt/llvm/bin)
endif()
if(WIN32)
# Prefer standalone LLVM over VS-bundled (standalone supports cross-compilation)
list(APPEND LLVM_PATHS "C:/Program Files/LLVM/bin")
endif()
if(UNIX)
list(APPEND LLVM_PATHS /usr/lib/llvm/bin)
@@ -78,14 +79,12 @@ macro(find_llvm_command variable command)
)
endif()
math(EXPR LLVM_VERSION_NEXT_MAJOR "${LLVM_VERSION_MAJOR} + 1")
find_command(
VARIABLE ${variable}
VERSION_VARIABLE LLVM_VERSION
COMMAND ${commands}
PATHS ${LLVM_PATHS}
VERSION ">=${LLVM_VERSION_MAJOR}.1.0 <${LLVM_VERSION_NEXT_MAJOR}.0.0"
VERSION "${LLVM_VERSION_RANGE}"
)
list(APPEND CMAKE_ARGS -D${variable}=${${variable}})
endmacro()

View File

@@ -6,7 +6,7 @@ option(WEBKIT_LOCAL "If a local version of WebKit should be used instead of down
option(WEBKIT_BUILD_TYPE "The build type for local WebKit (defaults to CMAKE_BUILD_TYPE)")
if(NOT WEBKIT_VERSION)
set(WEBKIT_VERSION 515344bc5d65aa2d4f9ff277b5fb944f0e051dcd)
set(WEBKIT_VERSION autobuild-preview-pr-157-68c51d5a)
endif()
# Use preview build URL for Windows ARM64 until the fix is merged to main

View File

@@ -35,7 +35,7 @@ winget install "Visual Studio Community 2022" --override "--add Microsoft.Visual
After Visual Studio, you need the following:
- LLVM (19.1.7 for x64, 21.1.8 for ARM64)
- LLVM 21.1.8
- Go
- Rust
- NASM
@@ -51,7 +51,7 @@ After Visual Studio, you need the following:
irm https://get.scoop.sh | iex
scoop install nodejs-lts go rust nasm ruby perl ccache
# scoop seems to be buggy if you install llvm and the rest at the same time
scoop install llvm@19.1.7
scoop install llvm@21.1.8
```
For Windows ARM64, download LLVM 21.1.8 directly from GitHub releases (first version with ARM64 Windows builds):

View File

@@ -40,7 +40,7 @@ sudo pacman -S base-devel cmake git go libiconv libtool make ninja pkg-config py
```
```bash Fedora
sudo dnf install cargo clang19 llvm19 lld19 cmake git golang libtool ninja-build pkg-config rustc ruby libatomic-static libstdc++-static sed unzip which libicu-devel 'perl(Math::BigInt)'
sudo dnf install cargo clang21 llvm21 lld21 cmake git golang libtool ninja-build pkg-config rustc ruby libatomic-static libstdc++-static sed unzip which libicu-devel 'perl(Math::BigInt)'
```
```bash openSUSE Tumbleweed
@@ -95,17 +95,17 @@ Our build scripts will automatically detect and use `ccache` if available. You c
## Install LLVM
Bun requires LLVM 19 (`clang` is part of LLVM). This version requirement is to match WebKit (precompiled), as mismatching versions will cause memory allocation failures at runtime. In most cases, you can install LLVM through your system package manager:
Bun requires LLVM 21.1.8 (`clang` is part of LLVM). This version is enforced by the build system — mismatching versions will cause memory allocation failures at runtime. In most cases, you can install LLVM through your system package manager:
<CodeGroup>
```bash macOS (Homebrew)
brew install llvm@19
brew install llvm@21
```
```bash Ubuntu/Debian
# LLVM has an automatic installation script that is compatible with all versions of Ubuntu
wget https://apt.llvm.org/llvm.sh -O - | sudo bash -s -- 19 all
wget https://apt.llvm.org/llvm.sh -O - | sudo bash -s -- 21 all
```
```bash Arch
@@ -117,17 +117,17 @@ sudo dnf install llvm clang lld-devel
```
```bash openSUSE Tumbleweed
sudo zypper install clang19 lld19 llvm19
sudo zypper install clang21 lld21 llvm21
```
</CodeGroup>
If none of the above solutions apply, you will have to install it [manually](https://github.com/llvm/llvm-project/releases/tag/llvmorg-19.1.7).
If none of the above solutions apply, you will have to install it [manually](https://github.com/llvm/llvm-project/releases/tag/llvmorg-21.1.8).
Make sure Clang/LLVM 19 is in your path:
Make sure Clang/LLVM 21 is in your path:
```bash
which clang-19
which clang-21
```
If not, run this to manually add it:
@@ -136,13 +136,13 @@ If not, run this to manually add it:
```bash macOS (Homebrew)
# use fish_add_path if you're using fish
# use path+="$(brew --prefix llvm@19)/bin" if you are using zsh
export PATH="$(brew --prefix llvm@19)/bin:$PATH"
# use path+="$(brew --prefix llvm@21)/bin" if you are using zsh
export PATH="$(brew --prefix llvm@21)/bin:$PATH"
```
```bash Arch
# use fish_add_path if you're using fish
export PATH="$PATH:/usr/lib/llvm19/bin"
export PATH="$PATH:/usr/lib/llvm21/bin"
```
</CodeGroup>
@@ -309,7 +309,7 @@ The issue may manifest when initially running `bun setup` as Clang being unable
```txt
The C++ compiler
"/usr/bin/clang++-19"
"/usr/bin/clang++-21"
is not able to compile a simple test program.
```

View File

@@ -3,9 +3,9 @@ title: Markdown
description: Parse and render Markdown with Bun's built-in Markdown API, supporting GFM extensions and custom rendering callbacks
---
{% callout type="note" %}
**Unstable API** — This API is under active development and may change in future versions of Bun.
{% /callout %}
<Callout type="note">
**Unstable API** — This API is under active development and may change in future versions of Bun.
</Callout>
Bun includes a fast, built-in Markdown parser written in Zig. It supports GitHub Flavored Markdown (GFM) extensions and provides three APIs:

View File

@@ -26,10 +26,10 @@
};
};
# LLVM 19 - matching the bootstrap script (targets 19.1.7, actual version from nixpkgs-unstable)
llvm = pkgs.llvm_19;
clang = pkgs.clang_19;
lld = pkgs.lld_19;
# LLVM 21 - matching the bootstrap script (targets 21.1.8, actual version from nixpkgs-unstable)
llvm = pkgs.llvm_21;
clang = pkgs.clang_21;
lld = pkgs.lld_21;
# Node.js 24 - matching the bootstrap script (targets 24.3.0, actual version from nixpkgs-unstable)
nodejs = pkgs.nodejs_24;
@@ -42,7 +42,7 @@
pkgs.pkg-config
pkgs.ccache
# Compilers and toolchain - version pinned to LLVM 19
# Compilers and toolchain - version pinned to LLVM 21
clang
llvm
lld

View File

@@ -1,7 +1,7 @@
{
"private": true,
"name": "bun",
"version": "1.3.9",
"version": "1.3.10",
"workspaces": [
"./packages/bun-types",
"./packages/@types/bun"

View File

@@ -0,0 +1,3 @@
# Bun
This is the Windows ARM64 binary for Bun, a fast all-in-one JavaScript runtime. https://bun.com

View File

@@ -95,6 +95,12 @@ export const platforms: Platform[] = [
bin: "bun-windows-x64-baseline",
exe: "bin/bun.exe",
},
// {
// os: "win32",
// arch: "arm64",
// bin: "bun-windows-aarch64",
// exe: "bin/bun.exe",
// },
];
export const supportedPlatforms: Platform[] = platforms

View File

@@ -2154,7 +2154,7 @@ declare module "bun" {
interface Hash {
wyhash: (data: string | ArrayBufferView | ArrayBuffer | SharedArrayBuffer, seed?: bigint) => bigint;
adler32: (data: string | ArrayBufferView | ArrayBuffer | SharedArrayBuffer) => number;
crc32: (data: string | ArrayBufferView | ArrayBuffer | SharedArrayBuffer) => number;
crc32: (data: string | ArrayBufferView | ArrayBuffer | SharedArrayBuffer, seed?: number) => number;
cityHash32: (data: string | ArrayBufferView | ArrayBuffer | SharedArrayBuffer) => number;
cityHash64: (data: string | ArrayBufferView | ArrayBuffer | SharedArrayBuffer, seed?: bigint) => bigint;
xxHash32: (data: string | ArrayBufferView | ArrayBuffer | SharedArrayBuffer, seed?: number) => number;
@@ -2438,14 +2438,19 @@ declare module "bun" {
| `bun-linux-${Architecture}-${Libc}`
| `bun-linux-${Architecture}-${SIMD}`
| `bun-linux-${Architecture}-${SIMD}-${Libc}`
| "bun-windows-x64"
| `bun-windows-${Architecture}`
| `bun-windows-x64-${SIMD}`;
}
/**
* @see [Bun.build API docs](https://bun.com/docs/bundler#api)
*/
interface BuildConfigBase {
interface BuildConfig {
/**
* Enable code splitting
*/
splitting?: boolean;
/**
* List of entrypoints, usually file paths
*/
@@ -2774,6 +2779,33 @@ declare module "bun" {
metafile?: boolean;
outdir?: string;
/**
* Create a standalone executable
*
* When `true`, creates an executable for the current platform.
* When a target string, creates an executable for that platform.
*
* @example
* ```ts
* // Create executable for current platform
* await Bun.build({
* entrypoints: ['./app.js'],
* compile: {
* target: 'linux-x64',
* },
* outfile: './my-app'
* });
*
* // Cross-compile for Linux x64
* await Bun.build({
* entrypoints: ['./app.js'],
* compile: 'linux-x64',
* outfile: './my-app'
* });
* ```
*/
compile?: boolean | Bun.Build.CompileTarget | CompileBuildOptions;
}
interface CompileBuildOptions {
@@ -2832,57 +2864,6 @@ declare module "bun" {
};
}
// Compile build config - uses outfile for executable output
interface CompileBuildConfig extends BuildConfigBase {
/**
* Create a standalone executable
*
* When `true`, creates an executable for the current platform.
* When a target string, creates an executable for that platform.
*
* @example
* ```ts
* // Create executable for current platform
* await Bun.build({
* entrypoints: ['./app.js'],
* compile: {
* target: 'linux-x64',
* },
* outfile: './my-app'
* });
*
* // Cross-compile for Linux x64
* await Bun.build({
* entrypoints: ['./app.js'],
* compile: 'linux-x64',
* outfile: './my-app'
* });
* ```
*/
compile: boolean | Bun.Build.CompileTarget | CompileBuildOptions;
/**
* Splitting is not currently supported with `.compile`
*/
splitting?: never;
}
interface NormalBuildConfig extends BuildConfigBase {
/**
* Enable code splitting
*
* This does not currently work with {@link CompileBuildConfig.compile `compile`}
*
* @default true
*/
splitting?: boolean;
}
/**
* @see [Bun.build API docs](https://bun.com/docs/bundler#api)
*/
type BuildConfig = CompileBuildConfig | NormalBuildConfig;
/**
* Hash and verify passwords using argon2 or bcrypt
*

View File

@@ -188,6 +188,103 @@ struct us_loop_t *us_create_loop(void *hint, void (*wakeup_cb)(struct us_loop_t
return loop;
}
/* Shared dispatch loop for both us_loop_run and us_loop_run_bun_tick */
static void us_internal_dispatch_ready_polls(struct us_loop_t *loop) {
#ifdef LIBUS_USE_EPOLL
for (loop->current_ready_poll = 0; loop->current_ready_poll < loop->num_ready_polls; loop->current_ready_poll++) {
struct us_poll_t *poll = GET_READY_POLL(loop, loop->current_ready_poll);
if (LIKELY(poll)) {
if (CLEAR_POINTER_TAG(poll) != poll) {
Bun__internal_dispatch_ready_poll(loop, poll);
continue;
}
int events = loop->ready_polls[loop->current_ready_poll].events;
const int error = events & EPOLLERR;
const int eof = events & EPOLLHUP;
events &= us_poll_events(poll);
if (events || error || eof) {
us_internal_dispatch_ready_poll(poll, error, eof, events);
}
}
}
#else
/* Kqueue delivers each filter (READ, WRITE, TIMER, etc.) as a separate kevent,
* so the same fd/poll can appear twice in ready_polls. We coalesce them into a
* single set of flags per poll before dispatching, matching epoll's behavior
* where each fd appears once with a combined bitmask. */
struct kevent_flags {
uint8_t readable : 1;
uint8_t writable : 1;
uint8_t error : 1;
uint8_t eof : 1;
uint8_t skip : 1;
uint8_t _pad : 3;
};
_Static_assert(sizeof(struct kevent_flags) == 1, "kevent_flags must be 1 byte");
struct kevent_flags coalesced[LIBUS_MAX_READY_POLLS]; /* no zeroing needed — every index is written in the first pass */
/* First pass: decode kevents and coalesce same-poll entries */
for (int i = 0; i < loop->num_ready_polls; i++) {
struct us_poll_t *poll = GET_READY_POLL(loop, i);
if (!poll || CLEAR_POINTER_TAG(poll) != poll) {
coalesced[i] = (struct kevent_flags){ .skip = 1 };
continue;
}
const int16_t filter = loop->ready_polls[i].filter;
const uint16_t flags = loop->ready_polls[i].flags;
struct kevent_flags bits = {
.readable = (filter == EVFILT_READ || filter == EVFILT_TIMER || filter == EVFILT_MACHPORT),
.writable = (filter == EVFILT_WRITE),
.error = !!(flags & EV_ERROR),
.eof = !!(flags & EV_EOF),
};
/* Look backward for a prior entry with the same poll to coalesce into.
* Kqueue returns at most 2 kevents per fd (READ + WRITE). */
int merged = 0;
for (int j = i - 1; j >= 0; j--) {
if (!coalesced[j].skip && GET_READY_POLL(loop, j) == poll) {
coalesced[j].readable |= bits.readable;
coalesced[j].writable |= bits.writable;
coalesced[j].error |= bits.error;
coalesced[j].eof |= bits.eof;
coalesced[i] = (struct kevent_flags){ .skip = 1 };
merged = 1;
break;
}
}
if (!merged) {
coalesced[i] = bits;
}
}
/* Second pass: dispatch everything in order — tagged pointers and coalesced events */
for (loop->current_ready_poll = 0; loop->current_ready_poll < loop->num_ready_polls; loop->current_ready_poll++) {
struct us_poll_t *poll = GET_READY_POLL(loop, loop->current_ready_poll);
if (!poll) continue;
/* Tagged pointers (FilePoll) go through Bun's own dispatch */
if (CLEAR_POINTER_TAG(poll) != poll) {
Bun__internal_dispatch_ready_poll(loop, poll);
continue;
}
struct kevent_flags bits = coalesced[loop->current_ready_poll];
if (bits.skip) continue;
int events = (bits.readable ? LIBUS_SOCKET_READABLE : 0)
| (bits.writable ? LIBUS_SOCKET_WRITABLE : 0);
events &= us_poll_events(poll);
if (events || bits.error || bits.eof) {
us_internal_dispatch_ready_poll(poll, bits.error, bits.eof, events);
}
}
#endif
}
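
A minimal standalone sketch of the backward-merge coalescing described in the comment above, using plain integers as stand-in polls and made-up flag values (illustrative only, not part of this change):

```c
#include <stdio.h>

struct entry {
    int key;        /* stand-in for the poll pointer */
    unsigned flags; /* stand-in for readable/writable/error/eof bits */
    int skip;       /* set when merged into an earlier entry */
};

int main(void) {
    /* key 7 appears twice, e.g. a READ and a WRITE kevent for the same fd */
    struct entry ready[] = { { 7, 0x1, 0 }, { 9, 0x1, 0 }, { 7, 0x2, 0 } };
    int n = sizeof ready / sizeof ready[0];

    /* First pass: fold later duplicates into the earliest occurrence */
    for (int i = 0; i < n; i++) {
        for (int j = i - 1; j >= 0; j--) {
            if (!ready[j].skip && ready[j].key == ready[i].key) {
                ready[j].flags |= ready[i].flags;
                ready[i].skip = 1;
                break;
            }
        }
    }

    /* Second pass: each key is dispatched exactly once with combined flags */
    for (int i = 0; i < n; i++)
        if (!ready[i].skip)
            printf("key %d -> flags 0x%x\n", ready[i].key, ready[i].flags);
    return 0;
}
```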
void us_loop_run(struct us_loop_t *loop) {
us_loop_integrate(loop);
@@ -205,41 +302,7 @@ void us_loop_run(struct us_loop_t *loop) {
} while (IS_EINTR(loop->num_ready_polls));
#endif
/* Iterate ready polls, dispatching them by type */
for (loop->current_ready_poll = 0; loop->current_ready_poll < loop->num_ready_polls; loop->current_ready_poll++) {
struct us_poll_t *poll = GET_READY_POLL(loop, loop->current_ready_poll);
/* Any ready poll marked with nullptr will be ignored */
if (LIKELY(poll)) {
if (CLEAR_POINTER_TAG(poll) != poll) {
Bun__internal_dispatch_ready_poll(loop, poll);
continue;
}
#ifdef LIBUS_USE_EPOLL
int events = loop->ready_polls[loop->current_ready_poll].events;
const int error = events & EPOLLERR;
const int eof = events & EPOLLHUP;
#else
const struct kevent64_s* current_kevent = &loop->ready_polls[loop->current_ready_poll];
const int16_t filter = current_kevent->filter;
const uint16_t flags = current_kevent->flags;
const uint32_t fflags = current_kevent->fflags;
// > Multiple events which trigger the filter do not result in multiple kevents being placed on the kqueue
// > Instead, the filter will aggregate the events into a single kevent struct
// Note: EV_ERROR only sets the error in data as part of changelist. Not in this call!
int events = 0
| ((filter == EVFILT_READ) ? LIBUS_SOCKET_READABLE : 0)
| ((filter == EVFILT_WRITE) ? LIBUS_SOCKET_WRITABLE : 0);
const int error = (flags & (EV_ERROR)) ? ((int)fflags || 1) : 0;
const int eof = (flags & (EV_EOF));
#endif
/* Always filter all polls by what they actually poll for (callback polls always poll for readable) */
events &= us_poll_events(poll);
if (events || error || eof) {
us_internal_dispatch_ready_poll(poll, error, eof, events);
}
}
}
us_internal_dispatch_ready_polls(loop);
/* Emit post callback */
us_internal_loop_post(loop);
@@ -263,57 +326,33 @@ void us_loop_run_bun_tick(struct us_loop_t *loop, const struct timespec* timeout
/* Emit pre callback */
us_internal_loop_pre(loop);
if (loop->data.jsc_vm)
const unsigned int had_wakeups = __atomic_exchange_n(&loop->pending_wakeups, 0, __ATOMIC_ACQUIRE);
const int will_idle_inside_event_loop = had_wakeups == 0 && (!timeout || (timeout->tv_nsec != 0 || timeout->tv_sec != 0));
if (will_idle_inside_event_loop && loop->data.jsc_vm)
Bun__JSC_onBeforeWait(loop->data.jsc_vm);
/* Fetch ready polls */
#ifdef LIBUS_USE_EPOLL
/* A zero timespec already has a fast path in ep_poll (fs/eventpoll.c):
* it sets timed_out=1 (line 1952) and returns before any scheduler
* interaction (line 1975). No equivalent of KEVENT_FLAG_IMMEDIATE needed. */
loop->num_ready_polls = bun_epoll_pwait2(loop->fd, loop->ready_polls, 1024, timeout);
#else
do {
loop->num_ready_polls = kevent64(loop->fd, NULL, 0, loop->ready_polls, 1024, 0, timeout);
loop->num_ready_polls = kevent64(loop->fd, NULL, 0, loop->ready_polls, 1024,
/* When we won't idle (pending wakeups or zero timeout), use KEVENT_FLAG_IMMEDIATE.
* In XNU's kqueue_scan (bsd/kern/kern_event.c):
* - KEVENT_FLAG_IMMEDIATE: returns immediately after kqueue_process() (line 8031)
* - Zero timespec without the flag: falls through to assert_wait_deadline (line 8039)
* and thread_block (line 8048), doing a full context switch cycle (~14us) even
* though the deadline is already in the past. */
will_idle_inside_event_loop ? 0 : KEVENT_FLAG_IMMEDIATE,
timeout);
} while (IS_EINTR(loop->num_ready_polls));
#endif
/* Iterate ready polls, dispatching them by type */
for (loop->current_ready_poll = 0; loop->current_ready_poll < loop->num_ready_polls; loop->current_ready_poll++) {
struct us_poll_t *poll = GET_READY_POLL(loop, loop->current_ready_poll);
/* Any ready poll marked with nullptr will be ignored */
if (LIKELY(poll)) {
if (CLEAR_POINTER_TAG(poll) != poll) {
Bun__internal_dispatch_ready_poll(loop, poll);
continue;
}
#ifdef LIBUS_USE_EPOLL
int events = loop->ready_polls[loop->current_ready_poll].events;
const int error = events & EPOLLERR;
const int eof = events & EPOLLHUP;
#else
const struct kevent64_s* current_kevent = &loop->ready_polls[loop->current_ready_poll];
const int16_t filter = current_kevent->filter;
const uint16_t flags = current_kevent->flags;
const uint32_t fflags = current_kevent->fflags;
// > Multiple events which trigger the filter do not result in multiple kevents being placed on the kqueue
// > Instead, the filter will aggregate the events into a single kevent struct
int events = 0
| ((filter & EVFILT_READ) ? LIBUS_SOCKET_READABLE : 0)
| ((filter & EVFILT_WRITE) ? LIBUS_SOCKET_WRITABLE : 0);
// Note: EV_ERROR only sets the error in data as part of changelist. Not in this call!
const int error = (flags & (EV_ERROR)) ? ((int)fflags || 1) : 0;
const int eof = (flags & (EV_EOF));
#endif
/* Always filter all polls by what they actually poll for (callback polls always poll for readable) */
events &= us_poll_events(poll);
if (events || error || eof) {
us_internal_dispatch_ready_poll(poll, error, eof, events);
}
}
}
us_internal_dispatch_ready_polls(loop);
/* Emit post callback */
us_internal_loop_post(loop);
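
As an aside on the kevent64 comment above, here is a minimal macOS-only sketch of a non-blocking poll using KEVENT_FLAG_IMMEDIATE, assuming the flag behaves as that comment describes (return after scanning the kqueue rather than parking the thread on a zero timespec):

```c
#include <stdio.h>
#include <sys/event.h>
#include <unistd.h>

int main(void) {
    int kq = kqueue();
    struct kevent64_s events[8];

    /* KEVENT_FLAG_IMMEDIATE returns as soon as the kqueue has been scanned,
     * even though no timeout is supplied, so this call never blocks. */
    int n = kevent64(kq, NULL, 0, events, 8, KEVENT_FLAG_IMMEDIATE, NULL);
    printf("ready kevents: %d\n", n);

    close(kq);
    return 0;
}
```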
@@ -613,7 +652,7 @@ struct us_internal_async *us_internal_create_async(struct us_loop_t *loop, int f
struct us_internal_callback_t *cb = (struct us_internal_callback_t *) p;
cb->loop = loop;
cb->cb_expects_the_loop = 1;
cb->leave_poll_ready = 0;
cb->leave_poll_ready = 1; /* Edge-triggered: skip reading eventfd on wakeup */
return (struct us_internal_async *) cb;
}
@@ -635,12 +674,28 @@ void us_internal_async_set(struct us_internal_async *a, void (*cb)(struct us_int
internal_cb->cb = (void (*)(struct us_internal_callback_t *)) cb;
us_poll_start((struct us_poll_t *) a, internal_cb->loop, LIBUS_SOCKET_READABLE);
#ifdef LIBUS_USE_EPOLL
/* Upgrade to edge-triggered to avoid reading the eventfd on each wakeup */
struct epoll_event event;
event.events = EPOLLIN | EPOLLET;
event.data.ptr = (struct us_poll_t *) a;
epoll_ctl(internal_cb->loop->fd, EPOLL_CTL_MOD,
us_poll_fd((struct us_poll_t *) a), &event);
#endif
}
void us_internal_async_wakeup(struct us_internal_async *a) {
uint64_t one = 1;
int written = write(us_poll_fd((struct us_poll_t *) a), &one, 8);
(void)written;
int fd = us_poll_fd((struct us_poll_t *) a);
uint64_t val;
for (val = 1; ; val = 1) {
if (write(fd, &val, 8) >= 0) return;
if (errno == EINTR) continue;
if (errno == EAGAIN) {
/* Counter overflow — drain and retry */
if (read(fd, &val, 8) > 0 || errno == EAGAIN || errno == EINTR) continue;
}
break;
}
}
#else
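
For context on the wakeup function above: an eventfd is a 64-bit counter, and a non-blocking write fails with EAGAIN once the counter saturates, so the writer drains it with a read and retries. A minimal Linux-only sketch of that pattern; the saturation here is forced artificially just to exercise the EAGAIN branch:

```c
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Same write/drain/retry shape as the wakeup function above. */
static void wakeup(int fd) {
    uint64_t val = 1;
    for (;;) {
        if (write(fd, &val, sizeof val) >= 0) return;   /* common case */
        if (errno == EINTR) continue;                   /* interrupted: retry */
        if (errno == EAGAIN) {                          /* counter overflow */
            uint64_t drained;
            if (read(fd, &drained, sizeof drained) > 0 || errno == EAGAIN || errno == EINTR)
                continue;                               /* drained: retry the write */
        }
        return;                                         /* unexpected error */
    }
}

int main(void) {
    int fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
    uint64_t near_max = UINT64_MAX - 1;                 /* saturate the counter */
    write(fd, &near_max, sizeof near_max);

    wakeup(fd);                                         /* hits EAGAIN, drains, retries */

    uint64_t value;
    read(fd, &value, sizeof value);
    printf("counter after wakeup: %llu\n", (unsigned long long)value); /* prints 1 */
    close(fd);
    return 0;
}
```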

View File

@@ -54,6 +54,10 @@ struct us_loop_t {
/* Number of polls owned by bun */
unsigned int bun_polls;
/* Incremented atomically by wakeup(), swapped to 0 before epoll/kqueue.
* If non-zero, the event loop will return immediately so we can skip the GC safepoint. */
unsigned int pending_wakeups;
/* The list of ready polls */
#ifdef LIBUS_USE_EPOLL
alignas(LIBUS_EXT_ALIGNMENT) struct epoll_event ready_polls[1024];
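
A minimal sketch of how the pending_wakeups counter is meant to be used, assuming GCC/Clang atomic builtins: the waking thread increments with release ordering, and the loop swaps the counter back to zero with acquire ordering before deciding whether it will actually idle (and therefore whether entering the GC safepoint is worthwhile):

```c
#include <stdio.h>
#include <time.h>

static unsigned int pending_wakeups = 0;

/* Producer side (any thread): count the wakeup, then signal the loop. */
static void wakeup(void) {
    __atomic_fetch_add(&pending_wakeups, 1, __ATOMIC_RELEASE);
    /* ...then write to the eventfd / user kevent as before... */
}

/* Loop side: if anything was pending, or the timeout is zero, the poll call
 * returns immediately, so skip work that is only worth doing before idling. */
static int will_idle(const struct timespec *timeout) {
    unsigned int had_wakeups = __atomic_exchange_n(&pending_wakeups, 0, __ATOMIC_ACQUIRE);
    return had_wakeups == 0 && (!timeout || timeout->tv_sec != 0 || timeout->tv_nsec != 0);
}

int main(void) {
    struct timespec one_ms = { .tv_sec = 0, .tv_nsec = 1000000 };
    printf("will idle before wakeup: %d\n", will_idle(&one_ms)); /* 1 */
    wakeup();
    printf("will idle after wakeup:  %d\n", will_idle(&one_ms)); /* 0 */
    return 0;
}
```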

View File

@@ -93,6 +93,9 @@ void us_internal_loop_data_free(struct us_loop_t *loop) {
}
void us_wakeup_loop(struct us_loop_t *loop) {
#ifndef LIBUS_USE_LIBUV
__atomic_fetch_add(&loop->pending_wakeups, 1, __ATOMIC_RELEASE);
#endif
us_internal_async_wakeup(loop->data.wakeup_async);
}
@@ -393,8 +396,12 @@ void us_internal_dispatch_ready_poll(struct us_poll_t *p, int error, int eof, in
if (events & LIBUS_SOCKET_WRITABLE && !error) {
s->flags.last_write_failed = 0;
#ifdef LIBUS_USE_KQUEUE
/* Kqueue is one-shot so is not writable anymore */
p->state.poll_type = us_internal_poll_type(p) | ((events & LIBUS_SOCKET_READABLE) ? POLL_TYPE_POLLING_IN : 0);
/* Kqueue EVFILT_WRITE is one-shot so the filter is removed after delivery.
* Clear POLLING_OUT to reflect this.
* Keep POLLING_IN from the poll's own state, NOT from `events`: kqueue delivers
* each filter as a separate kevent, so a pure EVFILT_WRITE event won't have
* LIBUS_SOCKET_READABLE set even though the socket is still registered for reads. */
p->state.poll_type = us_internal_poll_type(p) | (p->state.poll_type & POLL_TYPE_POLLING_IN);
#endif
s = s->context->on_writable(s);
@@ -412,7 +419,7 @@ void us_internal_dispatch_ready_poll(struct us_poll_t *p, int error, int eof, in
us_poll_change(&s->p, loop, us_poll_events(&s->p) & LIBUS_SOCKET_READABLE);
} else {
#ifdef LIBUS_USE_KQUEUE
/* Kqueue one-shot writable needs to be re-enabled */
/* Kqueue one-shot writable needs to be re-registered */
us_poll_change(&s->p, loop, us_poll_events(&s->p) | LIBUS_SOCKET_WRITABLE);
#endif
}

View File

@@ -566,8 +566,10 @@ namespace uWS
bool isHTTPMethod = (__builtin_expect(data[1] == '/', 1));
bool isConnect = !isHTTPMethod && (isHTTPorHTTPSPrefixForProxies(data + 1, end) == 1 || ((data - start) == 7 && memcmp(start, "CONNECT", 7) == 0));
if (isHTTPMethod || isConnect) [[likely]] {
bool isConnect = !isHTTPMethod && ((data - start) == 7 && memcmp(start, "CONNECT", 7) == 0);
/* Also accept proxy-style absolute URLs (http://... or https://...) as valid request targets */
bool isProxyStyleURL = !isHTTPMethod && !isConnect && data[0] == 32 && isHTTPorHTTPSPrefixForProxies(data + 1, end) == 1;
if (isHTTPMethod || isConnect || isProxyStyleURL) [[likely]] {
header.key = {start, (size_t) (data - start)};
data++;
if(!isValidMethod(header.key, useStrictMethodValidation)) {

View File

@@ -1,4 +1,4 @@
# Version: 11
# Version: 12
# A script that installs the dependencies needed to build and test Bun.
# This should work on Windows 10 or newer with PowerShell.
@@ -387,7 +387,7 @@ function Install-PdbAddr2line {
function Install-Llvm {
Install-Package llvm `
-Command clang-cl `
-Version "19.1.7"
-Version "21.1.8"
Add-To-Path "$env:ProgramFiles\LLVM\bin"
}

View File

@@ -1,5 +1,5 @@
#!/bin/sh
# Version: 27
# Version: 28
# A script that installs the dependencies needed to build and test Bun.
# This should work on macOS and Linux with a POSIX shell.
@@ -1096,7 +1096,7 @@ install_build_essentials() {
}
llvm_version_exact() {
print "19.1.7"
print "21.1.8"
}
llvm_version() {
@@ -1106,23 +1106,20 @@ llvm_version() {
install_llvm() {
case "$pm" in
apt)
# Debian 13 (Trixie) has LLVM 19 natively, and apt.llvm.org doesn't have a trixie repo
if [ "$distro" = "debian" ]; then
install_packages \
"llvm-$(llvm_version)" \
"clang-$(llvm_version)" \
"lld-$(llvm_version)" \
"llvm-$(llvm_version)-dev" \
"llvm-$(llvm_version)-tools" \
"libclang-rt-$(llvm_version)-dev"
else
bash="$(require bash)"
llvm_script="$(download_file "https://apt.llvm.org/llvm.sh")"
execute_sudo "$bash" "$llvm_script" "$(llvm_version)" all
# Install llvm-symbolizer explicitly to ensure it's available for ASAN
install_packages "llvm-$(llvm_version)-tools"
# apt.llvm.org's GPG key uses SHA1, which Debian 13+ (sqv) rejects since 2026-02-01.
# Override the sequoia crypto policy to extend the SHA1 deadline.
# See: https://github.com/llvm/llvm-project/issues/153385
if [ -x /usr/bin/sqv ] && [ -f /usr/share/apt/default-sequoia.config ]; then
execute_sudo mkdir -p /etc/crypto-policies/back-ends
execute_sudo /usr/bin/sh -c "sed 's/sha1.second_preimage_resistance = 2026-02-01/sha1.second_preimage_resistance = 2028-02-01/' /usr/share/apt/default-sequoia.config > /etc/crypto-policies/back-ends/apt-sequoia.config"
fi
bash="$(require bash)"
llvm_script="$(download_file "https://apt.llvm.org/llvm.sh")"
execute_sudo "$bash" "$llvm_script" "$(llvm_version)" all
# Install llvm-symbolizer explicitly to ensure it's available for ASAN
install_packages "llvm-$(llvm_version)-tools"
;;
brew)
install_packages "llvm@$(llvm_version)"
@@ -1177,7 +1174,7 @@ install_gcc() {
;;
esac
llvm_v="19"
llvm_v="21"
append_to_profile "export CC=clang-${llvm_v}"
append_to_profile "export CXX=clang++-${llvm_v}"

View File

@@ -77,10 +77,10 @@ const HAS_CCACHE = CCACHE !== null;
// On Windows, use clang-cl for MSVC compatibility
const CC_BASE = IS_WINDOWS
? findExecutable(["clang-cl.exe", "clang-cl"]) || "clang-cl"
: findExecutable(["clang-19", "clang"]) || "clang";
: findExecutable(["clang-21", "clang"]) || "clang";
const CXX_BASE = IS_WINDOWS
? findExecutable(["clang-cl.exe", "clang-cl"]) || "clang-cl"
: findExecutable(["clang++-19", "clang++"]) || "clang++";
: findExecutable(["clang++-21", "clang++"]) || "clang++";
const CC = HAS_CCACHE ? CCACHE : CC_BASE;
const CXX = HAS_CCACHE ? CCACHE : CXX_BASE;

View File

@@ -57,7 +57,11 @@ async function build(args) {
if (process.platform === "win32" && !process.env["VSINSTALLDIR"]) {
const shellPath = join(import.meta.dirname, "vs-shell.ps1");
const scriptPath = import.meta.filename;
return spawn("pwsh", ["-NoProfile", "-NoLogo", "-File", shellPath, process.argv0, scriptPath, ...args]);
// When cross-compiling to ARM64, tell vs-shell.ps1 to set up the x64_arm64 VS environment
const toolchainIdx = args.indexOf("--toolchain");
const requestedVsArch = toolchainIdx !== -1 && args[toolchainIdx + 1] === "windows-aarch64" ? "arm64" : undefined;
const env = requestedVsArch ? { ...process.env, BUN_VS_ARCH: requestedVsArch } : undefined;
return spawn("pwsh", ["-NoProfile", "-NoLogo", "-File", shellPath, process.argv0, scriptPath, ...args], { env });
}
if (isCI) {
@@ -92,21 +96,9 @@ async function build(args) {
generateOptions["--toolchain"] = toolchainPath;
}
// Windows ARM64: automatically set required options
// Windows ARM64: log detection (compiler is selected by CMake/toolchain)
if (isWindowsARM64) {
// Use clang-cl instead of MSVC cl.exe for proper ARM64 flag support
if (!generateOptions["-DCMAKE_C_COMPILER"]) {
generateOptions["-DCMAKE_C_COMPILER"] = "clang-cl";
}
if (!generateOptions["-DCMAKE_CXX_COMPILER"]) {
generateOptions["-DCMAKE_CXX_COMPILER"] = "clang-cl";
}
// Skip codegen by default since x64 bun crashes under WoW64 emulation
// Can be overridden with -DSKIP_CODEGEN=OFF once ARM64 bun is available
if (!generateOptions["-DSKIP_CODEGEN"]) {
generateOptions["-DSKIP_CODEGEN"] = "ON";
}
console.log("Windows ARM64 detected: using clang-cl and SKIP_CODEGEN=ON");
console.log("Windows ARM64 detected");
}
const generateArgs = Object.entries(generateOptions).flatMap(([flag, value]) =>

View File

@@ -12,7 +12,7 @@ PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
MODE="${1:-format}"
# Use LLVM_VERSION_MAJOR from environment or default to 21
LLVM_VERSION="${LLVM_VERSION_MAJOR:-19}"
LLVM_VERSION="${LLVM_VERSION_MAJOR:-21}"
# Ensure we have the specific clang-format version
CLANG_FORMAT="clang-format-${LLVM_VERSION}"

View File

@@ -5,7 +5,22 @@ $ErrorActionPreference = "Stop"
# Detect system architecture
$script:IsARM64 = [System.Runtime.InteropServices.RuntimeInformation]::OSArchitecture -eq [System.Runtime.InteropServices.Architecture]::Arm64
$script:VsArch = if ($script:IsARM64) { "arm64" } else { "amd64" }
# Allow overriding the target arch (useful for cross-compiling on x64 -> ARM64)
$script:VsArch = $null
if ($env:BUN_VS_ARCH) {
switch ($env:BUN_VS_ARCH.ToLowerInvariant()) {
"arm64" { $script:VsArch = "arm64" }
"aarch64" { $script:VsArch = "arm64" }
"amd64" { $script:VsArch = "amd64" }
"x64" { $script:VsArch = "amd64" }
default { throw "Invalid BUN_VS_ARCH: $env:BUN_VS_ARCH (expected arm64|amd64)" }
}
}
if (-not $script:VsArch) {
$script:VsArch = if ($script:IsARM64) { "arm64" } else { "amd64" }
}
if($env:VSINSTALLDIR -eq $null) {
Write-Host "Loading Visual Studio environment, this may take a second..."
@@ -17,17 +32,29 @@ if($env:VSINSTALLDIR -eq $null) {
$vsDir = (& $vswhere -prerelease -latest -property installationPath)
if ($vsDir -eq $null) {
$vsDir = Get-ChildItem -Path "C:\Program Files\Microsoft Visual Studio\2022" -Directory
# Check common VS installation paths
$searchPaths = @(
"C:\Program Files\Microsoft Visual Studio\2022",
"C:\Program Files (x86)\Microsoft Visual Studio\2022"
)
foreach ($searchPath in $searchPaths) {
if (Test-Path $searchPath) {
$vsDir = (Get-ChildItem -Path $searchPath -Directory | Select-Object -First 1).FullName
if ($vsDir -ne $null) { break }
}
}
if ($vsDir -eq $null) {
throw "Visual Studio directory not found."
}
$vsDir = $vsDir.FullName
}
Push-Location $vsDir
try {
$vsShell = (Join-Path -Path $vsDir -ChildPath "Common7\Tools\Launch-VsDevShell.ps1")
. $vsShell -Arch $script:VsArch -HostArch $script:VsArch
# Visual Studio's Launch-VsDevShell.ps1 only supports x86/amd64 for HostArch
# For ARM64 builds, use amd64 as HostArch since it can cross-compile to ARM64
$hostArch = if ($script:VsArch -eq "arm64") { "amd64" } else { $script:VsArch }
. $vsShell -Arch $script:VsArch -HostArch $hostArch
} finally {
Pop-Location
}
@@ -61,7 +88,7 @@ if ($args.Count -gt 0) {
$displayArgs += $arg
}
}
Write-Host "$ $command $displayArgs"
& $command $commandArgs
exit $LASTEXITCODE

View File

@@ -8,9 +8,9 @@ pkgs.mkShell rec {
# Core build tools (matching bootstrap.sh)
cmake
ninja
clang_19
llvm_19
lld_19
clang_21
llvm_21
lld_21
nodejs_24
bun
rustc
@@ -77,10 +77,10 @@ pkgs.mkShell rec {
];
shellHook = ''
export CC="${pkgs.lib.getExe pkgs.clang_19}"
export CXX="${pkgs.lib.getExe' pkgs.clang_19 "clang++"}"
export AR="${pkgs.llvm_19}/bin/llvm-ar"
export RANLIB="${pkgs.llvm_19}/bin/llvm-ranlib"
export CC="${pkgs.lib.getExe pkgs.clang_21}"
export CXX="${pkgs.lib.getExe' pkgs.clang_21 "clang++"}"
export AR="${pkgs.llvm_21}/bin/llvm-ar"
export RANLIB="${pkgs.llvm_21}/bin/llvm-ranlib"
export CMAKE_C_COMPILER="$CC"
export CMAKE_CXX_COMPILER="$CXX"
export CMAKE_AR="$AR"
@@ -88,7 +88,7 @@ pkgs.mkShell rec {
export CMAKE_SYSTEM_PROCESSOR=$(uname -m)
export TMPDIR=''${TMPDIR:-/tmp}
'' + pkgs.lib.optionalString pkgs.stdenv.isLinux ''
export LD="${pkgs.lib.getExe' pkgs.lld_19 "ld.lld"}"
export LD="${pkgs.lib.getExe' pkgs.lld_21 "ld.lld"}"
export NIX_CFLAGS_LINK="''${NIX_CFLAGS_LINK:+$NIX_CFLAGS_LINK }-fuse-ld=lld"
export LD_LIBRARY_PATH="${pkgs.lib.makeLibraryPath packages}''${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
'' + ''

View File

@@ -2,10 +2,7 @@
const Self = @This();
const safety_checks = bun.Environment.isDebug or bun.Environment.enable_asan;
#heap: *mimalloc.Heap,
thread_id: if (safety_checks) std.Thread.Id else void,
#heap: if (safety_checks) Owned(*DebugHeap) else *mimalloc.Heap,
/// Uses the default thread-local heap. This type is zero-sized.
///
@@ -23,18 +20,18 @@ pub const Default = struct {
///
/// This type is a `GenericAllocator`; see `src/allocators.zig`.
pub const Borrowed = struct {
#heap: *mimalloc.Heap,
#heap: BorrowedHeap,
pub fn allocator(self: Borrowed) std.mem.Allocator {
return .{ .ptr = self.#heap, .vtable = c_allocator_vtable };
return .{ .ptr = self.#heap, .vtable = &c_allocator_vtable };
}
pub fn getDefault() Borrowed {
return .{ .#heap = mimalloc.mi_heap_main() };
return .{ .#heap = getThreadHeap() };
}
pub fn gc(self: Borrowed) void {
mimalloc.mi_heap_collect(self.#heap, false);
mimalloc.mi_heap_collect(self.getMimallocHeap(), false);
}
pub fn helpCatchMemoryIssues(self: Borrowed) void {
@@ -44,17 +41,30 @@ pub const Borrowed = struct {
}
}
pub fn ownsPtr(self: Borrowed, ptr: *const anyopaque) bool {
return mimalloc.mi_heap_check_owned(self.getMimallocHeap(), ptr);
}
fn fromOpaque(ptr: *anyopaque) Borrowed {
return .{ .#heap = @ptrCast(@alignCast(ptr)) };
}
fn getMimallocHeap(self: Borrowed) *mimalloc.Heap {
return if (comptime safety_checks) self.#heap.inner else self.#heap;
}
fn assertThreadLock(self: Borrowed) void {
if (comptime safety_checks) self.#heap.thread_lock.assertLocked();
}
fn alignedAlloc(self: Borrowed, len: usize, alignment: Alignment) ?[*]u8 {
log("Malloc: {d}\n", .{len});
const heap = self.getMimallocHeap();
const ptr: ?*anyopaque = if (mimalloc.mustUseAlignedAlloc(alignment))
mimalloc.mi_heap_malloc_aligned(self.#heap, len, alignment.toByteUnits())
mimalloc.mi_heap_malloc_aligned(heap, len, alignment.toByteUnits())
else
mimalloc.mi_heap_malloc(self.#heap, len);
mimalloc.mi_heap_malloc(heap, len);
if (comptime bun.Environment.isDebug) {
const usable = mimalloc.mi_malloc_usable_size(ptr);
@@ -79,17 +89,42 @@ pub const Borrowed = struct {
}
};
const BorrowedHeap = if (safety_checks) *DebugHeap else *mimalloc.Heap;
const DebugHeap = struct {
inner: *mimalloc.Heap,
thread_lock: bun.safety.ThreadLock,
pub const deinit = void;
};
threadlocal var thread_heap: if (safety_checks) ?DebugHeap else void = if (safety_checks) null;
fn getThreadHeap() BorrowedHeap {
if (comptime !safety_checks) return mimalloc.mi_heap_get_default();
if (thread_heap == null) {
thread_heap = .{
.inner = mimalloc.mi_heap_get_default(),
.thread_lock = .initLocked(),
};
}
return &thread_heap.?;
}
const log = bun.Output.scoped(.mimalloc, .hidden);
pub fn allocator(self: Self) std.mem.Allocator {
self.assertThreadOwnership();
return self.borrow().allocator();
}
pub fn borrow(self: Self) Borrowed {
return .{ .#heap = self.#heap };
return .{ .#heap = if (comptime safety_checks) self.#heap.get() else self.#heap };
}
/// Internally, mimalloc calls mi_heap_get_default()
/// to get the default heap.
/// It uses pthread_getspecific to do that.
/// We can save those extra calls if we just do it once in here
pub fn getThreadLocalDefault() std.mem.Allocator {
if (bun.Environment.enable_asan) return bun.default_allocator;
return Borrowed.getDefault().allocator();
@@ -122,15 +157,22 @@ pub fn dumpStats(_: Self) void {
}
pub fn deinit(self: *Self) void {
mimalloc.mi_heap_destroy(self.#heap);
const mimalloc_heap = self.borrow().getMimallocHeap();
if (comptime safety_checks) {
self.#heap.deinit();
}
mimalloc.mi_heap_destroy(mimalloc_heap);
self.* = undefined;
}
pub fn init() Self {
return .{
.#heap = mimalloc.mi_heap_new() orelse bun.outOfMemory(),
.thread_id = if (safety_checks) std.Thread.getCurrentId() else {},
};
const mimalloc_heap = mimalloc.mi_heap_new() orelse bun.outOfMemory();
if (comptime !safety_checks) return .{ .#heap = mimalloc_heap };
const heap: Owned(*DebugHeap) = .new(.{
.inner = mimalloc_heap,
.thread_lock = .initLocked(),
});
return .{ .#heap = heap };
}
pub fn gc(self: Self) void {
@@ -141,16 +183,8 @@ pub fn helpCatchMemoryIssues(self: Self) void {
self.borrow().helpCatchMemoryIssues();
}
fn assertThreadOwnership(self: Self) void {
if (comptime safety_checks) {
const current_thread = std.Thread.getCurrentId();
if (current_thread != self.thread_id) {
std.debug.panic(
"MimallocArena used from wrong thread: arena belongs to thread {d}, but current thread is {d}",
.{ self.thread_id, current_thread },
);
}
}
pub fn ownsPtr(self: Self, ptr: *const anyopaque) bool {
return self.borrow().ownsPtr(ptr);
}
fn alignedAllocSize(ptr: [*]u8) usize {
@@ -159,10 +193,13 @@ fn alignedAllocSize(ptr: [*]u8) usize {
fn vtable_alloc(ptr: *anyopaque, len: usize, alignment: Alignment, _: usize) ?[*]u8 {
const self: Borrowed = .fromOpaque(ptr);
self.assertThreadLock();
return self.alignedAlloc(len, alignment);
}
fn vtable_resize(_: *anyopaque, buf: []u8, _: Alignment, new_len: usize, _: usize) bool {
fn vtable_resize(ptr: *anyopaque, buf: []u8, _: Alignment, new_len: usize, _: usize) bool {
const self: Borrowed = .fromOpaque(ptr);
self.assertThreadLock();
return mimalloc.mi_expand(buf.ptr, new_len) != null;
}
@@ -186,17 +223,39 @@ fn vtable_free(
}
}
/// Attempt to expand or shrink memory, allowing relocation.
///
/// `memory.len` must equal the length requested from the most recent
/// successful call to `alloc`, `resize`, or `remap`. `alignment` must
/// equal the same value that was passed as the `alignment` parameter to
/// the original `alloc` call.
///
/// A non-`null` return value indicates the resize was successful. The
/// allocation may have same address, or may have been relocated. In either
/// case, the allocation now has size of `new_len`. A `null` return value
/// indicates that the resize would be equivalent to allocating new memory,
/// copying the bytes from the old memory, and then freeing the old memory.
/// In such case, it is more efficient for the caller to perform the copy.
///
/// `new_len` must be greater than zero.
///
/// `ret_addr` is optionally provided as the first return address of the
/// allocation call stack. If the value is `0` it means no return address
/// has been provided.
fn vtable_remap(ptr: *anyopaque, buf: []u8, alignment: Alignment, new_len: usize, _: usize) ?[*]u8 {
const self: Borrowed = .fromOpaque(ptr);
const value = mimalloc.mi_heap_realloc_aligned(self.#heap, buf.ptr, new_len, alignment.toByteUnits());
self.assertThreadLock();
const heap = self.getMimallocHeap();
const aligned_size = alignment.toByteUnits();
const value = mimalloc.mi_heap_realloc_aligned(heap, buf.ptr, new_len, aligned_size);
return @ptrCast(value);
}
pub fn isInstance(alloc: std.mem.Allocator) bool {
return alloc.vtable == c_allocator_vtable;
return alloc.vtable == &c_allocator_vtable;
}
const c_allocator_vtable = &std.mem.Allocator.VTable{
const c_allocator_vtable = std.mem.Allocator.VTable{
.alloc = vtable_alloc,
.resize = vtable_resize,
.remap = vtable_remap,
@@ -209,3 +268,5 @@ const Alignment = std.mem.Alignment;
const bun = @import("bun");
const assert = bun.assert;
const mimalloc = bun.mimalloc;
const Owned = bun.ptr.Owned;
const safety_checks = bun.Environment.ci_assert;

View File

@@ -60,29 +60,17 @@ pub const Heap = opaque {
return mi_heap_realloc(self, p, newsize);
}
pub fn isOwned(self: *Heap, p: ?*const anyopaque) bool {
return mi_heap_contains(self, p);
pub fn isOwned(self: *Heap, p: ?*anyopaque) bool {
return mi_heap_check_owned(self, p);
}
};
pub extern fn mi_heap_new() ?*Heap;
pub extern fn mi_heap_delete(heap: *Heap) void;
pub extern fn mi_heap_destroy(heap: *Heap) void;
pub extern fn mi_heap_set_default(heap: *Heap) *Heap;
pub extern fn mi_heap_get_default() *Heap;
pub extern fn mi_heap_get_backing() *Heap;
pub extern fn mi_heap_collect(heap: *Heap, force: bool) void;
pub extern fn mi_heap_main() *Heap;
// Thread-local heap (theap) API - new in mimalloc v3
pub const THeap = opaque {};
pub extern fn mi_theap_get_default() *THeap;
pub extern fn mi_theap_set_default(theap: *THeap) *THeap;
pub extern fn mi_theap_collect(theap: *THeap, force: bool) void;
pub extern fn mi_theap_malloc(theap: *THeap, size: usize) ?*anyopaque;
pub extern fn mi_theap_zalloc(theap: *THeap, size: usize) ?*anyopaque;
pub extern fn mi_theap_calloc(theap: *THeap, count: usize, size: usize) ?*anyopaque;
pub extern fn mi_theap_malloc_small(theap: *THeap, size: usize) ?*anyopaque;
pub extern fn mi_theap_malloc_aligned(theap: *THeap, size: usize, alignment: usize) ?*anyopaque;
pub extern fn mi_theap_realloc(theap: *THeap, p: ?*anyopaque, newsize: usize) ?*anyopaque;
pub extern fn mi_theap_destroy(theap: *THeap) void;
pub extern fn mi_heap_theap(heap: *Heap) *THeap;
pub extern fn mi_heap_malloc(heap: *Heap, size: usize) ?*anyopaque;
pub extern fn mi_heap_zalloc(heap: *Heap, size: usize) ?*anyopaque;
pub extern fn mi_heap_calloc(heap: *Heap, count: usize, size: usize) ?*anyopaque;
@@ -114,7 +102,8 @@ pub extern fn mi_heap_rezalloc_aligned(heap: *Heap, p: ?*anyopaque, newsize: usi
pub extern fn mi_heap_rezalloc_aligned_at(heap: *Heap, p: ?*anyopaque, newsize: usize, alignment: usize, offset: usize) ?*anyopaque;
pub extern fn mi_heap_recalloc_aligned(heap: *Heap, p: ?*anyopaque, newcount: usize, size: usize, alignment: usize) ?*anyopaque;
pub extern fn mi_heap_recalloc_aligned_at(heap: *Heap, p: ?*anyopaque, newcount: usize, size: usize, alignment: usize, offset: usize) ?*anyopaque;
pub extern fn mi_heap_contains(heap: *const Heap, p: ?*const anyopaque) bool;
pub extern fn mi_heap_contains_block(heap: *Heap, p: *const anyopaque) bool;
pub extern fn mi_heap_check_owned(heap: *Heap, p: *const anyopaque) bool;
pub extern fn mi_check_owned(p: ?*const anyopaque) bool;
pub const struct_mi_heap_area_s = extern struct {
blocks: ?*anyopaque,

View File

@@ -245,6 +245,16 @@ pub const All = struct {
}
pub fn getTimeout(this: *All, spec: *timespec, vm: *VirtualMachine) bool {
// On POSIX, if there are pending immediate tasks, use a zero timeout
// so epoll/kqueue returns immediately without the overhead of writing
// to the eventfd via wakeup().
if (comptime Environment.isPosix) {
if (vm.event_loop.immediate_tasks.items.len > 0) {
spec.* = .{ .nsec = 0, .sec = 0 };
return true;
}
}
var maybe_now: ?timespec = null;
while (this.timers.peek()) |min| {
const now = maybe_now orelse now: {
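
At the syscall level, the zero timeout produced here means the poll call returns without sleeping. A minimal Linux-only sketch, assuming epoll_pwait2 is available (kernel 5.11+ with a recent glibc):

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/epoll.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    int epfd = epoll_create1(EPOLL_CLOEXEC);
    struct epoll_event events[16];
    struct timespec zero = { .tv_sec = 0, .tv_nsec = 0 };

    /* Nothing is registered, so this returns 0 immediately instead of blocking. */
    int n = epoll_pwait2(epfd, events, 16, &zero, NULL);
    printf("ready fds: %d\n", n);

    close(epfd);
    return 0;
}
```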

View File

@@ -256,7 +256,7 @@ pub fn NewSocket(comptime ssl: bool) type {
jsc.markBinding(@src());
if (this.socket.isDetached()) return;
const handlers = this.getHandlers();
log("onTimeout {s}", .{if (handlers.is_server) "S" else "C"});
log("onTimeout {s}", .{if (handlers.mode == .server) "S" else "C"});
const callback = handlers.onTimeout;
if (callback == .zero or this.flags.finalizing) return;
if (handlers.vm.isShuttingDown()) {
@@ -281,7 +281,7 @@ pub fn NewSocket(comptime ssl: bool) type {
pub fn handleConnectError(this: *This, errno: c_int) bun.JSError!void {
const handlers = this.getHandlers();
log("onConnectError {s} ({d}, {d})", .{ if (handlers.is_server) "S" else "C", errno, this.ref_count.get() });
log("onConnectError {s} ({d}, {d})", .{ if (handlers.mode == .server) "S" else "C", errno, this.ref_count.get() });
// Ensure the socket is still alive for any defer's we have
this.ref();
defer this.deref();
@@ -397,7 +397,8 @@ pub fn NewSocket(comptime ssl: bool) type {
}
pub fn isServer(this: *const This) bool {
return this.getHandlers().is_server;
const handlers = this.getHandlers();
return handlers.mode.isServer();
}
pub fn onOpen(this: *This, socket: Socket) void {
@@ -502,7 +503,7 @@ pub fn NewSocket(comptime ssl: bool) type {
jsc.markBinding(@src());
if (this.socket.isDetached()) return;
const handlers = this.getHandlers();
log("onEnd {s}", .{if (handlers.is_server) "S" else "C"});
log("onEnd {s}", .{if (handlers.mode == .server) "S" else "C"});
// Ensure the socket remains alive until this is finished
this.ref();
defer this.deref();
@@ -534,7 +535,7 @@ pub fn NewSocket(comptime ssl: bool) type {
this.socket = s;
if (this.socket.isDetached()) return;
const handlers = this.getHandlers();
log("onHandshake {s} ({d})", .{ if (handlers.is_server) "S" else "C", success });
log("onHandshake {s} ({d})", .{ if (handlers.mode == .server) "S" else "C", success });
const authorized = if (success == 1) true else false;
@@ -571,7 +572,7 @@ pub fn NewSocket(comptime ssl: bool) type {
result = callback.call(globalObject, this_value, &[_]JSValue{this_value}) catch |err| globalObject.takeException(err);
// only call onOpen once for clients
if (!handlers.is_server) {
if (handlers.mode != .server) {
// clean onOpen callback so only called in the first handshake and not in every renegotiation
// on servers this would require a different approach but it's not needed because our servers will not call handshake multiple times
// servers don't support renegotiation
@@ -600,7 +601,7 @@ pub fn NewSocket(comptime ssl: bool) type {
pub fn onClose(this: *This, _: Socket, err: c_int, _: ?*anyopaque) bun.JSError!void {
jsc.markBinding(@src());
const handlers = this.getHandlers();
log("onClose {s}", .{if (handlers.is_server) "S" else "C"});
log("onClose {s}", .{if (handlers.mode == .server) "S" else "C"});
this.detachNativeCallback();
this.socket.detach();
defer this.deref();
@@ -648,7 +649,7 @@ pub fn NewSocket(comptime ssl: bool) type {
this.socket = s;
if (this.socket.isDetached()) return;
const handlers = this.getHandlers();
log("onData {s} ({d})", .{ if (handlers.is_server) "S" else "C", data.len });
log("onData {s} ({d})", .{ if (handlers.mode == .server) "S" else "C", data.len });
if (this.native_callback.onData(data)) return;
const callback = handlers.onData;
@@ -691,7 +692,7 @@ pub fn NewSocket(comptime ssl: bool) type {
pub fn getListener(this: *This, _: *jsc.JSGlobalObject) JSValue {
const handlers = this.handlers orelse return .js_undefined;
if (!handlers.is_server or this.socket.isDetached()) {
if (handlers.mode != .server or this.socket.isDetached()) {
return .js_undefined;
}
@@ -1352,7 +1353,7 @@ pub fn NewSocket(comptime ssl: bool) type {
};
const this_handlers = this.getHandlers();
const handlers = try Handlers.fromJS(globalObject, socket_obj, this_handlers.is_server);
const handlers = try Handlers.fromJS(globalObject, socket_obj, this_handlers.mode == .server);
this_handlers.deinit();
this_handlers.* = handlers;
@@ -1380,6 +1381,9 @@ pub fn NewSocket(comptime ssl: bool) type {
if (this.socket.isDetached() or this.socket.isNamedPipe()) {
return .js_undefined;
}
if (this.isServer()) {
return globalObject.throw("Server-side upgradeTLS is not supported. Use upgradeDuplexToTLS with isServer: true instead.", .{});
}
const args = callframe.arguments_old(1);
if (args.len < 1) {
@@ -1571,7 +1575,7 @@ pub fn NewSocket(comptime ssl: bool) type {
this.socket.detach();
// start TLS handshake after we set extension on the socket
new_socket.startTLS(!handlers_ptr.is_server);
new_socket.startTLS(handlers_ptr.mode != .server);
success = true;
return array;
@@ -1754,6 +1758,23 @@ pub fn NewWrappedHandler(comptime tls: bool) type {
};
}
/// Unified socket mode replacing the old is_server bool + TLSMode pair.
pub const SocketMode = enum {
/// Default — TLS client or non-TLS socket
client,
/// Listener-owned server. TLS (if any) configured at the listener level.
server,
/// Duplex upgraded to TLS server role. Not listener-owned —
/// markInactive uses client lifecycle path.
duplex_server,
/// Returns true for any mode that acts as a TLS server (ALPN, handshake direction).
/// Both .server and .duplex_server present as server to peers.
pub fn isServer(this: SocketMode) bool {
return this == .server or this == .duplex_server;
}
};
pub const DuplexUpgradeContext = struct {
upgrade: uws.UpgradedDuplex,
// We only use a TLS socket and not a raw socket when upgrading a Duplex; Duplex doesn't support socketpairs
@@ -1764,6 +1785,7 @@ pub const DuplexUpgradeContext = struct {
task_event: EventState = .StartTLS,
ssl_config: ?jsc.API.ServerConfig.SSLConfig,
is_open: bool = false,
#mode: SocketMode = .client,
pub const EventState = enum(u8) {
StartTLS,
@@ -1846,7 +1868,8 @@ pub const DuplexUpgradeContext = struct {
switch (this.task_event) {
.StartTLS => {
if (this.ssl_config) |config| {
this.upgrade.startTLS(config, true) catch |err| {
log("DuplexUpgradeContext.startTLS mode={s}", .{@tagName(this.#mode)});
this.upgrade.startTLS(config, this.#mode == .client) catch |err| {
switch (err) {
error.OutOfMemory => {
bun.outOfMemory();
@@ -1914,8 +1937,15 @@ pub fn jsUpgradeDuplexToTLS(globalObject: *jsc.JSGlobalObject, callframe: *jsc.C
return globalObject.throw("Expected \"socket\" option", .{});
};
const is_server = false; // A duplex socket is always handled as a client
const handlers = try Handlers.fromJS(globalObject, socket_obj, is_server);
var is_server = false;
if (try opts.getTruthy(globalObject, "isServer")) |is_server_val| {
is_server = is_server_val.toBoolean();
}
// Note: Handlers.fromJS is_server=false because these handlers are standalone
// allocations (not embedded in a Listener). The mode field on Handlers
// controls lifecycle (markInactive expects a Listener parent when .server).
// The TLS direction (client vs server) is controlled by DuplexUpgradeContext.mode.
const handlers = try Handlers.fromJS(globalObject, socket_obj, false);
var ssl_opts: ?jsc.API.ServerConfig.SSLConfig = null;
if (try opts.getTruthy(globalObject, "tls")) |tls| {
@@ -1937,6 +1967,9 @@ pub fn jsUpgradeDuplexToTLS(globalObject: *jsc.JSGlobalObject, callframe: *jsc.C
const handlers_ptr = bun.handleOom(handlers.vm.allocator.create(Handlers));
handlers_ptr.* = handlers;
// Set mode to duplex_server so TLSSocket.isServer() returns true for ALPN server mode
// without affecting markInactive lifecycle (which requires a Listener parent).
handlers_ptr.mode = if (is_server) .duplex_server else .client;
var tls = bun.new(TLSSocket, .{
.ref_count = .init(),
.handlers = handlers_ptr,
@@ -1963,6 +1996,7 @@ pub fn jsUpgradeDuplexToTLS(globalObject: *jsc.JSGlobalObject, callframe: *jsc.C
.vm = globalObject.bunVM(),
.task = undefined,
.ssl_config = socket_config.*,
.#mode = if (is_server) .duplex_server else .client,
});
tls.ref();

View File

@@ -15,7 +15,7 @@ binary_type: BinaryType = .Buffer,
vm: *jsc.VirtualMachine,
globalObject: *jsc.JSGlobalObject,
active_connections: u32 = 0,
is_server: bool,
mode: SocketMode = .client,
promise: jsc.Strong.Optional = .empty,
protection_count: if (Environment.ci_assert) u32 else void = if (Environment.ci_assert) 0,
@@ -81,7 +81,7 @@ pub fn markInactive(this: *Handlers) void {
Listener.log("markInactive", .{});
this.active_connections -= 1;
if (this.active_connections == 0) {
if (this.is_server) {
if (this.mode == .server) {
const listen_socket: *Listener = @fieldParentPtr("handlers", this);
// allow it to be GC'd once the last connection is closed and it's not listening anymore
if (listen_socket.listener == .none) {
@@ -133,7 +133,7 @@ pub fn fromGenerated(
var result: Handlers = .{
.vm = globalObject.bunVM(),
.globalObject = globalObject,
.is_server = is_server,
.mode = if (is_server) .server else .client,
.binary_type = switch (generated.binary_type) {
.arraybuffer => .ArrayBuffer,
.buffer => .Buffer,
@@ -217,7 +217,7 @@ pub fn clone(this: *const Handlers) Handlers {
.vm = this.vm,
.globalObject = this.globalObject,
.binary_type = this.binary_type,
.is_server = this.is_server,
.mode = this.mode,
};
inline for (callback_fields) |field| {
@field(result, field) = @field(this, field);
@@ -346,6 +346,7 @@ const strings = bun.strings;
const uws = bun.uws;
const Listener = bun.api.Listener;
const SSLConfig = bun.api.ServerConfig.SSLConfig;
const SocketMode = bun.api.socket.SocketMode;
const jsc = bun.jsc;
const JSValue = jsc.JSValue;

View File

@@ -91,7 +91,7 @@ pub fn reload(this: *Listener, globalObject: *jsc.JSGlobalObject, callframe: *js
return globalObject.throw("Expected \"socket\" object", .{});
};
const handlers = try Handlers.fromJS(globalObject, socket_obj, this.handlers.is_server);
const handlers = try Handlers.fromJS(globalObject, socket_obj, this.handlers.mode == .server);
this.handlers.deinit();
this.handlers = handlers;
@@ -773,7 +773,7 @@ pub fn connectInner(globalObject: *jsc.JSGlobalObject, prev_maybe_tcp: ?*TCPSock
const handlers_ptr = bun.handleOom(handlers.vm.allocator.create(Handlers));
handlers_ptr.* = handlers.*;
handlers_ptr.is_server = false;
handlers_ptr.mode = .client;
var promise = jsc.JSPromise.create(globalObject);
const promise_value = promise.toJS();

View File

@@ -173,8 +173,10 @@ pub fn SSLWrapper(comptime T: type) type {
// flush buffered data and returns amount of pending data to write
pub fn flush(this: *This) usize {
const ssl = this.ssl orelse return 0;
// handleTraffic may trigger a close callback which frees ssl,
// so we must not capture the ssl pointer before calling it.
this.handleTraffic();
const ssl = this.ssl orelse return 0;
const pending = BoringSSL.BIO_ctrl_pending(BoringSSL.SSL_get_wbio(ssl));
if (pending > 0) return @intCast(pending);
return 0;
@@ -428,6 +430,8 @@ pub fn SSLWrapper(comptime T: type) type {
if (read > 0) {
log("triggering data callback (read {d})", .{read});
this.triggerDataCallback(buffer[0..read]);
// The data callback may have closed the connection
if (this.ssl == null or this.flags.closed_notified) return false;
}
this.triggerCloseCallback();
return false;

View File

@@ -69,6 +69,7 @@ static uint8_t x86_cpu_features()
#if CPU(ARM64)
#if OS(WINDOWS)
#include <windows.h>
#elif OS(MACOS)
#include <sys/sysctl.h>
#elif OS(LINUX)
@@ -81,7 +82,18 @@ static uint8_t aarch64_cpu_features()
uint8_t features = 0;
#if OS(WINDOWS)
#pragma error "TODO: Implement AArch64 CPU features for Windows"
// FP is mandatory on AArch64 — no separate PF_ constant exists for it
features |= 1 << static_cast<uint8_t>(AArch64CPUFeature::fp);
if (IsProcessorFeaturePresent(PF_ARM_NEON_INSTRUCTIONS_AVAILABLE))
features |= 1 << static_cast<uint8_t>(AArch64CPUFeature::neon);
if (IsProcessorFeaturePresent(PF_ARM_V8_CRYPTO_INSTRUCTIONS_AVAILABLE))
features |= 1 << static_cast<uint8_t>(AArch64CPUFeature::aes);
if (IsProcessorFeaturePresent(PF_ARM_V8_CRC32_INSTRUCTIONS_AVAILABLE))
features |= 1 << static_cast<uint8_t>(AArch64CPUFeature::crc32);
if (IsProcessorFeaturePresent(PF_ARM_V81_ATOMIC_INSTRUCTIONS_AVAILABLE))
features |= 1 << static_cast<uint8_t>(AArch64CPUFeature::atomics);
if (IsProcessorFeaturePresent(PF_ARM_SVE_INSTRUCTIONS_AVAILABLE))
features |= 1 << static_cast<uint8_t>(AArch64CPUFeature::sve);
#elif OS(MACOS)
int value = 0;
size_t size = sizeof(value);

View File

@@ -39,7 +39,7 @@ static WebCore::ExceptionOr<void> encode(VM& vm, const WTF::BitSet<256>& doNotEs
// 4-d-ii-1. Let V be the code unit value of C.
char32_t codePoint;
if (!U16_IS_LEAD(character))
codePoint = character;
codePoint = static_cast<char32_t>(character);
else {
// 4-d-iii. Else,
// 4-d-iii-1. Increase k by 1.

View File

@@ -55,6 +55,10 @@ template<typename CollectionType, typename KeyType> static auto findInSortedPair
inline void checkEncodingTableInvariants() {}
#endif
// LLVM 21+ -Wcharacter-conversion flags intentional char32_t/char16_t comparisons
// used for Unicode code point range checks in findFirstInSortedPairs.
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wcharacter-conversion"
struct CompareFirst {
template<typename TypeA, typename TypeB> bool operator()(const TypeA& a, const TypeB& b)
{
@@ -132,5 +136,6 @@ template<typename CollectionType, typename KeyType> static auto findInSortedPair
}
return std::ranges::equal_range(collection, makeFirstAdapter(key), CompareFirst {});
}
#pragma clang diagnostic pop
}

View File

@@ -119,6 +119,7 @@ JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_swap16);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_swap32);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_swap64);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_toString);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_slice);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_write);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_writeBigInt64LE);
JSC_DECLARE_HOST_FUNCTION(jsBufferPrototypeFunction_writeBigInt64BE);
@@ -1879,6 +1880,103 @@ bool inline parseArrayIndex(JSC::ThrowScope& scope, JSC::JSGlobalObject* globalO
return true;
}
static ALWAYS_INLINE size_t adjustSliceOffsetInt32(int32_t offset, size_t length)
{
if (offset < 0) {
int64_t adjusted = static_cast<int64_t>(offset) + static_cast<int64_t>(length);
return adjusted > 0 ? static_cast<size_t>(adjusted) : 0;
}
return static_cast<size_t>(offset) < length ? static_cast<size_t>(offset) : length;
}
static ALWAYS_INLINE size_t adjustSliceOffsetDouble(double offset, size_t length)
{
if (std::isnan(offset)) {
return 0;
}
offset = std::trunc(offset);
if (offset == 0) {
return 0;
} else if (offset < 0) {
double adjusted = offset + static_cast<double>(length);
return adjusted > 0 ? static_cast<size_t>(adjusted) : 0;
} else {
return offset < static_cast<double>(length) ? static_cast<size_t>(offset) : length;
}
}
static JSC::EncodedJSValue jsBufferPrototypeFunction_sliceBody(JSC::JSGlobalObject* lexicalGlobalObject, JSC::CallFrame* callFrame, typename IDLOperation<JSArrayBufferView>::ClassParameter castedThis)
{
auto& vm = JSC::getVM(lexicalGlobalObject);
auto throwScope = DECLARE_THROW_SCOPE(vm);
auto* globalObject = defaultGlobalObject(lexicalGlobalObject);
size_t byteLength = castedThis->byteLength();
size_t byteOffset = castedThis->byteOffset();
size_t startOffset = 0;
size_t endOffset = byteLength;
unsigned argCount = callFrame->argumentCount();
if (argCount > 0) {
JSValue startArg = callFrame->uncheckedArgument(0);
if (startArg.isInt32()) {
startOffset = adjustSliceOffsetInt32(startArg.asInt32(), byteLength);
} else if (!startArg.isUndefined()) {
double startD = startArg.toNumber(lexicalGlobalObject);
RETURN_IF_EXCEPTION(throwScope, {});
startOffset = adjustSliceOffsetDouble(startD, byteLength);
}
}
if (argCount > 1) {
JSValue endArg = callFrame->uncheckedArgument(1);
if (endArg.isInt32()) {
endOffset = adjustSliceOffsetInt32(endArg.asInt32(), byteLength);
} else if (!endArg.isUndefined()) {
double endD = endArg.toNumber(lexicalGlobalObject);
RETURN_IF_EXCEPTION(throwScope, {});
endOffset = adjustSliceOffsetDouble(endD, byteLength);
}
}
size_t newLength = endOffset > startOffset ? endOffset - startOffset : 0;
if (castedThis->isDetached()) [[unlikely]] {
throwVMTypeError(lexicalGlobalObject, throwScope, "Buffer is detached"_s);
return {};
}
RefPtr<ArrayBuffer> buffer = castedThis->possiblySharedBuffer();
if (!buffer) {
throwOutOfMemoryError(globalObject, throwScope);
return {};
}
if (castedThis->isResizableOrGrowableShared()) {
auto* subclassStructure = globalObject->JSResizableOrGrowableSharedBufferSubclassStructure();
auto* uint8Array = JSC::JSUint8Array::create(lexicalGlobalObject, subclassStructure, WTF::move(buffer), byteOffset + startOffset, newLength);
RETURN_IF_EXCEPTION(throwScope, {});
if (!uint8Array) [[unlikely]] {
throwOutOfMemoryError(globalObject, throwScope);
return {};
}
RELEASE_AND_RETURN(throwScope, JSC::JSValue::encode(uint8Array));
}
auto* subclassStructure = globalObject->JSBufferSubclassStructure();
auto* uint8Array = JSC::JSUint8Array::create(lexicalGlobalObject, subclassStructure, WTF::move(buffer), byteOffset + startOffset, newLength);
RETURN_IF_EXCEPTION(throwScope, {});
if (!uint8Array) [[unlikely]] {
throwOutOfMemoryError(globalObject, throwScope);
return {};
}
RELEASE_AND_RETURN(throwScope, JSC::JSValue::encode(uint8Array));
}
// https://github.com/nodejs/node/blob/v22.9.0/lib/buffer.js#L834
// using byteLength and byte offsets here is intentional
static JSC::EncodedJSValue jsBufferPrototypeFunction_toStringBody(JSC::JSGlobalObject* lexicalGlobalObject, JSC::CallFrame* callFrame, typename IDLOperation<JSArrayBufferView>::ClassParameter castedThis)
@@ -2430,6 +2528,11 @@ JSC_DEFINE_HOST_FUNCTION(jsBufferPrototypeFunction_swap64, (JSGlobalObject * lex
return IDLOperation<JSArrayBufferView>::call<jsBufferPrototypeFunction_swap64Body>(*lexicalGlobalObject, *callFrame, "swap64");
}
JSC_DEFINE_HOST_FUNCTION(jsBufferPrototypeFunction_slice, (JSGlobalObject * lexicalGlobalObject, CallFrame* callFrame))
{
return IDLOperation<JSArrayBufferView>::call<jsBufferPrototypeFunction_sliceBody>(*lexicalGlobalObject, *callFrame, "slice");
}
JSC_DEFINE_HOST_FUNCTION(jsBufferPrototypeFunction_toString, (JSGlobalObject * lexicalGlobalObject, CallFrame* callFrame))
{
return IDLOperation<JSArrayBufferView>::call<jsBufferPrototypeFunction_toStringBody>(*lexicalGlobalObject, *callFrame, "toString");
@@ -2711,8 +2814,8 @@ static const HashTableValue JSBufferPrototypeTableValues[]
{ "readUIntBE"_s, static_cast<unsigned>(JSC::PropertyAttribute::Builtin), NoIntrinsic, { HashTableValue::BuiltinGeneratorType, jsBufferPrototypeReadUIntBECodeGenerator, 1 } },
{ "readUIntLE"_s, static_cast<unsigned>(JSC::PropertyAttribute::Builtin), NoIntrinsic, { HashTableValue::BuiltinGeneratorType, jsBufferPrototypeReadUIntLECodeGenerator, 1 } },
{ "slice"_s, static_cast<unsigned>(JSC::PropertyAttribute::Builtin), NoIntrinsic, { HashTableValue::BuiltinGeneratorType, jsBufferPrototypeSliceCodeGenerator, 2 } },
{ "subarray"_s, static_cast<unsigned>(JSC::PropertyAttribute::Builtin), NoIntrinsic, { HashTableValue::BuiltinGeneratorType, jsBufferPrototypeSliceCodeGenerator, 2 } },
{ "slice"_s, static_cast<unsigned>(JSC::PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsBufferPrototypeFunction_slice, 2 } },
{ "subarray"_s, static_cast<unsigned>(JSC::PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsBufferPrototypeFunction_slice, 2 } },
{ "swap16"_s, static_cast<unsigned>(JSC::PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsBufferPrototypeFunction_swap16, 0 } },
{ "swap32"_s, static_cast<unsigned>(JSC::PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsBufferPrototypeFunction_swap32, 0 } },
{ "swap64"_s, static_cast<unsigned>(JSC::PropertyAttribute::Function), NoIntrinsic, { HashTableValue::NativeFunctionType, jsBufferPrototypeFunction_swap64, 0 } },

View File

@@ -136,23 +136,23 @@ private:
bool load_functions()
{
CFRelease = (void (*)(CFTypeRef))dlsym(cf_handle, "CFRelease");
CFStringCreateWithCString = (CFStringRef(*)(CFAllocatorRef, const char*, CFStringEncoding))dlsym(cf_handle, "CFStringCreateWithCString");
CFDataCreate = (CFDataRef(*)(CFAllocatorRef, const UInt8*, CFIndex))dlsym(cf_handle, "CFDataCreate");
CFStringCreateWithCString = (CFStringRef (*)(CFAllocatorRef, const char*, CFStringEncoding))dlsym(cf_handle, "CFStringCreateWithCString");
CFDataCreate = (CFDataRef (*)(CFAllocatorRef, const UInt8*, CFIndex))dlsym(cf_handle, "CFDataCreate");
CFDataGetBytePtr = (const UInt8* (*)(CFDataRef))dlsym(cf_handle, "CFDataGetBytePtr");
CFDataGetLength = (CFIndex(*)(CFDataRef))dlsym(cf_handle, "CFDataGetLength");
CFDictionaryCreateMutable = (CFMutableDictionaryRef(*)(CFAllocatorRef, CFIndex, const CFDictionaryKeyCallBacks*, const CFDictionaryValueCallBacks*))dlsym(cf_handle, "CFDictionaryCreateMutable");
CFDataGetLength = (CFIndex (*)(CFDataRef))dlsym(cf_handle, "CFDataGetLength");
CFDictionaryCreateMutable = (CFMutableDictionaryRef (*)(CFAllocatorRef, CFIndex, const CFDictionaryKeyCallBacks*, const CFDictionaryValueCallBacks*))dlsym(cf_handle, "CFDictionaryCreateMutable");
CFDictionaryAddValue = (void (*)(CFMutableDictionaryRef, const void*, const void*))dlsym(cf_handle, "CFDictionaryAddValue");
CFStringGetCString = (Boolean(*)(CFStringRef, char*, CFIndex, CFStringEncoding))dlsym(cf_handle, "CFStringGetCString");
CFStringGetCString = (Boolean (*)(CFStringRef, char*, CFIndex, CFStringEncoding))dlsym(cf_handle, "CFStringGetCString");
CFStringGetCStringPtr = (const char* (*)(CFStringRef, CFStringEncoding))dlsym(cf_handle, "CFStringGetCStringPtr");
CFStringGetLength = (CFIndex(*)(CFStringRef))dlsym(cf_handle, "CFStringGetLength");
CFStringGetMaximumSizeForEncoding = (CFIndex(*)(CFIndex, CFStringEncoding))dlsym(cf_handle, "CFStringGetMaximumSizeForEncoding");
CFStringGetLength = (CFIndex (*)(CFStringRef))dlsym(cf_handle, "CFStringGetLength");
CFStringGetMaximumSizeForEncoding = (CFIndex (*)(CFIndex, CFStringEncoding))dlsym(cf_handle, "CFStringGetMaximumSizeForEncoding");
SecItemAdd = (OSStatus(*)(CFDictionaryRef, CFTypeRef*))dlsym(handle, "SecItemAdd");
SecItemCopyMatching = (OSStatus(*)(CFDictionaryRef, CFTypeRef*))dlsym(handle, "SecItemCopyMatching");
SecItemUpdate = (OSStatus(*)(CFDictionaryRef, CFDictionaryRef))dlsym(handle, "SecItemUpdate");
SecItemDelete = (OSStatus(*)(CFDictionaryRef))dlsym(handle, "SecItemDelete");
SecCopyErrorMessageString = (CFStringRef(*)(OSStatus, void*))dlsym(handle, "SecCopyErrorMessageString");
SecAccessCreate = (OSStatus(*)(CFStringRef, CFArrayRef, SecAccessRef*))dlsym(handle, "SecAccessCreate");
SecItemAdd = (OSStatus (*)(CFDictionaryRef, CFTypeRef*))dlsym(handle, "SecItemAdd");
SecItemCopyMatching = (OSStatus (*)(CFDictionaryRef, CFTypeRef*))dlsym(handle, "SecItemCopyMatching");
SecItemUpdate = (OSStatus (*)(CFDictionaryRef, CFDictionaryRef))dlsym(handle, "SecItemUpdate");
SecItemDelete = (OSStatus (*)(CFDictionaryRef))dlsym(handle, "SecItemDelete");
SecCopyErrorMessageString = (CFStringRef (*)(OSStatus, void*))dlsym(handle, "SecCopyErrorMessageString");
SecAccessCreate = (OSStatus (*)(CFStringRef, CFArrayRef, SecAccessRef*))dlsym(handle, "SecAccessCreate");
return CFRelease && CFStringCreateWithCString && CFDataCreate && CFDataGetBytePtr && CFDataGetLength && CFDictionaryCreateMutable && CFDictionaryAddValue && SecItemAdd && SecItemCopyMatching && SecItemUpdate && SecItemDelete && SecCopyErrorMessageString && SecAccessCreate && CFStringGetCString && CFStringGetCStringPtr && CFStringGetLength && CFStringGetMaximumSizeForEncoding;
}

View File

@@ -199,19 +199,19 @@ private:
g_free = (void (*)(gpointer))dlsym(glib_handle, "g_free");
g_hash_table_new = (GHashTable * (*)(void*, void*)) dlsym(glib_handle, "g_hash_table_new");
g_hash_table_destroy = (void (*)(GHashTable*))dlsym(glib_handle, "g_hash_table_destroy");
g_hash_table_lookup = (gpointer(*)(GHashTable*, gpointer))dlsym(glib_handle, "g_hash_table_lookup");
g_hash_table_lookup = (gpointer (*)(GHashTable*, gpointer))dlsym(glib_handle, "g_hash_table_lookup");
g_hash_table_insert = (void (*)(GHashTable*, gpointer, gpointer))dlsym(glib_handle, "g_hash_table_insert");
g_list_free = (void (*)(GList*))dlsym(glib_handle, "g_list_free");
g_list_free_full = (void (*)(GList*, void (*)(gpointer)))dlsym(glib_handle, "g_list_free_full");
g_str_hash = (guint(*)(gpointer))dlsym(glib_handle, "g_str_hash");
g_str_equal = (gboolean(*)(gpointer, gpointer))dlsym(glib_handle, "g_str_equal");
g_str_hash = (guint (*)(gpointer))dlsym(glib_handle, "g_str_hash");
g_str_equal = (gboolean (*)(gpointer, gpointer))dlsym(glib_handle, "g_str_equal");
// Load libsecret functions
secret_password_store_sync = (gboolean(*)(const SecretSchema*, const gchar*, const gchar*, const gchar*, void*, GError**, ...))
secret_password_store_sync = (gboolean (*)(const SecretSchema*, const gchar*, const gchar*, const gchar*, void*, GError**, ...))
dlsym(secret_handle, "secret_password_store_sync");
secret_password_lookup_sync = (gchar * (*)(const SecretSchema*, void*, GError**, ...))
dlsym(secret_handle, "secret_password_lookup_sync");
secret_password_clear_sync = (gboolean(*)(const SecretSchema*, void*, GError**, ...))
secret_password_clear_sync = (gboolean (*)(const SecretSchema*, void*, GError**, ...))
dlsym(secret_handle, "secret_password_clear_sync");
secret_password_free = (void (*)(gchar*))dlsym(secret_handle, "secret_password_free");
secret_service_search_sync = (GList * (*)(SecretService*, const SecretSchema*, GHashTable*, SecretSearchFlags, void*, GError**))
@@ -220,7 +220,7 @@ private:
secret_value_get_text = (const gchar* (*)(SecretValue*))dlsym(secret_handle, "secret_value_get_text");
secret_value_unref = (void (*)(gpointer))dlsym(secret_handle, "secret_value_unref");
secret_item_get_attributes = (GHashTable * (*)(SecretItem*)) dlsym(secret_handle, "secret_item_get_attributes");
secret_item_load_secret_sync = (gboolean(*)(SecretItem*, void*, GError**))dlsym(secret_handle, "secret_item_load_secret_sync");
secret_item_load_secret_sync = (gboolean (*)(SecretItem*, void*, GError**))dlsym(secret_handle, "secret_item_load_secret_sync");
return g_error_free && g_free && g_hash_table_new && g_hash_table_destroy && g_hash_table_lookup && g_hash_table_insert && g_list_free && secret_password_store_sync && secret_password_lookup_sync && secret_password_clear_sync && secret_password_free;
}

View File

@@ -890,7 +890,7 @@ static const GB18030EncodeIndex& gb18030EncodeIndex()
// https://unicode-org.atlassian.net/browse/ICU-22357
// The 2-byte values are handled correctly by values from gb18030()
// but these need to be exceptions from gb18030Ranges().
static std::optional<uint16_t> gb18030AsymmetricEncode(char16_t codePoint)
static std::optional<uint16_t> gb18030AsymmetricEncode(char32_t codePoint)
{
switch (codePoint) {
case 0xE81E:

View File

@@ -80,16 +80,20 @@ size_t IndexOfAnyCharImpl(const uint8_t* HWY_RESTRICT text, size_t text_len, con
return text_len;
} else {
ASSERT(chars_len <= 16);
constexpr size_t kMaxPreloadedChars = 16;
const size_t simd_text_len = text_len - (text_len % N);
size_t i = 0;
#if !HWY_HAVE_SCALABLE && !HWY_TARGET_IS_SVE
// Preload search characters into native-width vectors.
// On non-SVE targets, Vec has a known size and can be stored in arrays.
static constexpr size_t kMaxPreloadedChars = 16;
hn::Vec<D8> char_vecs[kMaxPreloadedChars];
const size_t num_chars_to_preload = std::min(chars_len, kMaxPreloadedChars);
for (size_t c = 0; c < num_chars_to_preload; ++c) {
char_vecs[c] = hn::Set(d, chars[c]);
}
const size_t simd_text_len = text_len - (text_len % N);
size_t i = 0;
for (; i < simd_text_len; i += N) {
const auto text_vec = hn::LoadN(d, text + i, N);
auto found_mask = hn::MaskFalse(d);
@@ -97,11 +101,18 @@ size_t IndexOfAnyCharImpl(const uint8_t* HWY_RESTRICT text, size_t text_len, con
for (size_t c = 0; c < num_chars_to_preload; ++c) {
found_mask = hn::Or(found_mask, hn::Eq(text_vec, char_vecs[c]));
}
if (chars_len > num_chars_to_preload) {
for (size_t c = num_chars_to_preload; c < chars_len; ++c) {
found_mask = hn::Or(found_mask, hn::Eq(text_vec, hn::Set(d, chars[c])));
}
#else
// SVE types are sizeless and cannot be stored in arrays.
// hn::Set is a single broadcast instruction; the compiler will
// hoist these loop-invariant broadcasts out of the outer loop.
for (; i < simd_text_len; i += N) {
const auto text_vec = hn::LoadN(d, text + i, N);
auto found_mask = hn::MaskFalse(d);
for (size_t c = 0; c < chars_len; ++c) {
found_mask = hn::Or(found_mask, hn::Eq(text_vec, hn::Set(d, chars[c])));
}
#endif
const intptr_t pos = hn::FindFirstTrue(d, found_mask);
if (pos >= 0) {

View File

@@ -105,7 +105,7 @@ bool EventTarget::addEventListener(const AtomString& eventType, Ref<EventListene
if (options.signal) {
options.signal->addAlgorithm([weakThis = WeakPtr { *this }, eventType, listener = WeakPtr { listener }, capture = options.capture](JSC::JSValue) {
if (weakThis && listener)
Ref { *weakThis } -> removeEventListener(eventType, *listener, capture);
Ref { *weakThis }->removeEventListener(eventType, *listener, capture);
});
}

View File

@@ -78,6 +78,9 @@
#include <JavaScriptCore/ArrayBuffer.h>
#include <JavaScriptCore/JSArrayBufferView.h>
#include <JavaScriptCore/JSCInlines.h>
#include <JavaScriptCore/JSArrayInlines.h>
#include <JavaScriptCore/ButterflyInlines.h>
#include <JavaScriptCore/ObjectInitializationScope.h>
#include <JavaScriptCore/JSDataView.h>
#include <JavaScriptCore/JSMapInlines.h>
#include <JavaScriptCore/JSMapIterator.h>
@@ -5574,6 +5577,13 @@ SerializedScriptValue::SerializedScriptValue(WTF::FixedVector<SimpleInMemoryProp
m_memoryCost = computeMemoryCost();
}
SerializedScriptValue::SerializedScriptValue(WTF::FixedVector<SimpleCloneableValue>&& elements)
: m_simpleArrayElements(WTF::move(elements))
, m_fastPath(FastPath::SimpleArray)
{
m_memoryCost = computeMemoryCost();
}
SerializedScriptValue::SerializedScriptValue(const String& fastPathString)
: m_fastPathString(fastPathString)
, m_fastPath(FastPath::String)
@@ -5581,6 +5591,14 @@ SerializedScriptValue::SerializedScriptValue(const String& fastPathString)
m_memoryCost = computeMemoryCost();
}
SerializedScriptValue::SerializedScriptValue(Vector<uint8_t>&& butterflyData, uint32_t length, FastPath fastPath)
: m_arrayButterflyData(WTF::move(butterflyData))
, m_arrayLength(length)
, m_fastPath(fastPath)
{
m_memoryCost = computeMemoryCost();
}
size_t SerializedScriptValue::computeMemoryCost() const
{
size_t cost = m_data.size();
@@ -5652,6 +5670,19 @@ size_t SerializedScriptValue::computeMemoryCost() const
}
}
break;
case FastPath::SimpleArray:
cost += m_simpleArrayElements.byteSize();
for (const auto& elem : m_simpleArrayElements) {
std::visit(WTF::makeVisitor(
[&](JSC::JSValue) { /* already included in byteSize() */ },
[&](const String& s) { cost += s.sizeInBytes(); }),
elem);
}
break;
case FastPath::Int32Array:
case FastPath::DoubleArray:
cost += m_arrayButterflyData.size();
break;
case FastPath::None:
break;
@@ -5843,7 +5874,9 @@ ExceptionOr<Ref<SerializedScriptValue>> SerializedScriptValue::create(JSGlobalOb
if (canUseFastPath) {
bool canUseStringFastPath = false;
bool canUseObjectFastPath = false;
bool canUseArrayFastPath = false;
JSObject* object = nullptr;
JSArray* array = nullptr;
Structure* structure = nullptr;
if (value.isCell()) {
auto* cell = value.asCell();
@@ -5853,7 +5886,10 @@ ExceptionOr<Ref<SerializedScriptValue>> SerializedScriptValue::create(JSGlobalOb
object = cell->getObject();
structure = object->structure();
if (isObjectFastPathCandidate(structure)) {
if (auto* jsArray = jsDynamicCast<JSArray*>(object)) {
canUseArrayFastPath = true;
array = jsArray;
} else if (isObjectFastPathCandidate(structure)) {
canUseObjectFastPath = true;
}
}
@@ -5866,6 +5902,84 @@ ExceptionOr<Ref<SerializedScriptValue>> SerializedScriptValue::create(JSGlobalOb
return SerializedScriptValue::createStringFastPath(stringValue);
}
if (canUseArrayFastPath) {
ASSERT(array != nullptr);
// Arrays with named properties (e.g. arr.foo = "bar") cannot use fast path
// as we only copy indexed elements. maxOffset == invalidOffset means no named properties.
if (structure->maxOffset() != invalidOffset)
canUseArrayFastPath = false;
}
if (canUseArrayFastPath) {
ASSERT(array != nullptr);
unsigned length = array->length();
auto arrayType = array->indexingType();
// Tier 1/2: Int32 / Double butterfly memcpy fast path
if ((arrayType == ArrayWithInt32 || arrayType == ArrayWithDouble)
&& length <= array->butterfly()->vectorLength()
&& !array->structure()->holesMustForwardToPrototype(array)) {
if (arrayType == ArrayWithInt32) {
auto* data = array->butterfly()->contiguous().data();
if (!containsHole(data, length)) {
size_t byteSize = sizeof(JSValue) * length;
Vector<uint8_t> buffer(byteSize, 0);
memcpy(buffer.mutableSpan().data(), data, byteSize);
return SerializedScriptValue::createInt32ArrayFastPath(WTF::move(buffer), length);
}
} else {
auto* data = array->butterfly()->contiguousDouble().data();
if (!containsHole(data, length)) {
size_t byteSize = sizeof(double) * length;
Vector<uint8_t> buffer(byteSize, 0);
memcpy(buffer.mutableSpan().data(), data, byteSize);
return SerializedScriptValue::createDoubleArrayFastPath(WTF::move(buffer), length);
}
}
// Holes present → fall through to normal path
}
// Tier 3: Contiguous array with butterfly direct access
if (arrayType == ArrayWithContiguous
&& length <= array->butterfly()->vectorLength()
&& !array->structure()->holesMustForwardToPrototype(array)) {
auto* data = array->butterfly()->contiguous().data();
WTF::Vector<SimpleCloneableValue> elements;
elements.reserveInitialCapacity(length);
bool ok = true;
for (unsigned i = 0; i < length; i++) {
JSValue elem = data[i].get();
if (!elem) {
ok = false;
break;
}
if (elem.isCell()) {
if (!elem.isString()) {
ok = false;
break;
}
auto* str = asString(elem);
String strValue = str->value(&lexicalGlobalObject);
RETURN_IF_EXCEPTION(scope, Exception { ExistingExceptionError });
elements.append(Bun::toCrossThreadShareable(strValue));
} else {
elements.append(elem);
}
}
if (ok) {
return SerializedScriptValue::createArrayFastPath(
WTF::FixedVector<SimpleCloneableValue>(WTF::move(elements)));
}
}
// ArrayStorage / Undecided / holes forwarding → fall through to normal serialization path
}
if (canUseObjectFastPath) {
ASSERT(object != nullptr);
@@ -6142,6 +6256,21 @@ Ref<SerializedScriptValue> SerializedScriptValue::createObjectFastPath(WTF::Fixe
return adoptRef(*new SerializedScriptValue(WTF::move(object)));
}
Ref<SerializedScriptValue> SerializedScriptValue::createArrayFastPath(WTF::FixedVector<SimpleCloneableValue>&& elements)
{
return adoptRef(*new SerializedScriptValue(WTF::move(elements)));
}
Ref<SerializedScriptValue> SerializedScriptValue::createInt32ArrayFastPath(Vector<uint8_t>&& data, uint32_t length)
{
return adoptRef(*new SerializedScriptValue(WTF::move(data), length, FastPath::Int32Array));
}
Ref<SerializedScriptValue> SerializedScriptValue::createDoubleArrayFastPath(Vector<uint8_t>&& data, uint32_t length)
{
return adoptRef(*new SerializedScriptValue(WTF::move(data), length, FastPath::DoubleArray));
}
RefPtr<SerializedScriptValue> SerializedScriptValue::create(JSContextRef originContext, JSValueRef apiValue, JSValueRef* exception)
{
JSGlobalObject* lexicalGlobalObject = toJS(originContext);
@@ -6288,6 +6417,78 @@ JSValue SerializedScriptValue::deserialize(JSGlobalObject& lexicalGlobalObject,
return object;
}
case FastPath::SimpleArray: {
unsigned length = m_simpleArrayElements.size();
// Pre-convert all elements to JSValues (including creating JSStrings)
// before entering ObjectInitializationScope, since jsString() allocates
// GC cells which is not allowed inside the initialization scope.
MarkedArgumentBuffer values;
values.ensureCapacity(length);
for (unsigned i = 0; i < length; i++) {
JSValue elemValue = std::visit(
WTF::makeVisitor(
[](JSValue v) -> JSValue { return v; },
[&](const String& s) -> JSValue { return jsString(vm, s); }),
m_simpleArrayElements[i]);
values.append(elemValue);
}
Structure* resultStructure = globalObject->arrayStructureForIndexingTypeDuringAllocation(ArrayWithContiguous);
ObjectInitializationScope initScope(vm);
JSArray* resultArray = JSArray::tryCreateUninitializedRestricted(initScope, resultStructure, length);
if (!resultArray) [[unlikely]] {
if (didFail)
*didFail = true;
return {};
}
for (unsigned i = 0; i < length; i++)
resultArray->initializeIndex(initScope, i, values.at(i));
if (didFail)
*didFail = false;
return resultArray;
}
case FastPath::Int32Array:
case FastPath::DoubleArray: {
IndexingType arrayType = (m_fastPath == FastPath::Int32Array) ? ArrayWithInt32 : ArrayWithDouble;
Structure* resultStructure = globalObject->arrayStructureForIndexingTypeDuringAllocation(arrayType);
if (hasAnyArrayStorage(resultStructure->indexingType())) [[unlikely]]
break; // isHavingABadTime → fall through to normal deserialization
unsigned outOfLineStorage = resultStructure->outOfLineCapacity();
unsigned vectorLength = Butterfly::optimalContiguousVectorLength(resultStructure, m_arrayLength);
void* memory = vm.auxiliarySpace().allocate(
vm,
Butterfly::totalSize(0, outOfLineStorage, true, vectorLength * sizeof(EncodedJSValue)),
nullptr, AllocationFailureMode::ReturnNull);
if (!memory) [[unlikely]] {
if (didFail)
*didFail = true;
return {};
}
Butterfly* butterfly = Butterfly::fromBase(memory, 0, outOfLineStorage);
butterfly->setVectorLength(vectorLength);
butterfly->setPublicLength(m_arrayLength);
if (m_fastPath == FastPath::DoubleArray)
memcpy(butterfly->contiguousDouble().data(), m_arrayButterflyData.span().data(), m_arrayButterflyData.size());
else
memcpy(butterfly->contiguous().data(), m_arrayButterflyData.span().data(), m_arrayButterflyData.size());
// Clear unused tail slots with hole values
Butterfly::clearRange(arrayType, butterfly, m_arrayLength, vectorLength);
JSArray* resultArray = JSArray::createWithButterfly(vm, nullptr, resultStructure, butterfly);
if (didFail)
*didFail = false;
return resultArray;
}
case FastPath::None: {
break;
}
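
A small TypeScript sketch of the array shapes the serialization tiers above target, seen from the JS side. Whether a given entry point (postMessage vs. structuredClone) actually routes through this `SerializedScriptValue::create` path, and which indexing type the engine picks for each literal, are assumptions here, not something the diff states.

```ts
// Illustration only: array shapes that the fast path above can copy without the full
// serializer, versus shapes that bail out to the normal serialization path.
const int32s = Array.from({ length: 1024 }, (_, i) => i);        // typically ArrayWithInt32 → butterfly memcpy tier
const doubles = Array.from({ length: 1024 }, (_, i) => i + 0.5); // typically ArrayWithDouble → butterfly memcpy tier
const mixed = [1, 2.5, true, null, undefined, "hello"];          // contiguous primitives/strings → SimpleArray tier

const sparse = [1, , 3];                              // hole → falls back to the normal path
const named = Object.assign([1, 2, 3], { foo: "x" }); // named property → maxOffset check fails, normal path
const objects = [{ a: 1 }];                           // non-string cell → normal path

console.log(structuredClone(mixed)); // [ 1, 2.5, true, null, undefined, "hello" ]
```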

View File

@@ -60,15 +60,12 @@ class MemoryHandle;
namespace WebCore {
// Shared value type for fast path cloning: primitives (JSValue) or strings.
using SimpleCloneableValue = std::variant<JSC::JSValue, WTF::String>;
class SimpleInMemoryPropertyTableEntry {
public:
// Only:
// - String
// - Number
// - Boolean
// - Null
// - Undefined
using Value = std::variant<JSC::JSValue, WTF::String>;
using Value = SimpleCloneableValue;
WTF::String propertyName;
Value value;
@@ -78,6 +75,9 @@ enum class FastPath : uint8_t {
None,
String,
SimpleObject,
SimpleArray,
Int32Array,
DoubleArray,
};
#if ENABLE(OFFSCREEN_CANVAS_IN_WORKERS)
@@ -129,6 +129,13 @@ public:
// Fast path for postMessage with simple objects
static Ref<SerializedScriptValue> createObjectFastPath(WTF::FixedVector<SimpleInMemoryPropertyTableEntry>&& object);
// Fast path for postMessage with dense arrays of primitives/strings
static Ref<SerializedScriptValue> createArrayFastPath(WTF::FixedVector<SimpleCloneableValue>&& elements);
// Fast path for postMessage with dense Int32/Double arrays (butterfly memcpy)
static Ref<SerializedScriptValue> createInt32ArrayFastPath(Vector<uint8_t>&& butterflyData, uint32_t length);
static Ref<SerializedScriptValue> createDoubleArrayFastPath(Vector<uint8_t>&& butterflyData, uint32_t length);
static Ref<SerializedScriptValue> nullValue();
WEBCORE_EXPORT JSC::JSValue deserialize(JSC::JSGlobalObject&, JSC::JSGlobalObject*, SerializationErrorMode = SerializationErrorMode::Throwing, bool* didFail = nullptr);
@@ -231,6 +238,9 @@ private:
// Constructor for string fast path
explicit SerializedScriptValue(const String& fastPathString);
explicit SerializedScriptValue(WTF::FixedVector<SimpleInMemoryPropertyTableEntry>&& object);
explicit SerializedScriptValue(WTF::FixedVector<SimpleCloneableValue>&& elements);
// Constructor for Int32Array/DoubleArray butterfly memcpy fast path
SerializedScriptValue(Vector<uint8_t>&& butterflyData, uint32_t length, FastPath fastPath);
size_t computeMemoryCost() const;
@@ -260,6 +270,13 @@ private:
size_t m_memoryCost { 0 };
FixedVector<SimpleInMemoryPropertyTableEntry> m_simpleInMemoryPropertyTable {};
// m_simpleArrayElements and m_arrayButterflyData/m_arrayLength are used exclusively:
// SimpleArray uses m_simpleArrayElements; Int32Array/DoubleArray use m_arrayButterflyData + m_arrayLength.
FixedVector<SimpleCloneableValue> m_simpleArrayElements {};
// Int32Array / DoubleArray fast path: raw butterfly data
Vector<uint8_t> m_arrayButterflyData {};
uint32_t m_arrayLength { 0 };
};
template<class Encoder>

View File

@@ -470,7 +470,7 @@ String generatePatternString(const Vector<Part>& partList, const URLPatternStrin
if (!needsGrouping && part.prefix.isEmpty() && previousPart && previousPart->type == PartType::FixedText && !previousPart->value.isEmpty()) {
if (options.prefixCodepoint.length() == 1
&& options.prefixCodepoint.startsWith(*StringView(previousPart->value).codePoints().codePointAt(previousPart->value.length() - 1)))
&& options.prefixCodepoint.startsWith(static_cast<char16_t>(*StringView(previousPart->value).codePoints().codePointAt(previousPart->value.length() - 1))))
needsGrouping = true;
}
@@ -541,7 +541,7 @@ String escapePatternString(StringView input)
}
// https://urlpattern.spec.whatwg.org/#is-a-valid-name-code-point
bool isValidNameCodepoint(char16_t codepoint, URLPatternUtilities::IsFirst first)
bool isValidNameCodepoint(char32_t codepoint, URLPatternUtilities::IsFirst first)
{
if (first == URLPatternUtilities::IsFirst::Yes)
return u_hasBinaryProperty(codepoint, UCHAR_ID_START) || codepoint == '_' || codepoint == '$';

View File

@@ -104,7 +104,7 @@ ASCIILiteral convertModifierToString(Modifier);
std::pair<String, Vector<String>> generateRegexAndNameList(const Vector<Part>& partList, const URLPatternStringOptions&);
String generatePatternString(const Vector<Part>& partList, const URLPatternStringOptions&);
String escapePatternString(StringView input);
bool isValidNameCodepoint(char16_t codepoint, URLPatternUtilities::IsFirst);
bool isValidNameCodepoint(char32_t codepoint, URLPatternUtilities::IsFirst);
} // namespace URLPatternUtilities
} // namespace WebCore

View File

@@ -1323,6 +1323,7 @@ void WebSocket::didReceiveBinaryData(const AtomString& eventName, const std::spa
if (auto* context = scriptExecutionContext()) {
RefPtr<Blob> blob = Blob::create(binaryData, context->jsGlobalObject());
this->incPendingActivityCount();
context->postTask([this, name = eventName, blob = blob.releaseNonNull(), protectedThis = Ref { *this }](ScriptExecutionContext& context) {
ASSERT(scriptExecutionContext());
protectedThis->dispatchEvent(MessageEvent::create(name, blob, protectedThis->m_url.string()));

View File

@@ -249,32 +249,9 @@ extern "C" __attribute__((used)) char __libc_single_threaded = 0;
#endif
#endif
#ifdef _LIBCPP_VERBOSE_ABORT_NOEXCEPT
// Workaround for this error:
// workaround-missing-symbols.cpp:245:11: error: '__libcpp_verbose_abort' is missing exception specification 'noexcept'
// 2025-07-10 15:59:47 PDT
// 245 | void std::__libcpp_verbose_abort(char const* format, ...)
// 2025-07-10 15:59:47 PDT
// | ^
// 2025-07-10 15:59:47 PDT
// | noexcept
// 2025-07-10 15:59:47 PDT
// /opt/homebrew/Cellar/llvm/20.1.7/bin/../include/c++/v1/__verbose_abort:30:28: note: previous declaration is here
// 2025-07-10 15:59:47 PDT
// 30 | __printf__, 1, 2) void __libcpp_verbose_abort(const char* __format, ...) _LIBCPP_VERBOSE_ABORT_NOEXCEPT;
// 2025-07-10 15:59:47 PDT
// | ^
// 2025-07-10 15:59:47 PDT
// 1 error generated.
// 2025-07-10 15:59:47 PDT
// [515/540] Building CXX
#define BUN_VERBOSE_ABORT_NOEXCEPT _LIBCPP_VERBOSE_ABORT_NOEXCEPT
#else
#define BUN_VERBOSE_ABORT_NOEXCEPT
#endif
// Provide our implementation
void std::__libcpp_verbose_abort(char const* format, ...) BUN_VERBOSE_ABORT_NOEXCEPT
// LLVM 20 used _LIBCPP_VERBOSE_ABORT_NOEXCEPT, LLVM 21+ uses _NOEXCEPT (always noexcept).
void std::__libcpp_verbose_abort(char const* format, ...) noexcept
{
va_list list;
va_start(list, format);

View File

@@ -33,7 +33,7 @@ static char32_t decodeUTF16(const UChar* ptr, size_t available, size_t& outLen)
}
outLen = 1;
return c;
return static_cast<char32_t>(c);
}
static inline uint8_t getVisibleWidth(char32_t cp, bool ambiguousIsWide)

View File

@@ -351,11 +351,13 @@ pub fn autoTick(this: *EventLoop) void {
const ctx = this.virtual_machine;
this.tickImmediateTasks(ctx);
if (comptime Environment.isPosix) {
if (comptime Environment.isWindows) {
if (this.immediate_tasks.items.len > 0) {
this.wakeup();
}
}
// On POSIX, pending immediates are handled via an immediate timeout in
// getTimeout() instead of writing to the eventfd, avoiding that overhead.
if (comptime Environment.isPosix) {
// Some tasks need to keep the event loop alive for one more tick.
@@ -438,11 +440,13 @@ pub fn autoTickActive(this: *EventLoop) void {
var ctx = this.virtual_machine;
this.tickImmediateTasks(ctx);
if (comptime Environment.isPosix) {
if (comptime Environment.isWindows) {
if (this.immediate_tasks.items.len > 0) {
this.wakeup();
}
}
// On POSIX, pending immediates are handled via an immediate timeout in
// getTimeout() instead of writing to the eventfd, avoiding that overhead.
if (comptime Environment.isPosix) {
const pending_unref = ctx.pending_unref_counter;

View File

@@ -79,11 +79,13 @@ JSC_DEFINE_HOST_FUNCTION(functionStartRemoteDebugger,
JSC::JSValue hostValue = callFrame->argument(0);
JSC::JSValue portValue = callFrame->argument(1);
const char* host = defaultHost;
WTF::CString hostCString;
if (hostValue.isString()) {
auto str = hostValue.toWTFString(globalObject);
hostCString = toCString(str);
if (!str.isEmpty())
host = toCString(str).span().data();
host = hostCString.span().data();
} else if (!hostValue.isUndefined()) {
throwVMError(globalObject, scope,
createTypeError(globalObject, "host must be a string"_s));

View File

@@ -1700,8 +1700,8 @@ pub fn dumpStackTrace(trace: std.builtin.StackTrace, limits: WriteStackTraceLimi
const programs: []const [:0]const u8 = switch (bun.Environment.os) {
.windows => &.{"pdb-addr2line"},
// if `llvm-symbolizer` doesn't work, also try `llvm-symbolizer-19`
else => &.{ "llvm-symbolizer", "llvm-symbolizer-19" },
// if `llvm-symbolizer` doesn't work, also try `llvm-symbolizer-21`
else => &.{ "llvm-symbolizer", "llvm-symbolizer-21" },
};
for (programs) |program| {
var arena = bun.ArenaAllocator.init(bun.default_allocator);

View File

@@ -16,6 +16,10 @@ pub const PosixLoop = extern struct {
/// Number of polls owned by Bun
active: u32 = 0,
/// Incremented atomically by wakeup(), swapped to 0 before epoll/kqueue.
/// If non-zero, the event loop will return immediately so we can skip the GC safepoint.
pending_wakeups: u32 = 0,
/// The list of ready polls
ready_polls: [1024]EventType align(16),

View File

@@ -34,7 +34,7 @@ pub const Loop = struct {
{
var epoll = std.mem.zeroes(std.os.linux.epoll_event);
epoll.events = std.os.linux.EPOLL.IN | std.os.linux.EPOLL.ERR | std.os.linux.EPOLL.HUP;
epoll.events = std.os.linux.EPOLL.IN | std.os.linux.EPOLL.ET | std.os.linux.EPOLL.ERR | std.os.linux.EPOLL.HUP;
epoll.data.ptr = @intFromPtr(&loop);
const rc = std.os.linux.epoll_ctl(loop.epoll_fd.cast(), std.os.linux.EPOLL.CTL_ADD, loop.waker.getFd().cast(), &epoll);
@@ -165,9 +165,8 @@ pub const Loop = struct {
const pollable: Pollable = Pollable.from(event.data.u64);
if (pollable.tag() == .empty) {
if (event.data.ptr == @intFromPtr(&loop)) {
// this is the event poll, lets read it
var bytes: [8]u8 = undefined;
_ = bun.sys.read(loop.fd(), &bytes);
// Edge-triggered: no need to read the eventfd counter
continue;
}
}
_ = Poll.onUpdateEpoll(pollable.poll(), pollable.tag(), event);

View File

@@ -664,28 +664,6 @@ export function toJSON(this: BufferExt) {
return { type, data };
}
export function slice(this: BufferExt, start, end) {
var { buffer, byteOffset, byteLength } = this;
function adjustOffset(offset, length) {
// Use Math.trunc() to convert offset to an integer value that can be larger
// than an Int32. Hence, don't use offset | 0 or similar techniques.
offset = Math.trunc(offset);
if (offset === 0 || offset !== offset) {
return 0;
} else if (offset < 0) {
offset += length;
return offset > 0 ? offset : 0;
} else {
return offset < length ? offset : length;
}
}
var start_ = adjustOffset(start, byteLength);
var end_ = end !== undefined ? adjustOffset(end, byteLength) : byteLength;
return new $Buffer(buffer, byteOffset + start_, end_ > start_ ? end_ - start_ : 0);
}
$getter;
export function parent(this: BufferExt) {
return $isObject(this) && this instanceof $Buffer ? this.buffer : undefined;

View File

@@ -0,0 +1,395 @@
const { Duplex } = require("node:stream");
const upgradeDuplexToTLS = $newZigFunction("socket.zig", "jsUpgradeDuplexToTLS", 2);
interface NativeHandle {
resume(): void;
close(): void;
end(): void;
$write(chunk: Buffer, encoding: string): boolean;
alpnProtocol?: string;
}
interface UpgradeContextType {
connectionListener: (...args: any[]) => any;
server: Http2SecureServer;
rawSocket: import("node:net").Socket;
nativeHandle: NativeHandle | null;
events: [(...args: any[]) => void, ...Function[]] | null;
}
interface Http2SecureServer {
key?: Buffer;
cert?: Buffer;
ca?: Buffer;
passphrase?: string;
ALPNProtocols?: Buffer;
_requestCert?: boolean;
_rejectUnauthorized?: boolean;
emit(event: string, ...args: any[]): boolean;
}
interface TLSProxySocket {
_ctx: UpgradeContextType;
_writeCallback: ((err?: Error | null) => void) | null;
alpnProtocol: string | null;
authorized: boolean;
encrypted: boolean;
server: Http2SecureServer;
_requestCert: boolean;
_rejectUnauthorized: boolean;
_securePending: boolean;
secureConnecting: boolean;
_secureEstablished: boolean;
authorizationError?: string;
push(chunk: Buffer | null): boolean;
destroy(err?: Error): this;
emit(event: string, ...args: any[]): boolean;
resume(): void;
readonly destroyed: boolean;
}
/**
* Context object holding upgrade-time state for the TLS proxy socket.
* Attached as `tlsSocket._ctx` so named functions can reach it via `this._ctx`
* (Duplex methods) or via a bound `this` (socket callbacks).
*/
function UpgradeContext(
connectionListener: (...args: any[]) => any,
server: Http2SecureServer,
rawSocket: import("node:net").Socket,
) {
this.connectionListener = connectionListener;
this.server = server;
this.rawSocket = rawSocket;
this.nativeHandle = null;
this.events = null;
}
// ---------------------------------------------------------------------------
// Duplex stream methods — called with `this` = tlsSocket (standard stream API)
// ---------------------------------------------------------------------------
// _read: called by stream machinery when the H2 session wants data.
// Resume the native TLS handle so it feeds decrypted data via the data callback.
// Mirrors net.ts Socket.prototype._read which calls socket.resume().
function tlsSocketRead(this: TLSProxySocket) {
const h = this._ctx.nativeHandle;
if (h) {
h.resume();
}
this._ctx.rawSocket.resume();
}
// _write: called when the H2 session writes outbound frames.
// Forward to the native TLS handle for encryption, then back to rawSocket.
// Mirrors net.ts Socket.prototype._write which calls socket.$write().
function tlsSocketWrite(this: TLSProxySocket, chunk: Buffer, encoding: string, callback: (err?: Error) => void) {
const h = this._ctx.nativeHandle;
if (!h) {
callback(new Error("Socket is closed"));
return;
}
// $write returns true if fully flushed, false if buffered
if (h.$write(chunk, encoding)) {
callback();
} else {
// Store callback so drain event can invoke it (backpressure)
this._writeCallback = callback;
}
}
// _destroy: called when the stream is destroyed (e.g. tlsSocket.destroy(err)).
// Cleans up the native TLS handle.
// Mirrors net.ts Socket.prototype._destroy.
function tlsSocketDestroy(this: TLSProxySocket, err: Error | null, callback: (err?: Error | null) => void) {
const h = this._ctx.nativeHandle;
if (h) {
h.close();
this._ctx.nativeHandle = null;
}
// Must invoke pending write callback with error per Writable stream contract
const writeCb = this._writeCallback;
if (writeCb) {
this._writeCallback = null;
writeCb(err ?? new Error("Socket destroyed"));
}
callback(err);
}
// _final: called when the writable side is ending (all data flushed).
// Shuts down the TLS write side gracefully.
// Mirrors net.ts Socket.prototype._final.
function tlsSocketFinal(this: TLSProxySocket, callback: () => void) {
const h = this._ctx.nativeHandle;
if (!h) return callback();
// Signal end-of-stream to the TLS layer
h.end();
callback();
}
// ---------------------------------------------------------------------------
// Socket callbacks — called by Zig with `this` = native handle (not useful).
// All are bound to tlsSocket so `this` inside each = tlsSocket.
// ---------------------------------------------------------------------------
// open: called when the TLS layer is initialized (before handshake).
// No action needed; we wait for the handshake callback.
function socketOpen() {}
// data: called with decrypted plaintext after the TLS layer decrypts incoming data.
// Push into tlsSocket so the H2 session's _read() receives these frames.
function socketData(this: TLSProxySocket, _socket: NativeHandle, chunk: Buffer) {
if (!this.push(chunk)) {
this._ctx.rawSocket.pause();
}
}
// end: TLS peer signaled end-of-stream; signal EOF to the H2 session.
function socketEnd(this: TLSProxySocket) {
this.push(null);
}
// drain: raw socket is writable again after being full; propagate backpressure signal.
// If _write stored a callback waiting for drain, invoke it now.
function socketDrain(this: TLSProxySocket) {
const cb = this._writeCallback;
if (cb) {
this._writeCallback = null;
cb();
}
}
// close: TLS connection closed; tear down the tlsSocket Duplex.
function socketClose(this: TLSProxySocket) {
if (!this.destroyed) {
this.destroy();
}
}
// error: TLS-level error (e.g. certificate verification failure).
// In server mode without _requestCert, the server doesn't request a client cert,
// so issuer verification errors on the server's own cert are non-fatal.
function socketError(this: TLSProxySocket, _socket: NativeHandle, err: NodeJS.ErrnoException) {
const ctx = this._ctx;
if (!ctx.server._requestCert && err?.code === "UNABLE_TO_GET_ISSUER_CERT") {
return;
}
this.destroy(err);
}
// timeout: socket idle timeout; forward to the Duplex so H2 session can handle it.
function socketTimeout(this: TLSProxySocket) {
this.emit("timeout");
}
// handshake: TLS handshake completed. This is the critical callback that triggers
// H2 session creation.
//
// Mirrors the handshake logic in net.ts ServerHandlers.handshake:
// - Set secure-connection state flags on tlsSocket
// - Read alpnProtocol from the native handle (set by ALPN negotiation)
// - Handle _requestCert / _rejectUnauthorized for mutual TLS
// - Call connectionListener to create the ServerHttp2Session
function socketHandshake(
this: TLSProxySocket,
nativeHandle: NativeHandle,
success: boolean,
verifyError: NodeJS.ErrnoException | null,
) {
const tlsSocket = this; // bound
const ctx = tlsSocket._ctx;
if (!success) {
const err = verifyError || new Error("TLS handshake failed");
ctx.server.emit("tlsClientError", err, tlsSocket);
tlsSocket.destroy(err);
return;
}
// Mark TLS handshake as complete on the proxy socket
tlsSocket._securePending = false;
tlsSocket.secureConnecting = false;
tlsSocket._secureEstablished = true;
// Copy the negotiated ALPN protocol (e.g. "h2") from the native TLS handle.
// The H2 session checks this to confirm HTTP/2 was negotiated.
tlsSocket.alpnProtocol = nativeHandle?.alpnProtocol ?? null;
// Handle mutual TLS: if the server requested a client cert, check for errors
if (tlsSocket._requestCert || tlsSocket._rejectUnauthorized) {
if (verifyError) {
tlsSocket.authorized = false;
tlsSocket.authorizationError = verifyError.code || verifyError.message;
ctx.server.emit("tlsClientError", verifyError, tlsSocket);
if (tlsSocket._rejectUnauthorized) {
tlsSocket.emit("secure", tlsSocket);
tlsSocket.destroy(verifyError);
return;
}
} else {
tlsSocket.authorized = true;
}
} else {
tlsSocket.authorized = true;
}
// Invoke the H2 connectionListener which creates a ServerHttp2Session.
// This is the same function passed to Http2SecureServer's constructor
// and is what normally fires on the 'secureConnection' event.
ctx.connectionListener.$call(ctx.server, tlsSocket);
// Resume the Duplex so the H2 session can read frames from it.
// Mirrors net.ts ServerHandlers.handshake line 438: `self.resume()`.
tlsSocket.resume();
}
// ---------------------------------------------------------------------------
// Close-cleanup handler
// ---------------------------------------------------------------------------
// onTlsClose: when the TLS socket closes (e.g. H2 session destroyed), clean up
// the raw socket listeners to prevent memory leaks and stale callback references.
// EventEmitter calls 'close' handlers with `this` = emitter (tlsSocket).
function onTlsClose(this: TLSProxySocket) {
const ctx = this._ctx;
const raw = ctx.rawSocket;
const ev = ctx.events;
if (!ev) return;
raw.removeListener("data", ev[0]);
raw.removeListener("end", ev[1]);
raw.removeListener("drain", ev[2]);
raw.removeListener("close", ev[3]);
}
// ---------------------------------------------------------------------------
// Module-scope noop (replaces anonymous () => {} for the error suppression)
// ---------------------------------------------------------------------------
// no-op handler used to suppress unhandled error events until
// the H2 session attaches its own error handler.
function noop() {}
// ---------------------------------------------------------------------------
// Main upgrade function
// ---------------------------------------------------------------------------
// Upgrades a raw TCP socket to TLS and initiates an H2 session on it.
//
// When a net.Server forwards an accepted TCP connection to an Http2SecureServer
// via `h2Server.emit('connection', socket)`, the socket has not been TLS-upgraded.
// Node.js Http2SecureServer expects to receive this and perform the upgrade itself.
//
// This mirrors the TLS server handshake pattern from net.ts ServerHandlers, but
// targets the H2 connectionListener instead of a generic secureConnection event.
//
// Data flow after upgrade:
// rawSocket (TCP) → upgradeDuplexToTLS (Zig TLS layer) → socket callbacks
// → tlsSocket.push() → H2 session reads
// H2 session writes → tlsSocket._write() → handle.$write() → Zig TLS layer → rawSocket
//
// CRITICAL: We do NOT set tlsSocket._handle to the native TLS handle.
// If we did, the H2FrameParser constructor would detect it as a JSTLSSocket
// and call attachNativeCallback(), which intercepts all decrypted data at the
// Zig level, completely bypassing our JS data callback and Duplex.push() path.
// Instead, we store the handle in _ctx.nativeHandle so _read/_write/_destroy
// can use it, while the H2 session sees _handle as null and uses the JS-level
// socket.on("data") → Duplex → parser.read() path for incoming frames.
function upgradeRawSocketToH2(
connectionListener: (...args: any[]) => any,
server: Http2SecureServer,
rawSocket: import("node:net").Socket,
): boolean {
// Create a Duplex stream that acts as the TLS "socket" from the H2 session's perspective.
const tlsSocket = new Duplex() as unknown as TLSProxySocket;
tlsSocket._ctx = new UpgradeContext(connectionListener, server, rawSocket);
// Duplex stream methods — `this` is tlsSocket, no bind needed
tlsSocket._read = tlsSocketRead;
tlsSocket._write = tlsSocketWrite;
tlsSocket._destroy = tlsSocketDestroy;
tlsSocket._final = tlsSocketFinal;
// Suppress unhandled error events until the H2 session attaches its own error handler
tlsSocket.on("error", noop);
// Set TLS-like properties that connectionListener and the H2 session expect.
// These are set on the Duplex because we cannot use a real TLSSocket here —
// its internal state machine would conflict with upgradeDuplexToTLS.
tlsSocket.alpnProtocol = null;
tlsSocket.authorized = false;
tlsSocket.encrypted = true;
tlsSocket.server = server;
// Only enforce client cert verification if the server explicitly requests it.
// tls.Server defaults _rejectUnauthorized to true, but without _requestCert
// the server doesn't actually ask for a client cert, so verification errors
// (e.g. UNABLE_TO_GET_ISSUER_CERT for the server's own self-signed cert) are
// spurious and must be ignored.
tlsSocket._requestCert = server._requestCert || false;
tlsSocket._rejectUnauthorized = server._requestCert ? server._rejectUnauthorized : false;
// socket: callbacks — bind to tlsSocket since Zig calls them with native handle as `this`
let handle: NativeHandle, events: UpgradeContextType["events"];
try {
// upgradeDuplexToTLS wraps rawSocket with a TLS layer in server mode (isServer: true).
// The Zig side will:
// 1. Read encrypted data from rawSocket via events[0..3]
// 2. Decrypt it through the TLS engine (with ALPN negotiation for "h2")
// 3. Call our socket callbacks below with the decrypted plaintext
//
// ALPNProtocols: server.ALPNProtocols is a Buffer in wire format (e.g. <Buffer 02 68 32>
// for ["h2"]). The Zig SSLConfig expects an ArrayBuffer, so we slice the underlying buffer.
[handle, events] = upgradeDuplexToTLS(rawSocket, {
isServer: true,
tls: {
key: server.key,
cert: server.cert,
ca: server.ca,
passphrase: server.passphrase,
ALPNProtocols: server.ALPNProtocols
? server.ALPNProtocols.buffer.slice(
server.ALPNProtocols.byteOffset,
server.ALPNProtocols.byteOffset + server.ALPNProtocols.byteLength,
)
: null,
},
socket: {
open: socketOpen,
data: socketData.bind(tlsSocket),
end: socketEnd.bind(tlsSocket),
drain: socketDrain.bind(tlsSocket),
close: socketClose.bind(tlsSocket),
error: socketError.bind(tlsSocket),
timeout: socketTimeout.bind(tlsSocket),
handshake: socketHandshake.bind(tlsSocket),
},
data: {},
});
} catch (e) {
rawSocket.destroy(e as Error);
tlsSocket.destroy(e as Error);
return true;
}
// Store handle in _ctx (NOT on tlsSocket._handle).
// This prevents H2FrameParser from attaching as native callback which would
// intercept data at the Zig level and bypass our Duplex push path.
tlsSocket._ctx.nativeHandle = handle;
tlsSocket._ctx.events = events;
// Wire up the raw TCP socket to feed encrypted data into the TLS layer.
// events[0..3] are native event handlers returned by upgradeDuplexToTLS that
// the Zig TLS engine expects to receive data/end/drain/close through.
rawSocket.on("data", events[0]);
rawSocket.on("end", events[1]);
rawSocket.on("drain", events[2]);
rawSocket.on("close", events[3]);
// When the TLS socket closes (e.g. H2 session destroyed), clean up the raw socket
// listeners to prevent memory leaks and stale callback references.
// EventEmitter calls 'close' handlers with `this` = emitter (tlsSocket).
tlsSocket.once("close", onTlsClose);
return true;
}
export default { upgradeRawSocketToH2 };
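
For reference, the ALPN wire format mentioned in the comment near the `upgradeDuplexToTLS` call above, as a standalone sketch. `alpnWireFormat` is a local helper written for illustration, not part of this module.

```ts
import { Buffer } from "node:buffer";

// ALPN protocol lists are length-prefixed on the wire: one length byte per entry,
// followed by that many ASCII bytes. ["h2"] therefore serializes to <02 68 32>.
function alpnWireFormat(protocols: string[]): Buffer {
  return Buffer.concat(protocols.map(p => Buffer.concat([Buffer.from([p.length]), Buffer.from(p, "ascii")])));
}

const wire = alpnWireFormat(["h2", "http/1.1"]);
console.log(wire); // <Buffer 02 68 32 08 68 74 74 70 2f 31 2e 31>

// Slicing out exactly this region as an ArrayBuffer, the way the upgrade code does:
const ab = wire.buffer.slice(wire.byteOffset, wire.byteOffset + wire.byteLength);
console.log(ab.byteLength); // 12
```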

View File

@@ -73,6 +73,7 @@ const H2FrameParser = $zig("h2_frame_parser.zig", "H2FrameParserConstructor");
const assertSettings = $newZigFunction("h2_frame_parser.zig", "jsAssertSettings", 1);
const getPackedSettings = $newZigFunction("h2_frame_parser.zig", "jsGetPackedSettings", 1);
const getUnpackedSettings = $newZigFunction("h2_frame_parser.zig", "jsGetUnpackedSettings", 1);
const { upgradeRawSocketToH2 } = require("node:_http2_upgrade");
const sensitiveHeaders = Symbol.for("nodejs.http2.sensitiveHeaders");
const bunHTTP2Native = Symbol.for("::bunhttp2native::");
@@ -3881,6 +3882,7 @@ Http2Server.prototype[EventEmitter.captureRejectionSymbol] = function (err, even
function onErrorSecureServerSession(err, socket) {
if (!this.emit("clientError", err, socket)) socket.destroy(err);
}
function emitFrameErrorEventNT(stream, frameType, errorCode) {
stream.emit("frameError", frameType, errorCode);
}
@@ -3918,6 +3920,15 @@ class Http2SecureServer extends tls.Server {
}
this.on("tlsClientError", onErrorSecureServerSession);
}
emit(event: string, ...args: any[]) {
if (event === "connection") {
const socket = args[0];
if (socket && !(socket instanceof TLSSocket)) {
return upgradeRawSocketToH2(connectionListener, this, socket);
}
}
return super.emit(event, ...args);
}
setTimeout(ms, callback) {
this.timeout = ms;
if (typeof callback === "function") {
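
For context, the situation this `emit` override handles is a plain `net.Server` handing an un-upgraded TCP socket to the secure server, as http2-wrapper-style code does. A minimal sketch; the key/cert paths are placeholders, not files from this repository.

```ts
import http2 from "node:http2";
import net from "node:net";
import fs from "node:fs";

// key.pem / cert.pem are placeholder paths.
const h2Server = http2.createSecureServer(
  {
    key: fs.readFileSync("key.pem"),
    cert: fs.readFileSync("cert.pem"),
    ALPNProtocols: ["h2"],
  },
  (_req, res) => {
    res.writeHead(200);
    res.end("ok");
  },
);

// The raw TCP socket is not a TLSSocket, so the emit() override in the diff above
// routes it through upgradeRawSocketToH2 instead of super.emit("connection", ...).
const netServer = net.createServer(socket => {
  h2Server.emit("connection", socket);
});
netServer.listen(0);
```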

View File

@@ -486,22 +486,27 @@ pub const HtmlRenderer = struct {
}
pub fn writeHtmlEscaped(self: *HtmlRenderer, txt: []const u8) void {
var start: usize = 0;
for (txt, 0..) |c, i| {
const replacement: ?[]const u8 = switch (c) {
'&' => "&amp;",
'<' => "&lt;",
'>' => "&gt;",
'"' => "&quot;",
else => null,
var i: usize = 0;
const needle = "&<>\"";
while (true) {
const next = bun.strings.indexOfAny(txt[i..], needle) orelse {
self.write(txt[i..]);
return;
};
if (replacement) |r| {
if (i > start) self.write(txt[start..i]);
self.write(r);
start = i + 1;
const pos = i + next;
if (pos > i)
self.write(txt[i..pos]);
const c = txt[pos];
switch (c) {
'&' => self.write("&amp;"),
'<' => self.write("&lt;"),
'>' => self.write("&gt;"),
'"' => self.write("&quot;"),
else => unreachable,
}
i = pos + 1;
}
if (start < txt.len) self.write(txt[start..]);
}
fn writeUrlEscaped(self: *HtmlRenderer, txt: []const u8) void {

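The rewrite above switches from a per-character switch to scanning for the next special character and flushing plain runs in bulk. A sketch of the same shape in TypeScript; `indexOfAny` here is a local helper standing in for `bun.strings.indexOfAny`.

```ts
// Find the next character from `needle` in `text` starting at `from`, or -1.
function indexOfAny(text: string, needle: string, from: number): number {
  for (let i = from; i < text.length; i++) {
    if (needle.includes(text[i])) return i;
  }
  return -1;
}

const replacements: Record<string, string> = { "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;" };

function writeHtmlEscaped(txt: string, write: (s: string) => void): void {
  let i = 0;
  while (true) {
    const pos = indexOfAny(txt, '&<>"', i);
    if (pos === -1) {
      write(txt.slice(i)); // no more special characters: flush the tail
      return;
    }
    if (pos > i) write(txt.slice(i, pos)); // flush the plain run before the special character
    write(replacements[txt[pos]]);
    i = pos + 1;
  }
}

// Usage:
let out = "";
writeHtmlEscaped('a < b & "c"', s => (out += s));
console.log(out); // a &lt; b &amp; &quot;c&quot;
```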
View File

@@ -70,11 +70,13 @@ pub fn isZeroWidthCodepointType(comptime T: type, cp: T) bool {
}
// Thai combining marks
if ((cp >= 0xe31 and cp <= 0xe3a) or (cp >= 0xe47 and cp <= 0xe4e))
// Note: U+0E32 (SARA AA) and U+0E33 (SARA AM) are Grapheme_Base (spacing vowels), not combining
if (cp == 0xe31 or (cp >= 0xe34 and cp <= 0xe3a) or (cp >= 0xe47 and cp <= 0xe4e))
return true;
// Lao combining marks
if ((cp >= 0xeb1 and cp <= 0xebc) or (cp >= 0xec8 and cp <= 0xecd))
// Note: U+0EB2 and U+0EB3 are spacing vowels like Thai, not combining
if (cp == 0xeb1 or (cp >= 0xeb4 and cp <= 0xebc) or (cp >= 0xec8 and cp <= 0xecd))
return true;
// Combining Diacritical Marks Extended

View File

@@ -19,7 +19,6 @@ expectAssignable<Bun.Build.CompileTarget>("bun-windows-x64-modern");
Bun.build({
entrypoints: ["hey"],
splitting: false,
// @ts-expect-error Currently not supported
compile: {},
});

View File

@@ -1 +1,7 @@
Bun.hash.wyhash("asdf", 1234n);
// https://github.com/oven-sh/bun/issues/26043
// Bun.hash.crc32 accepts optional seed parameter for incremental CRC32 computation
let crc = 0;
crc = Bun.hash.crc32(new Uint8Array([1, 2, 3]), crc);
crc = Bun.hash.crc32(new Uint8Array([4, 5, 6]), crc);
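
A short sketch of the incremental property the seed parameter enables, assuming the seeded call continues the running CRC the way zlib-style `crc32(crc, buf)` does; the diff only adds the seed to the type test, so treat the equality below as an assumption.

```ts
const whole = new Uint8Array([1, 2, 3, 4, 5, 6]);
const partA = whole.subarray(0, 3);
const partB = whole.subarray(3);

// One-shot CRC32 over the whole buffer.
const oneShot = Bun.hash.crc32(whole);

// Incremental: hash each chunk, passing the previous result as the seed.
let running = 0;
running = Bun.hash.crc32(partA, running);
running = Bun.hash.crc32(partB, running);

console.log(oneShot === running); // expected true under the zlib-style seeding assumption
```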

View File

@@ -0,0 +1,94 @@
// Regression test for kqueue filter comparison bug (macOS).
//
// On kqueue, EVFILT_READ (-1) and EVFILT_WRITE (-2) are negative integers. The old
// code used bitwise AND to identify filters:
//
// events |= (filter & EVFILT_READ) ? READABLE : 0
// events |= (filter & EVFILT_WRITE) ? WRITABLE : 0
//
// Since all negative numbers AND'd with -1 or -2 produce truthy values, EVERY kqueue
// event was misidentified as BOTH readable AND writable. This caused the drain handler
// to fire spuriously on every readable event and vice versa.
//
// The fix uses equality comparison (filter == EVFILT_READ), plus coalescing duplicate
// kevents for the same fd (kqueue returns separate events per filter) into a single
// dispatch with combined flags — matching epoll's single-entry-per-fd behavior.
//
// This test creates unix socket connections with small buffers to force partial writes
// (which registers EVFILT_WRITE). The client sends pings on each data callback, causing
// EVFILT_READ events on the server. With the bug, each EVFILT_READ also triggers drain,
// giving a drain/data ratio of ~2.0. With the fix, the ratio is ~1.0.
//
// Example output:
// system bun (bug): data: 38970 drain: 77940 ratio: 2.0
// fixed bun: data: 52965 drain: 52965 ratio: 1.0
import { setSocketOptions } from "bun:internal-for-testing";
const CHUNK = Buffer.alloc(64 * 1024, "x");
const PING = Buffer.from("p");
const sockPath = `kqueue-bench-${process.pid}.sock`;
let drainCalls = 0;
let dataCalls = 0;
const server = Bun.listen({
unix: sockPath,
socket: {
open(socket) {
setSocketOptions(socket, 1, 512);
setSocketOptions(socket, 2, 512);
socket.write(CHUNK);
},
data() {
dataCalls++;
},
drain(socket) {
drainCalls++;
socket.write(CHUNK);
},
close() {},
error() {},
},
});
const clients = [];
for (let i = 0; i < 10; i++) {
clients.push(
await Bun.connect({
unix: sockPath,
socket: {
open(socket) {
setSocketOptions(socket, 1, 512);
setSocketOptions(socket, 2, 512);
},
data(socket) {
socket.write(PING);
},
drain() {},
close() {},
error() {},
},
}),
);
}
await Bun.sleep(50);
drainCalls = 0;
dataCalls = 0;
await Bun.sleep(100);
const ratio = dataCalls > 0 ? drainCalls / dataCalls : 0;
console.log(`data: ${dataCalls} drain: ${drainCalls} ratio: ${ratio.toFixed(1)}`);
for (const c of clients) c.end();
server.stop(true);
try {
require("fs").unlinkSync(sockPath);
} catch {}
if (dataCalls === 0 || drainCalls === 0) {
console.error("test invalid: no data or drain callbacks fired");
process.exit(1);
}
process.exit(ratio < 1.5 ? 0 : 1);
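
The arithmetic behind the bug described in the fixture's header comment, as a standalone sketch with no kqueue calls:

```ts
// kqueue filter constants on macOS.
const EVFILT_READ = -1;
const EVFILT_WRITE = -2;

// Old check: bitwise AND. In two's complement, -1 is all ones and -2 is all ones except
// the lowest bit, so AND-ing either against any filter value is non-zero ("truthy").
console.log((EVFILT_WRITE & EVFILT_READ) !== 0); // true: a WRITE event also "matches" READ
console.log((EVFILT_READ & EVFILT_WRITE) !== 0); // true: a READ event also "matches" WRITE

// Fixed check: compare for equality instead.
const filter = EVFILT_READ;
console.log(filter === EVFILT_READ);  // true
console.log(filter === EVFILT_WRITE); // false
```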

View File

@@ -339,6 +339,10 @@ describe.concurrent("socket", () => {
expect([fileURLToPath(new URL("./socket-huge-fixture.js", import.meta.url))]).toRun();
}, 60_000);
it.skipIf(isWindows)("kqueue should not dispatch spurious drain events on readable", async () => {
expect([fileURLToPath(new URL("./kqueue-filter-coalesce-fixture.ts", import.meta.url))]).toRun();
});
it("it should not crash when getting a ReferenceError on client socket open", async () => {
using server = Bun.serve({
port: 0,

View File

@@ -68,6 +68,6 @@ describe("static initializers", () => {
expect(
bunInitializers.length,
`Do not add static initializers to Bun. Static initializers are called when Bun starts up, regardless of whether you use the variables or not. This makes Bun slower.`,
).toBe(process.arch === "arm64" ? 2 : 3);
).toBe(process.arch === "arm64" ? 1 : 2);
});
});

View File

@@ -485,6 +485,28 @@ describe("stringWidth extended", () => {
expect(Bun.stringWidth("ก็")).toBe(1); // With maitaikhu
expect(Bun.stringWidth("ปฏัก")).toBe(3); // ป + ฏ + ั (combining) + ก = 3 visible
});
test("Thai spacing vowels (SARA AA and SARA AM)", () => {
// U+0E32 (SARA AA) and U+0E33 (SARA AM) are spacing vowels, not combining marks
expect(Bun.stringWidth("\u0E32")).toBe(1); // SARA AA alone
expect(Bun.stringWidth("\u0E33")).toBe(1); // SARA AM alone
expect(Bun.stringWidth("ก\u0E32")).toBe(2); // ก + SARA AA
expect(Bun.stringWidth("ก\u0E33")).toBe(2); // กำ (KO KAI + SARA AM)
expect(Bun.stringWidth("คำ")).toBe(2); // Common Thai word
expect(Bun.stringWidth("ทำ")).toBe(2); // Common Thai word
// True combining marks should still be zero-width
expect(Bun.stringWidth("\u0E31")).toBe(0); // MAI HAN-AKAT (combining)
expect(Bun.stringWidth("ก\u0E31")).toBe(1); // กั
});
test("Lao spacing vowels", () => {
// U+0EB2 and U+0EB3 are spacing vowels in Lao, similar to Thai
expect(Bun.stringWidth("\u0EB2")).toBe(1); // LAO VOWEL SIGN AA
expect(Bun.stringWidth("\u0EB3")).toBe(1); // LAO VOWEL SIGN AM
expect(Bun.stringWidth("ກ\u0EB2")).toBe(2); // KO + AA
// True combining marks should still be zero-width
expect(Bun.stringWidth("\u0EB1")).toBe(0); // MAI KAN (combining)
});
});
describe("non-ASCII in escape sequences and Indic script handling", () => {

View File

@@ -887,6 +887,68 @@ for (let withOverridenBufferWrite of [false, true]) {
expect(f[1]).toBe(0x6f);
});
it("slice() with fractional offsets truncates toward zero", () => {
const buf = Buffer.from([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]);
// -0.1 should truncate to 0, not -1
const a = buf.slice(-0.1);
expect(a.length).toBe(10);
expect(a[0]).toBe(0);
// -1.9 should truncate to -1, not -2
const b = buf.slice(-1.9);
expect(b.length).toBe(1);
expect(b[0]).toBe(9);
// 1.9 should truncate to 1
const c = buf.slice(1.9, 4.1);
expect(c.length).toBe(3);
expect(c[0]).toBe(1);
expect(c[1]).toBe(2);
expect(c[2]).toBe(3);
// NaN should be treated as 0
const d = buf.slice(NaN, NaN);
expect(d.length).toBe(0);
const e = buf.slice(NaN);
expect(e.length).toBe(10);
});
it("slice() on detached buffer throws TypeError", () => {
const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab);
// Detach the ArrayBuffer by transferring it
structuredClone(ab, { transfer: [ab] });
expect(() => buf.slice(0, 5)).toThrow(TypeError);
});
it("subarray() on detached buffer throws TypeError", () => {
const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab);
structuredClone(ab, { transfer: [ab] });
expect(() => buf.subarray(0, 5)).toThrow(TypeError);
});
it("slice() on resizable ArrayBuffer returns fixed-length view", () => {
const rab = new ArrayBuffer(10, { maxByteLength: 20 });
const buf = Buffer.from(rab);
buf[0] = 1;
buf[1] = 2;
buf[2] = 3;
buf[3] = 4;
buf[4] = 5;
const sliced = buf.slice(0, 5);
expect(sliced.length).toBe(5);
expect(sliced[0]).toBe(1);
expect(sliced[4]).toBe(5);
// Growing the buffer should NOT change the slice length
rab.resize(20);
expect(sliced.length).toBe(5);
});
function forEachUnicode(label, test) {
["ucs2", "ucs-2", "utf16le", "utf-16le"].forEach(encoding =>
it(`${label} (${encoding})`, test.bind(null, encoding)),

View File

@@ -0,0 +1,161 @@
/**
* All tests in this file should also run in Node.js.
*
* Do not add any tests that only run in Bun.
*/
import { describe, test } from "node:test";
import assert from "node:assert";
import { Agent, createServer, request as httpRequest } from "node:http";
import type { AddressInfo } from "node:net";
// Helper to make a request and get the response.
// Uses a shared agent so that all requests go through the same TCP connection,
// which is critical for actually testing the keep-alive / proxy-URL bug.
function makeRequest(
port: number,
path: string,
agent: Agent,
): Promise<{ statusCode: number; body: string; url: string }> {
return new Promise((resolve, reject) => {
const req = httpRequest({ host: "127.0.0.1", port, path, method: "GET", agent }, res => {
let body = "";
res.on("data", chunk => {
body += chunk;
});
res.on("end", () => {
resolve({ statusCode: res.statusCode!, body, url: path });
});
});
req.on("error", reject);
req.end();
});
}
function listenOnRandomPort(server: ReturnType<typeof createServer>): Promise<number> {
return new Promise((resolve) => {
server.listen(0, "127.0.0.1", () => {
const addr = server.address() as AddressInfo;
resolve(addr.port);
});
});
}
describe("HTTP server with proxy-style absolute URLs", () => {
test("sequential GET requests with absolute URL paths don't hang", async () => {
const agent = new Agent({ keepAlive: true, maxSockets: 1 });
const server = createServer((req, res) => {
res.writeHead(200, { "Content-Type": "text/plain" });
res.end(req.url);
});
const port = await listenOnRandomPort(server);
try {
// Make 3 sequential requests with proxy-style absolute URLs
// Before the fix, request 2 would hang because the parser entered tunnel mode
const r1 = await makeRequest(port, "http://example.com/test1", agent);
assert.strictEqual(r1.statusCode, 200);
assert.ok(r1.body.includes("example.com"), `Expected body to contain "example.com", got: ${r1.body}`);
assert.ok(r1.body.includes("/test1"), `Expected body to contain "/test1", got: ${r1.body}`);
const r2 = await makeRequest(port, "http://example.com/test2", agent);
assert.strictEqual(r2.statusCode, 200);
assert.ok(r2.body.includes("example.com"), `Expected body to contain "example.com", got: ${r2.body}`);
assert.ok(r2.body.includes("/test2"), `Expected body to contain "/test2", got: ${r2.body}`);
const r3 = await makeRequest(port, "http://other.com/test3", agent);
assert.strictEqual(r3.statusCode, 200);
assert.ok(r3.body.includes("other.com"), `Expected body to contain "other.com", got: ${r3.body}`);
assert.ok(r3.body.includes("/test3"), `Expected body to contain "/test3", got: ${r3.body}`);
} finally {
agent.destroy();
server.close();
}
});
test("sequential POST requests with absolute URL paths don't hang", async () => {
const agent = new Agent({ keepAlive: true, maxSockets: 1 });
const server = createServer((req, res) => {
let body = "";
req.on("data", chunk => {
body += chunk;
});
req.on("end", () => {
res.writeHead(200, { "Content-Type": "text/plain" });
res.end(`${req.method} ${req.url} body=${body}`);
});
});
const port = await listenOnRandomPort(server);
try {
for (let i = 1; i <= 3; i++) {
const result = await new Promise<{ statusCode: number; body: string }>((resolve, reject) => {
const req = httpRequest(
{
host: "127.0.0.1",
port,
path: `http://example.com/post${i}`,
method: "POST",
headers: { "Content-Type": "text/plain" },
agent,
},
res => {
let body = "";
res.on("data", chunk => {
body += chunk;
});
res.on("end", () => {
resolve({ statusCode: res.statusCode!, body });
});
},
);
req.on("error", reject);
req.write(`data${i}`);
req.end();
});
assert.strictEqual(result.statusCode, 200);
assert.ok(result.body.includes(`/post${i}`), `Expected body to contain "/post${i}", got: ${result.body}`);
assert.ok(result.body.includes(`body=data${i}`), `Expected body to contain "body=data${i}", got: ${result.body}`);
}
} finally {
agent.destroy();
server.close();
}
});
test("mixed normal and proxy-style URLs work sequentially", async () => {
const agent = new Agent({ keepAlive: true, maxSockets: 1 });
const server = createServer((req, res) => {
res.writeHead(200, { "Content-Type": "text/plain" });
res.end(req.url);
});
const port = await listenOnRandomPort(server);
try {
// Mix of normal and proxy-style URLs
const r1 = await makeRequest(port, "/normal1", agent);
assert.strictEqual(r1.statusCode, 200);
assert.ok(r1.body.includes("/normal1"), `Expected body to contain "/normal1", got: ${r1.body}`);
const r2 = await makeRequest(port, "http://example.com/proxy1", agent);
assert.strictEqual(r2.statusCode, 200);
assert.ok(r2.body.includes("example.com"), `Expected body to contain "example.com", got: ${r2.body}`);
assert.ok(r2.body.includes("/proxy1"), `Expected body to contain "/proxy1", got: ${r2.body}`);
const r3 = await makeRequest(port, "/normal2", agent);
assert.strictEqual(r3.statusCode, 200);
assert.ok(r3.body.includes("/normal2"), `Expected body to contain "/normal2", got: ${r3.body}`);
const r4 = await makeRequest(port, "http://other.com/proxy2", agent);
assert.strictEqual(r4.statusCode, 200);
assert.ok(r4.body.includes("other.com"), `Expected body to contain "other.com", got: ${r4.body}`);
assert.ok(r4.body.includes("/proxy2"), `Expected body to contain "/proxy2", got: ${r4.body}`);
} finally {
agent.destroy();
server.close();
}
});
});
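
For reference, "proxy-style absolute URL" here means the request line uses absolute-form rather than origin-form. A minimal raw-socket sketch; the port below is arbitrary and only illustrates the wire format.

```ts
import net from "node:net";

// With `path` set to an absolute URL, the HTTP/1.1 request line uses absolute-form,
// which is what forward proxies send:
//
//   GET http://example.com/test1 HTTP/1.1
//
// instead of the usual origin-form "GET /test1 HTTP/1.1". Point this at any local
// HTTP server to observe the echoed request URL.
const socket = net.connect(8080, "127.0.0.1", () => {
  socket.write("GET http://example.com/test1 HTTP/1.1\r\nHost: example.com\r\nConnection: keep-alive\r\n\r\n");
});
socket.on("data", chunk => console.log(chunk.toString()));
socket.on("error", err => console.error(err.message));
```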

View File

@@ -0,0 +1,26 @@
import { describe, expect, test } from "bun:test";
import { bunEnv, bunExe, nodeExe } from "harness";
import { join } from "node:path";
describe("HTTP server with proxy-style absolute URLs", () => {
test("tests should run on node.js", async () => {
await using process = Bun.spawn({
cmd: [nodeExe(), "--test", join(import.meta.dir, "node-http-proxy-url.node.mts")],
stdout: "inherit",
stderr: "inherit",
stdin: "ignore",
env: bunEnv,
});
expect(await process.exited).toBe(0);
});
test("tests should run on bun", async () => {
await using process = Bun.spawn({
cmd: [bunExe(), "test", join(import.meta.dir, "node-http-proxy-url.node.mts")],
stdout: "inherit",
stderr: "inherit",
stdin: "ignore",
env: bunEnv,
});
expect(await process.exited).toBe(0);
});
});

View File

@@ -0,0 +1,428 @@
/**
* Tests for the net.Server → Http2SecureServer upgrade path
* (upgradeRawSocketToH2 in _http2_upgrade.ts).
*
* This pattern is used by http2-wrapper, crawlee, and other libraries that
* accept raw TCP connections and upgrade them to HTTP/2 via
* `h2Server.emit('connection', rawSocket)`.
*
* Works with both:
* bun bd test test/js/node/http2/node-http2-upgrade.test.ts
* node --experimental-strip-types --test test/js/node/http2/node-http2-upgrade.test.ts
*/
import assert from "node:assert";
import fs from "node:fs";
import http2 from "node:http2";
import net from "node:net";
import path from "node:path";
import { afterEach, describe, test } from "node:test";
import { fileURLToPath } from "node:url";
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const FIXTURES_PATH = path.join(__dirname, "..", "test", "fixtures", "keys");
const TLS = {
key: fs.readFileSync(path.join(FIXTURES_PATH, "agent1-key.pem")),
cert: fs.readFileSync(path.join(FIXTURES_PATH, "agent1-cert.pem")),
ALPNProtocols: ["h2"],
};
function createUpgradeServer(
handler: (req: http2.Http2ServerRequest, res: http2.Http2ServerResponse) => void,
opts: { onSession?: (session: http2.Http2Session) => void } = {},
): Promise<{ netServer: net.Server; h2Server: http2.Http2SecureServer; port: number }> {
return new Promise(resolve => {
const h2Server = http2.createSecureServer(TLS, handler);
h2Server.on("error", () => {});
if (opts.onSession) h2Server.on("session", opts.onSession);
const netServer = net.createServer(socket => {
h2Server.emit("connection", socket);
});
netServer.listen(0, "127.0.0.1", () => {
resolve({ netServer, h2Server, port: (netServer.address() as net.AddressInfo).port });
});
});
}
function connectClient(port: number): http2.ClientHttp2Session {
const client = http2.connect(`https://127.0.0.1:${port}`, { rejectUnauthorized: false });
client.on("error", () => {});
return client;
}
function request(
client: http2.ClientHttp2Session,
method: string,
reqPath: string,
body?: string,
): Promise<{ status: number; headers: http2.IncomingHttpHeaders; body: string }> {
return new Promise((resolve, reject) => {
const req = client.request({ ":method": method, ":path": reqPath });
let responseBody = "";
let responseHeaders: http2.IncomingHttpHeaders = {};
req.on("response", hdrs => {
responseHeaders = hdrs;
});
req.setEncoding("utf8");
req.on("data", (chunk: string) => {
responseBody += chunk;
});
req.on("end", () => {
resolve({
status: responseHeaders[":status"] as unknown as number,
headers: responseHeaders,
body: responseBody,
});
});
req.on("error", reject);
if (body !== undefined) {
req.end(body);
} else {
req.end();
}
});
}
describe("HTTP/2 upgrade via net.Server", () => {
let servers: { netServer: net.Server }[] = [];
let clients: http2.ClientHttp2Session[] = [];
afterEach(() => {
for (const c of clients) c.close();
for (const s of servers) s.netServer.close();
clients = [];
servers = [];
});
test("GET request succeeds with 200 and custom headers", async () => {
const srv = await createUpgradeServer((_req, res) => {
res.writeHead(200, { "x-upgrade-test": "yes" });
res.end("hello from upgraded server");
});
servers.push(srv);
const client = connectClient(srv.port);
clients.push(client);
const result = await request(client, "GET", "/");
assert.strictEqual(result.status, 200);
assert.strictEqual(result.headers["x-upgrade-test"], "yes");
assert.strictEqual(result.body, "hello from upgraded server");
});
test("POST request with body echoed back", async () => {
const srv = await createUpgradeServer((_req, res) => {
let body = "";
_req.on("data", (chunk: string) => {
body += chunk;
});
_req.on("end", () => {
res.writeHead(200);
res.end("echo:" + body);
});
});
servers.push(srv);
const client = connectClient(srv.port);
clients.push(client);
const result = await request(client, "POST", "/echo", "test payload");
assert.strictEqual(result.status, 200);
assert.strictEqual(result.body, "echo:test payload");
});
});
describe("HTTP/2 upgrade — multiple requests on one connection", () => {
test("three sequential requests share the same session", async () => {
let count = 0;
const srv = await createUpgradeServer((_req, res) => {
count++;
res.writeHead(200);
res.end(String(count));
});
const client = connectClient(srv.port);
const r1 = await request(client, "GET", "/");
const r2 = await request(client, "GET", "/");
const r3 = await request(client, "GET", "/");
assert.strictEqual(r1.body, "1");
assert.strictEqual(r2.body, "2");
assert.strictEqual(r3.body, "3");
client.close();
srv.netServer.close();
});
});
describe("HTTP/2 upgrade — session event", () => {
test("h2Server emits session event", async () => {
let sessionFired = false;
const srv = await createUpgradeServer(
(_req, res) => {
res.writeHead(200);
res.end("ok");
},
{
onSession: () => {
sessionFired = true;
},
},
);
const client = connectClient(srv.port);
await request(client, "GET", "/");
assert.strictEqual(sessionFired, true);
client.close();
srv.netServer.close();
});
});
describe("HTTP/2 upgrade — concurrent clients", () => {
test("two clients get independent sessions", async () => {
const srv = await createUpgradeServer((_req, res) => {
res.writeHead(200);
res.end(_req.url);
});
const c1 = connectClient(srv.port);
const c2 = connectClient(srv.port);
const [r1, r2] = await Promise.all([request(c1, "GET", "/from-client-1"), request(c2, "GET", "/from-client-2")]);
assert.strictEqual(r1.body, "/from-client-1");
assert.strictEqual(r2.body, "/from-client-2");
c1.close();
c2.close();
srv.netServer.close();
});
});
describe("HTTP/2 upgrade — socket close ordering", () => {
test("no crash when rawSocket.destroy() precedes session.close()", async () => {
let rawSocket: net.Socket | undefined;
let h2Session: http2.Http2Session | undefined;
const h2Server = http2.createSecureServer(TLS, (_req, res) => {
res.writeHead(200);
res.end("done");
});
h2Server.on("error", () => {});
h2Server.on("session", s => {
h2Session = s;
});
const netServer = net.createServer(socket => {
rawSocket = socket;
h2Server.emit("connection", socket);
});
const port = await new Promise<number>(resolve => {
netServer.listen(0, "127.0.0.1", () => resolve((netServer.address() as net.AddressInfo).port));
});
const client = connectClient(port);
await request(client, "GET", "/");
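// Destroy the raw socket first and wait for its "close" event before closing the
// HTTP/2 session; this is the socket-first shutdown order the test name describes.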
const socketClosed = Promise.withResolvers<void>();
rawSocket!.once("close", () => socketClosed.resolve());
rawSocket!.destroy();
await socketClosed.promise;
if (h2Session) h2Session.close();
client.close();
netServer.close();
});
test("no crash when session.close() precedes rawSocket.destroy()", async () => {
let rawSocket: net.Socket | undefined;
let h2Session: http2.Http2Session | undefined;
const h2Server = http2.createSecureServer(TLS, (_req, res) => {
res.writeHead(200);
res.end("done");
});
h2Server.on("error", () => {});
h2Server.on("session", s => {
h2Session = s;
});
const netServer = net.createServer(socket => {
rawSocket = socket;
h2Server.emit("connection", socket);
});
const port = await new Promise<number>(resolve => {
netServer.listen(0, "127.0.0.1", () => resolve((netServer.address() as net.AddressInfo).port));
});
const client = connectClient(port);
await request(client, "GET", "/");
if (h2Session) h2Session.close();
const socketClosed = Promise.withResolvers<void>();
rawSocket!.once("close", () => socketClosed.resolve());
rawSocket!.destroy();
await socketClosed.promise;
client.close();
netServer.close();
});
});
describe("HTTP/2 upgrade — ALPN negotiation", () => {
test("alpnProtocol is h2 after upgrade", async () => {
let observedAlpn: string | undefined;
const srv = await createUpgradeServer((_req, res) => {
const session = _req.stream.session;
if (session && session.socket) {
observedAlpn = (session.socket as any).alpnProtocol;
}
res.writeHead(200);
res.end("alpn-ok");
});
const client = connectClient(srv.port);
await request(client, "GET", "/");
assert.strictEqual(observedAlpn, "h2");
client.close();
srv.netServer.close();
});
});
describe("HTTP/2 upgrade — varied status codes", () => {
test("404 response with custom header", async () => {
const srv = await createUpgradeServer((_req, res) => {
res.writeHead(404, { "x-reason": "not-found" });
res.end("not found");
});
const client = connectClient(srv.port);
const result = await request(client, "GET", "/missing");
assert.strictEqual(result.status, 404);
assert.strictEqual(result.headers["x-reason"], "not-found");
assert.strictEqual(result.body, "not found");
client.close();
srv.netServer.close();
});
test("302 redirect response", async () => {
const srv = await createUpgradeServer((_req, res) => {
res.writeHead(302, { location: "/" });
res.end();
});
const client = connectClient(srv.port);
const result = await request(client, "GET", "/redirect");
assert.strictEqual(result.status, 302);
assert.strictEqual(result.headers["location"], "/");
client.close();
srv.netServer.close();
});
test("large response body (8KB) through upgraded socket", async () => {
const srv = await createUpgradeServer((_req, res) => {
res.writeHead(200);
res.end("x".repeat(8192));
});
const client = connectClient(srv.port);
const result = await request(client, "GET", "/large");
assert.strictEqual(result.body.length, 8192);
client.close();
srv.netServer.close();
});
});
describe("HTTP/2 upgrade — client disconnect mid-response", () => {
test("server does not crash when client destroys stream early", async () => {
const streamClosed = Promise.withResolvers<void>();
const srv = await createUpgradeServer((_req, res) => {
res.writeHead(200);
const interval = setInterval(() => {
if (res.destroyed || res.writableEnded) {
clearInterval(interval);
return;
}
res.write("chunk\n");
}, 5);
_req.stream.on("close", () => {
clearInterval(interval);
streamClosed.resolve();
});
});
const client = connectClient(srv.port);
const streamReady = Promise.withResolvers<http2.ClientHttp2Stream>();
const req = client.request({ ":method": "GET", ":path": "/" });
req.on("response", () => streamReady.resolve(req));
req.on("error", () => {});
const stream = await streamReady.promise;
stream.destroy();
await streamClosed.promise;
client.close();
srv.netServer.close();
});
});
describe("HTTP/2 upgrade — independent upgrade per connection", () => {
test("three clients produce three distinct sessions", async () => {
const sessions: http2.Http2Session[] = [];
const srv = await createUpgradeServer(
(_req, res) => {
res.writeHead(200);
res.end("ok");
},
{ onSession: s => sessions.push(s) },
);
const c1 = connectClient(srv.port);
const c2 = connectClient(srv.port);
const c3 = connectClient(srv.port);
await Promise.all([request(c1, "GET", "/"), request(c2, "GET", "/"), request(c3, "GET", "/")]);
assert.strictEqual(sessions.length, 3);
assert.notStrictEqual(sessions[0], sessions[1]);
assert.notStrictEqual(sessions[1], sessions[2]);
c1.close();
c2.close();
c3.close();
srv.netServer.close();
});
});
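// When running under Bun, re-run this same file under `node --test` to confirm the suite
// passes on Node.js as well; the `typeof Bun` guard keeps Node from recursing into this block.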
if (typeof Bun !== "undefined") {
describe("Node.js compatibility", () => {
test("tests should run on node.js", async () => {
await using proc = Bun.spawn({
cmd: [Bun.which("node") || "node", "--test", import.meta.filename],
stdout: "inherit",
stderr: "inherit",
stdin: "ignore",
});
assert.strictEqual(await proc.exited, 0);
});
});
}

View File

@@ -0,0 +1,69 @@
'use strict';
const common = require('../common');
const fixtures = require('../common/fixtures');
if (!common.hasCrypto)
common.skip('missing crypto');
const assert = require('assert');
const net = require('net');
const h2 = require('http2');
const tlsOptions = {
key: fixtures.readKey('agent1-key.pem'),
cert: fixtures.readKey('agent1-cert.pem'),
ALPNProtocols: ['h2']
};
// Create a net server that upgrades sockets to HTTP/2 manually, handles the
// request, and then shuts down via a short socket timeout and a longer H2 session
// timeout. This is an unconventional way to shut down a session (the underlying
// socket closing first) but it should work - critically, it shouldn't segfault
// (as it did until Node v20.5.1).
let serverRawSocket;
let serverH2Session;
const netServer = net.createServer((socket) => {
serverRawSocket = socket;
h2Server.emit('connection', socket);
});
const h2Server = h2.createSecureServer(tlsOptions, (req, res) => {
res.writeHead(200);
res.end();
});
h2Server.on('session', (session) => {
serverH2Session = session;
});
netServer.listen(0, common.mustCall(() => {
const proxyClient = h2.connect(`https://localhost:${netServer.address().port}`, {
rejectUnauthorized: false
});
proxyClient.on('error', () => {});
proxyClient.on('close', common.mustCall(() => {
netServer.close();
}));
const req = proxyClient.request({
':method': 'GET',
':path': '/'
});
req.on('error', () => {});
req.on('response', common.mustCall((response) => {
assert.strictEqual(response[':status'], 200);
// Asynchronously shut down the server's connections after the response,
// but not in the order it typically expects:
setTimeout(() => {
serverRawSocket.destroy();
setTimeout(() => {
serverH2Session.close();
}, 10);
}, 10);
}));
}));

View File

@@ -90,6 +90,273 @@ describe("Structured Clone Fast Path", () => {
expect(delta).toBeLessThan(1024 * 1024);
});
// === Array fast path tests ===
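// The cases below cover dense arrays of primitives plus the edge cases around them
// (holes, named properties, -0/NaN, prototype changes, Array subclasses), both via
// structuredClone and via postMessage over a MessageChannel.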
test("structuredClone should work with empty array", () => {
expect(structuredClone([])).toEqual([]);
});
test("structuredClone should work with array of numbers", () => {
const input = [1, 2, 3, 4, 5];
expect(structuredClone(input)).toEqual(input);
});
test("structuredClone should work with array of strings", () => {
const input = ["hello", "world", ""];
expect(structuredClone(input)).toEqual(input);
});
test("structuredClone should work with array of mixed primitives", () => {
const input = [1, "hello", true, false, null, undefined, 3.14];
const cloned = structuredClone(input);
expect(cloned).toEqual(input);
});
test("structuredClone should work with array of special numbers", () => {
const cloned = structuredClone([-0, NaN, Infinity, -Infinity]);
expect(Object.is(cloned[0], -0)).toBe(true);
expect(cloned[1]).toBeNaN();
expect(cloned[2]).toBe(Infinity);
expect(cloned[3]).toBe(-Infinity);
});
test("structuredClone should work with large array of numbers", () => {
const input = Array.from({ length: 10000 }, (_, i) => i);
expect(structuredClone(input)).toEqual(input);
});
test("structuredClone should fallback for arrays with nested objects", () => {
const input = [{ a: 1 }, { b: 2 }];
expect(structuredClone(input)).toEqual(input);
});
test("structuredClone should fallback for arrays with holes", () => {
const input = [1, , 3]; // sparse
const cloned = structuredClone(input);
// structuredClone does not materialize holes; reading the missing index still yields undefined
expect(cloned[0]).toBe(1);
expect(cloned[1]).toBe(undefined);
expect(cloned[2]).toBe(3);
});
test("structuredClone should work with array of doubles", () => {
const input = [1.5, 2.7, 3.14, 0.1 + 0.2];
const cloned = structuredClone(input);
expect(cloned).toEqual(input);
});
test("structuredClone creates independent copy of array", () => {
const input = [1, 2, 3];
const cloned = structuredClone(input);
cloned[0] = 999;
expect(input[0]).toBe(1);
});
test("structuredClone should preserve named properties on arrays", () => {
const input: any = [1, 2, 3];
input.foo = "bar";
const cloned = structuredClone(input);
expect(cloned.foo).toBe("bar");
expect(Array.from(cloned)).toEqual([1, 2, 3]);
});
test("postMessage should work with array fast path", async () => {
const { port1, port2 } = new MessageChannel();
const input = [1, 2, 3, "hello", true];
const { promise, resolve } = Promise.withResolvers();
port2.onmessage = (e: MessageEvent) => resolve(e.data);
port1.postMessage(input);
const result = await promise;
expect(result).toEqual(input);
port1.close();
port2.close();
});
// === Edge case tests ===
test("structuredClone of frozen array should produce a non-frozen clone", () => {
const input = Object.freeze([1, 2, 3]);
const cloned = structuredClone(input);
expect(cloned).toEqual([1, 2, 3]);
expect(Object.isFrozen(cloned)).toBe(false);
cloned[0] = 999;
expect(cloned[0]).toBe(999);
});
test("structuredClone of sealed array should produce a non-sealed clone", () => {
const input = Object.seal([1, 2, 3]);
const cloned = structuredClone(input);
expect(cloned).toEqual([1, 2, 3]);
expect(Object.isSealed(cloned)).toBe(false);
cloned.push(4);
expect(cloned).toEqual([1, 2, 3, 4]);
});
test("structuredClone of array with deleted element (hole via delete)", () => {
const input = [1, 2, 3];
delete (input as any)[1];
const cloned = structuredClone(input);
expect(cloned[0]).toBe(1);
expect(cloned[1]).toBe(undefined);
expect(cloned[2]).toBe(3);
expect(1 in cloned).toBe(false); // holes remain holes after structuredClone
});
test("structuredClone of array with length > actual elements", () => {
const input = [1, 2, 3];
input.length = 6;
const cloned = structuredClone(input);
expect(cloned.length).toBe(6);
expect(cloned[0]).toBe(1);
expect(cloned[1]).toBe(2);
expect(cloned[2]).toBe(3);
expect(cloned[3]).toBe(undefined);
});
test("structuredClone of single element arrays", () => {
expect(structuredClone([42])).toEqual([42]);
expect(structuredClone([3.14])).toEqual([3.14]);
expect(structuredClone(["hello"])).toEqual(["hello"]);
expect(structuredClone([true])).toEqual([true]);
expect(structuredClone([null])).toEqual([null]);
});
test("structuredClone of array with named properties on Int32 array", () => {
const input: any = [1, 2, 3]; // Int32 indexing
input.name = "test";
input.count = 42;
const cloned = structuredClone(input);
expect(cloned.name).toBe("test");
expect(cloned.count).toBe(42);
expect(Array.from(cloned)).toEqual([1, 2, 3]);
});
test("structuredClone of array with named properties on Double array", () => {
const input: any = [1.1, 2.2, 3.3]; // Double indexing
input.label = "doubles";
const cloned = structuredClone(input);
expect(cloned.label).toBe("doubles");
expect(Array.from(cloned)).toEqual([1.1, 2.2, 3.3]);
});
test("structuredClone of array that transitions Int32 to Double", () => {
const input = [1, 2, 3]; // starts as Int32
input.push(4.5); // transitions to Double
const cloned = structuredClone(input);
expect(cloned).toEqual([1, 2, 3, 4.5]);
});
test("structuredClone of array with modified prototype", () => {
const input = [1, 2, 3];
Object.setPrototypeOf(input, {
customMethod() {
return 42;
},
});
const cloned = structuredClone(input);
// Clone should have standard Array prototype, not the custom one
expect(Array.from(cloned)).toEqual([1, 2, 3]);
expect(cloned).toBeInstanceOf(Array);
expect((cloned as any).customMethod).toBeUndefined();
});
test("structuredClone of array with prototype indexed properties and holes", () => {
const proto = Object.create(Array.prototype);
proto[1] = "from proto";
const input = new Array(3);
Object.setPrototypeOf(input, proto);
input[0] = "a";
input[2] = "c";
// structuredClone only copies own properties; prototype values are not included
const cloned = structuredClone(input);
expect(cloned[0]).toBe("a");
expect(1 in cloned).toBe(false); // hole, not "from proto"
expect(cloned[2]).toBe("c");
expect(cloned).toBeInstanceOf(Array);
});
test("postMessage with Int32 array via MessageChannel", async () => {
const { port1, port2 } = new MessageChannel();
const input = [10, 20, 30, 40, 50];
const { promise, resolve } = Promise.withResolvers();
port2.onmessage = (e: MessageEvent) => resolve(e.data);
port1.postMessage(input);
const result = await promise;
expect(result).toEqual(input);
port1.close();
port2.close();
});
test("postMessage with Double array via MessageChannel", async () => {
const { port1, port2 } = new MessageChannel();
const input = [1.1, 2.2, 3.3];
const { promise, resolve } = Promise.withResolvers();
port2.onmessage = (e: MessageEvent) => resolve(e.data);
port1.postMessage(input);
const result = await promise;
expect(result).toEqual(input);
port1.close();
port2.close();
});
test("structuredClone of array multiple times produces independent copies", () => {
const input = [1, 2, 3];
const clones = Array.from({ length: 10 }, () => structuredClone(input));
clones[0][0] = 999;
clones[5][1] = 888;
// All other clones and the original should be unaffected
expect(input).toEqual([1, 2, 3]);
for (let i = 1; i < 10; i++) {
if (i === 5) {
expect(clones[i]).toEqual([1, 888, 3]);
} else {
expect(clones[i]).toEqual([1, 2, 3]);
}
}
});
test("structuredClone of Array subclass loses subclass identity", () => {
class MyArray extends Array {
customProp = "hello";
sum() {
return this.reduce((a: number, b: number) => a + b, 0);
}
}
const input = new MyArray(1, 2, 3);
input.customProp = "world";
const cloned = structuredClone(input);
// structuredClone spec: result is a plain Array, not a subclass
expect(Array.from(cloned)).toEqual([1, 2, 3]);
expect(cloned).toBeInstanceOf(Array);
expect((cloned as any).sum).toBeUndefined();
});
test("structuredClone of array with only undefined values", () => {
const input = [undefined, undefined, undefined];
const cloned = structuredClone(input);
expect(cloned).toEqual([undefined, undefined, undefined]);
expect(cloned.length).toBe(3);
// Ensure they are actual values, not holes
expect(0 in cloned).toBe(true);
expect(1 in cloned).toBe(true);
expect(2 in cloned).toBe(true);
});
test("structuredClone of array with only null values", () => {
const input = [null, null, null];
const cloned = structuredClone(input);
expect(cloned).toEqual([null, null, null]);
});
test("structuredClone of dense double array preserves -0 and NaN", () => {
const input = [-0, NaN, -0, NaN];
const cloned = structuredClone(input);
expect(Object.is(cloned[0], -0)).toBe(true);
expect(cloned[1]).toBeNaN();
expect(Object.is(cloned[2], -0)).toBe(true);
expect(cloned[3]).toBeNaN();
});
test("structuredClone on object with simple properties can exceed JSFinalObject::maxInlineCapacity", () => {
let largeValue = {};
for (let i = 0; i < 100; i++) {

View File

@@ -20,8 +20,8 @@ export async function build(dir: string) {
// so we make it use clang instead
...(process.platform == "linux" && isCI
? {
CC: !isMusl ? "/usr/lib/llvm-19/bin/clang" : "/usr/lib/llvm19/bin/clang",
CXX: !isMusl ? "/usr/lib/llvm-19/bin/clang++" : "/usr/lib/llvm19/bin/clang++",
CC: !isMusl ? "/usr/lib/llvm-21/bin/clang" : "/usr/lib/llvm21/bin/clang",
CXX: !isMusl ? "/usr/lib/llvm-21/bin/clang++" : "/usr/lib/llvm21/bin/clang++",
}
: {}),
},

View File

@@ -0,0 +1,69 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe } from "harness";
// https://github.com/oven-sh/bun/issues/26669
// WebSocket client crashes ("Pure virtual function called!") when binaryType = "blob"
// and no event listener is attached. The missing incPendingActivityCount() allows the
// WebSocket to be GC'd before the postTask callback runs.
test("WebSocket with binaryType blob should not crash when GC'd before postTask", async () => {
await using server = Bun.serve({
port: 0,
fetch(req, server) {
if (server.upgrade(req)) return undefined;
return new Response("Not a websocket");
},
websocket: {
open(ws) {
// Send binary data immediately - this triggers didReceiveBinaryData
// with the Blob path when client has binaryType = "blob"
ws.sendBinary(new Uint8Array(64));
ws.sendBinary(new Uint8Array(64));
ws.sendBinary(new Uint8Array(64));
},
message() {},
},
});
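// Run the client side in a separate process so that, if the bug reproduces, the native
// crash takes down the child rather than the test runner.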
await using proc = Bun.spawn({
cmd: [
bunExe(),
"-e",
`
const url = process.argv[1];
// Create many short-lived WebSocket objects with blob binaryType and no listeners.
// Without the fix, the missing incPendingActivityCount() lets the WebSocket get GC'd
// before the postTask callback fires, causing "Pure virtual function called!".
async function run() {
for (let i = 0; i < 100; i++) {
const ws = new WebSocket(url);
ws.binaryType = "blob";
// Intentionally: NO event listeners attached.
// This forces the postTask path in didReceiveBinaryData's Blob case.
}
// Force GC to collect the unreferenced WebSocket objects while postTask
// callbacks are still pending.
Bun.gc(true);
await Bun.sleep(50);
Bun.gc(true);
await Bun.sleep(50);
Bun.gc(true);
await Bun.sleep(100);
}
await run();
Bun.gc(true);
await Bun.sleep(200);
console.log("OK");
process.exit(0);
`,
`ws://localhost:${server.port}`,
],
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(stdout).toContain("OK");
expect(exitCode).toBe(0);
});