Compare commits

...

13 Commits

Author SHA1 Message Date
autofix-ci[bot]
07bb3cee25 [autofix.ci] apply automated fixes 2026-02-20 21:35:20 +00:00
Claude
a25996fc53 fix(socket): handle non-object argument in Listener.getsockname()
`Listener.getsockname()` crashed with a null pointer dereference when
called without arguments or with a non-object argument. The function
unconditionally called `.put()` on the first argument, which calls
`getObject()` in C++ — returning null for non-object values.

Create a new empty object when the argument is not an object, and
return it so the caller can use the result directly.
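The defensive pattern can be sketched in TypeScript (illustrative only; the field names below are hypothetical, not Bun's actual `Listener` internals):

```typescript
// Sketch: fall back to a fresh object when the caller passes nothing
// (or a non-object), instead of dereferencing a null result.
function getsockname(out?: unknown): Record<string, unknown> {
  const target =
    typeof out === "object" && out !== null
      ? (out as Record<string, unknown>)
      : {}; // previously this path led to a null pointer dereference
  // Hypothetical fields, for illustration only.
  target.family = "IPv4";
  target.address = "127.0.0.1";
  target.port = 0;
  return target;
}
```

Calling `getsockname()` with no argument now returns a populated object instead of crashing; passing an object mutates and returns that same object.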

https://claude.ai/code/session_01UEQc8JbWybcPVLpCLoFMhB
2026-02-20 21:33:06 +00:00
robobun
89d2b1cd0b fix(websocket): add missing incPendingActivityCount() in blob binaryType case (#26670)
## Summary

- Fix crash ("Pure virtual function called!") when WebSocket client
receives binary data with `binaryType = "blob"` and no event listener
attached
- Add missing `incPendingActivityCount()` call before `postTask` in the
Blob case of `didReceiveBinaryData`
- Add regression test for issue #26669

## Root Cause

The Blob case in `didReceiveBinaryData` (WebSocket.cpp:1324-1331) was
calling `decPendingActivityCount()` inside the `postTask` callback
without a matching `incPendingActivityCount()` beforehand. This bug was
introduced in #21471 when Blob support was added.

The ArrayBuffer and NodeBuffer cases correctly call
`incPendingActivityCount()` before `postTask`, but the Blob case was
missing this call.

## Test plan

- [x] New regression test verifies WebSocket with `binaryType = "blob"`
doesn't crash on ping frames
- [x] `bun bd test test/regression/issue/26669.test.ts` passes

Fixes #26669

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Ciro Spaciari MacBook <ciro@anthropic.com>
2026-02-05 20:39:19 -08:00
Jarred Sumner
2019a1b11d Bump WebKit 2026-02-05 20:09:39 -08:00
SUZUKI Sosuke
6c70ce2485 Update WebKit to 7bc2f97e28353062bb54776ce01e4c2ff24c35cc (#26769)
### What does this PR do?

### How did you verify your code works?
2026-02-05 17:58:30 -08:00
SUZUKI Sosuke
0e386c4168 fix(stringWidth): correct width for Thai/Lao spacing vowels (#26728)
## Summary

`Bun.stringWidth` was incorrectly treating Thai SARA AA (U+0E32), SARA
AM (U+0E33), and their Lao equivalents (U+0EB2, U+0EB3) as zero-width
characters.

## Root Cause

In `src/string/immutable/visible.zig`, the range check for Thai/Lao
combining marks was too broad:
- Thai: `0xe31 <= cp <= 0xe3a` included U+0E32 and U+0E33
- Lao: `0xeb1 <= cp <= 0xebc` included U+0EB2 and U+0EB3

According to Unicode (UCD Grapheme_Break property), these are **spacing
vowels** (Grapheme_Base), not combining marks.

## Changes

- **`src/string/immutable/visible.zig`**: Exclude U+0E32, U+0E33,
U+0EB2, U+0EB3 from zero-width ranges
- **`test/js/bun/util/stringWidth.test.ts`**: Add tests for Thai and Lao
spacing vowels

## Before/After

| Character | Before | After |
|-----------|--------|-------|
| `\u0E32` (SARA AA) | 0 | 1 |
| `\u0E33` (SARA AM) | 0 | 1 |
| `คำ` (common Thai word) | 1 | 2 |
| `\u0EB2` (Lao AA) | 0 | 1 |
| `\u0EB3` (Lao AM) | 0 | 1 |
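The corrected range check can be sketched in TypeScript (a mirror of the logic described above, not the actual `visible.zig` code):

```typescript
// Zero-width check for Thai/Lao marks, excluding the spacing vowels
// SARA AA/AM (U+0E32, U+0E33) and their Lao equivalents (U+0EB2, U+0EB3).
function isZeroWidthThaiLao(cp: number): boolean {
  // Thai combining marks U+0E31..U+0E3A, minus the two spacing vowels
  if (cp >= 0x0e31 && cp <= 0x0e3a) return cp !== 0x0e32 && cp !== 0x0e33;
  // Lao combining marks U+0EB1..U+0EBC, minus the two spacing vowels
  if (cp >= 0x0eb1 && cp <= 0x0ebc) return cp !== 0x0eb2 && cp !== 0x0eb3;
  return false;
}
```

Under this check, `คำ` contributes 1 column for ค (U+0E04, outside both ranges) plus 1 for ำ (U+0E33, now spacing), matching the "After" column above.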

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-05 17:31:15 -08:00
Alistair Smith
e5cd034e9a Define seed in crc32 types (#26754)
### What does this PR do?

Fixes #26711 

### How did you verify your code works?

bun-types.test.ts integration test
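For context on why a seed parameter matters: it lets CRC-32 computations be chained across chunks, zlib-style. A self-contained TypeScript sketch (assuming `Bun.hash.crc32` follows the standard CRC-32/ISO-HDLC seed semantics — an assumption, not confirmed by this PR):

```typescript
// Standard CRC-32 (IEEE/zlib polynomial) with an optional seed, so that
// crc32(b, crc32(a)) === crc32(concat(a, b)).
function crc32(data: Uint8Array, seed = 0): number {
  let crc = ~seed >>> 0;
  for (const byte of data) {
    crc ^= byte;
    for (let i = 0; i < 8; i++) {
      crc = (crc >>> 1) ^ (0xedb88320 & -(crc & 1));
    }
  }
  return ~crc >>> 0;
}
```

The chaining property is what makes the `seed?: number` overload useful for streaming checksums.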
2026-02-05 06:41:25 -08:00
Dylan Conway
45b9d1baba Revert "fix(bindgen): prevent use-after-free for optional string argu… (#26742)
…ments (#26717)"

This reverts commit 315e822866.

### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-04 19:38:12 -08:00
Ciro Spaciari
0ad562d3bd fix(http2) Fix SSLWrapper and allow injecting connections in Http2SecureServer (#26539)
### What does this PR do?

Enables the `net.Server → Http2SecureServer` connection upgrade pattern
used by libraries like
[http2-wrapper](https://github.com/szmarczak/http2-wrapper),
[crawlee](https://github.com/apify/crawlee), and custom HTTP/2 proxy
servers. This pattern works by accepting raw TCP connections on a
`net.Server` and forwarding them to an `Http2SecureServer` via
`h2Server.emit('connection', rawSocket)`.

#### Bug fixes

**SSLWrapper use-after-free (Zig)**

Two use-after-free bugs in `ssl_wrapper.zig` are fixed:

1. **`flush()` stale pointer** — `flush()` captured the `ssl` pointer
*before* calling `handleTraffic()`, which can trigger a close callback
that frees the SSL object via `deinit`. The pointer was then used after
being freed. Fix: read `this.ssl` *after* `handleTraffic()` returns.

2. **`handleReading()` null dereference** — `handleReading()` called
`triggerCloseCallback()` after `triggerDataCallback()` without checking
whether the data callback had already closed the connection. This led to
a null function pointer dereference. Fix: check `this.ssl == null ||
this.flags.closed_notified` before calling the close callback.
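The `flush()` fix follows a general pattern worth spelling out: re-read a handle only *after* any call that may invalidate it. A toy TypeScript sketch (illustrative; the real fix lives in `ssl_wrapper.zig`):

```typescript
interface Wrapper {
  ssl: { write(data: string): void } | null;
}

// Stand-in for handleTraffic(): may run a close callback that frees the SSL object.
function handleTraffic(w: Wrapper, closesDuringTraffic: boolean): void {
  if (closesDuringTraffic) w.ssl = null;
}

function flush(w: Wrapper, closesDuringTraffic: boolean): void {
  handleTraffic(w, closesDuringTraffic);
  const ssl = w.ssl; // read AFTER handleTraffic(), per the fix; the old code read it before
  if (ssl === null) return; // connection was torn down while handling traffic
  ssl.write("flushed");
}
```

Capturing `w.ssl` before `handleTraffic()` is the TypeScript analog of the stale-pointer bug: the local would still reference the now-freed handle.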

### How did you verify your code works?

- Added **13 in-process tests** (`node-http2-upgrade.test.mts`) covering
the `net.Server → Http2SecureServer` upgrade path:
  - GET/POST requests through upgraded connections
  - Sequential requests sharing a single H2 session
  - `session` event emission
  - Concurrent clients with independent sessions
  - Socket close ordering (rawSocket first vs session first) — no crash
  - ALPN protocol negotiation (`h2`)
  - Varied status codes (200, 302, 404)
  - Client disconnect mid-response (stream destroyed early)
  - Three independent clients producing three distinct sessions
- Tests use `node:test` + `node:assert` and **pass in both Bun and
Node.js**
- Ported `test-http2-socket-close.js` from the Node.js test suite,
verifying no segfault when the raw socket is destroyed before the H2
session is closed

---------

Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-04 19:23:29 -08:00
Ciro Spaciari
63a323a511 fix(http): don't enter tunnel mode for proxy-style absolute URLs in request line (#26737)
## Summary

Fixes a bug where sequential HTTP requests with proxy-style absolute
URLs (e.g. `GET http://example.com/path HTTP/1.1`) hang on the 2nd+
request when using keep-alive connections.

## Root Cause

In `packages/bun-uws/src/HttpParser.h`, the parser was treating
proxy-style absolute URLs identically to `CONNECT` method requests —
setting `isConnectRequest = true` and entering tunnel mode. This flag
was never reset between requests on the same keep-alive connection, so
the 2nd+ request was swallowed as raw tunnel data instead of being
parsed as HTTP.

## Fix

3-line change in `HttpParser.h:569`:
- **`isConnect`**: Now only matches actual `CONNECT` method requests
(removed `isHTTPorHTTPSPrefixForProxies` from the condition)
- **`isProxyStyleURL`**: New variable that detects `http://`/`https://`
prefixes and accepts them as valid request targets — without triggering
tunnel mode
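The post-fix classification can be sketched in TypeScript (a simplified mirror of the `HttpParser.h` logic, not the actual C++):

```typescript
// Classify a request line: only CONNECT enters tunnel mode; absolute URLs
// are accepted as valid request targets without triggering tunnel mode.
function classifyRequestLine(line: string): "normal" | "connect" | "proxy-style" {
  const [method, target] = line.split(" ");
  if (method === "CONNECT") return "connect"; // the only case that tunnels now
  if (/^https?:\/\//.test(target)) return "proxy-style"; // parsed as HTTP, no tunnel
  return "normal";
}
```

Because proxy-style requests no longer flip the connection into tunnel mode, the 2nd+ request on a keep-alive connection is parsed normally instead of being swallowed as raw tunnel data.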

## Who was affected

- Any Bun HTTP server (`Bun.serve()` or `node:http createServer`)
receiving proxy-style requests on keep-alive connections
- HTTP proxy servers built with Bun could only handle one request per
connection
- Bun's own HTTP client making sequential requests through an HTTP proxy
backed by a Bun server

## Test

Added `test/js/node/http/node-http-proxy-url.test.ts` with 3 test cases:
1. Sequential GET requests with absolute URL paths
2. Sequential POST requests with absolute URL paths
3. Mixed normal and proxy-style URLs

Tests run under both Node.js and Bun for compatibility verification.

- Fails with system bun (2/3 tests time out on the 2nd request)
- Passes with the debug build (3/3 tests pass)

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-04 19:23:18 -08:00
星星魔法术
af76296637 fix(docs): update runtime/markdown page Callout component (#26729)
### What does this PR do?
Fixes #26727: the Page Not Found bug on the runtime/markdown docs page.
### How did you verify your code works?
I ran the development server:
```bash
mint dev
```

<img width="1287" height="823" alt="Markdown"
src="https://github.com/user-attachments/assets/555716b4-1aee-46bd-b066-1e00986b3923"
/>
2026-02-03 22:59:12 -08:00
Dylan Conway
d1047c2cf1 fix ci (#26703)
### What does this PR do?

### How did you verify your code works?
2026-02-03 22:18:40 -08:00
robobun
315e822866 fix(bindgen): prevent use-after-free for optional string arguments (#26717)
## Summary
- Fix a use-after-free bug in the bindgen code generator where string
arguments with default values would have their underlying WTF::String
destroyed before the BunString was used
- The issue occurred because for optional string parameters with
defaults, a WTF::String was created inside an `if` block, converted to
BunString, then the if block closed and destroyed the WTF::String while
the BunString was still in use
- This manifested as a segfault in `Bun.stringWidth()` and potentially
other functions using optional string arguments

## Details

The crash stack trace showed:
```
Segmentation fault at address 0x31244B0F0
visible.zig:888: string.immutable.visible.visible.visibleUTF16WidthFn
BunObject.zig:1371: bindgen_BunObject_dispatchStringWidth1
GeneratedBindings.cpp:242: bindgen_BunObject_jsStringWidth
```

The generated code before this fix looked like:
```cpp
BunString argStr;
if (!arg0.value().isUndefinedOrNull()) {
    WTF::String wtfString_0 = WebCore::convert<...>(...);
    argStr = Bun::toString(wtfString_0);
}  // <-- wtfString_0 destroyed here!
// ... argStr used later, pointing to freed memory
```

The fix declares the WTF::String holder outside the if block:
```cpp
BunString argStr;
WTF::String wtfStringHolder_0;  // Lives until function returns
if (!arg0.value().isUndefinedOrNull()) {
    wtfStringHolder_0 = WebCore::convert<...>(...);
}
if (!wtfStringHolder_0.isEmpty()) argStr = Bun::toString(wtfStringHolder_0);
// argStr now points to valid memory
```

This fix applies to both:
- Direct string function arguments with defaults (e.g.,
`t.DOMString.default("")`)
- Dictionary fields with string defaults

## Test plan
- [x] Existing `stringWidth.test.ts` tests pass (105 tests)
- [x] Manual testing with GC stress shows no crashes
- [x] `os.userInfo()` with encoding option works correctly

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 17:44:13 -08:00
31 changed files with 1439 additions and 105 deletions

View File

@@ -114,7 +114,8 @@ const buildPlatforms = [
{ os: "linux", arch: "x64", abi: "musl", baseline: true, distro: "alpine", release: "3.23" },
{ os: "windows", arch: "x64", release: "2019" },
{ os: "windows", arch: "x64", baseline: true, release: "2019" },
{ os: "windows", arch: "aarch64", release: "2019" },
// TODO: Re-enable when Windows ARM64 VS component installation is resolved on Buildkite runners
// { os: "windows", arch: "aarch64", release: "2019" },
];
/**
@@ -470,7 +471,7 @@ function getBuildCommand(target, options, label) {
*/
function getWindowsArm64CrossFlags(target) {
if (target.os === "windows" && target.arch === "aarch64") {
return " --toolchain windows-aarch64 -DSKIP_CODEGEN=ON -DCMAKE_C_COMPILER=clang-cl -DCMAKE_CXX_COMPILER=clang-cl";
return " --toolchain windows-aarch64";
}
return "";
}
@@ -483,6 +484,7 @@ function getWindowsArm64CrossFlags(target) {
function getBuildCppStep(platform, options) {
const command = getBuildCommand(platform, options);
const crossFlags = getWindowsArm64CrossFlags(platform);
return {
key: `${getTargetKey(platform)}-build-cpp`,
label: `${getTargetLabel(platform)} - build-cpp`,

View File

@@ -7,6 +7,13 @@ register_repository(
4f4f5ef8ebc6e23cbf393428f0ab1b526773f7ac
)
set(BORINGSSL_CMAKE_ARGS -DBUILD_SHARED_LIBS=OFF)
# Disable ASM on Windows ARM64 to avoid mixing non-ARM object files into ARM64 libs
if(WIN32 AND CMAKE_SYSTEM_PROCESSOR MATCHES "ARM64|aarch64|AARCH64")
list(APPEND BORINGSSL_CMAKE_ARGS -DOPENSSL_NO_ASM=1)
endif()
register_cmake_command(
TARGET
boringssl
@@ -15,7 +22,7 @@ register_cmake_command(
ssl
decrepit
ARGS
-DBUILD_SHARED_LIBS=OFF
${BORINGSSL_CMAKE_ARGS}
INCLUDES
include
)

View File

@@ -1457,6 +1457,8 @@ if(NOT BUN_CPP_ONLY)
# ==856230==See https://github.com/google/sanitizers/issues/856 for possible workarounds.
# the linked issue refers to very old kernels but this still happens to us on modern ones.
# disabling ASLR to run the binary works around it
# Skip post-build test/features when cross-compiling (can't run the target binary on the host)
if(NOT CMAKE_CROSSCOMPILING)
set(TEST_BUN_COMMAND_BASE ${BUILD_PATH}/${bunExe} --revision)
set(TEST_BUN_COMMAND_ENV_WRAP
${CMAKE_COMMAND} -E env BUN_DEBUG_QUIET_LOGS=1)
@@ -1505,6 +1507,7 @@ if(NOT BUN_CPP_ONLY)
${BUILD_PATH}/features.json
)
endif()
endif() # NOT CMAKE_CROSSCOMPILING
if(CMAKE_HOST_APPLE AND bunStrip)
register_command(
@@ -1551,7 +1554,10 @@ if(NOT BUN_CPP_ONLY)
string(REPLACE bun ${bunTriplet} bunPath ${bun})
endif()
set(bunFiles ${bunExe} features.json)
set(bunFiles ${bunExe})
if(NOT CMAKE_CROSSCOMPILING)
list(APPEND bunFiles features.json)
endif()
if(WIN32)
list(APPEND bunFiles ${bun}.pdb)
elseif(APPLE)

View File

@@ -26,6 +26,12 @@ if(RELEASE)
list(APPEND LOLHTML_BUILD_ARGS --release)
endif()
# Cross-compilation: tell cargo to target ARM64
if(WIN32 AND CMAKE_SYSTEM_PROCESSOR MATCHES "ARM64|aarch64|AARCH64")
list(APPEND LOLHTML_BUILD_ARGS --target aarch64-pc-windows-msvc)
set(LOLHTML_LIBRARY ${LOLHTML_BUILD_PATH}/aarch64-pc-windows-msvc/${LOLHTML_BUILD_TYPE}/${CMAKE_STATIC_LIBRARY_PREFIX}lolhtml${CMAKE_STATIC_LIBRARY_SUFFIX})
endif()
# Windows requires unwind tables, apparently.
if (NOT WIN32)
# The encoded escape sequences are intentional. They're how you delimit multiple arguments in a single environment variable.
@@ -51,11 +57,18 @@ if(WIN32)
if(MSVC_VERSIONS)
list(GET MSVC_VERSIONS -1 MSVC_LATEST) # Get the latest version
if(CMAKE_SYSTEM_PROCESSOR MATCHES "ARM64|aarch64")
set(MSVC_LINK_PATH "${MSVC_LATEST}/bin/HostARM64/arm64/link.exe")
# Use Hostx64/arm64 for cross-compilation from x64, fall back to native
if(EXISTS "${MSVC_LATEST}/bin/Hostx64/arm64/link.exe")
set(MSVC_LINK_PATH "${MSVC_LATEST}/bin/Hostx64/arm64/link.exe")
else()
set(MSVC_LINK_PATH "${MSVC_LATEST}/bin/HostARM64/arm64/link.exe")
endif()
set(CARGO_LINKER_VAR "CARGO_TARGET_AARCH64_PC_WINDOWS_MSVC_LINKER")
set(MSVC_LIB_ARCH "arm64")
else()
set(MSVC_LINK_PATH "${MSVC_LATEST}/bin/Hostx64/x64/link.exe")
set(CARGO_LINKER_VAR "CARGO_TARGET_X86_64_PC_WINDOWS_MSVC_LINKER")
set(MSVC_LIB_ARCH "x64")
endif()
if(EXISTS "${MSVC_LINK_PATH}")
list(APPEND LOLHTML_ENV "${CARGO_LINKER_VAR}=${MSVC_LINK_PATH}")

View File

@@ -3,18 +3,35 @@ set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER_WORKS ON)
set(CMAKE_CXX_COMPILER_WORKS ON)
set(CMAKE_CROSSCOMPILING ON)
# Force ARM64 architecture ID - this is what CMake uses to determine /machine: flag
set(MSVC_C_ARCHITECTURE_ID ARM64 CACHE INTERNAL "")
set(MSVC_CXX_ARCHITECTURE_ID ARM64 CACHE INTERNAL "")
# The rest only applies when building on Windows (C++ and link steps).
# The Zig step runs on Linux and only needs CMAKE_SYSTEM_NAME/PROCESSOR above.
if(CMAKE_HOST_SYSTEM_NAME STREQUAL "Windows")
# CMake 4.0+ policy CMP0197 controls how MSVC machine type flags are handled
set(CMAKE_POLICY_DEFAULT_CMP0197 NEW CACHE INTERNAL "")
# Ensure clang/clang-cl targets Windows ARM64 (otherwise ARM64-specific flags like
# -march=armv8-a are rejected as x86-only).
set(CMAKE_C_COMPILER_TARGET aarch64-pc-windows-msvc CACHE STRING "" FORCE)
set(CMAKE_CXX_COMPILER_TARGET aarch64-pc-windows-msvc CACHE STRING "" FORCE)
# Clear any inherited static linker flags that might have wrong machine types
set(CMAKE_STATIC_LINKER_FLAGS "" CACHE STRING "" FORCE)
# ARM64 has lock-free atomics (highway's FindAtomics check can't run ARM64 test binary on x64)
set(ATOMICS_LOCK_FREE_INSTRUCTIONS TRUE CACHE BOOL "" FORCE)
set(HAVE_CXX_ATOMICS_WITHOUT_LIB TRUE CACHE BOOL "" FORCE)
set(HAVE_CXX_ATOMICS64_WITHOUT_LIB TRUE CACHE BOOL "" FORCE)
# Use wrapper script for llvm-lib that strips /machine:x64 flags
# This works around CMake 4.1.0 bug where both ARM64 and x64 machine flags are added
get_filename_component(_TOOLCHAIN_DIR "${CMAKE_CURRENT_LIST_DIR}" DIRECTORY)
set(CMAKE_AR "${_TOOLCHAIN_DIR}/scripts/llvm-lib-wrapper.bat" CACHE FILEPATH "" FORCE)
# Force ARM64 architecture ID - this is what CMake uses to determine /machine: flag
set(MSVC_C_ARCHITECTURE_ID ARM64 CACHE INTERNAL "")
set(MSVC_CXX_ARCHITECTURE_ID ARM64 CACHE INTERNAL "")
# CMake 4.0+ policy CMP0197 controls how MSVC machine type flags are handled
set(CMAKE_POLICY_DEFAULT_CMP0197 NEW CACHE INTERNAL "")
# Clear any inherited static linker flags that might have wrong machine types
set(CMAKE_STATIC_LINKER_FLAGS "" CACHE STRING "" FORCE)
# Use wrapper script for llvm-lib that strips /machine:x64 flags
# This works around CMake 4.1.0 bug where both ARM64 and x64 machine flags are added
get_filename_component(_TOOLCHAIN_DIR "${CMAKE_CURRENT_LIST_DIR}" DIRECTORY)
set(CMAKE_AR "${_TOOLCHAIN_DIR}/scripts/llvm-lib-wrapper.bat" CACHE FILEPATH "" FORCE)
endif()

View File

@@ -50,6 +50,11 @@ if(APPLE)
list(APPEND LLVM_PATHS ${HOMEBREW_PREFIX}/opt/llvm/bin)
endif()
if(WIN32)
# Prefer standalone LLVM over VS-bundled (standalone supports cross-compilation)
list(APPEND LLVM_PATHS "C:/Program Files/LLVM/bin")
endif()
if(UNIX)
list(APPEND LLVM_PATHS /usr/lib/llvm/bin)

View File

@@ -6,7 +6,7 @@ option(WEBKIT_LOCAL "If a local version of WebKit should be used instead of down
option(WEBKIT_BUILD_TYPE "The build type for local WebKit (defaults to CMAKE_BUILD_TYPE)")
if(NOT WEBKIT_VERSION)
set(WEBKIT_VERSION 7bc2f97e28353062bb54776ce01e4c2ff24c35cc)
set(WEBKIT_VERSION 8af7958ff0e2a4787569edf64641a1ae7cfe074a)
endif()
# Use preview build URL for Windows ARM64 until the fix is merged to main

View File

@@ -3,9 +3,9 @@ title: Markdown
description: Parse and render Markdown with Bun's built-in Markdown API, supporting GFM extensions and custom rendering callbacks
---
{% callout type="note" %}
**Unstable API** — This API is under active development and may change in future versions of Bun.
{% /callout %}
<Callout type="note">
**Unstable API** — This API is under active development and may change in future versions of Bun.
</Callout>
Bun includes a fast, built-in Markdown parser written in Zig. It supports GitHub Flavored Markdown (GFM) extensions and provides three APIs:

View File

@@ -2154,7 +2154,7 @@ declare module "bun" {
interface Hash {
wyhash: (data: string | ArrayBufferView | ArrayBuffer | SharedArrayBuffer, seed?: bigint) => bigint;
adler32: (data: string | ArrayBufferView | ArrayBuffer | SharedArrayBuffer) => number;
crc32: (data: string | ArrayBufferView | ArrayBuffer | SharedArrayBuffer) => number;
crc32: (data: string | ArrayBufferView | ArrayBuffer | SharedArrayBuffer, seed?: number) => number;
cityHash32: (data: string | ArrayBufferView | ArrayBuffer | SharedArrayBuffer) => number;
cityHash64: (data: string | ArrayBufferView | ArrayBuffer | SharedArrayBuffer, seed?: bigint) => bigint;
xxHash32: (data: string | ArrayBufferView | ArrayBuffer | SharedArrayBuffer, seed?: number) => number;

View File

@@ -566,8 +566,10 @@ namespace uWS
bool isHTTPMethod = (__builtin_expect(data[1] == '/', 1));
bool isConnect = !isHTTPMethod && (isHTTPorHTTPSPrefixForProxies(data + 1, end) == 1 || ((data - start) == 7 && memcmp(start, "CONNECT", 7) == 0));
if (isHTTPMethod || isConnect) [[likely]] {
bool isConnect = !isHTTPMethod && ((data - start) == 7 && memcmp(start, "CONNECT", 7) == 0);
/* Also accept proxy-style absolute URLs (http://... or https://...) as valid request targets */
bool isProxyStyleURL = !isHTTPMethod && !isConnect && data[0] == 32 && isHTTPorHTTPSPrefixForProxies(data + 1, end) == 1;
if (isHTTPMethod || isConnect || isProxyStyleURL) [[likely]] {
header.key = {start, (size_t) (data - start)};
data++;
if(!isValidMethod(header.key, useStrictMethodValidation)) {

View File

@@ -57,7 +57,11 @@ async function build(args) {
if (process.platform === "win32" && !process.env["VSINSTALLDIR"]) {
const shellPath = join(import.meta.dirname, "vs-shell.ps1");
const scriptPath = import.meta.filename;
return spawn("pwsh", ["-NoProfile", "-NoLogo", "-File", shellPath, process.argv0, scriptPath, ...args]);
// When cross-compiling to ARM64, tell vs-shell.ps1 to set up the x64_arm64 VS environment
const toolchainIdx = args.indexOf("--toolchain");
const requestedVsArch = toolchainIdx !== -1 && args[toolchainIdx + 1] === "windows-aarch64" ? "arm64" : undefined;
const env = requestedVsArch ? { ...process.env, BUN_VS_ARCH: requestedVsArch } : undefined;
return spawn("pwsh", ["-NoProfile", "-NoLogo", "-File", shellPath, process.argv0, scriptPath, ...args], { env });
}
if (isCI) {
@@ -92,21 +96,9 @@ async function build(args) {
generateOptions["--toolchain"] = toolchainPath;
}
// Windows ARM64: automatically set required options
// Windows ARM64: log detection (compiler is selected by CMake/toolchain)
if (isWindowsARM64) {
// Use clang-cl instead of MSVC cl.exe for proper ARM64 flag support
if (!generateOptions["-DCMAKE_C_COMPILER"]) {
generateOptions["-DCMAKE_C_COMPILER"] = "clang-cl";
}
if (!generateOptions["-DCMAKE_CXX_COMPILER"]) {
generateOptions["-DCMAKE_CXX_COMPILER"] = "clang-cl";
}
// Skip codegen by default since x64 bun crashes under WoW64 emulation
// Can be overridden with -DSKIP_CODEGEN=OFF once ARM64 bun is available
if (!generateOptions["-DSKIP_CODEGEN"]) {
generateOptions["-DSKIP_CODEGEN"] = "ON";
}
console.log("Windows ARM64 detected: using clang-cl and SKIP_CODEGEN=ON");
console.log("Windows ARM64 detected");
}
const generateArgs = Object.entries(generateOptions).flatMap(([flag, value]) =>

View File

@@ -5,7 +5,22 @@ $ErrorActionPreference = "Stop"
# Detect system architecture
$script:IsARM64 = [System.Runtime.InteropServices.RuntimeInformation]::OSArchitecture -eq [System.Runtime.InteropServices.Architecture]::Arm64
$script:VsArch = if ($script:IsARM64) { "arm64" } else { "amd64" }
# Allow overriding the target arch (useful for cross-compiling on x64 -> ARM64)
$script:VsArch = $null
if ($env:BUN_VS_ARCH) {
switch ($env:BUN_VS_ARCH.ToLowerInvariant()) {
"arm64" { $script:VsArch = "arm64" }
"aarch64" { $script:VsArch = "arm64" }
"amd64" { $script:VsArch = "amd64" }
"x64" { $script:VsArch = "amd64" }
default { throw "Invalid BUN_VS_ARCH: $env:BUN_VS_ARCH (expected arm64|amd64)" }
}
}
if (-not $script:VsArch) {
$script:VsArch = if ($script:IsARM64) { "arm64" } else { "amd64" }
}
if($env:VSINSTALLDIR -eq $null) {
Write-Host "Loading Visual Studio environment, this may take a second..."
@@ -17,17 +32,29 @@ if($env:VSINSTALLDIR -eq $null) {
$vsDir = (& $vswhere -prerelease -latest -property installationPath)
if ($vsDir -eq $null) {
$vsDir = Get-ChildItem -Path "C:\Program Files\Microsoft Visual Studio\2022" -Directory
# Check common VS installation paths
$searchPaths = @(
"C:\Program Files\Microsoft Visual Studio\2022",
"C:\Program Files (x86)\Microsoft Visual Studio\2022"
)
foreach ($searchPath in $searchPaths) {
if (Test-Path $searchPath) {
$vsDir = (Get-ChildItem -Path $searchPath -Directory | Select-Object -First 1).FullName
if ($vsDir -ne $null) { break }
}
}
if ($vsDir -eq $null) {
throw "Visual Studio directory not found."
}
$vsDir = $vsDir.FullName
}
Push-Location $vsDir
try {
$vsShell = (Join-Path -Path $vsDir -ChildPath "Common7\Tools\Launch-VsDevShell.ps1")
. $vsShell -Arch $script:VsArch -HostArch $script:VsArch
# Visual Studio's Launch-VsDevShell.ps1 only supports x86/amd64 for HostArch
# For ARM64 builds, use amd64 as HostArch since it can cross-compile to ARM64
$hostArch = if ($script:VsArch -eq "arm64") { "amd64" } else { $script:VsArch }
. $vsShell -Arch $script:VsArch -HostArch $hostArch
} finally {
Pop-Location
}
@@ -61,7 +88,7 @@ if ($args.Count -gt 0) {
$displayArgs += $arg
}
}
Write-Host "$ $command $displayArgs"
& $command $commandArgs
exit $LASTEXITCODE

View File

@@ -256,7 +256,7 @@ pub fn NewSocket(comptime ssl: bool) type {
jsc.markBinding(@src());
if (this.socket.isDetached()) return;
const handlers = this.getHandlers();
log("onTimeout {s}", .{if (handlers.is_server) "S" else "C"});
log("onTimeout {s}", .{if (handlers.mode == .server) "S" else "C"});
const callback = handlers.onTimeout;
if (callback == .zero or this.flags.finalizing) return;
if (handlers.vm.isShuttingDown()) {
@@ -281,7 +281,7 @@ pub fn NewSocket(comptime ssl: bool) type {
pub fn handleConnectError(this: *This, errno: c_int) bun.JSError!void {
const handlers = this.getHandlers();
log("onConnectError {s} ({d}, {d})", .{ if (handlers.is_server) "S" else "C", errno, this.ref_count.get() });
log("onConnectError {s} ({d}, {d})", .{ if (handlers.mode == .server) "S" else "C", errno, this.ref_count.get() });
// Ensure the socket is still alive for any defer's we have
this.ref();
defer this.deref();
@@ -397,7 +397,8 @@ pub fn NewSocket(comptime ssl: bool) type {
}
pub fn isServer(this: *const This) bool {
return this.getHandlers().is_server;
const handlers = this.getHandlers();
return handlers.mode.isServer();
}
pub fn onOpen(this: *This, socket: Socket) void {
@@ -502,7 +503,7 @@ pub fn NewSocket(comptime ssl: bool) type {
jsc.markBinding(@src());
if (this.socket.isDetached()) return;
const handlers = this.getHandlers();
log("onEnd {s}", .{if (handlers.is_server) "S" else "C"});
log("onEnd {s}", .{if (handlers.mode == .server) "S" else "C"});
// Ensure the socket remains alive until this is finished
this.ref();
defer this.deref();
@@ -534,7 +535,7 @@ pub fn NewSocket(comptime ssl: bool) type {
this.socket = s;
if (this.socket.isDetached()) return;
const handlers = this.getHandlers();
log("onHandshake {s} ({d})", .{ if (handlers.is_server) "S" else "C", success });
log("onHandshake {s} ({d})", .{ if (handlers.mode == .server) "S" else "C", success });
const authorized = if (success == 1) true else false;
@@ -571,7 +572,7 @@ pub fn NewSocket(comptime ssl: bool) type {
result = callback.call(globalObject, this_value, &[_]JSValue{this_value}) catch |err| globalObject.takeException(err);
// only call onOpen once for clients
if (!handlers.is_server) {
if (handlers.mode != .server) {
// clean onOpen callback so only called in the first handshake and not in every renegotiation
// on servers this would require a different approach but it's not needed because our servers will not call handshake multiple times
// servers don't support renegotiation
@@ -600,7 +601,7 @@ pub fn NewSocket(comptime ssl: bool) type {
pub fn onClose(this: *This, _: Socket, err: c_int, _: ?*anyopaque) bun.JSError!void {
jsc.markBinding(@src());
const handlers = this.getHandlers();
log("onClose {s}", .{if (handlers.is_server) "S" else "C"});
log("onClose {s}", .{if (handlers.mode == .server) "S" else "C"});
this.detachNativeCallback();
this.socket.detach();
defer this.deref();
@@ -648,7 +649,7 @@ pub fn NewSocket(comptime ssl: bool) type {
this.socket = s;
if (this.socket.isDetached()) return;
const handlers = this.getHandlers();
log("onData {s} ({d})", .{ if (handlers.is_server) "S" else "C", data.len });
log("onData {s} ({d})", .{ if (handlers.mode == .server) "S" else "C", data.len });
if (this.native_callback.onData(data)) return;
const callback = handlers.onData;
@@ -691,7 +692,7 @@ pub fn NewSocket(comptime ssl: bool) type {
pub fn getListener(this: *This, _: *jsc.JSGlobalObject) JSValue {
const handlers = this.handlers orelse return .js_undefined;
if (!handlers.is_server or this.socket.isDetached()) {
if (handlers.mode != .server or this.socket.isDetached()) {
return .js_undefined;
}
@@ -1352,7 +1353,7 @@ pub fn NewSocket(comptime ssl: bool) type {
};
const this_handlers = this.getHandlers();
const handlers = try Handlers.fromJS(globalObject, socket_obj, this_handlers.is_server);
const handlers = try Handlers.fromJS(globalObject, socket_obj, this_handlers.mode == .server);
this_handlers.deinit();
this_handlers.* = handlers;
@@ -1380,6 +1381,9 @@ pub fn NewSocket(comptime ssl: bool) type {
if (this.socket.isDetached() or this.socket.isNamedPipe()) {
return .js_undefined;
}
if (this.isServer()) {
return globalObject.throw("Server-side upgradeTLS is not supported. Use upgradeDuplexToTLS with isServer: true instead.", .{});
}
const args = callframe.arguments_old(1);
if (args.len < 1) {
@@ -1571,7 +1575,7 @@ pub fn NewSocket(comptime ssl: bool) type {
this.socket.detach();
// start TLS handshake after we set extension on the socket
new_socket.startTLS(!handlers_ptr.is_server);
new_socket.startTLS(handlers_ptr.mode != .server);
success = true;
return array;
@@ -1754,6 +1758,23 @@ pub fn NewWrappedHandler(comptime tls: bool) type {
};
}
/// Unified socket mode replacing the old is_server bool + TLSMode pair.
pub const SocketMode = enum {
/// Default — TLS client or non-TLS socket
client,
/// Listener-owned server. TLS (if any) configured at the listener level.
server,
/// Duplex upgraded to TLS server role. Not listener-owned —
/// markInactive uses client lifecycle path.
duplex_server,
/// Returns true for any mode that acts as a TLS server (ALPN, handshake direction).
/// Both .server and .duplex_server present as server to peers.
pub fn isServer(this: SocketMode) bool {
return this == .server or this == .duplex_server;
}
};
pub const DuplexUpgradeContext = struct {
upgrade: uws.UpgradedDuplex,
// We only us a tls and not a raw socket when upgrading a Duplex, Duplex dont support socketpairs
@@ -1764,6 +1785,7 @@ pub const DuplexUpgradeContext = struct {
task_event: EventState = .StartTLS,
ssl_config: ?jsc.API.ServerConfig.SSLConfig,
is_open: bool = false,
#mode: SocketMode = .client,
pub const EventState = enum(u8) {
StartTLS,
@@ -1846,7 +1868,8 @@ pub const DuplexUpgradeContext = struct {
switch (this.task_event) {
.StartTLS => {
if (this.ssl_config) |config| {
this.upgrade.startTLS(config, true) catch |err| {
log("DuplexUpgradeContext.startTLS mode={s}", .{@tagName(this.#mode)});
this.upgrade.startTLS(config, this.#mode == .client) catch |err| {
switch (err) {
error.OutOfMemory => {
bun.outOfMemory();
@@ -1914,8 +1937,15 @@ pub fn jsUpgradeDuplexToTLS(globalObject: *jsc.JSGlobalObject, callframe: *jsc.C
return globalObject.throw("Expected \"socket\" option", .{});
};
const is_server = false; // A duplex socket is always handled as a client
const handlers = try Handlers.fromJS(globalObject, socket_obj, is_server);
var is_server = false;
if (try opts.getTruthy(globalObject, "isServer")) |is_server_val| {
is_server = is_server_val.toBoolean();
}
// Note: Handlers.fromJS is_server=false because these handlers are standalone
// allocations (not embedded in a Listener). The mode field on Handlers
// controls lifecycle (markInactive expects a Listener parent when .server).
// The TLS direction (client vs server) is controlled by DuplexUpgradeContext.mode.
const handlers = try Handlers.fromJS(globalObject, socket_obj, false);
var ssl_opts: ?jsc.API.ServerConfig.SSLConfig = null;
if (try opts.getTruthy(globalObject, "tls")) |tls| {
@@ -1937,6 +1967,9 @@ pub fn jsUpgradeDuplexToTLS(globalObject: *jsc.JSGlobalObject, callframe: *jsc.C
const handlers_ptr = bun.handleOom(handlers.vm.allocator.create(Handlers));
handlers_ptr.* = handlers;
// Set mode to duplex_server so TLSSocket.isServer() returns true for ALPN server mode
// without affecting markInactive lifecycle (which requires a Listener parent).
handlers_ptr.mode = if (is_server) .duplex_server else .client;
var tls = bun.new(TLSSocket, .{
.ref_count = .init(),
.handlers = handlers_ptr,
@@ -1963,6 +1996,7 @@ pub fn jsUpgradeDuplexToTLS(globalObject: *jsc.JSGlobalObject, callframe: *jsc.C
.vm = globalObject.bunVM(),
.task = undefined,
.ssl_config = socket_config.*,
.#mode = if (is_server) .duplex_server else .client,
});
tls.ref();


@@ -15,7 +15,7 @@ binary_type: BinaryType = .Buffer,
vm: *jsc.VirtualMachine,
globalObject: *jsc.JSGlobalObject,
active_connections: u32 = 0,
-is_server: bool,
+mode: SocketMode = .client,
promise: jsc.Strong.Optional = .empty,
protection_count: if (Environment.ci_assert) u32 else void = if (Environment.ci_assert) 0,
@@ -81,7 +81,7 @@ pub fn markInactive(this: *Handlers) void {
Listener.log("markInactive", .{});
this.active_connections -= 1;
if (this.active_connections == 0) {
-if (this.is_server) {
+if (this.mode == .server) {
const listen_socket: *Listener = @fieldParentPtr("handlers", this);
// allow it to be GC'd once the last connection is closed and it's not listening anymore
if (listen_socket.listener == .none) {
@@ -133,7 +133,7 @@ pub fn fromGenerated(
var result: Handlers = .{
.vm = globalObject.bunVM(),
.globalObject = globalObject,
-.is_server = is_server,
+.mode = if (is_server) .server else .client,
.binary_type = switch (generated.binary_type) {
.arraybuffer => .ArrayBuffer,
.buffer => .Buffer,
@@ -217,7 +217,7 @@ pub fn clone(this: *const Handlers) Handlers {
.vm = this.vm,
.globalObject = this.globalObject,
.binary_type = this.binary_type,
-.is_server = this.is_server,
+.mode = this.mode,
};
inline for (callback_fields) |field| {
@field(result, field) = @field(this, field);
@@ -346,6 +346,7 @@ const strings = bun.strings;
const uws = bun.uws;
const Listener = bun.api.Listener;
const SSLConfig = bun.api.ServerConfig.SSLConfig;
const SocketMode = bun.api.socket.SocketMode;
const jsc = bun.jsc;
const JSValue = jsc.JSValue;
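The refactor above generalizes a boolean `is_server` into a three-state `SocketMode` so a duplex upgrade can behave as a TLS server without inheriting server-only lifecycle. A minimal TypeScript sketch (hypothetical names, not Bun's internals) of why the enum answers two different questions that the bool conflated:

```typescript
// Hypothetical sketch of a three-state mode replacing a bool `is_server`.
// `duplex_server` acts as the server side of TLS, but is NOT embedded in a
// Listener, so it must opt out of server-only lifecycle handling.
type SocketMode = "client" | "server" | "duplex_server";

function isServer(mode: SocketMode): boolean {
  // TLS direction: real servers and upgraded duplex servers both act as servers.
  return mode !== "client";
}

function needsListenerParent(mode: SocketMode): boolean {
  // Lifecycle: only a true `.server` handler lives inside a Listener, so only
  // that mode may walk up to a Listener parent in markInactive().
  return mode === "server";
}
```

With a plain bool, `isServer` and `needsListenerParent` would be forced to agree, which is exactly the bug the `duplex_server` state avoids.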


@@ -91,7 +91,7 @@ pub fn reload(this: *Listener, globalObject: *jsc.JSGlobalObject, callframe: *js
return globalObject.throw("Expected \"socket\" object", .{});
};
-const handlers = try Handlers.fromJS(globalObject, socket_obj, this.handlers.is_server);
+const handlers = try Handlers.fromJS(globalObject, socket_obj, this.handlers.mode == .server);
this.handlers.deinit();
this.handlers = handlers;
@@ -773,7 +773,7 @@ pub fn connectInner(globalObject: *jsc.JSGlobalObject, prev_maybe_tcp: ?*TCPSock
const handlers_ptr = bun.handleOom(handlers.vm.allocator.create(Handlers));
handlers_ptr.* = handlers.*;
-handlers_ptr.is_server = false;
+handlers_ptr.mode = .client;
var promise = jsc.JSPromise.create(globalObject);
const promise_value = promise.toJS();
@@ -850,7 +850,8 @@ pub fn getsockname(this: *Listener, globalThis: *jsc.JSGlobalObject, callFrame:
return .js_undefined;
}
-const out = callFrame.argumentsAsArray(1)[0];
+const arg = callFrame.argumentsAsArray(1)[0];
+const out = if (arg.isObject()) arg else JSValue.createEmptyObject(globalThis, 3);
const socket = this.listener.uws;
var buf: [64]u8 = [_]u8{0} ** 64;
@@ -872,7 +873,7 @@ pub fn getsockname(this: *Listener, globalThis: *jsc.JSGlobalObject, callFrame:
out.put(globalThis, bun.String.static("family"), family_js);
out.put(globalThis, bun.String.static("address"), address_js);
out.put(globalThis, bun.String.static("port"), port_js);
-return .js_undefined;
+return out;
}
pub fn jsAddServerName(global: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!JSValue {
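The `getsockname()` fix above follows a common defensive-argument pattern: fill the caller's object when one is provided, otherwise allocate a fresh one, and always return whichever object was filled. A self-contained TypeScript sketch of the pattern (hypothetical names and address values, not the Zig code):

```typescript
// Sketch of the "use caller's object or allocate" pattern from the fix.
interface SockName {
  family?: string;
  address?: string;
  port?: number;
}

function getsockname(out?: unknown): SockName {
  // Mirrors `if (arg.isObject()) arg else JSValue.createEmptyObject(...)`:
  // a missing or non-object argument must not be dereferenced.
  const target: SockName =
    typeof out === "object" && out !== null ? (out as SockName) : {};
  // Illustrative values standing in for the real socket's local address.
  target.family = "IPv4";
  target.address = "127.0.0.1";
  target.port = 4000;
  return target;
}
```

Returning the filled object (instead of `undefined`) lets `getsockname()` with no argument remain useful, while `getsockname(obj)` keeps mutating the caller's object as before.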


@@ -173,8 +173,10 @@ pub fn SSLWrapper(comptime T: type) type {
// flush buffered data and returns amount of pending data to write
pub fn flush(this: *This) usize {
-const ssl = this.ssl orelse return 0;
+// handleTraffic may trigger a close callback which frees ssl,
+// so we must not capture the ssl pointer before calling it.
this.handleTraffic();
+const ssl = this.ssl orelse return 0;
const pending = BoringSSL.BIO_ctrl_pending(BoringSSL.SSL_get_wbio(ssl));
if (pending > 0) return @intCast(pending);
return 0;
@@ -428,6 +430,8 @@ pub fn SSLWrapper(comptime T: type) type {
if (read > 0) {
log("triggering data callback (read {d})", .{read});
this.triggerDataCallback(buffer[0..read]);
// The data callback may have closed the connection
if (this.ssl == null or this.flags.closed_notified) return false;
}
this.triggerCloseCallback();
return false;
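Both hunks above fix the same class of bug: a callback invoked mid-function may free or close the underlying resource, so a handle captured before the call is stale afterwards. A TypeScript sketch of the re-check discipline (illustrative class, not the Zig `SSLWrapper`):

```typescript
// Sketch: re-read a nullable handle AFTER any call that can run user
// callbacks, because a callback may tear the handle down.
class Wrapper {
  handle: { pending: number } | null = { pending: 3 };
  onTraffic: (() => void) | null = null;

  handleTraffic(): void {
    // May invoke a close callback that frees the handle.
    this.onTraffic?.();
  }

  flush(): number {
    this.handleTraffic();
    // Re-check: the callback above may have cleared `handle`.
    const h = this.handle;
    if (h === null) return 0;
    return h.pending;
  }
}
```

Capturing `this.handle` before `handleTraffic()` would compile and usually work, which is what makes this pattern easy to regress.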


@@ -81,29 +81,38 @@ size_t IndexOfAnyCharImpl(const uint8_t* HWY_RESTRICT text, size_t text_len, con
} else {
ASSERT(chars_len <= 16);
-// Use FixedTag to preload search characters into fixed-size vectors.
-// ScalableTag vectors (SVE) are sizeless and cannot be stored in arrays.
-// FixedTag gives us a known compile-time size that can be stored in arrays,
-// then ResizeBitCast converts back to scalable vectors in the inner loop.
-static constexpr size_t kMaxPreloadedChars = 16;
-const hn::FixedTag<uint8_t, 16> d_fixed;
-using VecFixed = hn::Vec<decltype(d_fixed)>;
-VecFixed char_vecs[kMaxPreloadedChars];
-const size_t num_chars_to_preload = std::min(chars_len, kMaxPreloadedChars);
-for (size_t c = 0; c < num_chars_to_preload; ++c) {
-char_vecs[c] = hn::Set(d_fixed, chars[c]);
-}
const size_t simd_text_len = text_len - (text_len % N);
size_t i = 0;
+#if !HWY_HAVE_SCALABLE && !HWY_TARGET_IS_SVE
+// Preload search characters into native-width vectors.
+// On non-SVE targets, Vec has a known size and can be stored in arrays.
+static constexpr size_t kMaxPreloadedChars = 16;
+hn::Vec<D8> char_vecs[kMaxPreloadedChars];
+const size_t num_chars_to_preload = std::min(chars_len, kMaxPreloadedChars);
+for (size_t c = 0; c < num_chars_to_preload; ++c) {
+char_vecs[c] = hn::Set(d, chars[c]);
+}
for (; i < simd_text_len; i += N) {
const auto text_vec = hn::LoadN(d, text + i, N);
auto found_mask = hn::MaskFalse(d);
for (size_t c = 0; c < num_chars_to_preload; ++c) {
-found_mask = hn::Or(found_mask, hn::Eq(text_vec, hn::ResizeBitCast(d, char_vecs[c])));
+found_mask = hn::Or(found_mask, hn::Eq(text_vec, char_vecs[c]));
}
+#else
+// SVE types are sizeless and cannot be stored in arrays.
+// hn::Set is a single broadcast instruction; the compiler will
+// hoist these loop-invariant broadcasts out of the outer loop.
+for (; i < simd_text_len; i += N) {
+const auto text_vec = hn::LoadN(d, text + i, N);
+auto found_mask = hn::MaskFalse(d);
+for (size_t c = 0; c < chars_len; ++c) {
+found_mask = hn::Or(found_mask, hn::Eq(text_vec, hn::Set(d, chars[c])));
+}
+#endif
const intptr_t pos = hn::FindFirstTrue(d, found_mask);
if (pos >= 0) {
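The SIMD change above hoists needle preparation out of the scan loop (preloading one broadcast vector per search character). The same idea in scalar form, as a self-contained TypeScript sketch: precompute a byte-membership table once, then scan with a cheap per-byte lookup.

```typescript
// Scalar analogue of the preload optimization: build the needle set once
// (the counterpart of broadcasting each search char into a vector outside
// the loop), then scan the text with a table lookup per byte.
function indexOfAnyChar(text: string, chars: string): number {
  // "Preload" step: one flag per possible byte value.
  const table = new Uint8Array(256);
  for (let c = 0; c < chars.length; c++) {
    table[chars.charCodeAt(c)] = 1;
  }
  // Scan step: no per-iteration re-derivation of the needle set.
  for (let i = 0; i < text.length; i++) {
    if (table[text.charCodeAt(i)]) return i;
  }
  return -1;
}
```

This mirrors the HTML-escape use site later in the series, which searches for any of `&<>"`.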


@@ -1323,6 +1323,7 @@ void WebSocket::didReceiveBinaryData(const AtomString& eventName, const std::spa
if (auto* context = scriptExecutionContext()) {
RefPtr<Blob> blob = Blob::create(binaryData, context->jsGlobalObject());
this->incPendingActivityCount();
context->postTask([this, name = eventName, blob = blob.releaseNonNull(), protectedThis = Ref { *this }](ScriptExecutionContext& context) {
ASSERT(scriptExecutionContext());
protectedThis->dispatchEvent(MessageEvent::create(name, blob, protectedThis->m_url.string()));


@@ -14,15 +14,10 @@ param(
[Switch]$DownloadWithoutCurl = $false
);
-# Detect system architecture
-$SystemType = (Get-CimInstance Win32_ComputerSystem).SystemType
-if ($SystemType -match "ARM64-based") {
-$IsArm64 = $true
-} elseif ($SystemType -match "x64-based") {
-$IsArm64 = $false
-} else {
+# filter out 32 bit + ARM
+if (-not ((Get-CimInstance Win32_ComputerSystem)).SystemType -match "x64-based") {
Write-Output "Install Failed:"
-Write-Output "Bun for Windows is currently only available for x86 64-bit and ARM64 Windows.`n"
+Write-Output "Bun for Windows is currently only available for x86 64-bit Windows.`n"
return 1
}
@@ -108,18 +103,13 @@ function Install-Bun {
$Version = "bun-$Version"
}
-if ($IsArm64) {
-$Arch = "aarch64"
-$IsBaseline = $false
-} else {
-$Arch = "x64"
-$IsBaseline = $ForceBaseline
-if (!$IsBaseline) {
-$IsBaseline = !( `
-Add-Type -MemberDefinition '[DllImport("kernel32.dll")] public static extern bool IsProcessorFeaturePresent(int ProcessorFeature);' `
--Name 'Kernel32' -Namespace 'Win32' -PassThru `
-)::IsProcessorFeaturePresent(40);
-}
+$Arch = "x64"
+$IsBaseline = $ForceBaseline
+if (!$IsBaseline) {
+$IsBaseline = !( `
+Add-Type -MemberDefinition '[DllImport("kernel32.dll")] public static extern bool IsProcessorFeaturePresent(int ProcessorFeature);' `
+-Name 'Kernel32' -Namespace 'Win32' -PassThru `
+)::IsProcessorFeaturePresent(40);
+}
$BunRoot = if ($env:BUN_INSTALL) { $env:BUN_INSTALL } else { "${Home}\.bun" }
@@ -229,8 +219,7 @@ function Install-Bun {
# I want to keep this error message in for a few months to ensure that
# if someone somehow runs into this, it can be reported.
Write-Output "Install Failed - You are missing a DLL required to run bun.exe"
-$VCRedistArch = if ($Arch -eq "aarch64") { "arm64" } else { "x64" }
-Write-Output "This can be solved by installing the Visual C++ Redistributable from Microsoft:`nSee https://learn.microsoft.com/cpp/windows/latest-supported-vc-redist`nDirect Download -> https://aka.ms/vs/17/release/vc_redist.${VCRedistArch}.exe`n`n"
+Write-Output "This can be solved by installing the Visual C++ Redistributable from Microsoft:`nSee https://learn.microsoft.com/cpp/windows/latest-supported-vc-redist`nDirect Download -> https://aka.ms/vs/17/release/vc_redist.x64.exe`n`n"
Write-Output "The error above should be unreachable as Bun does not depend on this library. Please comment in https://github.com/oven-sh/bun/issues/8598 or open a new issue.`n`n"
Write-Output "The command '${BunBin}\bun.exe --revision' exited with code ${LASTEXITCODE}`n"
return 1


@@ -0,0 +1,395 @@
const { Duplex } = require("node:stream");
const upgradeDuplexToTLS = $newZigFunction("socket.zig", "jsUpgradeDuplexToTLS", 2);
interface NativeHandle {
resume(): void;
close(): void;
end(): void;
$write(chunk: Buffer, encoding: string): boolean;
alpnProtocol?: string;
}
interface UpgradeContextType {
connectionListener: (...args: any[]) => any;
server: Http2SecureServer;
rawSocket: import("node:net").Socket;
nativeHandle: NativeHandle | null;
events: [(...args: any[]) => void, ...Function[]] | null;
}
interface Http2SecureServer {
key?: Buffer;
cert?: Buffer;
ca?: Buffer;
passphrase?: string;
ALPNProtocols?: Buffer;
_requestCert?: boolean;
_rejectUnauthorized?: boolean;
emit(event: string, ...args: any[]): boolean;
}
interface TLSProxySocket {
_ctx: UpgradeContextType;
_writeCallback: ((err?: Error | null) => void) | null;
alpnProtocol: string | null;
authorized: boolean;
encrypted: boolean;
server: Http2SecureServer;
_requestCert: boolean;
_rejectUnauthorized: boolean;
_securePending: boolean;
secureConnecting: boolean;
_secureEstablished: boolean;
authorizationError?: string;
push(chunk: Buffer | null): boolean;
destroy(err?: Error): this;
emit(event: string, ...args: any[]): boolean;
resume(): void;
readonly destroyed: boolean;
}
/**
* Context object holding upgrade-time state for the TLS proxy socket.
* Attached as `tlsSocket._ctx` so named functions can reach it via `this._ctx`
* (Duplex methods) or via a bound `this` (socket callbacks).
*/
function UpgradeContext(
connectionListener: (...args: any[]) => any,
server: Http2SecureServer,
rawSocket: import("node:net").Socket,
) {
this.connectionListener = connectionListener;
this.server = server;
this.rawSocket = rawSocket;
this.nativeHandle = null;
this.events = null;
}
// ---------------------------------------------------------------------------
// Duplex stream methods — called with `this` = tlsSocket (standard stream API)
// ---------------------------------------------------------------------------
// _read: called by stream machinery when the H2 session wants data.
// Resume the native TLS handle so it feeds decrypted data via the data callback.
// Mirrors net.ts Socket.prototype._read which calls socket.resume().
function tlsSocketRead(this: TLSProxySocket) {
const h = this._ctx.nativeHandle;
if (h) {
h.resume();
}
this._ctx.rawSocket.resume();
}
// _write: called when the H2 session writes outbound frames.
// Forward to the native TLS handle for encryption, then back to rawSocket.
// Mirrors net.ts Socket.prototype._write which calls socket.$write().
function tlsSocketWrite(this: TLSProxySocket, chunk: Buffer, encoding: string, callback: (err?: Error) => void) {
const h = this._ctx.nativeHandle;
if (!h) {
callback(new Error("Socket is closed"));
return;
}
// $write returns true if fully flushed, false if buffered
if (h.$write(chunk, encoding)) {
callback();
} else {
// Store callback so drain event can invoke it (backpressure)
this._writeCallback = callback;
}
}
// _destroy: called when the stream is destroyed (e.g. tlsSocket.destroy(err)).
// Cleans up the native TLS handle.
// Mirrors net.ts Socket.prototype._destroy.
function tlsSocketDestroy(this: TLSProxySocket, err: Error | null, callback: (err?: Error | null) => void) {
const h = this._ctx.nativeHandle;
if (h) {
h.close();
this._ctx.nativeHandle = null;
}
// Must invoke pending write callback with error per Writable stream contract
const writeCb = this._writeCallback;
if (writeCb) {
this._writeCallback = null;
writeCb(err ?? new Error("Socket destroyed"));
}
callback(err);
}
// _final: called when the writable side is ending (all data flushed).
// Shuts down the TLS write side gracefully.
// Mirrors net.ts Socket.prototype._final.
function tlsSocketFinal(this: TLSProxySocket, callback: () => void) {
const h = this._ctx.nativeHandle;
if (!h) return callback();
// Signal end-of-stream to the TLS layer
h.end();
callback();
}
// ---------------------------------------------------------------------------
// Socket callbacks — called by Zig with `this` = native handle (not useful).
// All are bound to tlsSocket so `this` inside each = tlsSocket.
// ---------------------------------------------------------------------------
// open: called when the TLS layer is initialized (before handshake).
// No action needed; we wait for the handshake callback.
function socketOpen() {}
// data: called with decrypted plaintext after the TLS layer decrypts incoming data.
// Push into tlsSocket so the H2 session's _read() receives these frames.
function socketData(this: TLSProxySocket, _socket: NativeHandle, chunk: Buffer) {
if (!this.push(chunk)) {
this._ctx.rawSocket.pause();
}
}
// end: TLS peer signaled end-of-stream; signal EOF to the H2 session.
function socketEnd(this: TLSProxySocket) {
this.push(null);
}
// drain: raw socket is writable again after being full; propagate backpressure signal.
// If _write stored a callback waiting for drain, invoke it now.
function socketDrain(this: TLSProxySocket) {
const cb = this._writeCallback;
if (cb) {
this._writeCallback = null;
cb();
}
}
// close: TLS connection closed; tear down the tlsSocket Duplex.
function socketClose(this: TLSProxySocket) {
if (!this.destroyed) {
this.destroy();
}
}
// error: TLS-level error (e.g. certificate verification failure).
// In server mode without _requestCert, the server doesn't request a client cert,
// so issuer verification errors on the server's own cert are non-fatal.
function socketError(this: TLSProxySocket, _socket: NativeHandle, err: NodeJS.ErrnoException) {
const ctx = this._ctx;
if (!ctx.server._requestCert && err?.code === "UNABLE_TO_GET_ISSUER_CERT") {
return;
}
this.destroy(err);
}
// timeout: socket idle timeout; forward to the Duplex so H2 session can handle it.
function socketTimeout(this: TLSProxySocket) {
this.emit("timeout");
}
// handshake: TLS handshake completed. This is the critical callback that triggers
// H2 session creation.
//
// Mirrors the handshake logic in net.ts ServerHandlers.handshake:
// - Set secure-connection state flags on tlsSocket
// - Read alpnProtocol from the native handle (set by ALPN negotiation)
// - Handle _requestCert / _rejectUnauthorized for mutual TLS
// - Call connectionListener to create the ServerHttp2Session
function socketHandshake(
this: TLSProxySocket,
nativeHandle: NativeHandle,
success: boolean,
verifyError: NodeJS.ErrnoException | null,
) {
const tlsSocket = this; // bound
const ctx = tlsSocket._ctx;
if (!success) {
const err = verifyError || new Error("TLS handshake failed");
ctx.server.emit("tlsClientError", err, tlsSocket);
tlsSocket.destroy(err);
return;
}
// Mark TLS handshake as complete on the proxy socket
tlsSocket._securePending = false;
tlsSocket.secureConnecting = false;
tlsSocket._secureEstablished = true;
// Copy the negotiated ALPN protocol (e.g. "h2") from the native TLS handle.
// The H2 session checks this to confirm HTTP/2 was negotiated.
tlsSocket.alpnProtocol = nativeHandle?.alpnProtocol ?? null;
// Handle mutual TLS: if the server requested a client cert, check for errors
if (tlsSocket._requestCert || tlsSocket._rejectUnauthorized) {
if (verifyError) {
tlsSocket.authorized = false;
tlsSocket.authorizationError = verifyError.code || verifyError.message;
ctx.server.emit("tlsClientError", verifyError, tlsSocket);
if (tlsSocket._rejectUnauthorized) {
tlsSocket.emit("secure", tlsSocket);
tlsSocket.destroy(verifyError);
return;
}
} else {
tlsSocket.authorized = true;
}
} else {
tlsSocket.authorized = true;
}
// Invoke the H2 connectionListener which creates a ServerHttp2Session.
// This is the same function passed to Http2SecureServer's constructor
// and is what normally fires on the 'secureConnection' event.
ctx.connectionListener.$call(ctx.server, tlsSocket);
// Resume the Duplex so the H2 session can read frames from it.
// Mirrors net.ts ServerHandlers.handshake line 438: `self.resume()`.
tlsSocket.resume();
}
// ---------------------------------------------------------------------------
// Close-cleanup handler
// ---------------------------------------------------------------------------
// onTlsClose: when the TLS socket closes (e.g. H2 session destroyed), clean up
// the raw socket listeners to prevent memory leaks and stale callback references.
// EventEmitter calls 'close' handlers with `this` = emitter (tlsSocket).
function onTlsClose(this: TLSProxySocket) {
const ctx = this._ctx;
const raw = ctx.rawSocket;
const ev = ctx.events;
if (!ev) return;
raw.removeListener("data", ev[0]);
raw.removeListener("end", ev[1]);
raw.removeListener("drain", ev[2]);
raw.removeListener("close", ev[3]);
}
// ---------------------------------------------------------------------------
// Module-scope noop (replaces anonymous () => {} for the error suppression)
// ---------------------------------------------------------------------------
// no-op handler used to suppress unhandled error events until
// the H2 session attaches its own error handler.
function noop() {}
// ---------------------------------------------------------------------------
// Main upgrade function
// ---------------------------------------------------------------------------
// Upgrades a raw TCP socket to TLS and initiates an H2 session on it.
//
// When a net.Server forwards an accepted TCP connection to an Http2SecureServer
// via `h2Server.emit('connection', socket)`, the socket has not been TLS-upgraded.
// Node.js Http2SecureServer expects to receive this and perform the upgrade itself.
//
// This mirrors the TLS server handshake pattern from net.ts ServerHandlers, but
// targets the H2 connectionListener instead of a generic secureConnection event.
//
// Data flow after upgrade:
// rawSocket (TCP) → upgradeDuplexToTLS (Zig TLS layer) → socket callbacks
// → tlsSocket.push() → H2 session reads
// H2 session writes → tlsSocket._write() → handle.$write() → Zig TLS layer → rawSocket
//
// CRITICAL: We do NOT set tlsSocket._handle to the native TLS handle.
// If we did, the H2FrameParser constructor would detect it as a JSTLSSocket
// and call attachNativeCallback(), which intercepts all decrypted data at the
// Zig level, completely bypassing our JS data callback and Duplex.push() path.
// Instead, we store the handle in _ctx.nativeHandle so _read/_write/_destroy
// can use it, while the H2 session sees _handle as null and uses the JS-level
// socket.on("data") → Duplex → parser.read() path for incoming frames.
function upgradeRawSocketToH2(
connectionListener: (...args: any[]) => any,
server: Http2SecureServer,
rawSocket: import("node:net").Socket,
): boolean {
// Create a Duplex stream that acts as the TLS "socket" from the H2 session's perspective.
const tlsSocket = new Duplex() as unknown as TLSProxySocket;
tlsSocket._ctx = new UpgradeContext(connectionListener, server, rawSocket);
// Duplex stream methods — `this` is tlsSocket, no bind needed
tlsSocket._read = tlsSocketRead;
tlsSocket._write = tlsSocketWrite;
tlsSocket._destroy = tlsSocketDestroy;
tlsSocket._final = tlsSocketFinal;
// Suppress unhandled error events until the H2 session attaches its own error handler
tlsSocket.on("error", noop);
// Set TLS-like properties that connectionListener and the H2 session expect.
// These are set on the Duplex because we cannot use a real TLSSocket here —
// its internal state machine would conflict with upgradeDuplexToTLS.
tlsSocket.alpnProtocol = null;
tlsSocket.authorized = false;
tlsSocket.encrypted = true;
tlsSocket.server = server;
// Only enforce client cert verification if the server explicitly requests it.
// tls.Server defaults _rejectUnauthorized to true, but without _requestCert
// the server doesn't actually ask for a client cert, so verification errors
// (e.g. UNABLE_TO_GET_ISSUER_CERT for the server's own self-signed cert) are
// spurious and must be ignored.
tlsSocket._requestCert = server._requestCert || false;
tlsSocket._rejectUnauthorized = server._requestCert ? server._rejectUnauthorized : false;
// socket: callbacks — bind to tlsSocket since Zig calls them with native handle as `this`
let handle: NativeHandle, events: UpgradeContextType["events"];
try {
// upgradeDuplexToTLS wraps rawSocket with a TLS layer in server mode (isServer: true).
// The Zig side will:
// 1. Read encrypted data from rawSocket via events[0..3]
// 2. Decrypt it through the TLS engine (with ALPN negotiation for "h2")
// 3. Call our socket callbacks below with the decrypted plaintext
//
// ALPNProtocols: server.ALPNProtocols is a Buffer in wire format (e.g. <Buffer 02 68 32>
// for ["h2"]). The Zig SSLConfig expects an ArrayBuffer, so we slice the underlying buffer.
[handle, events] = upgradeDuplexToTLS(rawSocket, {
isServer: true,
tls: {
key: server.key,
cert: server.cert,
ca: server.ca,
passphrase: server.passphrase,
ALPNProtocols: server.ALPNProtocols
? server.ALPNProtocols.buffer.slice(
server.ALPNProtocols.byteOffset,
server.ALPNProtocols.byteOffset + server.ALPNProtocols.byteLength,
)
: null,
},
socket: {
open: socketOpen,
data: socketData.bind(tlsSocket),
end: socketEnd.bind(tlsSocket),
drain: socketDrain.bind(tlsSocket),
close: socketClose.bind(tlsSocket),
error: socketError.bind(tlsSocket),
timeout: socketTimeout.bind(tlsSocket),
handshake: socketHandshake.bind(tlsSocket),
},
data: {},
});
} catch (e) {
rawSocket.destroy(e as Error);
tlsSocket.destroy(e as Error);
return true;
}
// Store handle in _ctx (NOT on tlsSocket._handle).
// This prevents H2FrameParser from attaching as native callback which would
// intercept data at the Zig level and bypass our Duplex push path.
tlsSocket._ctx.nativeHandle = handle;
tlsSocket._ctx.events = events;
// Wire up the raw TCP socket to feed encrypted data into the TLS layer.
// events[0..3] are native event handlers returned by upgradeDuplexToTLS that
// the Zig TLS engine expects to receive data/end/drain/close through.
rawSocket.on("data", events[0]);
rawSocket.on("end", events[1]);
rawSocket.on("drain", events[2]);
rawSocket.on("close", events[3]);
// When the TLS socket closes (e.g. H2 session destroyed), clean up the raw socket
// listeners to prevent memory leaks and stale callback references.
// EventEmitter calls 'close' handlers with `this` = emitter (tlsSocket).
tlsSocket.once("close", onTlsClose);
return true;
}
export default { upgradeRawSocketToH2 };
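The module above notes that `ALPNProtocols` is passed in ALPN wire format (e.g. `<Buffer 02 68 32>` for `["h2"]`): each protocol name is preceded by a one-byte length. A small standalone helper sketch for producing that layout (not Bun's API, just an illustration of the format):

```typescript
// Encode a protocol list into ALPN wire format (RFC 7301):
// each entry is a 1-byte length followed by the protocol name's bytes.
function encodeAlpnProtocols(protocols: string[]): Uint8Array {
  const parts: number[] = [];
  for (const name of protocols) {
    const bytes = new TextEncoder().encode(name);
    if (bytes.length === 0 || bytes.length > 255) {
      throw new RangeError(`invalid ALPN protocol name: ${name}`);
    }
    parts.push(bytes.length, ...bytes);
  }
  return Uint8Array.from(parts);
}
```

For `["h2"]` this yields `[0x02, 0x68, 0x32]` (`2`, `'h'`, `'2'`), matching the buffer shown in the comment.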


@@ -73,6 +73,7 @@ const H2FrameParser = $zig("h2_frame_parser.zig", "H2FrameParserConstructor");
const assertSettings = $newZigFunction("h2_frame_parser.zig", "jsAssertSettings", 1);
const getPackedSettings = $newZigFunction("h2_frame_parser.zig", "jsGetPackedSettings", 1);
const getUnpackedSettings = $newZigFunction("h2_frame_parser.zig", "jsGetUnpackedSettings", 1);
const { upgradeRawSocketToH2 } = require("node:_http2_upgrade");
const sensitiveHeaders = Symbol.for("nodejs.http2.sensitiveHeaders");
const bunHTTP2Native = Symbol.for("::bunhttp2native::");
@@ -3881,6 +3882,7 @@ Http2Server.prototype[EventEmitter.captureRejectionSymbol] = function (err, even
function onErrorSecureServerSession(err, socket) {
if (!this.emit("clientError", err, socket)) socket.destroy(err);
}
function emitFrameErrorEventNT(stream, frameType, errorCode) {
stream.emit("frameError", frameType, errorCode);
}
@@ -3918,6 +3920,15 @@ class Http2SecureServer extends tls.Server {
}
this.on("tlsClientError", onErrorSecureServerSession);
}
emit(event: string, ...args: any[]) {
if (event === "connection") {
const socket = args[0];
if (socket && !(socket instanceof TLSSocket)) {
return upgradeRawSocketToH2(connectionListener, this, socket);
}
}
return super.emit(event, ...args);
}
setTimeout(ms, callback) {
this.timeout = ms;
if (typeof callback === "function") {


@@ -490,7 +490,7 @@ pub const HtmlRenderer = struct {
const needle = "&<>\"";
while (true) {
-const next = std.mem.indexOfAny(u8, txt[i..], needle) orelse {
+const next = bun.strings.indexOfAny(txt[i..], needle) orelse {
self.write(txt[i..]);
return;
};


@@ -70,11 +70,13 @@ pub fn isZeroWidthCodepointType(comptime T: type, cp: T) bool {
}
// Thai combining marks
-if ((cp >= 0xe31 and cp <= 0xe3a) or (cp >= 0xe47 and cp <= 0xe4e))
+// Note: U+0E32 (SARA AA) and U+0E33 (SARA AM) are Grapheme_Base (spacing vowels), not combining
+if (cp == 0xe31 or (cp >= 0xe34 and cp <= 0xe3a) or (cp >= 0xe47 and cp <= 0xe4e))
return true;
// Lao combining marks
-if ((cp >= 0xeb1 and cp <= 0xebc) or (cp >= 0xec8 and cp <= 0xecd))
+// Note: U+0EB2 and U+0EB3 are spacing vowels like Thai, not combining
+if (cp == 0xeb1 or (cp >= 0xeb4 and cp <= 0xebc) or (cp >= 0xec8 and cp <= 0xecd))
return true;
// Combining Diacritical Marks Extended
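The corrected ranges above carve the spacing vowels out of the zero-width check. A direct TypeScript port of just those two predicates, making the carve-outs visible (same code points as the Zig diff):

```typescript
// Port of the corrected Thai/Lao zero-width check: U+0E32/U+0E33 (Thai
// SARA AA / SARA AM) and U+0EB2/U+0EB3 (Lao) are spacing vowels and must
// NOT be treated as zero-width combining marks.
function isZeroWidthThaiLao(cp: number): boolean {
  // Thai combining marks, minus the spacing vowels U+0E32 and U+0E33
  if (cp === 0x0e31 || (cp >= 0x0e34 && cp <= 0x0e3a) || (cp >= 0x0e47 && cp <= 0x0e4e)) {
    return true;
  }
  // Lao combining marks, minus the spacing vowels U+0EB2 and U+0EB3
  if (cp === 0x0eb1 || (cp >= 0x0eb4 && cp <= 0x0ebc) || (cp >= 0x0ec8 && cp <= 0x0ecd)) {
    return true;
  }
  return false;
}
```

Before the fix the ranges started at U+0E32/U+0EB2, so common words like กำ ("gam") measured one column narrower than they render.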


@@ -1 +1,7 @@
Bun.hash.wyhash("asdf", 1234n);
// https://github.com/oven-sh/bun/issues/26043
// Bun.hash.crc32 accepts optional seed parameter for incremental CRC32 computation
let crc = 0;
crc = Bun.hash.crc32(new Uint8Array([1, 2, 3]), crc);
crc = Bun.hash.crc32(new Uint8Array([4, 5, 6]), crc);
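The snippet above relies on a property of CRC-32 seeding: feeding the previous result back in as the seed is equivalent to hashing the concatenated input. A self-contained reference implementation (standard CRC-32/IEEE, reflected, polynomial 0xEDB88320; not Bun's native code) demonstrating the property:

```typescript
// Bitwise CRC-32 (IEEE, reflected). Seeding with a prior CRC continues the
// computation, so crc32(b, crc32(a)) === crc32(concat(a, b)).
function crc32(data: Uint8Array, seed = 0): number {
  let crc = ~seed >>> 0; // invert the seed so chaining resumes cleanly
  for (const byte of data) {
    crc ^= byte;
    for (let bit = 0; bit < 8; bit++) {
      crc = (crc >>> 1) ^ (0xedb88320 & -(crc & 1));
    }
  }
  return ~crc >>> 0; // final inversion, as a 32-bit unsigned value
}
```

The inversion trick is what makes incremental use work: the next call's `~seed` undoes the previous call's final `~crc`, so the internal state carries over as if the bytes had arrived in one buffer.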


@@ -0,0 +1,35 @@
import { expect, test } from "bun:test";
test("Listener.getsockname() works without arguments", () => {
const listener = Bun.listen({
hostname: "localhost",
port: 0,
socket: {
data() {},
},
});
try {
// Calling getsockname() without arguments should return an object
// with family, address, and port properties (not crash).
const result = listener.getsockname();
expect(result).toBeObject();
expect(result.family).toMatch(/^IPv[46]$/);
expect(result.address).toBeString();
expect(result.port).toBeNumber();
// Calling with an object argument should still work (existing behavior).
const obj: Record<string, unknown> = {};
listener.getsockname(obj);
expect(obj.family).toMatch(/^IPv[46]$/);
expect(obj.address).toBeString();
expect(obj.port).toBeNumber();
// Calling with a non-object argument should return a new object (not crash).
const result2 = listener.getsockname(42 as any);
expect(result2).toBeObject();
expect(result2.family).toMatch(/^IPv[46]$/);
} finally {
listener.stop();
}
});


@@ -485,6 +485,28 @@ describe("stringWidth extended", () => {
expect(Bun.stringWidth("ก็")).toBe(1); // With maitaikhu
expect(Bun.stringWidth("ปฏัก")).toBe(3); // ป + ฏ + ั (combining) + ก = 3 visible
});
test("Thai spacing vowels (SARA AA and SARA AM)", () => {
// U+0E32 (SARA AA) and U+0E33 (SARA AM) are spacing vowels, not combining marks
expect(Bun.stringWidth("\u0E32")).toBe(1); // SARA AA alone
expect(Bun.stringWidth("\u0E33")).toBe(1); // SARA AM alone
expect(Bun.stringWidth("ก\u0E32")).toBe(2); // ก + SARA AA
expect(Bun.stringWidth("ก\u0E33")).toBe(2); // กำ (KO KAI + SARA AM)
expect(Bun.stringWidth("คำ")).toBe(2); // Common Thai word
expect(Bun.stringWidth("ทำ")).toBe(2); // Common Thai word
// True combining marks should still be zero-width
expect(Bun.stringWidth("\u0E31")).toBe(0); // MAI HAN-AKAT (combining)
expect(Bun.stringWidth("ก\u0E31")).toBe(1); // กั
});
test("Lao spacing vowels", () => {
// U+0EB2 and U+0EB3 are spacing vowels in Lao, similar to Thai
expect(Bun.stringWidth("\u0EB2")).toBe(1); // LAO VOWEL SIGN AA
expect(Bun.stringWidth("\u0EB3")).toBe(1); // LAO VOWEL SIGN AM
expect(Bun.stringWidth("ກ\u0EB2")).toBe(2); // KO + AA
// True combining marks should still be zero-width
expect(Bun.stringWidth("\u0EB1")).toBe(0); // MAI KAN (combining)
});
});
describe("non-ASCII in escape sequences and Indic script handling", () => {


@@ -0,0 +1,161 @@
/**
* All tests in this file should also run in Node.js.
*
* Do not add any tests that only run in Bun.
*/
import { describe, test } from "node:test";
import assert from "node:assert";
import { Agent, createServer, request as httpRequest } from "node:http";
import type { AddressInfo } from "node:net";
// Helper to make a request and get the response.
// Uses a shared agent so that all requests go through the same TCP connection,
// which is critical for actually testing the keep-alive / proxy-URL bug.
function makeRequest(
port: number,
path: string,
agent: Agent,
): Promise<{ statusCode: number; body: string; url: string }> {
return new Promise((resolve, reject) => {
const req = httpRequest({ host: "127.0.0.1", port, path, method: "GET", agent }, res => {
let body = "";
res.on("data", chunk => {
body += chunk;
});
res.on("end", () => {
resolve({ statusCode: res.statusCode!, body, url: path });
});
});
req.on("error", reject);
req.end();
});
}
function listenOnRandomPort(server: ReturnType<typeof createServer>): Promise<number> {
return new Promise((resolve) => {
server.listen(0, "127.0.0.1", () => {
const addr = server.address() as AddressInfo;
resolve(addr.port);
});
});
}
describe("HTTP server with proxy-style absolute URLs", () => {
test("sequential GET requests with absolute URL paths don't hang", async () => {
const agent = new Agent({ keepAlive: true, maxSockets: 1 });
const server = createServer((req, res) => {
res.writeHead(200, { "Content-Type": "text/plain" });
res.end(req.url);
});
const port = await listenOnRandomPort(server);
try {
// Make 3 sequential requests with proxy-style absolute URLs
// Before the fix, request 2 would hang because the parser entered tunnel mode
const r1 = await makeRequest(port, "http://example.com/test1", agent);
assert.strictEqual(r1.statusCode, 200);
assert.ok(r1.body.includes("example.com"), `Expected body to contain "example.com", got: ${r1.body}`);
assert.ok(r1.body.includes("/test1"), `Expected body to contain "/test1", got: ${r1.body}`);
const r2 = await makeRequest(port, "http://example.com/test2", agent);
assert.strictEqual(r2.statusCode, 200);
assert.ok(r2.body.includes("example.com"), `Expected body to contain "example.com", got: ${r2.body}`);
assert.ok(r2.body.includes("/test2"), `Expected body to contain "/test2", got: ${r2.body}`);
const r3 = await makeRequest(port, "http://other.com/test3", agent);
assert.strictEqual(r3.statusCode, 200);
assert.ok(r3.body.includes("other.com"), `Expected body to contain "other.com", got: ${r3.body}`);
assert.ok(r3.body.includes("/test3"), `Expected body to contain "/test3", got: ${r3.body}`);
} finally {
agent.destroy();
server.close();
}
});
test("sequential POST requests with absolute URL paths don't hang", async () => {
const agent = new Agent({ keepAlive: true, maxSockets: 1 });
const server = createServer((req, res) => {
let body = "";
req.on("data", chunk => {
body += chunk;
});
req.on("end", () => {
res.writeHead(200, { "Content-Type": "text/plain" });
res.end(`${req.method} ${req.url} body=${body}`);
});
});
const port = await listenOnRandomPort(server);
try {
for (let i = 1; i <= 3; i++) {
const result = await new Promise<{ statusCode: number; body: string }>((resolve, reject) => {
const req = httpRequest(
{
host: "127.0.0.1",
port,
path: `http://example.com/post${i}`,
method: "POST",
headers: { "Content-Type": "text/plain" },
agent,
},
res => {
let body = "";
res.on("data", chunk => {
body += chunk;
});
res.on("end", () => {
resolve({ statusCode: res.statusCode!, body });
});
},
);
req.on("error", reject);
req.write(`data${i}`);
req.end();
});
assert.strictEqual(result.statusCode, 200);
assert.ok(result.body.includes(`/post${i}`), `Expected body to contain "/post${i}", got: ${result.body}`);
assert.ok(result.body.includes(`body=data${i}`), `Expected body to contain "body=data${i}", got: ${result.body}`);
}
} finally {
agent.destroy();
server.close();
}
});
test("mixed normal and proxy-style URLs work sequentially", async () => {
const agent = new Agent({ keepAlive: true, maxSockets: 1 });
const server = createServer((req, res) => {
res.writeHead(200, { "Content-Type": "text/plain" });
res.end(req.url);
});
const port = await listenOnRandomPort(server);
try {
// Mix of normal and proxy-style URLs
const r1 = await makeRequest(port, "/normal1", agent);
assert.strictEqual(r1.statusCode, 200);
assert.ok(r1.body.includes("/normal1"), `Expected body to contain "/normal1", got: ${r1.body}`);
const r2 = await makeRequest(port, "http://example.com/proxy1", agent);
assert.strictEqual(r2.statusCode, 200);
assert.ok(r2.body.includes("example.com"), `Expected body to contain "example.com", got: ${r2.body}`);
assert.ok(r2.body.includes("/proxy1"), `Expected body to contain "/proxy1", got: ${r2.body}`);
const r3 = await makeRequest(port, "/normal2", agent);
assert.strictEqual(r3.statusCode, 200);
assert.ok(r3.body.includes("/normal2"), `Expected body to contain "/normal2", got: ${r3.body}`);
const r4 = await makeRequest(port, "http://other.com/proxy2", agent);
assert.strictEqual(r4.statusCode, 200);
assert.ok(r4.body.includes("other.com"), `Expected body to contain "other.com", got: ${r4.body}`);
assert.ok(r4.body.includes("/proxy2"), `Expected body to contain "/proxy2", got: ${r4.body}`);
} finally {
agent.destroy();
server.close();
}
});
});


@@ -0,0 +1,26 @@
import { describe, expect, test } from "bun:test";
import { bunEnv, bunExe, nodeExe } from "harness";
import { join } from "node:path";
describe("HTTP server with proxy-style absolute URLs", () => {
test("tests should run on node.js", async () => {
await using process = Bun.spawn({
cmd: [nodeExe(), "--test", join(import.meta.dir, "node-http-proxy-url.node.mts")],
stdout: "inherit",
stderr: "inherit",
stdin: "ignore",
env: bunEnv,
});
expect(await process.exited).toBe(0);
});
test("tests should run on bun", async () => {
await using process = Bun.spawn({
cmd: [bunExe(), "test", join(import.meta.dir, "node-http-proxy-url.node.mts")],
stdout: "inherit",
stderr: "inherit",
stdin: "ignore",
env: bunEnv,
});
expect(await process.exited).toBe(0);
});
});


@@ -0,0 +1,428 @@
/**
* Tests for the net.Server → Http2SecureServer upgrade path
* (upgradeRawSocketToH2 in _http2_upgrade.ts).
*
* This pattern is used by http2-wrapper, crawlee, and other libraries that
* accept raw TCP connections and upgrade them to HTTP/2 via
* `h2Server.emit('connection', rawSocket)`.
*
* Works with both:
* bun bd test test/js/node/http2/node-http2-upgrade.test.ts
* node --experimental-strip-types --test test/js/node/http2/node-http2-upgrade.test.ts
*/
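// The upgrade pattern exercised below, in miniature (illustrative sketch only;
// the real helpers follow):
//
//   const h2Server = http2.createSecureServer(tlsOptions, handler);
//   const netServer = net.createServer(rawSocket => {
//     // Hand the already-accepted TCP socket to the HTTP/2 server; it runs
//     // the TLS handshake and ALPN negotiation on the raw socket itself.
//     h2Server.emit("connection", rawSocket);
//   });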
import assert from "node:assert";
import fs from "node:fs";
import http2 from "node:http2";
import net from "node:net";
import path from "node:path";
import { afterEach, describe, test } from "node:test";
import { fileURLToPath } from "node:url";
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const FIXTURES_PATH = path.join(__dirname, "..", "test", "fixtures", "keys");
const TLS = {
key: fs.readFileSync(path.join(FIXTURES_PATH, "agent1-key.pem")),
cert: fs.readFileSync(path.join(FIXTURES_PATH, "agent1-cert.pem")),
ALPNProtocols: ["h2"],
};
function createUpgradeServer(
handler: (req: http2.Http2ServerRequest, res: http2.Http2ServerResponse) => void,
opts: { onSession?: (session: http2.Http2Session) => void } = {},
): Promise<{ netServer: net.Server; h2Server: http2.Http2SecureServer; port: number }> {
return new Promise(resolve => {
const h2Server = http2.createSecureServer(TLS, handler);
h2Server.on("error", () => {});
if (opts.onSession) h2Server.on("session", opts.onSession);
const netServer = net.createServer(socket => {
h2Server.emit("connection", socket);
});
netServer.listen(0, "127.0.0.1", () => {
resolve({ netServer, h2Server, port: (netServer.address() as net.AddressInfo).port });
});
});
}
function connectClient(port: number): http2.ClientHttp2Session {
const client = http2.connect(`https://127.0.0.1:${port}`, { rejectUnauthorized: false });
client.on("error", () => {});
return client;
}
function request(
client: http2.ClientHttp2Session,
method: string,
reqPath: string,
body?: string,
): Promise<{ status: number; headers: http2.IncomingHttpHeaders; body: string }> {
return new Promise((resolve, reject) => {
const req = client.request({ ":method": method, ":path": reqPath });
let responseBody = "";
let responseHeaders: http2.IncomingHttpHeaders = {};
req.on("response", hdrs => {
responseHeaders = hdrs;
});
req.setEncoding("utf8");
req.on("data", (chunk: string) => {
responseBody += chunk;
});
req.on("end", () => {
resolve({
status: responseHeaders[":status"] as unknown as number,
headers: responseHeaders,
body: responseBody,
});
});
req.on("error", reject);
if (body !== undefined) {
req.end(body);
} else {
req.end();
}
});
}
describe("HTTP/2 upgrade via net.Server", () => {
let servers: { netServer: net.Server }[] = [];
let clients: http2.ClientHttp2Session[] = [];
afterEach(() => {
for (const c of clients) c.close();
for (const s of servers) s.netServer.close();
clients = [];
servers = [];
});
test("GET request succeeds with 200 and custom headers", async () => {
const srv = await createUpgradeServer((_req, res) => {
res.writeHead(200, { "x-upgrade-test": "yes" });
res.end("hello from upgraded server");
});
servers.push(srv);
const client = connectClient(srv.port);
clients.push(client);
const result = await request(client, "GET", "/");
assert.strictEqual(result.status, 200);
assert.strictEqual(result.headers["x-upgrade-test"], "yes");
assert.strictEqual(result.body, "hello from upgraded server");
});
test("POST request with body echoed back", async () => {
const srv = await createUpgradeServer((_req, res) => {
let body = "";
_req.on("data", (chunk: string) => {
body += chunk;
});
_req.on("end", () => {
res.writeHead(200);
res.end("echo:" + body);
});
});
servers.push(srv);
const client = connectClient(srv.port);
clients.push(client);
const result = await request(client, "POST", "/echo", "test payload");
assert.strictEqual(result.status, 200);
assert.strictEqual(result.body, "echo:test payload");
});
});
describe("HTTP/2 upgrade — multiple requests on one connection", () => {
test("three sequential requests share the same session", async () => {
let count = 0;
const srv = await createUpgradeServer((_req, res) => {
count++;
res.writeHead(200);
res.end(String(count));
});
const client = connectClient(srv.port);
const r1 = await request(client, "GET", "/");
const r2 = await request(client, "GET", "/");
const r3 = await request(client, "GET", "/");
assert.strictEqual(r1.body, "1");
assert.strictEqual(r2.body, "2");
assert.strictEqual(r3.body, "3");
client.close();
srv.netServer.close();
});
});
describe("HTTP/2 upgrade — session event", () => {
test("h2Server emits session event", async () => {
let sessionFired = false;
const srv = await createUpgradeServer(
(_req, res) => {
res.writeHead(200);
res.end("ok");
},
{
onSession: () => {
sessionFired = true;
},
},
);
const client = connectClient(srv.port);
await request(client, "GET", "/");
assert.strictEqual(sessionFired, true);
client.close();
srv.netServer.close();
});
});
describe("HTTP/2 upgrade — concurrent clients", () => {
test("two clients get independent sessions", async () => {
const srv = await createUpgradeServer((_req, res) => {
res.writeHead(200);
res.end(_req.url);
});
const c1 = connectClient(srv.port);
const c2 = connectClient(srv.port);
const [r1, r2] = await Promise.all([request(c1, "GET", "/from-client-1"), request(c2, "GET", "/from-client-2")]);
assert.strictEqual(r1.body, "/from-client-1");
assert.strictEqual(r2.body, "/from-client-2");
c1.close();
c2.close();
srv.netServer.close();
});
});
describe("HTTP/2 upgrade — socket close ordering", () => {
test("no crash when rawSocket.destroy() precedes session.close()", async () => {
let rawSocket: net.Socket | undefined;
let h2Session: http2.Http2Session | undefined;
const h2Server = http2.createSecureServer(TLS, (_req, res) => {
res.writeHead(200);
res.end("done");
});
h2Server.on("error", () => {});
h2Server.on("session", s => {
h2Session = s;
});
const netServer = net.createServer(socket => {
rawSocket = socket;
h2Server.emit("connection", socket);
});
const port = await new Promise<number>(resolve => {
netServer.listen(0, "127.0.0.1", () => resolve((netServer.address() as net.AddressInfo).port));
});
const client = connectClient(port);
await request(client, "GET", "/");
const socketClosed = Promise.withResolvers<void>();
rawSocket!.once("close", () => socketClosed.resolve());
rawSocket!.destroy();
await socketClosed.promise;
if (h2Session) h2Session.close();
client.close();
netServer.close();
});
test("no crash when session.close() precedes rawSocket.destroy()", async () => {
let rawSocket: net.Socket | undefined;
let h2Session: http2.Http2Session | undefined;
const h2Server = http2.createSecureServer(TLS, (_req, res) => {
res.writeHead(200);
res.end("done");
});
h2Server.on("error", () => {});
h2Server.on("session", s => {
h2Session = s;
});
const netServer = net.createServer(socket => {
rawSocket = socket;
h2Server.emit("connection", socket);
});
const port = await new Promise<number>(resolve => {
netServer.listen(0, "127.0.0.1", () => resolve((netServer.address() as net.AddressInfo).port));
});
const client = connectClient(port);
await request(client, "GET", "/");
if (h2Session) h2Session.close();
const socketClosed = Promise.withResolvers<void>();
rawSocket!.once("close", () => socketClosed.resolve());
rawSocket!.destroy();
await socketClosed.promise;
client.close();
netServer.close();
});
});
describe("HTTP/2 upgrade — ALPN negotiation", () => {
test("alpnProtocol is h2 after upgrade", async () => {
let observedAlpn: string | undefined;
const srv = await createUpgradeServer((_req, res) => {
const session = _req.stream.session;
if (session && session.socket) {
observedAlpn = (session.socket as any).alpnProtocol;
}
res.writeHead(200);
res.end("alpn-ok");
});
const client = connectClient(srv.port);
await request(client, "GET", "/");
assert.strictEqual(observedAlpn, "h2");
client.close();
srv.netServer.close();
});
});
describe("HTTP/2 upgrade — varied status codes", () => {
test("404 response with custom header", async () => {
const srv = await createUpgradeServer((_req, res) => {
res.writeHead(404, { "x-reason": "not-found" });
res.end("not found");
});
const client = connectClient(srv.port);
const result = await request(client, "GET", "/missing");
assert.strictEqual(result.status, 404);
assert.strictEqual(result.headers["x-reason"], "not-found");
assert.strictEqual(result.body, "not found");
client.close();
srv.netServer.close();
});
test("302 redirect response", async () => {
const srv = await createUpgradeServer((_req, res) => {
res.writeHead(302, { location: "/" });
res.end();
});
const client = connectClient(srv.port);
const result = await request(client, "GET", "/redirect");
assert.strictEqual(result.status, 302);
assert.strictEqual(result.headers["location"], "/");
client.close();
srv.netServer.close();
});
test("large response body (8KB) through upgraded socket", async () => {
const srv = await createUpgradeServer((_req, res) => {
res.writeHead(200);
res.end("x".repeat(8192));
});
const client = connectClient(srv.port);
const result = await request(client, "GET", "/large");
assert.strictEqual(result.body.length, 8192);
client.close();
srv.netServer.close();
});
});
describe("HTTP/2 upgrade — client disconnect mid-response", () => {
test("server does not crash when client destroys stream early", async () => {
const streamClosed = Promise.withResolvers<void>();
const srv = await createUpgradeServer((_req, res) => {
res.writeHead(200);
const interval = setInterval(() => {
if (res.destroyed || res.writableEnded) {
clearInterval(interval);
return;
}
res.write("chunk\n");
}, 5);
_req.stream.on("close", () => {
clearInterval(interval);
streamClosed.resolve();
});
});
const client = connectClient(srv.port);
const streamReady = Promise.withResolvers<http2.ClientHttp2Stream>();
const req = client.request({ ":method": "GET", ":path": "/" });
req.on("response", () => streamReady.resolve(req));
req.on("error", () => {});
const stream = await streamReady.promise;
stream.destroy();
await streamClosed.promise;
client.close();
srv.netServer.close();
});
});
describe("HTTP/2 upgrade — independent upgrade per connection", () => {
test("three clients produce three distinct sessions", async () => {
const sessions: http2.Http2Session[] = [];
const srv = await createUpgradeServer(
(_req, res) => {
res.writeHead(200);
res.end("ok");
},
{ onSession: s => sessions.push(s) },
);
const c1 = connectClient(srv.port);
const c2 = connectClient(srv.port);
const c3 = connectClient(srv.port);
await Promise.all([request(c1, "GET", "/"), request(c2, "GET", "/"), request(c3, "GET", "/")]);
assert.strictEqual(sessions.length, 3);
assert.notStrictEqual(sessions[0], sessions[1]);
assert.notStrictEqual(sessions[1], sessions[2]);
c1.close();
c2.close();
c3.close();
srv.netServer.close();
});
});
if (typeof Bun !== "undefined") {
describe("Node.js compatibility", () => {
test("tests should run on node.js", async () => {
await using proc = Bun.spawn({
cmd: [Bun.which("node") || "node", "--test", import.meta.filename],
stdout: "inherit",
stderr: "inherit",
stdin: "ignore",
});
assert.strictEqual(await proc.exited, 0);
});
});
}


@@ -0,0 +1,69 @@
'use strict';
const common = require('../common');
const fixtures = require('../common/fixtures');
if (!common.hasCrypto)
common.skip('missing crypto');
const assert = require('assert');
const net = require('net');
const h2 = require('http2');
const tlsOptions = {
key: fixtures.readKey('agent1-key.pem'),
cert: fixtures.readKey('agent1-cert.pem'),
ALPNProtocols: ['h2']
};
// Create a net server that upgrades sockets to HTTP/2 manually, handles the
// request, and then shuts down via a short socket timeout and a longer H2 session
// timeout. This is an unconventional way to shut down a session (the underlying
// socket closing first) but it should work - critically, it shouldn't segfault
// (as it did until Node v20.5.1).
let serverRawSocket;
let serverH2Session;
const netServer = net.createServer((socket) => {
serverRawSocket = socket;
h2Server.emit('connection', socket);
});
const h2Server = h2.createSecureServer(tlsOptions, (req, res) => {
res.writeHead(200);
res.end();
});
h2Server.on('session', (session) => {
serverH2Session = session;
});
netServer.listen(0, common.mustCall(() => {
const proxyClient = h2.connect(`https://localhost:${netServer.address().port}`, {
rejectUnauthorized: false
});
proxyClient.on('error', () => {});
proxyClient.on('close', common.mustCall(() => {
netServer.close();
}));
const req = proxyClient.request({
':method': 'GET',
':path': '/'
});
req.on('error', () => {});
req.on('response', common.mustCall((response) => {
assert.strictEqual(response[':status'], 200);
// Asynchronously shut down the server's connections after the response,
// but not in the order it typically expects:
setTimeout(() => {
serverRawSocket.destroy();
setTimeout(() => {
serverH2Session.close();
}, 10);
}, 10);
}));
}));


@@ -0,0 +1,69 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe } from "harness";
// https://github.com/oven-sh/bun/issues/26669
// WebSocket client crashes ("Pure virtual function called!") when binaryType = "blob"
// and no event listener is attached. The missing incPendingActivityCount() allows the
// WebSocket to be GC'd before the postTask callback runs.
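// Per the PR description, the fix mirrors the ArrayBuffer/NodeBuffer cases in
// WebSocket.cpp (C++ sketch rendered here as comments, not the actual source):
//   this->incPendingActivityCount();     // pin the wrapper before queueing
//   postTask([this] {
//     dispatchEvent(...);                // deliver the Blob message event
//     this->decPendingActivityCount();   // balanced decrement inside the task
//   });
// Without the increment, GC can collect the wrapper while the task is pending.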
test("WebSocket with binaryType blob should not crash when GC'd before postTask", async () => {
await using server = Bun.serve({
port: 0,
fetch(req, server) {
if (server.upgrade(req)) return undefined;
return new Response("Not a websocket");
},
websocket: {
open(ws) {
// Send binary data immediately - this triggers didReceiveBinaryData
// with the Blob path when client has binaryType = "blob"
ws.sendBinary(new Uint8Array(64));
ws.sendBinary(new Uint8Array(64));
ws.sendBinary(new Uint8Array(64));
},
message() {},
},
});
await using proc = Bun.spawn({
cmd: [
bunExe(),
"-e",
`
const url = process.argv[1];
// Create many short-lived WebSocket objects with blob binaryType and no listeners.
// Without the fix, the missing incPendingActivityCount() lets the WebSocket get GC'd
// before the postTask callback fires, causing "Pure virtual function called!".
async function run() {
for (let i = 0; i < 100; i++) {
const ws = new WebSocket(url);
ws.binaryType = "blob";
// Intentionally: NO event listeners attached.
// This forces the postTask path in didReceiveBinaryData's Blob case.
}
// Force GC to collect the unreferenced WebSocket objects while postTask
// callbacks are still pending.
Bun.gc(true);
await Bun.sleep(50);
Bun.gc(true);
await Bun.sleep(50);
Bun.gc(true);
await Bun.sleep(100);
}
await run();
Bun.gc(true);
await Bun.sleep(200);
console.log("OK");
process.exit(0);
`,
`ws://localhost:${server.port}`,
],
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(stdout).toContain("OK");
expect(exitCode).toBe(0);
});