Compare commits

..

11 Commits

Author SHA1 Message Date
Claude Bot
e5d06d48b1 fix(bmalloc): add patch verification, unistd.h include, named constants
Address review feedback:
- Add string(FIND) verification after each patch to warn if the
  expected pattern was not found (silent failure detection for
  WebKit version upgrades)
- Add #include <unistd.h> to pas_utils.h patch (guarded by
  !PAS_OS(WINDOWS)) so usleep() compiles
- Use named constants PAS_SYSCALL_MAX_RETRIES and
  PAS_SYSCALL_RETRY_DELAY_US in the PAS_SYSCALL macro for
  consistency with the BSyscall.h style

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-27 01:16:37 +00:00
autofix-ci[bot]
eacba7987f [autofix.ci] apply automated fixes 2026-02-27 01:04:27 +00:00
Claude Bot
1fc4dd4f83 fix(bmalloc): add backoff to SYSCALL macro and remove MADV_DONTDUMP
The SYSCALL and PAS_SYSCALL macros in bmalloc retry syscalls returning
EAGAIN in a zero-delay tight loop. When madvise(MADV_DONTDUMP) returns
EAGAIN under kernel mmap_write_lock contention (concurrent GC threads),
this causes 250K+ retries/sec/thread and 100% CPU, freezing the process.

Fix applied via patches to downloaded WebKit headers in SetupWebKit.cmake:

1. Add usleep(1000) backoff and 100-retry cap to SYSCALL/PAS_SYSCALL
   macros. When the syscall succeeds on first try (common case), zero
   overhead. On EAGAIN, retries up to 100 times with 1ms delay (~100ms
   max), then gives up gracefully.

2. Remove MADV_DONTDUMP/MADV_DODUMP calls on Linux. These require the
   kernel's exclusive mmap_write_lock (unlike MADV_DONTNEED which only
   needs a read lock) and are the primary contention source. They only
   affect core dump size, not allocation correctness.

Closes #27490

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-27 01:02:28 +00:00
Jarred Sumner
30e609e080 Windows ARM64 2026-02-25 20:55:23 -08:00
Luke Parker
84e4a5ce9c fix(windows): avoid standalone worker dotenv crash (#27434)
### What does this PR do?

Fixes #27431.

- fixes a Windows standalone executable crash when
`compile.autoloadDotenv = false`, a `.env` file exists in the runtime
cwd, and the executable spawns a `Worker`
- gives worker startup its own cloned `DotEnv.Loader` before
`configureDefines()`, so dotenv loading does not mutate env state owned
by another thread
- aligns worker startup with other Bun runtime paths by wiring
`resolver.env_loader = transpiler.env`
- extracts standalone runtime flag propagation into
`applyStandaloneRuntimeFlags(...)` so main and worker startup share the
same env/tsconfig/package.json behavior
- adds regression coverage in `test/regression/issue/27431.test.ts` and
bundler coverage in `test/bundler/bundler_compile_autoload.test.ts`

### How did you verify your code works?

- reproduced the original crash with `bun test
regression/issue/27431.test.ts` on stock `1.3.10-canary.104`; the test
fails on unpatched Bun
- rebuilt `build/debug/bun-debug.exe` with this patch and ran
`build/debug/bun-debug.exe test regression/issue/27431.test.ts`; the
test passes on the patched build
- manually validated the minimal repro from
`https://github.com/Hona/bun1310-minimal-repro` against the patched
`bun-debug.exe`; the standalone executable no longer crashes and still
keeps dotenv disabled (`process.env` does not pick up `.env`)
2026-02-25 19:49:56 -08:00
Jarred Sumner
89c70a76e8 feat(repl): add -e/-p flags, docs page, and shell completions (#27436)
## Summary

- Adds `bun repl -e <script>` / `-p <script>` for non-interactive
evaluation using REPL semantics (object literal wrapping, declaration
hoisting), draining the event loop before exit. Returns exit code 1 on
error.
- Adds `docs/runtime/repl.mdx` documenting the interactive REPL
(commands, keybindings, special variables, top-level await, imports) and
the new non-interactive mode.
- Updates bash/fish/zsh completions for the `repl` subcommand and its
flags.

## Test plan

- [x] `bun bd test test/js/bun/repl/repl.test.ts` — all 103 tests pass
(20 new)
- [x] `USE_SYSTEM_BUN=1 bun test` — new tests fail (validates they test
new behavior)
- [x] `bun run zig:check-all` — compiles on all platforms
- [x] `bash -n` / `fish -n` / `zsh -n` syntax checks on completion files

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-02-25 19:49:44 -08:00
robobun
b2d8504a09 fix(spawn): remove shutdown() on subprocess stdio socketpairs (#27435)
## Summary

- Remove `shutdown()` calls on subprocess stdio socketpair file
descriptors that were causing Python asyncio-based MCP servers to break

## Root Cause

Bun uses `SOCK_STREAM` socketpairs for subprocess stdio pipes. After
creating each socketpair, it called `shutdown(SHUT_WR)` on the parent's
read end (for stdout/stderr) and `shutdown(SHUT_RD)` on the parent's
write end (for stdin) to make them unidirectional.

On `SOCK_STREAM` sockets, `shutdown(fd, SHUT_WR)` sends a **FIN** to the
peer. Python's `asyncio.connect_write_pipe()` registers an `EPOLLIN`
watcher on the write pipe fd to detect peer closure. The FIN from
`shutdown()` triggers an immediate `EPOLLIN` event, causing asyncio to
interpret it as "connection closed" and tear down the write transport —
even though the pipe should remain open.

This broke **all Python MCP servers** using the `model_context_protocol`
SDK (which uses `connect_write_pipe()` in its stdio transport) whenever
they took more than a few seconds to initialize. Node.js does not have
this issue because it does not call `shutdown()` on its socketpairs.

## Fix

Remove the `shutdown()` calls entirely. The socketpairs are already used
unidirectionally by convention, and the `shutdown()` calls provided no
functional benefit while causing compatibility issues with any program
that polls its stdio fds for readability/writability events.

## Test plan

- [x] Added regression test
`test/js/bun/spawn/spawn-socketpair-shutdown.test.ts` with 3 test cases:
  - Subprocess stdout pipe stays writable after idle delay
  - Python asyncio `connect_write_pipe` works correctly with idle period
  - Subprocess stdin pipe stays readable for child after idle delay
- [x] Verified test fails on system bun (without fix) and passes on
debug build (with fix)
- [x] Verified existing spawn tests still pass
(`spawn-streaming-stdout`, `spawn-stdin-readable-stream`)
- [x] Verified original bug report repro script works with the fix


🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-25 18:25:13 -08:00
Ciro Spaciari
e735bffaa9 fix(http): enable TLS keepalive for custom SSL configs (#27385)
## Summary

- **Enable keepalive for custom TLS configs (mTLS):** Previously, all
connections using custom TLS configurations (client certificates, custom
CA, etc.) had `disable_keepalive=true` forced, causing a new TCP+TLS
handshake on every request. This removes that restriction and properly
tracks SSL contexts per connection.

- **Intern SSLConfig with reference counting:** Identical TLS
configurations are now deduplicated via a global registry
(`SSLConfig.GlobalRegistry`), enabling O(1) pointer-equality lookups
instead of O(n) content comparisons. Uses `ThreadSafeRefCount` for safe
lifetime management across threads.

- **Bounded SSL context cache with LRU eviction:** The custom SSL
context map in `HTTPThread` is now bounded (max 60 entries, 30-minute
TTL) with proper cleanup of both SSL contexts and their associated
SSLConfig references when evicted.

- **Correct keepalive pool isolation:** Pooled sockets now track their
`ssl_config` (with refcount) and `owner` context, ensuring connections
are only reused when the TLS configuration matches exactly, and sockets
return to the correct pool on release.

Fixes #27358

## Changed files

- `src/bun.js/api/server/SSLConfig.zig` — ref counting, content hashing,
GlobalRegistry interning
- `src/bun.js/webcore/fetch.zig` — intern SSLConfig on creation, deref
on cleanup
- `src/http.zig` — `custom_ssl_ctx` field, `getSslCtx()` helper, updated
all callback sites
- `src/http/HTTPContext.zig` — `ssl_config`/`owner` on PooledSocket,
pointer-equality matching
- `src/http/HTTPThread.zig` — `SslContextCacheEntry` with timestamps,
TTL + LRU eviction

## Test plan

- [x] `test/regression/issue/27358.test.ts` — verifies keepalive
connection reuse with custom TLS and isolation between different configs
- [x] `test/js/bun/http/tls-keepalive.test.ts` — comprehensive tests:
keepalive reuse, config isolation, stress test (50 sequential requests),
keepalive-disabled control
- [x] `test/js/bun/http/tls-keepalive-leak-fixture.js` — memory leak
detection fixture (50k requests with same config, 200 requests with
distinct configs)

## Changelog
<!-- CHANGELOG:START -->
Fixed a bug where HTTP connections using custom TLS configurations
(mTLS, custom CA certificates) could not reuse keepalive connections,
causing a new TCP+TLS handshake for every request and leaking SSL
contexts. Custom TLS connections now properly participate in keepalive
pooling with correct isolation between different configurations.
<!-- CHANGELOG:END -->

🤖 Generated with [Claude Code](https://claude.com/claude-code) (0%
16-shotted by claude-opus-4-6, 3 memories recalled)

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-02-25 15:39:13 -08:00
Dylan Conway
347c288d75 fix(shell): seq crashes when called with only flags and no numeric args (#27415)
## What

Fixes a null pointer dereference crash in the `seq` shell builtin when
called with only flags and no numeric arguments.

## Reproduction

```js
await Bun.$`seq -w`       // crash
await Bun.$`seq -s ,`     // crash
await Bun.$`seq -t .`     // crash
```

```
panic(main thread): attempt to use null value
src/shell/builtin/seq.zig:47:31
```

Also crashes release builds (segfault).

## Root cause

The flag-parsing loop at line 17 consumes all arguments. When the user
passes only flags, the iterator is exhausted after the loop exits. Line
47 then calls `iter.next().?` which panics on `null`.

The existing `args.len == 0` check on line 14 only catches the case
where no args are passed at all — it does not cover the case where all
args are consumed as flags.

## Fix

Changed `.?` to `orelse return this.fail(usageString)`, matching the
behavior when `seq` is called with zero arguments.
2026-02-25 15:28:43 -08:00
robobun
6cc1a70198 fix(streams): preserve AsyncLocalStorage context in stream.finished callback (#27429)
## Summary
- Bind the `stream.finished` callback with `AsyncLocalStorage.bind()`
before wrapping with `once()`, matching [Node.js
behavior](https://github.com/nodejs/node/blob/main/lib/internal/streams/end-of-stream.js#L70).
Without this, the async context active when `finished()` is called is
lost by the time the callback fires.

Closes #27428

## Test plan
- [x] Added regression test `test/regression/issue/27428.test.ts` that
spawns an HTTP server using `stream.finished` inside
`AsyncLocalStorage.run()` and verifies the store is preserved in the
callback
- [x] Verified test fails with system bun (`USE_SYSTEM_BUN=1`) and
passes with the debug build
- [x] Existing stream finished tests (`test-stream-finished.js`,
`test-stream-end-of-streams.js`, `test-http-outgoing-finished.js`,
`test-http-client-finished.js`) continue to pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-25 15:28:17 -08:00
robobun
fa3a30f075 feat(repl): implement native Zig REPL with full TUI support (#26304)
## Summary

This PR implements a native Zig REPL for Bun with full TUI (Text User
Interface) support, providing a modern and feature-rich interactive
experience.

### Features

- **Syntax highlighting** using `QuickAndDirtySyntaxHighlighter` for
colorized JavaScript code
- **Full line editing** with Emacs-style keybindings:
  - `Ctrl+A/E` - Move to start/end of line
  - `Ctrl+B/F` - Move backward/forward one character
  - `Ctrl+K/U` - Kill to end/start of line
  - `Ctrl+W` - Delete word backward
  - `Ctrl+L` - Clear screen
  - Arrow keys for cursor movement
- **Persistent history** with file storage (`~/.bun_repl_history`)
  - Up/Down arrow for history navigation
  - `Ctrl+P/N` also works for history
- **Tab completion** for properties and commands
- **Multi-line input support** with automatic continuation detection
- **REPL commands**: `.help`, `.exit`, `.clear`, `.load`, `.save`,
`.editor`
- **Special variables**:
  - `_` - Contains the result of the last expression
  - `_error` - Contains the last error that occurred
- **Result formatting** with `util.inspect` integration
- **replMode transforms** for proper REPL semantics:
  - Expression result capture via `{ value: expr }` wrapper
  - Variable hoisting for persistence across REPL lines (`const`/`let` → `var`)
  - Function and class declaration hoisting
  - Top-level await support with async IIFE wrapper
  - Object literal detection (no parentheses needed for `{ a: 1 }`)

### Implementation

The REPL is implemented in pure Zig (`src/repl.zig`) with C++ bindings
for JSC integration:
- Uses raw terminal mode for character-by-character input
- Integrates with Bun's existing `VirtualMachine` for JavaScript
evaluation
- Uses the parser with `repl_mode=true` to apply REPL-specific AST
transforms
- Provides access to all Bun globals (`Bun`, `Buffer`, `console`,
`process`, etc.)

### Files Changed

- `src/repl.zig` - Main REPL implementation (~1500 lines)
- `src/cli/repl_command.zig` - CLI entry point
- `src/bun.js/bindings/bindings.cpp` - C++ REPL functions
- `src/bun.js/bindings/headers.h` - C++ declarations
- `src/ast/repl_transforms.zig` - REPL-specific AST transforms
(cherry-picked from jarred/repl-mode)
- `test/js/bun/repl/repl.test.ts` - Comprehensive tests

## Test Plan

- [x] Run `bun bd test test/js/bun/repl/repl.test.ts` - 27 tests pass
- [x] Manual testing of interactive features:
  - Basic expression evaluation
  - Special variables `_` and `_error`
  - History navigation
  - Tab completion
  - Multi-line input
  - REPL commands
  - Top-level await
  - Variable persistence
- [x] Verified REPL starts without downloading packages (fixes #26058)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-02-25 15:15:37 -08:00
55 changed files with 5175 additions and 242 deletions


@@ -474,7 +474,8 @@ function getBuildCommand(target, options, label) {
if (target.os === "windows" && label === "build-bun") {
// Only sign release builds, not canary builds (DigiCert charges per signature)
-const enableSigning = !options.canary ? " -DENABLE_WINDOWS_CODESIGNING=ON" : "";
+// Skip signing on ARM64 for now — smctl (x64-only) silently fails under emulation
+const enableSigning = !options.canary && target.arch !== "aarch64" ? " -DENABLE_WINDOWS_CODESIGNING=ON" : "";
return `bun run build:${buildProfile}${enableSigning}`;
}


@@ -43,7 +43,7 @@ bunx cowsay 'Hello, world!' # execute a package
## Install
-Bun supports Linux (x64 & arm64), macOS (x64 & Apple Silicon) and Windows (x64).
+Bun supports Linux (x64 & arm64), macOS (x64 & Apple Silicon) and Windows (x64 & arm64).
> **Linux users** — Kernel version 5.6 or higher is strongly recommended, but the minimum is 5.1.


@@ -260,3 +260,116 @@ file(RENAME ${CACHE_PATH}/bun-webkit ${WEBKIT_PATH})
if(APPLE)
file(REMOVE_RECURSE ${WEBKIT_INCLUDE_PATH}/unicode)
endif()
# --- Apply bmalloc patches ---
# Fix: SYSCALL/PAS_SYSCALL macros spin at 100% CPU on madvise EAGAIN (oven-sh/bun#27490)
#
# The SYSCALL macro retries syscalls returning EAGAIN in a zero-delay tight loop.
# Under kernel mmap_write_lock contention (e.g. concurrent GC threads calling
# madvise(MADV_DONTDUMP)), this causes 250K+ retries/sec/thread and 100% CPU.
#
# Fix has two parts:
# 1. Add usleep(1000) backoff and 100-retry cap to SYSCALL/PAS_SYSCALL macros
# 2. Remove MADV_DONTDUMP/MADV_DODUMP calls which require mmap_write_lock
# (MADV_DONTDUMP only affects core dump size, not allocation correctness)
set(BMALLOC_INCLUDE ${WEBKIT_INCLUDE_PATH}/bmalloc)
# Patch BSyscall.h: add backoff and retry cap to SYSCALL macro
set(BSYSCALL_H ${BMALLOC_INCLUDE}/BSyscall.h)
if(EXISTS ${BSYSCALL_H})
file(READ ${BSYSCALL_H} BSYSCALL_CONTENT)
string(REPLACE
"#include <errno.h>
#define SYSCALL(x) do { \\
while ((x) == -1 && errno == EAGAIN) { } \\
} while (0);"
"#include <errno.h>
#include <unistd.h>
#define BSYSCALL_MAX_RETRIES 100
#define BSYSCALL_RETRY_DELAY_US 1000
#define SYSCALL(x) do { \\
int _syscall_tries = 0; \\
while ((x) == -1 && errno == EAGAIN) { \\
if (++_syscall_tries > BSYSCALL_MAX_RETRIES) break; \\
usleep(BSYSCALL_RETRY_DELAY_US); \\
} \\
} while (0);"
BSYSCALL_CONTENT "${BSYSCALL_CONTENT}")
string(FIND "${BSYSCALL_CONTENT}" "BSYSCALL_MAX_RETRIES" BSYSCALL_PATCH_APPLIED)
if(BSYSCALL_PATCH_APPLIED EQUAL -1)
message(WARNING "BSyscall.h patch did not apply - header may have changed in new WebKit version")
else()
message(STATUS "Patched BSyscall.h: SYSCALL macro backoff")
endif()
file(WRITE ${BSYSCALL_H} "${BSYSCALL_CONTENT}")
endif()
# Patch pas_utils.h: add backoff and retry cap to PAS_SYSCALL macro
# Also add #include <unistd.h> for usleep()
set(PAS_UTILS_H ${BMALLOC_INCLUDE}/pas_utils.h)
if(EXISTS ${PAS_UTILS_H})
file(READ ${PAS_UTILS_H} PAS_UTILS_CONTENT)
string(REPLACE
"#include <string.h>"
"#include <string.h>
#if !PAS_OS(WINDOWS)
#include <unistd.h>
#endif"
PAS_UTILS_CONTENT "${PAS_UTILS_CONTENT}")
string(REPLACE
"#define PAS_SYSCALL(x) do { \\
while ((x) == -1 && errno == EAGAIN) { } \\
} while (0)"
"#define PAS_SYSCALL_MAX_RETRIES 100
#define PAS_SYSCALL_RETRY_DELAY_US 1000
#define PAS_SYSCALL(x) do { \\
int _pas_syscall_tries = 0; \\
while ((x) == -1 && errno == EAGAIN) { \\
if (++_pas_syscall_tries > PAS_SYSCALL_MAX_RETRIES) break; \\
usleep(PAS_SYSCALL_RETRY_DELAY_US); \\
} \\
} while (0)"
PAS_UTILS_CONTENT "${PAS_UTILS_CONTENT}")
string(FIND "${PAS_UTILS_CONTENT}" "PAS_SYSCALL_MAX_RETRIES" PAS_PATCH_APPLIED)
if(PAS_PATCH_APPLIED EQUAL -1)
message(WARNING "pas_utils.h patch did not apply - header may have changed in new WebKit version")
else()
message(STATUS "Patched pas_utils.h: PAS_SYSCALL macro backoff")
endif()
file(WRITE ${PAS_UTILS_H} "${PAS_UTILS_CONTENT}")
endif()
# Patch VMAllocate.h: remove MADV_DONTDUMP/MADV_DODUMP (Linux only)
# These require mmap_write_lock and are the primary contention source.
# MADV_DONTDUMP only affects core dump size, not allocation correctness.
set(VMALLOCATE_H ${BMALLOC_INCLUDE}/VMAllocate.h)
if(EXISTS ${VMALLOCATE_H})
file(READ ${VMALLOCATE_H} VMALLOCATE_CONTENT)
string(FIND "${VMALLOCATE_CONTENT}" "MADV_DONTDUMP" VMALLOCATE_HAS_DONTDUMP)
string(REPLACE
" SYSCALL(madvise(p, vmSize, MADV_DONTNEED));
#if BOS(LINUX)
SYSCALL(madvise(p, vmSize, MADV_DONTDUMP));
#endif"
" SYSCALL(madvise(p, vmSize, MADV_DONTNEED));"
VMALLOCATE_CONTENT "${VMALLOCATE_CONTENT}")
string(REPLACE
" SYSCALL(madvise(p, vmSize, MADV_NORMAL));
#if BOS(LINUX)
SYSCALL(madvise(p, vmSize, MADV_DODUMP));
#endif"
" SYSCALL(madvise(p, vmSize, MADV_NORMAL));"
VMALLOCATE_CONTENT "${VMALLOCATE_CONTENT}")
string(FIND "${VMALLOCATE_CONTENT}" "MADV_DONTDUMP" VMALLOCATE_STILL_HAS_DONTDUMP)
if(NOT VMALLOCATE_HAS_DONTDUMP EQUAL -1 AND NOT VMALLOCATE_STILL_HAS_DONTDUMP EQUAL -1)
message(WARNING "VMAllocate.h patch did not apply - header may have changed in new WebKit version")
else()
message(STATUS "Patched VMAllocate.h: removed MADV_DONTDUMP/MADV_DODUMP")
endif()
file(WRITE ${VMALLOCATE_H} "${VMALLOCATE_CONTENT}")
endif()


@@ -148,6 +148,9 @@ _bun_completions() {
upgrade)
COMPREPLY=( $(compgen -W "--version --cwd --help -v -h") );
return;;
repl)
COMPREPLY=( $(compgen -W "--help -h --eval -e --print -p --preload -r --smol --config -c --cwd --env-file --no-env-file" -- "${cur_word}") );
return;;
run)
_file_arguments "!(*.@(js|ts|jsx|tsx|mjs|cjs)?($|))";
COMPREPLY+=( $(compgen -W "--version --cwd --help --silent -v -h" -- "${cur_word}" ) );


@@ -35,7 +35,7 @@ end
set -l bun_install_boolean_flags yarn production optional development no-save dry-run force no-cache silent verbose global
set -l bun_install_boolean_flags_descriptions "Write a yarn.lock file (yarn v1)" "Don't install devDependencies" "Add dependency to optionalDependencies" "Add dependency to devDependencies" "Don't update package.json or save a lockfile" "Don't install anything" "Always request the latest versions from the registry & reinstall all dependencies" "Ignore manifest cache entirely" "Don't output anything" "Excessively verbose logging" "Use global folder"
-set -l bun_builtin_cmds_without_run dev create help bun upgrade discord install remove add update init pm x
+set -l bun_builtin_cmds_without_run dev create help bun upgrade discord install remove add update init pm x repl
set -l bun_builtin_cmds_accepting_flags create help bun upgrade discord run init link unlink pm x update
function __bun_complete_bins_scripts --inherit-variable bun_builtin_cmds_without_run -d "Emit bun completions for bins and scripts"
@@ -185,3 +185,12 @@ complete -c bun -n "__fish_use_subcommand" -a "x" -d "Execute a package binary,
complete -c bun -n "__fish_use_subcommand" -a "outdated" -d "Display the latest versions of outdated dependencies" -f
complete -c bun -n "__fish_use_subcommand" -a "update" -d "Update dependencies to their latest versions" -f
complete -c bun -n "__fish_use_subcommand" -a "publish" -d "Publish your package from local to npm" -f
complete -c bun -n "__fish_use_subcommand" -a "repl" -d "Start a REPL session with Bun" -f
complete -c bun -n "__fish_seen_subcommand_from repl" -s "e" -l "eval" -r -d "Evaluate argument as a script, then exit" -f
complete -c bun -n "__fish_seen_subcommand_from repl" -s "p" -l "print" -r -d "Evaluate argument as a script, print the result, then exit" -f
complete -c bun -n "__fish_seen_subcommand_from repl" -s "r" -l "preload" -r -d "Import a module before other modules are loaded"
complete -c bun -n "__fish_seen_subcommand_from repl" -l "smol" -d "Use less memory, but run garbage collection more often" -f
complete -c bun -n "__fish_seen_subcommand_from repl" -s "c" -l "config" -r -d "Specify path to Bun config file"
complete -c bun -n "__fish_seen_subcommand_from repl" -l "cwd" -r -d "Absolute path to resolve files & entry points from"
complete -c bun -n "__fish_seen_subcommand_from repl" -l "env-file" -r -d "Load environment variables from the specified file(s)"
complete -c bun -n "__fish_seen_subcommand_from repl" -l "no-env-file" -d "Disable automatic loading of .env files" -f


@@ -524,6 +524,33 @@ _bun_upgrade_completion() {
}
_bun_repl_completion() {
_arguments -s -C \
'1: :->cmd' \
'--help[Print this help menu]' \
'-h[Print this help menu]' \
'(-p --print)--eval[Evaluate argument as a script, then exit]:script' \
'(-p --print)-e[Evaluate argument as a script, then exit]:script' \
'(-e --eval)--print[Evaluate argument as a script, print the result, then exit]:script' \
'(-e --eval)-p[Evaluate argument as a script, print the result, then exit]:script' \
'--preload[Import a module before other modules are loaded]:preload' \
'-r[Import a module before other modules are loaded]:preload' \
'--smol[Use less memory, but run garbage collection more often]' \
'--config[Specify path to Bun config file]: :->config' \
'-c[Specify path to Bun config file]: :->config' \
'--cwd[Absolute path to resolve files & entry points from]:cwd' \
'--env-file[Load environment variables from the specified file(s)]:env-file' \
'--no-env-file[Disable automatic loading of .env files]' &&
ret=0
case $state in
config)
_bun_list_bunfig_toml
;;
esac
}
_bun_build_completion() {
_arguments -s -C \
'1: :->cmd' \
@@ -787,6 +814,10 @@ _bun() {
upgrade)
_bun_upgrade_completion
;;
repl)
_bun_repl_completion
;;
build)
_bun_build_completion
@@ -870,6 +901,10 @@ _bun() {
upgrade)
_bun_upgrade_completion
;;
repl)
_bun_repl_completion
;;
build)
_bun_build_completion


@@ -157,6 +157,31 @@ To build for Windows x64:
</Tab>
</Tabs>
To build for Windows arm64:
<Tabs>
<Tab title="CLI">
```bash icon="terminal" terminal
bun build --compile --target=bun-windows-arm64 ./path/to/my/app.ts --outfile myapp
# note: if no .exe extension is provided, Bun will automatically add it for Windows executables
```
</Tab>
<Tab title="JavaScript">
```ts build.ts icon="/icons/typescript.svg"
await Bun.build({
entrypoints: ["./path/to/my/app.ts"],
compile: {
target: "bun-windows-arm64",
outfile: "./myapp", // .exe added automatically
},
});
```
</Tab>
</Tabs>
To build for macOS arm64:
<Tabs>
@@ -203,16 +228,16 @@ To build for macOS x64:
The order of the `--target` flag does not matter, as long as they're delimited by a `-`.
-| --target | Operating System | Architecture | Modern | Baseline | Libc |
-| --------------------- | ---------------- | ------------ | ------ | -------- | ----- |
-| bun-linux-x64 | Linux | x64 | ✅ | ✅ | glibc |
-| bun-linux-arm64 | Linux | arm64 | ✅ | N/A | glibc |
-| bun-windows-x64 | Windows | x64 | ✅ | ✅ | - |
-| ~~bun-windows-arm64~~ | ~~Windows~~ | ~~arm64~~ | | | - |
-| bun-darwin-x64 | macOS | x64 | ✅ | ✅ | - |
-| bun-darwin-arm64 | macOS | arm64 | ✅ | N/A | - |
-| bun-linux-x64-musl | Linux | x64 | ✅ | ✅ | musl |
-| bun-linux-arm64-musl | Linux | arm64 | ✅ | N/A | musl |
+| --target | Operating System | Architecture | Modern | Baseline | Libc |
+| -------------------- | ---------------- | ------------ | ------ | -------- | ----- |
+| bun-linux-x64 | Linux | x64 | ✅ | ✅ | glibc |
+| bun-linux-arm64 | Linux | arm64 | ✅ | N/A | glibc |
+| bun-windows-x64 | Windows | x64 | ✅ | ✅ | - |
+| bun-windows-arm64 | Windows | arm64 | | N/A | - |
+| bun-darwin-x64 | macOS | x64 | ✅ | ✅ | - |
+| bun-darwin-arm64 | macOS | arm64 | ✅ | N/A | - |
+| bun-linux-x64-musl | Linux | x64 | ✅ | ✅ | musl |
+| bun-linux-arm64-musl | Linux | arm64 | ✅ | N/A | musl |
<Warning>
On x64 platforms, Bun uses SIMD optimizations which require a modern CPU supporting AVX2 instructions. The `-baseline`
@@ -1252,7 +1277,8 @@ type Target =
| "bun-linux-arm64-musl"
| "bun-windows-x64"
| "bun-windows-x64-baseline"
| "bun-windows-x64-modern";
| "bun-windows-x64-modern"
| "bun-windows-arm64";
```
### Complete example


@@ -75,7 +75,7 @@
{
"group": "Core Runtime",
"icon": "cog",
"pages": ["/runtime/index", "/runtime/watch-mode", "/runtime/debugger", "/runtime/bunfig"]
"pages": ["/runtime/index", "/runtime/watch-mode", "/runtime/debugger", "/runtime/repl", "/runtime/bunfig"]
},
{
"group": "File & Module System",


@@ -260,6 +260,13 @@ To download Bun binaries directly, visit the [releases page on GitHub](https://g
>
For older CPUs without AVX2
</Card>
<Card
icon="/icons/windows.svg"
title="Windows ARM64"
href="https://github.com/oven-sh/bun/releases/latest/download/bun-windows-aarch64.zip"
>
Windows on ARM (Snapdragon, etc.)
</Card>
<Card
icon="/icons/apple.svg"
title="macOS ARM64"

docs/runtime/repl.mdx (new file, +176 lines)

@@ -0,0 +1,176 @@
---
title: "REPL"
description: "An interactive JavaScript and TypeScript REPL with syntax highlighting, history, and tab completion"
---
`bun repl` starts an interactive Read-Eval-Print Loop (REPL) for evaluating JavaScript and TypeScript expressions. It's useful for quickly testing code snippets, exploring APIs, and debugging.
```sh terminal icon="terminal"
bun repl
```
```txt
Welcome to Bun v1.3.3
Type .copy [code] to copy to clipboard. .help for more info.
> 1 + 1
2
> const greeting = "Hello, Bun!"
undefined
> greeting
'Hello, Bun!'
```
---
## Features
- **TypeScript & JSX** — Write TypeScript and JSX directly. Bun transpiles everything on the fly.
- **Top-level `await`** — Await promises directly at the prompt without wrapping in an async function.
- **Syntax highlighting** — Input is highlighted as you type.
- **Persistent history** — History is saved to `~/.bun_repl_history` and persists across sessions.
- **Tab completion** — Press `Tab` to complete property names and REPL commands.
- **Multi-line input** — Unclosed brackets, braces, and parentheses automatically continue on the next line.
- **Node.js globals** — `require`, `module`, `__dirname`, and `__filename` are available, resolved relative to your current working directory.
---
## Special variables
The REPL exposes two special variables that update after each evaluation.
| Variable | Description |
| -------- | --------------------------------- |
| `_` | The result of the last expression |
| `_error` | The last error that was thrown |
```txt
> 2 + 2
4
> _ * 10
40
> JSON.parse("oops")
SyntaxError: JSON Parse error: Unexpected identifier "oops"
> _error
SyntaxError: JSON Parse error: Unexpected identifier "oops"
```
---
## Top-level `await`
Promises are automatically awaited. You can `await` any expression directly at the prompt.
```txt
> await fetch("https://api.github.com/repos/oven-sh/bun").then(r => r.json()).then(r => r.stargazers_count)
81234
> const response = await fetch("https://example.com")
undefined
> response.status
200
```
---
## Importing modules
Just like Bun's runtime, you can use either `require` or `import` in the REPL and it Just Works — mix ESM and CommonJS freely at the prompt. Module resolution uses the same rules as `bun run`, so you can import from `node_modules`, relative paths, or `node:` builtins.
```txt
> import { z } from "zod"
undefined
> const path = require("path")
undefined
> z.string().parse(path.join("/tmp", "file.txt"))
'/tmp/file.txt'
```
Declarations persist for the rest of the session, and `const`/`let` can be redeclared across evaluations (unlike in regular scripts) so you can re-run `import` and `require` statements while iterating.
---
## Multi-line input
When you press `Enter` on a line with unclosed brackets, braces, or parentheses, the REPL automatically continues on the next line. The prompt changes to `...` to indicate continuation.
```txt
> function add(a, b) {
... return a + b;
... }
undefined
> add(2, 3)
5
```
For longer multi-line entries, use `.editor` to enter editor mode, which buffers all input until you press `Ctrl+D`.
---
## REPL commands
Type `.help` at the prompt to see all available REPL commands.
| Command | Description |
| ---------- | ------------------------------------------------------------------------------------------------ |
| `.help` | Print the help message listing commands and keybindings |
| `.exit` | Exit the REPL |
| `.clear` | Clear the screen |
| `.copy` | Copy the last result to the clipboard. Pass an expression to evaluate and copy it: `.copy 1 + 1` |
| `.load` | Load a file into the REPL session: `.load ./script.ts` |
| `.save` | Save the current REPL history to a file: `.save ./session.txt` |
| `.editor` | Enter multi-line editor mode (press `Ctrl+D` to evaluate, `Ctrl+C` to cancel) |
| `.break` | Cancel the current multi-line input |
| `.history` | Print the command history |
---
## Keybindings
The REPL supports Emacs-style line editing.
| Keybinding | Action |
| ------------------- | -------------------------------------------------------- |
| `Ctrl+A` | Move to start of line |
| `Ctrl+E` | Move to end of line |
| `Ctrl+B` / `Ctrl+F` | Move backward/forward one character |
| `Alt+B` / `Alt+F` | Move backward/forward one word |
| `Ctrl+U` | Delete to start of line |
| `Ctrl+K` | Delete to end of line |
| `Ctrl+W` | Delete word backward |
| `Ctrl+D` | Delete character (or exit if line is empty) |
| `Ctrl+L` | Clear screen |
| `Ctrl+T` | Swap the two characters before the cursor |
| `Up` / `Down` | Navigate history |
| `Tab` | Auto-complete |
| `Ctrl+C` | Cancel current input (press twice on empty line to exit) |
---
## History
REPL history is automatically saved to `~/.bun_repl_history` (up to 1000 entries) and loaded at the start of each session. Use `Up`/`Down` to navigate.
To export your history to a different file, use `.save`:
```txt
> .save ./my-session.txt
```
---
## Non-interactive mode
Use `-e` / `--eval` to evaluate a script with REPL semantics and exit. Use `-p` / `--print` to additionally print the result.
```sh terminal icon="terminal"
bun repl -e "const x: number = 42; console.log(x)"
# 42
bun repl -p "await fetch('https://example.com').then(r => r.status)"
# 200
bun repl -p "{ a: 1, b: 2 }"
# { a: 1, b: 2 }
```
This uses the same transforms as the interactive REPL, so a bare object literal like `{ a: 1 }` is treated as an object expression instead of a block statement. The process exits after the event loop drains (pending timers and I/O complete first). On error, the process exits with code `1`.
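The block-vs-object distinction is ordinary JavaScript parsing, which you can reproduce with `eval` in any script. This sketch is an illustration of the parsing rule, not of the REPL's internals:

```javascript
// At statement position, "{ a: 1 }" parses as a block containing the
// labeled statement "a: 1", so its completion value is just the number 1.
const asBlock = eval("{ a: 1 }");

// Wrapping the input in parentheses forces an expression parse, which is
// effectively what the REPL's transform does for a bare object literal.
const asObject = eval("({ a: 1 })");

console.log(asBlock); // 1
console.log(asObject); // { a: 1 }
```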


@@ -402,8 +402,8 @@ function Install-Bun {
     }
     if ($script:IsARM64) {
-        # No published ARM64 bun binary yet — download from our blob storage
-        Write-Output "Installing Bun (ARM64 from blob storage)..."
+        # ARM64 bun binary from blob storage (faster than GitHub releases for CI)
+        Write-Output "Installing Bun (ARM64)..."
         $zip = Download-File "https://buncistore.blob.core.windows.net/artifacts/bun-windows-aarch64.zip" -Name "bun-arm64.zip"
         $extractDir = "$env:TEMP\bun-arm64"
         Expand-Archive -Path $zip -DestinationPath $extractDir -Force


@@ -210,7 +210,13 @@ if (instructionFailures > 0) {
   console.error(" FAILED: Code uses unsupported CPU instructions.");
   // Report to Buildkite annotations tab
-  const platform = isWindows ? "Windows x64" : isAarch64 ? "Linux aarch64" : "Linux x64";
+  const platform = isWindows
+    ? isAarch64
+      ? "Windows aarch64"
+      : "Windows x64"
+    : isAarch64
+      ? "Linux aarch64"
+      : "Linux x64";
   const annotation = [
     `<details>`,
     `<summary>CPU instruction violation on ${platform}${instructionFailures} failed</summary>`,


@@ -28,9 +28,6 @@ pub fn ReplTransforms(comptime P: type) type {
return;
}
// Check if there's top-level await
const has_top_level_await = p.top_level_await_keyword.len > 0;
// Collect all statements into a single array
var all_stmts = bun.handleOom(allocator.alloc(Stmt, total_stmts_count));
var stmt_idx: usize = 0;
@@ -41,6 +38,17 @@ pub fn ReplTransforms(comptime P: type) type {
}
}
// Check if there's top-level await or imports (imports become dynamic awaited imports)
var has_top_level_await = p.top_level_await_keyword.len > 0;
if (!has_top_level_await) {
for (all_stmts) |stmt| {
if (stmt.data == .s_import) {
has_top_level_await = true;
break;
}
}
}
// Apply transform with is_async based on presence of top-level await
try transformWithHoisting(p, parts, all_stmts, allocator, has_top_level_await);
}
@@ -154,6 +162,86 @@ pub fn ReplTransforms(comptime P: type) type {
try inner_stmts.append(stmt);
}
},
.s_import => |import_data| {
// Convert static imports to dynamic imports for REPL evaluation:
// import X from 'mod' -> var X = (await import('mod')).default
// import { a, b } from 'mod' -> var {a, b} = await import('mod')
// import * as X from 'mod' -> var X = await import('mod')
// import 'mod' -> await import('mod')
const path_str = p.import_records.items[import_data.import_record_index].path.text;
const import_expr = p.newExpr(E.Import{
.expr = p.newExpr(E.String{ .data = path_str }, stmt.loc),
.import_record_index = std.math.maxInt(u32),
}, stmt.loc);
const await_expr = p.newExpr(E.Await{ .value = import_expr }, stmt.loc);
if (import_data.star_name_loc) |_| {
// import * as X from 'mod' -> var X = await import('mod')
try hoisted_stmts.append(p.s(S.Local{
.kind = .k_var,
.decls = Decl.List.fromOwnedSlice(bun.handleOom(allocator.dupe(G.Decl, &.{
G.Decl{
.binding = p.b(B.Identifier{ .ref = import_data.namespace_ref }, stmt.loc),
.value = null,
},
}))),
}, stmt.loc));
const assign = p.newExpr(E.Binary{
.op = .bin_assign,
.left = p.newExpr(E.Identifier{ .ref = import_data.namespace_ref }, stmt.loc),
.right = await_expr,
}, stmt.loc);
try inner_stmts.append(p.s(S.SExpr{ .value = assign }, stmt.loc));
} else if (import_data.default_name) |default_name| {
// import X from 'mod' -> var X = (await import('mod')).default
// import X, { a } from 'mod' -> var __ns = await import('mod'); var X = __ns.default; var a = __ns.a;
try hoisted_stmts.append(p.s(S.Local{
.kind = .k_var,
.decls = Decl.List.fromOwnedSlice(bun.handleOom(allocator.dupe(G.Decl, &.{
G.Decl{
.binding = p.b(B.Identifier{ .ref = default_name.ref.? }, default_name.loc),
.value = null,
},
}))),
}, stmt.loc));
if (import_data.items.len > 0) {
// Share a single await import() between default and named imports.
// namespace_ref is synthesized by processImportStatement for all non-star imports.
try convertNamedImports(p, import_data, await_expr, &hoisted_stmts, &inner_stmts, allocator, stmt.loc);
const ns_ref_expr = p.newExpr(E.Identifier{ .ref = import_data.namespace_ref }, stmt.loc);
const dot_default = p.newExpr(E.Dot{
.target = ns_ref_expr,
.name = "default",
.name_loc = stmt.loc,
}, stmt.loc);
const assign = p.newExpr(E.Binary{
.op = .bin_assign,
.left = p.newExpr(E.Identifier{ .ref = default_name.ref.? }, default_name.loc),
.right = dot_default,
}, stmt.loc);
try inner_stmts.append(p.s(S.SExpr{ .value = assign }, stmt.loc));
} else {
const dot_default = p.newExpr(E.Dot{
.target = await_expr,
.name = "default",
.name_loc = stmt.loc,
}, stmt.loc);
const assign = p.newExpr(E.Binary{
.op = .bin_assign,
.left = p.newExpr(E.Identifier{ .ref = default_name.ref.? }, default_name.loc),
.right = dot_default,
}, stmt.loc);
try inner_stmts.append(p.s(S.SExpr{ .value = assign }, stmt.loc));
}
} else if (import_data.items.len > 0) {
// import { a, b } from 'mod' -> destructure from await import('mod')
try convertNamedImports(p, import_data, await_expr, &hoisted_stmts, &inner_stmts, allocator, stmt.loc);
} else {
// import 'mod' (side-effect only) -> await import('mod')
try inner_stmts.append(p.s(S.SExpr{ .value = await_expr }, stmt.loc));
}
},
.s_directive => |directive| {
// In REPL mode, treat directives (string literals) as expressions
const str_expr = p.newExpr(E.String{ .data = directive.value }, stmt.loc);
@@ -195,6 +283,63 @@ pub fn ReplTransforms(comptime P: type) type {
}
}
/// Convert named imports to individual var assignments from the dynamic import
/// import { a, b as c } from 'mod' ->
/// var a; var c; (hoisted)
/// var __mod = await import('mod'); a = __mod.a; c = __mod.b; (inner)
fn convertNamedImports(
p: *P,
import_data: *const S.Import,
await_expr: Expr,
hoisted_stmts: *ListManaged(Stmt),
inner_stmts: *ListManaged(Stmt),
allocator: Allocator,
loc: logger.Loc,
) !void {
// Store the module in the namespace ref: var __ns = await import('mod')
try hoisted_stmts.append(p.s(S.Local{
.kind = .k_var,
.decls = Decl.List.fromOwnedSlice(bun.handleOom(allocator.dupe(G.Decl, &.{
G.Decl{
.binding = p.b(B.Identifier{ .ref = import_data.namespace_ref }, loc),
.value = null,
},
}))),
}, loc));
const ns_assign = p.newExpr(E.Binary{
.op = .bin_assign,
.left = p.newExpr(E.Identifier{ .ref = import_data.namespace_ref }, loc),
.right = await_expr,
}, loc);
try inner_stmts.append(p.s(S.SExpr{ .value = ns_assign }, loc));
// For each named import: var name; name = __ns.originalName;
for (import_data.items) |item| {
try hoisted_stmts.append(p.s(S.Local{
.kind = .k_var,
.decls = Decl.List.fromOwnedSlice(bun.handleOom(allocator.dupe(G.Decl, &.{
G.Decl{
.binding = p.b(B.Identifier{ .ref = item.name.ref.? }, item.name.loc),
.value = null,
},
}))),
}, loc));
const ns_ref_expr = p.newExpr(E.Identifier{ .ref = import_data.namespace_ref }, loc);
const prop_access = p.newExpr(E.Dot{
.target = ns_ref_expr,
.name = item.alias,
.name_loc = item.name.loc,
}, loc);
const item_assign = p.newExpr(E.Binary{
.op = .bin_assign,
.left = p.newExpr(E.Identifier{ .ref = item.name.ref.? }, item.name.loc),
.right = prop_access,
}, loc);
try inner_stmts.append(p.s(S.SExpr{ .value = item_assign }, loc));
}
}
/// Wrap the last expression in return { value: expr }
fn wrapLastExpressionWithReturn(p: *P, inner_stmts: *ListManaged(Stmt), allocator: Allocator) void {
if (inner_stmts.items.len > 0) {


@@ -3,6 +3,17 @@ pub const webcore = @import("./bun.js/webcore.zig");
pub const api = @import("./bun.js/api.zig");
pub const bindgen = @import("./bun.js/bindgen.zig");
pub fn applyStandaloneRuntimeFlags(b: *bun.Transpiler, graph: *const bun.StandaloneModuleGraph) void {
b.options.env.disable_default_env_files = graph.flags.disable_default_env_files;
b.options.env.behavior = if (graph.flags.disable_default_env_files)
.disable
else
.load_all_without_inlining;
b.resolver.opts.load_tsconfig_json = !graph.flags.disable_autoload_tsconfig;
b.resolver.opts.load_package_json = !graph.flags.disable_autoload_package_json;
}
pub const Run = struct {
ctx: Command.Context,
vm: *VirtualMachine,
@@ -82,18 +93,7 @@ pub const Run = struct {
             .unspecified => {},
         }
-        // If .env loading is disabled, only load process env vars
-        // Otherwise, load all .env files
-        if (graph_ptr.flags.disable_default_env_files) {
-            b.options.env.behavior = .disable;
-        } else {
-            b.options.env.behavior = .load_all_without_inlining;
-        }
-        // Control loading of tsconfig.json and package.json at runtime
-        // By default, these are disabled for standalone executables
-        b.resolver.opts.load_tsconfig_json = !graph_ptr.flags.disable_autoload_tsconfig;
-        b.resolver.opts.load_package_json = !graph_ptr.flags.disable_autoload_package_json;
+        applyStandaloneRuntimeFlags(b, graph_ptr);
         b.configureDefines() catch {
             failWithBuildError(vm);


@@ -2669,7 +2669,7 @@ pub fn remapZigException(
     allow_source_code_preview: bool,
 ) void {
     error_instance.toZigException(this.global, exception);
-    const enable_source_code_preview = allow_source_code_preview and
+    var enable_source_code_preview = allow_source_code_preview and
         !(bun.feature_flag.BUN_DISABLE_SOURCE_CODE_PREVIEW.get() or
             bun.feature_flag.BUN_DISABLE_TRANSPILED_SOURCE_CODE_PREVIEW.get());
@@ -2764,6 +2764,12 @@ pub fn remapZigException(
         }
     }
+    // Don't show source code preview for REPL frames - it would show the
+    // transformed IIFE wrapper code, not what the user typed.
+    if (top.source_url.eqlComptime("[repl]")) {
+        enable_source_code_preview = false;
+    }
     var top_source_url = top.source_url.toUTF8(bun.default_allocator);
     defer top_source_url.deinit();
@@ -2815,7 +2821,6 @@ pub fn remapZigException(
// Avoid printing "export default 'native'"
break :code ZigString.Slice.empty;
}
var log = logger.Log.init(bun.default_allocator);
defer log.deinit();


@@ -698,8 +698,7 @@ pub fn setRawMode(
     if (comptime Environment.isPosix) {
         // Use the existing TTY mode function
-        const mode: c_int = if (enabled) 1 else 0;
-        const tty_result = Bun__ttySetMode(this.master_fd.cast(), mode);
+        const tty_result = bun.tty.setMode(this.master_fd.cast(), if (enabled) .raw else .normal);
         if (tty_result != 0) {
             return globalObject.throw("Failed to set raw mode", .{});
         }
@@ -708,9 +707,6 @@ pub fn setRawMode(
     this.flags.raw_mode = enabled;
     return .js_undefined;
 }
-
-extern fn Bun__ttySetMode(fd: c_int, mode: c_int) c_int;
-
 /// POSIX termios struct for terminal flags manipulation
 const Termios = if (Environment.isPosix) std.posix.termios else void;


@@ -1379,36 +1379,22 @@ pub fn spawnProcessPosix(
             break :brk .{ pair[if (i == 0) 1 else 0], pair[if (i == 0) 0 else 1] };
         };
-        if (i == 0) {
-            // their copy of stdin should be readable
-            _ = std.c.shutdown(@intCast(fds[1].cast()), std.posix.SHUT.WR);
-            // our copy of stdin should be writable
-            _ = std.c.shutdown(@intCast(fds[0].cast()), std.posix.SHUT.RD);
-            if (comptime Environment.isMac) {
-                // macOS seems to default to around 8 KB for the buffer size
-                // this is comically small.
-                // TODO: investigate if this should be adjusted on Linux.
-                const so_recvbuf: c_int = 1024 * 512;
-                const so_sendbuf: c_int = 1024 * 512;
+        // Note: we intentionally do NOT call shutdown() on the
+        // socketpair fds. On SOCK_STREAM socketpairs, shutdown(fd, SHUT_WR)
+        // sends a FIN to the peer, which causes programs that poll the
+        // write end for readability (e.g. Python's asyncio connect_write_pipe)
+        // to interpret it as "connection closed" and tear down their transport.
+        // The socketpair is already used unidirectionally by convention.
+        if (comptime Environment.isMac) {
+            // macOS seems to default to around 8 KB for the buffer size
+            // this is comically small.
+            // TODO: investigate if this should be adjusted on Linux.
+            const so_recvbuf: c_int = 1024 * 512;
+            const so_sendbuf: c_int = 1024 * 512;
+            if (i == 0) {
                 _ = std.c.setsockopt(fds[1].cast(), std.posix.SOL.SOCKET, std.posix.SO.RCVBUF, &so_recvbuf, @sizeOf(c_int));
                 _ = std.c.setsockopt(fds[0].cast(), std.posix.SOL.SOCKET, std.posix.SO.SNDBUF, &so_sendbuf, @sizeOf(c_int));
-            }
-        } else {
-            // their copy of stdout or stderr should be writable
-            _ = std.c.shutdown(@intCast(fds[1].cast()), std.posix.SHUT.RD);
-            // our copy of stdout or stderr should be readable
-            _ = std.c.shutdown(@intCast(fds[0].cast()), std.posix.SHUT.WR);
-            if (comptime Environment.isMac) {
-                // macOS seems to default to around 8 KB for the buffer size
-                // this is comically small.
-                // TODO: investigate if this should be adjusted on Linux.
-                const so_recvbuf: c_int = 1024 * 512;
-                const so_sendbuf: c_int = 1024 * 512;
+            } else {
                 _ = std.c.setsockopt(fds[0].cast(), std.posix.SOL.SOCKET, std.posix.SO.RCVBUF, &so_recvbuf, @sizeOf(c_int));
                 _ = std.c.setsockopt(fds[1].cast(), std.posix.SOL.SOCKET, std.posix.SO.SNDBUF, &so_sendbuf, @sizeOf(c_int));
             }


@@ -24,6 +24,12 @@ client_renegotiation_window: u32 = 0,
requires_custom_request_ctx: bool = false,
is_using_default_ciphers: bool = true,
low_memory_mode: bool = false,
ref_count: RC = .init(),
cached_hash: u64 = 0,
const RC = bun.ptr.ThreadSafeRefCount(@This(), "ref_count", destroy, .{});
pub const ref = RC.ref;
pub const deref = RC.deref;
const ReadFromBlobError = bun.JSError || error{
NullStore,
@@ -113,6 +119,7 @@ pub fn forClientVerification(this: SSLConfig) SSLConfig {
pub fn isSame(this: *const SSLConfig, other: *const SSLConfig) bool {
inline for (comptime std.meta.fields(SSLConfig)) |field| {
if (comptime std.mem.eql(u8, field.name, "ref_count") or std.mem.eql(u8, field.name, "cached_hash")) continue;
const first = @field(this, field.name);
const second = @field(other, field.name);
switch (field.type) {
@@ -185,6 +192,8 @@ pub fn deinit(this: *SSLConfig) void {
.requires_custom_request_ctx = {},
.is_using_default_ciphers = {},
.low_memory_mode = {},
.ref_count = {},
.cached_hash = {},
});
}
@@ -222,9 +231,97 @@ pub fn clone(this: *const SSLConfig) SSLConfig {
.requires_custom_request_ctx = this.requires_custom_request_ctx,
.is_using_default_ciphers = this.is_using_default_ciphers,
.low_memory_mode = this.low_memory_mode,
.ref_count = .init(),
.cached_hash = 0,
};
}
pub fn contentHash(this: *SSLConfig) u64 {
if (this.cached_hash != 0) return this.cached_hash;
var hasher = std.hash.Wyhash.init(0);
inline for (comptime std.meta.fields(SSLConfig)) |field| {
if (comptime std.mem.eql(u8, field.name, "ref_count") or std.mem.eql(u8, field.name, "cached_hash")) continue;
const value = @field(this, field.name);
switch (field.type) {
?[*:0]const u8 => {
if (value) |s| {
hasher.update(bun.asByteSlice(s));
}
hasher.update(&.{0});
},
?[][*:0]const u8 => {
if (value) |slice| {
for (slice) |s| {
hasher.update(bun.asByteSlice(s));
hasher.update(&.{0});
}
}
hasher.update(&.{0});
},
else => {
hasher.update(std.mem.asBytes(&value));
},
}
}
const hash = hasher.final();
// Avoid 0 since it's the sentinel for "not computed"
this.cached_hash = if (hash == 0) 1 else hash;
return this.cached_hash;
}
/// Called by the RC mixin when refcount reaches 0.
fn destroy(this: *SSLConfig) void {
GlobalRegistry.remove(this);
this.deinit();
bun.default_allocator.destroy(this);
}
pub const GlobalRegistry = struct {
const MapContext = struct {
pub fn hash(_: @This(), key: *SSLConfig) u32 {
return @truncate(key.contentHash());
}
pub fn eql(_: @This(), a: *SSLConfig, b: *SSLConfig, _: usize) bool {
return a.isSame(b);
}
};
var mutex: bun.Mutex = .{};
var configs: std.ArrayHashMapUnmanaged(*SSLConfig, void, MapContext, true) = .empty;
/// Takes ownership of a heap-allocated SSLConfig.
/// If an identical config already exists in the registry, the new one is freed
/// and the existing one is returned (with refcount incremented).
/// If no match, the new config is registered and returned.
pub fn intern(new_config: *SSLConfig) *SSLConfig {
mutex.lock();
defer mutex.unlock();
// Look up by content hash/equality
const gop = bun.handleOom(configs.getOrPutContext(bun.default_allocator, new_config, .{}));
if (gop.found_existing) {
// Identical config already exists - free the new one, return existing
const existing = gop.key_ptr.*;
new_config.ref_count.clearWithoutDestructor();
new_config.deinit();
bun.default_allocator.destroy(new_config);
existing.ref();
return existing;
}
// New config - it's already inserted by getOrPut
// refcount is already 1 from initialization
return new_config;
}
/// Remove a config from the registry. Called when refcount reaches 0.
fn remove(config: *SSLConfig) void {
mutex.lock();
defer mutex.unlock();
_ = configs.swapRemoveContext(config, .{});
}
};
pub const zero = SSLConfig{};
pub fn fromJS(
@@ -294,9 +391,9 @@ pub fn fromGenerated(
const protocols = switch (generated.alpn_protocols) {
.none => null,
.string => |*ref| ref.get().toOwnedSliceZ(bun.default_allocator),
.buffer => |*ref| blk: {
const buffer: jsc.ArrayBuffer = ref.get().asArrayBuffer();
.string => |*val| val.get().toOwnedSliceZ(bun.default_allocator),
.buffer => |*val| blk: {
const buffer: jsc.ArrayBuffer = val.get().asArrayBuffer();
break :blk try bun.default_allocator.dupeZ(u8, buffer.byteSlice());
},
};
@@ -366,9 +463,9 @@ fn handleFile(
) ReadFromBlobError!?[][*:0]const u8 {
const single = try handleSingleFile(global, switch (file.*) {
.none => return null,
.string => |*ref| .{ .string = ref.get() },
.buffer => |*ref| .{ .buffer = ref.get() },
.file => |*ref| .{ .file = ref.get() },
.string => |*val| .{ .string = val.get() },
.buffer => |*val| .{ .buffer = val.get() },
.file => |*val| .{ .file = val.get() },
.array => |*list| return try handleFileArray(global, list.items()),
});
errdefer bun.freeSensitive(bun.default_allocator, single);
@@ -391,9 +488,9 @@ fn handleFileArray(
}
for (elements) |*elem| {
result.appendAssumeCapacity(try handleSingleFile(global, switch (elem.*) {
.string => |*ref| .{ .string = ref.get() },
.buffer => |*ref| .{ .buffer = ref.get() },
.file => |*ref| .{ .file = ref.get() },
.string => |*val| .{ .string = val.get() },
.buffer => |*val| .{ .buffer = val.get() },
.file => |*val| .{ .file = val.get() },
}));
}
return try result.toOwnedSlice();


@@ -1,10 +1,15 @@
// clang-format off
#include "ModuleLoader.h"
#include "root.h"
#include "ModuleLoader.h"
#include "headers-handwritten.h"
#include "PathInlines.h"
#include "JSCommonJSModule.h"
#include <JavaScriptCore/JSBoundFunction.h>
#include <JavaScriptCore/PropertySlot.h>
#include <JavaScriptCore/JSMap.h>
#include <JavaScriptCore/JSString.h>
#include <JavaScriptCore/SourceCode.h>
#include "ZigGlobalObject.h"
#include "InternalModuleRegistry.h"
@@ -85,3 +90,44 @@ extern "C" [[ZIG_EXPORT(nothrow)]] void Bun__ExposeNodeModuleGlobals(Zig::Global
FOREACH_EXPOSED_BUILTIN_IMR(PUT_CUSTOM_GETTER_SETTER)
#undef PUT_CUSTOM_GETTER_SETTER
}
// Set up require(), module, __filename, __dirname on globalThis for the REPL.
// Creates a CommonJS module object rooted at the given directory so require() resolves correctly.
extern "C" [[ZIG_EXPORT(check_slow)]] void Bun__REPL__setupGlobalRequire(
Zig::GlobalObject* globalObject,
const unsigned char* cwdPtr,
size_t cwdLen)
{
using namespace JSC;
auto& vm = getVM(globalObject);
auto scope = DECLARE_THROW_SCOPE(vm);
auto cwdStr = WTF::String::fromUTF8(std::span { cwdPtr, cwdLen });
auto* filename = jsString(vm, makeString(cwdStr, PLATFORM_SEP_s, "[repl]"_s));
auto* dirname = jsString(vm, WTF::String(cwdStr));
auto* moduleObject = Bun::JSCommonJSModule::create(vm,
globalObject->CommonJSModuleObjectStructure(),
filename, filename, dirname, SourceCode());
moduleObject->hasEvaluated = true;
auto* resolveFunction = JSBoundFunction::create(vm, globalObject,
globalObject->requireResolveFunctionUnbound(), filename,
ArgList(), 1, globalObject->commonStrings().resolveString(globalObject),
makeSource("resolve"_s, SourceOrigin(), SourceTaintedOrigin::Untainted));
RETURN_IF_EXCEPTION(scope, );
auto* requireFunction = JSBoundFunction::create(vm, globalObject,
globalObject->requireFunctionUnbound(), moduleObject,
ArgList(), 1, globalObject->commonStrings().requireString(globalObject),
makeSource("require"_s, SourceOrigin(), SourceTaintedOrigin::Untainted));
RETURN_IF_EXCEPTION(scope, );
requireFunction->putDirect(vm, vm.propertyNames->resolve, resolveFunction, 0);
moduleObject->putDirect(vm, WebCore::clientData(vm)->builtinNames().requirePublicName(), requireFunction, 0);
globalObject->putDirect(vm, WebCore::builtinNames(vm).requirePublicName(), requireFunction, 0);
globalObject->putDirect(vm, Identifier::fromString(vm, "module"_s), moduleObject, 0);
globalObject->putDirect(vm, Identifier::fromString(vm, "__filename"_s), filename, 0);
globalObject->putDirect(vm, Identifier::fromString(vm, "__dirname"_s), dirname, 0);
}


@@ -6151,6 +6151,166 @@ CPP_DECL [[ZIG_EXPORT(nothrow)]] unsigned int Bun__CallFrame__getLineNumber(JSC:
return lineColumn.line;
}
// REPL evaluation function - evaluates JavaScript code in the global scope
// Returns the result value, or undefined if an exception was thrown
// If an exception is thrown, the exception value is stored in *exception
extern "C" JSC::EncodedJSValue Bun__REPL__evaluate(
JSC::JSGlobalObject* globalObject,
const unsigned char* sourcePtr,
size_t sourceLen,
const unsigned char* filenamePtr,
size_t filenameLen,
JSC::EncodedJSValue* exception)
{
auto& vm = JSC::getVM(globalObject);
auto scope = DECLARE_TOP_EXCEPTION_SCOPE(vm);
WTF::String source = WTF::String::fromUTF8(std::span { sourcePtr, sourceLen });
WTF::String filename = filenameLen > 0
? WTF::String::fromUTF8(std::span { filenamePtr, filenameLen })
: "[repl]"_s;
JSC::SourceCode sourceCode = JSC::makeSource(
source,
JSC::SourceOrigin {},
JSC::SourceTaintedOrigin::Untainted,
filename,
WTF::TextPosition(),
JSC::SourceProviderSourceType::Program);
WTF::NakedPtr<JSC::Exception> evalException;
JSC::JSValue result = JSC::evaluate(globalObject, sourceCode, globalObject->globalThis(), evalException);
if (evalException) {
*exception = JSC::JSValue::encode(evalException->value());
// Set _error on the globalObject directly (not globalThis proxy)
globalObject->putDirect(vm, JSC::Identifier::fromString(vm, "_error"_s), evalException->value());
scope.clearException();
return JSC::JSValue::encode(JSC::jsUndefined());
}
if (scope.exception()) {
*exception = JSC::JSValue::encode(scope.exception()->value());
// Set _error on the globalObject directly (not globalThis proxy)
globalObject->putDirect(vm, JSC::Identifier::fromString(vm, "_error"_s), scope.exception()->value());
scope.clearException();
return JSC::JSValue::encode(JSC::jsUndefined());
}
// Note: _ is now set in Zig code (repl.zig) after extracting the value from
// the REPL transform wrapper. We don't set it here anymore.
return JSC::JSValue::encode(result);
}
// REPL completion function - gets completions for a partial property access
// Returns an array of completion strings, or undefined if no completions
extern "C" JSC::EncodedJSValue Bun__REPL__getCompletions(
JSC::JSGlobalObject* globalObject,
JSC::EncodedJSValue targetValue,
const unsigned char* prefixPtr,
size_t prefixLen)
{
auto& vm = JSC::getVM(globalObject);
auto scope = DECLARE_THROW_SCOPE(vm);
JSC::JSValue target = JSC::JSValue::decode(targetValue);
if (!target || target.isUndefined() || target.isNull()) {
target = globalObject->globalThis();
}
if (!target.isObject()) {
JSObject* boxed = target.toObject(globalObject);
RETURN_IF_EXCEPTION(scope, JSC::JSValue::encode(JSC::jsUndefined()));
target = boxed;
}
WTF::String prefix = prefixLen > 0
? WTF::String::fromUTF8(std::span { prefixPtr, prefixLen })
: WTF::String();
JSC::JSObject* object = target.getObject();
JSC::PropertyNameArrayBuilder propertyNames(vm, JSC::PropertyNameMode::Strings, JSC::PrivateSymbolMode::Exclude);
object->getPropertyNames(globalObject, propertyNames, DontEnumPropertiesMode::Include);
RETURN_IF_EXCEPTION(scope, JSC::JSValue::encode(JSC::jsUndefined()));
JSC::JSArray* completions = JSC::constructEmptyArray(globalObject, nullptr, 0);
RETURN_IF_EXCEPTION(scope, JSC::JSValue::encode(JSC::jsUndefined()));
unsigned completionIndex = 0;
for (const auto& propertyName : propertyNames) {
WTF::String name = propertyName.string();
if (prefix.isEmpty() || name.startsWith(prefix)) {
completions->putDirectIndex(globalObject, completionIndex++, JSC::jsString(vm, name));
RETURN_IF_EXCEPTION(scope, JSC::JSValue::encode(JSC::jsUndefined()));
}
}
// Also check the prototype chain
JSC::JSValue proto = object->getPrototype(globalObject);
RETURN_IF_EXCEPTION(scope, JSC::JSValue::encode(completions));
while (proto && proto.isObject()) {
JSC::JSObject* protoObj = proto.getObject();
JSC::PropertyNameArrayBuilder protoNames(vm, JSC::PropertyNameMode::Strings, JSC::PrivateSymbolMode::Exclude);
protoObj->getPropertyNames(globalObject, protoNames, DontEnumPropertiesMode::Include);
RETURN_IF_EXCEPTION(scope, JSC::JSValue::encode(completions));
for (const auto& propertyName : protoNames) {
WTF::String name = propertyName.string();
if (prefix.isEmpty() || name.startsWith(prefix)) {
completions->putDirectIndex(globalObject, completionIndex++, JSC::jsString(vm, name));
RETURN_IF_EXCEPTION(scope, JSC::JSValue::encode(completions));
}
}
proto = protoObj->getPrototype(globalObject);
RETURN_IF_EXCEPTION(scope, JSC::JSValue::encode(completions));
}
return JSC::JSValue::encode(completions);
}
// Format a value for REPL output using util.inspect style
extern "C" JSC::EncodedJSValue Bun__REPL__formatValue(
JSC::JSGlobalObject* globalObject,
JSC::EncodedJSValue valueEncoded,
int32_t depth,
bool colors)
{
auto& vm = JSC::getVM(globalObject);
auto scope = DECLARE_THROW_SCOPE(vm);
// Get the util.inspect function from the global object
auto* bunGlobal = jsCast<Zig::GlobalObject*>(globalObject);
JSC::JSValue inspectFn = bunGlobal->utilInspectFunction();
if (!inspectFn || !inspectFn.isCallable()) {
// Fallback to toString if util.inspect is not available
JSC::JSValue value = JSC::JSValue::decode(valueEncoded);
JSString* str = value.toString(globalObject);
RETURN_IF_EXCEPTION(scope, JSC::JSValue::encode(JSC::jsUndefined()));
return JSC::JSValue::encode(str);
}
// Create options object
JSC::JSObject* options = JSC::constructEmptyObject(globalObject);
options->putDirect(vm, JSC::Identifier::fromString(vm, "depth"_s), JSC::jsNumber(depth));
options->putDirect(vm, JSC::Identifier::fromString(vm, "colors"_s), JSC::jsBoolean(colors));
options->putDirect(vm, JSC::Identifier::fromString(vm, "maxArrayLength"_s), JSC::jsNumber(100));
options->putDirect(vm, JSC::Identifier::fromString(vm, "maxStringLength"_s), JSC::jsNumber(10000));
options->putDirect(vm, JSC::Identifier::fromString(vm, "breakLength"_s), JSC::jsNumber(80));
JSC::MarkedArgumentBuffer args;
args.append(JSC::JSValue::decode(valueEncoded));
args.append(options);
JSC::JSValue result = JSC::call(globalObject, inspectFn, JSC::ArgList(args), "util.inspect"_s);
RETURN_IF_EXCEPTION(scope, JSC::JSValue::encode(JSC::jsUndefined()));
return JSC::JSValue::encode(result);
}
extern "C" void JSC__ArrayBuffer__ref(JSC::ArrayBuffer* self) { self->ref(); }
extern "C" void JSC__ArrayBuffer__deref(JSC::ArrayBuffer* self) { self->deref(); }
extern "C" void JSC__ArrayBuffer__asBunArrayBuffer(JSC::ArrayBuffer* self, Bun__ArrayBuffer* out)


@@ -168,6 +168,12 @@ CPP_DECL uint32_t JSC__JSInternalPromise__status(const JSC::JSInternalPromise* a
CPP_DECL void JSC__JSFunction__optimizeSoon(JSC::EncodedJSValue JSValue0);
#pragma mark - REPL Functions
CPP_DECL JSC::EncodedJSValue Bun__REPL__evaluate(JSC::JSGlobalObject* globalObject, const unsigned char* sourcePtr, size_t sourceLen, const unsigned char* filenamePtr, size_t filenameLen, JSC::EncodedJSValue* exception);
CPP_DECL JSC::EncodedJSValue Bun__REPL__getCompletions(JSC::JSGlobalObject* globalObject, JSC::EncodedJSValue targetValue, const unsigned char* prefixPtr, size_t prefixLen);
CPP_DECL JSC::EncodedJSValue Bun__REPL__formatValue(JSC::JSGlobalObject* globalObject, JSC::EncodedJSValue valueEncoded, int32_t depth, bool colors);
#pragma mark - JSC::JSGlobalObject
CPP_DECL VirtualMachine* JSC__JSGlobalObject__bunVM(JSC::JSGlobalObject* arg0);


@@ -57,6 +57,46 @@ static std::optional<WTF::String> stripANSI(const std::span<const Char> input)
return result.toString();
}
struct BunANSIIterator {
const unsigned char* input;
size_t input_len;
size_t cursor;
const unsigned char* slice_ptr;
size_t slice_len;
};
extern "C" bool Bun__ANSI__next(BunANSIIterator* it)
{
auto start = it->input + it->cursor;
const auto end = it->input + it->input_len;
// Skip past any ANSI sequences at current position
while (start < end) {
const auto escPos = ANSI::findEscapeCharacter(start, end);
if (escPos != start) break;
const auto after = ANSI::consumeANSI(start, end);
if (after == start) {
start++;
break;
}
start = after;
}
if (start >= end) {
it->cursor = it->input_len;
it->slice_ptr = nullptr;
it->slice_len = 0;
return false;
}
const auto escPos = ANSI::findEscapeCharacter(start, end);
const auto slice_end = escPos ? escPos : end;
it->slice_ptr = start;
it->slice_len = slice_end - start;
it->cursor = slice_end - it->input;
return true;
}
JSC_DEFINE_HOST_FUNCTION(jsFunctionBunStripANSI, (JSC::JSGlobalObject * globalObject, JSC::CallFrame* callFrame))
{
auto& vm = globalObject->vm();


@@ -325,16 +325,30 @@ pub fn start(
}
this.arena = bun.MimallocArena.init();
const allocator = this.arena.?.allocator();
const map = try allocator.create(bun.DotEnv.Map);
map.* = try this.parent.transpiler.env.map.cloneWithAllocator(allocator);
const loader = try allocator.create(bun.DotEnv.Loader);
loader.* = bun.DotEnv.Loader.init(map, allocator);
var vm = try jsc.VirtualMachine.initWorker(this, .{
.allocator = this.arena.?.allocator(),
.allocator = allocator,
.args = transform_options,
.env_loader = loader,
.store_fd = this.store_fd,
.graph = this.parent.standalone_module_graph,
});
vm.allocator = this.arena.?.allocator();
vm.allocator = allocator;
vm.arena = &this.arena.?;
var b = &vm.transpiler;
b.resolver.env_loader = b.env;
if (this.parent.standalone_module_graph) |graph| {
bun.bun_js.applyStandaloneRuntimeFlags(b, graph);
}
b.configureDefines() catch {
this.flushLogs();
@@ -342,16 +356,6 @@ pub fn start(
return;
};
// TODO: we may have to clone other parts of vm state. this will be more
// important when implementing vm.deinit()
const map = try vm.allocator.create(bun.DotEnv.Map);
map.* = try vm.transpiler.env.map.cloneWithAllocator(vm.allocator);
const loader = try vm.allocator.create(bun.DotEnv.Loader);
loader.* = bun.DotEnv.Loader.init(map, vm.allocator);
vm.transpiler.env = loader;
vm.loadExtraEnvAndSourceCodePrinter();
vm.is_main_thread = false;
jsc.VirtualMachine.is_main_thread_vm = false;


@@ -275,8 +275,7 @@ fn fetchImpl(
         if (ssl_config) |conf| {
             ssl_config = null;
-            conf.deinit();
-            bun.default_allocator.destroy(conf);
+            conf.deref();
         }
     }
@@ -468,7 +467,8 @@ fn fetchImpl(
         }) |config| {
             const ssl_config_object = bun.handleOom(bun.default_allocator.create(SSLConfig));
             ssl_config_object.* = config;
-            break :extract_ssl_config ssl_config_object;
+            // Intern via GlobalRegistry for deduplication and pointer equality
+            break :extract_ssl_config SSLConfig.GlobalRegistry.intern(ssl_config_object);
         }
     }
 }


@@ -78,6 +78,10 @@ pub const FetchTasklet = struct {
bun.debugAssert(count > 0);
if (count == 1) {
if (this.javascript_vm.isShuttingDown()) {
this.deinit() catch |err| switch (err) {};
return;
}
// this is really unlikely to happen, but can happen
// lets make sure that we always call deinit from main thread
@@ -1155,6 +1159,7 @@ pub const FetchTasklet = struct {
/// This is ALWAYS called from the http thread and we cannot touch the buffer here because is locked
pub fn onWriteRequestDataDrain(this: *FetchTasklet) void {
if (this.javascript_vm.isShuttingDown()) return;
// ref until the main thread callback is called
this.ref();
this.javascript_vm.eventLoop().enqueueTaskConcurrent(jsc.ConcurrentTask.fromCallback(this, FetchTasklet.resumeRequestDataStream));
@@ -1383,7 +1388,8 @@ pub const FetchTasklet = struct {
return;
}
}
// will deinit when done with the http client (when is_done = true)
if (task.javascript_vm.isShuttingDown()) return;
task.javascript_vm.eventLoop().enqueueTaskConcurrent(task.concurrent_task.from(task, .manual_deinit));
}
};


@@ -191,6 +191,7 @@ pub const linux = @import("./linux.zig");
/// Translated from `c-headers-for-zig.h` for the current platform.
pub const c = @import("translated-c-headers");
pub const tty = @import("./tty.zig");
pub const sha = @import("./sha.zig");
pub const FeatureFlags = @import("./feature_flags.zig");


@@ -219,10 +219,9 @@ pub const BundleV2 = struct {
client_transpiler.options.chunk_naming = bun.options.PathTemplate.chunk.data;
client_transpiler.options.entry_naming = "./[name]-[hash].[ext]";
// Use "/" so that asset URLs in HTML are absolute (e.g. "/chunk-abc.js"
// instead of "./chunk-abc.js"). Relative paths break when the HTML is
// served from a nested route like "/foo/".
client_transpiler.options.public_path = "/";
// Avoid setting a public path for --compile since all the assets
// will be served relative to the server root.
client_transpiler.options.public_path = "";
}
client_transpiler.setLog(this_transpiler.log);


@@ -92,6 +92,7 @@ pub const AuditCommand = @import("./cli/audit_command.zig").AuditCommand;
pub const InitCommand = @import("./cli/init_command.zig").InitCommand;
pub const WhyCommand = @import("./cli/why_command.zig").WhyCommand;
pub const FuzzilliCommand = @import("./cli/fuzzilli_command.zig").FuzzilliCommand;
pub const ReplCommand = @import("./cli/repl_command.zig").ReplCommand;
pub const Arguments = @import("./cli/Arguments.zig");
@@ -842,12 +843,8 @@ pub const Command = struct {
return;
},
.ReplCommand => {
// TODO: Put this in native code.
var ctx = try Command.init(allocator, log, .BunxCommand);
ctx.debug.run_in_bun = true; // force the same version of bun used. fixes bun-debug for example
var args = bun.argv[0..];
args[1] = "bun-repl";
try BunxCommand.exec(ctx, args);
const ctx = try Command.init(allocator, log, .RunCommand);
try ReplCommand.exec(ctx);
return;
},
.RemoveCommand => {


@@ -38,9 +38,6 @@ pub const InitCommand = struct {
return input.items[0 .. input.items.len - 1 :0];
}
}
extern fn Bun__ttySetMode(fd: i32, mode: i32) i32;
fn processRadioButton(label: string, comptime Choices: type) !Choices {
const colors = Output.enable_ansi_colors_stdout;
const choices = switch (colors) {
@@ -190,7 +187,7 @@ pub const InitCommand = struct {
}) catch null;
if (Environment.isPosix)
_ = Bun__ttySetMode(0, 1);
_ = bun.tty.setMode(0, .raw);
defer {
if (comptime Environment.isWindows) {
@@ -202,7 +199,7 @@ pub const InitCommand = struct {
}
}
if (Environment.isPosix) {
_ = Bun__ttySetMode(0, 0);
_ = bun.tty.setMode(0, .normal);
}
}

src/cli/repl_command.zig (new file, 191 lines)

@@ -0,0 +1,191 @@
//! Bun REPL Command - Native Zig REPL with full TUI support
//!
//! This is the entry point for `bun repl` which provides an interactive
//! JavaScript REPL with:
//! - Syntax highlighting using QuickAndDirtySyntaxHighlighter
//! - Full line editing with Emacs-style keybindings
//! - Persistent history
//! - Tab completion
//! - Multi-line input support
//! - REPL commands (.help, .exit, .clear, .load, .save, .editor)
pub const ReplCommand = struct {
pub fn exec(ctx: Command.Context) !void {
@branchHint(.cold);
// Initialize the Zig REPL
var repl = Repl.init(ctx.allocator);
defer repl.deinit();
// Boot the JavaScript VM for the REPL
try bootReplVM(ctx, &repl);
}
fn bootReplVM(ctx: Command.Context, repl: *Repl) !void {
// Load bunfig if not already loaded
if (!ctx.debug.loaded_bunfig) {
try bun.cli.Arguments.loadConfigPath(ctx.allocator, true, "bunfig.toml", ctx, .RunCommand);
}
// Initialize JSC
bun.jsc.initialize(true); // true for eval mode
js_ast.Expr.Data.Store.create();
js_ast.Stmt.Data.Store.create();
const arena = Arena.init();
// Create a virtual path for REPL evaluation
const repl_path = "[repl]";
// Initialize the VM
const vm = try jsc.VirtualMachine.init(.{
.allocator = arena.allocator(),
.log = ctx.log,
.args = ctx.args,
.store_fd = false,
.smol = ctx.runtime_options.smol,
.eval = true,
.debugger = ctx.runtime_options.debugger,
.dns_result_order = DNSResolver.Order.fromStringOrDie(ctx.runtime_options.dns_result_order),
.is_main_thread = true,
});
var b = &vm.transpiler;
vm.preload = ctx.preloads;
vm.argv = ctx.passthrough;
vm.arena = @constCast(&arena);
vm.allocator = vm.arena.allocator();
// Configure bundler options
b.options.install = ctx.install;
b.resolver.opts.install = ctx.install;
b.resolver.opts.global_cache = ctx.debug.global_cache;
b.resolver.opts.prefer_offline_install = (ctx.debug.offline_mode_setting orelse .online) == .offline;
b.resolver.opts.prefer_latest_install = (ctx.debug.offline_mode_setting orelse .online) == .latest;
b.options.global_cache = b.resolver.opts.global_cache;
b.options.prefer_offline_install = b.resolver.opts.prefer_offline_install;
b.options.prefer_latest_install = b.resolver.opts.prefer_latest_install;
b.resolver.env_loader = b.env;
b.options.env.behavior = .load_all_without_inlining;
b.options.dead_code_elimination = false; // REPL needs all code
b.configureDefines() catch {
dumpBuildError(vm);
Global.exit(1);
};
bun.http.AsyncHTTP.loadEnv(vm.allocator, vm.log, b.env);
vm.loadExtraEnvAndSourceCodePrinter();
vm.is_main_thread = true;
jsc.VirtualMachine.is_main_thread_vm = true;
// Store VM reference in REPL (safe - no JS allocation)
repl.vm = vm;
repl.global = vm.global;
// Create the ReplRunner and execute within the API lock
// NOTE: JS-allocating operations like ExposeNodeModuleGlobals must
// be done inside the API lock callback, not before
var runner = ReplRunner{
.repl = repl,
.vm = vm,
.arena = arena,
.entry_path = repl_path,
.eval_script = ctx.runtime_options.eval.script,
.eval_and_print = ctx.runtime_options.eval.eval_and_print,
};
const callback = jsc.OpaqueWrap(ReplRunner, ReplRunner.start);
vm.global.vm().holdAPILock(&runner, callback);
}
fn dumpBuildError(vm: *jsc.VirtualMachine) void {
Output.flush();
const writer = Output.errorWriterBuffered();
defer Output.flush();
vm.log.print(writer) catch {};
}
};
/// Runs the REPL within the VM's API lock
const ReplRunner = struct {
repl: *Repl,
vm: *jsc.VirtualMachine,
arena: bun.allocators.MimallocArena,
entry_path: []const u8,
eval_script: []const u8,
eval_and_print: bool,
pub fn start(this: *ReplRunner) void {
const vm = this.vm;
// Set up the REPL environment (now inside API lock)
this.setupReplEnvironment() catch {
// setupGlobalRequire threw a JS exception — surface it and exit
if (vm.global.tryTakeException()) |exception| {
vm.printErrorLikeObjectToConsole(exception);
}
vm.exit_handler.exit_code = 1;
vm.onExit();
vm.globalExit();
};
if (this.eval_script.len > 0 or this.eval_and_print) {
// Non-interactive: evaluate the -e/--eval or -p/--print script,
// drain the event loop, and exit
const had_error = this.repl.evalScript(this.eval_script, this.eval_and_print);
Output.flush();
if (had_error) {
// Only overwrite on error so `process.exitCode = N` in the
// script is preserved on success.
vm.exit_handler.exit_code = 1;
} else {
// Fire process.on("beforeExit") and re-drain as needed
// (matches bun -e / Node.js semantics).
vm.onBeforeExit();
}
} else {
// Interactive: run the REPL loop
this.repl.runWithVM(vm) catch |err| {
Output.prettyErrorln("<r><red>REPL error: {s}<r>", .{@errorName(err)});
};
}
// Clean up
vm.onExit();
vm.globalExit();
}
fn setupReplEnvironment(this: *ReplRunner) bun.JSError!void {
const vm = this.vm;
// Expose Node.js module globals (__dirname, __filename, require, etc.)
// This must be done inside the API lock as it allocates JS objects
bun.cpp.Bun__ExposeNodeModuleGlobals(vm.global);
// Set up require(), module, __filename, __dirname relative to cwd
const cwd = vm.transpiler.fs.topLevelDirWithoutTrailingSlash();
try bun.cpp.Bun__REPL__setupGlobalRequire(vm.global, cwd.ptr, cwd.len);
// Set timezone if specified
if (vm.transpiler.env.get("TZ")) |tz| {
if (tz.len > 0) {
_ = vm.global.setTimeZone(&jsc.ZigString.init(tz));
}
}
vm.transpiler.env.loadTracy();
}
};
const Repl = @import("../repl.zig");
const bun = @import("bun");
const Global = bun.Global;
const Output = bun.Output;
const js_ast = bun.ast;
const jsc = bun.jsc;
const Arena = bun.allocators.MimallocArena;
const Command = bun.cli.Command;
const DNSResolver = bun.api.dns.Resolver;


@@ -1068,7 +1068,7 @@ pub const UpdateInteractiveCommand = struct {
}) catch null;
if (Environment.isPosix)
_ = Bun__ttySetMode(0, 1);
_ = bun.tty.setMode(0, .raw);
defer {
if (comptime Environment.isWindows) {
@@ -1080,7 +1080,7 @@ pub const UpdateInteractiveCommand = struct {
}
}
if (Environment.isPosix) {
_ = Bun__ttySetMode(0, 0);
_ = bun.tty.setMode(0, .normal);
}
}
@@ -1810,9 +1810,6 @@ pub const UpdateInteractiveCommand = struct {
}
}
};
extern fn Bun__ttySetMode(fd: c_int, mode: c_int) c_int;
const string = []const u8;
pub const CatalogUpdateRequest = struct {


@@ -73,7 +73,7 @@ pub fn checkServerIdentity(
};
// we inform the user that the cert is invalid
client.progressUpdate(is_ssl, if (is_ssl) &http_thread.https_context else &http_thread.http_context, socket);
client.progressUpdate(is_ssl, client.getSslCtx(is_ssl), socket);
// continue until we are aborted or not
return true;
} else {
@@ -217,7 +217,7 @@ pub fn onClose(
if (client.state.flags.is_redirect_pending) {
// if the connection is closed and we are pending redirect just do the redirect
// in this case we will re-connect or go to a different socket if needed
client.doRedirect(is_ssl, if (is_ssl) &http_thread.https_context else &http_thread.http_context, socket);
client.doRedirect(is_ssl, client.getSslCtx(is_ssl), socket);
return;
}
if (in_progress) {
@@ -226,7 +226,7 @@ pub fn onClose(
.CHUNKED_IN_TRAILERS_LINE_HEAD, .CHUNKED_IN_TRAILERS_LINE_MIDDLE => {
// ignore failure if we are in the middle of trailer headers, since we processed all the chunks and trailers are ignored
client.state.flags.received_last_chunk = true;
client.progressUpdate(comptime is_ssl, if (is_ssl) &http_thread.https_context else &http_thread.http_context, socket);
client.progressUpdate(comptime is_ssl, client.getSslCtx(is_ssl), socket);
return;
},
// here we are in the middle of a chunk so ECONNRESET is expected
@@ -235,7 +235,7 @@ pub fn onClose(
} else if (client.state.content_length == null and client.state.response_stage == .body) {
// no content length informed so we are done here
client.state.flags.received_last_chunk = true;
client.progressUpdate(comptime is_ssl, if (is_ssl) &http_thread.https_context else &http_thread.http_context, socket);
client.progressUpdate(comptime is_ssl, client.getSslCtx(is_ssl), socket);
return;
}
}
@@ -481,6 +481,9 @@ flags: Flags = Flags{},
state: InternalState = .{},
tls_props: ?*SSLConfig = null,
/// The custom SSL context used for this request (null = default context).
/// Set by HTTPThread.connect() when using custom TLS configs.
custom_ssl_ctx: ?*NewHTTPContext(true) = null,
result_callback: HTTPClientResult.Callback = undefined,
/// Some HTTP servers (such as npm) report Last-Modified times but ignore If-Modified-Since.
@@ -514,6 +517,11 @@ pub fn deinit(this: *HTTPClient) void {
this.proxy_tunnel = null;
tunnel.detachAndDeref();
}
// Release our reference on the interned SSLConfig
if (this.tls_props) |config| {
config.deref();
this.tls_props = null;
}
this.unix_socket_path.deinit();
this.unix_socket_path = jsc.ZigString.Slice.empty;
}
@@ -535,6 +543,16 @@ pub fn isKeepAlivePossible(this: *HTTPClient) bool {
return false;
}
/// Returns the SSL context for this client - either the custom context
/// (for mTLS/custom TLS) or the default global context.
pub fn getSslCtx(this: *HTTPClient, comptime is_ssl: bool) *NewHTTPContext(is_ssl) {
if (comptime is_ssl) {
return this.custom_ssl_ctx orelse &http_thread.https_context;
} else {
return &http_thread.http_context;
}
}
// lowercase hash header names so that we can be sure
pub fn hashHeaderName(name: string) u64 {
var hasher = std.hash.Wyhash.init(0);
@@ -805,6 +823,7 @@ pub fn doRedirect(
this.flags.did_have_handshaking_error and !this.flags.reject_unauthorized,
this.connected_url.hostname,
this.connected_url.getPortAuto(),
this.tls_props,
);
} else {
NewHTTPContext(is_ssl).closeSocket(socket);
@@ -942,12 +961,13 @@ fn printResponse(response: picohttp.Response) void {
pub fn onPreconnect(this: *HTTPClient, comptime is_ssl: bool, socket: NewHTTPContext(is_ssl).HTTPSocket) void {
log("onPreconnect({})", .{this.url});
this.unregisterAbortTracker();
const ctx = if (comptime is_ssl) &http_thread.https_context else &http_thread.http_context;
const ctx = this.getSslCtx(is_ssl);
ctx.releaseSocket(
socket,
this.flags.did_have_handshaking_error and !this.flags.reject_unauthorized,
this.url.hostname,
this.url.getPortAuto(),
this.tls_props,
);
this.state.reset(this.allocator);
@@ -1220,7 +1240,7 @@ pub fn onWritable(this: *HTTPClient, comptime is_first_call: bool, comptime is_s
this.state.request_stage = .body;
if (this.flags.is_streaming_request_body) {
// lets signal to start streaming the body
this.progressUpdate(is_ssl, if (is_ssl) &http_thread.https_context else &http_thread.http_context, socket);
this.progressUpdate(is_ssl, this.getSslCtx(is_ssl), socket);
}
}
return;
@@ -1233,7 +1253,7 @@ pub fn onWritable(this: *HTTPClient, comptime is_first_call: bool, comptime is_s
this.state.request_stage = .body;
if (this.flags.is_streaming_request_body) {
// lets signal to start streaming the body
this.progressUpdate(is_ssl, if (is_ssl) &http_thread.https_context else &http_thread.http_context, socket);
this.progressUpdate(is_ssl, this.getSslCtx(is_ssl), socket);
}
}
assert(
@@ -1388,7 +1408,7 @@ pub fn onWritable(this: *HTTPClient, comptime is_first_call: bool, comptime is_s
this.state.request_stage = .proxy_body;
if (this.flags.is_streaming_request_body) {
// lets signal to start streaming the body
this.progressUpdate(is_ssl, if (is_ssl) &http_thread.https_context else &http_thread.http_context, socket);
this.progressUpdate(is_ssl, this.getSslCtx(is_ssl), socket);
}
assert(this.state.request_body.len > 0);
@@ -1779,7 +1799,7 @@ pub fn drainResponseBody(this: *HTTPClient, comptime is_ssl: bool, socket: NewHT
return;
}
this.sendProgressUpdateWithoutStageCheck(is_ssl, http_thread.context(is_ssl), socket);
this.sendProgressUpdateWithoutStageCheck(is_ssl, this.getSslCtx(is_ssl), socket);
}
fn sendProgressUpdateWithoutStageCheck(this: *HTTPClient, comptime is_ssl: bool, ctx: *NewHTTPContext(is_ssl), socket: NewHTTPContext(is_ssl).HTTPSocket) void {
@@ -1808,6 +1828,7 @@ fn sendProgressUpdateWithoutStageCheck(this: *HTTPClient, comptime is_ssl: bool,
this.flags.did_have_handshaking_error and !this.flags.reject_unauthorized,
this.connected_url.hostname,
this.connected_url.getPortAuto(),
this.tls_props,
);
} else {
NewHTTPContext(is_ssl).closeSocket(socket);


@@ -8,6 +8,11 @@ pub fn NewHTTPContext(comptime ssl: bool) type {
port: u16 = 0,
/// If you set `rejectUnauthorized` to `false`, the connection fails to verify,
did_have_handshaking_error_while_reject_unauthorized_is_false: bool = false,
/// The interned SSLConfig this socket was created with (null = default context).
/// Holds a ref while the socket is in the keepalive pool.
ssl_config: ?*SSLConfig = null,
/// The context that owns this pooled socket's memory (for returning to correct pool).
owner: *Context,
};
pub fn markTaggedSocketAsDead(socket: HTTPSocket, tagged: ActiveSocket) void {
@@ -79,6 +84,28 @@ pub fn NewHTTPContext(comptime ssl: bool) type {
}
pub fn deinit(this: *@This()) void {
// Replace callbacks with no-ops first to avoid UAF when closing sockets.
this.us_socket_context.cleanCallbacks(ssl);
// Drain pooled keepalive sockets: deref their ssl_config and force-close.
// Must force-close (code != 0) because SSL clean shutdown (code=0) requires a
// shutdown handshake with the peer, which won't complete during eviction.
// Without force-close, the socket stays linked and the context refcount never
// reaches 0, leaking the SSL_CTX.
if (comptime ssl) {
var iter = this.pending_sockets.used.iterator(.{ .kind = .set });
while (iter.next()) |idx| {
const pooled = this.pending_sockets.at(@intCast(idx));
if (pooled.ssl_config) |config| {
config.deref();
pooled.ssl_config = null;
}
pooled.http_socket.close(.failure);
}
}
// Use deferred free pattern (via nextTick) to avoid freeing the uSockets
// context while close callbacks may still reference it.
this.us_socket_context.deinit(ssl);
bun.default_allocator.destroy(this);
}
@@ -161,7 +188,7 @@ pub fn NewHTTPContext(comptime ssl: bool) type {
/// If `did_have_handshaking_error_while_reject_unauthorized_is_false`
/// is set, then we can only reuse the socket for HTTP Keep Alive if
/// `reject_unauthorized` is set to `false`.
pub fn releaseSocket(this: *@This(), socket: HTTPSocket, did_have_handshaking_error_while_reject_unauthorized_is_false: bool, hostname: []const u8, port: u16) void {
pub fn releaseSocket(this: *@This(), socket: HTTPSocket, did_have_handshaking_error_while_reject_unauthorized_is_false: bool, hostname: []const u8, port: u16, ssl_config: ?*SSLConfig) void {
// log("releaseSocket(0x{f})", .{bun.fmt.hexIntUpper(@intFromPtr(socket.socket))});
if (comptime Environment.allow_assert) {
@@ -186,6 +213,12 @@ pub fn NewHTTPContext(comptime ssl: bool) type {
@memcpy(pending.hostname_buf[0..hostname.len], hostname);
pending.hostname_len = @as(u8, @truncate(hostname.len));
pending.port = port;
pending.owner = this;
// Hold a ref on ssl_config while it's in the keepalive pool
pending.ssl_config = ssl_config;
if (ssl_config) |config| {
config.ref();
}
log("Keep-Alive release {s}:{d}", .{
hostname,
@@ -299,7 +332,12 @@ pub fn NewHTTPContext(comptime ssl: bool) type {
}
fn addMemoryBackToPool(pooled: *PooledSocket) void {
assert(context().pending_sockets.put(pooled));
// Release the ssl_config ref held by this pooled socket
if (pooled.ssl_config) |config| {
config.deref();
pooled.ssl_config = null;
}
assert(pooled.owner.pending_sockets.put(pooled));
}
pub fn onData(
@@ -312,7 +350,7 @@ pub fn NewHTTPContext(comptime ssl: bool) type {
return client.onData(
comptime ssl,
buf,
if (comptime ssl) &bun.http.http_thread.https_context else &bun.http.http_thread.http_context,
client.getSslCtx(ssl),
socket,
);
} else if (tagged.is(PooledSocket)) {
@@ -392,7 +430,7 @@ pub fn NewHTTPContext(comptime ssl: bool) type {
}
};
fn existingSocket(this: *@This(), reject_unauthorized: bool, hostname: []const u8, port: u16) ?HTTPSocket {
fn existingSocket(this: *@This(), reject_unauthorized: bool, hostname: []const u8, port: u16, ssl_config: ?*SSLConfig) ?HTTPSocket {
if (hostname.len > MAX_KEEPALIVE_HOSTNAME)
return null;
@@ -404,6 +442,11 @@ pub fn NewHTTPContext(comptime ssl: bool) type {
continue;
}
// Match ssl_config by pointer equality (interned configs)
if (socket.ssl_config != ssl_config) {
continue;
}
if (socket.did_have_handshaking_error_while_reject_unauthorized_is_false and reject_unauthorized) {
continue;
}
@@ -421,7 +464,12 @@ pub fn NewHTTPContext(comptime ssl: bool) type {
continue;
}
assert(context().pending_sockets.put(socket));
// Release the pooled socket's ssl_config ref (caller has its own ref)
if (socket.ssl_config) |config| {
config.deref();
socket.ssl_config = null;
}
assert(this.pending_sockets.put(socket));
log("+ Keep-Alive reuse {s}:{d}", .{ hostname, port });
return http_socket;
}
@@ -452,7 +500,7 @@ pub fn NewHTTPContext(comptime ssl: bool) type {
client.connected_url.hostname = hostname;
if (client.isKeepAlivePossible()) {
if (this.existingSocket(client.flags.reject_unauthorized, hostname, port)) |sock| {
if (this.existingSocket(client.flags.reject_unauthorized, hostname, port, client.tls_props)) |sock| {
if (sock.ext(**anyopaque)) |ctx| {
ctx.* = bun.cast(**anyopaque, ActiveSocket.init(client).ptr());
}
@@ -499,6 +547,7 @@ const assert = bun.assert;
const strings = bun.strings;
const uws = bun.uws;
const BoringSSL = bun.BoringSSL.c;
const SSLConfig = bun.api.server.ServerConfig.SSLConfig;
const HTTPClient = bun.http;
const InitError = HTTPClient.InitError;


@@ -1,6 +1,15 @@
const HTTPThread = @This();
var custom_ssl_context_map = std.AutoArrayHashMap(*SSLConfig, *NewHTTPContext(true)).init(bun.default_allocator);
/// SSL context cache keyed by interned SSLConfig pointer.
/// Since configs are interned via SSLConfig.GlobalRegistry, pointer equality
/// is sufficient for lookup. Each entry holds a ref on its SSLConfig.
const SslContextCacheEntry = struct {
ctx: *NewHTTPContext(true),
last_used_ns: u64,
};
const ssl_context_cache_max_size = 60;
const ssl_context_cache_ttl_ns = 30 * std.time.ns_per_min;
var custom_ssl_context_map = std.AutoArrayHashMap(*SSLConfig, SslContextCacheEntry).init(bun.default_allocator);
loop: *jsc.MiniEventLoop,
http_context: NewHTTPContext(false),
@@ -226,32 +235,33 @@ pub fn connect(this: *@This(), client: *HTTPClient, comptime is_ssl: bool) !NewH
if (comptime is_ssl) {
const needs_own_context = client.tls_props != null and client.tls_props.?.requires_custom_request_ctx;
if (needs_own_context) {
var requested_config = client.tls_props.?;
for (custom_ssl_context_map.keys()) |other_config| {
if (requested_config.isSame(other_config)) {
// we free the callers config since we have a existing one
if (requested_config != client.tls_props) {
requested_config.deinit();
bun.default_allocator.destroy(requested_config);
}
client.tls_props = other_config;
if (client.http_proxy) |url| {
return try custom_ssl_context_map.get(other_config).?.connect(client, url.hostname, url.getPortAuto());
} else {
return try custom_ssl_context_map.get(other_config).?.connect(client, client.url.hostname, client.url.getPortAuto());
}
const requested_config = client.tls_props.?;
// Evict stale entries from the cache
evictStaleSslContexts(this);
// Look up by pointer equality (configs are interned)
if (custom_ssl_context_map.getPtr(requested_config)) |entry| {
// Cache hit - reuse existing SSL context
entry.last_used_ns = this.timer.read();
client.custom_ssl_ctx = entry.ctx;
// Keepalive is now supported for custom SSL contexts
if (client.http_proxy) |url| {
return try entry.ctx.connect(client, url.hostname, url.getPortAuto());
} else {
return try entry.ctx.connect(client, client.url.hostname, client.url.getPortAuto());
}
}
// we need the config so dont free it
var custom_context = try bun.default_allocator.create(NewHTTPContext(is_ssl));
custom_context.initWithClientConfig(client) catch |err| {
client.tls_props = null;
requested_config.deinit();
bun.default_allocator.destroy(requested_config);
// Cache miss - create new SSL context
var custom_context = try bun.default_allocator.create(NewHTTPContext(is_ssl));
custom_context.* = .{
.pending_sockets = NewHTTPContext(is_ssl).PooledSocketHiveAllocator.empty,
.us_socket_context = undefined,
};
custom_context.initWithClientConfig(client) catch |err| {
bun.default_allocator.destroy(custom_context);
// TODO: these error names reach js. figure out how they should be handled
return switch (err) {
error.FailedToOpenSocket => |e| e,
error.InvalidCA => error.FailedToOpenSocket,
@@ -259,14 +269,25 @@ pub fn connect(this: *@This(), client: *HTTPClient, comptime is_ssl: bool) !NewH
error.LoadCAFile => error.FailedToOpenSocket,
};
};
try custom_ssl_context_map.put(requested_config, custom_context);
// We might deinit the socket context, so we disable keepalive to make sure we don't
// free it while in use.
client.flags.disable_keepalive = true;
// Hold a ref on the config for the cache entry
requested_config.ref();
const now = this.timer.read();
bun.handleOom(custom_ssl_context_map.put(requested_config, .{
.ctx = custom_context,
.last_used_ns = now,
}));
// Enforce max cache size - evict oldest entry
if (custom_ssl_context_map.count() > ssl_context_cache_max_size) {
evictOldestSslContext();
}
client.custom_ssl_ctx = custom_context;
// Keepalive is now supported for custom SSL contexts
if (client.http_proxy) |url| {
// https://github.com/oven-sh/bun/issues/11343
if (url.protocol.len == 0 or strings.eqlComptime(url.protocol, "https") or strings.eqlComptime(url.protocol, "http")) {
return try this.context(is_ssl).connect(client, url.hostname, url.getPortAuto());
return try custom_context.connect(client, url.hostname, url.getPortAuto());
}
return error.UnsupportedProxyProtocol;
}
@@ -289,6 +310,41 @@ pub fn context(this: *@This(), comptime is_ssl: bool) *NewHTTPContext(is_ssl) {
return if (is_ssl) &this.https_context else &this.http_context;
}
/// Evict SSL context cache entries that haven't been used for ssl_context_cache_ttl_ns.
fn evictStaleSslContexts(this: *@This()) void {
const now = this.timer.read();
var i: usize = 0;
while (i < custom_ssl_context_map.count()) {
const entry = custom_ssl_context_map.values()[i];
if (now -| entry.last_used_ns > ssl_context_cache_ttl_ns) {
const config = custom_ssl_context_map.keys()[i];
custom_ssl_context_map.swapRemoveAt(i);
entry.ctx.deinit();
config.deref();
} else {
i += 1;
}
}
}
/// Evict the least-recently-used SSL context cache entry.
fn evictOldestSslContext() void {
if (custom_ssl_context_map.count() == 0) return;
var oldest_idx: usize = 0;
var oldest_time: u64 = std.math.maxInt(u64);
for (custom_ssl_context_map.values(), 0..) |entry, i| {
if (entry.last_used_ns < oldest_time) {
oldest_time = entry.last_used_ns;
oldest_idx = i;
}
}
const entry = custom_ssl_context_map.values()[oldest_idx];
const config = custom_ssl_context_map.keys()[oldest_idx];
custom_ssl_context_map.swapRemoveAt(oldest_idx);
entry.ctx.deinit();
config.deref();
}
fn drainQueuedShutdowns(this: *@This()) void {
while (true) {
// socket.close() can potentially be slow


@@ -26,6 +26,7 @@ const SymbolDispose = Symbol.dispose;
const PromisePrototypeThen = $Promise.prototype.$then;
let addAbortListener;
let AsyncLocalStorage;
function isRequest(stream) {
return stream.setHeader && typeof stream.abort === "function";
@@ -45,7 +46,8 @@ function eos(stream, options, callback) {
validateFunction(callback, "callback");
validateAbortSignal(options.signal, "options.signal");
callback = once(callback);
AsyncLocalStorage ??= require("node:async_hooks").AsyncLocalStorage;
callback = once(AsyncLocalStorage.bind(callback));
if (isReadableStream(stream) || isWritableStream(stream)) {
return eosWeb(stream, options, callback);
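Wrapping the callback with `AsyncLocalStorage.bind` makes it run in the async context that was active when `eos()` was called, rather than the context of whichever I/O event eventually fires it. A minimal illustration of that static `bind` (real Node.js API, Node 18.16+; the storage instance and values here are made up):

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

const als = new AsyncLocalStorage<string>();

// Static bind captures the execution context current at the moment of the
// bind() call — here, inside als.run(), where the store is "request-1".
const bound = als.run("request-1", () =>
  AsyncLocalStorage.bind(() => als.getStore()),
);

// Invoked later, outside the run() scope, the bound callback still observes
// the captured store ("request-1") instead of undefined.
const restored = bound();
```

Without the bind, a stream callback fired from an unrelated event-loop turn would see `getStore()` return `undefined`, losing per-request context.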

src/repl.zig (new file, 2051 lines)

File diff suppressed because it is too large.


@@ -257,6 +257,7 @@ pub const Runtime = struct {
.emit_decorator_metadata,
.standard_decorators,
.lower_using,
.repl_mode,
// note that we do not include .inject_jest_globals, as we bail out of the cache entirely if this is true
};


@@ -44,7 +44,7 @@ pub fn start(this: *@This()) Yield {
break;
}
const maybe1 = iter.next().?;
const maybe1 = iter.next() orelse return this.fail(Builtin.Kind.usageString(.seq));
const int1 = std.fmt.parseFloat(f32, bun.sliceTo(maybe1, 0)) catch return this.fail("seq: invalid argument\n");
if (!std.math.isFinite(int1)) return this.fail("seq: invalid argument\n");
this._end = int1;


@@ -2387,6 +2387,36 @@ pub const CodePoint = i32;
const string = []const u8;
/// SIMD-accelerated iterator that yields slices of text between ANSI escape sequences.
/// The C++ side uses ANSI::findEscapeCharacter (SIMD) and ANSI::consumeANSI.
pub const ANSIIterator = extern struct {
input: [*]const u8,
input_len: usize,
cursor: usize,
slice_ptr: ?[*]const u8,
slice_len: usize,
pub fn init(input: []const u8) ANSIIterator {
return .{
.input = input.ptr,
.input_len = input.len,
.cursor = 0,
.slice_ptr = null,
.slice_len = 0,
};
}
/// Returns the next slice of non-ANSI text, or null when done.
pub fn next(self: *ANSIIterator) ?[]const u8 {
if (Bun__ANSI__next(self)) {
return (self.slice_ptr orelse return null)[0..self.slice_len];
}
return null;
}
extern fn Bun__ANSI__next(it: *ANSIIterator) bool;
};
const escapeHTML_ = @import("./immutable/escapeHTML.zig");
const escapeRegExp_ = @import("./escapeRegExp.zig");
const paths_ = @import("./immutable/paths.zig");
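The iteration contract of `ANSIIterator` — yield only the visible text between escape sequences — can be sketched in TypeScript with a CSI regex standing in for the SIMD-accelerated C++ scanner (the regex below covers only common CSI sequences, not the full ECMA-48 grammar):

```typescript
// Sketch: yield slices of non-ANSI text, skipping CSI escape sequences.
const ANSI_RE = /\x1b\[[0-9;]*[A-Za-z]/g;

function* nonAnsiSlices(input: string): Generator<string> {
  let cursor = 0;
  for (const m of input.matchAll(ANSI_RE)) {
    // Emit any visible text accumulated before this escape sequence.
    if (m.index! > cursor) yield input.slice(cursor, m.index);
    cursor = m.index! + m[0].length; // consume the escape sequence itself
  }
  if (cursor < input.length) yield input.slice(cursor); // trailing text
}
```

As in the Zig struct, the iterator returns nothing for the escape bytes themselves; a string consisting solely of escapes yields no slices at all.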

src/tty.zig (new file, 11 lines)

@@ -0,0 +1,11 @@
pub const Mode = enum(c_int) {
normal = 0,
raw = 1,
io = 2,
};
pub fn setMode(fd: c_int, mode: Mode) c_int {
return Bun__ttySetMode(fd, @intFromEnum(mode));
}
extern fn Bun__ttySetMode(fd: c_int, mode: c_int) c_int;


@@ -168,6 +168,40 @@ console.log("PRELOAD");
},
});
// Regression test: standalone workers must not load .env when autoloadDotenv is disabled
itBundled("compile/AutoloadDotenvDisabledWorkerCLI", {
compile: {
autoloadDotenv: false,
},
backend: "cli",
files: {
"/entry.ts": /* js */ `
import { rmSync } from "fs";
rmSync("./worker.ts", { force: true });
const worker = new Worker("./worker.ts");
console.log(await new Promise(resolve => {
worker.onmessage = event => resolve(event.data);
}));
worker.terminate();
`,
"/worker.ts": /* js */ `
postMessage(process.env.TEST_VAR || "not found");
`,
},
entryPointsRaw: ["./entry.ts", "./worker.ts"],
outfile: "dist/out",
runtimeFiles: {
"/.env": `TEST_VAR=from_dotenv`,
},
run: {
stdout: "not found",
file: "dist/out",
setCwd: true,
},
});
// Test CLI backend with autoloadDotenv: true
itBundled("compile/AutoloadDotenvEnabledCLI", {
compile: {


@@ -1,12 +0,0 @@
import { expect, test } from "bun:test";
import "harness";
import { isArm64, isMusl } from "harness";
// https://github.com/oven-sh/bun/issues/12070
test.skipIf(
// swc, which bun-repl uses, published a glibc build for arm64 musl
// and so it crashes on process.exit.
isMusl && isArm64,
)("bun repl", () => {
expect(["repl", "-e", "process.exit(0)"]).toRun();
});


@@ -129,6 +129,12 @@ describe.concurrent(() => {
"name": "bun-windows-x64-baseline.zip",
"browser_download_url": `https://pub-5e11e972747a44bf9aaf9394f185a982.r2.dev/releases/${tagName}/bun-windows-x64-baseline.zip`,
},
{
"url": "foo",
"content_type": "application/zip",
"name": "bun-windows-aarch64.zip",
"browser_download_url": `https://pub-5e11e972747a44bf9aaf9394f185a982.r2.dev/releases/${tagName}/bun-windows-aarch64.zip`,
},
{
"url": "foo",
"content_type": "application/zip",
@@ -141,6 +147,12 @@ describe.concurrent(() => {
"name": "bun-linux-x64-baseline.zip",
"browser_download_url": `https://pub-5e11e972747a44bf9aaf9394f185a982.r2.dev/releases/${tagName}/bun-linux-x64-baseline.zip`,
},
{
"url": "foo",
"content_type": "application/zip",
"name": "bun-linux-aarch64.zip",
"browser_download_url": `https://pub-5e11e972747a44bf9aaf9394f185a982.r2.dev/releases/${tagName}/bun-linux-aarch64.zip`,
},
{
"url": "foo",
"content_type": "application/zip",


@@ -0,0 +1,74 @@
// Fixture for TLS keepalive memory leak detection.
// Spawned as a subprocess with --smol for clean memory measurement.
//
// Usage: bun --smol tls-keepalive-leak-fixture.js
// Env: TLS_CERT, TLS_KEY - PEM cert/key for the server
// NUM_REQUESTS - number of requests to make (default 50000)
// MODE - "same" (same TLS config) or "distinct" (unique configs)
const cert = process.env.TLS_CERT;
const key = process.env.TLS_KEY;
const numRequests = parseInt(process.env.NUM_REQUESTS || "50000", 10);
const mode = process.env.MODE || "same";
if (!cert || !key) {
throw new Error("TLS_CERT and TLS_KEY env vars required");
}
using server = Bun.serve({
port: 0,
tls: { cert, key },
hostname: "127.0.0.1",
fetch() {
return new Response("ok");
},
});
const url = `https://127.0.0.1:${server.port}`;
// Warmup
for (let i = 0; i < 20_000; i++) {
await fetch(url, {
tls: { ca: cert, rejectUnauthorized: false },
keepalive: true,
}).then(r => r.text());
}
Bun.gc(true);
const baselineRss = process.memoryUsage.rss();
const requests = [];
if (mode === "same") {
// All requests use the same TLS config — tests SSLConfig dedup
const tlsOpts = { ca: cert, rejectUnauthorized: false };
for (let i = 0; i < numRequests; i++) {
await fetch(url, { tls: tlsOpts, keepalive: true }).then(r => r.text());
}
} else if (mode === "distinct") {
// Each request uses a unique TLS config — tests cache eviction
for (let i = 0; i < numRequests; i++) {
await fetch(url, {
tls: { ca: cert, rejectUnauthorized: false, serverName: `host-${i}.example.com` },
keepalive: true,
}).then(r => r.text());
}
}
// Allow the HTTP thread to process deferred SSL context frees
await Bun.sleep(100);
Bun.gc(true);
await Bun.sleep(100);
Bun.gc(true);
const finalRss = process.memoryUsage.rss();
const growthMB = (finalRss - baselineRss) / (1024 * 1024);
// Output as JSON for the parent test to parse
console.log(
JSON.stringify({
baselineRss,
finalRss,
growthMB: Math.round(growthMB * 100) / 100,
numRequests,
mode,
}),
);


@@ -0,0 +1,167 @@
import { describe, expect, setDefaultTimeout, test } from "bun:test";
import { bunEnv, bunExe, isASAN, tls as validTls } from "harness";
import { join } from "node:path";
setDefaultTimeout(30_000);
describe("TLS keepalive for custom SSL configs", () => {
test("keepalive reuses connections with same TLS config", async () => {
using server = Bun.serve({
port: 0,
tls: validTls,
hostname: "127.0.0.1",
fetch(req, server) {
const ip = server.requestIP(req);
return new Response(String(ip?.port ?? 0));
},
});
const url = `https://127.0.0.1:${server.port}`;
const tlsOpts = { ca: validTls.cert, rejectUnauthorized: false };
// Make sequential requests with keepalive enabled.
// With our fix: connections reuse → same client port.
// Without fix: disable_keepalive=true → new connection each time → different ports.
const ports: number[] = [];
for (let i = 0; i < 6; i++) {
const res = await fetch(url, { tls: tlsOpts, keepalive: true });
ports.push(parseInt(await res.text(), 10));
}
const uniquePorts = new Set(ports);
// Keepalive working: at most 2 unique ports (allowing one reconnect)
expect(uniquePorts.size).toBeLessThanOrEqual(2);
});
test("different TLS configs use separate connections", async () => {
using server = Bun.serve({
port: 0,
tls: validTls,
hostname: "127.0.0.1",
fetch(req, server) {
const ip = server.requestIP(req);
return new Response(String(ip?.port ?? 0));
},
});
const url = `https://127.0.0.1:${server.port}`;
// Two configs that differ (serverName makes them different SSLConfigs)
const tlsA = { ca: validTls.cert, rejectUnauthorized: false };
const tlsB = { ca: validTls.cert, rejectUnauthorized: false, serverName: "127.0.0.1" };
const resA = await fetch(url, { tls: tlsA, keepalive: true });
const portA = parseInt(await resA.text(), 10);
const resB = await fetch(url, { tls: tlsB, keepalive: true });
const portB = parseInt(await resB.text(), 10);
// Different SSL configs must not share keepalive connections
expect(portA).not.toBe(portB);
});
test("stress test - many sequential requests reuse connections", async () => {
using server = Bun.serve({
port: 0,
tls: validTls,
hostname: "127.0.0.1",
fetch(req, server) {
const ip = server.requestIP(req);
return new Response(String(ip?.port ?? 0));
},
});
const url = `https://127.0.0.1:${server.port}`;
const tlsOpts = { ca: validTls.cert, rejectUnauthorized: false };
const ports: number[] = [];
for (let i = 0; i < 50; i++) {
const res = await fetch(url, { tls: tlsOpts, keepalive: true });
ports.push(parseInt(await res.text(), 10));
}
const uniquePorts = new Set(ports);
// 50 requests through keepalive should use very few connections
expect(uniquePorts.size).toBeLessThanOrEqual(3);
});
test("keepalive disabled creates new connections each time", async () => {
using server = Bun.serve({
port: 0,
tls: validTls,
hostname: "127.0.0.1",
fetch(req, server) {
const ip = server.requestIP(req);
return new Response(String(ip?.port ?? 0));
},
});
const url = `https://127.0.0.1:${server.port}`;
const tlsOpts = { ca: validTls.cert, rejectUnauthorized: false };
// With keepalive explicitly disabled, each request should open a new connection
const ports: number[] = [];
for (let i = 0; i < 5; i++) {
const res = await fetch(url, { tls: tlsOpts, keepalive: false });
ports.push(parseInt(await res.text(), 10));
}
const uniquePorts = new Set(ports);
// Every request should use a different connection → different port
expect(uniquePorts.size).toBeGreaterThan(1);
});
});
describe.skipIf(isASAN)("TLS custom config memory leak detection", () => {
test("repeated fetches with same custom TLS config do not leak memory", async () => {
await using proc = Bun.spawn({
cmd: [bunExe(), "--smol", join(import.meta.dir, "tls-keepalive-leak-fixture.js")],
env: {
...bunEnv,
TLS_CERT: validTls.cert,
TLS_KEY: validTls.key,
NUM_REQUESTS: "100000",
MODE: "same",
},
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
if (exitCode !== 0) {
  console.error(stderr);
}
const result = JSON.parse(stdout.trim());
console.log(`Same config: ${result.numRequests} requests, growth: ${result.growthMB} MB`);
expect(result.growthMB).toBeLessThan(50);
expect(exitCode).toBe(0);
});
test("many distinct TLS configs stay bounded by cache eviction", async () => {
await using proc = Bun.spawn({
cmd: [bunExe(), "--smol", join(import.meta.dir, "tls-keepalive-leak-fixture.js")],
env: {
...bunEnv,
TLS_CERT: validTls.cert,
TLS_KEY: validTls.key,
NUM_REQUESTS: "200",
MODE: "distinct",
},
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
if (exitCode !== 0) {
  console.error(stderr);
}
const result = JSON.parse(stdout.trim());
console.log(`Distinct configs: ${result.numRequests} configs, growth: ${result.growthMB} MB`);
expect(result.growthMB).toBeLessThan(75 * (isASAN ? 8 : 1));
expect(exitCode).toBe(0);
});
});

File diff suppressed because it is too large


@@ -9,6 +9,36 @@ describe("seq", async () => {
.stderr("usage: seq [-w] [-f format] [-s string] [-t string] [first [incr]] last\n")
.runAsTest("prints usage");
TestBuilder.command`seq -w`
.exitCode(1)
.stdout("")
.stderr("usage: seq [-w] [-f format] [-s string] [-t string] [first [incr]] last\n")
.runAsTest("prints usage when only -w flag given");
TestBuilder.command`seq --fixed-width`
.exitCode(1)
.stdout("")
.stderr("usage: seq [-w] [-f format] [-s string] [-t string] [first [incr]] last\n")
.runAsTest("prints usage when only --fixed-width flag given");
TestBuilder.command`seq -s ,`
.exitCode(1)
.stdout("")
.stderr("usage: seq [-w] [-f format] [-s string] [-t string] [first [incr]] last\n")
.runAsTest("prints usage when only -s flag given");
TestBuilder.command`seq -t ,`
.exitCode(1)
.stdout("")
.stderr("usage: seq [-w] [-f format] [-s string] [-t string] [first [incr]] last\n")
.runAsTest("prints usage when only -t flag given");
TestBuilder.command`seq -w -s , -t .`
.exitCode(1)
.stdout("")
.stderr("usage: seq [-w] [-f format] [-s string] [-t string] [first [incr]] last\n")
.runAsTest("prints usage when only flags given");
TestBuilder.command`seq -s`
.exitCode(1)
.stdout("")


@@ -0,0 +1,126 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe, isWindows } from "harness";
// Regression test: Bun used to call shutdown(SHUT_WR) on the parent's read end
// of a SOCK_STREAM socketpair used for subprocess stdout. This sent a FIN to
// the child's write end, causing programs that poll stdout for readability
// (like Python's asyncio connect_write_pipe) to interpret it as "peer closed"
// and tear down their write transport.
//
// This broke all Python MCP servers using the model_context_protocol SDK
// whenever they took more than a few seconds to initialize.
test("subprocess stdout pipe stays writable after idle delay", async () => {
// Spawn a child that delays before writing to stdout.
// The child uses poll() on stdout to detect if the read end was shutdown.
await using proc = Bun.spawn({
cmd: [
bunExe(),
"-e",
`
// Wait 2 seconds, then write to stdout
await Bun.sleep(2000);
process.stdout.write("hello after delay\\n");
`,
],
stdin: "pipe",
stdout: "pipe",
stderr: "pipe",
env: bunEnv,
});
const [stdout, stderr, exitCode] = await Promise.all([
new Response(proc.stdout).text(),
new Response(proc.stderr).text(),
proc.exited,
]);
expect(stdout).toBe("hello after delay\n");
expect(exitCode).toBe(0);
});
// Skip on Windows: Python's asyncio connect_write_pipe uses
// CreateIoCompletionPort internally, which doesn't work with
// subprocess pipe handles on Windows (OSError: [WinError 6]).
test.skipIf(isWindows)("subprocess stdout pipe works with Python asyncio connect_write_pipe", async () => {
// This is the exact scenario from the bug report: Python's asyncio
// connect_write_pipe registers stdout with epoll for read-readiness
// monitoring. If shutdown(SHUT_WR) was called on the parent's end,
// the child sees an immediate EPOLLIN event and interprets it as
// "connection closed".
const pythonScript = `
import sys, asyncio, os

async def main():
    loop = asyncio.get_event_loop()
    w_transport, w_protocol = await loop.connect_write_pipe(
        asyncio.streams.FlowControlMixin, sys.stdout
    )
    writer = asyncio.StreamWriter(w_transport, w_protocol, None, loop)
    # Idle period - this is where the bug would manifest
    await asyncio.sleep(2)
    writer.write(b"hello from asyncio\\n")
    await writer.drain()
    writer.close()

asyncio.run(main())
`;
await using proc = Bun.spawn({
cmd: ["python3", "-c", pythonScript],
stdin: "pipe",
stdout: "pipe",
stderr: "pipe",
env: bunEnv,
});
const [stdout, stderr, exitCode] = await Promise.all([
new Response(proc.stdout).text(),
new Response(proc.stderr).text(),
proc.exited,
]);
if (exitCode !== 0) {
console.error("stderr:", stderr);
}
expect(stdout).toBe("hello from asyncio\n");
expect(exitCode).toBe(0);
});
test("subprocess stdin pipe stays readable for child after idle delay", async () => {
// Also verify stdin works correctly after idle delay
await using proc = Bun.spawn({
cmd: [
bunExe(),
"-e",
`
// Wait, then read from stdin
await Bun.sleep(2000);
const reader = Bun.stdin.stream().getReader();
const { value } = await reader.read();
process.stdout.write(new TextDecoder().decode(value));
`,
],
stdin: "pipe",
stdout: "pipe",
stderr: "pipe",
env: bunEnv,
});
// Write to stdin after child is waiting
proc.stdin.write("hello via stdin\n");
proc.stdin.flush();
proc.stdin.end();
const [stdout, stderr, exitCode] = await Promise.all([
new Response(proc.stdout).text(),
new Response(proc.stderr).text(),
proc.exited,
]);
expect(stdout).toBe("hello via stdin\n");
expect(exitCode).toBe(0);
});


@@ -90,6 +90,7 @@ test/cli/install/bun-lockb.test.ts
test/cli/install/bun-patch.test.ts
test/cli/install/bun-pm.test.ts
test/cli/install/bun-repl.test.ts
test/js/bun/repl/repl.test.ts
test/cli/install/bun-update.test.ts
test/cli/install/bun-workspaces.test.ts
test/cli/install/bunx.test.ts


@@ -0,0 +1,38 @@
// Test for GitHub issue #26058: bun repl is slow
// This test verifies that `bun repl` now uses a built-in REPL instead of bunx bun-repl
import { spawnSync } from "bun";
import { describe, expect, test } from "bun:test";
import { bunEnv, bunExe } from "harness";
describe("issue #26058 - bun repl startup time", () => {
test("bun repl starts without downloading packages", () => {
// The key indicator that bunx is being used is the "Resolving dependencies" message
// Our built-in REPL should not print this
// Use timeout to prevent hanging since REPL requires TTY for interactive input
const result = spawnSync({
cmd: [bunExe(), "repl"],
env: {
...bunEnv,
TERM: "dumb",
},
stderr: "pipe",
stdout: "pipe",
stdin: "ignore",
timeout: 3000,
});
const stderr = result.stderr?.toString() || "";
const stdout = result.stdout?.toString() || "";
// Should NOT see package manager output from bunx
expect(stderr).not.toContain("Resolving dependencies");
expect(stderr).not.toContain("bun add");
expect(stdout).not.toContain("Resolving dependencies");
// The built-in REPL should print "Welcome to Bun" when starting
// Even without a TTY, the welcome message should appear in stdout
expect(stdout).toContain("Welcome to Bun");
});
});


@@ -0,0 +1,71 @@
import { describe, expect, test } from "bun:test";
import { tls as validTls } from "harness";
describe("mTLS SSLConfig keepalive (#27358)", () => {
test("fetch with custom TLS reuses keepalive connections", async () => {
// Track client ports to detect connection reuse
const clientPorts: number[] = [];
using server = Bun.serve({
port: 0,
tls: validTls,
hostname: "127.0.0.1",
fetch(req, server) {
const ip = server.requestIP(req);
return new Response(String(ip?.port ?? 0));
},
});
const url = `https://127.0.0.1:${server.port}`;
const tlsOpts = { ca: validTls.cert, rejectUnauthorized: false };
// Make sequential requests with keepalive enabled.
// With our fix: keepalive works for custom TLS, connections are reused → same port.
// With old code: disable_keepalive=true, every request opens a new TCP connection → different ports.
const numRequests = 6;
for (let i = 0; i < numRequests; i++) {
const res = await fetch(url, { tls: tlsOpts, keepalive: true });
const port = parseInt(await res.text(), 10);
clientPorts.push(port);
}
// Count unique client ports.
const uniquePorts = new Set(clientPorts);
// With keepalive working: sequential requests reuse the connection,
// so we expect significantly fewer unique ports than requests.
// The first request establishes a connection, subsequent ones reuse it.
// Allow for at most 2 unique ports (in case of a one-time reconnect).
expect(uniquePorts.size).toBeLessThanOrEqual(2);
});
test("different custom TLS configs do NOT share keepalive connections", async () => {
using server = Bun.serve({
port: 0,
tls: validTls,
hostname: "127.0.0.1",
fetch(req, server) {
const ip = server.requestIP(req);
return new Response(String(ip?.port ?? 0));
},
});
const url = `https://127.0.0.1:${server.port}`;
// Config A - just CA
const tlsA = { ca: validTls.cert, rejectUnauthorized: false };
// Config B - CA + explicit serverName (makes it a different SSLConfig)
const tlsB = { ca: validTls.cert, rejectUnauthorized: false, serverName: "127.0.0.1" };
// Request with config A
const resA = await fetch(url, { tls: tlsA, keepalive: true });
const portA = parseInt(await resA.text(), 10);
// Request with config B — must open a new connection (different SSL context)
const resB = await fetch(url, { tls: tlsB, keepalive: true });
const portB = parseInt(await resB.text(), 10);
// Different configs → different connections → different ports
expect(portA).not.toBe(portB);
});
});


@@ -0,0 +1,47 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe } from "harness";
test("stream.finished callback preserves AsyncLocalStorage context", async () => {
await using proc = Bun.spawn({
cmd: [
bunExe(),
"-e",
`
const asyncHooks = require('async_hooks');
const http = require('http');
const finished = require('stream').finished;
const asyncLocalStorage = new asyncHooks.AsyncLocalStorage();
const store = { foo: 'bar' };
const server = http.createServer(function (req, res) {
asyncLocalStorage.run(store, function () {
finished(res, function () {
const value = asyncLocalStorage.getStore()?.foo;
if (value !== 'bar') {
console.log('FAIL: expected "bar" but got ' + value);
process.exitCode = 1;
} else {
console.log('PASS');
}
});
});
setTimeout(res.end.bind(res), 0);
}).listen(0, function () {
const port = this.address().port;
http.get('http://127.0.0.1:' + port, function onResponse(res) {
res.resume();
res.on('end', server.close.bind(server));
});
});
`,
],
env: bunEnv,
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
if (exitCode !== 0) {
  console.error(stderr);
}
expect(stdout).toContain("PASS");
expect(exitCode).toBe(0);
});


@@ -0,0 +1,50 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe, isWindows, tempDir } from "harness";
import { join } from "path";
test.if(isWindows)("standalone worker does not crash when autoloadDotenv is disabled and .env exists", async () => {
const target = process.arch === "arm64" ? "bun-windows-aarch64" : "bun-windows-x64";
using dir = tempDir("issue-27431", {
".env": "TEST_VAR=from_dotenv\n",
"entry.ts": 'console.log(process.env.TEST_VAR || "not found")\nnew Worker("./worker.ts")\n',
"worker.ts": "",
"build.ts": `
await Bun.build({
entrypoints: ["./entry.ts", "./worker.ts"],
compile: {
autoloadDotenv: false,
target: "${target}",
outfile: "./app.exe",
},
});
`,
});
await using build = Bun.spawn({
cmd: [bunExe(), join(String(dir), "build.ts")],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [, buildStderr, buildExitCode] = await Promise.all([build.stdout.text(), build.stderr.text(), build.exited]);
expect(buildExitCode).toBe(0);
expect(buildStderr).toBe("");
await using proc = Bun.spawn({
cmd: [join(String(dir), "app.exe")],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(stdout).toContain("not found");
expect(exitCode).toBe(0);
expect(stderr).toBe("");
});


@@ -1,71 +0,0 @@
import { describe } from "bun:test";
import { itBundled } from "../../bundler/expectBundled";
describe("bundler", () => {
// Test that `bun build --compile` produces absolute asset URLs in HTML,
// so that assets load correctly when served from nested routes like "/foo/".
// Regression test for https://github.com/oven-sh/bun/issues/27465
for (const backend of ["api", "cli"] as const) {
itBundled(`compile/${backend}/HTMLNestedRouteAssetURLs`, {
compile: true,
backend: backend,
files: {
"/entry.ts": /* js */ `
import { serve } from "bun";
import index from "./index.html";
const server = serve({
port: 0,
routes: {
"/foo/": index,
"/foo/*": index,
},
});
const res = await fetch(server.url + "foo/");
const html = await res.text();
const srcMatch = html.match(/src="([^"]+)"/);
if (!srcMatch) {
console.log("ERROR: no src attribute found in HTML");
server.stop(true);
process.exit(1);
}
const src = srcMatch[1];
if (src.startsWith("./")) {
console.log("FAIL: relative URL " + src);
server.stop(true);
process.exit(1);
}
// Asset URLs should be absolute (start with "/")
const assetRes = await fetch(server.url + src.slice(1));
if (!assetRes.ok) {
console.log("FAIL: asset not accessible at " + src);
server.stop(true);
process.exit(1);
}
console.log("Asset URL is absolute: " + src);
server.stop(true);
`,
"/index.html": /* html */ `
<!DOCTYPE html>
<html>
<head><title>Test</title></head>
<body>
<h1>Hello</h1>
<script src="./app.ts"></script>
</body>
</html>
`,
"/app.ts": /* js */ `
console.log("client loaded");
`,
},
run: {
stdout: /Asset URL is absolute: \/.+/,
},
});
}
});


@@ -0,0 +1,41 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe } from "harness";
// Regression test for https://github.com/oven-sh/bun/issues/27490
// bmalloc SYSCALL macro was spinning at 100% CPU on madvise EAGAIN
// due to zero-delay tight loop with no backoff or retry cap.
//
// This test verifies that heavy allocation workloads complete without
// hanging. The original bug caused GC threads to spin indefinitely
// on madvise(MADV_DONTDUMP) returning EAGAIN under mmap_write_lock
// contention, freezing the process.
test("heavy allocation workload completes without hanging", async () => {
await using proc = Bun.spawn({
cmd: [
bunExe(),
"-e",
`
// Simulate allocation-heavy workload that triggers GC pressure
const arrays = [];
for (let i = 0; i < 100; i++) {
// Allocate and release large buffers to trigger GC decommit cycles
for (let j = 0; j < 100; j++) {
arrays.push(new ArrayBuffer(1024 * 64));
}
// Force some to be collected
arrays.length = 0;
Bun.gc(true);
}
console.log("OK");
`,
],
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(stdout.trim()).toBe("OK");
expect(exitCode).toBe(0);
}, 30_000);