When compiling with `bun build --compile`, multiple native NAPI modules
would have their exports corrupted on Linux. The second module would
incorrectly inherit the first module's exports.
Root cause: On Linux, we used memfd_create as an optimization to avoid
writing temp files. After dlopen loaded the first module, we closed the
memfd. When loading the second module, memfd_create reused the same fd
number, creating an identical path `/proc/self/fd/N`. Since dlopen caches
by path, it returned the cached handle from the first module instead of
loading the second module.
This bug occurs when loading multiple native modules in quick succession,
as the fd number is likely to be reused immediately after being closed.
Fix: Disable the memfd optimization and always use temp files, which have
unique paths and don't trigger dlopen's path-based caching.
Fixes https://github.com/oven-sh/bun/issues/26045
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Fixed `bun install --frozen-lockfile` to use scope-specific registry
for scoped packages when the lockfile has an empty registry URL
When parsing a `bun.lock` file with an empty registry URL for a scoped
package (like `@example/test-package`), bun was unconditionally using
the default npm registry (`https://registry.npmjs.org/`) instead of
looking up the scope-specific registry from `bunfig.toml`.
For example, with this configuration in `bunfig.toml`:
```toml
[install.scopes]
example = { url = "https://npm.pkg.github.com" }
```
And this lockfile entry with an empty registry URL:
```json
"@example/test-package": ["@example/test-package@1.0.0", "", {}, "sha512-AAAA"]
```
bun would try to fetch from
`https://registry.npmjs.org/@example/test-package/-/...` instead of
`https://npm.pkg.github.com/@example/test-package/-/...`.
The fix uses `manager.scopeForPackageName()` (the same pattern used in
`pnpm.zig`) to look up the correct scope-specific registry URL.
## Test plan
- [x] Added regression test `test/regression/issue/026039.test.ts` that
verifies:
- Scoped packages use the scope-specific registry from `bunfig.toml`
- Non-scoped packages continue to use the default registry
- [x] Verified test fails with system bun (without fix) and passes with
debug build (with fix)
Fixes #26039
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
Previously, reactFastRefresh was silently ignored when target was not
'browser', even when explicitly enabled. This was confusing as there was
no warning or error.
This change removes the `target == .browser` check, trusting explicit
user intent. If users enable reactFastRefresh with a non-browser target,
the transform will now be applied. If `$RefreshReg$` is not defined at
runtime, it will fail fast with a clear error rather than silently doing
nothing.
Use case: Terminal UIs (like [termcast](https://termcast.app)) need
React Fast Refresh with target: 'bun' for hot reloading in non-browser
environments.
### How did you verify your code works?
Updated the existing test to remove the browser-target requirement.
## Summary
- Fixes `bun init --minimal` creating Cursor rules files and CLAUDE.md
when it shouldn't
- Adds regression test to verify `--minimal` only creates package.json
and tsconfig.json
## Test plan
- [x] Verify test fails with system bun (unfixed): `USE_SYSTEM_BUN=1 bun
test test/cli/init/init.test.ts -t "bun init --minimal"`
- [x] Verify test passes with debug build: `bun bd test
test/cli/init/init.test.ts -t "bun init --minimal"`
- [x] All existing init tests pass: `bun bd test
test/cli/init/init.test.ts`
Fixes #26050
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Add the `update` subcommand to Fish shell completions
- Apply the install/add/remove flags (--global, --dry-run, --force,
etc.) to the `update` command
Previously, Fish shell autocompletion for `bun update --gl<TAB>` would
not work because:
1. The `update` command was missing from the list of built-in commands
2. The install/add/remove flags were not being applied to `update`
Fixes #25953
## Test plan
- [x] Verify `update` appears in subcommand completions (`bun <TAB>`)
- [x] Verify `--global` flag completion works (`bun update --gl<TAB>`)
- [x] Verify other install flags work with update (--dry-run, --force,
etc.)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Add `_idleStart` property (getter/setter) to the Timeout object
returned by `setTimeout()` and `setInterval()`
- The property returns a monotonic timestamp (in milliseconds)
representing when the timer was created
- This mimics Node.js's behavior where `_idleStart` is the libuv
timestamp at timer creation time
## Test plan
- [x] Verified test fails with `USE_SYSTEM_BUN=1 bun test
test/regression/issue/25639.test.ts`
- [x] Verified test passes with `bun bd test
test/regression/issue/25639.test.ts`
- [x] Manual verification:
```bash
# Bun with fix - _idleStart exists
./build/debug/bun-debug -e "const t = setTimeout(() => {}, 0);
console.log('_idleStart' in t, typeof t._idleStart); clearTimeout(t)"
# Output: true number
# Node.js reference - same behavior
node -e "const t = setTimeout(() => {}, 0); console.log('_idleStart' in
t, typeof t._idleStart); clearTimeout(t)"
# Output: true number
```
Closes #25639
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Fixes handler context not being restored after minifying nested CSS
rules
- Adds regression test for the issue
## Test plan
- [x] Test fails with `USE_SYSTEM_BUN=1 bun test
test/regression/issue/25794.test.ts`
- [x] Test passes with `bun bd test test/regression/issue/25794.test.ts`
Fixes #25794
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
Fix a memory leak in `RequestContext.doRenderWithBody()` where
`Strong.Impl` memory was leaked when proxying streaming responses
through Bun's HTTP server.
## Problem
When a streaming response (e.g., from a proxied fetch request) was
forwarded through Bun's server:
1. `response_body_readable_stream_ref` was initialized at line 1836
(from `lock.readable`) or line 1841 (via `Strong.init()`)
2. For `.Bytes` streams with `has_received_last_chunk=false`, a **new**
Strong reference was created at line 1902
3. The old Strong reference was **never deinit'd**, causing
`Strong.Impl` memory to leak
This leak accumulated over time with every streaming response proxied
through the server.
## Solution
Add `this.response_body_readable_stream_ref.deinit()` before creating
the new Strong reference. This is safe because:
- `stream` exists as a stack-local variable
- JSC's conservative GC tracks stack-local JSValues
- No GC can occur between consecutive synchronous Zig statements
- Therefore, `stream` won't be collected between `deinit()` and
`Strong.init()`
## Test
Added `test/js/web/fetch/server-response-stream-leak.test.ts` which:
- Creates a backend server that returns delayed streaming responses
- Creates a proxy server that forwards the streaming responses
- Makes 200 requests and checks that ReadableStream objects don't
accumulate
- Fails on system Bun v1.3.5 (202 leaked), passes with the fix
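A minimal sketch of the scenario the test exercises (ports and payloads are illustrative, not the actual test code):
```ts
const backend = Bun.serve({
  port: 0,
  fetch() {
    // Delayed streaming body, like the backend described above.
    const stream = new ReadableStream({
      async start(controller) {
        controller.enqueue(new TextEncoder().encode("chunk"));
        await Bun.sleep(5);
        controller.close();
      },
    });
    return new Response(stream);
  },
});

const proxy = Bun.serve({
  port: 0,
  // Forwarding the backend's streaming body is the path that previously
  // leaked one Strong.Impl per request.
  fetch: () => fetch(`http://localhost:${backend.port}/`),
});

for (let i = 0; i < 200; i++) {
  await (await fetch(`http://localhost:${proxy.port}/`)).text();
}
proxy.stop();
backend.stop();
```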
## Related
Similar to the Strong reference leak fixes in:
- #23313 (fetch memory leak)
- #25846 (fetch cyclic reference leak)
## What does this PR do?
Currently, binary columns are returned as strings, which means they get
corrupted when interpreted as UTF-8. This PR returns binary columns as
Buffers, which is what users actually expect and is also consistent with
PostgreSQL and SQLite.
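A hedged sketch of the expected behavior (table and column names are made up; the exact client setup depends on your connection string):
```ts
import { sql } from "bun";

const [row] = await sql`SELECT avatar FROM users WHERE id = ${1}`;

// Binary (BLOB/VARBINARY) columns now come back as a Buffer instead of a
// UTF-8-decoded (and therefore corrupted) string.
console.log(Buffer.isBuffer(row.avatar)); // true
```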
### How did you verify your code works?
I added tests to verify the correct behavior. Before there were no tests
for binary columns at all.
This fixes #23991
Fixes #25862
### What does this PR do?
When a client sends pipelined data immediately after CONNECT request
headers in the same TCP segment, Bun now properly delivers this data to
the `head` parameter of the 'connect' event handler, matching Node.js
behavior.
This enables compatibility with Cap'n Proto's KJ HTTP library used by
Cloudflare's workerd runtime, which pipelines RPC data after CONNECT.
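A sketch of how the pipelined bytes surface through Node's `connect` event (handler body is illustrative):
```ts
import { createServer } from "node:http";

const server = createServer();
server.on("connect", (req, clientSocket, head) => {
  clientSocket.write("HTTP/1.1 200 Connection Established\r\n\r\n");

  if (head.length > 0) {
    // Data the client pipelined in the same TCP segment as the CONNECT
    // headers now arrives here, matching Node.js.
    forwardToTunnel(head);
  }
  clientSocket.on("data", forwardToTunnel);
});

function forwardToTunnel(chunk: Buffer) {
  // ...forward to the upstream target (e.g. workerd's RPC peer)...
}

server.listen(0);
```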
### How did you verify your code works?
<img width="694" height="612" alt="CleanShot 2026-01-09 at 15 30 22@2x"
src="https://github.com/user-attachments/assets/3ffe840e-1792-429c-8303-d98ac3e6912a"
/>
Tests
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Add comprehensive TypeScript type definitions for `Bun.Archive` in
`bun.d.ts`
- `ArchiveInput` and `ArchiveCompression` types
- Full JSDoc documentation with examples for all methods (`from`,
`write`, `extract`, `blob`, `bytes`, `files`)
- Add documentation page at `docs/runtime/archive.mdx`
- Quickstart examples
- Creating and extracting archives
- `files()` method with glob filtering
- Compression support
- Full API reference section
- Add Archive to docs sidebar under "Data & Storage"
- Add `files()` benchmark comparing `Bun.Archive.files()` vs node-tar
- Shows ~7x speedup for reading archive contents into memory (59µs vs
434µs)
## Test plan
- [x] TypeScript types compile correctly
- [x] Documentation renders properly in Mintlify format
- [x] Benchmark runs successfully and shows performance comparison
- [x] Verified `files()` method works correctly with both Bun.Archive
and node-tar
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fixes a path traversal vulnerability via symlink when installing
GitHub packages
- Validates symlink targets before creation to ensure they stay within
the extraction directory
- Rejects absolute symlinks and relative paths that would escape the
extraction directory
## Details
When extracting GitHub tarballs, Bun did not validate symlink targets. A
malicious tarball could:
1. Create a symlink pointing outside the extraction directory (e.g.,
`../../../../../../../tmp`)
2. Include a file entry through that symlink path (e.g.,
`symlink-to-tmp/pwned.txt`)
When extracted, the symlink would be created first, then the file would
be written through it, ending up outside the intended package directory
(e.g., `/tmp/pwned.txt`).
### The Fix
Added `isSymlinkTargetSafe()` function that:
1. Rejects absolute symlink targets (starting with `/`)
2. Normalizes the combined path (symlink location + target) and rejects
if the result starts with `..` (would escape)
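An illustrative TypeScript version of that check (the real implementation is in Zig; the exact semantics may differ slightly):
```ts
import path from "node:path";

function isSymlinkTargetSafe(symlinkPathInArchive: string, target: string): boolean {
  // 1. Absolute targets can point anywhere on the filesystem.
  if (path.posix.isAbsolute(target)) return false;

  // 2. Resolve the target relative to the symlink's directory and reject
  //    anything that escapes the extraction root.
  const resolved = path.posix.normalize(
    path.posix.join(path.posix.dirname(symlinkPathInArchive), target),
  );
  return resolved !== ".." && !resolved.startsWith("../");
}

isSymlinkTargetSafe("pkg/symlink-to-tmp", "../../../../../../../tmp"); // false
isSymlinkTargetSafe("pkg/link", "/etc/passwd");                        // false
isSymlinkTargetSafe("pkg/link", "./lib/real-file.js");                 // true
```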
## Test plan
- [x] Added regression test
`test/cli/install/symlink-path-traversal.test.ts`
- [x] Tests verify relative path traversal symlinks are blocked
- [x] Tests verify absolute symlink targets are blocked
- [x] Tests verify safe relative symlinks within the package still work
- [x] Verified test fails with system bun (vulnerable) and passes with
debug build (fixed)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
## Summary
Adds a new CLI flag `--compile-executable-path` that allows specifying a
custom Bun executable path for cross-compilation instead of downloading
from the npm registry.
## Usage
```bash
bun build --compile --target=bun-linux-x64 \
--compile-executable-path=/path/to/bun-linux-x64 app.ts
```
## Motivation
The `executablePath` option was already available in the JavaScript
`Bun.build()` API. This exposes the same functionality from the CLI.
## Changes
- Added `--compile-executable-path <STR>` CLI parameter in
`src/cli/Arguments.zig`
- Added `compile_executable_path` field to `BundlerOptions` in
`src/cli.zig`
- Wired the option through to `StandaloneModuleGraph.toExecutable()` in
`src/cli/build_command.zig`
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
Removes the `agent` option from the global WebSocket (in Node.js it uses a
dispatcher, not an agent) and adds `agent` support to the `ws` module
(which actually uses an agent).
### How did you verify your code works?
Tests
### What does this PR do?
Update FFI documentation with a note on Windows API handle values, as
pointer encoding to double causes intermittent failures with some
classes of handles.
Putting it in this accordion feels less than ideal but it's also a very
specific use case.
The specific issue I experienced: HDCs and HBITMAPs are basically 32
bits, although they are typedef'd in the headers to HANDLE. They are
returned from and passed to the GDI APIs as sign-extended versions of
the underlying 32-bit value, and so when going through the ptr <->
double pathway of bun FFI, the bottom 11 bits of those values are lost
if the original value had bit 31 set, and subsequent calls will fail.
Probably this is fixable by correctly encoding 'negative' pointers in
the double representation, and I might tackle that if I find time.
## Summary
Fixes #25869
Two fixes to enable `jest.useFakeTimers()` to work with
`@testing-library/react` and `@testing-library/user-event`:
- Set `setTimeout.clock = true` when fake timers are enabled.
testing-library/react's `jestFakeTimersAreEnabled()` checks for this
property to determine if `jest.advanceTimersByTime()` should be called
when draining the microtask queue. Without this, testing-library never
advances timers.
- Make `advanceTimersByTime(0)` fire `setTimeout(fn, 0)` timers.
`setTimeout(fn, 0)` is internally scheduled with a 1ms delay per HTML
spec. Jest/testing-library expect `advanceTimersByTime(0)` to fire such
"immediate" timers, but we were advancing by 0ms so they never fired.
## Test plan
- [x] All 30 existing fake timer tests pass
- [x] New regression test validates both fixes
- [x] Original user-event reproduction now works (test completes instead
of hanging)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- **libarchive.zig:110**: Fix self-assignment bug where `this.pos` was
assigned to itself instead of `new_pos`
- **s3/credentials.zig:165,176,199**: Fix impossible range checks -
`and` should be `or` for pageSize, partSize, and retry validation (a
value cannot be both less than MIN and greater than MAX simultaneously)
- **postgres.zig:14**: Fix copy-paste error where createConnection
function was internally named "createQuery"
## Test plan
- [ ] Verify S3 credential validation now properly rejects out-of-range
values for pageSize, partSize, and retry
- [ ] Verify libarchive seek operations work correctly
- [ ] Verify postgres createConnection function has correct internal
name
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
## Summary
- Fixes #25903 - `Bun.write()` mode option ignored when copying from
`Bun.file()`
- The destination file now correctly uses the specified `mode` option
instead of default permissions
- Works on Linux (via open flags), macOS (chmod after clonefile), and
Windows (chmod after copyfile)
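A short sketch of the option now being honored (paths and mode are illustrative):
```ts
const src = Bun.file("./build/cli.js");

// Copying a BunFile: the destination is now created with the requested
// permissions instead of the defaults.
await Bun.write("./dist/cli.js", src, { mode: 0o755 });
```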
## Test plan
- [x] Added regression test in `test/regression/issue/25903.test.ts`
- [x] Test passes with `bun bd test test/regression/issue/25903.test.ts`
- [x] Test fails with `USE_SYSTEM_BUN=1 bun test
test/regression/issue/25903.test.ts` (verifies the bug exists)
## Changes
- `src/bun.js/webcore/Blob.zig`: Add `mode` field to `WriteFileOptions`
and parse from options
- `src/bun.js/webcore/blob/copy_file.zig`: Use `destination_mode` in
`CopyFile` struct and `doOpenFile`
- `packages/bun-types/bun.d.ts`: Add `mode` option to BunFile copy
overloads
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Add support for in-memory entrypoints and files in `Bun.build` via the
`files` option:
```ts
await Bun.build({
entrypoints: ["/app/index.ts"],
files: {
"/app/index.ts": `
import { greet } from "./greet.ts";
console.log(greet("World"));
`,
"/app/greet.ts": `
export function greet(name: string) {
return "Hello, " + name + "!";
}
`,
},
});
```
### Features
- **Bundle entirely from memory**: No files on disk needed
- **Override files on disk**: In-memory files take priority over disk
files
- **Mix disk and virtual files**: Real files can import virtual files
and vice versa
- **Multiple content types**: Supports `string`, `Blob`, `TypedArray`,
and `ArrayBuffer`
### Use Cases
- Code generation at build time
- Injecting build-time constants
- Testing with mock modules
- Bundling dynamically generated code
- Overriding configuration files for different environments
### Implementation Details
- Added `FileMap` struct in `JSBundler.zig` with `resolve`, `get`,
`contains`, `fromJS`, and `deinit` methods
- Uses `"memory"` namespace to avoid `pathWithPrettyInitialized`
allocation issues during linking phase
- FileMap checks added in:
- `runResolver` (entry point resolution)
- `runResolutionForParseTask` (import resolution)
- `enqueueEntryPoints` (entry point handling)
- `getCodeForParseTaskWithoutPlugins` (file content reading)
- Root directory defaults to cwd when all entrypoints are in the FileMap
- Added TypeScript types with JSDoc documentation
- Added bundler documentation with examples
## Test plan
- [x] Basic in-memory file bundling
- [x] In-memory files with absolute imports
- [x] In-memory files with relative imports (same dir, subdirs, parent
dirs)
- [x] Nested/chained imports between in-memory files
- [x] TypeScript and JSX support
- [x] Blob, Uint8Array, and ArrayBuffer content types
- [x] Re-exports and default exports
- [x] In-memory file overrides real file on disk
- [x] Real file on disk imports in-memory file via relative path
- [x] Mixed disk and memory files with complex import graphs
Run tests with: `bun bd test test/bundler/bundler_files.test.ts`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
## Summary
This PR implements a new `Bun.JSONC.parse()` API that allows parsing
JSONC (JSON with Comments) files. It addresses the feature request from
issue #16257 by providing a native API for parsing JSON with comments
and trailing commas.
The implementation follows the same pattern as `Bun.YAML` and
`Bun.TOML`, leveraging the existing `TSConfigParser` which already
handles JSONC parsing internally.
## Features
- **Parse JSON with comments**: Supports both `//` single-line and `/*
*/` block comments
- **Handle trailing commas**: Works with trailing commas in objects and
arrays
- **Full JavaScript object conversion**: Returns native JavaScript
objects/arrays
- **Error handling**: Proper error throwing for invalid JSON
- **TypeScript compatibility**: Works with TypeScript config files and
other JSONC formats
## Usage Example
```javascript
const result = Bun.JSONC.parse(`{
// This is a comment
"name": "my-app",
"version": "1.0.0", // trailing comma is allowed
"dependencies": {
"react": "^18.0.0",
},
}`);
// Returns native JavaScript object
```
## Implementation Details
- Created `JSONCObject.zig` following the same pattern as
`YAMLObject.zig` and `TOMLObject.zig`
- Uses the existing `TSConfigParser` from `json.zig` which already
handles comments and trailing commas
- Added proper C++ bindings and exports following Bun's established
patterns
- Comprehensive test suite covering various JSONC features
## Test Plan
- [x] Basic JSON parsing works
- [x] Single-line comments (`//`) are handled correctly
- [x] Block comments (`/* */`) are handled correctly
- [x] Trailing commas in objects and arrays work
- [x] Complex nested structures parse correctly
- [x] Error handling for invalid JSON
- [x] Empty objects and arrays work
- [x] Boolean and null values work correctly
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Jarred-Sumner <709451+Jarred-Sumner@users.noreply.github.com>
### What does this PR do?
Fix bytecode CJS pragma detection when source file contains a shebang.
When bundling with `--bytecode` and the source file has a shebang, the
output silently fails to execute (exits 0, no output).
Reproduction:
[github.com/kjanat/bun-bytecode-banner-bug](https://github.com/kjanat/bun-bytecode-banner-bug)
```js
// Bundled output:
#!/usr/bin/env bun // shebang preserved
// @bun @bytecode @bun-cjs // pragma on line 2
(function(exports, require, module, __filename, __dirname) { ... })
```
The pragma parser in `hasBunPragma()` correctly skips the shebang line,
but uses `self.lexer.end` instead of `contents.len` when scanning for
`@bun-cjs`/`@bytecode` tokens. This causes the pragma to not be
recognized.
**Fix:**
```zig
// Before
while (cursor < self.lexer.end) : (cursor += 1) {
// After
while (cursor < end) : (cursor += 1) {
```
Where `end` is already defined as `contents.len` at the top of the
function.
### How did you verify your code works?
- Added bundler test `banner/SourceHashbangWithBytecodeAndCJSTargetBun`
in `test/bundler/bundler_banner.test.ts`
- Added regression tests in
`test/regression/issue/bun-bytecode-shebang.test.ts` that verify:
- CJS wrapper executes when source has shebang
- CJS wrapper executes when source has shebang + bytecode pragma
- End-to-end: bundled bytecode output with source shebang runs correctly
- Ran the tests in the
[kjanat/bun-bytecode-banner-bug](https://github.com/kjanat/bun-bytecode-banner-bug)
repo to verify the issue is fixed
---------
Signed-off-by: Kaj Kowalski <info@kajkowalski.nl>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Fixes `ASSERTION FAILED: isUInt32AsAnyInt()` errors that occurred
intermittently, particularly in inspector-related tests.
## Problem
The code was directly manipulating JSPromise internal fields using
`asUInt32AsAnyInt()`:
```cpp
promise->internalField(JSC::JSPromise::Field::Flags).get().asUInt32AsAnyInt()
```
This caused assertion failures when the internal field state was not a
valid uint32.
## Solution
Use WebKit's official JSPromise helper methods instead:
| Before | After |
|--------|-------|
| Direct `internalField` manipulation for rejected promise | `promise->rejectAsHandled(vm, globalObject, value)` |
| Direct `internalField` manipulation for resolved promise | `promise->fulfill(vm, globalObject, value)` |
| Direct `internalField` manipulation for handled flag | `promise->markAsHandled()` |
## Files Changed
- `src/bun.js/bindings/ZigGlobalObject.cpp`
- `src/bun.js/bindings/ModuleLoader.cpp`
- `src/bake/BakeGlobalObject.cpp`
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Use `describe.concurrent` at module scope for parallel test execution
across node/bun executables and padding strategies
- Replace `Bun.spawnSync` with async `Bun.spawn` in memory leak test
- Replace `beforeEach`/`afterEach` server setup with `await using` in
each test
- Add `Symbol.asyncDispose` to `nodeEchoServer` helper for proper
cleanup
- Fix IPv6/IPv4 binding issue by explicitly binding echo server to
127.0.0.1
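A sketch of the `await using` shape the refactor moves to (the helper here is hypothetical; the real `nodeEchoServer` lives in the test's helpers):
```ts
import { test } from "bun:test";

async function echoServer() {
  const proc = Bun.spawn(["node", "echo-server.js"], { stdout: "pipe" });
  const url = "http://127.0.0.1:0"; // the real helper reads the bound port from the child
  return {
    url,
    async [Symbol.asyncDispose]() {
      proc.kill();
      await proc.exited;
    },
  };
}

test("example", async () => {
  await using server = await echoServer(); // disposed automatically at scope exit
  // ...exercise http2 against server.url...
});
```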
## Test plan
- [x] Run `bun test test/js/node/http2/node-http2.test.js` - all 245
tests pass (6 skipped)
- [x] Verify tests run faster due to concurrent execution
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Wrap all tests in `describe.concurrent` at module scope for parallel
test execution
- Replace `Bun.spawnSync` with `Bun.spawn` + `await` throughout
- Replace `run_dir`/`writeFile` pattern with `tempDir` for automatic
cleanup via `using` declarations
- Remove `beforeEach` hook that created shared temp directory
## Test plan
- [x] All 291 tests pass with `bun bd test
test/cli/install/bun-run.test.ts`
- [x] All tests pass with `USE_SYSTEM_BUN=1 bun test
test/cli/install/bun-run.test.ts`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Disable HTTP keep-alive when a proxy returns a 407 (Proxy
Authentication Required) status code
- This prevents subsequent requests from trying to reuse a connection
that the proxy server has closed
- Refactored proxy tests to use `describe.concurrent` and async
`Bun.spawn` patterns
## Test plan
- [x] Added test `simultaneous proxy auth failures should not hang` that
verifies multiple concurrent requests with invalid proxy credentials
complete without hanging
- [x] Existing proxy tests pass
🤖 Generated with [Claude Code](https://claude.ai/code)
## Summary
- **SQLClient.cpp**: Fix bug where `RETURN_IF_EXCEPTION` after
`JSONParse` would never trigger on JSON parse failure since `JSONParse`
doesn't throw
- **BunString.cpp**: Simplify by using `JSONParseWithException` instead
of manually checking for empty result and throwing
## Details
`JSC::JSONParse` returns an empty `JSValue` on failure **without
throwing an exception**. This means that `RETURN_IF_EXCEPTION(scope,
{})` will never catch JSON parsing errors when used after `JSONParse`.
Before this fix in `SQLClient.cpp`:
```cpp
JSC::JSValue json = JSC::JSONParse(globalObject, str);
RETURN_IF_EXCEPTION(scope, {}); // This never triggers on parse failure!
return json; // Returns empty JSValue
```
This could cause issues when parsing invalid JSON data from SQL
databases (e.g., PostgreSQL's JSON/JSONB columns).
`JSONParseWithException` properly throws a `SyntaxError` exception that
`RETURN_IF_EXCEPTION` can catch.
## Test plan
- [x] Build succeeds with `bun bd`
- The changes follow the same pattern used in `ModuleLoader.cpp` and
`BunObject.cpp`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Aligns Bun's temp directory resolution with Node.js's `os.tmpdir()`
behavior
- Checks `TMPDIR`, `TMP`, and `TEMP` environment variables in order
(matching Node.js)
- Uses `bun.once` for lazy initialization instead of mutable static
state
- Removes `setTempdir` function and simplifies the API to use
`RealFS.tmpdirPath()` directly
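A simplified sketch of the resolution order being matched (ignoring trailing-slash trimming and Windows-specific fallbacks):
```ts
function tmpdirLikeNode(): string {
  const { TMPDIR, TMP, TEMP } = process.env;
  return TMPDIR || TMP || TEMP || "/tmp";
}
```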
## Test plan
- [ ] Verify temp directory resolution matches Node.js behavior
- [ ] Test with various environment variable configurations
- [ ] Ensure existing tests pass with `bun bd test`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Without this change, building with `-DWEBKIT_LOCAL=ON` fails with:
```
/work/bun/src/bun.js/bindings/BunObject.cpp:12:10: fatal error: 'JavaScriptCore/JSBase.h' file not found
12 | #include <JavaScriptCore/JSBase.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
```
The reason is that the directory structure differs between downloaded and
local WebKit.
Downloaded WebKit:
```
build/debug/cache/webkit-6d0f3aac0b817cc0/
└── include/
└── JavaScriptCore/
└── JSBase.h ← Direct path
```
Local WebKit:
```
vendor/WebKit/WebKitBuild/Debug/
└── JavaScriptCore/Headers/
└── JavaScriptCore/
└── JSBase.h ← Nested path
```
The include paths are thus configured differently for each build type.

For remote WebKit (when `WEBKIT_LOCAL=OFF`):
- `SetupWebKit.cmake` line 22 sets `WEBKIT_INCLUDE_PATH = ${WEBKIT_PATH}/include`
- `BuildBun.cmake` line 1253 adds `include_directories(${WEBKIT_INCLUDE_PATH})`
- This resolves to `build/debug/cache/webkit-6d0f3aac0b817cc0/include/`
- So `#include <JavaScriptCore/JSBase.h>` finds the file at `include/JavaScriptCore/JSBase.h` ✅

For local WebKit (when `WEBKIT_LOCAL=ON`):
- The original code only added `${WEBKIT_PATH}/JavaScriptCore/Headers/JavaScriptCore`
- This resolves to `vendor/WebKit/WebKitBuild/Debug/JavaScriptCore/Headers/JavaScriptCore/`
- So `#include <JavaScriptCore/JSBase.h>` fails because there's no `JavaScriptCore/` subdirectory at that level ❌
- The fix adds `${WEBKIT_PATH}/JavaScriptCore/Headers`
- Now the include path includes `vendor/WebKit/WebKitBuild/Debug/JavaScriptCore/Headers/`
- So `#include <JavaScriptCore/JSBase.h>` finds the file at `Headers/JavaScriptCore/JSBase.h` ✅
### How did you verify your code works?
Built locally.
Co-authored-by: Carl Smedstad <carsme@archlinux.org>
### What does this PR do?
In CMake, failure to execute a process will place a message string in
the RESULT_VARIABLE. If the message string starts with 'No' such as in
'No such file or directory' then CMake will interpret that as the
boolean false and not halt the build. The new code uses a built-in
option to halt the build on any failure, so the script will halt
correctly if that error occurs. This could also be fixed by quoting, but
might as well use the CMake feature.
I encountered this error when I improperly defined BUN_EXECUTABLE.
### How did you verify your code works?
<details>
<summary>Ran the build with invalid executable path:</summary>
```
$ node ./scripts/build.mjs \
-GNinja \
-DCMAKE_BUILD_TYPE=Debug \
-B build/debug \
--log-level=NOTICE \
-DBUN_EXECUTABLE="foo" \
-DNPM_EXECUTABLE="$(which npm)" \
-DZIG_EXECUTABLE="$(which zig)" \
-DENABLE_ASAN=OFF
Globbed 1971 sources [178.21ms]
CMake Configure
$ cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug -B /work/bun/build/debug --log-level NOTICE -DBUN_EXECUTABLE=foo -DNPM_EXECUTABLE=/bin/npm -DZIG_EXECUTABLE=/bin/zig -DENABLE_ASAN=OFF -S /work/bun -DCACHE_STRATEGY=auto
sccache: Using local cache strategy.
```
</details>
Result:
```
CMake Error at cmake/targets/BuildBun.cmake:422 (execute_process):
execute_process failed command indexes:
1: "Abnormal exit with child return code: no such file or directory"
Call Stack (most recent call first):
CMakeLists.txt:66 (include)
```
### What does this PR do?
Before this change, downloading WebKit would fail silently if there were
e.g. a connection or TLS certificate issue. I experienced this when
trying to build bun in an overly-restrictive sandbox.
After this change, the failure reason will be visible.
### How did you verify your code works?
<details>
<summary>Brought down my network interface and ran the build:</summary>
```sh
$ node ./scripts/build.mjs \
-GNinja \
-DCMAKE_BUILD_TYPE=Debug \
-B build/debug \
--log-level=NOTICE \
-DBUN_EXECUTABLE="$(which node)" \
-DNPM_EXECUTABLE="$(which npm)" \
-DZIG_EXECUTABLE="$(which zig)" \
-DENABLE_ASAN=OFF
Globbed 1971 sources [180.67ms]
CMake Configure
$ cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug -B /work/bun/build/debug --log-level NOTICE -DBUN_EXECUTABLE=/bin/node -DNPM_EXECUTABLE=/bin/npm -DZIG_EXECUTABLE=/bin/zig -DENABLE_ASAN=OFF -S /work/bun -DCACHE_STRATEGY=auto
sccache: Using local cache strategy.
...
```
</details>
Result:
```
CMake Error at cmake/tools/SetupWebKit.cmake:99 (message):
Failed to download WebKit: 6;"Could not resolve hostname"
Call Stack (most recent call first):
cmake/targets/BuildBun.cmake:1204 (include)
CMakeLists.txt:66 (include)
```
## Summary
Fixes #20821
When `bun build --compile` was used with 8 or more embedded files, the
compiled binary would silently fail to execute any code (exit code 0, no
output).
**Root cause:** Chunks were sorted alphabetically by their `entry_bits`
key bytes. For entry point 0, the key starts with bit 0 set (byte
pattern `0x01`), but for entry point 8, the set bit falls into the second
byte (pattern `0x00, 0x01`). Alphabetically, `0x00 < 0x01`, so entry
point 8's chunk sorted before entry point 0's.
This caused the wrong entry point to be identified as the main entry,
resulting in asset wrapper code being executed instead of the user's
code.
**Fix:** Custom sort that ensures `entry_point_id=0` (the main entry
point) always sorts first, with remaining chunks sorted alphabetically
for determinism.
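An illustrative comparator for the new ordering (the real sort is in Zig over `entry_bits` keys):
```ts
type Chunk = { entryPointId: number; key: string };

function compareChunks(a: Chunk, b: Chunk): number {
  if (a.entryPointId === 0 && b.entryPointId === 0) return 0;
  if (a.entryPointId === 0) return -1; // main entry point always first
  if (b.entryPointId === 0) return 1;
  return a.key < b.key ? -1 : a.key > b.key ? 1 : 0; // alphabetical otherwise
}
```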
## Test plan
- Added regression test `compile/ManyEmbeddedFiles` that embeds 8 files
and verifies the main entry point runs correctly
- Verified manually with reproduction case from issue
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
It is standard practice to send HTTP2 window resize frames once half the
window is used up:
- [nghttp2](https://en.wikipedia.org/wiki/Nghttp2#HTTP/2_implementation)
- [Go](https://github.com/golang/net/blob/master/http2/flow.go#L31)
- [Rust
h2](https://github.com/hyperium/h2/blob/master/src/proto/streams/flow_control.rs#L17)
The current behaviour of Bun is to send a window update once the client
has sent 65535 bytes exactly. This leads to an interruption in
throughput while the window update is received.
This is not just a performance concern, however: some clients will not
handle it well, because stopping even 1 byte short of 65535 produces a
deadlock. I came across this issue because the Rust `hyper` crate always
reserves an additional 1 byte of the connection window for each HTTP/2
stream
(https://github.com/hyperium/hyper/blob/master/src/proto/h2/mod.rs#L122).
With two concurrent requests, the client then treats the window as
exhausted while a byte actually remains, creating a sequential dependency
between the requests that can deadlock if they depend on running
concurrently. I accept this is not a problem with Bun, but it's a happy
accident that we can resolve such off-by-one issues by increasing the
window size once it is 50% utilized.
### How did you verify your code works?
Using wireshark, bun debug logging, and client logging I can observe
that the window updates are now sent after 32767 bytes. This also
resolves the h2 crate client issue.
### What does this PR do?
- I was trying to understand how SEA bundling works in Bun and noticed
that the size of the `Offsets` struct is hard-coded here to 32. It
should use the computed size to be future-proof against changes in the
schema.
### How did you verify your code works?
- I didn't. Can add tests if this is not covered by existing tests.
ChatGPT agreed with me though. =)
## Summary
This PR fixes a memory leak that occurs when `fetch()` is called with a
`ReadableStream` body. The ReadableStream objects were not being
properly released, causing them to accumulate in memory.
## Problem
When using `fetch()` with a ReadableStream body:
```javascript
const stream = new ReadableStream({
start(controller) {
controller.enqueue(new TextEncoder().encode("data"));
controller.close();
}
});
await fetch(url, { method: "POST", body: stream });
```
The ReadableStream objects leak because `FetchTasklet.clearData()` has a
conditional that prevents `detach()` from being called on ReadableStream
request bodies after streaming has started.
### Root Cause
The problematic condition in `clearData()`:
```zig
if (this.request_body != .ReadableStream or this.is_waiting_request_stream_start) {
this.request_body.detach();
}
```
After `startRequestStream()` is called:
- `is_waiting_request_stream_start` becomes `false`
- `request_body` is still `.ReadableStream`
- The condition evaluates to `(false or false) = false`
- `detach()` is skipped → **memory leak**
### Why the Original Code Was Wrong
The original code appears to assume that when `startRequestStream()` is
called, ownership of the Strong reference is transferred to
`ResumableSink`. However, this is incorrect:
1. `startRequestStream()` creates a **new independent** Strong reference
in `ResumableSink` (see `ResumableSink.zig:119`)
2. The FetchTasklet's original reference is **not transferred** - it
becomes redundant
3. Strong references in Bun are independent - calling `deinit()` on one
does not affect the other
## Solution
Remove the conditional and always call `detach()`:
```zig
// Always detach request_body regardless of type.
// When request_body is a ReadableStream, startRequestStream() creates
// an independent Strong reference in ResumableSink, so FetchTasklet's
// reference becomes redundant and must be released to avoid leaks.
this.request_body.detach();
```
### Safety Analysis
This change is safe because:
1. **Strong references are independent**: Each Strong reference
maintains its own ref count. Detaching FetchTasklet's reference doesn't
affect ResumableSink's reference
2. **Idempotency**: `detach()` is safe to call on already-detached
references
3. **Timing**: `clearData()` is only called from `deinit()` after
streaming has completed (ref_count = 0)
4. **No UAF risk**: `deinit()` only runs when ref_count reaches 0, which
means all streaming operations have completed
## Test Results
Before fix (with system Bun):
```
Expected: <= 100
Received: 501 (Request objects leaked)
Received: 1002 (ReadableStream objects leaked)
```
After fix:
```
6 pass
0 fail
```
## Test Coverage
Added comprehensive tests in
`test/js/web/fetch/fetch-cyclic-reference.test.ts` covering:
- Response stream leaks with cyclic references
- Streaming response body leaks
- Request body stream leaks with cyclic references
- ReadableStream body leaks (no cyclic reference needed to reproduce)
- Concurrent fetch operations with cyclic references
---------
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Include the Windows module definition file (symbols.def) in
link-metadata.json, similar to how
symbols.txt and symbols.dyn are already included for macOS and Linux.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
- Introduced `requestPayer` option in S3-related functions and
structures to handle Requester Pays buckets.
- Updated S3 client methods to accept and propagate the `requestPayer`
flag.
- Enhanced documentation for the `requestPayer` option in the S3 type
definitions.
- Adjusted existing S3 operations to utilize the `requestPayer`
parameter where applicable, ensuring compatibility with AWS S3's
Requester Pays feature.
- Ensured that the new functionality is integrated into multipart
uploads and simple requests.
### What does this PR do?
This change allows users to specify whether they are willing to pay for
data transfer costs when accessing objects in Requester Pays buckets,
improving flexibility and compliance with AWS S3's billing model.
This closes #25499
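A hedged usage sketch, assuming the option is accepted alongside the other client options (region and exact placement may differ):
```ts
import { S3Client } from "bun";

const s3 = new S3Client({
  bucket: "hl-mainnet-evm-blocks",
  region: "us-east-1",
  // Opt in to paying the transfer costs on a Requester Pays bucket.
  requestPayer: true,
});

const block = await s3.file("0/0/1.rmp.lz4").arrayBuffer();
```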
### How did you verify your code works?
I have added a new test file to verify this functionality, and all my
tests pass.
I also tested this against an actual S3 bucket that can only be accessed
if the requester pays. I can confirm that it's accessible when
`requestPayer` is `true`, and that the default of `false` does not allow
access.
An example bucket is here: s3://hl-mainnet-evm-blocks/0/0/1.rmp.lz4
(My use case is indexing [hyperliquid block
data](https://hyperliquid.gitbook.io/hyperliquid-docs/for-developers/hyperevm/raw-hyperevm-block-data),
which is stored in S3, and I want to use Bun to index faster.)
---------
Co-authored-by: Alistair Smith <hi@alistair.sh>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Ciro Spaciari <ciro.spaciari@gmail.com>
### What does this PR do?
the `sql()` helper now filters out `undefined` values in INSERT
statements instead of converting them to `NULL`. This allows columns
with `DEFAULT` values to use their defaults when `undefined` is passed,
rather than being overridden with `NULL`.
**Before:** `sql({ foo: undefined, id: "123" })` in INSERT would
generate `(foo, id) VALUES (NULL, "123")`, causing NOT NULL constraint
violations even when the column has a DEFAULT.
**After:** `sql({ foo: undefined, id: "123" })` in INSERT generates
`(id) VALUES ("123")`, omitting the undefined column entirely and
letting the database use the DEFAULT value.
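A sketch of the new behavior with Bun's tagged-template `sql` helper (table and column names are illustrative):
```ts
import { sql } from "bun";

const row = { id: "123", foo: undefined };

await sql`INSERT INTO things ${sql(row)}`;
// before: INSERT INTO things (foo, id) VALUES (NULL, '123')  -- NOT NULL violation
// after:  INSERT INTO things (id) VALUES ('123')             -- DEFAULT applies to foo
```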
Also fixes a data loss bug in bulk inserts where columns were determined
only from the first item - now all items are checked, so values in later
items aren't silently dropped.
Fixes #25829
### How did you verify your code works?
- Added regression test for #25829 (NOT NULL column with DEFAULT)
- Added tests for bulk insert with mixed undefined patterns which is the
data loss scenario
## Summary
- Fix CMake JSON parsing error when Buildkite API returns commit
messages with newlines
CMake's `file(READ ...)` reads files with literal newlines, which breaks
`string(JSON ...)` when the JSON contains escape sequences like `\n` in
string values (e.g., commit messages from Buildkite API).
Use `file(STRINGS ...)` to read line-by-line, then join with `\n` to
preserve valid JSON escape sequences while avoiding literal newlines.
## Test plan
- Verify CMake configure succeeds when Buildkite build has commit
messages with newlines
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix O(n²) performance bug in JSON mode IPC when receiving large
messages that arrive in chunks
- Add `JsonIncomingBuffer` wrapper that tracks newline positions to
avoid re-scanning
- Each byte is now scanned exactly once (on arrival or when preceding
message is consumed)
## Problem
When data arrives in chunks in JSON mode, `decodeIPCMessage` was calling
`indexOfChar(data, '\n')` on the ENTIRE accumulated buffer every time.
For a 10MB message arriving in 160 chunks of 64KB:
- Chunk 1: scan 64KB
- Chunk 2: scan 128KB
- Chunk 3: scan 192KB
- ...
- Chunk 160: scan 10MB
Total: ~800MB scanned for one 10MB message.
## Solution
Introduced a `JsonIncomingBuffer` struct that:
1. Tracks `newline_pos: ?u32` - position of known upcoming newline (if
any)
2. On `append(bytes)`: Only scans new chunk for `\n` if no position is
cached
3. On `consume(bytes)`: Updates or re-scans as needed after message
processing
This ensures O(n) scanning instead of O(n²).
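An illustrative TypeScript version of the idea (the real `JsonIncomingBuffer` is a Zig struct):
```ts
class NewlineTrackingBuffer {
  private buf = Buffer.alloc(0);
  private newlinePos: number | null = null; // cached position of the next '\n'

  append(chunk: Buffer): void {
    const base = this.buf.length;
    this.buf = Buffer.concat([this.buf, chunk]);
    // Only the new bytes are scanned, and only if no newline is cached yet.
    if (this.newlinePos === null) {
      const i = chunk.indexOf(0x0a);
      if (i !== -1) this.newlinePos = base + i;
    }
  }

  // Returns one complete message (without the trailing '\n'), or null.
  nextMessage(): Buffer | null {
    if (this.newlinePos === null) return null;
    const line = this.buf.subarray(0, this.newlinePos);
    this.buf = this.buf.subarray(this.newlinePos + 1);
    // Re-scan only the unconsumed remainder for the next newline.
    const i = this.buf.indexOf(0x0a);
    this.newlinePos = i === -1 ? null : i;
    return line;
  }
}
```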
## Test plan
- [x] `bun run zig:check-all` passes (all platforms compile)
- [x] `bun bd test test/js/bun/spawn/spawn.ipc.test.ts` - 4 tests pass
- [x] `bun bd test test/js/node/child_process/child_process_ipc.test.js`
- 1 test pass
- [x] `bun bd test test/js/bun/spawn/bun-ipc-inherit.test.ts` - 1 test
pass
- [x] `bun bd test test/js/bun/spawn/spawn.ipc.bun-node.test.ts` - 1
test pass
- [x] `bun bd test test/js/bun/spawn/spawn.ipc.node-bun.test.ts` - 1
test pass
- [x] `bun bd test
test/js/node/child_process/child_process_ipc_large_disconnect.test.js` -
1 test pass
- [x] Manual verification with `child-process-send-cb-more.js` (32KB
messages)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
Optimize `Buffer.indexOf` and `Buffer.includes` by replacing
`std::search` with Highway SIMD-optimized functions:
- **Single-byte search**: `highway_index_of_char` - SIMD-accelerated
character search
- **Multi-byte search**: `highway_memmem` - SIMD-accelerated substring
search
These Highway functions are already used throughout Bun's codebase for
fast string searching (URL parsing, JS lexer, etc.) and provide
significant speedups for pattern matching in large buffers.
### Changes
```cpp
// Before: std::search (scalar)
auto it = std::search(haystack.begin(), haystack.end(), needle.begin(), needle.end());
// After: SIMD-optimized
if (valueLength == 1) {
size_t result = highway_index_of_char(haystackPtr, haystackLen, valuePtr[0]);
} else {
void* result = highway_memmem(haystackPtr, haystackLen, valuePtr, valueLength);
}
```
## Test plan
- [x] Debug build compiles successfully
- [x] Basic functionality verified (`indexOf`, `includes` with single
and multi-byte patterns)
- [ ] Existing Buffer tests pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Fixes #25716
Adds support for a `reactFastRefresh: boolean` option in the `Bun.build`
JavaScript API, matching the existing `--react-fast-refresh` CLI flag.
```ts
const result = await Bun.build({
reactFastRefresh: true,
entrypoints: ["src/App.tsx"],
});
```
When enabled, the bundler adds React Fast Refresh transform code
(`$RefreshReg$`, `$RefreshSig$`) to the output.
## Summary
Apply the same optimization technique from PR #25717 (Response.json) to
other APIs that use JSON.stringify internally:
- **IPC message serialization** (`ipc.zig`) - used for inter-process
communication
- **console.log with %j format** (`ConsoleObject.zig`) - commonly used
for debugging
- **PostgreSQL JSON/JSONB types** (`PostgresRequest.zig`) - database
operations
- **MySQL JSON type** (`MySQLTypes.zig`) - database operations
- **Jest %j/%o format specifiers** (`jest.zig`) - test output formatting
- **Transpiler tsconfig/macros** (`JSTranspiler.zig`) - build
configuration
### Root Cause
When calling `JSONStringify(globalObject, value, 0)`, the space
parameter `0` becomes `jsNumber(0)`, which is NOT `undefined`. This
causes JSC's FastStringifier (SIMD-optimized) to bail out:
```cpp
// In WebKit's JSONObject.cpp FastStringifier::stringify()
if (!space.isUndefined()) {
logOutcome("space"_s);
return { }; // Bail out to slow path
}
```
Using `jsonStringifyFast` which passes `jsUndefined()` triggers the fast
path.
### Expected Performance Improvement
Based on PR #25717 results, these changes should provide ~3x speedup for
JSON serialization in the affected APIs.
## Test plan
- [x] Debug build compiles successfully
- [x] Basic functionality verified (IPC, console.log %j, Response.json)
- [x] Existing tests pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Add maximum decompressed message size limit to WebSocket client
deflate handling
- Add test coverage for decompression limits
## Test plan
- Run `bun test
test/js/web/websocket/websocket-permessage-deflate-edge-cases.test.ts`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Closes #8254
Fixes a data corruption bug in `Bun.write()` where files larger than 2GB
would have chunks skipped resulting in corrupted output with missing
data.
The `doWriteLoop` had an issue where it effectively applied the offset
twice for every 2GB chunk:
- it first sliced the buffer by `total_written`:
`remain = remain[@min(this.total_written, remain.len)..]`
- it then incremented `bytes_blob.offset`:
`this.bytes_blob.offset += @truncate(wrote)`

But because `sharedView()` already applies the blob offset
(`slice_ = slice_[this.offset..]`), the offset ended up being applied twice.
In a local reproduction writing a 16GB file with each 2GB chunk filled with incrementing values `[1, 2, 3, 4, 5, 6, 7, 8]`, the buggy version produced: `[1, 3, 5, 7, …]`, skipping every other chunk.
The fix is to simply remove the redundant manual offset and rely only on `total_written` to track write progress.
### What does this PR do?
Enables `CHECK_REF_COUNTED_LIFECYCLE` in WebKit
(https://github.com/oven-sh/WebKit/pull/121).
See also a978fae619.
#### `CHECK_REF_COUNTED_LIFECYCLE`?
A compile-time macro that enables lifecycle validation for
reference-counted objects in debug builds.
**Definition**
```cpp
#if ASSERT_ENABLED || ENABLE(SECURITY_ASSERTIONS)
#define CHECK_REF_COUNTED_LIFECYCLE 1
#else
#define CHECK_REF_COUNTED_LIFECYCLE 0
#endif
```
**Purpose**
Detects three categories of bugs:
1. Missing adoption - Objects stored in RefPtr without using adoptRef()
2. Ref during destruction - ref() called while destructor is running
(causes dangling pointers)
3. Thread safety violations - Unsafe ref/deref across threads
**Implementation**
When enabled, RefCountDebugger adds two tracking flags:
- m_deletionHasBegun - Set when destructor starts
- m_adoptionIsRequired - Cleared when adoptRef() is called
These flags are checked on every ref()/deref() call, with assertions
failing on violations.
**Motivation**
Refactored debug code into a separate RefCountDebugger class to:
- Improve readability of core refcount logic
- Eliminate duplicate code across RefCounted, ThreadSafeRefCounted, etc.
- Simplify adding new refcount classes
**Overhead**
Zero in release builds - the flags and checks are compiled out entirely.
### How did you verify your code works?
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Fix performance regression where `Response.json()` was 2-3x slower
than `JSON.stringify() + new Response()`
- Root cause: The existing code called `JSC::JSONStringify` with
`indent=0`, which internally passes `jsNumber(0)` as the space
parameter. This bypasses WebKit's FastStringifier optimization.
- Fix: Add a new `jsonStringifyFast` binding that passes `jsUndefined()`
for the space parameter, triggering JSC's FastStringifier
(SIMD-optimized) code path.
## Root Cause Analysis
In WebKit's `JSONObject.cpp`, the `stringify()` function has this logic:
```cpp
static NEVER_INLINE String stringify(JSGlobalObject& globalObject, JSValue value, JSValue replacer, JSValue space)
{
// ...
if (String result = FastStringifier<Latin1Character, BufferMode::StaticBuffer>::stringify(globalObject, value, replacer, space, failureReason); !result.isNull())
return result;
// Falls back to slow Stringifier...
}
```
And `FastStringifier::stringify()` checks:
```cpp
if (!space.isUndefined()) {
logOutcome("space"_s);
return { }; // Bail out to slow path
}
```
So when we called `JSONStringify(globalObject, value, (unsigned)0)`, it
converted to `jsNumber(0)` which is NOT `undefined`, causing
FastStringifier to bail out.
## Performance Results
### Before (3.5x slower than manual approach)
```
Response.json(): 2415ms
JSON.stringify() + Response(): 689ms
Ratio: 3.50x
```
### After (parity with manual approach)
```
Response.json(): ~700ms
JSON.stringify() + Response(): ~700ms
Ratio: ~1.09x
```
## Test plan
- [x] Existing `Response.json()` tests pass
(`test/regression/issue/21257.test.ts`)
- [x] Response tests pass (`test/js/web/fetch/response.test.ts`)
- [x] Manual verification that output is correct for various JSON inputs
Fixes #25693
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Sosuke Suzuki <sosuke@bun.com>
## What does this PR do?
Switch `Bun.hash.crc32` to use `zlib`'s CRC32 implementation. Bun
already links `zlib`, which provides highly optimized,
hardware-accelerated CRC32. Because `zlib.crc32` takes a 32-bit length,
chunk large inputs to avoid truncation/regressions on buffers >4GB.
Note: This was tried before (PR #12164 by Jarred), which switched CRC32
to zlib for speed. This proposal keeps that approach and adds explicit
chunking to avoid the >4GB length pitfall.
**Problem:** `Bun.hash.crc32` is a significant outlier in
microbenchmarks compared to other hash functions (about 21x slower than
`zlib.crc32` in a 1MB test on M1).
**Root cause:** `Bun.hash.crc32` uses Zig's `std.hash.Crc32`
implementation, which is software-only and does not leverage hardware
acceleration (e.g., `PCLMULQDQ` on x86 or `CRC32` instructions on ARM).
**Implementation:**
https://github.com/oven-sh/bun/blob/main/src/bun.js/api/HashObject.zig
```zig
pub const crc32 = hashWrap(struct {
pub fn hash(seed: u32, bytes: []const u8) u32 {
// zlib takes a 32-bit length, so chunk large inputs to avoid truncation.
var crc: u64 = seed;
var offset: usize = 0;
while (offset < bytes.len) {
const remaining = bytes.len - offset;
const max_len: usize = std.math.maxInt(u32);
const chunk_len: u32 = if (remaining > max_len) @intCast(max_len) else @intCast(remaining);
crc = bun.zlib.crc32(crc, bytes.ptr + offset, chunk_len);
offset += chunk_len;
}
return @intCast(crc);
}
});
```
### How did you verify your code works?
**Benchmark (1MB payload):**
- **Before:** Bun 1.3.5 `Bun.hash.crc32` = 2,644,444 ns/op vs
`zlib.crc32` = 124,324 ns/op (~21x slower)
- **After (local bun-debug):** `Bun.hash.crc32` = 360,591 ns/op vs
`zlib.crc32` = 359,069 ns/op (~1.0x), results match
## Test environment
- **Machine:** MacBook Pro 13" (M1, 2020)
- **OS:** macOS 15.7.3
- **Baseline Bun:** 1.3.5
- **After Bun:** local `bun-debug` (build/debug)
## Summary
- Reject null bytes in command-line arguments passed to `Bun.spawn` and
`Bun.spawnSync`
- Reject null bytes in environment variable keys and values
- Reject null bytes in shell (`$`) template literal arguments
This prevents null byte injection attacks (CWE-158) where null bytes in
strings could cause unintended truncation when passed to the OS,
potentially allowing attackers to bypass file extension validation or
create files with unexpected names.
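A sketch of inputs that are now rejected (assuming rejection surfaces as a thrown error, as the tests do):
```ts
import { expect, test } from "bun:test";

test("NUL bytes in spawn inputs are rejected", () => {
  // Previously the OS would truncate at the NUL byte, so "evil.sh\0.txt"
  // could slip past an extension check as ".txt".
  expect(() => Bun.spawnSync(["echo", "evil.sh\0.txt"])).toThrow();

  // Environment variable keys and values get the same validation.
  expect(() =>
    Bun.spawn(["env"], { env: { GOOD_KEY: "bad\0value" } }),
  ).toThrow();
});
```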
## Test plan
- [x] Added tests in `test/js/bun/spawn/null-byte-injection.test.ts`
- [x] Tests pass with debug build: `bun bd test
test/js/bun/spawn/null-byte-injection.test.ts`
- [x] Tests fail with system Bun (confirming the fix works)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fixes dead code elimination producing invalid syntax like `{ ...a, x:
}` when simplifying empty objects in spread contexts
- The issue was that `simplifyUnusedExpr` and `joinAllWithCommaCallback`
could return `E.Missing` instead of `null` to indicate "no side effects"
- Added checks to return `null` when the result is `E.Missing`
Fixes #25609
## Test plan
- [x] Added regression test that fails on v1.3.5 and passes with fix
- [x] `bun bd test test/regression/issue/25609.test.ts` passes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Remove sccache support entirely, use ccache only
- Missing ccache no longer fails the build (just skips caching)
- Remove S3 distributed cache support
## Changes
- Remove `cmake/tools/SetupSccache.cmake` and S3 distributed cache
support
- Simplify `CMakeLists.txt` to only use ccache
- Update `SetupCcache.cmake` to not fail when ccache is missing
- Replace sccache with ccache in bootstrap scripts (sh, ps1)
- Update `.buildkite/Dockerfile` to install ccache instead of sccache
- Update `flake.nix` and `shell.nix` to use ccache
- Update documentation (CONTRIBUTING.md, contributing.mdx,
building-windows.mdx)
- Remove `scripts/build-cache/` directory (was only for sccache S3
access)
## Test plan
- [x] Build completes successfully with `bun bd`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
- Adds a test that fails before the code change, and updates the previous
test so the dependency's script takes a bit of time to execute. Without
the `setTimeout` in the tests, race conditions made them always pass. I
tried a single test combining both cases, with dependencies `dep0` and
`larger-than-8-char`, but it also always passes if both use the same
timeout.
- Fixes the new use case by using the correct buffer for
`Dependency.name`; otherwise it contains garbage when the package name is
longer than 8 characters. This should fix #12203
### How did you verify your code works?
Reverted the code changes to verify the new test fails, then re-applied them and confirmed the test passes.
- WTFMove → WTF::move / std::move: Replaced WTFMove() macro with
WTF::move() function for WTF types, std::move() for std types
- SortedArrayMap removed: Replaced with if-else chains in
EventFactory.cpp, JSCryptoKeyUsage.cpp
- Wasm::Memory::create signature changed: Removed VM parameter
- URLPattern allocation: Changed from WTF_MAKE_ISO_ALLOCATED to
WTF_MAKE_TZONE_ALLOCATED
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Migrate Cursor rules to Claude Code skills format
- Add 4 new skills for development guidance:
- `writing-dev-server-tests`: HMR/dev server test guidance
- `implementing-jsc-classes-cpp`: C++ JSC class implementation
- `implementing-jsc-classes-zig`: Zig JSC bindings generator
- `writing-bundler-tests`: bundler test guidance with itBundled
- Remove all `.cursor/rules/` files
## Test plan
- [x] Skills follow Claude Code skill authoring guidelines
- [x] Each skill has proper YAML frontmatter with name and description
- [x] Skills are concise and actionable
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fixed documentation that incorrectly claimed you could use `[env]` as
a TOML section to set environment variables directly
- The `env` option in bunfig.toml only controls whether automatic `.env`
file loading is disabled (via `env = false`)
- Updated to show the correct approaches: using preload scripts or
`.env` files with `--env-file`
## Test plan
- Documentation-only change, no code changes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix LLVM installation on Debian Trixie (13) by using the unstable
repository from apt.llvm.org
The `llvm.sh` script doesn't automatically detect that Debian Trixie
needs to use the unstable repository. This is because trixie's `VERSION`
is `13 (trixie)` rather than `testing`, and apt.llvm.org doesn't have a
dedicated trixie repository.
Without this fix, the LLVM installation falls back to Debian's main
repository packages, which don't include `libclang-rt-19-dev` (the
compiler-rt sanitizer libraries) by default. This causes builds with
ASan (AddressSanitizer) to fail with:
```
ld.lld: error: cannot open /usr/lib/llvm-19/lib/clang/19/lib/x86_64-pc-linux-gnu/libclang_rt.asan.a: No such file or directory
```
This was breaking the [Daily Docker
Build](https://github.com/oven-sh/bun-development-docker-image/actions/runs/20437290601)
in the bun-development-docker-image repo.
## Test plan
- [ ] Wait for the PR CI to pass
- [ ] After merging, the next Daily Docker Build should succeed
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Add `MemfdFlags` enum to replace raw integer flags for `memfd_create`,
providing semantic clarity for different use cases (`executable`,
`non_executable`, `cross_process`)
- Add support for `MFD_EXEC` and `MFD_NOEXEC_SEAL` flags (Linux 6.3+)
with automatic fallback to older kernel flags when `EINVAL` is returned
- Use memfd + `/proc/self/fd/{fd}` path for loading embedded `.node`
files in standalone builds, avoiding disk writes entirely on Linux
## Test plan
- [ ] Verify standalone builds with embedded `.node` files work on Linux
- [ ] Verify fallback works on older kernels (pre-6.3)
- [ ] Verify subprocess stdio memfd still works correctly
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix segmentation fault in `bun create` when using `--no-install` with
a template that has a `bun-create.postinstall` task starting with "bun "
- The bug was caused by unconditionally slicing `argv[2..]` which
created an empty array when `npm_client` was null
- Added check for `npm_client != null` before slicing
## Reproduction
```bash
# Create template with bun-create.postinstall
mkdir -p ~/.bun-create/test-template
echo '{"name":"test","bun-create":{"postinstall":"bun install"}}' > ~/.bun-create/test-template/package.json
# This would crash before the fix
bun create test-template /tmp/my-app --no-install
```
## Test plan
- [x] Verified the reproduction case crashes before the fix
- [x] Verified the reproduction case works after the fix
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Fix out-of-bounds access when parsing `NO_PROXY` environment variable
with empty entries
- Empty entries (e.g., `"localhost, , example.com"`) would cause a panic
when checking if the host starts with a dot
- Skip empty entries after trimming whitespace
fixes BUN-110G
fixes BUN-128V
## Test plan
- [x] Verify `NO_PROXY="localhost, , example.com"` no longer crashes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix use-after-free crash in async zstd compression, scrypt, and
JSTranspiler operations
- When `StringOrBuffer.fromJSMaybeAsync` is called with `is_async=true`,
the buffer's JSValue is now protected from garbage collection
- Previously, the buffer could be GC'd while a worker thread was still
accessing it, causing segfaults in zstd's `HIST_count_simple` and
similar functions
Fixes BUN-167Z
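A hedged sketch of the hazard window, using `node:crypto`'s async `scrypt` as a stand-in for the affected async paths; `Bun.gc` is only used here to make collection more likely while the worker thread is busy:
```ts
import { scrypt } from "node:crypto";

function kickOff() {
  // Nothing outside this function keeps a reference to the input buffers, so
  // without the protection added in this change they could be collected while
  // the worker thread is still reading them.
  scrypt(Buffer.from("password"), Buffer.from("salt"), 64, (err, key) => {
    if (err) throw err;
    console.log(key.length); // 64
  });
}

kickOff();
Bun.gc(true); // force a synchronous GC while the async work may still be in flight
```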
## Changes
- `fromJSMaybeAsync`: Call `protect()` on buffer when `is_async=true`
- `fromJSWithEncodingMaybeAsync`: Same protection for the early return
path
- `Scrypt`: Fix cleanup to use `deinitAndUnprotect()` for async path,
add missing `deinit()` in sync path
- `JSTranspiler`: Use new protection mechanism instead of manual
`protect()`/`unprotect()` calls
- Simplify `createOnJSThread` signatures to not return errors (OOM is
handled internally)
- Update all callers to use renamed/simplified APIs
## Test plan
- [x] Code review of all callsites to verify correct protect/unprotect
pairing
- [ ] Run existing zstd tests
- [ ] Run existing scrypt tests
- [ ] Run existing transpiler tests
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Add periodic memory reclamation for IPC buffers after processing
messages
- Fix missing `deref()` on `bun.String` created from `cmd` property in
`handleIPCMessage`
- Add `reclaimMemory()` function to shrink incoming buffer and send
queue when they exceed 2MB capacity
- Track message count to trigger memory reclamation every 256 messages
The incoming `ByteList` buffer and send queue `ArrayList` would grow but
never shrink, causing memory accumulation during sustained IPC
messaging.
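A hedged sketch of the sustained-messaging pattern the reclamation targets; `child.ts` is a placeholder script that would reply via `process.send`:
```ts
// parent.ts
const child = Bun.spawn(["bun", "child.ts"], {
  ipc(message) {
    // Replies stream back here; the incoming ByteList used to keep its peak
    // capacity forever, and now shrinks once it exceeds 2MB.
  },
});

// Sustained messaging grows the send queue and the incoming buffer.
for (let i = 0; i < 10_000; i++) {
  child.send({ seq: i, payload: "x".repeat(1024) });
}
```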
## Test plan
- [x] Added regression tests in
`test/js/bun/spawn/spawn-ipc-memory.test.ts`
- [x] Existing IPC tests pass (`spawn.ipc.test.ts`)
- [x] Existing cluster tests pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Closes #25505. This adjusts the byte length check in `DataCell.fromBytes` to 12 bytes instead of 16, as zero-dimensional arrays have a shorter preamble.
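For context, a hedged sketch of a query that returns a zero-dimensional (empty) array, using Bun's `sql` export and assuming connection settings come from the environment:
```ts
import { sql } from "bun";

// Empty (zero-dimensional) arrays carry a shorter binary preamble than
// one-dimensional arrays, which is what the adjusted length check accounts for.
const [row] = await sql`SELECT ARRAY[]::int[] AS empty`;
console.log(row.empty); // []

// The report suggests the problem only appeared on a reused connection, so
// running the same query again on the same connection is the interesting case.
const [again] = await sql`SELECT ARRAY[]::int[] AS empty`;
console.log(again.empty); // []
```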
### How did you verify your code works?
Test suite passes, and I've added a new test that fails in the main
branch but passes with this change. The issue only seems to crop up when
a connection is _reused_, which is curious.
## Summary
Fix several memory leaks in the compression libraries:
- **NativeBrotli/NativeZstd reset()** - Each call to `reset()` allocated
a new encoder/decoder without freeing the previous one
- **NativeBrotli/NativeZstd init() error paths** - If `setParams()`
failed after `stream.init()` succeeded, the instance was leaked
- **NativeZstd init()** - If `setPledgedSrcSize()` failed after context
creation, the context was leaked
- **ZlibCompressorArrayList** - After `deflateInit2_()` succeeded, if
`ensureTotalCapacityPrecise()` failed with OOM, zlib internal state was
never freed
- **NativeBrotli close()** - Now sets state to null to prevent potential
double-free (defensive)
- **LibdeflateState** - Added `deinit()` for API consistency
## Test plan
- [x] Added regression test that calls `reset()` 100k times and measures
memory growth
- [x] Test shows memory growth dropped from ~600MB to ~10MB for Brotli
- [x] Verified no double-frees by tracing code paths
- [x] Existing zlib tests pass (except pre-existing timeout in debug
build)
Before fix (system bun 1.3.3):
```
Memory growth after 100000 reset() calls: 624.38 MB (BrotliCompress)
Memory growth after 100000 reset() calls: 540.63 MB (BrotliDecompress)
```
After fix:
```
Memory growth after 100000 reset() calls: 11.84 MB (BrotliCompress)
Memory growth after 100000 reset() calls: 0.16 MB (BrotliDecompress)
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Add missing `UV_UNKNOWN` and `UV_EAI_*` error code mappings to the
`errno()` function in `ReturnCode`
- Fixes panic "integer does not fit in destination type" on Windows when
libuv returns unmapped error codes
- Speculative fix for BUN-131E
## Root Cause
The `errno()` function was missing mappings for `UV_UNKNOWN` (-4094) and
all `UV_EAI_*` address info errors (-3000 to -3014). When libuv returned
these codes, the switch fell through to `else => null`, and the caller
at `sys_uv.zig:317` assumed success and tried to cast the negative
return code to `usize`, causing a panic.
This was triggered in `readFileWithOptions` -> `preadv` when:
- Memory-mapped file operations encounter exceptions (file
modified/truncated by another process, network drive issues)
- Windows returns error codes that libuv cannot map to standard errno
values
## Crash Report
```
Bun v1.3.5 (1e86ceb) on windows x86_64baseline []
panic: integer does not fit in destination type
sys_uv.zig:294: preadv
node_fs.zig:5039: readFileWithOptions
```
## Test plan
- [ ] This fix prevents a panic, converting it to a proper error.
Testing would require triggering `UV_UNKNOWN` from libuv, which is
difficult to do reliably (requires memory-mapped file exceptions or
unusual Windows errors).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Fix two off-by-one bounds check errors that used `>` instead of `>=`
- Both bugs could cause undefined behavior (array out-of-bounds access)
when an index equals the array length
## The Bugs
### 1. `src/install/postinstall_optimizer.zig:62`
```zig
// Before (buggy):
if (resolution > metas.len) continue;
const meta: *const Meta = &metas[resolution]; // Out-of-bounds when resolution == metas.len
// After (fixed):
if (resolution >= metas.len) continue;
```
### 2. `src/bundler/linker_context/doStep5.zig:10`
```zig
// Before (buggy):
if (id > c.graph.meta.len) return;
const resolved_exports = &c.graph.meta.items(.resolved_exports)[id]; // Out-of-bounds when id == c.graph.meta.len
// After (fixed):
if (id >= c.graph.meta.len) return;
```
## Why These Are Bugs
Valid array indices are `0` to `len - 1`. When `index == len`:
- `index > len` evaluates to `false` → check passes
- `array[index]` accesses `array[len]` → out-of-bounds / undefined
behavior
## Codebase Patterns
The rest of the codebase correctly uses `>=` for these checks:
- `lockfile.zig:484`: `if (old_resolution >= old.packages.len)
continue;`
- `lockfile.zig:522`: `if (old_resolution >= old.packages.len)
continue;`
- `LinkerContext.zig:389`: `if (source_index >= import_records_list.len)
continue;`
- `LinkerContext.zig:1667`: `if (source_index >= c.graph.ast.len) {`
## Test plan
- [x] Verified fix aligns with existing codebase patterns
- [ ] CI passes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix use-after-free bug when parsing `ca` option from `.npmrc`
- The `ca` string was being stored directly from the parser's arena
without duplication
- Since the parser arena is freed at the end of `loadNpmrc`, this
created a dangling pointer
## The Bug
In `src/ini.zig`, the `ca` string wasn't being duplicated like all other
string properties:
```zig
// Lines 983-986 explicitly warn about this:
// Need to be very, very careful here with strings.
// They are allocated in the Parser's arena, which of course gets
// deinitialized at the end of the scope.
// We need to dupe all strings
// Line 981: Parser arena is freed here
defer parser.deinit();
// Line 1016-1020: THE BUG - string not duped!
if (out.asProperty("ca")) |query| {
if (query.expr.asUtf8StringLiteral()) |str| {
install.ca = .{
.str = str, // ← Dangling pointer after parser.deinit()!
};
```
All other string properties in the same function correctly duplicate:
- `registry` (line 996): `try allocator.dupe(u8, str)`
- `cache` (line 1002): `try allocator.dupe(u8, str)`
- `cafile` (line 1037): `asStringCloned(allocator)`
- `ca` array items (line 1026): `asStringCloned(allocator)`
## User Impact
When a user has `ca=<certificate>` in their `.npmrc` file:
1. The certificate string is parsed and stored
2. The parser arena is freed
3. `install.ca.str` becomes a dangling pointer
4. Later TLS/SSL operations access freed memory
5. Could cause crashes, undefined behavior, or security issues
## Test plan
- Code inspection confirms this matches the pattern used for all other
string properties
- The fix adds `try allocator.dupe(u8, str)` to match `cache`,
`registry`, etc.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix typo in `onProcessExit` where `existing_stdin_value.isCell()` was
checked instead of `existing_value.isCell()`
- Since `existing_stdin_value` is always `.zero` at that point, the
condition was always false, making the inner block dead code
## The Bug
In `src/bun.js/api/bun/subprocess.zig:593`:
```zig
var existing_stdin_value = jsc.JSValue.zero; // Line 590 - always .zero
if (this_jsvalue != .zero) {
if (jsc.Codegen.JSSubprocess.stdinGetCached(this_jsvalue)) |existing_value| {
if (existing_stdin_value.isCell()) { // BUG! Should be existing_value
// This block was DEAD CODE - never executed
```
Compare with the correct pattern used elsewhere:
```zig
// shell/subproc.zig:251-252 (CORRECT)
if (jsc.Codegen.JSSubprocess.stdinGetCached(subprocess.this_jsvalue)) |existing_value| {
jsc.WebCore.FileSink.JSSink.setDestroyCallback(existing_value, 0); // Uses existing_value
}
```
## Impact
The dead code prevented:
- Recovery of stdin from cached JS value when `weak_file_sink_stdin_ptr`
is null
- Proper cleanup via `onAttachedProcessExit` on the FileSink
- `setDestroyCallback` cleanup in `onProcessExit`
Note: The user-visible impact was mitigated by redundant cleanup paths
in `Writable.zig` that also call `setDestroyCallback`.
## Test plan
- Code inspection confirms this is a straightforward typo fix
- Existing subprocess tests continue to pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix inverted buffer allocation logic when parsing strings in Postgres
arrays
- Strings larger than 16KB were incorrectly using the stack buffer
instead of dynamically allocating
- This caused spurious `InvalidByteSequence` errors for valid data
## The Bug
In `src/sql/postgres/DataCell.zig`, the condition for when to use
dynamic allocation was inverted:
```zig
// BEFORE (buggy):
const needs_dynamic_buffer = str_bytes.len < stack_buffer.len; // TRUE when SMALL
// AFTER (fixed):
const needs_dynamic_buffer = str_bytes.len > stack_buffer.len; // TRUE when LARGE
```
## What happened with large strings (>16KB):
1. `needs_dynamic_buffer` = false (e.g., 20000 < 16384 is false)
2. Uses `stack_buffer[0..]` which is only 16KB
3. `unescapePostgresString` hits bounds check and returns
`BufferTooSmall`
4. Error converted to `InvalidByteSequence`
5. User gets error even though data is valid
## User Impact
Users with Postgres arrays containing JSON or string elements larger
than 16KB would get spurious InvalidByteSequence errors even though
their data was perfectly valid.
## Test plan
- Code inspection confirms the logic was inverted
- The fix aligns with the intended behavior: use stack buffer for small
strings, dynamic allocation for large strings
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix double-close of file descriptor when using `&>` redirect with
shell builtin commands
- Add `dupeRef()` helper for cleaner reference counting semantics
- Add tests for `&>` and `&>>` redirects with builtins
## Test plan
- [x] Added tests in `test/js/bun/shell/file-io.test.ts` that reproduce
the bug
- [x] All file-io tests pass
## The Bug
When using `&>` to redirect both stdout and stderr to the same file with
a shell builtin command (e.g., `pwd &> file.txt`), the code was creating
two separate `IOWriter` instances that shared the same file descriptor.
When both `IOWriter`s were destroyed, they both tried to close the same
fd, causing an `EBADF` (bad file descriptor) error.
```javascript
import { $ } from "bun";
await $`pwd &> output.txt`; // Would crash with EBADF
```
## The Fix
1. Share a single `IOWriter` between stdout and stderr when both are
redirected to the same file, with proper reference counting
2. Rename `refSelf` to `dupeRef` for clarity across `IOReader`,
`IOWriter`, `CowFd`, and add it to `Blob` for consistency
3. Fix the `Body.Value` blob case to also properly reference count when
the same blob is assigned to multiple outputs
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Latest model <noreply@anthropic.com>
### What does this PR do?
- fixes both functions returning false for double-encoded values (even
if the numeric value is a valid int32/uint32)
- fixes IsUint32() returning false for values that don't fit in int32
- fixes the test from #22462 not testing anything (the native functions
were being passed a callback to run garbage collection as the first
argument, so it was only ever testing what the type check APIs returned
for that function)
- extends the test to cover the first edge case above
### How did you verify your code works?
The new tests fail without these fixes.
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Add `bun_version` field to `link-metadata.json`
- Pass `VERSION` CMake variable to the metadata script as `BUN_VERSION`
env var
This ensures the build version is captured in the link metadata JSON
file, which is useful for tracking which version produced a given build
artifact.
## Test plan
- Build with `bun bd` and verify `link-metadata.json` includes
`bun_version`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Fixes environment variable expansion in quoted `.npmrc` values and adds
support for the `?` optional modifier.
### Changes
**Simplified quoted value handling:**
- Removed unnecessary `isProperlyQuoted` check that added complexity
without benefit
- When JSON.parse succeeds for quoted strings, expand env vars in the
result
- When JSON.parse fails for single-quoted strings like `'${VAR}'`, still
expand env vars
**Added `?` modifier support (matching npm behavior):**
- `${VAR}` - if VAR is undefined, leaves as `${VAR}` (no expansion)
- `${VAR?}` - if VAR is undefined, expands to empty string
This applies consistently to both quoted and unquoted values.
### Examples
```ini
# Env var found - all expand to the value
token = ${NPM_TOKEN}
token = "${NPM_TOKEN}"
token = '${NPM_TOKEN}'
# Env var NOT found - left as-is
token = ${NPM_TOKEN} # → ${NPM_TOKEN}
token = "${NPM_TOKEN}" # → ${NPM_TOKEN}
token = '${NPM_TOKEN}' # → ${NPM_TOKEN}
# Optional modifier (?) - expands to empty if not found
token = ${NPM_TOKEN?} # → (empty)
token = "${NPM_TOKEN?}" # → (empty)
auth = "Bearer ${TOKEN?}" # → Bearer
```
### Test Plan
- Added 8 new tests for the `?` modifier covering quoted and unquoted
values
- Verified all expected values match `npm config get` behavior
- All 30 ini tests pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
https://github.com/oven-sh/bun/pull/25361 needs to be merged before this
PR
## Summary
- Move `last_write_failed` flag from loop-level to per-socket flag for
correctness
## Changes
- Move `last_write_failed` from `loop->data` to `socket->flags`
- More semantically correct since write status is per-socket, not
per-loop
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
Fixes #24593 - WebSocket segfault on Windows when publishing large
messages with `perMessageDeflate: true`.
Also fixes #21028 (duplicate issue).
Also closes #25457 (alternative PR).
**Root cause:**
On Windows, the C++ code was compiled against system zlib headers
(1.3.1) but linked against Bun's vendored Cloudflare zlib (1.2.8).
This version mismatch caused `deflateInit2()` to return
`Z_VERSION_ERROR` (-6), leaving the deflate stream in an invalid state.
All subsequent `deflate()` calls returned `Z_STREAM_ERROR` (-2),
producing zero output, which then caused an integer underflow when
subtracting the 4-byte trailer → segfault in memcpy.
**Fix:**
Add `${VENDOR_PATH}/zlib` to the C++ include paths in
`cmake/targets/BuildBun.cmake`. This ensures the vendored zlib headers
are found before system headers, maintaining header/library version
consistency.
This is a simpler alternative to #25457 which worked around the issue by
using libdeflate exclusively.
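A hedged sketch of the core reproduction described in the test plan; the payload size and port handling are illustrative:
```ts
const bigMessage = JSON.stringify({ data: "x".repeat(110_000) }); // roughly the ~109KB payload

const server = Bun.serve({
  port: 0,
  fetch(req, srv) {
    if (srv.upgrade(req)) return;
    return new Response("expected a websocket", { status: 400 });
  },
  websocket: {
    perMessageDeflate: true,
    open(ws) {
      ws.subscribe("room");
      server.publish("room", bigMessage); // segfaulted on Windows before the fix
    },
    message() {},
  },
});

new WebSocket(`ws://localhost:${server.port}`);
```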
## Test plan
- [x] Added regression test `test/regression/issue/24593.test.ts` with 4
test cases:
- Large ~109KB JSON message publish (core reproduction)
- Multiple rapid publishes (buffer corruption)
- Broadcast to multiple subscribers
- Messages at CORK_BUFFER_SIZE boundary (16KB)
- [x] Tests pass on Windows (was crashing before fix)
- [x] Tests pass on macOS
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
This PR implements four V8 C++ API methods for type checking that are
commonly used by native Node.js modules:
- `v8::Value::IsMap()` - checks if value is a Map
- `v8::Value::IsArray()` - checks if value is an Array
- `v8::Value::IsInt32()` - checks if value is a 32-bit integer
- `v8::Value::IsBigInt()` - checks if value is a BigInt
## Implementation Details
The implementation maps V8's type checking APIs to JavaScriptCore's
equivalent functionality:
- `IsMap()` uses JSC's `inherits<JSC::JSMap>()` check
- `IsArray()` uses JSC's `isArray()` function with the global object
- `IsInt32()` uses JSC's `isInt32()` method
- `IsBigInt()` uses JSC's `isBigInt()` method
## Changes
- Added method declarations to `V8Value.h`
- Implemented the methods in `V8Value.cpp`
- Added symbol exports to `napi.zig` (both Unix and Windows mangled
names)
- Added symbols to `symbols.txt` and `symbols.dyn`
- Added comprehensive tests in `v8-module/main.cpp` and `v8.test.ts`
## Testing
The implementation has been verified to:
- Compile successfully without errors
- Export the correct symbols in the binary
- Follow established patterns in the V8 compatibility layer
Tests cover various value types including empty and populated
Maps/Arrays, different numeric ranges, BigInts, and other JavaScript
types.
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fix bug where `Response.clone()` would lock the original response's
body when `response.body` was accessed before cloning
- Apply the same fix to `Request.clone()`
## Root Cause
When `response.body` was accessed before calling `response.clone()`, the
original response's body would become locked after cloning. This
happened because:
1. When the cloned response was wrapped with `toJS()`,
`checkBodyStreamRef()` was called which moved the stream from
`Locked.readable` to `js.gc.stream` and cleared `Locked.readable`
2. The subsequent code tried to get the stream from `Locked.readable`,
which was now empty, so the body cache update was skipped
3. The JavaScript-level body property cache still held the old locked
stream
## Fix
Updated the cache update logic to:
1. For the cloned response: use `js.gc.stream.get()` instead of
`Locked.readable.get()` since `toJS()` already moved the stream
2. For the original response: use `Locked.readable.get()` which still
holds the teed stream since `checkBodyStreamRef` hasn't been called yet
## Reproduction
```javascript
const readableStream = new ReadableStream({
start(controller) {
controller.enqueue(new TextEncoder().encode("Hello, world!"));
controller.close();
},
});
const response = new Response(readableStream);
console.log(response.body?.locked); // Accessing body before clone
const cloned = response.clone();
console.log(response.body?.locked); // Expected: false, Actual: true ❌
console.log(cloned.body?.locked); // Expected: false, Actual: false ✅
```
## Test plan
- [x] Added regression tests for `Response.clone()` in
`test/js/web/fetch/response.test.ts`
- [x] Added regression test for `Request.clone()` in
`test/js/web/request/request.test.ts`
- [x] Verified tests fail with system bun (before fix) and pass with
debug build (after fix)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fix use-after-free vulnerability during socket adoption by properly
tracking reallocated sockets
- Add safety checks to prevent linking closed sockets to context lists
- Properly track socket state with new `is_closed`, `adopted`, and
`is_tls` flags
## What does this PR do?
This PR improves event loop stability by addressing potential
use-after-free issues that can occur when sockets are reallocated during
adoption (e.g., when upgrading a TCP socket to TLS).
### Key Changes
**Socket State Tracking
([internal.h](packages/bun-usockets/src/internal/internal.h))**
- Added `is_closed` flag to explicitly track when a socket has been
closed
- Added `adopted` flag to mark sockets that were reallocated during
context adoption
- Added `is_tls` flag to track TLS socket state for proper low-priority
queue handling
**Safe Socket Adoption
([context.c](packages/bun-usockets/src/context.c))**
- When `us_poll_resize()` returns a new pointer (reallocation occurred),
the old socket is now:
- Marked as closed (`is_closed = 1`)
- Added to the closed socket cleanup list
- Marked as adopted (`adopted = 1`)
- Has its `prev` pointer set to the new socket for event redirection
- Added guards to
`us_internal_socket_context_link_socket/listen_socket/connecting_socket`
to prevent linking already-closed sockets
**Event Loop Handling ([loop.c](packages/bun-usockets/src/loop.c))**
- After callbacks that can trigger socket adoption (`on_open`,
`on_writable`, `on_data`), the event loop now checks if the socket was
reallocated and redirects to the new socket
- Low-priority socket handling now properly checks `is_closed` state and
uses `is_tls` flag for correct SSL handling
**Poll Resize Safety
([epoll_kqueue.c](packages/bun-usockets/src/eventing/epoll_kqueue.c))**
- Changed `us_poll_resize()` to always allocate new memory with
`us_calloc()` instead of `us_realloc()` to ensure the old pointer
remains valid for cleanup
- Now takes `old_ext_size` parameter to correctly calculate memory sizes
- Re-enabled `us_internal_loop_update_pending_ready_polls()` call in
`us_poll_change()` to ensure pending events are properly redirected
### How did you verify your code works?
Ran existing CI and the existing socket upgrade tests under an ASan build.
## Summary
- Improved validation for bunx metadata files on Windows
- Added graceful error handling for malformed metadata instead of
crashing
- Added regression test for the fix
## Test plan
- [x] Run `bun bd test test/cli/install/bunx.test.ts -t "should not
crash on corrupted"`
- [x] Manual testing with corrupted `.bunx` files
- [x] Verified normal operation still works
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
Fixes an issue where loading the same native module
(NODE_MODULE_CONTEXT_AWARE) multiple times would fail with:
```
symbol 'napi_register_module_v1' not found in native module
```
Fixes https://github.com/oven-sh/bun/issues/23136
Fixes https://github.com/oven-sh/bun/issues/21432
## Root Cause
When a native module is loaded for the first time:
1. `dlopen()` loads the shared library
2. Static constructors run and call `node_module_register()`
3. The module registers successfully
On subsequent loads of the same module:
1. `dlopen()` returns the same handle (library already loaded)
2. Static constructors **do not run again**
3. No registration occurs, leading to the "symbol not found" error
## Solution
Implemented a thread-safe `DLHandleMap` to cache and replay module
registrations:
1. **Thread-local storage** captures the `node_module*` during static
constructor execution
2. **After successful first load**, save the registration to the global
map
3. **On subsequent loads**, look up the cached registration and replay
it
This approach matches Node.js's `global_handle_map` implementation.
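A hedged sketch of the double-load scenario; the addon path is a placeholder and a plain object with an `exports` field stands in for a Module instance:
```ts
function load(filename: string) {
  const mod = { exports: {} as Record<string, unknown> };
  process.dlopen(mod, filename); // the second load must replay the cached registration
  return mod.exports;
}

const addonPath = "/tmp/build/Release/addon.node"; // placeholder path
const first = load(addonPath);
const second = load(addonPath); // previously: "symbol 'napi_register_module_v1' not found"
console.log(Object.keys(first), Object.keys(second));
```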
## Changes
- Created `src/bun.js/bindings/DLHandleMap.h` - thread-safe singleton
cache
- Added thread-local storage in `src/bun.js/bindings/v8/node.cpp`
- Modified `src/bun.js/bindings/BunProcess.cpp` to save/lookup cached
modules
- Also includes the exports fix (using `toObject()` to match Node.js
behavior)
## Test Plan
Added `test/js/node/process/dlopen-duplicate-load.test.ts` with tests
that:
- Build a native addon using node-gyp
- Load it twice with `process.dlopen`
- Verify both loads succeed
- Test with different exports objects
All tests pass.
## Related Issue
Fixes the second bug discovered in the segfault investigation.
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Add `EINVAL` and `OPNOTSUPP` to the list of errors that trigger
fallback from `copy_file_range` to `sendfile`/read-write loop
- Fixes `Bun.write` and `fs.copyFile` failing on eCryptfs filesystems
## Test plan
- [x] Existing `copyFile` tests pass (`bun bd test
test/js/node/fs/fs.test.ts -t "copyFile"`)
- [x] Existing `copy_file_range` fallback tests pass (`bun bd test
test/js/bun/io/bun-write.test.js -t "should work when copyFileRange is
not available"`)
Fixes #13968
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
Fixes #13316
Fixes #18275
Running `bunx cowsay ""` (or any package with an empty string argument)
on Windows caused a panic. Additionally, `bunx concurrently "command
with spaces"` was splitting quoted arguments incorrectly.
**Repro #13316:**
```bash
bunx cowsay ""
# panic(main thread): reached unreachable code
```
**Repro #18275:**
```bash
bunx concurrently "bun --version" "bun --version"
# Only runs once, arguments split incorrectly
# Expected: ["bun --version", "bun --version"]
# Actual: ["bun", "--version", "bun", "--version"]
```
## Root Cause
The bunx fast path on Windows bypasses libuv and calls `CreateProcessW`
directly to save 5-12ms. The command line building logic had two issues:
1. **Empty strings**: Not quoted at all, resulting in invalid command
line
2. **Arguments with spaces**: Not quoted, causing them to be split into
multiple arguments
## Solution
Implement Windows command-line argument quoting using libuv's proven
algorithm:
- Port of libuv's `quote_cmd_arg` function (process backwards + reverse)
- Empty strings become `""`
- Strings with spaces/tabs/quotes are wrapped in quotes
- Backslashes before quotes are properly escaped per Windows rules
**Why not use libuv directly?**
- Normal `Bun.spawn()` uses `uv_spawn()` which handles quoting
internally
- bunx fast path bypasses libuv to save 5-12ms (calls `CreateProcessW`
directly)
- libuv's `quote_cmd_arg` is a static function (not exported)
- Solution: port the algorithm to Zig
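For reference, a sketch of the same quoting rules in TypeScript (illustrative only; the actual fix is the Zig port described above):
```ts
// Sketch of Windows argv quoting: empty strings become "", arguments containing
// spaces/tabs/quotes get wrapped in quotes, and backslashes before a quote (or
// before the closing quote) are doubled.
function quoteCmdArg(arg: string): string {
  if (arg.length === 0) return '""';
  if (!/[ \t"]/.test(arg)) return arg; // nothing that needs quoting
  let out = '"';
  let backslashes = 0;
  for (const ch of arg) {
    if (ch === "\\") { backslashes++; continue; }
    if (ch === '"') {
      out += "\\".repeat(backslashes * 2 + 1) + '"'; // double pending backslashes, escape the quote
      backslashes = 0;
      continue;
    }
    out += "\\".repeat(backslashes) + ch;
    backslashes = 0;
  }
  return out + "\\".repeat(backslashes * 2) + '"'; // double trailing backslashes
}

console.log(quoteCmdArg(""));              // ""
console.log(quoteCmdArg("bun --version")); // "bun --version"
```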
## Test Plan
- [x] Added regression test for empty strings (#13316)
- [x] Added regression test for arguments with spaces (#18275)
- [x] Verified system bun (v1.3.3) fails both tests
- [x] Verified fix passes both tests
- [x] Implementation based on battle-tested libuv algorithm
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Fixes `url.domainToASCII` and `url.domainToUnicode` to return empty
string instead of throwing `TypeError` when given invalid domains
- Per Node.js docs: "if `domain` is an invalid domain, the empty string
is returned"
## Test plan
- [x] Run `bun bd test test/regression/issue/24191.test.ts` - all 2
tests pass
- [x] Verify tests fail with system Bun (`USE_SYSTEM_BUN=1`) to confirm
fix validity
- [x] Manual verification: `url.domainToASCII('xn--iñvalid.com')`
returns `""`
## Example
Before (bug):
```
$ bun -e "import url from 'node:url'; console.log(url.domainToASCII('xn--iñvalid.com'))"
TypeError: domainToASCII failed
```
After (fixed):
```
$ bun -e "import url from 'node:url'; console.log(url.domainToASCII('xn--iñvalid.com'))"
(empty string output)
```
Closes #24191
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fixes silent 401 Unauthorized errors when using proxies with long
passwords (e.g., JWT tokens > 4096 chars)
- Bun was silently dropping proxy passwords exceeding 4095 characters,
falling through to code that only encoded the username
## Changes
- Added `PercentEncoding.decodeWithFallback` which uses a 4KB stack
buffer for the common case and falls back to heap allocation only for
larger inputs
- Updated proxy auth encoding in `AsyncHTTP.zig` to use the new fallback
method
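A hedged usage sketch of the scenario; the proxy address is a placeholder and the token is synthetic:
```ts
const longToken = "x".repeat(5000); // > 4096 characters, previously dropped silently

const res = await fetch("https://example.com/", {
  proxy: `http://user:${longToken}@127.0.0.1:8080`, // placeholder proxy address
});
console.log(res.status); // with the fix, the full token reaches the Proxy-Authorization header
```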
## Test plan
- [x] Added test case that verifies passwords > 4096 chars are handled
correctly
- [x] Test fails with system bun (v1.3.3), passes with this fix
- [x] All 29 proxy tests pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Fixes a flaky test (`test-http-url.parse-https.request.js`) where
`request.socket._secureEstablished` was intermittently `false` when the
HTTP request handler was called on HTTPS servers.
## Root Cause
The `isAuthorized` flag was stored in
`HttpContextData::flags.isAuthorized`, which is **shared across all
sockets** in the same context. This meant multiple concurrent TLS
connections could overwrite each other's authorization state, and the
value could be stale when read.
## Fix
Moved the `isAuthorized` flag from the context-level `HttpContextData`
to the per-socket `AsyncSocketData` base class. This ensures each socket
has its own authorization state that is set correctly during its TLS
handshake callback.
## Changes
- **`AsyncSocketData.h`**: Added per-socket `bool isAuthorized` field
- **`HttpContext.h`**: Updated handshake callback to set per-socket flag
instead of context-level flag
- **`JSNodeHTTPServerSocket.cpp`**: Updated `isAuthorized()` to read
from per-socket `AsyncSocketData` (via `HttpResponseData` which inherits
from it)
## Testing
Ran the flaky test 50+ times with 100% pass rate.
Also verified gRPC and HTTP2 tests still pass.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Skip `test_handle_scope_gc` test on ASAN builds due to false positives
from dynamic library boundary crossing (Bun built with ASAN+UBSAN,
native addon without sanitizers)
## Test plan
- CI should pass on ASAN builds with this test skipped
- Non-ASAN builds continue to run the test normally
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Fixes a bug where idle WebSocket connections would cause 100% CPU usage
on macOS and other BSD systems using kqueue.
**Root cause:** The kqueue event filter comparison was using bitwise AND
(`&`) instead of equality (`==`) when checking the filter type. Combined
with missing `EV_ONESHOT` flags on writable events, this caused the
event loop to continuously spin even when no actual I/O was pending.
**Changes:**
1. **Fixed filter comparison** in `epoll_kqueue.c`: Changed `filter &
EVFILT_READ` to `filter == EVFILT_READ` (same for `EVFILT_WRITE`). The
filter field is a value, not a bitmask.
2. **Added `EV_ONESHOT` flag** to writable events: kqueue writable
events now use one-shot mode to prevent continuous triggering.
3. **Re-arm writable events when needed**: After a one-shot writable
event fires, the code now properly updates the poll state and re-arms
the writable event if another write is still pending.
### How did you verify your code works?
Added a test that:
1. Creates a TLS WebSocket server and client
2. Sends messages then lets the connection sit idle
3. Measures CPU usage over 3 seconds
4. Fails if CPU usage exceeds 2% (expected is ~0.XX% when idle)
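A hedged sketch of that check, simplified to a non-TLS server; the thresholds, timing, and CPU calculation are illustrative:
```ts
const server = Bun.serve({
  port: 0,
  fetch(req, srv) {
    if (srv.upgrade(req)) return;
    return new Response("expected a websocket", { status: 400 });
  },
  websocket: { message() {} },
});

const ws = new WebSocket(`ws://localhost:${server.port}`);
await new Promise(resolve => (ws.onopen = resolve));

const before = process.cpuUsage();
await Bun.sleep(3000); // connection sits idle
const used = process.cpuUsage(before);
const pct = ((used.user + used.system) / 1000 / 3000) * 100; // % of one core over 3s
console.log(`idle CPU: ${pct.toFixed(2)}%`); // should stay well below 2%

ws.close();
server.stop();
```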
## Summary
- The default trusted dependencies list should only apply to packages
installed from npm
- Non-npm sources (file:, link:, git:, github:) now require explicit
trustedDependencies
- This prevents malicious packages from spoofing trusted names through
local paths or git repos
## Test plan
- [x] Added test: file: dependency named "esbuild" does NOT auto-run
postinstall scripts
- [x] Added test: file: dependency runs scripts when explicitly added to
trustedDependencies
- [x] Verified tests fail with system bun (old behavior) and pass with
new build
- [x] Build compiles successfully
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
## Summary
- Adds `import { feature } from "bun:bundle"` for compile-time feature
flag checking
- `feature("FLAG_NAME")` calls are replaced with `true`/`false` at
bundle time
- Enables dead-code elimination through `--feature=FLAG_NAME` CLI
argument
- Works in `bun build`, `bun run`, and `bun test`
- Available in both CLI and `Bun.build()` JavaScript API
## Usage
```ts
import { feature } from "bun:bundle";
if (feature("SUPER_SECRET")) {
console.log("Secret feature enabled!");
} else {
console.log("Normal mode");
}
```
### CLI
```bash
# Enable feature during build
bun build --feature=SUPER_SECRET index.ts
# Enable at runtime
bun run --feature=SUPER_SECRET index.ts
# Enable in tests
bun test --feature=SUPER_SECRET
```
### JavaScript API
```ts
await Bun.build({
entrypoints: ['./index.ts'],
outdir: './out',
features: ['SUPER_SECRET', 'ANOTHER_FLAG'],
});
```
## Implementation
- Added `bundler_feature_flags` (as `*const bun.StringSet`) to
`RuntimeFeatures` and `BundleOptions`
- Added `bundler_feature_flag_ref` to Parser struct to track the
`feature` import
- Handle `bun:bundle` import at parse time (similar to macros) - capture
ref, return empty statement
- Handle `feature()` calls in `e_call` visitor - replace with boolean
based on flags
- Wire feature flags through CLI arguments and `Bun.build()` API to
bundler options
- Added `features` option to `JSBundler.zig` for JavaScript API support
- Added TypeScript types in `bun.d.ts`
- Added documentation to `docs/bundler/index.mdx`
## Test plan
- [x] Basic feature flag enabled/disabled tests (both CLI and API
backends)
- [x] Multiple feature flags test
- [x] Dead code elimination verification tests
- [x] Error handling for invalid arguments
- [x] Runtime tests with `bun run --feature=FLAG`
- [x] Test runner tests with `bun test --feature=FLAG`
- [x] Aliased import tests (`import { feature as checkFeature }`)
- [x] Ternary operator DCE tests
- [x] Tests use `itBundled` with both `backend: "cli"` and `backend:
"api"`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Alistair Smith <hi@alistair.sh>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
More accurately, developers cannot pass a value when the expected values resolve to `never`. This is easy to fall into when using the `toContainKey*` matchers. Falling back to `PropertyKey` when this happens is a sensible, reasonable default.
### What does this PR do?
fixes #25456, cc @MonsterDeveloper
fixes #25461
### How did you verify your code works?
bun types integration test
This PR significantly improves `Bun.stringWidth` to handle a wider
variety of Unicode characters and escape sequences correctly.
## Zero-width character handling
Added support for many previously unhandled zero-width characters:
- Soft hyphen (U+00AD)
- Word joiner and invisible operators (U+2060-U+2064)
- Lone surrogates (U+D800-U+DFFF)
- Arabic formatting characters (U+0600-U+0605, U+06DD, U+070F, U+08E2)
- Indic script combining marks (Devanagari through Malayalam)
- Thai and Lao combining marks
- Combining Diacritical Marks Extended and Supplement
- Tag characters (U+E0000-U+E007F)
## ANSI escape sequence handling
### CSI sequences
- Now properly handles ALL CSI final bytes (0x40-0x7E), not just `m`
- This means cursor movement (A/B/C/D), erase (J/K), scroll (S/T), and
other CSI commands are now correctly excluded from width calculation
### OSC sequences
- Added support for OSC sequences (ESC ] ... BEL/ST)
- OSC 8 hyperlinks are now properly handled
- Supports both BEL (0x07) and ST (ESC \) terminators
### ESC ESC fix
- Fixed state machine bug where `ESC ESC` would incorrectly reset state
- Now correctly handles consecutive ESC characters
## Emoji handling
Added proper grapheme-aware emoji width calculation:
- Flag emoji (regional indicator pairs) → width 2
- Skin tone modifiers → width 2
- ZWJ sequences (family, professions, etc.) → width 2
- Keycap sequences → width 2
- Variation selectors (VS15 for text, VS16 for emoji presentation)
- Uses ICU's `UCHAR_EMOJI` property for accurate emoji detection
## Test coverage
Added comprehensive test suite with **94 tests** covering:
- All zero-width character categories
- All CSI final bytes
- OSC sequences with various terminators
- Emoji edge cases (flags, skin tones, ZWJ, keycaps, variation
selectors)
- East Asian width (CJK, fullwidth, halfwidth katakana)
- Indic and Thai script combining marks
- Fuzzer-like stress tests for robustness
## Breaking changes
This is a behavior change - `stringWidth` will return different values
for some inputs. However, the new values are more accurate
representations of terminal display width:
| Input | Old | New | Why |
|-------|-----|-----|-----|
| Flag emoji 🇺🇸 | 1 | 2 | Flags display as 2 cells |
| Skin tone 👋🏽 | 4 | 2 | Emoji + modifier = 1 grapheme |
| ZWJ family 👨‍👩‍👧 | 8 | 2 | ZWJ sequence = 1 grapheme |
| Word joiner U+2060 | 1 | 0 | Invisible character |
| OSC 8 hyperlinks | counted URL | just visible text | URLs are
invisible |
| Cursor movement ESC[5A | counted | 0 | Control sequence |
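Spot checks matching the table above (a hedged usage sketch):
```ts
console.log(Bun.stringWidth("🇺🇸"));       // 2 – flag emoji is a single 2-cell grapheme
console.log(Bun.stringWidth("👋🏽"));       // 2 – emoji + skin tone modifier
console.log(Bun.stringWidth("a\u2060b")); // 2 – the word joiner (U+2060) is zero-width
console.log(Bun.stringWidth("\x1b[5A"));  // 0 – cursor-movement CSI sequence is excluded
```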
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
### What does this PR do?
Remove a loose section from os-signals documentation
### How did you verify your code works?
Just a small documentation change.
Co-authored-by: Michael H <git@riskymh.dev>
## Summary
- Fixes strings ending with colons (e.g., `"tin:"`) not being quoted in
YAML.stringify output
- This caused YAML.parse to fail with "Unexpected token" when parsing
the output back
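A hedged round-trip sketch, assuming the `YAML` export from `"bun"` (`Bun.YAML` behaves the same way):
```ts
import { YAML } from "bun";

const text = YAML.stringify({ name: "tin:" }); // the trailing colon now forces quoting
console.log(text);
console.log(YAML.parse(text)); // { name: "tin:" } – previously threw "Unexpected token"
```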
## Test plan
- Added regression tests in `test/regression/issue/25439.test.ts`
- Verified round-trip works for various strings ending with colons
- Ran existing YAML tests to ensure no regressions
Fixes #25439
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
- removes the `Unimplemented in Bun` comment on `CompressionStream` and
`DecompressionStream`
- updates the types for `CompressionStream` and `DecompressionStream` to
add a new internal `CompressionFormat` type to the constructor, which
adds `brotli` and `zstd` to the union
- adds tests for brotli and zstd usage
- adds lib.dom.d.ts exclusions for brotli and zstd as these don't exist
in the DOM version of CompressionFormat
fixes #25367
### How did you verify your code works?
typechecks and tests
## Summary
- When a URL object is passed as the proxy option, or when a proxy
object lacks a "url" property, ignore it instead of throwing an error
- This fixes a regression introduced in 1.3.4 where libraries like taze
that pass URL objects as proxy values would fail
## Test plan
- Added test: "proxy as URL object should be ignored (no url property)"
- passes a URL object directly as proxy
- Updated test: "proxy object without url is ignored (regression
#25413)" - proxy object with headers but no url
- Updated test: "proxy object with null url is ignored (regression
#25413)" - proxy object where url is null
- All 29 proxy tests pass
Fixes #25413
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
### What does this PR do?
- Add `contentDisposition` option to S3 file uploads to control the
`Content-Disposition` HTTP header
- Support passing `contentDisposition` through all S3 upload paths
(simple uploads, multipart uploads, and streaming uploads)
- Add TypeScript types for the new option
Fixes https://github.com/oven-sh/bun/issues/25362
### How did you verify your code works?
Test
## Summary
- Expanded documentation for embedding files in single-file executables
with `with { type: "file" }`
- Added clear explanation of how the import attribute works and path
transformation at build time
- Added examples for reading embedded files with both `Bun.file()` and
Node.js `fs` APIs
- Added practical examples: JSON configs, HTTP static assets, templates,
binary files (WASM, fonts)
- Improved `Bun.embeddedFiles` section with a dynamic asset server
example
## Test plan
- [x] Verified all code examples compile and run correctly with `bun
build --compile`
- [x] Tested `Bun.file()` reads embedded files correctly
- [x] Tested `node:fs` APIs (`readFileSync`, `promises.readFile`,
`stat`) work with embedded files
- [x] Tested `Bun.embeddedFiles` returns correct blob array
- [x] Tested `--asset-naming` flag removes content hashes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Fixes #25398
### What does this PR do?
Fixes a bug where object expressions with spread properties and nullish
coalescing to empty objects (e.g., `k?.x ?? {}`) would produce invalid
JavaScript output like `k?.x ?? ` (missing `{}`).
### Root Cause
In `src/ast/SideEffects.zig`, the `simplifyUnusedExpr` function handles
unused object expressions with spread properties. When simplifying
property values:
1. The code creates a mutable copy `prop` from the original `prop_`
2. When a property value is simplified (e.g., `k?.x ?? {}` → `k?.x`), it
updates `prop.value`
3. **Bug:** The code then wrote back `prop_` (the original) instead of
`prop` (the modified copy)
Because `simplifyUnusedExpr` mutates the AST in place when handling
nullish coalescing (setting `bin.right` to empty), the original `prop_`
now contained an expression with `bin.right` as an empty/missing
expression, resulting in invalid output.
### How did you verify your code works?
- Added regression test in `test/regression/issue/25398.test.ts`
- Verified the original reproduction case passes
- Verified existing CommonJS tests continue to pass
- Verified test fails with system bun and passes with the fix
## Summary
- Document new default behavior in v1.3.4: `tsconfig.json` and
`package.json` loading is now disabled by default for standalone
executables
- Add documentation for `--compile-autoload-tsconfig` and
`--compile-autoload-package-json` CLI flags
- Document all four JavaScript API options: `autoloadTsconfig`,
`autoloadPackageJson`, `autoloadDotenv`, `autoloadBunfig`
- Note that `.env` and `bunfig.toml` may also be disabled by default in
a future version
## Test plan
- [ ] Review rendered documentation for accuracy and formatting
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Fix `mock.mock.args` to `mock.mock.calls` in mock-functions.mdx (the
`.args` property doesn't exist)
- Fix mock.restore example to use module methods instead of spy
functions (calling spy functions after restore returns `undefined`)
- Add missing `vi` import in Vitest compatibility example
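A hedged sketch of the corrected property for reference:
```ts
import { mock } from "bun:test";

const double = mock((n: number) => n * 2);
double(21);

console.log(double.mock.calls); // [[21]] – per-call arguments live on `.mock.calls`
```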
## Test plan
- [x] Verified each code block works by running tests against the debug
build
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Change the size header in embedded Mach-O and PE sections from `u32`
(4 bytes) to `u64` (8 bytes)
- Ensures the data payload starts at an 8-byte aligned offset, which is
required for the bytecode cache
## Test plan
- [x] Test standalone compilation on macOS
- [ ] Test standalone compilation on Windows
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- Fix `napi_typeof` to return `napi_object` for boxed String objects
(`new String("hello")`) instead of incorrectly returning `napi_string`
- Add regression test for boxed primitive objects (String, Number,
Boolean)
The issue was that `StringObjectType` and `DerivedStringObjectType` JSC
cell types were falling through to return `napi_string`, but these
represent object wrappers around strings, not primitive strings.
## Test plan
- [x] `bun bd test test/napi/napi.test.ts -t "napi_typeof"` passes
- [x] Test fails with `USE_SYSTEM_BUN=1` (confirming the bug exists in
released version)
Fixes #25351
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
This reverts commit b4c8379447.
### What does this PR do?
### How did you verify your code works?
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
By default, standalone executables no longer load `tsconfig.json` and
`package.json` at runtime. This improves startup performance and
prevents unexpected behavior from config files in the runtime
environment.
- Added `--compile-autoload-tsconfig` / `--no-compile-autoload-tsconfig`
CLI flags (default: false)
- Added `--compile-autoload-package-json` /
`--no-compile-autoload-package-json` CLI flags (default: false)
- Added `autoloadTsconfig` and `autoloadPackageJson` options to the
`Bun.build()` compile config
- Flags are stored in `StandaloneModuleGraph.Flags` and applied at
runtime boot
This follows the same pattern as the existing
`--compile-autoload-dotenv` and `--compile-autoload-bunfig` flags.
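A hedged sketch of the JavaScript API shape, assuming the new options sit on the `compile` object passed to `Bun.build()` (the entrypoint and outfile are placeholders):
```ts
await Bun.build({
  entrypoints: ["./cli.ts"],
  compile: {
    outfile: "./my-cli",
    autoloadTsconfig: true,     // opt back in to loading tsconfig.json at runtime
    autoloadPackageJson: true,  // opt back in to loading package.json at runtime
  },
});
```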
## Test plan
- [x] Added tests in `test/bundler/bundler_compile_autoload.test.ts`
- [x] Verified standalone executables work correctly with runtime config
files that differ from compile-time configs
- [x] Verified the new CLI flags are properly parsed and applied
- [x] Verified the JS API options work correctly
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Fixes ENG-21288
TODO: Test with `@testing-library/react` `waitFor`
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
This fixes a critical bug where `http.Agent` with `keepAlive: true` was
not reusing connections, causing a 66% performance degradation compared
to Node.js. Every request was establishing a new TCP/TLS connection
instead of reusing existing ones.
**Root Cause:**
Three independent bugs were causing the connection pool to fail:
1. **TypeScript layer** (`src/js/node/_http_client.ts:271`)
- Reading wrong property: `keepalive` instead of `keepAlive`
- User's `keepAlive: true` setting was being ignored
2. **Request header handling** (`src/http.zig:591`)
- Only handled `Connection: close`, ignored `Connection: keep-alive`
- Missing explicit flag update for keep-alive header
3. **Response header handling** (`src/http.zig:2240`)
- Used compile-time function `eqlComptime` at runtime (always failed)
- Inverted logic: disabled pool when NOT "keep-alive"
- Ignored case-sensitivity (should use `eqlIgnoreCase` per RFC 7230)
**Performance Impact:**
- **Before**: All requests ~940ms, stddev 33ms (0% improvement) ❌
- **After**: First request ~930ms, subsequent ~320ms (65.9% improvement)
✅
- Performance now matches Node.js (65.9% vs 66.5% improvement)
- QPS increased from 4.2 to 12.2 req/s (190% improvement)
**Files Changed:**
- `src/js/node/_http_client.ts` - Fix property name (1 line)
- `src/http.zig` - Fix request/response header handling (5 lines)
Fixes #12053
### What does this PR do?
This PR fixes the HTTP connection pool by correcting three bugs:
1. **Fixes TypeScript property name**: Changes `this[kAgent]?.keepalive`
to `this[kAgent]?.keepAlive` to properly read the user's keepAlive
setting from http.Agent
2. **Adds keep-alive request header handling**: Explicitly sets
`disable_keepalive = false` when receiving `Connection: keep-alive`
header
3. **Fixes response header parsing**:
- Replaces compile-time `strings.eqlComptime()` with runtime
`std.ascii.eqlIgnoreCase()`
- Corrects inverted logic to properly enable connection pool on
`Connection: keep-alive`
- Makes header comparison case-insensitive per RFC 7230
All three bugs must be fixed together - any single bug would cause the
connection pool to fail.
### How did you verify your code works?
**Test 1: Minimal reproduction with 10 sequential HTTPS requests**
```typescript
import https from "node:https";

const agent = new https.Agent({ keepAlive: true });
// Make 10 sequential requests to https://api.example.com with this agent,
// timing each request to compare the first (cold) one against the rest.
```
Results:
- First request: 930ms (cold start with TCP/TLS handshake)
- Subsequent requests: ~320ms average (connection reused)
- **Improvement: 65.9%** (matches Node.js 66.5%)
- Verified across 3 repeated test runs for stability
**Test 2: Response header validation**
- Confirmed server returns `Connection: Keep-Alive`
- Verified Bun correctly parses and applies the header
**Test 3: Performance comparison**
| Runtime | First Request | Subsequent Avg | Improvement | QPS |
|---------|--------------|----------------|-------------|-----|
| Node.js v20.18.0 | 938ms | 314ms | **66.5%** | 12.2 |
| Bun v1.3.1 (broken) | 935ms | 942ms | -0.7% ❌ | 4.2 |
| Bun v1.3.2 (fixed) | 930ms | 317ms | **65.9%** ✅ | 12.2 |
Bun now performs identically to Node.js, confirming the connection pool
works correctly.
## Summary
- Fix regression where `new Bun.FFI.CString(ptr)` throws "function is
not a constructor"
- Pass the same function as both call and constructor callbacks for
CString
## Root Cause
PR #24910 replaced `jsc.createCallback` with `jsc.JSFunction.create` for
all FFI functions. However, `JSFunction.create` doesn't allow
constructor calls by default (it uses `callHostFunctionAsConstructor`
which throws). The old `createCallback` used `JSFFIFunction` which
allowed the same function to be called with `new`.
## Fix
Pass the same function as both the `implementation` and `constructor`
option to `JSFunction.create` for CString specifically. This allows `new
CString(ptr)` to work while keeping the refactoring from #24910.
Additionally, the `bun:ffi` module now replaces `Bun.FFI.CString` with
the proper JS CString class after loading, so users get the full class
with `.ptr`, `.byteOffset`, etc. properties.
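A minimal sketch of the constructor call that regressed, using `bun:ffi`'s `ptr` helper to get a pointer to a NUL-terminated buffer:
```typescript
import { CString, ptr } from "bun:ffi";

// NUL-terminated bytes so CString knows where the string ends.
const buf = Buffer.from("hello\0", "utf8");
const cstr = new CString(ptr(buf)); // constructing with `new` works again
console.log(cstr.toString()); // "hello"
```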
## Test plan
- [x] Added regression test `test/regression/issue/25231.test.ts`
- [x] Test fails with `USE_SYSTEM_BUN=1` (v1.3.3), passes with fix
- [x] Verified reproduction case from issue works
Fixes #25231
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
This PR addresses several issues opened for the docs:
- Add callout for SQLite caching behavior between prepare() and query()
- Fix SQLite types and fix deprecated exec to run
- Fix Secrets API example
- Update SolidStart guide
- Add bun upgrade guide
- Prefer `process.versions.bun` over `typeof Bun` for detection (see the snippet below)
- Document complete `bunx` flags
- Improve Nitro preset documentation for Nuxt
Fixes #23165, #24424, #24294, #25175, #18433, #16804, #22967, #22527,
#10560, #14744
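The runtime-detection change mentioned above boils down to the following snippet:
```typescript
// Preferred: works even if the `Bun` global is shadowed or removed.
if (process.versions.bun) {
  console.log(`Running in Bun ${process.versions.bun}`);
}
```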
## Summary
- Added null check for `sourceOrigin` before accessing its URL in
`jest.mock()`
- When `callerSourceOrigin()` returns null (e.g., when called with
invalid arguments), the code now safely returns early instead of
crashing
## Test plan
- [x] Added regression test `test/regression/issue/ENG-24434.test.ts`
- [x] `bun bd test test/regression/issue/ENG-24434.test.ts` passes
Fixes ENG-24434
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix typo where `GetConsoleOutputCP()` was called twice instead of
calling `GetConsoleCP()` for the input codepage
- Add missing `GetConsoleCP()` extern declaration in windows.zig
The code was saving the output codepage twice, meaning the input
codepage was never properly saved and thus couldn't be correctly
restored.
## Note
This fix corrects a bug in the codepage save/restore logic, but **may
not fully resolve the garbled text issue** in #25151. The garbled text
problem occurs when `bunx` (without `--bun`) runs a package via Node.js,
and that package tries to spawn `bun`. The error message from cmd.exe
gets garbled on non-English Windows systems.
Further investigation may be needed to determine if additional codepage
handling is required when spawning processes.
Related to #25151
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
Fixes `TLSSocket.isSessionReused()` to use BoringSSL's
`SSL_session_reused()` API instead of incorrectly checking if a session
was set.
The previous implementation returned `!!this[ksession]` which would
return `true` if `setSession()` was called, even if the session wasn't
actually reused by the SSL layer. This fix correctly uses the native SSL
API like Node.js does.
## Changes
- Added native `isSessionReused` function in Zig that calls
`SSL_session_reused()`
- Updated `TLSSocket.prototype.isSessionReused` to use the native
implementation
- Added regression tests
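A simplified usage sketch of the corrected behavior (the host is a placeholder; with TLS 1.3 the session ticket may only become available shortly after the handshake):
```typescript
import tls from "node:tls";

const first = tls.connect({ host: "example.com", port: 443 }, () => {
  const session = first.getSession(); // capture the session for resumption
  first.end();

  const second = tls.connect({ host: "example.com", port: 443, session }, () => {
    // true only if the SSL layer actually reused the session,
    // not merely because a session was passed in
    console.log(second.isSessionReused());
    second.end();
  });
});
```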
## Test plan
- [x] `bun bd test test/regression/issue/25190.test.ts` passes
- [x] `bun bd test test/js/node/tls/node-tls-connect.test.ts` passes
Fixes #25190
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fixes `assert.deepStrictEqual()` to properly compare Number and
Boolean wrapper objects
- Previously, `new Number(1)` and `new Number(2)` were incorrectly
considered equal because they have no enumerable properties
- Now correctly extracts and compares internal values using
`JSC::sameValue()`, then falls through to check own properties
## Test plan
- [x] Run `bun bd test test/regression/issue/24045.test.ts` - all 6
tests pass
- [x] Verify tests fail with system Bun (`USE_SYSTEM_BUN=1`) to confirm
fix validity
- [x] Verified behavior matches Node.js exactly (see table below)
## Node.js Compatibility
| Test Case | Node.js | Bun |
|-----------|---------|-----|
| Different Number values (`new Number(1)` vs `new Number(2)`) | throws | throws |
| Same Number values (`new Number(1)` vs `new Number(1)`) | equal | equal |
| 0 vs -0 (`new Number(0)` vs `new Number(-0)`) | throws | throws |
| NaN equals NaN (`new Number(NaN)` vs `new Number(NaN)`) | equal | equal |
| Different Boolean values (`new Boolean(true)` vs `new Boolean(false)`) | throws | throws |
| Same Boolean values | equal | equal |
| Number wrapper vs primitive (`new Number(1)` vs `1`) | throws | throws |
| Number vs Boolean wrapper | throws | throws |
| Same value, different own properties | throws | throws |
| Same value, same own properties | equal | equal |
| Different own property values | throws | throws |
## Example
Before (bug):
```javascript
assert.deepStrictEqual(new Number(1), new Number(2)); // passes incorrectly
```
After (fixed):
```javascript
assert.deepStrictEqual(new Number(1), new Number(2)); // throws AssertionError
```
Closes #24045
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
### What does this PR do?
Ensures `ptr` is either a number or heap big int before converting to a
number.
also fixes ENG-24039
### How did you verify your code works?
Added a test
### What does this PR do?
Fixes checking for exceptions when creating empty or used readable
streams
also fixes ENG-24038
### How did you verify your code works?
Added a test for creating empty streams
## Summary
- Implements the `%j` format specifier for `console.log` and related
console methods
- `%j` outputs the JSON stringified representation of the value
- Previously, `%j` was not recognized and was left as literal text in
the output
## Test plan
- [x] Run `bun bd test test/regression/issue/24234.test.ts` - all 5
tests pass
- [x] Verify tests fail with system Bun (`USE_SYSTEM_BUN=1`) to confirm
fix validity
- [x] Manual verification: `console.log('%j', {foo: 'bar'})` outputs
`{"foo":"bar"}`
## Example
Before (bug):
```
$ bun -e "console.log('%j %s', {foo: 'bar'}, 'hello')"
%j [object Object] hello
```
After (fixed):
```
$ bun -e "console.log('%j %s', {foo: 'bar'}, 'hello')"
{"foo":"bar"} hello
```
Closes #24234
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Add proper bounds checking for encoding operations that produce larger
output than input
- Handle allocation failures gracefully by returning appropriate errors
- Add defensive checks in string initialization functions
## Test plan
- Added test case for encoding operations with large buffers
- Verified existing buffer tests still pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Implements the [URLPattern Web
API](https://developer.mozilla.org/en-US/docs/Web/API/URLPattern) based
on WebKit's implementation. URLPattern provides declarative pattern
matching for URLs, similar to how regular expressions work for strings.
### Features
- **Constructor**: Create patterns from strings or `URLPatternInit`
dictionaries
- **`test()`**: Check if a URL matches the pattern (returns boolean)
- **`exec()`**: Extract matched groups from a URL (returns
`URLPatternResult` or null)
- **Pattern properties**: `protocol`, `username`, `password`,
`hostname`, `port`, `pathname`, `search`, `hash`
- **`hasRegExpGroups`**: Detect if the pattern uses custom regular
expressions
### Example Usage
```js
// Match URLs with a user ID parameter
const pattern = new URLPattern({ pathname: '/users/:id' });
pattern.test('https://example.com/users/123'); // true
pattern.test('https://example.com/posts/456'); // false
const result = pattern.exec('https://example.com/users/123');
console.log(result.pathname.groups.id); // "123"
// Wildcard matching
const filesPattern = new URLPattern({ pathname: '/files/*' });
const match = filesPattern.exec('https://example.com/files/image.png');
console.log(match.pathname.groups[0]); // "image.png"
```
## Implementation Notes
- Adapted from WebKit's URLPattern implementation
- Modified JS bindings to work with Bun's infrastructure (simpler
`convertDictionary` patterns, WTF::Variant handling)
- Added IsoSubspaces for proper GC integration
## Test Plan
- [x] 408 tests from Web Platform Tests pass
- [x] Tests fail with system Bun (URLPattern not defined), pass with
debug build
- [x] Manual testing of basic functionality
Fixes https://github.com/oven-sh/bun/issues/2286
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
# What does this PR do?
Nyaa~ This PR fixes a small mistake in the documentation where the code
block for a React component test example was using the wrong filename!
(;ω;)💦
It was previously labeled as `matchers.d.ts`, but it should be something
like `myComponent.test.tsx` to properly reflect a test file for a React
component using `@testing-library/react`. 🧁✨
This makes the example clearer and more accurate for developers using
Bun to test their React components~! 💻🌸💕
# How did you verify your code works?
It's just docs, one single line 🥺
Pwease review and merge it when you can, senpai~~! UwU 🌈🫧
## Summary
- Add documentation for the `env` option that inlines `process.env.*`
values in frontend code when bundling HTML files
- Document runtime configuration via `bunfig.toml` `[serve.static]`
section for `bun ./index.html`
- Document production build configuration via CLI (`--env=PUBLIC_*`) and
`Bun.build` API (`env: "PUBLIC_*"`); see the sketch below
- Explain prefix filtering to avoid exposing sensitive environment
variables
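A sketch of the `Bun.build` configuration described above (paths are placeholders):
```typescript
// Inline only process.env values whose names start with PUBLIC_.
await Bun.build({
  entrypoints: ["./index.html"],
  outdir: "./dist",
  env: "PUBLIC_*",
});
```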
## Test plan
- [x] Verify documentation renders correctly in local preview
- [x] Cross-reference with existing `env` documentation in
bundler/index.mdx
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Michael H <git@riskymh.dev>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Fix `bun publish --help` showing incorrect `--dry-run` description
("Don't install anything" → "Perform a dry run without making changes")
- The `--dry-run` flag is in a shared params array used by multiple
commands, so the new generic message works for all of them
Fixes #24806
## Test plan
- [x] Verify `bun publish --help` shows "Perform a dry run without
making changes" for --dry-run
- [x] Regression test added that validates the correct help text is
shown
- [x] Test passes with debug build, fails with system bun (validating it
tests the right thing)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Make `Http2Server.setTimeout()` and `Http2SecureServer.setTimeout()`
return `this` to enable method chaining
- Matches Node.js behavior where `server.setTimeout(1000).listen()`
works
Fixes #24924
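A minimal sketch of the chaining this enables:
```typescript
import http2 from "node:http2";

// setTimeout() now returns the server, so calls can be chained as in Node.js.
const server = http2.createServer().setTimeout(1000).listen(0, () => {
  server.close();
});
```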
## Test plan
- [x] Test that `Http2Server.setTimeout()` returns server instance
- [x] Test that `Http2SecureServer.setTimeout()` returns server instance
- [x] Test method chaining works (e.g.,
`server.setTimeout(1000).close()`)
- [x] Tests pass with debug build, fail with system bun
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Adds stricter validation for chunk boundaries in the HTTP chunked
transfer encoding parser
- Ensures conformance with RFC 9112 requirements for chunk formatting
- Adds additional test coverage for chunked encoding edge cases
## Test plan
- Added new tests in `test/js/bun/http/request-smuggling.test.ts`
- All existing HTTP tests pass
- `bun bd test test/js/bun/http/request-smuggling.test.ts` passes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Extends `fetch()` proxy option to accept an object format: `proxy: {
url: string, headers?: Headers }`
- Allows sending custom headers to the proxy server (useful for proxy
authentication, custom routing headers, etc.)
- Headers are sent in CONNECT requests (for HTTPS targets) and direct
proxy requests (for HTTP targets)
- User-provided `Proxy-Authorization` header overrides auto-generated
credentials from URL
## Usage
```typescript
// Old format (still works)
fetch(url, { proxy: "http://proxy.example.com:8080" });
// New object format with headers
fetch(url, {
proxy: {
url: "http://proxy.example.com:8080",
headers: {
"Proxy-Authorization": "Bearer token",
"X-Custom-Proxy-Header": "value"
}
}
});
```
## Test plan
- [x] Test proxy object with url string works same as string proxy
- [x] Test proxy object with headers sends headers to proxy (HTTP
target)
- [x] Test proxy object with headers sends headers in CONNECT request
(HTTPS target)
- [x] Test proxy object with Headers instance
- [x] Test proxy object with empty headers
- [x] Test proxy object with undefined headers
- [x] Test user-provided Proxy-Authorization overrides URL credentials
- [x] All existing proxy tests pass (25 total)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix crash in `FormData.from()` when called with very large ArrayBuffer
input
- Add length check in C++ `toString` function against both Bun's
synthetic limit and WebKit's `String::MaxLength`
- For UTF-8 tagged strings, use simdutf to calculate actual UTF-16
length only when byte length exceeds the limit
## Root Cause
When `FormData.from()` was called with a very large ArrayBuffer (e.g.,
`new Uint32Array(913148244)` = ~3.6GB), the code would crash with:
```
ASSERTION FAILED: data.size() <= MaxLength
vendor/WebKit/Source/WTF/wtf/text/StringImpl.h(886)
```
The `toString()` function in `helpers.h` was only checking against
`Bun__stringSyntheticAllocationLimit` (which defaults to ~4GB), but not
against WebKit's `String::MaxLength` (INT32_MAX, ~2GB). When the input
exceeded `String::MaxLength`, `createWithoutCopying()` would fail with
an assertion.
## Changes
1. **helpers.h**: Added `|| str.len > WTF::String::MaxLength` checks to
all three code paths in `toString()`:
- UTF-8 tagged pointer path (with simdutf length calculation only when
needed)
- External pointer path
- Non-copying creation path
2. **url.zig**: Reverted the incorrect Zig-side check (UTF-8 byte length
!= UTF-16 character length)
## Test plan
- [x] Added test that verifies FormData.from with oversized input
doesn't crash
- [x] Verified original crash case now returns empty FormData instead of
crashing:
```js
const v3 = new Uint32Array(913148244);
FormData.from(v3); // No longer crashes
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fix assertion failure in `Bun.mmap` when `offset` or `size` options
are non-numeric values
- Add validation to reject negative `offset`/`size` with clear error
messages
Minimal reproduction: `Bun.mmap("", { offset: null });`
## Root Cause
`Bun.mmap` was calling `toInt64()` directly on the `offset` and `size`
options without validating they are numbers first. `toInt64()` has an
assertion that the value must be a number or BigInt, which fails when
non-numeric values like `null` or functions are passed.
## Test plan
- [x] Added tests for negative offset/size rejection
- [x] Added tests for non-number inputs (null, undefined)
- [x] `bun bd test test/js/bun/util/mmap.test.js` passes
Closes ENG-22413
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix debug assertion failure in `JSWrappingFunction` when
`expect.extend()` is called with objects containing non-`JSFunction`
callables
- The crash occurred because `jsCast<JSFunction*>` was used, which
asserts the value inherits from `JSFunction`, but callable class
constructors (like `Expect`) inherit from `InternalFunction` instead
## Changes
- Change `JSWrappingFunction` to store `JSObject*` instead of
`JSFunction*`
- Use `jsDynamicCast` instead of `jsCast` in `getWrappedFunction`
- Use `getObject()` instead of `jsCast` in `create()`
## Reproduction
```js
const jest = Bun.jest();
jest.expect.extend(jest);
```
Before fix (debug build):
```
ASSERTION FAILED: !from || from->JSCell::inherits(std::remove_pointer<To>::type::info())
JSCast.h(40) : To JSC::jsCast(From *) [To = JSC::JSFunction *, From = JSC::JSCell]
```
After fix: Properly throws `TypeError: expect.extend: 'jest' is not a
valid matcher`
## Test plan
- [x] Added regression test
`test/regression/issue/fuzzer-ENG-22942.test.ts`
- [x] Existing `expect-extend.test.js` tests pass (27 tests)
- [x] Build succeeds
Fixes ENG-22942
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Fixes #23292
`fs.access()` and `fs.accessSync()` threw EUNKNOWN (-134) when checking
named pipes on Windows (paths like `\\.\pipe\name`), but Node.js worked
fine.
**Repro:**
```ts
// Server creates a pipe at \\.\pipe\bun-test
import net from 'net';
const server = net.createServer();
server.listen('\\\\.\\pipe\\bun-test');
// Client tries to check if the pipe exists
import fs from 'fs';
fs.accessSync('\\\\.\\pipe\\bun-test', fs.constants.F_OK);
// Error: EUNKNOWN: unknown error, access '\\.\pipe\bun-test'
```
## Root Cause
The `osPathKernel32` function normalizes paths before passing to Windows
APIs. The normalization logic treats a single `.` as a "current
directory" component and removes it, so `\\.\pipe\name` incorrectly
became `\\pipe\name` - an invalid path.
## Solution
Detect Windows device paths (starting with `\\.\` or `\\?\`) and skip
normalization for these special paths, preserving the device prefix.
## Test Plan
- [x] Added regression test `test/regression/issue/23292.test.ts`
- [x] Test fails with system bun (v1.3.3): 3 failures (EUNKNOWN)
- [x] Test passes with fix: 4 pass
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Add `cpu_profile` and `heap_snapshot` counters to `Analytics.Features`
- Export `heap_snapshot` to C++ as `Bun__Feature__heap_snapshot`
- Increment `cpu_profile` when `--cpu-prof` flag is used
- Increment `heap_snapshot` in all heap snapshot creation locations:
- `Bun.generateHeapSnapshot()`
- `bun:jsc` `generateHeapSnapshotForDebugging()`
- `console.takeHeapSnapshot()`
- Internal `JSC__JSGlobalObject__generateHeapSnapshot()`
## Test plan
- [x] Build succeeds
- [x] Heap snapshot generation works
- [x] CPU profiling works with `--cpu-prof`
- [x] Existing tests pass: `test/js/bun/util/v8-heap-snapshot.test.ts`
- [x] Existing tests pass: `test/cli/run/cpu-prof.test.ts`
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Added `http_client_proxy` counter to `analytics.Features` struct
- Incremented counter in `ProxyTunnel.onOpen()` when proxy tunnel
connection opens successfully
This allows tracking HTTP client proxy usage in analytics/crash reports
alongside other features like `fetch`, `WebSocket`, `http_server`, etc.
## Test plan
- [x] Build completes successfully (`bun bd`)
- [x] Existing proxy tests pass (`bun bd test
test/js/bun/http/proxy.test.ts`)
- [x] Counter is properly integrated into the analytics framework
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Fixes a crash (ENG-22243) where calling class constructors marked with
`call: false` would create invalid instances instead of throwing an
error.
## Root Cause
When a class definition has `call: false` (like `Bun.RedisClient`), the
code generator was still allowing the constructor to be invoked without
`new`. This created invalid instances that caused a buffer overflow
during garbage collection.
## The Fix
Modified `src/codegen/generate-classes.ts` to properly check the `call`
property:
- When `call: false`: throws `TypeError: Class constructor X cannot be
invoked without 'new'`
- When `call: true`: behaves as before, allowing construction without
`new`
## Test Plan
- [x] Added regression test in `test/regression/issue/22243.test.ts`
- [x] Test fails with system bun (has the bug)
- [x] Test passes with fixed build
- [x] Verified `Bun.RedisClient()` now throws proper error
- [x] Verified `new Bun.RedisClient()` still works
## Before
```bash
$ bun -e "Bun.RedisClient()"
# Creates invalid instance, no error
```
## After
```bash
$ bun -e "Bun.RedisClient()"
TypeError: Class constructor RedisClient cannot be invoked without 'new'
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
- Bumps some packages
- Applies some _best practices_ in certain areas to minimize Aikido noise.
### How did you verify your code works?
CI.
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Fix `spyOn` crash when using indexed property keys (e.g., `spyOn(arr,
0)`)
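A short sketch of the previously-crashing call:
```typescript
import { test, expect, spyOn } from "bun:test";

test("spyOn accepts indexed property keys", () => {
  const handlers = [() => "original"];
  const spy = spyOn(handlers, 0); // numeric key; this used to crash
  handlers[0]();
  expect(spy).toHaveBeenCalledTimes(1);
});
```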
## Test plan
- [x] Added tests for `spyOn` with numeric indexed properties
- [x] Added tests for `spyOn` with string indexed properties (e.g.,
`"0"`)
- [x] All existing `spyOn` tests pass
- [x] Full `mock-fn.test.js` test suite passes
Fixes ENG-21973
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fix assertion failure when `Bun.indexOfLine` is called with a
non-number offset argument
- Changed from `.to(u32)` to `.coerce(i32, globalThis)` for proper
JavaScript type coercion
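A hedged sketch of the change in behavior (return values omitted on purpose):
```typescript
const buf = Buffer.from("first\nsecond\n");

// Both calls now behave consistently; previously the second one
// tripped an internal assertion instead of coercing the offset.
console.log(Bun.indexOfLine(buf, 0));
console.log(Bun.indexOfLine(buf, "0" as unknown as number));
```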
## Test plan
- [x] Added regression test in `test/js/bun/util/index-of-line.test.ts`
- [x] `bun bd test test/js/bun/util/index-of-line.test.ts` passes
Closes ENG-21997
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
- Fix off-by-one error in `preprocessUpdateRequests` where the bounds
check used `>` instead of `>=` when validating package IDs from the
resolution buffer
- When `old_resolution == packages.len`, the check `> packages.len`
passes but `resolutions_of_yore[old_resolution]` is out of bounds since
valid indices are `0` to `packages.len-1`
- This causes an internal assertion failure during `bun install` with
update requests
## The Bug
```zig
// BEFORE (buggy) - at lockfile.zig:484 and :522
if (old_resolution > old.packages.len) continue;
const res = resolutions_of_yore[old_resolution]; // OOB when old_resolution == packages.len
// AFTER (fixed)
if (old_resolution >= old.packages.len) continue;
const res = resolutions_of_yore[old_resolution]; // Now safe
```
## Crash Report
From
[bun.report](https://bun.report/1.3.3/wi1274e01cAggkggB+rt/F+pvBiw3rDqul/Doyi4Emzi5Ewj44FuvbgjMog00yDCYKERNEL32.DLLut0LCSntdll.dll4zijBA0eNrzzCtJLcpLzFFILC5OLSrJzM9TSEvMzCktSgUAiSkKPg/view):
```
panic: Internal assertion failure
- lockfile.zig:523: preprocessUpdateRequests
- install_with_manager.zig:605: installWithManager
- updatePackageJSONAndInstall.zig:340
Features: extracted_packages, text_lockfile
```
## Test plan
- [x] `bun run zig:check` passes
- [ ] CI passes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Skip 2 tests that use `grpctest.kleinsch.com` (domain no longer
exists)
- Fix flaky "should not keep repeating failed resolutions" test
These tests were originally skipped when added in #14286, but were
accidentally un-skipped in #20051. This restores them to match upstream
grpc-node.
## To re-enable these tests in the future
Bun could set up its own DNS TXT record at `*.bun.sh`. According to the
[gRPC A2
spec](https://github.com/grpc/proposal/blob/master/A2-service-configs-in-dns.md):
**DNS Setup needed:**
1. A record: `grpctest.bun.sh` → any valid IP (e.g., `127.0.0.1`)
2. TXT record: `_grpc_config.grpctest.bun.sh` with value:
```
grpc_config=[{"serviceConfig":{"loadBalancingPolicy":"round_robin","methodConfig":[{"name":[{"service":"MyService","method":"Foo"}],"waitForReady":true}]}}]
```
Then update the tests to use `grpctest.bun.sh` instead.
## Test plan
- [x] `bun bd test test/js/third_party/grpc-js/test-resolver.test.ts`
passes (20 pass, 3 skip, 1 todo, 0 fail)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
We can't use `globalThis.takeException()` because it throws the out-of-memory
error when we instead need to take the exception.
### How did you verify your code works?
## Summary
- Fixed boundary check in `String.zig` to use `>=` instead of `>` for
`max_length()` comparisons
- Strings fail when the length is exactly equal to `max_length()`, not
just when exceeding it
- This affects both `createExternal` and
`createExternalGloballyAllocated` functions
## Test plan
- Existing tests should continue to pass
- Strings with length exactly equal to `max_length()` will now be
properly rejected
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Fixed a typo in `makeComponent` that incorrectly identified
2-character patterns starting with `.` (like `.*`) as `..` (DotBack)
patterns
- The condition checked `pattern[component.start] == '.'` twice instead
of checking both characters at positions 0 and 1
- This caused patterns like `.*/*` to be parsed as `../` + `*`, making
the glob walker traverse into parent directories
Fixes #24936
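A sketch of the scanning behavior the tests cover (directory layout is hypothetical):
```typescript
import { Glob } from "bun";

// With the fix, ".*/*" matches entries inside dot-directories of the cwd
// instead of being misparsed as "../" and walking into the parent directory.
const glob = new Glob(".*/*");
for (const entry of glob.scanSync({ cwd: "." })) {
  console.log(entry);
}
```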
## Test plan
- [x] Added tests in `test/js/bun/glob/scan.test.ts` that verify
patterns like `.*/*` and `.*/**/*.ts` don't escape the cwd boundary
- [x] Tests fail with system bun (bug reproduced) and pass with the fix
- [x] All existing glob tests pass (169 tests)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
### What does this PR do?
Removes a TODO
### How did you verify your code works?
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## What does this PR do?
Adds missing documentation for features introduced in Bun v1.3.2 and
v1.3.3:
- **Standalone executable config flags**
(`docs/bundler/executables.mdx`): Document
`--no-compile-autoload-dotenv` and `--no-compile-autoload-bunfig` flags
that control automatic config file loading in compiled binaries
- **Test retry/repeats** (`docs/test/writing-tests.mdx`): Document the
`retry` and `repeats` test options for handling flaky tests (see the sketch after this list)
- **Disable env file loading**
(`docs/runtime/environment-variables.mdx`): Document `--no-env-file`
flag and `env = false` bunfig option
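As a quick illustration of the `retry`/`repeats` options referenced above (a sketch assuming the `test(name, fn, options)` form the new docs describe):
```typescript
import { test, expect } from "bun:test";

// Retry a flaky test up to 3 more times before reporting a failure.
test("occasionally flaky", () => {
  expect(Math.random()).toBeLessThan(0.9);
}, { retry: 3 });

// Run the same test body 5 additional times to shake out nondeterminism.
test("repeated", () => {
  expect(1 + 1).toBe(2);
}, { repeats: 5 });
```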
## How did you verify your code works?
- [x] Verified documentation is accurate against source code
implementation in `src/cli/Arguments.zig`
- [x] Verified features are not already documented elsewhere
- [x] Cross-referenced with v1.3.2 and v1.3.3 release notes
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Adds [@mschwarzl's Fuzzilli Support
PR](https://github.com/oven-sh/bun/pull/23862) with the changes
necessary to be able to:
- Run it in CI
- Make no impact on `debug` and `release` mode.
### How did you verify your code works?
---------
Co-authored-by: Martin Schwarzl <mschwarzl@cloudflare.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
### What does this PR do?
This was creating `Zig::FFIFunction` when we could instead use a plain
`JSC::JSFunction`
### How did you verify your code works?
Added a test
### What does this PR do?
`blob.stream(undefined)`
### How did you verify your code works?
Added a test
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
Fixes ENG-21490
### How did you verify your code works?
Added a test that would previously fail due to timeout. It also confirms
the parsed result is correct.
---------
Co-authored-by: taylor.fish <contact@taylor.fish>
### What does this PR do?
Makes sure we are creating error messages with an allocator that will
not `deinit` at the end of function scope on error.
fixes ENG-21528
### How did you verify your code works?
Added a test
### What does this PR do?
We need to call `uv_loop_close` in order to remove the threadlocal loop
from a list in libuv so it won't be used later. This explains the crash
reports because they have `workers_terminated` in features.
Fixes #24804
Closes BUN-3NV
Closes ENG-21523
### How did you verify your code works?
Manually. I'm not sure how to write a test yet other than manually
clicking sleep
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Zack Radisic <zack@theradisic.com>
### What does this PR do?
We must use the right number of properties, or else set it to 0
### How did you verify your code works?
Read the code to check the number of properties, plus CI
## Summary
Implements `--no-env-file` CLI flag and bunfig configuration options to
disable automatic `.env` file loading at runtime and in the bundler.
## Motivation
Users may want to disable automatic `.env` file loading for:
- Production environments where env vars are managed externally
- CI/CD pipelines where .env files should be ignored
- Testing scenarios where explicit env control is needed
- Security contexts where .env files should not be trusted
## Changes
### CLI Flag
- Added `--no-env-file` flag that disables loading of default .env files
- Still respects explicit `--env-file` arguments for intentional env
loading
### Bunfig Configuration
Added support for disabling .env loading via `bunfig.toml`:
- `env = false` - disables default .env file loading
- `env = null` - disables default .env file loading
- `env.file = false` - disables default .env file loading
- `env.file = null` - disables default .env file loading
### Implementation
- Added `disable_default_env_files` field to `api.TransformOptions` with
serialization support
- Added `disable_default_env_files` field to `options.Env` struct
- Implemented `loadEnvConfig` in bunfig parser to handle env
configuration
- Wired up flag throughout runtime and bundler code paths
- Preserved package.json script runner behavior (always skips default
.env files)
## Tests
Added comprehensive test suite (`test/cli/run/no-envfile.test.ts`) with
9 tests covering:
- `--no-env-file` flag with `.env`, `.env.local`,
`.env.development.local`
- Bunfig configurations: `env = false`, `env.file = false`, `env = true`
- `--no-env-file` with `-e` eval flag
- `--no-env-file` combined with `--env-file` (explicit files still load)
- Production mode behavior
All tests pass with debug bun and fail with system bun (as expected).
## Example Usage
```bash
# Disable all default .env files
bun --no-env-file index.js
# Disable defaults but load explicit file
bun --no-env-file --env-file .env.production index.js
# Disable via bunfig.toml
cat > bunfig.toml << 'CONFIG'
env = false
CONFIG
bun index.js
```
## Files Changed
- `src/cli/Arguments.zig` - CLI flag parsing
- `src/api/schema.zig` - API schema field with encode/decode
- `src/options.zig` - Env struct field and wiring
- `src/bunfig.zig` - Config parsing with loadEnvConfig
- `src/transpiler.zig` - Runtime wiring
- `src/bun.js.zig` - Runtime wiring
- `src/cli/exec_command.zig` - Runtime wiring
- `src/cli/run_command.zig` - Preserved package.json script runner
behavior
- `test/cli/run/no-envfile.test.ts` - Comprehensive test suite
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Fixes a regression introduced in Bun v1.3.2 with #24283.
We are not able to skip `sharp` lifecycle scripts before v0.33.0 because
previous versions did not use optional dependencies with prebuilds.
Fixes #24550
Fixes ENG-21519
### How did you verify your code works?
Manually
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
Ensures strings that would parse as a number with leading zeroes aren't
emitted without quotes.
fixes #23691
### How did you verify your code works?
Added a test
### What does this PR do?
The assertion was too strict.
This PR changes the assertion to allow multiple instances of the same
dependency id to be present. It also changes all the assertions to debug
assertions.
fixes #24510
### How did you verify your code works?
Manually, and added a new test
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Marko Vejnovic <marko@bun.com>
This updates the documentation for `fs.watch()` to use `relativePath`
instead of `filename` in the recursive example, following the same
convention from PR #23990.
When `recursive: true` is set on `fs.watch()`, the callback receives a
relative path to the changed file rather than just a simple filename.
Using `relativePath` as the parameter name makes this distinction
clearer to users.
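The documented convention, as a small sketch (the watched directory is a placeholder):
```typescript
import fs from "node:fs";

// With recursive: true, the callback receives a path relative to the watched root.
fs.watch("./src", { recursive: true }, (eventType, relativePath) => {
  console.log(eventType, relativePath); // e.g. "change" "nested/dir/file.ts"
});
```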
**Related to:** https://github.com/oven-sh/bun/pull/23990
Co-authored-by: Michael H <git@riskymh.dev>
### What does this PR do?
We must use the right number of properties (not more or less), or else
set it to 0
### How did you verify your code works?
Read the code; this avoids potential crashes and improves stability
### What does this PR do?
Fixes
https://linear.app/oven/issue/ENG-21505/panic-attempt-to-use-null-value-at-incrementalgraph-by-misusing-jscode
When calling `takeJSBundleToList`/`takeJSBundle`, the desired behavior is
to get only JS chunks from the graph. Since the graph can also contain
CSS chunks, we now skip those and continue, keeping the desired behavior
in a safe way instead of unconditionally unwrapping something that is
not guaranteed to have a `jsCode`.
### How did you verify your code works?
CI
## Summary
Remove outdated version mentions (1.0.x and 1.1.x) from documentation
for better consistency. These versions are over a year old - you should
be using a recent version of bun :).
## What changed
**Removed version mentions from:**
- `docs/pm/lifecycle.mdx` - v1.0.16 (trusted dependencies)
- `docs/bundler/executables.mdx` - v1.0.23, v1.1.25, v1.1.30 (various
features)
- `docs/guides/install/jfrog-artifactory.mdx` - v1.0.3+ (env var
comment)
- `docs/guides/install/azure-artifacts.mdx` - v1.0.3+ (env var comment)
- `docs/runtime/workers.mdx` - v1.1.13, v1.1.35 (blob URLs, preload)
- `docs/runtime/networking/dns.mdx` - v1.1.9 (DNS caching)
- `docs/guides/runtime/import-html.mdx` - v1.1.5
- `docs/guides/runtime/define-constant.mdx` - v1.1.5
- `docs/runtime/sqlite.mdx` - v1.1.31
**Kept version mentions in:**
- All 1.2.x versions (still recent, less than a year old)
- Benchmark version numbers (e.g., S3 performance comparison with
v1.1.44)
- `docs/guides/install/yarnlock.mdx` (bun.lock introduction context)
- `docs/project/building-windows.mdx` (build requirements)
- `docs/runtime/http/websockets.mdx` (performance benchmarks)
## Why
The docs lack consistency around version mentions - we don't document
every feature's version, so keeping scattered old version numbers looks
inconsistent. These changes represent a small percentage of features
added recently, and users on ancient versions have bigger problems than
needing to know exactly when a feature landed.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: RiskyMH <git@riskymh.dev>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
- Use `std.fmt.fmtIntSizeBin` to format progress indicators with byte
sizes
- Improves readability during operations like `bun upgrade`
- Changes display from raw bytes (e.g., "23982378/2398284") to
human-readable format (e.g., "23.2MiB/100MiB")
## Changes
Modified `src/Progress.zig`:
- Updated progress formatting to use `std.fmt.fmtIntSizeBin` for both
current and total sizes
- Applied to both progress with total (`[current/total unit]`) and
without total (`[current unit]`)
## Test plan
- [x] Build succeeds with `bun bd`
- [ ] Manual verification with `bun upgrade` shows human-readable sizes
Fixes #24226, fixes #7826
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: pfg <pfg@pfg.pw>
Fixes ENG-21481
Updates ci_info to include more CIs. The CI detection is now code-generated
from the JSON in the ci-info package. It also supports setting CI=true to
force CI detection.
---------
Co-authored-by: pfg <pfg@pfg.pw>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Updates documentation for all major features and changes introduced in
Bun v1.3.2 blog post.
## Changes
### Package Manager
- ✅ Document `configVersion` system for controlling default linker
behavior
- ✅ Clarify that "existing projects (made pre-v1.3.2)" use hoisted
installs for backward compatibility
- ✅ Add smart postinstall script optimization with environment variable
flags
- ✅ Document improved Git dependency resolution with HTTP tarball
optimization
- ✅ Add `bun list` alias for `bun pm ls`
### Testing
- ✅ Document new `onTestFinished` lifecycle hook with simple example
- ✅ Add to lifecycle hooks table in test documentation
### Runtime & Performance
- ✅ Add CPU profiling with `--cpu-prof` flag documentation
- ✅ Place after memory usage section for better flow
### WebSockets
- ✅ Add `subscriptions` getter to existing pub/sub example
- ✅ Add TypeScript reference for the subscriptions property
## Documentation Improvements
All documentation now consistently:
- Uses "made pre-v1.3.2" to clarify existing project behavior
- Simplifies default linker explanations with clear references to
`/docs/pm/isolated-installs`
- Uses `/docs/pm/isolated-installs` for all internal references
- Avoids confusing technical details in favor of user-friendly summaries
## Files Modified
- `docs/guides/install/add-git.mdx` - Added GitHub tarball optimization
note
- `docs/pm/cli/install.mdx` - Added installation strategies and smart
postinstall docs
- `docs/pm/cli/pm.mdx` - Added bun list alias
- `docs/pm/isolated-installs.mdx` - Updated default behavior section
with configVersion table
- `docs/project/benchmarking.mdx` - Added CPU profiling section
- `docs/runtime/bunfig.mdx` - Clarified install.linker defaults
- `docs/runtime/http/websockets.mdx` - Added subscriptions to example
and TypeScript interface
- `docs/test/lifecycle.mdx` - Added onTestFinished hook documentation
## Diff
````diff
diff --git a/docs/guides/install/add-git.mdx b/docs/guides/install/add-git.mdx
index 70950e1a63..7f8f3c8d81 100644
--- a/docs/guides/install/add-git.mdx
+++ b/docs/guides/install/add-git.mdx
@@ -33,6 +33,8 @@ bun add git@github.com:lodash/lodash.git
bun add github:colinhacks/zod
```
+**Note:** GitHub dependencies download via HTTP tarball when possible for faster installation.
+
---
See [Docs > Package manager](https://bun.com/docs/cli/install) for complete documentation of Bun's package manager.
diff --git a/docs/pm/cli/install.mdx b/docs/pm/cli/install.mdx
index 7affb62646..dde268b7e5 100644
--- a/docs/pm/cli/install.mdx
+++ b/docs/pm/cli/install.mdx
@@ -88,6 +88,13 @@ Lifecycle scripts will run in parallel during installation. To adjust the maximu
bun install --concurrent-scripts 5
```
+Bun automatically optimizes postinstall scripts for popular packages (like `esbuild`, `sharp`, etc.) by determining which scripts need to run. To disable these optimizations:
+
+```bash terminal icon="terminal"
+BUN_FEATURE_FLAG_DISABLE_NATIVE_DEPENDENCY_LINKER=1 bun install
+BUN_FEATURE_FLAG_DISABLE_IGNORE_SCRIPTS=1 bun install
+```
+
---
## Workspaces
@@ -231,7 +238,7 @@ Bun supports installing dependencies from Git, GitHub, and local or remotely-hos
Bun supports two package installation strategies that determine how dependencies are organized in `node_modules`:
-### Hoisted installs (default for single projects)
+### Hoisted installs
The traditional npm/Yarn approach that flattens dependencies into a shared `node_modules` directory:
@@ -249,7 +256,15 @@ bun install --linker isolated
Isolated installs create a central package store in `node_modules/.bun/` with symlinks in the top-level `node_modules`. This ensures packages can only access their declared dependencies.
-For complete documentation on isolated installs, refer to [Package manager > Isolated installs](/pm/isolated-installs).
+### Default strategy
+
+The default linker strategy depends on whether you're starting fresh or have an existing project:
+
+- **New workspaces/monorepos**: `isolated` (prevents phantom dependencies)
+- **New single-package projects**: `hoisted` (traditional npm behavior)
+- **Existing projects (made pre-v1.3.2)**: `hoisted` (preserves backward compatibility)
+
+The default is controlled by a `configVersion` field in your lockfile. For a detailed explanation, see [Package manager > Isolated installs](/docs/pm/isolated-installs).
---
@@ -319,8 +334,7 @@ dryRun = false
concurrentScripts = 16 # (cpu count or GOMAXPROCS) x2
# installation strategy: "hoisted" or "isolated"
-# default: "hoisted" (for single-project projects)
-# default: "isolated" (for monorepo projects)
+# default varies by project type - see /docs/pm/isolated-installs
linker = "hoisted"
diff --git a/docs/pm/cli/pm.mdx b/docs/pm/cli/pm.mdx
index fc297753d3..9c8faa7da1 100644
--- a/docs/pm/cli/pm.mdx
+++ b/docs/pm/cli/pm.mdx
@@ -115,6 +115,8 @@ To print a list of installed dependencies in the current project and their resol
```bash terminal icon="terminal"
bun pm ls
+# or
+bun list
```
```txt
@@ -130,6 +132,8 @@ To print all installed dependencies, including nth-order dependencies.
```bash terminal icon="terminal"
bun pm ls --all
+# or
+bun list --all
```
```txt
diff --git a/docs/pm/isolated-installs.mdx b/docs/pm/isolated-installs.mdx
index 73c6748b15..17afe02fe1 100644
--- a/docs/pm/isolated-installs.mdx
+++ b/docs/pm/isolated-installs.mdx
@@ -5,7 +5,7 @@ description: "Strict dependency isolation similar to pnpm's approach"
Bun provides an alternative package installation strategy called **isolated installs** that creates strict dependency isolation similar to pnpm's approach. This mode prevents phantom dependencies and ensures reproducible, deterministic builds.
-This is the default installation strategy for monorepo projects.
+This is the default installation strategy for **new** workspace/monorepo projects (with `configVersion = 1` in the lockfile). Existing projects continue using hoisted installs unless explicitly configured.
## What are isolated installs?
@@ -43,8 +43,23 @@ linker = "isolated"
### Default behavior
-- For monorepo projects, Bun uses the **isolated** installation strategy by default.
-- For single-project projects, Bun uses the **hoisted** installation strategy by default.
+The default linker strategy depends on your project's lockfile `configVersion`:
+
+| `configVersion` | Using workspaces? | Default Linker |
+| --------------- | ----------------- | -------------- |
+| `1` | ✅ | `isolated` |
+| `1` | ❌ | `hoisted` |
+| `0` | ✅ | `hoisted` |
+| `0` | ❌ | `hoisted` |
+
+**New projects**: Default to `configVersion = 1`. In workspaces, v1 uses the isolated linker by default; otherwise it uses hoisted linking.
+
+**Existing Bun projects (made pre-v1.3.2)**: If your existing lockfile doesn't have a version yet, Bun sets `configVersion = 0` when you run `bun install`, preserving the previous hoisted linker default.
+
+**Migrations from other package managers**:
+
+- From pnpm: `configVersion = 1` (using isolated installs in workspaces)
+- From npm or yarn: `configVersion = 0` (using hoisted installs)
You can override the default behavior by explicitly specifying the `--linker` flag or setting it in your configuration file.
diff --git a/docs/project/benchmarking.mdx b/docs/project/benchmarking.mdx
index 1263a06729..2ab8bcafc8 100644
--- a/docs/project/benchmarking.mdx
+++ b/docs/project/benchmarking.mdx
@@ -216,3 +216,26 @@ numa nodes: 1
elapsed: 0.068 s
process: user: 0.061 s, system: 0.014 s, faults: 0, rss: 57.4 MiB, commit: 64.0 MiB
```
+
+## CPU profiling
+
+Profile JavaScript execution to identify performance bottlenecks with the `--cpu-prof` flag.
+
+```sh terminal icon="terminal"
+bun --cpu-prof script.js
+```
+
+This generates a `.cpuprofile` file you can open in Chrome DevTools (Performance tab → Load profile) or VS Code's CPU profiler.
+
+### Options
+
+```sh terminal icon="terminal"
+bun --cpu-prof --cpu-prof-name my-profile.cpuprofile script.js
+bun --cpu-prof --cpu-prof-dir ./profiles script.js
+```
+
+| Flag | Description |
+| ---------------------------- | -------------------- |
+| `--cpu-prof` | Enable profiling |
+| `--cpu-prof-name <filename>` | Set output filename |
+| `--cpu-prof-dir <dir>` | Set output directory |
diff --git a/docs/runtime/bunfig.mdx b/docs/runtime/bunfig.mdx
index 91005c1607..5b7fe49823 100644
--- a/docs/runtime/bunfig.mdx
+++ b/docs/runtime/bunfig.mdx
@@ -497,9 +497,9 @@ print = "yarn"
### `install.linker`
-Configure the default linker strategy. Default `"hoisted"` for single-project projects, `"isolated"` for monorepo projects.
+Configure the linker strategy for installing dependencies. Defaults to `"isolated"` for new workspaces, `"hoisted"` for new single-package projects and existing projects (made pre-v1.3.2).
-For complete documentation refer to [Package manager > Isolated installs](/pm/isolated-installs).
+For complete documentation refer to [Package manager > Isolated installs](/docs/pm/isolated-installs).
```toml title="bunfig.toml" icon="settings"
[install]
diff --git a/docs/runtime/http/websockets.mdx b/docs/runtime/http/websockets.mdx
index b33f37c29f..174043200d 100644
--- a/docs/runtime/http/websockets.mdx
+++ b/docs/runtime/http/websockets.mdx
@@ -212,6 +212,9 @@ const server = Bun.serve({
// this is a group chat
// so the server re-broadcasts incoming message to everyone
server.publish("the-group-chat", `${ws.data.username}: ${message}`);
+
+ // inspect current subscriptions
+ console.log(ws.subscriptions); // ["the-group-chat"]
},
close(ws) {
const msg = `${ws.data.username} has left the chat`;
@@ -393,6 +396,7 @@ interface ServerWebSocket {
readonly data: any;
readonly readyState: number;
readonly remoteAddress: string;
+ readonly subscriptions: string[];
send(message: string | ArrayBuffer | Uint8Array, compress?: boolean): number;
close(code?: number, reason?: string): void;
subscribe(topic: string): void;
diff --git a/docs/test/lifecycle.mdx b/docs/test/lifecycle.mdx
index 6427175df6..3837f0e948 100644
--- a/docs/test/lifecycle.mdx
+++ b/docs/test/lifecycle.mdx
@@ -6,11 +6,12 @@ description: "Learn how to use beforeAll, beforeEach, afterEach, and afterAll li
The test runner supports the following lifecycle hooks. This is useful for loading test fixtures, mocking data, and configuring the test environment.
| Hook | Description |
-| ------------ | --------------------------- |
+| ---------------- | ---------------------------------------------------------- |
| `beforeAll` | Runs once before all tests. |
| `beforeEach` | Runs before each test. |
| `afterEach` | Runs after each test. |
| `afterAll` | Runs once after all tests. |
+| `onTestFinished` | Runs after a single test finishes (after all `afterEach`). |
## Per-Test Setup and Teardown
@@ -90,6 +91,23 @@ describe("test group", () => {
});
```
+### `onTestFinished`
+
+Use `onTestFinished` to run a callback after a single test completes. It runs after all `afterEach` hooks.
+
+```ts title="test.ts" icon="/icons/typescript.svg"
+import { test, onTestFinished } from "bun:test";
+
+test("cleanup after test", () => {
+ onTestFinished(() => {
+ // runs after all afterEach hooks
+ console.log("test finished");
+ });
+});
+```
+
+Not supported in concurrent tests; use `test.serial` instead.
+
## Global Setup and Teardown
To scope the hooks to an entire multi-file test run, define the hooks in a separate file.
````
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Michael H <git@riskymh.dev>
Co-authored-by: Lydia Hallie <lydiajuliettehallie@gmail.com>
### What does this PR do?
Fixes `bun pm ls --all` crash with unresolved optional peer
dependencies.
Fixes `bun pm ls` crash with empty lockfiles.
Fixes #24502
### How did you verify your code works?
Added a test for both crashes
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Updated all example version placeholders in documentation from 1.3.1 and
1.2.20 to 1.3.2.
## Changes
Updated version examples in:
- Installation examples (Linux/macOS and Windows install commands)
- Package manager output examples (`bun install`, `bun publish`, `bun
pm` commands)
- Test runner output examples
- Spawn/child process output examples
- Fetch User-Agent header examples in debugging docs
- `Bun.version` API example
## Notes
- Historical version references (e.g., "As of Bun v1.x.x..." or "Bun
v1.x.x+ required") were intentionally **preserved** as they document
when features were introduced
- Generic package.json version examples (non-Bun package versions) were
**preserved**
- Only example outputs and code snippets showing current Bun version
were updated
## Files Changed (13 total)
- `docs/installation.mdx`
- `docs/guides/install/from-npm-install-to-bun-install.mdx`
- `docs/guides/install/add-peer.mdx`
- `docs/bundler/html-static.mdx` (6 occurrences)
- `docs/test/dom.mdx`
- `docs/pm/cli/publish.mdx`
- `docs/pm/cli/pm.mdx`
- `docs/guides/test/snapshot.mdx` (2 occurrences)
- `docs/guides/ecosystem/nuxt.mdx`
- `docs/guides/util/version.mdx`
- `docs/runtime/debugger.mdx` (3 occurrences)
- `docs/runtime/networking/fetch.mdx`
- `docs/runtime/child-process.mdx`
**Total:** 23 version references updated
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Michael H <git@riskymh.dev>
### What does this PR do?
Fixes https://github.com/oven-sh/bun/issues/24385
### How did you verify your code works?
Confirmed that the test added in the first commit fails on mainline
`bun` and is fixed in this PR.
---------
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
### What does this PR do?
### How did you verify your code works?
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
This PR is the fix-only version of
https://github.com/oven-sh/bun/pull/24486. Unfortunately due to
complexity setting up all CI agents to ping private git repos, I was
unable to get CI passing there.
### How did you verify your code works?
I ran this:
```
marko@fedora:~/Desktop/bun-4$ bun add git+ssh://git@github.com:oven-sh/private-install-test-repo.git#5b37e644a2ef23fad0da4027042f01b194b179e8
bun add v1.3.2-canary.108 (44402ad2)
🔍 Resolving [1/1] error: "git clone" for "git+ssh://git@github.com:oven-sh/private-install-test-repo.git#5b37e644a2ef23fad0da4027042f01b194b179e8" failed
error: InstallFailed cloning repository for git+ssh://git@github.com:oven-sh/private-install-test-repo.git#5b37e644a2ef23fad0da4027042f01b194b179e8
error: git+ssh://git@github.com:oven-sh/private-install-test-repo.git#5b37e644a2ef23fad0da4027042f01b194b179e8 failed to resolve
```
followed by
```
marko@fedora:~/Desktop/bun-4$ BUN_DEBUG_QUIET_LOGS=1 ./build/debug/bun-debug add git+ssh://git@github.com:oven-sh/private-install-test-repo.git#5b37e644a2ef23fad0da4027042f01b194b179e8
bun add v1.3.2 (0db90b25)
installed private-install-test-repo@git+ssh://git@github.com:oven-sh/private-install-test-repo.git#5b37e644a2ef23fad0da4027042f01b194b179e8
[1.61s] done
```
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
Adds deployment guides for Bun apps on AWS Lambda, Google Cloud Run, and
DigitalOcean using a custom `Dockerfile`
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
In the crash reporter, we currently use glibc's `backtrace()` function
on glibc Linux targets. However, this has resulted in poor stack traces
in many scenarios, particularly when a JSC signal handlers is involved,
in which case the stack trace tends to have only one frame—the signal
handler itself. Considering that JSC installs a signal handler for SEGV,
this is particularly bad.
Zig's `std.debug.captureStackTrace` generates considerably more complete
stack traces, but it has an issue where the top frame is missing when a
signal handler is involved. This is unfortunate, but it's still the
better option for now. Note that our stack traces on macOS also have
this missing frame issue.
In the future, we will investigate backporting the changes to stack
trace capturing that were recently made in Zig's `master` branch, since
that seems to have fixed the missing frame issue.
This PR still uses the stack trace provided by `backtrace()` if it
returns more frames than `captureStackTrace`. In particular, ARM may
need this behavior.
(For internal tracking: fixes ENG-21406)
Fixes #23865, Fixes ENG-21446
Previously, a termination exception would be thrown. We didn't handle it
properly and eventually it got caught by a `catch @panic()` handler.
Now, no termination exception is thrown.
```
drainMicrotasksWithGlobal calls JSC__JSGlobalObject__drainMicrotasks
JSC__JSGlobalObject__drainMicrotasks returns m_terminationException
-> drainMicrotasksWithGlobal
-> event_loop.zig:exit, which catches the error and discards it
-> ...
```
For workers, we will need to handle termination exceptions in this
codepath.
~~Previously, it would see the exception, call
reportUncaughtExceptionAtEventLoop, but the exception would still
survive and escape the catch scope. You're not supposed to still have an
exception signaled at the exit of a catch scope. The exception checker
may not have caught this because the offending branch wasn't taken.~~
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
### How did you verify your code works?
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
pulled out of https://github.com/oven-sh/bun/pull/21809
- brings the ASAN behavior on linux closer in sync with macos
- fixes some tests to also pass in node
---------
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
### What does this PR do?
Fixes #24387
### How did you verify your code works?
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Marko Vejnovic <marko@bun.com>
## Summary
Fixes a segfault that occurred when calling `process.dlopen` with
`null`, `undefined`, or primitive values for `exports`.
Previously, this would cause a crash at address `0x00000000` in
`node_module_register` due to dereferencing an uninitialized
`strongExportsObject`.
## Changes
- Modified `src/bun.js/bindings/v8/node.cpp` to use JSC's `toObject()`
instead of manual type checking
- This matches Node.js `ToObject()` behavior:
- Throws `TypeError` for `null`/`undefined`
- Creates wrapper objects for primitives
- Preserves existing objects
## Test Plan
Added `test/js/node/process/dlopen-non-object-exports.test.ts` with
three test cases:
- Null exports (should throw)
- Undefined exports (should throw)
- Primitive exports (should create wrapper)
All tests pass with the fix.
## Related Issue
Fixes the first bug discovered in the segfault investigation.
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Fixes incorrect JWK "d" field length for exported elliptic curve private
keys. The "d" field is now correctly padded to ensure RFC 7518
compliance.
## Problem
When exporting EC private keys to JWK format, the "d" field would
sometimes be shorter than required by RFC 7518 because
`convertToBytes()` doesn't pad the result when the BIGNUM has leading
zeros. This caused incompatibility with Chrome's strict validation,
though Node.js and Firefox would accept the malformed keys.
Expected lengths per RFC 7518:
- P-256: 32 bytes → 43 base64url characters
- P-384: 48 bytes → 64 base64url characters
- P-521: 66 bytes → 88 base64url characters
## Solution
Changed `src/bun.js/bindings/webcrypto/CryptoKeyECOpenSSL.cpp:420` to
use `convertToBytesExpand(privateKey, keySizeInBytes)` instead of
`convertToBytes(privateKey)`, ensuring the private key is padded with
leading zeros when necessary. This matches the behavior already used for
the x and y public key coordinates.
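As a quick sanity check (illustrative, not the PR's regression test), exporting a
freshly generated P-256 private key via WebCrypto should always yield a
43-character `d` field:
```ts
// Illustrative check: the JWK "d" field for a P-256 private key should always be
// 43 base64url characters (32 bytes), even when the scalar has leading zero bytes.
const keyPair = await crypto.subtle.generateKey(
  { name: "ECDSA", namedCurve: "P-256" },
  true,
  ["sign", "verify"],
);
const jwk = await crypto.subtle.exportKey("jwk", keyPair.privateKey);
console.log(jwk.d!.length); // expected: 43
```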
## Test plan
- ✅ Added regression test in `test/regression/issue/24399.test.ts` that
generates multiple keys for each curve and verifies correct "d" field
length
- ✅ Test fails with `USE_SYSTEM_BUN=1 bun test` (reproduces the bug)
- ✅ Test passes with `bun bd test` (verifies the fix)
- ✅ Existing crypto tests pass
Fixes #24399
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
Refactored NAPI property and element access to use inline methods and
improved error handling. Added comprehensive tests for default value
behavior and numeric string key operations in NAPI, ensuring correct
handling of missing properties, integer keys, and property deletion.
Updated TypeScript tests to cover new scenarios.
### How did you verify your code works?
Tests
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
## Summary
Fixes 100% CPU usage on idle WebSocket servers between bun-v1.2.23 and
bun-v1.3.0.
Many users reported WebSocket server CPU usage jumping to 100% on idle
connections after upgrading to v1.3.0. Investigation revealed a missing
`poll_ref.unref()` call in the WebSocket upgrade path.
## Root Cause
In commit 625e537f5d (#23348), the `OnBeforeOpen` callback mechanism was
removed as part of refactoring the WebSocket upgrade process. However,
this callback contained a critical cleanup step:
```zig
defer ctx.this.poll_ref.unref(ctx.globalObject.bunVM());
```
When a `NodeHTTPResponse` is created, `poll_ref.ref()` is called (line
314) to keep the event loop alive while handling the HTTP request. After
a WebSocket upgrade, the HTTP response object is no longer relevant and
its `poll_ref` must be unref'd to indicate the request processing is
complete.
Without this unref, the event loop maintains an active reference even
after the upgrade completes, causing the CPU to spin at 100% waiting for
events on what should be an idle connection.
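For context, a minimal idle-server sketch of the affected scenario (assumptions:
stock `Bun.serve` WebSocket upgrade, nothing from the internal fix itself).
Before this change, a process like this would spin at 100% CPU once a client
upgraded and went idle; afterwards it stays quiet:
```ts
// Minimal idle-WebSocket server sketch. Connect a client, send nothing, and
// watch CPU usage: it should stay near zero after the upgrade completes.
const server = Bun.serve({
  port: 0,
  fetch(req, server) {
    if (server.upgrade(req)) return; // socket handed off to the websocket handlers
    return new Response("expected a WebSocket upgrade", { status: 400 });
  },
  websocket: {
    message(ws, message) {
      ws.send(message); // simple echo; the bug reproduced with no traffic at all
    },
  },
});
console.log(`ws://localhost:${server.port}`);
```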
## Changes
- Added `poll_ref.unref()` call in `NodeHTTPResponse.upgrade()` after
setting the `upgraded` flag
- Added regression test to verify event loop properly exits after
WebSocket upgrade
## Test Plan
- [x] Code compiles successfully
- [x] Existing WebSocket tests pass
- [x] Manual testing confirms CPU usage returns to normal on idle
WebSocket connections
## Related Issues
Fixes issue reported by users between bun-v1.2.23 and bun-v1.3.0
regarding 100% CPU usage on idle WebSocket servers.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
Restore call to us_get_default_ca_certificates, and
X509_STORE_set_default_paths
Fixes https://github.com/oven-sh/bun/issues/23735
### How did you verify your code works?
Manually test running:
```bash
bun -e "await fetch('https://secure-api.eloview.com').then(res => res.t
ext()).then(console.log);"
```
should not result in:
```js
error: unable to get local issuer certificate
path: "https://secure-api.eloview.com/",
errno: 0,
code: "UNABLE_TO_GET_ISSUER_CERT_LOCALLY"
```
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Bug Fixes**
* Enhanced system root certificate handling to ensure consistent
validation across all secure connections.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
### What does this PR do?
Adds `"configVersion"` to bun.lock(b). The version will be used to keep
default settings the same if they would be breaking across bun versions.
fixes ENG-21389
fixes ENG-21388
### How did you verify your code works?
TODO:
- [ ] new project
- [ ] existing project without configVersion
- [ ] existing project with configVersion
- [ ] same as above but with bun.lockb
- [ ] configVersion@0 defaults to hoisted linker
- [ ] new projects use isolated linker
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
This PR introduces a new postinstall optimization system that
significantly reduces the need to run lifecycle scripts for certain
packages by intelligently handling their requirements at install time.
## Key Features
### 1. Native Binlink Optimization
When packages like `esbuild` ship platform-specific binaries as optional
dependencies, we now:
- Detect the native binlink pattern (enabled by default for `esbuild`)
- Find the matching platform-specific dependency based on target CPU/OS
- Link binaries directly from the platform-specific package (e.g.,
`@esbuild/darwin-arm64`)
- Fall back gracefully if the platform-specific package isn't found
**Result**: No postinstall scripts needed for esbuild and similar
packages.
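As a rough illustration of the matching step (the real logic lives in Zig and
compares the optional dependency's declared `os`/`cpu` fields), esbuild's
platform packages follow a `@esbuild/<platform>-<arch>` naming scheme that lines
up with Node's `process` values on common targets:
```ts
// e.g. "@esbuild/darwin-arm64" on Apple Silicon, "@esbuild/linux-x64" on x64 Linux.
const platformPackage = `@esbuild/${process.platform}-${process.arch}`;
console.log(platformPackage);
```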
### 2. Lifecycle Script Skipping
For packages like `sharp` that run heavy postinstall scripts:
- Skip lifecycle scripts entirely (enabled by default for `sharp`)
- Prevents downloading large binaries or compiling native code
unnecessarily
- Reduces install time and potential failures in restricted environments
## Configuration
Both features can be configured via `package.json`:
```json
{
"nativeDependencies": ["esbuild", "my-custom-package"],
"ignoreScripts": ["sharp", "another-package"]
}
```
Set to empty arrays to disable defaults:
```json
{
"nativeDependencies": [],
"ignoreScripts": []
}
```
Environment variable overrides:
- `BUN_FEATURE_FLAG_DISABLE_NATIVE_DEPENDENCY_LINKER=1` - disable native
binlink
- `BUN_FEATURE_FLAG_DISABLE_IGNORE_SCRIPTS=1` - disable script ignoring
## Implementation Details
### Core Components
- **`postinstall_optimizer.zig`**: New file containing the optimizer
logic
- `PostinstallOptimizer` enum with `native_binlink` and `ignore`
variants
- `List` type to track optimization strategies per package hash
- Defaults for `esbuild` (native binlink) and `sharp` (ignore)
- **`Bin.Linker` changes**: Extended to support separate target paths
- `target_node_modules_path`: Where to find the actual binary
- `target_package_name`: Name of the package containing the binary
- Fallback logic when native binlink optimization fails
### Modified Components
- **PackageInstaller.zig**: Checks optimizer before:
- Enqueueing lifecycle scripts
- Linking binaries (with platform-specific package resolution)
- **isolated_install/Installer.zig**: Similar checks for isolated linker
mode
- `maybeReplaceNodeModulesPath()` resolves platform-specific packages
- Retry logic without optimization on failure
- **Lockfile**: Added `postinstall_optimizer` field to persist
configuration
## Changes Included
- Updated `esbuild` from 0.21.5 to 0.25.11 (testing with latest)
- VS Code launch config updates for debugging install with new flags
- New feature flags in `env_var.zig`
## Test Plan
- [x] Existing install tests pass
- [ ] Test esbuild install without postinstall scripts running
- [ ] Test sharp install with scripts skipped
- [ ] Test custom package.json configuration
- [ ] Test fallback when platform-specific package not found
- [ ] Test feature flag overrides
🤖 Generated with [Claude Code](https://claude.com/claude-code)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Native binlink optimization: installs platform-specific binaries when
available, with a safe retry fallback and verbose logging option.
* Per-package postinstall controls to optionally skip lifecycle scripts.
* New feature flags to disable native binlink optimization and to
disable lifecycle-script ignoring.
* **Tests**
* End-to-end tests and test packages added to validate native binlink
behavior across install scenarios and linker modes.
* **Documentation**
* Bench README and sample app migrated to a Next.js-based setup.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
Fixes Next.js 16 + React Compiler build failure when using Bun runtime.
## Issue
When `Module._resolveFilename` was overridden (e.g., by Next.js's
require-hook), Bun was not passing the `options` parameter (which
contains `paths`) to the override function. This caused resolution
failures when the override tried to use custom resolution paths.
Additionally, when `Module._resolveFilename` was called directly with
`options.paths`, Bun was ignoring the paths parameter and using default
resolution.
## Root Causes
1. In `ImportMetaObject.cpp`, when calling an overridden
`_resolveFilename` function, the options object with paths was not being
passed as the 4th argument.
2. In `NodeModuleModule.cpp`, `jsFunctionResolveFileName` was calling
`Bun__resolveSync` without extracting and using the `options.paths`
parameter.
## Solution
1. In `ImportMetaObject.cpp`: When `userPathList` is provided, construct
an options object with `{paths: userPathList}` and pass it as the 4th
argument to the overridden `_resolveFilename` function.
2. In `NodeModuleModule.cpp`: Extract `options.paths` from the 4th
argument and call `Bun__resolveSyncWithPaths` when paths are provided,
instead of always using `Bun__resolveSync`.
## Reproduction
Before this fix, running:
```bash
bun --bun next build --turbopack
```
on a Next.js 16 app with React Compiler enabled would fail with:
```
Cannot find module './node_modules/babel-plugin-react-compiler'
```
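For reference, a small sketch (not taken from the PR) of the direct-call case
this fixes; the specifier and path below are placeholders, so resolution may
simply fail and log the error:
```ts
import Module from "node:module";

// Resolve a specifier as if node_modules lived at a specific location by
// passing options.paths, the parameter this fix now forwards to the resolver.
try {
  const resolved = (Module as any)._resolveFilename("some-package", null, false, {
    paths: ["/path/to/project/node_modules"],
  });
  console.log("resolved:", resolved);
} catch (err) {
  console.log("not resolvable from the given paths:", (err as Error).message);
}
```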
## Testing
- Added comprehensive tests for `Module._resolveFilename` with
`options.paths`
- Verified Next.js 16 + React Compiler + Turbopack builds successfully
with Bun
- All 5 new tests pass with the fix, 3 fail without it
- All existing tests continue to pass
## Files Changed
- `src/bun.js/bindings/ImportMetaObject.cpp` - Pass options to override
- `src/bun.js/modules/NodeModuleModule.cpp` - Handle options.paths in
_resolveFilename
- `test/js/node/module/module-resolve-filename-paths.test.js` - New test
suite
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Extract `FetchTasklet` struct from `src/bun.js/webcore/fetch.zig` into
its own file at `src/bun.js/webcore/fetch/FetchTasklet.zig` to improve
code organization and modularity.
## Changes
- Moved `FetchTasklet` struct definition (1336 lines) to new file
`src/bun.js/webcore/fetch/FetchTasklet.zig`
- Added all necessary imports to the new file
- Updated `fetch.zig` line 61 to import `FetchTasklet` from the new
location: `pub const FetchTasklet =
@import("./fetch/FetchTasklet.zig").FetchTasklet;`
- Verified compilation succeeds with `bun bd`
## Impact
- No functional changes - this is a pure refactoring
- Improves code organization by separating the large `FetchTasklet`
implementation
- Makes the codebase more maintainable and easier to navigate
- Reduces `fetch.zig` from 2768 lines to 1433 lines
## Test plan
- [x] Built successfully with `bun bd`
- [x] No changes to functionality - pure code organization refactor
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
fixes #23901
### How did you verify your code works?
with a test
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Fixes a bug where `bun update --interactive` only updated `package.json`
but didn't actually install the updated packages. Users had to manually
run `bun install` afterwards.
## Root Cause
The bug was in `savePackageJson()` in
`src/cli/update_interactive_command.zig`:
1. The function wrote the updated `package.json` to disk
2. But it **didn't update the in-memory cache**
(`WorkspacePackageJSONCache`)
3. When `installWithManager()` ran, it called `getWithPath()` which
returned the **stale cached version**
4. So the installation proceeded with the old dependencies
## The Fix
Update the cache entry after writing to disk (line 116):
```zig
package_json.*.source.contents = new_package_json_source;
```
This matches the behavior in `updatePackageJSONAndInstall.zig` line 269.
## Test Plan
Added comprehensive regression tests in
`test/cli/update_interactive_install.test.ts`:
- ✅ Verifies that `package.json` is updated
- ✅ Verifies that `node_modules` is updated (this was failing before the
fix)
- ✅ Tests both normal update and `--latest` flag
- ✅ Compares installed version to confirm packages were actually
installed
Run tests with:
```bash
bun bd test test/cli/update_interactive_install.test.ts
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
Adds a `subscriptions` getter to `ServerWebSocket` that returns an array
of all topics the WebSocket is currently subscribed to.
## Implementation
- Added `getTopicsCount()` and `iterateTopics()` helpers to uWS
WebSocket
- Implemented C++ function `uws_ws_get_topics_as_js_array` that:
- Uses `JSC::MarkedArgumentBuffer` to protect values from GC
- Constructs JSArray directly in C++ for efficiency
- Uses template pattern for SSL/TCP variants
- Properly handles iterator locks with explicit scopes
- Exposed as `subscriptions` getter property on ServerWebSocket
- Returns empty array when WebSocket is closed (not null)
## API
```typescript
const server = Bun.serve({
websocket: {
open(ws) {
ws.subscribe("chat");
ws.subscribe("notifications");
console.log(ws.subscriptions); // ["chat", "notifications"]
ws.unsubscribe("chat");
console.log(ws.subscriptions); // ["notifications"]
}
}
});
```
## Test Coverage
Added 5 comprehensive test cases covering:
- Basic subscription/unsubscription flow
- All subscriptions removed
- Behavior after WebSocket close
- Duplicate subscriptions (should only appear once)
- Multiple subscribe/unsubscribe cycles
All tests pass with 24 assertions.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
Allows optional peers to resolve to package if possible.
Optional peers aren't auto-installed, but they should still be given a
chance to resolve. If they're always left unresolved it's possible for
multiple dependencies on the same package to result in different peer
resolutions when they should be the same. For example, this bug could
cause monorepos using elysia to have corrupt node_modules because
there might be more than one copy of elysia in `node_modules/.bun` (or
more than the expected number of copies).
fixes #23725
most likely fixes #23895
fixes ENG-21411
### How did you verify your code works?
Added a test for optional peers and non-optional peers that would
previously trigger this bug.
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Improved resolution of optional peer dependencies during isolated
installations, with better propagation across package hierarchies.
* **Tests**
* Added comprehensive test suite covering optional peer dependency
scenarios in isolated workspaces.
* Added test fixtures for packages with peer and optional peer
dependencies.
* Enhanced lockfile migration test verification using snapshot-based
assertions.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
When `napi_create_external_buffer` receives empty input, the returned
buffer should be detached.
This fixes the remaining tests in `ref-napi` other than three that use a
few uv symbols
<img width="329" height="159" alt="Screenshot 2025-11-01 at 8 38 01 PM"
src="https://github.com/user-attachments/assets/2c75f937-79c5-467a-bde3-44e45e05d9a0"
/>
### How did you verify your code works?
Added tests for correct values from `napi_get_buffer_info`,
`napi_get_arraybuffer_info`, and `napi_is_detached_arraybuffer` when
given an empty buffer from `napi_create_external_buffer`
---------
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
Adds comprehensive documentation explaining how Bun's event loop works,
including task draining, microtasks, process.nextTick, and I/O polling
integration.
## What does this document?
- **Task draining algorithm**: Shows the exact flow for processing each
task (run → release weak refs → drain microtasks → deferred tasks)
- **Process.nextTick ordering**: Explains batching behavior - all
nextTick callbacks in current batch run, then microtasks drain
- **Microtask integration**: How JavaScriptCore's microtask queue and
Bun's nextTick queue interact
- **I/O polling**: How uSockets epoll/kqueue events integrate with the
event loop
- **Timer ordering**: Why setImmediate runs before setTimeout
- **Enter/Exit mechanism**: How the counter prevents excessive microtask
draining
## Visual aids
Includes ASCII flowcharts showing:
- Main tick flow
- autoTick flow (with I/O polling)
- Per-task draining sequence
## Code references
All explanations include specific file paths and line numbers for
verification:
- `src/bun.js/event_loop/Task.zig`
- `src/bun.js/event_loop.zig`
- `src/bun.js/bindings/ZigGlobalObject.cpp`
- `src/js/builtins/ProcessObjectInternals.ts`
- `packages/bun-usockets/src/eventing/epoll_kqueue.c`
## Examples
Includes JavaScript examples demonstrating:
- nextTick vs Promise ordering (see the snippet after this list)
- Batching behavior when nextTick callbacks schedule more nextTicks
- setImmediate vs setTimeout ordering
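For instance, the nextTick-versus-microtask ordering can be observed with a few
lines (illustrative, not copied from the new docs):
```ts
// All nextTick callbacks queued in the current batch run before the
// Promise (microtask) queue drains.
process.nextTick(() => console.log("nextTick 1"));
process.nextTick(() => console.log("nextTick 2"));
Promise.resolve().then(() => console.log("promise"));
console.log("sync");
// Expected order: sync, nextTick 1, nextTick 2, promise
```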
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
### How did you verify your code works?
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Bug Fixes**
* Improved HTTP connection handling during write failures to ensure more
reliable timeout behavior and connection state management.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
### What does this PR do?
If the input was empty `ArrayBuffer::createFromBytes` would create a
buffer that `JSUint8Array::create` would see as detached, so it would
throw an exception. This would likely cause crashes in `node-addon-api`
because finalize data is freed if `napi_create_external_buffer` fails,
and we already setup the finalizer.
Example of creating an empty buffer:
a7f62a4caa/src/binding.cc (L687)
Fixes #6737, fixes #10965, fixes #12331, fixes #12937, fixes #13622
most likely fixes #14822
### How did you verify your code works?
Manually and added tests.
## Summary
Removes the `MemoryReportingAllocator` wrapper and simplifies
`fetch.zig` to use `bun.default_allocator` directly. The
`MemoryReportingAllocator` was wrapping `bun.default_allocator` to track
memory usage for reporting to the VM, but this added unnecessary
complexity and indirection without meaningful benefit.
## Changes
**Deleted:**
- `src/allocators/MemoryReportingAllocator.zig` (96 lines)
**Modified `src/bun.js/webcore/fetch.zig`:**
- Removed `memory_reporter: *bun.MemoryReportingAllocator` field from
`FetchTasklet` struct
- Removed `memory_reporter: *bun.MemoryReportingAllocator` field from
`FetchOptions` struct
- Replaced all `this.memory_reporter.allocator()` calls with
`bun.default_allocator`
- Removed all `this.memory_reporter.discard()` calls (no longer needed)
- Simplified `fetch()` function by removing memory reporter
allocation/wrapping/cleanup code
- Updated `deinit()` and `clearData()` to use `bun.default_allocator`
directly
**Cleanup:**
- Removed `MemoryReportingAllocator` export from `src/allocators.zig`
- Removed `MemoryReportingAllocator` export from `src/bun.zig`
- Removed `bun.MemoryReportingAllocator.isInstance()` check from
`src/safety/alloc.zig`
## Testing
- ✅ Builds successfully with `bun bd`
- All fetch operations now use `bun.default_allocator` directly
## Impact
- **Net -116 lines** of code
- Eliminates allocator wrapper overhead in fetch operations
- Simplifies memory management code
- No functional changes to fetch behavior
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
calling `loadConfigPath` could allow loading bunfig more than once.
### How did you verify your code works?
manually
---------
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
This PR makes `bun list` an alias for `bun pm ls`, allowing users to
list their dependency tree with a shorter command.
## Changes
- Updated `src/cli.zig` to route `list` command to
`PackageManagerCommand` instead of `ReservedCommand`
- Modified `src/cli/package_manager_command.zig` to detect when `bun
list` is invoked directly and treat it as `ls`
- Updated help text in `bun pm --help` to show both `bun list` and `bun
pm ls` as valid options
## Implementation Details
The implementation follows the same pattern used for `bun whoami`, which
is also a direct alias to a pm subcommand. When `bun list` is detected,
it's internally converted to the `ls` subcommand.
## Testing
Tested locally:
- ✅ `bun list` shows the dependency tree
- ✅ `bun list --all` works correctly with the `--all` flag
- ✅ `bun pm ls` continues to work (backward compatible)
## Test Output
```bash
$ bun list
/tmp/test-bun-list node_modules (3)
└── react@18.3.1
$ bun list --all
/tmp/test-bun-list node_modules
├── js-tokens@4.0.0
├── loose-envify@1.4.0
└── react@18.3.1
$ bun pm ls
/tmp/test-bun-list node_modules (3)
└── react@18.3.1
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Fixes CPU profiler generating invalid timestamps that Chrome DevTools
couldn't parse (though VSCode's profiler viewer accepted them).
## The Problem
CPU profiles generated by `--cpu-prof` had timestamps that were either:
1. Negative (in the original broken profile from the gist)
2. Truncated/corrupted (after initial timestamp calculation fix)
Example from the broken profile:
```json
{
"startTime": -822663297,
"endTime": -804820609
}
```
After initial fix, timestamps were positive but still wrong:
```json
{
"startTime": 1573519100, // Should be ~1761784720948727
"endTime": 1573849434
}
```
## Root Cause
**Primary Issue**: `WTF::JSON::Object::setInteger()` has precision
issues with large values (> 2^31). When setting timestamps like
`1761784720948727` (microseconds since Unix epoch - 16 digits), the
method was truncating/corrupting them.
**Secondary Issue**: The timestamp calculation logic needed
clarification - now explicitly uses the earliest sample's wall clock
time as startTime and calculates a consistent wallClockOffset.
## The Fix
### src/bun.js/bindings/BunCPUProfiler.cpp
Changed from `setInteger()` to `setDouble()` for timestamp
serialization:
```cpp
// Before (broken):
json->setInteger("startTime"_s, static_cast<long long>(startTime));
json->setInteger("endTime"_s, static_cast<long long>(endTime));
// After (fixed):
json->setDouble("startTime"_s, startTime);
json->setDouble("endTime"_s, endTime);
```
JSON `Number` type can precisely represent integers up to 2^53 (~9
quadrillion), which is far more than needed for microsecond timestamps
(~10^15 for current dates).
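A quick way to confirm the precision claim (illustrative):
```ts
// Microsecond Unix timestamps (~1.76e15 today) sit well below
// Number.MAX_SAFE_INTEGER (2^53 - 1 ≈ 9.01e15), so a JSON Number holds them exactly.
const nowMicros = Date.now() * 1000;
console.log(Number.isSafeInteger(nowMicros)); // true
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
```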
Also clarified the timestamp calculation to use `wallClockStart`
directly as the profile's `startTime` and calculate a `wallClockOffset`
for converting stopwatch times to wall clock times.
### test/cli/run/cpu-prof.test.ts
Added validation that timestamps are:
- Positive
- In microseconds (> 1000000000000000, < 3000000000000000)
- Within valid Unix epoch range
## Testing
```bash
bun bd test test/cli/run/cpu-prof.test.ts
```
All tests pass ✅
Generated profile now has correct timestamps:
```json
{
"startTime": 1761784720948727.2,
"endTime": 1761784721305814
}
```
## Why VSCode Worked But Chrome DevTools Didn't
- **VSCode**: Only cares about relative timing (duration = endTime -
startTime), doesn't validate absolute timestamp ranges
- **Chrome DevTools**: Expects timestamps in microseconds since Unix
epoch (positive, ~16 digits), fails validation when timestamps are
negative, too small, or out of valid range
## References
- Gist with CPU profile format documentation:
https://gist.github.com/Jarred-Sumner/2c12da481845e20ce6a6175ee8b05a3e
- Chrome DevTools Protocol - Profiler:
https://chromedevtools.github.io/devtools-protocol/tot/Profiler/
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Implements the `--cpu-prof` CLI flag for Bun to profile CPU usage and
save results in Chrome CPU Profiler JSON format, compatible with Chrome
DevTools and VSCode.
## Implementation Details
- Uses JSC's `SamplingProfiler` to collect CPU samples during execution
- Converts samples to Chrome CPU Profiler JSON format on exit
- Supports `--cpu-prof-name` to customize output filename
- Supports `--cpu-prof-dir` to specify output directory
- Default filename: `CPU.YYYYMMDD.HHMMSS.PID.0.001.cpuprofile`
## Key Features
✅ **Chrome DevTools Compatible** - 100% compatible with Node.js CPU
profile format
✅ **Absolute Timestamps** - Uses wall clock time (microseconds since
epoch)
✅ **1ms Sampling** - Matches Node.js sampling frequency for comparable
granularity
✅ **Thread-Safe** - Properly shuts down background sampling thread
before processing
✅ **Memory-Safe** - Uses HeapIterationScope and DeferGC for safe heap
access
✅ **Cross-Platform** - Compiles on Windows, macOS, and Linux with proper
path handling
## Technical Challenges Solved
1. **Heap Corruption** - Fixed by calling `profiler->shutdown()` before
processing traces
2. **Memory Safety** - Added `HeapIterationScope` and `DeferGC` when
accessing JSCells
3. **Timestamp Accuracy** - Explicitly start stopwatch and convert to
absolute wall clock time
4. **Path Handling** - Used `bun.path.joinAbsStringBufZ` with proper cwd
resolution
5. **Windows Support** - UTF-16 path conversion for Windows
compatibility
6. **Atomic Writes** - Used `bun.sys.File.writeFile` with ENOENT retry
## Testing
All tests pass (4/4):
- ✅ Generates profile with default name
- ✅ `--cpu-prof-name` sets custom filename
- ✅ `--cpu-prof-dir` sets custom directory
- ✅ Profile captures function names
Verified format compatibility:
- JSON structure matches Node.js exactly
- All samples reference valid nodes
- Timestamps use absolute microseconds since epoch
- Cross-platform compilation verified with `bun run zig:check-all`
## Example Usage
```bash
# Basic usage
bun --cpu-prof script.js
# Custom filename
bun --cpu-prof --cpu-prof-name my-profile.cpuprofile script.js
# Custom directory
bun --cpu-prof --cpu-prof-dir ./profiles script.js
```
Output can be opened in Chrome DevTools (Performance → Load Profile) or
VSCode's CPU profiling viewer.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
This PR improves snapshot error messages when running tests in CI
environments to make debugging easier by showing exactly what snapshot
was being created and what value was attempted.
## Changes
### 1. Inline Snapshot Errors
**Before:**
```
Updating inline snapshots is disabled in CI environments unless --update-snapshots is used.
```
**After:**
```
Inline snapshot creation is not allowed in CI environments unless --update-snapshots is used.
If this is not a CI environment, set the environment variable CI=false to force allow.
Received: this is new
```
- Changed message to say "creation" instead of "updating" (more
accurate)
- Shows the received value that was attempted using Jest's pretty
printer
### 2. Snapshot File Errors
**Before:**
```
Snapshot creation is not allowed in CI environments unless --update-snapshots is used
If this is not a CI environment, set the environment variable CI=false to force allow.
Received: this is new
```
**After:**
```
Snapshot creation is not allowed in CI environments unless --update-snapshots is used
If this is not a CI environment, set the environment variable CI=false to force allow.
Snapshot name: "new snapshot 1"
Received: this is new
```
- Now shows the snapshot name that was being looked for
- Shows the received value using Jest's pretty printer
## Implementation Details
- Added `last_error_snapshot_name` field to `Snapshots` struct to pass
snapshot name from `getOrPut()` to error handler
- Removed unreachable code path for inline snapshot updates (mismatches
error earlier with diff)
- Updated test expectations in `ci-restrictions.test.ts`
## Test Plan
```bash
# Test inline snapshot creation in CI
cd /tmp/snapshot-test
echo 'import { test, expect } from "bun:test";
test("new inline snapshot", () => {
expect("this is new").toMatchInlineSnapshot();
});' > test.js
GITHUB_ACTIONS=1 bun test test.js
# Test snapshot file creation in CI
echo 'import { test, expect } from "bun:test";
test("new snapshot", () => {
expect("this is new").toMatchSnapshot();
});' > test2.js
GITHUB_ACTIONS=1 bun test test2.js
```
Both should show improved error messages with the received values and
snapshot name.
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: pfg <pfg@pfg.pw>
## Summary
Fixed two bugs in the auto-close-duplicates bot:
- **Respect 👎 reactions from ANY user**: Previously only the issue
author's thumbs down would prevent auto-closing. Now any user can
indicate disagreement with the duplicate detection.
- **Don't re-close reopened issues**: The bot now checks if an issue was
previously reopened and skips auto-closing to respect user intent.
## Changes
1. Modified `fetchAllReactions` call to check all reactions, not just
the author's
2. Changed `authorThumbsDown` logic to `hasThumbsDown` (checks any
user's reaction)
3. Added a `wasIssueReopened()` function to query the issue events timeline (sketched after this list)
4. Added check to skip issues with "reopened" events in their history
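The reopened check could look roughly like this (a sketch against the GitHub
REST issue-events endpoint, not the bot's actual code; `owner`, `repo`, and
`token` are placeholders):
```ts
// Returns true if the issue has ever been reopened, based on its events timeline.
async function wasIssueReopened(owner: string, repo: string, issueNumber: number, token: string) {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/issues/${issueNumber}/events?per_page=100`,
    { headers: { Authorization: `Bearer ${token}`, Accept: "application/vnd.github+json" } },
  );
  const events = (await res.json()) as Array<{ event: string }>;
  return events.some(e => e.event === "reopened");
}
```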
## Test plan
- [ ] Manually test the script doesn't close issues with 👎 reactions
from non-authors
- [ ] Verify reopened issues are not auto-closed again
- [ ] Check that legitimate duplicates without objections still get
closed properly
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
description: Implements JavaScript classes in C++ using JavaScriptCore. Use when creating new JS classes with C++ bindings, prototypes, or constructors.
---
# Implementing JavaScript Classes in C++
## Class Structure
For publicly accessible Constructor and Prototype, create 3 classes:
1. **`class Foo : public JSC::DestructibleObject`** - if C++ fields exist; otherwise use `JSC::constructEmptyObject` with `putDirectOffset`
2. **`class FooPrototype : public JSC::JSNonFinalObject`**
3. **`class FooConstructor : public JSC::InternalFunction`**
No public constructor? Only Prototype and class needed.
description: Creates JavaScript classes using Bun's Zig bindings generator (.classes.ts). Use when implementing new JS APIs in Zig with JSC integration.
---
# Bun's JavaScriptCore Class Bindings Generator
Bridge JavaScript and Zig through `.classes.ts` definitions and Zig implementations.
description: Guides writing bundler tests using itBundled/expectBundled in test/bundler/. Use when creating or modifying bundler, transpiler, or code transformation tests.
---
# Writing Bundler Tests
Bundler tests use `itBundled()` from `test/bundler/expectBundled.ts` to test Bun's bundler.
## Basic Usage
```typescript
import { describe } from "bun:test";
import { itBundled, dedent } from "./expectBundled";

describe("bundler", () => {
  itBundled("category/TestName", {
    files: {
      "index.js": `console.log("hello");`,
    },
    run: {
      stdout: "hello",
    },
  });
});
```
Test ID format: `category/TestName` (e.g., `banner/CommentBanner`, `minify/Empty`)
## File Setup
```typescript
{
  files: {
    "index.js": `console.log("test");`,
    "lib.ts": `export const foo = 123;`,
    "nested/file.js": `export default {};`,
  },
  entryPoints: ["index.js"], // defaults to first file
}
```
Dev server tests validate that hot-reloading is robust, correct, and reliable. Remember to write thorough, yet concise tests.
## File Structure
- `test/bake/bake-harness.ts` - shared utilities and test harness
- primary test functions `devTest` / `prodTest` / `devAndProductionTest`
- class `Dev` (controls subprocess for dev server)
- class `Client` (controls a happy-dom subprocess for having the page open)
- more helpers
- `test/bake/client-fixture.mjs` - subprocess for what `Client` controls. it loads a page and uses IPC to query parts of the page, run javascript, and much more.
- `test/bake/dev/*.test.ts` - these call `devTest` to test dev server and hot reloading
- `test/bake/dev-and-prod.ts` - these use `devAndProductionTest` to run the same test on dev and production mode. these tests cannot really test hot reloading for obvious reasons.
## Categories
bundle.test.ts - Bundle tests are tests concerning bundling bugs that only occur in DevServer.
css.test.ts - CSS tests concern bundling bugs with CSS files
plugins.test.ts - Plugin tests concern plugins in development mode.
ecosystem.test.ts - These tests involve ensuring certain libraries are correct. It is preferred to test more concrete bugs than testing entire packages.
esm.test.ts - ESM tests are about various esm features in development mode.
html.test.ts - HTML tests are tests relating to HTML files themselves.
react-spa.test.ts - Tests relating to React, our react-refresh transform, and basic server component transforms.
sourcemap.test.ts - Tests verifying source-maps are correct.
## `devTest` Basics
A test takes in two primary inputs: `files` and `async test(dev) {`
```ts
import { devTest, emptyHtmlFile } from "../bake-harness";
```
`files` holds the initial state, and the callback runs with the server running. `dev.fetch()` runs HTTP requests, while `dev.client()` opens a browser instance to the code.
Functions `dev.write` and `dev.patch` and `dev.delete` mutate the filesystem. Do not use `node:fs` APIs, as the dev server ones are hooked to wait for hot-reload, and all connected clients to receive changes.
When a change performs a hard-reload, that must be explicitly annotated with `expectReload`. This tells `client-fixture.mjs` that the test is meant to reload the page once; All other hard reloads automatically fail the test.
Clients have `console.log` instrumented, so that any unasserted logs fail the test. This makes it more obvious when an extra reload or re-evaluation happens. Messages are awaited via `c.expectMessage("log")`, or with multiple arguments if there are multiple logs.
## Testing for bundling errors
By default, a client opening a page to an error will fail the test. This makes testing errors explicit.
```ts
devTest("import then create", {
  files: {
    "index.html": `
      <!DOCTYPE html>
      <html>
        <head></head>
        <body>
          <script type="module" src="/script.ts"></script>
        </body>
      </html>
    `,
    "script.ts": `
      import data from "./data";
      console.log(data);
    `,
  },
  async test(dev) {
    const c = await dev.client("/", {
      errors: ['script.ts:1:18: error: Could not resolve: "./data"'],
    });
  },
});
```
If there is a publicly accessible Constructor and Prototype, then there are 3 classes:
- If there are C++ class members we need a destructor, so use `class Foo : public JSC::DestructibleObject`; if there are no C++ class fields (only JS properties) then we usually don't need a class at all. We can instead use `JSC::constructEmptyObject(vm, structure)` and `putDirectOffset` like in [NodeFSStatBinding.cpp](mdc:src/bun.js/bindings/NodeFSStatBinding.cpp).
- class FooPrototype : public JSC::JSNonFinalObject
- class FooConstructor : public JSC::InternalFunction
If there is no publicly accessible Constructor, just the Prototype and the class is necessary. In some cases, we can avoid the prototype entirely (but that's rare).
If there are C++ fields on the Foo class, the Foo class will need an iso subspace added to [DOMClientIsoSubspaces.h](mdc:src/bun.js/bindings/webcore/DOMClientIsoSubspaces.h) and [DOMIsoSubspaces.h](mdc:src/bun.js/bindings/webcore/DOMIsoSubspaces.h). Prototype and Constructor do not need subspaces.
Usually you'll need to #include "root.h" at the top of C++ files or you'll get lint errors.
1. Add the `JSC::LazyClassStructure` to [ZigGlobalObject.h](mdc:src/bun.js/bindings/ZigGlobalObject.h)
2. Initialize the class structure in [ZigGlobalObject.cpp](mdc:src/bun.js/bindings/ZigGlobalObject.cpp) in `void GlobalObject::finishCreation(VM& vm)`
3. Visit the class structure in visitChildren in [ZigGlobalObject.cpp](mdc:src/bun.js/bindings/ZigGlobalObject.cpp) in `void GlobalObject::visitChildrenImpl`
```cpp
// ZigGlobalObject.cpp
void GlobalObject::finishCreation(VM& vm) {
// ...
m_JSStatsBigIntClassStructure.initLater(
[](LazyClassStructure::Initializer& init) {
// Call the function to initialize our class structure.
Bun::initJSBigIntStatsClassStructure(init);
});
```
Then, implement the function that creates the structure:
Bun::throwError(globalObject, scope, ErrorCode::ERR_MISSING_ARGS, "X509Certificate constructor requires at least one argument"_s);
return {};
}
JSValue arg = callFrame->uncheckedArgument(0);
if (!arg.isCell()) {
Bun::throwError(globalObject, scope, ErrorCode::ERR_INVALID_ARG_TYPE, "X509Certificate constructor argument must be a Buffer, TypedArray, or string"_s);
# Registering Functions, Objects, and Modules in Bun
This guide documents the process of adding new functionality to the Bun global object and runtime.
## Overview
Bun's architecture exposes functionality to JavaScript through a set of carefully registered functions, objects, and modules. Most core functionality is implemented in Zig, with JavaScript bindings that make these features accessible to users.
There are several key ways to expose functionality in Bun:
1. **Global Functions**: Direct methods on the `Bun` object (e.g., `Bun.serve()`)
2. **Getter Properties**: Lazily initialized properties on the `Bun` object (e.g., `Bun.sqlite`)
3. **Constructor Classes**: Classes available through the `Bun` object (e.g., `Bun.ValkeyClient`)
4. **Global Modules**: Modules that can be imported directly (e.g., `import {X} from "bun:*"`)
## The Registration Process
Adding new functionality to Bun involves several coordinated steps across multiple files:
### 1. Implement the Core Functionality in Zig
First, implement your feature in Zig, typically in its own directory in `src/`. Examples:
- `src/valkey/` for Redis/Valkey client
- `src/semver/` for SemVer functionality
- `src/smtp/` for SMTP client
### 2. Create JavaScript Bindings
Create bindings that expose your Zig functionality to JavaScript:
- Create a class definition file (e.g., `js_bindings.classes.ts`) to define the JavaScript interface
- Implement `JSYourFeature` struct in a file like `js_your_feature.zig`
Example from a class definition file:
```typescript
// Example from a .classes.ts file
import { define } from "../../codegen/class-definitions";
export default [
define({
name: "YourFeature",
construct: true,
finalize: true,
hasPendingActivity: true,
memoryCost: true,
klass: {},
JSType: "0b11101110",
proto: {
yourMethod: {
fn: "yourZigMethod",
length: 1,
},
property: {
getter: "getProperty",
},
},
values: ["cachedValues"],
}),
];
```
### 3. Register with BunObject in `src/bun.js/bindings/BunObject+exports.h`
Add an entry to the `FOR_EACH_GETTER` macro:
```c
// In BunObject+exports.h
#define FOR_EACH_GETTER(macro) \
macro(CSRF) \
macro(CryptoHasher) \
... \
macro(YourFeature) \
```
### 4. Create a Getter Function in `src/bun.js/api/BunObject.zig`
Implement a getter function in `BunObject.zig` that returns your feature:
You'll find all of Bun's tests in the `test/` directory.
* `test/`
* `cli/` - CLI command tests, like `bun install` or `bun init`
* `js/` - JavaScript & TypeScript tests
* `bun/` - `Bun` APIs tests, separated by category, for example: `glob/` for `Bun.Glob` tests
* `node/` - Node.js module tests, separated by module, for example: `assert/` for `node:assert` tests
* `test/` - Vendored Node.js tests, taken from the Node.js repository (does not conform to Bun's test style)
* `web/` - Web API tests, separated by category, for example: `fetch/` for `Request` and `Response` tests
* `third_party/` - npm package tests, to validate that basic usage works in Bun
* `napi/` - N-API tests
* `v8/` - V8 C++ API tests
* `bundler/` - Bundler, transpiler, CSS, and `bun build` tests
* `regression/issue/[number]` - Regression tests, always make one when fixing a particular issue
## How tests are written
Bun's tests are written as JavaScript and TypeScript files with the Jest-style APIs, like `test`, `describe`, and `expect`. They are tested using Bun's own test runner, `bun test`.
```js
import { describe, test, expect } from "bun:test";
import assert, { AssertionError } from "assert";

describe("assert(expr)", () => {
  test.each([true, 1, "foo"])(`assert(%p) does not throw`, expr => {
    // ...
  });
});
```
* If you are fixing a bug, write the test first and make sure it fails (as expected) with the canary version of Bun
* If you are fixing a Node.js compatibility bug, create a throw-away snippet of code and test that it works as you expect in Node.js, then that it fails (as expected) with the canary version of Bun
* When the expected behaviour is ambiguous, defer to matching what happens in Node.js
* Always attempt to find related tests in an existing test file before creating a new test file
description: How Zig works with JavaScriptCore bindings generator
globs:
alwaysApply: false
---
# Bun's JavaScriptCore Class Bindings Generator
This document explains how Bun's class bindings generator works to bridge Zig and JavaScript code through JavaScriptCore (JSC).
## Architecture Overview
Bun's binding system creates a seamless bridge between JavaScript and Zig, allowing Zig implementations to be exposed as JavaScript classes. The system has several key components:
- **Child Visitor Methods**: `visitChildrenImpl` and `visitAdditionalChildren`
- **Heap Analysis**: `analyzeHeap` for debugging memory issues
This architecture makes it possible to implement high-performance native functionality in Zig while exposing a clean, idiomatic JavaScript API to users.
- Always use `port: 0`. Do not hardcode ports. Do not use your own random port number function.
- Use `normalizeBunSnapshot` to normalize snapshot output of the test.
- NEVER write tests that check for no "panic" or "uncaught exception" or similar in the test output. That is NOT a valid test.
- NEVER write tests that check for no "panic" or "uncaught exception" or similar in the test output. These tests will never fail in CI.
- Use `tempDir` from `"harness"` to create a temporary directory. **Do not** use `tmpdirSync` or `fs.mkdtempSync` to create temporary directories.
- When spawning processes, tests should expect(stdout).toBe(...) BEFORE expect(exitCode).toBe(0). This gives you a more useful error message on test failure.
- **CRITICAL**: Do not write flaky tests. Do not use `setTimeout` in tests. Instead, `await` the condition to be met. You are not testing the TIME PASSING, you are testing the CONDITION.
ccache is used to cache compilation artifacts, significantly speeding up builds:
```bash
# For macOS
$ brew install ccache
# For Ubuntu/Debian
$ sudo apt install ccache
# For Arch
$ sudo pacman -S ccache
# For Fedora
$ sudo dnf install ccache
# For openSUSE
$ sudo zypper install ccache
```
Our build scripts will automatically detect and use `ccache` if available. You can check cache statistics with `ccache --show-stats`.
## Install LLVM
Bun requires LLVM 19 (`clang` is part of LLVM). This version requirement is to match WebKit (precompiled), as mismatching versions will cause memory allocation failures at runtime. In most cases, you can install LLVM through your system package manager:
Bun generally takes about 2.5 minutes to compile a debug build when there are Zig changes:
- Batch up your changes
- Ensure zls is running with incremental watching for LSP errors (if you use VSCode and install Zig and run `bun run build` once to download Zig, this should just work)
- Prefer using the debugger ("CodeLLDB" in VSCode) to step through the code.
- Use debug logs. `BUN_DEBUG_<scope>=1` will enable debug logging for the corresponding `Output.scoped(.<scope>, .hidden)` logs. You can also set `BUN_DEBUG_QUIET_LOGS=1` to disable all debug logging that isn't explicitly enabled. To dump debug logs into a file, `BUN_DEBUG=<path-to-file>.log`. Debug logs are aggressively removed in release builds.
- src/js/\*\*.ts changes are pretty much instant to rebuild. C++ changes are a bit slower, but still much faster than the Zig code (Zig is one compilation unit, C++ is many).
## Code generation scripts
`bun run build -DUSE_STATIC_LIBATOMIC=OFF`
The built version of Bun may not work on other systems if compiled this way.
### ccache conflicts with building TinyCC on macOS
If you run into issues with `ccache` when building TinyCC, try reinstalling ccache
Requires [`hyperfine`](https://github.com/sharkdp/hyperfine). The goal of this benchmark is to compare installation performance of Bun with other package managers _when caches are hot_.
### With lockfile, online mode
To run the benchmark with the standard "install" command for each package manager:
### With lockfile, offline mode
Even though all packages are cached, some tools may hit the npm API during the version resolution step. (This is not the same as re-downloading a package.) To entirely avoid network calls, the other package managers require `--prefer-offline/--offline` flag. To run the benchmark using "offline" mode:
This is a [T3 Stack](https://create.t3.gg/) project bootstrapped with `create-t3-app`.
## What's next? How do I make an app with this?
We try to keep this project as simple as possible, so you can start with just the scaffolding we set up for you, and add additional things later when they become necessary.
If you are not familiar with the different technologies used in this project, please refer to the respective docs. If you still are in the wind, please join our [Discord](https://t3.gg/discord) and ask for help.
- [Next.js](https://nextjs.org)
- [NextAuth.js](https://next-auth.js.org)
- [Prisma](https://prisma.io)
- [Drizzle](https://orm.drizzle.team)
- [Tailwind CSS](https://tailwindcss.com)
- [tRPC](https://trpc.io)
Then visit [http://localhost:3000](http://localhost:3000).
Follow our deployment guides for [Vercel](https://create.t3.gg/en/deployment/vercel), [Netlify](https://create.t3.gg/en/deployment/netlify) and [Docker](https://create.t3.gg/en/deployment/docker) for more information.
set -l bun_install_boolean_flags yarn production optional development no-save dry-run force no-cache silent verbose global
set -l bun_install_boolean_flags_descriptions "Write a yarn.lock file (yarn v1)" "Don't install devDependencies" "Add dependency to optionalDependencies" "Add dependency to devDependencies" "Don't update package.json or save a lockfile" "Don't install anything" "Always request the latest versions from the registry & reinstall all dependencies" "Ignore manifest cache entirely" "Don't output anything" "Excessively verbose logging" "Use global folder"
set -l bun_builtin_cmds_without_run dev create help bun upgrade discord install remove add update init pm x
set -l bun_builtin_cmds_accepting_flags create help bun upgrade discord run init link unlink pm x update
function __bun_complete_bins_scripts --inherit-variable bun_builtin_cmds_without_run -d "Emit bun completions for bins and scripts"
# Do nothing if we already have a builtin subcommand,
- Also [`ErrorEvent`](https://developer.mozilla.org/en-US/docs/Web/API/ErrorEvent) [`CloseEvent`](https://developer.mozilla.org/en-US/docs/Web/API/CloseEvent) [`MessageEvent`](https://developer.mozilla.org/en-US/docs/Web/API/MessageEvent).
The `import.meta` object is a way for a module to access information about itself. It's part of the JavaScript language, but its contents are not standardized. Each "host" (browser, runtime, etc) is free to implement any properties it wishes on the `import.meta` object.
- `import.meta.dir`
- Absolute path to the directory containing the current file, e.g. `/path/to/project`. Equivalent to `__dirname` in CommonJS modules (and Node.js)
---
- `import.meta.dirname`
- An alias to `import.meta.dir`, for Node.js compatibility
---
- `import.meta.env`
- An alias to `process.env`.
---
- `import.meta.file`
- The name of the current file, e.g. `index.tsx`
---
- `import.meta.path`
- Absolute path to the current file, e.g. `/path/to/project/index.ts`. Equivalent to `__filename` in CommonJS modules (and Node.js)
---
- `import.meta.filename`
- An alias to `import.meta.path`, for Node.js compatibility
---
- `import.meta.main`
- Indicates whether the current file is the entrypoint to the current `bun` process. Is the file being directly executed by `bun run` or is it being imported?
---
- `import.meta.resolve`
- Resolve a module specifier (e.g. `"zod"` or `"./file.tsx"`) to a url. Equivalent to [`import.meta.resolve` in browsers](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/import.meta#resolve)
---
- `import.meta.url`
- A `string` url to the current file, e.g. `file:///path/to/project/index.ts`. Equivalent to [`import.meta.url` in browsers](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/import.meta#url)
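A quick way to see these fields in practice (output depends on where the file
lives and how it is run):
```ts
console.log(import.meta.dir); // e.g. /path/to/project
console.log(import.meta.file); // e.g. index.ts
console.log(import.meta.path); // e.g. /path/to/project/index.ts
console.log(import.meta.url); // e.g. file:///path/to/project/index.ts
console.log(import.meta.main); // true when this file is the entrypoint
console.log(import.meta.resolve("./" + import.meta.file)); // resolves this file to a file:// URL
```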
There are some more examples in the [examples](./examples) folder.
PRs adding more examples are very welcome!
## Fast paths for Web APIs
Bun.js has fast paths for common use cases that make Web APIs live up to the performance demands of servers and CLIs.
`Bun.file(path)` returns a [`Blob`](https://developer.mozilla.org/en-US/docs/Web/API/Blob) that represents a lazily-loaded file.
When you pass a file blob to `Bun.write`, Bun automatically uses a faster system call:
```js
const blob = Bun.file("input.txt");
await Bun.write("output.txt", blob);
```
On Linux, this uses the [`copy_file_range`](https://man7.org/linux/man-pages/man2/copy_file_range.2.html) syscall and on macOS, this becomes `clonefile` (or [`fcopyfile`](https://developer.apple.com/library/archive/documentation/System/Conceptual/ManPages_iPhoneOS/man3/copyfile.3.html)).
`Bun.write` also supports [`Response`](https://developer.mozilla.org/en-US/docs/Web/API/Response) objects. It automatically converts to a [`Blob`](https://developer.mozilla.org/en-US/docs/Web/API/Blob).
```js
// Eventually, this will stream the response to disk but today it buffers
```
description: Speed up JavaScript execution with bytecode caching in Bun's bundler
---
Bytecode caching is a build-time optimization that dramatically improves application startup time by pre-compiling your JavaScript to bytecode. For example, when compiling TypeScript's `tsc` with bytecode enabled, startup time improves by **2x**.
## Usage
### Basic usage
Enable bytecode caching with the `--bytecode` flag:
```bash terminal icon="terminal"
bun build ./index.ts --target=bun --bytecode --outdir=./dist
```
This generates two files:
- `dist/index.js` - Your bundled JavaScript
- `dist/index.jsc` - The bytecode cache file
At runtime, Bun automatically detects and uses the `.jsc` file:
```bash terminal icon="terminal"
bun ./dist/index.js # Automatically uses index.jsc
```
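If you drive builds from a script instead of the CLI, the same switch is available on `Bun.build` (a sketch; it assumes the `bytecode` option, which mirrors the `--bytecode` flag):
```ts
await Bun.build({
  entrypoints: ["./index.ts"],
  outdir: "./dist",
  target: "bun",
  format: "cjs", // bytecode currently requires CommonJS output
  bytecode: true, // also writes dist/index.jsc alongside dist/index.js
});
```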
### With standalone executables
When creating executables with `--compile`, bytecode is embedded into the binary:
```bash terminal icon="terminal"
bun build ./cli.ts --compile --bytecode --outfile=mycli
```
The resulting executable contains both the code and bytecode, giving you maximum performance in a single file.
### Combining with other optimizations
Bytecode works great with minification and source maps:
```bash terminal icon="terminal"
bun build --compile --bytecode --minify --sourcemap ./cli.ts --outfile=mycli
```
- `--minify` reduces code size before generating bytecode (less code -> less bytecode)
- `--sourcemap` preserves error reporting (errors still point to original source)
- `--bytecode` eliminates parsing overhead
## Performance impact
The performance improvement scales with your codebase size:
### Use it for:
#### CLI tools
- Users notice the difference between 90ms and 45ms startup
- Example: TypeScript compiler, Prettier, ESLint
#### Build tools and task runners
- Run hundreds or thousands of times during development
- Milliseconds saved per run compound quickly
- Developer experience improvement
- Example: Build scripts, test runners, code generators
#### Standalone executables
- Distributed to users who care about snappy performance
- Single-file distribution is convenient
- File size less important than startup time
- Example: CLIs distributed via npm or as binaries
### Skip it for:
- ❌ **Small scripts**
- ❌ **Code that runs once**
- ❌ **Development builds**
- ❌ **Size-constrained environments**
- ❌ **Code with top-level await** (not supported)
## Limitations
### CommonJS only
Bytecode caching currently works with CommonJS output format. Bun's bundler automatically converts most ESM code to CommonJS, but **top-level await** is the exception:
```js
// This prevents bytecode caching
const data = await fetch("https://api.example.com");
export default data;
```
**Why**: Top-level await requires async module evaluation, which can't be represented in CommonJS. The module graph becomes asynchronous, and the CommonJS wrapper function model breaks down.
**Workaround**: Move async initialization into a function:
```js
async function init() {
const data = await fetch("https://api.example.com");
return data;
}
export default init;
```
Now the module exports a function that the consumer can await when needed.
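On the consumer side, that might look like this (the module path is illustrative), keeping the `await` inside a function so the consumer also avoids top-level await:
```ts
import init from "./data";

async function main() {
  // Await the async initialization where it's needed, not at module top level.
  const data = await init();
  console.log(data.status);
}

main();
```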
### Version compatibility
Bytecode is **not portable across Bun versions**. The bytecode format is tied to JavaScriptCore's internal representation, which changes between versions.
When you update Bun, you must regenerate bytecode:
```bash terminal icon="terminal"
# After updating Bun
bun build --bytecode ./index.ts --outdir=./dist
```
If bytecode doesn't match the current Bun version, it's automatically ignored and your code falls back to parsing the JavaScript source. Your app still runs - you just lose the performance optimization.
**Best practice**: Generate bytecode as part of your CI/CD build process. Don't commit `.jsc` files to git. Regenerate them whenever you update Bun.
### Source code still required
Bytecode does not replace your source code. A bytecode build produces, and must ship, two files:
- The `.js` file (your bundled source code)
- The `.jsc` file (the bytecode cache)
At runtime:
1. Bun loads the `.js` file, sees a `@bytecode` pragma, and checks the `.jsc` file
2. Bun loads the `.jsc` file
3. Bun validates the bytecode hash matches the source
4. If valid, Bun uses the bytecode
5. If invalid, Bun falls back to parsing the source
### Bytecode is not obfuscation
Bytecode **does not obscure your source code**. It's an optimization, not a security measure.
## Production deployment
### Docker
Include bytecode generation in your Dockerfile:
```dockerfile Dockerfile icon="docker"
FROM oven/bun:1 AS builder
WORKDIR /app
COPY package.json bun.lock ./
RUN bun install --frozen-lockfile
COPY . .
RUN bun build --compile --bytecode --minify --sourcemap \
    --target=bun \
    ./src/server.ts --outfile=./dist/server
FROM oven/bun:1 AS runner
WORKDIR /app
COPY --from=builder /app/dist/server /app/server
CMD ["./server"]
```
The bytecode is architecture-independent.
### CI/CD
Generate bytecode during your build pipeline:
```yaml workflow.yml icon="file-code"
# GitHub Actions
- name: Build with bytecode
  run: |
    bun install
    bun build --bytecode --minify \
      --outdir=./dist \
      --target=bun \
      ./src/index.ts
```
## Debugging
### Verify bytecode is being used
Check that the `.jsc` file exists:
```bash terminal icon="terminal"
ls -lh dist/
```
```txt
-rw-r--r-- 1 user staff 245K index.js
-rw-r--r-- 1 user staff 1.1M index.jsc
```
The `.jsc` file should be 2-8x larger than the `.js` file.
To log if bytecode is being used, set `BUN_JSC_verboseDiskCache=1` in your environment.
On success, it will log something like:
```txt
[Disk cache] cache hit for sourceCode
```
If you see a cache miss, it will log something like:
```txt
[Disk cache] cache miss for sourceCode
```
It's normal for it to log a cache miss multiple times, since Bun doesn't currently bytecode cache the JavaScript code used in builtin modules.
### Common issues
**Bytecode silently ignored**: Usually caused by a Bun version update. The cache version doesn't match, so bytecode is rejected. Regenerate to fix.
**File size too large**: This is expected. Consider:
- Using `--minify` to reduce code size before bytecode generation
- Compressing `.jsc` files for network transfer (gzip/brotli)
- Evaluating if the startup performance gain is worth the size increase
**Top-level await**: Not supported. Refactor to use async initialization functions.
## What is bytecode?
When you run JavaScript, the JavaScript engine doesn't execute your source code directly. Instead, it goes through several steps:
1. **Parsing**: The engine reads your JavaScript source code and converts it into an Abstract Syntax Tree (AST)
2. **Bytecode compilation**: The AST is compiled into bytecode - a lower-level representation that's faster to execute
3. **Execution**: The bytecode is executed by the engine's interpreter or JIT compiler
Bytecode is an intermediate representation - it's lower-level than JavaScript source code, but higher-level than machine code. Think of it as assembly language for a virtual machine. Each bytecode instruction represents a single operation like "load this variable," "add two numbers," or "call this function."
This happens **every single time** you run your code. If you have a CLI tool that runs 100 times a day, your code gets parsed 100 times. If you have a serverless function with frequent cold starts, parsing happens on every cold start.
With bytecode caching, Bun moves steps 1 and 2 to the build step. At runtime, the engine loads the pre-compiled bytecode and jumps straight to execution.
### Why lazy parsing makes this even better
Modern JavaScript engines use a clever optimization called **lazy parsing**. They don't parse all your code upfront - instead, functions are only parsed when they're first called:
```js
// Without bytecode caching:
function rarely_used() {
// This 500-line function is only parsed
// when it's actually called
}
function main() {
console.log("Starting app");
// rarely_used() is never called, so it's never parsed
}
```
This means parsing overhead isn't just a startup cost - it happens throughout your application's lifetime as different code paths execute. With bytecode caching, **all functions are pre-compiled**, even the ones that are lazily parsed. The parsing work happens once at build time instead of being distributed throughout your application's execution.
## The bytecode format
### Inside a .jsc file
A `.jsc` file contains a serialized bytecode structure. Understanding what's inside helps explain both the performance benefits and the file size tradeoff.
**Header section** (validated on every load):
- **Cache version**: A hash tied to the JavaScriptCore framework version. This ensures bytecode generated with one version of Bun only runs with that exact version.
- **Code block type tag**: Identifies whether this is a Program, Module, Eval, or Function code block.
- **Source code hash**: A hash of the original JavaScript source code. Bun verifies this matches before using the bytecode.
- **Source code length**: The exact length of the source, for additional validation.
- **Compilation flags**: Critical compilation context like strict mode, whether it's a script vs module, eval context type, etc. The same source code compiled with different flags produces different bytecode.
**Bytecode instructions**:
- **Instruction stream**: The actual bytecode opcodes - the compiled representation of your JavaScript. This is a variable-length sequence of bytecode instructions.
- **Metadata table**: Each opcode has associated metadata - things like profiling counters, type hints, and execution counts (even if not yet populated).
- **Jump targets**: Pre-computed addresses for control flow (if/else, loops, switch statements).
- **Switch tables**: Optimized lookup tables for switch statements.
**Constants and identifiers**:
- **Constant pool**: All literal values in your code - numbers, strings, booleans, null, undefined. These are stored as actual JavaScript values (JSValues) so they don't need to be parsed from source at runtime.
- **Identifier table**: All variable and function names used in the code. Stored as deduplicated strings.
- **Source code representation markers**: Flags indicating how constants should be represented (as integers, doubles, big ints, etc.).
**Function metadata** (for each function in your code):
- **Register allocation**: How many registers (local variables) the function needs - `thisRegister`, `scopeRegister`, `numVars`, `numCalleeLocals`, `numParameters`.
- **Code features**: A bitmask of function characteristics: is it a constructor? an arrow function? does it use `super`? does it have tail calls? These affect how the function is executed.
- **Lexically scoped features**: Strict mode and other lexical context.
- **Parse mode**: The mode in which the function was parsed (normal, async, generator, async generator).
**Nested structures**:
- **Function declarations and expressions**: Each nested function gets its own bytecode block, recursively. A file with 100 functions has 100 separate bytecode blocks, all nested in the structure.
- **Exception handlers**: Try/catch/finally blocks with their boundaries and handler addresses pre-computed.
- **Expression info**: Maps bytecode positions back to source code locations for error reporting and debugging.
### What bytecode does NOT contain
Importantly, **bytecode does not embed your source code**. Instead:
- The JavaScript source is stored separately (in the `.js` file)
- The bytecode only stores a hash and length of the source
- At load time, Bun validates the bytecode matches the current source code
This is why you need to deploy both the `.js` and `.jsc` files. The `.jsc` file is useless without its corresponding `.js` file.
## The tradeoff: file size
Bytecode files are significantly larger than source code - typically 2-8x larger.
### Why is bytecode so much larger?
**Bytecode instructions are verbose**:
A single line of minified JavaScript might compile to dozens of bytecode instructions. For example:
```js
const sum = arr.reduce((a, b) => a + b, 0);
```
Compiles to bytecode that:
- Loads the `arr` variable
- Gets the `reduce` property
- Creates the arrow function (which itself has bytecode)
- Loads the initial value `0`
- Sets up the call with the right number of arguments
- Actually performs the call
- Stores the result in `sum`
Each of these steps is a separate bytecode instruction with its own metadata.
**Constant pools store everything**:
Every string literal, number, property name - everything gets stored in the constant pool. Even if your source code has `"hello"` a hundred times, the constant pool stores it once, but the identifier table and constant references add overhead.
**Per-function metadata**:
Each function - even small one-line functions - gets its own complete metadata:
- Register allocation info
- Code features bitmask
- Parse mode
- Exception handlers
- Expression info for debugging
A file with 1,000 small functions has 1,000 sets of metadata.
**Profiling data structures**:
Even though profiling data isn't populated yet, the _structures_ to hold profiling data are allocated. This includes:
- Value profile slots (tracking what types flow through each operation)
- Binary arithmetic profile slots (tracking number types in math operations)
- Unary arithmetic profile slots
These take up space even when empty.
**Pre-computed control flow**:
Jump targets, switch tables, and exception handler boundaries are all pre-computed and stored. This makes execution faster but increases file size.
### Mitigation strategies
**Compression**:
Bytecode compresses extremely well with gzip/brotli (60-70% compression). The repetitive structure and metadata compress efficiently.
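For example, if you ship the cache over the network yourself, something like this keeps the transfer small (file paths are illustrative):
```ts
// Compress the bytecode artifact for network transfer; decompress before use.
const jsc = new Uint8Array(await Bun.file("./dist/index.jsc").arrayBuffer());
const compressed = Bun.gzipSync(jsc);
await Bun.write("./dist/index.jsc.gz", compressed);
console.log(`${jsc.byteLength} -> ${compressed.byteLength} bytes`);
```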
**Minification first**:
Using `--minify` before bytecode generation helps:
- Shorter identifiers → smaller identifier table
- Dead code elimination → less bytecode generated
- Constant folding → fewer constants in the pool
**The tradeoff**:
You're trading 2-4x larger files for 2-4x faster startup. For CLIs, this is usually worth it. For long-running servers where a few megabytes of disk space don't matter, it's even less of an issue.
## Versioning and portability
### Cross-architecture portability: ✅
Bytecode is **architecture-independent**. You can:
- Build on macOS ARM64, deploy to Linux x64
- Build on Linux x64, deploy to AWS Lambda ARM64
- Build on Windows x64, deploy to macOS ARM64
The bytecode contains abstract instructions that work on any architecture. Architecture-specific optimizations happen during JIT compilation at runtime, not in the cached bytecode.
### Cross-version portability: ❌
Bytecode is **not stable across Bun versions**. Here's why:
**Bytecode format changes**:
JavaScriptCore's bytecode format evolves. New opcodes get added, old ones get removed or changed, metadata structures change. Each version of JavaScriptCore has a different bytecode format.
**Version validation**:
The cache version in the `.jsc` file header is a hash of the JavaScriptCore framework. When Bun loads bytecode:
1. It extracts the cache version from the `.jsc` file
2. It computes the current JavaScriptCore version
3. If they don't match, the bytecode is **silently rejected**
4. Bun falls back to parsing the `.js` source code
Your application still runs - you just lose the performance optimization.
**Graceful degradation**:
This design means bytecode caching "fails open" - if anything goes wrong (version mismatch, corrupted file, missing file), your code still runs normally. You might see slower startup, but you won't see errors.
## Unlinked vs. linked bytecode
JavaScriptCore makes a crucial distinction between "unlinked" and "linked" bytecode. This separation is what makes bytecode caching possible:
### Unlinked bytecode (what's cached)
The bytecode saved in `.jsc` files is **unlinked bytecode**. It contains:
- The compiled bytecode instructions
- Structural information about the code
- Constants and identifiers
- Control flow information
But it **doesn't** contain:
- Pointers to actual runtime objects
- JIT-compiled machine code
- Profiling data from previous runs
- Call link information (which functions call which)
Unlinked bytecode is **immutable and shareable**. Multiple executions of the same code can all reference the same unlinked bytecode.
### Linked bytecode (runtime execution)
When Bun runs bytecode, it "links" it - creating a runtime wrapper that adds:
- **Call link information**: As your code runs, the engine learns which functions call which and optimizes those call sites.
- **Profiling data**: The engine tracks how many times each instruction executes, what types of values flow through the code, array access patterns, etc.
- **JIT compilation state**: References to baseline JIT or optimizing JIT (DFG/FTL) compiled versions of hot code.
- **Runtime objects**: Pointers to actual JavaScript objects, prototypes, scopes, etc.
This linked representation is created fresh every time you run your code. This allows:
1. **Caching the expensive work** (parsing and compilation to unlinked bytecode)
2. **Still collecting runtime profiling data** to guide optimizations
3. **Still applying JIT optimizations** based on actual execution patterns
Bytecode caching moves expensive work (parsing and compiling to bytecode) from runtime to build time. For applications that start frequently, this can halve your startup time at the cost of larger files on disk.
For production CLIs and serverless deployments, the combination of `--bytecode --minify --sourcemap` gives you the best performance while maintaining debuggability.
description: Bun's bundler has built-in support for CSS with modern features
---
Bun's bundler has built-in support for CSS with the following features:
- Transpiling modern/future features to work on all browsers (including vendor prefixing)
Bun's CSS bundler lets you use modern/future CSS features without having to worry about browser compatibility — all thanks to its transpiling and vendor prefixing features which are enabled by default.
Bun's CSS parser and bundler is a direct Rust → Zig port of [LightningCSS](https://lightningcss.dev/), with a bundling approach inspired by esbuild. The transpiler converts modern CSS syntax into backwards-compatible equivalents that work across browsers.
<Note>A huge thanks goes to the amazing work from the authors of LightningCSS and esbuild.</Note>
## Browser Compatibility
By default, Bun's CSS bundler targets the following browsers:
- Chrome 87+
- Safari 14+
## Syntax Lowering
### Nesting
The CSS Nesting specification allows you to write more concise and intuitive stylesheets by nesting selectors inside one another. Instead of repeating parent selectors across your CSS file, you can write child styles directly within their parent blocks.
```scss title="styles.css" icon="file-code"
/* With nesting */
.card {
  background: white;
  /* ... */
}
```
Bun's CSS bundler automatically converts this nested syntax into traditional flat CSS that works in all browsers:
```css title="styles.css" icon="file-code"
/* Compiled output */
.card {
  background: white;
  /* ... */
}
```
You can also nest media queries and other at-rules inside selectors, eliminating the need to repeat selector patterns:
```scss title="styles.css" icon="file-code"
.responsive-element {
  display: block;
  /* ... */
}
```
This compiles to:
```css title="styles.css" icon="file-code"
.responsive-element {
  display: block;
}
/* ... */
```
### Color mix
The `color-mix()` function gives you an easy way to blend two colors together according to a specified ratio in a chosen color space. This powerful feature lets you create color variations without manually calculating the resulting values.
```scss title="styles.css" icon="file-code"
.button {
  /* Mix blue and red in the RGB color space with a 30/70 proportion */
  background-color: color-mix(in srgb, blue 30%, red);
  /* ... */
}
```
Bun's CSS bundler evaluates these color mixes at build time when all color values are known (not CSS variables), generating static color values that work in all browsers:
```css title="styles.css" icon="file-code"
.button {
  /* Computed to the exact resulting color */
  background-color: #b31a1a;
  /* ... */
}
```
This feature is particularly useful for creating color systems with programmatically derived shades, tints, and accents without needing preprocessors or custom tooling.
### Relative colors
CSS now allows you to modify individual components of a color using relative color syntax. This powerful feature lets you create color variations by adjusting specific attributes like lightness, saturation, or individual channels without having to recalculate the entire color.
```css title="styles.css" icon="file-code"
.theme-color {
  /* Start with a base color and increase lightness by 15% */
  --accent: lch(from purple calc(l + 15%) c h);
  /* ... */
}
```
This approach is extremely useful for theme generation, creating accessible color variants, or building color scales based on mathematical relationships instead of hard-coding each value.
### LAB colors
Modern CSS supports perceptually uniform color spaces like LAB, LCH, OKLAB, and OKLCH that offer significant advantages over traditional RGB. These color spaces can represent colors outside the standard RGB gamut, resulting in more vibrant and visually consistent designs.
```css title="styles.css" icon="file-code"
.vibrant-element {
  /* A vibrant red that exceeds sRGB gamut boundaries */
  color: lab(55% 78 35);
  /* A smooth gradient using perceptual color space */
  /* ... */
}
```
This layered approach ensures optimal color rendering across all browsers while allowing you to use the latest color technologies in your designs.
### Color function
The `color()` function provides a standardized way to specify colors in various predefined color spaces, expanding your design options beyond the traditional RGB space. This allows you to access wider color gamuts and create more vibrant designs.
```css title="styles.css" icon="file-code"
.vivid-element {
  /* Using the Display P3 color space for wider gamut colors */
  color: color(display-p3 1 0.1 0.3);
  /* ... */
}
```
For browsers that don't support these advanced color functions yet, Bun's CSS bundler provides appropriate RGB fallbacks:
```css title="styles.css" icon="file-code"
.vivid-element {
  /* RGB fallback first for maximum compatibility */
  color: #fa1a4c;
  /* ... */
}
```
This functionality lets you use modern color spaces immediately while ensuring your designs remain functional across all browsers, with optimal colors displayed in supporting browsers and reasonable approximations elsewhere.
### HWB colors
The HWB (Hue, Whiteness, Blackness) color model provides an intuitive way to express colors based on how much white or black is mixed with a pure hue. Many designers find this approach more natural for creating color variations compared to manipulating RGB or HSL values.
```css title="styles.css" icon="file-code"
.easy-theming {
  /* Pure cyan with no white or black added */
  --primary: hwb(180 0% 0%);
  /* ... */
}
```
Bun's CSS bundler automatically converts HWB colors to RGB for compatibility with all browsers:
```css title="styles.css" icon="file-code"
.easy-theming {
  --primary: #00ffff;
  --primary-light: #33ffff;
  /* ... */
}
```
The HWB model makes it particularly easy to create systematic color variations for design systems, providing a more intuitive approach to creating consistent tints and shades than working directly with RGB or HSL values.
### Color notation
Modern CSS has introduced more intuitive and concise ways to express colors. Space-separated color syntax eliminates the need for commas in RGB and HSL values, while hex colors with alpha channels provide a compact way to specify transparency.
```css title="styles.css" icon="file-code"
.modern-styling {
  /* Space-separated RGB notation (no commas) */
  color: rgb(50 100 200);
  /* ... */
}
```
Bun's CSS bundler automatically converts these modern color formats to ensure compatibility with older browsers:
```css title="styles.css" icon="file-code"
.modern-styling {
  /* Converted to comma format for older browsers */
  color: rgb(50, 100, 200);
  /* ... */
}
```
This conversion process lets you write cleaner, more modern CSS while ensuring your styles work correctly across all browsers.
### light-dark() color function
The `light-dark()` function provides an elegant solution for implementing color schemes that respect the user's system preference without requiring complex media queries. This function accepts two color values and automatically selects the appropriate one based on the current color scheme context.
```css title="styles.css" icon="file-code"
:root {
  /* Define color scheme support */
  color-scheme: light dark;
  /* ... */
}
```
For browsers that don't support this feature yet, Bun's CSS bundler converts it to use CSS variables with proper fallbacks:
```css title="styles.css" icon="file-code"
:root {
  --lightningcss-light: initial;
  --lightningcss-dark: ;
  /* ... */
}
```
This approach gives you a clean way to handle light and dark themes without duplicating styles or writing complex media queries, while maintaining compatibility with browsers that don't yet support the feature natively.
### Logical properties
CSS logical properties let you define layout, spacing, and sizing relative to the document's writing mode and text direction rather than physical screen directions. This is crucial for creating truly international layouts that automatically adapt to different writing systems.
```css title="styles.css" icon="file-code"
.multilingual-component {
  /* Margin that adapts to writing direction */
  margin-inline-start: 1rem;
  /* ... */
}
```
For browsers that don't fully support logical properties, Bun's CSS bundler compiles them to physical properties with appropriate directional adjustments:
```css title="styles.css" icon="file-code"
/* For left-to-right languages */
.multilingual-component:dir(ltr) {
  margin-left: 1rem;
  /* ... */
}
```
If the `:dir()` selector isn't supported, additional fallbacks are automatically generated to ensure your layouts work properly across all browsers and writing systems. This makes creating internationalized designs much simpler while maintaining compatibility with older browsers.
### :dir() selector
The `:dir()` pseudo-class selector allows you to style elements based on their text direction (RTL or LTR), providing a powerful way to create direction-aware designs without JavaScript. This selector matches elements based on their directionality as determined by the document or explicit direction attributes.
```css title="styles.css" icon="file-code"
/* Apply different styles based on text direction */
.nav-arrow:dir(ltr) {
  transform: rotate(0deg);
  /* ... */
}
```
For browsers that don't support the `:dir()` selector yet, Bun's CSS bundler converts it to the more widely supported `:lang()` selector with appropriate language mappings:
```css title="styles.css" icon="file-code"
/* Converted to use language-based selectors as fallback */
.nav-arrow:lang(en, fr, de, es, it, pt, nl) {
  transform: rotate(0deg);
  /* ... */
}
```
This conversion lets you write direction-aware CSS that works reliably across browsers, even those that don't yet support the `:dir()` selector natively. If multiple arguments to `:lang()` aren't supported, further fallbacks are automatically provided.
### :lang() selector
The `:lang()` pseudo-class selector allows you to target elements based on the language they're in, making it easy to apply language-specific styling. Modern CSS allows the `:lang()` selector to accept multiple language codes, letting you group language-specific rules more efficiently.
```css title="styles.css" icon="file-code"
/* Typography adjustments for CJK languages */
:lang(zh, ja, ko) {
  line-height: 1.8;
  /* ... */
}
```
For browsers that don't support multiple arguments in the `:lang()` selector, Bun's CSS bundler converts this syntax to use the `:is()` selector to maintain the same behavior:
```css title="styles.css" icon="file-code"
/* Multiple languages grouped with :is() for better browser support */
/* ... */
```
If needed, Bun can provide additional fallbacks for `:is()` as well, ensuring your language-specific styles work across all browsers. This approach simplifies creating internationalized designs with distinct typographic and styling rules for different language groups.
### :is() selector
The `:is()` pseudo-class function (formerly `:matches()`) allows you to create more concise and readable selectors by grouping multiple selectors together. It accepts a selector list as its argument and matches if any of the selectors in that list match, significantly reducing repetition in your CSS.
```css title="styles.css" icon="file-code"
/* Instead of writing these separately */
/*
.article h1,
...
*/
```
<Warning>
The vendor-prefixed versions have some limitations compared to the standardized `:is()` selector, particularly with
complex selectors. Bun handles these limitations intelligently, only using prefixed versions when they'll work
correctly.
</Warning>
### :not() selector
The `:not()` pseudo-class allows you to exclude elements that match a specific selector. The modern version of this selector accepts multiple arguments, letting you exclude multiple patterns with a single, concise selector.
```css title="styles.css" icon="file-code"
/* Select all buttons except primary and secondary variants */
/* ... */
```
For browsers that don't support multiple arguments in `:not()`, Bun's CSS bundler converts this syntax to a more compatible form while preserving the same behavior:
```css title="styles.css" icon="file-code"
/* Converted to use :not with :is() for compatibility */
/* ... */
```
This conversion ensures your negative selectors work correctly across all browsers while maintaining the correct specificity and behavior of the original selector.
### Math functions
CSS now includes a rich set of mathematical functions that let you perform complex calculations directly in your stylesheets. These include standard math functions (`round()`, `mod()`, `rem()`, `abs()`, `sign()`), trigonometric functions (`sin()`, `cos()`, `tan()`, `asin()`, `acos()`, `atan()`, `atan2()`), and exponential functions (`pow()`, `sqrt()`, `exp()`, `log()`, `hypot()`).
```css title="styles.css" icon="file-code"
.dynamic-sizing {
  /* Clamp a value between minimum and maximum */
  width: clamp(200px, 50%, 800px);
  /* ... */
}
```
Bun's CSS bundler evaluates these mathematical expressions at build time when all values are known constants (not variables), resulting in optimized output:
```css title="styles.css" icon="file-code"
.dynamic-sizing {
  width: clamp(200px, 50%, 800px);
  padding: 15px;
  /* ... */
}
```
This approach lets you write more expressive and maintainable CSS with meaningful mathematical relationships, which then gets compiled to optimized values for maximum browser compatibility and performance.
### Media query ranges
Modern CSS supports intuitive range syntax for media queries, allowing you to specify breakpoints using comparison operators like `<`, `>`, `<=`, and `>=` instead of the more verbose `min-` and `max-` prefixes. This syntax is more readable and matches how we normally think about values and ranges.
```css title="styles.css" icon="file-code"
/* Modern syntax with comparison operators */
@media (width >= 768px) {
  .container {
    /* ... */
  }
}
```
Bun's CSS bundler converts these modern range queries to traditional media query syntax for compatibility with all browsers:
```css title="styles.css" icon="file-code"
/* Converted to traditional min/max syntax */
@media (min-width: 768px) {
  .container {
    /* ... */
  }
}
```
This lets you write more intuitive and mathematical media queries while ensuring your stylesheets work correctly across all browsers, including those that don't support the modern range syntax.
### Shorthands
CSS has introduced several modern shorthand properties that improve code readability and maintainability. Bun's CSS bundler ensures these convenient shorthands work on all browsers by converting them to their longhand equivalents when needed.
```css title="styles.css" icon="file-code"
/* Alignment shorthands */
.flex-container {
  /* Shorthand for align-items and justify-items */
  /* ... */
}
```
For browsers that don't support these modern shorthands, Bun converts them to their component longhand properties:
```css title="styles.css" icon="file-code"
.flex-container {
  /* Expanded alignment properties */
  align-items: center;
  /* ... */
}
```
This conversion ensures your stylesheets remain clean and maintainable while providing the broadest possible browser compatibility.
### Double position gradients
The double position gradient syntax is a modern CSS feature that allows you to create hard color stops in gradients by specifying the same color at two adjacent positions. This creates a sharp transition rather than a smooth fade, which is useful for creating stripes, color bands, and other multi-color designs.
```css title="styles.css" icon="file-code"
.striped-background {
  /* Creates a sharp transition from green to red at 30%-40% */
  background: linear-gradient(
    /* ... */
  );
}
```
For browsers that don't support this syntax, Bun's CSS bundler automatically converts it to the traditional format by duplicating color stops:
```css title="styles.css" icon="file-code"
.striped-background {
  background: linear-gradient(
    to right,
    /* ... */
  );
}
```
This conversion lets you use the cleaner double position syntax in your source code while ensuring gradients display correctly in all browsers.
### system-ui font
The `system-ui` generic font family lets you use the device's native UI font, creating interfaces that feel more integrated with the operating system. This provides a more native look and feel without having to specify different font stacks for each platform.
```css title="styles.css" icon="file-code"
.native-interface {
  /* Use the system's default UI font */
  font-family: system-ui;
  /* ... */
}
```
For browsers that don't support `system-ui`, Bun's CSS bundler automatically expands it to a comprehensive cross-platform font stack:
```css title="styles.css" icon="file-code"
.native-interface {
  /* Expanded to support all major platforms */
  font-family: /* ... */;
}
```
## CSS Modules
Bun's bundler also supports bundling [CSS modules](https://css-tricks.com/css-modules-part-1-need/) in addition to [regular CSS](/docs/bundler/css) with support for the following features:
- Automatically detecting CSS module files (`.module.css`) with zero configuration
- Composition (`composes` property)
- Importing CSS modules into JSX/TSX
- Warnings/errors for invalid usages of CSS modules
A CSS module is a CSS file (with the `.module.css` extension) where all class names and animations are scoped to the file. This helps you avoid class name collisions as CSS declarations are globally scoped by default.
Under the hood, Bun's bundler transforms locally scoped class names into unique identifiers.
### Getting started
Classes can compose styles from other CSS modules via the `composes` property:
```css
/* ... */
composes: background from "./background.module.css";
color: red;
}
```
<Warning>
When composing classes from separate files, be sure that they do not contain the same properties.
The CSS module spec says that composing classes from separate files with conflicting properties is undefined behavior, meaning that the output may differ and be unreliable.
</Warning>
Create a CSS file with the `.module.css` extension:
```css
/* styles.module.css */
.button {
  color: red;
}

/* other-styles.module.css */
.button {
  color: blue;
}
```
You can then import this file, for example into a TSX file:
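A minimal sketch of that import (the component is illustrative; Bun rewrites `styles.button` to a unique, locally scoped class name at bundle time):
```tsx
import styles from "./styles.module.css";

export function Button() {
  // `styles.button` resolves to the locally scoped class name
  return <button className={styles.button}>Click me</button>;
}
```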
description: Migration guide from esbuild to Bun's bundler
---
Bun's bundler API is inspired heavily by esbuild. Migrating to Bun's bundler from esbuild should be relatively painless. This guide will briefly explain why you might consider migrating to Bun's bundler and provide a side-by-side API comparison reference for those who are already familiar with esbuild's API.
There are a few behavioral differences to note.
<Note>
**Bundling by default.** Unlike esbuild, Bun always bundles by default. This is why the `--bundle` flag isn't
necessary in the Bun example. To transpile each file individually, use `Bun.Transpiler`.
</Note>
<Note>
**It's just a bundler.** Unlike esbuild, Bun's bundler does not include a built-in development server or file watcher.
It's just a bundler. The bundler is intended for use in conjunction with `Bun.serve` and other runtime APIs to achieve
the same effect. As such, all options relating to HTTP/file watching are not applicable.
</Note>
## Performance
With a performance-minded API coupled with the extensively optimized Zig-based JS/TS parser, Bun's bundler is 1.75x faster than esbuild on esbuild's three.js benchmark.
<Info>Bundling 10 copies of three.js from scratch, with sourcemaps and minification</Info>
## CLI API
Bun and esbuild both provide a command-line interface.
```bash terminal icon="terminal"
# esbuild
esbuild <entrypoint> --outdir=out --bundle
# bun
bun build <entrypoint> --outdir=out
```
In Bun's CLI, simple boolean flags like `--minify` do not accept an argument. Other flags like `--outdir <path>` do accept an argument; these flags can be written as `--outdir out` or `--outdir=out`. Some flags like `--define` can be specified several times: `--define foo=bar --define bar=baz`.
| esbuild | `bun build` | Notes |
| --- | --- | --- |
| `--bundle` | n/a | Bun always bundles, use `--no-bundle` to disable this behavior. |
| `--define:K=V` | `--define K=V` | Small syntax difference; no colon.<br/>`esbuild --define:foo=bar`<br/>`bun build --define foo=bar` |
| `--external:<pkg>` | `--external <pkg>` | Small syntax difference; no colon.<br/>`esbuild --external:react`<br/>`bun build --external react` |
| `--format` | `--format` | Bun supports `"esm"` and `"cjs"` currently, but more module formats are planned. esbuild defaults to `"iife"`. |
| `--loader:.ext=loader` | `--loader .ext:loader` | Bun supports a different set of built-in loaders than esbuild; see Bundler > Loaders for a complete reference. The esbuild loaders `dataurl`, `binary`, `base64`, `copy`, and `empty` are not yet implemented.<br/><br/>The syntax for `--loader` is slightly different.<br/>`esbuild app.ts --bundle --loader:.svg=text`<br/>`bun build app.ts --loader .svg:text` |
| `--minify` | `--minify` | No differences |
| `--outdir` | `--outdir` | No differences |
| `--outfile` | `--outfile` | No differences |
| `--packages` | `--packages` | No differences |
| `--platform` | `--target` | Renamed to `--target` for consistency with tsconfig. Does not support `neutral`. |
| `--serve` | n/a | Not applicable |
| `--sourcemap` | `--sourcemap` | No differences |
| `--splitting` | `--splitting` | No differences |
| `--target` | n/a | Not supported. Bun's bundler performs no syntactic down-leveling at this time. |
| `--watch` | `--watch` | No differences |
| `--allow-overwrite` | n/a | Overwriting is never allowed |
| `--analyze` | n/a | Not supported |
| `--asset-names` | `--asset-naming` | Renamed for consistency with naming in JS API |
| `--banner` | `--banner` | Only applies to js bundles |
| `--footer` | `--footer` | Only applies to js bundles |
| `--certfile` | n/a | Not applicable |
| `--charset=utf8` | n/a | Not supported |
| `--chunk-names` | `--chunk-naming` | Renamed for consistency with naming in JS API |
| `--color` | n/a | Always enabled |
| `--drop` | `--drop` | |
| n/a | `--feature` | Bun-specific. Enable feature flags for compile-time dead-code elimination via `import { feature } from "bun:bundle"` |
| `--entry-names` | `--entry-naming` | Renamed for consistency with naming in JS API |
| `--global-name` | n/a | Not applicable, Bun does not support `iife` output at this time |
| `--jsx-dev` | n/a | Bun reads `compilerOptions.jsx` from `tsconfig.json` to determine a default. If `compilerOptions.jsx` is `"react-jsx"`, or if `NODE_ENV=production`, Bun will use the jsx transform. Otherwise, it uses `jsxDEV`. The bundler does not support `preserve`. |
## JS API
| `esbuild.build()` | `Bun.build()` | Notes |
| --- | --- | --- |
| `absWorkingDir` | n/a | Always set to `process.cwd()` |
| `alias` | n/a | Not supported |
| `allowOverwrite` | n/a | Always false |
| `assetNames` | `naming.asset` | Uses same templating syntax as esbuild, but `[ext]` must be included explicitly.<br/><br/>`ts<br/>Bun.build({<br/> entrypoints: ["./index.tsx"],<br/> naming: {<br/> asset: "[name].[ext]",<br/> },<br/>});<br/>` |
| `banner` | n/a | Not supported |
| `bundle` | n/a | Always true. Use `Bun.Transpiler` to transpile without bundling. |
| `charset` | n/a | Not supported |
| `chunkNames` | `naming.chunk` | Uses same templating syntax as esbuild, but `[ext]` must be included explicitly.<br/><br/>`ts<br/>Bun.build({<br/> entrypoints: ["./index.tsx"],<br/> naming: {<br/> chunk: "[name].[ext]",<br/> },<br/>});<br/>` |
| `color` | n/a | Bun returns logs in the `logs` property of the build result. |
| `conditions` | n/a | Not supported. Export conditions priority is determined by `target`. |
| `define` | `define` | |
| `drop` | n/a | Not supported |
| `entryNames` | `naming` or `naming.entry` | Bun supports a `naming` key that can either be a string or an object. Uses same templating syntax as esbuild, but `[ext]` must be included explicitly.<br/><br/>`ts<br/>Bun.build({<br/> entrypoints: ["./index.tsx"],<br/> // when string, this is equivalent to entryNames<br/> naming: "[name].[ext]",<br/><br/> // granular naming options<br/> naming: {<br/> entry: "[name].[ext]",<br/> asset: "[name].[ext]",<br/> chunk: "[name].[ext]",<br/> },<br/>});<br/>` |
| `format` | `format` | Bun supports `"esm"` and `"cjs"` currently; more module formats are planned. |
| `globalName` | n/a | Not supported |
| `ignoreAnnotations` | n/a | Not supported |
| `inject` | n/a | Not supported |
| `jsx` | `jsx` | Not supported in JS API, configure in `tsconfig.json` |
| `jsxDev` | `jsxDev` | Not supported in JS API, configure in `tsconfig.json` |
| `jsxFactory` | `jsxFactory` | Not supported in JS API, configure in `tsconfig.json` |
| `jsxFragment` | `jsxFragment` | Not supported in JS API, configure in `tsconfig.json` |
| `jsxImportSource` | `jsxImportSource` | Not supported in JS API, configure in `tsconfig.json` |
| `jsxSideEffects` | `jsxSideEffects` | Not supported in JS API, configure in `tsconfig.json` |
| `keepNames` | n/a | Not supported |
| `legalComments` | n/a | Not supported |
| `loader` | `loader` | Bun supports a different set of built-in loaders than esbuild; see Bundler > Loaders for a complete reference. The esbuild loaders `dataurl`, `binary`, `base64`, `copy`, and `empty` are not yet implemented. |
| `logLevel` | n/a | Not supported |
| `logLimit` | n/a | Not supported |
| `logOverride` | n/a | Not supported |
| `mainFields` | n/a | Not supported |
| `mangleCache` | n/a | Not supported |
| `mangleProps` | n/a | Not supported |
| `mangleQuoted` | n/a | Not supported |
| `metafile` | n/a | Not supported |
| `minify` | `minify` | In Bun, `minify` can be a boolean or an object.<br/><br/>`ts<br/>await Bun.build({<br/> entrypoints: ['./index.tsx'],<br/> // enable all minification<br/> minify: true<br/><br/> // granular options<br/> minify: {<br/> identifiers: true,<br/> syntax: true,<br/> whitespace: true<br/> }<br/>})<br/>` |
| `minifyIdentifiers` | `minify.identifiers` | See `minify` |
| `minifySyntax` | `minify.syntax` | See `minify` |
| `minifyWhitespace` | `minify.whitespace` | See `minify` |
| `nodePaths` | n/a | Not supported |
| `outExtension` | n/a | Not supported |
| `outbase` | `root` | Different name |
| `outdir` | `outdir` | No differences |
| `outfile` | `outfile` | No differences |
| `packages` | n/a | Not supported, use `external` |
| `platform` | `target` | Supports `"bun"`, `"node"` and `"browser"` (the default). Does not support `"neutral"`. |
| `plugins` | `plugins` | Bun's plugin API is a subset of esbuild's. Some esbuild plugins will work out of the box with Bun. |
| `target` | n/a | No support for syntax downleveling |
| `treeShaking` | n/a | Always true |
| `tsconfig` | n/a | Not supported |
| `write` | n/a | Set to true if `outdir`/`outfile` is set, otherwise false |
## Plugin API
Bun's plugin API is designed to be esbuild compatible. Bun doesn't support esbuild's entire plugin API surface, but the core functionality is implemented. Many third-party esbuild plugins will work out of the box with Bun.
<Note>
Long term, we aim for feature parity with esbuild's API, so if something doesn't work please file an issue to help us
prioritize.
</Note>
Plugins in Bun and esbuild are defined with a builder object.
The builder object provides some methods for hooking into parts of the bundling process. Bun implements `onResolve` and `onLoad`; it does not yet implement the esbuild hooks `onStart`, `onEnd`, and `onDispose`, or the `resolve` utilities. `initialOptions` is partially implemented: it is read-only and exposes only a subset of esbuild's options; use `config` (the same idea, but in Bun's `BuildConfig` format) instead.
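For instance, a small plugin using just those two hooks might look like this (the `env:config` specifier and plugin name are made up for illustration):
```ts
import type { BunPlugin } from "bun";

// Resolves the made-up specifier "env:config" to a virtual module.
const envConfigPlugin: BunPlugin = {
  name: "env-config",
  setup(build) {
    build.onResolve({ filter: /^env:config$/ }, args => ({
      path: args.path,
      namespace: "env",
    }));
    build.onLoad({ filter: /.*/, namespace: "env" }, () => ({
      contents: `export default ${JSON.stringify(process.env)};`,
      loader: "js",
    }));
  },
};

await Bun.build({
  entrypoints: ["./index.ts"],
  outdir: "./dist",
  plugins: [envConfigPlugin],
});
```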
Bun's bundler implements a `--compile` flag for generating a standalone binary from a TypeScript or JavaScript file.
{% codetabs %}
```bash
$ bun build ./cli.ts --compile --outfile mycli
```
```ts#cli.ts
console.log("Hello world!");
```
{% /codetabs %}
This bundles `cli.ts` into an executable that can be executed directly:
```
$ ./mycli
Hello world!
```
All imported files and packages are bundled into the executable, along with a copy of the Bun runtime. All built-in Bun and Node.js APIs are supported.
## Cross-compile to other platforms
The `--target` flag lets you compile your standalone executable for a different operating system, architecture, or version of Bun than the machine you're running `bun build` on.
To build for Linux x64 (most servers):
```sh
bun build --compile --target=bun-linux-x64 ./index.ts --outfile myapp
# To support CPUs from before 2013, use the baseline version (nehalem)
bun build --compile --target=bun-linux-x64-baseline ./index.ts --outfile myapp
# To explicitly only support CPUs from 2013 and later, use the modern version (haswell)
# modern is faster, but baseline is more compatible.
bun build --compile --target=bun-linux-x64-modern ./index.ts --outfile myapp
```
To build for Linux ARM64 (e.g. Graviton or Raspberry Pi):
```sh
# Note: the default architecture is x64 if no architecture is specified.
bun build --compile --target=bun-linux-arm64 ./index.ts --outfile myapp
```
To build for Windows x64:
```sh
bun build --compile --target=bun-windows-x64 ./path/to/my/app.ts --outfile myapp
# To support CPUs from before 2013, use the baseline version (nehalem)
bun build --compile --target=bun-windows-x64-baseline ./path/to/my/app.ts --outfile myapp
# To explicitly only support CPUs from 2013 and later, use the modern version (haswell)
bun build --compile --target=bun-windows-x64-modern ./path/to/my/app.ts --outfile myapp
# note: if no .exe extension is provided, Bun will automatically add it for Windows executables
```
To build for macOS arm64:
```sh
bun build --compile --target=bun-darwin-arm64 ./path/to/my/app.ts --outfile myapp
```
To build for macOS x64:
```sh
bun build --compile --target=bun-darwin-x64 ./path/to/my/app.ts --outfile myapp
```
#### Supported targets
The order of the segments in the `--target` value does not matter, as long as they're delimited by `-`.
| --target | Operating System | Architecture | Modern | Baseline | Libc |
On x64 platforms, Bun uses SIMD optimizations which require a modern CPU supporting AVX2 instructions. The `-baseline` build of Bun is for older CPUs that don't support these optimizations. Normally, when you install Bun we automatically detect which version to use but this can be harder to do when cross-compiling since you might not know the target CPU. You usually don't need to worry about it on Darwin x64, but it is relevant for Windows x64 and Linux x64. If you or your users see `"Illegal instruction"` errors, you might need to use the baseline version.
## Build-time constants
Use the `--define` flag to inject build-time constants into your executable, such as version numbers, build timestamps, or configuration values:
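As a sketch (the constant names here are hypothetical, and string values must be JSON-encoded), the same constants can also be declared through the `define` option of the JS API, or passed on the CLI as repeated `--define NAME=value` flags:
```ts
await Bun.build({
  entrypoints: ["./app.ts"],
  outdir: "./dist",
  target: "bun",
  define: {
    // Injected as constants at build time; unused branches can then be eliminated.
    BUILD_VERSION: JSON.stringify("1.2.3"),
    BUILD_TIME: JSON.stringify(new Date().toISOString()),
  },
});
```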
These constants are embedded directly into your compiled binary at build time, providing zero runtime overhead and enabling dead code elimination optimizations.
{% callout type="info" %}
For comprehensive examples and advanced patterns, see the [Build-time constants guide](/guides/runtime/build-time-constants).
{% /callout %}
## Deploying to production
Compiled executables reduce memory usage and improve Bun's start time.
Normally, Bun reads and transpiles JavaScript and TypeScript files on `import` and `require`. This is part of what makes so much of Bun "just work", but it's not free. It costs time and memory to read files from disk, resolve file paths, parse, transpile, and print source code.
With compiled executables, you can move that cost from runtime to build-time.
When deploying to production, we recommend the following:
```sh
bun build --compile --minify --sourcemap ./path/to/my/app.ts --outfile myapp
```
### Bytecode compilation
To improve startup time, enable bytecode compilation:
```sh
bun build --compile --minify --sourcemap --bytecode ./path/to/my/app.ts --outfile myapp
```
Using bytecode compilation, `tsc` starts 2x faster.
Bytecode compilation moves parsing overhead for large input files from runtime to bundle time. Your app starts faster, in exchange for making the `bun build` command a little slower. It doesn't obscure source code.
**Experimental:** Bytecode compilation is an experimental feature introduced in Bun v1.1.30. Only `cjs` format is supported (which means no top-level-await). Let us know if you run into any issues!
### What do these flags do?
The `--minify` argument optimizes the size of the transpiled output code. If you have a large application, this can save megabytes of space. For smaller applications, it might still improve start time a little.
The `--sourcemap` argument embeds a sourcemap compressed with zstd, so that errors & stacktraces point to their original locations instead of the transpiled location. Bun will automatically decompress & resolve the sourcemap when an error occurs.
The `--bytecode` argument enables bytecode compilation. Every time you run JavaScript code in Bun, JavaScriptCore (the engine) will compile your source code into bytecode. We can move this parsing work from runtime to bundle time, saving you startup time.
## Embedding runtime arguments
**`--compile-exec-argv="args"`** - Embed runtime arguments that are available via `process.execArgv`:
```bash
bun build --compile --compile-exec-argv="--smol --user-agent=MyBot" ./app.ts --outfile myapp
```
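Inside the compiled app, the embedded arguments are then visible at runtime (a small sketch; the output mirrors whatever you embedded):
```ts
// myapp's entrypoint: the arguments embedded at build time show up here.
console.log(process.execArgv); // ["--smol", "--user-agent=MyBot"]
```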
## Act as the Bun CLI
You can run a standalone executable as if it were the `bun` CLI itself by setting the `BUN_BE_BUN=1` environment variable. When this variable is set, the executable will ignore its bundled entrypoint and instead expose all the features of Bun's CLI.
For example, consider an executable compiled from a simple script:
```sh
$ cat such-bun.js
console.log("you shouldn't see this");
$ bun build --compile ./such-bun.js
[3ms] bundle 1 modules
[89ms] compile such-bun
```
Normally, running `./such-bun` with arguments would execute the script. However, with the `BUN_BE_BUN=1` environment variable, it acts just like the `bun` binary:
```sh
# Executable runs its own entrypoint by default
$ ./such-bun install
you shouldn't see this
# With the env var, the executable acts like the `bun` CLI
$ BUN_BE_BUN=1 ./such-bun install
bun install v1.2.16-canary.1 (1d1db811)
Checked 63 installs across 64 packages (no changes) [5.00ms]
```
This is useful for building CLI tools on top of Bun that may need to install packages, bundle dependencies, run different or local files and more without needing to download a separate binary or install bun.
## Full-stack executables
{% note %}
New in Bun v1.2.17
{% /note %}
Bun's `--compile` flag can create standalone executables that contain both server and client code, making it ideal for full-stack applications. When you import an HTML file in your server code, Bun automatically bundles all frontend assets (JavaScript, CSS, etc.) and embeds them into the executable. When Bun sees the HTML import on the server, it kicks off a frontend build process to bundle JavaScript, CSS, and other assets.
{% codetabs %}
```ts#server.ts
import index from "./index.html";
// ... (the rest of the server setup passes `index` to Bun.serve)
console.log(`Server running at http://localhost:${server.port}`);
```
```html#index.html
<!DOCTYPE html>
<html>
  <head>
    <title>My App</title>
    <link rel="stylesheet" href="./styles.css">
  </head>
  <body>
    <h1>Hello World</h1>
    <script src="./app.js"></script>
  </body>
</html>
```
```js#app.js
console.log("Hello from the client!");
```
```css#styles.css
body {
  background-color: #f0f0f0;
}
```
{% /codetabs %}
To build this into a single executable:
```sh
bun build --compile ./server.ts --outfile myapp
```
This creates a self-contained binary that includes:
- Your server code
- The Bun runtime
- All frontend assets (HTML, CSS, JavaScript)
- Any npm packages used by your server
The result is a single file that can be deployed anywhere without needing Node.js, Bun, or any dependencies installed. Just run:
```sh
./myapp
```
Bun automatically handles serving the frontend assets with proper MIME types and cache headers. The HTML import is replaced with a manifest object that `Bun.serve` uses to efficiently serve pre-bundled assets.
For more details on building full-stack applications with Bun, see the [full-stack guide](/docs/bundler/fullstack).
## Worker
To use workers in a standalone executable, add the worker's entrypoint to the CLI arguments:
```sh
$ bun build --compile ./index.ts ./my-worker.ts --outfile myapp
```
Then, reference the worker in your code:
```ts
console.log("Hello from Bun!");
// Any of these will work:
new Worker("./my-worker.ts");
new Worker(new URL("./my-worker.ts", import.meta.url));
new Worker(new URL("./my-worker.ts", import.meta.url).href);
```
As of Bun v1.1.25, when you add multiple entrypoints to a standalone executable, they will be bundled separately into the executable.
In the future, we may automatically detect usages of statically-known paths in `new Worker(path)` and then bundle those into the executable, but for now, you'll need to add it to the shell command manually like the above example.
If you use a relative path to a file not included in the standalone executable, it will attempt to load that path from disk relative to the current working directory of the process (and then error if it doesn't exist).
## SQLite
You can use `bun:sqlite` imports with `bun build --compile`.
By default, the database is resolved relative to the current working directory of the process.
```js
import db from "./my.db" with { type: "sqlite" };
console.log(db.query("select * from users LIMIT 1").get());
```
That means if the executable is located at `/usr/bin/hello` and the user's terminal is in `/home/me/Desktop`, it will look for `/home/me/Desktop/my.db`.
```
$ cd /home/me/Desktop
$ ./hello
```
## Embed assets & files
Standalone executables support embedding files.
To embed files into an executable with `bun build --compile`, import the file in your code.
```ts
// this becomes an internal file path
import icon from "./icon.png" with { type: "file" };
import { file } from "bun";
export default {
  fetch(req) {
    // Embedded files can be streamed from Response objects
    return new Response(file(icon));
  },
};
```
Embedded files can be read using `Bun.file`'s functions or the Node.js `fs.readFile` function (in `"node:fs"`).
For example, to read the contents of the embedded file:
```js
import icon from "./icon.png" with { type: "file" };
import { file } from "bun";
const bytes = await file(icon).arrayBuffer();
// await fs.promises.readFile(icon)
// fs.readFileSync(icon)
```
### Embed SQLite databases
If your application wants to embed a SQLite database, set the `type` import attribute to `"sqlite"` and the `embed` attribute to `"true"`.
```js
import myEmbeddedDb from "./my.db" with { type: "sqlite", embed: "true" };
console.log(myEmbeddedDb.query("select * from users LIMIT 1").get());
```
This database is read-write, but all changes are lost when the executable exits (since it's stored in memory).
### Embed N-API Addons
As of Bun v1.0.23, you can embed `.node` files into executables.
```js
const addon = require("./addon.node");
console.log(addon.hello());
```
Unfortunately, if you're using `@mapbox/node-pre-gyp` or other similar tools, you'll need to make sure the `.node` file is directly required or it won't bundle correctly.
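Whether the addon gets embedded depends on the bundler being able to see the `.node` path statically. A sketch of the difference (the paths are illustrative):
```ts
// Embedded: the .node file is required directly with a static path.
const addon = require("./build/Release/addon.node");

// Not embedded: a path computed at runtime can't be traced by the bundler.
// const addon = require(resolvePrebuildPath());

console.log(addon.hello());
```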
### Embed directories
To embed a directory with `bun build --compile`, use a shell glob in your `bun build` command:
```sh
$ bun build --compile ./index.ts ./public/**/*.png
```
Then, you can reference the files in your code:
```ts
import icon from "./public/assets/icon.png" with { type: "file" };
import { file } from "bun";
export default {
  fetch(req) {
    // Embedded files can be streamed from Response objects
    return new Response(file(icon));
  },
};
```
This is honestly a workaround, and we expect to improve this in the future with a more direct API.
### Listing embedded files
To get a list of all embedded files, use `Bun.embeddedFiles`:
`Bun.embeddedFiles` returns an array of `Blob` objects which you can use to get the size, contents, and other properties of the files.
```ts
embeddedFiles: Blob[]
```
The list of embedded files excludes bundled source code like `.ts` and `.js` files.
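For example, you can iterate over the embedded assets and inspect them like any other `Blob` (a minimal sketch; these blobs expose a `name` along with the usual `Blob` properties like `size`):
```ts
import { embeddedFiles } from "bun";

for (const blob of embeddedFiles) {
  // e.g. "icon-ab12cd34.png", 4096
  console.log(blob.name, blob.size);
}
```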
#### Content hash
By default, embedded files have a content hash appended to their name. This is useful for situations where you want to serve the file from a URL or CDN and have fewer cache invalidation issues. But sometimes this is unexpected, and you might want the original name instead.
To disable the content hash, pass `--asset-naming` to `bun build --compile` like this:
```sh
$ bun build --compile --asset-naming="[name].[ext]" ./index.ts
```
## Minification
To trim down the size of the executable a little, pass `--minify` to `bun build --compile`. This uses Bun's minifier to reduce the code size. Overall though, Bun's binary is still way too big and we need to make it smaller.
## Using Bun.build() API
You can also generate standalone executables using the `Bun.build()` JavaScript API. This is useful when you need programmatic control over the build process.
### Basic usage
```js
await Bun.build({
  entrypoints: ["./app.ts"],
  outdir: "./dist",
  compile: {
    target: "bun-windows-x64",
    outfile: "myapp.exe",
  },
});
```
### Windows metadata with Bun.build()
When targeting Windows, you can specify metadata through the `windows` object:
```js
await Bun.build({
  entrypoints: ["./app.ts"],
  outdir: "./dist",
  compile: {
    target: "bun-windows-x64",
    outfile: "myapp.exe",
    windows: {
      title: "My Application",
      publisher: "My Company Inc",
      version: "1.2.3.4",
      description: "A powerful application built with Bun",
    },
  },
});
```
## Windows-specific flags
When compiling a standalone executable for Windows, there are several platform-specific options that can be used to customize the generated `.exe` file:
### Visual customization
- `--windows-icon=path/to/icon.ico` - Set the executable file icon
- `--windows-hide-console` - Disable the background terminal window (useful for GUI applications)
### Metadata customization
You can embed version information and other metadata into your Windows executable:
- `--windows-title <STR>` - Set the product name (appears in file properties)
- `--windows-publisher <STR>` - Set the company name
- `--windows-version <STR>` - Set the version number (e.g. "1.2.3.4")
- `--windows-description <STR>` - Set the file description
- `--windows-copyright <STR>` - Set the copyright information
#### Example with all metadata flags:
```sh
bun build --compile ./app.ts \
  --outfile myapp.exe \
  --windows-title "My Application" \
  --windows-publisher "My Company Inc" \
  --windows-version "1.2.3.4" \
  --windows-description "A powerful application built with Bun" \
  --windows-copyright "© 2024 My Company Inc"
```
This metadata will be visible in Windows Explorer when viewing the file properties:
1. Right-click the executable in Windows Explorer
2. Select "Properties"
3. Go to the "Details" tab
#### Version string format
The `--windows-version` flag accepts version strings in the following formats:
- `"1"` - Will be normalized to "1.0.0.0"
- `"1.2"` - Will be normalized to "1.2.0.0"
- `"1.2.3"` - Will be normalized to "1.2.3.0"
- `"1.2.3.4"` - Full version format
Each version component must be a number between 0 and 65535.
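A small sketch of the normalization rule described above (not Bun's internal implementation):
```ts
function normalizeWindowsVersion(version: string): string {
  const parts = version.split(".").map(Number);
  if (
    parts.length > 4 ||
    parts.some(n => !Number.isInteger(n) || n < 0 || n > 65535)
  ) {
    throw new Error(`Invalid Windows version string: ${version}`);
  }
  // Pad to four components: "1.2" -> "1.2.0.0"
  while (parts.length < 4) parts.push(0);
  return parts.join(".");
}
```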
{% callout %}
These flags currently cannot be used when cross-compiling because they depend on Windows APIs. They are only available when building on Windows itself.
{% /callout %}
## Code signing on macOS
To codesign a standalone executable on macOS (which fixes Gatekeeper warnings), use the `codesign` command.
To get started, import HTML files and pass them to the `routes` option in `Bun.serve()`.
```ts
import { sql, serve } from "bun";
import dashboard from "./dashboard.html";
import homepage from "./index.html";

const server = serve({
  routes: {
    // ** HTML imports **
    // Bundle & route index.html to "/". This uses HTMLRewriter to scan the HTML for `<script>` and `<link>` tags, runs Bun's JavaScript & CSS bundler on them, transpiles any TypeScript, JSX, and TSX, downlevels CSS with Bun's CSS parser and serves the result.
    "/": homepage,
    // Bundle & route dashboard.html to "/dashboard"
    "/dashboard": dashboard,

    // ** API endpoints ** (Bun v1.2.3+ required)
    "/api/users": {
      async GET(req) {
        const users = await sql`SELECT * FROM users`;
        return Response.json(users);
      },
      async POST(req) {
        const { name, email } = await req.json();
        const [user] =
          await sql`INSERT INTO users (name, email) VALUES (${name}, ${email})`;
        return Response.json(user);
      },
    },
    "/api/users/:id": async req => {
      const { id } = req.params;
      const [user] = await sql`SELECT * FROM users WHERE id = ${id}`;
      return Response.json(user);
    },
  },

  // Enable development mode for:
  // - Detailed error messages
  // - Hot reloading (Bun v1.2.3+ required)
  development: true,

  // Prior to v1.2.3, the `fetch` option was used to handle all API requests. It is now optional.
  // async fetch(req) {
  //   // Return 404 for unmatched routes
  //   return new Response("Not Found", { status: 404 });
  // },
});

console.log(`Listening on ${server.url}`);
```
```bash
$ bun run app.ts
```
## HTML imports are routes
The web starts with HTML, and so does Bun's fullstack dev server.
To specify entrypoints to your frontend, import HTML files into your JavaScript/TypeScript/TSX/JSX files.
```ts
import dashboard from "./dashboard.html";
import homepage from "./index.html";
```
These HTML files can then be used as routes in Bun's dev server by passing them to `Bun.serve()`.
```ts
Bun.serve({
  routes: {
    "/": homepage,
    "/dashboard": dashboard,
  },
  fetch(req) {
    // ... api requests
  },
});
```
When you make a request to `/dashboard` or `/`, Bun automatically bundles the `<script>` and `<link>` tags in the HTML files, exposes them as static routes, and serves the result.
When building locally, enable development mode by setting `development: true` in `Bun.serve()`.
```js-diff
import homepage from "./index.html";
import dashboard from "./dashboard.html";
Bun.serve({
  routes: {
    "/": homepage,
    "/dashboard": dashboard,
  },
+ development: true,
  fetch(req) {
    // ... api requests
  },
});
```
When `development` is `true`, Bun will:
- Include the `SourceMap` header in the response so that devtools can show the original source code
- Disable minification
- Re-bundle assets on each request to a .html file
- Enable hot module reloading (unless `hmr: false` is set)
#### Echo console logs from browser to terminal
Bun.serve() supports echoing console logs from the browser to the terminal.
To enable this, pass `console: true` in the `development` object in `Bun.serve()`.
```ts
import homepage from "./index.html";
Bun.serve({
  // development can also be an object.
  development: {
    // Enable Hot Module Reloading
    hmr: true,
    // Echo console logs from the browser to the terminal
    console: true,
  },
  routes: {
    "/": homepage,
  },
});
```
When `console: true` is set, Bun will stream console logs from the browser to the terminal. This reuses the existing WebSocket connection from HMR to send the logs.
#### Production mode
Hot reloading and `development: true` helps you iterate quickly, but in production, your server should be as fast as possible and have as few external dependencies as possible.
##### Ahead of time bundling (recommended)
As of Bun v1.2.17, you can use `Bun.build` or `bun build` to bundle your full-stack application ahead of time.
```sh
$ bun build --target=bun --production --outdir=dist ./src/index.ts
```
When Bun's bundler sees an HTML import from server-side code, it will bundle the referenced JavaScript/TypeScript/TSX/JSX and CSS files into a manifest object that Bun.serve() can use to serve the assets.
```ts
import { serve } from "bun";
import index from "./index.html";
serve({
routes: { "/": index },
});
```
{% details summary="Internally, the `index` variable is a manifest object that looks something like this" %}
```json
{
  "index": "./index.html",
  "files": [
    {
      "input": "index.html",
      "path": "./index-f2me3qnf.js",
      "loader": "js",
      "isEntry": true,
      "headers": {
        "etag": "eet6gn75",
        "content-type": "text/javascript;charset=utf-8"
      }
    },
    {
      "input": "index.html",
      "path": "./index.html",
      "loader": "html",
      "isEntry": true,
      "headers": {
        "etag": "r9njjakd",
        "content-type": "text/html;charset=utf-8"
      }
    },
    {
      "input": "index.html",
      "path": "./index-gysa5fmk.css",
      "loader": "css",
      "isEntry": true,
      "headers": {
        "etag": "50zb7x61",
        "content-type": "text/css;charset=utf-8"
      }
    },
    {
      "input": "logo.svg",
      "path": "./logo-kygw735p.svg",
      "loader": "file",
      "isEntry": false,
      "headers": {
        "etag": "kygw735p",
        "content-type": "application/octet-stream"
      }
    },
    {
      "input": "react.svg",
      "path": "./react-ck11dneg.svg",
      "loader": "file",
      "isEntry": false,
      "headers": {
        "etag": "ck11dneg",
        "content-type": "application/octet-stream"
      }
    }
  ]
}
```
{% /details %}
##### Runtime bundling
When adding a build step is too complicated, you can set `development: false` in `Bun.serve()` (see the sketch after this list). This will:
- Enable in-memory caching of bundled assets. Bun will bundle assets lazily on the first request to an `.html` file, and cache the result in memory until the server restarts.
- Enable `Cache-Control` and `ETag` headers
- Minify JavaScript/TypeScript/TSX/JSX files
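For example, a production server might look like this (a minimal sketch of the option described above):
```ts
import homepage from "./index.html";

Bun.serve({
  routes: { "/": homepage },
  // Bundle lazily on the first request, cache the result in memory,
  // send Cache-Control/ETag headers, and minify assets.
  development: false,
});
```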
## Plugins
Bun's [bundler plugins](https://bun.com/docs/bundler/plugins) are also supported when bundling static routes.
To configure plugins for `Bun.serve`, add a `plugins` array in the `[serve.static]` section of your `bunfig.toml`.
### Using TailwindCSS in HTML routes
For example, enable TailwindCSS on your routes by installing and adding the `bun-plugin-tailwind` plugin:
```sh
$ bun add bun-plugin-tailwind
```
```toml#bunfig.toml
[serve.static]
plugins = ["bun-plugin-tailwind"]
```
This will allow you to use TailwindCSS utility classes in your HTML and CSS files. All you need to do is import `tailwindcss` somewhere:
```html#index.html
<!doctype html>
<html>
<head>
<title>Home</title>
<link rel="stylesheet" href="tailwindcss" />
</head>
<body>
<!-- the rest of your HTML... -->
</body>
</html>
```
Or in your CSS:
```css#style.css
@import "tailwindcss";
```
### Custom plugins
Any JS file or module which exports a [valid bundler plugin object](https://bun.com/docs/bundler/plugins#usage) (essentially an object with a `name` and `setup` field) can be placed inside the `plugins` array:
```toml#bunfig.toml
[serve.static]
plugins = ["./my-plugin-implementation.ts"]
```
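For reference, such a module might look like this hypothetical `.txt` loader (a minimal sketch; the plugin name and filter are illustrative):
```ts
// my-plugin-implementation.ts
import type { BunPlugin } from "bun";

const myPlugin: BunPlugin = {
  name: "my-txt-loader",
  setup(build) {
    // Load .txt files as default-exported strings.
    build.onLoad({ filter: /\.txt$/ }, async ({ path }) => {
      const text = await Bun.file(path).text();
      return {
        contents: `export default ${JSON.stringify(text)};`,
        loader: "js",
      };
    });
  },
};

export default myPlugin;
```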
Bun will lazily resolve and load each plugin and use them to bundle your routes.
Note: this is currently in `bunfig.toml` to make it possible to know statically which plugins are in use when we eventually integrate this with the `bun build` CLI. These plugins work in `Bun.build()`'s JS API, but are not yet supported in the CLI.
## How this works
Bun uses [`HTMLRewriter`](/docs/api/html-rewriter) to scan for `<script>` and `<link>` tags in HTML files, uses them as entrypoints for [Bun's bundler](/docs/bundler), generates an optimized bundle for the JavaScript/TypeScript/TSX/JSX and CSS files, and serves the result.
1. **`<script>` processing**
- Transpiles TypeScript, JSX, and TSX in `<script>` tags
- Bundles imported dependencies
- Generates sourcemaps for debugging
- Minifies when `development` is not `true` in `Bun.serve()`
2. **`<link>` processing**
- Bundles CSS files referenced in `<link rel="stylesheet">` tags, for example:
```html
<link rel="stylesheet" href="./styles.css" />
```
- Rewrites `url` and asset paths to include content-addressable hashes in URLs
3. **`<img>` & asset processing**
- Links to assets are rewritten to include content-addressable hashes in URLs
- Small assets in CSS files are inlined into `data:` URLs, reducing the total number of HTTP requests sent over the wire
4. **Rewrite HTML**
- Combines all `<script>` tags into a single `<script>` tag with a content-addressable hash in the URL
- Combines all `<link>` tags into a single `<link>` tag with a content-addressable hash in the URL
- Outputs a new HTML file
5. **Serve**
- All the output files from the bundler are exposed as static routes, using the same mechanism internally as when you pass a `Response` object to [`static` in `Bun.serve()`](/docs/api/http#static-routes).
This works similarly to how [`Bun.build` processes HTML files](/docs/bundler/html).
## This is a work in progress
- This doesn't support `bun build` yet, but it will in the future.
Hot Module Replacement (HMR) allows you to update modules in a running
application without needing a full page reload. This preserves the application
state and improves the development experience.
HMR is enabled by default when using Bun's full-stack development server.
## `import.meta.hot` API Reference
Bun implements a client-side HMR API modeled after [Vite's `import.meta.hot` API](https://vitejs.dev/guide/api-hmr.html). Its presence can be checked with `if (import.meta.hot)`, and it is tree-shaken away in production:
```ts
if (import.meta.hot) {
  // HMR APIs are available.
}
```
However, **this check is often not needed** as Bun will dead-code-eliminate
calls to all of the HMR APIs in production builds.
```ts
// This entire function call will be removed in production!
import.meta.hot.dispose(() => {
  console.log("dispose");
});
```
For this to work, Bun forces these APIs to be called without indirection. That means the following do not work:
```ts#invalid-hmr-usage.ts
// INVALID: Assigning `hot` to a variable
const hot = import.meta.hot;
hot.accept();
// INVALID: Assigning `import.meta` to a variable
const meta = import.meta;
meta.hot.accept();
console.log(meta.hot.data);
// INVALID: Passing to a function
doSomething(import.meta.hot.dispose);
// OK: The full phrase "import.meta.hot.<API>" must be called directly:
import.meta.hot.accept();
// OK: `data` can be passed to functions:
doSomething(import.meta.hot.data);
```
{% callout %}
**Note** — The HMR API is still a work in progress. Some features are missing. HMR can be disabled in `Bun.serve` by setting the `development` option to `{ hmr: false }`.
{% /callout %}
`import.meta.hot.accept()` can also be given an array of dependency specifiers along with a callback:
```ts
import.meta.hot.accept(["./foo", "./bar"], newModules => {
  // newModules is an array where each item corresponds to the updated module
  // or undefined if that module had a syntax error
});
```
Indicates that multiple dependencies' modules can be accepted. This variant accepts an array of dependencies, where the callback will receive the updated modules, and `undefined` for any that had errors.
### `import.meta.hot.data`
`import.meta.hot.data` maintains state between module instances during hot
replacement, enabling data transfer from previous to new versions. When
`import.meta.hot.data` is written into, Bun will also mark this module as
capable of self-accepting (equivalent of calling `import.meta.hot.accept()`).
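For example, a module can stash a value on `import.meta.hot.data` so it survives hot updates (a minimal sketch; `count` is just an illustrative property name):
```ts
// Reuse the previous value if this module was hot-replaced, otherwise start at 0.
export const count = (import.meta.hot.data.count ??= 0);
```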
In production, `data` is inlined to be `{}`, meaning it cannot be used as a state holder.
The above pattern is recommended for stateful modules because Bun knows it can minify `{}.prop ??= value` into `value` in production.
### `import.meta.hot.dispose()`
Attaches an on-dispose callback. This is called:
- Just before the module is replaced with another copy (before the next is loaded)
- After the module is detached (removing all imports to this module, see `import.meta.hot.prune()`)
```ts
const sideEffect = setupSideEffect();
import.meta.hot.dispose(() => {
  sideEffect.cleanup();
});
```
This callback is not called on route navigation or when the browser tab closes.
Returning a promise will delay module replacement until the module is disposed.
All dispose callbacks are called in parallel.
### `import.meta.hot.prune()`
Attaches an on-prune callback. This is called when all imports to this module
are removed, but the module was previously loaded.
This can be used to clean up resources that were created when the module was
loaded. Unlike `import.meta.hot.dispose()`, this pairs much better with `accept`
and `data` to manage stateful resources. A full example managing a `WebSocket`:
```ts
import { something } from "./something";
// Initialize or re-use a WebSocket connection
export const ws = (import.meta.hot.data.ws ??= new WebSocket(location.origin));
// If the module's import is removed, clean up the WebSocket connection.
import.meta.hot.prune(() => {
  ws.close();
});
```
If `dispose` was used instead, the WebSocket would close and re-open on every
hot update. Both versions of the code will prevent page reloads when imported
files are updated.
### `import.meta.hot.on()` and `off()`
`on()` and `off()` are used to listen for events from the HMR runtime. Event names carry a prefix (such as `bun:`) so that plugins do not conflict with each other.
```ts
import.meta.hot.on("bun:beforeUpdate", () => {
  console.log("before a hot update");
});
```
When a file is replaced, all of its event listeners are automatically removed.
---
description: Hot Module Replacement (HMR) for Bun's development server
---
Hot Module Replacement (HMR) allows you to update modules in a running application without needing a full page reload. This preserves the application state and improves the development experience.
<Note>HMR is enabled by default when using Bun's full-stack development server.</Note>
## `import.meta.hot` API Reference
Bun implements a client-side HMR API modeled after [Vite's `import.meta.hot` API](https://vite.dev/guide/api-hmr). Its presence can be checked with `if (import.meta.hot)`, and it is tree-shaken away in production.
<Warning>
Bun removes these API calls from production builds, so the full phrase `import.meta.hot.<API>` must be written out directly; assigning `import.meta.hot` (or `import.meta`) to a variable, or passing one of its methods to a function, will not work.
```ts
// OK: The full phrase "import.meta.hot.<API>" must be called directly:
import.meta.hot.accept();
// OK: `data` can be passed to functions:
doSomething(import.meta.hot.data);
```
</Warning>
<Note>
The HMR API is still a work in progress. Some features are missing. HMR can be disabled in `Bun.serve` by setting the `development` option to `{ hmr: false }`.
</Note>
| API | Status | Notes |
| --- | --- | --- |
| `hot.accept()` | ✅ | Indicate that a hot update can be replaced gracefully. |
| `hot.data` | ✅ | Persist data between module evaluations. |
| `hot.dispose()` | ✅ | Add a callback function to run when a module is about to be replaced. |
| `hot.invalidate()` | ❌ | |
| `hot.on()` | ✅ | Attach an event listener. |
| `hot.off()` | ✅ | Remove an event listener from `on`. |
| `hot.send()` | ❌ | |
| `hot.prune()` | 🚧 | NOTE: Callback is currently never called. |
| `hot.decline()` | ✅ | No-op to match Vite's `import.meta.hot`. |
## import.meta.hot.accept()
The `accept()` method indicates that a module can be hot-replaced. When called without arguments, it indicates that this module can be replaced simply by re-evaluating the file. After a hot update, importers of this module will be automatically patched.
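A minimal sketch of the setup described in the next paragraph (the `./foo` module and `getNegativeCount()` helper are taken from that description):
```ts
// index.ts — assumes a sibling module ./foo that exports a stateful getCount()
import { getCount } from "./foo";

// Accept hot updates for this file and everything it imports.
import.meta.hot.accept();

export function getNegativeCount() {
  return -getCount();
}
```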
This creates a hot-reloading boundary for all of the files that `index.ts` imports. That means whenever `foo.ts` or any of its dependencies are saved, the update will bubble up to `index.ts`, which will re-evaluate. Files that import `index.ts` will then be patched to import the new version of `getNegativeCount()`. If only `index.ts` is updated, only that one file will be re-evaluated, and the counter in `foo.ts` is reused.
This may be used in combination with `import.meta.hot.data` to transfer state from the previous module to the new one.
<Info>
When no modules call `import.meta.hot.accept()` (and there isn't React Fast Refresh or a plugin calling it for you),
the page will reload when the file updates, and a console warning shows which files were invalidated. This warning is
safe to ignore if it makes more sense to rely on full page reloads.
</Info>
### With callback
When provided one callback, `import.meta.hot.accept` will function how it does in Vite. Instead of patching the importers of this module, it will call the callback with the new module.
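A minimal sketch of the callback form (the logged message is illustrative):
```ts
import.meta.hot.accept(newModule => {
  if (newModule) {
    // newModule is undefined when a syntax error prevented the update.
    console.log("module replaced:", newModule);
  }
});
```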
The same method also accepts an array of dependency specifiers along with a callback:
```ts
import.meta.hot.accept(["./foo", "./bar"], newModules => {
  // newModules is an array where each item corresponds to the updated module
  // or undefined if that module had a syntax error
});
```
Indicates that multiple dependencies' modules can be accepted. This variant accepts an array of dependencies, where the callback will receive the updated modules, and `undefined` for any that had errors.
## import.meta.hot.data
`import.meta.hot.data` maintains state between module instances during hot replacement, enabling data transfer from previous to new versions. When `import.meta.hot.data` is written into, Bun will also mark this module as capable of self-accepting (equivalent of calling `import.meta.hot.accept()`).
## import.meta.hot.dispose()
Attaches an on-dispose callback, which runs just before the module is replaced with another copy, or after the module is detached (see `import.meta.hot.prune()`).
<Warning>This callback is not called on route navigation or when the browser tab closes.</Warning>
Returning a promise will delay module replacement until the module is disposed. All dispose callbacks are called in parallel.
## import.meta.hot.prune()
Attaches an on-prune callback. This is called when all imports to this module are removed, but the module was previously loaded.
This can be used to clean up resources that were created when the module was loaded. Unlike `import.meta.hot.dispose()`, this pairs much better with `accept` and `data` to manage stateful resources. A full example managing a WebSocket:
```ts
import { something } from "./something";
// Initialize or re-use a WebSocket connection
export const ws = (import.meta.hot.data.ws ??= new WebSocket(location.origin));
// If the module's import is removed, clean up the WebSocket connection.
import.meta.hot.prune(() => {
  ws.close();
});
```
<Info>
If `dispose` was used instead, the WebSocket would close and re-open on every hot update. Both versions of the code
will prevent page reloads when imported files are updated.
</Info>
## import.meta.hot.on() and off()
`on()` and `off()` are used to listen for events from the HMR runtime. Event names carry a prefix (such as `bun:`) so that plugins do not conflict with each other.
| Event | Description |
| --- | --- |
| `bun:beforeUpdate` | before a hot update is applied. |
| `bun:afterUpdate` | after a hot update is applied. |
| `bun:beforeFullReload` | before a full page reload happens. |
| `bun:beforePrune` | before prune callbacks are called. |
| `bun:invalidate` | when a module is invalidated with `import.meta.hot.invalidate()` |
| `bun:error` | when a build or runtime error occurs |
| `bun:ws:disconnect` | when the HMR WebSocket connection is lost. This can indicate the development server is offline. |
| `bun:ws:connect` | when the HMR WebSocket connects or re-connects. |
<Note>For compatibility with Vite, the above events are also available via `vite:*` prefix instead of `bun:*`.</Note>
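For example, a listener registered with `on()` can later be removed with `off()` (a minimal sketch; the callback name is illustrative):
```ts
function logUpdate() {
  console.log("hot update applied");
}

// Start listening for HMR runtime events...
import.meta.hot.on("bun:afterUpdate", logUpdate);

// ...and stop listening later.
import.meta.hot.off("bun:afterUpdate", logUpdate);
```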