Compare commits

...

114 Commits

Author SHA1 Message Date
Dylan Conway
302b1e2b85 update 2025-09-30 19:50:11 -07:00
Dylan Conway
a58e4c0c01 force heap allocators 2025-09-30 19:40:31 -07:00
Meghan Denny
f19a1cc3a5 test: break up node-http.test.ts (#23125) 2025-09-30 17:25:17 -07:00
taylor.fish
eac82e2184 Fix memory.deinit; handle more types (#23032)
* Fix `memory.deinit`; the previous PR erroneously added non-comptime
code in a comptime expression, which would result in a compile error
when trying to deinit an optional
* Handle arrays (distinct from slices)
* Handle error unions
* Disallow untagged unions, as this is probably an error; usually a
manual deinit impl is needed if untagged unions are involved
* Make switch exhaustive so we know we're not missing other types
(thanks @pfgithub)

(For internal tracking: fixes STAB-1295)
2025-09-30 16:14:55 -07:00
Marko Vejnovic
dac1ee73c6 fix: Ordering Semantics (#23144)
Co-authored-by: taylor.fish <contact@taylor.fish>
2025-09-30 16:13:23 -07:00
Jarred Sumner
9aa3c7863d Faster linux zig build (#23075)
### What does this PR do?

### How did you verify your code works?
2025-09-30 14:59:06 -07:00
Ciro Spaciari
b88cecfe66 fix(redis) refactor valkey connection status and lifecycle handling (#23141)
### What does this PR do?
Status should now reflect the connection status, making sure the JS value stays alive long enough. Redis still needs a refactor follow-up.
### How did you verify your code works?
Run valkey.test.ts duplicate tests 1k times
2025-09-30 13:32:03 -07:00
Alistair Smith
b613790451 Many more Redis commands (#23116) 2025-09-30 13:27:25 -07:00
Ciro Spaciari
5fe3e3774c fix(Bun.SQL) fix IN, UPDATE support in helpers, and fix JSONB/JSON serialization (#22700)
### What does this PR do?
Fixes https://github.com/oven-sh/bun/issues/20669
Fixes https://github.com/oven-sh/bun/issues/18775
Fixes https://github.com/oven-sh/bun/issues/22156
Fixes https://github.com/oven-sh/bun/issues/22164
Fixes https://github.com/oven-sh/bun/issues/18254
Fixes https://github.com/oven-sh/bun/issues/21267
Fixes https://github.com/oven-sh/bun/issues/1317
Fixes https://github.com/oven-sh/bun/pull/22700
Partially Fixes https://github.com/oven-sh/bun/issues/22757 (sqlite pending; needs a follow-up and probably @alii's help)
### How did you verify your code works?
Tests

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-30 13:26:15 -07:00
robobun
c5005a37d7 Fix --tolerate-republish flag in bun publish (continues PR #22107) (#22381)
## Summary
This PR continues the work from #22107 to fix the `--tolerate-republish`
flag implementation in `bun publish`.

### Changes:
- **Pre-check version existence**: Before attempting to publish with
`--tolerate-republish`, check if the version already exists on the
registry
- **Improved version checking**: Use a GET request to the package endpoint instead of HEAD, then parse the JSON response to check whether the specific version exists (sketched below)
- **Correct output stream**: Output warning to stderr instead of stdout
for consistency with test expectations
- **Better error handling**: Update test to accept both 403 and 409 HTTP
error codes for duplicate publish attempts
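
For illustration, the pre-check might look roughly like this (a sketch only; the function and variable names here are hypothetical, not the actual implementation):

```ts
// Hypothetical sketch of the GET-based version pre-check described above.
async function versionExists(registry: string, name: string, version: string): Promise<boolean> {
  // GET the full package manifest rather than issuing a HEAD request.
  const res = await fetch(`${registry}/${encodeURIComponent(name)}`);
  if (!res.ok) return false;
  const manifest = await res.json();
  // The publish is a republish if the version already appears in the manifest.
  return manifest.versions?.[version] !== undefined;
}
```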

### Test fixes:
The tests were failing because:
1. The mock registry returns 409 Conflict (not 403) for duplicate
packages
2. The warning message wasn't appearing in stderr as expected
3. The version check was using a HEAD request, which doesn't reliably
return version info

## Test plan
- [x] Fixed failing tests for `--tolerate-republish` functionality
- [x] Tests now properly handle both 403 and 409 error responses
- [x] Warning messages appear correctly in stderr

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2025-09-30 13:25:50 -07:00
Zack Radisic
a89e61fcaa ssg 3 (#22138)
### What does this PR do?

Fixes a crash related to the dev server overwriting the uws user context
pointer when setting abort callback.

Adds support for `return new Response(<jsx />, { ... })`, `return Response.render(...)`, and `return Response.redirect(...)`:
- Created a `SSRResponse` class to handle this (see `JSBakeResponse.{h,cpp}`)
- `SSRResponse` is designed to "fake" being a React component
- This is done in JSBakeResponse::create inside of src/bun.js/bindings/JSBakeResponse.cpp
- `src/js/builtins/BakeSSRResponse.ts` defines a `wrapComponent` function which wraps the passed-in component (when doing `new Response(<jsx />, ...)`); it does this to throw an error (in the redirect()/render() case) or return the component
- Created a `BakeAdditionsToGlobal` struct which contains some properties needed for this
- Added some of the properties we need to fake to BunBuiltinNames.h (e.g. `$$typeof`); the rationale is that we couldn't use `structure->addPropertyTransition` because JSBakeResponse is not a final JSObject
- When bundling for Bake on the server side, the bundler rewrites `Response -> Bun.SSRResponse` (see `src/ast/P.zig` and `src/ast/visitExpr.zig`)
- Created a new WebCore body variant (`Render: struct { path: []const u8 }`)
  - Created when `return Response.render(...)`
  - When handled, it re-invokes the dev server to render the new path

Enables server-side sourcemaps for the dev server:
- New source providers for the server side (`DevServerSourceProvider.{h,cpp}`)
- IncrementalGraph and SourceMapStore are updated to support this

There are numerous other changes:
- allow `app` configuration from Bun.serve(...)
- fix errors stopping the dev server
- fix a use-after-free in RequestContext.finishRunningErrorHandler
- Request.cookies
- Make `"use client";` components work
- Fix some bugs using `require(...)` in the dev server
- Fix catch-all routes not working in the dev server
- Updates `findSourceMappingURL(...)` to use `std.mem.lastIndexOf(...)` because the sourcemap that should be used is the last one anyway

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Alistair Smith <hi@alistair.sh>
2025-09-30 05:26:32 -07:00
robobun
2b7fc18092 fix(lint): resolve no-unused-expressions errors (#23127)
## Summary

Fixes all oxlint `no-unused-expressions` violations across the codebase
by:
- Adding an oxlint override to disable the rule for
`src/js/builtins/**`, where special syntax markers like `$getter`,
`$constructor`, etc. are intentionally used as standalone expressions
- Converting short-circuit expressions (`condition && fn()`) to proper
if statements for improved code clarity
- Wrapping intentional property access side effects (e.g.,
`this.stdio;`, `err.stack;`) with the `void` operator
- Converting ternary expressions used for control flow to if/else
statements

## Test plan

- [x] `bun lint` passes with no errors

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-30 04:45:34 -07:00
robobun
e3a1ae09f3 docs: add zstd compression documentation for v1.2.14 (#23095)
## Summary
- Documents zstd compression features introduced in Bun v1.2.14
- Adds missing API documentation for zstd utilities

## Changes
- Updated `docs/api/fetch.md` to include zstd in Accept-Encoding
examples and note automatic decompression support
- Added `Bun.zstdCompress()/zstdCompressSync()` and
`Bun.zstdDecompress()/zstdDecompressSync()` documentation to
`docs/api/utils.md`
- Documented compression levels (1-22) with concise usage examples
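
A minimal sketch of the documented utilities (assuming the `{ level }` option shape described above):

```ts
const data = Buffer.from("hello world".repeat(100));

// Compression levels range from 1 (fastest) to 22 (smallest).
const compressed = Bun.zstdCompressSync(data, { level: 19 });
const restored = Bun.zstdDecompressSync(compressed);

console.log(new TextDecoder().decode(restored) === data.toString()); // true

// Bun.zstdCompress / Bun.zstdDecompress are the Promise-returning variants.
```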

## Note
HTTP/2 features (`maxSendHeaderBlockLength` and `setNextStreamID`) were
not added per request to avoid updating nodejs-apis.md.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-09-30 00:26:02 -07:00
Dylan Conway
25e156c95b fix(pnpm migration): correctly resolve link: dependencies on the root package (#23111)
### What does this PR do?
Two things:
- we weren't adding the root package to the `pkg_map`.
- `link:` dependency paths in `"snapshots"` weren't being joined with
the top level dir.
### How did you verify your code works?
Manually and added a test.
2025-09-30 00:10:15 -07:00
Jarred Sumner
badcfe8a14 fmt 2025-09-29 23:36:44 -07:00
robobun
8d7ca660ef docs: Add documentation for Bun v1.2.23 features (#23080)
## Summary
This PR adds concise documentation for features introduced in Bun
v1.2.23 that were missing from the docs.

## Added Documentation
- **pnpm migration**: Automatic `pnpm-lock.yaml` to `bun.lock` migration
- **Platform filtering**: `--cpu` and `--os` flags for cross-platform
dependency installation
- **Test improvements**: Chaining test qualifiers (e.g.,
`.failing.each`)
- **CLI**: `bun feedback` command
- **TLS/SSL**: `--use-system-ca` flag and `NODE_USE_SYSTEM_CA`
environment variable
- **Node.js compat**: `process.report.getReport()` Windows support
- **Bundler**: New `jsx` configuration object in `Bun.build`
- **SQL**: `sql.array` helper for PostgreSQL arrays

## Test plan
Documentation changes only - reviewed for accuracy against the v1.2.23
release notes.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-29 23:34:45 -07:00
robobun
933c6fd260 docs: add missing v1.2.20 features documentation (#23086)
## Summary
- Document automatic yarn.lock migration in lockfile docs
- Add --recursive flag documentation for bun outdated/update commands  
- Document Windows long path support in installation docs

## Test plan
Documentation only - no code changes to test.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-29 23:34:09 -07:00
robobun
f9a69773ab docs: add missing v1.2.19 features to documentation (#23087)
## Summary
This PR adds documentation for features introduced in Bun v1.2.19 that
were missing from the docs.

## Features Documented

### `.npmrc` Options
- `link-workspace-packages`: Controls how workspace packages are
installed when available locally
- `save-exact`: Always saves exact versions without the `^` prefix

### Node.js API Enhancements
- `vm.constants.DONT_CONTEXTIFY`: Support for creating a context whose
`globalThis` behaves like an ordinary `globalThis` rather than a
contextified proxy (see the sketch below)
- `os.networkInterfaces()`: Now returns `scopeid` property for IPv6
interfaces (not `scope_id`)
- `process.features.typescript`: Returns `"transform"`
- `process.features.require_module`: Returns `true`
- `process.features.openssl_is_boringssl`: Returns `true`
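
A minimal sketch of the `DONT_CONTEXTIFY` behavior (based on the Node.js API; the returned context's global acts like an ordinary object):

```ts
import vm from "node:vm";

// The context's globalThis behaves like a regular object rather than a
// contextified proxy, so properties set inside the context are visible here.
const context: any = vm.createContext(vm.constants.DONT_CONTEXTIFY);
vm.runInContext("globalThis.answer = 42;", context);
console.log(context.answer); // 42
```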

### `fs.glob` Enhancements
- Support for array of patterns as first argument
- New `exclude`/`ignore` option to filter results
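
A short sketch of the enhancements listed above (the callback form of `exclude` is an assumption based on the Node.js API):

```ts
import { glob } from "node:fs/promises";

// An array of patterns, filtered by an exclude callback.
for await (const entry of glob(["src/**/*.ts", "scripts/**/*.ts"], {
  exclude: (name) => name.endsWith(".d.ts"),
})) {
  console.log(entry);
}
```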

### `node:module` API
- `SourceMap` class for parsing and inspecting sourcemaps
- `findSourceMap()` function to locate sourcemaps

## Changes Made
- `/docs/install/npmrc.md`: Added `link-workspace-packages` and
`save-exact` options
- `/docs/runtime/nodejs-apis.md`: Added Node.js compatibility features
- `/docs/api/glob.md`: Added Node.js fs.glob compatibility section

Documentation was kept concise with high information density as
requested.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-29 23:33:07 -07:00
robobun
e88d151241 docs: add v1.2.17 features to documentation (#23089)
## Summary

Updates documentation to include features released in Bun v1.2.17 that
were missing from the docs:

- HTML imports ahead-of-time bundling (already documented)
- columnTypes & declaredTypes in bun:sqlite (already documented, just not visible in the diff as it was recent)
- bun info command (already documented)
- Node.js compatibility improvements (partially documented, added missing details)
- --unhandled-rejections flag (added)
- CLAUDE.md generation in bun init (added)

## Changes

- Document `--unhandled-rejections` CLI flag for configuring promise
rejection handling
- Document `CLAUDE.md` file generation in `bun init` when Claude CLI is
detected
- Update Node.js compatibility notes with recent improvements:
  - `child_process.fork()` execArgv support
  - Zstandard compression in `node:zlib`
  - Optional options parameter in `fs.glob`
  - `tls.getCACertificates()` implementation

## Test plan

Documentation changes only - no code changes to test.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-29 23:32:35 -07:00
robobun
debd9cc35d docs: add missing v1.2.16 feature documentation (#23090)
## Summary
- Adds documentation for catalog dependency support in `bun outdated`
command

## Changes
- **`docs/cli/outdated.md`**: Added catalog dependency support section
with example output

## Test plan
- [x] Documentation is minimal and concise
- [x] Example shows clear output format with "(catalog)" labels

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-29 23:30:13 -07:00
robobun
e0b6183571 docs: update for Bun v1.2.15 features (#23091)
## Summary
- Document missing features from Bun v1.2.15 release
- Add minimal, concise documentation updates with high information
density

## Changes
- Add `BUN_OPTIONS` environment variable to docs/runtime/env.md
- Document Cursor AI rules generation in `bun init` (docs/cli/init.md)
- Update Node.js API compatibility status for `Worker.getHeapSnapshot`
and `createHistogram` (docs/runtime/nodejs-apis.md)
- Add concise `vm.SourceTextModule` usage example

## Test plan
- [x] Documentation builds correctly
- [x] All links are valid
- [x] Code examples are accurate

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-29 23:28:11 -07:00
robobun
8d2953c097 docs: add v1.2.18 features documentation (#23088)
## Summary
- Added documentation for all new features from Bun v1.2.18 release
- Updates are minimal and concise with high information density
- Includes relevant code examples where helpful

## Updates made

### New features documented:
- ReadableStream convenience methods (`.text()`, `.json()`, `.bytes()`,
`.blob()`)
- WebSocket client permessage-deflate compression support
- NODE_PATH environment variable support for bundler
- bun test exits with code 1 when no tests match filter
- Math.sumPrecise for high-precision floating-point summation
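
A quick sketch of the stream helpers and `Math.sumPrecise` (shapes per the release notes above):

```ts
// Convenience methods read the whole stream directly:
const stream = new Response("hello").body!;
console.log(await stream.text()); // "hello" — .json(), .bytes(), .blob() work likewise

// Math.sumPrecise computes a correctly rounded sum of an iterable of numbers:
console.log(Math.sumPrecise([0.1, 0.2, 0.3])); // 0.6 (plain += yields 0.6000000000000001)
```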

### Version updates:
- Node.js compatibility version updated to v24.3.0
- SQLite version updated to 3.50.2

### Behavior changes:
- fs.glob now matches directories by default (not just files)

## Test plan
- [x] Verified all features are from the v1.2.18 release notes
- [x] Checked documentation follows existing patterns
- [x] Code examples are concise and accurate

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-29 23:23:40 -07:00
Jarred Sumner
057fa31a75 Fix format 2025-09-29 23:22:57 -07:00
Jarred Sumner
9c2590ca07 Update workers.md 2025-09-29 23:08:07 -07:00
robobun
b5a56c183b fix(test): update error message assertion for git clone failure (#23119)
## Summary
- Updated test assertion to match new error message format for git clone
failures

## Details
The error message format changed from:
```
error: "git clone" for "uglify" failed
```

To:
```
error: InstallFailed cloning repository for uglify
```

This appears to be due to changes in how 404s work on the bun.sh domain.

## Test plan
- [x] Ran `bun bd test test/cli/install/bun-install.test.ts -t "should
fail on invalid Git URL"` - passes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-09-29 22:58:24 -07:00
robobun
41be6aeb3c docs: add missing v1.2.21 features to documentation (#23085)
## Summary
- Added documentation for 5 features introduced in Bun v1.2.21 that were
missing from the docs
- Kept updates minimal with high information density as requested

## Changes
- **bun audit filtering options** (`docs/install/audit.md`)
  - `--audit-level=<low|moderate|high|critical>` - filter by severity
  - `--prod` - audit only production dependencies  
  - `--ignore <CVE>` - ignore specific vulnerabilities

- **--compile-exec-argv flag** (`docs/bundler/executables.md`)
  - Embed runtime arguments in compiled executables
  - Arguments available via `process.execArgv`

- **bunx --package/-p flag** (`docs/cli/bunx.md`)
  - Run binaries from specific packages when name differs

- **package.json sideEffects glob patterns** (`docs/bundler/index.md`)
  - Support for `*`, `?`, `**`, `[]`, `{}` patterns

- **--user-agent CLI flag** (`docs/cli/run.md`)
  - Customize User-Agent header for all fetch() requests

## Test plan
- [x] Reviewed all changes match Bun v1.2.21 blog post features
- [x] Verified documentation style is concise with code examples
- [x] Checked no existing documentation was removed

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-09-29 21:54:34 -07:00
robobun
8025fa4046 docs: add missing v1.2.13 features to documentation (#23096)
## Summary
- Added documentation for worker_threads environmentData API and process
'worker' event
- Added documentation for --no-addons CLI flag
- Added documentation for RedisClient getBuffer() method
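
For reference, the `environmentData` API being documented works like this (a minimal sketch using the Node.js `worker_threads` API):

```ts
import { setEnvironmentData, getEnvironmentData } from "node:worker_threads";

// Values set here are cloned into workers spawned afterwards.
setEnvironmentData("config", { verbose: true });
console.log(getEnvironmentData("config")); // { verbose: true }
```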

## Context
These features were released in Bun v1.2.13 but were missing from the
documentation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-09-29 21:53:32 -07:00
robobun
a37b00e477 docs: add v1.2.22 features to documentation (#23083)
## Summary
Adds minimal documentation for features introduced in Bun v1.2.22 that
were previously undocumented.

## Changes
- Add `redis.hget()` example showing direct value return vs `hmget()`
array
- Add WebSocket subprotocol negotiation example with array syntax
- Mark bundler `onEnd` hook as implemented in plugins docs
- Add `bun run --workspaces` flag documentation
- Update `perf_hooks.monitorEventLoopDelay` as implemented in Node.js
APIs
- Add async stack traces note to debugger docs
- Document TTY access pattern after stdin closes

All changes are minimal - just code snippets or single-line mentions in
existing files. No new files created.
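
For instance, the `hget()`/`hmget()` distinction noted above (a rough sketch; the exact method signatures are assumptions):

```ts
import { redis } from "bun";

// hget resolves to the field's value directly...
const name = await redis.hget("user:1", "name"); // e.g. "Ada"
// ...while hmget resolves to an array of values.
const pair = await redis.hmget("user:1", ["name", "email"]);
```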

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-09-29 21:51:44 -07:00
robobun
621066d0c4 Fix crash in toContainAnyKeys and toContainKeys with non-object values (#22977)
## Summary
- Fixed segmentation fault when calling `toContainAnyKeys`,
`toContainKeys`, and `toContainAllKeys` on non-object values (null,
undefined, numbers, strings, etc.)
- Added proper validation to check if value is an object before calling
`hasOwnPropertyValue` or `keys()`
- Added comprehensive test coverage for edge cases

## Problem
The matchers were crashing with a segmentation fault when called with
non-object values because:

1. `toContainAnyKeys` and `toContainKeys` were calling
`hasOwnPropertyValue` without checking if the value is an object first
2. `toContainAllKeys` was calling `keys()` without checking if the value
is an object first
3. The `hasOwnPropertyValue` function documentation explicitly states:
"If the object is not an object, it will crash. **You must check if the
object is an object before calling this function.**"

## Solution
- Added `value.isObject()` check in `toContainAnyKeys` before attempting
to check for properties
- Fixed `toContainKeys` by replacing the `toBoolean()` check with an
`isObject()` check
- Fixed `toContainAllKeys` by adding proper object validation before
calling `keys()`
- For non-objects with empty expected arrays, the matchers return true
(matching jest-extended behavior)
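
A small sketch of the resulting behavior (assertions follow the PR's description):

```ts
import { expect, test } from "bun:test";

test("key matchers on non-objects fail instead of crashing", () => {
  // Previously a segfault; now an ordinary matcher failure.
  expect(() => expect(null).toContainAnyKeys(["a"])).toThrow();
  // An empty expected array passes for non-objects, per jest-extended.
  expect(42).toContainAllKeys([]);
});
```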

## Test plan
- [x] Added comprehensive test coverage in
`test/js/bun/test/expect.test.js`
- [x] Tests cover: null, undefined, numbers, strings, booleans, symbols,
BigInt, arrays, functions
- [x] All existing jest-extended tests continue to pass
- [x] Debug build compiles and all tests pass

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2025-09-29 16:10:56 -07:00
Meghan Denny
51a05ae2e3 safety: a few more exception validation fixes (#23038) 2025-09-29 15:27:52 -07:00
Dylan Conway
52629145ca fix(parser): TSX arrow function bugfix (#23082)
### What does this PR do?
Missing `.t_equals` and `.t_slash` checks. This matches esbuild.

```go
// Returns true if the current less-than token is considered to be an arrow
// function under TypeScript's rules for files containing JSX syntax
func (p *parser) isTSArrowFnJSX() (isTSArrowFn bool) {
	oldLexer := p.lexer
	p.lexer.Next()

	// Look ahead to see if this should be an arrow function instead
	if p.lexer.Token == js_lexer.TConst {
		p.lexer.Next()
	}
	if p.lexer.Token == js_lexer.TIdentifier {
		p.lexer.Next()
		if p.lexer.Token == js_lexer.TComma || p.lexer.Token == js_lexer.TEquals {
			isTSArrowFn = true
		} else if p.lexer.Token == js_lexer.TExtends {
			p.lexer.Next()
			isTSArrowFn = p.lexer.Token != js_lexer.TEquals && p.lexer.Token != js_lexer.TGreaterThan && p.lexer.Token != js_lexer.TSlash
		}
	}

	// Restore the lexer
	p.lexer = oldLexer
	return
}
```

fixes #19697
### How did you verify your code works?
Added some tests.
2025-09-29 05:10:16 -07:00
Dylan Conway
f4218ed40b fix(parser): possible crash with --minify-syntax and string -> dot conversions (#23078)
### What does this PR do?
Fixes code like `[(()=>{})()][''+'c']`.

We were calling `visitExpr` on a node that was already visited. This
code doesn't exist in esbuild, but we should keep it because it's an
optimization.

fixes #18629
fixes #15926

### How did you verify your code works?
Manually and added a test.
2025-09-29 04:20:57 -07:00
Dylan Conway
9c75db45fa fix(parser): scope mismatch bug from parseSuffix (#23073)
### What does this PR do?
esbuild returns `left` from the inner loop. This PR matches this
behavior. Before it was breaking out of the inner loop and continuing
through the outer loop, potentially parsing too far.

fixes #22013
fixes #22384

### How did you verify your code works?
Added some tests.

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-29 03:03:18 -07:00
Dylan Conway
f6e722b594 fix(glob): fix index out of bounds in GlobWalker (#23055)
### What does this PR do?
Given pattern input "../." we might collapse all path components.
### How did you verify your code works?
Manually and added a test.

---------

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-09-29 02:21:13 -07:00
Jarred Sumner
d9fdb67d70 Deflake bun-server.test.ts 2025-09-28 23:58:21 -07:00
Jarred Sumner
a09dc2f450 Update no-validate-leaksan.txt 2025-09-28 23:46:10 -07:00
Meghan Denny
39e48ed244 Bump 2025-09-28 11:28:14 -07:00
Michael H
db2f768bd9 vscode: remove old test runner codelens stuff (#23003)
### What does this PR do?

fix #23001 by removing references to old codelens from before the native
integration which should have its own (better and) native way of
starting

### How did you verify your code works?
2025-09-28 07:45:32 -07:00
Ciro Spaciari
cf1367137d feat(sql.array) add support to sql.array (#22946)
### What does this PR do?
Fixes https://github.com/oven-sh/bun/issues/17030
In this case, passing a normal array now works as expected: it is serialized as JSON/JSONB.

Fixes https://github.com/oven-sh/bun/issues/17798
Insert and update helpers work as expected here when using the sql.array helper:

```sql
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    name VARCHAR NOT NULL,
    roles TEXT[]
);
```

```js
const item = { id: 1, name: "test", roles: sql.array(['a', 'b'], "TEXT") };
await sql`
  UPDATE users
  SET ${sql(item)}
  WHERE id = 1
`;
```
Fixes https://github.com/oven-sh/bun/issues/22281
Should work using `sql.array(array, "TEXT")`.

Fixes https://github.com/oven-sh/bun/issues/22165
Fixes https://github.com/oven-sh/bun/issues/22155
Adds sql.array(array, typeNameOrTypeID) to Bun.SQL
(https://github.com/oven-sh/bun/issues/15088)

### How did you verify your code works?
Tests

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-27 01:12:29 -07:00
robobun
5a12175cb0 Fix V8StackTraceIterator to handle frames without parentheses (#23034)
## Summary
- Fixes V8StackTraceIterator terminating early when encountering stack
frames without parentheses (e.g., "at unknown")
- Ensures complete stack traces are shown by `Bun.inspect()` even after
`error.stack` property is accessed
- Addresses the root cause that PR #23022 was working around

## Problem
When the V8StackTraceIterator encountered a stack frame line without
parentheses (like `at unknown`), it would:
1. Set `offset = stack.length()` 
2. Return `false`, terminating the entire iteration
3. Only parse frames before the "unknown" frame

This caused `Bun.inspect()` to show incomplete stack traces after the
`error.stack` property was accessed, as documented in issue discussions
around PR #23022.

## Solution
Changed the parser to continue iterating through subsequent frames
instead of terminating. Frames without parentheses are now treated as
having a source URL but no function name or location info.

## Test plan
- [x] Added regression test
`test/regression/issue/23022-stack-trace-iterator.test.ts`
- [x] Verified existing stack trace tests still pass
- [x] Manually tested with Node.js stream errors that trigger this code
path

### Before fix:
```
error: Socket is closed
 code: "ERR_SOCKET_CLOSED"

      at node:net:1322:32
```

### After fix:
```
error: Socket is closed
 code: "ERR_SOCKET_CLOSED"

      at unknown:1:1
      at _write (node:net:1322:32)
      at writeOrBuffer (internal:streams/writable:381:18)
      at internal:streams/writable:334:16
      at testStackTrace (/workspace/bun/test.js:8:10)
      ...
```

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-27 00:55:54 -07:00
Michael H
ba20670da3 implement pnpm migration (#22262)
### What does this PR do?

fixes #7157, fixes #14662

migrates pnpm-workspace.yaml data to package.json & converts
pnpm-lock.yaml to bun.lock

---

### How did you verify your code works?

manually, tests and real world examples

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2025-09-27 00:45:29 -07:00
Meghan Denny
8c9c7894d6 update handle-leak.test.ts
observed higher values under load on Windows; will audit again once more
memory work has been completed
2025-09-27 00:27:23 -07:00
taylor.fish
73feb108d9 Handle optionals in bun.memory.deinit (#23027)
Currently, if you try to deinit an optional, `bun.memory.deinit` will
silently do nothing, even if the optional's payload is a struct with a
`deinit` method.

This commit makes sure the payload is deinitialized.

(For internal tracking: fixes STAB-1293)
2025-09-26 23:02:44 -07:00
Marko Vejnovic
5179dad481 hotfix(redis): Automatically connect on .subscribe() (#23018)
### What does this PR do?

Previously `redis.subscribe` did not automatically connect, in contrast
to other Redis functions. This PR changes the necessary PUB/SUB plumbing
so that `.subscribe` connects automatically.

### How did you verify your code works?

Didn't

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-26 22:43:23 -07:00
Meghan Denny
97f6adf767 revert this change made to ShellRmTask.DirTask (#23028)
it caused deploying the site to hang, and `ref.ref()` isn't thread-safe.
The proper fix should latch onto the shell command instead of the
generic task queue.
2025-09-26 22:23:06 -07:00
Dylan Conway
8102e80f88 fix(build): Promise.all() async module dependencies (#22704)
### What does this PR do?
Currently bundling and running projects with cyclic async module
dependencies will hang due to module promises never resolving. This PR
unblocks these projects by outputting `await Promise.all` with these
dependencies.

Before (will hang with bun, or error with unsettled top level await with
node):
```js
var __esm = (fn, res) => () => (fn && (res = fn((fn = 0))), res);

var init_mod3 = __esm(async () => {
  await init_mod1();
});

var init_mod2 = __esm(async () => {
  await init_mod1();
});

var init_mod1 = __esm(async () => {
  await init_mod2();
  await init_mod3();
});

await init_mod1();
```

After:
```js
var __esm = (fn, res) => () => (fn && (res = fn((fn = 0))), res);
var __promiseAll = Promise.all.bind(Promise);

var init_mod3 = __esm(async () => {
  await init_mod1();
});

var init_mod2 = __esm(async () => {
  await init_mod1();
});

var init_mod1 = __esm(async () => {
  await __promiseAll([init_mod2(), init_mod3()]);
});

await init_mod1();
```

### How did you verify your code works?
Manually and tests

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-26 22:21:00 -07:00
taylor.fish
ed72eff2a9 bun.ptr.Shared: fix dependency loop and add leak/adopt methods (#23024)
Fixes STAB-1292
2025-09-26 19:46:50 -07:00
Jarred Sumner
665ea96076 Deflake test/js/bun/http/bun-server.test.ts 2025-09-26 19:22:07 -07:00
Meghan Denny
da0babebd2 node:http2: fix leak in H2FrameParser (#22997) 2025-09-26 19:20:44 -07:00
Jarred Sumner
733e7f6165 Fix fetch-preconnect test failure (#23016)
### What does this PR do?

### How did you verify your code works?
2025-09-26 19:01:01 -07:00
Ciro Spaciari
d3ce459f0e fix(valkey/redis) fix tls (includes pub/sub) (#22981)
### What does this PR do?
Fix tls property not being properly set
Fixes https://github.com/oven-sh/bun/issues/22186
### How did you verify your code works?
Tests + Manually test with upstash using `rediss` protocol and tls: true
options

---------

Co-authored-by: Marko Vejnovic <marko.vejnovic@hotmail.com>
Co-authored-by: Marko Vejnovic <marko@bun.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-26 18:57:06 -07:00
pfg
e14e42b402 fix lint (#23019)
Format action was failing
2025-09-26 18:05:01 -07:00
taylor.fish
eb04e4e640 Make bun.webcore.Blob smaller and ref-counted (#23015)
Reduce the size of `bun.webcore.Blob` from 120 bytes to 96. Also make it
ref-counted: in-progress work on improving the bindings generator
depends on this, as it means C++ can pass a pointer to the `Blob` to Zig
without risking it being destroyed if the GC collects the associated
`JSBlob`.

Note that this PR depends on #23013.

(For internal tracking: fixes STAB-1289, STAB-1290)
2025-09-26 17:18:30 -07:00
pfg
250d30eb7d Concurrent limit --max-concurrency, defaults to 20 (#22944)
### What does this PR do?

Adds a max-concurrency flag to limit the number of concurrent tests that
run at once. Defaults to 20. Jest and Vitest both default to 5.

### How did you verify your code works?

Tests

---------

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-26 16:39:08 -07:00
Jarred Sumner
0511fbf7b6 Skip failing test on arm64 linux musl caused by third-party dependency 2025-09-26 15:23:32 -07:00
taylor.fish
c58d2e3911 Add generic-allocator ArrayList (#22917)
Add a version of `ArrayList` that takes a generic `Allocator` type
parameter. This matches the interface of smart pointers like
`bun.ptr.Owned` and `bun.ptr.Shared`.

This type behaves like a managed `ArrayList` but has no overhead if
`Allocator` is a zero-sized type, like `bun.DefaultAllocator`.

(For internal tracking: fixes STAB-1267)

---------

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-26 15:19:45 -07:00
taylor.fish
266fca2e5c Add ExternalShared and RawRefCount (#23013)
Add `bun.ptr.ExternalShared`, a shared pointer whose reference count is
managed externally; e.g., by extern functions. This can be used to work
with `RefCounted` C++ objects in Zig. For example:

```cpp
// C++:
struct MyType : RefCounted<MyType> { ... };
extern "C" void MyType__ref(MyType* self) { self->ref(); }
extern "C" void MyType__ref(MyType* self) { self->deref(); }
```

```zig
// Zig:
const MyType = opaque {
    extern fn MyType__ref(self: *MyType) void;
    extern fn MyType__deref(self: *MyType) void;

    pub const Ref = bun.ptr.ExternalShared(MyType);

    // This enables `ExternalShared` to work.
    pub const external_shared_descriptor = struct {
        pub const ref = MyType__ref;
        pub const deref = MyType__deref;
    };
};

// Now `MyType.Ref` behaves just like `Ref<MyType>` in C++:
var some_ref: MyType.Ref = someFunctionReturningMyTypeRef();
const ptr: *MyType = some_ref.get(); // gets the inner pointer
var some_other_ref = some_ref.clone(); // increments the ref count
some_ref.deinit(); // decrements the ref count
// decrements the ref count again; if no other refs exist, the object
// is destroyed
some_other_ref.deinit();
```

This commit also adds `RawRefCount`, a simple wrapper around an integer
reference count that can be used to implement the interface required by
`ExternalShared`. Generally, for reference-counted Zig types,
`bun.ptr.Shared` is preferred, but occasionally it is useful to have an
“intrusive” reference-counted type where the ref count is stored in the
type itself. For this purpose, `ExternalShared` + `RawRefCount` is more
flexible and less error-prone than the deprecated `bun.ptr.RefCounted`
type.

(For internal tracking: fixes STAB-1287, STAB-1288)
2025-09-26 15:15:58 -07:00
pfg
9d01a7b91a Require deinit function for memory.deinit() (#22923)
Co-authored-by: taylor.fish <contact@taylor.fish>
2025-09-26 13:47:24 -07:00
robobun
a329da97f4 Fix server stability issue with oversized requests (#22701)
## Summary
Improves server stability when handling certain request edge cases.

## Test plan
- Added regression test in `test/regression/issue/22353.test.ts`
- Test verifies server continues operating normally after handling edge
case requests
- All existing HTTP server tests pass

Fixes #22353

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-26 04:59:07 -07:00
Filip Stevanovic
f45900d7e6 fix(fetch): print request body for application/x-www-form-urlencoded in curl logs (#22849)
### What does this PR do?

fixes an issue where fetch requests with `Content-Type:
application/x-www-form-urlencoded` would not include the request body in
curl logs when `BUN_CONFIG_VERBOSE_FETCH=curl` is enabled

previously, only JSON and text-based content types were recognized as
safe-to-print in the curl formatter. This change updates the allow-list
to also handle `application/x-www-form-urlencoded`, ensuring bodies for
common form submissions are shown in logs

### How did you verify your code works?

- added `Content-Type: application/x-www-form-urlencoded` to a fetch
request and confirmed that `BUN_CONFIG_VERBOSE_FETCH=curl` now outputs a
`--data-raw` section with the encoded body
- verified the fix against the reproduction script provided in issue
#12042
 - created and ran a regression test
- checked that existing content types (JSON, text, etc.) continue to
print correctly

fixes #12042
2025-09-26 03:54:41 -07:00
Jarred Sumner
00490199f1 bun feedback (#22710)
### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-26 03:47:26 -07:00
Marko Vejnovic
17b503b389 Redis PUB/SUB 2.0 (#22568)
### What does this PR do?

**This PR is created because [the previous PR I
opened](https://github.com/oven-sh/bun/pull/21728) had some concerning
issues.** Thanks @Jarred-Sumner for the help.

The goal of this PR is to introduce PUB/SUB functionality to the
built-in Redis client. Based on the fact that the current Redis API does
not appear to have compatibility with `ioredis` or `node-redis`, I've
decided to do away with existing APIs and API compatibility with these
existing libraries.

I have decided to base my implementation on the [`node-redis` pub/sub
API](https://github.com/redis/node-redis/blob/master/docs/pub-sub.md).

#### Random Things That Happened

- [x] Refactored the build scripts so that `valgrind` can be disabled.
- [x] Added a `numeric` namespace in `harness.ts` with useful
mathematical libraries.
- [x] Added a mechanism in `cppbind.ts` to disable static assertions
(specifically to allow `check_slow` even when returning a `JSValue`).
Implemented via `// NOLINT[NEXTLINE]?\(.*\)` macros.
- [x] Fixed inconsistencies in error handling of `JSMap`.

### How did you verify your code works?

I've written a set of unit tests to hopefully catch the major use-cases
of this feature. They all appear to pass.


#### Future Improvements

I would have a lot more confidence in our Redis implementation if we
tested it with a test suite running over a network that emulates a high
failure rate. There are many edge cases worth catching, but I think we
can roll that out in a future PR.

### Future Tasks

- [ ] Tests over flaky network
- [ ] Use the custom private members over `_<member>`.

---------

Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-26 03:06:18 -07:00
Jarred Sumner
ea735c341f Bump WebKit (#22957)
### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-26 01:46:26 -07:00
Jarred Sumner
064ecc37fd Move Bun__JSRequest__calculateEstimatedByteSize earlier (#22993)
### What does this PR do?

### How did you verify your code works?
2025-09-26 00:33:30 -07:00
Meghan Denny
90c7a4e886 update no-validate-leaksan.txt 2025-09-26 00:24:02 -07:00
Meghan Denny
c63fa996d1 package.json: add amazonlinux machine script 2025-09-26 00:17:58 -07:00
robobun
5457d76bcb Fix double-free in createArgv function (#22978)
## Summary
- Fixed a double-free bug in the `createArgv` function in
`node_process.zig`

## Details
The `createArgv` function had two `defer allocator.free(args)`
statements:
- One on line 164 
- Another on line 192 (now removed)

This would cause the same memory to be freed twice when the function
returned, leading to undefined behavior.

Fixes #22975

## Test plan
The existing process.argv tests should continue to pass with this fix.

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-25 23:52:56 -07:00
pfg
c4519c7552 Add --randomize --seed flag (#22987)
Outputs the seed when randomizing. Adds a --seed flag to reproduce a
random order. Seeds might not produce the same order across operating
systems or Bun versions.

Fixes #11847

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-25 23:47:46 -07:00
Jarred Sumner
656747bcf1 Fix vm destruction assertion failure in udp socket, reduce usage of protect() (#22986)
### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-25 22:41:02 -07:00
Jarred Sumner
2039ab182d Remove stale path assertion on Windows (#22988)
### What does this PR do?

This assertion is occasionally incorrect, and was originally added as a
workaround for the lack of proper error handling in Zig's standard
library. We've since fixed that, so this assertion is no longer needed.

### How did you verify your code works?

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-25 22:34:49 -07:00
Meghan Denny
5a709a2dbf node:tty: use terminal VT mode on Windows (#21161)
mirrors: https://github.com/nodejs/node/pull/58358
2025-09-25 19:58:44 -07:00
Meghan Denny
51ce3bc269 [publish images] ci: ensure tests that require docker have it available (#22781) 2025-09-25 19:03:22 -07:00
Marko Vejnovic
14b62e6904 chore(build): Add build:debug:noasan and remove build:debug:asan (#22982)
### What does this PR do?

Adds a `bun run build:debug:noasan` run script and deletes the `bun run
build:debug:asan` rule.

### How did you verify your code works?

Ran the change locally.
2025-09-25 18:06:21 -07:00
robobun
d3061de1bf feat(windows): implement authenticode stripping for --compile (#22960)
## Summary

Implements authenticode signature stripping for Windows PE files when
using `bun build --compile`, ensuring that generated executables can be
properly signed with external tools after Bun embeds its data section.

## What Changed

### Core Implementation
- **Authenticode stripping**: Removes digital signatures from PE files
before adding the .bun section
- **Safe memory access**: Replaced all `@alignCast` operations with safe
unaligned access helpers to prevent crashes
- **Hardened PE parsing**: Added comprehensive bounds checking and
validation throughout
- **PE checksum recalculation**: Properly updates checksums after
modifications

### Key Features
- Always strips authenticode signatures when using `--compile` for
Windows (uses `.strip_always` mode)
- Validates PE file structure according to PE/COFF specification
- Handles overlapping memory regions safely during certificate removal
- Clears `IMAGE_DLLCHARACTERISTICS_FORCE_INTEGRITY` flag when stripping
signatures
- Ensures no unexpected overlay data remains after stripping

### Bug Fixes
- Fixed memory corruption bug using `copyBackwards` for overlapping
regions
- Fixed checksum calculation skipping 6 bytes instead of 4
- Added integer overflow protection in payload size calculations
- Fixed double alignment bug in `size_of_image` calculation

## Technical Details

The implementation follows the Windows PE/COFF specification and
includes:

- `StripMode` enum to control when signatures are stripped
(none/strip_if_signed/strip_always)
- Safe unaligned memory access helpers (`viewAtConst`, `viewAtMut`)
- Proper alignment helpers with overflow protection (`alignUpU32`,
`alignUpUsize`)
- Comprehensive error types for all failure cases

## Testing

- Passes all existing PE tests in
`test/regression/issue/pe-codesigning-integrity.test.ts`
- Compiles successfully with `bun run zig:check-windows`
- Properly integrated with StandaloneModuleGraph for Windows compilation

## Impact

This ensures Windows users can:
1. Use `bun build --compile` to create standalone executables
2. Sign the resulting executables with their own certificates
3. Distribute properly signed Windows binaries

Fixes issues where previously signed executables would have invalid
signatures after Bun added its embedded data.

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-25 18:03:27 -07:00
robobun
58782ceef2 Fix bun_dependency_versions.h regenerating on every CMake run (#22985)
## Summary
- Fixes unnecessary regeneration of `bun_dependency_versions.h` on every
CMake run
- Only writes the header file when content actually changes

## Test plan
Tested locally by running CMake configuration multiple times:
1. First run generates the file (shows "Updated dependency versions
header")
2. Subsequent runs skip writing (shows "Dependency versions header
unchanged")
3. File modification timestamp remains unchanged when content is the
same
4. File is properly regenerated when deleted or when content changes

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-09-25 17:23:45 -07:00
Meghan Denny
0b9a2fce2d update no-validate-leaksan.txt 2025-09-25 17:06:23 -07:00
Marko Vejnovic
749ad8a1ff fix(build): Minor Linux Build Fixes (#22972)
### What does this PR do?

### How did you verify your code works?
2025-09-25 16:53:21 -07:00
Jarred Sumner
9746d03ccb Delete slop test 2025-09-25 16:24:24 -07:00
Jarred Sumner
4dfd87a302 Fix aborting fetch() calls while the socket is connecting. Fix a thread-safety issue involving redirects and AbortSignal. (#22842)
### What does this PR do?

When we added "happy eyeballs" support to fetch(), it meant that
`onOpen` might not be called for a while. If the AbortSignal
is aborted between `connect()` and the socket becoming
readable/writable, then we would delay closing the connection until the
connection opens. Fixing that fixes #18536.

Separately, the `isHTTPS()` function used in abort and in request body
streams was not thread safe. This caused a crash when many redirects
happen simultaneously while either AbortSignal or request body messages
are in-flight.
This PR fixes https://github.com/oven-sh/bun/issues/14137



### How did you verify your code works?

There are tests

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Ciro Spaciari <ciro.spaciari@gmail.com>
2025-09-25 16:08:06 -07:00
Meghan Denny
20854fb285 node:crypto: add blake2s256 hasher (#22958) 2025-09-25 15:28:42 -07:00
robobun
be15f6c80c feat(test): add --randomize flag to run tests in random order (#22945)
## Summary

This PR adds a `--randomize` flag to `bun test` that shuffles test
execution order. This helps developers catch test interdependencies and
identify flaky tests that may depend on execution order.

## Changes

- Added `--randomize` CLI flag to the test command
- 🔀 Implemented test shuffling using `bun.fastRandom()` as PRNG seed
- 🧪 Added comprehensive tests to verify randomization behavior
- 📝 Tests are shuffled at the scheduling phase, properly handling
describe blocks and hooks

## Usage

```bash
# Run tests in random order
bun test --randomize

# Works with other test flags
bun test --randomize --bail
bun test mytest.test.ts --randomize
```

## Implementation Details

The randomization happens in `Order.zig`'s `generateOrderDescribe`
function, which shuffles the `current.entries.items` array when the
randomize flag is set. This ensures:

- All tests still run (just in different order)
- Hooks (beforeAll, afterAll, beforeEach, afterEach) maintain proper
relationships
- Describe blocks and their children are shuffled independently
- Each run uses a different random seed for varied execution orders

## Test Coverage

Added tests in `test/cli/test/test-randomize.test.ts` that verify:
- Tests run in random order with the flag
- All tests execute (none are skipped)
- Without the flag, tests run in consistent order
- Randomization works with describe blocks

## Example Output

```bash
# Without --randomize (consistent order)
$ bun test mytest.js
Running test 1
Running test 2
Running test 3
Running test 4
Running test 5

# With --randomize (different order each run)
$ bun test mytest.js --randomize
Running test 3
Running test 5
Running test 1
Running test 4
Running test 2

$ bun test mytest.js --randomize
Running test 2
Running test 4
Running test 5
Running test 1
Running test 3
```

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: pfg <pfg@pfg.pw>
2025-09-25 14:20:47 -07:00
pfg
0ea4ce1bb4 Synchronous concurrent test fix (#22928)
```ts
beforeEach(() => {
  console.log("beforeEach");
});
afterEach(() => {
  console.log("afterEach");
});
test.concurrent("test 1", () => {
  console.log("start test 1");
});
test.concurrent("test 2", async () => {
  console.log("start test 2");
});
test.concurrent("test 3", () => {
  console.log("start test 3");
});
```

```
$> bun-before test synchronous-concurrent
beforeEach
beforeEach
beforeEach
start test 1
start test 2
start test 3
afterEach
afterEach
afterEach

$> bun-after test synchronous-concurrent
beforeEach
start test 1
afterEach
beforeEach
start test 2
afterEach
beforeEach
start test 3
afterEach
```

---------

Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-25 03:52:18 -07:00
robobun
6c381b0e03 Fix double slash in error stack traces when root_path has trailing slash (#22951)
## Summary
- Fixes double slashes appearing in error stack traces when `root_path`
ends with a trailing slash
- Followup to #22469 which added dimmed cwd prefixes to error messages

## Changes
- Use `strings.withoutTrailingSlash()` to strip any trailing separator
from `root_path` before adding the path separator
- This prevents paths like `/workspace//file.js` from appearing in error
messages

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-09-25 00:37:10 -07:00
Ciro Spaciari
7798e6638b Implement NODE_USE_SYSTEM_CA with --use-system-ca CLI flag (#22441)
### What does this PR do?
Resume work on https://github.com/oven-sh/bun/pull/21898
### How did you verify your code works?
Manually tested on MacOS, Windows 11 and Ubuntu 25.04. CI changes are
needed for the tests

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-24 21:55:57 -07:00
Marko Vejnovic
e3783c244f chore(libuv): Update to 1.51.0 (#22942)
### What does this PR do?

Uprevs `libuv` to version `1.51.0`.

### How did you verify your code works?

CI passes.

---------

Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-24 20:55:25 -07:00
robobun
fee28ca66f Fix dns.resolve callback parameters to match Node.js behavior (#22814)
## Summary
- Fixed `dns.resolve()` callback to pass 2 parameters instead of 3,
matching Node.js
- Fixed `dns.promises.resolve()` to return array of strings for A/AAAA
records instead of objects
- Added comprehensive regression tests

## What was wrong?

The `dns.resolve()` callback was incorrectly passing 3 parameters
`(error, hostname, results)` instead of Node.js's 2 parameters `(error,
results)`. Additionally, `dns.promises.resolve()` was returning objects
with `{address, family}` instead of plain string arrays for A/AAAA
records.

## How this fixes it

1. Removed the extra `hostname` parameter from the callback in
`dns.resolve()` for A/AAAA records
2. Changed promise version to use `promisifyResolveX(false)` instead of
`promisifyLookup()` to return string arrays
3. Applied same fixes to the `Resolver` class methods
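
The corrected callback shape (a minimal sketch of the Node.js-compatible behavior described above):

```ts
import dns from "node:dns";

// The callback receives exactly (err, addresses); A/AAAA results are plain strings.
dns.resolve("example.com", (err, addresses) => {
  if (err) throw err;
  console.log(addresses); // e.g. ["93.184.215.14"]
});
```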

## Test plan
- Added regression test `test/regression/issue/22712.test.ts` with 6
test cases
- All tests pass with the fix
- Verified existing DNS tests still pass

Fixes #22712

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-24 18:29:15 -07:00
robobun
f9a042f114 Improve --reporter flag help and error messages (#22900)
## Summary
- Clarifies help text for `--reporter` and `--reporter-outfile` flags
- Improves error messages when invalid reporter formats are specified
- Makes distinction between test reporters and coverage reporters
clearer

## Changes
1. Updated help text in `Arguments.zig` to better explain:
   - What formats are currently available (only 'junit' for --reporter)
   - Default behavior (console output for tests)
   - Requirements (--reporter-outfile needed with --reporter=junit)
   
2. Improved error messages to list available options when invalid
formats are used

3. Updated CLI completions to match the new help text

## Test plan
- [x] Built and tested with `bun bd`
- [x] Verified help text displays correctly: `./build/debug/bun-debug
test --help`
- [x] Tested error message for invalid reporter:
`./build/debug/bun-debug test --reporter=json`
- [x] Tested error message for missing outfile: `./build/debug/bun-debug
test --reporter=junit`
- [x] Tested error message for invalid coverage reporter:
`./build/debug/bun-debug test --coverage-reporter=invalid`
- [x] Verified junit reporter still works: `./build/debug/bun-debug test
--reporter=junit --reporter-outfile=/tmp/junit.xml`
- [x] Verified lcov coverage reporter still works:
`./build/debug/bun-debug test --coverage --coverage-reporter=lcov`

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-09-24 18:26:37 -07:00
robobun
57b93f6ea3 Fix panic when macros return collections with 3+ arrays/objects (#22827)
## Summary

Fixes #22656, #11730, and #7116

Fixes a panic that occurred when macros returned collections containing
three or more arrays or objects.

## Problem

The issue was caused by hash table resizing during recursive processing.
When `this.run()` was called recursively to process nested
arrays/objects, it could add more entries to the `visited` map,
triggering a resize. This would invalidate the `_entry.value_ptr`
pointer obtained from `getOrPut`, leading to memory corruption and
crashes.

## Solution

The fix ensures we handle hash table resizing safely:

1. Use `getOrPut` to reserve an entry and store a placeholder
2. Process all children (which may trigger hash table resizing)
3. Create the final expression with all data
4. Use `put` to update the entry (safe even after resizing)

This approach is applied consistently to both arrays and objects.

## Verification

All three issues have been tested and verified as fixed:

### #22656 - "Panic when returning collections with three or more arrays or objects"
- **Before**: `panic(main thread): switch on corrupt value`
- **After**: Works correctly

### #11730 - "Constructing deep objects in macros causes segfaults"
- **Before**: `Segmentation fault at address 0x8` with deep nested
structures
- **After**: Handles deep nesting without crashes

### #7116 - "[macro] crash with large complex array"
- **Before**: Crashes with objects containing 50+ properties (hash table
stress)
- **After**: Processes large complex arrays successfully

## Test Plan

Added comprehensive regression tests that cover:
- Collections with 3+ arrays
- Collections with 3+ objects
- Deeply nested structures (5+ levels)
- Objects with many properties (50+) to stress hash table operations
- Mixed collections of arrays and objects

All tests pass with the fix applied.

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-24 18:25:39 -07:00
robobun
fcd628424a Fix YAML.parse to throw SyntaxError instead of BuildMessage (#22924)
YAML.parse now throws SyntaxError for invalid syntax matching JSON.parse
behavior

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2025-09-24 16:29:05 -07:00
pfg
526686fdc9 Prevent test.only and snapshot updates in CI (#21811)
This is feature-flagged and will not activate until Bun 1.3.

- Makes `test.only()` throw an error in CI
- Unless `--update-snapshots` is passed:
- Makes `expect.toMatchSnapshot()` throw an error instead of adding a
new snapshot in CI
- Makes `expect.toMatchInlineSnapshot()` throw an error instead of
filling in the snapshot value in CI
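
A rough sketch of the new behavior, assuming CI is detected:

```ts
import { test, expect } from "bun:test";

// In CI, test.only() now throws instead of silently narrowing the run.
test.only("focused test", () => {});

test("snapshot", () => {
  // In CI, a *new* snapshot throws instead of being written,
  // unless `bun test --update-snapshots` is passed.
  expect({ ok: true }).toMatchSnapshot();
});
```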

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-24 15:19:16 -07:00
pfg
95b18582ec Revert "concurrent limit"
This reverts commit 4252a6df31.
2025-09-24 15:09:20 -07:00
pfg
4252a6df31 concurrent limit 2025-09-24 15:08:36 -07:00
Meghan Denny
13248bab57 package.json: add shorthands for creating remote machines (#22780) 2025-09-24 13:11:19 -07:00
Meghan Denny
80e8b9601d update no-validate-exceptions.txt (#22907) 2025-09-24 12:57:14 -07:00
btcbobby
0bd3f3757f Update install.sh to try ~/.bash_profile first for PATH modification (#11679) 2025-09-24 12:53:45 -07:00
Dylan Conway
084eeb945e fix(install): serialize updated workspaces versions correctly for bun.lockb (#22932)
### What does this PR do?
This change was missing after changing semver core numbers to use u64.

Also fixes potentially serializing uninitialized bytes from resolution
unions.
### How did you verify your code works?
Added a test for migrating a bun.lockb with most features used.
2025-09-24 02:42:57 -07:00
Meghan Denny
92bc522e85 lsan: fix reporting on linux ci (#22806) 2025-09-24 00:47:52 -07:00
robobun
e58a4a7282 feat: add concurrent-test-glob option to bunfig.toml for selective concurrent test execution (#22898)
## Summary

Adds a new `concurrentTestGlob` configuration option to bunfig.toml that
allows test files matching a glob pattern to automatically run with
concurrent test execution enabled. This provides granular control over
which tests run concurrently without modifying test files or using the
global `--concurrent` flag.

## Problem

Currently, enabling concurrent test execution in Bun requires either:
1. Using the `--concurrent` flag (affects ALL tests)
2. Manually adding `test.concurrent()` to individual test functions
(requires modifying test files)

This creates challenges for:
- Large codebases wanting to gradually migrate to concurrent testing
- Projects with mixed test types (unit tests that need isolation vs
integration tests that can run in parallel)
- CI/CD pipelines that want to optimize test execution without code
changes

## Solution

This PR introduces a `concurrentTestGlob` option in bunfig.toml that
automatically enables concurrent execution for test files matching a
specified glob pattern:

```toml
[test]
concurrentTestGlob = "**/concurrent-*.test.ts"
```

### Key Features
- Non-breaking: Completely opt-in via configuration
- Flexible: Use glob patterns to target specific test files or directories
- Override-friendly: `--concurrent` flag still forces all tests to run concurrently
- Zero code changes: No need to modify existing test files

## Implementation Details

### Code Changes
1. Added `concurrent_test_glob` field to `TestOptions` struct
(`src/cli.zig`)
2. Added parsing for `concurrentTestGlob` from bunfig.toml
(`src/bunfig.zig`)
3. Added `concurrent_test_glob` field to `TestRunner`
(`src/bun.js/test/jest.zig`)
4. Implemented `shouldFileRunConcurrently()` method that checks file
paths against the glob pattern
5. Updated test execution logic to apply concurrent mode based on glob
matching (`src/bun.js/test/ScopeFunctions.zig`)

### How It Works
- When a test file is loaded, its path is checked against the configured
glob pattern
- If it matches, all tests in that file run concurrently (as if
`--concurrent` was passed)
- Files not matching the pattern run sequentially as normal
- The `--concurrent` CLI flag overrides this behavior when specified
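
The real check is implemented in Zig, but a rough TypeScript equivalent of `shouldFileRunConcurrently()` built on `Bun.Glob` might look like:

```ts
import { Glob } from "bun";

// Hypothetical sketch: checked once per test file during loading.
function shouldFileRunConcurrently(filePath: string, patterns: string[]): boolean {
  return patterns.some(pattern => new Glob(pattern).match(filePath));
}

shouldFileRunConcurrently("test/integration/db.test.ts", ["**/integration/*.test.ts"]); // true
```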

## Usage Examples

### Basic Usage
```toml
# bunfig.toml
[test]
concurrentTestGlob = "**/integration/*.test.ts"
```

### Multiple Patterns
```toml
[test]
concurrentTestGlob = [
  "**/integration/*.test.ts",
  "**/e2e/*.test.ts", 
  "**/concurrent-*.test.ts"
]
```

### Migration Strategy
Teams can gradually migrate to concurrent testing:
1. Start with integration tests: `"**/integration/*.test.ts"`
2. Add stable unit tests: `"**/fast-*.test.ts"`
3. Eventually migrate most tests except those requiring isolation

## Testing

Added comprehensive test coverage in
`test/cli/test/concurrent-test-glob.test.ts`:
- Tests matching glob patterns run concurrently (verified via execution order logging)
- Tests not matching patterns run sequentially (verified via shared state and execution order)
- `--concurrent` flag properly overrides the glob setting
- Tests use file system logging to deterministically verify concurrent
vs sequential execution

## Documentation

Complete documentation added:
- `docs/runtime/bunfig.md` - Configuration reference
- `docs/test/configuration.md` - Test configuration details
- `docs/test/examples/concurrent-test-glob.md` - Comprehensive example
with migration guide

## Performance Considerations

- Glob matching happens once per test file during loading
- Uses Bun's existing `glob.match()` implementation
- Minimal overhead: simple string pattern matching
- Future optimization: Could cache match results per file path

## Breaking Changes

None. This is a fully backward-compatible, opt-in feature.

## Checklist

- [x] Implementation complete and building
- [x] Tests passing
- [x] Documentation updated
- [x] No breaking changes
- [x] Follows existing code patterns

## Related Issues

This addresses common requests for more granular control over concurrent
test execution, particularly for large codebases migrating from other
test runners.

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-23 23:01:15 -07:00
Meghan Denny
ebe2e9da14 node:net: fix handle leak (#22913) 2025-09-23 22:02:34 -07:00
robobun
1a23797e82 feat: add test.serial() API for forcing serial test execution (#22899)
## Summary

Adds a new `test.serial()` API that forces tests to run serially even
when the `--concurrent` flag is passed. This is the opposite of
`test.concurrent()` which forces parallel execution.

## Motivation

Some tests genuinely need to run serially even in CI environments with
`--concurrent`:
- Database migration tests that must run in order
- Tests that modify shared global state
- Tests that use fixed ports or file system resources
- Tests that depend on timing or resource constraints

## Implementation

Changed `self_concurrent` from `bool` to `?bool`:
- `null` = default behavior (inherit from parent or use default)
- `true` = force concurrent execution
- `false` = force serial execution
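
Roughly, the effective mode resolves like this (a TypeScript sketch of the tri-state logic, not the actual Zig code):

```ts
function isConcurrent(
  self: boolean | null,   // test.concurrent => true, test.serial => false
  parent: boolean | null, // from enclosing describe.concurrent/describe.serial
  cliFlag: boolean,       // --concurrent
): boolean {
  // The most specific explicit setting wins; null falls through.
  return self ?? parent ?? cliFlag;
}
```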

## API Surface

```javascript
// Force serial execution
test.serial("database migration", async () => {
  // This runs serially even with --concurrent flag
});

// All modifiers work
test.serial.skip("skip this serial test", () => {});
test.serial.todo("implement this serial test");
test.serial.only("only run this serial test", () => {});
test.serial.each([[1], [2]])("serial test %i", (n) => {});
test.serial.if(condition)("conditional serial", () => {});

// Works with describe too
describe.serial("serial test suite", () => {
  test("test 1", () => {}); // runs serially
  test("test 2", () => {}); // runs serially
});

// Explicit test-level settings override describe-level
describe.concurrent("concurrent suite", () => {
  test.serial("this runs serially", () => {}); // serial wins
  test("this runs concurrently", () => {});
});
```

## Test Coverage

Comprehensive tests added including:
- Basic `test.serial()` functionality
- All modifiers (skip, todo, only, each, if)
- `describe.serial()` blocks
- Mixing serial and concurrent tests in same describe block
- Nested describe blocks with conflicting settings
- Explicit overrides (test.serial in describe.concurrent and vice versa)

All 36 tests pass.

## Example

```javascript
// Without this PR - these tests might run in parallel with --concurrent
test("migrate database schema v1", async () => { await migrateV1(); });
test("migrate database schema v2", async () => { await migrateV2(); });
test("migrate database schema v3", async () => { await migrateV3(); });

// With this PR - guaranteed serial execution
test.serial("migrate database schema v1", async () => { await migrateV1(); });
test.serial("migrate database schema v2", async () => { await migrateV2(); });
test.serial("migrate database schema v3", async () => { await migrateV3(); });
```

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-23 21:06:06 -07:00
vfilanovsky-openai
16435f3561 Make sure bun can be installed on Alpine Linux (musl) on arm64 hardware (#22892)
### What does this PR do?
This PR adjusts the "arch" string for the Linux-musl variant to make sure
it can be installed on ARM64 platforms using `npm`. Without this fix,
installing bun on Alpine Linux on arm64 fails because the native binary
cannot be found.

#### Why it fails
Bun attempts to find/download the native binaries during the postinstall
phase (see
[install.ts](https://github.com/oven-sh/bun/blob/bun-v1.1.42/packages/bun-release/src/npm/install.ts)).
The platform matching logic lives in
[platform.ts](https://github.com/oven-sh/bun/blob/bun-v1.1.42/packages/bun-release/src/platform.ts).
Note how the "musl" variant is marked [as
"aarch64"](https://github.com/oven-sh/bun/blob/bun-v1.1.42/packages/bun-release/src/platform.ts#L63-L69),
while the regular "glibc" variant is marked [as
"arm64"](https://github.com/oven-sh/bun/blob/bun-v1.1.42/packages/bun-release/src/platform.ts#L44-L49).
On Alpine Linux distributions (or when using "node-alpine" docker image)
we're supposed to be using the "musl" binary. However, since bun marks
it as "aarch64" while the matching logic relies on `process.arch`, it
never gets matched. Node.js uses "arm64", _not_ "aarch64" (see
["process.arch"
docs](https://nodejs.org/docs/latest-v22.x/api/process.html#processarch)).
In short - a mismatch between the expected arch ("aarch64") and the
actual reported arch ("arm64") prevents bun from finding the right
binary when installing with npm/pnpm.
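
A simplified sketch of the corrected comparison (helper name hypothetical):

```ts
// Node reports "arm64" on 64-bit ARM, never "aarch64".
function matchesCurrentPlatform(entry: { os: string; arch: string }): boolean {
  // Before the fix, the musl arm64 entry carried arch "aarch64",
  // so this comparison never succeeded on Alpine arm64.
  return entry.os === process.platform && entry.arch === process.arch;
}

matchesCurrentPlatform({ os: "linux", arch: "arm64" }); // true on Alpine arm64
```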

### How did you verify your code works?
Verified by running the installer on Alpine Linux on arm64.

cc @magus
2025-09-23 20:14:57 -07:00
Ciro Spaciari
db22b7f402 fix(Bun.sql) handle numeric correctly (#22925)
### What does this PR do?
Fixes https://github.com/oven-sh/bun/issues/21225
### How did you verify your code works?
Tests

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-23 20:14:19 -07:00
Nathan Whitaker
a8ccdb02e9 fix(benchmark): postgres benchmark was not performing any queries under deno and node (#22926)
### What does this PR do?
Fixes the postgres benchmark so that it actually benchmarks query
performance on node and deno.

Before this PR, the `sql` function was just creating a tagged template
function, which involved connecting to the database. So basically bun
was doing queries, but node and deno were just connecting to the
postgres database over and over.

You can see from the first example in the docs that you're supposed to
call the default export in order to get back a function to use with
template literals: https://www.npmjs.com/package/postgres
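
For reference, the intended usage per the postgres package docs (connection string illustrative):

```ts
import postgres from "postgres";

// `postgres` is a factory: calling it opens the connection pool and
// returns the tagged-template query function.
const sql = postgres("postgres://localhost:5432/mydb");

const rows = await sql`SELECT 1 AS one`; // actually executes a query
```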


### How did you verify your code works?
Ran it
2025-09-23 17:48:10 -07:00
pfg
144c45229e bun.ptr.Shared.Lazy fixes cppbind change (#22753)
shared lazy:

- cloneWeak didn't incrementWeak. fixed
- exposes a public Optional so you can do `bun.ptr.Shared(*T).Optional`
- the doc comment for 'take' said it set self to null. but it did not.
fixed.
- upgrading a weak to a strong incremented the weak instead of
decrementing it. fixed.
- adds a new method unsafeGetStrongFromPointer. this is currently unused
but used in pfg/describe-2:

a690faa60a/src/bun.js/api/Timer/EventLoopTimer.zig (L220-L223)

cppbind:

- moves the bindings to the root of the file at the top and puts raw at
the bottom
- fixes false_is_throw to return void instead of bool
- updates the help message

---------

Co-authored-by: taylor.fish <contact@taylor.fish>
2025-09-23 17:10:08 -07:00
Ciro Spaciari
85271f9dd9 fix(node:http) allow CONNECT in node http/https servers (#22756)
### What does this PR do?
Fixes https://github.com/oven-sh/bun/issues/22755
Fixes https://github.com/oven-sh/bun/issues/19790
Fixes https://github.com/oven-sh/bun/issues/16372
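
For context, a minimal sketch of the standard Node `connect` event these servers now support (a bare-bones CONNECT tunnel):

```ts
import http from "node:http";
import net from "node:net";

const server = http.createServer();

// Fired for `CONNECT host:port HTTP/1.1` requests, e.g. from proxy clients.
server.on("connect", (req, clientSocket, head) => {
  const [host, port] = (req.url ?? "").split(":");
  const upstream = net.connect(Number(port) || 443, host, () => {
    clientSocket.write("HTTP/1.1 200 Connection Established\r\n\r\n");
    upstream.write(head);
    upstream.pipe(clientSocket);
    clientSocket.pipe(upstream);
  });
});

server.listen(8080);
```
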
### How did you verify your code works?

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-23 16:46:59 -07:00
robobun
99786797c7 docs: Add missing YAML.stringify() documentation (#22921)
Co-authored-by: Alistair Smith <hi@alistair.sh>
2025-09-23 16:02:51 -07:00
Meghan Denny
b82c676ce5 ci: increase asan to 2xlarge (#22916) 2025-09-23 14:16:01 -07:00
robobun
e555702653 Fix infinite recursion when error.stack is a circular reference (#22863)
## Summary

This PR fixes infinite recursion and stack overflow crashes when error
objects have circular references in their properties, particularly when
`error.stack = error`.

### The Problem
When an error object's stack property references itself or creates a
circular reference chain, Bun would enter infinite recursion and crash.
Common patterns that triggered this:
```javascript
const error = new Error();
error.stack = error;  // Crash!
console.log(error);

// Or circular cause chains:
error1.cause = error2;
error2.cause = error1;  // Crash!
```

### The Solution
Added proper circular reference detection at three levels:

1. **C++ bindings layer** (`bindings.cpp`): Skip processing if `stack`
property equals the error object itself
2. **VirtualMachine layer** (`VirtualMachine.zig`): Track visited errors
when printing error instances and their causes
3. **ConsoleObject layer** (`ConsoleObject.zig`): Properly coordinate
visited map between formatters

Circular references are now safely detected and printed as `[Circular]`
instead of causing crashes.

## Test plan

Added comprehensive tests in
`test/regression/issue/circular-error-stack.test.ts`:
- `error.stack = error` circular reference
- Nested circular references via error properties
- Circular cause chains (`error1.cause = error2; error2.cause = error1`)

All tests pass:
```
bun test circular-error-stack.test.ts
✓ error with circular stack reference should not cause infinite recursion
✓ error with nested circular references should not cause infinite recursion  
✓ error with circular reference in cause chain
```

Manual testing:
```javascript
// Before: Stack overflow crash
// After: Prints error normally
const error = new Error("Test");
error.stack = error;
console.log(error);  // error: Test
```

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-22 22:22:58 -07:00
pfg
68bdffebe6 bun test - don't send start events for skipped tests (#22896) 2025-09-22 22:09:18 -07:00
Dylan Conway
285143dc66 fix(install): change semver core numbers to u64 (#22889)
### What does this PR do?
Sometimes packages use very large numbers that exceed the u32 maximum for
major/minor/patch (usually patch). This PR changes each core number in
bun to u64.
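
For illustration, a date-stamped patch component easily exceeds the u32 maximum of 4294967295; a quick check with `Bun.semver` (values illustrative):

```ts
// The patch component 20250922192826 is far larger than 2^32 - 1.
console.log(Bun.semver.satisfies("1.0.20250922192826", "^1.0.0")); // true
console.log(Bun.semver.order("1.0.20250922192826", "1.0.0")); // 1
```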

Because we serialize package information to disk for the binary lockfile
and package manifests, this pr bumps the version of each. We don't need
to change anything other than the version for serialized package
manifests because they will invalidate and save the new version. For old
binary lockfiles, this pr adds logic for migrating to the new version.
Even if there are no changes, migrating will always save the new
lockfile. Unfortunately, this means there will be a one-time invisible
diff for binary lockfile users, but this is better than installs failing
to work.

fixes #22881
fixes #21793
fixes #16041
fixes #22891

resolves BUN-7MX, BUN-R4Q, BUN-WRB

### How did you verify your code works?
Manually, and added a test for migrating from an older binary lockfile.

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-22 19:28:26 -07:00
Don Isaac
beae53e81b fix(test): add EXPECTED_COLOR and RECEIVED_COLOR aliases (#22862)
### What does this PR do?
This PR does two things.

First, it fixes a bug when using
[`jest-dom`](https://github.com/testing-library/jest-dom) where
expectation failures would break as `RECEIVED_COLOR` and
`EXPECTED_COLOR` are not properties of `ExpectMatcherContext`.
<img width="1216" height="139" alt="image"
src="https://github.com/user-attachments/assets/26ef87c2-f763-4a46-83a3-d96c4c534f3d"
/>

Second, it adds some existing timer mock functions that were missing
from the `vi` object.
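
A sketch of the matcher-context access that used to break (matcher name hypothetical):

```ts
import { expect } from "bun:test";

expect.extend({
  toBeVisibleish(received: unknown) {
    const pass = Boolean(received);
    return {
      pass,
      message: () =>
        // jest-dom-style matchers colorize failures via these utils;
        // EXPECTED_COLOR and RECEIVED_COLOR are now exposed as aliases.
        `expected ${this.utils.RECEIVED_COLOR(String(received))} ` +
        `to be ${this.utils.EXPECTED_COLOR("visible")}`,
    };
  },
});
```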

### How did you verify your code works?

I've added a test.
2025-09-22 18:43:28 -07:00
pfg
0d6a27d394 Remove remnants of '--only' flag in documentation (#22168) 2025-09-22 16:07:44 -07:00
Jarred Sumner
0b549321e9 Start using test.concurrent in our tests (#22823)
### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
2025-09-22 05:30:34 -07:00
robobun
33fdc2112f feat: add --cpu and --os flags to bun install for filtering optional dependencies (#22850)
## Summary

Implements `--cpu` and `--os` flags for `bun install` to filter optional
dependencies based on target architecture and operating system. This
allows developers to control which platform-specific optional
dependencies are installed.

## What Changed

### Core Implementation
- Added `--cpu` and `--os` flags to `bun install` command that accept
multiple values
- Multiple values combine with bitwise OR (e.g., `--cpu x64 --cpu arm64`
matches packages for either architecture)
- Updated `isDisabled` methods throughout the codebase to accept custom
CPU/OS targets
- Removed deprecated `isMatch` methods in favor of `isMatchWithTarget`
for consistency

### Files Modified
- `src/install/npm.zig` - Removed `isMatch` methods, standardized on
`isMatchWithTarget`
- `src/install/PackageManager/CommandLineArguments.zig` - Parse and
validate multiple flag values
- `src/install/PackageManager/PackageManagerOptions.zig` - Pass CPU/OS
options through
- `src/install/lockfile/Package.zig` & `Package/Meta.zig` - Updated
`isDisabled` signatures
- `src/install/lockfile/Tree.zig` & `lockfile.zig` - Updated call sites

## Usage Examples

```bash
# Install only x64 dependencies
bun install --cpu x64

# Install dependencies for both x64 and arm64
bun install --cpu x64 --cpu arm64

# Install Linux-specific dependencies
bun install --os linux

# Install for multiple platforms
bun install --cpu x64 --cpu arm64 --os linux --os darwin
```

## Test Plan

All 10 tests pass in `test/cli/install/bun-install-cpu-os.test.ts`:
- CPU architecture filtering
- OS filtering
- Combined CPU and OS filtering
- Multiple CPU architectures support
- Multiple operating systems support
- Multiple CPU and OS combinations
- Error handling for invalid values
- Negated CPU/OS support (`!arm64`, `!linux`)

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-22 03:27:09 -07:00
659 changed files with 45628 additions and 12388 deletions

View File

@@ -371,7 +371,7 @@ function getZigAgent(platform, options) {
* @returns {Agent}
*/
function getTestAgent(platform, options) {
const { os, arch } = platform;
const { os, arch, profile } = platform;
if (os === "darwin") {
return {
@@ -391,6 +391,13 @@ function getTestAgent(platform, options) {
}
if (arch === "aarch64") {
if (profile === "asan") {
return getEc2Agent(platform, options, {
instanceType: "c8g.2xlarge",
cpuCount: 2,
threadsPerCore: 1,
});
}
return getEc2Agent(platform, options, {
instanceType: "c8g.xlarge",
cpuCount: 2,
@@ -398,6 +405,13 @@ function getTestAgent(platform, options) {
});
}
if (profile === "asan") {
return getEc2Agent(platform, options, {
instanceType: "c7i.2xlarge",
cpuCount: 2,
threadsPerCore: 1,
});
}
return getEc2Agent(platform, options, {
instanceType: "c7i.xlarge",
cpuCount: 2,
@@ -538,6 +552,7 @@ function getLinkBunStep(platform, options) {
cancel_on_build_failing: isMergeQueue(),
env: {
BUN_LINK_ONLY: "ON",
ASAN_OPTIONS: "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=0",
...getBuildEnv(platform, options),
},
command: `${getBuildCommand(platform, options, "build-bun")} --target bun`,
@@ -601,6 +616,9 @@ function getTestBunStep(platform, options, testOptions = {}) {
cancel_on_build_failing: isMergeQueue(),
parallelism: unifiedTests ? undefined : os === "darwin" ? 2 : 10,
timeout_in_minutes: profile === "asan" || os === "windows" ? 45 : 30,
env: {
ASAN_OPTIONS: "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=0",
},
command:
os === "windows"
? `node .\\scripts\\runner.node.mjs ${args.join(" ")}`

View File

@@ -45,3 +45,8 @@ jobs:
env:
VSCE_PAT: ${{ secrets.VSCODE_EXTENSION }}
working-directory: packages/bun-vscode/extension
- uses: actions/upload-artifact@v4
with:
name: bun-vscode-${{ github.event.inputs.version }}.vsix
path: packages/bun-vscode/extension/bun-vscode-${{ github.event.inputs.version }}.vsix

.vscode/launch.json
View File

@@ -26,7 +26,7 @@
// "BUN_JSC_dumpSimulatedThrows": "1",
// "BUN_JSC_unexpectedExceptionStackTraceLimit": "20",
// "BUN_DESTRUCT_VM_ON_EXIT": "1",
// "ASAN_OPTIONS": "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=1",
// "ASAN_OPTIONS": "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=1:abort_on_error=1",
// "LSAN_OPTIONS": "malloc_context_size=100:print_suppressions=1:suppressions=${workspaceFolder}/test/leaksan.supp",
},
"console": "internalConsole",
@@ -69,7 +69,7 @@
// "BUN_JSC_dumpSimulatedThrows": "1",
// "BUN_JSC_unexpectedExceptionStackTraceLimit": "20",
// "BUN_DESTRUCT_VM_ON_EXIT": "1",
// "ASAN_OPTIONS": "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=1",
// "ASAN_OPTIONS": "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=1:abort_on_error=1",
// "LSAN_OPTIONS": "malloc_context_size=100:print_suppressions=1:suppressions=${workspaceFolder}/test/leaksan.supp",
},
"console": "internalConsole",

View File

@@ -21,7 +21,7 @@ $ sudo pacman -S base-devel ccache cmake git go libiconv libtool make ninja pkg-
```
```bash#Fedora
$ sudo dnf install cargo ccache cmake git golang libtool ninja-build pkg-config rustc ruby libatomic-static libstdc++-static sed unzip which libicu-devel 'perl(Math::BigInt)'
$ sudo dnf install cargo clang19 llvm19 lld19 ccache cmake git golang libtool ninja-build pkg-config rustc ruby libatomic-static libstdc++-static sed unzip which libicu-devel 'perl(Math::BigInt)'
```
```bash#openSUSE Tumbleweed

LATEST
View File

@@ -1 +1 @@
1.2.22
1.2.23

View File

@@ -1,6 +1,6 @@
const isBun = typeof globalThis?.Bun?.sql !== "undefined";
import postgres from "postgres";
const sql = isBun ? Bun.sql : postgres;
const sql = isBun ? Bun.sql : postgres();
// Create the table if it doesn't exist
await sql`

View File

@@ -48,6 +48,7 @@ const BunBuildOptions = struct {
/// enable debug logs in release builds
enable_logs: bool = false,
enable_asan: bool,
enable_valgrind: bool,
tracy_callstack_depth: u16,
reported_nodejs_version: Version,
/// To make iterating on some '@embedFile's faster, we load them at runtime
@@ -67,6 +68,7 @@ const BunBuildOptions = struct {
cached_options_module: ?*Module = null,
windows_shim: ?WindowsShim = null,
llvm_codegen_threads: ?u32 = null,
pub fn isBaseline(this: *const BunBuildOptions) bool {
return this.arch.isX86() and
@@ -94,6 +96,7 @@ const BunBuildOptions = struct {
opts.addOption(bool, "baseline", this.isBaseline());
opts.addOption(bool, "enable_logs", this.enable_logs);
opts.addOption(bool, "enable_asan", this.enable_asan);
opts.addOption(bool, "enable_valgrind", this.enable_valgrind);
opts.addOption([]const u8, "reported_nodejs_version", b.fmt("{}", .{this.reported_nodejs_version}));
opts.addOption(bool, "zig_self_hosted_backend", this.no_llvm);
opts.addOption(bool, "override_no_export_cpp_apis", this.override_no_export_cpp_apis);
@@ -213,26 +216,21 @@ pub fn build(b: *Build) !void {
var build_options = BunBuildOptions{
.target = target,
.optimize = optimize,
.os = os,
.arch = arch,
.codegen_path = codegen_path,
.codegen_embed = codegen_embed,
.no_llvm = no_llvm,
.override_no_export_cpp_apis = override_no_export_cpp_apis,
.version = try Version.parse(bun_version),
.canary_revision = canary: {
const rev = b.option(u32, "canary", "Treat this as a canary build") orelse 0;
break :canary if (rev == 0) null else rev;
},
.reported_nodejs_version = try Version.parse(
b.option([]const u8, "reported_nodejs_version", "Reported Node.js version") orelse
"0.0.0-unset",
),
.sha = sha: {
const sha_buildoption = b.option([]const u8, "sha", "Force the git sha");
const sha_github = b.graph.env_map.get("GITHUB_SHA");
@@ -268,10 +266,11 @@ pub fn build(b: *Build) !void {
break :sha sha;
},
.tracy_callstack_depth = b.option(u16, "tracy_callstack_depth", "") orelse 10,
.enable_logs = b.option(bool, "enable_logs", "Enable logs in release") orelse false,
.enable_asan = b.option(bool, "enable_asan", "Enable asan") orelse false,
.enable_valgrind = b.option(bool, "enable_valgrind", "Enable valgrind") orelse false,
.llvm_codegen_threads = b.option(u32, "llvm_codegen_threads", "Number of threads to use for LLVM codegen") orelse 1,
};
// zig build obj
@@ -500,6 +499,7 @@ fn addMultiCheck(
.codegen_path = root_build_options.codegen_path,
.no_llvm = root_build_options.no_llvm,
.enable_asan = root_build_options.enable_asan,
.enable_valgrind = root_build_options.enable_valgrind,
.override_no_export_cpp_apis = root_build_options.override_no_export_cpp_apis,
};
@@ -605,7 +605,15 @@ fn configureObj(b: *Build, opts: *BunBuildOptions, obj: *Compile) void {
// Object options
obj.use_llvm = !opts.no_llvm;
obj.use_lld = if (opts.os == .mac) false else !opts.no_llvm;
obj.use_lld = if (opts.os == .mac or opts.os == .linux) false else !opts.no_llvm;
if (opts.optimize == .Debug) {
if (@hasField(std.meta.Child(@TypeOf(obj)), "llvm_codegen_threads"))
obj.llvm_codegen_threads = opts.llvm_codegen_threads orelse 0;
}
obj.no_link_obj = true;
if (opts.enable_asan and !enableFastBuild(b)) {
if (@hasField(Build.Module, "sanitize_address")) {
obj.root_module.sanitize_address = true;
@@ -636,7 +644,7 @@ fn configureObj(b: *Build, opts: *BunBuildOptions, obj: *Compile) void {
obj.link_function_sections = true;
obj.link_data_sections = true;
if (opts.optimize == .Debug) {
if (opts.optimize == .Debug and opts.enable_valgrind) {
obj.root_module.valgrind = true;
}
}
@@ -745,6 +753,7 @@ fn addInternalImports(b: *Build, mod: *Module, opts: *BunBuildOptions) void {
.{ .file = "node-fallbacks/url.js", .enable = opts.shouldEmbedCode() },
.{ .file = "node-fallbacks/util.js", .enable = opts.shouldEmbedCode() },
.{ .file = "node-fallbacks/zlib.js", .enable = opts.shouldEmbedCode() },
.{ .file = "eval/feedback.ts", .enable = opts.shouldEmbedCode() },
}) |entry| {
if (!@hasField(@TypeOf(entry), "enable") or entry.enable) {
const path = b.pathJoin(&.{ opts.codegen_path, entry.file });

View File

@@ -60,10 +60,10 @@ endif()
# Windows Code Signing Option
if(WIN32)
optionx(ENABLE_WINDOWS_CODESIGNING BOOL "Enable Windows code signing with DigiCert KeyLocker" DEFAULT OFF)
if(ENABLE_WINDOWS_CODESIGNING)
message(STATUS "Windows code signing: ENABLED")
# Check for required environment variables
if(NOT DEFINED ENV{SM_API_KEY})
message(WARNING "SM_API_KEY not set - code signing may fail")
@@ -114,8 +114,10 @@ endif()
if(DEBUG AND ((APPLE AND ARCH STREQUAL "aarch64") OR LINUX))
set(DEFAULT_ASAN ON)
set(DEFAULT_VALGRIND OFF)
else()
set(DEFAULT_ASAN OFF)
set(DEFAULT_VALGRIND OFF)
endif()
optionx(ENABLE_ASAN BOOL "If ASAN support should be enabled" DEFAULT ${DEFAULT_ASAN})

View File

@@ -2,6 +2,8 @@ include(PathUtils)
if(DEBUG)
set(bun bun-debug)
elseif(ENABLE_ASAN AND ENABLE_VALGRIND)
set(bun bun-asan-valgrind)
elseif(ENABLE_ASAN)
set(bun bun-asan)
elseif(ENABLE_VALGRIND)
@@ -42,6 +44,14 @@ else()
set(CONFIGURE_DEPENDS "")
endif()
set(LLVM_ZIG_CODEGEN_THREADS 0)
# This makes the build slower, so we turn it off for now.
# if (DEBUG)
# include(ProcessorCount)
# ProcessorCount(CPU_COUNT)
# set(LLVM_ZIG_CODEGEN_THREADS ${CPU_COUNT})
# endif()
# --- Dependencies ---
set(BUN_DEPENDENCIES
@@ -576,7 +586,13 @@ if (TEST)
set(BUN_ZIG_OUTPUT ${BUILD_PATH}/bun-test.o)
set(ZIG_STEPS test)
else()
set(BUN_ZIG_OUTPUT ${BUILD_PATH}/bun-zig.o)
if (LLVM_ZIG_CODEGEN_THREADS GREATER 1)
foreach(i RANGE ${LLVM_ZIG_CODEGEN_THREADS})
list(APPEND BUN_ZIG_OUTPUT ${BUILD_PATH}/bun-zig.${i}.o)
endforeach()
else()
set(BUN_ZIG_OUTPUT ${BUILD_PATH}/bun-zig.o)
endif()
set(ZIG_STEPS obj)
endif()
@@ -619,6 +635,8 @@ register_command(
-Dcpu=${ZIG_CPU}
-Denable_logs=$<IF:$<BOOL:${ENABLE_LOGS}>,true,false>
-Denable_asan=$<IF:$<BOOL:${ENABLE_ZIG_ASAN}>,true,false>
-Denable_valgrind=$<IF:$<BOOL:${ENABLE_VALGRIND}>,true,false>
-Dllvm_codegen_threads=${LLVM_ZIG_CODEGEN_THREADS}
-Dversion=${VERSION}
-Dreported_nodejs_version=${NODEJS_VERSION}
-Dcanary=${CANARY_REVISION}
@@ -886,12 +904,8 @@ if(NOT WIN32)
endif()
if(ENABLE_ASAN)
target_compile_options(${bun} PUBLIC
-fsanitize=address
)
target_link_libraries(${bun} PUBLIC
-fsanitize=address
)
target_compile_options(${bun} PUBLIC -fsanitize=address)
target_link_libraries(${bun} PUBLIC -fsanitize=address)
endif()
target_compile_options(${bun} PUBLIC
@@ -930,12 +944,8 @@ if(NOT WIN32)
)
if(ENABLE_ASAN)
target_compile_options(${bun} PUBLIC
-fsanitize=address
)
target_link_libraries(${bun} PUBLIC
-fsanitize=address
)
target_compile_options(${bun} PUBLIC -fsanitize=address)
target_link_libraries(${bun} PUBLIC -fsanitize=address)
endif()
endif()
else()
@@ -969,6 +979,7 @@ if(WIN32)
/delayload:WSOCK32.dll
/delayload:ADVAPI32.dll
/delayload:IPHLPAPI.dll
/delayload:CRYPT32.dll
)
endif()
endif()
@@ -1010,6 +1021,7 @@ if(LINUX)
-Wl,--wrap=exp2
-Wl,--wrap=expf
-Wl,--wrap=fcntl64
-Wl,--wrap=gettid
-Wl,--wrap=log
-Wl,--wrap=log2
-Wl,--wrap=log2f
@@ -1061,7 +1073,7 @@ if(LINUX)
)
endif()
if (NOT DEBUG AND NOT ENABLE_ASAN)
if (NOT DEBUG AND NOT ENABLE_ASAN AND NOT ENABLE_VALGRIND)
target_link_options(${bun} PUBLIC
-Wl,-icf=safe
)
@@ -1188,6 +1200,7 @@ if(WIN32)
ntdll
userenv
dbghelp
crypt32
wsock32 # ws2_32 required by TransmitFile aka sendfile on windows
delayimp.lib
)
@@ -1363,12 +1376,20 @@ if(NOT BUN_CPP_ONLY)
if(ENABLE_BASELINE)
set(bunTriplet ${bunTriplet}-baseline)
endif()
if(ENABLE_ASAN)
if (ENABLE_ASAN AND ENABLE_VALGRIND)
set(bunTriplet ${bunTriplet}-asan-valgrind)
set(bunPath ${bunTriplet})
elseif (ENABLE_VALGRIND)
set(bunTriplet ${bunTriplet}-valgrind)
set(bunPath ${bunTriplet})
elseif(ENABLE_ASAN)
set(bunTriplet ${bunTriplet}-asan)
set(bunPath ${bunTriplet})
else()
string(REPLACE bun ${bunTriplet} bunPath ${bun})
endif()
set(bunFiles ${bunExe} features.json)
if(WIN32)
list(APPEND bunFiles ${bun}.pdb)

View File

@@ -4,7 +4,8 @@ register_repository(
REPOSITORY
libuv/libuv
COMMIT
da527d8d2a908b824def74382761566371439003
# Corresponds to v1.51.0
5152db2cbfeb5582e9c27c5ea1dba2cd9e10759b
)
if(WIN32)

View File

@@ -181,12 +181,23 @@ function(generate_dependency_versions_header)
string(APPEND HEADER_CONTENT "}\n")
string(APPEND HEADER_CONTENT "#endif\n\n")
string(APPEND HEADER_CONTENT "#endif // BUN_DEPENDENCY_VERSIONS_H\n")
# Write the header file
# Write the header file only if content has changed
set(OUTPUT_FILE "${CMAKE_BINARY_DIR}/bun_dependency_versions.h")
file(WRITE "${OUTPUT_FILE}" "${HEADER_CONTENT}")
message(STATUS "Generated dependency versions header: ${OUTPUT_FILE}")
# Read existing content if file exists
set(EXISTING_CONTENT "")
if(EXISTS "${OUTPUT_FILE}")
file(READ "${OUTPUT_FILE}" EXISTING_CONTENT)
endif()
# Only write if content is different
if(NOT "${EXISTING_CONTENT}" STREQUAL "${HEADER_CONTENT}")
file(WRITE "${OUTPUT_FILE}" "${HEADER_CONTENT}")
message(STATUS "Updated dependency versions header: ${OUTPUT_FILE}")
else()
message(STATUS "Dependency versions header unchanged: ${OUTPUT_FILE}")
endif()
# Also create a more detailed version for debugging
set(DEBUG_OUTPUT_FILE "${CMAKE_BINARY_DIR}/bun_dependency_versions_debug.txt")

View File

@@ -131,6 +131,9 @@ else()
find_llvm_command(CMAKE_RANLIB llvm-ranlib)
if(LINUX)
find_llvm_command(LLD_PROGRAM ld.lld)
# Ensure vendor dependencies use lld instead of ld
list(APPEND CMAKE_ARGS -DCMAKE_EXE_LINKER_FLAGS=--ld-path=${LLD_PROGRAM})
list(APPEND CMAKE_ARGS -DCMAKE_SHARED_LINKER_FLAGS=--ld-path=${LLD_PROGRAM})
endif()
if(APPLE)
find_llvm_command(CMAKE_DSYMUTIL dsymutil)

View File

@@ -2,7 +2,7 @@ option(WEBKIT_VERSION "The version of WebKit to use")
option(WEBKIT_LOCAL "If a local version of WebKit should be used instead of downloading")
if(NOT WEBKIT_VERSION)
set(WEBKIT_VERSION 495c25e24927ba03277ae225cd42811588d03ff8)
set(WEBKIT_VERSION 69fa2714ab5f917c2d15501ff8cfdccfaea78882)
endif()
string(SUBSTRING ${WEBKIT_VERSION} 0 16 WEBKIT_VERSION_PREFIX)

View File

@@ -20,7 +20,7 @@ else()
unsupported(CMAKE_SYSTEM_NAME)
endif()
set(ZIG_COMMIT "e0b7c318f318196c5f81fdf3423816a7b5bb3112")
set(ZIG_COMMIT "55fdbfa0c86be86b68d43a4ba761e6909eb0d7b2")
optionx(ZIG_TARGET STRING "The zig target to use" DEFAULT ${DEFAULT_ZIG_TARGET})
if(CMAKE_BUILD_TYPE STREQUAL "Release")
@@ -90,6 +90,7 @@ register_command(
-DZIG_PATH=${ZIG_PATH}
-DZIG_COMMIT=${ZIG_COMMIT}
-DENABLE_ASAN=${ENABLE_ASAN}
-DENABLE_VALGRIND=${ENABLE_VALGRIND}
-DZIG_COMPILER_SAFE=${ZIG_COMPILER_SAFE}
-P ${CWD}/cmake/scripts/DownloadZig.cmake
SOURCES

View File

@@ -122,7 +122,7 @@
},
{
"name": "reporter",
"description": "Specify the test reporter. Currently --reporter=junit is the only supported format.",
"description": "Test output reporter format. Available: 'junit' (requires --reporter-outfile). Default: console output.",
"hasValue": true,
"valueType": "val",
"required": false,
@@ -130,7 +130,7 @@
},
{
"name": "reporter-outfile",
"description": "The output file used for the format from --reporter.",
"description": "Output file path for the reporter format (required with --reporter).",
"hasValue": true,
"valueType": "val",
"required": false,

View File

@@ -665,7 +665,6 @@ _bun_test_completion() {
'--timeout[Set the per-test timeout in milliseconds, default is 5000.]:timeout' \
'--update-snapshots[Update snapshot files]' \
'--rerun-each[Re-run each test file <NUMBER> times, helps catch certain bugs]:rerun' \
'--only[Only run tests that are marked with "test.only()"]' \
'--todo[Include tests that are marked with "test.todo()"]' \
'--coverage[Generate a coverage profile]' \
'--bail[Exit the test suite after <NUMBER> failures. If you do not specify a number, it defaults to 1.]:bail' \

View File

@@ -233,6 +233,7 @@ In addition to the standard fetch options, Bun provides several extensions:
```ts
const response = await fetch("http://example.com", {
// Control automatic response decompression (default: true)
// Supports gzip, deflate, brotli (br), and zstd
decompress: true,
// Disable connection reuse for this request
@@ -339,7 +340,7 @@ This will print the request and response headers to your terminal:
[fetch] > User-Agent: Bun/$BUN_LATEST_VERSION
[fetch] > Accept: */*
[fetch] > Host: example.com
[fetch] > Accept-Encoding: gzip, deflate, br
[fetch] > Accept-Encoding: gzip, deflate, br, zstd
[fetch] < 200 OK
[fetch] < Content-Encoding: gzip

View File

@@ -155,3 +155,24 @@ const glob = new Glob("\\!index.ts");
glob.match("!index.ts"); // => true
glob.match("index.ts"); // => false
```
## Node.js `fs.glob()` compatibility
Bun also implements Node.js's `fs.glob()` functions with additional features:
```ts
import { glob, globSync, promises } from "node:fs";
// Array of patterns
const files = await promises.glob(["**/*.ts", "**/*.js"]);
// Exclude patterns
const filtered = await promises.glob("**/*", {
exclude: ["node_modules/**", "*.test.*"],
});
```
All three functions (`fs.glob()`, `fs.globSync()`, `fs.promises.glob()`) support:
- Array of patterns as the first argument
- `exclude` option to filter results

View File

@@ -184,6 +184,7 @@ Bun.hash.rapidhash("data", 1234);
- `"blake2b256"`
- `"blake2b512"`
- `"blake2s256"`
- `"md4"`
- `"md5"`
- `"ripemd160"`

View File

@@ -88,6 +88,9 @@ await redis.set("user:1:name", "Alice");
// Get a key
const name = await redis.get("user:1:name");
// Get a key as Uint8Array
const buffer = await redis.getBuffer("user:1:name");
// Delete a key
await redis.del("user:1:name");
@@ -132,6 +135,10 @@ await redis.hmset("user:123", [
const userFields = await redis.hmget("user:123", ["name", "email"]);
console.log(userFields); // ["Alice", "alice@example.com"]
// Get single field from hash (returns value directly, null if missing)
const userName = await redis.hget("user:123", "name");
console.log(userName); // "Alice"
// Increment a numeric field in a hash
await redis.hincrby("user:123", "visits", 1);
@@ -161,6 +168,102 @@ const randomTag = await redis.srandmember("tags");
const poppedTag = await redis.spop("tags");
```
## Pub/Sub
Bun provides native bindings for the [Redis
Pub/Sub](https://redis.io/docs/latest/develop/pubsub/) protocol. **New in Bun
1.2.23**
{% callout %}
**🚧** — The Redis Pub/Sub feature is experimental. Although we expect it to be
stable, we're actively looking for feedback and areas for improvement.
{% /callout %}
### Basic Usage
To get started publishing messages, you can set up a publisher in
`publisher.ts`:
```typescript#publisher.ts
import { RedisClient } from "bun";
const writer = new RedisClient("redis://localhost:6379");
await writer.connect();
writer.publish("general", "Hello everyone!");
writer.close();
```
In another file, create the subscriber in `subscriber.ts`:
```typescript#subscriber.ts
import { RedisClient } from "bun";
const listener = new RedisClient("redis://localhost:6379");
await listener.connect();
await listener.subscribe("general", (message, channel) => {
console.log(`Received: ${message}`);
});
```
In one shell, run your subscriber:
```bash
bun run subscriber.ts
```
and, in another, run your publisher:
```bash
bun run publisher.ts
```
{% callout %}
**Note:** The subscription mode takes over the `RedisClient` connection. A
client with subscriptions can only call `RedisClient.prototype.subscribe()`. In
other words, applications which need to message Redis need a separate
connection, acquirable through `.duplicate()`:
```typescript
import { RedisClient } from "bun";
const redis = new RedisClient("redis://localhost:6379");
await redis.connect();
const subscriber = await redis.duplicate();
await subscriber.subscribe("foo", () => {});
await redis.set("bar", "baz");
```
{% /callout %}
### Publishing
Publishing messages is done through the `publish()` method:
```typescript
await client.publish(channelName, message);
```
### Subscriptions
The Bun `RedisClient` allows you to subscribe to channels through the
`.subscribe()` method:
```typescript
await client.subscribe(channel, (message, channel) => {});
```
You can unsubscribe through the `.unsubscribe()` method:
```typescript
await client.unsubscribe(); // Unsubscribe from all channels.
await client.unsubscribe(channel); // Unsubscribe a particular channel.
await client.unsubscribe(channel, listener); // Unsubscribe a particular listener.
```
## Advanced Usage
### Command Execution and Pipelining
@@ -482,9 +585,10 @@ When connecting to Redis servers using older versions that don't support RESP3,
Current limitations of the Redis client we are planning to address in future versions:
- [ ] No dedicated API for pub/sub functionality (though you can use the raw command API)
- [ ] Transactions (MULTI/EXEC) must be done through raw commands for now
- [ ] Streams are supported but without dedicated methods
- [ ] Pub/Sub does not currently support binary data, nor pattern-based
subscriptions.
Unsupported features:

View File

@@ -377,6 +377,22 @@ const users = [
await sql`SELECT * FROM users WHERE id IN ${sql(users, "id")}`;
```
### `sql.array` helper
The `sql.array` helper creates PostgreSQL array literals from JavaScript arrays:
```ts
// Create array literals for PostgreSQL
await sql`INSERT INTO tags (items) VALUES (${sql.array(["red", "blue", "green"])})`;
// Generates: INSERT INTO tags (items) VALUES (ARRAY['red', 'blue', 'green'])
// Works with numeric arrays too
await sql`SELECT * FROM products WHERE ids = ANY(${sql.array([1, 2, 3])})`;
// Generates: SELECT * FROM products WHERE ids = ANY(ARRAY[1, 2, 3])
```
**Note**: `sql.array` is PostgreSQL-only. Multi-dimensional arrays and NULL elements may not be supported yet.
## `sql``.simple()`
The PostgreSQL wire protocol supports two types of queries: "simple" and "extended". Simple queries can contain multiple statements but don't support parameters, while extended queries (the default) support parameters but only allow one statement.

View File

@@ -663,6 +663,8 @@ class Statement<Params, ReturnType> {
toString(): string; // serialize to SQL
columnNames: string[]; // the column names of the result set
columnTypes: string[]; // types based on actual values in first row (call .get()/.all() first)
declaredTypes: (string | null)[]; // types from CREATE TABLE schema (call .get()/.all() first)
paramsCount: number; // the number of parameters expected by the statement
native: any; // the native object representing the statement

View File

@@ -28,6 +28,20 @@ for await (const chunk of stream) {
}
```
`ReadableStream` also provides convenience methods for consuming the entire stream:
```ts
const stream = new ReadableStream({
start(controller) {
controller.enqueue("hello world");
controller.close();
},
});
const data = await stream.text(); // => "hello world"
// Also available: .json(), .bytes(), .blob()
```
## Direct `ReadableStream`
Bun implements an optimized version of `ReadableStream` that avoids unnecessary data copying & queue management logic. With a traditional `ReadableStream`, chunks of data are _enqueued_. Each chunk is copied into a queue, where it sits until the stream is ready to send more data.

View File

@@ -602,6 +602,40 @@ dec.decode(decompressed);
// => "hellohellohello..."
```
## `Bun.zstdCompress()` / `Bun.zstdCompressSync()`
Compresses a `Uint8Array` using the Zstandard algorithm.
```ts
const buf = Buffer.from("hello".repeat(100));
// Synchronous
const compressedSync = Bun.zstdCompressSync(buf);
// Asynchronous
const compressedAsync = await Bun.zstdCompress(buf);
// With compression level (1-22, default: 3)
const compressedLevel = Bun.zstdCompressSync(buf, { level: 6 });
```
## `Bun.zstdDecompress()` / `Bun.zstdDecompressSync()`
Decompresses a `Uint8Array` using the Zstandard algorithm.
```ts
const buf = Buffer.from("hello".repeat(100));
const compressed = Bun.zstdCompressSync(buf);
// Synchronous
const decompressedSync = Bun.zstdDecompressSync(compressed);
// Asynchronous
const decompressedAsync = await Bun.zstdDecompress(compressed);
const dec = new TextDecoder();
dec.decode(decompressedSync);
// => "hellohellohello..."
```
## `Bun.inspect()`
Serializes an object to a `string` exactly as it would be printed by `console.log`.

View File

@@ -279,6 +279,9 @@ Bun implements the `WebSocket` class. To create a WebSocket client that connects
```ts
const socket = new WebSocket("ws://localhost:3000");
// With subprotocol negotiation
const socket2 = new WebSocket("ws://localhost:3000", ["soap", "wamp"]);
```
In browsers, the cookies that are currently set on the page will be sent with the WebSocket upgrade request. This is a standard feature of the `WebSocket` API.
@@ -293,6 +296,17 @@ const socket = new WebSocket("ws://localhost:3000", {
});
```
### Client compression
WebSocket clients support permessage-deflate compression. The `extensions` property shows negotiated compression:
```ts
const socket = new WebSocket("wss://echo.websocket.org");
socket.addEventListener("open", () => {
console.log(socket.extensions); // => "permessage-deflate"
});
```
To add event listeners to the socket:
```ts

View File

@@ -282,6 +282,31 @@ const worker = new Worker("./i-am-smol.ts", {
Setting `smol: true` sets `JSC::HeapSize` to be `Small` instead of the default `Large`.
{% /details %}
## Environment Data
Share data between the main thread and workers using `setEnvironmentData()` and `getEnvironmentData()`.
```js
import { setEnvironmentData, getEnvironmentData } from "worker_threads";
// In main thread
setEnvironmentData("config", { apiUrl: "https://api.example.com" });
// In worker
const config = getEnvironmentData("config");
console.log(config); // => { apiUrl: "https://api.example.com" }
```
## Worker Events
Listen for worker creation events via the `"worker"` event on `process`:
```js
process.on("worker", worker => {
console.log("New worker created:", worker.threadId);
});
```
## `Bun.isMainThread`
You can check if you're in the main thread by checking `Bun.isMainThread`.

View File

@@ -3,6 +3,7 @@ In Bun, YAML is a first-class citizen alongside JSON and TOML.
Bun provides built-in support for YAML files through both runtime APIs and bundler integration. You can
- Parse YAML strings with `Bun.YAML.parse`
- Stringify JavaScript objects to YAML with `Bun.YAML.stringify`
- import & require YAML files as modules at runtime (including hot reloading & watch mode support)
- import & require YAML files in frontend apps via bun's bundler
@@ -104,7 +105,7 @@ const data = Bun.YAML.parse(yaml);
#### Error Handling
`Bun.YAML.parse()` throws a `SyntaxError` if the YAML is invalid:
`Bun.YAML.parse()` throws an error if the YAML is invalid:
```ts
try {
@@ -114,6 +115,175 @@ try {
}
```
### `Bun.YAML.stringify()`
Convert a JavaScript value into a YAML string. The API signature matches `JSON.stringify`:
```ts
YAML.stringify(value, replacer?, space?)
```
- `value`: The value to convert to YAML
- `replacer`: Currently only `null` or `undefined` (function replacers not yet supported)
- `space`: Number of spaces for indentation (e.g., `2`) or a string to use for indentation. **Without this parameter, outputs flow-style (single-line) YAML**
#### Basic Usage
```ts
import { YAML } from "bun";
const data = {
name: "John Doe",
age: 30,
hobbies: ["reading", "coding"],
};
// Without space - outputs flow-style (single-line) YAML
console.log(YAML.stringify(data));
// {name: John Doe,age: 30,hobbies: [reading,coding]}
// With space=2 - outputs block-style (multi-line) YAML
console.log(YAML.stringify(data, null, 2));
// name: John Doe
// age: 30
// hobbies:
// - reading
// - coding
```
#### Output Styles
```ts
const arr = [1, 2, 3];
// Flow style (single-line) - default
console.log(YAML.stringify(arr));
// [1,2,3]
// Block style (multi-line) - with indentation
console.log(YAML.stringify(arr, null, 2));
// - 1
// - 2
// - 3
```
#### String Quoting
`YAML.stringify()` automatically quotes strings when necessary:
- Strings that would be parsed as YAML keywords (`true`, `false`, `null`, `yes`, `no`, etc.)
- Strings that would be parsed as numbers
- Strings containing special characters or escape sequences
```ts
const examples = {
keyword: "true", // Will be quoted: "true"
number: "123", // Will be quoted: "123"
text: "hello world", // Won't be quoted: hello world
empty: "", // Will be quoted: ""
};
console.log(YAML.stringify(examples, null, 2));
// keyword: "true"
// number: "123"
// text: hello world
// empty: ""
```
#### Cycles and References
`YAML.stringify()` automatically detects and handles circular references using YAML anchors and aliases:
```ts
const obj = { name: "root" };
obj.self = obj; // Circular reference
const yamlString = YAML.stringify(obj, null, 2);
console.log(yamlString);
// &root
// name: root
// self:
// *root
// Objects with shared references
const shared = { id: 1 };
const data = {
first: shared,
second: shared,
};
console.log(YAML.stringify(data, null, 2));
// first:
// &first
// id: 1
// second:
// *first
```
#### Special Values
```ts
// Special numeric values
console.log(YAML.stringify(Infinity)); // .inf
console.log(YAML.stringify(-Infinity)); // -.inf
console.log(YAML.stringify(NaN)); // .nan
console.log(YAML.stringify(0)); // 0
console.log(YAML.stringify(-0)); // -0
// null and undefined
console.log(YAML.stringify(null)); // null
console.log(YAML.stringify(undefined)); // undefined (returns undefined, not a string)
// Booleans
console.log(YAML.stringify(true)); // true
console.log(YAML.stringify(false)); // false
```
#### Complex Objects
```ts
const config = {
server: {
port: 3000,
host: "localhost",
ssl: {
enabled: true,
cert: "/path/to/cert.pem",
key: "/path/to/key.pem",
},
},
database: {
connections: [
{ name: "primary", host: "db1.example.com" },
{ name: "replica", host: "db2.example.com" },
],
},
features: {
auth: true,
"rate-limit": 100, // Keys with special characters are preserved
},
};
const yamlString = YAML.stringify(config, null, 2);
console.log(yamlString);
// server:
// port: 3000
// host: localhost
// ssl:
// enabled: true
// cert: /path/to/cert.pem
// key: /path/to/key.pem
// database:
// connections:
// - name: primary
// host: db1.example.com
// - name: replica
// host: db2.example.com
// features:
// auth: true
// rate-limit: 100
```
## Module Import
### ES Modules

View File

@@ -140,6 +140,19 @@ The `--sourcemap` argument embeds a sourcemap compressed with zstd, so that erro
The `--bytecode` argument enables bytecode compilation. Every time you run JavaScript code in Bun, JavaScriptCore (the engine) will compile your source code into bytecode. We can move this parsing work from runtime to bundle time, saving you startup time.
## Embedding runtime arguments
**`--compile-exec-argv="args"`** - Embed runtime arguments that are available via `process.execArgv`:
```bash
bun build --compile --compile-exec-argv="--smol --user-agent=MyBot" ./app.ts --outfile myapp
```
```js
// In the compiled app
console.log(process.execArgv); // ["--smol", "--user-agent=MyBot"]
```
## Act as the Bun CLI
{% note %}

View File

@@ -313,6 +313,14 @@ $ bun build --entrypoints ./index.ts --outdir ./out --target browser
Depending on the target, Bun will apply different module resolution rules and optimizations.
### Module resolution
Bun supports the `NODE_PATH` environment variable for additional module resolution paths:
```bash
NODE_PATH=./src bun build ./entry.js --outdir ./dist
```
<!-- - Module resolution. For example, when bundling for the browser, Bun will prioritize the `"browser"` export condition when resolving imports. An error will be thrown if any Node.js or Bun built-ins are imported or used, e.g. `node:fs` or `Bun.serve`. -->
{% table %}
@@ -392,6 +400,55 @@ $ bun build ./index.tsx --outdir ./out --format cjs
TODO: document IIFE once we support globalNames.
### `jsx`
Configure JSX transform behavior. Allows fine-grained control over how JSX is compiled.
**Classic runtime example** (uses `factory` and `fragment`):
{% codetabs %}
```ts#JavaScript
await Bun.build({
entrypoints: ['./app.tsx'],
outdir: './out',
jsx: {
factory: 'h',
fragment: 'Fragment',
runtime: 'classic',
},
})
```
```bash#CLI
# JSX configuration is handled via bunfig.toml or tsconfig.json
$ bun build ./app.tsx --outdir ./out
```
{% /codetabs %}
**Automatic runtime example** (uses `importSource`):
{% codetabs %}
```ts#JavaScript
await Bun.build({
entrypoints: ['./app.tsx'],
outdir: './out',
jsx: {
importSource: 'preact',
runtime: 'automatic',
},
})
```
```bash#CLI
# JSX configuration is handled via bunfig.toml or tsconfig.json
$ bun build ./app.tsx --outdir ./out
```
{% /codetabs %}
### `splitting`
Whether to enable code splitting.
@@ -1519,6 +1576,15 @@ interface BuildConfig {
* @default "esm"
*/
format?: "esm" | "cjs" | "iife";
/**
* JSX configuration object for controlling JSX transform behavior
*/
jsx?: {
factory?: string;
fragment?: string;
importSource?: string;
runtime?: "automatic" | "classic";
};
naming?:
| string
| {

View File

@@ -176,7 +176,21 @@ When a `bun.lock` exists and `package.json` hasn’t changed, Bun downloads miss
## Platform-specific dependencies?
bun stores normalized `cpu` and `os` values from npm in the lockfile, along with the resolved packages. It skips downloading, extracting, and installing packages disabled for the current target at runtime. This means the lockfile won’t change between platforms/architectures even if the packages ultimately installed do change.
bun stores normalized `cpu` and `os` values from npm in the lockfile, along with the resolved packages. It skips downloading, extracting, and installing packages disabled for the current target at runtime. This means the lockfile won't change between platforms/architectures even if the packages ultimately installed do change.
### `--cpu` and `--os` flags
You can override the target platform for package selection:
```bash
bun install --cpu=x64 --os=linux
```
This installs packages for the specified platform instead of the current system. Useful for cross-platform builds or when preparing deployments for different environments.
**Accepted values for `--cpu`**: `arm64`, `x64`, `ia32`, `ppc64`, `s390x`
**Accepted values for `--os`**: `linux`, `darwin`, `win32`, `freebsd`, `openbsd`, `sunos`, `aix`
## Peer dependencies?
@@ -245,3 +259,91 @@ bun uses a binary format for caching NPM registry responses. This loads much fas
You will see these files in `~/.bun/install/cache/*.npm`. The filename pattern is `${hash(packageName)}.npm`. It’s a hash so that extra directories don’t need to be created for scoped packages.
Bun's usage of `Cache-Control` ignores `Age`. This improves performance, but means bun may be up to about 5 minutes behind the latest package version metadata from npm.
## pnpm migration
Bun automatically migrates projects from pnpm to bun. When a `pnpm-lock.yaml` file is detected and no `bun.lock` file exists, Bun will automatically migrate the lockfile to `bun.lock` during installation. The original `pnpm-lock.yaml` file remains unmodified.
```bash
bun install
```
**Note**: Migration only runs when `bun.lock` is absent. There is currently no opt-out flag for pnpm migration.
The migration process handles:
### Lockfile Migration
- Converts `pnpm-lock.yaml` to `bun.lock` format
- Preserves package versions and resolution information
- Maintains dependency relationships and peer dependencies
- Handles patched dependencies with integrity hashes
### Workspace Configuration
When a `pnpm-workspace.yaml` file exists, Bun migrates workspace settings to your root `package.json`:
```yaml
# pnpm-workspace.yaml
packages:
- "apps/*"
- "packages/*"
catalog:
react: ^18.0.0
typescript: ^5.0.0
catalogs:
build:
webpack: ^5.0.0
babel: ^7.0.0
```
The workspace packages list and catalogs are moved to the `workspaces` field in `package.json`:
```json
{
"workspaces": {
"packages": ["apps/*", "packages/*"],
"catalog": {
"react": "^18.0.0",
"typescript": "^5.0.0"
},
"catalogs": {
"build": {
"webpack": "^5.0.0",
"babel": "^7.0.0"
}
}
}
}
```
### Catalog Dependencies
Dependencies using pnpm's `catalog:` protocol are preserved:
```json
{
"dependencies": {
"react": "catalog:",
"webpack": "catalog:build"
}
}
```
### Configuration Migration
The following pnpm configuration is migrated from both `pnpm-lock.yaml` and `pnpm-workspace.yaml`:
- **Overrides**: Moved from `pnpm.overrides` to root-level `overrides` in `package.json`
- **Patched Dependencies**: Moved from `pnpm.patchedDependencies` to root-level `patchedDependencies` in `package.json`
- **Workspace Overrides**: Applied from `pnpm-workspace.yaml` to root `package.json`
### Requirements
- Requires pnpm lockfile version 7 or higher
- Workspace packages must have a `name` field in their `package.json`
- All catalog entries referenced by dependencies must exist in the catalogs definition
After migration, you can safely remove `pnpm-lock.yaml` and `pnpm-workspace.yaml` files.

View File

@@ -63,6 +63,15 @@ $ bunx --bun my-cli # good
$ bunx my-cli --bun # bad
```
## Package flag
**`--package <pkg>` or `-p <pkg>`** - Run binary from specific package. Useful when binary name differs from package name:
```bash
bunx -p renovate renovate-config-validator
bunx --package @angular/cli ng
```
To force bun to always be used with a script, use a shebang.

View File

@@ -33,6 +33,11 @@ It creates:
- an entry point which defaults to `index.ts` unless any of `index.{tsx, jsx, js, mts, mjs}` exist or the `package.json` specifies a `module` or `main` field
- a `README.md` file
AI Agent rules (disable with `$BUN_AGENT_RULE_DISABLED=1`):
- a `CLAUDE.md` file when Claude CLI is detected (disable with `CLAUDE_CODE_AGENT_RULE_DISABLED` env var)
- a `.cursor/rules/*.mdc` file to guide [Cursor AI](https://cursor.sh) to use Bun instead of Node.js and npm when Cursor is detected
If you pass `-y` or `--yes`, it will assume you want to continue without asking questions.
At the end, it runs `bun install` to install `@types/bun`.

View File

@@ -44,4 +44,47 @@ You can also pass glob patterns to filter by workspace names:
{% bunOutdatedTerminal glob="{e,t}*" displayGlob="--filter='@monorepo/{types,cli}'" /%}
### Catalog Dependencies
`bun outdated` supports checking catalog dependencies defined in `package.json`:
```sh
$ bun outdated -r
┌────────────────────┬─────────┬─────────┬─────────┬────────────────────────────────┐
│ Package            │ Current │ Update  │ Latest  │ Workspace                      │
├────────────────────┼─────────┼─────────┼─────────┼────────────────────────────────┤
│ body-parser        │ 1.19.0  │ 1.19.0  │ 2.2.0   │ @test/shared                   │
├────────────────────┼─────────┼─────────┼─────────┼────────────────────────────────┤
│ cors               │ 2.8.0   │ 2.8.0   │ 2.8.5   │ @test/shared                   │
├────────────────────┼─────────┼─────────┼─────────┼────────────────────────────────┤
│ chalk              │ 4.0.0   │ 4.0.0   │ 5.6.2   │ @test/utils                    │
├────────────────────┼─────────┼─────────┼─────────┼────────────────────────────────┤
│ uuid               │ 8.0.0   │ 8.0.0   │ 13.0.0  │ @test/utils                    │
├────────────────────┼─────────┼─────────┼─────────┼────────────────────────────────┤
│ axios              │ 0.21.0  │ 0.21.0  │ 1.12.2  │ catalog (@test/app)            │
├────────────────────┼─────────┼─────────┼─────────┼────────────────────────────────┤
│ lodash             │ 4.17.15 │ 4.17.15 │ 4.17.21 │ catalog (@test/app, @test/app) │
├────────────────────┼─────────┼─────────┼─────────┼────────────────────────────────┤
│ react              │ 17.0.0  │ 17.0.0  │ 19.1.1  │ catalog (@test/app)            │
├────────────────────┼─────────┼─────────┼─────────┼────────────────────────────────┤
│ react-dom          │ 17.0.0  │ 17.0.0  │ 19.1.1  │ catalog (@test/app)            │
├────────────────────┼─────────┼─────────┼─────────┼────────────────────────────────┤
│ express            │ 4.17.0  │ 4.17.0  │ 5.1.0   │ catalog (@test/shared)         │
├────────────────────┼─────────┼─────────┼─────────┼────────────────────────────────┤
│ moment             │ 2.24.0  │ 2.24.0  │ 2.30.1  │ catalog (@test/utils)          │
├────────────────────┼─────────┼─────────┼─────────┼────────────────────────────────┤
│ @types/node (dev)  │ 14.0.0  │ 14.0.0  │ 24.5.2  │ @test/shared                   │
├────────────────────┼─────────┼─────────┼─────────┼────────────────────────────────┤
│ @types/react (dev) │ 17.0.0  │ 17.0.0  │ 19.1.15 │ catalog:testing (@test/app)    │
├────────────────────┼─────────┼─────────┼─────────┼────────────────────────────────┤
│ eslint (dev)       │ 7.0.0   │ 7.0.0   │ 9.36.0  │ catalog:testing (@test/app)    │
├────────────────────┼─────────┼─────────┼─────────┼────────────────────────────────┤
│ typescript (dev)   │ 4.9.5   │ 4.9.5   │ 5.9.2   │ catalog:build (@test/app)      │
├────────────────────┼─────────┼─────────┼─────────┼────────────────────────────────┤
│ jest (dev)         │ 26.0.0  │ 26.0.0  │ 30.2.0  │ catalog:testing (@test/shared) │
├────────────────────┼─────────┼─────────┼─────────┼────────────────────────────────┤
│ prettier (dev)     │ 2.0.0   │ 2.0.0   │ 3.6.2   │ catalog:build (@test/utils)    │
└────────────────────┴─────────┴─────────┴─────────┴────────────────────────────────┘
```
{% bunCLIUsage command="outdated" /%}

View File

@@ -82,6 +82,16 @@ The `--dry-run` flag can be used to simulate the publish process without actuall
$ bun publish --dry-run
```
### `--tolerate-republish`
The `--tolerate-republish` flag makes `bun publish` exit with code 0 instead of code 1 when attempting to republish over an existing version number. This is useful in automated workflows where republishing the same version might occur and should not be treated as an error.
```sh
$ bun publish --tolerate-republish
```
Without this flag, attempting to publish a version that already exists will result in an error and exit code 1. With this flag, the command will exit successfully even when trying to republish an existing version.
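As a sketch of how a CI step might rely on this behavior (using `Bun.spawnSync`; the error message is illustrative):
```ts
// Republishing an existing version exits 0 with --tolerate-republish.
const proc = Bun.spawnSync(["bun", "publish", "--tolerate-republish"]);
if (proc.exitCode !== 0) {
  throw new Error("publish failed for a reason other than an existing version");
}
```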
### `--gzip-level`
Specify the level of gzip compression to use when packing the package. Only applies to `bun publish` without a tarball path argument. Values range from `0` to `9` (default is `9`).

View File

@@ -151,6 +151,14 @@ By default, Bun respects this shebang and executes the script with `node`. Howev
$ bun run --bun vite
```
### `--no-addons`
Disable native addons and use the `node-addons` export condition.
```bash
$ bun --no-addons run server.js
```
### Filtering
In monorepos containing multiple packages, you can use the `--filter` argument to execute scripts in many packages at once.
@@ -166,6 +174,14 @@ will execute `<script>` in both `bar` and `baz`, but not in `foo`.
Find more details in the docs page for [filter](https://bun.com/docs/cli/filter#running-scripts-with-filter).
### `--workspaces`
Run scripts across all workspaces in the monorepo:
```bash
bun run --workspaces test
```
## `bun run -` to pipe code from stdin
`bun run -` lets you read JavaScript, TypeScript, TSX, or JSX from stdin and execute it without writing to a temporary file first.
@@ -212,6 +228,14 @@ $ bun --smol run index.tsx
This causes the garbage collector to run more frequently, which can slow down execution. However, it can be useful in environments with limited memory. Bun automatically adjusts the garbage collector's heap size based on the available memory (accounting for cgroups and other memory limits) with and without the `--smol` flag, so this is mostly useful for cases where you want to make the heap size grow more slowly.
## `--user-agent`
**`--user-agent <string>`** - Set User-Agent header for all `fetch()` requests:
```bash
bun --user-agent "MyBot/1.0" run index.tsx
```
## Resolution order
Absolute paths and paths starting with `./` or `.\\` are always executed as source files. Unless using `bun run`, running a file with an allowed extension will prefer the file over a package.json script.
@@ -223,4 +247,15 @@ When there is a package.json script and a file with the same name, `bun run` pri
3. Binaries from project packages, eg `bun add eslint && bun run eslint`
4. (`bun run` only) System commands, eg `bun run ls`
### `--unhandled-rejections`
Configure how unhandled promise rejections are handled:
```bash
$ bun --unhandled-rejections=throw script.js # Throw exception (terminate immediately)
$ bun --unhandled-rejections=strict script.js # Throw exception (emit rejectionHandled if handled later)
$ bun --unhandled-rejections=warn script.js # Print warning to stderr (default in Node.js)
$ bun --unhandled-rejections=none script.js # Silently ignore
```
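A one-line script makes the modes easy to compare (a sketch; run it under each flag):
```ts
// unhandled.ts — a rejection that nothing ever handles
Promise.reject(new Error("boom"));
// throw/strict: the process throws and terminates
// warn: a warning is printed to stderr and execution continues
// none: the rejection is silently ignored
```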
{% bunCLIUsage command="run" /%}

View File

@@ -47,6 +47,8 @@ To filter by _test name_, use the `-t`/`--test-name-pattern` flag.
$ bun test --test-name-pattern addition
```
When no tests match the filter, `bun test` exits with code 1.
To run a specific file in the test runner, make sure the path starts with `./` or `/` to distinguish it from a filter name.
```bash
@@ -109,6 +111,90 @@ Use the `--timeout` flag to specify a _per-test_ timeout in milliseconds. If a t
$ bun test --timeout 20
```
## Concurrent test execution
By default, Bun runs all tests sequentially within each test file. You can enable concurrent execution to run async tests in parallel, significantly speeding up test suites with independent tests.
### `--concurrent` flag
Use the `--concurrent` flag to run all tests concurrently within their respective files:
```sh
$ bun test --concurrent
```
When this flag is enabled, all tests will run in parallel unless explicitly marked with `test.serial`.
### `--max-concurrency` flag
Control the maximum number of tests running simultaneously with the `--max-concurrency` flag:
```sh
# Limit to 4 concurrent tests
$ bun test --concurrent --max-concurrency 4
# Default: 20
$ bun test --concurrent
```
This helps prevent resource exhaustion when running many concurrent tests. The default value is 20.
### `test.concurrent`
Mark individual tests to run concurrently, even when the `--concurrent` flag is not used:
```ts
import { test, expect } from "bun:test";
// These tests run in parallel with each other
test.concurrent("concurrent test 1", async () => {
await fetch("/api/endpoint1");
expect(true).toBe(true);
});
test.concurrent("concurrent test 2", async () => {
await fetch("/api/endpoint2");
expect(true).toBe(true);
});
// This test runs sequentially
test("sequential test", () => {
expect(1 + 1).toBe(2);
});
```
### `test.serial`
Force tests to run sequentially, even when the `--concurrent` flag is enabled:
```ts
import { test, expect } from "bun:test";
let sharedState = 0;
// These tests must run in order
test.serial("first serial test", () => {
sharedState = 1;
expect(sharedState).toBe(1);
});
test.serial("second serial test", () => {
// Depends on the previous test
expect(sharedState).toBe(1);
sharedState = 2;
});
// This test can run concurrently if --concurrent is enabled
test("independent test", () => {
expect(true).toBe(true);
});
// Chaining test qualifiers
test.failing.each([1, 2, 3])("chained qualifiers %d", input => {
expect(input).toBe(0); // This test is expected to fail for each input
});
```
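The `concurrent` and `serial` qualifiers are also available on `describe` groups; a minimal sketch:
```ts
import { describe, test, expect } from "bun:test";

// Every test in this group runs one after another,
// even when the --concurrent flag is enabled.
describe.serial("database migrations", () => {
  test("apply schema", () => {
    expect(1 + 1).toBe(2);
  });
  test("seed data", () => {
    expect(2 + 2).toBe(4);
  });
});
```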
## Rerun tests
Use the `--rerun-each` flag to run each test multiple times. This is useful for detecting flaky or non-deterministic test failures.
@@ -117,6 +203,36 @@ Use the `--rerun-each` flag to run each test multiple times. This is useful for
$ bun test --rerun-each 100
```
## Randomize test execution order
Use the `--randomize` flag to run tests in a random order. This helps detect tests that depend on shared state or execution order.
```sh
$ bun test --randomize
```
When using `--randomize`, the seed used for randomization will be displayed in the test summary:
```sh
$ bun test --randomize
# ... test output ...
--seed=12345
2 pass
8 fail
Ran 10 tests across 2 files. [50.00ms]
```
### Reproducible random order with `--seed`
Use the `--seed` flag to specify a seed for the randomization. This allows you to reproduce the same test order when debugging order-dependent failures.
```sh
# Reproduce a previous randomized run
$ bun test --seed 123456
```
The `--seed` flag implies `--randomize`, so you don't need to specify both. Using the same seed value will always produce the same test execution order, making it easier to debug intermittent failures caused by test interdependencies.
## Bail out with `--bail`
Use the `--bail` flag to abort the test run early after a pre-determined number of test failures. By default Bun will run all tests and report all failures, but sometimes in CI environments it's preferable to terminate earlier to reduce CPU usage.

View File

@@ -90,6 +90,17 @@ Packages are organized in sections by dependency type:
Within each section, individual packages may have additional suffixes (` dev`, ` peer`, ` optional`) for extra clarity.
## `--recursive`
Use the `--recursive` flag with `--interactive` to update dependencies across all workspaces in a monorepo:
```sh
$ bun update --interactive --recursive
$ bun update -i -r
```
This displays an additional "Workspace" column showing which workspace each dependency belongs to.
## `--latest`
By default, `bun update` will update to the latest version of a dependency that satisfies the version range specified in your `package.json`.

View File

@@ -24,6 +24,26 @@ To update all dependencies to the latest versions (including breaking changes):
bun update --latest
```
### Filtering options
**`--audit-level=<low|moderate|high|critical>`** - Only show vulnerabilities at this severity level or higher:
```bash
bun audit --audit-level=high
```
**`--prod`** - Audit only production dependencies (excludes devDependencies):
```bash
bun audit --prod
```
**`--ignore <CVE>`** - Ignore specific CVEs (can be used multiple times):
```bash
bun audit --ignore CVE-2022-25883 --ignore CVE-2023-26136
```
### `--json`
Use the `--json` flag to print the raw JSON response from the registry instead of the formatted report:

View File

@@ -46,3 +46,13 @@ print = "yarn"
Bun v1.2 changed the default lockfile format to the text-based `bun.lock`. Existing binary `bun.lockb` lockfiles can be migrated to the new format by running `bun install --save-text-lockfile --frozen-lockfile --lockfile-only` and deleting `bun.lockb`.
More information about the new lockfile format can be found on [our blogpost](https://bun.com/blog/bun-lock-text-lockfile).
#### Automatic lockfile migration
When running `bun install` in a project without a `bun.lock`, Bun automatically migrates existing lockfiles:
- `yarn.lock` (v1)
- `package-lock.json` (npm)
- `pnpm-lock.yaml` (pnpm)
The original lockfile is preserved and can be removed manually after verification.

View File

@@ -73,3 +73,33 @@ The equivalent `bunfig.toml` option is to add a key in [`install.scopes`](https:
[install.scopes]
myorg = { url = "http://localhost:4873/", username = "myusername", password = "$NPM_PASSWORD" }
```
### `link-workspace-packages`: Control workspace package installation
Controls how workspace packages are installed when available locally:
```ini
link-workspace-packages=true
```
The equivalent `bunfig.toml` option is [`install.linkWorkspacePackages`](https://bun.com/docs/runtime/bunfig#install-linkworkspacepackages):
```toml
[install]
linkWorkspacePackages = true
```
### `save-exact`: Save exact versions
Always saves exact versions without the `^` prefix:
```ini
save-exact=true
```
The equivalent `bunfig.toml` option is [`install.exact`](https://bun.com/docs/runtime/bunfig#install-exact):
```toml
[install]
exact = true
```

View File

@@ -81,7 +81,7 @@ Workspaces have a couple major benefits.
- **Code can be split into logical parts.** If one package relies on another, you can simply add it as a dependency in `package.json`. If package `b` depends on `a`, `bun install` will install your local `packages/a` directory into `node_modules` instead of downloading it from the npm registry.
- **Dependencies can be de-duplicated.** If `a` and `b` share a common dependency, it will be _hoisted_ to the root `node_modules` directory. This reduces redundant disk usage and minimizes "dependency hell" issues associated with having multiple versions of a package installed simultaneously.
- **Run scripts in multiple packages.** You can use the [`--filter` flag](https://bun.com/docs/cli/filter) to easily run `package.json` scripts in multiple packages in your workspace, or `--workspaces` to run scripts across all workspaces.
## Share versions with Catalogs

View File

@@ -359,7 +359,7 @@ export default {
page("api/file-io", "File I/O", {
description: `Read and write files fast with Bun's heavily optimized file system API.`,
}), // "`Bun.write`"),
page("api/redis", "Redis client", {
page("api/redis", "Redis Client", {
description: `Bun provides a fast, native Redis client with automatic command pipelining for better performance.`,
}),
page("api/import-meta", "import.meta", {

View File

@@ -232,6 +232,23 @@ Set path where coverage reports will be saved. Please notice, that it works only
coverageDir = "path/to/somewhere" # default "coverage"
```
### `test.concurrentTestGlob`
Specify a glob pattern to automatically run matching test files with concurrent test execution enabled. Test files matching this pattern will behave as if the `--concurrent` flag was passed, running all tests within those files concurrently.
```toml
[test]
concurrentTestGlob = "**/concurrent-*.test.ts"
```
This is useful for:
- Gradually migrating test suites to concurrent execution
- Running integration tests concurrently while keeping unit tests sequential
- Separating fast concurrent tests from tests that require sequential execution
The `--concurrent` CLI flag will override this setting when specified.
## Package manager
Package management is a complex issue; to support a range of use cases, the behavior of `bun install` can be configured under the `[install]` section.

View File

@@ -220,6 +220,11 @@ These environment variables are read by Bun and configure aspects of its behavio
- `DO_NOT_TRACK`
- Disable uploading crash reports to `bun.report` on crash. On macOS & Windows, crash report uploads are enabled by default. Otherwise, telemetry is not sent yet as of May 21st, 2024, but we are planning to add telemetry in the coming weeks. If `DO_NOT_TRACK=1`, then auto-uploading crash reports and telemetry are both [disabled](https://do-not-track.dev/).
---
- `BUN_OPTIONS`
- Prepends command-line arguments to any Bun execution. For example, `BUN_OPTIONS="--hot"` makes `bun run dev` behave like `bun --hot run dev`.
{% /table %}
## Runtime transpiler caching

View File

@@ -124,7 +124,7 @@ This page is updated regularly to reflect compatibility status of the latest ver
### [`node:perf_hooks`](https://nodejs.org/api/perf_hooks.html)
🟡 APIs are implemented, but Node.js test suite does not pass yet for this module.
### [`node:process`](https://nodejs.org/api/process.html)
@@ -156,7 +156,7 @@ This page is updated regularly to reflect compatibility status of the latest ver
### [`node:worker_threads`](https://nodejs.org/api/worker_threads.html)
🟡 `Worker` doesn't support the following options: `stdin` `stdout` `stderr` `trackedUnmanagedFds` `resourceLimits`. Missing `markAsUntransferable` `moveMessagePortToContext`.
### [`node:inspector`](https://nodejs.org/api/inspector.html)

View File

@@ -46,6 +46,25 @@ smol = true # Reduce memory usage during test runs
This is equivalent to using the `--smol` flag on the command line.
### Test execution
#### concurrentTestGlob
Automatically run test files matching a glob pattern with concurrent test execution enabled. This is useful for gradually migrating test suites to concurrent execution or for running specific test types concurrently.
```toml
[test]
concurrentTestGlob = "**/concurrent-*.test.ts" # Run files matching this pattern concurrently
```
Test files matching this pattern will behave as if the `--concurrent` flag was passed, running all tests within those files concurrently. This allows you to:
- Gradually migrate your test suite to concurrent execution
- Run integration tests concurrently while keeping unit tests sequential
- Separate fast concurrent tests from tests that require sequential execution
The `--concurrent` CLI flag will override this setting when specified, forcing all tests to run concurrently regardless of the glob pattern.
### Coverage options
In addition to the options documented in the [coverage documentation](./coverage.md), the following options are available:

View File

@@ -0,0 +1,132 @@
# Concurrent Test Glob Example
This example demonstrates how to use the `concurrentTestGlob` option to selectively run tests concurrently based on file naming patterns.
## Project Structure
```text
my-project/
├── bunfig.toml
├── tests/
│ ├── unit/
│ │ ├── math.test.ts # Sequential
│ │ └── utils.test.ts # Sequential
│ └── integration/
│ ├── concurrent-api.test.ts # Concurrent
│ └── concurrent-database.test.ts # Concurrent
```
## Configuration
### bunfig.toml
```toml
[test]
# Run all test files with "concurrent-" prefix concurrently
concurrentTestGlob = "**/concurrent-*.test.ts"
```
## Test Files
### Unit Test (Sequential)
`tests/unit/math.test.ts`
```typescript
import { test, expect } from "bun:test";
// These tests run sequentially by default
// Good for tests that share state or have specific ordering requirements
let sharedState = 0;
test("addition", () => {
sharedState = 5 + 3;
expect(sharedState).toBe(8);
});
test("uses previous state", () => {
// This test depends on the previous test's state
expect(sharedState).toBe(8);
});
```
### Integration Test (Concurrent)
`tests/integration/concurrent-api.test.ts`
```typescript
import { test, expect } from "bun:test";
// These tests automatically run concurrently due to filename matching the glob pattern.
// Using test() is equivalent to test.concurrent() when the file matches concurrentTestGlob.
// Each test is independent and can run in parallel.
test("fetch user data", async () => {
const response = await fetch("/api/user/1");
expect(response.ok).toBe(true);
});
test("fetch posts", async () => {
const response = await fetch("/api/posts");
expect(response.ok).toBe(true);
});
test("fetch comments", async () => {
const response = await fetch("/api/comments");
expect(response.ok).toBe(true);
});
```
## Running Tests
```bash
# Run all tests - concurrent-*.test.ts files will run concurrently
bun test
# Override: Force ALL tests to run concurrently
# Note: This overrides bunfig.toml and runs all tests concurrently, regardless of glob
bun test --concurrent
# Run only unit tests (sequential)
bun test tests/unit
# Run only integration tests (concurrent due to glob pattern)
bun test tests/integration
```
## Benefits
1. **Gradual Migration**: Migrate to concurrent tests file by file by renaming them
2. **Clear Organization**: File naming convention indicates execution mode
3. **Performance**: Integration tests run faster in parallel
4. **Safety**: Unit tests remain sequential where needed
5. **Flexibility**: Easy to change execution mode by renaming files
## Migration Strategy
To migrate existing tests to concurrent execution:
1. Start with independent integration tests
2. Rename files to match the glob pattern: `mv api.test.ts concurrent-api.test.ts`
3. Verify tests still pass
4. Monitor for race conditions or shared state issues
5. Continue migrating stable tests incrementally
## Tips
- Use descriptive prefixes: `concurrent-`, `parallel-`, `async-`
- Keep related sequential tests together
- Document why certain tests must remain sequential
- Use `test.concurrent()` for fine-grained control in sequential files
(In files matched by `concurrentTestGlob`, plain `test()` already runs concurrently)
- Consider separate globs for different test types:
```toml
[test]
# Multiple patterns for different test categories
concurrentTestGlob = [
"**/integration/*.test.ts",
"**/e2e/*.test.ts",
"**/concurrent-*.test.ts"
]
```

View File

@@ -149,12 +149,6 @@ describe.only("only", () => {
The following command will only execute tests #2 and #3.
```sh
$ bun test --only
```
The following command will only execute tests #1, #2 and #3.
```sh
$ bun test
```

View File

@@ -8,6 +8,8 @@
# Thread::initializePlatformThreading() in ThreadingPOSIX.cpp) to the JS thread to suspend or resume
# it. So stopping the process would just create noise when debugging any long-running script.
process handle -p true -s false -n false SIGPWR
process handle -p true -s false -n false SIGUSR1
process handle -p true -s false -n false SIGUSR2
command script import -c lldb_pretty_printers.py
type category enable zig.lang

View File

@@ -78,6 +78,12 @@
"no-empty-file": "off",
"no-unnecessary-await": "off"
}
},
{
"files": ["src/js/builtins/**"],
"rules": {
"no-unused-expressions": "off"
}
}
]
}

View File

@@ -1,7 +1,7 @@
{
"private": true,
"name": "bun",
"version": "1.2.23",
"version": "1.2.24",
"workspaces": [
"./packages/bun-types",
"./packages/@types/bun"
@@ -33,7 +33,7 @@
"bd:v": "(bun run --silent build:debug &> /tmp/bun.debug.build.log || (cat /tmp/bun.debug.build.log && rm -rf /tmp/bun.debug.build.log && exit 1)) && rm -f /tmp/bun.debug.build.log && ./build/debug/bun-debug",
"bd": "BUN_DEBUG_QUIET_LOGS=1 bun --silent bd:v",
"build:debug": "export COMSPEC=\"C:\\Windows\\System32\\cmd.exe\" && bun ./scripts/build.mjs -GNinja -DCMAKE_BUILD_TYPE=Debug -B build/debug --log-level=NOTICE",
"build:debug:asan": "bun ./scripts/build.mjs -GNinja -DCMAKE_BUILD_TYPE=Debug -DENABLE_ASAN=ON -B build/debug-asan --log-level=NOTICE",
"build:debug:noasan": "export COMSPEC=\"C:\\Windows\\System32\\cmd.exe\" && bun ./scripts/build.mjs -GNinja -DCMAKE_BUILD_TYPE=Debug -DENABLE_ASAN=OFF -B build/debug --log-level=NOTICE",
"build:release": "bun ./scripts/build.mjs -GNinja -DCMAKE_BUILD_TYPE=Release -B build/release",
"build:ci": "bun ./scripts/build.mjs -GNinja -DCMAKE_BUILD_TYPE=Release -DCMAKE_VERBOSE_MAKEFILE=ON -DCI=true -B build/release-ci --verbose --fresh",
"build:assert": "bun ./scripts/build.mjs -GNinja -DCMAKE_BUILD_TYPE=RelWithDebInfo -DENABLE_ASSERTIONS=ON -DENABLE_LOGS=ON -B build/release-assert",
@@ -84,6 +84,11 @@
"node:test": "node ./scripts/runner.node.mjs --quiet --exec-path=$npm_execpath --node-tests ",
"node:test:cp": "bun ./scripts/fetch-node-test.ts ",
"clean:zig": "rm -rf build/debug/cache/zig build/debug/CMakeCache.txt 'build/debug/*.o' .zig-cache zig-out || true",
"machine:linux:ubuntu": "./scripts/machine.mjs ssh --cloud=aws --arch=x64 --instance-type c7i.2xlarge --os=linux --distro=ubuntu --release=25.04",
"machine:linux:debian": "./scripts/machine.mjs ssh --cloud=aws --arch=x64 --instance-type c7i.2xlarge --os=linux --distro=debian --release=12",
"machine:linux:alpine": "./scripts/machine.mjs ssh --cloud=aws --arch=x64 --instance-type c7i.2xlarge --os=linux --distro=alpine --release=3.21",
"machine:linux:amazonlinux": "./scripts/machine.mjs ssh --cloud=aws --arch=x64 --instance-type c7i.2xlarge --os=linux --distro=amazonlinux --release=2023",
"machine:windows:2019": "./scripts/machine.mjs ssh --cloud=aws --arch=x64 --instance-type c7i.2xlarge --os=windows --release=2019",
"sync-webkit-source": "bun ./scripts/sync-webkit-source.ts"
}
}

View File

@@ -62,7 +62,7 @@ export const platforms: Platform[] = [
},
{
os: "linux",
arch: "aarch64",
arch: "arm64",
abi: "musl",
bin: "bun-linux-aarch64-musl",
exe: "bin/bun",

View File

@@ -636,7 +636,7 @@ declare module "bun" {
* import { YAML } from "bun";
*
* console.log(YAML.parse("123")) // 123
* console.log(YAML.parse("123")) // null
* console.log(YAML.parse("null")) // null
* console.log(YAML.parse("false")) // false
* console.log(YAML.parse("abc")) // "abc"
* console.log(YAML.parse("- abc")) // [ "abc" ]
@@ -653,7 +653,10 @@ declare module "bun" {
*
* @param input The JavaScript value to stringify.
* @param replacer Currently not supported.
* @param space A number for how many spaces each level of indentation gets, or a string used as indentation.
* Without this parameter, outputs flow-style (single-line) YAML.
* With this parameter, outputs block-style (multi-line) YAML.
* The number is clamped between 0 and 10, and the first 10 characters of the string are used.
* @returns A string containing the YAML document.
*
* @example
@@ -661,19 +664,24 @@ declare module "bun" {
* import { YAML } from "bun";
*
* const input = {
* abc: "def"
* abc: "def",
* num: 123
* };
*
* // Without space - flow style (single-line)
* console.log(YAML.stringify(input));
* // # output
* // {abc: def,num: 123}
*
* // With space - block style (multi-line)
* console.log(YAML.stringify(input, null, 2));
* // abc: def
* // num: 123
*
* const cycle = {};
* cycle.obj = cycle;
* console.log(YAML.stringify(cycle, null, 2));
* // &1
* // obj: *1
*/
export function stringify(input: unknown, replacer?: undefined | null, space?: string | number): string;
}
@@ -5039,6 +5047,7 @@ declare module "bun" {
type SupportedCryptoAlgorithms =
| "blake2b256"
| "blake2b512"
| "blake2s256"
| "md4"
| "md5"
| "ripemd160"

File diff suppressed because it is too large

View File

@@ -12,6 +12,68 @@ declare module "bun" {
release(): void;
}
type ArrayType =
| "BOOLEAN"
| "BYTEA"
| "CHAR"
| "NAME"
| "TEXT"
| "CHAR"
| "VARCHAR"
| "SMALLINT"
| "INT2VECTOR"
| "INTEGER"
| "INT"
| "BIGINT"
| "REAL"
| "DOUBLE PRECISION"
| "NUMERIC"
| "MONEY"
| "OID"
| "TID"
| "XID"
| "CID"
| "JSON"
| "JSONB"
| "JSONPATH"
| "XML"
| "POINT"
| "LSEG"
| "PATH"
| "BOX"
| "POLYGON"
| "LINE"
| "CIRCLE"
| "CIDR"
| "MACADDR"
| "INET"
| "MACADDR8"
| "DATE"
| "TIME"
| "TIMESTAMP"
| "TIMESTAMPTZ"
| "INTERVAL"
| "TIMETZ"
| "BIT"
| "VARBIT"
| "ACLITEM"
| "PG_DATABASE"
| (string & {});
/**
* Represents a SQL array parameter
*/
interface SQLArrayParameter {
/**
* The serialized values of the array parameter
*/
serializedValues: string;
/**
* The type of the array parameter
*/
arrayType: ArrayType;
}
/**
* Represents a client within a transaction context. Extends SQL with savepoint
* functionality
@@ -630,6 +692,21 @@ declare module "bun" {
*/
reserve(): Promise<ReservedSQL>;
/**
* Creates a new SQL array parameter
* @param values - The values to create the array parameter from
* @param typeNameOrTypeID - The type name or type ID to create the array parameter from, if omitted it will default to JSON
* @returns A new SQL array parameter
*
* @example
* ```ts
* const array = sql.array([1, 2, 3], "INT");
* await sql`CREATE TABLE users_posts (user_id INT, posts_id INT[])`;
* await sql`INSERT INTO users_posts (user_id, posts_id) VALUES (${user.id}, ${array})`;
* ```
*/
array(values: any[], typeNameOrTypeID?: number | ArrayType): SQLArrayParameter;
/**
* Begins a new transaction.
*

View File

@@ -91,6 +91,7 @@ declare module "bun:test" {
export namespace jest {
function restoreAllMocks(): void;
function clearAllMocks(): void;
function resetAllMocks(): void;
function fn<T extends (...args: any[]) => any>(func?: T): Mock<T>;
function setSystemTime(now?: number | Date): void;
function setTimeout(milliseconds: number): void;
@@ -180,6 +181,9 @@ declare module "bun:test" {
* Clear all mock state (calls, results, etc.) without restoring original implementation
*/
clearAllMocks: typeof jest.clearAllMocks;
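/**
 * Reset all mock state and remove mock implementations (comparable to Jest's `resetAllMocks`)
 */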
resetAllMocks: typeof jest.resetAllMocks;
useFakeTimers: typeof jest.useFakeTimers;
useRealTimers: typeof jest.useRealTimers;
};
interface FunctionLike {
@@ -226,6 +230,11 @@ declare module "bun:test" {
* Marks this group of tests to be executed concurrently.
*/
concurrent: Describe<T>;
/**
* Marks this group of tests to be executed serially (one after another),
* even when the --concurrent flag is used.
*/
serial: Describe<T>;
/**
* Runs this group of tests, only if `condition` is true.
*
@@ -423,7 +432,7 @@ declare module "bun:test" {
options?: number | TestOptions,
): void;
/**
* Skips all other tests, except this test.
*/
only: Test<T>;
/**
@@ -455,6 +464,11 @@ declare module "bun:test" {
* Runs the test concurrently with other concurrent tests.
*/
concurrent: Test<T>;
/**
* Forces the test to run serially (not in parallel),
* even when the --concurrent flag is used.
*/
serial: Test<T>;
/**
* Runs this test, if `condition` is true.
*
@@ -487,6 +501,13 @@ declare module "bun:test" {
* @param condition if the test should run concurrently
*/
concurrentIf(condition: boolean): Test<T>;
/**
* Forces the test to run serially (not in parallel), if `condition` is true.
* This applies even when the --concurrent flag is used.
*
* @param condition if the test should run serially
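*
* @example
* ```ts
* // Illustrative sketch: force serial execution only on Windows
* test.serialIf(process.platform === "win32")("registry access", () => {
*   expect(true).toBe(true);
* });
* ```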
*/
serialIf(condition: boolean): Test<T>;
/**
* Returns a function that runs for each item in `table`.
*

View File

@@ -6,10 +6,46 @@
#include <atomic>
#include <string.h>
#include "./default_ciphers.h"
// System-specific includes for certificate loading
#include "./root_certs_platform.h"
#ifdef _WIN32
#include <windows.h>
#include <wincrypt.h>
#else
// Linux/Unix includes
#include <dirent.h>
#include <stdio.h>
#include <limits.h>
#endif
static const int root_certs_size = sizeof(root_certs) / sizeof(root_certs[0]);
extern "C" void BUN__warn__extra_ca_load_failed(const char* filename, const char* error_msg);
// Forward declarations for platform-specific functions
// (Actual implementations are in platform-specific files)
// External variable from Zig CLI arguments
extern "C" bool Bun__Node__UseSystemCA;
// Helper function to check if system CA should be used
// Checks both CLI flag (--use-system-ca) and environment variable (NODE_USE_SYSTEM_CA=1)
static bool us_should_use_system_ca() {
// Check CLI flag first
if (Bun__Node__UseSystemCA) {
return true;
}
// Check environment variable
const char *use_system_ca = getenv("NODE_USE_SYSTEM_CA");
return use_system_ca && strcmp(use_system_ca, "1") == 0;
}
// Platform-specific system certificate loading implementations are separated:
// - macOS: root_certs_darwin.cpp (Security framework with dynamic loading)
// - Windows: root_certs_windows.cpp (Windows CryptoAPI)
// - Linux/Unix: us_load_system_certificates_linux() below
// This callback is used to avoid the default passphrase callback in OpenSSL
// which will typically prompt for the passphrase. The prompting is designed
// for the OpenSSL CLI, but works poorly for this case because it involves
@@ -101,7 +137,8 @@ end:
static void us_internal_init_root_certs(
X509 *root_cert_instances[root_certs_size],
STACK_OF(X509) *&root_extra_cert_instances,
STACK_OF(X509) *&root_system_cert_instances) {
static std::atomic_flag root_cert_instances_lock = ATOMIC_FLAG_INIT;
static std::atomic_bool root_cert_instances_initialized = 0;
@@ -123,6 +160,17 @@ static void us_internal_init_root_certs(
if (extra_certs && extra_certs[0]) {
root_extra_cert_instances = us_ssl_ctx_load_all_certs_from_file(extra_certs);
}
// load system certificates if --use-system-ca was passed or NODE_USE_SYSTEM_CA=1
if (us_should_use_system_ca()) {
#ifdef __APPLE__
us_load_system_certificates_macos(&root_system_cert_instances);
#elif defined(_WIN32)
us_load_system_certificates_windows(&root_system_cert_instances);
#else
us_load_system_certificates_linux(&root_system_cert_instances);
#endif
}
}
atomic_flag_clear_explicit(&root_cert_instances_lock,
@@ -137,12 +185,15 @@ extern "C" int us_internal_raw_root_certs(struct us_cert_string_t **out) {
struct us_default_ca_certificates {
X509 *root_cert_instances[root_certs_size];
STACK_OF(X509) *root_extra_cert_instances;
STACK_OF(X509) *root_system_cert_instances;
};
us_default_ca_certificates* us_get_default_ca_certificates() {
static us_default_ca_certificates default_ca_certificates = {{NULL}, NULL, NULL};
us_internal_init_root_certs(default_ca_certificates.root_cert_instances,
default_ca_certificates.root_extra_cert_instances,
default_ca_certificates.root_system_cert_instances);
return &default_ca_certificates;
}
@@ -151,20 +202,33 @@ STACK_OF(X509) *us_get_root_extra_cert_instances() {
return us_get_default_ca_certificates()->root_extra_cert_instances;
}
STACK_OF(X509) *us_get_root_system_cert_instances() {
if (!us_should_use_system_ca())
return NULL;
// Ensure single-path initialization via us_internal_init_root_certs
auto certs = us_get_default_ca_certificates();
return certs->root_system_cert_instances;
}
extern "C" X509_STORE *us_get_default_ca_store() {
X509_STORE *store = X509_STORE_new();
if (store == NULL) {
return NULL;
}
// Only load system default paths when NODE_USE_SYSTEM_CA=1
// Otherwise, rely on bundled certificates only (like Node.js behavior)
if (us_should_use_system_ca()) {
if (!X509_STORE_set_default_paths(store)) {
X509_STORE_free(store);
return NULL;
}
}
us_default_ca_certificates *default_ca_certificates = us_get_default_ca_certificates();
X509** root_cert_instances = default_ca_certificates->root_cert_instances;
STACK_OF(X509) *root_extra_cert_instances = default_ca_certificates->root_extra_cert_instances;
STACK_OF(X509) *root_system_cert_instances = default_ca_certificates->root_system_cert_instances;
// load all root_cert_instances on the default ca store
for (size_t i = 0; i < root_certs_size; i++) {
@@ -183,8 +247,59 @@ extern "C" X509_STORE *us_get_default_ca_store() {
}
}
if (us_should_use_system_ca() && root_system_cert_instances) {
for (int i = 0; i < sk_X509_num(root_system_cert_instances); i++) {
X509 *cert = sk_X509_value(root_system_cert_instances, i);
X509_up_ref(cert);
X509_STORE_add_cert(store, cert);
}
}
return store;
}
extern "C" const char *us_get_default_ciphers() {
return DEFAULT_CIPHER_LIST;
}
}
// Platform-specific implementations for loading system certificates
#if defined(_WIN32)
// Windows implementation is split to avoid header conflicts:
// - root_certs_windows.cpp loads raw certificate data (uses Windows headers)
// - This file converts raw data to X509* (uses OpenSSL headers)
#include <vector>
struct RawCertificate {
std::vector<unsigned char> data;
};
// Defined in root_certs_windows.cpp - loads raw certificate data
extern void us_load_system_certificates_windows_raw(
std::vector<RawCertificate>& raw_certs);
// Convert raw Windows certificates to OpenSSL X509 format
void us_load_system_certificates_windows(STACK_OF(X509) **system_certs) {
*system_certs = sk_X509_new_null();
if (*system_certs == NULL) {
return;
}
// Load raw certificates from Windows stores
std::vector<RawCertificate> raw_certs;
us_load_system_certificates_windows_raw(raw_certs);
// Convert each raw certificate to X509
for (const auto& raw_cert : raw_certs) {
const unsigned char* data = raw_cert.data.data();
X509* x509_cert = d2i_X509(NULL, &data, raw_cert.data.size());
if (x509_cert != NULL) {
sk_X509_push(*system_certs, x509_cert);
}
}
}
#else
// Linux and other Unix-like systems - implementation is in root_certs_linux.cpp
extern "C" void us_load_system_certificates_linux(STACK_OF(X509) **system_certs);
#endif

View File

@@ -0,0 +1,431 @@
#ifdef __APPLE__
#include <dlfcn.h>
#include <CoreFoundation/CoreFoundation.h>
#include <atomic>
#include <openssl/x509.h>
#include <openssl/x509_vfy.h>
#include <stdio.h>
// Security framework types and constants - dynamically loaded
typedef struct OpaqueSecCertificateRef* SecCertificateRef;
typedef struct OpaqueSecTrustRef* SecTrustRef;
typedef struct OpaqueSecPolicyRef* SecPolicyRef;
typedef int32_t OSStatus;
typedef uint32_t SecTrustSettingsDomain;
// Security framework constants
enum {
errSecSuccess = 0,
errSecItemNotFound = -25300,
};
// Trust settings domains
enum {
kSecTrustSettingsDomainUser = 0,
kSecTrustSettingsDomainAdmin = 1,
kSecTrustSettingsDomainSystem = 2,
};
// Trust status enumeration
enum class TrustStatus {
TRUSTED,
DISTRUSTED,
UNSPECIFIED
};
// Dynamic Security framework loader
class SecurityFramework {
public:
void* handle;
void* cf_handle;
// Core Foundation constants
CFStringRef kSecClass;
CFStringRef kSecClassCertificate;
CFStringRef kSecMatchLimit;
CFStringRef kSecMatchLimitAll;
CFStringRef kSecReturnRef;
CFStringRef kSecMatchTrustedOnly;
CFBooleanRef kCFBooleanTrue;
CFAllocatorRef kCFAllocatorDefault;
CFArrayCallBacks* kCFTypeArrayCallBacks;
CFDictionaryKeyCallBacks* kCFTypeDictionaryKeyCallBacks;
CFDictionaryValueCallBacks* kCFTypeDictionaryValueCallBacks;
// Core Foundation function pointers
CFMutableArrayRef (*CFArrayCreateMutable)(CFAllocatorRef allocator, CFIndex capacity, const CFArrayCallBacks *callBacks);
CFArrayRef (*CFArrayCreate)(CFAllocatorRef allocator, const void **values, CFIndex numValues, const CFArrayCallBacks *callBacks);
void (*CFArraySetValueAtIndex)(CFMutableArrayRef theArray, CFIndex idx, const void *value);
const void* (*CFArrayGetValueAtIndex)(CFArrayRef theArray, CFIndex idx);
CFIndex (*CFArrayGetCount)(CFArrayRef theArray);
void (*CFRelease)(CFTypeRef cf);
CFDictionaryRef (*CFDictionaryCreate)(CFAllocatorRef allocator, const void **keys, const void **values, CFIndex numValues, const CFDictionaryKeyCallBacks *keyCallBacks, const CFDictionaryValueCallBacks *valueCallBacks);
const UInt8* (*CFDataGetBytePtr)(CFDataRef theData);
CFIndex (*CFDataGetLength)(CFDataRef theData);
// Security framework function pointers
OSStatus (*SecItemCopyMatching)(CFDictionaryRef query, CFTypeRef *result);
CFDataRef (*SecCertificateCopyData)(SecCertificateRef certificate);
OSStatus (*SecTrustCreateWithCertificates)(CFArrayRef certificates, CFArrayRef policies, SecTrustRef *trust);
SecPolicyRef (*SecPolicyCreateSSL)(Boolean server, CFStringRef hostname);
Boolean (*SecTrustEvaluateWithError)(SecTrustRef trust, CFErrorRef *error);
OSStatus (*SecTrustSettingsCopyTrustSettings)(SecCertificateRef certRef, SecTrustSettingsDomain domain, CFArrayRef *trustSettings);
SecurityFramework() : handle(nullptr), cf_handle(nullptr),
kSecClass(nullptr), kSecClassCertificate(nullptr),
kSecMatchLimit(nullptr), kSecMatchLimitAll(nullptr),
kSecReturnRef(nullptr), kSecMatchTrustedOnly(nullptr), kCFBooleanTrue(nullptr),
kCFAllocatorDefault(nullptr), kCFTypeArrayCallBacks(nullptr),
kCFTypeDictionaryKeyCallBacks(nullptr), kCFTypeDictionaryValueCallBacks(nullptr),
CFArrayCreateMutable(nullptr), CFArrayCreate(nullptr),
CFArraySetValueAtIndex(nullptr), CFArrayGetValueAtIndex(nullptr),
CFArrayGetCount(nullptr), CFRelease(nullptr),
CFDictionaryCreate(nullptr), CFDataGetBytePtr(nullptr), CFDataGetLength(nullptr),
SecItemCopyMatching(nullptr), SecCertificateCopyData(nullptr),
SecTrustCreateWithCertificates(nullptr), SecPolicyCreateSSL(nullptr),
SecTrustEvaluateWithError(nullptr), SecTrustSettingsCopyTrustSettings(nullptr) {}
~SecurityFramework() {
if (handle) {
dlclose(handle);
}
if (cf_handle) {
dlclose(cf_handle);
}
}
bool load() {
if (handle && cf_handle) return true; // Already loaded
// Load CoreFoundation framework
cf_handle = dlopen("/System/Library/Frameworks/CoreFoundation.framework/CoreFoundation", RTLD_LAZY | RTLD_LOCAL);
if (!cf_handle) {
fprintf(stderr, "Failed to load CoreFoundation framework: %s\n", dlerror());
return false;
}
// Load Security framework
handle = dlopen("/System/Library/Frameworks/Security.framework/Security", RTLD_LAZY | RTLD_LOCAL);
if (!handle) {
fprintf(stderr, "Failed to load Security framework: %s\n", dlerror());
dlclose(cf_handle);
cf_handle = nullptr;
return false;
}
// Load constants and functions
if (!load_constants()) {
if (handle) {
dlclose(handle);
handle = nullptr;
}
if (cf_handle) {
dlclose(cf_handle);
cf_handle = nullptr;
}
return false;
}
if (!load_functions()) {
if (handle) {
dlclose(handle);
handle = nullptr;
}
if (cf_handle) {
dlclose(cf_handle);
cf_handle = nullptr;
}
return false;
}
return true;
}
private:
bool load_constants() {
// Load Security framework constants
void* ptr = dlsym(handle, "kSecClass");
if (!ptr) { fprintf(stderr, "DEBUG: kSecClass not found\n"); return false; }
kSecClass = *(CFStringRef*)ptr;
ptr = dlsym(handle, "kSecClassCertificate");
if (!ptr) { fprintf(stderr, "DEBUG: kSecClassCertificate not found\n"); return false; }
kSecClassCertificate = *(CFStringRef*)ptr;
ptr = dlsym(handle, "kSecMatchLimit");
if (!ptr) { fprintf(stderr, "DEBUG: kSecMatchLimit not found\n"); return false; }
kSecMatchLimit = *(CFStringRef*)ptr;
ptr = dlsym(handle, "kSecMatchLimitAll");
if (!ptr) { fprintf(stderr, "DEBUG: kSecMatchLimitAll not found\n"); return false; }
kSecMatchLimitAll = *(CFStringRef*)ptr;
ptr = dlsym(handle, "kSecReturnRef");
if (!ptr) { fprintf(stderr, "DEBUG: kSecReturnRef not found\n"); return false; }
kSecReturnRef = *(CFStringRef*)ptr;
ptr = dlsym(handle, "kSecMatchTrustedOnly");
if (!ptr) { fprintf(stderr, "DEBUG: kSecMatchTrustedOnly not found\n"); return false; }
kSecMatchTrustedOnly = *(CFStringRef*)ptr;
// Load CoreFoundation constants
ptr = dlsym(cf_handle, "kCFBooleanTrue");
if (!ptr) { fprintf(stderr, "DEBUG: kCFBooleanTrue not found\n"); return false; }
kCFBooleanTrue = *(CFBooleanRef*)ptr;
ptr = dlsym(cf_handle, "kCFAllocatorDefault");
if (!ptr) { fprintf(stderr, "DEBUG: kCFAllocatorDefault not found\n"); return false; }
kCFAllocatorDefault = *(CFAllocatorRef*)ptr;
ptr = dlsym(cf_handle, "kCFTypeArrayCallBacks");
if (!ptr) { fprintf(stderr, "DEBUG: kCFTypeArrayCallBacks not found\n"); return false; }
kCFTypeArrayCallBacks = (CFArrayCallBacks*)ptr;
ptr = dlsym(cf_handle, "kCFTypeDictionaryKeyCallBacks");
if (!ptr) { fprintf(stderr, "DEBUG: kCFTypeDictionaryKeyCallBacks not found\n"); return false; }
kCFTypeDictionaryKeyCallBacks = (CFDictionaryKeyCallBacks*)ptr;
ptr = dlsym(cf_handle, "kCFTypeDictionaryValueCallBacks");
if (!ptr) { fprintf(stderr, "DEBUG: kCFTypeDictionaryValueCallBacks not found\n"); return false; }
kCFTypeDictionaryValueCallBacks = (CFDictionaryValueCallBacks*)ptr;
return true;
}
bool load_functions() {
// Load CoreFoundation functions
CFArrayCreateMutable = (CFMutableArrayRef (*)(CFAllocatorRef, CFIndex, const CFArrayCallBacks*))dlsym(cf_handle, "CFArrayCreateMutable");
CFArrayCreate = (CFArrayRef (*)(CFAllocatorRef, const void**, CFIndex, const CFArrayCallBacks*))dlsym(cf_handle, "CFArrayCreate");
CFArraySetValueAtIndex = (void (*)(CFMutableArrayRef, CFIndex, const void*))dlsym(cf_handle, "CFArraySetValueAtIndex");
CFArrayGetValueAtIndex = (const void* (*)(CFArrayRef, CFIndex))dlsym(cf_handle, "CFArrayGetValueAtIndex");
CFArrayGetCount = (CFIndex (*)(CFArrayRef))dlsym(cf_handle, "CFArrayGetCount");
CFRelease = (void (*)(CFTypeRef))dlsym(cf_handle, "CFRelease");
CFDictionaryCreate = (CFDictionaryRef (*)(CFAllocatorRef, const void**, const void**, CFIndex, const CFDictionaryKeyCallBacks*, const CFDictionaryValueCallBacks*))dlsym(cf_handle, "CFDictionaryCreate");
CFDataGetBytePtr = (const UInt8* (*)(CFDataRef))dlsym(cf_handle, "CFDataGetBytePtr");
CFDataGetLength = (CFIndex (*)(CFDataRef))dlsym(cf_handle, "CFDataGetLength");
// Load Security framework functions
SecItemCopyMatching = (OSStatus (*)(CFDictionaryRef, CFTypeRef*))dlsym(handle, "SecItemCopyMatching");
SecCertificateCopyData = (CFDataRef (*)(SecCertificateRef))dlsym(handle, "SecCertificateCopyData");
SecTrustCreateWithCertificates = (OSStatus (*)(CFArrayRef, CFArrayRef, SecTrustRef*))dlsym(handle, "SecTrustCreateWithCertificates");
SecPolicyCreateSSL = (SecPolicyRef (*)(Boolean, CFStringRef))dlsym(handle, "SecPolicyCreateSSL");
SecTrustEvaluateWithError = (Boolean (*)(SecTrustRef, CFErrorRef*))dlsym(handle, "SecTrustEvaluateWithError");
SecTrustSettingsCopyTrustSettings = (OSStatus (*)(SecCertificateRef, SecTrustSettingsDomain, CFArrayRef*))dlsym(handle, "SecTrustSettingsCopyTrustSettings");
return CFArrayCreateMutable && CFArrayCreate && CFArraySetValueAtIndex &&
CFArrayGetValueAtIndex && CFArrayGetCount && CFRelease &&
CFDictionaryCreate && CFDataGetBytePtr && CFDataGetLength &&
SecItemCopyMatching && SecCertificateCopyData &&
SecTrustCreateWithCertificates && SecPolicyCreateSSL &&
SecTrustEvaluateWithError && SecTrustSettingsCopyTrustSettings;
}
};
// Global instance for dynamic loading
static std::atomic<SecurityFramework*> g_security_framework{nullptr};
static SecurityFramework* get_security_framework() {
SecurityFramework* framework = g_security_framework.load();
if (!framework) {
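// Build a candidate loader and publish it via compare-exchange; a thread
// that loses the race deletes its copy and adopts the winner's instance.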
SecurityFramework* new_framework = new SecurityFramework();
if (new_framework->load()) {
SecurityFramework* expected = nullptr;
if (g_security_framework.compare_exchange_strong(expected, new_framework)) {
framework = new_framework;
} else {
delete new_framework;
framework = expected;
}
} else {
delete new_framework;
framework = nullptr;
}
}
return framework;
}
// Helper function to determine if a certificate is self-issued
static bool is_certificate_self_issued(X509* cert) {
X509_NAME* subject = X509_get_subject_name(cert);
X509_NAME* issuer = X509_get_issuer_name(cert);
return subject && issuer && X509_NAME_cmp(subject, issuer) == 0;
}
// Validate certificate trust using Security framework
static bool is_certificate_trust_valid(SecurityFramework* security, SecCertificateRef cert_ref) {
CFMutableArrayRef subj_certs = security->CFArrayCreateMutable(nullptr, 1, security->kCFTypeArrayCallBacks);
if (!subj_certs) return false;
security->CFArraySetValueAtIndex(subj_certs, 0, cert_ref);
SecPolicyRef policy = security->SecPolicyCreateSSL(true, nullptr);
if (!policy) {
security->CFRelease(subj_certs);
return false;
}
CFArrayRef policies = security->CFArrayCreate(nullptr, (const void**)&policy, 1, security->kCFTypeArrayCallBacks);
if (!policies) {
security->CFRelease(policy);
security->CFRelease(subj_certs);
return false;
}
SecTrustRef sec_trust = nullptr;
OSStatus ortn = security->SecTrustCreateWithCertificates(subj_certs, policies, &sec_trust);
bool result = false;
if (ortn == errSecSuccess && sec_trust) {
result = security->SecTrustEvaluateWithError(sec_trust, nullptr);
}
// Cleanup
if (sec_trust) security->CFRelease(sec_trust);
security->CFRelease(policies);
security->CFRelease(policy);
security->CFRelease(subj_certs);
return result;
}
// Check trust settings for policy (simplified version)
static TrustStatus is_trust_settings_trusted_for_policy(SecurityFramework* security, CFArrayRef trust_settings, bool is_self_issued) {
if (!trust_settings) {
return TrustStatus::UNSPECIFIED;
}
// Empty trust settings array means "always trust this certificate"
if (security->CFArrayGetCount(trust_settings) == 0) {
return is_self_issued ? TrustStatus::TRUSTED : TrustStatus::UNSPECIFIED;
}
// For simplicity, we'll do basic checking here
// A full implementation would parse the trust dictionary entries
return TrustStatus::UNSPECIFIED;
}
// Check if certificate is trusted for server auth policy
static bool is_certificate_trusted_for_policy(SecurityFramework* security, X509* cert, SecCertificateRef cert_ref) {
bool is_self_issued = is_certificate_self_issued(cert);
bool trust_evaluated = false;
// Check user trust domain, then admin domain
for (const auto& trust_domain : {kSecTrustSettingsDomainUser, kSecTrustSettingsDomainAdmin, kSecTrustSettingsDomainSystem}) {
CFArrayRef trust_settings = nullptr;
OSStatus err = security->SecTrustSettingsCopyTrustSettings(cert_ref, trust_domain, &trust_settings);
if (err != errSecSuccess && err != errSecItemNotFound) {
continue;
}
if (err == errSecSuccess && trust_settings) {
TrustStatus result = is_trust_settings_trusted_for_policy(security, trust_settings, is_self_issued);
security->CFRelease(trust_settings);
if (result == TrustStatus::TRUSTED) {
return true;
} else if (result == TrustStatus::DISTRUSTED) {
return false;
}
}
// If no trust settings and we haven't evaluated trust yet, check trust validity
if (!trust_settings && !trust_evaluated) {
if (is_certificate_trust_valid(security, cert_ref)) {
return true;
}
trust_evaluated = true;
}
}
return false;
}
// Main function to load system certificates on macOS
extern "C" void us_load_system_certificates_macos(STACK_OF(X509) **system_certs) {
*system_certs = sk_X509_new_null();
if (!*system_certs) {
return;
}
SecurityFramework* security = get_security_framework();
if (!security) {
return; // Fail silently
}
// Create search dictionary for certificates
CFTypeRef search_keys[] = {
security->kSecClass,
security->kSecMatchLimit,
security->kSecReturnRef,
security->kSecMatchTrustedOnly,
};
CFTypeRef search_values[] = {
security->kSecClassCertificate,
security->kSecMatchLimitAll,
security->kCFBooleanTrue,
security->kCFBooleanTrue,
};
CFDictionaryRef search = security->CFDictionaryCreate(
security->kCFAllocatorDefault,
search_keys,
search_values,
4,
security->kCFTypeDictionaryKeyCallBacks,
security->kCFTypeDictionaryValueCallBacks
);
if (!search) {
return;
}
CFArrayRef certificates = nullptr;
OSStatus status = security->SecItemCopyMatching(search, (CFTypeRef*)&certificates);
security->CFRelease(search);
if (status != errSecSuccess || !certificates) {
return;
}
CFIndex count = security->CFArrayGetCount(certificates);
for (CFIndex i = 0; i < count; ++i) {
SecCertificateRef cert_ref = (SecCertificateRef)security->CFArrayGetValueAtIndex(certificates, i);
if (!cert_ref) continue;
// Get certificate data
CFDataRef cert_data = security->SecCertificateCopyData(cert_ref);
if (!cert_data) continue;
// Convert to X509
const unsigned char* data_ptr = security->CFDataGetBytePtr(cert_data);
long data_len = security->CFDataGetLength(cert_data);
X509* x509_cert = d2i_X509(nullptr, &data_ptr, data_len);
security->CFRelease(cert_data);
if (!x509_cert) continue;
// Only consider CA certificates
if (X509_check_ca(x509_cert) == 1 &&
is_certificate_trusted_for_policy(security, x509_cert, cert_ref)) {
sk_X509_push(*system_certs, x509_cert);
} else {
X509_free(x509_cert);
}
}
security->CFRelease(certificates);
}
// Cleanup function for Security framework
extern "C" void us_cleanup_security_framework() {
SecurityFramework* framework = g_security_framework.exchange(nullptr);
if (framework) {
delete framework;
}
}
#endif // __APPLE__

View File

@@ -5,6 +5,7 @@
#define CPPDECL extern "C"
STACK_OF(X509) *us_get_root_extra_cert_instances();
STACK_OF(X509) *us_get_root_system_cert_instances();
#else
#define CPPDECL extern

View File

@@ -0,0 +1,170 @@
#ifndef _WIN32
#ifndef __APPLE__
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <limits.h>
#include <openssl/x509.h>
#include <openssl/x509_vfy.h>
#include <openssl/pem.h>
extern "C" void BUN__warn__extra_ca_load_failed(const char* filename, const char* error_msg);
// Helper function to load certificates from a directory
static void load_certs_from_directory(const char* dir_path, STACK_OF(X509)* cert_stack) {
DIR* dir = opendir(dir_path);
if (!dir) {
return;
}
struct dirent* entry;
while ((entry = readdir(dir)) != NULL) {
// Skip . and ..
if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0) {
continue;
}
// Check if file has .crt, .pem, or .cer extension
const char* ext = strrchr(entry->d_name, '.');
if (!ext || (strcmp(ext, ".crt") != 0 && strcmp(ext, ".pem") != 0 && strcmp(ext, ".cer") != 0)) {
continue;
}
// Build full path
char filepath[PATH_MAX];
snprintf(filepath, sizeof(filepath), "%s/%s", dir_path, entry->d_name);
// Try to load certificate
FILE* file = fopen(filepath, "r");
if (file) {
X509* cert = PEM_read_X509(file, NULL, NULL, NULL);
fclose(file);
if (cert) {
if (!sk_X509_push(cert_stack, cert)) {
X509_free(cert);
}
}
}
}
closedir(dir);
}
// Helper function to load certificates from a bundle file
static void load_certs_from_bundle(const char* bundle_path, STACK_OF(X509)* cert_stack) {
FILE* file = fopen(bundle_path, "r");
if (!file) {
return;
}
X509* cert;
while ((cert = PEM_read_X509(file, NULL, NULL, NULL)) != NULL) {
if (!sk_X509_push(cert_stack, cert)) {
X509_free(cert);
break;
}
}
ERR_clear_error();
fclose(file);
}
// Main function to load system certificates on Linux and other Unix-like systems
extern "C" void us_load_system_certificates_linux(STACK_OF(X509) **system_certs) {
*system_certs = sk_X509_new_null();
if (*system_certs == NULL) {
return;
}
// First check environment variables (same as Node.js and OpenSSL)
const char* ssl_cert_file = getenv("SSL_CERT_FILE");
const char* ssl_cert_dir = getenv("SSL_CERT_DIR");
// If SSL_CERT_FILE is set, load from it
if (ssl_cert_file && strlen(ssl_cert_file) > 0) {
load_certs_from_bundle(ssl_cert_file, *system_certs);
}
// If SSL_CERT_DIR is set, load from each directory (colon-separated)
if (ssl_cert_dir && strlen(ssl_cert_dir) > 0) {
char* dir_copy = strdup(ssl_cert_dir);
if (dir_copy) {
char* token = strtok(dir_copy, ":");
while (token != NULL) {
// Skip empty tokens
if (strlen(token) > 0) {
load_certs_from_directory(token, *system_certs);
}
token = strtok(NULL, ":");
}
free(dir_copy);
}
}
// If environment variables were set, use only those (even if they yield zero certs)
if (ssl_cert_file || ssl_cert_dir) {
return;
}
// Otherwise, load certificates from standard Linux/Unix paths
// These are the common locations for system certificates
// Common certificate bundle locations (single file with multiple certs)
// These paths are based on common Linux distributions and OpenSSL defaults
static const char* bundle_paths[] = {
"/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo
"/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL 6
"/etc/ssl/ca-bundle.pem", // OpenSUSE
"/etc/pki/tls/cert.pem", // Fedora/RHEL 7+
"/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem", // CentOS/RHEL 7+
"/etc/ssl/cert.pem", // Alpine Linux, macOS OpenSSL
"/usr/local/etc/openssl/cert.pem", // Homebrew OpenSSL on macOS
"/usr/local/share/ca-certificates/ca-certificates.crt", // Custom CA installs
NULL
};
// Common certificate directory locations (multiple files)
// Note: OpenSSL expects hashed symlinks in directories (c_rehash format)
static const char* dir_paths[] = {
"/etc/ssl/certs", // Common location (Debian/Ubuntu with hashed links)
"/etc/pki/tls/certs", // RHEL/Fedora
"/usr/share/ca-certificates", // Debian/Ubuntu (original certs, not hashed)
"/usr/local/share/certs", // FreeBSD
"/etc/openssl/certs", // NetBSD
"/var/ssl/certs", // AIX
"/usr/local/etc/openssl/certs", // Homebrew OpenSSL on macOS
"/System/Library/OpenSSL/certs", // macOS system OpenSSL (older versions)
NULL
};
// Try loading from bundle files first
for (const char** path = bundle_paths; *path != NULL; path++) {
load_certs_from_bundle(*path, *system_certs);
}
// Then try loading from directories
for (const char** path = dir_paths; *path != NULL; path++) {
load_certs_from_directory(*path, *system_certs);
}
// Also check NODE_EXTRA_CA_CERTS environment variable
const char* extra_ca_certs = getenv("NODE_EXTRA_CA_CERTS");
if (extra_ca_certs && strlen(extra_ca_certs) > 0) {
FILE* file = fopen(extra_ca_certs, "r");
if (file) {
X509* cert;
while ((cert = PEM_read_X509(file, NULL, NULL, NULL)) != NULL) {
sk_X509_push(*system_certs, cert);
}
fclose(file);
} else {
BUN__warn__extra_ca_load_failed(extra_ca_certs, "Failed to open file");
}
}
}
#endif // !__APPLE__
#endif // !_WIN32
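A hedged usage sketch of the bundle-reading pattern above (the helper name is hypothetical): PEM_read_X509 is looped until it returns NULL, and ERR_clear_error() discards the benign error queued once the file runs out of PEM blocks.

#include <openssl/pem.h>
#include <openssl/err.h>
#include <stdio.h>

// Hypothetical helper: count how many certificates a PEM bundle contains.
static int count_certs_in_bundle(const char* bundle_path) {
    FILE* file = fopen(bundle_path, "r");
    if (!file) return -1;
    int count = 0;
    X509* cert;
    while ((cert = PEM_read_X509(file, NULL, NULL, NULL)) != NULL) {
        X509_free(cert);
        count++;
    }
    ERR_clear_error(); // the final failed read queues an error; drop it
    fclose(file);
    return count;
}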

View File

@@ -0,0 +1,18 @@
#pragma once
#include <openssl/x509.h>
// Platform-specific certificate loading functions
extern "C" {
// Load system certificates for the current platform
void us_load_system_certificates_linux(STACK_OF(X509) **system_certs);
void us_load_system_certificates_macos(STACK_OF(X509) **system_certs);
void us_load_system_certificates_windows(STACK_OF(X509) **system_certs);
// Platform-specific cleanup functions
#ifdef __APPLE__
void us_cleanup_security_framework();
#endif
}

View File

@@ -0,0 +1,53 @@
#ifdef _WIN32
#include <windows.h>
#include <wincrypt.h>
#include <vector>
#include <cstring>
// Forward declaration to avoid including OpenSSL headers here
// This prevents conflicts with Windows macros like X509_NAME
// Note: We don't use STACK_OF macro here since we don't have OpenSSL headers
// Structure to hold raw certificate data
struct RawCertificate {
std::vector<unsigned char> data;
};
// Helper function to load raw certificates from a Windows certificate store
static void LoadRawCertsFromStore(std::vector<RawCertificate>& raw_certs,
DWORD store_flags,
const wchar_t* store_name) {
HCERTSTORE cert_store = CertOpenStore(
CERT_STORE_PROV_SYSTEM_W,
0,
0,
store_flags | CERT_STORE_READONLY_FLAG,
store_name
);
if (cert_store == NULL) {
return;
}
PCCERT_CONTEXT cert_context = NULL;
while ((cert_context = CertEnumCertificatesInStore(cert_store, cert_context)) != NULL) {
RawCertificate raw_cert;
raw_cert.data.assign(cert_context->pbCertEncoded,
cert_context->pbCertEncoded + cert_context->cbCertEncoded);
raw_certs.push_back(std::move(raw_cert));
}
CertCloseStore(cert_store, 0);
}
// Main function to load raw system certificates on Windows
// Returns certificates as raw DER data to avoid OpenSSL header conflicts
extern void us_load_system_certificates_windows_raw(
std::vector<RawCertificate>& raw_certs) {
// Load only from ROOT by default
LoadRawCertsFromStore(raw_certs, CERT_SYSTEM_STORE_CURRENT_USER, L"ROOT");
LoadRawCertsFromStore(raw_certs, CERT_SYSTEM_STORE_LOCAL_MACHINE, L"ROOT");
}
#endif // _WIN32
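The OpenSSL-facing half of this split is not shown in the diff. A plausible sketch (an assumption, not the actual implementation) of `us_load_system_certificates_windows`, living in a translation unit that can include OpenSSL headers without clashing with <windows.h> macros:

#include <openssl/x509.h>
#include <vector>

struct RawCertificate { std::vector<unsigned char> data; };
extern void us_load_system_certificates_windows_raw(std::vector<RawCertificate>& raw_certs);

extern "C" void us_load_system_certificates_windows(STACK_OF(X509)** system_certs) {
    *system_certs = sk_X509_new_null();
    if (*system_certs == nullptr) return;
    std::vector<RawCertificate> raw_certs;
    us_load_system_certificates_windows_raw(raw_certs);
    for (const RawCertificate& raw : raw_certs) {
        const unsigned char* cursor = raw.data.data();
        X509* cert = d2i_X509(nullptr, &cursor, (long)raw.data.size());
        if (cert && !sk_X509_push(*system_certs, cert)) X509_free(cert);
    }
}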

View File

@@ -627,9 +627,15 @@ public:
return std::move(*this);
}
void setOnClose(HttpContextData<SSL>::OnSocketClosedCallback onClose) {
void setOnSocketClosed(HttpContextData<SSL>::OnSocketClosedCallback onClose) {
httpContext->getSocketContextData()->onSocketClosed = onClose;
}
void setOnSocketDrain(HttpContextData<SSL>::OnSocketDrainCallback onDrain) {
httpContext->getSocketContextData()->onSocketDrain = onDrain;
}
void setOnSocketData(HttpContextData<SSL>::OnSocketDataCallback onData) {
httpContext->getSocketContextData()->onSocketData = onData;
}
void setOnClientError(HttpContextData<SSL>::OnClientErrorCallback onClientError) {
httpContext->getSocketContextData()->onClientError = std::move(onClientError);

View File

@@ -193,23 +193,32 @@ private:
auto *httpResponseData = reinterpret_cast<HttpResponseData<SSL> *>(us_socket_ext(SSL, s));
/* Call filter */
HttpContextData<SSL> *httpContextData = getSocketContextDataS(s);
if(httpResponseData && httpResponseData->isConnectRequest) {
if (httpResponseData->socketData && httpContextData->onSocketData) {
httpContextData->onSocketData(httpResponseData->socketData, SSL, s, "", 0, true);
}
if(httpResponseData->inStream) {
httpResponseData->inStream(reinterpret_cast<HttpResponse<SSL> *>(s), "", 0, true, httpResponseData->userData);
httpResponseData->inStream = nullptr;
}
}
for (auto &f : httpContextData->filterHandlers) {
f((HttpResponse<SSL> *) s, -1);
}
if (httpResponseData->socketData && httpContextData->onSocketClosed) {
httpContextData->onSocketClosed(httpResponseData->socketData, SSL, s);
}
/* Signal broken HTTP request only if we have a pending request */
if (httpResponseData->onAborted != nullptr && httpResponseData->userData != nullptr) {
httpResponseData->onAborted((HttpResponse<SSL> *)s, httpResponseData->userData);
}
if (httpResponseData->socketData && httpContextData->onSocketClosed) {
httpContextData->onSocketClosed(httpResponseData->socketData, SSL, s);
}
/* Destruct socket ext */
httpResponseData->~HttpResponseData<SSL>();
@@ -254,7 +263,9 @@ private:
/* The return value is entirely up to us to interpret. The HttpParser cares only for whether the returned value is DIFFERENT from passed user */
auto result = httpResponseData->consumePostPadded(httpContextData->maxHeaderSize, httpContextData->flags.requireHostHeader,httpContextData->flags.useStrictMethodValidation, data, (unsigned int) length, s, proxyParser, [httpContextData](void *s, HttpRequest *httpRequest) -> void * {
auto result = httpResponseData->consumePostPadded(httpContextData->maxHeaderSize, httpResponseData->isConnectRequest, httpContextData->flags.requireHostHeader,httpContextData->flags.useStrictMethodValidation, data, (unsigned int) length, s, proxyParser, [httpContextData](void *s, HttpRequest *httpRequest) -> void * {
/* For every request we reset the timeout and hang until user makes action */
/* Warning: if we are in shutdown state, resetting the timer is a security issue! */
us_socket_timeout(SSL, (us_socket_t *) s, 0);
@@ -330,7 +341,12 @@ private:
/* Continue parsing */
return s;
}, [httpResponseData](void *user, std::string_view data, bool fin) -> void * {
}, [httpResponseData, httpContextData](void *user, std::string_view data, bool fin) -> void * {
if (httpResponseData->isConnectRequest && httpResponseData->socketData && httpContextData->onSocketData) {
httpContextData->onSocketData(httpResponseData->socketData, SSL, (struct us_socket_t *) user, data.data(), data.length(), fin);
}
/* We always get an empty chunk even if there is no data */
if (httpResponseData->inStream) {
@@ -449,7 +465,7 @@ private:
us_socket_context_on_writable(SSL, getSocketContext(), [](us_socket_t *s) {
auto *asyncSocket = reinterpret_cast<AsyncSocket<SSL> *>(s);
auto *httpResponseData = reinterpret_cast<HttpResponseData<SSL> *>(asyncSocket->getAsyncSocketData());
/* Attempt to drain the socket buffer before triggering onWritable callback */
size_t bufferedAmount = asyncSocket->getBufferedAmount();
if (bufferedAmount > 0) {
@@ -470,6 +486,12 @@ private:
*/
}
auto *httpContextData = getSocketContextDataS(s);
if (httpResponseData->isConnectRequest && httpResponseData->socketData && httpContextData->onSocketDrain) {
httpContextData->onSocketDrain(httpResponseData->socketData, SSL, (struct us_socket_t *) s);
}
/* Ask the developer to write data and return success (true) or failure (false), OR skip sending anything and return success (true). */
if (httpResponseData->onWritable) {
/* We are now writable, so hang timeout again, the user does not have to do anything so we should hang until end or tryEnd rearms timeout */
@@ -514,6 +536,7 @@ private:
us_socket_context_on_end(SSL, getSocketContext(), [](us_socket_t *s) {
auto *asyncSocket = reinterpret_cast<AsyncSocket<SSL> *>(s);
asyncSocket->uncorkWithoutSending();
/* We do not care for half closed sockets */
return asyncSocket->close();
});
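A hedged usage sketch of the three socket-lifecycle hooks wired up above; the `app` builder object is hypothetical, and the pointer signatures come from HttpContextData (shown in the next file). Captureless lambdas decay to the plain function pointers the setters expect.

app.setOnSocketData([](void* userData, int is_ssl, us_socket_t* s,
                       const char* data, int length, bool last) {
    // Raw bytes arriving after the CONNECT headers; `last` marks the final chunk.
});
app.setOnSocketDrain([](void* userData, int is_ssl, us_socket_t* s) {
    // The write buffer drained; safe to resume writing.
});
app.setOnSocketClosed([](void* userData, int is_ssl, us_socket_t* s) {
    // The underlying socket closed; `userData` is the attached socketData.
});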

View File

@@ -44,7 +44,10 @@ struct alignas(16) HttpContextData {
private:
std::vector<MoveOnlyFunction<void(HttpResponse<SSL> *, int)>> filterHandlers;
using OnSocketClosedCallback = void (*)(void* userData, int is_ssl, struct us_socket_t *rawSocket);
using OnSocketDataCallback = void (*)(void* userData, int is_ssl, struct us_socket_t *rawSocket, const char *data, int length, bool last);
using OnSocketDrainCallback = void (*)(void* userData, int is_ssl, struct us_socket_t *rawSocket);
using OnClientErrorCallback = MoveOnlyFunction<void(int is_ssl, struct us_socket_t *rawSocket, uWS::HttpParserError errorCode, char *rawPacket, int rawPacketLength)>;
MoveOnlyFunction<void(const char *hostname)> missingServerNameHandler;
@@ -61,6 +64,8 @@ private:
void *upgradedWebSocket = nullptr;
/* Used to simulate Node.js socket events. */
OnSocketClosedCallback onSocketClosed = nullptr;
OnSocketDrainCallback onSocketDrain = nullptr;
OnSocketDataCallback onSocketData = nullptr;
OnClientErrorCallback onClientError = nullptr;
uint64_t maxHeaderSize = 0; // 0 means no limit

View File

@@ -117,18 +117,19 @@ namespace uWS
struct ConsumeRequestLineResult {
char *position;
bool isAncientHTTP;
bool isConnect;
HTTPHeaderParserError headerParserError;
public:
static ConsumeRequestLineResult error(HTTPHeaderParserError error) {
return ConsumeRequestLineResult{nullptr, false, error};
return ConsumeRequestLineResult{nullptr, false, false, error};
}
static ConsumeRequestLineResult success(char *position, bool isAncientHTTP = false) {
return ConsumeRequestLineResult{position, isAncientHTTP, HTTP_HEADER_PARSER_ERROR_NONE};
static ConsumeRequestLineResult success(char *position, bool isAncientHTTP = false, bool isConnect = false) {
return ConsumeRequestLineResult{position, isAncientHTTP, isConnect, HTTP_HEADER_PARSER_ERROR_NONE};
}
static ConsumeRequestLineResult shortRead(bool isAncientHTTP = false) {
return ConsumeRequestLineResult{nullptr, isAncientHTTP, HTTP_HEADER_PARSER_ERROR_NONE};
static ConsumeRequestLineResult shortRead(bool isAncientHTTP = false, bool isConnect = false) {
return ConsumeRequestLineResult{nullptr, isAncientHTTP, isConnect, HTTP_HEADER_PARSER_ERROR_NONE};
}
bool isErrorOrShortRead() {
@@ -551,7 +552,10 @@ namespace uWS
return ConsumeRequestLineResult::shortRead();
}
if (data[0] == 32 && (__builtin_expect(data[1] == '/', 1) || isHTTPorHTTPSPrefixForProxies(data + 1, end) == 1)) [[likely]] {
bool isHTTPMethod = (__builtin_expect(data[1] == '/', 1));
bool isConnect = !isHTTPMethod && (isHTTPorHTTPSPrefixForProxies(data + 1, end) == 1 || ((data - start) == 7 && memcmp(start, "CONNECT", 7) == 0));
if (isHTTPMethod || isConnect) [[likely]] {
header.key = {start, (size_t) (data - start)};
data++;
if(!isValidMethod(header.key, useStrictMethodValidation)) {
@@ -577,22 +581,22 @@ namespace uWS
if (nextPosition >= end) {
/* Whatever we have must be part of the version string */
if (memcmp(" HTTP/1.1\r\n", data, std::min<unsigned int>(11, (unsigned int) (end - data))) == 0) {
return ConsumeRequestLineResult::shortRead();
return ConsumeRequestLineResult::shortRead(false, isConnect);
} else if (memcmp(" HTTP/1.0\r\n", data, std::min<unsigned int>(11, (unsigned int) (end - data))) == 0) {
/*Indicates that the request line is ancient HTTP*/
return ConsumeRequestLineResult::shortRead(true);
return ConsumeRequestLineResult::shortRead(true, isConnect);
}
return ConsumeRequestLineResult::error(HTTP_HEADER_PARSER_ERROR_INVALID_HTTP_VERSION);
}
if (memcmp(" HTTP/1.1\r\n", data, 11) == 0) {
return ConsumeRequestLineResult::success(nextPosition);
return ConsumeRequestLineResult::success(nextPosition, false, isConnect);
} else if (memcmp(" HTTP/1.0\r\n", data, 11) == 0) {
/*Indicates that the request line is ancient HTTP*/
return ConsumeRequestLineResult::success(nextPosition, true);
return ConsumeRequestLineResult::success(nextPosition, true, isConnect);
}
/* If we stand at the post padded CR, we have fragmented input so try again later */
if (data[0] == '\r') {
return ConsumeRequestLineResult::shortRead();
return ConsumeRequestLineResult::shortRead(false, isConnect);
}
/* This is an error */
return ConsumeRequestLineResult::error(HTTP_HEADER_PARSER_ERROR_INVALID_HTTP_VERSION);
@@ -602,14 +606,14 @@ namespace uWS
/* If we stand at the post padded CR, we have fragmented input so try again later */
if (data[0] == '\r') {
return ConsumeRequestLineResult::shortRead();
return ConsumeRequestLineResult::shortRead(false, isConnect);
}
if (data[0] == 32) {
switch (isHTTPorHTTPSPrefixForProxies(data + 1, end)) {
// If we haven't received enough data to check if it's http:// or https://, let's try again later
case -1:
return ConsumeRequestLineResult::shortRead();
return ConsumeRequestLineResult::shortRead(false, isConnect);
// Otherwise, if it's not http:// or https://, return 400
default:
return ConsumeRequestLineResult::error(HTTP_HEADER_PARSER_ERROR_INVALID_REQUEST);
@@ -635,7 +639,7 @@ namespace uWS
}
/* End is only used for the proxy parser. The HTTP parser recognizes "\ra" as invalid "\r\n" scan and breaks. */
static HttpParserResult getHeaders(char *postPaddedBuffer, char *end, struct HttpRequest::Header *headers, void *reserved, bool &isAncientHTTP, bool useStrictMethodValidation, uint64_t maxHeaderSize) {
static HttpParserResult getHeaders(char *postPaddedBuffer, char *end, struct HttpRequest::Header *headers, void *reserved, bool &isAncientHTTP, bool &isConnectRequest, bool useStrictMethodValidation, uint64_t maxHeaderSize) {
char *preliminaryKey, *preliminaryValue, *start = postPaddedBuffer;
#ifdef UWS_WITH_PROXY
/* ProxyParser is passed as reserved parameter */
@@ -689,6 +693,9 @@ namespace uWS
if(requestLineResult.isAncientHTTP) {
isAncientHTTP = true;
}
if(requestLineResult.isConnect) {
isConnectRequest = true;
}
/* No request headers found */
const char * headerStart = (headers[0].key.length() > 0) ? headers[0].key.data() : end;
@@ -798,7 +805,7 @@ namespace uWS
/* This is the only caller of getHeaders and is thus the deepest part of the parser. */
template <bool ConsumeMinimally>
HttpParserResult fenceAndConsumePostPadded(uint64_t maxHeaderSize, bool requireHostHeader, bool useStrictMethodValidation, char *data, unsigned int length, void *user, void *reserved, HttpRequest *req, MoveOnlyFunction<void *(void *, HttpRequest *)> &requestHandler, MoveOnlyFunction<void *(void *, std::string_view, bool)> &dataHandler) {
HttpParserResult fenceAndConsumePostPadded(uint64_t maxHeaderSize, bool& isConnectRequest, bool requireHostHeader, bool useStrictMethodValidation, char *data, unsigned int length, void *user, void *reserved, HttpRequest *req, MoveOnlyFunction<void *(void *, HttpRequest *)> &requestHandler, MoveOnlyFunction<void *(void *, std::string_view, bool)> &dataHandler) {
/* How much data we CONSUMED (to throw away) */
unsigned int consumedTotal = 0;
@@ -809,7 +816,7 @@ namespace uWS
data[length + 1] = 'a'; /* Anything that is not \n, to trigger "invalid request" */
req->ancientHttp = false;
for (;length;) {
auto result = getHeaders(data, data + length, req->headers, reserved, req->ancientHttp, useStrictMethodValidation, maxHeaderSize);
auto result = getHeaders(data, data + length, req->headers, reserved, req->ancientHttp, isConnectRequest, useStrictMethodValidation, maxHeaderSize);
if(result.isError()) {
return result;
}
@@ -916,6 +923,10 @@ namespace uWS
length -= emittable;
consumedTotal += emittable;
}
} else if(isConnectRequest) {
// This only serves to mark that the CONNECT request has read all of its
// headers and can start emitting data
remainingStreamingBytes = STATE_IS_CHUNKED;
} else {
/* If we came here without a body; emit an empty data chunk to signal no data */
dataHandler(user, {}, true);
@@ -931,15 +942,16 @@ namespace uWS
}
public:
HttpParserResult consumePostPadded(uint64_t maxHeaderSize, bool requireHostHeader, bool useStrictMethodValidation, char *data, unsigned int length, void *user, void *reserved, MoveOnlyFunction<void *(void *, HttpRequest *)> &&requestHandler, MoveOnlyFunction<void *(void *, std::string_view, bool)> &&dataHandler) {
HttpParserResult consumePostPadded(uint64_t maxHeaderSize, bool& isConnectRequest, bool requireHostHeader, bool useStrictMethodValidation, char *data, unsigned int length, void *user, void *reserved, MoveOnlyFunction<void *(void *, HttpRequest *)> &&requestHandler, MoveOnlyFunction<void *(void *, std::string_view, bool)> &&dataHandler) {
/* This resets BloomFilter by construction, but later we also reset it again.
* Optimize this to skip resetting twice (req could be made global) */
HttpRequest req;
if (remainingStreamingBytes) {
/* It's either chunked or with a content-length */
if (isParsingChunkedEncoding(remainingStreamingBytes)) {
if (isConnectRequest) {
dataHandler(user, std::string_view(data, length), false);
return HttpParserResult::success(0, user);
} else if (isParsingChunkedEncoding(remainingStreamingBytes)) {
/* It's either chunked or with a content-length */
std::string_view dataToConsume(data, length);
for (auto chunk : uWS::ChunkIterator(&dataToConsume, &remainingStreamingBytes)) {
dataHandler(user, chunk, chunk.length() == 0);
@@ -950,6 +962,7 @@ public:
data = (char *) dataToConsume.data();
length = (unsigned int) dataToConsume.length();
} else {
// this is exactly the same as below!
// todo: refactor this
if (remainingStreamingBytes >= length) {
@@ -980,7 +993,7 @@ public:
fallback.append(data, maxCopyDistance);
// break here on break
HttpParserResult consumed = fenceAndConsumePostPadded<true>(maxHeaderSize, requireHostHeader, useStrictMethodValidation, fallback.data(), (unsigned int) fallback.length(), user, reserved, &req, requestHandler, dataHandler);
HttpParserResult consumed = fenceAndConsumePostPadded<true>(maxHeaderSize, isConnectRequest, requireHostHeader, useStrictMethodValidation, fallback.data(), (unsigned int) fallback.length(), user, reserved, &req, requestHandler, dataHandler);
/* Return data will be different than user if we are upgraded to WebSocket or have an error */
if (consumed.returnedData != user) {
return consumed;
@@ -997,8 +1010,11 @@ public:
length -= consumedBytes - had;
if (remainingStreamingBytes) {
/* It's either chunked or with a content-length */
if (isParsingChunkedEncoding(remainingStreamingBytes)) {
if(isConnectRequest) {
dataHandler(user, std::string_view(data, length), false);
return HttpParserResult::success(0, user);
} else if (isParsingChunkedEncoding(remainingStreamingBytes)) {
/* It's either chunked or with a content-length */
std::string_view dataToConsume(data, length);
for (auto chunk : uWS::ChunkIterator(&dataToConsume, &remainingStreamingBytes)) {
dataHandler(user, chunk, chunk.length() == 0);
@@ -1037,7 +1053,7 @@ public:
}
}
HttpParserResult consumed = fenceAndConsumePostPadded<false>(maxHeaderSize, requireHostHeader, useStrictMethodValidation, data, length, user, reserved, &req, requestHandler, dataHandler);
HttpParserResult consumed = fenceAndConsumePostPadded<false>(maxHeaderSize, isConnectRequest, requireHostHeader, useStrictMethodValidation, data, length, user, reserved, &req, requestHandler, dataHandler);
/* Return data will be different than user if we are upgraded to WebSocket or have an error */
if (consumed.returnedData != user) {
return consumed;
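Taken together, the CONNECT handling above works roughly like this (a walkthrough of the hunks, not new behavior): given `CONNECT example.com:443 HTTP/1.1\r\nHost: example.com:443\r\n\r\n<tunnel bytes>`, the request-line parser sets `isConnect`, `getHeaders` propagates it into `isConnectRequest`, and once the headers end, `remainingStreamingBytes` is set to the `STATE_IS_CHUNKED` sentinel purely to mark that header parsing is complete. Every subsequent read then short-circuits to `dataHandler(user, data, false)`, so tunnel payload bytes are never interpreted as HTTP.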

View File

@@ -243,7 +243,7 @@ public:
/* Manually upgrade to WebSocket. Typically called in upgrade handler. Immediately calls open handler.
* NOTE: Will invalidate 'this' as socket might change location in memory. Throw away after use. */
template <typename UserData>
us_socket_t *upgrade(UserData &&userData, std::string_view secWebSocketKey, std::string_view secWebSocketProtocol,
us_socket_t *upgrade(UserData&& userData, std::string_view secWebSocketKey, std::string_view secWebSocketProtocol,
std::string_view secWebSocketExtensions,
struct us_socket_context_t *webSocketContext) {
@@ -350,7 +350,8 @@ public:
us_socket_timeout(SSL, (us_socket_t *) webSocket, webSocketContextData->idleTimeoutComponents.first);
/* Move construct the UserData right before calling open handler */
new (webSocket->getUserData()) UserData(std::move(userData));
new (webSocket->getUserData()) UserData(std::forward<UserData>(userData));
/* Emit open event and start the timeout */
if (webSocketContextData->openHandler) {
@@ -741,6 +742,10 @@ public:
return httpResponseData->socketData;
}
bool isConnectRequest() {
HttpResponseData<SSL> *httpResponseData = getHttpResponseData();
return httpResponseData->isConnectRequest;
}
void setWriteOffset(uint64_t offset) {
HttpResponseData<SSL> *httpResponseData = getHttpResponseData();
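The `std::move` → `std::forward` change above is the classic perfect-forwarding fix: with a forwarding reference `UserData&&`, `std::move` would turn an lvalue argument into an rvalue and move from the caller's object, while `std::forward` moves only when the caller actually passed an rvalue. A minimal sketch of the distinction:

#include <string>
#include <type_traits>
#include <utility>

template <typename T>
std::decay_t<T> store(T&& value) {
    // std::forward preserves the caller's value category: copies from
    // lvalues, moves only when the caller passed an rvalue.
    return std::decay_t<T>(std::forward<T>(value));
}

int main() {
    std::string s = "keep me";
    auto a = store(s);            // s is copied and remains valid
    auto b = store(std::move(s)); // s is moved from
}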

View File

@@ -108,6 +108,7 @@ struct HttpResponseData : AsyncSocketData<SSL>, HttpParser {
uint8_t state = 0;
uint8_t idleTimeout = 10; // default HTTP_TIMEOUT 10 seconds
bool fromAncientRequest = false;
bool isConnectRequest = false;
bool isIdle = true;
bool shouldCloseOnceIdle = false;

View File

@@ -22,7 +22,7 @@ At its core is the _Bun runtime_, a fast JavaScript runtime designed as a drop-i
## Features:
- Live in-editor error messages (gif below)
- Test runner codelens
- VS Code test runner support
- Debugger support
- Run scripts from package.json
- Visual lockfile viewer for old binary lockfiles (`bun.lockb`)

View File

@@ -1,6 +1,6 @@
{
"name": "bun-vscode",
"version": "0.0.29",
"version": "0.0.31",
"author": "oven",
"repository": {
"type": "git",
@@ -116,20 +116,6 @@
"category": "Bun",
"enablement": "!inDebugMode && resourceLangId =~ /^(javascript|typescript|javascriptreact|typescriptreact)$/ && !isInDiffEditor && resourceScheme == 'untitled'",
"icon": "$(play-circle)"
},
{
"command": "extension.bun.runTest",
"title": "Run all tests",
"shortTitle": "Run Test",
"category": "Bun",
"icon": "$(play)"
},
{
"command": "extension.bun.watchTest",
"title": "Run all tests in watch mode",
"shortTitle": "Run Test Watch",
"category": "Bun",
"icon": "$(sync)"
}
],
"menus": {

View File

@@ -1,5 +1,5 @@
#!/bin/sh
# Version: 18
# Version: 19
# A script that installs the dependencies needed to build and test Bun.
# This should work on macOS and Linux with a POSIX shell.
@@ -685,6 +685,8 @@ install_common_software() {
apt-transport-https \
software-properties-common
fi
install_packages \
libc6-dbg
;;
dnf)
install_packages \
@@ -1193,7 +1195,7 @@ install_docker() {
execute_sudo amazon-linux-extras install docker
;;
amzn-* | alpine-*)
install_packages docker
install_packages docker docker-cli-compose
;;
*)
sh="$(require sh)"
@@ -1208,10 +1210,17 @@ install_docker() {
if [ -f "$systemctl" ]; then
execute_sudo "$systemctl" enable docker
fi
if [ "$os" = "linux" ] && [ "$distro" = "alpine" ]; then
execute doas rc-update add docker default
execute doas rc-service docker start
fi
getent="$(which getent)"
if [ -n "$("$getent" group docker)" ]; then
usermod="$(which usermod)"
if [ -z "$usermod" ]; then
usermod="$(sudo which usermod)"
fi
if [ -f "$usermod" ]; then
execute_sudo "$usermod" -aG docker "$user"
fi

View File

@@ -72,6 +72,7 @@ const cwd = import.meta.dirname ? dirname(import.meta.dirname) : process.cwd();
const testsPath = join(cwd, "test");
const spawnTimeout = 5_000;
const spawnBunTimeout = 20_000; // when running with ASAN/LSAN bun can take a bit longer to exit, not a bug.
const testTimeout = 3 * 60_000;
const integrationTimeout = 5 * 60_000;
@@ -79,7 +80,7 @@ function getNodeParallelTestTimeout(testPath) {
if (testPath.includes("test-dns")) {
return 90_000;
}
return 10_000;
return 20_000;
}
process.on("SIGTRAP", () => {
@@ -578,8 +579,11 @@ async function runTests() {
const title = relative(cwd, absoluteTestPath).replaceAll(sep, "/");
if (isNodeTest(testPath)) {
const testContent = readFileSync(absoluteTestPath, "utf-8");
const runWithBunTest =
title.includes("needs-test") || testContent.includes("bun:test") || testContent.includes("node:test");
let runWithBunTest = title.includes("needs-test") || testContent.includes("node:test");
// we don't want a filter for includes("bun:test"), but these tests need our mocks
runWithBunTest ||= title === "test/js/node/test/parallel/test-fs-append-file-flush.js";
runWithBunTest ||= title === "test/js/node/test/parallel/test-fs-write-file-flush.js";
runWithBunTest ||= title === "test/js/node/test/parallel/test-fs-write-stream-flush.js";
const subcommand = runWithBunTest ? "test" : "run";
const env = {
FORCE_COLOR: "0",
@@ -592,7 +596,7 @@ async function runTests() {
}
if ((basename(execPath).includes("asan") || !isCI) && shouldValidateLeakSan(testPath)) {
env.BUN_DESTRUCT_VM_ON_EXIT = "1";
env.ASAN_OPTIONS = "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=1";
env.ASAN_OPTIONS = "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=1:abort_on_error=1";
// prettier-ignore
env.LSAN_OPTIONS = `malloc_context_size=100:print_suppressions=0:suppressions=${process.cwd()}/test/leaksan.supp`;
}
@@ -657,6 +661,7 @@ async function runTests() {
const buildResult = await spawnBun(execPath, {
cwd: vendorPath,
args: ["run", "build"],
timeout: 60_000,
});
if (!buildResult.ok) {
throw new Error(`Failed to build vendor: ${buildResult.error}`);
@@ -683,6 +688,9 @@ async function runTests() {
}
}
// tests are all over, close the group from the final test. any further output should print ungrouped.
startGroup("End");
if (isGithubAction) {
reportOutputToGitHubAction("failing_tests_count", failedResults.length);
const markdown = formatTestToMarkdown(failedResults, false, 0);
@@ -1132,10 +1140,6 @@ async function spawnBun(execPath, { args, cwd, timeout, env, stdout, stderr }) {
: { BUN_ENABLE_CRASH_REPORTING: "0" }),
};
if (basename(execPath).includes("asan") && bunEnv.ASAN_OPTIONS === undefined) {
bunEnv.ASAN_OPTIONS = "allow_user_segv_handler=1:disable_coredump=0";
}
if (isWindows && bunEnv.Path) {
delete bunEnv.Path;
}
@@ -1152,6 +1156,9 @@ async function spawnBun(execPath, { args, cwd, timeout, env, stdout, stderr }) {
}
bunEnv["TEMP"] = tmpdirPath;
}
if (timeout === undefined) {
timeout = spawnBunTimeout;
}
try {
const existingCores = options["coredump-upload"] ? readdirSync(coresDir) : [];
const result = await spawnSafe({
@@ -1331,7 +1338,7 @@ async function spawnBunTest(execPath, testPath, opts = { cwd }) {
}
if ((basename(execPath).includes("asan") || !isCI) && shouldValidateLeakSan(relative(cwd, absPath))) {
env.BUN_DESTRUCT_VM_ON_EXIT = "1";
env.ASAN_OPTIONS = "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=1";
env.ASAN_OPTIONS = "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=1:abort_on_error=1";
// prettier-ignore
env.LSAN_OPTIONS = `malloc_context_size=100:print_suppressions=0:suppressions=${process.cwd()}/test/leaksan.supp`;
}

View File

@@ -2808,6 +2808,7 @@ export function endGroup() {
} else {
console.groupEnd();
}
// when a file exits with an ASAN error, there is no trailing newline so we add one here to make sure `console.group()` detection doesn't get broken in CI.
console.log();
}
@@ -2865,6 +2866,12 @@ export function printEnvironment() {
spawnSync([shell, "-c", "free -m -w"], { stdio: "inherit" });
}
});
startGroup("Docker", () => {
const shell = which(["sh", "bash"]);
if (shell) {
spawnSync([shell, "-c", "docker ps"], { stdio: "inherit" });
}
});
}
if (isWindows) {
startGroup("Disk (win)", () => {

View File

@@ -121,6 +121,10 @@ pub fn exit(code: u32) noreturn {
std.os.windows.kernel32.ExitProcess(code);
},
else => {
if (Environment.enable_asan) {
std.c.exit(@bitCast(code));
std.c.abort(); // exit should be noreturn
}
bun.c.quick_exit(@bitCast(code));
std.c.abort(); // quick_exit should be noreturn
},
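A small illustration of why the ASAN branch above calls `exit` rather than `quick_exit`: normal exit runs atexit-registered handlers (which, as far as we know, is where LeakSanitizer performs its end-of-process leak check), while `quick_exit` skips them. A minimal C++ sketch:

#include <cstdlib>
#include <cstdio>

int main() {
    std::atexit([] { std::puts("atexit handler ran"); });
    std::exit(0);        // runs atexit handlers (and a sanitizer's leak check)
    // std::quick_exit(0); // would skip them; only at_quick_exit handlers run
}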

View File

@@ -199,7 +199,6 @@ pub const StandaloneModuleGraph = struct {
store.ref();
const b = bun.webcore.Blob.initWithStore(store, globalObject).new();
b.allocator = bun.default_allocator;
if (bun.http.MimeType.byExtensionNoDefault(bun.strings.trimLeadingChar(std.fs.path.extension(this.name), '.'))) |mime| {
store.mime_type = mime;
@@ -724,7 +723,8 @@ pub const StandaloneModuleGraph = struct {
return bun.invalid_fd;
};
defer pe_file.deinit();
pe_file.addBunSection(bytes) catch |err| {
// Always strip authenticode when adding .bun section for --compile
pe_file.addBunSection(bytes, .strip_always) catch |err| {
Output.prettyErrorln("Error adding Bun section to PE file: {}", .{err});
cleanup(zname, cloned_executable_fd);
return bun.invalid_fd;

View File

@@ -919,6 +919,8 @@ pub const Default = struct {
_ = self;
return c_allocator;
}
pub const deinit = void;
};
const basic = if (bun.use_mimalloc)
@@ -926,6 +928,127 @@ const basic = if (bun.use_mimalloc)
else
@import("./allocators/fallback.zig");
pub fn stackFallback(comptime size: usize, fallback_allocator: std.mem.Allocator) StackFallbackAllocator(size) {
return StackFallbackAllocator(size){
.buffer = undefined,
.fallback_allocator = fallback_allocator,
.fixed_buffer_allocator = undefined,
.force_heap = if (comptime Environment.ci_assert)
!bun.getRuntimeFeatureFlag(.BUN_DEBUG_FORCE_HEAP_FALLBACK_ALLOCATORS)
else {},
};
}
/// An allocator that attempts to allocate using a
/// `FixedBufferAllocator` using an array of size `size`. If the
/// allocation fails, it will fall back to using
/// `fallback_allocator`. Easily created with `stackFallback`.
pub fn StackFallbackAllocator(comptime size: usize) type {
return struct {
const Self = @This();
buffer: [size]u8,
fallback_allocator: std.mem.Allocator,
fixed_buffer_allocator: std.heap.FixedBufferAllocator,
get_called: if (Environment.ci_assert) bool else void = if (Environment.ci_assert) false,
force_heap: if (Environment.ci_assert) bool else void,
/// This function both fetches a `Allocator` interface to this
/// allocator *and* resets the internal buffer allocator.
pub fn get(self: *Self) std.mem.Allocator {
if (comptime Environment.ci_assert) {
bun.assert(!self.get_called); // `get` called multiple times; instead use `const allocator = stackFallback(N).get();`
self.get_called = true;
}
self.fixed_buffer_allocator = std.heap.FixedBufferAllocator.init(self.buffer[0..]);
return .{
.ptr = self,
.vtable = &.{
.alloc = alloc,
.resize = resize,
.remap = remap,
.free = free,
},
};
}
fn alloc(
ctx: *anyopaque,
len: usize,
alignment: std.mem.Alignment,
ra: usize,
) ?[*]u8 {
const self: *Self = @ptrCast(@alignCast(ctx));
if (comptime Environment.ci_assert) {
if (self.force_heap) {
return self.fallback_allocator.rawAlloc(len, alignment, ra);
}
}
return std.heap.FixedBufferAllocator.alloc(&self.fixed_buffer_allocator, len, alignment, ra) orelse
return self.fallback_allocator.rawAlloc(len, alignment, ra);
}
fn resize(
ctx: *anyopaque,
buf: []u8,
alignment: std.mem.Alignment,
new_len: usize,
ra: usize,
) bool {
const self: *Self = @ptrCast(@alignCast(ctx));
if (comptime Environment.ci_assert) {
if (self.force_heap) {
return self.fallback_allocator.rawResize(buf, alignment, new_len, ra);
}
}
if (self.fixed_buffer_allocator.ownsPtr(buf.ptr)) {
return std.heap.FixedBufferAllocator.resize(&self.fixed_buffer_allocator, buf, alignment, new_len, ra);
} else {
return self.fallback_allocator.rawResize(buf, alignment, new_len, ra);
}
}
fn remap(
context: *anyopaque,
memory: []u8,
alignment: std.mem.Alignment,
new_len: usize,
return_address: usize,
) ?[*]u8 {
const self: *Self = @ptrCast(@alignCast(context));
if (comptime Environment.ci_assert) {
if (self.force_heap) {
return self.fallback_allocator.rawRemap(memory, alignment, new_len, return_address);
}
}
if (self.fixed_buffer_allocator.ownsPtr(memory.ptr)) {
return std.heap.FixedBufferAllocator.remap(&self.fixed_buffer_allocator, memory, alignment, new_len, return_address);
} else {
return self.fallback_allocator.rawRemap(memory, alignment, new_len, return_address);
}
}
fn free(
ctx: *anyopaque,
buf: []u8,
alignment: std.mem.Alignment,
ra: usize,
) void {
const self: *Self = @ptrCast(@alignCast(ctx));
if (comptime Environment.ci_assert) {
if (self.force_heap) {
return self.fallback_allocator.rawFree(buf, alignment, ra);
}
}
if (self.fixed_buffer_allocator.ownsPtr(buf.ptr)) {
return std.heap.FixedBufferAllocator.free(&self.fixed_buffer_allocator, buf, alignment, ra);
} else {
return self.fallback_allocator.rawFree(buf, alignment, ra);
}
}
};
}
const Environment = @import("./env.zig");
const std = @import("std");
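For readers coming from C++: a rough analogue of the stack-fallback allocator above (a sketch, not Bun code). The `force_heap` escape hatch in the Zig version appears to exist so CI-assert builds can route every allocation through the heap, where sanitizers can catch pointers that outlive the stack buffer.

#include <cstddef>
#include <cstdlib>

// Sketch: serve small allocations from a fixed buffer, fall back to the
// heap once it is exhausted. Alignment and freeing are left to the owner.
template <std::size_t N>
struct StackFallback {
    alignas(std::max_align_t) unsigned char buffer[N];
    std::size_t used = 0;
    bool force_heap = false; // analogous to the CI-assert flag above

    void* alloc(std::size_t len) {
        if (!force_heap && used + len <= N) {
            void* p = buffer + used;
            used += len;
            return p;
        }
        return std::malloc(len); // heap fallback; caller must free() these
    }
    bool owns(const void* p) const {
        const unsigned char* q = static_cast<const unsigned char*>(p);
        return q >= buffer && q < buffer + N;
    }
};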

View File

@@ -94,6 +94,8 @@ const BorrowedHeap = if (safety_checks) *DebugHeap else *mimalloc.Heap;
const DebugHeap = struct {
inner: *mimalloc.Heap,
thread_lock: bun.safety.ThreadLock,
pub const deinit = void;
};
threadlocal var thread_heap: if (safety_checks) ?DebugHeap else void = if (safety_checks) null;

View File

@@ -506,6 +506,12 @@ pub fn AllocationScopeIn(comptime Allocator: type) type {
pub fn setPointerExtra(self: Self, ptr: *anyopaque, extra: Extra) void {
return self.borrow().setPointerExtra(ptr, extra);
}
pub fn leakSlice(self: Self, memory: anytype) void {
if (comptime !Self.enabled) return;
_ = @typeInfo(@TypeOf(memory)).pointer;
self.trackExternalFree(memory, null) catch @panic("tried to free memory that was not allocated by the allocation scope");
}
};
}

View File

@@ -112,6 +112,7 @@ pub const Features = struct {
pub var unsupported_uv_function: usize = 0;
pub var exited: usize = 0;
pub var yarn_migration: usize = 0;
pub var pnpm_migration: usize = 0;
pub var yaml_parse: usize = 0;
comptime {

View File

@@ -752,7 +752,7 @@ pub const Object = struct {
pub fn hasProperty(obj: *const Object, name: string) bool {
for (obj.properties.slice()) |prop| {
const key = prop.key orelse continue;
if (std.meta.activeTag(key.data) != .e_string) continue;
if (key.data != .e_string) continue;
if (key.data.e_string.eql(string, name)) return true;
}
return false;
@@ -762,7 +762,7 @@ pub const Object = struct {
for (obj.properties.slice(), 0..) |prop, i| {
const value = prop.value orelse continue;
const key = prop.key orelse continue;
if (std.meta.activeTag(key.data) != .e_string) continue;
if (key.data != .e_string) continue;
const key_str = key.data.e_string;
if (key_str.eql(string, name)) {
return Expr.Query{

View File

@@ -132,14 +132,14 @@ pub fn isEmpty(expr: Expr) bool {
pub const Query = struct { expr: Expr, loc: logger.Loc, i: u32 = 0 };
pub fn hasAnyPropertyNamed(expr: *const Expr, comptime names: []const string) bool {
if (std.meta.activeTag(expr.data) != .e_object) return false;
if (expr.data != .e_object) return false;
const obj = expr.data.e_object;
if (obj.properties.len == 0) return false;
for (obj.properties.slice()) |prop| {
if (prop.value == null) continue;
const key = prop.key orelse continue;
if (std.meta.activeTag(key.data) != .e_string) continue;
if (key.data != .e_string) continue;
const key_str = key.data.e_string;
if (strings.eqlAnyComptime(key_str.data, names)) return true;
}
@@ -266,7 +266,7 @@ pub fn set(expr: *Expr, allocator: std.mem.Allocator, name: string, value: Expr)
for (0..expr.data.e_object.properties.len) |i| {
const prop = &expr.data.e_object.properties.ptr[i];
const key = prop.key orelse continue;
if (std.meta.activeTag(key.data) != .e_string) continue;
if (key.data != .e_string) continue;
if (key.data.e_string.eql(string, name)) {
prop.value = value;
return;
@@ -288,7 +288,7 @@ pub fn setString(expr: *Expr, allocator: std.mem.Allocator, name: string, value:
for (0..expr.data.e_object.properties.len) |i| {
const prop = &expr.data.e_object.properties.ptr[i];
const key = prop.key orelse continue;
if (std.meta.activeTag(key.data) != .e_string) continue;
if (key.data != .e_string) continue;
if (key.data.e_string.eql(string, name)) {
prop.value = Expr.init(E.String, .{ .data = value }, logger.Loc.Empty);
return;
@@ -310,6 +310,15 @@ pub fn getObject(expr: *const Expr, name: string) ?Expr {
return null;
}
pub fn getBoolean(expr: *const Expr, name: string) ?bool {
if (expr.asProperty(name)) |query| {
if (query.expr.data == .e_boolean) {
return query.expr.data.e_boolean.value;
}
}
return null;
}
pub fn getString(expr: *const Expr, allocator: std.mem.Allocator, name: string) OOM!?struct { string, logger.Loc } {
if (asProperty(expr, name)) |q| {
if (q.expr.asString(allocator)) |str| {
@@ -385,7 +394,7 @@ pub fn getRope(self: *const Expr, rope: *const E.Object.Rope) ?E.Object.RopeQuer
// Making this comptime bloats the binary and doesn't seem to impact runtime performance.
pub fn asProperty(expr: *const Expr, name: string) ?Query {
if (std.meta.activeTag(expr.data) != .e_object) return null;
if (expr.data != .e_object) return null;
const obj = expr.data.e_object;
if (obj.properties.len == 0) return null;
@@ -393,7 +402,7 @@ pub fn asProperty(expr: *const Expr, name: string) ?Query {
}
pub fn asPropertyStringMap(expr: *const Expr, name: string, allocator: std.mem.Allocator) ?*bun.StringArrayHashMap(string) {
if (std.meta.activeTag(expr.data) != .e_object) return null;
if (expr.data != .e_object) return null;
const obj_ = expr.data.e_object;
if (obj_.properties.len == 0) return null;
const query = obj_.asProperty(name) orelse return null;
@@ -439,7 +448,7 @@ pub const ArrayIterator = struct {
};
pub fn asArray(expr: *const Expr) ?ArrayIterator {
if (std.meta.activeTag(expr.data) != .e_array) return null;
if (expr.data != .e_array) return null;
const array = expr.data.e_array;
if (array.items.len == 0) return null;
@@ -455,7 +464,7 @@ pub inline fn asUtf8StringLiteral(expr: *const Expr) ?string {
}
pub inline fn asStringLiteral(expr: *const Expr, allocator: std.mem.Allocator) ?string {
if (std.meta.activeTag(expr.data) != .e_string) return null;
if (expr.data != .e_string) return null;
return expr.data.e_string.string(allocator) catch null;
}
@@ -501,7 +510,7 @@ pub inline fn asStringZ(expr: *const Expr, allocator: std.mem.Allocator) OOM!?st
pub fn asBool(
expr: *const Expr,
) ?bool {
if (std.meta.activeTag(expr.data) != .e_boolean) return null;
if (expr.data != .e_boolean) return null;
return expr.data.e_boolean.value;
}
@@ -522,7 +531,7 @@ const Serializable = struct {
};
pub fn isMissing(a: *const Expr) bool {
return std.meta.activeTag(a.data) == Expr.Tag.e_missing;
return a.data == Expr.Tag.e_missing;
}
// The goal of this function is to "rotate" the AST if it's possible to use the
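A note on the recurring change in this file: in Zig, comparing a tagged union against an enum literal (`expr.data != .e_object`) compares the active tag, so it is equivalent to the more verbose `std.meta.activeTag(...)` form being removed. Roughly the same idea in C++ terms (illustrative types only):

#include <variant>
#include <string>

using ExprData = std::variant<std::monostate, std::string, bool>;

// `key.data == .e_string` corresponds to asking which alternative is active:
static bool is_string(const ExprData& data) {
    return std::holds_alternative<std::string>(data);
}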

View File

@@ -325,7 +325,7 @@ pub const Runner = struct {
return _entry.value_ptr.*;
}
var blob_: ?jsc.WebCore.Blob = null;
var blob_: ?*const jsc.WebCore.Blob = null;
const mime_type: ?MimeType = null;
if (value.jsType() == .DOMWrapper) {
@@ -334,30 +334,23 @@ pub const Runner = struct {
} else if (value.as(jsc.WebCore.Request)) |resp| {
return this.run(try resp.getBlobWithoutCallFrame(this.global));
} else if (value.as(jsc.WebCore.Blob)) |resp| {
blob_ = resp.*;
blob_.?.allocator = null;
blob_ = resp;
} else if (value.as(bun.api.ResolveMessage) != null or value.as(bun.api.BuildMessage) != null) {
_ = this.macro.vm.uncaughtException(this.global, value, false);
return error.MacroFailed;
}
}
if (blob_) |*blob| {
const out_expr = Expr.fromBlob(
if (blob_) |blob| {
return Expr.fromBlob(
blob,
this.allocator,
mime_type,
this.log,
this.caller.loc,
) catch {
blob.deinit();
return error.MacroFailed;
};
if (out_expr.data == .e_string) {
blob.deinit();
}
return out_expr;
}
return Expr.init(E.String, E.String.empty, this.caller.loc);
@@ -371,41 +364,20 @@ pub const Runner = struct {
const _entry = this.visited.getOrPut(this.allocator, value) catch unreachable;
if (_entry.found_existing) {
switch (_entry.value_ptr.*.data) {
.e_object, .e_array => {
this.log.addErrorFmt(this.source, this.caller.loc, this.allocator, "converting circular structure to Bun AST is not implemented yet", .{}) catch unreachable;
return error.MacroFailed;
},
else => {},
}
return _entry.value_ptr.*;
}
var iter = try jsc.JSArrayIterator.init(value, this.global);
if (iter.len == 0) {
const result = Expr.init(
E.Array,
E.Array{
.items = ExprNodeList.empty,
.was_originally_macro = true,
},
this.caller.loc,
);
_entry.value_ptr.* = result;
return result;
}
// Process all array items
var array = this.allocator.alloc(Expr, iter.len) catch unreachable;
var out = Expr.init(
errdefer this.allocator.free(array);
const expr = Expr.init(
E.Array,
E.Array{
.items = ExprNodeList.empty,
.was_originally_macro = true,
},
E.Array{ .items = ExprNodeList.empty, .was_originally_macro = true },
this.caller.loc,
);
_entry.value_ptr.* = out;
errdefer this.allocator.free(array);
_entry.value_ptr.* = expr;
var i: usize = 0;
while (try iter.next()) |item| {
array[i] = try this.run(item);
@@ -413,24 +385,27 @@ pub const Runner = struct {
continue;
i += 1;
}
out.data.e_array.items = ExprNodeList.fromOwnedSlice(array);
_entry.value_ptr.* = out;
return out;
expr.data.e_array.items = ExprNodeList.fromOwnedSlice(array);
expr.data.e_array.items.len = @truncate(i);
return expr;
},
// TODO: optimize this
jsc.ConsoleObject.Formatter.Tag.Object => {
this.is_top_level = false;
const _entry = this.visited.getOrPut(this.allocator, value) catch unreachable;
if (_entry.found_existing) {
switch (_entry.value_ptr.*.data) {
.e_object, .e_array => {
this.log.addErrorFmt(this.source, this.caller.loc, this.allocator, "converting circular structure to Bun AST is not implemented yet", .{}) catch unreachable;
return error.MacroFailed;
},
else => {},
}
return _entry.value_ptr.*;
}
// Reserve a placeholder to break cycles.
const expr = Expr.init(
E.Object,
E.Object{ .properties = G.Property.List{}, .was_originally_macro = true },
this.caller.loc,
);
_entry.value_ptr.* = expr;
// SAFETY: tag ensures `value` is an object.
const obj = value.getObject() orelse unreachable;
var object_iter = try jsc.JSPropertyIterator(.{
@@ -439,36 +414,28 @@ pub const Runner = struct {
}).init(this.global, obj);
defer object_iter.deinit();
const out = _entry.value_ptr;
out.* = Expr.init(
E.Object,
E.Object{
.properties = bun.handleOom(
G.Property.List.initCapacity(this.allocator, object_iter.len),
),
.was_originally_macro = true,
},
this.caller.loc,
// Build properties list
var properties = bun.handleOom(
G.Property.List.initCapacity(this.allocator, object_iter.len),
);
const properties = &out.data.e_object.properties;
errdefer properties.clearAndFree(this.allocator);
while (try object_iter.next()) |prop| {
bun.assertf(
object_iter.i == properties.len,
"`properties` unexpectedly modified (length {d}, expected {d})",
.{ properties.len, object_iter.i },
);
properties.appendAssumeCapacity(G.Property{
const object_value = try this.run(object_iter.value);
properties.append(this.allocator, G.Property{
.key = Expr.init(
E.String,
E.String.init(prop.toOwnedSlice(this.allocator) catch unreachable),
this.caller.loc,
),
.value = try this.run(object_iter.value),
});
.value = object_value,
}) catch |err| bun.handleOom(err);
}
return out.*;
expr.data.e_object.properties = properties;
return expr;
},
.JSON => {
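The restructuring above follows a standard pattern for serializing possibly-cyclic object graphs: register a placeholder in the visited map before recursing, so a revisit resolves to the placeholder (or reports a circular-structure error) instead of recursing forever. A language-neutral sketch in C++ with hypothetical types:

#include <unordered_map>
#include <vector>

struct Node { std::vector<Node*> children; };
struct Expr { /* stands in for the converted AST node */ };

static Expr* convert(Node* node, std::unordered_map<Node*, Expr*>& visited) {
    if (auto it = visited.find(node); it != visited.end())
        return it->second;       // cycle or shared node: reuse the placeholder
    Expr* expr = new Expr{};     // reserve the placeholder first...
    visited.emplace(node, expr);
    for (Node* child : node->children)
        convert(child, visited); // ...then recurse safely
    return expr;
}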

View File

@@ -170,6 +170,21 @@ pub fn NewParser_(
dirname_ref: Ref = Ref.None,
import_meta_ref: Ref = Ref.None,
hmr_api_ref: Ref = Ref.None,
/// If bake is enabled and this is a server-side file, we want to use the
/// special `Response` class inside the `bun:app` built-in module to
/// support syntax like `return Response(<jsx />, {...})` or `return Response.render("/my-page")`
/// or `return Response.redirect("/other")`.
///
/// So we'll need to add a `import { Response } from 'bun:app'` to the
/// top of the file
///
/// We need to declare this `response_ref` upfront
response_ref: Ref = Ref.None,
/// We also need to declare the namespace ref for `bun:app` and attach
/// it to the symbol so that generated `e_import_identifier`s hold the namespace
bun_app_namespace_ref: Ref = Ref.None,
scopes_in_order_visitor_index: usize = 0,
has_classic_runtime_warned: bool = false,
macro_call_count: MacroCallCountType = 0,
@@ -954,7 +969,7 @@ pub fn NewParser_(
switch (call.target.data) {
.e_identifier => |ident| {
// is this a require("something")
if (strings.eqlComptime(p.loadNameFromRef(ident.ref), "require") and call.args.len == 1 and std.meta.activeTag(call.args.ptr[0].data) == .e_string) {
if (strings.eqlComptime(p.loadNameFromRef(ident.ref), "require") and call.args.len == 1 and call.args.ptr[0].data == .e_string) {
_ = p.addImportRecord(.require, loc, call.args.at(0).data.e_string.string(p.allocator) catch unreachable);
}
},
@@ -970,7 +985,7 @@ pub fn NewParser_(
switch (call.target.data) {
.e_identifier => |ident| {
// is this a require("something")
if (strings.eqlComptime(p.loadNameFromRef(ident.ref), "require") and call.args.len == 1 and std.meta.activeTag(call.args.ptr[0].data) == .e_string) {
if (strings.eqlComptime(p.loadNameFromRef(ident.ref), "require") and call.args.len == 1 and call.args.ptr[0].data == .e_string) {
_ = p.addImportRecord(.require, loc, call.args.at(0).data.e_string.string(p.allocator) catch unreachable);
}
},
@@ -1220,6 +1235,81 @@ pub fn NewParser_(
};
}
pub fn generateImportStmtForBakeResponse(
noalias p: *P,
parts: *ListManaged(js_ast.Part),
) !void {
bun.assert(!p.response_ref.isNull());
bun.assert(!p.bun_app_namespace_ref.isNull());
const allocator = p.allocator;
const import_path = "bun:app";
const import_record_i = p.addImportRecordByRange(.stmt, logger.Range.None, import_path);
var declared_symbols = DeclaredSymbol.List{};
try declared_symbols.ensureTotalCapacity(allocator, 2);
var stmts = try allocator.alloc(Stmt, 1);
declared_symbols.appendAssumeCapacity(
DeclaredSymbol{ .ref = p.bun_app_namespace_ref, .is_top_level = true },
);
try p.module_scope.generated.append(allocator, p.bun_app_namespace_ref);
const clause_items = try allocator.dupe(js_ast.ClauseItem, &.{
js_ast.ClauseItem{
.alias = "Response",
.original_name = "Response",
.alias_loc = logger.Loc{},
.name = LocRef{ .ref = p.response_ref, .loc = logger.Loc{} },
},
});
declared_symbols.appendAssumeCapacity(DeclaredSymbol{
.ref = p.response_ref,
.is_top_level = true,
});
// ensure every e_import_identifier holds the namespace
if (p.options.features.hot_module_reloading) {
const symbol = &p.symbols.items[p.response_ref.inner_index];
bun.assert(symbol.namespace_alias != null);
symbol.namespace_alias.?.import_record_index = import_record_i;
}
try p.is_import_item.put(allocator, p.response_ref, {});
try p.named_imports.put(allocator, p.response_ref, js_ast.NamedImport{
.alias = "Response",
.alias_loc = logger.Loc{},
.namespace_ref = p.bun_app_namespace_ref,
.import_record_index = import_record_i,
});
stmts[0] = p.s(
S.Import{
.namespace_ref = p.bun_app_namespace_ref,
.items = clause_items,
.import_record_index = import_record_i,
.is_single_line = true,
},
logger.Loc{},
);
var import_records = try allocator.alloc(u32, 1);
import_records[0] = import_record_i;
// This import is placed in a part before the main code; however, the
// bundler ends up re-ordering it to come after... The order does not
// matter, as ESM imports are always hoisted.
parts.append(js_ast.Part{
.stmts = stmts,
.declared_symbols = declared_symbols,
.import_record_indices = bun.BabyList(u32).fromOwnedSlice(import_records),
.tag = .runtime,
}) catch unreachable;
}
pub fn generateImportStmt(
noalias p: *P,
import_path: string,
@@ -1227,7 +1317,7 @@ pub fn NewParser_(
parts: *ListManaged(js_ast.Part),
symbols: anytype,
additional_stmt: ?Stmt,
comptime suffix: string,
comptime prefix: string,
comptime is_internal: bool,
) anyerror!void {
const allocator = p.allocator;
@@ -1237,13 +1327,13 @@ pub fn NewParser_(
import_record.path.namespace = "runtime";
import_record.is_internal = is_internal;
const import_path_identifier = try import_record.path.name.nonUniqueNameString(allocator);
var namespace_identifier = try allocator.alloc(u8, import_path_identifier.len + suffix.len);
var namespace_identifier = try allocator.alloc(u8, import_path_identifier.len + prefix.len);
const clause_items = try allocator.alloc(js_ast.ClauseItem, imports.len);
var stmts = try allocator.alloc(Stmt, 1 + if (additional_stmt != null) @as(usize, 1) else @as(usize, 0));
var declared_symbols = DeclaredSymbol.List{};
try declared_symbols.ensureTotalCapacity(allocator, imports.len + 1);
bun.copy(u8, namespace_identifier, suffix);
bun.copy(u8, namespace_identifier[suffix.len..], import_path_identifier);
bun.copy(u8, namespace_identifier, prefix);
bun.copy(u8, namespace_identifier[prefix.len..], import_path_identifier);
const namespace_ref = try p.newSymbol(.other, namespace_identifier);
declared_symbols.appendAssumeCapacity(.{
@@ -2014,6 +2104,25 @@ pub fn NewParser_(
.wrap_exports_for_server_reference => {},
}
// Server-side components:
// Declare upfront the symbols for "Response" and "bun:app"
switch (p.options.features.server_components) {
.none, .client_side => {},
else => {
p.response_ref = try p.declareGeneratedSymbol(.import, "Response");
p.bun_app_namespace_ref = try p.newSymbol(
.other,
"import_bun_app",
);
const symbol = &p.symbols.items[p.response_ref.inner_index];
symbol.namespace_alias = .{
.namespace_ref = p.bun_app_namespace_ref,
.alias = "Response",
.import_record_index = std.math.maxInt(u32),
};
},
}
if (p.options.features.hot_module_reloading) {
p.hmr_api_ref = try p.declareCommonJSSymbol(.unbound, "hmr");
}
@@ -3071,7 +3180,7 @@ pub fn NewParser_(
return ref;
}
fn declareGeneratedSymbol(p: *P, kind: Symbol.Kind, comptime name: string) !Ref {
pub fn declareGeneratedSymbol(p: *P, kind: Symbol.Kind, comptime name: string) !Ref {
// The bundler runs the renamer, so it is ok to not append a hash
if (p.options.bundle) {
return try declareSymbolMaybeGenerated(p, kind, logger.Loc.Empty, name, true);
@@ -3428,7 +3537,7 @@ pub fn NewParser_(
// Insert any relocated variable statements now
if (p.relocated_top_level_vars.items.len > 0) {
var already_declared = RefMap{};
var already_declared_allocator_stack = std.heap.stackFallback(1024, allocator);
var already_declared_allocator_stack = bun.allocators.stackFallback(1024, allocator);
const already_declared_allocator = already_declared_allocator_stack.get();
defer if (already_declared_allocator_stack.fixed_buffer_allocator.end_index >= 1023) already_declared.deinit(already_declared_allocator);

View File

@@ -1357,6 +1357,16 @@ pub const Parser = struct {
);
}
// Bake: transform global `Response` to use `import { Response } from 'bun:app'`
if (!p.response_ref.isNull() and is_used_and_has_no_links: {
// We only want to do this if the symbol is used and didn't get
// bound to some other value
const symbol: *const Symbol = &p.symbols.items[p.response_ref.innerIndex()];
break :is_used_and_has_no_links !symbol.hasLink() and symbol.use_count_estimate > 0;
}) {
try p.generateImportStmtForBakeResponse(&before);
}
if (before.items.len > 0 or after.items.len > 0) {
try parts.ensureUnusedCapacity(before.items.len + after.items.len);
const parts_len = parts.items.len;
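Concretely, the effect is: a server-side file whose body uses the global `Response` (for example `return Response.redirect('/other')`) is compiled as though it began with `import { Response } from 'bun:app';`, while the injection is skipped when the symbol is unused or was rebound to a local value.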

View File

@@ -194,11 +194,11 @@ pub fn isTSArrowFnJSX(p: anytype) !bool {
}
if (p.lexer.token == .t_identifier) {
try p.lexer.next();
if (p.lexer.token == .t_comma) {
if (p.lexer.token == .t_comma or p.lexer.token == .t_equals) {
is_ts_arrow_fn = true;
} else if (p.lexer.token == .t_extends) {
try p.lexer.next();
is_ts_arrow_fn = p.lexer.token != .t_equals and p.lexer.token != .t_greater_than;
is_ts_arrow_fn = p.lexer.token != .t_equals and p.lexer.token != .t_greater_than and p.lexer.token != .t_slash;
}
}
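Illustratively, the loosened heuristic covers more generic arrow functions in `.tsx` files: after the type-parameter identifier, a `,` or now a `=` (a default type argument, as in `<T = string>() => ...`) marks an arrow function, while after `extends` the tokens `=`, `>`, and now `/` still indicate JSX, e.g. `<T extends/>` is a self-closing element with an attribute named `extends`, not a type-parameter list.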

View File

@@ -827,24 +827,25 @@ pub fn ParseSuffix(
const optional_chain = &optional_chain_;
while (true) {
if (p.lexer.loc().start == p.after_arrow_body_loc.start) {
while (true) {
switch (p.lexer.token) {
.t_comma => {
if (level.gte(.comma)) {
break;
}
defer left_and_out.* = left_value;
next_token: switch (p.lexer.token) {
.t_comma => {
if (level.gte(.comma)) {
return;
}
try p.lexer.next();
left.* = p.newExpr(E.Binary{
.op = .bin_comma,
.left = left.*,
.right = try p.parseExpr(.comma),
}, left.loc);
},
else => {
break;
},
}
try p.lexer.next();
left.* = p.newExpr(E.Binary{
.op = .bin_comma,
.left = left.*,
.right = try p.parseExpr(.comma),
}, left.loc);
continue :next_token p.lexer.token;
},
else => {
return;
},
}
}

View File

@@ -64,7 +64,7 @@ pub fn VisitExpr(
}
pub fn e_import_meta(p: *P, expr: Expr, in: ExprIn) Expr {
// TODO: delete import.meta might not work
const is_delete_target = std.meta.activeTag(p.delete_target) == .e_import_meta;
const is_delete_target = p.delete_target == .e_import_meta;
if (p.define.dots.get("meta")) |meta| {
for (meta) |define| {
@@ -609,7 +609,8 @@ pub fn VisitExpr(
p.delete_target = dot.data;
}
return p.visitExprInOut(dot, in);
// don't call visitExprInOut on `dot` because we've already visited `target` above!
return dot;
}
// Handle property rewrites to ensure things
@@ -1444,9 +1445,20 @@ pub fn VisitExpr(
// Why? Because we *don't* want to check for uses of
// `useState` _inside_ React, and we know React uses
// commonjs so it will never be `.e_import_identifier`.
e_.target.data == .e_import_identifier) {
check_for_usestate: {
if (e_.target.data == .e_import_identifier) break :check_for_usestate true;
// Also check for `React.useState(...)`
if (e_.target.data == .e_dot and e_.target.data.e_dot.target.data == .e_import_identifier) {
const id = e_.target.data.e_dot.target.data.e_import_identifier;
const name = p.symbols.items[id.ref.innerIndex()].original_name;
break :check_for_usestate bun.strings.eqlComptime(name, "React");
}
break :check_for_usestate false;
}) {
bun.assert(p.options.features.server_components.isServerSide());
if (bun.strings.eqlComptime(original_name, "useState")) {
if (!bun.strings.startsWith(p.source.path.pretty, "node_modules") and
bun.strings.eqlComptime(original_name, "useState"))
{
p.log.addError(
p.source,
expr.loc,

View File

@@ -27,9 +27,6 @@ pub const UserOptions = struct {
/// Currently, this function must run at the top of the event loop.
pub fn fromJS(config: JSValue, global: *jsc.JSGlobalObject) !UserOptions {
if (!config.isObject()) {
return global.throwInvalidArguments("'" ++ api_name ++ "' is not an object", .{});
}
var arena = std.heap.ArenaAllocator.init(bun.default_allocator);
errdefer arena.deinit();
const alloc = arena.allocator();
@@ -38,6 +35,38 @@ pub const UserOptions = struct {
errdefer allocations.free();
var bundler_options = SplitBundlerOptions.empty;
if (!config.isObject()) {
// Allow users to do `export default 'react'` for convenience
if (config.isString()) {
const bunstr = try config.toBunString(global);
defer bunstr.deref();
const utf8_string = bunstr.toUTF8(bun.default_allocator);
defer utf8_string.deinit();
if (bun.strings.eql(utf8_string.byteSlice(), "react")) {
const root = bun.getcwdAlloc(alloc) catch |err| switch (err) {
error.OutOfMemory => {
return global.throwOutOfMemory();
},
else => {
return global.throwError(err, "while querying current working directory");
},
};
const framework = try Framework.react(alloc);
return UserOptions{
.arena = arena,
.allocations = allocations,
.root = root,
.framework = framework,
.bundler_options = bundler_options,
};
}
}
return global.throwInvalidArguments("'" ++ api_name ++ "' is not an object", .{});
}
if (try config.getOptional(global, "bundlerOptions", JSValue)) |js_options| {
if (try js_options.getOptional(global, "server", JSValue)) |server_options| {
bundler_options.server = try BuildConfigSubset.fromJS(global, server_options);

View File

@@ -1,5 +1,6 @@
// clang-format off
#include "BakeSourceProvider.h"
#include "DevServerSourceProvider.h"
#include "BakeGlobalObject.h"
#include "JavaScriptCore/CallData.h"
#include "JavaScriptCore/Completion.h"
@@ -78,6 +79,34 @@ extern "C" JSC::EncodedJSValue BakeLoadServerHmrPatch(GlobalObject* global, BunS
return JSC::JSValue::encode(result);
}
extern "C" JSC::EncodedJSValue BakeLoadServerHmrPatchWithSourceMap(GlobalObject* global, BunString source, const char* sourceMapJSONPtr, size_t sourceMapJSONLength) {
JSC::VM& vm = global->vm();
auto scope = DECLARE_THROW_SCOPE(vm);
String string = "bake://server.patch.js"_s;
JSC::SourceOrigin origin = JSC::SourceOrigin(WTF::URL(string));
// Use DevServerSourceProvider with the source map JSON
auto provider = DevServerSourceProvider::create(
global,
source.toWTFString(),
sourceMapJSONPtr,
sourceMapJSONLength,
origin,
WTFMove(string),
WTF::TextPosition(),
JSC::SourceProviderSourceType::Program
);
JSC::SourceCode sourceCode = JSC::SourceCode(provider);
JSC::JSValue result = vm.interpreter.executeProgram(sourceCode, global, global);
RETURN_IF_EXCEPTION(scope, {});
RELEASE_ASSERT(result);
return JSC::JSValue::encode(result);
}
extern "C" JSC::EncodedJSValue BakeGetModuleNamespace(
JSC::JSGlobalObject* global,
JSC::JSValue keyValue

File diff suppressed because it is too large

View File

@@ -41,8 +41,8 @@ pub fn runWithBody(ctx: *ErrorReportRequest, body: []const u8, r: AnyResponse) !
var s = std.io.fixedBufferStream(body);
const reader = s.reader();
var sfa_general = std.heap.stackFallback(65536, ctx.dev.allocator());
var sfa_sourcemap = std.heap.stackFallback(65536, ctx.dev.allocator());
var sfa_general = bun.allocators.stackFallback(65536, ctx.dev.allocator());
var sfa_sourcemap = bun.allocators.stackFallback(65536, ctx.dev.allocator());
const temp_alloc = sfa_general.get();
var arena = std.heap.ArenaAllocator.init(temp_alloc);
defer arena.deinit();
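Aside: the swap from `std.heap.stackFallback` to `bun.allocators.stackFallback` presumably keeps the same shape: small allocations come out of a fixed stack buffer, and only larger ones touch the backing allocator. A standard-library-only sketch of the pattern:

const std = @import("std");

test "stack-fallback allocation pattern" {
    var sfb = std.heap.stackFallback(4096, std.testing.allocator);
    const alloc = sfb.get();

    // Served from the 4096-byte stack buffer; no heap traffic.
    const small = try alloc.alloc(u8, 64);
    defer alloc.free(small);

    // Exceeds the buffer, so it falls back to the backing allocator.
    const large = try alloc.alloc(u8, 8192);
    defer alloc.free(large);
}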

View File

@@ -187,7 +187,7 @@ pub fn run(first: *HotReloadEvent) void {
return;
}
var sfb = std.heap.stackFallback(4096, dev.allocator());
var sfb = bun.allocators.stackFallback(4096, dev.allocator());
const temp_alloc = sfb.get();
var entry_points: EntryPointList = .empty;
defer entry_points.deinit(temp_alloc);

View File

@@ -269,9 +269,11 @@ pub fn IncrementalGraph(comptime side: bake.Side) type {
/// All part contents
current_chunk_parts: ArrayListUnmanaged(switch (side) {
.client => FileIndex,
// These slices do not outlive the bundler, and must
// be joined before its arena is deinitialized.
.server => []const u8,
// This memory is allocated by the dev server allocator
.server => bun.ptr.OwnedIn(
[]const u8,
bun.bake.DevServer.DevAllocator,
),
}),
/// Asset IDs, which can be printed as hex in '/_bun/asset/{hash}.css'
@@ -280,6 +282,10 @@ pub fn IncrementalGraph(comptime side: bake.Side) type {
.server => void,
},
/// Source maps for server chunks and the file indices to track which
/// file each chunk comes from
current_chunk_source_maps: if (side == .server) ArrayListUnmanaged(CurrentChunkSourceMapData) else void = if (side == .server) .empty,
pub const empty: Self = .{
.bundled_files = .empty,
.stale_files = .empty,
@@ -293,6 +299,16 @@ pub fn IncrementalGraph(comptime side: bake.Side) type {
.current_chunk_parts = .empty,
.current_css_files = if (side == .client) .empty,
.current_chunk_source_maps = if (side == .server) .empty else {},
};
const CurrentChunkSourceMapData = struct {
file_index: FileIndex,
source_map: PackedMap.Shared,
pub fn deinit(self: *CurrentChunkSourceMapData) void {
self.source_map.deinit();
}
};
pub const File = switch (side) {
@@ -378,9 +394,19 @@ pub fn IncrementalGraph(comptime side: bake.Side) type {
.edges = g.edges.deinit(alloc),
.edges_free_list = g.edges_free_list.deinit(alloc),
.current_chunk_len = {},
.current_chunk_parts = g.current_chunk_parts.deinit(alloc),
.current_css_files = if (comptime side == .client)
g.current_css_files.deinit(alloc),
.current_chunk_parts = {
if (comptime side == .server) {
for (g.current_chunk_parts.items) |*part| part.deinit();
}
g.current_chunk_parts.deinit(alloc);
},
.current_css_files = if (comptime side == .client) g.current_css_files.deinit(alloc),
.current_chunk_source_maps = if (side == .server) {
for (g.current_chunk_source_maps.items) |*source_map| {
source_map.deinit();
}
g.current_chunk_source_maps.deinit(alloc);
},
});
}
@@ -412,6 +438,11 @@ pub fn IncrementalGraph(comptime side: bake.Side) type {
}
source_maps += file.source_map.memoryCost();
}
} else if (side == .server) {
graph += DevServer.memoryCostArrayList(g.current_chunk_source_maps);
for (g.current_chunk_source_maps.items) |item| {
source_maps += item.source_map.memoryCost();
}
}
return .{
.graph = graph,
@@ -445,7 +476,7 @@ pub fn IncrementalGraph(comptime side: bake.Side) type {
g: *Self,
ctx: *HotUpdateContext,
index: bun.ast.Index,
content_: union(enum) {
_content: union(enum) {
js: struct {
code: JsCode,
source_map: ?struct {
@@ -457,13 +488,16 @@ pub fn IncrementalGraph(comptime side: bake.Side) type {
},
is_ssr_graph: bool,
) !void {
var content = content_;
var content = _content;
const dev = g.owner();
dev.graph_safety_lock.assertLocked();
const path = ctx.sources[index.get()].path;
const key = path.keyForIncrementalGraph();
const log = bun.Output.scoped(.IncrementalGraphReceiveChunk, .visible);
log("receiveChunk({s}, {s})", .{ @tagName(side), key });
if (Environment.allow_assert) {
switch (content) {
.css => {},
@@ -546,7 +580,7 @@ pub fn IncrementalGraph(comptime side: bake.Side) type {
bun.assert(html_route_bundle_index == null); // suspect behind #17956
if (source_map.chunk.buffer.len() > 0) {
break :blk .{ .some = PackedMap.newNonEmpty(
source_map.chunk,
&source_map.chunk,
source_map.escaped_source.take().?,
) };
}
@@ -632,11 +666,47 @@ pub fn IncrementalGraph(comptime side: bake.Side) type {
}
}
if (content == .js) {
try g.current_chunk_parts.append(dev.allocator(), content.js.code);
try g.current_chunk_parts.append(
dev.allocator(),
bun.ptr.OwnedIn([]const u8, bun.bake.DevServer.DevAllocator).fromRawIn(
content.js.code,
dev.dev_allocator(),
),
);
g.current_chunk_len += content.js.code.len;
if (content.js.source_map) |*source_map| {
source_map.chunk.buffer.deinit();
source_map.escaped_source.deinit();
// TODO: we probably want to store SSR chunks but not
// server chunks; not 100% sure
const should_immediately_free_sourcemap = false;
if (should_immediately_free_sourcemap) {
@compileError("Not implemented the codepath to free the sourcemap");
} else {
if (content.js.source_map) |*source_map| append_empty: {
defer source_map.chunk.deinit();
defer source_map.escaped_source.deinit();
if (source_map.chunk.buffer.len() > 0) {
const escaped_source = source_map.escaped_source.take() orelse break :append_empty;
const packed_map: PackedMap.Shared = .{ .some = PackedMap.newNonEmpty(
&source_map.chunk,
escaped_source,
) };
try g.current_chunk_source_maps.append(dev.allocator(), CurrentChunkSourceMapData{
.source_map = packed_map,
.file_index = file_index,
});
return;
}
}
// Must precompute this. Otherwise, source maps won't have
// the info needed to concatenate VLQ mappings.
const count: u32 = @intCast(bun.strings.countChar(content.js.code, '\n'));
try g.current_chunk_source_maps.append(dev.allocator(), .{
.file_index = file_index,
.source_map = PackedMap.Shared{
.line_count = .init(count),
},
});
}
}
},
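Aside: the precomputed line count above matters because concatenated source maps separate generated lines with ';' in the VLQ mappings string, so the joiner must know how many lines each chunk contributes even when the chunk carries no mappings of its own. A tiny sketch of the counting step:

const std = @import("std");

test "precompute line count for VLQ concatenation" {
    const code = "a();\nb();\nc();\n";
    // One ';' per generated line is emitted when mappings are joined,
    // so the newline count must be known up front.
    const count: u32 = @intCast(std.mem.count(u8, code, "\n"));
    try std.testing.expectEqual(@as(u32, 3), count);
}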
@@ -808,7 +878,7 @@ pub fn IncrementalGraph(comptime side: bake.Side) type {
bun.assert(bundler_index.isValid());
bun.assert(ctx.loaders[bundler_index.get()].isCSS());
var sfb = std.heap.stackFallback(@sizeOf(bun.ast.Index) * 64, temp_alloc);
var sfb = bun.allocators.stackFallback(@sizeOf(bun.ast.Index) * 64, temp_alloc);
const queue_alloc = sfb.get();
// This queue avoids stack overflow.
@@ -1598,10 +1668,17 @@ pub fn IncrementalGraph(comptime side: bake.Side) type {
pub fn reset(g: *Self) void {
g.owner().graph_safety_lock.assertLocked();
g.current_chunk_len = 0;
g.current_chunk_parts.clearRetainingCapacity();
if (comptime side == .client) {
g.current_css_files.clearRetainingCapacity();
} else if (comptime side == .server) {
for (g.current_chunk_parts.items) |*part| part.deinit();
for (g.current_chunk_source_maps.items) |*sourcemap| sourcemap.deinit();
g.current_chunk_source_maps.clearRetainingCapacity();
}
g.current_chunk_parts.clearRetainingCapacity();
}
const TakeJSBundleOptions = switch (side) {
@@ -1614,6 +1691,7 @@ pub fn IncrementalGraph(comptime side: bake.Side) type {
},
.server => struct {
kind: ChunkKind,
script_id: SourceMapStore.Key,
},
};
@@ -1650,7 +1728,7 @@ pub fn IncrementalGraph(comptime side: bake.Side) type {
// to inform the HMR runtime of some crucial entry-point info. The
// exact upper bound of this can be calculated, but is not, to
// avoid worrying about Windows paths.
var end_sfa = std.heap.stackFallback(65536, g.allocator());
var end_sfa = bun.allocators.stackFallback(65536, g.allocator());
var end_list = std.ArrayList(u8).initCapacity(end_sfa.get(), 65536) catch unreachable;
defer end_list.deinit();
const end = end: {
@@ -1729,7 +1807,7 @@ pub fn IncrementalGraph(comptime side: bake.Side) type {
// entry is an index into files
.client => files[entry.get()].unpack().jsCode().?,
// entry is the '[]const u8' itself
.server => entry,
.server => entry.get(),
});
}
list.appendSliceAssumeCapacity(end);
@@ -1756,46 +1834,71 @@ pub fn IncrementalGraph(comptime side: bake.Side) type {
};
/// Fills in all fields of `out` except ref_count
pub fn takeSourceMap(g: *Self, arena: std.mem.Allocator, gpa: Allocator, out: *SourceMapStore.Entry) bun.OOM!void {
if (comptime side == .server) @compileError("not implemented");
pub fn takeSourceMap(g: *@This(), _: std.mem.Allocator, gpa: Allocator, out: *SourceMapStore.Entry) bun.OOM!void {
const paths = g.bundled_files.keys();
const files = g.bundled_files.values();
// This buffer is temporary, holding the quoted source paths, joined with commas.
var source_map_strings = std.ArrayList(u8).init(arena);
defer source_map_strings.deinit();
switch (side) {
.client => {
const files = g.bundled_files.values();
const buf = bun.path_buffer_pool.get();
defer bun.path_buffer_pool.put(buf);
const buf = bun.path_buffer_pool.get();
defer bun.path_buffer_pool.put(buf);
var file_paths = try ArrayListUnmanaged([]const u8).initCapacity(gpa, g.current_chunk_parts.items.len);
errdefer file_paths.deinit(gpa);
var contained_maps: bun.MultiArrayList(PackedMap.Shared) = .empty;
try contained_maps.ensureTotalCapacity(gpa, g.current_chunk_parts.items.len);
errdefer contained_maps.deinit(gpa);
var file_paths = try ArrayListUnmanaged([]const u8).initCapacity(gpa, g.current_chunk_parts.items.len);
errdefer file_paths.deinit(gpa);
var contained_maps: bun.MultiArrayList(PackedMap.Shared) = .empty;
try contained_maps.ensureTotalCapacity(gpa, g.current_chunk_parts.items.len);
errdefer contained_maps.deinit(gpa);
var overlapping_memory_cost: usize = 0;
var overlapping_memory_cost: usize = 0;
for (g.current_chunk_parts.items) |file_index| {
file_paths.appendAssumeCapacity(paths[file_index.get()]);
const source_map = files[file_index.get()].unpack().source_map.clone();
if (source_map.get()) |map| {
overlapping_memory_cost += map.memoryCost();
}
contained_maps.appendAssumeCapacity(source_map);
for (g.current_chunk_parts.items) |file_index| {
file_paths.appendAssumeCapacity(paths[file_index.get()]);
const source_map = files[file_index.get()].unpack().source_map.clone();
if (source_map.get()) |map| {
overlapping_memory_cost += map.memoryCost();
}
contained_maps.appendAssumeCapacity(source_map);
}
overlapping_memory_cost += contained_maps.memoryCost() + DevServer.memoryCostSlice(file_paths.items);
const ref_count = out.ref_count;
out.* = .{
.dev_allocator = g.dev_allocator(),
.ref_count = ref_count,
.paths = file_paths.items,
.files = contained_maps,
.overlapping_memory_cost = @intCast(overlapping_memory_cost),
};
},
.server => {
var file_paths = try ArrayListUnmanaged([]const u8).initCapacity(gpa, g.current_chunk_parts.items.len);
errdefer file_paths.deinit(gpa);
var contained_maps: bun.MultiArrayList(PackedMap.Shared) = .empty;
try contained_maps.ensureTotalCapacity(gpa, g.current_chunk_parts.items.len);
errdefer contained_maps.deinit(gpa);
var overlapping_memory_cost: u32 = 0;
// For server, we use the tracked file indices to get the correct paths
for (g.current_chunk_source_maps.items) |item| {
file_paths.appendAssumeCapacity(paths[item.file_index.get()]);
contained_maps.appendAssumeCapacity(item.source_map.clone());
overlapping_memory_cost += @intCast(item.source_map.memoryCost());
}
overlapping_memory_cost += @intCast(contained_maps.memoryCost() + DevServer.memoryCostSlice(file_paths.items));
out.* = .{
.dev_allocator = g.dev_allocator(),
.ref_count = out.ref_count,
.paths = file_paths.items,
.files = contained_maps,
.overlapping_memory_cost = overlapping_memory_cost,
};
},
}
overlapping_memory_cost += contained_maps.memoryCost() + DevServer.memoryCostSlice(file_paths.items);
const ref_count = out.ref_count;
out.* = .{
.dev_allocator = g.dev_allocator(),
.ref_count = ref_count,
.paths = file_paths.items,
.files = contained_maps,
.overlapping_memory_cost = @intCast(overlapping_memory_cost),
};
}
fn disconnectAndDeleteFile(g: *Self, file_index: FileIndex) void {
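Aside: `reset` above shows the ownership rule this diff introduces: once `current_chunk_parts` holds owned memory on the server side, every element must be deinited before `clearRetainingCapacity` drops the references to it. A minimal sketch of that pattern:

const std = @import("std");

const Part = struct {
    bytes: []u8,

    fn deinit(self: *Part, alloc: std.mem.Allocator) void {
        alloc.free(self.bytes);
    }
};

test "deinit owned elements before clearRetainingCapacity" {
    const alloc = std.testing.allocator;
    var parts: std.ArrayListUnmanaged(Part) = .empty;
    defer parts.deinit(alloc);

    try parts.append(alloc, .{ .bytes = try alloc.alloc(u8, 16) });

    // Free what each element owns first; clearing alone would leak it.
    for (parts.items) |*part| part.deinit(alloc);
    parts.clearRetainingCapacity();
}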

View File

@@ -19,8 +19,8 @@ end_state: struct {
original_column: i32,
},
pub fn newNonEmpty(chunk: SourceMap.Chunk, escaped_source: Owned([]u8)) bun.ptr.Shared(*Self) {
var buffer = chunk.buffer;
pub fn newNonEmpty(chunk: *SourceMap.Chunk, escaped_source: Owned([]u8)) bun.ptr.Shared(*Self) {
var buffer = &chunk.buffer;
assert(!buffer.isEmpty());
const dev_allocator = DevAllocator.downcast(buffer.allocator);
return .new(.{
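Aside: changing `newNonEmpty` to take `*SourceMap.Chunk` instead of a by-value `SourceMap.Chunk` presumably makes the ownership transfer visible to the caller: consuming `chunk.buffer` through the pointer empties the caller's chunk rather than a local copy. A sketch of the distinction, with a hypothetical `Buffer` type:

const std = @import("std");

const Buffer = struct {
    data: ?[]u8,

    // Moves the memory out and leaves the source empty (hypothetical helper).
    fn take(self: *Buffer) ?[]u8 {
        defer self.data = null;
        return self.data;
    }
};

test "taking through a pointer empties the caller's buffer" {
    const alloc = std.testing.allocator;
    var buf: Buffer = .{ .data = try alloc.alloc(u8, 8) };

    // Had `buf` been passed by value, the caller's copy would still
    // point at the moved memory, inviting a double free.
    const owned = buf.take() orelse unreachable;
    defer alloc.free(owned);
    try std.testing.expect(buf.data == null);
}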

View File

@@ -72,6 +72,8 @@ pub const State = enum {
unqueued,
/// A bundle associated with this route is happening
bundling,
/// A bundle associated with this route *will happen in the next bundle*
deferred_to_next_bundle,
/// This route was flagged for bundling failures. There are edge cases
/// where a route can be disconnected from its failures, so the route
/// imports has to be traced to discover if possible failures still

View File

@@ -110,7 +110,7 @@ pub fn initFromJs(dev: *DevServer, owner: Owner, value: JSValue) !SerializedFail
@panic("TODO");
}
// Avoid small re-allocations without requesting so much from the heap
var sfb = std.heap.stackFallback(65536, dev.allocator());
var sfb = bun.allocators.stackFallback(65536, dev.allocator());
var payload = std.ArrayList(u8).initCapacity(sfb.get(), 65536) catch
unreachable; // enough space
const w = payload.writer();
@@ -137,7 +137,7 @@ pub fn initFromLog(
assert(messages.len > 0);
// Avoid small re-allocations without requesting so much from the heap
var sfb = std.heap.stackFallback(65536, dev.allocator());
var sfb = bun.allocators.stackFallback(65536, dev.allocator());
var payload = std.ArrayList(u8).initCapacity(sfb.get(), 65536) catch
unreachable; // enough space
const w = payload.writer();

View File

@@ -76,11 +76,11 @@ pub const Entry = struct {
pub fn renderMappings(map: Entry, kind: ChunkKind, arena: Allocator, gpa: Allocator) ![]u8 {
var j: StringJoiner = .{ .allocator = arena };
j.pushStatic("AAAA");
try joinVLQ(&map, kind, &j, arena);
try joinVLQ(&map, kind, &j, arena, .client);
return j.done(gpa);
}
pub fn renderJSON(map: *const Entry, dev: *DevServer, arena: Allocator, kind: ChunkKind, gpa: Allocator) ![]u8 {
pub fn renderJSON(map: *const Entry, dev: *DevServer, arena: Allocator, kind: ChunkKind, gpa: Allocator, side: bake.Side) ![]u8 {
const map_files = map.files.slice();
const paths = map.paths;
@@ -106,13 +106,22 @@ pub const Entry = struct {
if (std.fs.path.isAbsolute(path)) {
const is_windows_drive_path = Environment.isWindows and path[0] != '/';
try source_map_strings.appendSlice(if (is_windows_drive_path)
"\"file:///"
else
"\"file://");
// On the client we prefix the sourcemap path with "file://" and
// percent encode it
if (side == .client) {
try source_map_strings.appendSlice(if (is_windows_drive_path)
"\"file:///"
else
"\"file://");
} else {
try source_map_strings.append('"');
}
if (Environment.isWindows and !is_windows_drive_path) {
// UNC namespace -> file://server/share/path.ext
bun.strings.percentEncodeWrite(
encodeSourceMapPath(
side,
if (path.len > 2 and path[0] == '/' and path[1] == '/')
path[2..]
else
@@ -127,7 +136,7 @@ pub const Entry = struct {
// -> file:///path/to/file.js
// windows drive letter paths have the extra slash added
// -> file:///C:/path/to/file.js
bun.strings.percentEncodeWrite(path, &source_map_strings) catch |err| switch (err) {
encodeSourceMapPath(side, path, &source_map_strings) catch |err| switch (err) {
error.IncompleteUTF8 => @panic("Unexpected: asset with incomplete UTF-8 as file path"),
error.OutOfMemory => |e| return e,
};
@@ -175,14 +184,14 @@ pub const Entry = struct {
j.pushStatic(
\\],"names":[],"mappings":"AAAA
);
try joinVLQ(map, kind, &j, arena);
try joinVLQ(map, kind, &j, arena, side);
const json_bytes = try j.doneWithEnd(gpa, "\"}");
errdefer @compileError("last try should be the final alloc");
if (bun.FeatureFlags.bake_debugging_features) if (dev.dump_dir) |dump_dir| {
const rel_path_escaped = "latest_chunk.js.map";
dumpBundle(dump_dir, .client, rel_path_escaped, json_bytes, false) catch |err| {
const rel_path_escaped = if (side == .client) "latest_chunk.js.map" else "latest_hmr.js.map";
dumpBundle(dump_dir, if (side == .client) .client else .server, rel_path_escaped, json_bytes, false) catch |err| {
bun.handleErrorReturnTrace(err, @errorReturnTrace());
Output.warn("Could not dump bundle: {}", .{err});
};
@@ -191,7 +200,22 @@ pub const Entry = struct {
return json_bytes;
}
fn joinVLQ(map: *const Entry, kind: ChunkKind, j: *StringJoiner, arena: Allocator) !void {
fn encodeSourceMapPath(
side: bake.Side,
utf8_input: []const u8,
array_list: *std.ArrayList(u8),
) error{ OutOfMemory, IncompleteUTF8 }!void {
// On the client, percent encode everything so it works in the browser
if (side == .client) {
return bun.strings.percentEncodeWrite(utf8_input, array_list);
}
const writer = array_list.writer();
try bun.js_printer.writePreQuotedString(utf8_input, @TypeOf(writer), writer, '"', false, true, .utf8);
}
fn joinVLQ(map: *const Entry, kind: ChunkKind, j: *StringJoiner, arena: Allocator, side: bake.Side) !void {
_ = side;
const map_files = map.files.slice();
const runtime: bake.HmrRuntime = switch (kind) {
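Aside: `encodeSourceMapPath` splits on side because browsers consume the client maps (so paths become percent-encoded file:// URLs), while server maps only need the path to survive as a valid JSON string. A sketch of the server branch's effect using std.json in place of the bun-internal `writePreQuotedString`:

const std = @import("std");

test "server-side paths only need JSON escaping" {
    var out = std.ArrayList(u8).init(std.testing.allocator);
    defer out.deinit();

    // Backslashes and quotes are escaped; no percent encoding is applied.
    try std.json.stringify("C:\\proj\\a b.js", .{}, out.writer());
    try std.testing.expectEqualStrings("\"C:\\\\proj\\\\a b.js\"", out.items);
}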

View File

@@ -0,0 +1,17 @@
#include "DevServerSourceProvider.h"
#include "BunBuiltinNames.h"
#include "BunString.h"
// The Zig implementation of this function handles registration
extern "C" void Bun__addDevServerSourceProvider(void* bun_vm, Bake::DevServerSourceProvider* opaque_source_provider, BunString* specifier);
// Export functions for Zig to access DevServerSourceProvider
extern "C" BunString DevServerSourceProvider__getSourceSlice(Bake::DevServerSourceProvider* provider)
{
return Bun::toStringView(provider->source());
}
extern "C" MiCString DevServerSourceProvider__getSourceMapJSON(Bake::DevServerSourceProvider* provider)
{
return provider->sourceMapJSON();
}

View File

@@ -0,0 +1,74 @@
#pragma once
#include "root.h"
#include "headers-handwritten.h"
#include "JavaScriptCore/SourceOrigin.h"
#include "ZigGlobalObject.h"
#include "MiString.h"
namespace Bake {
class DevServerSourceProvider;
// Function to be implemented in Zig to register the source provider
extern "C" void Bun__addDevServerSourceProvider(void* bun_vm, DevServerSourceProvider* opaque_source_provider, BunString* specifier);
extern "C" void Bun__removeDevServerSourceProvider(void* bun_vm, DevServerSourceProvider* opaque_source_provider, BunString* specifier);
class DevServerSourceProvider final : public JSC::StringSourceProvider {
public:
static Ref<DevServerSourceProvider> create(
JSC::JSGlobalObject* globalObject,
const String& source,
const char* sourceMapJSONPtr,
size_t sourceMapJSONLength,
const JSC::SourceOrigin& sourceOrigin,
String&& sourceURL,
const TextPosition& startPosition,
JSC::SourceProviderSourceType sourceType)
{
auto provider = adoptRef(*new DevServerSourceProvider(source, sourceMapJSONPtr, sourceMapJSONLength, sourceOrigin, WTFMove(sourceURL), startPosition, sourceType));
auto* zigGlobalObject = jsCast<::Zig::GlobalObject*>(globalObject);
auto specifier = Bun::toString(provider->sourceURL());
provider->m_globalObject = zigGlobalObject;
provider->m_specifier = specifier;
Bun__addDevServerSourceProvider(zigGlobalObject->bunVM(), provider.ptr(), &specifier);
return provider;
}
MiCString sourceMapJSON() const
{
return m_sourceMapJSON.asCString();
}
private:
DevServerSourceProvider(
const String& source,
const char* sourceMapJSONPtr,
size_t sourceMapJSONLength,
const JSC::SourceOrigin& sourceOrigin,
String&& sourceURL,
const TextPosition& startPosition,
JSC::SourceProviderSourceType sourceType)
: StringSourceProvider(
source,
sourceOrigin,
JSC::SourceTaintedOrigin::Untainted,
WTFMove(sourceURL),
startPosition,
sourceType)
, m_sourceMapJSON(sourceMapJSONPtr, sourceMapJSONLength)
{
}
~DevServerSourceProvider()
{
if (m_globalObject) {
Bun__removeDevServerSourceProvider(m_globalObject->bunVM(), this, &m_specifier);
}
}
MiString m_sourceMapJSON;
Zig::GlobalObject* m_globalObject { nullptr };
BunString m_specifier;
};
} // namespace Bake
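Aside: the provider registers itself with the VM on creation and unregisters in its destructor, so the registry never holds a dangling pointer while a provider is alive. The same pairing in Zig terms, with a hypothetical registry (the real one lives on the Zig side behind Bun__addDevServerSourceProvider):

const std = @import("std");

var registry: std.ArrayListUnmanaged(*Provider) = .empty;

const Provider = struct {
    generation: u32 = 0,

    fn create(alloc: std.mem.Allocator) !*Provider {
        const p = try alloc.create(Provider);
        errdefer alloc.destroy(p);
        p.* = .{};
        try registry.append(alloc, p); // register on creation
        return p;
    }

    fn destroy(p: *Provider, alloc: std.mem.Allocator) void {
        // Unregister before the memory goes away, as the C++ destructor does.
        for (registry.items, 0..) |item, i| {
            if (item == p) {
                _ = registry.swapRemove(i);
                break;
            }
        }
        alloc.destroy(p);
    }
};

test "registration is paired with destruction" {
    const alloc = std.testing.allocator;
    defer registry.deinit(alloc);

    const p = try Provider.create(alloc);
    try std.testing.expectEqual(@as(usize, 1), registry.items.len);
    p.destroy(alloc);
    try std.testing.expectEqual(@as(usize, 0), registry.items.len);
}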

View File

@@ -473,7 +473,7 @@ pub const Style = union(enum) {
pub fn fromJS(value: JSValue, global: *jsc.JSGlobalObject) !Style {
if (value.isString()) {
const bun_string = try value.toBunString(global);
var sfa = std.heap.stackFallback(4096, bun.default_allocator);
var sfa = bun.allocators.stackFallback(4096, bun.default_allocator);
const utf8 = bun_string.toUTF8(sfa.get());
defer utf8.deinit();
if (map.get(utf8.slice())) |style| {
@@ -822,6 +822,28 @@ pub const MatchedParams = struct {
key: []const u8,
value: []const u8,
};
/// Convert the matched params to a JavaScript object
/// Returns null if there are no params
pub fn toJS(self: *const MatchedParams, global: *jsc.JSGlobalObject) JSValue {
const params_array = self.params.slice();
if (params_array.len == 0) {
return JSValue.null;
}
// Create a JavaScript object with params
const obj = JSValue.createEmptyObject(global, params_array.len);
for (params_array) |param| {
const key_str = bun.String.cloneUTF8(param.key);
defer key_str.deref();
const value_str = bun.String.cloneUTF8(param.value);
defer value_str.deref();
_ = obj.putBunStringOneOrArray(global, &key_str, value_str.toJS(global)) catch unreachable;
}
return obj;
}
};
/// Fast enough for development to be seamless, but avoids building a
@@ -1219,7 +1241,7 @@ pub const JSFrameworkRouter = struct {
var params_out: MatchedParams = undefined;
if (jsfr.router.matchSlow(path_slice.slice(), &params_out)) |index| {
var sfb = std.heap.stackFallback(4096, bun.default_allocator);
var sfb = bun.allocators.stackFallback(4096, bun.default_allocator);
const alloc = sfb.get();
return (try jsc.JSObject.create(.{
@@ -1242,7 +1264,7 @@ pub const JSFrameworkRouter = struct {
pub fn toJSON(jsfr: *JSFrameworkRouter, global: *JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!JSValue {
_ = callframe;
var sfb = std.heap.stackFallback(4096, bun.default_allocator);
var sfb = bun.allocators.stackFallback(4096, bun.default_allocator);
const alloc = sfb.get();
return jsfr.routeToJson(global, Route.Index.init(0), alloc);
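Aside: the `map.get(utf8.slice())` lookup in `Style.fromJS` above resolves a user-supplied string to a variant; `std.StaticStringMap` is the standard comptime string-to-enum table for this. The entries below are hypothetical, the real table is bun-internal:

const std = @import("std");

const Style = enum { nextjs_pages, nextjs_app };

// Hypothetical entries; built once at comptime, queried at runtime.
const map = std.StaticStringMap(Style).initComptime(.{
    .{ "nextjs-pages", .nextjs_pages },
    .{ "nextjs-app", .nextjs_app },
});

test "string to style lookup" {
    try std.testing.expectEqual(Style.nextjs_pages, map.get("nextjs-pages").?);
    try std.testing.expect(map.get("unknown") == null);
}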

Some files were not shown because too many files have changed in this diff