Compare commits

..

50 Commits

Author SHA1 Message Date
Claude Bot
5d34604d63 fix: add dynamic import support for CJS splitting
- Export all named exports from dynamically imported chunks
- Handle default export correctly (export as 'default' not internal name)
- Add test for dynamic imports with CJS splitting

Note: ESM interop for default exports still needs work with __toESM wrapper

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-27 01:36:35 +00:00
Claude Bot
9dc4216062 feat: implement --splitting support with --format=cjs
This PR adds support for code splitting when using CommonJS output format. Previously, --splitting was only supported for ESM format.

Changes:
- Add cross-chunk export generation for CJS format (exports.name = value)
- Add cross-chunk import generation for CJS format (const {...} = require())
- Generate proper CommonJS modules that work with Node.js
- Add comprehensive tests for CJS splitting functionality

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-27 01:11:54 +00:00
robobun
a329da97f4 Fix server stability issue with oversized requests (#22701)
## Summary
Improves server stability when handling certain request edge cases.

## Test plan
- Added regression test in `test/regression/issue/22353.test.ts`
- Test verifies server continues operating normally after handling edge
case requests
- All existing HTTP server tests pass

Fixes #22353

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-26 04:59:07 -07:00
Filip Stevanovic
f45900d7e6 fix(fetch): print request body for application/x-www-form-urlencoded in curl logs (#22849)
### What does this PR do?

Fixes an issue where fetch requests with `Content-Type:
application/x-www-form-urlencoded` would not include the request body in
curl logs when `BUN_CONFIG_VERBOSE_FETCH=curl` is enabled.

Previously, only JSON and text-based content types were recognized as
safe to print in the curl formatter. This change updates the allow-list
to also handle `application/x-www-form-urlencoded`, ensuring bodies for
common form submissions are shown in the logs.
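The allow-list check described above can be sketched like this. The helper name and exact matching rules are assumptions for illustration; the real formatter lives in Bun's verbose-fetch logging code:

```javascript
// Decide whether a request body is safe to print in the curl log.
function isPrintableBody(contentType = "") {
  const ct = contentType.toLowerCase();
  return (
    ct.includes("application/json") ||
    ct.startsWith("text/") ||
    ct.includes("application/x-www-form-urlencoded") // added by this fix
  );
}

isPrintableBody("application/x-www-form-urlencoded; charset=utf-8"); // true
```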

### How did you verify your code works?

- added `Content-Type: application/x-www-form-urlencoded` to a fetch
request and confirmed that `BUN_CONFIG_VERBOSE_FETCH=curl` now outputs a
`--data-raw` section with the encoded body
- verified the fix against the reproduction script provided in issue
#12042
- created and ran a regression test
- checked that existing content types (JSON, text, etc.) continue to
print correctly

fixes #12042
2025-09-26 03:54:41 -07:00
Jarred Sumner
00490199f1 bun feedback (#22710)
### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-26 03:47:26 -07:00
Marko Vejnovic
17b503b389 Redis PUB/SUB 2.0 (#22568)
### What does this PR do?

**This PR is created because [the previous PR I
opened](https://github.com/oven-sh/bun/pull/21728) had some concerning
issues.** Thanks @Jarred-Sumner for the help.

The goal of this PR is to introduce PUB/SUB functionality to the
built-in Redis client. Because the current Redis API does not appear to
be compatible with `ioredis` or `node-redis`, I've decided to forgo API
compatibility with those existing libraries.

I have decided to base my implementation on the [`node-redis` pub/sub
API](https://github.com/redis/node-redis/blob/master/docs/pub-sub.md).

#### Random Things That Happened

- [x] Refactored the build scripts so that `valgrind` can be disabled.
- [x] Added a `numeric` namespace in `harness.ts` with useful
mathematical libraries.
- [x] Added a mechanism in `cppbind.ts` to disable static assertions
(specifically to allow `check_slow` even when returning a `JSValue`).
Implemented via `// NOLINT[NEXTLINE]?\(.*\)` macros.
- [x] Fixed inconsistencies in error handling of `JSMap`.

### How did you verify your code works?

I've written a set of unit tests to hopefully catch the major use-cases
of this feature. They all appear to pass.


#### Future Improvements

I would have a lot more confidence in our Redis implementation if we
tested it with a test suite running over a network that emulates a high
failure rate. There are many edge cases worth covering, but I think we
can roll that out in a future PR.

### Future Tasks

- [ ] Tests over flaky network
- [ ] Use the custom private members over `_<member>`.

---------

Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-26 03:06:18 -07:00
Jarred Sumner
ea735c341f Bump WebKit (#22957)
### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-26 01:46:26 -07:00
Jarred Sumner
064ecc37fd Move Bun__JSRequest__calculateEstimatedByteSize earlier (#22993)
### What does this PR do?

### How did you verify your code works?
2025-09-26 00:33:30 -07:00
Meghan Denny
90c7a4e886 update no-validate-leaksan.txt 2025-09-26 00:24:02 -07:00
Meghan Denny
c63fa996d1 package.json: add amazonlinux machine script 2025-09-26 00:17:58 -07:00
robobun
5457d76bcb Fix double-free in createArgv function (#22978)
## Summary
- Fixed a double-free bug in the `createArgv` function in
`node_process.zig`

## Details
The `createArgv` function had two `defer allocator.free(args)`
statements:
- One on line 164 
- Another on line 192 (now removed)

This would cause the same memory to be freed twice when the function
returned, leading to undefined behavior.

Fixes #22975

## Test plan
The existing process.argv tests should continue to pass with this fix.

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-25 23:52:56 -07:00
pfg
c4519c7552 Add --randomize --seed flag (#22987)
Outputs the seed when randomizing. Adds --seed flag to reproduce a
random order. Seeds might not produce the same order across operating
systems / bun versions.

Fixes #11847

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-25 23:47:46 -07:00
Jarred Sumner
656747bcf1 Fix vm destruction assertion failure in udp socket, reduce usage of protect() (#22986)
### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-25 22:41:02 -07:00
Jarred Sumner
2039ab182d Remove stale path assertion on Windows (#22988)
### What does this PR do?

This assertion is occasionally incorrect, and was originally added as a
workaround for the lack of proper error handling in Zig's std library.
We've since fixed that, so this assertion is no longer needed.

### How did you verify your code works?

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-25 22:34:49 -07:00
Meghan Denny
5a709a2dbf node:tty: use terminal VT mode on Windows (#21161)
mirrors: https://github.com/nodejs/node/pull/58358
2025-09-25 19:58:44 -07:00
Meghan Denny
51ce3bc269 [publish images] ci: ensure tests that require docker have it available (#22781) 2025-09-25 19:03:22 -07:00
Marko Vejnovic
14b62e6904 chore(build): Add build:debug:noasan and remove build:debug:asan (#22982)
### What does this PR do?

Adds a `bun run build:debug:noasan` run script and deletes the `bun run
build:debug:asan` rule.

### How did you verify your code works?

Ran the change locally.
2025-09-25 18:06:21 -07:00
robobun
d3061de1bf feat(windows): implement authenticode stripping for --compile (#22960)
## Summary

Implements authenticode signature stripping for Windows PE files when
using `bun build --compile`, ensuring that generated executables can be
properly signed with external tools after Bun embeds its data section.

## What Changed

### Core Implementation
- **Authenticode stripping**: Removes digital signatures from PE files
before adding the .bun section
- **Safe memory access**: Replaced all `@alignCast` operations with safe
unaligned access helpers to prevent crashes
- **Hardened PE parsing**: Added comprehensive bounds checking and
validation throughout
- **PE checksum recalculation**: Properly updates checksums after
modifications

### Key Features
- Always strips authenticode signatures when using `--compile` for
Windows (uses `.strip_always` mode)
- Validates PE file structure according to PE/COFF specification
- Handles overlapping memory regions safely during certificate removal
- Clears `IMAGE_DLLCHARACTERISTICS_FORCE_INTEGRITY` flag when stripping
signatures
- Ensures no unexpected overlay data remains after stripping

### Bug Fixes
- Fixed memory corruption bug using `copyBackwards` for overlapping
regions
- Fixed checksum calculation skipping 6 bytes instead of 4
- Added integer overflow protection in payload size calculations
- Fixed double alignment bug in `size_of_image` calculation

## Technical Details

The implementation follows the Windows PE/COFF specification and
includes:

- `StripMode` enum to control when signatures are stripped
(none/strip_if_signed/strip_always)
- Safe unaligned memory access helpers (`viewAtConst`, `viewAtMut`)
- Proper alignment helpers with overflow protection (`alignUpU32`,
`alignUpUsize`)
- Comprehensive error types for all failure cases
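The alignment helpers mentioned above can be sketched like this. The PR's real implementation is Zig, so this JavaScript stand-in for `alignUpU32` (signature and `null`-on-failure convention) is an assumption for illustration:

```javascript
// Round `value` up to the next multiple of `alignment`, refusing to
// overflow the u32 range. Returns null on invalid input or overflow.
function alignUpU32(value, alignment) {
  // alignment must be a nonzero power of two
  if (alignment === 0 || (alignment & (alignment - 1)) !== 0) return null;
  const mask = alignment - 1;
  if (value > 0xffffffff - mask) return null; // would overflow u32
  return ((value + mask) & ~mask) >>> 0; // >>> 0 keeps the result unsigned
}

alignUpU32(10, 8); // 16
```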

## Testing

- Passes all existing PE tests in
`test/regression/issue/pe-codesigning-integrity.test.ts`
- Compiles successfully with `bun run zig:check-windows`
- Properly integrated with StandaloneModuleGraph for Windows compilation

## Impact

This ensures Windows users can:
1. Use `bun build --compile` to create standalone executables
2. Sign the resulting executables with their own certificates
3. Distribute properly signed Windows binaries

Fixes issues where previously signed executables would have invalid
signatures after Bun added its embedded data.

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-25 18:03:27 -07:00
robobun
58782ceef2 Fix bun_dependency_versions.h regenerating on every CMake run (#22985)
## Summary
- Fixes unnecessary regeneration of `bun_dependency_versions.h` on every
CMake run
- Only writes the header file when content actually changes

## Test plan
Tested locally by running CMake configuration multiple times:
1. First run generates the file (shows "Updated dependency versions
header")
2. Subsequent runs skip writing (shows "Dependency versions header
unchanged")
3. File modification timestamp remains unchanged when content is the
same
4. File is properly regenerated when deleted or when content changes

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-09-25 17:23:45 -07:00
Meghan Denny
0b9a2fce2d update no-validate-leaksan.txt 2025-09-25 17:06:23 -07:00
Marko Vejnovic
749ad8a1ff fix(build): Minor Linux Build Fixes (#22972)
### What does this PR do?

### How did you verify your code works?
2025-09-25 16:53:21 -07:00
Jarred Sumner
9746d03ccb Delete slop test 2025-09-25 16:24:24 -07:00
Jarred Sumner
4dfd87a302 Fix aborting fetch() calls while the socket is connecting. Fix a thread-safety issue involving redirects and AbortSignal. (#22842)
### What does this PR do?

When we added "happy eyeballs" support to fetch(), it meant that
`onOpen` potentially would not be called for a while. If the AbortSignal
was aborted between `connect()` and the socket becoming
readable/writable, we would delay closing the connection until it
opened. Fixing that fixes #18536.

Separately, the `isHTTPS()` function used in abort handling and in
request body streams was not thread-safe. This caused a crash when many
redirects happened simultaneously while either AbortSignal or request
body messages were in flight.
This PR fixes https://github.com/oven-sh/bun/issues/14137



### How did you verify your code works?

There are tests

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Ciro Spaciari <ciro.spaciari@gmail.com>
2025-09-25 16:08:06 -07:00
Meghan Denny
20854fb285 node:crypto: add blake2s256 hasher (#22958) 2025-09-25 15:28:42 -07:00
robobun
be15f6c80c feat(test): add --randomize flag to run tests in random order (#22945)
## Summary

This PR adds a `--randomize` flag to `bun test` that shuffles test
execution order. This helps developers catch test interdependencies and
identify flaky tests that may depend on execution order.

## Changes

- Added `--randomize` CLI flag to test command
- 🔀 Implemented test shuffling using `bun.fastRandom()` as PRNG seed
- 🧪 Added comprehensive tests to verify randomization behavior
- 📝 Tests are shuffled at the scheduling phase, properly handling
describe blocks and hooks

## Usage

```bash
# Run tests in random order
bun test --randomize

# Works with other test flags
bun test --randomize --bail
bun test mytest.test.ts --randomize
```

## Implementation Details

The randomization happens in `Order.zig`'s `generateOrderDescribe`
function, which shuffles the `current.entries.items` array when the
randomize flag is set. This ensures:

- All tests still run (just in different order)
- Hooks (beforeAll, afterAll, beforeEach, afterEach) maintain proper
relationships
- Describe blocks and their children are shuffled independently
- Each run uses a different random seed for varied execution orders
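The shuffle step described above can be sketched as a seeded Fisher–Yates over the entries array. The `mulberry32` PRNG and the function name are stand-in assumptions; Bun seeds with `bun.fastRandom()` internally:

```javascript
// Small deterministic PRNG so the same seed reproduces the same order.
function mulberry32(seed) {
  let a = seed | 0;
  return function () {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Fisher–Yates shuffle: every entry survives, only the order changes.
function shuffleEntries(entries, seed) {
  const rand = mulberry32(seed);
  const out = entries.slice();
  for (let i = out.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}
```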

## Test Coverage

Added tests in `test/cli/test/test-randomize.test.ts` that verify:
- Tests run in random order with the flag
- All tests execute (none are skipped)
- Without the flag, tests run in consistent order
- Randomization works with describe blocks

## Example Output

```bash
# Without --randomize (consistent order)
$ bun test mytest.js
Running test 1
Running test 2
Running test 3
Running test 4
Running test 5

# With --randomize (different order each run)
$ bun test mytest.js --randomize
Running test 3
Running test 5
Running test 1
Running test 4
Running test 2

$ bun test mytest.js --randomize
Running test 2
Running test 4
Running test 5
Running test 1
Running test 3
```

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: pfg <pfg@pfg.pw>
2025-09-25 14:20:47 -07:00
pfg
0ea4ce1bb4 Synchronous concurrent test fix (#22928)
```ts
beforeEach(() => {
  console.log("beforeEach");
});
afterEach(() => {
  console.log("afterEach");
});
test.concurrent("test 1", () => {
  console.log("start test 1");
});
test.concurrent("test 2", async () => {
  console.log("start test 2");
});
test.concurrent("test 3", () => {
  console.log("start test 3");
});
```

```
$> bun-before test synchronous-concurrent
beforeEach
beforeEach
beforeEach
start test 1
start test 2
start test 3
afterEach
afterEach
afterEach

$> bun-after test synchronous-concurrent
beforeEach
start test 1
afterEach
beforeEach
start test 2
afterEach
beforeEach
start test 3
afterEach
```

---------

Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-25 03:52:18 -07:00
robobun
6c381b0e03 Fix double slash in error stack traces when root_path has trailing slash (#22951)
## Summary
- Fixes double slashes appearing in error stack traces when `root_path`
ends with a trailing slash
- Followup to #22469 which added dimmed cwd prefixes to error messages

## Changes
- Use `strings.withoutTrailingSlash()` to strip any trailing separator
from `root_path` before adding the path separator
- This prevents paths like `/workspace//file.js` from appearing in error
messages
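The joining fix described above can be sketched in JavaScript. The helper name here is illustrative; the real code is Zig using `strings.withoutTrailingSlash()`:

```javascript
// Strip any trailing separator from the root before appending one,
// so "/workspace/" + "file.js" does not become "/workspace//file.js".
function joinWithRoot(rootPath, relative) {
  const root = rootPath.replace(/\/+$/, "");
  return `${root}/${relative}`;
}

joinWithRoot("/workspace/", "file.js"); // "/workspace/file.js"
```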

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-09-25 00:37:10 -07:00
Ciro Spaciari
7798e6638b Implement NODE_USE_SYSTEM_CA with --use-system-ca CLI flag (#22441)
### What does this PR do?
Resume work on https://github.com/oven-sh/bun/pull/21898
### How did you verify your code works?
Manually tested on macOS, Windows 11, and Ubuntu 25.04. CI changes are
needed for the tests

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-24 21:55:57 -07:00
Marko Vejnovic
e3783c244f chore(libuv): Update to 1.51.0 (#22942)
### What does this PR do?

Uprevs `libuv` to version `1.51.0`.

### How did you verify your code works?

CI passes.

---------

Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-24 20:55:25 -07:00
robobun
fee28ca66f Fix dns.resolve callback parameters to match Node.js behavior (#22814)
## Summary
- Fixed `dns.resolve()` callback to pass 2 parameters instead of 3,
matching Node.js
- Fixed `dns.promises.resolve()` to return array of strings for A/AAAA
records instead of objects
- Added comprehensive regression tests

## What was wrong?

The `dns.resolve()` callback was incorrectly passing 3 parameters
`(error, hostname, results)` instead of Node.js's 2 parameters `(error,
results)`. Additionally, `dns.promises.resolve()` was returning objects
with `{address, family}` instead of plain string arrays for A/AAAA
records.

## How this fixes it

1. Removed the extra `hostname` parameter from the callback in
`dns.resolve()` for A/AAAA records
2. Changed promise version to use `promisifyResolveX(false)` instead of
`promisifyLookup()` to return string arrays
3. Applied same fixes to the `Resolver` class methods

## Test plan
- Added regression test `test/regression/issue/22712.test.ts` with 6
test cases
- All tests pass with the fix
- Verified existing DNS tests still pass

Fixes #22712

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-24 18:29:15 -07:00
robobun
f9a042f114 Improve --reporter flag help and error messages (#22900)
## Summary
- Clarifies help text for `--reporter` and `--reporter-outfile` flags
- Improves error messages when invalid reporter formats are specified
- Makes distinction between test reporters and coverage reporters
clearer

## Changes
1. Updated help text in `Arguments.zig` to better explain:
   - What formats are currently available (only 'junit' for --reporter)
   - Default behavior (console output for tests)
   - Requirements (--reporter-outfile needed with --reporter=junit)
   
2. Improved error messages to list available options when invalid
formats are used

3. Updated CLI completions to match the new help text

## Test plan
- [x] Built and tested with `bun bd`
- [x] Verified help text displays correctly: `./build/debug/bun-debug
test --help`
- [x] Tested error message for invalid reporter:
`./build/debug/bun-debug test --reporter=json`
- [x] Tested error message for missing outfile: `./build/debug/bun-debug
test --reporter=junit`
- [x] Tested error message for invalid coverage reporter:
`./build/debug/bun-debug test --coverage-reporter=invalid`
- [x] Verified junit reporter still works: `./build/debug/bun-debug test
--reporter=junit --reporter-outfile=/tmp/junit.xml`
- [x] Verified lcov coverage reporter still works:
`./build/debug/bun-debug test --coverage --coverage-reporter=lcov`

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2025-09-24 18:26:37 -07:00
robobun
57b93f6ea3 Fix panic when macros return collections with 3+ arrays/objects (#22827)
## Summary

Fixes #22656, #11730, and #7116

Fixes a panic that occurred when macros returned collections containing
three or more arrays or objects.

## Problem

The issue was caused by hash table resizing during recursive processing.
When `this.run()` was called recursively to process nested
arrays/objects, it could add more entries to the `visited` map,
triggering a resize. This would invalidate the `_entry.value_ptr`
pointer obtained from `getOrPut`, leading to memory corruption and
crashes.

## Solution

The fix ensures we handle hash table resizing safely:

1. Use `getOrPut` to reserve an entry and store a placeholder
2. Process all children (which may trigger hash table resizing)
3. Create the final expression with all data
4. Use `put` to update the entry (safe even after resizing)

This approach is applied consistently to both arrays and objects.
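The four steps above can be sketched language-agnostically. Bun's actual code is Zig, where `getOrPut` hands back a pointer that a rehash can invalidate; this JavaScript stand-in just illustrates the placeholder-then-update pattern, since JS `Map` entries are not invalidated by growth:

```javascript
// Recursively process a node, never holding a reference into the map
// across recursion that could be invalidated by a resize.
function convert(node, visited = new Map()) {
  if (visited.has(node)) return visited.get(node);
  visited.set(node, null); // 1. reserve the entry with a placeholder
  let result;
  if (Array.isArray(node)) {
    // 2. recursion may add entries (in a non-GC hash map, this can resize it)
    result = node.map((child) => convert(child, visited));
  } else {
    result = node; // 3. build the final value
  }
  visited.set(node, result); // 4. update via put — safe even after a resize
  return result;
}
```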

## Verification

All three issues have been tested and verified as fixed:

### #22656 - "Panic when returning collections with three or more
arrays or objects"
- **Before**: `panic(main thread): switch on corrupt value`
- **After**: Works correctly

### #11730 - "Constructing deep objects in macros causes segfaults"
- **Before**: `Segmentation fault at address 0x8` with deep nested
structures
- **After**: Handles deep nesting without crashes

### #7116 - "[macro] crash with large complex array"
- **Before**: Crashes with objects containing 50+ properties (hash table
stress)
- **After**: Processes large complex arrays successfully

## Test Plan

Added comprehensive regression tests that cover:
- Collections with 3+ arrays
- Collections with 3+ objects
- Deeply nested structures (5+ levels)
- Objects with many properties (50+) to stress hash table operations
- Mixed collections of arrays and objects

All tests pass with the fix applied.

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-24 18:25:39 -07:00
robobun
fcd628424a Fix YAML.parse to throw SyntaxError instead of BuildMessage (#22924)
YAML.parse now throws SyntaxError for invalid syntax matching JSON.parse
behavior

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2025-09-24 16:29:05 -07:00
pfg
526686fdc9 Prevent test.only and snapshot updates in CI (#21811)
This is feature flagged and will not activate until Bun 1.3

- Makes `test.only()` throw an error in CI
- Unless `--update-snapshots` is passed:
  - Makes `expect.toMatchSnapshot()` throw an error instead of adding a new snapshot in CI
  - Makes `expect.toMatchInlineSnapshot()` throw an error instead of filling in the snapshot value in CI

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-24 15:19:16 -07:00
pfg
95b18582ec Revert "concurrent limit"
This reverts commit 4252a6df31.
2025-09-24 15:09:20 -07:00
pfg
4252a6df31 concurrent limit 2025-09-24 15:08:36 -07:00
Meghan Denny
13248bab57 package.json: add shorthands for creating remote machines (#22780) 2025-09-24 13:11:19 -07:00
Meghan Denny
80e8b9601d update no-validate-exceptions.txt (#22907) 2025-09-24 12:57:14 -07:00
btcbobby
0bd3f3757f Update install.sh to try ~/.bash_profile first for PATH modification (#11679) 2025-09-24 12:53:45 -07:00
Dylan Conway
084eeb945e fix(install): serialize updated workspaces versions correctly for bun.lockb (#22932)
### What does this PR do?
This change was missing after changing semver core numbers to use u64.

Also fixes potentially serializing uninitialized bytes from resolution
unions.
### How did you verify your code works?
Added a test for migrating a bun.lockb with most features used.
2025-09-24 02:42:57 -07:00
Meghan Denny
92bc522e85 lsan: fix reporting on linux ci (#22806) 2025-09-24 00:47:52 -07:00
robobun
e58a4a7282 feat: add concurrent-test-glob option to bunfig.toml for selective concurrent test execution (#22898)
## Summary

Adds a new `concurrentTestGlob` configuration option to bunfig.toml that
allows test files matching a glob pattern to automatically run with
concurrent test execution enabled. This provides granular control over
which tests run concurrently without modifying test files or using the
global `--concurrent` flag.

## Problem

Currently, enabling concurrent test execution in Bun requires either:
1. Using the `--concurrent` flag (affects ALL tests)
2. Manually adding `test.concurrent()` to individual test functions
(requires modifying test files)

This creates challenges for:
- Large codebases wanting to gradually migrate to concurrent testing
- Projects with mixed test types (unit tests that need isolation vs
integration tests that can run in parallel)
- CI/CD pipelines that want to optimize test execution without code
changes

## Solution

This PR introduces a `concurrentTestGlob` option in bunfig.toml that
automatically enables concurrent execution for test files matching a
specified glob pattern:

```toml
[test]
concurrentTestGlob = "**/concurrent-*.test.ts"
```

### Key Features
- Non-breaking: Completely opt-in via configuration
- Flexible: Use glob patterns to target specific test files or
directories
- Override-friendly: `--concurrent` flag still forces all tests to run
concurrently
- Zero code changes: No need to modify existing test files

## Implementation Details

### Code Changes
1. Added `concurrent_test_glob` field to `TestOptions` struct
(`src/cli.zig`)
2. Added parsing for `concurrentTestGlob` from bunfig.toml
(`src/bunfig.zig`)
3. Added `concurrent_test_glob` field to `TestRunner`
(`src/bun.js/test/jest.zig`)
4. Implemented `shouldFileRunConcurrently()` method that checks file
paths against the glob pattern
5. Updated test execution logic to apply concurrent mode based on glob
matching (`src/bun.js/test/ScopeFunctions.zig`)

### How It Works
- When a test file is loaded, its path is checked against the configured
glob pattern
- If it matches, all tests in that file run concurrently (as if
`--concurrent` was passed)
- Files not matching the pattern run sequentially as normal
- The `--concurrent` CLI flag overrides this behavior when specified
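The `shouldFileRunConcurrently()` check described above can be sketched like this. Bun uses its built-in `glob.match()`; the toy glob-to-regex translator here (handling only `**/` and `*`) is an assumption for illustration:

```javascript
// Translate a minimal glob subset to a RegExp.
function globToRegExp(glob) {
  const escaped = glob.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  const pattern = escaped
    .replace(/\*\*\//g, "\u0000") // placeholder so `*` replacement can't touch it
    .replace(/\*/g, "[^/]*") // `*` stays within one path segment
    .replace(/\u0000/g, "(?:.*/)?"); // `**/` spans zero or more directories
  return new RegExp(`^${pattern}$`);
}

function shouldFileRunConcurrently(filePath, glob, cliConcurrent = false) {
  if (cliConcurrent) return true; // --concurrent overrides the glob setting
  return globToRegExp(glob).test(filePath);
}
```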

## Usage Examples

### Basic Usage
```toml
# bunfig.toml
[test]
concurrentTestGlob = "**/integration/*.test.ts"
```

### Multiple Patterns
```toml
[test]
concurrentTestGlob = [
  "**/integration/*.test.ts",
  "**/e2e/*.test.ts", 
  "**/concurrent-*.test.ts"
]
```

### Migration Strategy
Teams can gradually migrate to concurrent testing:
1. Start with integration tests: `"**/integration/*.test.ts"`
2. Add stable unit tests: `"**/fast-*.test.ts"`
3. Eventually migrate most tests except those requiring isolation

## Testing

Added comprehensive test coverage in
`test/cli/test/concurrent-test-glob.test.ts`:
- Tests matching glob patterns run concurrently (verified via
execution order logging)
- Tests not matching patterns run sequentially (verified via shared
state and execution order)
- `--concurrent` flag properly overrides the glob setting
- Tests use file system logging to deterministically verify concurrent
vs sequential execution

## Documentation

Complete documentation added:
- `docs/runtime/bunfig.md` - Configuration reference
- `docs/test/configuration.md` - Test configuration details
- `docs/test/examples/concurrent-test-glob.md` - Comprehensive example
with migration guide

## Performance Considerations

- Glob matching happens once per test file during loading
- Uses Bun's existing `glob.match()` implementation
- Minimal overhead: simple string pattern matching
- Future optimization: Could cache match results per file path

## Breaking Changes

None. This is a fully backward-compatible, opt-in feature.

## Checklist

- [x] Implementation complete and building
- [x] Tests passing
- [x] Documentation updated
- [x] No breaking changes
- [x] Follows existing code patterns

## Related Issues

This addresses common requests for more granular control over concurrent
test execution, particularly for large codebases migrating from other
test runners.

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-23 23:01:15 -07:00
Meghan Denny
ebe2e9da14 node:net: fix handle leak (#22913) 2025-09-23 22:02:34 -07:00
robobun
1a23797e82 feat: add test.serial() API for forcing serial test execution (#22899)
## Summary

Adds a new `test.serial()` API that forces tests to run serially even
when the `--concurrent` flag is passed. This is the opposite of
`test.concurrent()` which forces parallel execution.

## Motivation

Some tests genuinely need to run serially even in CI environments with
`--concurrent`:
- Database migration tests that must run in order
- Tests that modify shared global state
- Tests that use fixed ports or file system resources
- Tests that depend on timing or resource constraints

## Implementation

Changed `self_concurrent` from `bool` to `?bool`:
- `null` = default behavior (inherit from parent or use default)
- `true` = force concurrent execution
- `false` = force serial execution

## API Surface

```javascript
// Force serial execution
test.serial("database migration", async () => {
  // This runs serially even with --concurrent flag
});

// All modifiers work
test.serial.skip("skip this serial test", () => {});
test.serial.todo("implement this serial test");
test.serial.only("only run this serial test", () => {});
test.serial.each([[1], [2]])("serial test %i", (n) => {});
test.serial.if(condition)("conditional serial", () => {});

// Works with describe too
describe.serial("serial test suite", () => {
  test("test 1", () => {}); // runs serially
  test("test 2", () => {}); // runs serially
});

// Explicit test-level settings override describe-level
describe.concurrent("concurrent suite", () => {
  test.serial("this runs serially", () => {}); // serial wins
  test("this runs concurrently", () => {});
});
```

## Test Coverage

Comprehensive tests added including:
- Basic `test.serial()` functionality
- All modifiers (skip, todo, only, each, if)
- `describe.serial()` blocks
- Mixing serial and concurrent tests in same describe block
- Nested describe blocks with conflicting settings
- Explicit overrides (test.serial in describe.concurrent and vice versa)

All 36 tests pass 

## Example

```javascript
// Without this PR - these tests might run in parallel with --concurrent
test("migrate database schema v1", async () => { await migrateV1(); });
test("migrate database schema v2", async () => { await migrateV2(); });
test("migrate database schema v3", async () => { await migrateV3(); });

// With this PR - guaranteed serial execution
test.serial("migrate database schema v1", async () => { await migrateV1(); });
test.serial("migrate database schema v2", async () => { await migrateV2(); });
test.serial("migrate database schema v3", async () => { await migrateV3(); });
```

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-23 21:06:06 -07:00
vfilanovsky-openai
16435f3561 Make sure bun can be installed on Alpine Linux (musl) on arm64 hardware (#22892)
### What does this PR do?
This PR adjusts the "arch" string for the Linux-musl variant so that bun
can be installed on ARM64 platforms using `npm`. Without this fix,
installing bun on Alpine Linux on arm64 fails because the native binary
cannot be found.

#### Why it fails
Bun attempts to find/download the native binaries during the postinstall
phase (see
[install.ts](https://github.com/oven-sh/bun/blob/bun-v1.1.42/packages/bun-release/src/npm/install.ts)).
The platform matching logic lives in
[platform.ts](https://github.com/oven-sh/bun/blob/bun-v1.1.42/packages/bun-release/src/platform.ts).
Note how the "musl" variant is marked [as
"aarch64"](https://github.com/oven-sh/bun/blob/bun-v1.1.42/packages/bun-release/src/platform.ts#L63-L69),
while the regular "glibc" variant is marked [as
"arm64"](https://github.com/oven-sh/bun/blob/bun-v1.1.42/packages/bun-release/src/platform.ts#L44-L49).
On Alpine Linux distributions (or when using "node-alpine" docker image)
we're supposed to be using the "musl" binary. However, since bun marks
it as "aarch64" while the matching logic relies on `process.arch`, it
never gets matched. Node.js uses "arm64", _not_ "aarch64" (see
["process.arch"
docs](https://nodejs.org/docs/latest-v22.x/api/process.html#processarch)).
In short, a mismatch between the expected arch ("aarch64") and the
actual reported arch ("arm64") prevents bun from finding the right
binary when installing with npm/pnpm.
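
An illustrative sketch of the mismatch (hypothetical data, not the actual `platform.ts` code): matching is keyed on `process.arch`, so an entry tagged `"aarch64"` can never match on arm64 hardware:

```javascript
// Simplified stand-in for the platform table in platform.ts.
const platforms = [
  { os: "linux", arch: "arm64", abi: "glibc" },  // matched fine on glibc
  { os: "linux", arch: "aarch64", abi: "musl" }, // never equals process.arch
];

function findPlatform(os, arch, abi) {
  return platforms.find(p => p.os === os && p.arch === arch && p.abi === abi);
}

// On Alpine arm64: process.arch reports "arm64", never "aarch64".
findPlatform("linux", "arm64", "musl"); // → undefined before the fix
```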

### How did you verify your code works?
Verified by running the installer on Alpine Linux on arm64.

cc @magus
2025-09-23 20:14:57 -07:00
Ciro Spaciari
db22b7f402 fix(Bun.sql) handle numeric correctly (#22925)
### What does this PR do?
Fixes https://github.com/oven-sh/bun/issues/21225
### How did you verify your code works?
Tests

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-23 20:14:19 -07:00
Nathan Whitaker
a8ccdb02e9 fix(benchmark): postgres benchmark was not performing any queries under deno and node (#22926)
### What does this PR do?
Fixes the postgres benchmark so that it actually benchmarks query
performance on node and deno.

Before this PR, `sql` was the uncalled `postgres` factory. Each
tagged-template "query" therefore invoked the factory, which connects to
the database and returns a new `sql` function without executing anything.
So bun was doing queries, but node and deno were just connecting to the
postgres database over and over.

You can see from the first example in the docs that you're supposed to
call the default export in order to get back a function to use with
template literals: https://www.npmjs.com/package/postgres
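
The distinction can be illustrated without a database. Here, a hypothetical `makeSql` factory stands in for the `postgres` default export:

```javascript
// Stand-in for the postgres package: a factory that "connects" and
// returns a tagged-template sql function.
let connections = 0;
function makeSql() {
  connections++; // stands in for opening a database connection
  return (strings, ...values) => ({ ran: true, text: strings.raw.join("?") });
}

// Buggy usage: the factory itself used as the tag. This call "connects"
// and returns a sql function, but never runs a query.
const buggy = makeSql;
buggy`SELECT 1`;

// Correct usage: call the factory once, then use the returned tag.
const sql = makeSql();
const row = sql`SELECT ${1}`; // actually "runs" the query
```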


### How did you verify your code works?
Ran it
2025-09-23 17:48:10 -07:00
pfg
144c45229e bun.ptr.Shared.Lazy fixes cppbind change (#22753)
shared lazy:

- `cloneWeak` didn't call `incrementWeak`. Fixed.
- Exposes a public `Optional` so you can do `bun.ptr.Shared(*T).Optional`.
- The doc comment for `take` said it set `self` to null, but it did not.
Fixed.
- Upgrading a weak reference to a strong one incremented the weak count
instead of decrementing it. Fixed.
- Adds a new method `unsafeGetStrongFromPointer`. This is currently unused
here but used in pfg/describe-2:

a690faa60a/src/bun.js/api/Timer/EventLoopTimer.zig (L220-L223)

cppbind:

- moves the bindings to the root of the file at the top and puts raw at
the bottom
- fixes false_is_throw to return void instead of bool
- updates the help message

---------

Co-authored-by: taylor.fish <contact@taylor.fish>
2025-09-23 17:10:08 -07:00
Ciro Spaciari
85271f9dd9 fix(node:http) allow CONNECT in node http/https servers (#22756)
### What does this PR do?
Fixes https://github.com/oven-sh/bun/issues/22755
Fixes https://github.com/oven-sh/bun/issues/19790
Fixes https://github.com/oven-sh/bun/issues/16372
### How did you verify your code works?

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-23 16:46:59 -07:00
robobun
99786797c7 docs: Add missing YAML.stringify() documentation (#22921)
Co-authored-by: Alistair Smith <hi@alistair.sh>
2025-09-23 16:02:51 -07:00
269 changed files with 11697 additions and 2090 deletions

View File

@@ -552,6 +552,7 @@ function getLinkBunStep(platform, options) {
cancel_on_build_failing: isMergeQueue(),
env: {
BUN_LINK_ONLY: "ON",
ASAN_OPTIONS: "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=0",
...getBuildEnv(platform, options),
},
command: `${getBuildCommand(platform, options, "build-bun")} --target bun`,
@@ -615,6 +616,9 @@ function getTestBunStep(platform, options, testOptions = {}) {
cancel_on_build_failing: isMergeQueue(),
parallelism: unifiedTests ? undefined : os === "darwin" ? 2 : 10,
timeout_in_minutes: profile === "asan" || os === "windows" ? 45 : 30,
env: {
ASAN_OPTIONS: "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=0",
},
command:
os === "windows"
? `node .\\scripts\\runner.node.mjs ${args.join(" ")}`

View File

@@ -3,21 +3,7 @@ FormatStyle: webkit
Checks: >
-*,
clang-analyzer-*,
-clang-analyzer-optin.core.EnumCastOutOfRange,
-clang-analyzer-webkit.UncountedLambdaCapturesChecker,
-clang-analyzer-webkit.RefCntblBaseVirtualDtor,
-clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling,
-clang-analyzer-deadcode.DeadStores,
bugprone-*,
-bugprone-easily-swappable-parameters,
-bugprone-narrowing-conversions,
-bugprone-switch-missing-default-case,
-bugprone-assignment-in-if-condition,
-bugprone-implicit-widening-of-multiplication-result,
performance-*,
-performance-no-int-to-ptr,
cppcoreguidelines-*,
-cppcoreguidelines-avoid-magic-numbers,
-cppcoreguidelines-narrowing-conversions,
-cppcoreguidelines-macro-to-enum,
-cppcoreguidelines-avoid-non-const-global-variables,
-clang-analyzer-optin.core.EnumCastOutOfRange
-clang-analyzer-webkit.UncountedLambdaCapturesChecker
-clang-analyzer-optin.core.EnumCastOutOfRange
-clang-analyzer-webkit.RefCntblBaseVirtualDtor

View File

@@ -1,15 +1,15 @@
name: clang-tidy:check
name: C++ Linter comment
permissions:
actions: read
pull-requests: write
on:
# pull_request:
workflow_dispatch:
env:
LLVM_VERSION_MAJOR: "19"
workflow_run:
workflows:
- lint-cpp
types:
- completed
jobs:
comment-lint:
@@ -17,39 +17,50 @@ jobs:
name: Comment
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: ./.github/actions/setup-bun
- name: Run clang-tidy
- name: Download Comment
uses: actions/download-artifact@v4
with:
name: format.log
github-token: ${{ github.token }}
run-id: ${{ github.event.workflow_run.id }}
- name: PR Number
uses: actions/download-artifact@v4
with:
name: pr-number.txt
github-token: ${{ github.token }}
run-id: ${{ github.event.workflow_run.id }}
- name: Did Fail
uses: actions/download-artifact@v4
with:
name: did_fail.txt
github-token: ${{ github.token }}
run-id: ${{ github.event.workflow_run.id }}
- name: Setup Environment
id: env
shell: bash
run: |
sudo apt-get remove -qq clang-18
wget -qO- https://apt.llvm.org/llvm-snapshot.gpg.key | sudo tee /etc/apt/trusted.gpg.d/apt.llvm.org.asc > /dev/null
echo "deb http://apt.llvm.org/$(lsb_release -cs)/ llvm-toolchain-$(lsb_release -cs)-${{ env.LLVM_VERSION_MAJOR }} main" | sudo tee /etc/apt/sources.list.d/llvm.list > /dev/null
sudo apt-get update -qq
sudo apt-get install -y -qq --no-install-recommends --no-install-suggests -o=Dpkg::Use-Pty=0 clang-format-${{ env.LLVM_VERSION_MAJOR }}
# Copy to outputs
echo "pr-number=$(cat pr-number.txt)" >> $GITHUB_OUTPUT
{
echo 'text_output<<EOF'
bun run clang-tidy:check
cat format.log
echo EOF
} >> "$GITHUB_OUTPUT"
echo "did_fail=$(cat did_fail.txt)" >> $GITHUB_OUTPUT
- name: Find Comment
id: comment
uses: peter-evans/find-comment@v3
with:
issue-number: ${{ github.event.number }}
issue-number: ${{ steps.env.outputs.pr-number }}
comment-author: github-actions[bot]
body-includes: <!-- generated-comment lint-cpp-workflow=${{ github.workflow }} -->
- name: Update Comment
uses: peter-evans/create-or-update-comment@v4
if: steps.env.outputs.did_fail != '0'
with:
comment-id: ${{ steps.comment.outputs.comment-id }}
issue-number: ${{ github.event.number }}
edit-mode: replace
issue-number: ${{ steps.env.outputs.pr-number }}
body: |
@${{ github.actor }}, `clang-tidy` had something to share with you about your code:
@@ -60,3 +71,15 @@ jobs:
Commit: ${{ github.event.workflow_run.head_sha || github.sha }}
<!-- generated-comment lint-cpp-workflow=${{ github.workflow }} -->
edit-mode: replace
- name: Update Previous Comment
uses: peter-evans/create-or-update-comment@v4
if: steps.env.outputs.did_fail == '0' && steps.comment.outputs.comment-id != ''
with:
comment-id: ${{ steps.comment.outputs.comment-id }}
issue-number: ${{ steps.env.outputs.pr-number }}
body: |
clang-tidy nits are fixed! Thank you.
<!-- generated-comment lint-cpp-workflow=${{ github.workflow }} -->
edit-mode: replace

2
.gitignore vendored
View File

@@ -189,4 +189,4 @@ scratch*.{js,ts,tsx,cjs,mjs}
scripts/lldb-inline
# We regenerate these in all the build scripts
cmake/sources/*.txt
cmake/sources/*.txt

4
.vscode/launch.json generated vendored
View File

@@ -26,7 +26,7 @@
// "BUN_JSC_dumpSimulatedThrows": "1",
// "BUN_JSC_unexpectedExceptionStackTraceLimit": "20",
// "BUN_DESTRUCT_VM_ON_EXIT": "1",
// "ASAN_OPTIONS": "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=1",
// "ASAN_OPTIONS": "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=1:abort_on_error=1",
// "LSAN_OPTIONS": "malloc_context_size=100:print_suppressions=1:suppressions=${workspaceFolder}/test/leaksan.supp",
},
"console": "internalConsole",
@@ -69,7 +69,7 @@
// "BUN_JSC_dumpSimulatedThrows": "1",
// "BUN_JSC_unexpectedExceptionStackTraceLimit": "20",
// "BUN_DESTRUCT_VM_ON_EXIT": "1",
// "ASAN_OPTIONS": "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=1",
// "ASAN_OPTIONS": "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=1:abort_on_error=1",
// "LSAN_OPTIONS": "malloc_context_size=100:print_suppressions=1:suppressions=${workspaceFolder}/test/leaksan.supp",
},
"console": "internalConsole",

View File

@@ -21,7 +21,7 @@ $ sudo pacman -S base-devel ccache cmake git go libiconv libtool make ninja pkg-
```
```bash#Fedora
$ sudo dnf install cargo ccache cmake git golang libtool ninja-build pkg-config rustc ruby libatomic-static libstdc++-static sed unzip which libicu-devel 'perl(Math::BigInt)'
$ sudo dnf install cargo clang19 llvm19 lld19 ccache cmake git golang libtool ninja-build pkg-config rustc ruby libatomic-static libstdc++-static sed unzip which libicu-devel 'perl(Math::BigInt)'
```
```bash#openSUSE Tumbleweed

View File

@@ -1,6 +1,6 @@
const isBun = typeof globalThis?.Bun?.sql !== "undefined";
import postgres from "postgres";
const sql = isBun ? Bun.sql : postgres;
const sql = isBun ? Bun.sql : postgres();
// Create the table if it doesn't exist
await sql`

View File

@@ -48,6 +48,7 @@ const BunBuildOptions = struct {
/// enable debug logs in release builds
enable_logs: bool = false,
enable_asan: bool,
enable_valgrind: bool,
tracy_callstack_depth: u16,
reported_nodejs_version: Version,
/// To make iterating on some '@embedFile's faster, we load them at runtime
@@ -94,6 +95,7 @@ const BunBuildOptions = struct {
opts.addOption(bool, "baseline", this.isBaseline());
opts.addOption(bool, "enable_logs", this.enable_logs);
opts.addOption(bool, "enable_asan", this.enable_asan);
opts.addOption(bool, "enable_valgrind", this.enable_valgrind);
opts.addOption([]const u8, "reported_nodejs_version", b.fmt("{}", .{this.reported_nodejs_version}));
opts.addOption(bool, "zig_self_hosted_backend", this.no_llvm);
opts.addOption(bool, "override_no_export_cpp_apis", this.override_no_export_cpp_apis);
@@ -213,26 +215,21 @@ pub fn build(b: *Build) !void {
var build_options = BunBuildOptions{
.target = target,
.optimize = optimize,
.os = os,
.arch = arch,
.codegen_path = codegen_path,
.codegen_embed = codegen_embed,
.no_llvm = no_llvm,
.override_no_export_cpp_apis = override_no_export_cpp_apis,
.version = try Version.parse(bun_version),
.canary_revision = canary: {
const rev = b.option(u32, "canary", "Treat this as a canary build") orelse 0;
break :canary if (rev == 0) null else rev;
},
.reported_nodejs_version = try Version.parse(
b.option([]const u8, "reported_nodejs_version", "Reported Node.js version") orelse
"0.0.0-unset",
),
.sha = sha: {
const sha_buildoption = b.option([]const u8, "sha", "Force the git sha");
const sha_github = b.graph.env_map.get("GITHUB_SHA");
@@ -268,10 +265,10 @@ pub fn build(b: *Build) !void {
break :sha sha;
},
.tracy_callstack_depth = b.option(u16, "tracy_callstack_depth", "") orelse 10,
.enable_logs = b.option(bool, "enable_logs", "Enable logs in release") orelse false,
.enable_asan = b.option(bool, "enable_asan", "Enable asan") orelse false,
.enable_valgrind = b.option(bool, "enable_valgrind", "Enable valgrind") orelse false,
};
// zig build obj
@@ -500,6 +497,7 @@ fn addMultiCheck(
.codegen_path = root_build_options.codegen_path,
.no_llvm = root_build_options.no_llvm,
.enable_asan = root_build_options.enable_asan,
.enable_valgrind = root_build_options.enable_valgrind,
.override_no_export_cpp_apis = root_build_options.override_no_export_cpp_apis,
};
@@ -636,7 +634,7 @@ fn configureObj(b: *Build, opts: *BunBuildOptions, obj: *Compile) void {
obj.link_function_sections = true;
obj.link_data_sections = true;
if (opts.optimize == .Debug) {
if (opts.optimize == .Debug and opts.enable_valgrind) {
obj.root_module.valgrind = true;
}
}
@@ -745,6 +743,7 @@ fn addInternalImports(b: *Build, mod: *Module, opts: *BunBuildOptions) void {
.{ .file = "node-fallbacks/url.js", .enable = opts.shouldEmbedCode() },
.{ .file = "node-fallbacks/util.js", .enable = opts.shouldEmbedCode() },
.{ .file = "node-fallbacks/zlib.js", .enable = opts.shouldEmbedCode() },
.{ .file = "eval/feedback.ts", .enable = opts.shouldEmbedCode() },
}) |entry| {
if (!@hasField(@TypeOf(entry), "enable") or entry.enable) {
const path = b.pathJoin(&.{ opts.codegen_path, entry.file });

View File

@@ -114,8 +114,10 @@ endif()
if(DEBUG AND ((APPLE AND ARCH STREQUAL "aarch64") OR LINUX))
set(DEFAULT_ASAN ON)
set(DEFAULT_VALGRIND OFF)
else()
set(DEFAULT_ASAN OFF)
set(DEFAULT_VALGRIND OFF)
endif()
optionx(ENABLE_ASAN BOOL "If ASAN support should be enabled" DEFAULT ${DEFAULT_ASAN})

View File

@@ -1,17 +1,9 @@
# https://clang.llvm.org/extra/clang-tidy/
set(CLANG_TIDY_SOURCES ${BUN_C_SOURCES} ${BUN_CXX_SOURCES})
list(REMOVE_ITEM CLANG_TIDY_SOURCES ${CWD}/src/bun.js/bindings/node/http/llhttp/llhttp.c)
list(REMOVE_ITEM CLANG_TIDY_SOURCES ${CWD}/src/bun.js/bindings/node/http/llhttp/http.c)
list(REMOVE_ITEM CLANG_TIDY_SOURCES ${CWD}/src/bun.js/bindings/node/http/llhttp/api.c)
list(REMOVE_ITEM CLANG_TIDY_SOURCES ${CWD}/src/bun.js/bindings/decodeURIComponentSIMD.cpp)
list(REMOVE_ITEM CLANG_TIDY_SOURCES ${CWD}/src/bun.js/bindings/NoOpForTesting.cpp)
list(REMOVE_ITEM CLANG_TIDY_SOURCES ${CWD}/src/bun.js/bindings/ProcessBindingNatives.cpp)
list(REMOVE_ITEM CLANG_TIDY_SOURCES ${CWD}/src/bun.js/bindings/stripANSI.cpp)
list(REMOVE_ITEM CLANG_TIDY_SOURCES ${CWD}/src/bun.js/bindings/Uint8Array.cpp)
set(CLANG_TIDY_COMMAND ${CLANG_TIDY_PROGRAM}
-p ${BUILD_PATH}
-p ${BUILD_PATH}
--config-file=${CWD}/.clang-tidy
)

View File

@@ -25,7 +25,7 @@ register_command(
"Running cppcheck"
COMMAND
${CMAKE_COMMAND} -E make_directory cppcheck
&& ${CPPCHECK_COMMAND}
&& ${CPPCHECK_COMMAND}
CWD
${BUILD_PATH}
TARGETS

View File

@@ -38,7 +38,7 @@ foreach(i RANGE 10)
else()
message(STATUS "Downloading ${DOWNLOAD_URL}... (retry ${i})")
endif()
file(DOWNLOAD
${DOWNLOAD_URL}
${DOWNLOAD_TMP_FILE_${i}}

View File

@@ -10,13 +10,13 @@ function(absolute_sources OUTPUT_VAR INPUT_FILE)
else()
set(BASE_DIR ${CMAKE_CURRENT_SOURCE_DIR})
endif()
# Read the file containing relative paths
file(STRINGS "${INPUT_FILE}" RELATIVE_PATHS)
# Create a list to store absolute paths
set(RESULT_LIST "")
# Convert each relative path to absolute
foreach(REL_PATH ${RELATIVE_PATHS})
# Skip empty lines
@@ -26,10 +26,10 @@ function(absolute_sources OUTPUT_VAR INPUT_FILE)
list(APPEND RESULT_LIST ${ABS_PATH})
endif()
endforeach()
# Set the output variable in the parent scope
set(${OUTPUT_VAR} ${RESULT_LIST} PARENT_SCOPE)
# Tell CMake that the configuration depends on this file
set_property(DIRECTORY APPEND PROPERTY CMAKE_CONFIGURE_DEPENDS "${INPUT_FILE}")
endfunction()
endfunction()

View File

@@ -2,6 +2,8 @@ include(PathUtils)
if(DEBUG)
set(bun bun-debug)
elseif(ENABLE_ASAN AND ENABLE_VALGRIND)
set(bun bun-asan-valgrind)
elseif(ENABLE_ASAN)
set(bun bun-asan)
elseif(ENABLE_VALGRIND)
@@ -619,6 +621,7 @@ register_command(
-Dcpu=${ZIG_CPU}
-Denable_logs=$<IF:$<BOOL:${ENABLE_LOGS}>,true,false>
-Denable_asan=$<IF:$<BOOL:${ENABLE_ZIG_ASAN}>,true,false>
-Denable_valgrind=$<IF:$<BOOL:${ENABLE_VALGRIND}>,true,false>
-Dversion=${VERSION}
-Dreported_nodejs_version=${NODEJS_VERSION}
-Dcanary=${CANARY_REVISION}
@@ -760,7 +763,7 @@ if (NOT WIN32)
# Only enable in these scenarios:
# 1. NOT in CI, OR
# 2. In CI AND BUN_CPP_ONLY is enabled
if((NOT CI OR (CI AND BUN_CPP_ONLY)) AND NOT ENABLE_ANALYSIS)
if(NOT CI OR (CI AND BUN_CPP_ONLY))
target_precompile_headers(${bun} PRIVATE
"$<$<COMPILE_LANGUAGE:CXX>:${CWD}/src/bun.js/bindings/root.h>"
)
@@ -886,12 +889,8 @@ if(NOT WIN32)
endif()
if(ENABLE_ASAN)
target_compile_options(${bun} PUBLIC
-fsanitize=address
)
target_link_libraries(${bun} PUBLIC
-fsanitize=address
)
target_compile_options(${bun} PUBLIC -fsanitize=address)
target_link_libraries(${bun} PUBLIC -fsanitize=address)
endif()
target_compile_options(${bun} PUBLIC
@@ -930,12 +929,8 @@ if(NOT WIN32)
)
if(ENABLE_ASAN)
target_compile_options(${bun} PUBLIC
-fsanitize=address
)
target_link_libraries(${bun} PUBLIC
-fsanitize=address
)
target_compile_options(${bun} PUBLIC -fsanitize=address)
target_link_libraries(${bun} PUBLIC -fsanitize=address)
endif()
endif()
else()
@@ -969,6 +964,7 @@ if(WIN32)
/delayload:WSOCK32.dll
/delayload:ADVAPI32.dll
/delayload:IPHLPAPI.dll
/delayload:CRYPT32.dll
)
endif()
endif()
@@ -1010,6 +1006,7 @@ if(LINUX)
-Wl,--wrap=exp2
-Wl,--wrap=expf
-Wl,--wrap=fcntl64
-Wl,--wrap=gettid
-Wl,--wrap=log
-Wl,--wrap=log2
-Wl,--wrap=log2f
@@ -1061,7 +1058,7 @@ if(LINUX)
)
endif()
if (NOT DEBUG AND NOT ENABLE_ASAN)
if (NOT DEBUG AND NOT ENABLE_ASAN AND NOT ENABLE_VALGRIND)
target_link_options(${bun} PUBLIC
-Wl,-icf=safe
)
@@ -1188,6 +1185,7 @@ if(WIN32)
ntdll
userenv
dbghelp
crypt32
wsock32 # ws2_32 required by TransmitFile aka sendfile on windows
delayimp.lib
)
@@ -1230,32 +1228,32 @@ if(NOT BUN_CPP_ONLY)
OUTPUTS
${BUILD_PATH}/${bunStripExe}
)
# Then sign both executables on Windows
if(WIN32 AND ENABLE_WINDOWS_CODESIGNING)
set(SIGN_SCRIPT "${CMAKE_SOURCE_DIR}/.buildkite/scripts/sign-windows.ps1")
# Verify signing script exists
if(NOT EXISTS "${SIGN_SCRIPT}")
message(FATAL_ERROR "Windows signing script not found: ${SIGN_SCRIPT}")
endif()
# Use PowerShell for Windows code signing (native Windows, no path issues)
find_program(POWERSHELL_EXECUTABLE
find_program(POWERSHELL_EXECUTABLE
NAMES pwsh.exe powershell.exe
PATHS
PATHS
"C:/Program Files/PowerShell/7"
"C:/Program Files (x86)/PowerShell/7"
"C:/Windows/System32/WindowsPowerShell/v1.0"
DOC "Path to PowerShell executable"
)
if(NOT POWERSHELL_EXECUTABLE)
set(POWERSHELL_EXECUTABLE "powershell.exe")
endif()
message(STATUS "Using PowerShell executable: ${POWERSHELL_EXECUTABLE}")
# Sign both bun-profile.exe and bun.exe after stripping
register_command(
TARGET
@@ -1363,12 +1361,20 @@ if(NOT BUN_CPP_ONLY)
if(ENABLE_BASELINE)
set(bunTriplet ${bunTriplet}-baseline)
endif()
if(ENABLE_ASAN)
if (ENABLE_ASAN AND ENABLE_VALGRIND)
set(bunTriplet ${bunTriplet}-asan-valgrind)
set(bunPath ${bunTriplet})
elseif (ENABLE_VALGRIND)
set(bunTriplet ${bunTriplet}-valgrind)
set(bunPath ${bunTriplet})
elseif(ENABLE_ASAN)
set(bunTriplet ${bunTriplet}-asan)
set(bunPath ${bunTriplet})
else()
string(REPLACE bun ${bunTriplet} bunPath ${bun})
endif()
set(bunFiles ${bunExe} features.json)
if(WIN32)
list(APPEND bunFiles ${bun}.pdb)

View File

@@ -21,4 +21,4 @@ register_cmake_command(
-DHDR_HISTOGRAM_BUILD_STATIC=ON
-DHDR_LOG_REQUIRED=DISABLED
-DHDR_HISTOGRAM_BUILD_PROGRAMS=OFF
)
)

View File

@@ -30,4 +30,4 @@ register_cmake_command(
INCLUDES
.
hwy
)
)

View File

@@ -4,7 +4,8 @@ register_repository(
REPOSITORY
libuv/libuv
COMMIT
da527d8d2a908b824def74382761566371439003
# Corresponds to v1.51.0
5152db2cbfeb5582e9c27c5ea1dba2cd9e10759b
)
if(WIN32)

View File

@@ -14,7 +14,7 @@ set(MIMALLOC_CMAKE_ARGS
-DMI_BUILD_TESTS=OFF
-DMI_USE_CXX=ON
-DMI_SKIP_COLLECT_ON_EXIT=ON
# ```
# mimalloc_allow_large_os_pages=0 BUN_PORT=3004 mem bun http-hello.js
# Started development server: http://localhost:3004
@@ -51,7 +51,7 @@ if(ENABLE_ASAN)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_DEBUG_UBSAN=ON)
elseif(APPLE OR LINUX)
if(APPLE)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_OVERRIDE=OFF)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_OVERRIDE=OFF)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_OSX_ZONE=OFF)
list(APPEND MIMALLOC_CMAKE_ARGS -DMI_OSX_INTERPOSE=OFF)
else()

View File

@@ -3,4 +3,4 @@ set(CMAKE_SYSTEM_PROCESSOR x64)
set(CMAKE_OSX_ARCHITECTURES x86_64)
set(CMAKE_C_COMPILER_WORKS ON)
set(CMAKE_CXX_COMPILER_WORKS ON)
set(CMAKE_CXX_COMPILER_WORKS ON)

View File

@@ -3,4 +3,4 @@ set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(ABI musl)
set(CMAKE_C_COMPILER_WORKS ON)
set(CMAKE_CXX_COMPILER_WORKS ON)
set(CMAKE_CXX_COMPILER_WORKS ON)

View File

@@ -3,4 +3,4 @@ set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(ABI gnu)
set(CMAKE_C_COMPILER_WORKS ON)
set(CMAKE_CXX_COMPILER_WORKS ON)
set(CMAKE_CXX_COMPILER_WORKS ON)

View File

@@ -4,4 +4,4 @@ set(ENABLE_BASELINE ON)
set(ABI gnu)
set(CMAKE_C_COMPILER_WORKS ON)
set(CMAKE_CXX_COMPILER_WORKS ON)
set(CMAKE_CXX_COMPILER_WORKS ON)

View File

@@ -4,4 +4,4 @@ set(ENABLE_BASELINE ON)
set(ABI musl)
set(CMAKE_C_COMPILER_WORKS ON)
set(CMAKE_CXX_COMPILER_WORKS ON)
set(CMAKE_CXX_COMPILER_WORKS ON)

View File

@@ -3,4 +3,4 @@ set(CMAKE_SYSTEM_PROCESSOR x64)
set(ENABLE_BASELINE ON)
set(CMAKE_C_COMPILER_WORKS ON)
set(CMAKE_CXX_COMPILER_WORKS ON)
set(CMAKE_CXX_COMPILER_WORKS ON)

View File

@@ -2,4 +2,4 @@ set(CMAKE_SYSTEM_NAME Windows)
set(CMAKE_SYSTEM_PROCESSOR x64)
set(CMAKE_C_COMPILER_WORKS ON)
set(CMAKE_CXX_COMPILER_WORKS ON)
set(CMAKE_CXX_COMPILER_WORKS ON)

View File

@@ -41,10 +41,10 @@ endfunction()
function(generate_dependency_versions_header)
set(DEPS_PATH "${CMAKE_SOURCE_DIR}/src/deps")
set(VENDOR_PATH "${CMAKE_SOURCE_DIR}/vendor")
# Initialize version variables
set(DEPENDENCY_VERSIONS "")
# WebKit version (from SetupWebKit.cmake or command line)
if(WEBKIT_VERSION)
set(WEBKIT_VERSION_STR "${WEBKIT_VERSION}")
@@ -52,7 +52,7 @@ function(generate_dependency_versions_header)
set(WEBKIT_VERSION_STR "0ddf6f47af0a9782a354f61e06d7f83d097d9f84")
endif()
list(APPEND DEPENDENCY_VERSIONS "WEBKIT" "${WEBKIT_VERSION_STR}")
# Track input files so CMake reconfigures when they change
set_property(DIRECTORY APPEND PROPERTY CMAKE_CONFIGURE_DEPENDS
"${CMAKE_SOURCE_DIR}/package.json"
@@ -60,7 +60,7 @@ function(generate_dependency_versions_header)
"${VENDOR_PATH}/zlib/zlib.h"
"${DEPS_PATH}/zstd/lib/zstd.h"
)
# Hardcoded dependency versions (previously from generated_versions_list.zig)
# These are the commit hashes/tree objects for each dependency
list(APPEND DEPENDENCY_VERSIONS "BORINGSSL" "29a2cd359458c9384694b75456026e4b57e3e567")
@@ -74,7 +74,7 @@ function(generate_dependency_versions_header)
list(APPEND DEPENDENCY_VERSIONS "TINYCC" "ab631362d839333660a265d3084d8ff060b96753")
list(APPEND DEPENDENCY_VERSIONS "ZLIB_HASH" "886098f3f339617b4243b286f5ed364b9989e245")
list(APPEND DEPENDENCY_VERSIONS "ZSTD_HASH" "794ea1b0afca0f020f4e57b6732332231fb23c70")
# Extract semantic versions from header files where available
extract_version_from_header(
"${VENDOR_PATH}/libdeflate/libdeflate.h"
@@ -82,21 +82,21 @@ function(generate_dependency_versions_header)
LIBDEFLATE_VERSION_STRING
)
list(APPEND DEPENDENCY_VERSIONS "LIBDEFLATE_VERSION" "${LIBDEFLATE_VERSION_STRING}")
extract_version_from_header(
"${VENDOR_PATH}/zlib/zlib.h"
"#define[ \t]+ZLIB_VERSION[ \t]+\"([^\"]+)\""
ZLIB_VERSION_STRING
)
list(APPEND DEPENDENCY_VERSIONS "ZLIB_VERSION" "${ZLIB_VERSION_STRING}")
extract_version_from_header(
"${DEPS_PATH}/zstd/lib/zstd.h"
"#define[ \t]+ZSTD_VERSION_STRING[ \t]+\"([^\"]+)\""
ZSTD_VERSION_STRING
)
list(APPEND DEPENDENCY_VERSIONS "ZSTD_VERSION" "${ZSTD_VERSION_STRING}")
# Bun version from package.json
if(EXISTS "${CMAKE_SOURCE_DIR}/package.json")
file(READ "${CMAKE_SOURCE_DIR}/package.json" PACKAGE_JSON)
@@ -110,11 +110,11 @@ function(generate_dependency_versions_header)
set(BUN_VERSION_STRING "${VERSION}")
endif()
list(APPEND DEPENDENCY_VERSIONS "BUN_VERSION" "${BUN_VERSION_STRING}")
# Node.js compatibility version (hardcoded as in the current implementation)
set(NODEJS_COMPAT_VERSION "22.12.0")
list(APPEND DEPENDENCY_VERSIONS "NODEJS_COMPAT_VERSION" "${NODEJS_COMPAT_VERSION}")
# Get Bun's git SHA for uws/usockets versions (they use Bun's own SHA)
execute_process(
COMMAND git rev-parse HEAD
@@ -128,11 +128,11 @@ function(generate_dependency_versions_header)
endif()
list(APPEND DEPENDENCY_VERSIONS "UWS" "${BUN_GIT_SHA}")
list(APPEND DEPENDENCY_VERSIONS "USOCKETS" "${BUN_GIT_SHA}")
# Zig version - hardcoded for now, can be updated as needed
# This should match the version of Zig used to build Bun
list(APPEND DEPENDENCY_VERSIONS "ZIG" "0.14.1")
# Generate the header file content
set(HEADER_CONTENT "// This file is auto-generated by CMake. Do not edit manually.\n")
string(APPEND HEADER_CONTENT "#ifndef BUN_DEPENDENCY_VERSIONS_H\n")
@@ -141,7 +141,7 @@ function(generate_dependency_versions_header)
string(APPEND HEADER_CONTENT "extern \"C\" {\n")
string(APPEND HEADER_CONTENT "#endif\n\n")
string(APPEND HEADER_CONTENT "// Dependency versions\n")
# Process the version list
list(LENGTH DEPENDENCY_VERSIONS num_versions)
math(EXPR last_idx "${num_versions} - 1")
@@ -158,10 +158,10 @@ function(generate_dependency_versions_header)
endif()
math(EXPR i "${i} + 2")
endwhile()
string(APPEND HEADER_CONTENT "\n")
string(APPEND HEADER_CONTENT "// C string constants for easy access\n")
# Create C string constants
set(i 0)
while(i LESS num_versions)
@@ -176,18 +176,29 @@ function(generate_dependency_versions_header)
endif()
math(EXPR i "${i} + 2")
endwhile()
string(APPEND HEADER_CONTENT "\n#ifdef __cplusplus\n")
string(APPEND HEADER_CONTENT "}\n")
string(APPEND HEADER_CONTENT "#endif\n\n")
string(APPEND HEADER_CONTENT "#endif // BUN_DEPENDENCY_VERSIONS_H\n")
# Write the header file
# Write the header file only if content has changed
set(OUTPUT_FILE "${CMAKE_BINARY_DIR}/bun_dependency_versions.h")
file(WRITE "${OUTPUT_FILE}" "${HEADER_CONTENT}")
message(STATUS "Generated dependency versions header: ${OUTPUT_FILE}")
# Read existing content if file exists
set(EXISTING_CONTENT "")
if(EXISTS "${OUTPUT_FILE}")
file(READ "${OUTPUT_FILE}" EXISTING_CONTENT)
endif()
# Only write if content is different
if(NOT "${EXISTING_CONTENT}" STREQUAL "${HEADER_CONTENT}")
file(WRITE "${OUTPUT_FILE}" "${HEADER_CONTENT}")
message(STATUS "Updated dependency versions header: ${OUTPUT_FILE}")
else()
message(STATUS "Dependency versions header unchanged: ${OUTPUT_FILE}")
endif()
# Also create a more detailed version for debugging
set(DEBUG_OUTPUT_FILE "${CMAKE_BINARY_DIR}/bun_dependency_versions_debug.txt")
set(DEBUG_CONTENT "Bun Dependency Versions\n")
@@ -206,4 +217,4 @@ function(generate_dependency_versions_header)
endfunction()
# Call the function to generate the header
generate_dependency_versions_header()
generate_dependency_versions_header()

View File

@@ -101,7 +101,7 @@ foreach(i RANGE ${BUILDKITE_JOBS_MAX_INDEX})
list(APPEND BUILDKITE_JOBS_NOT_FOUND ${BUILDKITE_JOB_NAME})
continue()
endif()
file(READ ${BUILDKITE_ARTIFACTS_PATH} BUILDKITE_ARTIFACTS)
string(JSON BUILDKITE_ARTIFACTS_LENGTH LENGTH ${BUILDKITE_ARTIFACTS})
if(NOT BUILDKITE_ARTIFACTS_LENGTH GREATER 0)

View File

@@ -17,7 +17,6 @@ set(DEFAULT_LLVM_VERSION "19.1.7")
optionx(LLVM_VERSION STRING "The version of LLVM to use" DEFAULT ${DEFAULT_LLVM_VERSION})
string(REGEX MATCH "([0-9]+)\\.([0-9]+)\\.([0-9]+)" USE_LLVM_VERSION ${LLVM_VERSION})
if(USE_LLVM_VERSION)
set(LLVM_VERSION_MAJOR ${CMAKE_MATCH_1})
set(LLVM_VERSION_MINOR ${CMAKE_MATCH_2})
@@ -132,6 +131,9 @@ else()
find_llvm_command(CMAKE_RANLIB llvm-ranlib)
if(LINUX)
find_llvm_command(LLD_PROGRAM ld.lld)
# Ensure vendor dependencies use lld instead of ld
list(APPEND CMAKE_ARGS -DCMAKE_EXE_LINKER_FLAGS=--ld-path=${LLD_PROGRAM})
list(APPEND CMAKE_ARGS -DCMAKE_SHARED_LINKER_FLAGS=--ld-path=${LLD_PROGRAM})
endif()
if(APPLE)
find_llvm_command(CMAKE_DSYMUTIL dsymutil)

View File

@@ -53,7 +53,7 @@ endif()
optionx(CMAKE_OSX_SYSROOT STRING "The macOS SDK path to target" DEFAULT ${DEFAULT_CMAKE_OSX_SYSROOT})
list(APPEND CMAKE_ARGS
list(APPEND CMAKE_ARGS
-DCMAKE_OSX_DEPLOYMENT_TARGET=${CMAKE_OSX_DEPLOYMENT_TARGET}
-DCMAKE_OSX_SYSROOT=${CMAKE_OSX_SYSROOT}
)

View File

@@ -2,7 +2,7 @@ option(WEBKIT_VERSION "The version of WebKit to use")
option(WEBKIT_LOCAL "If a local version of WebKit should be used instead of downloading")
if(NOT WEBKIT_VERSION)
set(WEBKIT_VERSION 495c25e24927ba03277ae225cd42811588d03ff8)
set(WEBKIT_VERSION 69fa2714ab5f917c2d15501ff8cfdccfaea78882)
endif()
string(SUBSTRING ${WEBKIT_VERSION} 0 16 WEBKIT_VERSION_PREFIX)

View File

@@ -90,6 +90,7 @@ register_command(
-DZIG_PATH=${ZIG_PATH}
-DZIG_COMMIT=${ZIG_COMMIT}
-DENABLE_ASAN=${ENABLE_ASAN}
-DENABLE_VALGRIND=${ENABLE_VALGRIND}
-DZIG_COMPILER_SAFE=${ZIG_COMPILER_SAFE}
-P ${CWD}/cmake/scripts/DownloadZig.cmake
SOURCES

View File

@@ -122,7 +122,7 @@
},
{
"name": "reporter",
"description": "Specify the test reporter. Currently --reporter=junit is the only supported format.",
"description": "Test output reporter format. Available: 'junit' (requires --reporter-outfile). Default: console output.",
"hasValue": true,
"valueType": "val",
"required": false,
@@ -130,7 +130,7 @@
},
{
"name": "reporter-outfile",
"description": "The output file used for the format from --reporter.",
"description": "Output file path for the reporter format (required with --reporter).",
"hasValue": true,
"valueType": "val",
"required": false,

View File

@@ -184,6 +184,7 @@ Bun.hash.rapidhash("data", 1234);
- `"blake2b256"`
- `"blake2b512"`
- `"blake2s256"`
- `"md4"`
- `"md5"`
- `"ripemd160"`

View File

@@ -161,6 +161,102 @@ const randomTag = await redis.srandmember("tags");
const poppedTag = await redis.spop("tags");
```
## Pub/Sub
Bun provides native bindings for the [Redis
Pub/Sub](https://redis.io/docs/latest/develop/pubsub/) protocol. **New in Bun
1.2.23**
{% callout %}
**🚧** — The Redis Pub/Sub feature is experimental. Although we expect it to be
stable, we're currently actively looking for feedback and areas for improvement.
{% /callout %}
### Basic Usage
To get started publishing messages, you can set up a publisher in
`publisher.ts`:
```typescript#publisher.ts
import { RedisClient } from "bun";
const writer = new RedisClient("redis://localhost:6379");
await writer.connect();
writer.publish("general", "Hello everyone!");
writer.close();
```
In another file, create the subscriber in `subscriber.ts`:
```typescript#subscriber.ts
import { RedisClient } from "bun";
const listener = new RedisClient("redis://localhost:6379");
await listener.connect();
await listener.subscribe("general", (message, channel) => {
console.log(`Received: ${message}`);
});
```
In one shell, run your subscriber:
```bash
bun run subscriber.ts
```
and, in another, run your publisher:
```bash
bun run publisher.ts
```
{% callout %}
**Note:** Subscribing puts the `RedisClient` connection into subscriber mode. A
client with subscriptions can only call `RedisClient.prototype.subscribe()`. In
other words, applications that also need to send regular commands to Redis
require a separate connection, which you can obtain through `.duplicate()`:
```typescript
import { RedisClient } from "bun";
const redis = new RedisClient("redis://localhost:6379");
await redis.connect();
const subscriber = await redis.duplicate();
await subscriber.subscribe("foo", () => {});
await redis.set("bar", "baz");
```
{% /callout %}
### Publishing
Publishing messages is done through the `publish()` method:
```typescript
await client.publish(channelName, message);
```
### Subscriptions
The Bun `RedisClient` allows you to subscribe to channels through the
`.subscribe()` method:
```typescript
await client.subscribe(channel, (message, channel) => {});
```
You can unsubscribe through the `.unsubscribe()` method:
```typescript
await client.unsubscribe(); // Unsubscribe from all channels.
await client.unsubscribe(channel); // Unsubscribe a particular channel.
await client.unsubscribe(channel, listener); // Unsubscribe a particular listener.
```
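The three `unsubscribe()` forms behave like progressively narrower deletions from a per-channel listener map. A minimal in-memory sketch of those semantics (illustrative only; `MiniPubSub` is a hypothetical class, not Bun's implementation):

```ts
// In-memory sketch of subscribe/unsubscribe bookkeeping (not Bun's implementation).
type Listener = (message: string, channel: string) => void;

class MiniPubSub {
  private channels = new Map<string, Set<Listener>>();

  subscribe(channel: string, listener: Listener): void {
    if (!this.channels.has(channel)) this.channels.set(channel, new Set());
    this.channels.get(channel)!.add(listener);
  }

  unsubscribe(channel?: string, listener?: Listener): void {
    if (channel === undefined) {
      this.channels.clear(); // unsubscribe from all channels
    } else if (listener === undefined) {
      this.channels.delete(channel); // unsubscribe a particular channel
    } else {
      this.channels.get(channel)?.delete(listener); // unsubscribe one listener
    }
  }

  publish(channel: string, message: string): number {
    const listeners = this.channels.get(channel) ?? new Set<Listener>();
    for (const listener of listeners) listener(message, channel);
    return listeners.size; // Redis's PUBLISH similarly reports the receiver count
  }
}
```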
## Advanced Usage
### Command Execution and Pipelining
@@ -482,9 +578,10 @@ When connecting to Redis servers using older versions that don't support RESP3,
Current limitations of the Redis client we are planning to address in future versions:
- [ ] No dedicated API for pub/sub functionality (though you can use the raw command API)
- [ ] Transactions (MULTI/EXEC) must be done through raw commands for now
- [ ] Streams are supported but without dedicated methods
- [ ] Pub/Sub does not currently support binary data or pattern-based subscriptions.
Unsupported features:

@@ -3,6 +3,7 @@ In Bun, YAML is a first-class citizen alongside JSON and TOML.
Bun provides built-in support for YAML files through both runtime APIs and bundler integration. You can
- Parse YAML strings with `Bun.YAML.parse`
- Stringify JavaScript objects to YAML with `Bun.YAML.stringify`
- import & require YAML files as modules at runtime (including hot reloading & watch mode support)
- import & require YAML files in frontend apps via bun's bundler
@@ -104,7 +105,7 @@ const data = Bun.YAML.parse(yaml);
#### Error Handling
`Bun.YAML.parse()` throws a `SyntaxError` if the YAML is invalid:
`Bun.YAML.parse()` throws an error if the YAML is invalid:
```ts
try {
@@ -114,6 +115,175 @@ try {
}
```
### `Bun.YAML.stringify()`
Convert a JavaScript value into a YAML string. The API signature matches `JSON.stringify`:
```ts
YAML.stringify(value, replacer?, space?)
```
- `value`: The value to convert to YAML
- `replacer`: Currently only `null` or `undefined` (function replacers not yet supported)
- `space`: Number of spaces for indentation (e.g., `2`) or a string to use for indentation. **Without this parameter, outputs flow-style (single-line) YAML**
#### Basic Usage
```ts
import { YAML } from "bun";
const data = {
name: "John Doe",
age: 30,
hobbies: ["reading", "coding"],
};
// Without space - outputs flow-style (single-line) YAML
console.log(YAML.stringify(data));
// {name: John Doe,age: 30,hobbies: [reading,coding]}
// With space=2 - outputs block-style (multi-line) YAML
console.log(YAML.stringify(data, null, 2));
// name: John Doe
// age: 30
// hobbies:
// - reading
// - coding
```
#### Output Styles
```ts
const arr = [1, 2, 3];
// Flow style (single-line) - default
console.log(YAML.stringify(arr));
// [1,2,3]
// Block style (multi-line) - with indentation
console.log(YAML.stringify(arr, null, 2));
// - 1
// - 2
// - 3
```
#### String Quoting
`YAML.stringify()` automatically quotes strings when necessary:
- Strings that would be parsed as YAML keywords (`true`, `false`, `null`, `yes`, `no`, etc.)
- Strings that would be parsed as numbers
- Strings containing special characters or escape sequences
```ts
const examples = {
keyword: "true", // Will be quoted: "true"
number: "123", // Will be quoted: "123"
text: "hello world", // Won't be quoted: hello world
empty: "", // Will be quoted: ""
};
console.log(YAML.stringify(examples, null, 2));
// keyword: "true"
// number: "123"
// text: hello world
// empty: ""
```
#### Cycles and References
`YAML.stringify()` automatically detects and handles circular references using YAML anchors and aliases:
```ts
const obj = { name: "root" };
obj.self = obj; // Circular reference
const yamlString = YAML.stringify(obj, null, 2);
console.log(yamlString);
// &root
// name: root
// self:
// *root
// Objects with shared references
const shared = { id: 1 };
const data = {
first: shared,
second: shared,
};
console.log(YAML.stringify(data, null, 2));
// first:
// &first
// id: 1
// second:
// *first
```
#### Special Values
```ts
// Special numeric values
console.log(YAML.stringify(Infinity)); // .inf
console.log(YAML.stringify(-Infinity)); // -.inf
console.log(YAML.stringify(NaN)); // .nan
console.log(YAML.stringify(0)); // 0
console.log(YAML.stringify(-0)); // -0
// null and undefined
console.log(YAML.stringify(null)); // null
console.log(YAML.stringify(undefined)); // undefined (returns undefined, not a string)
// Booleans
console.log(YAML.stringify(true)); // true
console.log(YAML.stringify(false)); // false
```
#### Complex Objects
```ts
const config = {
server: {
port: 3000,
host: "localhost",
ssl: {
enabled: true,
cert: "/path/to/cert.pem",
key: "/path/to/key.pem",
},
},
database: {
connections: [
{ name: "primary", host: "db1.example.com" },
{ name: "replica", host: "db2.example.com" },
],
},
features: {
auth: true,
"rate-limit": 100, // Keys with special characters are preserved
},
};
const yamlString = YAML.stringify(config, null, 2);
console.log(yamlString);
// server:
// port: 3000
// host: localhost
// ssl:
// enabled: true
// cert: /path/to/cert.pem
// key: /path/to/key.pem
// database:
// connections:
// - name: primary
// host: db1.example.com
// - name: replica
// host: db2.example.com
// features:
// auth: true
// rate-limit: 100
```
## Module Import
### ES Modules

@@ -117,6 +117,36 @@ Use the `--rerun-each` flag to run each test multiple times. This is useful for
$ bun test --rerun-each 100
```
## Randomize test execution order
Use the `--randomize` flag to run tests in a random order. This helps detect tests that depend on shared state or execution order.
```sh
$ bun test --randomize
```
When using `--randomize`, the seed used for randomization will be displayed in the test summary:
```sh
$ bun test --randomize
# ... test output ...
--seed=12345
2 pass
8 fail
Ran 10 tests across 2 files. [50.00ms]
```
### Reproducible random order with `--seed`
Use the `--seed` flag to specify a seed for the randomization. This allows you to reproduce the same test order when debugging order-dependent failures.
```sh
# Reproduce a previous randomized run
$ bun test --seed 123456
```
The `--seed` flag implies `--randomize`, so you don't need to specify both. Using the same seed value will always produce the same test execution order, making it easier to debug intermittent failures caused by test interdependencies.
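Why the same seed reproduces the same order can be sketched with a seeded PRNG driving a Fisher–Yates shuffle (an illustration of the concept only; the `mulberry32` generator here is an assumption, not Bun's actual PRNG):

```ts
// Illustrative only: a tiny seeded PRNG (mulberry32) plus Fisher–Yates.
// The point is that a fixed seed always yields the same permutation.
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function shuffle<T>(items: T[], seed: number): T[] {
  const rand = mulberry32(seed);
  const out = [...items];
  for (let i = out.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1)); // pick from the unshuffled prefix
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}

const files = ["a.test.ts", "b.test.ts", "c.test.ts", "d.test.ts"];
const first = shuffle(files, 123456);
const second = shuffle(files, 123456);
console.log(first.join(",") === second.join(",")); // true: same seed, same order
```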
## Bail out with `--bail`
Use the `--bail` flag to abort the test run early after a pre-determined number of test failures. By default Bun will run all tests and report all failures, but sometimes in CI environments it's preferable to terminate earlier to reduce CPU usage.

@@ -359,7 +359,7 @@ export default {
page("api/file-io", "File I/O", {
description: `Read and write files fast with Bun's heavily optimized file system API.`,
}), // "`Bun.write`"),
page("api/redis", "Redis client", {
page("api/redis", "Redis Client", {
description: `Bun provides a fast, native Redis client with automatic command pipelining for better performance.`,
}),
page("api/import-meta", "import.meta", {

@@ -232,6 +232,23 @@ Set path where coverage reports will be saved. Please notice, that it works only
coverageDir = "path/to/somewhere" # default "coverage"
```
### `test.concurrentTestGlob`
Specify a glob pattern to automatically run matching test files with concurrent test execution enabled. Test files matching this pattern will behave as if the `--concurrent` flag was passed, running all tests within those files concurrently.
```toml
[test]
concurrentTestGlob = "**/concurrent-*.test.ts"
```
This is useful for:
- Gradually migrating test suites to concurrent execution
- Running integration tests concurrently while keeping unit tests sequential
- Separating fast concurrent tests from tests that require sequential execution
The `--concurrent` CLI flag will override this setting when specified.
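Conceptually, the glob is just a predicate over test file paths. A rough sketch of how such a pattern selects files (the `globToRegExp` helper is hypothetical; Bun uses its own glob engine):

```ts
// Hypothetical sketch of glob matching; not Bun's actual implementation.
function globToRegExp(glob: string): RegExp {
  let re = "";
  let i = 0;
  while (i < glob.length) {
    if (glob.startsWith("**/", i)) {
      re += "(?:.*/)?"; // `**/` matches any number of leading directories
      i += 3;
    } else if (glob[i] === "*") {
      re += "[^/]*"; // `*` matches within a single path segment
      i += 1;
    } else {
      re += glob[i]!.replace(/[.+^${}()|[\]\\]/, "\\$&"); // escape regex specials
      i += 1;
    }
  }
  return new RegExp(`^${re}$`);
}

const concurrent = globToRegExp("**/concurrent-*.test.ts");
// Files matching the pattern behave as if `--concurrent` was passed:
console.log(concurrent.test("tests/integration/concurrent-api.test.ts")); // true
console.log(concurrent.test("tests/unit/math.test.ts")); // false
```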
## Package manager
Package management is a complex issue; to support a range of use cases, the behavior of `bun install` can be configured under the `[install]` section.

@@ -46,6 +46,25 @@ smol = true # Reduce memory usage during test runs
This is equivalent to using the `--smol` flag on the command line.
### Test execution
#### concurrentTestGlob
Automatically run test files matching a glob pattern with concurrent test execution enabled. This is useful for gradually migrating test suites to concurrent execution or for running specific test types concurrently.
```toml
[test]
concurrentTestGlob = "**/concurrent-*.test.ts" # Run files matching this pattern concurrently
```
Test files matching this pattern will behave as if the `--concurrent` flag was passed, running all tests within those files concurrently. This allows you to:
- Gradually migrate your test suite to concurrent execution
- Run integration tests concurrently while keeping unit tests sequential
- Separate fast concurrent tests from tests that require sequential execution
The `--concurrent` CLI flag will override this setting when specified, forcing all tests to run concurrently regardless of the glob pattern.
### Coverage options
In addition to the options documented in the [coverage documentation](./coverage.md), the following options are available:

@@ -0,0 +1,132 @@
# Concurrent Test Glob Example
This example demonstrates how to use the `concurrentTestGlob` option to selectively run tests concurrently based on file naming patterns.
## Project Structure
```text
my-project/
├── bunfig.toml
├── tests/
│ ├── unit/
│ │ ├── math.test.ts # Sequential
│ │ └── utils.test.ts # Sequential
│ └── integration/
│ ├── concurrent-api.test.ts # Concurrent
│ └── concurrent-database.test.ts # Concurrent
```
## Configuration
### bunfig.toml
```toml
[test]
# Run all test files with "concurrent-" prefix concurrently
concurrentTestGlob = "**/concurrent-*.test.ts"
```
## Test Files
### Unit Test (Sequential)
`tests/unit/math.test.ts`
```typescript
import { test, expect } from "bun:test";
// These tests run sequentially by default
// Good for tests that share state or have specific ordering requirements
let sharedState = 0;
test("addition", () => {
sharedState = 5 + 3;
expect(sharedState).toBe(8);
});
test("uses previous state", () => {
// This test depends on the previous test's state
expect(sharedState).toBe(8);
});
```
### Integration Test (Concurrent)
`tests/integration/concurrent-api.test.ts`
```typescript
import { test, expect } from "bun:test";
// These tests automatically run concurrently due to filename matching the glob pattern.
// Using test() is equivalent to test.concurrent() when the file matches concurrentTestGlob.
// Each test is independent and can run in parallel.
test("fetch user data", async () => {
const response = await fetch("/api/user/1");
expect(response.ok).toBe(true);
});
test("fetch posts", async () => {
const response = await fetch("/api/posts");
expect(response.ok).toBe(true);
});
test("fetch comments", async () => {
const response = await fetch("/api/comments");
expect(response.ok).toBe(true);
});
```
## Running Tests
```bash
# Run all tests - concurrent-*.test.ts files will run concurrently
bun test
# Override: Force ALL tests to run concurrently
# Note: This overrides bunfig.toml and runs all tests concurrently, regardless of glob
bun test --concurrent
# Run only unit tests (sequential)
bun test tests/unit
# Run only integration tests (concurrent due to glob pattern)
bun test tests/integration
```
## Benefits
1. **Gradual Migration**: Migrate to concurrent tests file by file by renaming them
2. **Clear Organization**: File naming convention indicates execution mode
3. **Performance**: Integration tests run faster in parallel
4. **Safety**: Unit tests remain sequential where needed
5. **Flexibility**: Easy to change execution mode by renaming files
## Migration Strategy
To migrate existing tests to concurrent execution:
1. Start with independent integration tests
2. Rename files to match the glob pattern: `mv api.test.ts concurrent-api.test.ts`
3. Verify tests still pass
4. Monitor for race conditions or shared state issues
5. Continue migrating stable tests incrementally
## Tips
- Use descriptive prefixes: `concurrent-`, `parallel-`, `async-`
- Keep related sequential tests together
- Document why certain tests must remain sequential
- Use `test.concurrent()` for fine-grained control in sequential files
(In files matched by `concurrentTestGlob`, plain `test()` already runs concurrently)
- Consider separate globs for different test types:
```toml
[test]
# Multiple patterns for different test categories
concurrentTestGlob = [
"**/integration/*.test.ts",
"**/e2e/*.test.ts",
"**/concurrent-*.test.ts"
]
```

@@ -33,7 +33,7 @@
"bd:v": "(bun run --silent build:debug &> /tmp/bun.debug.build.log || (cat /tmp/bun.debug.build.log && rm -rf /tmp/bun.debug.build.log && exit 1)) && rm -f /tmp/bun.debug.build.log && ./build/debug/bun-debug",
"bd": "BUN_DEBUG_QUIET_LOGS=1 bun --silent bd:v",
"build:debug": "export COMSPEC=\"C:\\Windows\\System32\\cmd.exe\" && bun ./scripts/build.mjs -GNinja -DCMAKE_BUILD_TYPE=Debug -B build/debug --log-level=NOTICE",
"build:debug:asan": "bun ./scripts/build.mjs -GNinja -DCMAKE_BUILD_TYPE=Debug -DENABLE_ASAN=ON -B build/debug-asan --log-level=NOTICE",
"build:debug:noasan": "export COMSPEC=\"C:\\Windows\\System32\\cmd.exe\" && bun ./scripts/build.mjs -GNinja -DCMAKE_BUILD_TYPE=Debug -DENABLE_ASAN=OFF -B build/debug --log-level=NOTICE",
"build:release": "bun ./scripts/build.mjs -GNinja -DCMAKE_BUILD_TYPE=Release -B build/release",
"build:ci": "bun ./scripts/build.mjs -GNinja -DCMAKE_BUILD_TYPE=Release -DCMAKE_VERBOSE_MAKEFILE=ON -DCI=true -B build/release-ci --verbose --fresh",
"build:assert": "bun ./scripts/build.mjs -GNinja -DCMAKE_BUILD_TYPE=RelWithDebInfo -DENABLE_ASSERTIONS=ON -DENABLE_LOGS=ON -B build/release-assert",
@@ -84,6 +84,11 @@
"node:test": "node ./scripts/runner.node.mjs --quiet --exec-path=$npm_execpath --node-tests ",
"node:test:cp": "bun ./scripts/fetch-node-test.ts ",
"clean:zig": "rm -rf build/debug/cache/zig build/debug/CMakeCache.txt 'build/debug/*.o' .zig-cache zig-out || true",
"machine:linux:ubuntu": "./scripts/machine.mjs ssh --cloud=aws --arch=x64 --instance-type c7i.2xlarge --os=linux --distro=ubuntu --release=25.04",
"machine:linux:debian": "./scripts/machine.mjs ssh --cloud=aws --arch=x64 --instance-type c7i.2xlarge --os=linux --distro=debian --release=12",
"machine:linux:alpine": "./scripts/machine.mjs ssh --cloud=aws --arch=x64 --instance-type c7i.2xlarge --os=linux --distro=alpine --release=3.21",
"machine:linux:amazonlinux": "./scripts/machine.mjs ssh --cloud=aws --arch=x64 --instance-type c7i.2xlarge --os=linux --distro=amazonlinux --release=2023",
"machine:windows:2019": "./scripts/machine.mjs ssh --cloud=aws --arch=x64 --instance-type c7i.2xlarge --os=windows --release=2019",
"sync-webkit-source": "bun ./scripts/sync-webkit-source.ts"
}
}

@@ -62,7 +62,7 @@ export const platforms: Platform[] = [
},
{
os: "linux",
arch: "aarch64",
arch: "arm64",
abi: "musl",
bin: "bun-linux-aarch64-musl",
exe: "bin/bun",

@@ -636,7 +636,7 @@ declare module "bun" {
* import { YAML } from "bun";
*
* console.log(YAML.parse("123")) // 123
* console.log(YAML.parse("123")) // null
* console.log(YAML.parse("null")) // null
* console.log(YAML.parse("false")) // false
* console.log(YAML.parse("abc")) // "abc"
* console.log(YAML.parse("- abc")) // [ "abc" ]
@@ -653,7 +653,10 @@ declare module "bun" {
*
* @param input The JavaScript value to stringify.
* @param replacer Currently not supported.
* @param space A number for how many spaces each level of indentation gets, or a string used as indentation. The number is clamped between 0 and 10, and the first 10 characters of the string are used.
* @param space A number for how many spaces each level of indentation gets, or a string used as indentation.
* Without this parameter, outputs flow-style (single-line) YAML.
* With this parameter, outputs block-style (multi-line) YAML.
* The number is clamped between 0 and 10, and the first 10 characters of the string are used.
* @returns A string containing the YAML document.
*
* @example
@@ -661,19 +664,24 @@ declare module "bun" {
* import { YAML } from "bun";
*
* const input = {
* abc: "def"
* abc: "def",
* num: 123
* };
*
* // Without space - flow style (single-line)
* console.log(YAML.stringify(input));
* // # output
* // {abc: def,num: 123}
*
* // With space - block style (multi-line)
* console.log(YAML.stringify(input, null, 2));
* // abc: def
* // num: 123
*
* const cycle = {};
* cycle.obj = cycle;
* console.log(YAML.stringify(cycle));
* // # output
* // &root
* // obj:
* // *root
* console.log(YAML.stringify(cycle, null, 2));
* // &1
* // obj: *1
*/
export function stringify(input: unknown, replacer?: undefined | null, space?: string | number): string;
}
@@ -5039,6 +5047,7 @@ declare module "bun" {
type SupportedCryptoAlgorithms =
| "blake2b256"
| "blake2b512"
| "blake2s256"
| "md4"
| "md5"
| "ripemd160"

@@ -52,21 +52,25 @@ declare module "bun" {
export namespace RedisClient {
type KeyLike = string | ArrayBufferView | Blob;
type StringPubSubListener = (message: string, channel: string) => void;
// Buffer subscriptions are not yet implemented
// type BufferPubSubListener = (message: Uint8Array<ArrayBuffer>, channel: string) => void;
}
export class RedisClient {
/**
* Creates a new Redis client
* @param url URL to connect to, defaults to process.env.VALKEY_URL, process.env.REDIS_URL, or "valkey://localhost:6379"
*
* @param url URL to connect to, defaults to `process.env.VALKEY_URL`,
* `process.env.REDIS_URL`, or `"valkey://localhost:6379"`
* @param options Additional options
*
* @example
* ```ts
* const valkey = new RedisClient();
*
* await valkey.set("hello", "world");
*
* console.log(await valkey.get("hello"));
* const redis = new RedisClient();
* await redis.set("hello", "world");
* console.log(await redis.get("hello"));
* ```
*/
constructor(url?: string, options?: RedisOptions);
@@ -88,12 +92,14 @@ declare module "bun" {
/**
* Callback fired when the client disconnects from the Redis server
*
* @param error The error that caused the disconnection
*/
onclose: ((this: RedisClient, error: Error) => void) | null;
/**
* Connect to the Redis server
*
* @returns A promise that resolves when connected
*/
connect(): Promise<void>;
@@ -152,10 +158,12 @@ declare module "bun" {
set(key: RedisClient.KeyLike, value: RedisClient.KeyLike, px: "PX", milliseconds: number): Promise<"OK">;
/**
* Set key to hold the string value with expiration at a specific Unix timestamp
* Set key to hold the string value with expiration at a specific Unix
* timestamp
* @param key The key to set
* @param value The value to set
* @param exat Set the specified Unix time at which the key will expire, in seconds
* @param exat Set the specified Unix time at which the key will expire, in
* seconds
* @returns Promise that resolves with "OK" on success
*/
set(key: RedisClient.KeyLike, value: RedisClient.KeyLike, exat: "EXAT", timestampSeconds: number): Promise<"OK">;
@@ -179,7 +187,8 @@ declare module "bun" {
* @param key The key to set
* @param value The value to set
* @param nx Only set the key if it does not already exist
* @returns Promise that resolves with "OK" on success, or null if the key already exists
* @returns Promise that resolves with "OK" on success, or null if the key
* already exists
*/
set(key: RedisClient.KeyLike, value: RedisClient.KeyLike, nx: "NX"): Promise<"OK" | null>;
@@ -188,7 +197,8 @@ declare module "bun" {
* @param key The key to set
* @param value The value to set
* @param xx Only set the key if it already exists
* @returns Promise that resolves with "OK" on success, or null if the key does not exist
* @returns Promise that resolves with "OK" on success, or null if the key
* does not exist
*/
set(key: RedisClient.KeyLike, value: RedisClient.KeyLike, xx: "XX"): Promise<"OK" | null>;
@@ -196,8 +206,10 @@ declare module "bun" {
* Set key to hold the string value and return the old value
* @param key The key to set
* @param value The value to set
* @param get Return the old string stored at key, or null if key did not exist
* @returns Promise that resolves with the old value, or null if key did not exist
* @param get Return the old string stored at key, or null if key did not
* exist
* @returns Promise that resolves with the old value, or null if key did not
* exist
*/
set(key: RedisClient.KeyLike, value: RedisClient.KeyLike, get: "GET"): Promise<string | null>;
@@ -243,7 +255,8 @@ declare module "bun" {
/**
* Determine if a key exists
* @param key The key to check
* @returns Promise that resolves with true if the key exists, false otherwise
* @returns Promise that resolves with true if the key exists, false
* otherwise
*/
exists(key: RedisClient.KeyLike): Promise<boolean>;
@@ -258,7 +271,8 @@ declare module "bun" {
/**
* Get the time to live for a key in seconds
* @param key The key to get the TTL for
* @returns Promise that resolves with the TTL, -1 if no expiry, or -2 if key doesn't exist
* @returns Promise that resolves with the TTL, -1 if no expiry, or -2 if
* key doesn't exist
*/
ttl(key: RedisClient.KeyLike): Promise<number>;
@@ -290,7 +304,8 @@ declare module "bun" {
* Check if a value is a member of a set
* @param key The set key
* @param member The member to check
* @returns Promise that resolves with true if the member exists, false otherwise
* @returns Promise that resolves with true if the member exists, false
* otherwise
*/
sismember(key: RedisClient.KeyLike, member: string): Promise<boolean>;
@@ -298,7 +313,8 @@ declare module "bun" {
* Add a member to a set
* @param key The set key
* @param member The member to add
* @returns Promise that resolves with 1 if the member was added, 0 if it already existed
* @returns Promise that resolves with 1 if the member was added, 0 if it
* already existed
*/
sadd(key: RedisClient.KeyLike, member: string): Promise<number>;
@@ -306,7 +322,8 @@ declare module "bun" {
* Remove a member from a set
* @param key The set key
* @param member The member to remove
* @returns Promise that resolves with 1 if the member was removed, 0 if it didn't exist
* @returns Promise that resolves with 1 if the member was removed, 0 if it
* didn't exist
*/
srem(key: RedisClient.KeyLike, member: string): Promise<number>;
@@ -320,14 +337,16 @@ declare module "bun" {
/**
* Get a random member from a set
* @param key The set key
* @returns Promise that resolves with a random member, or null if the set is empty
* @returns Promise that resolves with a random member, or null if the set
* is empty
*/
srandmember(key: RedisClient.KeyLike): Promise<string | null>;
/**
* Remove and return a random member from a set
* @param key The set key
* @returns Promise that resolves with the removed member, or null if the set is empty
* @returns Promise that resolves with the removed member, or null if the
* set is empty
*/
spop(key: RedisClient.KeyLike): Promise<string | null>;
@@ -394,28 +413,32 @@ declare module "bun" {
/**
* Remove and get the first element in a list
* @param key The list key
* @returns Promise that resolves with the first element, or null if the list is empty
* @returns Promise that resolves with the first element, or null if the
* list is empty
*/
lpop(key: RedisClient.KeyLike): Promise<string | null>;
/**
* Remove the expiration from a key
* @param key The key to persist
* @returns Promise that resolves with 1 if the timeout was removed, 0 if the key doesn't exist or has no timeout
* @returns Promise that resolves with 1 if the timeout was removed, 0 if
* the key doesn't exist or has no timeout
*/
persist(key: RedisClient.KeyLike): Promise<number>;
/**
* Get the expiration time of a key as a UNIX timestamp in milliseconds
* @param key The key to check
* @returns Promise that resolves with the timestamp, or -1 if the key has no expiration, or -2 if the key doesn't exist
* @returns Promise that resolves with the timestamp, or -1 if the key has
* no expiration, or -2 if the key doesn't exist
*/
pexpiretime(key: RedisClient.KeyLike): Promise<number>;
/**
* Get the time to live for a key in milliseconds
* @param key The key to check
* @returns Promise that resolves with the TTL in milliseconds, or -1 if the key has no expiration, or -2 if the key doesn't exist
* @returns Promise that resolves with the TTL in milliseconds, or -1 if the
* key has no expiration, or -2 if the key doesn't exist
*/
pttl(key: RedisClient.KeyLike): Promise<number>;
@@ -429,42 +452,48 @@ declare module "bun" {
/**
* Get the number of members in a set
* @param key The set key
* @returns Promise that resolves with the cardinality (number of elements) of the set
* @returns Promise that resolves with the cardinality (number of elements)
* of the set
*/
scard(key: RedisClient.KeyLike): Promise<number>;
/**
* Get the length of the value stored in a key
* @param key The key to check
* @returns Promise that resolves with the length of the string value, or 0 if the key doesn't exist
* @returns Promise that resolves with the length of the string value, or 0
* if the key doesn't exist
*/
strlen(key: RedisClient.KeyLike): Promise<number>;
/**
* Get the number of members in a sorted set
* @param key The sorted set key
* @returns Promise that resolves with the cardinality (number of elements) of the sorted set
* @returns Promise that resolves with the cardinality (number of elements)
* of the sorted set
*/
zcard(key: RedisClient.KeyLike): Promise<number>;
/**
* Remove and return members with the highest scores in a sorted set
* @param key The sorted set key
* @returns Promise that resolves with the removed member and its score, or null if the set is empty
* @returns Promise that resolves with the removed member and its score, or
* null if the set is empty
*/
zpopmax(key: RedisClient.KeyLike): Promise<string | null>;
/**
* Remove and return members with the lowest scores in a sorted set
* @param key The sorted set key
* @returns Promise that resolves with the removed member and its score, or null if the set is empty
* @returns Promise that resolves with the removed member and its score, or
* null if the set is empty
*/
zpopmin(key: RedisClient.KeyLike): Promise<string | null>;
/**
* Get one or multiple random members from a sorted set
* @param key The sorted set key
* @returns Promise that resolves with a random member, or null if the set is empty
* @returns Promise that resolves with a random member, or null if the set
* is empty
*/
zrandmember(key: RedisClient.KeyLike): Promise<string | null>;
@@ -472,7 +501,8 @@ declare module "bun" {
* Append a value to a key
* @param key The key to append to
* @param value The value to append
* @returns Promise that resolves with the length of the string after the append operation
* @returns Promise that resolves with the length of the string after the
* append operation
*/
append(key: RedisClient.KeyLike, value: RedisClient.KeyLike): Promise<number>;
@@ -480,7 +510,8 @@ declare module "bun" {
* Set the value of a key and return its old value
* @param key The key to set
* @param value The value to set
* @returns Promise that resolves with the old value, or null if the key didn't exist
* @returns Promise that resolves with the old value, or null if the key
* didn't exist
*/
getset(key: RedisClient.KeyLike, value: RedisClient.KeyLike): Promise<string | null>;
@@ -488,7 +519,8 @@ declare module "bun" {
* Prepend one or multiple values to a list
* @param key The list key
* @param value The value to prepend
* @returns Promise that resolves with the length of the list after the push operation
* @returns Promise that resolves with the length of the list after the push
* operation
*/
lpush(key: RedisClient.KeyLike, value: RedisClient.KeyLike): Promise<number>;
@@ -496,7 +528,8 @@ declare module "bun" {
* Prepend a value to a list, only if the list exists
* @param key The list key
* @param value The value to prepend
* @returns Promise that resolves with the length of the list after the push operation, or 0 if the list doesn't exist
* @returns Promise that resolves with the length of the list after the push
* operation, or 0 if the list doesn't exist
*/
lpushx(key: RedisClient.KeyLike, value: RedisClient.KeyLike): Promise<number>;
@@ -504,7 +537,8 @@ declare module "bun" {
* Add one or more members to a HyperLogLog
* @param key The HyperLogLog key
* @param element The element to add
* @returns Promise that resolves with 1 if the HyperLogLog was altered, 0 otherwise
* @returns Promise that resolves with 1 if the HyperLogLog was altered, 0
* otherwise
*/
pfadd(key: RedisClient.KeyLike, element: string): Promise<number>;
@@ -512,7 +546,8 @@ declare module "bun" {
* Append one or multiple values to a list
* @param key The list key
* @param value The value to append
* @returns Promise that resolves with the length of the list after the push operation
* @returns Promise that resolves with the length of the list after the push
* operation
*/
rpush(key: RedisClient.KeyLike, value: RedisClient.KeyLike): Promise<number>;
@@ -520,7 +555,8 @@ declare module "bun" {
* Append a value to a list, only if the list exists
* @param key The list key
* @param value The value to append
* @returns Promise that resolves with the length of the list after the push operation, or 0 if the list doesn't exist
* @returns Promise that resolves with the length of the list after the push
* operation, or 0 if the list doesn't exist
*/
rpushx(key: RedisClient.KeyLike, value: RedisClient.KeyLike): Promise<number>;
@@ -528,7 +564,8 @@ declare module "bun" {
* Set the value of a key, only if the key does not exist
* @param key The key to set
* @param value The value to set
* @returns Promise that resolves with 1 if the key was set, 0 if the key was not set
* @returns Promise that resolves with 1 if the key was set, 0 if the key
* was not set
*/
setnx(key: RedisClient.KeyLike, value: RedisClient.KeyLike): Promise<number>;
@@ -536,14 +573,16 @@ declare module "bun" {
* Get the score associated with the given member in a sorted set
* @param key The sorted set key
* @param member The member to get the score for
* @returns Promise that resolves with the score of the member as a string, or null if the member or key doesn't exist
* @returns Promise that resolves with the score of the member as a string,
* or null if the member or key doesn't exist
*/
zscore(key: RedisClient.KeyLike, member: string): Promise<string | null>;
/**
* Get the values of all specified keys
* @param keys The keys to get
* @returns Promise that resolves with an array of values, with null for keys that don't exist
* @returns Promise that resolves with an array of values, with null for
* keys that don't exist
*/
mget(...keys: RedisClient.KeyLike[]): Promise<(string | null)[]>;
@@ -557,37 +596,46 @@ declare module "bun" {
/**
* Return a serialized version of the value stored at the specified key
* @param key The key to dump
* @returns Promise that resolves with the serialized value, or null if the key doesn't exist
* @returns Promise that resolves with the serialized value, or null if the
* key doesn't exist
*/
dump(key: RedisClient.KeyLike): Promise<string | null>;
/**
* Get the expiration time of a key as a UNIX timestamp in seconds
*
* @param key The key to check
* @returns Promise that resolves with the timestamp, or -1 if the key has no expiration, or -2 if the key doesn't exist
* @returns Promise that resolves with the timestamp, or -1 if the key has
* no expiration, or -2 if the key doesn't exist
*/
expiretime(key: RedisClient.KeyLike): Promise<number>;
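The sentinel values documented above (-2 for a missing key, -1 for no expiration) are easy to mishandle. A small decoder sketch, where `t` stands in for the number `await client.expiretime(key)` would resolve to (the helper name is illustrative, not part of the API):

```typescript
// Decode expiretime's return value: -2 → key missing, -1 → no expiry,
// otherwise a Unix timestamp in seconds.
function describeExpiry(t: number): string {
  if (t === -2) return "missing";
  if (t === -1) return "no expiry";
  return `expires at ${new Date(t * 1000).toISOString()}`;
}

console.log(describeExpiry(-2)); // "missing"
console.log(describeExpiry(-1)); // "no expiry"
console.log(describeExpiry(0)); // "expires at 1970-01-01T00:00:00.000Z"
```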
/**
* Get the value of a key and delete the key
*
* @param key The key to get and delete
* @returns Promise that resolves with the value of the key, or null if the key doesn't exist
* @returns Promise that resolves with the value of the key, or null if the
* key doesn't exist
*/
getdel(key: RedisClient.KeyLike): Promise<string | null>;
/**
* Get the value of a key and optionally set its expiration
*
* @param key The key to get
* @returns Promise that resolves with the value of the key, or null if the key doesn't exist
* @returns Promise that resolves with the value of the key, or null if the
* key doesn't exist
*/
getex(key: RedisClient.KeyLike): Promise<string | null>;
/**
* Get the value of a key and set its expiration in seconds
*
* @param key The key to get
* @param ex Set the specified expire time, in seconds
* @param seconds The number of seconds until expiration
* @returns Promise that resolves with the value of the key, or null if the key doesn't exist
* @returns Promise that resolves with the value of the key, or null if the
* key doesn't exist
*/
getex(key: RedisClient.KeyLike, ex: "EX", seconds: number): Promise<string | null>;
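The difference between the relative and absolute expiration overloads is just timestamp arithmetic. A hedged sketch (the commented-out `client.getex` calls assume a connected client; only the arithmetic below actually runs):

```typescript
// EX takes a relative TTL in seconds; EXAT takes an absolute Unix
// timestamp in seconds. Both express the same deadline here.
const ttlSeconds = 30;
const exatSeconds = Math.floor(Date.now() / 1000) + ttlSeconds;

// await client.getex("session", "EX", ttlSeconds);    // expire in 30 s
// await client.getex("session", "EXAT", exatSeconds); // same deadline

console.log(exatSeconds > Math.floor(Date.now() / 1000)); // true
```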
@@ -602,6 +650,7 @@ declare module "bun" {
/**
* Get the value of a key and set its expiration at a specific Unix timestamp in seconds
*
* @param key The key to get
* @param exat Set the specified Unix time at which the key will expire, in seconds
* @param timestampSeconds The Unix timestamp in seconds
@@ -611,6 +660,7 @@ declare module "bun" {
/**
* Get the value of a key and set its expiration at a specific Unix timestamp in milliseconds
*
* @param key The key to get
* @param pxat Set the specified Unix time at which the key will expire, in milliseconds
* @param timestampMilliseconds The Unix timestamp in milliseconds
@@ -620,6 +670,7 @@ declare module "bun" {
/**
* Get the value of a key and remove its expiration
*
* @param key The key to get
* @param persist Remove the expiration from the key
* @returns Promise that resolves with the value of the key, or null if the key doesn't exist
@@ -634,10 +685,133 @@ declare module "bun" {
/**
* Ping the server with a message
*
* @param message The message to send to the server
* @returns Promise that resolves with the message if the server is reachable, or rejects if the server is not reachable
*/
ping(message: RedisClient.KeyLike): Promise<string>;
/**
* Publish a message to a Redis channel.
*
* @param channel The channel to publish to.
* @param message The message to publish.
*
* @returns The number of clients that received the message. Note that in a
* cluster this returns the total number of clients in the same node.
*/
publish(channel: string, message: string): Promise<number>;
/**
* Subscribe to a Redis channel.
*
* Subscribing disables automatic pipelining, so all commands are sent
* immediately.
*
* Subscribing moves the channel to a dedicated subscription state which
* prevents most other commands from being executed until unsubscribed. Only
* {@link ping `.ping()`}, {@link subscribe `.subscribe()`}, and
* {@link unsubscribe `.unsubscribe()`} may be invoked while subscribed to a
* channel.
*
* @param channel The channel to subscribe to.
* @param listener The listener to call when a message is received on the
* channel. The listener will receive the message as the first argument and
* the channel as the second argument.
*
* @example
* ```ts
* await client.subscribe("my-channel", (message, channel) => {
* console.log(`Received message on ${channel}: ${message}`);
* });
* ```
*/
subscribe(channel: string, listener: RedisClient.StringPubSubListener): Promise<number>;
/**
* Subscribe to multiple Redis channels.
*
* Subscribing disables automatic pipelining, so all commands are sent
* immediately.
*
* Subscribing moves the channels to a dedicated subscription state in which
* only a limited set of commands can be executed.
*
* @param channels An array of channels to subscribe to.
* @param listener The listener to call when a message is received on any of
* the subscribed channels. The listener will receive the message as the
* first argument and the channel as the second argument.
*/
subscribe(channels: string[], listener: RedisClient.StringPubSubListener): Promise<number>;
/**
* Unsubscribe from a single Redis channel.
*
* If no subscribed channels remain, the client automatically re-enables
* pipelining if it was previously enabled.
*
* Once all channels have been unsubscribed, the client leaves the
* subscription state. For further details on the subscription state, see
* {@link subscribe `.subscribe()`}.
*
* @param channel The channel to unsubscribe from.
*/
unsubscribe(channel: string): Promise<void>;
/**
* Remove a listener from a given Redis channel.
*
* If no subscribed channels remain, the client automatically re-enables
* pipelining if it was previously enabled.
*
* Once all channels have been unsubscribed, the client leaves the
* subscription state. For further details on the subscription state, see
* {@link subscribe `.subscribe()`}.
*
* @param channel The channel to unsubscribe from.
* @param listener The listener to remove. This is tested against
* referential equality so you must pass the exact same listener instance as
* when subscribing.
*/
unsubscribe(channel: string, listener: RedisClient.StringPubSubListener): Promise<void>;
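The referential-equality requirement above trips people up with inline arrow functions. A runnable illustration of why the exact listener instance must be kept (plain functions, no client needed):

```typescript
// Removal is by reference: two structurally identical listeners are
// still distinct objects, so an inline re-declaration will not match.
const a = (message: string, channel: string) => `${channel}: ${message}`;
const b = (message: string, channel: string) => `${channel}: ${message}`;

console.log(a === b); // false — passing `b` would not remove `a`

const kept = a;
console.log(kept === a); // true — store and reuse the original reference
```

Practically: keep the listener in a variable at `subscribe` time and pass that same variable to `unsubscribe`.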
/**
* Unsubscribe from all registered Redis channels.
*
* The client will automatically re-enable pipelining if it was previously
* enabled.
*
* Since this removes every subscription, the client also leaves the
* subscription state. For further details on the subscription state, see
* {@link subscribe `.subscribe()`}.
*/
unsubscribe(): Promise<void>;
/**
* Unsubscribe from multiple Redis channels.
*
* If no subscribed channels remain, the client automatically re-enables
* pipelining if it was previously enabled.
*
* Once all channels have been unsubscribed, the client leaves the
* subscription state. For further details on the subscription state, see
* {@link subscribe `.subscribe()`}.
*
* @param channels An array of channels to unsubscribe from.
unsubscribe(channels: string[]): Promise<void>;
/**
* Create a new RedisClient instance with the same configuration as the
* current instance.
*
* This will open up a new connection to the Redis server.
*/
duplicate(): Promise<RedisClient>;
}
/**

View File

@@ -230,6 +230,11 @@ declare module "bun:test" {
* Marks this group of tests to be executed concurrently.
*/
concurrent: Describe<T>;
/**
* Marks this group of tests to be executed serially (one after another),
* even when the --concurrent flag is used.
*/
serial: Describe<T>;
/**
* Runs this group of tests, only if `condition` is true.
*
@@ -459,6 +464,11 @@ declare module "bun:test" {
* Runs the test concurrently with other concurrent tests.
*/
concurrent: Test<T>;
/**
* Forces the test to run serially (not in parallel),
* even when the --concurrent flag is used.
*/
serial: Test<T>;
/**
* Runs this test, if `condition` is true.
*
@@ -491,6 +501,13 @@ declare module "bun:test" {
* @param condition if the test should run concurrently
*/
concurrentIf(condition: boolean): Test<T>;
/**
* Forces the test to run serially (not in parallel), if `condition` is true.
* This applies even when the --concurrent flag is used.
*
* @param condition if the test should run serially
*/
serialIf(condition: boolean): Test<T>;
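Why order-dependent tests want `serial` can be shown with plain promises, no test runner required: under concurrent scheduling, completion order follows task duration, while awaiting serially preserves declaration order.

```typescript
// Simulate "concurrent" vs "serial" scheduling of timed steps.
const log: string[] = [];
const step = (name: string, ms: number) =>
  new Promise<void>((resolve) =>
    setTimeout(() => { log.push(name); resolve(); }, ms));

// concurrent: both start at once — the shorter task finishes first
await Promise.all([step("slow", 30), step("fast", 5)]);
// serial: awaited one after another — declaration order is preserved
await step("first", 20);
await step("second", 5);

console.log(log); // ["fast", "slow", "first", "second"]
```

Tests that share mutable state (a database, a port, a file) are the typical candidates for `serial` or `serialIf`.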
/**
* Returns a function that runs for each item in `table`.
*

View File

@@ -17,7 +17,6 @@
/* Todo: this file should lie in networking/bsd.c */
// NOLINTNEXTLINE(bugprone-reserved-identifier)
#define __APPLE_USE_RFC_3542
#include "libusockets.h"
@@ -593,7 +592,6 @@ LIBUS_SOCKET_DESCRIPTOR bsd_create_socket(int domain, int type, int protocol, in
*err = 0;
}
// NOLINTNEXTLINE(cppcoreguidelines-init-variables)
LIBUS_SOCKET_DESCRIPTOR created_fd;
#if defined(SOCK_CLOEXEC) && defined(SOCK_NONBLOCK)
const int flags = SOCK_CLOEXEC | SOCK_NONBLOCK;
@@ -697,7 +695,6 @@ int bsd_addr_get_port(struct bsd_addr_t *addr) {
// called by dispatch_ready_poll
LIBUS_SOCKET_DESCRIPTOR bsd_accept_socket(LIBUS_SOCKET_DESCRIPTOR fd, struct bsd_addr_t *addr) {
// NOLINTNEXTLINE(cppcoreguidelines-init-variables)
LIBUS_SOCKET_DESCRIPTOR accepted_fd;
while (1) {
@@ -857,7 +854,7 @@ int bsd_would_block() {
}
static int us_internal_bind_and_listen(LIBUS_SOCKET_DESCRIPTOR listenFd, struct sockaddr *listenAddr, socklen_t listenAddrLength, int backlog, int* error) {
int result = 0;
int result;
do
result = bind(listenFd, listenAddr, listenAddrLength);
while (IS_EINTR(result));
@@ -975,7 +972,7 @@ inline __attribute__((always_inline)) LIBUS_SOCKET_DESCRIPTOR bsd_bind_listen_fd
// return LIBUS_SOCKET_ERROR or the fd that represents listen socket
// listen both on ipv6 and ipv4
LIBUS_SOCKET_DESCRIPTOR bsd_create_listen_socket(const char *host, int port, int options, int* error) {
struct addrinfo hints, *result = NULL;
struct addrinfo hints, *result;
memset(&hints, 0, sizeof(struct addrinfo));
hints.ai_flags = AI_PASSIVE;
@@ -990,7 +987,7 @@ LIBUS_SOCKET_DESCRIPTOR bsd_create_listen_socket(const char *host, int port, int
}
LIBUS_SOCKET_DESCRIPTOR listenFd = LIBUS_SOCKET_ERROR;
struct addrinfo* listenAddr = NULL;
struct addrinfo *listenAddr;
for (struct addrinfo *a = result; a != NULL; a = a->ai_next) {
if (a->ai_family == AF_INET6) {
listenFd = bsd_create_socket(a->ai_family, a->ai_socktype, a->ai_protocol, NULL);
@@ -1176,7 +1173,7 @@ LIBUS_SOCKET_DESCRIPTOR bsd_create_udp_socket(const char *host, int port, int op
*err = 0;
}
struct addrinfo hints, *result = NULL;
struct addrinfo hints, *result;
memset(&hints, 0, sizeof(struct addrinfo));
hints.ai_flags = AI_PASSIVE;
@@ -1286,7 +1283,7 @@ LIBUS_SOCKET_DESCRIPTOR bsd_create_udp_socket(const char *host, int port, int op
}
int bsd_connect_udp_socket(LIBUS_SOCKET_DESCRIPTOR fd, const char *host, int port) {
struct addrinfo hints, *result = NULL;
struct addrinfo hints, *result;
memset(&hints, 0, sizeof(struct addrinfo));
hints.ai_family = AF_UNSPEC;
@@ -1396,8 +1393,8 @@ static int bsd_do_connect_raw(LIBUS_SOCKET_DESCRIPTOR fd, struct sockaddr *addr,
#else
int r = 0;
do {
int r;
do {
errno = 0;
r = connect(fd, (struct sockaddr *)addr, namelen);
} while (IS_EINTR(r));

View File

@@ -517,7 +517,7 @@ void *us_socket_context_connect(int ssl, struct us_socket_context_t *context, co
return us_socket_context_connect_resolved_dns(context, &addr, options, socket_ext_size);
}
struct addrinfo_request* ai_req = NULL;
struct addrinfo_request* ai_req;
if (Bun__addrinfo_get(loop, host, (uint16_t)port, &ai_req) == 0) {
// fast path for cached results
struct addrinfo_result *result = Bun__addrinfo_getRequestResult(ai_req);

View File

@@ -113,7 +113,7 @@ int passphrase_cb(char *buf, int size, int rwflag, void *u) {
const char *passphrase = (const char *)u;
size_t passphrase_length = strlen(passphrase);
memcpy(buf, passphrase, passphrase_length);
buf[passphrase_length] = 0;
// no NUL terminator needed: OpenSSL uses the returned length, not a C string
return (int)passphrase_length;
}
@@ -807,7 +807,7 @@ create_ssl_context_from_options(struct us_socket_context_options_t options) {
}
if (options.ca_file_name) {
STACK_OF(X509_NAME) * ca_list = NULL;
STACK_OF(X509_NAME) * ca_list;
ca_list = SSL_load_client_CA_file(options.ca_file_name);
if (ca_list == NULL) {
free_ssl_context(ssl_context);
@@ -825,7 +825,7 @@ create_ssl_context_from_options(struct us_socket_context_options_t options) {
if (options.dh_params_file_name) {
/* Set up ephemeral DH parameters. */
DH *dh_2048 = NULL;
FILE *paramfile = NULL;
FILE *paramfile;
paramfile = fopen(options.dh_params_file_name, "r");
if (paramfile) {
@@ -857,7 +857,7 @@ create_ssl_context_from_options(struct us_socket_context_options_t options) {
if (options.ssl_ciphers) {
if (!SSL_CTX_set_cipher_list(ssl_context, options.ssl_ciphers)) {
unsigned long ssl_err = ERR_get_error();
if (!(strlen(options.ssl_ciphers) == 0 && ERR_GET_REASON(ssl_err) == SSL_R_NO_CIPHER_MATCH)) {
// TLS1.2 ciphers were deliberately cleared, so don't consider
// SSL_R_NO_CIPHER_MATCH to be an error (this is how _set_cipher_suites()
@@ -881,8 +881,8 @@ create_ssl_context_from_options(struct us_socket_context_options_t options) {
int us_ssl_ctx_use_privatekey_content(SSL_CTX *ctx, const char *content,
int type) {
int reason_code = 0, ret = 0;
BIO *in = NULL;
int reason_code, ret = 0;
BIO *in;
EVP_PKEY *pkey = NULL;
in = BIO_new_mem_buf(content, strlen(content));
if (in == NULL) {
@@ -947,7 +947,7 @@ end:
}
int us_ssl_ctx_use_certificate_chain(SSL_CTX *ctx, const char *content) {
BIO *in = NULL;
BIO *in;
int ret = 0;
X509 *x = NULL;
@@ -975,9 +975,9 @@ int us_ssl_ctx_use_certificate_chain(SSL_CTX *ctx, const char *content) {
if (ret) {
// If we could set up our certificate, now proceed to the CA
// certificates.
X509 *ca = NULL;
int r = 0;
uint32_t err = 0;
X509 *ca;
int r;
uint32_t err;
SSL_CTX_clear_chain_certs(ctx);
@@ -1202,7 +1202,7 @@ SSL_CTX *create_ssl_context_from_bun_options(
if (options.ca_file_name) {
SSL_CTX_set_cert_store(ssl_context, us_get_default_ca_store());
STACK_OF(X509_NAME) * ca_list = NULL;
STACK_OF(X509_NAME) * ca_list;
ca_list = SSL_load_client_CA_file(options.ca_file_name);
if (ca_list == NULL) {
*err = CREATE_BUN_SOCKET_ERROR_LOAD_CA_FILE;
@@ -1268,7 +1268,7 @@ SSL_CTX *create_ssl_context_from_bun_options(
if (options.dh_params_file_name) {
/* Set up ephemeral DH parameters. */
DH *dh_2048 = NULL;
FILE *paramfile = NULL;
FILE *paramfile;
paramfile = fopen(options.dh_params_file_name, "r");
if (paramfile) {
@@ -1300,7 +1300,7 @@ SSL_CTX *create_ssl_context_from_bun_options(
if (options.ssl_ciphers) {
if (!SSL_CTX_set_cipher_list(ssl_context, options.ssl_ciphers)) {
unsigned long ssl_err = ERR_get_error();
if (!(strlen(options.ssl_ciphers) == 0 && ERR_GET_REASON(ssl_err) == SSL_R_NO_CIPHER_MATCH)) {
char error_msg[256];
ERR_error_string_n(ERR_peek_last_error(), error_msg, sizeof(error_msg));
@@ -1308,7 +1308,7 @@ SSL_CTX *create_ssl_context_from_bun_options(
// SSL_R_NO_CIPHER_MATCH to be an error (this is how _set_cipher_suites()
// works). If the user actually sets a value (like "no-such-cipher"), then
// that's actually an error.
*err = CREATE_BUN_SOCKET_ERROR_INVALID_CIPHERS;
free_ssl_context(ssl_context);
return NULL;
}

View File

@@ -6,10 +6,46 @@
#include <atomic>
#include <string.h>
#include "./default_ciphers.h"
// System-specific includes for certificate loading
#include "./root_certs_platform.h"
#ifdef _WIN32
#include <windows.h>
#include <wincrypt.h>
#else
// Linux/Unix includes
#include <dirent.h>
#include <stdio.h>
#include <limits.h>
#endif
static const int root_certs_size = sizeof(root_certs) / sizeof(root_certs[0]);
extern "C" void BUN__warn__extra_ca_load_failed(const char* filename, const char* error_msg);
// Forward declarations for platform-specific functions
// (Actual implementations are in platform-specific files)
// External variable from Zig CLI arguments
extern "C" bool Bun__Node__UseSystemCA;
// Helper function to check if system CA should be used
// Checks both CLI flag (--use-system-ca) and environment variable (NODE_USE_SYSTEM_CA=1)
static bool us_should_use_system_ca() {
// Check CLI flag first
if (Bun__Node__UseSystemCA) {
return true;
}
// Check environment variable
const char *use_system_ca = getenv("NODE_USE_SYSTEM_CA");
return use_system_ca && strcmp(use_system_ca, "1") == 0;
}
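The helper above can be mirrored in a few lines of TypeScript for clarity; this is a sketch of the same decision logic, with the flag and variable names (`--use-system-ca`, `NODE_USE_SYSTEM_CA`) taken from the diff and everything else local to the sketch:

```typescript
// System CAs are enabled by either the CLI flag or NODE_USE_SYSTEM_CA=1.
function shouldUseSystemCA(
  cliFlag: boolean,
  env: Record<string, string | undefined>,
): boolean {
  if (cliFlag) return true; // CLI flag takes precedence
  return env["NODE_USE_SYSTEM_CA"] === "1";
}

console.log(shouldUseSystemCA(true, {})); // true — flag wins
console.log(shouldUseSystemCA(false, { NODE_USE_SYSTEM_CA: "1" })); // true
console.log(shouldUseSystemCA(false, { NODE_USE_SYSTEM_CA: "0" })); // false
```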
// Platform-specific system certificate loading implementations are separated:
// - macOS: root_certs_darwin.cpp (Security framework with dynamic loading)
// - Windows: root_certs_windows.cpp (Windows CryptoAPI)
// - Linux/Unix: us_load_system_certificates_linux() below
// This callback is used to avoid the default passphrase callback in OpenSSL
// which will typically prompt for the passphrase. The prompting is designed
// for the OpenSSL CLI, but works poorly for this case because it involves
@@ -101,7 +137,8 @@ end:
static void us_internal_init_root_certs(
X509 *root_cert_instances[root_certs_size],
STACK_OF(X509) *&root_extra_cert_instances) {
STACK_OF(X509) *&root_extra_cert_instances,
STACK_OF(X509) *&root_system_cert_instances) {
static std::atomic_flag root_cert_instances_lock = ATOMIC_FLAG_INIT;
static std::atomic_bool root_cert_instances_initialized = 0;
@@ -123,6 +160,17 @@ static void us_internal_init_root_certs(
if (extra_certs && extra_certs[0]) {
root_extra_cert_instances = us_ssl_ctx_load_all_certs_from_file(extra_certs);
}
// load system certificates if --use-system-ca or NODE_USE_SYSTEM_CA=1 is set
if (us_should_use_system_ca()) {
#ifdef __APPLE__
us_load_system_certificates_macos(&root_system_cert_instances);
#elif defined(_WIN32)
us_load_system_certificates_windows(&root_system_cert_instances);
#else
us_load_system_certificates_linux(&root_system_cert_instances);
#endif
}
}
atomic_flag_clear_explicit(&root_cert_instances_lock,
@@ -137,12 +185,15 @@ extern "C" int us_internal_raw_root_certs(struct us_cert_string_t **out) {
struct us_default_ca_certificates {
X509 *root_cert_instances[root_certs_size];
STACK_OF(X509) *root_extra_cert_instances;
STACK_OF(X509) *root_system_cert_instances;
};
us_default_ca_certificates* us_get_default_ca_certificates() {
static us_default_ca_certificates default_ca_certificates = {{NULL}, NULL};
static us_default_ca_certificates default_ca_certificates = {{NULL}, NULL, NULL};
us_internal_init_root_certs(default_ca_certificates.root_cert_instances, default_ca_certificates.root_extra_cert_instances);
us_internal_init_root_certs(default_ca_certificates.root_cert_instances,
default_ca_certificates.root_extra_cert_instances,
default_ca_certificates.root_system_cert_instances);
return &default_ca_certificates;
}
@@ -151,20 +202,33 @@ STACK_OF(X509) *us_get_root_extra_cert_instances() {
return us_get_default_ca_certificates()->root_extra_cert_instances;
}
STACK_OF(X509) *us_get_root_system_cert_instances() {
if (!us_should_use_system_ca())
return NULL;
// Ensure single-path initialization via us_internal_init_root_certs
auto certs = us_get_default_ca_certificates();
return certs->root_system_cert_instances;
}
extern "C" X509_STORE *us_get_default_ca_store() {
X509_STORE *store = X509_STORE_new();
if (store == NULL) {
return NULL;
}
if (!X509_STORE_set_default_paths(store)) {
X509_STORE_free(store);
return NULL;
// Only load system default paths when system CA usage is enabled
// (--use-system-ca or NODE_USE_SYSTEM_CA=1); otherwise rely on bundled
// certificates only (matching Node.js behavior)
if (us_should_use_system_ca()) {
if (!X509_STORE_set_default_paths(store)) {
X509_STORE_free(store);
return NULL;
}
}
us_default_ca_certificates *default_ca_certificates = us_get_default_ca_certificates();
X509** root_cert_instances = default_ca_certificates->root_cert_instances;
STACK_OF(X509) *root_extra_cert_instances = default_ca_certificates->root_extra_cert_instances;
STACK_OF(X509) *root_system_cert_instances = default_ca_certificates->root_system_cert_instances;
// load all root_cert_instances on the default ca store
for (size_t i = 0; i < root_certs_size; i++) {
@@ -183,8 +247,59 @@ extern "C" X509_STORE *us_get_default_ca_store() {
}
}
if (us_should_use_system_ca() && root_system_cert_instances) {
for (int i = 0; i < sk_X509_num(root_system_cert_instances); i++) {
X509 *cert = sk_X509_value(root_system_cert_instances, i);
X509_up_ref(cert);
X509_STORE_add_cert(store, cert);
}
}
return store;
}
extern "C" const char *us_get_default_ciphers() {
return DEFAULT_CIPHER_LIST;
}
}
// Platform-specific implementations for loading system certificates
#if defined(_WIN32)
// Windows implementation is split to avoid header conflicts:
// - root_certs_windows.cpp loads raw certificate data (uses Windows headers)
// - This file converts raw data to X509* (uses OpenSSL headers)
#include <vector>
struct RawCertificate {
std::vector<unsigned char> data;
};
// Defined in root_certs_windows.cpp - loads raw certificate data
extern void us_load_system_certificates_windows_raw(
std::vector<RawCertificate>& raw_certs);
// Convert raw Windows certificates to OpenSSL X509 format
void us_load_system_certificates_windows(STACK_OF(X509) **system_certs) {
*system_certs = sk_X509_new_null();
if (*system_certs == NULL) {
return;
}
// Load raw certificates from Windows stores
std::vector<RawCertificate> raw_certs;
us_load_system_certificates_windows_raw(raw_certs);
// Convert each raw certificate to X509
for (const auto& raw_cert : raw_certs) {
const unsigned char* data = raw_cert.data.data();
X509* x509_cert = d2i_X509(NULL, &data, raw_cert.data.size());
if (x509_cert != NULL) {
sk_X509_push(*system_certs, x509_cert);
}
}
}
#else
// Linux and other Unix-like systems - implementation is in root_certs_linux.cpp
extern "C" void us_load_system_certificates_linux(STACK_OF(X509) **system_certs);
#endif

View File

@@ -0,0 +1,431 @@
#ifdef __APPLE__
#include <dlfcn.h>
#include <CoreFoundation/CoreFoundation.h>
#include <atomic>
#include <openssl/x509.h>
#include <openssl/x509_vfy.h>
#include <stdio.h>
// Security framework types and constants - dynamically loaded
typedef struct OpaqueSecCertificateRef* SecCertificateRef;
typedef struct OpaqueSecTrustRef* SecTrustRef;
typedef struct OpaqueSecPolicyRef* SecPolicyRef;
typedef int32_t OSStatus;
typedef uint32_t SecTrustSettingsDomain;
// Security framework constants
enum {
errSecSuccess = 0,
errSecItemNotFound = -25300,
};
// Trust settings domains
enum {
kSecTrustSettingsDomainUser = 0,
kSecTrustSettingsDomainAdmin = 1,
kSecTrustSettingsDomainSystem = 2,
};
// Trust status enumeration
enum class TrustStatus {
TRUSTED,
DISTRUSTED,
UNSPECIFIED
};
// Dynamic Security framework loader
class SecurityFramework {
public:
void* handle;
void* cf_handle;
// Core Foundation constants
CFStringRef kSecClass;
CFStringRef kSecClassCertificate;
CFStringRef kSecMatchLimit;
CFStringRef kSecMatchLimitAll;
CFStringRef kSecReturnRef;
CFStringRef kSecMatchTrustedOnly;
CFBooleanRef kCFBooleanTrue;
CFAllocatorRef kCFAllocatorDefault;
CFArrayCallBacks* kCFTypeArrayCallBacks;
CFDictionaryKeyCallBacks* kCFTypeDictionaryKeyCallBacks;
CFDictionaryValueCallBacks* kCFTypeDictionaryValueCallBacks;
// Core Foundation function pointers
CFMutableArrayRef (*CFArrayCreateMutable)(CFAllocatorRef allocator, CFIndex capacity, const CFArrayCallBacks *callBacks);
CFArrayRef (*CFArrayCreate)(CFAllocatorRef allocator, const void **values, CFIndex numValues, const CFArrayCallBacks *callBacks);
void (*CFArraySetValueAtIndex)(CFMutableArrayRef theArray, CFIndex idx, const void *value);
const void* (*CFArrayGetValueAtIndex)(CFArrayRef theArray, CFIndex idx);
CFIndex (*CFArrayGetCount)(CFArrayRef theArray);
void (*CFRelease)(CFTypeRef cf);
CFDictionaryRef (*CFDictionaryCreate)(CFAllocatorRef allocator, const void **keys, const void **values, CFIndex numValues, const CFDictionaryKeyCallBacks *keyCallBacks, const CFDictionaryValueCallBacks *valueCallBacks);
const UInt8* (*CFDataGetBytePtr)(CFDataRef theData);
CFIndex (*CFDataGetLength)(CFDataRef theData);
// Security framework function pointers
OSStatus (*SecItemCopyMatching)(CFDictionaryRef query, CFTypeRef *result);
CFDataRef (*SecCertificateCopyData)(SecCertificateRef certificate);
OSStatus (*SecTrustCreateWithCertificates)(CFArrayRef certificates, CFArrayRef policies, SecTrustRef *trust);
SecPolicyRef (*SecPolicyCreateSSL)(Boolean server, CFStringRef hostname);
Boolean (*SecTrustEvaluateWithError)(SecTrustRef trust, CFErrorRef *error);
OSStatus (*SecTrustSettingsCopyTrustSettings)(SecCertificateRef certRef, SecTrustSettingsDomain domain, CFArrayRef *trustSettings);
SecurityFramework() : handle(nullptr), cf_handle(nullptr),
kSecClass(nullptr), kSecClassCertificate(nullptr),
kSecMatchLimit(nullptr), kSecMatchLimitAll(nullptr),
kSecReturnRef(nullptr), kSecMatchTrustedOnly(nullptr), kCFBooleanTrue(nullptr),
kCFAllocatorDefault(nullptr), kCFTypeArrayCallBacks(nullptr),
kCFTypeDictionaryKeyCallBacks(nullptr), kCFTypeDictionaryValueCallBacks(nullptr),
CFArrayCreateMutable(nullptr), CFArrayCreate(nullptr),
CFArraySetValueAtIndex(nullptr), CFArrayGetValueAtIndex(nullptr),
CFArrayGetCount(nullptr), CFRelease(nullptr),
CFDictionaryCreate(nullptr), CFDataGetBytePtr(nullptr), CFDataGetLength(nullptr),
SecItemCopyMatching(nullptr), SecCertificateCopyData(nullptr),
SecTrustCreateWithCertificates(nullptr), SecPolicyCreateSSL(nullptr),
SecTrustEvaluateWithError(nullptr), SecTrustSettingsCopyTrustSettings(nullptr) {}
~SecurityFramework() {
if (handle) {
dlclose(handle);
}
if (cf_handle) {
dlclose(cf_handle);
}
}
bool load() {
if (handle && cf_handle) return true; // Already loaded
// Load CoreFoundation framework
cf_handle = dlopen("/System/Library/Frameworks/CoreFoundation.framework/CoreFoundation", RTLD_LAZY | RTLD_LOCAL);
if (!cf_handle) {
fprintf(stderr, "Failed to load CoreFoundation framework: %s\n", dlerror());
return false;
}
// Load Security framework
handle = dlopen("/System/Library/Frameworks/Security.framework/Security", RTLD_LAZY | RTLD_LOCAL);
if (!handle) {
fprintf(stderr, "Failed to load Security framework: %s\n", dlerror());
dlclose(cf_handle);
cf_handle = nullptr;
return false;
}
// Load constants and functions
if (!load_constants()) {
if (handle) {
dlclose(handle);
handle = nullptr;
}
if (cf_handle) {
dlclose(cf_handle);
cf_handle = nullptr;
}
return false;
}
if (!load_functions()) {
if (handle) {
dlclose(handle);
handle = nullptr;
}
if (cf_handle) {
dlclose(cf_handle);
cf_handle = nullptr;
}
return false;
}
return true;
}
private:
bool load_constants() {
// Load Security framework constants
void* ptr = dlsym(handle, "kSecClass");
if (!ptr) { fprintf(stderr, "DEBUG: kSecClass not found\n"); return false; }
kSecClass = *(CFStringRef*)ptr;
ptr = dlsym(handle, "kSecClassCertificate");
if (!ptr) { fprintf(stderr, "DEBUG: kSecClassCertificate not found\n"); return false; }
kSecClassCertificate = *(CFStringRef*)ptr;
ptr = dlsym(handle, "kSecMatchLimit");
if (!ptr) { fprintf(stderr, "DEBUG: kSecMatchLimit not found\n"); return false; }
kSecMatchLimit = *(CFStringRef*)ptr;
ptr = dlsym(handle, "kSecMatchLimitAll");
if (!ptr) { fprintf(stderr, "DEBUG: kSecMatchLimitAll not found\n"); return false; }
kSecMatchLimitAll = *(CFStringRef*)ptr;
ptr = dlsym(handle, "kSecReturnRef");
if (!ptr) { fprintf(stderr, "DEBUG: kSecReturnRef not found\n"); return false; }
kSecReturnRef = *(CFStringRef*)ptr;
ptr = dlsym(handle, "kSecMatchTrustedOnly");
if (!ptr) { fprintf(stderr, "DEBUG: kSecMatchTrustedOnly not found\n"); return false; }
kSecMatchTrustedOnly = *(CFStringRef*)ptr;
// Load CoreFoundation constants
ptr = dlsym(cf_handle, "kCFBooleanTrue");
if (!ptr) { fprintf(stderr, "DEBUG: kCFBooleanTrue not found\n"); return false; }
kCFBooleanTrue = *(CFBooleanRef*)ptr;
ptr = dlsym(cf_handle, "kCFAllocatorDefault");
if (!ptr) { fprintf(stderr, "DEBUG: kCFAllocatorDefault not found\n"); return false; }
kCFAllocatorDefault = *(CFAllocatorRef*)ptr;
ptr = dlsym(cf_handle, "kCFTypeArrayCallBacks");
if (!ptr) { fprintf(stderr, "DEBUG: kCFTypeArrayCallBacks not found\n"); return false; }
kCFTypeArrayCallBacks = (CFArrayCallBacks*)ptr;
ptr = dlsym(cf_handle, "kCFTypeDictionaryKeyCallBacks");
if (!ptr) { fprintf(stderr, "DEBUG: kCFTypeDictionaryKeyCallBacks not found\n"); return false; }
kCFTypeDictionaryKeyCallBacks = (CFDictionaryKeyCallBacks*)ptr;
ptr = dlsym(cf_handle, "kCFTypeDictionaryValueCallBacks");
if (!ptr) { fprintf(stderr, "DEBUG: kCFTypeDictionaryValueCallBacks not found\n"); return false; }
kCFTypeDictionaryValueCallBacks = (CFDictionaryValueCallBacks*)ptr;
return true;
}
bool load_functions() {
// Load CoreFoundation functions
CFArrayCreateMutable = (CFMutableArrayRef (*)(CFAllocatorRef, CFIndex, const CFArrayCallBacks*))dlsym(cf_handle, "CFArrayCreateMutable");
CFArrayCreate = (CFArrayRef (*)(CFAllocatorRef, const void**, CFIndex, const CFArrayCallBacks*))dlsym(cf_handle, "CFArrayCreate");
CFArraySetValueAtIndex = (void (*)(CFMutableArrayRef, CFIndex, const void*))dlsym(cf_handle, "CFArraySetValueAtIndex");
CFArrayGetValueAtIndex = (const void* (*)(CFArrayRef, CFIndex))dlsym(cf_handle, "CFArrayGetValueAtIndex");
CFArrayGetCount = (CFIndex (*)(CFArrayRef))dlsym(cf_handle, "CFArrayGetCount");
CFRelease = (void (*)(CFTypeRef))dlsym(cf_handle, "CFRelease");
CFDictionaryCreate = (CFDictionaryRef (*)(CFAllocatorRef, const void**, const void**, CFIndex, const CFDictionaryKeyCallBacks*, const CFDictionaryValueCallBacks*))dlsym(cf_handle, "CFDictionaryCreate");
CFDataGetBytePtr = (const UInt8* (*)(CFDataRef))dlsym(cf_handle, "CFDataGetBytePtr");
CFDataGetLength = (CFIndex (*)(CFDataRef))dlsym(cf_handle, "CFDataGetLength");
// Load Security framework functions
SecItemCopyMatching = (OSStatus (*)(CFDictionaryRef, CFTypeRef*))dlsym(handle, "SecItemCopyMatching");
SecCertificateCopyData = (CFDataRef (*)(SecCertificateRef))dlsym(handle, "SecCertificateCopyData");
SecTrustCreateWithCertificates = (OSStatus (*)(CFArrayRef, CFArrayRef, SecTrustRef*))dlsym(handle, "SecTrustCreateWithCertificates");
SecPolicyCreateSSL = (SecPolicyRef (*)(Boolean, CFStringRef))dlsym(handle, "SecPolicyCreateSSL");
SecTrustEvaluateWithError = (Boolean (*)(SecTrustRef, CFErrorRef*))dlsym(handle, "SecTrustEvaluateWithError");
SecTrustSettingsCopyTrustSettings = (OSStatus (*)(SecCertificateRef, SecTrustSettingsDomain, CFArrayRef*))dlsym(handle, "SecTrustSettingsCopyTrustSettings");
return CFArrayCreateMutable && CFArrayCreate && CFArraySetValueAtIndex &&
CFArrayGetValueAtIndex && CFArrayGetCount && CFRelease &&
CFDictionaryCreate && CFDataGetBytePtr && CFDataGetLength &&
SecItemCopyMatching && SecCertificateCopyData &&
SecTrustCreateWithCertificates && SecPolicyCreateSSL &&
SecTrustEvaluateWithError && SecTrustSettingsCopyTrustSettings;
}
};
// Global instance for dynamic loading
static std::atomic<SecurityFramework*> g_security_framework{nullptr};
static SecurityFramework* get_security_framework() {
SecurityFramework* framework = g_security_framework.load();
if (!framework) {
SecurityFramework* new_framework = new SecurityFramework();
if (new_framework->load()) {
SecurityFramework* expected = nullptr;
if (g_security_framework.compare_exchange_strong(expected, new_framework)) {
framework = new_framework;
} else {
delete new_framework;
framework = expected;
}
} else {
delete new_framework;
framework = nullptr;
}
}
return framework;
}
// Helper function to determine if a certificate is self-issued
static bool is_certificate_self_issued(X509* cert) {
X509_NAME* subject = X509_get_subject_name(cert);
X509_NAME* issuer = X509_get_issuer_name(cert);
return subject && issuer && X509_NAME_cmp(subject, issuer) == 0;
}
// Validate certificate trust using Security framework
static bool is_certificate_trust_valid(SecurityFramework* security, SecCertificateRef cert_ref) {
CFMutableArrayRef subj_certs = security->CFArrayCreateMutable(nullptr, 1, security->kCFTypeArrayCallBacks);
if (!subj_certs) return false;
security->CFArraySetValueAtIndex(subj_certs, 0, cert_ref);
SecPolicyRef policy = security->SecPolicyCreateSSL(true, nullptr);
if (!policy) {
security->CFRelease(subj_certs);
return false;
}
CFArrayRef policies = security->CFArrayCreate(nullptr, (const void**)&policy, 1, security->kCFTypeArrayCallBacks);
if (!policies) {
security->CFRelease(policy);
security->CFRelease(subj_certs);
return false;
}
SecTrustRef sec_trust = nullptr;
OSStatus ortn = security->SecTrustCreateWithCertificates(subj_certs, policies, &sec_trust);
bool result = false;
if (ortn == errSecSuccess && sec_trust) {
result = security->SecTrustEvaluateWithError(sec_trust, nullptr);
}
// Cleanup
if (sec_trust) security->CFRelease(sec_trust);
security->CFRelease(policies);
security->CFRelease(policy);
security->CFRelease(subj_certs);
return result;
}
// Check trust settings for policy (simplified version)
static TrustStatus is_trust_settings_trusted_for_policy(SecurityFramework* security, CFArrayRef trust_settings, bool is_self_issued) {
if (!trust_settings) {
return TrustStatus::UNSPECIFIED;
}
// Empty trust settings array means "always trust this certificate"
if (security->CFArrayGetCount(trust_settings) == 0) {
return is_self_issued ? TrustStatus::TRUSTED : TrustStatus::UNSPECIFIED;
}
// For simplicity, we'll do basic checking here
// A full implementation would parse the trust dictionary entries
return TrustStatus::UNSPECIFIED;
}
// Check if certificate is trusted for server auth policy
static bool is_certificate_trusted_for_policy(SecurityFramework* security, X509* cert, SecCertificateRef cert_ref) {
bool is_self_issued = is_certificate_self_issued(cert);
bool trust_evaluated = false;
// Check user, admin, then system trust domains, in that order
for (const auto& trust_domain : {kSecTrustSettingsDomainUser, kSecTrustSettingsDomainAdmin, kSecTrustSettingsDomainSystem}) {
CFArrayRef trust_settings = nullptr;
OSStatus err = security->SecTrustSettingsCopyTrustSettings(cert_ref, trust_domain, &trust_settings);
if (err != errSecSuccess && err != errSecItemNotFound) {
continue;
}
if (err == errSecSuccess && trust_settings) {
TrustStatus result = is_trust_settings_trusted_for_policy(security, trust_settings, is_self_issued);
security->CFRelease(trust_settings);
if (result == TrustStatus::TRUSTED) {
return true;
} else if (result == TrustStatus::DISTRUSTED) {
return false;
}
}
// If no trust settings and we haven't evaluated trust yet, check trust validity
if (!trust_settings && !trust_evaluated) {
if (is_certificate_trust_valid(security, cert_ref)) {
return true;
}
trust_evaluated = true;
}
}
return false;
}
// Main function to load system certificates on macOS
extern "C" void us_load_system_certificates_macos(STACK_OF(X509) **system_certs) {
*system_certs = sk_X509_new_null();
if (!*system_certs) {
return;
}
SecurityFramework* security = get_security_framework();
if (!security) {
return; // Fail silently
}
// Create search dictionary for certificates
CFTypeRef search_keys[] = {
security->kSecClass,
security->kSecMatchLimit,
security->kSecReturnRef,
security->kSecMatchTrustedOnly,
};
CFTypeRef search_values[] = {
security->kSecClassCertificate,
security->kSecMatchLimitAll,
security->kCFBooleanTrue,
security->kCFBooleanTrue,
};
CFDictionaryRef search = security->CFDictionaryCreate(
security->kCFAllocatorDefault,
search_keys,
search_values,
4,
security->kCFTypeDictionaryKeyCallBacks,
security->kCFTypeDictionaryValueCallBacks
);
if (!search) {
return;
}
CFArrayRef certificates = nullptr;
OSStatus status = security->SecItemCopyMatching(search, (CFTypeRef*)&certificates);
security->CFRelease(search);
if (status != errSecSuccess || !certificates) {
return;
}
CFIndex count = security->CFArrayGetCount(certificates);
for (CFIndex i = 0; i < count; ++i) {
SecCertificateRef cert_ref = (SecCertificateRef)security->CFArrayGetValueAtIndex(certificates, i);
if (!cert_ref) continue;
// Get certificate data
CFDataRef cert_data = security->SecCertificateCopyData(cert_ref);
if (!cert_data) continue;
// Convert to X509
const unsigned char* data_ptr = security->CFDataGetBytePtr(cert_data);
long data_len = security->CFDataGetLength(cert_data);
X509* x509_cert = d2i_X509(nullptr, &data_ptr, data_len);
security->CFRelease(cert_data);
if (!x509_cert) continue;
// Only consider CA certificates
if (X509_check_ca(x509_cert) == 1 &&
is_certificate_trusted_for_policy(security, x509_cert, cert_ref)) {
sk_X509_push(*system_certs, x509_cert);
} else {
X509_free(x509_cert);
}
}
security->CFRelease(certificates);
}
// Cleanup function for Security framework
extern "C" void us_cleanup_security_framework() {
SecurityFramework* framework = g_security_framework.exchange(nullptr);
if (framework) {
delete framework;
}
}
#endif // __APPLE__
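The lazy initialization in `get_security_framework` above follows a lock-free publish pattern: allocate, try to install the pointer with `compare_exchange_strong`, and let the losing thread delete its copy and adopt the winner's. A minimal standalone sketch of that pattern, using a hypothetical `Framework` stand-in for `SecurityFramework`:

```cpp
#include <atomic>
#include <cassert>

// Hypothetical stand-in for SecurityFramework; load() always succeeds here.
struct Framework {
    bool load() { return true; }
};

static std::atomic<Framework*> g_framework{nullptr};

// Lock-free lazy init: the first successful publisher wins,
// racing losers delete their instance and use the published one.
static Framework* get_framework() {
    Framework* framework = g_framework.load();
    if (!framework) {
        Framework* fresh = new Framework();
        if (fresh->load()) {
            Framework* expected = nullptr;
            if (g_framework.compare_exchange_strong(expected, fresh)) {
                framework = fresh;    // we published our instance
            } else {
                delete fresh;         // another thread won the race
                framework = expected; // adopt the published instance
            }
        } else {
            delete fresh;             // load failed; report unavailable
        }
    }
    return framework;
}
```

Every caller observes the same pointer once one publisher succeeds, and the `exchange(nullptr)` in `us_cleanup_security_framework` pairs with this by taking ownership back exactly once.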


@@ -5,6 +5,7 @@
#define CPPDECL extern "C"
STACK_OF(X509) *us_get_root_extra_cert_instances();
STACK_OF(X509) *us_get_root_system_cert_instances();
#else
#define CPPDECL extern


@@ -0,0 +1,170 @@
#ifndef _WIN32
#ifndef __APPLE__
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <limits.h>
#include <openssl/x509.h>
#include <openssl/x509_vfy.h>
#include <openssl/pem.h>
#include <openssl/err.h>
extern "C" void BUN__warn__extra_ca_load_failed(const char* filename, const char* error_msg);
// Helper function to load certificates from a directory
static void load_certs_from_directory(const char* dir_path, STACK_OF(X509)* cert_stack) {
DIR* dir = opendir(dir_path);
if (!dir) {
return;
}
struct dirent* entry;
while ((entry = readdir(dir)) != NULL) {
// Skip . and ..
if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0) {
continue;
}
// Check if file has .crt, .pem, or .cer extension
const char* ext = strrchr(entry->d_name, '.');
if (!ext || (strcmp(ext, ".crt") != 0 && strcmp(ext, ".pem") != 0 && strcmp(ext, ".cer") != 0)) {
continue;
}
// Build full path
char filepath[PATH_MAX];
snprintf(filepath, sizeof(filepath), "%s/%s", dir_path, entry->d_name);
// Try to load certificate
FILE* file = fopen(filepath, "r");
if (file) {
X509* cert = PEM_read_X509(file, NULL, NULL, NULL);
fclose(file);
if (cert) {
if (!sk_X509_push(cert_stack, cert)) {
X509_free(cert);
}
}
}
}
closedir(dir);
}
// Helper function to load certificates from a bundle file
static void load_certs_from_bundle(const char* bundle_path, STACK_OF(X509)* cert_stack) {
FILE* file = fopen(bundle_path, "r");
if (!file) {
return;
}
X509* cert;
while ((cert = PEM_read_X509(file, NULL, NULL, NULL)) != NULL) {
if (!sk_X509_push(cert_stack, cert)) {
X509_free(cert);
break;
}
}
ERR_clear_error();
fclose(file);
}
// Main function to load system certificates on Linux and other Unix-like systems
extern "C" void us_load_system_certificates_linux(STACK_OF(X509) **system_certs) {
*system_certs = sk_X509_new_null();
if (*system_certs == NULL) {
return;
}
// First check environment variables (same as Node.js and OpenSSL)
const char* ssl_cert_file = getenv("SSL_CERT_FILE");
const char* ssl_cert_dir = getenv("SSL_CERT_DIR");
// If SSL_CERT_FILE is set, load from it
if (ssl_cert_file && strlen(ssl_cert_file) > 0) {
load_certs_from_bundle(ssl_cert_file, *system_certs);
}
// If SSL_CERT_DIR is set, load from each directory (colon-separated)
if (ssl_cert_dir && strlen(ssl_cert_dir) > 0) {
char* dir_copy = strdup(ssl_cert_dir);
if (dir_copy) {
char* token = strtok(dir_copy, ":");
while (token != NULL) {
// Skip empty tokens
if (strlen(token) > 0) {
load_certs_from_directory(token, *system_certs);
}
token = strtok(NULL, ":");
}
free(dir_copy);
}
}
// If environment variables were set, use only those (even if they yield zero certs)
if (ssl_cert_file || ssl_cert_dir) {
return;
}
// Otherwise, load certificates from standard Linux/Unix paths
// These are the common locations for system certificates
// Common certificate bundle locations (single file with multiple certs)
// These paths are based on common Linux distributions and OpenSSL defaults
static const char* bundle_paths[] = {
"/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo
"/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL 6
"/etc/ssl/ca-bundle.pem", // OpenSUSE
"/etc/pki/tls/cert.pem", // Fedora/RHEL 7+
"/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem", // CentOS/RHEL 7+
"/etc/ssl/cert.pem", // Alpine Linux, macOS OpenSSL
"/usr/local/etc/openssl/cert.pem", // Homebrew OpenSSL on macOS
"/usr/local/share/ca-certificates/ca-certificates.crt", // Custom CA installs
NULL
};
// Common certificate directory locations (multiple files)
// Note: OpenSSL expects hashed symlinks in directories (c_rehash format)
static const char* dir_paths[] = {
"/etc/ssl/certs", // Common location (Debian/Ubuntu with hashed links)
"/etc/pki/tls/certs", // RHEL/Fedora
"/usr/share/ca-certificates", // Debian/Ubuntu (original certs, not hashed)
"/usr/local/share/certs", // FreeBSD
"/etc/openssl/certs", // NetBSD
"/var/ssl/certs", // AIX
"/usr/local/etc/openssl/certs", // Homebrew OpenSSL on macOS
"/System/Library/OpenSSL/certs", // macOS system OpenSSL (older versions)
NULL
};
// Try loading from bundle files first
for (const char** path = bundle_paths; *path != NULL; path++) {
load_certs_from_bundle(*path, *system_certs);
}
// Then try loading from directories
for (const char** path = dir_paths; *path != NULL; path++) {
load_certs_from_directory(*path, *system_certs);
}
// Also check NODE_EXTRA_CA_CERTS environment variable
const char* extra_ca_certs = getenv("NODE_EXTRA_CA_CERTS");
if (extra_ca_certs && strlen(extra_ca_certs) > 0) {
FILE* file = fopen(extra_ca_certs, "r");
if (file) {
X509* cert;
while ((cert = PEM_read_X509(file, NULL, NULL, NULL)) != NULL) {
if (!sk_X509_push(*system_certs, cert)) {
X509_free(cert);
break;
}
}
ERR_clear_error();
fclose(file);
} else {
BUN__warn__extra_ca_load_failed(extra_ca_certs, "Failed to open file");
}
}
}
#endif // !__APPLE__
#endif // !_WIN32
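The `SSL_CERT_DIR` handling above tokenizes a colon-separated directory list with `strtok` over a `strdup`'d copy, dropping empty tokens. That splitting step can be exercised on its own; `split_cert_dirs` is an illustrative helper, not part of the source:

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>
#include <string>
#include <vector>

// Split a colon-separated directory list the way the loader does:
// strtok over a private copy (strtok mutates its input), empty
// entries skipped, original string left untouched.
static std::vector<std::string> split_cert_dirs(const char* ssl_cert_dir) {
    std::vector<std::string> dirs;
    if (!ssl_cert_dir || !*ssl_cert_dir) return dirs;
    char* copy = strdup(ssl_cert_dir);
    if (!copy) return dirs;
    for (char* tok = strtok(copy, ":"); tok; tok = strtok(NULL, ":")) {
        if (*tok) dirs.push_back(tok); // skip empty tokens
    }
    free(copy);
    return dirs;
}
```

Note that `strtok` already collapses consecutive delimiters, so the explicit empty-token check mirrors the loader's defensive style rather than changing behavior.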


@@ -0,0 +1,18 @@
#pragma once
#include <openssl/x509.h>
// Platform-specific certificate loading functions
extern "C" {
// Load system certificates for the current platform
void us_load_system_certificates_linux(STACK_OF(X509) **system_certs);
void us_load_system_certificates_macos(STACK_OF(X509) **system_certs);
void us_load_system_certificates_windows(STACK_OF(X509) **system_certs);
// Platform-specific cleanup functions
#ifdef __APPLE__
void us_cleanup_security_framework();
#endif
}


@@ -0,0 +1,53 @@
#ifdef _WIN32
#include <windows.h>
#include <wincrypt.h>
#include <vector>
#include <cstring>
// Forward declaration to avoid including OpenSSL headers here
// This prevents conflicts with Windows macros like X509_NAME
// Note: We don't use STACK_OF macro here since we don't have OpenSSL headers
// Structure to hold raw certificate data
struct RawCertificate {
std::vector<unsigned char> data;
};
// Helper function to load raw certificates from a Windows certificate store
static void LoadRawCertsFromStore(std::vector<RawCertificate>& raw_certs,
DWORD store_flags,
const wchar_t* store_name) {
HCERTSTORE cert_store = CertOpenStore(
CERT_STORE_PROV_SYSTEM_W,
0,
0,
store_flags | CERT_STORE_READONLY_FLAG,
store_name
);
if (cert_store == NULL) {
return;
}
PCCERT_CONTEXT cert_context = NULL;
while ((cert_context = CertEnumCertificatesInStore(cert_store, cert_context)) != NULL) {
RawCertificate raw_cert;
raw_cert.data.assign(cert_context->pbCertEncoded,
cert_context->pbCertEncoded + cert_context->cbCertEncoded);
raw_certs.push_back(std::move(raw_cert));
}
CertCloseStore(cert_store, 0);
}
// Main function to load raw system certificates on Windows
// Returns certificates as raw DER data to avoid OpenSSL header conflicts
extern void us_load_system_certificates_windows_raw(
std::vector<RawCertificate>& raw_certs) {
// Load only from ROOT by default
LoadRawCertsFromStore(raw_certs, CERT_SYSTEM_STORE_CURRENT_USER, L"ROOT");
LoadRawCertsFromStore(raw_certs, CERT_SYSTEM_STORE_LOCAL_MACHINE, L"ROOT");
}
#endif // _WIN32


@@ -44,7 +44,7 @@ void us_loop_run_bun_tick(struct us_loop_t *loop, const struct timespec* timeout
#define GET_READY_POLL(loop, index) (struct us_poll_t *) loop->ready_polls[index].data.ptr
#define SET_READY_POLL(loop, index, poll) loop->ready_polls[index].data.ptr = (void*)poll
#else
#define GET_READY_POLL(loop, index) (struct us_poll_t *) (loop)->ready_polls[index].udata
#define GET_READY_POLL(loop, index) (struct us_poll_t *) loop->ready_polls[index].udata
#define SET_READY_POLL(loop, index, poll) loop->ready_polls[index].udata = (uint64_t)poll
#endif
@@ -264,7 +264,7 @@ void us_loop_run_bun_tick(struct us_loop_t *loop, const struct timespec* timeout
us_internal_loop_pre(loop);
if (loop->data.jsc_vm)
if (loop->data.jsc_vm)
Bun__JSC_onBeforeWait(loop->data.jsc_vm);
/* Fetch ready polls */
@@ -336,7 +336,7 @@ void us_internal_loop_update_pending_ready_polls(struct us_loop_t *loop, struct
// if the new events do not contain the ready events of this poll, remove it (no, we filter that out later on)
SET_READY_POLL(loop, i, new_poll);
num_entries_possibly_remaining--;
}
}
@@ -366,7 +366,7 @@ int kqueue_change(int kqfd, int fd, int old_events, int new_events, void *user_d
/* Do they differ in writable? */
EV_SET64(&change_list[change_length++], fd, EVFILT_WRITE, (new_events & LIBUS_SOCKET_WRITABLE) ? EV_ADD : EV_DELETE, 0, 0, (uint64_t)(void*)user_data, 0, 0);
}
int ret = 0;
int ret;
do {
ret = kevent64(kqfd, change_list, change_length, change_list, change_length, KEVENT_FLAG_ERROR_EVENTS, NULL);
} while (IS_EINTR(ret));
@@ -379,7 +379,7 @@ int kqueue_change(int kqfd, int fd, int old_events, int new_events, void *user_d
struct us_poll_t *us_poll_resize(struct us_poll_t *p, struct us_loop_t *loop, unsigned int ext_size) {
int events = us_poll_events(p);
struct us_poll_t *new_p = us_realloc(p, sizeof(struct us_poll_t) + ext_size);
if (p != new_p) {
@@ -391,7 +391,6 @@ struct us_poll_t *us_poll_resize(struct us_poll_t *p, struct us_loop_t *loop, un
/* Forcefully update poll by resetting them with new_p as user data */
kqueue_change(loop->fd, new_p->state.fd, 0, LIBUS_SOCKET_WRITABLE | LIBUS_SOCKET_READABLE, new_p);
#endif /* This is needed for epoll also (us_change_poll doesn't update the old poll) */
// NOLINTNEXTLINE(clang-analyzer-unix.Malloc)
us_internal_loop_update_pending_ready_polls(loop, p, new_p, events, events);
}
@@ -563,8 +562,8 @@ void us_timer_close(struct us_timer_t *timer, int fallthrough) {
struct us_internal_callback_t *internal_cb = (struct us_internal_callback_t *) timer;
struct kevent64_s event;
EV_SET64(&event, (uint64_t) internal_cb, EVFILT_TIMER, EV_DELETE, 0, 0, (uint64_t)internal_cb, 0, 0);
int ret = 0;
EV_SET64(&event, (uint64_t) (void*) internal_cb, EVFILT_TIMER, EV_DELETE, 0, 0, (uint64_t)internal_cb, 0, 0);
int ret;
do {
ret = kevent64(internal_cb->loop->fd, &event, 1, &event, 1, KEVENT_FLAG_ERROR_EVENTS, NULL);
} while (IS_EINTR(ret));
@@ -585,10 +584,10 @@ void us_timer_set(struct us_timer_t *t, void (*cb)(struct us_timer_t *t), int ms
/* Bug: repeat_ms must be the same as ms, or 0 */
struct kevent64_s event;
uint64_t ptr = (uint64_t)internal_cb;
uint64_t ptr = (uint64_t)(void*)internal_cb;
EV_SET64(&event, ptr, EVFILT_TIMER, EV_ADD | (repeat_ms ? 0 : EV_ONESHOT), 0, ms, (uint64_t)internal_cb, 0, 0);
int ret = 0;
int ret;
do {
ret = kevent64(internal_cb->loop->fd, &event, 1, &event, 1, KEVENT_FLAG_ERROR_EVENTS, NULL);
} while (IS_EINTR(ret));
@@ -684,10 +683,10 @@ void us_internal_async_close(struct us_internal_async *a) {
struct us_internal_callback_t *internal_cb = (struct us_internal_callback_t *) a;
struct kevent64_s event;
uint64_t ptr = (uint64_t)internal_cb;
EV_SET64(&event, ptr, EVFILT_MACHPORT, EV_DELETE, 0, 0, (uint64_t)internal_cb, 0,0);
uint64_t ptr = (uint64_t)(void*)internal_cb;
EV_SET64(&event, ptr, EVFILT_MACHPORT, EV_DELETE, 0, 0, (uint64_t)(void*)internal_cb, 0,0);
int ret = 0;
int ret;
do {
ret = kevent64(internal_cb->loop->fd, &event, 1, &event, 1, KEVENT_FLAG_ERROR_EVENTS, NULL);
} while (IS_EINTR(ret));
@@ -714,11 +713,11 @@ void us_internal_async_set(struct us_internal_async *a, void (*cb)(struct us_int
event.filter = EVFILT_MACHPORT;
event.flags = EV_ADD | EV_ENABLE;
event.fflags = MACH_RCV_MSG | MACH_RCV_OVERWRITE;
event.ext[0] = (uint64_t)internal_cb->machport_buf;
event.ext[0] = (uint64_t)(void*)internal_cb->machport_buf;
event.ext[1] = MACHPORT_BUF_LEN;
event.udata = (uint64_t)internal_cb;
event.udata = (uint64_t)(void*)internal_cb;
int ret = 0;
int ret;
do {
ret = kevent64(internal_cb->loop->fd, &event, 1, &event, 1, KEVENT_FLAG_ERROR_EVENTS, NULL);
} while (IS_EINTR(ret));
@@ -750,14 +749,20 @@ void us_internal_async_wakeup(struct us_internal_async *a) {
);
switch (kr) {
case KERN_SUCCESS:
case KERN_SUCCESS: {
break;
}
// This means that the send would've blocked because the
// queue is full. We assume success because the port is full.
case MACH_SEND_TIMED_OUT:
case MACH_SEND_TIMED_OUT: {
break;
}
// No space means it will wake up.
case MACH_SEND_NO_BUFFER:
case MACH_SEND_NO_BUFFER: {
break;
}
default: {
break;


@@ -181,7 +181,7 @@ static const int MAX_LOW_PRIO_SOCKETS_PER_LOOP_ITERATION = 5;
void us_internal_handle_low_priority_sockets(struct us_loop_t *loop) {
struct us_internal_loop_data_t *loop_data = &loop->data;
struct us_socket_t *s = NULL;
struct us_socket_t *s;
loop_data->low_prio_budget = MAX_LOW_PRIO_SOCKETS_PER_LOOP_ITERATION;
@@ -425,7 +425,7 @@ void us_internal_dispatch_ready_poll(struct us_poll_t *p, int error, int eof, in
const int recv_flags = MSG_DONTWAIT | MSG_NOSIGNAL;
#endif
int length = 0;
int length;
#if !defined(_WIN32)
if(s->flags.is_ipc) {
struct msghdr msg = {0};


@@ -627,9 +627,15 @@ public:
return std::move(*this);
}
void setOnClose(HttpContextData<SSL>::OnSocketClosedCallback onClose) {
void setOnSocketClosed(HttpContextData<SSL>::OnSocketClosedCallback onClose) {
httpContext->getSocketContextData()->onSocketClosed = onClose;
}
void setOnSocketDrain(HttpContextData<SSL>::OnSocketDrainCallback onDrain) {
httpContext->getSocketContextData()->onSocketDrain = onDrain;
}
void setOnSocketData(HttpContextData<SSL>::OnSocketDataCallback onData) {
httpContext->getSocketContextData()->onSocketData = onData;
}
void setOnClientError(HttpContextData<SSL>::OnClientErrorCallback onClientError) {
httpContext->getSocketContextData()->onClientError = std::move(onClientError);


@@ -193,23 +193,32 @@ private:
auto *httpResponseData = reinterpret_cast<HttpResponseData<SSL> *>(us_socket_ext(SSL, s));
/* Call filter */
HttpContextData<SSL> *httpContextData = getSocketContextDataS(s);
if(httpResponseData && httpResponseData->isConnectRequest) {
if (httpResponseData->socketData && httpContextData->onSocketData) {
httpContextData->onSocketData(httpResponseData->socketData, SSL, s, "", 0, true);
}
if(httpResponseData->inStream) {
httpResponseData->inStream(reinterpret_cast<HttpResponse<SSL> *>(s), "", 0, true, httpResponseData->userData);
httpResponseData->inStream = nullptr;
}
}
for (auto &f : httpContextData->filterHandlers) {
f((HttpResponse<SSL> *) s, -1);
}
if (httpResponseData->socketData && httpContextData->onSocketClosed) {
httpContextData->onSocketClosed(httpResponseData->socketData, SSL, s);
}
/* Signal broken HTTP request only if we have a pending request */
if (httpResponseData->onAborted != nullptr && httpResponseData->userData != nullptr) {
httpResponseData->onAborted((HttpResponse<SSL> *)s, httpResponseData->userData);
}
if (httpResponseData->socketData && httpContextData->onSocketClosed) {
httpContextData->onSocketClosed(httpResponseData->socketData, SSL, s);
}
/* Destruct socket ext */
httpResponseData->~HttpResponseData<SSL>();
@@ -254,7 +263,9 @@ private:
/* The return value is entirely up to us to interpret. The HttpParser cares only for whether the returned value is DIFFERENT from passed user */
auto result = httpResponseData->consumePostPadded(httpContextData->maxHeaderSize, httpContextData->flags.requireHostHeader,httpContextData->flags.useStrictMethodValidation, data, (unsigned int) length, s, proxyParser, [httpContextData](void *s, HttpRequest *httpRequest) -> void * {
auto result = httpResponseData->consumePostPadded(httpContextData->maxHeaderSize, httpResponseData->isConnectRequest, httpContextData->flags.requireHostHeader,httpContextData->flags.useStrictMethodValidation, data, (unsigned int) length, s, proxyParser, [httpContextData](void *s, HttpRequest *httpRequest) -> void * {
/* For every request we reset the timeout and hang until user makes action */
/* Warning: if we are in shutdown state, resetting the timer is a security issue! */
us_socket_timeout(SSL, (us_socket_t *) s, 0);
@@ -330,7 +341,12 @@ private:
/* Continue parsing */
return s;
}, [httpResponseData](void *user, std::string_view data, bool fin) -> void * {
}, [httpResponseData, httpContextData](void *user, std::string_view data, bool fin) -> void * {
if (httpResponseData->isConnectRequest && httpResponseData->socketData && httpContextData->onSocketData) {
httpContextData->onSocketData(httpResponseData->socketData, SSL, (struct us_socket_t *) user, data.data(), data.length(), fin);
}
/* We always get an empty chunk even if there is no data */
if (httpResponseData->inStream) {
@@ -449,7 +465,7 @@ private:
us_socket_context_on_writable(SSL, getSocketContext(), [](us_socket_t *s) {
auto *asyncSocket = reinterpret_cast<AsyncSocket<SSL> *>(s);
auto *httpResponseData = reinterpret_cast<HttpResponseData<SSL> *>(asyncSocket->getAsyncSocketData());
/* Attempt to drain the socket buffer before triggering onWritable callback */
size_t bufferedAmount = asyncSocket->getBufferedAmount();
if (bufferedAmount > 0) {
@@ -470,6 +486,12 @@ private:
*/
}
auto *httpContextData = getSocketContextDataS(s);
if (httpResponseData->isConnectRequest && httpResponseData->socketData && httpContextData->onSocketDrain) {
httpContextData->onSocketDrain(httpResponseData->socketData, SSL, (struct us_socket_t *) s);
}
/* Ask the developer to write data and return success (true) or failure (false), OR skip sending anything and return success (true). */
if (httpResponseData->onWritable) {
/* We are now writable, so hang timeout again, the user does not have to do anything so we should hang until end or tryEnd rearms timeout */
@@ -514,6 +536,7 @@ private:
us_socket_context_on_end(SSL, getSocketContext(), [](us_socket_t *s) {
auto *asyncSocket = reinterpret_cast<AsyncSocket<SSL> *>(s);
asyncSocket->uncorkWithoutSending();
/* We do not care for half closed sockets */
return asyncSocket->close();
});


@@ -44,7 +44,10 @@ struct alignas(16) HttpContextData {
private:
std::vector<MoveOnlyFunction<void(HttpResponse<SSL> *, int)>> filterHandlers;
using OnSocketClosedCallback = void (*)(void* userData, int is_ssl, struct us_socket_t *rawSocket);
using OnSocketDataCallback = void (*)(void* userData, int is_ssl, struct us_socket_t *rawSocket, const char *data, int length, bool last);
using OnSocketDrainCallback = void (*)(void* userData, int is_ssl, struct us_socket_t *rawSocket);
using OnClientErrorCallback = MoveOnlyFunction<void(int is_ssl, struct us_socket_t *rawSocket, uWS::HttpParserError errorCode, char *rawPacket, int rawPacketLength)>;
MoveOnlyFunction<void(const char *hostname)> missingServerNameHandler;
@@ -61,6 +64,8 @@ private:
void *upgradedWebSocket = nullptr;
/* Used to simulate Node.js socket events. */
OnSocketClosedCallback onSocketClosed = nullptr;
OnSocketDrainCallback onSocketDrain = nullptr;
OnSocketDataCallback onSocketData = nullptr;
OnClientErrorCallback onClientError = nullptr;
uint64_t maxHeaderSize = 0; // 0 means no limit


@@ -117,18 +117,19 @@ namespace uWS
struct ConsumeRequestLineResult {
char *position;
bool isAncientHTTP;
bool isConnect;
HTTPHeaderParserError headerParserError;
public:
static ConsumeRequestLineResult error(HTTPHeaderParserError error) {
return ConsumeRequestLineResult{nullptr, false, error};
return ConsumeRequestLineResult{nullptr, false, false, error};
}
static ConsumeRequestLineResult success(char *position, bool isAncientHTTP = false) {
return ConsumeRequestLineResult{position, isAncientHTTP, HTTP_HEADER_PARSER_ERROR_NONE};
static ConsumeRequestLineResult success(char *position, bool isAncientHTTP = false, bool isConnect = false) {
return ConsumeRequestLineResult{position, isAncientHTTP, isConnect, HTTP_HEADER_PARSER_ERROR_NONE};
}
static ConsumeRequestLineResult shortRead(bool isAncientHTTP = false) {
return ConsumeRequestLineResult{nullptr, isAncientHTTP, HTTP_HEADER_PARSER_ERROR_NONE};
static ConsumeRequestLineResult shortRead(bool isAncientHTTP = false, bool isConnect = false) {
return ConsumeRequestLineResult{nullptr, isAncientHTTP, isConnect, HTTP_HEADER_PARSER_ERROR_NONE};
}
bool isErrorOrShortRead() {
@@ -551,7 +552,10 @@ namespace uWS
return ConsumeRequestLineResult::shortRead();
}
if (data[0] == 32 && (__builtin_expect(data[1] == '/', 1) || isHTTPorHTTPSPrefixForProxies(data + 1, end) == 1)) [[likely]] {
bool isHTTPMethod = (__builtin_expect(data[1] == '/', 1));
bool isConnect = !isHTTPMethod && (isHTTPorHTTPSPrefixForProxies(data + 1, end) == 1 || ((data - start) == 7 && memcmp(start, "CONNECT", 7) == 0));
if (isHTTPMethod || isConnect) [[likely]] {
header.key = {start, (size_t) (data - start)};
data++;
if(!isValidMethod(header.key, useStrictMethodValidation)) {
@@ -577,22 +581,22 @@ namespace uWS
if (nextPosition >= end) {
/* Whatever we have must be part of the version string */
if (memcmp(" HTTP/1.1\r\n", data, std::min<unsigned int>(11, (unsigned int) (end - data))) == 0) {
return ConsumeRequestLineResult::shortRead();
return ConsumeRequestLineResult::shortRead(false, isConnect);
} else if (memcmp(" HTTP/1.0\r\n", data, std::min<unsigned int>(11, (unsigned int) (end - data))) == 0) {
/*Indicates that the request line is ancient HTTP*/
return ConsumeRequestLineResult::shortRead(true);
return ConsumeRequestLineResult::shortRead(true, isConnect);
}
return ConsumeRequestLineResult::error(HTTP_HEADER_PARSER_ERROR_INVALID_HTTP_VERSION);
}
if (memcmp(" HTTP/1.1\r\n", data, 11) == 0) {
return ConsumeRequestLineResult::success(nextPosition);
return ConsumeRequestLineResult::success(nextPosition, false, isConnect);
} else if (memcmp(" HTTP/1.0\r\n", data, 11) == 0) {
/*Indicates that the request line is ancient HTTP*/
return ConsumeRequestLineResult::success(nextPosition, true);
return ConsumeRequestLineResult::success(nextPosition, true, isConnect);
}
/* If we stand at the post padded CR, we have fragmented input so try again later */
if (data[0] == '\r') {
return ConsumeRequestLineResult::shortRead();
return ConsumeRequestLineResult::shortRead(false, isConnect);
}
/* This is an error */
return ConsumeRequestLineResult::error(HTTP_HEADER_PARSER_ERROR_INVALID_HTTP_VERSION);
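The CONNECT detection added in this hunk keys off the method token's length and bytes: `(data - start) == 7 && memcmp(start, "CONNECT", 7) == 0`, where `start` points at the first byte of the method and `data` at the terminating space. A minimal standalone sketch of that token check, with `is_connect_method` as an illustrative name:

```cpp
#include <cassert>
#include <cstring>

// Mirror of the parser's check: a method token is CONNECT iff it is
// exactly 7 bytes long and those bytes compare equal to "CONNECT".
// start = first byte of the method, data = one past its last byte.
static bool is_connect_method(const char* start, const char* data) {
    return (data - start) == 7 && memcmp(start, "CONNECT", 7) == 0;
}
```

Checking the length before `memcmp` keeps the comparison from reading past a shorter token such as `GET`.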
@@ -602,14 +606,14 @@ namespace uWS
/* If we stand at the post padded CR, we have fragmented input so try again later */
if (data[0] == '\r') {
return ConsumeRequestLineResult::shortRead();
return ConsumeRequestLineResult::shortRead(false, isConnect);
}
if (data[0] == 32) {
switch (isHTTPorHTTPSPrefixForProxies(data + 1, end)) {
// If we haven't received enough data to check if it's http:// or https://, let's try again later
case -1:
return ConsumeRequestLineResult::shortRead();
return ConsumeRequestLineResult::shortRead(false, isConnect);
// Otherwise, if it's not http:// or https://, return 400
default:
return ConsumeRequestLineResult::error(HTTP_HEADER_PARSER_ERROR_INVALID_REQUEST);
@@ -635,7 +639,7 @@ namespace uWS
}
/* End is only used for the proxy parser. The HTTP parser recognizes "\ra" as invalid "\r\n" scan and breaks. */
static HttpParserResult getHeaders(char *postPaddedBuffer, char *end, struct HttpRequest::Header *headers, void *reserved, bool &isAncientHTTP, bool useStrictMethodValidation, uint64_t maxHeaderSize) {
static HttpParserResult getHeaders(char *postPaddedBuffer, char *end, struct HttpRequest::Header *headers, void *reserved, bool &isAncientHTTP, bool &isConnectRequest, bool useStrictMethodValidation, uint64_t maxHeaderSize) {
char *preliminaryKey, *preliminaryValue, *start = postPaddedBuffer;
#ifdef UWS_WITH_PROXY
/* ProxyParser is passed as reserved parameter */
@@ -689,6 +693,9 @@ namespace uWS
if(requestLineResult.isAncientHTTP) {
isAncientHTTP = true;
}
if(requestLineResult.isConnect) {
isConnectRequest = true;
}
/* No request headers found */
const char * headerStart = (headers[0].key.length() > 0) ? headers[0].key.data() : end;
@@ -798,7 +805,7 @@ namespace uWS
/* This is the only caller of getHeaders and is thus the deepest part of the parser. */
template <bool ConsumeMinimally>
HttpParserResult fenceAndConsumePostPadded(uint64_t maxHeaderSize, bool requireHostHeader, bool useStrictMethodValidation, char *data, unsigned int length, void *user, void *reserved, HttpRequest *req, MoveOnlyFunction<void *(void *, HttpRequest *)> &requestHandler, MoveOnlyFunction<void *(void *, std::string_view, bool)> &dataHandler) {
HttpParserResult fenceAndConsumePostPadded(uint64_t maxHeaderSize, bool& isConnectRequest, bool requireHostHeader, bool useStrictMethodValidation, char *data, unsigned int length, void *user, void *reserved, HttpRequest *req, MoveOnlyFunction<void *(void *, HttpRequest *)> &requestHandler, MoveOnlyFunction<void *(void *, std::string_view, bool)> &dataHandler) {
/* How much data we CONSUMED (to throw away) */
unsigned int consumedTotal = 0;
@@ -809,7 +816,7 @@ namespace uWS
data[length + 1] = 'a'; /* Anything that is not \n, to trigger "invalid request" */
req->ancientHttp = false;
for (;length;) {
auto result = getHeaders(data, data + length, req->headers, reserved, req->ancientHttp, useStrictMethodValidation, maxHeaderSize);
auto result = getHeaders(data, data + length, req->headers, reserved, req->ancientHttp, isConnectRequest, useStrictMethodValidation, maxHeaderSize);
if(result.isError()) {
return result;
}
@@ -916,6 +923,10 @@ namespace uWS
length -= emittable;
consumedTotal += emittable;
}
} else if(isConnectRequest) {
// This only serves to mark that the CONNECT request has read all headers
// and can start emitting data
remainingStreamingBytes = STATE_IS_CHUNKED;
} else {
/* If we came here without a body; emit an empty data chunk to signal no data */
dataHandler(user, {}, true);
@@ -931,15 +942,16 @@ namespace uWS
}
public:
HttpParserResult consumePostPadded(uint64_t maxHeaderSize, bool requireHostHeader, bool useStrictMethodValidation, char *data, unsigned int length, void *user, void *reserved, MoveOnlyFunction<void *(void *, HttpRequest *)> &&requestHandler, MoveOnlyFunction<void *(void *, std::string_view, bool)> &&dataHandler) {
HttpParserResult consumePostPadded(uint64_t maxHeaderSize, bool& isConnectRequest, bool requireHostHeader, bool useStrictMethodValidation, char *data, unsigned int length, void *user, void *reserved, MoveOnlyFunction<void *(void *, HttpRequest *)> &&requestHandler, MoveOnlyFunction<void *(void *, std::string_view, bool)> &&dataHandler) {
/* This resets BloomFilter by construction, but later we also reset it again.
* Optimize this to skip resetting twice (req could be made global) */
HttpRequest req;
if (remainingStreamingBytes) {
/* It's either chunked or with a content-length */
if (isParsingChunkedEncoding(remainingStreamingBytes)) {
if (isConnectRequest) {
dataHandler(user, std::string_view(data, length), false);
return HttpParserResult::success(0, user);
} else if (isParsingChunkedEncoding(remainingStreamingBytes)) {
/* It's either chunked or with a content-length */
std::string_view dataToConsume(data, length);
for (auto chunk : uWS::ChunkIterator(&dataToConsume, &remainingStreamingBytes)) {
dataHandler(user, chunk, chunk.length() == 0);
@@ -950,6 +962,7 @@ public:
data = (char *) dataToConsume.data();
length = (unsigned int) dataToConsume.length();
} else {
// this is exactly the same as below!
// todo: refactor this
if (remainingStreamingBytes >= length) {
@@ -980,7 +993,7 @@ public:
fallback.append(data, maxCopyDistance);
// break here on break
HttpParserResult consumed = fenceAndConsumePostPadded<true>(maxHeaderSize, requireHostHeader, useStrictMethodValidation, fallback.data(), (unsigned int) fallback.length(), user, reserved, &req, requestHandler, dataHandler);
HttpParserResult consumed = fenceAndConsumePostPadded<true>(maxHeaderSize, isConnectRequest, requireHostHeader, useStrictMethodValidation, fallback.data(), (unsigned int) fallback.length(), user, reserved, &req, requestHandler, dataHandler);
/* Return data will be different than user if we are upgraded to WebSocket or have an error */
if (consumed.returnedData != user) {
return consumed;
@@ -997,8 +1010,11 @@ public:
length -= consumedBytes - had;
if (remainingStreamingBytes) {
-/* It's either chunked or with a content-length */
-if (isParsingChunkedEncoding(remainingStreamingBytes)) {
+if(isConnectRequest) {
+dataHandler(user, std::string_view(data, length), false);
+return HttpParserResult::success(0, user);
+} else if (isParsingChunkedEncoding(remainingStreamingBytes)) {
+/* It's either chunked or with a content-length */
std::string_view dataToConsume(data, length);
for (auto chunk : uWS::ChunkIterator(&dataToConsume, &remainingStreamingBytes)) {
dataHandler(user, chunk, chunk.length() == 0);
@@ -1037,7 +1053,7 @@ public:
}
}
-HttpParserResult consumed = fenceAndConsumePostPadded<false>(maxHeaderSize, requireHostHeader, useStrictMethodValidation, data, length, user, reserved, &req, requestHandler, dataHandler);
+HttpParserResult consumed = fenceAndConsumePostPadded<false>(maxHeaderSize, isConnectRequest, requireHostHeader, useStrictMethodValidation, data, length, user, reserved, &req, requestHandler, dataHandler);
/* Return data will be different than user if we are upgraded to WebSocket or have an error */
if (consumed.returnedData != user) {
return consumed;
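The hunks above route CONNECT request bodies around the chunked decoder: a CONNECT tunnel carries opaque bytes with no framing, so the parser hands `data` to `dataHandler` unmodified, while ordinary chunked bodies still go through `ChunkIterator`. A minimal JavaScript sketch of the chunk decoding that `ChunkIterator` performs (names and error handling here are illustrative, not the uWS API):

```javascript
// Decode an HTTP/1.1 chunked transfer-encoded body.
// Each chunk is "<hex length>\r\n<bytes>\r\n"; a zero-length
// chunk terminates the stream (trailers are ignored here).
function decodeChunked(body) {
  let out = "";
  let pos = 0;
  for (;;) {
    const lineEnd = body.indexOf("\r\n", pos);
    if (lineEnd === -1) throw new Error("truncated chunk size line");
    const size = parseInt(body.slice(pos, lineEnd), 16);
    if (Number.isNaN(size)) throw new Error("invalid chunk size");
    pos = lineEnd + 2;
    if (size === 0) return out; // final chunk
    out += body.slice(pos, pos + size);
    pos += size + 2; // skip chunk data and its trailing CRLF
  }
}
```

A CONNECT request, by contrast, has no such framing once the tunnel is established, which is why the `isConnectRequest` branches return the raw buffer instead of iterating chunks.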

View File

@@ -243,7 +243,7 @@ public:
/* Manually upgrade to WebSocket. Typically called in upgrade handler. Immediately calls open handler.
* NOTE: Will invalidate 'this' as socket might change location in memory. Throw away after use. */
template <typename UserData>
-us_socket_t *upgrade(UserData &&userData, std::string_view secWebSocketKey, std::string_view secWebSocketProtocol,
+us_socket_t *upgrade(UserData&& userData, std::string_view secWebSocketKey, std::string_view secWebSocketProtocol,
std::string_view secWebSocketExtensions,
struct us_socket_context_t *webSocketContext) {
@@ -350,7 +350,8 @@ public:
us_socket_timeout(SSL, (us_socket_t *) webSocket, webSocketContextData->idleTimeoutComponents.first);
/* Move construct the UserData right before calling open handler */
-new (webSocket->getUserData()) UserData(std::move(userData));
+new (webSocket->getUserData()) UserData(std::forward<UserData>(userData));
/* Emit open event and start the timeout */
if (webSocketContextData->openHandler) {
@@ -741,6 +742,10 @@ public:
return httpResponseData->socketData;
}
+bool isConnectRequest() {
+HttpResponseData<SSL> *httpResponseData = getHttpResponseData();
+return httpResponseData->isConnectRequest;
+}
void setWriteOffset(uint64_t offset) {
HttpResponseData<SSL> *httpResponseData = getHttpResponseData();

View File

@@ -108,6 +108,7 @@ struct HttpResponseData : AsyncSocketData<SSL>, HttpParser {
uint8_t state = 0;
uint8_t idleTimeout = 10; // default HTTP_TIMEOUT 10 seconds
bool fromAncientRequest = false;
+bool isConnectRequest = false;
bool isIdle = true;
bool shouldCloseOnceIdle = false;

View File

@@ -1,5 +1,5 @@
#!/bin/sh
-# Version: 18
+# Version: 19
# A script that installs the dependencies needed to build and test Bun.
# This should work on macOS and Linux with a POSIX shell.
@@ -685,6 +685,8 @@ install_common_software() {
apt-transport-https \
software-properties-common
fi
+install_packages \
+libc6-dbg
;;
dnf)
install_packages \
@@ -1193,7 +1195,7 @@ install_docker() {
execute_sudo amazon-linux-extras install docker
;;
amzn-* | alpine-*)
-install_packages docker
+install_packages docker docker-cli-compose
;;
*)
sh="$(require sh)"
@@ -1208,10 +1210,17 @@ install_docker() {
if [ -f "$systemctl" ]; then
execute_sudo "$systemctl" enable docker
fi
+if [ "$os" = "linux" ] && [ "$distro" = "alpine" ]; then
+execute doas rc-update add docker default
+execute doas rc-service docker start
+fi
getent="$(which getent)"
if [ -n "$("$getent" group docker)" ]; then
usermod="$(which usermod)"
if [ -z "$usermod" ]; then
usermod="$(sudo which usermod)"
fi
if [ -f "$usermod" ]; then
execute_sudo "$usermod" -aG docker "$user"
fi

View File

@@ -88,13 +88,13 @@ if [ "$MODE" = "check" ]; then
fi
dir=$(dirname "$dir")
done
if ! $CLANG_FORMAT --dry-run --Werror "$file" 2>/dev/null; then
echo "Format check failed: $file"
FAILED=1
fi
done
if [ $FAILED -eq 1 ]; then
echo "Some files need formatting. Run 'bun run clang-format' to fix."
exit 1
@@ -123,4 +123,4 @@ else
echo " fix - Same as format" >&2
echo " diff - Show formatting differences" >&2
exit 1
fi
fi

View File

@@ -80,7 +80,7 @@ function getNodeParallelTestTimeout(testPath) {
if (testPath.includes("test-dns")) {
return 90_000;
}
-return 10_000;
+return 20_000;
}
process.on("SIGTRAP", () => {
@@ -593,7 +593,7 @@ async function runTests() {
}
if ((basename(execPath).includes("asan") || !isCI) && shouldValidateLeakSan(testPath)) {
env.BUN_DESTRUCT_VM_ON_EXIT = "1";
-env.ASAN_OPTIONS = "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=1";
+env.ASAN_OPTIONS = "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=1:abort_on_error=1";
// prettier-ignore
env.LSAN_OPTIONS = `malloc_context_size=100:print_suppressions=0:suppressions=${process.cwd()}/test/leaksan.supp`;
}
@@ -658,6 +658,7 @@ async function runTests() {
const buildResult = await spawnBun(execPath, {
cwd: vendorPath,
args: ["run", "build"],
+timeout: 60_000,
});
if (!buildResult.ok) {
throw new Error(`Failed to build vendor: ${buildResult.error}`);
@@ -684,6 +685,9 @@ async function runTests() {
}
}
+// tests are all over, close the group from the final test. any further output should print ungrouped.
+startGroup("End");
if (isGithubAction) {
reportOutputToGitHubAction("failing_tests_count", failedResults.length);
const markdown = formatTestToMarkdown(failedResults, false, 0);
@@ -1133,10 +1137,6 @@ async function spawnBun(execPath, { args, cwd, timeout, env, stdout, stderr }) {
: { BUN_ENABLE_CRASH_REPORTING: "0" }),
};
-if (basename(execPath).includes("asan") && bunEnv.ASAN_OPTIONS === undefined) {
-bunEnv.ASAN_OPTIONS = "allow_user_segv_handler=1:disable_coredump=0";
-}
if (isWindows && bunEnv.Path) {
delete bunEnv.Path;
}
@@ -1335,7 +1335,7 @@ async function spawnBunTest(execPath, testPath, opts = { cwd }) {
}
if ((basename(execPath).includes("asan") || !isCI) && shouldValidateLeakSan(relative(cwd, absPath))) {
env.BUN_DESTRUCT_VM_ON_EXIT = "1";
-env.ASAN_OPTIONS = "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=1";
+env.ASAN_OPTIONS = "allow_user_segv_handler=1:disable_coredump=0:detect_leaks=1:abort_on_error=1";
// prettier-ignore
env.LSAN_OPTIONS = `malloc_context_size=100:print_suppressions=0:suppressions=${process.cwd()}/test/leaksan.supp`;
}

View File

@@ -2808,6 +2808,7 @@ export function endGroup() {
} else {
console.groupEnd();
}
// when a file exits with an ASAN error, there is no trailing newline so we add one here to make sure `console.group()` detection doesn't get broken in CI.
console.log();
}
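The extra `console.log()` guards GitHub Actions log grouping: if a test process dies mid-line (for example, an ASAN abort with no trailing newline), the next `::group::` marker would be glued onto that partial line and Actions would not recognize it. A small illustrative sketch — the marker strings are real Actions workflow-command syntax, but the helper itself is hypothetical:

```javascript
// GitHub Actions only folds a group when "::group::" starts a line.
// Prepending a newline when prior output ended mid-line keeps
// grouping robust even after abrupt process output.
function startGroupLine(title, previousOutputEndedWithNewline) {
  const prefix = previousOutputEndedWithNewline ? "" : "\n";
  return `${prefix}::group::${title}\n`;
}
```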
@@ -2865,6 +2866,12 @@ export function printEnvironment() {
spawnSync([shell, "-c", "free -m -w"], { stdio: "inherit" });
}
});
+startGroup("Docker", () => {
+const shell = which(["sh", "bash"]);
+if (shell) {
+spawnSync([shell, "-c", "docker ps"], { stdio: "inherit" });
+}
+});
}
if (isWindows) {
startGroup("Disk (win)", () => {

View File

@@ -121,6 +121,10 @@ pub fn exit(code: u32) noreturn {
std.os.windows.kernel32.ExitProcess(code);
},
else => {
+if (Environment.enable_asan) {
+std.c.exit(@bitCast(code));
+std.c.abort(); // exit should be noreturn
+}
bun.c.quick_exit(@bitCast(code));
std.c.abort(); // quick_exit should be noreturn
},

View File

@@ -724,7 +724,8 @@ pub const StandaloneModuleGraph = struct {
return bun.invalid_fd;
};
defer pe_file.deinit();
-pe_file.addBunSection(bytes) catch |err| {
+// Always strip authenticode when adding .bun section for --compile
+pe_file.addBunSection(bytes, .strip_always) catch |err| {
Output.prettyErrorln("Error adding Bun section to PE file: {}", .{err});
cleanup(zname, cloned_executable_fd);
return bun.invalid_fd;

View File

@@ -1,7 +1,6 @@
#include "wtf/Compiler.h"
#if ASAN_ENABLED
// NOLINTNEXTLINE(bugprone-reserved-identifier)
const char* __asan_default_options(void)
{
// detect_stack_use_after_return causes some stack allocations to be made on the heap instead,

View File

@@ -371,41 +371,20 @@ pub const Runner = struct {
const _entry = this.visited.getOrPut(this.allocator, value) catch unreachable;
if (_entry.found_existing) {
switch (_entry.value_ptr.*.data) {
.e_object, .e_array => {
this.log.addErrorFmt(this.source, this.caller.loc, this.allocator, "converting circular structure to Bun AST is not implemented yet", .{}) catch unreachable;
return error.MacroFailed;
},
else => {},
}
return _entry.value_ptr.*;
}
var iter = try jsc.JSArrayIterator.init(value, this.global);
if (iter.len == 0) {
const result = Expr.init(
E.Array,
E.Array{
.items = ExprNodeList.empty,
.was_originally_macro = true,
},
this.caller.loc,
);
_entry.value_ptr.* = result;
return result;
}
// Process all array items
var array = this.allocator.alloc(Expr, iter.len) catch unreachable;
var out = Expr.init(
errdefer this.allocator.free(array);
const expr = Expr.init(
E.Array,
E.Array{
.items = ExprNodeList.empty,
.was_originally_macro = true,
},
E.Array{ .items = ExprNodeList.empty, .was_originally_macro = true },
this.caller.loc,
);
_entry.value_ptr.* = out;
errdefer this.allocator.free(array);
_entry.value_ptr.* = expr;
var i: usize = 0;
while (try iter.next()) |item| {
array[i] = try this.run(item);
@@ -413,24 +392,27 @@ pub const Runner = struct {
continue;
i += 1;
}
-out.data.e_array.items = ExprNodeList.fromOwnedSlice(array);
-_entry.value_ptr.* = out;
-return out;
+expr.data.e_array.items = ExprNodeList.fromOwnedSlice(array);
+expr.data.e_array.items.len = @truncate(i);
+return expr;
},
// TODO: optimize this
jsc.ConsoleObject.Formatter.Tag.Object => {
this.is_top_level = false;
const _entry = this.visited.getOrPut(this.allocator, value) catch unreachable;
if (_entry.found_existing) {
switch (_entry.value_ptr.*.data) {
.e_object, .e_array => {
this.log.addErrorFmt(this.source, this.caller.loc, this.allocator, "converting circular structure to Bun AST is not implemented yet", .{}) catch unreachable;
return error.MacroFailed;
},
else => {},
}
return _entry.value_ptr.*;
}
// Reserve a placeholder to break cycles.
const expr = Expr.init(
E.Object,
E.Object{ .properties = G.Property.List{}, .was_originally_macro = true },
this.caller.loc,
);
_entry.value_ptr.* = expr;
// SAFETY: tag ensures `value` is an object.
const obj = value.getObject() orelse unreachable;
var object_iter = try jsc.JSPropertyIterator(.{
@@ -439,36 +421,28 @@ pub const Runner = struct {
}).init(this.global, obj);
defer object_iter.deinit();
const out = _entry.value_ptr;
out.* = Expr.init(
E.Object,
E.Object{
.properties = bun.handleOom(
G.Property.List.initCapacity(this.allocator, object_iter.len),
),
.was_originally_macro = true,
},
this.caller.loc,
// Build properties list
var properties = bun.handleOom(
G.Property.List.initCapacity(this.allocator, object_iter.len),
);
const properties = &out.data.e_object.properties;
errdefer properties.clearAndFree(this.allocator);
while (try object_iter.next()) |prop| {
bun.assertf(
object_iter.i == properties.len,
"`properties` unexpectedly modified (length {d}, expected {d})",
.{ properties.len, object_iter.i },
);
properties.appendAssumeCapacity(G.Property{
const object_value = try this.run(object_iter.value);
properties.append(this.allocator, G.Property{
.key = Expr.init(
E.String,
E.String.init(prop.toOwnedSlice(this.allocator) catch unreachable),
this.caller.loc,
),
.value = try this.run(object_iter.value),
});
.value = object_value,
}) catch |err| bun.handleOom(err);
}
return out.*;
expr.data.e_object.properties = properties;
return expr;
},
.JSON => {

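Both the array and object branches above use the same trick for cyclic inputs: reserve a placeholder in the `visited` map before recursing, so a re-entrant visit finds an existing entry and can report the cycle instead of looping forever. Sketched in JavaScript as a generic converter (not the Zig macro runner itself; names are illustrative):

```javascript
// Convert a possibly-cyclic object graph, erroring on revisited
// containers the way the macro runner does.
function convert(value, visited = new Map()) {
  if (typeof value !== "object" || value === null) return value;
  if (visited.has(value)) {
    throw new Error("converting circular structure is not implemented");
  }
  const out = Array.isArray(value) ? [] : {};
  visited.set(value, out); // placeholder reserved before recursing
  for (const [k, v] of Object.entries(value)) out[k] = convert(v, visited);
  return out;
}
```

Reserving the placeholder before the recursive calls is what makes cycle detection possible at all; inserting it only after recursion would recurse infinitely on `a.self = a`.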
View File

@@ -102,6 +102,7 @@ pub fn buildCommand(ctx: bun.cli.Command.Context) !void {
if (vm.exit_handler.exit_code == 0) {
vm.exit_handler.exit_code = 1;
}
+vm.onExit();
vm.globalExit();
},
else => |e| return e,

View File

@@ -127,7 +127,6 @@ pub fn create(this: *VirtualMachine, globalObject: *JSGlobalObject) !void {
debugger.script_execution_context_id = Bun__createJSDebugger(globalObject);
if (!this.has_started_debugger) {
this.has_started_debugger = true;
-futex_atomic = std.atomic.Value(u32).init(0);
var thread = try std.Thread.spawn(.{}, startJSDebuggerThread, .{this});
thread.detach();
}

View File

@@ -833,8 +833,8 @@ pub fn onExit(this: *VirtualMachine) void {
extern fn Zig__GlobalObject__destructOnExit(*JSGlobalObject) void;
pub fn globalExit(this: *VirtualMachine) noreturn {
bun.assert(this.isShuttingDown());
-if (this.shouldDestructMainThreadOnExit()) {
+this.is_shutting_down = true;
if (this.eventLoop().forever_timer) |t| t.deinit(true);
Zig__GlobalObject__destructOnExit(this.global);
this.transpiler.deinit();
@@ -2308,7 +2308,7 @@ pub fn loadMacroEntryPoint(this: *VirtualMachine, entry_path: string, function_n
/// We cannot hold it from Zig code because it relies on C++ RAII to automatically release the lock
/// and it is not safe to copy the lock itself
/// So we have to wrap entry points to & from JavaScript with an API lock that calls out to C++
-pub inline fn runWithAPILock(this: *VirtualMachine, comptime Context: type, ctx: *Context, comptime function: fn (ctx: *Context) void) void {
+pub fn runWithAPILock(this: *VirtualMachine, comptime Context: type, ctx: *Context, comptime function: fn (ctx: *Context) void) void {
this.global.vm().holdAPILock(ctx, jsc.OpaqueWrap(Context, function));
}

View File

@@ -1294,7 +1294,7 @@ pub fn setTLSDefaultCiphers(globalThis: *jsc.JSGlobalObject, _: *jsc.JSObject, c
}
pub fn getValkeyDefaultClient(globalThis: *jsc.JSGlobalObject, _: *jsc.JSObject) jsc.JSValue {
-const valkey = jsc.API.Valkey.create(globalThis, &.{.js_undefined}) catch |err| {
+var valkey = jsc.API.Valkey.createNoJs(globalThis, &.{.js_undefined}) catch |err| {
if (err != error.JSError) {
_ = globalThis.throwError(err, "Failed to create Redis client") catch {};
return .zero;
@@ -1302,7 +1302,11 @@ pub fn getValkeyDefaultClient(globalThis: *jsc.JSGlobalObject, _: *jsc.JSObject)
return .zero;
};
-return valkey.toJS(globalThis);
+const as_js = valkey.toJS(globalThis);
+valkey.this_value = jsc.JSRef.initWeak(as_js);
+return as_js;
}
pub fn getValkeyClientConstructor(globalThis: *jsc.JSGlobalObject, _: *jsc.JSObject) jsc.JSValue {
@@ -1938,6 +1942,7 @@ pub const JSZstd = struct {
pub fn runFromJS(this: *ZstdJob) void {
defer this.deinit();
if (this.vm.isShuttingDown()) {
return;
}

View File

@@ -925,7 +925,14 @@ pub fn parse(
const root = bun.interchange.yaml.YAML.parse(source, &log, arena.allocator()) catch |err| return switch (err) {
error.OutOfMemory => |oom| oom,
error.StackOverflow => global.throwStackOverflow(),
-else => global.throwValue(try log.toJS(global, bun.default_allocator, "Failed to parse YAML")),
+else => {
+if (log.msgs.items.len > 0) {
+const first_msg = log.msgs.items[0];
+const error_text = first_msg.data.text;
+return global.throwValue(global.createSyntaxErrorInstance("YAML Parse error: {s}", .{error_text}));
+}
+return global.throwValue(global.createSyntaxErrorInstance("YAML Parse error: Unable to parse YAML string", .{}));
+},
};
var ctx: ParserCtx = .{
@@ -1023,7 +1030,7 @@ const ParserCtx = struct {
const key_str = try key.toBunString(ctx.global);
defer key_str.deref();
-obj.putMayBeIndex(ctx.global, &key_str, value);
+try obj.putMayBeIndex(ctx.global, &key_str, value);
}
return obj;
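The new error path above surfaces the first logged parser message instead of a generic failure. The shape of that fallback can be sketched in JavaScript (function and field names here are illustrative, not Bun internals):

```javascript
// Build a SyntaxError from a parser log, preferring the first
// recorded message and falling back to a generic one.
function yamlParseError(logMsgs) {
  const detail =
    logMsgs.length > 0 ? logMsgs[0].text : "Unable to parse YAML string";
  return new SyntaxError(`YAML Parse error: ${detail}`);
}
```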

View File

@@ -111,7 +111,7 @@ pub fn NewSocket(comptime ssl: bool) type {
pub fn doConnect(this: *This, connection: Listener.UnixOrHost) !void {
bun.assert(this.socket_context != null);
this.ref();
-errdefer this.deref();
+defer this.deref();
switch (connection) {
.host => |c| {

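The `errdefer` → `defer` change above rebalances the reference count: `ref()` only guards the duration of `doConnect` itself, so the matching `deref()` must run on success as well as on error. In JavaScript terms, this is the difference between releasing only in a catch block and releasing in finally (the helper below is a hypothetical illustration, not Bun code):

```javascript
// A ref-counted resource: the acquire must be balanced by a release
// on every exit path, not just the error path.
function withRef(counter, fn) {
  counter.refs += 1; // ref()
  try {
    return fn();
  } finally {
    counter.refs -= 1; // deref(), runs on success and on error alike
  }
}
```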
View File

@@ -14,21 +14,23 @@ fn onClose(socket: *uws.udp.Socket) callconv(.C) void {
const this: *UDPSocket = bun.cast(*UDPSocket, socket.user().?);
this.closed = true;
this.poll_ref.disable();
-_ = this.js_refcount.fetchSub(1, .monotonic);
+this.this_value.downgrade();
+this.socket = null;
}
fn onDrain(socket: *uws.udp.Socket) callconv(.C) void {
jsc.markBinding(@src());
const this: *UDPSocket = bun.cast(*UDPSocket, socket.user().?);
-const callback = this.config.on_drain;
-if (callback == .zero) return;
+const thisValue = this.this_value.tryGet() orelse return;
+const callback = UDPSocket.js.gc.on_drain.get(thisValue) orelse return;
+if (callback.isEmptyOrUndefinedOrNull()) return;
const vm = jsc.VirtualMachine.get();
const event_loop = vm.eventLoop();
event_loop.enter();
defer event_loop.exit();
-_ = callback.call(this.globalThis, this.thisValue, &.{this.thisValue}) catch |err| {
+_ = callback.call(this.globalThis, thisValue, &.{thisValue}) catch |err| {
this.callErrorHandler(.zero, this.globalThis.takeException(err));
};
}
@@ -37,10 +39,12 @@ fn onData(socket: *uws.udp.Socket, buf: *uws.udp.PacketBuffer, packets: c_int) c
jsc.markBinding(@src());
const udpSocket: *UDPSocket = bun.cast(*UDPSocket, socket.user().?);
-const callback = udpSocket.config.on_data;
-if (callback == .zero) return;
+const thisValue = udpSocket.this_value.tryGet() orelse return;
+const callback = UDPSocket.js.gc.on_data.get(thisValue) orelse return;
+if (callback.isEmptyOrUndefinedOrNull()) return;
const globalThis = udpSocket.globalThis;
defer thisValue.ensureStillAlive();
var i: c_int = 0;
while (i < packets) : (i += 1) {
@@ -73,12 +77,6 @@ fn onData(socket: *uws.udp.Socket, buf: *uws.udp.PacketBuffer, packets: c_int) c
const slice = buf.getPayload(i);
const loop = udpSocket.vm.eventLoop();
loop.enter();
defer loop.exit();
_ = udpSocket.js_refcount.fetchAdd(1, .monotonic);
defer _ = udpSocket.js_refcount.fetchSub(1, .monotonic);
const span = std.mem.span(hostname.?);
var hostname_string = if (scope_id) |id| blk: {
if (comptime !bun.Environment.isWindows) {
@@ -91,13 +89,18 @@ fn onData(socket: *uws.udp.Socket, buf: *uws.udp.PacketBuffer, packets: c_int) c
break :blk bun.handleOom(bun.String.createFormat("{s}%{d}", .{ span, id }));
} else bun.String.init(span);
_ = callback.call(globalThis, udpSocket.thisValue, &.{
udpSocket.thisValue,
const loop = udpSocket.vm.eventLoop();
loop.enter();
defer loop.exit();
defer thisValue.ensureStillAlive();
_ = callback.call(globalThis, thisValue, &.{
thisValue,
udpSocket.config.binary_type.toJS(slice, globalThis) catch return, // TODO: properly propagate exception upwards
.jsNumber(port),
hostname_string.transferToJS(globalThis),
}) catch |err| {
udpSocket.callErrorHandler(.zero, udpSocket.globalThis.takeException(err));
udpSocket.callErrorHandler(.zero, globalThis.takeException(err));
};
}
}
@@ -112,37 +115,20 @@ pub const UDPSocketConfig = struct {
const ConnectConfig = struct {
port: u16,
address: [:0]u8,
address: bun.String,
};
hostname: [:0]u8,
hostname: bun.String = .empty,
connect: ?ConnectConfig = null,
port: u16,
flags: i32,
port: u16 = 0,
flags: i32 = 0,
binary_type: jsc.ArrayBuffer.BinaryType = .Buffer,
on_data: JSValue = .zero,
on_drain: JSValue = .zero,
on_error: JSValue = .zero,
pub fn fromJS(globalThis: *JSGlobalObject, options: JSValue) bun.JSError!This {
pub fn fromJS(globalThis: *JSGlobalObject, options: JSValue, thisValue: JSValue) bun.JSError!This {
if (options.isEmptyOrUndefinedOrNull() or !options.isObject()) {
return globalThis.throwInvalidArguments("Expected an object", .{});
}
const hostname = brk: {
if (try options.getTruthy(globalThis, "hostname")) |value| {
if (!value.isString()) {
return globalThis.throwInvalidArguments("Expected \"hostname\" to be a string", .{});
}
const str = value.toBunString(globalThis) catch @panic("unreachable");
defer str.deref();
break :brk bun.handleOom(str.toOwnedSliceZ(default_allocator));
} else {
break :brk bun.handleOom(default_allocator.dupeZ(u8, "0.0.0.0"));
}
};
defer if (globalThis.hasException()) default_allocator.free(hostname);
const port: u16 = brk: {
if (try options.getTruthy(globalThis, "port")) |value| {
const number = try value.coerceToInt32(globalThis);
@@ -160,12 +146,25 @@ pub const UDPSocketConfig = struct {
else
0;
const hostname = brk: {
if (try options.getTruthy(globalThis, "hostname")) |value| {
if (!value.isString()) {
return globalThis.throwInvalidArguments("Expected \"hostname\" to be a string", .{});
}
break :brk try value.toBunString(globalThis);
} else {
break :brk bun.String.static("0.0.0.0");
}
};
var config = This{
.hostname = hostname,
.port = port,
.flags = flags,
};
errdefer config.deinit();
if (try options.getTruthy(globalThis, "socket")) |socket| {
if (!socket.isObject()) {
return globalThis.throwInvalidArguments("Expected \"socket\" to be an object", .{});
@@ -186,15 +185,8 @@ pub const UDPSocketConfig = struct {
if (!value.isCell() or !value.isCallable()) {
return globalThis.throwInvalidArguments("Expected \"socket.{s}\" to be a function", .{handler.@"0"});
}
@field(config, handler.@"1") = value.withAsyncContextIfNeeded(globalThis);
}
}
}
defer {
if (globalThis.hasException()) {
if (config.connect) |connect| {
default_allocator.free(connect.address);
const callback = value.withAsyncContextIfNeeded(globalThis);
UDPSocket.js.gc.set(@field(UDPSocket.js.gc, handler.@"1"), thisValue, globalThis, callback);
}
}
}
@@ -217,9 +209,7 @@ pub const UDPSocketConfig = struct {
};
const connect_port = try connect_port_js.coerceToInt32(globalThis);
const str = try connect_host_js.toBunString(globalThis);
defer str.deref();
const connect_host = bun.handleOom(str.toOwnedSliceZ(default_allocator));
const connect_host = try connect_host_js.toBunString(globalThis);
config.connect = .{
.port = if (connect_port < 1 or connect_port > 0xffff) 0 else @as(u16, @intCast(connect_port)),
@@ -227,28 +217,13 @@ pub const UDPSocketConfig = struct {
};
}
config.protect();
return config;
}
pub fn protect(this: This) void {
inline for (handlers) |handler| {
@field(this, handler.@"1").protect();
}
}
pub fn unprotect(this: This) void {
inline for (handlers) |handler| {
@field(this, handler.@"1").unprotect();
}
}
pub fn deinit(this: This) void {
this.unprotect();
default_allocator.free(this.hostname);
if (this.connect) |val| {
default_allocator.free(val.address);
pub fn deinit(this: *This) void {
this.hostname.deref();
if (this.connect) |*val| {
val.address.deref();
}
}
};
@@ -258,11 +233,11 @@ pub const UDPSocket = struct {
config: UDPSocketConfig,
socket: *uws.udp.Socket,
socket: ?*uws.udp.Socket = null,
loop: *uws.Loop,
globalThis: *JSGlobalObject,
thisValue: JSValue = .zero,
this_value: JSRef = JSRef.empty(),
jsc_ref: jsc.Ref = jsc.Ref.init(),
poll_ref: Async.KeepAlive = Async.KeepAlive.init(),
@@ -270,7 +245,6 @@ pub const UDPSocket = struct {
closed: bool = false,
connect_info: ?ConnectInfo = null,
vm: *jsc.VirtualMachine,
js_refcount: std.atomic.Value(usize) = std.atomic.Value(usize).init(1),
const ConnectInfo = struct {
port: u16,
@@ -281,101 +255,113 @@ pub const UDPSocket = struct {
pub const fromJS = js.fromJS;
pub const fromJSDirect = js.fromJSDirect;
pub fn hasPendingActivity(this: *This) callconv(.C) bool {
return this.js_refcount.load(.monotonic) > 0;
}
pub const new = bun.TrivialNew(@This());
pub fn udpSocket(globalThis: *JSGlobalObject, options: JSValue) bun.JSError!JSValue {
log("udpSocket", .{});
const config = try UDPSocketConfig.fromJS(globalThis, options);
const vm = globalThis.bunVM();
var this = This.new(.{
.socket = undefined,
.config = config,
.socket = null,
.config = .{},
.globalThis = globalThis,
.loop = uws.Loop.get(),
.vm = vm,
});
errdefer {
this.closed = true;
if (this.socket) |socket| {
this.socket = null;
socket.close();
}
// Do not deinit, rely on GC to free it.
}
const thisValue = this.toJS(globalThis);
thisValue.ensureStillAlive();
this.this_value.setStrong(thisValue, globalThis);
this.config = try UDPSocketConfig.fromJS(globalThis, options, thisValue);
var err: i32 = 0;
if (uws.udp.Socket.create(
const hostname_slice = this.config.hostname.toUTF8(bun.default_allocator);
defer hostname_slice.deinit();
const hostname_z = bun.handleOom(bun.default_allocator.dupeZ(u8, hostname_slice.slice()));
defer bun.default_allocator.free(hostname_z);
this.socket = uws.udp.Socket.create(
this.loop,
onData,
onDrain,
onClose,
config.hostname,
config.port,
config.flags,
hostname_z,
this.config.port,
this.config.flags,
&err,
this,
)) |socket| {
this.socket = socket;
} else {
) orelse {
this.closed = true;
defer this.deinit();
if (err != 0) {
const code = @tagName(bun.sys.SystemErrno.init(@as(c_int, @intCast(err))).?);
const sys_err = jsc.SystemError{
.errno = err,
.code = bun.String.static(code),
-.message = bun.handleOom(bun.String.createFormat("bind {s} {s}", .{ code, config.hostname })),
+.message = bun.handleOom(bun.String.createFormat("bind {s} {}", .{ code, this.config.hostname })),
};
const error_value = sys_err.toErrorInstance(globalThis);
-error_value.put(globalThis, "address", try bun.String.createUTF8ForJS(globalThis, config.hostname));
+error_value.put(globalThis, "address", this.config.hostname.toJS(globalThis));
return globalThis.throwValue(error_value);
}
return globalThis.throw("Failed to bind socket", .{});
}
};
errdefer {
this.socket.close();
this.deinit();
}
if (config.connect) |connect| {
const ret = this.socket.connect(connect.address, connect.port);
if (this.config.connect) |*connect| {
const address_slice = connect.address.toUTF8(bun.default_allocator);
defer address_slice.deinit();
const address_z = bun.handleOom(bun.default_allocator.dupeZ(u8, address_slice.slice()));
defer bun.default_allocator.free(address_z);
const ret = this.socket.?.connect(address_z, connect.port);
if (ret != 0) {
-if (bun.sys.Maybe(void).errnoSys(ret, .connect)) |sys_err| {
-return globalThis.throwValue(try sys_err.toJS(globalThis));
+if (bun.sys.Maybe(void).errnoSys(ret, .connect)) |*sys_err| {
+return globalThis.throwValue(sys_err.err.toJS(globalThis));
}
if (bun.c_ares.Error.initEAI(ret)) |eai_err| {
-return globalThis.throwValue(eai_err.toJSWithSyscallAndHostname(globalThis, "connect", connect.address));
+return globalThis.throwValue(eai_err.toJSWithSyscallAndHostname(globalThis, "connect", address_slice.slice()));
}
}
this.connect_info = .{ .port = connect.port };
}
this.poll_ref.ref(vm);
const thisValue = this.toJS(globalThis);
thisValue.ensureStillAlive();
this.thisValue = thisValue;
return jsc.JSPromise.resolvedPromiseValue(globalThis, thisValue);
}
pub fn callErrorHandler(
this: *This,
thisValue: JSValue,
thisValue_: JSValue,
err: JSValue,
) void {
const callback = this.config.on_error;
const thisValue = if (thisValue_ == .zero) this.this_value.tryGet() orelse return else thisValue_;
const callback = This.js.gc.on_error.get(thisValue) orelse .zero;
const globalThis = this.globalThis;
const vm = globalThis.bunVM();
if (err.isTerminationException()) {
return;
}
if (callback == .zero) {
if (callback.isEmptyOrUndefinedOrNull()) {
_ = vm.uncaughtException(globalThis, err, false);
return;
}
_ = callback.call(globalThis, thisValue, &.{err}) catch |e| globalThis.reportActiveExceptionAsUnhandled(e);
const event_loop = vm.eventLoop();
event_loop.enter();
defer event_loop.exit();
_ = callback.call(globalThis, thisValue, &.{err.toError() orelse err}) catch |e| globalThis.reportActiveExceptionAsUnhandled(e);
}
pub fn setBroadcast(this: *This, globalThis: *JSGlobalObject, callframe: *CallFrame) bun.JSError!JSValue {
@@ -389,7 +375,7 @@ pub const UDPSocket = struct {
}
const enabled = arguments[0].toBoolean();
-const res = this.socket.setBroadcast(enabled);
+const res = this.socket.?.setBroadcast(enabled);
if (getUSError(res, .setsockopt, true)) |err| {
return globalThis.throwValue(try err.toJS(globalThis));
@@ -409,7 +395,7 @@ pub const UDPSocket = struct {
}
const enabled = arguments[0].toBoolean();
-const res = this.socket.setMulticastLoopback(enabled);
+const res = this.socket.?.setMulticastLoopback(enabled);
if (getUSError(res, .setsockopt, true)) |err| {
return globalThis.throwValue(try err.toJS(globalThis));
@@ -435,12 +421,14 @@ pub const UDPSocket = struct {
var interface = std.mem.zeroes(std.posix.sockaddr.storage);
+const socket = this.socket orelse return globalThis.throw("Socket is closed", .{});
const res = if (arguments.len > 1 and try parseAddr(this, globalThis, .jsNumber(0), arguments[1], &interface)) blk: {
if (addr.family != interface.family) {
return globalThis.throwInvalidArguments("Family mismatch between address and interface", .{});
}
-break :blk this.socket.setMembership(&addr, &interface, drop);
-} else this.socket.setMembership(&addr, null, drop);
+break :blk socket.setMembership(&addr, &interface, drop);
+} else socket.setMembership(&addr, null, drop);
if (getUSError(res, .setsockopt, true)) |err| {
return globalThis.throwValue(try err.toJS(globalThis));
@@ -483,12 +471,14 @@ pub const UDPSocket = struct {
var interface: std.posix.sockaddr.storage = undefined;
+const socket = this.socket orelse return globalThis.throw("Socket is closed", .{});
const res = if (arguments.len > 2 and try parseAddr(this, globalThis, .jsNumber(0), arguments[2], &interface)) blk: {
if (source_addr.family != interface.family) {
return globalThis.throwInvalidArguments("Family mismatch among source, group and interface addresses", .{});
}
-break :blk this.socket.setSourceSpecificMembership(&source_addr, &group_addr, &interface, drop);
-} else this.socket.setSourceSpecificMembership(&source_addr, &group_addr, null, drop);
+break :blk socket.setSourceSpecificMembership(&source_addr, &group_addr, &interface, drop);
+} else socket.setSourceSpecificMembership(&source_addr, &group_addr, null, drop);
if (getUSError(res, .setsockopt, true)) |err| {
return globalThis.throwValue(try err.toJS(globalThis));
@@ -521,7 +511,9 @@ pub const UDPSocket = struct {
return .false;
}
-const res = this.socket.setMulticastInterface(&addr);
+const socket = this.socket orelse return globalThis.throw("Socket is closed", .{});
+const res = socket.setMulticastInterface(&addr);
if (getUSError(res, .setsockopt, true)) |err| {
return globalThis.throwValue(try err.toJS(globalThis));
@@ -572,7 +564,7 @@ pub const UDPSocket = struct {
}
const ttl = try arguments[0].coerceToInt32(globalThis);
-const res = function(this.socket, ttl);
+const res = function(this.socket.?, ttl);
if (getUSError(res, .setsockopt, true)) |err| {
return globalThis.throwValue(try err.toJS(globalThis));
@@ -625,7 +617,7 @@ pub const UDPSocket = struct {
if (val.asArrayBuffer(globalThis)) |arrayBuffer| {
break :brk arrayBuffer.slice();
} else if (val.isString()) {
-break :brk val.toString(globalThis).toSlice(globalThis, alloc).slice();
+break :brk (try val.toJSString(globalThis)).toSlice(globalThis, alloc).slice();
} else {
return globalThis.throwInvalidArguments("Expected ArrayBufferView or string as payload", .{});
}
@@ -651,7 +643,8 @@ pub const UDPSocket = struct {
if (i != array_len) {
return globalThis.throwInvalidArguments("Mismatch between array length property and number of items", .{});
}
-const res = this.socket.send(payloads, lens, addr_ptrs);
+const socket = this.socket orelse return globalThis.throw("Socket is closed", .{});
+const res = socket.send(payloads, lens, addr_ptrs);
if (getUSError(res, .send, true)) |err| {
return globalThis.throwValue(try err.toJS(globalThis));
}
@@ -709,7 +702,8 @@ pub const UDPSocket = struct {
}
};
-const res = this.socket.send(&.{payload.ptr}, &.{payload.len}, &.{addr_ptr});
+const socket = this.socket orelse return globalThis.throw("Socket is closed", .{});
+const res = socket.send(&.{payload.ptr}, &.{payload.len}, &.{addr_ptr});
if (getUSError(res, .send, true)) |err| {
return globalThis.throwValue(try err.toJS(globalThis));
}
@@ -796,7 +790,12 @@ pub const UDPSocket = struct {
_: *JSGlobalObject,
_: *CallFrame,
) bun.JSError!JSValue {
-if (!this.closed) this.socket.close();
+if (!this.closed) {
+const socket = this.socket orelse return .js_undefined;
+this.socket = null;
+socket.close();
+this.this_value.downgrade();
+}
return .js_undefined;
}
@@ -809,12 +808,12 @@ pub const UDPSocket = struct {
}
const options = args.ptr[0];
-const config = try UDPSocketConfig.fromJS(globalThis, options);
+const thisValue = this.this_value.tryGet() orelse return .js_undefined;
+const config = try UDPSocketConfig.fromJS(globalThis, options, thisValue);
-config.protect();
var previous_config = this.config;
-previous_config.unprotect();
this.config = config;
+previous_config.deinit();
return .js_undefined;
}
@@ -824,13 +823,12 @@ pub const UDPSocket = struct {
}
pub fn getHostname(this: *This, _: *JSGlobalObject) JSValue {
const hostname = jsc.ZigString.init(this.config.hostname);
return hostname.toJS(this.globalThis);
return this.config.hostname.toJS(this.globalThis);
}
pub fn getPort(this: *This, _: *JSGlobalObject) JSValue {
if (this.closed) return .js_undefined;
return JSValue.jsNumber(this.socket.boundPort());
return JSValue.jsNumber(this.socket.?.boundPort());
}
fn createSockAddr(globalThis: *JSGlobalObject, address_bytes: []const u8, port: u16) JSValue {
@@ -842,10 +840,10 @@ pub const UDPSocket = struct {
if (this.closed) return .js_undefined;
var buf: [64]u8 = [_]u8{0} ** 64;
var length: i32 = 64;
this.socket.boundIp(&buf, &length);
this.socket.?.boundIp(&buf, &length);
const address_bytes = buf[0..@as(usize, @intCast(length))];
const port = this.socket.boundPort();
const port = this.socket.?.boundPort();
return createSockAddr(globalThis, address_bytes, @intCast(port));
}
@@ -854,7 +852,7 @@ pub const UDPSocket = struct {
const connect_info = this.connect_info orelse return .js_undefined;
var buf: [64]u8 = [_]u8{0} ** 64;
var length: i32 = 64;
this.socket.remoteIp(&buf, &length);
this.socket.?.remoteIp(&buf, &length);
const address_bytes = buf[0..@as(usize, @intCast(length))];
return createSockAddr(globalThis, address_bytes, connect_info.port);
@@ -874,15 +872,15 @@ pub const UDPSocket = struct {
pub fn finalize(this: *This) void {
log("Finalize {*}", .{this});
this.this_value.finalize();
this.deinit();
}
pub fn deinit(this: *This) void {
// finalize is only called when js_refcount reaches 0
// js_refcount can only reach 0 when the socket is closed
bun.assert(this.closed);
bun.assert(this.closed or this.vm.isShuttingDown());
this.poll_ref.disable();
this.config.deinit();
this.this_value.deinit();
bun.destroy(this);
}
@@ -919,15 +917,15 @@ pub const UDPSocket = struct {
const connect_port = connect_port_js.asInt32();
const port: u16 = if (connect_port < 1 or connect_port > 0xffff) 0 else @as(u16, @intCast(connect_port));
if (this.socket.connect(connect_host, port) == -1) {
if (this.socket.?.connect(connect_host, port) == -1) {
return globalThis.throw("Failed to connect socket", .{});
}
this.connect_info = .{
.port = port,
};
js.addressSetCached(callFrame.this(), globalThis, .zero);
js.remoteAddressSetCached(callFrame.this(), globalThis, .zero);
This.js.addressSetCached(callFrame.this(), globalThis, .zero);
This.js.remoteAddressSetCached(callFrame.this(), globalThis, .zero);
return .js_undefined;
}
@@ -945,7 +943,7 @@ pub const UDPSocket = struct {
return globalObject.throw("Socket is closed", .{});
}
if (this.socket.disconnect() == -1) {
if (this.socket.?.disconnect() == -1) {
return globalObject.throw("Failed to disconnect socket", .{});
}
this.connect_info = null;
@@ -965,5 +963,6 @@ const default_allocator = bun.default_allocator;
const jsc = bun.jsc;
const CallFrame = jsc.CallFrame;
const JSGlobalObject = jsc.JSGlobalObject;
const JSRef = jsc.JSRef;
const JSValue = jsc.JSValue;
const SocketAddress = jsc.API.SocketAddress;
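The UDP changes above replace direct `this.socket` access with an optional field that is null-checked via `orelse` on every use, and `close()` clears the field before closing so later calls fail gracefully instead of touching a dead handle. A minimal sketch of that idempotent-close pattern (Python, with hypothetical names; not Bun's actual implementation):

```python
class UDPSocketHandle:
    """Sketch: an optional socket handle with idempotent close."""

    def __init__(self, raw_socket):
        self._socket = raw_socket  # becomes None once closed

    def send(self, payload: bytes) -> int:
        # Mirrors `this.socket orelse return globalThis.throw("Socket is closed")`
        if self._socket is None:
            raise RuntimeError("Socket is closed")
        return self._socket.send(payload)

    def close(self) -> None:
        # Clear the field *before* closing so re-entrant calls observe None.
        sock, self._socket = self._socket, None
        if sock is not None:
            sock.close()
```

Calling `close()` twice closes the underlying socket exactly once, and `send()` after close raises rather than dereferencing a stale pointer.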

View File

@@ -488,6 +488,7 @@ const CryptoHasherZig = struct {
.{ "sha3-512", std.crypto.hash.sha3.Sha3_512 },
.{ "shake128", std.crypto.hash.sha3.Shake128 },
.{ "shake256", std.crypto.hash.sha3.Shake256 },
.{ "blake2s256", std.crypto.hash.blake2.Blake2s256 },
};
inline fn digestLength(Algorithm: type) comptime_int {

View File

@@ -21,6 +21,7 @@ pub const Algorithm = enum {
// @"ecdsa-with-SHA1",
blake2b256,
blake2b512,
blake2s256,
md4,
md5,
ripemd160,
@@ -69,6 +70,7 @@ pub const Algorithm = enum {
pub const map = bun.ComptimeStringMap(Algorithm, .{
.{ "blake2b256", .blake2b256 },
.{ "blake2b512", .blake2b512 },
.{ "blake2s256", .blake2s256 },
.{ "ripemd160", .ripemd160 },
.{ "rmd160", .ripemd160 },
.{ "md4", .md4 },
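The hunks above register `blake2s256` in both the hasher table (backed by `std.crypto.hash.blake2.Blake2s256`) and the algorithm name map. BLAKE2s with a 32-byte (256-bit) digest is also available in Python's standard library, which is handy for cross-checking digests (a sketch for verification only, not Bun's code path):

```python
import hashlib

def blake2s256_hex(data: bytes) -> str:
    """BLAKE2s with the default 32-byte (256-bit) digest."""
    return hashlib.blake2s(data, digest_size=32).hexdigest()

# The well-known empty-input vector:
# blake2s256_hex(b"") ==
#   "69217a3079908094e11121d042354a7c1f55b6482ca1a51e1b250dfd1ed0eef9"
```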

View File

@@ -324,26 +324,10 @@ pub const JSPasswordObject = struct {
pub export fn JSPasswordObject__create(globalObject: *jsc.JSGlobalObject) jsc.JSValue {
var object = JSValue.createEmptyObject(globalObject, 4);
object.put(
globalObject,
ZigString.static("hash"),
jsc.createCallback(globalObject, ZigString.static("hash"), 2, JSPasswordObject__hash),
);
object.put(
globalObject,
ZigString.static("hashSync"),
jsc.createCallback(globalObject, ZigString.static("hashSync"), 2, JSPasswordObject__hashSync),
);
object.put(
globalObject,
ZigString.static("verify"),
jsc.createCallback(globalObject, ZigString.static("verify"), 2, JSPasswordObject__verify),
);
object.put(
globalObject,
ZigString.static("verifySync"),
jsc.createCallback(globalObject, ZigString.static("verifySync"), 2, JSPasswordObject__verifySync),
);
object.put(globalObject, ZigString.static("hash"), jsc.createCallback(globalObject, ZigString.static("hash"), 2, JSPasswordObject__hash));
object.put(globalObject, ZigString.static("hashSync"), jsc.createCallback(globalObject, ZigString.static("hashSync"), 2, JSPasswordObject__hashSync));
object.put(globalObject, ZigString.static("verify"), jsc.createCallback(globalObject, ZigString.static("verify"), 2, JSPasswordObject__verify));
object.put(globalObject, ZigString.static("verifySync"), jsc.createCallback(globalObject, ZigString.static("verifySync"), 2, JSPasswordObject__verifySync));
return object;
}

View File

@@ -517,7 +517,7 @@ pub const MatchedRoute = struct {
pub fn createQueryObject(ctx: *jsc.JSGlobalObject, map: *QueryStringMap) JSValue {
const QueryObjectCreator = struct {
query: *QueryStringMap,
pub fn create(this: *@This(), obj: *JSObject, global: *JSGlobalObject) void {
pub fn create(this: *@This(), obj: *JSObject, global: *JSGlobalObject) bun.JSError!void {
var iter = this.query.iter();
while (iter.next(&query_string_values_buf)) |entry| {
const entry_name = entry.name;
@@ -529,10 +529,10 @@ pub const MatchedRoute = struct {
for (entry.values, 0..) |value, i| {
values[i] = ZigString.init(value).withEncoding();
}
obj.putRecord(global, &str, values);
try obj.putRecord(global, &str, values);
} else {
query_string_value_refs_buf[0] = ZigString.init(entry.values[0]).withEncoding();
obj.putRecord(global, &str, query_string_value_refs_buf[0..1]);
try obj.putRecord(global, &str, query_string_value_refs_buf[0..1]);
}
}
}

View File

@@ -719,10 +719,7 @@ pub fn NewServer(protocol_enum: enum { http, https }, development_kind: enum { d
}
{
var js_string = message_value.toString(globalThis);
if (globalThis.hasException()) {
return .zero;
}
var js_string = try message_value.toJSString(globalThis);
const view = js_string.view(globalThis);
const slice = view.toSlice(bun.default_allocator);
defer slice.deinit();
@@ -961,18 +958,13 @@ pub fn NewServer(protocol_enum: enum { http, https }, development_kind: enum { d
// obviously invalid pointer marks it as used
upgrader.upgrade_context = @as(*uws.SocketContext, @ptrFromInt(std.math.maxInt(usize)));
const signal = upgrader.signal;
upgrader.signal = null;
upgrader.resp = null;
request.request_context = AnyRequestContext.Null;
upgrader.request_weakref.deref();
data_value.ensureStillAlive();
const ws = ServerWebSocket.new(.{
.handler = &this.config.websocket.?.handler,
.this_value = data_value,
.signal = signal,
});
const ws = ServerWebSocket.init(&this.config.websocket.?.handler, data_value, signal);
data_value.ensureStillAlive();
var sec_websocket_protocol_str = sec_websocket_protocol.toSlice(bun.default_allocator);
@@ -2151,7 +2143,36 @@ pub fn NewServer(protocol_enum: enum { http, https }, development_kind: enum { d
pub fn prepareJsRequestContext(this: *ThisServer, req: *uws.Request, resp: *App.Response, should_deinit_context: ?*bool, create_js_request: bool, method: ?bun.http.Method) ?PreparedRequest {
jsc.markBinding(@src());
// We need to register the handler immediately since uSockets will not buffer.
//
// We first validate the self-reported request body length so that
// we avoid needing to worry as much about what memory to free.
const request_body_length: ?usize = request_body_length: {
if ((HTTP.Method.which(req.method()) orelse HTTP.Method.OPTIONS).hasRequestBody()) {
const len: usize = brk: {
if (req.header("content-length")) |content_length| {
break :brk std.fmt.parseInt(usize, content_length, 10) catch 0;
}
break :brk 0;
};
// Abort the request very early.
if (len > this.config.max_request_body_size) {
resp.writeStatus("413 Request Entity Too Large");
resp.endWithoutBody(true);
return null;
}
break :request_body_length len;
}
break :request_body_length null;
};
this.onPendingRequest();
if (comptime Environment.isDebug) {
this.vm.eventLoop().debug.enter();
}
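The refactor above hoists the body-size check to the top of `prepareJsRequestContext`: for methods that can carry a body, parse `content-length` (treating a missing or malformed header as 0, matching `parseInt ... catch 0`) and reject with `413` before any per-request state is allocated. A sketch of that decision logic (Python, hypothetical names):

```python
def request_body_length(method: str, headers: dict, max_body: int):
    """Return (length, reject): reject=True means respond 413 early.

    Methods without request bodies (GET, HEAD, OPTIONS, CONNECT) yield
    (None, False); a missing or unparseable content-length counts as 0.
    """
    if method.upper() in ("GET", "HEAD", "OPTIONS", "CONNECT"):
        return None, False
    raw = headers.get("content-length")
    try:
        length = int(raw) if raw is not None else 0
    except ValueError:
        length = 0
    if length < 0:  # usize parse would fail on negatives -> treated as 0
        length = 0
    if length > max_body:
        return length, True
    return length, False
```

Doing this before `onPendingRequest()` and context creation is what lets the server abort oversized requests without worrying about which memory to free.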
@@ -2201,25 +2222,7 @@ pub fn NewServer(protocol_enum: enum { http, https }, development_kind: enum { d
};
}
// we need to do this very early unfortunately
// it seems to work fine for synchronous requests but anything async will take too long to register the handler
// we do this only for HTTP methods that support request bodies, so not GET, HEAD, OPTIONS, or CONNECT.
if ((HTTP.Method.which(req.method()) orelse HTTP.Method.OPTIONS).hasRequestBody()) {
const req_len: usize = brk: {
if (req.header("content-length")) |content_length| {
break :brk std.fmt.parseInt(usize, content_length, 10) catch 0;
}
break :brk 0;
};
if (req_len > this.config.max_request_body_size) {
resp.writeStatus("413 Request Entity Too Large");
resp.endWithoutBody(true);
this.finalize();
return null;
}
if (request_body_length) |req_len| {
ctx.request_body_content_len = req_len;
ctx.flags.is_transfer_encoding = req.header("transfer-encoding") != null;
if (req_len > 0 or ctx.flags.is_transfer_encoding) {
@@ -2643,7 +2646,7 @@ pub fn NewServer(protocol_enum: enum { http, https }, development_kind: enum { d
// If onNodeHTTPRequest is configured, it might be needed for Node.js compatibility layer
// for specific Node API routes, even if it's not the main "/*" handler.
if (this.config.onNodeHTTPRequest != .zero) {
NodeHTTP_assignOnCloseFunction(ssl_enabled, app);
NodeHTTP_assignOnNodeJSCompat(ssl_enabled, app);
}
return route_list_value;
@@ -2815,7 +2818,7 @@ pub fn NewServer(protocol_enum: enum { http, https }, development_kind: enum { d
pub fn onClientErrorCallback(this: *ThisServer, socket: *uws.Socket, error_code: u8, raw_packet: []const u8) void {
if (this.on_clienterror.get()) |callback| {
const is_ssl = protocol_enum == .https;
const node_socket = bun.jsc.fromJSHostCall(this.globalThis, @src(), Bun__createNodeHTTPServerSocket, .{ is_ssl, socket, this.globalThis }) catch return;
const node_socket = bun.jsc.fromJSHostCall(this.globalThis, @src(), Bun__createNodeHTTPServerSocketForClientError, .{ is_ssl, socket, this.globalThis }) catch return;
if (node_socket.isUndefinedOrNull()) return;
const error_code_value = JSValue.jsNumber(error_code);
@@ -3313,9 +3316,8 @@ extern fn NodeHTTPServer__onRequest_https(
node_response_ptr: *?*NodeHTTPResponse,
) jsc.JSValue;
extern fn Bun__createNodeHTTPServerSocket(bool, *anyopaque, *jsc.JSGlobalObject) jsc.JSValue;
extern fn NodeHTTP_assignOnCloseFunction(bool, *anyopaque) void;
extern fn NodeHTTP_setUsingCustomExpectHandler(bool, *anyopaque, bool) void;
extern fn Bun__createNodeHTTPServerSocketForClientError(bool, *anyopaque, *jsc.JSGlobalObject) jsc.JSValue;
extern "c" fn Bun__ServerRouteList__callRoute(
globalObject: *jsc.JSGlobalObject,
index: u32,
@@ -3344,6 +3346,9 @@ fn throwSSLErrorIfNecessary(globalThis: *jsc.JSGlobalObject) bool {
return false;
}
extern fn NodeHTTP_assignOnNodeJSCompat(bool, *anyopaque) void;
extern fn NodeHTTP_setUsingCustomExpectHandler(bool, *anyopaque, bool) void;
const string = []const u8;
const Sys = @import("../../sys.zig");

View File

@@ -17,7 +17,7 @@ raw_response: uws.AnyResponse,
flags: Flags = .{},
js_ref: jsc.Ref = .{},
poll_ref: jsc.Ref = .{},
body_read_state: BodyReadState = .none,
body_read_ref: jsc.Ref = .{},
@@ -122,17 +122,19 @@ pub fn getServerSocketValue(this: *NodeHTTPResponse) jsc.JSValue {
pub fn pauseSocket(this: *NodeHTTPResponse) void {
log("pauseSocket", .{});
if (this.flags.socket_closed or this.flags.upgraded) {
if (this.flags.socket_closed or this.flags.upgraded or this.raw_response.isConnectRequest()) {
return;
}
this.raw_response.pause();
}
pub fn resumeSocket(this: *NodeHTTPResponse) void {
log("resumeSocket", .{});
if (this.flags.socket_closed or this.flags.upgraded) {
if (this.flags.socket_closed or this.flags.upgraded or this.raw_response.isConnectRequest()) {
return;
}
this.raw_response.@"resume"();
}
@@ -145,7 +147,7 @@ const OnBeforeOpen = struct {
Bun__setNodeHTTPServerSocketUsSocketValue(ctx.socketValue, socket.asSocket());
ServerWebSocket.js.gc.socket.set(js_websocket, ctx.globalObject, ctx.socketValue);
ctx.this.flags.upgraded = true;
defer ctx.this.js_ref.unref(ctx.globalObject.bunVM());
defer ctx.this.poll_ref.unref(ctx.globalObject.bunVM());
switch (ctx.this.raw_response) {
.SSL => ctx.this.raw_response = uws.AnyResponse.init(uws.NewApp(true).Response.castRes(@alignCast(@ptrCast(socket)))),
.TCP => ctx.this.raw_response = uws.AnyResponse.init(uws.NewApp(false).Response.castRes(@alignCast(@ptrCast(socket)))),
@@ -168,10 +170,7 @@ pub fn upgrade(this: *NodeHTTPResponse, data_value: JSValue, sec_websocket_proto
}
data_value.ensureStillAlive();
const ws = ServerWebSocket.new(.{
.handler = ws_handler,
.this_value = data_value,
});
const ws = ServerWebSocket.init(ws_handler, data_value, null);
var sec_websocket_protocol_str: ?ZigString.Slice = null;
defer if (sec_websocket_protocol_str) |*str| str.deinit();
@@ -231,6 +230,7 @@ pub fn maybeStopReadingBody(this: *NodeHTTPResponse, vm: *jsc.VirtualMachine, th
{
const had_ref = this.body_read_ref.has;
if (!this.flags.upgraded and !this.flags.socket_closed) {
log("clearOnData", .{});
this.raw_response.clearOnData();
}
@@ -275,7 +275,7 @@ fn markRequestAsDone(this: *NodeHTTPResponse) void {
this.buffered_request_body_data_during_pause.clearAndFree(bun.default_allocator);
const server = this.server;
this.js_ref.unref(jsc.VirtualMachine.get());
this.poll_ref.unref(jsc.VirtualMachine.get());
this.deref();
server.onRequestComplete();
}
@@ -331,7 +331,7 @@ pub fn create(
if (has_body.*) {
response.body_read_ref.ref(vm);
}
response.js_ref.ref(vm);
response.poll_ref.ref(vm);
const js_this = response.toJS(globalObject);
node_response_ptr.* = response;
return js_this;
@@ -400,14 +400,14 @@ pub fn getBufferedAmount(this: *const NodeHTTPResponse, _: *jsc.JSGlobalObject)
pub fn jsRef(this: *NodeHTTPResponse, globalObject: *jsc.JSGlobalObject, _: *jsc.CallFrame) bun.JSError!jsc.JSValue {
if (!this.isDone()) {
this.js_ref.ref(globalObject.bunVM());
this.poll_ref.ref(globalObject.bunVM());
}
return .js_undefined;
}
pub fn jsUnref(this: *NodeHTTPResponse, globalObject: *jsc.JSGlobalObject, _: *jsc.CallFrame) bun.JSError!jsc.JSValue {
if (!this.isDone()) {
this.js_ref.unref(globalObject.bunVM());
this.poll_ref.unref(globalObject.bunVM());
}
return .js_undefined;
}
@@ -570,18 +570,16 @@ pub fn onTimeout(this: *NodeHTTPResponse, _: uws.AnyResponse) void {
this.handleAbortOrTimeout(.timeout, .zero);
}
pub fn doPause(this: *NodeHTTPResponse, _: *jsc.JSGlobalObject, _: *jsc.CallFrame, thisValue: jsc.JSValue) bun.JSError!jsc.JSValue {
pub fn doPause(this: *NodeHTTPResponse, _: *jsc.JSGlobalObject, _: *jsc.CallFrame, _: jsc.JSValue) bun.JSError!jsc.JSValue {
log("doPause", .{});
if (this.flags.request_has_completed or this.flags.socket_closed or this.flags.ended or this.flags.upgraded) {
return .false;
}
if (this.body_read_ref.has and js.onDataGetCached(thisValue) == null) {
this.flags.is_data_buffered_during_pause = true;
this.raw_response.onData(*NodeHTTPResponse, onBufferRequestBodyWhilePaused, this);
}
this.flags.is_data_buffered_during_pause = true;
this.raw_response.onData(*NodeHTTPResponse, onBufferRequestBodyWhilePaused, this);
// TODO: figure out why windows is not emitting EOF with UV_DISCONNECT
if (!Environment.isWindows) {
// TODO: figure out why windows is not emitting EOF with UV_DISCONNECT
pauseSocket(this);
}
return .true;
@@ -592,6 +590,7 @@ pub fn drainRequestBody(this: *NodeHTTPResponse, globalObject: *jsc.JSGlobalObje
}
fn drainBufferedRequestBodyFromPause(this: *NodeHTTPResponse, globalObject: *jsc.JSGlobalObject) ?jsc.JSValue {
log("drainBufferedRequestBodyFromPause {d}", .{this.buffered_request_body_data_during_pause.len});
if (this.buffered_request_body_data_during_pause.len > 0) {
const result = jsc.JSValue.createBuffer(globalObject, this.buffered_request_body_data_during_pause.slice());
this.buffered_request_body_data_during_pause = .{};
@@ -605,12 +604,10 @@ pub fn doResume(this: *NodeHTTPResponse, globalObject: *jsc.JSGlobalObject, _: *
if (this.flags.request_has_completed or this.flags.socket_closed or this.flags.ended or this.flags.upgraded) {
return .false;
}
this.setOnAbortedHandler();
this.raw_response.onData(*NodeHTTPResponse, onData, this);
this.flags.is_data_buffered_during_pause = false;
var result: jsc.JSValue = .true;
if (this.flags.is_data_buffered_during_pause) {
this.raw_response.clearOnData();
this.flags.is_data_buffered_during_pause = false;
}
if (this.drainBufferedRequestBodyFromPause(globalObject)) |buffered_data| {
result = buffered_data;
@@ -626,7 +623,7 @@ pub fn onRequestComplete(this: *NodeHTTPResponse) void {
}
log("onRequestComplete", .{});
this.flags.request_has_completed = true;
this.js_ref.unref(jsc.VirtualMachine.get());
this.poll_ref.unref(jsc.VirtualMachine.get());
this.markRequestAsDoneIfNecessary();
}
@@ -644,6 +641,7 @@ pub export fn Bun__NodeHTTPRequest__onResolve(globalObject: *jsc.JSGlobalObject,
if (this_value != .zero) {
js.onAbortedSetCached(this_value, globalObject, .zero);
}
log("clearOnData", .{});
this.raw_response.clearOnData();
this.raw_response.clearOnWritable();
this.raw_response.clearTimeout();
@@ -670,6 +668,7 @@ pub export fn Bun__NodeHTTPRequest__onReject(globalObject: *jsc.JSGlobalObject,
if (this_value != .zero) {
js.onAbortedSetCached(this_value, globalObject, .zero);
}
log("clearOnData", .{});
this.raw_response.clearOnData();
this.raw_response.clearOnWritable();
this.raw_response.clearTimeout();
@@ -695,6 +694,7 @@ pub fn abort(this: *NodeHTTPResponse, _: *jsc.JSGlobalObject, _: *jsc.CallFrame)
return .js_undefined;
}
resumeSocket(this);
log("clearOnData", .{});
this.raw_response.clearOnData();
this.raw_response.clearOnWritable();
this.raw_response.clearTimeout();
@@ -718,7 +718,43 @@ fn onBufferRequestBodyWhilePaused(this: *NodeHTTPResponse, chunk: []const u8, la
}
}
fn getBytes(this: *NodeHTTPResponse, globalThis: *jsc.JSGlobalObject, chunk: []const u8) jsc.JSValue {
// TODO: we should emit an error event here, but this is better than ignoring the failure.
// Right now, instead of emitting an error event, the socket will reportUncaughtException.
// This keeps the behavior aligned with the current implementation, though it is not ideal.
const bytes: jsc.JSValue = brk: {
if (chunk.len > 0 and this.buffered_request_body_data_during_pause.len > 0) {
const buffer = jsc.JSValue.createBufferFromLength(globalThis, chunk.len + this.buffered_request_body_data_during_pause.len) catch |err| {
globalThis.reportUncaughtExceptionFromError(err);
return .js_undefined;
};
const array_buffer = buffer.asArrayBuffer(globalThis).?;
defer this.buffered_request_body_data_during_pause.clearAndFree(bun.default_allocator);
var input = array_buffer.slice();
@memcpy(input[0..this.buffered_request_body_data_during_pause.len], this.buffered_request_body_data_during_pause.slice());
@memcpy(input[this.buffered_request_body_data_during_pause.len..], chunk);
break :brk buffer;
}
if (this.drainBufferedRequestBodyFromPause(globalThis)) |buffered_data| {
break :brk buffered_data;
}
if (chunk.len > 0) {
break :brk jsc.ArrayBuffer.createBuffer(globalThis, chunk) catch |err| {
globalThis.reportUncaughtExceptionFromError(err);
return .js_undefined;
};
}
break :brk .js_undefined;
};
return bytes;
}
fn onDataOrAborted(this: *NodeHTTPResponse, chunk: []const u8, last: bool, event: AbortEvent, thisValue: jsc.JSValue) void {
log("onDataOrAborted({d}, {})", .{ chunk.len, last });
if (last) {
this.ref();
this.body_read_state = .done;
@@ -743,27 +779,7 @@ fn onDataOrAborted(this: *NodeHTTPResponse, chunk: []const u8, last: bool, event
const globalThis = jsc.VirtualMachine.get().global;
const event_loop = globalThis.bunVM().eventLoop();
const bytes: jsc.JSValue = brk: {
if (chunk.len > 0 and this.buffered_request_body_data_during_pause.len > 0) {
const buffer = jsc.JSValue.createBufferFromLength(globalThis, chunk.len + this.buffered_request_body_data_during_pause.len) catch return; // TODO: properly propagate exception upwards
this.buffered_request_body_data_during_pause.clearAndFree(bun.default_allocator);
if (buffer.asArrayBuffer(globalThis)) |array_buffer| {
var input = array_buffer.slice();
@memcpy(input[0..this.buffered_request_body_data_during_pause.len], this.buffered_request_body_data_during_pause.slice());
@memcpy(input[this.buffered_request_body_data_during_pause.len..], chunk);
break :brk buffer;
}
}
if (this.drainBufferedRequestBodyFromPause(globalThis)) |buffered_data| {
break :brk buffered_data;
}
if (chunk.len > 0) {
break :brk jsc.ArrayBuffer.createBuffer(globalThis, chunk) catch return; // TODO: properly propagate exception upwards
}
break :brk .js_undefined;
};
const bytes = this.getBytes(globalThis, chunk);
event_loop.runCallback(callback, globalThis, .js_undefined, &.{
bytes,
@@ -779,23 +795,29 @@ pub fn onData(this: *NodeHTTPResponse, chunk: []const u8, last: bool) void {
onDataOrAborted(this, chunk, last, .none, this.getThisValue());
}
fn onDrain(this: *NodeHTTPResponse, offset: u64, response: uws.AnyResponse) bool {
log("onDrain({d})", .{offset});
fn onDrainCorked(this: *NodeHTTPResponse, offset: u64) void {
log("onDrainCorked({d})", .{offset});
this.ref();
defer this.deref();
response.clearOnWritable();
const thisValue = this.getThisValue();
const on_writable = js.onWritableGetCached(thisValue) orelse return;
const globalThis = jsc.VirtualMachine.get().global;
js.onWritableSetCached(thisValue, globalThis, .js_undefined); // TODO(@heimskr): is this necessary?
const vm = globalThis.bunVM();
vm.eventLoop().runCallback(on_writable, globalThis, .js_undefined, &.{jsc.JSValue.jsNumberFromUint64(offset)});
}
fn onDrain(this: *NodeHTTPResponse, offset: u64, response: uws.AnyResponse) bool {
log("onDrain({d})", .{offset});
if (this.flags.socket_closed or this.flags.request_has_completed or this.flags.upgraded) {
// return false means we don't have anything to drain
return false;
}
const thisValue = this.getThisValue();
const on_writable = js.onWritableGetCached(thisValue) orelse return false;
const globalThis = jsc.VirtualMachine.get().global;
js.onWritableSetCached(thisValue, globalThis, .js_undefined); // TODO(@heimskr): is this necessary?
const vm = globalThis.bunVM();
response.corked(jsc.EventLoop.runCallback, .{ vm.eventLoop(), on_writable, globalThis, .js_undefined, &.{jsc.JSValue.jsNumberFromUint64(offset)} });
response.corked(onDrainCorked, .{ this, offset });
// return true means we may have something to drain
return true;
}
@@ -995,12 +1017,15 @@ pub fn setHasCustomOnData(this: *NodeHTTPResponse, _: *jsc.JSGlobalObject, value
}
fn clearOnDataCallback(this: *NodeHTTPResponse, thisValue: jsc.JSValue, globalObject: *jsc.JSGlobalObject) void {
log("clearOnDataCallback", .{});
if (this.body_read_state != .none) {
if (thisValue != .zero) {
js.onDataSetCached(thisValue, globalObject, .js_undefined);
}
if (!this.flags.socket_closed and !this.flags.upgraded)
if (!this.flags.socket_closed and !this.flags.upgraded) {
log("clearOnData", .{});
this.raw_response.clearOnData();
}
if (this.body_read_state != .done) {
this.body_read_state = .done;
}
@@ -1018,6 +1043,7 @@ pub fn setOnData(this: *NodeHTTPResponse, thisValue: jsc.JSValue, globalObject:
switch (this.body_read_state) {
.pending, .done => {
if (!this.flags.request_has_completed and !this.flags.socket_closed and !this.flags.upgraded) {
log("clearOnData", .{});
this.raw_response.clearOnData();
}
this.body_read_state = .done;
@@ -1133,12 +1159,12 @@ pub fn finalize(this: *NodeHTTPResponse) void {
fn deinit(this: *NodeHTTPResponse) void {
bun.debugAssert(!this.body_read_ref.has);
bun.debugAssert(!this.js_ref.has);
bun.debugAssert(!this.poll_ref.has);
bun.debugAssert(!this.flags.is_request_pending);
bun.debugAssert(this.flags.socket_closed or this.flags.request_has_completed);
this.buffered_request_body_data_during_pause.deinit(bun.default_allocator);
this.js_ref.unref(jsc.VirtualMachine.get());
this.poll_ref.unref(jsc.VirtualMachine.get());
this.body_read_ref.unref(jsc.VirtualMachine.get());
this.promise.deinit();
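The new `getBytes` helper above consolidates three cases for delivering request-body data: a fresh chunk plus data buffered during pause (concatenate, buffered data first), buffered data only (drain it), or a chunk only (copy it). A pure-logic sketch of that coalescing (Python `bytes` standing in for JS buffers; hypothetical names):

```python
def coalesce_body_bytes(paused_buffer: bytearray, chunk: bytes):
    """Combine data buffered while paused with a freshly received chunk.

    Returns the bytes to hand to the 'data' callback, or None when there
    is nothing to deliver; paused_buffer is cleared once consumed.
    """
    if chunk and paused_buffer:
        out = bytes(paused_buffer) + chunk  # buffered data first, then chunk
        paused_buffer.clear()
        return out
    if paused_buffer:
        out = bytes(paused_buffer)
        paused_buffer.clear()
        return out
    if chunk:
        return chunk
    return None
```

Keeping the pause buffer ahead of the new chunk preserves byte order for consumers that paused mid-stream and then resumed.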

View File

@@ -1,9 +1,9 @@
const ServerWebSocket = @This();
handler: *WebSocketServer.Handler,
this_value: JSValue = .zero,
flags: Flags = .{},
signal: ?*bun.webcore.AbortSignal = null,
#handler: *WebSocketServer.Handler,
#this_value: jsc.JSRef = .empty(),
#flags: Flags = .{},
#signal: ?*bun.webcore.AbortSignal = null,
// We pack the per-socket data into this struct below
const Flags = packed struct(u64) {
@@ -26,7 +26,7 @@ const Flags = packed struct(u64) {
};
inline fn websocket(this: *const ServerWebSocket) uws.AnyWebSocket {
return this.flags.websocket();
return this.#flags.websocket();
}
pub const js = jsc.Codegen.JSServerWebSocket;
@@ -34,10 +34,25 @@ pub const toJS = js.toJS;
pub const fromJS = js.fromJS;
pub const fromJSDirect = js.fromJSDirect;
pub const new = bun.TrivialNew(ServerWebSocket);
const new = bun.TrivialNew(ServerWebSocket);
/// Initialize a ServerWebSocket with the given handler, data value, and signal.
/// The signal will not be ref'd inside the ServerWebSocket init function, but it will be unref'd when the ServerWebSocket is destroyed.
pub fn init(handler: *WebSocketServer.Handler, data_value: jsc.JSValue, signal: ?*bun.webcore.AbortSignal) *ServerWebSocket {
const globalObject = handler.globalObject;
const this = ServerWebSocket.new(.{
.#handler = handler,
.#signal = signal,
});
// Take a strong ref now and downgrade it on terminate/close so GC can collect the newly created value
const this_value = this.toJS(globalObject);
this.#this_value = .initStrong(this_value, globalObject);
js.dataSetCached(this_value, globalObject, data_value);
return this;
}
pub fn memoryCost(this: *const ServerWebSocket) usize {
if (this.flags.closed) {
if (this.#flags.closed) {
return @sizeOf(ServerWebSocket);
}
return this.websocket().memoryCost() + @sizeOf(ServerWebSocket);
@@ -48,15 +63,12 @@ const log = Output.scoped(.WebSocketServer, .visible);
pub fn onOpen(this: *ServerWebSocket, ws: uws.AnyWebSocket) void {
log("OnOpen", .{});
this.flags.packed_websocket_ptr = @truncate(@intFromPtr(ws.raw()));
this.flags.closed = false;
this.flags.ssl = ws == .ssl;
this.#flags.packed_websocket_ptr = @truncate(@intFromPtr(ws.raw()));
this.#flags.closed = false;
this.#flags.ssl = ws == .ssl;
// the this value is initially set to whatever the user passed in
const value_to_cache = this.this_value;
var handler = this.handler;
const vm = this.handler.vm;
var handler = this.#handler;
const vm = this.#handler.vm;
handler.active_connections +|= 1;
const globalObject = handler.globalObject;
const onOpenHandler = handler.onOpen;
@@ -66,25 +78,19 @@ pub fn onOpen(this: *ServerWebSocket, ws: uws.AnyWebSocket) void {
return;
}
this.this_value = .zero;
this.flags.opened = false;
if (value_to_cache != .zero) {
const current_this = this.getThisValue();
js.dataSetCached(current_this, globalObject, value_to_cache);
}
this.#flags.opened = false;
if (onOpenHandler.isEmptyOrUndefinedOrNull()) {
if (bun.take(&this.handler.onBeforeOpen)) |on_before_open| {
if (bun.take(&this.#handler.onBeforeOpen)) |on_before_open| {
// Only create the "this" value if needed.
const this_value = this.getThisValue();
on_before_open.callback(on_before_open.ctx, this_value, ws.raw());
on_before_open.callback(on_before_open.ctx, this.#this_value.tryGet() orelse .js_undefined, ws.raw());
}
return;
}
const this_value = this.getThisValue();
const this_value = this.#this_value.tryGet() orelse .js_undefined;
var args = [_]JSValue{this_value};
if (bun.take(&this.handler.onBeforeOpen)) |on_before_open| {
if (bun.take(&this.#handler.onBeforeOpen)) |on_before_open| {
on_before_open.callback(on_before_open.ctx, this_value, ws.raw());
}
@@ -99,12 +105,12 @@ pub fn onOpen(this: *ServerWebSocket, ws: uws.AnyWebSocket) void {
};
ws.cork(&corker, Corker.run);
const result = corker.result;
this.flags.opened = true;
this.#flags.opened = true;
if (result.toError()) |err_value| {
log("onOpen exception", .{});
if (!this.flags.closed) {
this.flags.closed = true;
if (!this.#flags.closed) {
this.#flags.closed = true;
// we un-gracefully close the connection if there was an exception
// we don't want any event handlers to fire after this for anything other than error()
// https://github.com/oven-sh/bun/issues/1480
@@ -117,16 +123,6 @@ pub fn onOpen(this: *ServerWebSocket, ws: uws.AnyWebSocket) void {
}
}
pub fn getThisValue(this: *ServerWebSocket) JSValue {
var this_value = this.this_value;
if (this_value == .zero) {
this_value = this.toJS(this.handler.globalObject);
this_value.protect();
this.this_value = this_value;
}
return this_value;
}
pub fn onMessage(
this: *ServerWebSocket,
ws: uws.AnyWebSocket,
@@ -137,11 +133,11 @@ pub fn onMessage(
@intFromEnum(opcode),
message,
});
const onMessageHandler = this.handler.onMessage;
const onMessageHandler = this.#handler.onMessage;
if (onMessageHandler.isEmptyOrUndefinedOrNull()) return;
var globalObject = this.handler.globalObject;
var globalObject = this.#handler.globalObject;
// This is the start of a task.
const vm = this.handler.vm;
const vm = this.#handler.vm;
if (vm.isShuttingDown()) {
log("onMessage called after script execution", .{});
ws.close();
@@ -153,7 +149,7 @@ pub fn onMessage(
defer loop.exit();
const arguments = [_]JSValue{
this.getThisValue(),
this.#this_value.tryGet() orelse .js_undefined,
switch (opcode) {
.text => bun.String.createUTF8ForJS(globalObject, message) catch .zero, // TODO: properly propagate exception upwards
.binary => this.binaryToJS(globalObject, message) catch .zero, // TODO: properly propagate exception upwards
@@ -173,7 +169,7 @@ pub fn onMessage(
if (result.isEmptyOrUndefinedOrNull()) return;
if (result.toError()) |err_value| {
this.handler.runErrorCallback(vm, globalObject, err_value);
this.#handler.runErrorCallback(vm, globalObject, err_value);
return;
}
@@ -190,13 +186,13 @@ pub fn onMessage(
}
pub inline fn isClosed(this: *const ServerWebSocket) bool {
return this.flags.closed;
return this.#flags.closed;
}
pub fn onDrain(this: *ServerWebSocket, _: uws.AnyWebSocket) void {
log("onDrain", .{});
const handler = this.handler;
const handler = this.#handler;
const vm = handler.vm;
if (this.isClosed() or vm.isShuttingDown())
return;
@@ -205,7 +201,7 @@ pub fn onDrain(this: *ServerWebSocket, _: uws.AnyWebSocket) void {
const globalObject = handler.globalObject;
var corker = Corker{
.args = &[_]jsc.JSValue{this.getThisValue()},
.args = &[_]jsc.JSValue{this.#this_value.tryGet() orelse .js_undefined},
.globalObject = globalObject,
.callback = handler.onDrain,
};
@@ -222,7 +218,7 @@ pub fn onDrain(this: *ServerWebSocket, _: uws.AnyWebSocket) void {
}
fn binaryToJS(this: *const ServerWebSocket, globalThis: *jsc.JSGlobalObject, data: []const u8) bun.JSError!jsc.JSValue {
return switch (this.flags.binary_type) {
return switch (this.#flags.binary_type) {
.Buffer => jsc.ArrayBuffer.createBuffer(
globalThis,
data,
@@ -243,7 +239,7 @@ fn binaryToJS(this: *const ServerWebSocket, globalThis: *jsc.JSGlobalObject, dat
pub fn onPing(this: *ServerWebSocket, _: uws.AnyWebSocket, data: []const u8) void {
log("onPing: {s}", .{data});
const handler = this.handler;
const handler = this.#handler;
var cb = handler.onPing;
const vm = handler.vm;
if (cb.isEmptyOrUndefinedOrNull() or vm.isShuttingDown()) return;
@@ -257,7 +253,7 @@ pub fn onPing(this: *ServerWebSocket, _: uws.AnyWebSocket, data: []const u8) voi
_ = cb.call(
globalThis,
.js_undefined,
&[_]jsc.JSValue{ this.getThisValue(), this.binaryToJS(globalThis, data) catch .zero }, // TODO: properly propagate exception upwards
&[_]jsc.JSValue{ this.#this_value.tryGet() orelse .js_undefined, this.binaryToJS(globalThis, data) catch .zero }, // TODO: properly propagate exception upwards
) catch |e| {
const err = globalThis.takeException(e);
log("onPing error", .{});
@@ -268,7 +264,7 @@ pub fn onPing(this: *ServerWebSocket, _: uws.AnyWebSocket, data: []const u8) voi
pub fn onPong(this: *ServerWebSocket, _: uws.AnyWebSocket, data: []const u8) void {
log("onPong: {s}", .{data});
const handler = this.handler;
const handler = this.#handler;
var cb = handler.onPong;
if (cb.isEmptyOrUndefinedOrNull()) return;
@@ -285,7 +281,7 @@ pub fn onPong(this: *ServerWebSocket, _: uws.AnyWebSocket, data: []const u8) voi
_ = cb.call(
globalThis,
.js_undefined,
&[_]jsc.JSValue{ this.getThisValue(), this.binaryToJS(globalThis, data) catch .zero }, // TODO: properly propagate exception upwards
&[_]jsc.JSValue{ this.#this_value.tryGet() orelse .js_undefined, this.binaryToJS(globalThis, data) catch .zero }, // TODO: properly propagate exception upwards
) catch |e| {
const err = globalThis.takeException(e);
log("onPong error", .{});
@@ -295,26 +291,27 @@ pub fn onPong(this: *ServerWebSocket, _: uws.AnyWebSocket, data: []const u8) voi
pub fn onClose(this: *ServerWebSocket, _: uws.AnyWebSocket, code: i32, message: []const u8) void {
log("onClose", .{});
var handler = this.handler;
// TODO: Can this be called inside finalize?
var handler = this.#handler;
const was_closed = this.isClosed();
this.flags.closed = true;
this.#flags.closed = true;
defer {
if (!was_closed) {
handler.active_connections -|= 1;
}
}
const signal = this.signal;
this.signal = null;
if (js.socketGetCached(this.getThisValue())) |socket| {
Bun__callNodeHTTPServerSocketOnClose(socket);
}
const signal = this.#signal;
this.#signal = null;
defer {
if (signal) |sig| {
sig.pendingActivityUnref();
sig.unref();
}
if (this.#this_value.isNotEmpty()) {
this.#this_value.downgrade();
}
}
const vm = handler.vm;
@@ -337,14 +334,14 @@ pub fn onClose(this: *ServerWebSocket, _: uws.AnyWebSocket, code: i32, message:
const message_js = bun.String.createUTF8ForJS(globalObject, message) catch |e| {
const err = globalObject.takeException(e);
log("onClose error", .{});
log("onClose error (message) {}", .{this.#this_value.isNotEmpty()});
handler.runErrorCallback(vm, globalObject, err);
return;
};
_ = handler.onClose.call(globalObject, .js_undefined, &[_]jsc.JSValue{ this.getThisValue(), JSValue.jsNumber(code), message_js }) catch |e| {
_ = handler.onClose.call(globalObject, .js_undefined, &[_]jsc.JSValue{ this.#this_value.tryGet() orelse .js_undefined, JSValue.jsNumber(code), message_js }) catch |e| {
const err = globalObject.takeException(e);
log("onClose error", .{});
log("onClose error {}", .{this.#this_value.isNotEmpty()});
handler.runErrorCallback(vm, globalObject, err);
return;
};
@@ -358,8 +355,6 @@ pub fn onClose(this: *ServerWebSocket, _: uws.AnyWebSocket, code: i32, message:
sig.signal(handler.globalObject, .ConnectionClosed);
}
}
this.this_value.unprotect();
}
pub fn behavior(comptime ServerType: type, comptime ssl: bool, opts: uws.WebSocketBehavior) uws.WebSocketBehavior {
@@ -372,6 +367,12 @@ pub fn constructor(globalObject: *jsc.JSGlobalObject, _: *jsc.CallFrame) bun.JSE
pub fn finalize(this: *ServerWebSocket) void {
log("finalize", .{});
this.#this_value.finalize();
if (this.#signal) |signal| {
this.#signal = null;
signal.pendingActivityUnref();
signal.unref();
}
bun.destroy(this);
}
@@ -387,11 +388,11 @@ pub fn publish(
return globalThis.throw("publish requires at least 1 argument", .{});
}
const app = this.handler.app orelse {
const app = this.#handler.app orelse {
log("publish() closed", .{});
return JSValue.jsNumber(0);
};
const flags = this.handler.flags;
const flags = this.#handler.flags;
const ssl = flags.ssl;
const publish_to_self = flags.publish_to_self;
@@ -437,10 +438,7 @@ pub fn publish(
}
{
var js_string = message_value.toString(globalThis);
if (globalThis.hasException()) {
return .zero;
}
var js_string = try message_value.toJSString(globalThis);
const view = js_string.view(globalThis);
const slice = view.toSlice(bun.default_allocator);
defer slice.deinit();
@@ -474,11 +472,11 @@ pub fn publishText(
return globalThis.throw("publish requires at least 1 argument", .{});
}
const app = this.handler.app orelse {
const app = this.#handler.app orelse {
log("publish() closed", .{});
return JSValue.jsNumber(0);
};
const flags = this.handler.flags;
const flags = this.#handler.flags;
const ssl = flags.ssl;
const publish_to_self = flags.publish_to_self;
@@ -504,10 +502,7 @@ pub fn publishText(
return globalThis.throw("publishText requires a non-empty message", .{});
}
var js_string = message_value.toString(globalThis);
if (globalThis.hasException()) {
return .zero;
}
var js_string = try message_value.toJSString(globalThis);
const view = js_string.view(globalThis);
const slice = view.toSlice(bun.default_allocator);
defer slice.deinit();
@@ -540,11 +535,11 @@ pub fn publishBinary(
return globalThis.throw("publishBinary requires at least 1 argument", .{});
}
const app = this.handler.app orelse {
const app = this.#handler.app orelse {
log("publish() closed", .{});
return JSValue.jsNumber(0);
};
const flags = this.handler.flags;
const flags = this.#handler.flags;
const ssl = flags.ssl;
const publish_to_self = flags.publish_to_self;
const topic_value = args.ptr[0];
@@ -595,11 +590,11 @@ pub fn publishBinaryWithoutTypeChecks(
topic_str: *jsc.JSString,
array: *jsc.JSUint8Array,
) bun.JSError!jsc.JSValue {
const app = this.handler.app orelse {
const app = this.#handler.app orelse {
log("publish() closed", .{});
return JSValue.jsNumber(0);
};
const flags = this.handler.flags;
const flags = this.#handler.flags;
const ssl = flags.ssl;
const publish_to_self = flags.publish_to_self;
@@ -634,11 +629,11 @@ pub fn publishTextWithoutTypeChecks(
topic_str: *jsc.JSString,
str: *jsc.JSString,
) bun.JSError!jsc.JSValue {
const app = this.handler.app orelse {
const app = this.#handler.app orelse {
log("publish() closed", .{});
return JSValue.jsNumber(0);
};
const flags = this.handler.flags;
const flags = this.#handler.flags;
const ssl = flags.ssl;
const publish_to_self = flags.publish_to_self;
@@ -674,12 +669,9 @@ pub fn cork(
this: *ServerWebSocket,
globalThis: *jsc.JSGlobalObject,
callframe: *jsc.CallFrame,
// Since we're passing the `this` value to the cork function, we need to
// make sure the `this` value is up to date.
this_value: jsc.JSValue,
) bun.JSError!JSValue {
const args = callframe.arguments_old(1);
this.this_value = this_value;
if (args.len < 1) {
return globalThis.throwNotEnoughArguments("cork", 1, 0);
@@ -758,10 +750,7 @@ pub fn send(
}
{
var js_string = message_value.toString(globalThis);
if (globalThis.hasException()) {
return .zero;
}
var js_string = try message_value.toJSString(globalThis);
const view = js_string.view(globalThis);
const slice = view.toSlice(bun.default_allocator);
defer slice.deinit();
@@ -816,10 +805,7 @@ pub fn sendText(
return globalThis.throw("sendText expects a string", .{});
}
var js_string = message_value.toString(globalThis);
if (globalThis.hasException()) {
return .zero;
}
var js_string = try message_value.toJSString(globalThis);
const view = js_string.view(globalThis);
const slice = view.toSlice(bun.default_allocator);
defer slice.deinit();
@@ -999,7 +985,7 @@ inline fn sendPing(
},
}
} else if (value.isString()) {
var string_value = value.toString(globalThis).toSlice(globalThis, bun.default_allocator);
var string_value = (try value.toJSString(globalThis)).toSlice(globalThis, bun.default_allocator);
defer string_value.deinit();
const buffer = string_value.slice();
@@ -1040,10 +1026,13 @@ inline fn sendPing(
}
pub fn getData(
_: *ServerWebSocket,
this: *ServerWebSocket,
_: *jsc.JSGlobalObject,
) JSValue {
log("getData()", .{});
if (this.#this_value.tryGet()) |this_value| {
return js.dataGetCached(this_value) orelse .js_undefined;
}
return .js_undefined;
}
@@ -1053,7 +1042,9 @@ pub fn setData(
value: jsc.JSValue,
) void {
log("setData()", .{});
js.dataSetCached(this.this_value, globalObject, value);
if (this.#this_value.tryGet()) |this_value| {
js.dataSetCached(this_value, globalObject, value);
}
}
pub fn getReadyState(
@@ -1074,11 +1065,10 @@ pub fn close(
globalThis: *jsc.JSGlobalObject,
callframe: *jsc.CallFrame,
// Since close() can lead to the close() callback being called, let's always ensure the `this` value is up to date.
this_value: jsc.JSValue,
_: jsc.JSValue,
) bun.JSError!JSValue {
const args = callframe.arguments_old(2);
log("close()", .{});
this.this_value = this_value;
if (this.isClosed()) {
return .js_undefined;
@@ -1104,31 +1094,24 @@ pub fn close(
defer message_value.deinit();
this.flags.closed = true;
this.#flags.closed = true;
this.websocket().end(code, message_value.slice());
return .js_undefined;
}
pub fn terminate(
this: *ServerWebSocket,
globalThis: *jsc.JSGlobalObject,
callframe: *jsc.CallFrame,
// Since terminate() can lead to close() being called, let's always ensure the `this` value is up to date.
this_value: jsc.JSValue,
_: *jsc.JSGlobalObject,
_: *jsc.CallFrame,
_: jsc.JSValue,
) bun.JSError!JSValue {
_ = globalThis;
const args = callframe.arguments_old(2);
_ = args;
log("terminate()", .{});
this.this_value = this_value;
if (this.isClosed()) {
return .js_undefined;
}
this.flags.closed = true;
this.this_value.unprotect();
this.#flags.closed = true;
this.websocket().close();
return .js_undefined;
@@ -1140,7 +1123,7 @@ pub fn getBinaryType(
) JSValue {
log("getBinaryType()", .{});
return switch (this.flags.binary_type) {
return switch (this.#flags.binary_type) {
.Uint8Array => bun.String.static("uint8array").toJS(globalThis),
.Buffer => bun.String.static("nodebuffer").toJS(globalThis),
.ArrayBuffer => bun.String.static("arraybuffer").toJS(globalThis),
@@ -1156,7 +1139,7 @@ pub fn setBinaryType(this: *ServerWebSocket, globalThis: *jsc.JSGlobalObject, va
// some other value which we don't support
.Float64Array) {
.ArrayBuffer, .Buffer, .Uint8Array => |val| {
this.flags.binary_type = val;
this.#flags.binary_type = val;
return;
},
else => {
@@ -1295,8 +1278,6 @@ const Corker = struct {
}
};
extern "c" fn Bun__callNodeHTTPServerSocketOnClose(jsc.JSValue) void;
const string = []const u8;
const std = @import("std");


@@ -302,7 +302,7 @@ export default [
JSType: "0b11101110",
finalize: true,
construct: true,
hasPendingActivity: true,
values: ["on_data", "on_drain", "on_error"],
proto: {
send: {
fn: "send",


@@ -4,6 +4,7 @@ export default [
define({
name: "RedisClient",
construct: true,
constructNeedsThis: true,
call: false,
finalize: true,
configurable: false,
@@ -226,11 +227,12 @@ export default [
zrank: { fn: "zrank" },
zrevrank: { fn: "zrevrank" },
subscribe: { fn: "subscribe" },
duplicate: { fn: "duplicate" },
psubscribe: { fn: "psubscribe" },
unsubscribe: { fn: "unsubscribe" },
punsubscribe: { fn: "punsubscribe" },
pubsub: { fn: "pubsub" },
},
values: ["onconnect", "onclose", "connectionPromise", "hello"],
values: ["onconnect", "onclose", "connectionPromise", "hello", "subscriptionCallbackMap"],
}),
];


@@ -689,6 +689,7 @@ extern "C" JSC_DEFINE_HOST_FUNCTION(JSMock__jsModuleMock, (JSC::JSGlobalObject *
if (removeFromCJS) {
globalObject->requireMap()->remove(globalObject, specifierString);
RETURN_IF_EXCEPTION(scope, {});
}
globalObject->onLoadPlugins.addModuleMock(vm, specifier, mock);


@@ -2160,8 +2160,8 @@ static JSValue constructProcessConfigObject(VM& vm, JSObject* processObject)
config->putDirect(vm, JSC::Identifier::fromString(vm, "target_defaults"_s), JSC::constructEmptyObject(globalObject), 0);
config->putDirect(vm, JSC::Identifier::fromString(vm, "variables"_s), variables, 0);
#if OS(WINDOWS)
variables->putDirect(vm, JSC::Identifier::fromString(vm, "asan"_s), JSC::jsNumber(0), 0);
variables->putDirect(vm, JSC::Identifier::fromString(vm, "control_flow_guard"_s), JSC::jsBoolean(false), 0);
variables->putDirect(vm, JSC::Identifier::fromString(vm, "coverage"_s), JSC::jsBoolean(false), 0);
variables->putDirect(vm, JSC::Identifier::fromString(vm, "dcheck_always_on"_s), JSC::jsNumber(0), 0);
@@ -2175,7 +2175,6 @@ static JSValue constructProcessConfigObject(VM& vm, JSObject* processObject)
variables->putDirect(vm, JSC::Identifier::fromString(vm, "napi_build_version"_s), JSC::jsNumber(Napi::DEFAULT_NAPI_VERSION), 0);
variables->putDirect(vm, JSC::Identifier::fromString(vm, "nasm_version"_s), JSC::jsNumber(2), 0);
#elif OS(MACOS)
variables->putDirect(vm, JSC::Identifier::fromString(vm, "asan"_s), JSC::jsNumber(0), 0); // TODO: ASAN_ENABLED
variables->putDirect(vm, JSC::Identifier::fromString(vm, "control_flow_guard"_s), JSC::jsBoolean(false), 0);
variables->putDirect(vm, JSC::Identifier::fromString(vm, "coverage"_s), JSC::jsBoolean(false), 0);
variables->putDirect(vm, JSC::Identifier::fromString(vm, "dcheck_always_on"_s), JSC::jsNumber(0), 0);
@@ -2190,7 +2189,6 @@ static JSValue constructProcessConfigObject(VM& vm, JSObject* processObject)
variables->putDirect(vm, JSC::Identifier::fromString(vm, "arm_fpu"_s), JSC::jsString(vm, String("neon"_s)), 0);
#endif
#elif OS(LINUX)
variables->putDirect(vm, JSC::Identifier::fromString(vm, "asan"_s), JSC::jsNumber(0), 0); // TODO: ASAN_ENABLED
variables->putDirect(vm, JSC::Identifier::fromString(vm, "control_flow_guard"_s), JSC::jsBoolean(false), 0);
variables->putDirect(vm, JSC::Identifier::fromString(vm, "coverage"_s), JSC::jsBoolean(false), 0);
variables->putDirect(vm, JSC::Identifier::fromString(vm, "dcheck_always_on"_s), JSC::jsNumber(0), 0);
@@ -2216,6 +2214,14 @@ static JSValue constructProcessConfigObject(VM& vm, JSObject* processObject)
#error "Unsupported architecture"
#endif
#if ASAN_ENABLED
// TODO: figure out why this causes v8.test.ts to fail.
// variables->putDirect(vm, JSC::Identifier::fromString(vm, "asan"_s), JSC::jsNumber(1), 0);
variables->putDirect(vm, JSC::Identifier::fromString(vm, "asan"_s), JSC::jsNumber(0), 0);
#else
variables->putDirect(vm, JSC::Identifier::fromString(vm, "asan"_s), JSC::jsNumber(0), 0);
#endif
config->freeze(vm);
return config;
}


@@ -77,7 +77,7 @@ pub const CatchScope = struct {
/// Intended for use with `try`. Returns if there is already a pending exception or if traps cause
/// an exception to be thrown (this is the same as how RETURN_IF_EXCEPTION behaves in C++)
pub fn returnIfException(self: *CatchScope) bun.JSError!void {
pub fn returnIfException(self: *CatchScope) !void {
if (self.exceptionIncludingTraps() != null) return error.JSError;
}


@@ -276,24 +276,25 @@ const errors: ErrorCodeMapping = [
["ERR_OSSL_EVP_INVALID_DIGEST", Error],
["ERR_KEY_GENERATION_JOB_FAILED", Error],
["ERR_MISSING_OPTION", TypeError],
["ERR_REDIS_CONNECTION_CLOSED", Error, "RedisError"],
["ERR_REDIS_INVALID_RESPONSE", Error, "RedisError"],
["ERR_REDIS_INVALID_BULK_STRING", Error, "RedisError"],
["ERR_REDIS_INVALID_ARRAY", Error, "RedisError"],
["ERR_REDIS_INVALID_INTEGER", Error, "RedisError"],
["ERR_REDIS_INVALID_SIMPLE_STRING", Error, "RedisError"],
["ERR_REDIS_INVALID_ERROR_STRING", Error, "RedisError"],
["ERR_REDIS_TLS_NOT_AVAILABLE", Error, "RedisError"],
["ERR_REDIS_TLS_UPGRADE_FAILED", Error, "RedisError"],
["ERR_REDIS_AUTHENTICATION_FAILED", Error, "RedisError"],
["ERR_REDIS_INVALID_PASSWORD", Error, "RedisError"],
["ERR_REDIS_INVALID_USERNAME", Error, "RedisError"],
["ERR_REDIS_INVALID_DATABASE", Error, "RedisError"],
["ERR_REDIS_INVALID_COMMAND", Error, "RedisError"],
["ERR_REDIS_INVALID_ARGUMENT", Error, "RedisError"],
["ERR_REDIS_INVALID_RESPONSE_TYPE", Error, "RedisError"],
["ERR_REDIS_CONNECTION_CLOSED", Error, "RedisError"],
["ERR_REDIS_CONNECTION_TIMEOUT", Error, "RedisError"],
["ERR_REDIS_IDLE_TIMEOUT", Error, "RedisError"],
["ERR_REDIS_INVALID_ARGUMENT", Error, "RedisError"],
["ERR_REDIS_INVALID_ARRAY", Error, "RedisError"],
["ERR_REDIS_INVALID_BULK_STRING", Error, "RedisError"],
["ERR_REDIS_INVALID_COMMAND", Error, "RedisError"],
["ERR_REDIS_INVALID_DATABASE", Error, "RedisError"],
["ERR_REDIS_INVALID_ERROR_STRING", Error, "RedisError"],
["ERR_REDIS_INVALID_INTEGER", Error, "RedisError"],
["ERR_REDIS_INVALID_PASSWORD", Error, "RedisError"],
["ERR_REDIS_INVALID_RESPONSE", Error, "RedisError"],
["ERR_REDIS_INVALID_RESPONSE_TYPE", Error, "RedisError"],
["ERR_REDIS_INVALID_SIMPLE_STRING", Error, "RedisError"],
["ERR_REDIS_INVALID_STATE", Error, "RedisError"],
["ERR_REDIS_INVALID_USERNAME", Error, "RedisError"],
["ERR_REDIS_TLS_NOT_AVAILABLE", Error, "RedisError"],
["ERR_REDIS_TLS_UPGRADE_FAILED", Error, "RedisError"],
["HPE_UNEXPECTED_CONTENT_LENGTH", Error],
["HPE_INVALID_TRANSFER_ENCODING", Error],
["HPE_INVALID_EOF_STATE", Error],


@@ -5,6 +5,10 @@
#include "JavaScriptCore/JSCast.h"
#include "JavaScriptCore/JSArrayBufferView.h"
#if ASAN_ENABLED
#include <sanitizer/lsan_interface.h>
#endif
namespace Bun {
using namespace JSC;
@@ -30,4 +34,12 @@ JSC_DEFINE_HOST_FUNCTION(jsFunction_hasReifiedStatic, (JSC::JSGlobalObject * glo
return JSValue::encode(jsBoolean(false));
}
JSC_DEFINE_HOST_FUNCTION(jsFunction_lsanDoLeakCheck, (JSC::JSGlobalObject * globalObject, JSC::CallFrame* callFrame))
{
#if ASAN_ENABLED
return JSValue::encode(jsNumber(__lsan_do_recoverable_leak_check()));
#endif
return encodedJSUndefined();
}
}


@@ -7,5 +7,6 @@ namespace Bun {
JSC_DECLARE_HOST_FUNCTION(jsFunction_arrayBufferViewHasBuffer);
JSC_DECLARE_HOST_FUNCTION(jsFunction_hasReifiedStatic);
JSC_DECLARE_HOST_FUNCTION(jsFunction_lsanDoLeakCheck);
}


@@ -37,9 +37,7 @@ JSC::JSValue generateModule(JSC::JSGlobalObject* globalObject, JSC::VM& vm, cons
{
auto throwScope = DECLARE_THROW_SCOPE(vm);
auto&& origin = SourceOrigin(WTF::URL(urlString));
SourceCode source = JSC::makeSource(SOURCE, origin,
JSC::SourceTaintedOrigin::Untainted,
moduleName);
SourceCode source = JSC::makeSource(SOURCE, origin, JSC::SourceTaintedOrigin::Untainted, moduleName);
maybeAddCodeCoverage(vm, source);
JSFunction* func
= JSFunction::create(
@@ -100,12 +98,7 @@ ALWAYS_INLINE JSC::JSValue generateNativeModule(
}
#ifdef BUN_DYNAMIC_JS_LOAD_PATH
JSValue initializeInternalModuleFromDisk(
JSGlobalObject* globalObject,
VM& vm,
const WTF::String& moduleName,
WTF::String fileBase,
const WTF::String& urlString)
JSValue initializeInternalModuleFromDisk(JSGlobalObject* globalObject, VM& vm, const WTF::String& moduleName, WTF::String fileBase, const WTF::String& urlString)
{
WTF::String file = makeString(ASCIILiteral::fromLiteralUnsafe(BUN_DYNAMIC_JS_LOAD_PATH), "/"_s, WTFMove(fileBase));
if (auto contents = WTF::FileSystemImpl::readEntireFile(file)) {


@@ -21,6 +21,8 @@ static JSC_DECLARE_CUSTOM_GETTER(jsJSBunRequestGetCookies);
static JSC_DECLARE_HOST_FUNCTION(jsJSBunRequestClone);
extern "C" void Bun__JSRequest__calculateEstimatedByteSize(void* requestPtr);
static const HashTableValue JSBunRequestPrototypeValues[] = {
{ "params"_s, static_cast<unsigned>(JSC::PropertyAttribute::CustomAccessor | JSC::PropertyAttribute::ReadOnly | JSC::PropertyAttribute::DontDelete), NoIntrinsic, { HashTableValue::GetterSetterType, jsJSBunRequestGetParams, nullptr } },
{ "cookies"_s, static_cast<unsigned>(JSC::PropertyAttribute::CustomAccessor | JSC::PropertyAttribute::ReadOnly | JSC::PropertyAttribute::DontDelete), NoIntrinsic, { HashTableValue::GetterSetterType, jsJSBunRequestGetCookies, nullptr } },
@@ -29,6 +31,10 @@ static const HashTableValue JSBunRequestPrototypeValues[] = {
JSBunRequest* JSBunRequest::create(JSC::VM& vm, JSC::Structure* structure, void* sinkPtr, JSObject* params)
{
// Do this **extremely** early, before we create the JSValue.
// We do not want to risk the GC running before this function is called.
Bun__JSRequest__calculateEstimatedByteSize(sinkPtr);
JSBunRequest* ptr = new (NotNull, JSC::allocateCell<JSBunRequest>(vm)) JSBunRequest(vm, structure, sinkPtr);
ptr->finishCreation(vm, params);
return ptr;
@@ -124,13 +130,12 @@ JSBunRequest::JSBunRequest(JSC::VM& vm, JSC::Structure* structure, void* sinkPtr
{
}
extern "C" size_t Request__estimatedSize(void* requestPtr);
extern "C" void Bun__JSRequest__calculateEstimatedByteSize(void* requestPtr);
void JSBunRequest::finishCreation(JSC::VM& vm, JSObject* params)
{
Base::finishCreation(vm);
m_params.setMayBeNull(vm, this, params);
m_cookies.clear();
Bun__JSRequest__calculateEstimatedByteSize(this->wrapped());
auto size = Request__estimatedSize(this->wrapped());
vm.heap.reportExtraMemoryAllocated(this, size);


@@ -367,6 +367,10 @@ pub const JSGlobalObject = opaque {
return this.throwValue(err);
}
/// Throw an Error from a formatted string.
///
/// Note: If you are throwing an error within somewhere in the Bun API,
/// chances are you should be using `.ERR(...).throw()` instead.
pub fn throw(this: *JSGlobalObject, comptime fmt: [:0]const u8, args: anytype) JSError {
const instance = this.createErrorInstance(fmt, args);
bun.assert(instance != .zero);
@@ -789,6 +793,9 @@ pub const JSGlobalObject = opaque {
return .{ .globalObject = this };
}
/// Throw an error from within the Bun runtime.
///
/// The set of errors accepted by `ERR()` is defined in `ErrorCode.ts`.
pub fn ERR(global: *JSGlobalObject, comptime code: jsc.Error, comptime fmt: [:0]const u8, args: anytype) @import("ErrorCode").ErrorBuilder(code, fmt, @TypeOf(args)) {
return .{ .global = global, .args = args };
}


@@ -1,34 +1,26 @@
/// Opaque type for working with JavaScript `Map` objects.
pub const JSMap = opaque {
extern fn JSC__JSMap__create(*JSGlobalObject) JSValue;
pub const create = bun.cpp.JSC__JSMap__create;
pub const set = bun.cpp.JSC__JSMap__set;
pub fn create(globalObject: *JSGlobalObject) JSValue {
return JSC__JSMap__create(globalObject);
}
/// Retrieve a value from this JS Map object.
///
/// Note this shares semantics with the JS `Map.prototype.get` method, and
/// will return .js_undefined if a value is not found.
pub const get = bun.cpp.JSC__JSMap__get;
pub fn set(this: *JSMap, globalObject: *JSGlobalObject, key: JSValue, value: JSValue) void {
return bun.cpp.JSC__JSMap__set(this, globalObject, key, value);
}
/// Test whether this JS Map object has a given key.
pub const has = bun.cpp.JSC__JSMap__has;
pub fn get_(this: *JSMap, globalObject: *JSGlobalObject, key: JSValue) JSValue {
return bun.cpp.JSC__JSMap__get_(this, globalObject, key);
}
/// Attempt to remove a key from this JS Map object.
pub const remove = bun.cpp.JSC__JSMap__remove;
pub fn get(this: *JSMap, globalObject: *JSGlobalObject, key: JSValue) ?JSValue {
const value = get_(this, globalObject, key);
if (value.isEmpty()) {
return null;
}
return value;
}
pub fn has(this: *JSMap, globalObject: *JSGlobalObject, key: JSValue) bool {
return bun.cpp.JSC__JSMap__has(this, globalObject, key);
}
pub fn remove(this: *JSMap, globalObject: *JSGlobalObject, key: JSValue) bool {
return bun.cpp.JSC__JSMap__remove(this, globalObject, key);
}
/// Retrieve the number of entries in this JS Map object.
pub const size = bun.cpp.JSC__JSMap__size;
/// Attempt to convert a `JSValue` to a `*JSMap`.
///
/// Returns `null` if the value is not a Map.
pub fn fromJS(value: JSValue) ?*JSMap {
if (value.jsTypeLoose() == .Map) {
return bun.cast(*JSMap, value.asEncoded().asPtr.?);
@@ -41,5 +33,4 @@ pub const JSMap = opaque {
const bun = @import("bun");
const jsc = bun.jsc;
const JSGlobalObject = jsc.JSGlobalObject;
const JSValue = jsc.JSValue;


@@ -6,7 +6,6 @@ pub const JSObject = opaque {
}
extern fn JSC__JSObject__getIndex(this: JSValue, globalThis: *JSGlobalObject, i: u32) JSValue;
extern fn JSC__JSObject__putRecord(this: *JSObject, global: *JSGlobalObject, key: *ZigString, values: [*]ZigString, len: usize) void;
extern fn Bun__JSObject__getCodePropertyVMInquiry(global: *JSGlobalObject, obj: *JSObject) JSValue;
extern fn JSC__createStructure(global: *jsc.JSGlobalObject, owner: *jsc.JSCell, length: u32, names: [*]ExternColumnIdentifier) jsc.JSValue;
extern fn JSC__JSObject__create(global_object: *JSGlobalObject, length: usize, ctx: *anyopaque, initializer: InitializeCallback) JSValue;
@@ -123,10 +122,10 @@ pub const JSObject = opaque {
const InitializeCallback = *const fn (ctx: *anyopaque, obj: *JSObject, global: *JSGlobalObject) callconv(.C) void;
pub fn Initializer(comptime Ctx: type, comptime func: fn (*Ctx, obj: *JSObject, global: *JSGlobalObject) void) type {
pub fn Initializer(comptime Ctx: type, comptime func: fn (*Ctx, obj: *JSObject, global: *JSGlobalObject) bun.JSError!void) type {
return struct {
pub fn call(this: *anyopaque, obj: *JSObject, global: *JSGlobalObject) callconv(.C) void {
@call(bun.callmod_inline, func, .{ @as(*Ctx, @ptrCast(@alignCast(this))), obj, global });
func(@ptrCast(@alignCast(this)), obj, global) catch |err| bun.jsc.host_fn.voidFromJSError(err, global);
}
};
}
@@ -150,8 +149,8 @@ pub const JSObject = opaque {
return value;
}
pub fn putRecord(this: *JSObject, global: *JSGlobalObject, key: *ZigString, values: []ZigString) void {
return JSC__JSObject__putRecord(this, global, key, values.ptr, values.len);
pub fn putRecord(this: *JSObject, global: *JSGlobalObject, key: *ZigString, values: []ZigString) bun.JSError!void {
return bun.cpp.JSC__JSObject__putRecord(this, global, key, values.ptr, values.len);
}
/// This will not call getters or be observable from JavaScript.


@@ -220,6 +220,9 @@ pub const JSPromise = opaque {
bun.cpp.JSC__JSPromise__setHandled(this, vm);
}
/// Create a new resolved promise resolving to a given value.
///
/// Note: If you want the result as a JSValue, use `JSPromise.resolvedPromiseValue` instead.
pub fn resolvedPromise(globalThis: *JSGlobalObject, value: JSValue) *JSPromise {
return JSC__JSPromise__resolvedPromise(globalThis, value);
}
@@ -230,6 +233,9 @@ pub const JSPromise = opaque {
return JSC__JSPromise__resolvedPromiseValue(globalThis, value);
}
/// Create a new rejected promise rejecting to a given value.
///
/// Note: If you want the result as a JSValue, use `JSPromise.rejectedPromiseValue` instead.
pub fn rejectedPromise(globalThis: *JSGlobalObject, value: JSValue) *JSPromise {
return JSC__JSPromise__rejectedPromise(globalThis, value);
}
@@ -275,6 +281,11 @@ pub const JSPromise = opaque {
bun.cpp.JSC__JSPromise__rejectAsHandled(this, globalThis, value) catch return bun.debugAssert(false); // TODO: properly propagate exception upwards
}
/// Create a new pending promise.
///
/// Note: You should use `JSPromise.resolvedPromise` or
/// `JSPromise.rejectedPromise` if you want to create a promise that
/// is already resolved or rejected.
pub fn create(globalThis: *JSGlobalObject) *JSPromise {
return JSC__JSPromise__create(globalThis);
}


@@ -1,3 +1,7 @@
/// Holds a reference to a JSValue.
///
/// This reference can be either weak (a JSValue) or may be strong, in which
/// case it prevents the garbage collector from collecting the value.
pub const JSRef = union(enum) {
weak: jsc.JSValue,
strong: jsc.Strong.Optional,
@@ -91,6 +95,11 @@ pub const JSRef = union(enum) {
};
}
/// Test whether this reference is a strong reference.
pub fn isStrong(this: *const @This()) bool {
return this.* == .strong;
}
pub fn deinit(this: *@This()) void {
switch (this.*) {
.weak => {


@@ -347,12 +347,11 @@ pub const JSValue = enum(i64) {
@compileError("Unsupported key type in put(). Expected ZigString or bun.String, got " ++ @typeName(Key));
}
}
extern fn JSC__JSValue__putMayBeIndex(target: JSValue, globalObject: *JSGlobalObject, key: *const String, value: jsc.JSValue) void;
/// Note: key can't be numeric (if so, use putMayBeIndex instead)
/// Same as `.put` but accepts both non-numeric and numeric keys.
/// Prefer to use `.put` if the key is guaranteed to be non-numeric (e.g. known at comptime)
pub inline fn putMayBeIndex(this: JSValue, globalObject: *JSGlobalObject, key: *const String, value: JSValue) void {
JSC__JSValue__putMayBeIndex(this, globalObject, key, value);
pub fn putMayBeIndex(this: JSValue, globalObject: *JSGlobalObject, key: *const String, value: JSValue) bun.JSError!void {
return bun.cpp.JSC__JSValue__putMayBeIndex(this, globalObject, key, value);
}
extern fn JSC__JSValue__putToPropertyKey(target: JSValue, globalObject: *JSGlobalObject, key: jsc.JSValue, value: jsc.JSValue) void;
@@ -1191,10 +1190,8 @@ pub const JSValue = enum(i64) {
return getZigString(this, global).toSliceZ(allocator);
}
extern fn JSC__JSValue__toString(this: JSValue, globalThis: *JSGlobalObject) *JSString;
/// On exception, this returns the empty string.
pub fn toString(this: JSValue, globalThis: *JSGlobalObject) *JSString {
return JSC__JSValue__toString(this, globalThis);
pub fn toJSString(this: JSValue, globalThis: *JSGlobalObject) bun.JSError!*JSString {
return bun.cpp.JSC__JSValue__toStringOrNull(this, globalThis);
}
extern fn JSC__JSValue__jsonStringify(this: JSValue, globalThis: *JSGlobalObject, indent: u32, out: *bun.String) void;
@@ -1202,17 +1199,6 @@ pub const JSValue = enum(i64) {
return bun.jsc.fromJSHostCallGeneric(globalThis, @src(), JSC__JSValue__jsonStringify, .{ this, globalThis, indent, out });
}
extern fn JSC__JSValue__toStringOrNull(this: JSValue, globalThis: *JSGlobalObject) ?*JSString;
// Calls JSValue::toStringOrNull. Returns error on exception.
pub fn toJSString(this: JSValue, globalThis: *JSGlobalObject) bun.JSError!*JSString {
var scope: ExceptionValidationScope = undefined;
scope.init(globalThis, @src());
defer scope.deinit();
const maybe_string = JSC__JSValue__toStringOrNull(this, globalThis);
scope.assertExceptionPresenceMatches(maybe_string == null);
return maybe_string orelse error.JSError;
}
/// Call `toString()` on the JSValue and clone the result.
pub fn toSliceOrNull(this: JSValue, globalThis: *JSGlobalObject) bun.JSError!ZigString.Slice {
const str = try bun.String.fromJS(this, globalThis);
@@ -2424,7 +2410,6 @@ const ArrayBuffer = jsc.ArrayBuffer;
const C_API = bun.jsc.C;
const CatchScope = jsc.CatchScope;
const DOMURL = jsc.DOMURL;
const ExceptionValidationScope = jsc.ExceptionValidationScope;
const JSArrayIterator = jsc.JSArrayIterator;
const JSCell = jsc.JSCell;
const JSGlobalObject = jsc.JSGlobalObject;

Some files were not shown because too many files have changed in this diff.