### What does this PR do?
Fixes #24387
### How did you verify your code works?
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Marko Vejnovic <marko@bun.com>
## Summary
Fixes a segfault that occurred when calling `process.dlopen` with
`null`, `undefined`, or primitive values for `exports`.
Previously, this would cause a crash at address `0x00000000` in
`node_module_register` due to dereferencing an uninitialized
`strongExportsObject`.
## Changes
- Modified `src/bun.js/bindings/v8/node.cpp` to use JSC's `toObject()`
instead of manual type checking
- This matches Node.js `ToObject()` behavior:
- Throws `TypeError` for `null`/`undefined`
- Creates wrapper objects for primitives
- Preserves existing objects
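For illustration, here is a minimal sketch of the `ToObject()` semantics the fix relies on, expressed in plain JavaScript terms (this is not the binding code itself, just the three cases above):
```ts
// Sketch of ToObject()-style coercion used conceptually by the fix.
function toObjectLike(value: unknown): object {
  if (value === null || value === undefined) {
    // Instead of crashing, a TypeError is thrown for null/undefined exports.
    throw new TypeError("Cannot convert undefined or null to object");
  }
  return Object(value); // primitives become wrapper objects; objects pass through unchanged
}
```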
## Test Plan
Added `test/js/node/process/dlopen-non-object-exports.test.ts` with
three test cases:
- Null exports (should throw)
- Undefined exports (should throw)
- Primitive exports (should create wrapper)
All tests pass with the fix.
## Related Issue
Fixes the first bug discovered in the segfault investigation.
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Fixes incorrect JWK "d" field length for exported elliptic curve private
keys. The "d" field is now correctly padded to ensure RFC 7518
compliance.
## Problem
When exporting EC private keys to JWK format, the "d" field would
sometimes be shorter than required by RFC 7518 because
`convertToBytes()` doesn't pad the result when the BIGNUM has leading
zeros. This caused incompatibility with Chrome's strict validation,
though Node.js and Firefox would accept the malformed keys.
Expected lengths per RFC 7518:
- P-256: 32 bytes → 43 base64url characters
- P-384: 48 bytes → 64 base64url characters
- P-521: 66 bytes → 88 base64url characters
## Solution
Changed `src/bun.js/bindings/webcrypto/CryptoKeyECOpenSSL.cpp:420` to
use `convertToBytesExpand(privateKey, keySizeInBytes)` instead of
`convertToBytes(privateKey)`, ensuring the private key is padded with
leading zeros when necessary. This matches the behavior already used for
the x and y public key coordinates.
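Conceptually, the padding and the expected base64url lengths look like this (a TypeScript sketch for illustration, not the actual C++ change):
```ts
// Left-pad a big-endian private scalar to the curve's byte length (sketch only;
// the real fix calls convertToBytesExpand in CryptoKeyECOpenSSL.cpp).
function padLeft(bytes: Uint8Array, size: number): Uint8Array {
  if (bytes.length >= size) return bytes;
  const out = new Uint8Array(size);
  out.set(bytes, size - bytes.length); // restore the leading zero bytes the BIGNUM dropped
  return out;
}

// Unpadded base64url length for n bytes: ceil(4n / 3) -> 43 / 64 / 88 for 32 / 48 / 66 bytes.
const b64urlLen = (n: number) => Math.ceil((4 * n) / 3);
```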
## Test plan
- ✅ Added regression test in `test/regression/issue/24399.test.ts` that
generates multiple keys for each curve and verifies correct "d" field
length
- ✅ Test fails with `USE_SYSTEM_BUN=1 bun test` (reproduces the bug)
- ✅ Test passes with `bun bd test` (verifies the fix)
- ✅ Existing crypto tests pass
Fixes #24399
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
Refactored NAPI property and element access to use inline methods and
improved error handling. Added comprehensive tests for default value
behavior and numeric string key operations in NAPI, ensuring correct
handling of missing properties, integer keys, and property deletion.
Updated TypeScript tests to cover new scenarios.
### How did you verify your code works?
Tests
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
## Summary
Fixes 100% CPU usage on idle WebSocket servers, a regression introduced
between bun-v1.2.23 and bun-v1.3.0.
Many users reported WebSocket server CPU usage jumping to 100% on idle
connections after upgrading to v1.3.0. Investigation revealed a missing
`poll_ref.unref()` call in the WebSocket upgrade path.
## Root Cause
In commit 625e537f5d (#23348), the `OnBeforeOpen` callback mechanism was
removed as part of refactoring the WebSocket upgrade process. However,
this callback contained a critical cleanup step:
```zig
defer ctx.this.poll_ref.unref(ctx.globalObject.bunVM());
```
When a `NodeHTTPResponse` is created, `poll_ref.ref()` is called (line
314) to keep the event loop alive while handling the HTTP request. After
a WebSocket upgrade, the HTTP response object is no longer relevant and
its `poll_ref` must be unref'd to indicate the request processing is
complete.
Without this unref, the event loop maintains an active reference even
after the upgrade completes, causing the CPU to spin at 100% waiting for
events on what should be an idle connection.
## Changes
- Added `poll_ref.unref()` call in `NodeHTTPResponse.upgrade()` after
setting the `upgraded` flag
- Added regression test to verify event loop properly exits after
WebSocket upgrade
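A rough sketch of the scenario that regression test covers (not the actual test; the handlers are illustrative):
```ts
// After the upgrade, the finished HTTP response must not keep the event loop referenced.
const server = Bun.serve({
  port: 0,
  fetch(req, server) {
    if (server.upgrade(req)) return; // hand the connection to the WebSocket handlers
    return new Response("expected a WebSocket upgrade", { status: 400 });
  },
  websocket: {
    message(ws, message) {
      ws.send(message); // simple echo; the connection then sits idle
    },
  },
});

// With the fix, an idle upgraded connection no longer pins a core at 100%.
console.log(`listening on port ${server.port}`);
```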
## Test Plan
- [x] Code compiles successfully
- [x] Existing WebSocket tests pass
- [x] Manual testing confirms CPU usage returns to normal on idle
WebSocket connections
## Related Issues
Fixes the 100% CPU usage on idle WebSocket servers reported by users who
upgraded from bun-v1.2.23 to bun-v1.3.0.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
Restore call to us_get_default_ca_certificates, and
X509_STORE_set_default_paths
Fixes https://github.com/oven-sh/bun/issues/23735
### How did you verify your code works?
Manually test running:
```bash
bun -e "await fetch('https://secure-api.eloview.com').then(res => res.t
ext()).then(console.log);"
```
should not result in:
```js
error: unable to get local issuer certificate
path: "https://secure-api.eloview.com/",
errno: 0,
code: "UNABLE_TO_GET_ISSUER_CERT_LOCALLY"
```
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Bug Fixes**
* Enhanced system root certificate handling to ensure consistent
validation across all secure connections.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
### What does this PR do?
Adds `"configVersion"` to bun.lock(b). The version will be used to keep
default settings the same if they would be breaking across bun versions.
fixes ENG-21389
fixes ENG-21388
### How did you verify your code works?
TODO:
- [ ] new project
- [ ] existing project without configVersion
- [ ] existing project with configVersion
- [ ] same as above but with bun.lockb
- [ ] configVersion@0 defaults to hoisted linker
- [ ] new projects use isolated linker
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
This PR introduces a new postinstall optimization system that
significantly reduces the need to run lifecycle scripts for certain
packages by intelligently handling their requirements at install time.
## Key Features
### 1. Native Binlink Optimization
When packages like `esbuild` ship platform-specific binaries as optional
dependencies, we now:
- Detect the native binlink pattern (enabled by default for `esbuild`)
- Find the matching platform-specific dependency based on target CPU/OS
- Link binaries directly from the platform-specific package (e.g.,
`@esbuild/darwin-arm64`)
- Fall back gracefully if the platform-specific package isn't found
**Result**: No postinstall scripts needed for esbuild and similar
packages.
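As a rough illustration of the platform-matching idea (the real logic lives in Zig; the helper below is illustrative, not the implementation, and the real matcher also handles fallbacks):
```ts
// Map the current platform/arch to esbuild's optional platform package name,
// e.g. "@esbuild/darwin-arm64" on Apple Silicon or "@esbuild/linux-x64" on x64 Linux.
function esbuildPlatformPackage(): string {
  return `@esbuild/${process.platform}-${process.arch}`;
}

console.log(esbuildPlatformPackage());
```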
### 2. Lifecycle Script Skipping
For packages like `sharp` that run heavy postinstall scripts:
- Skip lifecycle scripts entirely (enabled by default for `sharp`)
- Prevents downloading large binaries or compiling native code
unnecessarily
- Reduces install time and potential failures in restricted environments
## Configuration
Both features can be configured via `package.json`:
```json
{
"nativeDependencies": ["esbuild", "my-custom-package"],
"ignoreScripts": ["sharp", "another-package"]
}
```
Set to empty arrays to disable defaults:
```json
{
"nativeDependencies": [],
"ignoreScripts": []
}
```
Environment variable overrides:
- `BUN_FEATURE_FLAG_DISABLE_NATIVE_DEPENDENCY_LINKER=1` - disable native
binlink
- `BUN_FEATURE_FLAG_DISABLE_IGNORE_SCRIPTS=1` - disable script ignoring
## Implementation Details
### Core Components
- **`postinstall_optimizer.zig`**: New file containing the optimizer
logic
- `PostinstallOptimizer` enum with `native_binlink` and `ignore`
variants
- `List` type to track optimization strategies per package hash
- Defaults for `esbuild` (native binlink) and `sharp` (ignore)
- **`Bin.Linker` changes**: Extended to support separate target paths
- `target_node_modules_path`: Where to find the actual binary
- `target_package_name`: Name of the package containing the binary
- Fallback logic when native binlink optimization fails
### Modified Components
- **PackageInstaller.zig**: Checks optimizer before:
- Enqueueing lifecycle scripts
- Linking binaries (with platform-specific package resolution)
- **isolated_install/Installer.zig**: Similar checks for isolated linker
mode
- `maybeReplaceNodeModulesPath()` resolves platform-specific packages
- Retry logic without optimization on failure
- **Lockfile**: Added `postinstall_optimizer` field to persist
configuration
## Changes Included
- Updated `esbuild` from 0.21.5 to 0.25.11 (testing with latest)
- VS Code launch config updates for debugging install with new flags
- New feature flags in `env_var.zig`
## Test Plan
- [x] Existing install tests pass
- [ ] Test esbuild install without postinstall scripts running
- [ ] Test sharp install with scripts skipped
- [ ] Test custom package.json configuration
- [ ] Test fallback when platform-specific package not found
- [ ] Test feature flag overrides
🤖 Generated with [Claude Code](https://claude.com/claude-code)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Native binlink optimization: installs platform-specific binaries when
available, with a safe retry fallback and verbose logging option.
* Per-package postinstall controls to optionally skip lifecycle scripts.
* New feature flags to disable native binlink optimization and to
disable lifecycle-script ignoring.
* **Tests**
* End-to-end tests and test packages added to validate native binlink
behavior across install scenarios and linker modes.
* **Documentation**
* Bench README and sample app migrated to a Next.js-based setup.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
Fixes Next.js 16 + React Compiler build failure when using Bun runtime.
## Issue
When `Module._resolveFilename` was overridden (e.g., by Next.js's
require-hook), Bun was not passing the `options` parameter (which
contains `paths`) to the override function. This caused resolution
failures when the override tried to use custom resolution paths.
Additionally, when `Module._resolveFilename` was called directly with
`options.paths`, Bun was ignoring the paths parameter and using default
resolution.
## Root Causes
1. In `ImportMetaObject.cpp`, when calling an overridden
`_resolveFilename` function, the options object with paths was not being
passed as the 4th argument.
2. In `NodeModuleModule.cpp`, `jsFunctionResolveFileName` was calling
`Bun__resolveSync` without extracting and using the `options.paths`
parameter.
## Solution
1. In `ImportMetaObject.cpp`: When `userPathList` is provided, construct
an options object with `{paths: userPathList}` and pass it as the 4th
argument to the overridden `_resolveFilename` function.
2. In `NodeModuleModule.cpp`: Extract `options.paths` from the 4th
argument and call `Bun__resolveSyncWithPaths` when paths are provided,
instead of always using `Bun__resolveSync`.
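A small sketch of the override pattern this affects (the same shape Next.js's require-hook uses; the logging is illustrative):
```ts
import Module from "node:module";

const original = (Module as any)._resolveFilename;
(Module as any)._resolveFilename = function (
  request: string,
  parent: unknown,
  isMain: boolean,
  options?: { paths?: string[] },
) {
  // With the fix, `options` (including options.paths) reaches overrides like this one,
  // and direct calls that pass options.paths resolve against those paths.
  if (options?.paths) console.log("resolving", request, "against", options.paths);
  return original.call(this, request, parent, isMain, options);
};
```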
## Reproduction
Before this fix, running:
```bash
bun --bun next build --turbopack
```
on a Next.js 16 app with React Compiler enabled would fail with:
```
Cannot find module './node_modules/babel-plugin-react-compiler'
```
## Testing
- Added comprehensive tests for `Module._resolveFilename` with
`options.paths`
- Verified Next.js 16 + React Compiler + Turbopack builds successfully
with Bun
- All 5 new tests pass with the fix, 3 fail without it
- All existing tests continue to pass
## Files Changed
- `src/bun.js/bindings/ImportMetaObject.cpp` - Pass options to override
- `src/bun.js/modules/NodeModuleModule.cpp` - Handle options.paths in
_resolveFilename
- `test/js/node/module/module-resolve-filename-paths.test.js` - New test
suite
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Extract `FetchTasklet` struct from `src/bun.js/webcore/fetch.zig` into
its own file at `src/bun.js/webcore/fetch/FetchTasklet.zig` to improve
code organization and modularity.
## Changes
- Moved `FetchTasklet` struct definition (1336 lines) to new file
`src/bun.js/webcore/fetch/FetchTasklet.zig`
- Added all necessary imports to the new file
- Updated `fetch.zig` line 61 to import `FetchTasklet` from the new
location: `pub const FetchTasklet =
@import("./fetch/FetchTasklet.zig").FetchTasklet;`
- Verified compilation succeeds with `bun bd`
## Impact
- No functional changes - this is a pure refactoring
- Improves code organization by separating the large `FetchTasklet`
implementation
- Makes the codebase more maintainable and easier to navigate
- Reduces `fetch.zig` from 2768 lines to 1433 lines
## Test plan
- [x] Built successfully with `bun bd`
- [x] No changes to functionality - pure code organization refactor
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
fixes #23901
### How did you verify your code works?
with a test
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Fixes a bug where `bun update --interactive` only updated `package.json`
but didn't actually install the updated packages. Users had to manually
run `bun install` afterwards.
## Root Cause
The bug was in `savePackageJson()` in
`src/cli/update_interactive_command.zig`:
1. The function wrote the updated `package.json` to disk
2. But it **didn't update the in-memory cache**
(`WorkspacePackageJSONCache`)
3. When `installWithManager()` ran, it called `getWithPath()` which
returned the **stale cached version**
4. So the installation proceeded with the old dependencies
## The Fix
Update the cache entry after writing to disk (line 116):
```zig
package_json.*.source.contents = new_package_json_source;
```
This matches the behavior in `updatePackageJSONAndInstall.zig` line 269.
## Test Plan
Added comprehensive regression tests in
`test/cli/update_interactive_install.test.ts`:
- ✅ Verifies that `package.json` is updated
- ✅ Verifies that `node_modules` is updated (this was failing before the
fix)
- ✅ Tests both normal update and `--latest` flag
- ✅ Compares installed version to confirm packages were actually
installed
Run tests with:
```bash
bun bd test test/cli/update_interactive_install.test.ts
```
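Roughly, those tests assert something like the following (directory and package name are illustrative, not the real fixture):
```ts
import { readFileSync } from "node:fs";
import { join } from "node:path";

const dir = "/tmp/update-interactive-fixture"; // hypothetical test project
const pkg = JSON.parse(readFileSync(join(dir, "package.json"), "utf8"));
const installed = JSON.parse(
  readFileSync(join(dir, "node_modules", "is-odd", "package.json"), "utf8"), // illustrative dependency
);
// Before the fix only package.json changed; now node_modules matches it too.
console.log(pkg.dependencies["is-odd"], "->", installed.version);
```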
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
Adds a `subscriptions` getter to `ServerWebSocket` that returns an array
of all topics the WebSocket is currently subscribed to.
## Implementation
- Added `getTopicsCount()` and `iterateTopics()` helpers to uWS
WebSocket
- Implemented C++ function `uws_ws_get_topics_as_js_array` that:
- Uses `JSC::MarkedArgumentBuffer` to protect values from GC
- Constructs JSArray directly in C++ for efficiency
- Uses template pattern for SSL/TCP variants
- Properly handles iterator locks with explicit scopes
- Exposed as `subscriptions` getter property on ServerWebSocket
- Returns empty array when WebSocket is closed (not null)
## API
```typescript
const server = Bun.serve({
websocket: {
open(ws) {
ws.subscribe("chat");
ws.subscribe("notifications");
console.log(ws.subscriptions); // ["chat", "notifications"]
ws.unsubscribe("chat");
console.log(ws.subscriptions); // ["notifications"]
}
}
});
```
## Test Coverage
Added 5 comprehensive test cases covering:
- Basic subscription/unsubscription flow
- All subscriptions removed
- Behavior after WebSocket close
- Duplicate subscriptions (should only appear once)
- Multiple subscribe/unsubscribe cycles
All tests pass with 24 assertions.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
Allows optional peers to resolve to a package if possible.
Optional peers aren't auto-installed, but they should still be given a
chance to resolve. If they're always left unresolved, it's possible for
multiple dependencies on the same package to result in different peer
resolutions when they should be the same. For example, this bug
could cause monorepos using elysia to have corrupt node_modules because
there might be more than one copy of elysia in `node_modules/.bun` (or
more than the expected number of copies).
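For reference, an optional peer is declared in a dependent package's manifest like this (shown as a TypeScript object for illustration):
```ts
// package.json fragment of a hypothetical plugin that optionally peers on elysia:
const manifest = {
  name: "some-elysia-plugin",
  peerDependencies: { elysia: ">=1.0.0" },
  peerDependenciesMeta: { elysia: { optional: true } }, // not auto-installed, but should resolve consistently
};
```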
fixes #23725
most likely fixes #23895
fixes ENG-21411
### How did you verify your code works?
Added a test for optional peers and non-optional peers that would
previously trigger this bug.
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Improved resolution of optional peer dependencies during isolated
installations, with better propagation across package hierarchies.
* **Tests**
* Added comprehensive test suite covering optional peer dependency
scenarios in isolated workspaces.
* Added test fixtures for packages with peer and optional peer
dependencies.
* Enhanced lockfile migration test verification using snapshot-based
assertions.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
### What does this PR do?
When `napi_create_external_buffer` receives empty input, the returned
buffer should be detached.
This fixes the remaining tests in `ref-napi` other than three that use a
few uv symbols.
<img width="329" height="159" alt="Screenshot 2025-11-01 at 8 38 01 PM"
src="https://github.com/user-attachments/assets/2c75f937-79c5-467a-bde3-44e45e05d9a0"
/>
### How did you verify your code works?
Added tests for correct values from `napi_get_buffer_info`,
`napi_get_arraybuffer_info`, and `napi_is_detached_arraybuffer` when
given an empty buffer from `napi_create_external_buffer`
---------
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
Adds comprehensive documentation explaining how Bun's event loop works,
including task draining, microtasks, process.nextTick, and I/O polling
integration.
## What does this document?
- **Task draining algorithm**: Shows the exact flow for processing each
task (run → release weak refs → drain microtasks → deferred tasks)
- **Process.nextTick ordering**: Explains batching behavior - all
nextTick callbacks in current batch run, then microtasks drain
- **Microtask integration**: How JavaScriptCore's microtask queue and
Bun's nextTick queue interact
- **I/O polling**: How uSockets epoll/kqueue events integrate with the
event loop
- **Timer ordering**: Why setImmediate runs before setTimeout
- **Enter/Exit mechanism**: How the counter prevents excessive microtask
draining
## Visual aids
Includes ASCII flowcharts showing:
- Main tick flow
- autoTick flow (with I/O polling)
- Per-task draining sequence
## Code references
All explanations include specific file paths and line numbers for
verification:
- `src/bun.js/event_loop/Task.zig`
- `src/bun.js/event_loop.zig`
- `src/bun.js/bindings/ZigGlobalObject.cpp`
- `src/js/builtins/ProcessObjectInternals.ts`
- `packages/bun-usockets/src/eventing/epoll_kqueue.c`
## Examples
Includes JavaScript examples demonstrating:
- nextTick vs Promise ordering
- Batching behavior when nextTick callbacks schedule more nextTicks
- setImmediate vs setTimeout ordering
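For example, the nextTick-vs-promise ordering covered there looks like this (expected output assumes Node-compatible semantics):
```ts
process.nextTick(() => console.log("nextTick 1"));
Promise.resolve().then(() => console.log("promise 1"));
process.nextTick(() => console.log("nextTick 2"));
Promise.resolve().then(() => console.log("promise 2"));
// Expected order: nextTick 1, nextTick 2, promise 1, promise 2
// (the whole nextTick batch drains before the promise microtasks run).
```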
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
### How did you verify your code works?
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Bug Fixes**
* Improved HTTP connection handling during write failures to ensure more
reliable timeout behavior and connection state management.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
### What does this PR do?
If the input was empty, `ArrayBuffer::createFromBytes` would create a
buffer that `JSUint8Array::create` would see as detached, so it would
throw an exception. This would likely cause crashes in `node-addon-api`
because finalize data is freed if `napi_create_external_buffer` fails,
and we have already set up the finalizer.
Example of creating empty buffer:
a7f62a4caa/src/binding.cc (L687)
fixes #6737
fixes #10965
fixes #12331
fixes #12937
fixes #13622
most likely fixes #14822
### How did you verify your code works?
Manually and added tests.
## Summary
Removes the `MemoryReportingAllocator` wrapper and simplifies
`fetch.zig` to use `bun.default_allocator` directly. The
`MemoryReportingAllocator` was wrapping `bun.default_allocator` to track
memory usage for reporting to the VM, but this added unnecessary
complexity and indirection without meaningful benefit.
## Changes
**Deleted:**
- `src/allocators/MemoryReportingAllocator.zig` (96 lines)
**Modified `src/bun.js/webcore/fetch.zig`:**
- Removed `memory_reporter: *bun.MemoryReportingAllocator` field from
`FetchTasklet` struct
- Removed `memory_reporter: *bun.MemoryReportingAllocator` field from
`FetchOptions` struct
- Replaced all `this.memory_reporter.allocator()` calls with
`bun.default_allocator`
- Removed all `this.memory_reporter.discard()` calls (no longer needed)
- Simplified `fetch()` function by removing memory reporter
allocation/wrapping/cleanup code
- Updated `deinit()` and `clearData()` to use `bun.default_allocator`
directly
**Cleanup:**
- Removed `MemoryReportingAllocator` export from `src/allocators.zig`
- Removed `MemoryReportingAllocator` export from `src/bun.zig`
- Removed `bun.MemoryReportingAllocator.isInstance()` check from
`src/safety/alloc.zig`
## Testing
- ✅ Builds successfully with `bun bd`
- All fetch operations now use `bun.default_allocator` directly
## Impact
- **Net -116 lines** of code
- Eliminates allocator wrapper overhead in fetch operations
- Simplifies memory management code
- No functional changes to fetch behavior
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
### What does this PR do?
Calling `loadConfigPath` could allow bunfig to be loaded more than once.
### How did you verify your code works?
manually
---------
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
## Summary
This PR makes `bun list` an alias for `bun pm ls`, allowing users to
list their dependency tree with a shorter command.
## Changes
- Updated `src/cli.zig` to route `list` command to
`PackageManagerCommand` instead of `ReservedCommand`
- Modified `src/cli/package_manager_command.zig` to detect when `bun
list` is invoked directly and treat it as `ls`
- Updated help text in `bun pm --help` to show both `bun list` and `bun
pm ls` as valid options
## Implementation Details
The implementation follows the same pattern used for `bun whoami`, which
is also a direct alias to a pm subcommand. When `bun list` is detected,
it's internally converted to the `ls` subcommand.
## Testing
Tested locally:
- ✅ `bun list` shows the dependency tree
- ✅ `bun list --all` works correctly with the `--all` flag
- ✅ `bun pm ls` continues to work (backward compatible)
## Test Output
```bash
$ bun list
/tmp/test-bun-list node_modules (3)
└── react@18.3.1
$ bun list --all
/tmp/test-bun-list node_modules
├── js-tokens@4.0.0
├── loose-envify@1.4.0
└── react@18.3.1
$ bun pm ls
/tmp/test-bun-list node_modules (3)
└── react@18.3.1
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Fixes CPU profiler generating invalid timestamps that Chrome DevTools
couldn't parse (though VSCode's profiler viewer accepted them).
## The Problem
CPU profiles generated by `--cpu-prof` had timestamps that were either:
1. Negative (in the original broken profile from the gist)
2. Truncated/corrupted (after initial timestamp calculation fix)
Example from the broken profile:
```json
{
"startTime": -822663297,
"endTime": -804820609
}
```
After initial fix, timestamps were positive but still wrong:
```json
{
"startTime": 1573519100, // Should be ~1761784720948727
"endTime": 1573849434
}
```
## Root Cause
**Primary Issue**: `WTF::JSON::Object::setInteger()` has precision
issues with large values (> 2^31). When setting timestamps like
`1761784720948727` (microseconds since Unix epoch - 16 digits), the
method was truncating/corrupting them.
**Secondary Issue**: The timestamp calculation logic needed
clarification - now explicitly uses the earliest sample's wall clock
time as startTime and calculates a consistent wallClockOffset.
## The Fix
### src/bun.js/bindings/BunCPUProfiler.cpp
Changed from `setInteger()` to `setDouble()` for timestamp
serialization:
```cpp
// Before (broken):
json->setInteger("startTime"_s, static_cast<long long>(startTime));
json->setInteger("endTime"_s, static_cast<long long>(endTime));
// After (fixed):
json->setDouble("startTime"_s, startTime);
json->setDouble("endTime"_s, endTime);
```
JSON `Number` type can precisely represent integers up to 2^53 (~9
quadrillion), which is far more than needed for microsecond timestamps
(~10^15 for current dates).
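A quick way to sanity-check that claim (illustrative, runnable anywhere):
```ts
const ts = 1_761_784_720_948_727; // ~16-digit microsecond timestamp
console.log(Number.isSafeInteger(ts)); // true: well below 2^53
console.log(ts > 2 ** 31);             // true: why a 32-bit integer path truncates it
```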
Also clarified the timestamp calculation to use `wallClockStart`
directly as the profile's `startTime` and calculate a `wallClockOffset`
for converting stopwatch times to wall clock times.
### test/cli/run/cpu-prof.test.ts
Added validation that timestamps are:
- Positive
- In microseconds (> 1000000000000000, < 3000000000000000)
- Within valid Unix epoch range
## Testing
```bash
bun bd test test/cli/run/cpu-prof.test.ts
```
All tests pass ✅
Generated profile now has correct timestamps:
```json
{
"startTime": 1761784720948727.2,
"endTime": 1761784721305814
}
```
## Why VSCode Worked But Chrome DevTools Didn't
- **VSCode**: Only cares about relative timing (duration = endTime -
startTime), doesn't validate absolute timestamp ranges
- **Chrome DevTools**: Expects timestamps in microseconds since Unix
epoch (positive, ~16 digits), fails validation when timestamps are
negative, too small, or out of valid range
## References
- Gist with CPU profile format documentation:
https://gist.github.com/Jarred-Sumner/2c12da481845e20ce6a6175ee8b05a3e
- Chrome DevTools Protocol - Profiler:
https://chromedevtools.github.io/devtools-protocol/tot/Profiler/
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>