Compare commits


13 Commits

Author SHA1 Message Date
Claude Bot
df2cc28125 fix(console): improve Error.cause and AggregateError logging
- Add [cause] label when printing Error.cause property to show the
  relationship between parent and cause errors
- Print AggregateError's own message and stack trace before printing
  its errors array
- Add [errors] label before AggregateError's errors to clearly show
  the aggregated errors section
- Handle AggregateError's cause property properly

This makes error output more similar to Node.js, where the relationship
between errors is clearly labeled with [cause] and [errors] markers.

Fixes #1352

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 01:04:56 +00:00
robobun
11aedbe402 fix(fs.watch): emit 'change' events for files in watched directories on Linux (#26009)
## Summary
- Fixes #3657 - `fs.watch` on directory doesn't emit `change` events for
files created after watch starts

When watching a directory with `fs.watch`, files created after the watch
was established would only emit a 'rename' event on creation, but
subsequent modifications would not emit 'change' events.

## Root Cause

The issue was twofold:
1. `watch_dir_mask` in INotifyWatcher.zig was missing `IN.MODIFY`, so
the inotify system call was not subscribed to file modification events
for watched directories.
2. When directory events were processed in path_watcher.zig, all events
were hardcoded to emit 'rename' instead of properly distinguishing
between file creation/deletion ('rename') and file modification
('change').

## Changes

- Adds `IN.MODIFY` to `watch_dir_mask` to receive modification events
- Adds a `create` flag to `WatchEvent.Op` to track `IN.CREATE` events
- Updates directory event processing to emit 'change' for pure write
events and 'rename' for create/delete/move events
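The classification in the last bullet can be sketched as a small predicate (an illustrative JavaScript model of the patched Zig logic, not the actual code):

```javascript
// Map a merged watch op to an fs.watch event type: create/delete/rename/move
// all surface as "rename"; a pure write (IN.MODIFY) surfaces as "change".
function eventTypeFor(op) {
  if (op.create || op.delete || op.rename || op.move_to) return "rename";
  return "change";
}

eventTypeFor({ create: true });              // new file in the watched dir
eventTypeFor({ write: true });               // subsequent modification
eventTypeFor({ write: true, delete: true }); // mixed ops still report "rename"
```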

## Test plan
- [x] Added regression test `test/regression/issue/3657.test.ts`
- [x] Verified test fails with system Bun (before fix)
- [x] Verified test passes with debug build (after fix)
- [x] Verified manual reproduction from issue now works correctly

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 16:46:20 -08:00
robobun
05df51ff84 fix(runner): filter out non-JS files from node tests (#26092)
## Summary
- `isNodeTest()` was only checking if the path included the node test
directories but not verifying the file was actually a JavaScript file
- This caused `test/js/node/test/parallel/CLAUDE.md` to be incorrectly
treated as a test file
- Added `isJavaScript(path)` check to filter out non-JS files
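The added guard amounts to an extension check; a sketch of the idea (the helper name and extension list here are illustrative, not the runner's actual implementation):

```javascript
// Hypothetical mirror of the isJavaScript(path) guard used by isNodeTest().
const JS_EXTENSIONS = [".js", ".mjs", ".cjs", ".ts", ".mts", ".cts", ".jsx", ".tsx"];

function isJavaScriptPath(path) {
  return JS_EXTENSIONS.some(ext => path.endsWith(ext));
}

isJavaScriptPath("test/js/node/test/parallel/test-assert.js"); // true
isJavaScriptPath("test/js/node/test/parallel/CLAUDE.md");      // false
```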

## Test plan
- [x] Verify CLAUDE.md is no longer picked up as a test file

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 16:28:29 -08:00
robobun
b72af3d329 fix(compile): respect autoloadBunfig: false when execArgv is present (#26017)
## Summary

Fixes #25640

- Fixed bug where compiled binaries with `autoloadBunfig: false` would
still load `bunfig.toml` when `execArgv` was also provided
- The issue was that `Command.init(.AutoCommand)` was called to parse
execArgv, which loaded bunfig before checking the disable flag
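The fixed ordering can be reduced to a guard that consults the compiled graph's flag before any load happens (a sketch with illustrative names, not Bun's actual API):

```javascript
// bunfig.toml is loaded only when it has not already been loaded and the
// compiled binary was not built with autoloadBunfig: false.
function shouldLoadBunfig(alreadyLoaded, graphFlags) {
  return !alreadyLoaded && !graphFlags.disableAutoloadBunfig;
}

shouldLoadBunfig(false, { disableAutoloadBunfig: true });  // false: stays disabled
shouldLoadBunfig(false, { disableAutoloadBunfig: false }); // true: normal autoload
```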

## Test plan

- [x] Added tests for `autoloadBunfig: false` with `execArgv` in
`test/bundler/bundler_compile_autoload.test.ts`
- [x] Verified tests pass with debug build: `bun bd test
test/bundler/bundler_compile_autoload.test.ts`
- [x] Verified tests fail with system bun (demonstrates fix works):
`USE_SYSTEM_BUN=1 bun test test/bundler/bundler_compile_autoload.test.ts
-t "AutoloadBunfigDisabledWithExecArgv"`
- [x] All existing autoload tests still pass (22 tests total)

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 16:08:49 -08:00
robobun
f27c6768ce fix(bundler): include lazy chunks in frontend.files for compiled fullstack builds (#26024)
## Summary

- Fixed lazy-loaded chunks from dynamic imports not appearing in
`frontend.files` when using `--splitting` with `--compile` in fullstack
builds
- Updated `computeChunks.zig` to mark non-entry-point chunks as browser
chunks when they contain browser-targeted files
- Updated `HTMLImportManifest.zig` to include browser chunks from server
builds in the files manifest

Fixes #25628
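The manifest inclusion rule can be modeled as a predicate (a simplified sketch of the described behavior; names are illustrative):

```javascript
// Include a chunk in an HTML entry's frontend.files when its entry bits
// intersect the entry's, or when there is a single HTML import and the chunk
// is a browser chunk emitted by the server build (a lazy dynamic-import chunk).
function includeInManifest(chunk, entryBits, htmlImportCount) {
  const intersects = chunk.entryBits.some(bit => entryBits.includes(bit));
  return intersects ||
    (htmlImportCount === 1 && chunk.isBrowserChunkFromServerBuild);
}

const lazyChunk = { entryBits: [], isBrowserChunkFromServerBuild: true };
includeInManifest(lazyChunk, [0], 1); // true: lazy chunk lands in the manifest
includeInManifest(lazyChunk, [0], 2); // false: ambiguous with multiple HTML imports
```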

## Test plan

- [ ] Added regression test `test/regression/issue/25628.test.ts` that
verifies lazy chunks appear in `frontend.files`
- [ ] Manually verified: system bun reports `CHUNK_COUNT:1` (bug), debug
bun reports `CHUNK_COUNT:2` (fix)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 16:08:06 -08:00
robobun
c57d0f73b4 fix(css): preserve logical border-radius properties (#26006)
## Summary
- CSS logical border-radius properties (`border-start-start-radius`,
`border-start-end-radius`, `border-end-end-radius`,
`border-end-start-radius`) were being silently dropped when processed by
the CSS bundler
- The bug was in `src/css/properties/border_radius.zig` where
`VendorPrefix{}` (all fields false) was used instead of `VendorPrefix{
.none = true }` when computing prefixes for logical properties
- This caused the properties to be dropped by a later `isEmpty()` check
since an empty prefix struct was returned
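The flag mistake can be illustrated with a plain object model (a sketch; field names follow the description above, not the Zig source):

```javascript
// VendorPrefix{} with all fields false is "empty", so a later isEmpty() check
// drops the declaration; VendorPrefix{ .none = true } means "unprefixed".
function isEmpty(prefix) {
  return !Object.values(prefix).some(Boolean);
}

const buggy = { none: false, webkit: false, moz: false }; // VendorPrefix{}
const fixed = { none: true, webkit: false, moz: false };  // VendorPrefix{ .none = true }

isEmpty(buggy); // true: property silently dropped
isEmpty(fixed); // false: property preserved
```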

## Test plan
- [x] Added regression test `test/regression/issue/25785.test.ts`
- [x] Verified test fails with system Bun (`USE_SYSTEM_BUN=1 bun test`)
- [x] Verified test passes with fixed bun-debug (`bun bd test`)

Fixes #25785

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-01-14 13:34:31 -08:00
robobun
6a27a25e5b fix(debugger): retroactively report tests when TestReporter.enable is called (#25986)
## Summary
- Fixes #25972: TestReporter domain events not firing when debugger
connects after test discovery

When a debugger client connects and enables the TestReporter domain
after tests have been discovered (e.g., using `--inspect` instead of
`--inspect-wait`), the `TestReporter.found`, `TestReporter.start`, and
`TestReporter.end` events would not fire. This is because tests
discovered without an enabled debugger have `test_id_for_debugger = 0`,
and the event emission code checks for non-zero IDs.

The fix retroactively assigns test IDs and reports discovered tests when
`TestReporter.enable` is called:

1. Check if there's an active test file in collection or execution phase
2. Iterate through the test tree (DescribeScopes and test entries)
3. Assign unique `test_id_for_debugger` values to each test/describe
4. Send `TestReporter.found` events for each discovered test

## Test plan
- [ ] Verify IDE integrations can now receive test telemetry when
connecting after test discovery
- [ ] Ensure existing `--inspect-wait` behavior continues to work
(debugger enabled before discovery)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 13:32:51 -08:00
robobun
2b86ab0cd3 fix(shell): implement long listing format for ls -l builtin (#25991)
## Summary
- Implements the `-l` (long listing) flag functionality for the shell
`ls` builtin
- The flag was being parsed but never used, so the output was identical to
the short format
- Now displays proper long listing format: file type, permissions, hard
link count, UID, GID, size, modification time, and filename

## Test plan
- [x] Added regression test in `test/regression/issue/25831.test.ts`
- [x] Test passes with debug build: `bun bd test
test/regression/issue/25831.test.ts`
- [x] Test fails with system bun (confirming the bug exists):
`USE_SYSTEM_BUN=1 bun test test/regression/issue/25831.test.ts`

Example output with fix:
```
$ bun -e 'import { $ } from "bun"; console.log(await $`ls -l`.text())'
drwxr-xr-x   2  1000  1000     4096 Jan 12 15:30 subdir
-rw-r--r--   1  1000  1000       11 Jan 12 15:30 file.txt
```

Fixes #25831
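The permission column can be sketched by decoding POSIX mode bits (a simplified model; it ignores setuid/setgid/sticky bits and file types other than directories):

```javascript
// Render a POSIX mode (e.g. 0o40755 for a directory) as a drwxr-xr-x string.
function modeString(mode) {
  const type = (mode & 0o170000) === 0o040000 ? "d" : "-";
  const bits = ["r", "w", "x"];
  let out = type;
  // Walk the nine permission bits from owner-read (bit 8) down to other-exec.
  for (let shift = 8; shift >= 0; shift--) {
    out += mode & (1 << shift) ? bits[(8 - shift) % 3] : "-";
  }
  return out;
}

modeString(0o40755);  // "drwxr-xr-x"
modeString(0o100644); // "-rw-r--r--"
```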

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 13:31:06 -08:00
robobun
6e6896510a fix(cli): prevent --version/--help interception in standalone executables with compile-exec-argv (#26083)
## Summary

Fixes https://github.com/oven-sh/bun/issues/26082

- Fixes a bug where standalone executables compiled with
`--compile-exec-argv` would intercept `--version`, `-v`, `--help`, and
`-h` flags before user code could handle them
- CLI applications using libraries like `commander` can now properly
implement their own version and help commands

## Root Cause

When `--compile-exec-argv` is used, `Command.init` was being called with
`.AutoCommand`, which parses ALL arguments (including user arguments).
The `Arguments.parse` function intercepts `--version`/`--help` flags for
`AutoCommand`, preventing them from reaching user code.

## Fix

Temporarily set `bun.argv` to only include the executable name +
embedded exec argv options when calling `Command.init`. This ensures:
1. Bun's embedded options (like `--smol`, `--use-system-ca`) are
properly parsed
2. User arguments (including `--version`/`--help`) are NOT intercepted
by Bun's parser
3. User arguments are properly passed through to user code
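The argv split can be modeled as follows (a simplified sketch; names are illustrative):

```javascript
// Bun's parser only ever sees [exe, ...embeddedExecArgv]; user arguments
// (including --version/--help) are passed through to user code untouched.
function splitArgv(argv, embeddedExecArgv) {
  const [exe, ...userArgs] = argv;
  return {
    forBunParser: [exe, ...embeddedExecArgv], // parsed for flags like --smol
    forUserCode: userArgs,                    // never intercepted by Bun
  };
}

const { forBunParser, forUserCode } = splitArgv(
  ["./mycli", "--version"],
  ["--smol"],
);
```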

## Test plan

- [x] Added tests for `--version`, `-v`, `--help`, and `-h` flags in
`compile-argv.test.ts`
- [x] Verified tests fail with `USE_SYSTEM_BUN=1` (proving the bug
exists)
- [x] Verified tests pass with debug build
- [x] Verified existing compile-argv tests still pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 13:10:53 -08:00
robobun
5a71ead8a2 Add CLAUDE.md for Node.js compatibility tests (#26084)
## Summary
- Adds a CLAUDE.md file to `test/js/node/test/parallel/` documenting
that these are official Node.js tests
- Explains that these tests should not be modified since they come from
the Node.js repository
- Documents how to run these tests with debug builds (`bun bd
<file-path>` instead of `bun bd test`)

## Test plan
- [x] Verified file was created correctly

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 13:07:38 -08:00
robobun
a9b5f5cbd1 fix(sql): prevent hang in sequential MySQL transactions with returned array queries (#26048)
## Summary

- Fix a hang in sequential MySQL transactions where an INSERT is awaited
followed by a SELECT returned in an array
- The issue occurred because `handleResultSetOK`'s defer block only
called `queue.advance()` without flushing, causing queries added during
the JS callback to not be properly sent
- Changed to call `flushQueue()` instead of just `advance()` to ensure
data is actually sent to the server

Fixes #26030
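The difference between advancing and flushing can be sketched with a toy queue (illustrative; names mirror the description above, not the real Zig API):

```javascript
// advance() only moves the head pointer; flush() actually writes pending
// queries to the socket. If a JS callback enqueues a query in between, only
// flush() guarantees it is sent; otherwise the connection sits idle (hang).
class QueryQueue {
  constructor(socketWrite) { this.pending = []; this.write = socketWrite; }
  enqueue(q) { this.pending.push(q); }
  advance() { /* moves an internal cursor only; nothing is written */ }
  flush()   { while (this.pending.length) this.write(this.pending.shift()); }
}

const sent = [];
const queue = new QueryQueue(q => sent.push(q));
queue.enqueue("SELECT 1"); // added from a JS result callback
queue.advance();           // bug: query stays pending, client waits forever
queue.flush();             // fix: query is written to the server
```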

## Test plan

- Added regression test `test/regression/issue/26030.test.ts` with three
test cases:
  - `Sequential transactions with INSERT and returned SELECT should not
hang`: reproduces the exact pattern from the bug report
  - `Sequential transactions with returned array of multiple queries`: tests
returning multiple queries in an array
  - `Many sequential transactions with awaited INSERT and returned SELECT`:
stress test with 5 sequential transactions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 12:53:04 -08:00
robobun
7333500df8 fix(bundler): rename named function expressions when shadowing outer symbol (#26027)
## Summary
- Fixed a bug where named function expressions were not renamed when
their name shadowed an outer symbol that's referenced inside the
function body
- This caused infinite recursion at runtime when namespace imports were
inlined
- Particularly affected Svelte 5 apps in dev mode

## Test plan
- [x] Added regression test that reproduces the issue
- [x] Verified test fails with system bun and passes with fix
- [x] Ran bundler tests (bundler_regressions, bundler_naming,
bundler_edgecase, bundler_minify) - all pass

## Root cause
The bundler was skipping `function_args` scopes when renaming symbols.
This meant named function expression names (which are declared in the
function_args scope) were never considered for renaming when they
collided with outer symbols.

For example, this code:
```javascript
import * as $ from './lib';
$.doSomething(function get() {
  return $.get(123);  // Should call outer get
});
```

Would be bundled as:
```javascript
function get(x) { return x * 2; } // from lib
doSomething(function get() {
  return get(123);  // Calls itself - infinite recursion!
});
```

Instead of:
```javascript
function get(x) { return x * 2; }
doSomething(function get2() {  // Renamed to avoid collision
  return get(123);  // Correctly calls outer get
});
```

Fixes #25648
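The underlying JavaScript scoping rule can be demonstrated directly: a named function expression's name is bound inside its own body, so an un-renamed collision shadows the outer symbol.

```javascript
function get(x) { return x * 2; } // outer symbol

// Inside a named function expression, `get` resolves to the expression itself,
// shadowing the outer `get`. This is why the bundler must rename on collision.
const fn = function get() { return get; };

fn() === fn;  // true: inner `get` is the function expression itself
fn() === get; // false: the outer `get` is shadowed inside the body
```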

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-01-14 12:52:41 -08:00
robobun
e6733333f0 fix(sql): MySQL VARCHAR with binary collations returns string instead of Buffer (#26064)
## Summary

- Fixed MySQL VARCHAR/CHAR/TEXT columns with binary collations (like
`utf8mb4_bin`) being incorrectly returned as `Buffer` instead of
`string`
- The fix checks for `character_set == 63` (binary collation) in
addition to the BINARY flag to properly distinguish true binary types

Fixes #26063

## Root Cause

PR #26011 introduced a fix for binary column handling that checked
`column.flags.BINARY` to determine if data should be returned as
`Buffer`. However, MySQL sets the BINARY flag on VARCHAR/CHAR/TEXT
columns with binary collations (like `utf8mb4_bin`) even though they
should return strings.

The proper way to detect true binary types (BINARY, VARBINARY, BLOB) is
to check if `character_set == 63` (the "binary" collation), not just the
BINARY flag.

## Changes

1. **Text Protocol** (`ResultSet.zig:143-148`): Updated binary check to
`column.flags.BINARY and column.character_set == 63`
2. **Binary Protocol** (`DecodeBinaryValue.zig:154-156`): Added
`character_set` parameter and updated binary check
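The detection predicate can be sketched as follows (an illustrative model; 63 is MySQL's "binary" collation ID per the description above, and 46 is assumed here as the `utf8mb4_bin` collation ID):

```javascript
// A column holds true binary data only when the BINARY flag is set AND its
// character set is 63 (the "binary" collation). A VARCHAR with utf8mb4_bin
// sets the BINARY flag but keeps a text character set, so it stays a string.
function isTrueBinaryColumn(column) {
  return column.flags.BINARY && column.characterSet === 63;
}

isTrueBinaryColumn({ flags: { BINARY: true }, characterSet: 63 }); // VARBINARY: true
isTrueBinaryColumn({ flags: { BINARY: true }, characterSet: 46 }); // utf8mb4_bin VARCHAR: false
```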

## Test plan

- [ ] Added regression test `test/regression/issue/26063.test.ts` that
tests VARCHAR, CHAR, and TEXT columns with `utf8mb4_bin` collation
return strings
- [ ] Test verifies that true BINARY/VARBINARY/BLOB columns still return
Buffers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-14 12:50:36 -08:00
33 changed files with 2079 additions and 65 deletions

View File

@@ -1585,6 +1585,9 @@ function isNodeTest(path) {
if (isCI && isMacOS && isX64) {
return false;
}
if (!isJavaScript(path)) {
return false;
}
const unixPath = path.replaceAll(sep, "/");
return (
unixPath.includes("js/node/test/parallel/") ||

View File

@@ -1159,7 +1159,9 @@ pub const StandaloneModuleGraph = struct {
return .success;
}
pub fn fromExecutable(allocator: std.mem.Allocator) !?StandaloneModuleGraph {
/// Loads the standalone module graph from the executable, allocates it on the heap,
/// sets it globally, and returns the pointer.
pub fn fromExecutable(allocator: std.mem.Allocator) !?*StandaloneModuleGraph {
if (comptime Environment.isMac) {
const macho_bytes = Macho.getData() orelse return null;
if (macho_bytes.len < @sizeOf(Offsets) + trailer.len) {
@@ -1173,7 +1175,7 @@ pub const StandaloneModuleGraph = struct {
return null;
}
const offsets = std.mem.bytesAsValue(Offsets, macho_bytes_slice).*;
return try StandaloneModuleGraph.fromBytes(allocator, @constCast(macho_bytes), offsets);
return try fromBytesAlloc(allocator, @constCast(macho_bytes), offsets);
}
if (comptime Environment.isWindows) {
@@ -1189,7 +1191,7 @@ pub const StandaloneModuleGraph = struct {
return null;
}
const offsets = std.mem.bytesAsValue(Offsets, pe_bytes_slice).*;
return try StandaloneModuleGraph.fromBytes(allocator, @constCast(pe_bytes), offsets);
return try fromBytesAlloc(allocator, @constCast(pe_bytes), offsets);
}
// Do not invoke libuv here.
@@ -1284,7 +1286,15 @@ pub const StandaloneModuleGraph = struct {
}
}
return try StandaloneModuleGraph.fromBytes(allocator, to_read, offsets);
return try fromBytesAlloc(allocator, to_read, offsets);
}
/// Allocates a StandaloneModuleGraph on the heap, populates it from bytes, sets it globally, and returns the pointer.
fn fromBytesAlloc(allocator: std.mem.Allocator, raw_bytes: []u8, offsets: Offsets) !*StandaloneModuleGraph {
const graph_ptr = try allocator.create(StandaloneModuleGraph);
graph_ptr.* = try StandaloneModuleGraph.fromBytes(allocator, raw_bytes, offsets);
graph_ptr.set();
return graph_ptr;
}
/// heuristic: `bun build --compile` won't be supported if the name is "bun", "bunx", or "node".

View File

@@ -166,12 +166,7 @@ pub const WatchEvent = struct {
pub fn merge(this: *WatchEvent, other: WatchEvent) void {
this.name_len += other.name_len;
this.op = Op{
.delete = this.op.delete or other.op.delete,
.metadata = this.op.metadata or other.op.metadata,
.rename = this.op.rename or other.op.rename,
.write = this.op.write or other.op.write,
};
this.op = Op.merge(this.op, other.op);
}
pub const Op = packed struct(u8) {
@@ -180,7 +175,8 @@ pub const WatchEvent = struct {
rename: bool = false,
write: bool = false,
move_to: bool = false,
_padding: u3 = 0,
create: bool = false,
_padding: u2 = 0,
pub fn merge(before: Op, after: Op) Op {
return .{
@@ -189,6 +185,7 @@ pub const WatchEvent = struct {
.metadata = before.metadata or after.metadata,
.rename = before.rename or after.rename,
.move_to = before.move_to or after.move_to,
.create = before.create or after.create,
};
}

View File

@@ -13,21 +13,18 @@ pub const Run = struct {
var run: Run = undefined;
pub fn bootStandalone(ctx: Command.Context, entry_path: string, graph: bun.StandaloneModuleGraph) !void {
pub fn bootStandalone(ctx: Command.Context, entry_path: string, graph_ptr: *bun.StandaloneModuleGraph) !void {
jsc.markBinding(@src());
bun.jsc.initialize(false);
bun.analytics.Features.standalone_executable += 1;
const graph_ptr = try bun.default_allocator.create(bun.StandaloneModuleGraph);
graph_ptr.* = graph;
graph_ptr.set();
js_ast.Expr.Data.Store.create();
js_ast.Stmt.Data.Store.create();
const arena = Arena.init();
// Load bunfig.toml unless disabled by compile flags
if (!ctx.debug.loaded_bunfig and !graph.flags.disable_autoload_bunfig) {
// Note: config loading with execArgv is handled earlier in cli.zig via loadConfig
if (!ctx.debug.loaded_bunfig and !graph_ptr.flags.disable_autoload_bunfig) {
try bun.cli.Arguments.loadConfigPath(ctx.allocator, true, "bunfig.toml", ctx, .RunCommand);
}
@@ -87,7 +84,7 @@ pub const Run = struct {
// If .env loading is disabled, only load process env vars
// Otherwise, load all .env files
if (graph.flags.disable_default_env_files) {
if (graph_ptr.flags.disable_default_env_files) {
b.options.env.behavior = .disable;
} else {
b.options.env.behavior = .load_all_without_inlining;
@@ -95,8 +92,8 @@ pub const Run = struct {
// Control loading of tsconfig.json and package.json at runtime
// By default, these are disabled for standalone executables
b.resolver.opts.load_tsconfig_json = !graph.flags.disable_autoload_tsconfig;
b.resolver.opts.load_package_json = !graph.flags.disable_autoload_package_json;
b.resolver.opts.load_tsconfig_json = !graph_ptr.flags.disable_autoload_tsconfig;
b.resolver.opts.load_package_json = !graph_ptr.flags.disable_autoload_package_json;
b.configureDefines() catch {
failWithBuildError(vm);

View File

@@ -316,6 +316,7 @@ pub const TestReporterAgent = struct {
pub const Handle = opaque {
extern "c" fn Bun__TestReporterAgentReportTestFound(agent: *Handle, callFrame: *jsc.CallFrame, testId: c_int, name: *bun.String, item_type: TestType, parentId: c_int) void;
extern "c" fn Bun__TestReporterAgentReportTestFoundWithLocation(agent: *Handle, testId: c_int, name: *bun.String, item_type: TestType, parentId: c_int, sourceURL: *bun.String, line: c_int) void;
extern "c" fn Bun__TestReporterAgentReportTestStart(agent: *Handle, testId: c_int) void;
extern "c" fn Bun__TestReporterAgentReportTestEnd(agent: *Handle, testId: c_int, bunTestStatus: TestStatus, elapsed: f64) void;
@@ -323,6 +324,10 @@ pub const TestReporterAgent = struct {
Bun__TestReporterAgentReportTestFound(this, callFrame, testId, name, item_type, parentId);
}
pub fn reportTestFoundWithLocation(this: *Handle, testId: i32, name: *bun.String, item_type: TestType, parentId: i32, sourceURL: *bun.String, line: i32) void {
Bun__TestReporterAgentReportTestFoundWithLocation(this, testId, name, item_type, parentId, sourceURL, line);
}
pub fn reportTestStart(this: *Handle, testId: c_int) void {
Bun__TestReporterAgentReportTestStart(this, testId);
}
@@ -335,8 +340,88 @@ pub const TestReporterAgent = struct {
if (VirtualMachine.get().debugger) |*debugger| {
debug("enable", .{});
debugger.test_reporter_agent.handle = agent;
// Retroactively report any tests that were already discovered before the debugger connected
retroactivelyReportDiscoveredTests(agent);
}
}
/// When TestReporter.enable is called after test collection has started/finished,
/// we need to retroactively assign test IDs and report discovered tests.
fn retroactivelyReportDiscoveredTests(agent: *Handle) void {
const Jest = jsc.Jest.Jest;
const runner = Jest.runner orelse return;
const active_file = runner.bun_test_root.active_file.get() orelse return;
// Only report if we're in collection or execution phase (tests have been discovered)
switch (active_file.phase) {
.collection, .execution => {},
.done => return,
}
// Get the file path for source location info
const file_path = runner.files.get(active_file.file_id).source.path.text;
var source_url = bun.String.init(file_path);
// Track the maximum ID we assign
var max_id: i32 = 0;
// Recursively report all discovered tests starting from root scope
const root_scope = active_file.collection.root_scope;
retroactivelyReportScope(agent, root_scope, -1, &max_id, &source_url);
debug("retroactively reported {} tests", .{max_id});
}
fn retroactivelyReportScope(agent: *Handle, scope: *bun_test.DescribeScope, parent_id: i32, max_id: *i32, source_url: *bun.String) void {
for (scope.entries.items) |*entry| {
switch (entry.*) {
.describe => |describe| {
// Only report and assign ID if not already assigned
if (describe.base.test_id_for_debugger == 0) {
max_id.* += 1;
const test_id = max_id.*;
// Assign the ID so start/end events will fire during execution
describe.base.test_id_for_debugger = test_id;
var name = bun.String.init(describe.base.name orelse "(unnamed)");
agent.reportTestFoundWithLocation(
test_id,
&name,
.describe,
parent_id,
source_url,
@intCast(describe.base.line_no),
);
// Recursively report children with this describe as parent
retroactivelyReportScope(agent, describe, test_id, max_id, source_url);
} else {
// Already has ID, just recurse with existing ID as parent
retroactivelyReportScope(agent, describe, describe.base.test_id_for_debugger, max_id, source_url);
}
},
.test_callback => |test_entry| {
// Only report and assign ID if not already assigned
if (test_entry.base.test_id_for_debugger == 0) {
max_id.* += 1;
const test_id = max_id.*;
// Assign the ID so start/end events will fire during execution
test_entry.base.test_id_for_debugger = test_id;
var name = bun.String.init(test_entry.base.name orelse "(unnamed)");
agent.reportTestFoundWithLocation(
test_id,
&name,
.@"test",
parent_id,
source_url,
@intCast(test_entry.base.line_no),
);
}
},
}
}
}
const bun_test = jsc.Jest.bun_test;
pub export fn Bun__TestReporterAgentDisable(_: *Handle) void {
if (VirtualMachine.get().debugger) |*debugger| {
debug("disable", .{});

View File

@@ -2409,6 +2409,22 @@ pub fn printErrorlikeObject(
}
if (value.isAggregateError(this.global)) {
// First, print the AggregateError itself (name, message, stack, cause)
this.printAggregateErrorInstance(
value,
exception,
exception_list,
formatter,
Writer,
writer,
allow_ansi_color,
allow_side_effects,
) catch return;
// Print the [errors] label
writer.writeAll(comptime Output.prettyFmt("\n<cyan>[errors]<r>\n", allow_ansi_color)) catch return;
// Now iterate through the errors array
const AggregateErrorIterator = struct {
writer: Writer,
current_exception_list: ?*ExceptionList = null,
@@ -2517,6 +2533,91 @@ fn printErrorFromMaybePrivateData(
return false;
}
/// Prints an AggregateError's name, message, stack trace, and cause.
/// Does NOT print the `errors` property - that should be handled separately.
fn printAggregateErrorInstance(
this: *VirtualMachine,
error_instance: JSValue,
exception: ?*Exception,
exception_list: ?*ExceptionList,
formatter: *ConsoleObject.Formatter,
comptime Writer: type,
writer: Writer,
comptime allow_ansi_color: bool,
comptime allow_side_effects: bool,
) !void {
var exception_holder = ZigException.Holder.init();
var zig_exception = exception_holder.zigException();
defer exception_holder.deinit(this);
defer error_instance.ensureStillAlive();
var source_code_slice: ?ZigString.Slice = null;
defer if (source_code_slice) |slice| slice.deinit();
this.remapZigException(
zig_exception,
error_instance,
exception_list,
&exception_holder.need_to_clear_parser_arena_on_deinit,
&source_code_slice,
formatter.error_display_level != .warn,
);
const prev_had_errors = this.had_errors;
this.had_errors = true;
defer this.had_errors = prev_had_errors;
if (allow_side_effects) {
if (this.debugger) |*debugger| {
debugger.lifecycle_reporter_agent.reportError(zig_exception);
}
}
defer if (allow_side_effects and Output.is_github_action)
printGithubAnnotation(zig_exception);
// Print the error name and message
try this.printErrorNameAndMessage(
zig_exception.name,
zig_exception.message,
!zig_exception.browser_url.isEmpty(),
null,
Writer,
writer,
allow_ansi_color,
formatter.error_display_level,
);
// Print the stack trace
try printStackTrace(@TypeOf(writer), writer, zig_exception.stack, allow_ansi_color);
// Handle cause property (it's not enumerable, so we need to check it explicitly)
if (error_instance.getOwn(this.global, "cause") catch null) |cause| {
if (cause.jsType() == .ErrorInstance) {
try writer.writeAll(comptime Output.prettyFmt("\n<cyan>[cause]<r>\n", allow_ansi_color));
try this.printErrorInstance(.js, cause, exception_list, formatter, Writer, writer, allow_ansi_color, allow_side_effects);
}
}
// Also include any exception info from the wrapper if available
if (exception) |ex| {
if (exception_list) |list| {
var holder = ZigException.Holder.init();
var ex_exception: *ZigException = holder.zigException();
holder.deinit(this);
ex.getStackTrace(this.global, &ex_exception.stack);
if (ex_exception.stack.frames_len > 0) {
if (allow_ansi_color) {
printStackTrace(Writer, writer, ex_exception.stack, true) catch {};
} else {
printStackTrace(Writer, writer, ex_exception.stack, false) catch {};
}
}
ex_exception.addToErrorList(list, this.transpiler.fs.top_level_dir, &this.origin) catch {};
}
}
}
pub fn reportUncaughtException(globalObject: *JSGlobalObject, exception: *Exception) JSValue {
var jsc_vm = globalObject.bunVM();
_ = jsc_vm.uncaughtException(globalObject, exception.value(), false);
@@ -3112,10 +3213,11 @@ fn printErrorInstance(
}
// This is usually unsafe to do, but we are protecting them each time first
var errors_to_append = std.array_list.Managed(JSValue).init(this.allocator);
const LabeledError = struct { err: JSValue, label: ?[]const u8 };
var errors_to_append = std.array_list.Managed(LabeledError).init(this.allocator);
defer {
for (errors_to_append.items) |err| {
err.unprotect();
for (errors_to_append.items) |item| {
item.err.unprotect();
}
errors_to_append.deinit();
}
@@ -3151,11 +3253,12 @@ fn printErrorInstance(
// avoid infinite recursion
!prev_had_errors)
{
if (field.eqlComptime("cause")) {
const is_cause = field.eqlComptime("cause");
if (is_cause) {
saw_cause = true;
}
value.protect();
try errors_to_append.append(value);
try errors_to_append.append(.{ .err = value, .label = if (is_cause) "cause" else null });
} else if (kind.isObject() or kind.isArray() or value.isPrimitive() or kind.isStringLike()) {
var bun_str = bun.String.empty;
defer bun_str.deref();
@@ -3234,7 +3337,7 @@ fn printErrorInstance(
if (try error_instance.getOwn(this.global, "cause")) |cause| {
if (cause.jsType() == .ErrorInstance) {
cause.protect();
try errors_to_append.append(cause);
try errors_to_append.append(.{ .err = cause, .label = "cause" });
}
}
}
@@ -3272,7 +3375,7 @@ fn printErrorInstance(
);
}
for (errors_to_append.items) |err| {
for (errors_to_append.items) |item| {
// Check for circular references to prevent infinite recursion in cause chains
if (formatter.map_node == null) {
formatter.map_node = ConsoleObject.Formatter.Visited.Pool.get(default_allocator);
@@ -3280,7 +3383,7 @@ fn printErrorInstance(
formatter.map = formatter.map_node.?.data;
}
const entry = formatter.map.getOrPut(err) catch unreachable;
const entry = formatter.map.getOrPut(item.err) catch unreachable;
if (entry.found_existing) {
try writer.writeAll("\n");
try writer.writeAll(comptime Output.prettyFmt("<r><cyan>[Circular]<r>", allow_ansi_color));
@@ -3288,8 +3391,11 @@ fn printErrorInstance(
}
try writer.writeAll("\n");
try this.printErrorInstance(.js, err, exception_list, formatter, Writer, writer, allow_ansi_color, allow_side_effects);
_ = formatter.map.remove(err);
if (item.label) |label| {
try writer.print(comptime Output.prettyFmt("<cyan>[{s}]<r>\n", allow_ansi_color), .{label});
}
try this.printErrorInstance(.js, item.err, exception_list, formatter, Writer, writer, allow_ansi_color, allow_side_effects);
_ = formatter.map.remove(item.err);
}
}

View File

@@ -50,6 +50,26 @@ void Bun__TestReporterAgentReportTestFound(Inspector::InspectorTestReporterAgent
agent->reportTestFound(callFrame, testId, str, type, parentId);
}
void Bun__TestReporterAgentReportTestFoundWithLocation(Inspector::InspectorTestReporterAgent* agent, int testId, BunString* name, BunTestType item_type, int parentId, BunString* sourceURL, int line)
{
auto str = name->toWTFString(BunString::ZeroCopy);
auto sourceURLStr = sourceURL->toWTFString(BunString::ZeroCopy);
Protocol::TestReporter::TestType type;
switch (item_type) {
case BunTestType::Test:
type = Protocol::TestReporter::TestType::Test;
break;
case BunTestType::Describe:
type = Protocol::TestReporter::TestType::Describe;
break;
default:
ASSERT_NOT_REACHED();
}
agent->reportTestFoundWithLocation(testId, str, type, parentId, sourceURLStr, line);
}
void Bun__TestReporterAgentReportTestStart(Inspector::InspectorTestReporterAgent* agent, int testId)
{
agent->reportTestStart(testId);
@@ -211,6 +231,21 @@ void InspectorTestReporterAgent::reportTestFound(JSC::CallFrame* callFrame, int
parentId > 0 ? parentId : std::optional<int>());
}
void InspectorTestReporterAgent::reportTestFoundWithLocation(int testId, const String& name, Protocol::TestReporter::TestType type, int parentId, const String& sourceURL, int line)
{
if (!m_enabled)
return;
m_frontendDispatcher->found(
testId,
String(), // sourceID - not available for retroactively reported tests
sourceURL,
line,
name,
type,
parentId > 0 ? parentId : std::optional<int>());
}
void InspectorTestReporterAgent::reportTestStart(int testId)
{
if (!m_enabled || !m_frontendDispatcher)

View File

@@ -34,6 +34,7 @@ public:
// Public API for reporting test events
void reportTestFound(JSC::CallFrame*, int testId, const String& name, Protocol::TestReporter::TestType type = Protocol::TestReporter::TestType::Test, int parentId = -1);
void reportTestFoundWithLocation(int testId, const String& name, Protocol::TestReporter::TestType type, int parentId, const String& sourceURL, int line);
void reportTestStart(int testId);
void reportTestEnd(int testId, Protocol::TestReporter::TestStatus status, double elapsed);

View File

@@ -248,7 +248,9 @@ pub const PathWatcherManager = struct {
const hash = Watcher.getHash(path_slice);
// skip consecutive duplicates
const event_type: PathWatcher.EventType = .rename; // renaming folders, or creating folders or files, will always be rename
// If it's a create, delete, rename, or move event, emit "rename"
// If it's a pure write (modify) event, emit "change"
const event_type: PathWatcher.EventType = if (event.op.create or event.op.delete or event.op.rename or event.op.move_to) .rename else .change;
for (watchers) |w| {
if (w) |watcher| {
if (comptime Environment.isMac) {
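The classification introduced in this hunk can be expressed as a small predicate. This is a hypothetical TypeScript mirror of the Zig expression above (field names follow `WatchEvent.Op` from the diff), not code that exists in the repo:

```typescript
// Sketch of the new directory-event classification: structural operations
// (create/delete/rename/move) emit "rename"; a pure write emits "change".
interface WatchOp {
  create: boolean;
  delete: boolean;
  rename: boolean;
  move_to: boolean;
  write: boolean;
}

function directoryEventType(op: WatchOp): "rename" | "change" {
  return op.create || op.delete || op.rename || op.move_to ? "rename" : "change";
}
```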

View File

@@ -166,8 +166,14 @@ pub fn write(index: u32, graph: *const Graph, linker_graph: *const LinkerGraph,
defer already_visited_output_file.deinit(bun.default_allocator);
// Write all chunks that have files associated with this entry point.
// Also include browser chunks from server builds (lazy-loaded chunks from dynamic imports).
// When there's only one HTML import, all browser chunks belong to that manifest.
// When there are multiple HTML imports, only include chunks that intersect with this entry's bits.
const has_single_html_import = graph.html_imports.html_source_indices.len == 1;
for (chunks) |*ch| {
if (ch.entryBits().hasIntersection(&entry_point_bits)) {
if (ch.entryBits().hasIntersection(&entry_point_bits) or
(has_single_html_import and ch.flags.is_browser_chunk_from_server_build))
{
if (!first) try writer.writeAll(",");
first = false;
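The widened inclusion condition can be isolated as a predicate. A hypothetical sketch (these parameter names are illustrative, not the repo's):

```typescript
// A chunk joins the HTML manifest if its entry bits intersect this entry
// point's bits, or — when there is exactly one HTML import — it is a
// browser chunk produced by a server build (a lazy-loaded split chunk).
function includeInManifest(
  bitsIntersect: boolean,
  hasSingleHtmlImport: boolean,
  isBrowserChunkFromServerBuild: boolean,
): boolean {
  return bitsIntersect || (hasSingleHtmlImport && isBrowserChunkFromServerBuild);
}
```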

View File

@@ -229,6 +229,16 @@ pub noinline fn computeChunks(
.output_source_map = SourceMap.SourceMapPieces.init(this.allocator()),
.flags = .{ .is_browser_chunk_from_server_build = is_browser_chunk_from_server_build },
};
} else if (could_be_browser_target_from_server_build and
!js_chunk_entry.value_ptr.entry_point.is_entry_point and
!js_chunk_entry.value_ptr.flags.is_browser_chunk_from_server_build and
ast_targets[source_index.get()] == .browser)
{
// If any file in the chunk has browser target, mark the whole chunk as browser.
// This handles the case where a lazy-loaded chunk (code splitting chunk, not entry point)
// contains browser-targeted files but was first created by a non-browser file.
// We only apply this to non-entry-point chunks to preserve the correct side for server entry points.
js_chunk_entry.value_ptr.flags.is_browser_chunk_from_server_build = true;
}
const entry = js_chunk_entry.value_ptr.files_with_parts_in_chunk.getOrPut(this.allocator(), @as(u32, @truncate(source_index.get()))) catch unreachable;

View File

@@ -690,13 +690,26 @@ pub const Command = struct {
const original_argv_len = bun.argv.len;
var argv_list = std.array_list.Managed([:0]const u8).fromOwnedSlice(bun.default_allocator, bun.argv);
try bun.appendOptionsEnv(graph.compile_exec_argv, &argv_list, bun.default_allocator);
bun.argv = argv_list.items;
// Store the full argv including user arguments
const full_argv = argv_list.items;
const num_exec_argv_options = full_argv.len -| original_argv_len;
// Calculate offset: skip executable name + all exec argv options
offset_for_passthrough = if (bun.argv.len > 1) 1 + (bun.argv.len -| original_argv_len) else 0;
offset_for_passthrough = if (full_argv.len > 1) 1 + num_exec_argv_options else 0;
// Temporarily set bun.argv to only include executable name + exec_argv options.
// This prevents user arguments like --version/--help from being intercepted
// by Bun's argument parser (they should be passed through to user code).
bun.argv = full_argv[0..@min(1 + num_exec_argv_options, full_argv.len)];
// Handle actual options to parse.
break :brk try Command.init(allocator, log, .AutoCommand);
const result = try Command.init(allocator, log, .AutoCommand);
// Restore full argv so passthrough calculation works correctly
bun.argv = full_argv;
break :brk result;
}
context_data = .{
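The corrected offset arithmetic from this hunk can be sketched in isolation. Hypothetical helper (the real code operates on `bun.argv` slices; `-|` is Zig's saturating subtraction):

```typescript
// Offset for passthrough args: skip the executable name plus every injected
// exec-argv option, so user flags like --version reach user code untouched.
function passthroughOffset(fullArgvLen: number, originalArgvLen: number): number {
  const numExecArgvOptions = Math.max(fullArgvLen - originalArgvLen, 0); // mirrors `-|`
  return fullArgvLen > 1 ? 1 + numExecArgvOptions : 0;
}
```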

View File

@@ -297,6 +297,16 @@ fn getHomeConfigPath(buf: *bun.PathBuffer) ?[:0]const u8 {
return null;
}
pub fn loadConfig(allocator: std.mem.Allocator, user_config_path_: ?string, ctx: Command.Context, comptime cmd: Command.Tag) OOM!void {
// If running as a standalone executable with autoloadBunfig disabled, skip config loading
// unless an explicit config path was provided via --config
if (user_config_path_ == null) {
if (bun.StandaloneModuleGraph.get()) |graph| {
if (graph.flags.disable_autoload_bunfig) {
return;
}
}
}
var config_buf: bun.PathBuffer = undefined;
if (comptime cmd.readGlobalConfig()) {
if (!ctx.has_loaded_global_config) {
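The new guard reduces to a two-condition predicate. A hypothetical sketch, with invented parameter names (the real check reads `bun.StandaloneModuleGraph` flags):

```typescript
// Autoloading bunfig is skipped only when no explicit --config path was
// given AND the standalone executable's graph disables autoload.
function shouldSkipBunfigAutoload(
  explicitConfigPath: string | null,
  standaloneGraph: { disableAutoloadBunfig: boolean } | null,
): boolean {
  return explicitConfigPath === null && standaloneGraph !== null && standaloneGraph.disableAutoloadBunfig;
}
```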

View File

@@ -220,7 +220,7 @@ pub const BorderRadiusHandler = struct {
if (logical_supported) {
bun.handleOom(d.append(ctx.allocator, v));
} else {
const prefix = ctx.targets.prefixes(css.VendorPrefix{}, css.prefixes.Feature.border_radius);
const prefix = ctx.targets.prefixes(css.VendorPrefix{ .none = true }, css.prefixes.Feature.border_radius);
switch (v) {
.@"border-start-start-radius",
.@"border-start-end-radius",

View File

@@ -597,12 +597,6 @@ pub const NumberRenamer = struct {
r.number_scope_pool.put(s);
};
// Ignore function argument scopes
if (scope.kind == .function_args and scope.children.len == 1) {
scope = scope.children.ptr[0];
bun.assert(scope.kind == .function_body);
}
while (true) {
if (scope.members.count() > 0 or scope.generated.len > 0) {
const new_child_scope = r.number_scope_pool.get();
@@ -617,10 +611,6 @@ pub const NumberRenamer = struct {
if (scope.children.len == 1) {
scope = scope.children.ptr[0];
if (scope.kind == .function_args and scope.children.len == 1) {
scope = scope.children.ptr[0];
bun.assert(scope.kind == .function_body);
}
} else {
break;
}

View File

@@ -225,6 +225,9 @@ pub const ShellLsTask = struct {
is_absolute: bool = false,
err: ?Syscall.Error = null,
result_kind: enum { file, dir, idk } = .idk,
/// Cached current time (seconds since epoch) for formatting timestamps.
/// Cached once per task to avoid repeated syscalls.
#now_secs: u64 = 0,
event_loop: jsc.EventLoopHandle,
concurrent_task: jsc.EventLoopTask,
@@ -293,6 +296,11 @@ pub const ShellLsTask = struct {
}
pub fn run(this: *@This()) void {
// Cache current time once per task for timestamp formatting
if (this.opts.long_listing) {
this.#now_secs = @intCast(std.time.timestamp());
}
const fd = switch (ShellSyscall.openat(this.cwd, this.path, bun.O.RDONLY | bun.O.DIRECTORY, 0)) {
.err => |e| {
switch (e.getErrno()) {
@@ -301,7 +309,7 @@ pub const ShellLsTask = struct {
},
.NOTDIR => {
this.result_kind = .file;
this.addEntry(this.path);
this.addEntry(this.path, this.cwd);
},
else => {
this.err = this.errorWithPath(e, this.path);
@@ -329,7 +337,7 @@ pub const ShellLsTask = struct {
// If `-a` is used, "." and ".." should show up as results. However,
// our `DirIterator` abstraction skips them, so let's just add them
// now.
this.addDotEntriesIfNeeded();
this.addDotEntriesIfNeeded(fd);
while (switch (entry) {
.err => |e| {
@@ -338,7 +346,7 @@ pub const ShellLsTask = struct {
},
.result => |ent| ent,
}) |current| : (entry = iterator.next()) {
this.addEntry(current.name.sliceAssumeZ());
this.addEntry(current.name.sliceAssumeZ(), fd);
if (current.kind == .directory and this.opts.recursive) {
this.enqueue(current.name.sliceAssumeZ());
}
@@ -367,20 +375,167 @@ pub const ShellLsTask = struct {
}
// TODO more complex output like multi-column
fn addEntry(this: *@This(), name: [:0]const u8) void {
fn addEntry(this: *@This(), name: [:0]const u8, dir_fd: bun.FileDescriptor) void {
const skip = this.shouldSkipEntry(name);
debug("Entry: (skip={}) {s} :: {s}", .{ skip, this.path, name });
if (skip) return;
bun.handleOom(this.output.ensureUnusedCapacity(name.len + 1));
bun.handleOom(this.output.appendSlice(name));
bun.handleOom(this.output.append('\n'));
if (this.opts.long_listing) {
this.addEntryLong(name, dir_fd);
} else {
bun.handleOom(this.output.ensureUnusedCapacity(name.len + 1));
bun.handleOom(this.output.appendSlice(name));
bun.handleOom(this.output.append('\n'));
}
}
fn addDotEntriesIfNeeded(this: *@This()) void {
fn addEntryLong(this: *@This(), name: [:0]const u8, dir_fd: bun.FileDescriptor) void {
// Use lstatat to not follow symlinks (so symlinks show as 'l' type)
const stat_result = Syscall.lstatat(dir_fd, name);
const stat = switch (stat_result) {
.err => {
// If stat fails, just output the name with placeholders
const writer = this.output.writer();
bun.handleOom(writer.print("?????????? ? ? ? ? ? {s}\n", .{name}));
return;
},
.result => |s| s,
};
const writer = this.output.writer();
// File type and permissions
const mode: u32 = @intCast(stat.mode);
const file_type = getFileTypeChar(mode);
const perms = formatPermissions(mode);
// Number of hard links
const nlink: u64 = @intCast(stat.nlink);
// Owner and group (numeric)
const uid: u64 = @intCast(stat.uid);
const gid: u64 = @intCast(stat.gid);
// File size
const size: i64 = @intCast(stat.size);
// Modification time
const mtime = stat.mtime();
const time_str = formatTime(@intCast(mtime.sec), this.#now_secs);
bun.handleOom(writer.print("{c}{s} {d: >3} {d: >5} {d: >5} {d: >8} {s} {s}\n", .{
file_type,
&perms,
nlink,
uid,
gid,
size,
&time_str,
name,
}));
}
fn getFileTypeChar(mode: u32) u8 {
const file_type = mode & bun.S.IFMT;
return switch (file_type) {
bun.S.IFDIR => 'd',
bun.S.IFLNK => 'l',
bun.S.IFBLK => 'b',
bun.S.IFCHR => 'c',
bun.S.IFIFO => 'p',
bun.S.IFSOCK => 's',
else => '-', // IFREG or unknown
};
}
fn formatPermissions(mode: u32) [9]u8 {
var perms: [9]u8 = undefined;
// Owner permissions
perms[0] = if (mode & bun.S.IRUSR != 0) 'r' else '-';
perms[1] = if (mode & bun.S.IWUSR != 0) 'w' else '-';
// Owner execute with setuid handling
const owner_exec = mode & bun.S.IXUSR != 0;
const setuid = mode & bun.S.ISUID != 0;
perms[2] = if (setuid)
(if (owner_exec) 's' else 'S')
else
(if (owner_exec) 'x' else '-');
// Group permissions
perms[3] = if (mode & bun.S.IRGRP != 0) 'r' else '-';
perms[4] = if (mode & bun.S.IWGRP != 0) 'w' else '-';
// Group execute with setgid handling
const group_exec = mode & bun.S.IXGRP != 0;
const setgid = mode & bun.S.ISGID != 0;
perms[5] = if (setgid)
(if (group_exec) 's' else 'S')
else
(if (group_exec) 'x' else '-');
// Other permissions
perms[6] = if (mode & bun.S.IROTH != 0) 'r' else '-';
perms[7] = if (mode & bun.S.IWOTH != 0) 'w' else '-';
// Other execute with sticky bit handling
const other_exec = mode & bun.S.IXOTH != 0;
const sticky = mode & bun.S.ISVTX != 0;
perms[8] = if (sticky)
(if (other_exec) 't' else 'T')
else
(if (other_exec) 'x' else '-');
return perms;
}
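For reference, the permission-triple logic above (including the setuid/setgid/sticky special characters, lowercase when the execute bit is also set) ports to TypeScript as follows. This is a hypothetical mirror of `formatPermissions`, not repo code:

```typescript
// Hypothetical TS port of formatPermissions: three rwx triples, where the
// special bit replaces 'x' with 's'/'t' (or 'S'/'T' when execute is unset).
function formatPermissionsTS(mode: number): string {
  const triple = (r: number, w: number, x: number, special: number, specialChar: string): string =>
    (mode & r ? "r" : "-") +
    (mode & w ? "w" : "-") +
    (mode & special
      ? (mode & x ? specialChar : specialChar.toUpperCase())
      : mode & x ? "x" : "-");
  return (
    triple(0o400, 0o200, 0o100, 0o4000, "s") + // owner (setuid)
    triple(0o040, 0o020, 0o010, 0o2000, "s") + // group (setgid)
    triple(0o004, 0o002, 0o001, 0o1000, "t")   // other (sticky)
  );
}
```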
fn formatTime(timestamp: i64, now_secs: u64) [12]u8 {
var buf: [12]u8 = undefined;
// Format as "Mon DD HH:MM" for recent files (within 6 months)
// or "Mon DD YYYY" for older files
const epoch_secs: u64 = if (timestamp < 0) 0 else @intCast(timestamp);
const epoch = std.time.epoch.EpochSeconds{ .secs = epoch_secs };
const day_seconds = epoch.getDaySeconds();
const year_day = epoch.getEpochDay().calculateYearDay();
const month_names = [_][]const u8{ "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" };
const month_day = year_day.calculateMonthDay();
const month_name = month_names[month_day.month.numeric() - 1];
// Check if file is older than 6 months (approximately 180 days)
const six_months_secs: u64 = 180 * 24 * 60 * 60;
const is_recent = epoch_secs > now_secs -| six_months_secs and epoch_secs <= now_secs + six_months_secs;
if (is_recent) {
const hours = day_seconds.getHoursIntoDay();
const minutes = day_seconds.getMinutesIntoHour();
_ = std.fmt.bufPrint(&buf, "{s} {d:0>2} {d:0>2}:{d:0>2}", .{
month_name,
month_day.day_index + 1,
hours,
minutes,
}) catch {
@memcpy(&buf, "??? ?? ??:??");
};
} else {
// Show year for old files
const year = year_day.year;
_ = std.fmt.bufPrint(&buf, "{s} {d:0>2} {d:4}", .{
month_name,
month_day.day_index + 1,
year,
}) catch {
@memcpy(&buf, "??? ?? ????");
};
}
return buf;
}
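The six-month recency cutoff used by `formatTime` above can be sketched on its own. Hypothetical helper; the comparison matches the Zig expression, including the saturating lower bound:

```typescript
// ls -l time column rule: files within ~6 months (180 days) of "now" show
// "Mon DD HH:MM"; older or far-future files show the year instead.
const SIX_MONTHS_SECS = 180 * 24 * 60 * 60;

function showsClockTime(mtimeSecs: number, nowSecs: number): boolean {
  const lowerBound = Math.max(nowSecs - SIX_MONTHS_SECS, 0); // mirrors Zig's `-|`
  return mtimeSecs > lowerBound && mtimeSecs <= nowSecs + SIX_MONTHS_SECS;
}
```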
fn addDotEntriesIfNeeded(this: *@This(), dir_fd: bun.FileDescriptor) void {
// `.addEntry()` already checks whether we can add "." and ".." to
// the result
this.addEntry(".");
this.addEntry("..");
this.addEntry(".", dir_fd);
this.addEntry("..", dir_fd);
}
fn errorWithPath(this: *@This(), err: Syscall.Error, path: [:0]const u8) Syscall.Error {

View File

@@ -646,7 +646,7 @@ pub fn handleCommand(this: *MySQLConnection, comptime Context: type, reader: New
.failed => {
const connection = this.getJSConnection();
defer {
this.queue.advance(connection);
this.flushQueue() catch {};
}
this.#flags.is_ready_for_query = true;
this.queue.markAsReadyForQuery();
@@ -933,7 +933,11 @@ fn handleResultSetOK(this: *MySQLConnection, request: *JSMySQLQuery, statement:
const connection = this.getJSConnection();
debug("handleResultSetOK: {d} {}", .{ status_flags.toInt(), is_last_result });
defer {
this.queue.advance(connection);
// Use flushQueue instead of just advance to ensure any data written
// by queries added during onQueryResult is actually sent.
// This fixes a race condition where the auto flusher may not be
// registered if the queue's current item is completed (not pending).
this.flushQueue() catch {};
}
this.#flags.is_ready_for_query = is_last_result;
if (is_last_result) {
@@ -977,7 +981,7 @@ fn handleResultSet(this: *MySQLConnection, comptime Context: type, reader: NewRe
try err.decode(reader);
defer err.deinit();
defer {
this.queue.advance(connection);
this.flushQueue() catch {};
}
if (request.getStatement()) |statement| {
statement.reset();

View File

@@ -1,4 +1,9 @@
pub fn decodeBinaryValue(globalObject: *jsc.JSGlobalObject, field_type: types.FieldType, column_length: u32, raw: bool, bigint: bool, unsigned: bool, binary: bool, comptime Context: type, reader: NewReader(Context)) !SQLDataCell {
/// MySQL's "binary" pseudo-charset ID. Columns with this character_set value
/// are true binary types (BINARY, VARBINARY, BLOB), as opposed to string columns
/// with binary collations (e.g., utf8mb4_bin) which have different character_set values.
pub const binary_charset: u16 = 63;
pub fn decodeBinaryValue(globalObject: *jsc.JSGlobalObject, field_type: types.FieldType, column_length: u32, raw: bool, bigint: bool, unsigned: bool, binary: bool, character_set: u16, comptime Context: type, reader: NewReader(Context)) !SQLDataCell {
debug("decodeBinaryValue: {s}", .{@tagName(field_type)});
return switch (field_type) {
.MYSQL_TYPE_TINY => {
@@ -151,7 +156,11 @@ pub fn decodeBinaryValue(globalObject: *jsc.JSGlobalObject, field_type: types.Fi
}
var string_data = try reader.encodeLenString();
defer string_data.deinit();
if (binary) {
// Only treat as binary if character_set indicates the binary pseudo-charset.
// The BINARY flag alone is insufficient because VARCHAR/CHAR columns
// with _bin collations (e.g., utf8mb4_bin) also have the BINARY flag set,
// but should return strings, not buffers.
if (binary and character_set == binary_charset) {
return SQLDataCell.raw(&string_data);
}
const slice = string_data.slice();
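The corrected detection in both hunks reduces to one predicate. A hypothetical sketch; the only constant taken from the diff is the binary pseudo-charset id 63, and the non-binary charset id in the test is merely an illustrative value:

```typescript
// A column is a true binary type (BINARY/VARBINARY/BLOB) only when the
// BINARY flag is set AND character_set is MySQL's binary pseudo-charset.
// String columns with *_bin collations set the flag but use other charsets.
const BINARY_CHARSET = 63;

function isTrueBinaryColumn(hasBinaryFlag: boolean, characterSet: number): boolean {
  return hasBinaryFlag && characterSet === BINARY_CHARSET;
}
```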

View File

@@ -140,7 +140,11 @@ pub const Row = struct {
}
},
else => {
if (column.flags.BINARY) {
// Only treat as binary if character_set indicates the binary pseudo-charset.
// The BINARY flag alone is insufficient because VARCHAR/CHAR columns
// with _bin collations (e.g., utf8mb4_bin) also have the BINARY flag set,
// but should return strings, not buffers.
if (column.flags.BINARY and column.character_set == DecodeBinaryValue.binary_charset) {
cell.* = SQLDataCell.raw(value);
} else {
const slice = value.slice();
@@ -230,7 +234,7 @@ pub const Row = struct {
}
const column = this.columns[i];
value.* = try decodeBinaryValue(this.globalObject, column.column_type, column.column_length, this.raw, this.bigint, column.flags.UNSIGNED, column.flags.BINARY, Context, reader);
value.* = try decodeBinaryValue(this.globalObject, column.column_type, column.column_length, this.raw, this.bigint, column.flags.UNSIGNED, column.flags.BINARY, column.character_set, Context, reader);
value.index = switch (column.name_or_index) {
// The indexed columns can be out of order.
.index => |idx| idx,
@@ -260,9 +264,11 @@ const std = @import("std");
const Data = @import("../../shared/Data.zig").Data;
const SQLDataCell = @import("../../shared/SQLDataCell.zig").SQLDataCell;
const SQLQueryResultMode = @import("../../shared/SQLQueryResultMode.zig").SQLQueryResultMode;
const decodeBinaryValue = @import("./DecodeBinaryValue.zig").decodeBinaryValue;
const decodeLengthInt = @import("./EncodeInt.zig").decodeLengthInt;
const DecodeBinaryValue = @import("./DecodeBinaryValue.zig");
const decodeBinaryValue = DecodeBinaryValue.decodeBinaryValue;
const NewReader = @import("./NewReader.zig").NewReader;
const decoderWrap = @import("./NewReader.zig").decoderWrap;

View File

@@ -744,6 +744,28 @@ pub fn fstatat(fd: bun.FileDescriptor, path: [:0]const u8) Maybe(bun.Stat) {
return Maybe(bun.Stat){ .result = stat_buf };
}
/// Like fstatat but does not follow symlinks (uses AT_SYMLINK_NOFOLLOW)
pub fn lstatat(fd: bun.FileDescriptor, path: [:0]const u8) Maybe(bun.Stat) {
if (Environment.isWindows) {
// On Windows, use O.NOFOLLOW to get lstat behavior (prevents following symlinks)
return switch (openatWindowsA(fd, path, O.NOFOLLOW, 0)) {
.result => |file| {
defer file.close();
return fstat(file);
},
.err => |err| Maybe(bun.Stat){ .err = err },
};
}
var stat_buf = mem.zeroes(bun.Stat);
const fd_valid = if (fd == bun.invalid_fd) std.posix.AT.FDCWD else fd.native();
if (Maybe(bun.Stat).errnoSysFP(syscall.fstatat(fd_valid, path, &stat_buf, std.posix.AT.SYMLINK_NOFOLLOW), .fstatat, fd, path)) |err| {
log("lstatat({f}, {s}) = {s}", .{ fd, path, @tagName(err.getErrno()) });
return err;
}
log("lstatat({f}, {s}) = 0", .{ fd, path });
return Maybe(bun.Stat){ .result = stat_buf };
}
pub fn mkdir(file_path: [:0]const u8, flags: mode_t) Maybe(void) {
return switch (Environment.os) {
.mac => Maybe(void).errnoSysP(syscall.mkdir(file_path, flags), .mkdir, file_path) orelse .success,
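The stat-vs-lstat distinction this helper provides is observable from user code. A small demonstration (assumes a POSIX system, since creating symlinks on Windows may require elevated privileges):

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// stat follows a symlink to its target; lstat reports the link itself.
// This is the behavior lstatat adds via AT_SYMLINK_NOFOLLOW.
function statVsLstat(): { viaStat: boolean; viaLstat: boolean } {
  const dir = fs.mkdtempSync(path.join(os.tmpdir(), "lstat-demo-"));
  const target = path.join(dir, "target.txt");
  fs.writeFileSync(target, "hello");
  const link = path.join(dir, "link");
  fs.symlinkSync(target, link);
  return {
    viaStat: fs.statSync(link).isSymbolicLink(),   // followed to the regular file
    viaLstat: fs.lstatSync(link).isSymbolicLink(), // the link itself
  };
}
```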

View File

@@ -77,7 +77,7 @@ pub fn watchDir(this: *INotifyWatcher, pathname: [:0]const u8) bun.sys.Maybe(Eve
bun.assert(this.loaded);
const old_count = this.watch_count.fetchAdd(1, .release);
defer if (old_count == 0) Futex.wake(&this.watch_count, 10);
const watch_dir_mask = IN.EXCL_UNLINK | IN.DELETE | IN.DELETE_SELF | IN.CREATE | IN.MOVE_SELF | IN.ONLYDIR | IN.MOVED_TO;
const watch_dir_mask = IN.EXCL_UNLINK | IN.DELETE | IN.DELETE_SELF | IN.CREATE | IN.MOVE_SELF | IN.ONLYDIR | IN.MOVED_TO | IN.MODIFY;
const rc = system.inotify_add_watch(this.fd.cast(), pathname, watch_dir_mask);
log("inotify_add_watch({f}) = {}", .{ this.fd, rc });
return bun.sys.Maybe(EventListIndex).errnoSysP(rc, .watch, pathname) orelse
@@ -364,6 +364,7 @@ pub fn watchEventFromInotifyEvent(event: *align(1) const INotifyWatcher.Event, i
.rename = (event.mask & IN.MOVE_SELF) > 0,
.move_to = (event.mask & IN.MOVED_TO) > 0,
.write = (event.mask & IN.MODIFY) > 0,
.create = (event.mask & IN.CREATE) > 0,
},
.index = index,
};
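The end-to-end behavior this `IN.MODIFY` subscription enables can be exercised through `fs.watch`. A sketch of the fixed scenario (timings are generous to avoid races; event ordering and duplication vary by platform, so only presence is checked):

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Watch a directory, then create and modify a file inside it. After the fix,
// the modification yields a 'change' event; previously only 'rename' fired.
async function watchDirEvents(): Promise<string[]> {
  const dir = fs.mkdtempSync(path.join(os.tmpdir(), "watch-demo-"));
  const events: string[] = [];
  const watcher = fs.watch(dir, (eventType, filename) => {
    events.push(`${eventType}:${filename}`);
  });
  const sleep = (ms: number) => new Promise<void>(r => setTimeout(r, ms));
  await sleep(100); // let the watch settle
  const file = path.join(dir, "a.txt");
  fs.writeFileSync(file, "hello");   // creation: 'rename'
  await sleep(200);
  fs.appendFileSync(file, " world"); // modification: 'change'
  await sleep(200);
  watcher.close();
  return events;
}
```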

View File

@@ -453,6 +453,84 @@ console.log("PRELOAD");
},
});
// Test that autoloadBunfig: false works with execArgv (regression test for #25640)
// When execArgv is present, bunfig should still be disabled if autoloadBunfig: false
itBundled("compile/AutoloadBunfigDisabledWithExecArgv", {
compile: {
autoloadBunfig: false,
execArgv: ["--smol"],
},
files: {
"/entry.ts": /* js */ `
console.log("ENTRY");
`,
},
runtimeFiles: {
"/bunfig.toml": `
preload = ["./preload.ts"]
`,
"/preload.ts": `
console.log("PRELOAD");
`,
},
run: {
// When bunfig is disabled, preload should NOT execute even with execArgv
stdout: "ENTRY",
setCwd: true,
},
});
// Test CLI backend for autoloadBunfig: false with execArgv (regression test for #25640)
itBundled("compile/AutoloadBunfigDisabledWithExecArgvCLI", {
compile: {
autoloadBunfig: false,
execArgv: ["--smol"],
},
backend: "cli",
files: {
"/entry.ts": /* js */ `
console.log("ENTRY");
`,
},
runtimeFiles: {
"/bunfig.toml": `
preload = ["./preload.ts"]
`,
"/preload.ts": `
console.log("PRELOAD");
`,
},
run: {
stdout: "ENTRY",
setCwd: true,
},
});
// Test that autoloadBunfig: true with execArgv still loads bunfig
itBundled("compile/AutoloadBunfigEnabledWithExecArgv", {
compile: {
autoloadBunfig: true,
execArgv: ["--smol"],
},
files: {
"/entry.ts": /* js */ `
console.log("ENTRY");
`,
},
runtimeFiles: {
"/bunfig.toml": `
preload = ["./preload.ts"]
`,
"/preload.ts": `
console.log("PRELOAD");
`,
},
run: {
stdout: "PRELOAD\nENTRY",
setCwd: true,
},
});
// Test that both tsconfig and package.json can be enabled together
itBundled("compile/AutoloadBothTsconfigAndPackageJson", {
compile: {

View File

@@ -175,4 +175,105 @@ describe("bundler", () => {
stdout: /SUCCESS: user arguments properly passed with exec argv present/,
},
});
// Test that --version and --help flags are passed through to user code (issue #26082)
// When compile-exec-argv is used, user flags like --version should NOT be intercepted by Bun
itBundled("compile/CompileExecArgvVersionHelpPassthrough", {
compile: {
execArgv: ["--smol"],
},
backend: "cli",
files: {
"/entry.ts": /* js */ `
// Test that --version and --help are passed through to user code, not intercepted by Bun
const args = process.argv.slice(2);
console.log("User args:", JSON.stringify(args));
if (args.includes("--version")) {
console.log("APP_VERSION:1.0.0");
} else if (args.includes("-v")) {
console.log("APP_VERSION:1.0.0");
} else if (args.includes("--help")) {
console.log("APP_HELP:This is my app help");
} else if (args.includes("-h")) {
console.log("APP_HELP:This is my app help");
} else {
console.log("NO_FLAG_MATCHED");
}
`,
},
run: {
args: ["--version"],
stdout: /APP_VERSION:1\.0\.0/,
},
});
// Test with -v short flag
itBundled("compile/CompileExecArgvShortVersionPassthrough", {
compile: {
execArgv: ["--smol"],
},
backend: "cli",
files: {
"/entry.ts": /* js */ `
const args = process.argv.slice(2);
if (args.includes("-v")) {
console.log("APP_VERSION:1.0.0");
} else {
console.log("FAIL: -v not found in args:", args);
process.exit(1);
}
`,
},
run: {
args: ["-v"],
stdout: /APP_VERSION:1\.0\.0/,
},
});
// Test with --help flag
itBundled("compile/CompileExecArgvHelpPassthrough", {
compile: {
execArgv: ["--smol"],
},
backend: "cli",
files: {
"/entry.ts": /* js */ `
const args = process.argv.slice(2);
if (args.includes("--help")) {
console.log("APP_HELP:my custom help");
} else {
console.log("FAIL: --help not found in args:", args);
process.exit(1);
}
`,
},
run: {
args: ["--help"],
stdout: /APP_HELP:my custom help/,
},
});
// Test with -h short flag
itBundled("compile/CompileExecArgvShortHelpPassthrough", {
compile: {
execArgv: ["--smol"],
},
backend: "cli",
files: {
"/entry.ts": /* js */ `
const args = process.argv.slice(2);
if (args.includes("-h")) {
console.log("APP_HELP:my custom help");
} else {
console.log("FAIL: -h not found in args:", args);
process.exit(1);
}
`,
},
run: {
args: ["-h"],
stdout: /APP_HELP:my custom help/,
},
});
});

View File

@@ -0,0 +1,262 @@
import { Subprocess, spawn } from "bun";
import { afterEach, describe, expect, test } from "bun:test";
import { bunEnv, bunExe, isPosix, tempDir } from "harness";
import { join } from "node:path";
import { InspectorSession, connect } from "./junit-reporter";
import { SocketFramer } from "./socket-framer";
/**
* Extended InspectorSession with helper methods for TestReporter testing
*/
class TestReporterSession extends InspectorSession {
private foundTests: Map<number, any> = new Map();
private startedTests: Set<number> = new Set();
private endedTests: Map<number, any> = new Map();
constructor() {
super();
this.setupTestEventListeners();
}
private setupTestEventListeners() {
this.addEventListener("TestReporter.found", (params: any) => {
this.foundTests.set(params.id, params);
});
this.addEventListener("TestReporter.start", (params: any) => {
this.startedTests.add(params.id);
});
this.addEventListener("TestReporter.end", (params: any) => {
this.endedTests.set(params.id, params);
});
}
enableInspector() {
this.send("Inspector.enable");
}
enableTestReporter() {
this.send("TestReporter.enable");
}
enableAll() {
this.send("Inspector.enable");
this.send("TestReporter.enable");
this.send("LifecycleReporter.enable");
this.send("Console.enable");
this.send("Runtime.enable");
}
initialize() {
this.send("Inspector.initialized");
}
unref() {
this.socket?.unref();
}
ref() {
this.socket?.ref();
}
getFoundTests() {
return this.foundTests;
}
getStartedTests() {
return this.startedTests;
}
getEndedTests() {
return this.endedTests;
}
clearFoundTests() {
this.foundTests.clear();
}
waitForEvent(eventName: string, timeout = 10000): Promise<any> {
this.ref();
return new Promise((resolve, reject) => {
const timer = setTimeout(() => {
reject(new Error(`Timeout waiting for event: ${eventName}`));
}, timeout);
const listener = (params: any) => {
clearTimeout(timer);
resolve(params);
};
this.addEventListener(eventName, listener);
});
}
/**
* Wait for a specific number of TestReporter.found events
*/
waitForFoundTests(count: number, timeout = 10000): Promise<Map<number, any>> {
this.ref();
return new Promise((resolve, reject) => {
const timer = setTimeout(() => {
reject(
new Error(
`Timeout waiting for ${count} found tests, got ${this.foundTests.size}: ${JSON.stringify([...this.foundTests.values()])}`,
),
);
}, timeout);
const check = () => {
if (this.foundTests.size >= count) {
clearTimeout(timer);
resolve(this.foundTests);
}
};
// Check immediately in case we already have enough
check();
// Also listen for new events
this.addEventListener("TestReporter.found", check);
});
}
/**
* Wait for a specific number of TestReporter.end events
*/
waitForEndedTests(count: number, timeout = 10000): Promise<Map<number, any>> {
this.ref();
return new Promise((resolve, reject) => {
const timer = setTimeout(() => {
reject(new Error(`Timeout waiting for ${count} ended tests, got ${this.endedTests.size}`));
}, timeout);
const check = () => {
if (this.endedTests.size >= count) {
clearTimeout(timer);
resolve(this.endedTests);
}
};
check();
this.addEventListener("TestReporter.end", check);
});
}
}
describe.if(isPosix)("TestReporter inspector protocol", () => {
let proc: Subprocess | undefined;
let socket: ReturnType<typeof connect> extends Promise<infer T> ? T : never;
afterEach(() => {
proc?.kill();
proc = undefined;
// @ts-ignore - close the socket if it exists
socket?.end?.();
socket = undefined as any;
});
test("retroactively reports tests when TestReporter.enable is called after tests are discovered", async () => {
// This test specifically verifies that when TestReporter.enable is called AFTER
// test collection has started, the already-discovered tests are retroactively reported.
//
// The flow is:
// 1. Connect to inspector and enable only Inspector domain (NOT TestReporter)
// 2. Send Inspector.initialized to allow test collection and execution to proceed
// 3. Wait briefly for test collection to complete
// 4. THEN send TestReporter.enable - this should trigger retroactive reporting
// of tests that were discovered but not yet reported
using dir = tempDir("test-reporter-delayed-enable", {
"delayed.test.ts": `
import { describe, test, expect } from "bun:test";
describe("suite A", () => {
test("test A1", async () => {
// Add delay to ensure we have time to enable TestReporter during execution
await Bun.sleep(500);
expect(1).toBe(1);
});
test("test A2", () => {
expect(2).toBe(2);
});
});
describe("suite B", () => {
test("test B1", () => {
expect(3).toBe(3);
});
});
`,
});
const socketPath = join(String(dir), `inspector-${Math.random().toString(36).substring(2)}.sock`);
const session = new TestReporterSession();
const framer = new SocketFramer((message: string) => {
session.onMessage(message);
});
const socketPromise = connect(`unix://${socketPath}`).then(s => {
socket = s;
session.socket = s;
session.framer = framer;
s.data = {
onData: framer.onData.bind(framer),
};
return s;
});
proc = spawn({
cmd: [bunExe(), `--inspect-wait=unix:${socketPath}`, "test", "delayed.test.ts"],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
await socketPromise;
// Enable Inspector only (NOT TestReporter)
session.enableInspector();
// Signal ready - this allows test collection and execution to proceed
session.initialize();
// Wait for test collection and first test to start running
// The first test has a 500ms sleep, so waiting 200ms ensures we're in execution phase
await Bun.sleep(200);
// Now enable TestReporter - this should trigger retroactive reporting
// of all tests that were discovered while TestReporter was disabled
session.enableTestReporter();
// We should receive found events for all tests retroactively
// Structure: 2 describes + 3 tests = 5 items
const foundTests = await session.waitForFoundTests(5, 15000);
expect(foundTests.size).toBe(5);
const testsArray = [...foundTests.values()];
const describes = testsArray.filter(t => t.type === "describe");
const tests = testsArray.filter(t => t.type === "test");
expect(describes.length).toBe(2);
expect(tests.length).toBe(3);
// Verify the test names
const testNames = tests.map(t => t.name).sort();
expect(testNames).toEqual(["test A1", "test A2", "test B1"]);
// Verify describe names
const describeNames = describes.map(d => d.name).sort();
expect(describeNames).toEqual(["suite A", "suite B"]);
// Wait for tests to complete
const endedTests = await session.waitForEndedTests(3, 15000);
expect(endedTests.size).toBe(3);
const exitCode = await proc.exited;
expect(exitCode).toBe(0);
});
});

View File

@@ -0,0 +1,18 @@
# Node.js Compatibility Tests
These are official Node.js tests from the Node.js repository.
## Important Notes
- These tests come from the Node.js repository and are **not written by Bun**, so they must not be modified
- The tests pass by exiting with code 0
## Running Tests
To run these tests with a debug build:
```bash
bun bd <file-path>
```
Note: `bun bd test <file-path>` does **not** work since these tests are meant to be run directly without the Bun test runner.

View File

@@ -0,0 +1,121 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe, tempDir } from "harness";
test("Error.cause should display with [cause] label", async () => {
using dir = tempDir("error-cause-test", {
"test.js": `
const err = new Error("Main error");
err.cause = new Error("Cause error");
console.error(err);
`,
});
await using proc = Bun.spawn({
cmd: [bunExe(), "test.js"],
env: bunEnv,
cwd: String(dir),
stderr: "pipe",
stdout: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
// The output should contain "[cause]" label
expect(stderr).toContain("[cause]");
expect(stderr).toContain("Main error");
expect(stderr).toContain("Cause error");
expect(exitCode).toBe(0);
});
test("AggregateError should display message and [errors] label", async () => {
using dir = tempDir("aggregate-error-test", {
"test.js": `
const aggregate = new AggregateError(
[new Error('Error 1'), new Error('Error 2')],
'Aggregate error message.'
);
throw aggregate;
`,
});
await using proc = Bun.spawn({
cmd: [bunExe(), "test.js"],
env: bunEnv,
cwd: String(dir),
stderr: "pipe",
stdout: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
// The output should contain the AggregateError message
expect(stderr).toContain("AggregateError");
expect(stderr).toContain("Aggregate error message.");
// The output should contain "[errors]" label
expect(stderr).toContain("[errors]");
expect(stderr).toContain("Error 1");
expect(stderr).toContain("Error 2");
expect(exitCode).not.toBe(0); // throw causes non-zero exit
});
test("AggregateError with cause should display [cause] label", async () => {
using dir = tempDir("aggregate-error-cause-test", {
"test.js": `
const aggregate = new AggregateError(
[new Error('Error 1')],
'Aggregate error message.',
{ cause: new Error('Cause') }
);
throw aggregate;
`,
});
await using proc = Bun.spawn({
cmd: [bunExe(), "test.js"],
env: bunEnv,
cwd: String(dir),
stderr: "pipe",
stdout: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
// The output should contain [cause] label
expect(stderr).toContain("[cause]");
expect(stderr).toContain("Cause");
// The output should contain [errors] label
expect(stderr).toContain("[errors]");
expect(stderr).toContain("Error 1");
expect(exitCode).not.toBe(0); // throw causes non-zero exit
});
test("Nested Error.cause chain should display properly", async () => {
using dir = tempDir("nested-cause-test", {
"test.js": `
const err3 = new Error("Third level");
const err2 = new Error("Second level", { cause: err3 });
const err1 = new Error("First level", { cause: err2 });
console.error(err1);
`,
});
await using proc = Bun.spawn({
cmd: [bunExe(), "test.js"],
env: bunEnv,
cwd: String(dir),
stderr: "pipe",
stdout: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
// Should show cause labels for nested errors
expect(stderr).toContain("First level");
expect(stderr).toContain("Second level");
expect(stderr).toContain("Third level");
// Should have multiple [cause] labels
const causeMatches = stderr.match(/\[cause\]/g);
expect(causeMatches).not.toBeNull();
expect(causeMatches!.length).toBeGreaterThanOrEqual(2);
expect(exitCode).toBe(0);
});


@@ -0,0 +1,89 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe, isWindows, tempDir } from "harness";
// https://github.com/oven-sh/bun/issues/25628
// Bug: Lazy code-splitting chunks are not accessible via frontend.files in fullstack builds
// when using --splitting with --compile. The chunks are physically written to disk and embedded
// in the executable, but they're filtered out when accessing the embedded files array.
test("lazy chunks from code splitting should appear in frontend.files", { timeout: 60000 }, async () => {
using dir = tempDir("issue-25628", {
// Server entry that prints frontend.files and exits
"server.ts": `
import frontend from "./client.html";
// Get all file paths from frontend.files
const filePaths = frontend.files?.map((f: any) => f.path) ?? [];
// Count the number of chunk files (lazy chunks are named chunk-xxx.js)
const chunkCount = filePaths.filter((p: string) =>
p.includes("chunk-")
).length;
// There should be at least 2 chunks:
// 1. The main app entry chunk
// 2. The lazy-loaded chunk from the dynamic import
console.log("CHUNK_COUNT:" + chunkCount);
console.log("FILES:" + filePaths.join(","));
// Exit immediately after printing
process.exit(0);
`,
"client.html": `<!DOCTYPE html>
<html>
<head>
<script type="module" src="./main.js"></script>
</head>
<body></body>
</html>`,
"main.js": `
// Dynamic import creates a lazy chunk
const lazyMod = () => import("./lazy.js");
lazyMod().then(m => m.hello());
`,
"lazy.js": `
export function hello() {
console.log("Hello from lazy module!");
}
`,
});
// Build with splitting and compile
await using buildProc = Bun.spawn({
cmd: [bunExe(), "build", "--compile", "server.ts", "--splitting", "--outfile", "server"],
cwd: String(dir),
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [buildStdout, buildStderr, buildExitCode] = await Promise.all([
buildProc.stdout.text(),
buildProc.stderr.text(),
buildProc.exited,
]);
expect(buildStderr).not.toContain("error:");
expect(buildExitCode).toBe(0);
// Run the compiled executable
const serverPath = isWindows ? "server.exe" : "./server";
await using runProc = Bun.spawn({
cmd: [serverPath],
cwd: String(dir),
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [runStdout, runStderr, runExitCode] = await Promise.all([
runProc.stdout.text(),
runProc.stderr.text(),
runProc.exited,
]);
// There should be at least 2 chunk files in frontend.files:
// one for the main entry and one for the lazy-loaded module
expect(runStdout).toMatch(/CHUNK_COUNT:[2-9]/);
expect(runExitCode).toBe(0);
});


@@ -0,0 +1,224 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe, tempDir } from "harness";
// https://github.com/oven-sh/bun/issues/25648
// Named function expression names should be renamed when they shadow an outer symbol
// that's referenced inside the function body. This prevents infinite recursion.
test("named function expression should be renamed when shadowing outer symbol", async () => {
using dir = tempDir("issue-25648", {
"lib.ts": `
export function get(x: number) {
return x * 2;
}
export function doSomething(fn: () => number) {
return fn();
}
`,
"index.ts": `
import * as $ from './lib';
export function test() {
return $.doSomething(function get() {
return $.get(123); // This should reference the outer get, not the function expression
});
}
console.log(test());
`,
});
// Bundle and run the code
await using buildProc = Bun.spawn({
cmd: [bunExe(), "build", "index.ts", "--bundle", "--outfile=out.js"],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [buildStdout, buildStderr, buildExitCode] = await Promise.all([
buildProc.stdout.text(),
buildProc.stderr.text(),
buildProc.exited,
]);
expect(buildStderr).toBe("");
expect(buildExitCode).toBe(0);
// Run the bundled output
await using runProc = Bun.spawn({
cmd: [bunExe(), "out.js"],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [runStdout, runStderr, runExitCode] = await Promise.all([
runProc.stdout.text(),
runProc.stderr.text(),
runProc.exited,
]);
// Should print 246 (123 * 2), NOT cause infinite recursion
expect(runStdout.trim()).toBe("246");
expect(runStderr).toBe("");
expect(runExitCode).toBe(0);
});
test("named function expression with namespace import should not cause infinite recursion", async () => {
using dir = tempDir("issue-25648-2", {
"svelte-mock.ts": `
export function get<T>(store: { value: T }): T {
return store.value;
}
export function set<T>(store: { value: T }, value: T) {
store.value = value;
}
export function bind_value(
element: HTMLElement,
get_fn: () => string,
set_fn: (value: string) => void
) {
return get_fn();
}
`,
"index.ts": `
import * as $ from './svelte-mock';
const query = { value: "hello" };
// This pattern is generated by the Svelte compiler in dev mode
const result = $.bind_value(
{} as HTMLElement,
function get() {
return $.get(query); // Should call outer $.get, not this function
},
function set($$value: string) {
$.set(query, $$value);
}
);
console.log(result);
`,
});
// Bundle and run the code
await using buildProc = Bun.spawn({
cmd: [bunExe(), "build", "index.ts", "--bundle", "--outfile=out.js"],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [buildStdout, buildStderr, buildExitCode] = await Promise.all([
buildProc.stdout.text(),
buildProc.stderr.text(),
buildProc.exited,
]);
expect(buildStderr).toBe("");
expect(buildExitCode).toBe(0);
// Run the bundled output
await using runProc = Bun.spawn({
cmd: [bunExe(), "out.js"],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [runStdout, runStderr, runExitCode] = await Promise.all([
runProc.stdout.text(),
runProc.stderr.text(),
runProc.exited,
]);
// Should print "hello", NOT cause "Maximum call stack size exceeded"
expect(runStdout.trim()).toBe("hello");
expect(runStderr).toBe("");
expect(runExitCode).toBe(0);
});
test("class expression name should be renamed when shadowing outer symbol", async () => {
using dir = tempDir("issue-25648-3", {
"lib.ts": `
export class Foo {
value = 42;
}
export function makeThing<T>(cls: new () => T): T {
return new cls();
}
`,
"index.ts": `
import * as $ from './lib';
export function test() {
return $.makeThing(class Foo extends $.Foo {
getValue() {
return this.value;
}
// Self-reference: uses the inner class name Foo
static create() {
return new Foo();
}
clone() {
return new Foo();
}
});
}
const instance = test();
console.log(instance.getValue());
// Test self-referencing static method
console.log((instance.constructor as any).create().getValue());
// Test self-referencing instance method
console.log(instance.clone().getValue());
`,
});
// Bundle and run the code
await using buildProc = Bun.spawn({
cmd: [bunExe(), "build", "index.ts", "--bundle", "--outfile=out.js"],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [buildStdout, buildStderr, buildExitCode] = await Promise.all([
buildProc.stdout.text(),
buildProc.stderr.text(),
buildProc.exited,
]);
expect(buildStderr).toBe("");
expect(buildExitCode).toBe(0);
// Run the bundled output
await using runProc = Bun.spawn({
cmd: [bunExe(), "out.js"],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [runStdout, runStderr, runExitCode] = await Promise.all([
runProc.stdout.text(),
runProc.stderr.text(),
runProc.exited,
]);
// Should print 42 three times (getValue, static create().getValue, clone().getValue)
expect(runStdout.trim()).toBe("42\n42\n42");
expect(runStderr).toBe("");
expect(runExitCode).toBe(0);
});


@@ -0,0 +1,106 @@
import { expect, test } from "bun:test";
import { tempDir } from "harness";
// Regression test for https://github.com/oven-sh/bun/issues/25785
// CSS logical border-radius properties were being silently dropped
test("CSS bundler should preserve logical border-radius properties", async () => {
using dir = tempDir("issue-25785", {
"test.css": `
.test1 {
border-start-start-radius: 0.75rem;
}
.test2 {
border-end-start-radius: 0.75rem;
}
.test3 {
border-start-end-radius: 0.75rem;
}
.test4 {
border-end-end-radius: 0.75rem;
}
.test5 {
border-top-left-radius: 0.75rem;
}
`,
});
const result = await Bun.build({
entrypoints: [`${dir}/test.css`],
outdir: `${dir}/dist`,
experimentalCss: true,
minify: false,
});
expect(result.success).toBe(true);
expect(result.outputs.length).toBe(1);
const output = await result.outputs[0].text();
// Logical properties are compiled to physical properties with LTR/RTL rules
// .test1 with border-start-start-radius compiles to border-top-left-radius (LTR) and border-top-right-radius (RTL)
expect(output).toContain(".test1");
expect(output).toContain("border-top-left-radius");
expect(output).toContain("border-top-right-radius");
// .test2 with border-end-start-radius compiles to border-bottom-left-radius (LTR) and border-bottom-right-radius (RTL)
expect(output).toContain(".test2");
expect(output).toContain("border-bottom-left-radius");
expect(output).toContain("border-bottom-right-radius");
// .test3 with border-start-end-radius
expect(output).toContain(".test3");
// .test4 with border-end-end-radius
expect(output).toContain(".test4");
// Physical property should also be preserved
expect(output).toContain(".test5");
});
test("CSS bundler should handle logical border-radius with targets that compile logical properties", async () => {
using dir = tempDir("issue-25785-compiled", {
"test.css": `
.test1 {
border-start-start-radius: 0.75rem;
}
.test2 {
border-end-start-radius: 0.75rem;
}
.test3 {
border-start-end-radius: 0.75rem;
}
.test4 {
border-end-end-radius: 0.75rem;
}
`,
});
const result = await Bun.build({
entrypoints: [`${dir}/test.css`],
outdir: `${dir}/dist`,
experimentalCss: true,
minify: false,
// Use the browser target; logical properties may be compiled down to physical equivalents
target: "browser",
});
expect(result.success).toBe(true);
expect(result.outputs.length).toBe(1);
const output = await result.outputs[0].text();
// When logical properties are compiled down, they should produce physical properties
// with :lang() selectors to handle LTR/RTL
// At minimum, the output should NOT be empty (the bug caused empty output)
expect(output.trim().length).toBeGreaterThan(0);
// Should have some border-radius output (compiled to physical)
expect(output).toMatch(/border-.*-radius/);
// All classes should be present in the output
expect(output).toContain(".test1");
expect(output).toContain(".test2");
expect(output).toContain(".test3");
expect(output).toContain(".test4");
});


@@ -0,0 +1,169 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe, tempDir } from "harness";
test("ls -l shows long listing format", async () => {
// Create temp directory with test files
using dir = tempDir("ls-long-listing", {
"file.txt": "hello world",
"script.sh": "#!/bin/bash\necho hello",
subdir: {
"nested.txt": "nested content",
},
});
// Run ls -l in the temp directory
await using proc = Bun.spawn({
cmd: [
bunExe(),
"-e",
`
import { $ } from "bun";
$.cwd("${String(dir).replace(/\\/g, "\\\\")}");
const result = await $\`ls -l\`.text();
console.log(result);
`,
],
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
// Verify no errors on stderr
expect(stderr).toBe("");
// Should show permission string (starts with - or d, followed by rwx/sStT permissions)
// Format: -rw-r--r-- 1 uid gid size date name
expect(stdout).toMatch(/^[-dlbcps][-rwxsStT]{9}/m); // Permission string pattern
expect(stdout).toContain("file.txt");
expect(stdout).toContain("script.sh");
expect(stdout).toContain("subdir");
// Verify that it's actually showing long format (contains size and date info)
// Long format has at least permissions, link count, uid, gid, size, date, name
const lines = stdout
.trim()
.split("\n")
.filter(line => line.includes("file.txt"));
expect(lines.length).toBeGreaterThan(0);
// Each line should have multiple space-separated fields
const fileLine = lines[0];
const fields = fileLine.trim().split(/\s+/);
expect(fields.length).toBeGreaterThanOrEqual(7); // perms, nlink, uid, gid, size, date fields, name
expect(exitCode).toBe(0);
});
test("ls without -l shows short format", async () => {
using dir = tempDir("ls-short-listing", {
"file1.txt": "content1",
"file2.txt": "content2",
});
await using proc = Bun.spawn({
cmd: [
bunExe(),
"-e",
`
import { $ } from "bun";
$.cwd("${String(dir).replace(/\\/g, "\\\\")}");
const result = await $\`ls\`.text();
console.log(result);
`,
],
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
// Verify no errors on stderr
expect(stderr).toBe("");
// Short format should just show filenames, not permission strings
expect(stdout).not.toMatch(/^[-dlbcps][-rwxsStT]{9}/m);
expect(stdout).toContain("file1.txt");
expect(stdout).toContain("file2.txt");
expect(exitCode).toBe(0);
});
test("ls -al shows hidden files in long format", async () => {
using dir = tempDir("ls-all-long", {
".hidden": "hidden content",
"visible.txt": "visible content",
});
await using proc = Bun.spawn({
cmd: [
bunExe(),
"-e",
`
import { $ } from "bun";
$.cwd("${String(dir).replace(/\\/g, "\\\\")}");
const result = await $\`ls -al\`.text();
console.log(result);
`,
],
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
// Verify no errors on stderr
expect(stderr).toBe("");
// Should show hidden files
expect(stdout).toContain(".hidden");
expect(stdout).toContain("visible.txt");
// Should also show . and .. entries
expect(stdout).toMatch(/^d[-rwxsStT]{9}.*\s\.$/m); // . directory
expect(stdout).toMatch(/^d[-rwxsStT]{9}.*\s\.\.$/m); // .. directory
// Should be in long format
expect(stdout).toMatch(/^[-dlbcps][-rwxsStT]{9}/m);
expect(exitCode).toBe(0);
});
test("ls -l shows directory type indicator", async () => {
using dir = tempDir("ls-dir-type", {
"regular-file.txt": "content",
subdir: {
"nested.txt": "nested",
},
});
await using proc = Bun.spawn({
cmd: [
bunExe(),
"-e",
`
import { $ } from "bun";
$.cwd("${String(dir).replace(/\\/g, "\\\\")}");
const result = await $\`ls -l\`.text();
console.log(result);
`,
],
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
// Verify no errors on stderr
expect(stderr).toBe("");
// Directory should start with 'd'
expect(stdout).toMatch(/^d[-rwxsStT]{9}.*subdir$/m);
// Regular file should start with '-'
expect(stdout).toMatch(/^-[-rwxsStT]{9}.*regular-file\.txt$/m);
expect(exitCode).toBe(0);
});


@@ -0,0 +1,133 @@
import { SQL, randomUUIDv7 } from "bun";
import { beforeEach, expect, test } from "bun:test";
import { describeWithContainer } from "harness";
describeWithContainer(
"mysql",
{
image: "mysql_plain",
env: {},
args: [],
},
container => {
const getOptions = () => ({
url: `mysql://root@${container.host}:${container.port}/bun_sql_test`,
max: 1,
bigint: true,
});
beforeEach(async () => {
await container.ready;
});
// Regression test for https://github.com/oven-sh/bun/issues/26030
// Bun hangs when executing multiple sequential MySQL transactions in a loop where:
// 1. An INSERT is awaited inside the transaction callback
// 2. A SELECT query (e.g., SELECT LAST_INSERT_ID()) is returned as an array without being awaited
test("Sequential transactions with INSERT and returned SELECT should not hang", async () => {
await using sql = new SQL(getOptions());
const random_name = ("t_" + randomUUIDv7("hex").replaceAll("-", "")).toLowerCase();
// Create a table similar to the reproduction case
await sql`CREATE TABLE IF NOT EXISTS ${sql(random_name)} (
id INT AUTO_INCREMENT PRIMARY KEY,
contract_name VARCHAR(255),
amount INT
)`;
try {
const rows = [
{ contract_name: "Contract A", amount: 100000 },
{ contract_name: "Contract B", amount: 200000 },
{ contract_name: "Contract C", amount: 300000 },
];
const contractIds: number[] = [];
for (const row of rows) {
// This is the pattern from the bug report:
// - INSERT is awaited
// - SELECT LAST_INSERT_ID() is returned as array (not awaited individually)
const [[result]] = await sql.begin(async tx => {
await tx`
INSERT INTO ${sql(random_name)} (contract_name, amount)
VALUES (${row.contract_name}, ${row.amount})
`;
// Return array with non-awaited query - this triggers the hang
return [tx`SELECT LAST_INSERT_ID() as id`];
});
contractIds.push(Number(result.id));
}
// Verify all transactions completed
expect(contractIds.length).toBe(3);
expect(contractIds[0]).toBe(1);
expect(contractIds[1]).toBe(2);
expect(contractIds[2]).toBe(3);
// Verify data in database
const count = await sql`SELECT COUNT(*) as count FROM ${sql(random_name)}`;
expect(Number(count[0].count)).toBe(3);
} finally {
await sql`DROP TABLE IF EXISTS ${sql(random_name)}`;
}
});
test("Sequential transactions with returned array of multiple queries", async () => {
await using sql = new SQL(getOptions());
const random_name = ("t_" + randomUUIDv7("hex").replaceAll("-", "")).toLowerCase();
await sql`CREATE TABLE IF NOT EXISTS ${sql(random_name)} (
id INT AUTO_INCREMENT PRIMARY KEY,
value INT
)`;
try {
for (let i = 0; i < 3; i++) {
const results = await sql.begin(async tx => {
await tx`INSERT INTO ${sql(random_name)} (value) VALUES (${i * 10})`;
// Return multiple queries as array
return [tx`SELECT LAST_INSERT_ID() as id`, tx`SELECT COUNT(*) as count FROM ${sql(random_name)}`];
});
expect(results.length).toBe(2);
}
const count = await sql`SELECT COUNT(*) as count FROM ${sql(random_name)}`;
expect(Number(count[0].count)).toBe(3);
} finally {
await sql`DROP TABLE IF EXISTS ${sql(random_name)}`;
}
});
test("Many sequential transactions with awaited INSERT and returned SELECT", async () => {
await using sql = new SQL(getOptions());
const random_name = ("t_" + randomUUIDv7("hex").replaceAll("-", "")).toLowerCase();
await sql`CREATE TABLE IF NOT EXISTS ${sql(random_name)} (
id INT AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(255)
)`;
try {
// Multiple sequential transactions with awaited INSERT and returned SELECT
for (let i = 0; i < 5; i++) {
const [[result]] = await sql.begin(async tx => {
// First insert
await tx`INSERT INTO ${sql(random_name)} (name) VALUES (${"item_" + i})`;
// Return array with SELECT
return [tx`SELECT LAST_INSERT_ID() as id`];
});
expect(Number(result.id)).toBe(i + 1);
}
const count = await sql`SELECT COUNT(*) as count FROM ${sql(random_name)}`;
expect(Number(count[0].count)).toBe(5);
} finally {
await sql`DROP TABLE IF EXISTS ${sql(random_name)}`;
}
});
},
);


@@ -0,0 +1,140 @@
import { SQL, randomUUIDv7 } from "bun";
import { afterAll, beforeAll, expect, test } from "bun:test";
import { describeWithContainer, isDockerEnabled } from "harness";
// Regression test for https://github.com/oven-sh/bun/issues/26063
// MySQL VARCHAR columns with binary collations (like utf8mb4_bin) were incorrectly
// returned as Buffer instead of string since version 1.3.6.
if (isDockerEnabled()) {
describeWithContainer(
"issue #26063: VARCHAR with binary collation returns Buffer instead of string",
{
image: "mysql_plain",
concurrent: true,
},
container => {
let sql: SQL;
beforeAll(async () => {
await container.ready;
sql = new SQL({
url: `mysql://root@${container.host}:${container.port}/bun_sql_test`,
max: 1,
});
});
afterAll(async () => {
await sql.close();
});
test("VARCHAR with utf8mb4_bin collation should return string (binary protocol)", async () => {
const tableName = "test_" + randomUUIDv7("hex").replaceAll("-", "");
await sql`
CREATE TEMPORARY TABLE ${sql(tableName)} (
id VARCHAR(32) COLLATE utf8mb4_bin NOT NULL,
PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARACTER SET=utf8mb4 COLLATE=utf8mb4_unicode_ci
`;
await sql`INSERT INTO ${sql(tableName)} ${sql([{ id: "1" }, { id: "2" }])}`;
const result = await sql`SELECT * FROM ${sql(tableName)}`;
// Should return strings, not Buffers
expect(typeof result[0].id).toBe("string");
expect(typeof result[1].id).toBe("string");
expect(result[0].id).toBe("1");
expect(result[1].id).toBe("2");
});
test("VARCHAR with utf8mb4_bin collation should return string (text protocol)", async () => {
const tableName = "test_" + randomUUIDv7("hex").replaceAll("-", "");
await sql`
CREATE TEMPORARY TABLE ${sql(tableName)} (
id VARCHAR(32) COLLATE utf8mb4_bin NOT NULL,
PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARACTER SET=utf8mb4 COLLATE=utf8mb4_unicode_ci
`;
await sql`INSERT INTO ${sql(tableName)} ${sql([{ id: "1" }, { id: "2" }])}`;
// Use .simple() to force text protocol
const result = await sql`SELECT * FROM ${sql(tableName)}`.simple();
// Should return strings, not Buffers
expect(typeof result[0].id).toBe("string");
expect(typeof result[1].id).toBe("string");
expect(result[0].id).toBe("1");
expect(result[1].id).toBe("2");
});
test("CHAR with utf8mb4_bin collation should return string", async () => {
const tableName = "test_" + randomUUIDv7("hex").replaceAll("-", "");
await sql`
CREATE TEMPORARY TABLE ${sql(tableName)} (
code CHAR(10) COLLATE utf8mb4_bin NOT NULL
)
`;
await sql`INSERT INTO ${sql(tableName)} VALUES (${"ABC"})`;
const result = await sql`SELECT * FROM ${sql(tableName)}`;
const resultSimple = await sql`SELECT * FROM ${sql(tableName)}`.simple();
// Should return strings, not Buffers
expect(typeof result[0].code).toBe("string");
expect(typeof resultSimple[0].code).toBe("string");
});
test("TEXT with utf8mb4_bin collation should return string", async () => {
const tableName = "test_" + randomUUIDv7("hex").replaceAll("-", "");
await sql`
CREATE TEMPORARY TABLE ${sql(tableName)} (
content TEXT COLLATE utf8mb4_bin
)
`;
await sql`INSERT INTO ${sql(tableName)} VALUES (${"Hello, World!"})`;
const result = await sql`SELECT * FROM ${sql(tableName)}`;
const resultSimple = await sql`SELECT * FROM ${sql(tableName)}`.simple();
// Should return strings, not Buffers
expect(typeof result[0].content).toBe("string");
expect(result[0].content).toBe("Hello, World!");
expect(typeof resultSimple[0].content).toBe("string");
expect(resultSimple[0].content).toBe("Hello, World!");
});
test("true BINARY/VARBINARY columns should still return Buffer", async () => {
const tableName = "test_" + randomUUIDv7("hex").replaceAll("-", "");
await sql`
CREATE TEMPORARY TABLE ${sql(tableName)} (
a BINARY(4),
b VARBINARY(10),
c BLOB
)
`;
await sql`INSERT INTO ${sql(tableName)} VALUES (${Buffer.from([1, 2, 3, 4])}, ${Buffer.from([5, 6])}, ${Buffer.from([7, 8, 9])})`;
const result = await sql`SELECT * FROM ${sql(tableName)}`;
const resultSimple = await sql`SELECT * FROM ${sql(tableName)}`.simple();
// True binary types should return Buffers
expect(Buffer.isBuffer(result[0].a)).toBe(true);
expect(Buffer.isBuffer(result[0].b)).toBe(true);
expect(Buffer.isBuffer(result[0].c)).toBe(true);
expect(Buffer.isBuffer(resultSimple[0].a)).toBe(true);
expect(Buffer.isBuffer(resultSimple[0].b)).toBe(true);
expect(Buffer.isBuffer(resultSimple[0].c)).toBe(true);
});
},
);
}


@@ -0,0 +1,111 @@
// https://github.com/oven-sh/bun/issues/3657
// fs.watch on a directory should emit 'change' events for files created after the watch is established
import { describe, expect, test } from "bun:test";
import { isLinux, tempDirWithFiles } from "harness";
import fs from "node:fs";
import path from "node:path";
describe.skipIf(!isLinux)("GitHub Issue #3657", () => {
test("fs.watch on directory emits 'change' events for files created after watch starts", async () => {
const testDir = tempDirWithFiles("issue-3657", {});
const testFile = path.join(testDir, "test.txt");
const events: Array<{ eventType: string; filename: string | null }> = [];
let resolver: () => void;
const promise = new Promise<void>(resolve => {
resolver = resolve;
});
const watcher = fs.watch(testDir, { signal: AbortSignal.timeout(5000) }, (eventType, filename) => {
events.push({ eventType, filename: filename as string | null });
// We expect at least 2 events: one rename (create) and one change (modify)
if (events.length >= 2) {
resolver();
}
});
// Give the watcher time to initialize
await Bun.sleep(100);
// Create the file - should emit 'rename' event
fs.writeFileSync(testFile, "hello");
// Wait a bit for the event to be processed
await Bun.sleep(100);
// Modify the file - should emit 'change' event
fs.appendFileSync(testFile, " world");
try {
await promise;
} finally {
watcher.close();
}
// Verify we got at least one event for "test.txt"
const testFileEvents = events.filter(e => e.filename === "test.txt");
expect(testFileEvents.length).toBeGreaterThanOrEqual(2);
// Verify we got a 'rename' event (file creation)
const renameEvents = testFileEvents.filter(e => e.eventType === "rename");
expect(renameEvents.length).toBeGreaterThanOrEqual(1);
// Verify we got a 'change' event (file modification)
const changeEvents = testFileEvents.filter(e => e.eventType === "change");
expect(changeEvents.length).toBeGreaterThanOrEqual(1);
});
test("fs.watch emits multiple 'change' events for repeated modifications", async () => {
const testDir = tempDirWithFiles("issue-3657-multi", {});
const testFile = path.join(testDir, "multi.txt");
const events: Array<{ eventType: string; filename: string | null }> = [];
let resolver: () => void;
const promise = new Promise<void>(resolve => {
resolver = resolve;
});
const watcher = fs.watch(testDir, { signal: AbortSignal.timeout(5000) }, (eventType, filename) => {
events.push({ eventType, filename: filename as string | null });
// We expect 1 rename (create) + 3 change events = 4 total
if (events.length >= 4) {
resolver();
}
});
// Give the watcher time to initialize
await Bun.sleep(100);
// Create the file - should emit 'rename' event
fs.writeFileSync(testFile, "line1\n");
await Bun.sleep(100);
// Multiple modifications - should emit 'change' events
fs.appendFileSync(testFile, "line2\n");
await Bun.sleep(100);
fs.appendFileSync(testFile, "line3\n");
await Bun.sleep(100);
fs.appendFileSync(testFile, "line4\n");
try {
await promise;
} finally {
watcher.close();
}
// Verify we got events for "multi.txt"
const testFileEvents = events.filter(e => e.filename === "multi.txt");
expect(testFileEvents.length).toBeGreaterThanOrEqual(4);
// Verify we got a 'rename' event (file creation)
const renameEvents = testFileEvents.filter(e => e.eventType === "rename");
expect(renameEvents.length).toBeGreaterThanOrEqual(1);
// Verify we got multiple 'change' events (file modifications)
const changeEvents = testFileEvents.filter(e => e.eventType === "change");
expect(changeEvents.length).toBeGreaterThanOrEqual(3);
});
});