Mirror of https://github.com/oven-sh/bun
Synced 2026-02-16 22:01:47 +00:00

Compare commits: `claude/fix` ... `claude/add` (17 commits)
| Author | SHA1 | Date |
|---|---|---|
|  | 5140a4c3c8 |  |
|  | 0ccfae7469 |  |
|  | 5232420fc9 |  |
|  | 681cd1a630 |  |
|  | 4628b69025 |  |
|  | a33e5a5842 |  |
|  | 025eafa463 |  |
|  | 3d7cfd36b7 |  |
|  | 2097b938c5 |  |
|  | d775d767f0 |  |
|  | 19ff116af6 |  |
|  | 00576b44ee |  |
|  | 6e855311bf |  |
|  | 45670003f9 |  |
|  | 4d0ef56df7 |  |
|  | f419320a7a |  |
|  | 684d9dbfe1 |  |
docs/pm/cli/trace.md (new file, 279 lines)

@@ -0,0 +1,279 @@
# Trace Flag

The `--trace` flag enables structured logging of I/O operations to help understand what your application is doing without manual instrumentation.

## Usage

```bash
bun --trace=trace.jsonl your-script.js
```

This creates a trace file in line-delimited JSON (JSONL) format, where each line is a complete trace event.
## What Gets Traced

### File System Operations (`ns: "fs"`)

**Low-level operations:**

- `open` - File descriptor allocation
- `read` / `write` - Data transfer operations
- `close` - File descriptor release

**High-level operations:**

- `readFile` / `writeFile` - Complete file read/write
- `stat` / `lstat` / `fstat` - File metadata queries
- `mkdir` / `rmdir` - Directory creation/removal
- `unlink` - File deletion
- `rename` - File/directory renaming
- `readdir` - Directory listing

### HTTP Operations (`ns: "fetch"`)

- `request` - HTTP request initiation
- `response` - HTTP response received

### Response Body Consumption (`ns: "response_body"`)

- `text` / `json` / `arrayBuffer` / `blob` - Response body parsing

### Bun.write Operations (`ns: "bun_write"`)

- `write` - File write operations via `Bun.write()`

### Subprocess Operations (`ns: "subprocess"`)

- `spawn` - Process spawned (`Bun.spawn` or `child_process`)
- `exit` - Process exited

## Trace Event Format

Each trace event is a compact JSON array:

```json
[
  "fs",
  1761595797038,
  "readFile",
  { "path": "/path/to/file", "bytes_read": 1234 }
]
```

Array structure:

- `[0]` - Namespace (`fs`, `fetch`, `response_body`, `bun_write`, `subprocess`)
- `[1]` - Timestamp in milliseconds
- `[2]` - Operation name (`readFile`, `spawn`, `request`, etc.)
- `[3]` - Operation-specific data object

Only applicable fields are included in the data object; no null/undefined values.
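Since the positions are fixed, consumers can destructure events into named fields. A small sketch using the sample event above:

```javascript
// Give the positional [ns, ts, call, data] tuple named fields.
function toEvent([ns, ts, call, data]) {
  return { ns, ts, call, data };
}

const raw =
  '["fs",1761595797038,"readFile",{"path":"/path/to/file","bytes_read":1234}]';
const event = toEvent(JSON.parse(raw));
console.log(event.ns, event.call, event.data.bytes_read); // fs readFile 1234
```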
### Why Array Format?

The array format is:

- **40% more compact** than object format
- **Easier to filter** with jq: `select(.[0] == "fs" and .[2] == "writeFile")`
- **Faster to parse** (arrays vs objects with string keys)

## Example: Analyzing a Trace

```bash
# Run your script with tracing
bun --trace=trace.jsonl my-app.js

# View all filesystem operations
cat trace.jsonl | jq 'select(.[0] == "fs")'

# Count operations by type
cat trace.jsonl | jq -r '.[2]' | sort | uniq -c

# Find slow operations (requires calculating deltas)
cat trace.jsonl | jq -c 'select(.[0] == "fetch") | {ts: .[1], url: .[3].url}'

# Total bytes read/written
cat trace.jsonl | jq -s 'map(select(.[0] == "fs")) |
  {
    bytes_read: map(.[3].bytes_read // 0) | add,
    bytes_written: map(.[3].bytes_written // 0) | add
  }'
```

## Example Trace Output

```bash
$ bun --trace=trace.jsonl -e 'import fs from "fs"; fs.writeFileSync("test.txt", "hello"); console.log(fs.readFileSync("test.txt", "utf8"))'
hello
$ cat trace.jsonl
["fs",1761595797038,"writeFile",{"path":"test.txt","length":5}]
["fs",1761595797038,"writeFile",{"path":"test.txt","bytes_written":5}]
["fs",1761595797038,"readFile",{"path":"test.txt","encoding":"utf8"}]
["fs",1761595797039,"readFile",{"path":"test.txt","bytes_read":5,"fast_path":true}]
```

## Common Trace Data Fields

### Filesystem Operations

**Entry traces** (operation starting):

- `call` - Operation name
- `path` - File/directory path
- `fd` - File descriptor (for fd-based ops)
- `flags` - Open flags (for open)
- `mode` - File permissions
- `length` - Buffer/data length
- `offset` - Buffer offset
- `position` - File position (for positioned reads/writes)
- `encoding` - Character encoding (for readFile)
- `recursive` - Recursive flag (for mkdir/rmdir/readdir)

**Exit traces** (operation complete):

- `call` - Operation name (same as entry)
- `path` / `fd` - Identifier (same as entry)
- `bytes_read` - Bytes successfully read
- `bytes_written` - Bytes successfully written
- `size` - File size (for stat)
- `mode` - File mode bits (for stat)
- `entries` - Number of entries (for readdir)
- `success` - Boolean success flag
- `errno` - Error number (on failure)
- `fast_path` - Used optimized path (readFile)
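Because exit traces carry `bytes_read`/`bytes_written`, per-path I/O totals fall out of a simple fold over the parsed events. A sketch (the sample events are hypothetical):

```javascript
// Sum bytes_read and bytes_written per path across fs exit traces.
function ioTotalsByPath(events) {
  const totals = new Map();
  for (const [ns, , , data] of events) {
    if (ns !== "fs" || !data.path) continue;
    const t = totals.get(data.path) ?? { bytes_read: 0, bytes_written: 0 };
    t.bytes_read += data.bytes_read ?? 0;
    t.bytes_written += data.bytes_written ?? 0;
    totals.set(data.path, t);
  }
  return totals;
}

const totals = ioTotalsByPath([
  ["fs", 1, "readFile", { path: "a.txt", bytes_read: 10 }],
  ["fs", 2, "writeFile", { path: "a.txt", bytes_written: 5 }],
]);
console.log(totals.get("a.txt")); // { bytes_read: 10, bytes_written: 5 }
```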
### HTTP Operations

**Request trace:**

- `call: "request"`
- `url` - Request URL
- `method` - HTTP method (GET, POST, etc.)

**Response trace:**

- `call: "response"`
- `url` - Request URL
- `status` - HTTP status (OK, NotFound, etc.)
- `body_size` - Response body size in bytes
- `has_more` - Whether response is streaming
- `err` - Error name (on failure)

**Body consumption trace:**

- `call` - Consumption method (text, json, arrayBuffer, blob)
- `status` - Consumption status (immediate, streaming, already_used)
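Request and response traces share the `url` field, so network activity can be summarized by grouping on it. A sketch over already-parsed events (the sample URL and events are hypothetical):

```javascript
// Count responses and total body bytes per URL from fetch traces.
function summarizeFetch(events) {
  const byUrl = new Map();
  for (const [ns, , call, data] of events) {
    if (ns !== "fetch" || call !== "response") continue;
    const s = byUrl.get(data.url) ?? { responses: 0, body_bytes: 0 };
    s.responses += 1;
    s.body_bytes += data.body_size ?? 0;
    byUrl.set(data.url, s);
  }
  return byUrl;
}

const summary = summarizeFetch([
  ["fetch", 1, "request", { url: "https://example.com", method: "GET" }],
  ["fetch", 5, "response", { url: "https://example.com", status: "OK", body_size: 512 }],
]);
console.log(summary.get("https://example.com")); // { responses: 1, body_bytes: 512 }
```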
### Subprocess Operations

**Spawn trace:**

- `call: "spawn"`
- `cmd` - Command being executed
- `args_count` - Number of arguments (including the command)
- `cwd` - Working directory
- `env_count` - Number of environment variables
- `pid` - Process ID

**Exit trace:**

- `call: "exit"`
- `pid` - Process ID
- `exit_code` - Exit code (on normal exit)
- `signal` - Signal number (if killed by signal)
- `errno` - Error number (on spawn failure)
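Spawn and exit traces for the same process share the `pid` field, so they can be joined to compute per-process lifetimes. A sketch (sample events hypothetical):

```javascript
// Join subprocess spawn/exit traces on pid to get per-process lifetimes.
function processLifetimes(events) {
  const spawns = new Map();
  const lifetimes = [];
  for (const [ns, ts, call, data] of events) {
    if (ns !== "subprocess") continue;
    if (call === "spawn") {
      spawns.set(data.pid, { cmd: data.cmd, start: ts });
    } else if (call === "exit" && spawns.has(data.pid)) {
      const s = spawns.get(data.pid);
      lifetimes.push({
        pid: data.pid,
        cmd: s.cmd,
        duration_ms: ts - s.start,
        exit_code: data.exit_code,
      });
    }
  }
  return lifetimes;
}

const lifetimes = processLifetimes([
  ["subprocess", 100, "spawn", { cmd: "ls", args_count: 1, pid: 42 }],
  ["subprocess", 130, "exit", { pid: 42, exit_code: 0 }],
]);
console.log(lifetimes[0]); // { pid: 42, cmd: "ls", duration_ms: 30, exit_code: 0 }
```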
## Use Cases

### Debugging I/O Performance

Find which operations are slow by analyzing timestamps:

```javascript
const traces = require("fs")
  .readFileSync("trace.jsonl", "utf8")
  .trim()
  .split("\n")
  .map(line => JSON.parse(line));

// Group events by operation and target, tracking first/last timestamps.
// Each trace event is a [ns, ts, call, data] array, not an object.
const ops = new Map();
traces.forEach(([ns, ts, call, data]) => {
  const key = `${ns}.${call}.${data.path || data.url || data.fd}`;
  if (!ops.has(key)) {
    ops.set(key, { start: ts, events: [] });
  }
  const op = ops.get(key);
  op.events.push({ ns, ts, call, data });
  op.end = ts;
});

// Show slowest operations
Array.from(ops.entries())
  .map(([k, v]) => ({ op: k, duration: v.end - v.start }))
  .sort((a, b) => b.duration - a.duration)
  .slice(0, 10)
  .forEach(x => console.log(`${x.op}: ${x.duration}ms`));
```
### Understanding Application Behavior

AI agents can analyze traces to understand:

- What files the application reads/writes
- What external services it contacts
- Order of operations
- Error patterns
- Resource usage (bytes read/written, number of requests)

### Testing and Validation

Verify your application's I/O behavior:

```javascript
const assert = require("assert");

// Check that the app only writes to the temp directory.
// getTraces() is a helper that returns parsed [ns, ts, call, data] events.
const traces = getTraces();
const writes = traces.filter(
  ([ns, , call]) => ns === "fs" && ["write", "writeFile"].includes(call),
);
writes.forEach(([, , , data]) => {
  assert(data.path.startsWith("/tmp"), `Unexpected write to ${data.path}`);
});
```
## Performance Impact

Tracing has **minimal overhead** when disabled (the default):

- Single boolean check per instrumentation point
- No file I/O, JSON serialization, or string allocation

When enabled, tracing adds:

- JSON serialization overhead per event
- Mutex locking for thread-safe file writes
- Buffered I/O to the trace file

For CPU-bound applications, overhead is negligible. For I/O-bound applications with high operation counts (thousands per second), expect 1-5% overhead.

## Limitations

- Trace file uses append-only writes (no rotation)
- Large traces can consume significant disk space
- No built-in filtering (logs all operations in traced namespaces)
- Timestamps are system time (not monotonic/high-resolution)
- No support for trace compression
- No built-in trace analysis tools (use jq, custom scripts, or tools like Jaeger)

## Implementation Details

The trace implementation is integrated into Bun's core I/O layer:

- `src/output.zig` - Core tracing infrastructure
- `src/cli/Arguments.zig` - CLI flag parsing
- `src/bun.js/node/node_fs.zig` - Filesystem operation tracing
- `src/bun.js/webcore/fetch.zig` - HTTP request/response tracing
- `src/bun.js/webcore/Body.zig` - Response body tracing
- `src/bun.js/webcore/blob/write_file.zig` - Bun.write tracing
- `src/bun.js/api/bun/subprocess.zig` - Subprocess spawn/exit tracing

Each instrumentation point checks `Output.trace_enabled` before tracing, ensuring zero overhead when disabled.
`src/bun.js/api/bun/subprocess.zig`:

```diff
@@ -2,6 +2,14 @@ const pid_t = if (Environment.isPosix) std.posix.pid_t else uv.uv_pid_t;
 const fd_t = if (Environment.isPosix) std.posix.fd_t else i32;
 const log = bun.Output.scoped(.PROCESS, .visible);
 
+// Tracing helper for subprocess operations
+inline fn traceSubprocess(args: anytype) void {
+    if (Output.trace_enabled) {
+        const tracer = Output.tracer("subprocess");
+        tracer.trace(args);
+    }
+}
+
 const win_rusage = struct {
     utime: struct {
         sec: i64 = 0,
```

```diff
@@ -211,6 +219,36 @@ pub const Process = struct {
     const exit_handler = this.exit_handler;
     this.status = status;
 
+    // Trace subprocess exit
+    if (Output.trace_enabled) {
+        switch (status) {
+            .exited => |exited| {
+                traceSubprocess(.{
+                    .call = "exit",
+                    .pid = this.pid,
+                    .exit_code = exited.code,
+                    .signal = if (exited.signal != @as(bun.SignalCode, @enumFromInt(0))) @intFromEnum(exited.signal) else null,
+                });
+            },
+            .signaled => |sig| {
+                traceSubprocess(.{
+                    .call = "exit",
+                    .pid = this.pid,
+                    .exit_code = null,
+                    .signal = @intFromEnum(sig),
+                });
+            },
+            .err => |err| {
+                traceSubprocess(.{
+                    .call = "exit",
+                    .pid = this.pid,
+                    .errno = err.errno,
+                });
+            },
+            .running => {}, // Don't trace if still running
+        }
+    }
+
     if (this.hasExited()) {
         this.detach();
     }
```

```diff
@@ -1501,6 +1539,25 @@ pub fn spawnProcessPosix(
     spawned.extra_pipes = extra_fds;
     extra_fds = std.array_list.Managed(bun.FileDescriptor).init(bun.default_allocator);
 
+    // Trace subprocess spawn
+    if (Output.trace_enabled) {
+        const cmd = bun.span(argv[0] orelse "");
+        // Count argv and envp
+        var args_count: usize = 0;
+        while (argv[args_count]) |_| : (args_count += 1) {}
+        var env_count: usize = 0;
+        while (envp[env_count]) |_| : (env_count += 1) {}
+
+        traceSubprocess(.{
+            .call = "spawn",
+            .cmd = cmd,
+            .args_count = args_count,
+            .cwd = options.cwd,
+            .env_count = env_count,
+            .pid = pid,
+        });
+    }
+
     if (comptime Environment.isLinux) {
         // If it's spawnSync and we want to block the entire thread
         // don't even bother with pidfd. It's not necessary.
```

```diff
@@ -1757,6 +1814,25 @@ pub fn spawnProcessWindows(
     process.pid = process.poller.uv.pid;
     bun.assert(process.poller.uv.exit_cb == &Process.onExitUV);
 
+    // Trace subprocess spawn
+    if (Output.trace_enabled) {
+        const cmd = bun.span(argv[0] orelse "");
+        // Count argv and envp
+        var args_count: usize = 0;
+        while (argv[args_count]) |_| : (args_count += 1) {}
+        var env_count: usize = 0;
+        while (envp[env_count]) |_| : (env_count += 1) {}
+
+        traceSubprocess(.{
+            .call = "spawn",
+            .cmd = cmd,
+            .args_count = args_count,
+            .cwd = options.cwd,
+            .env_count = env_count,
+            .pid = process.pid,
+        });
+    }
+
     var result = WindowsSpawnResult{
         .process_ = process,
         .extra_pipes = try std.array_list.Managed(WindowsSpawnResult.StdioResult).initCapacity(bun.default_allocator, options.extra_fds.len),
```
`src/bun.js/node/node_fs.zig`:

```diff
@@ -3389,14 +3389,19 @@ pub const NodeFS = struct {
     }
 
     pub fn close(_: *NodeFS, args: Arguments.Close, _: Flavor) Maybe(Return.Close) {
-        return if (args.fd.closeAllowingBadFileDescriptor(null)) |err|
-            .{ .err = err }
-        else
-            .success;
+        const result = args.fd.closeAllowingBadFileDescriptor(null);
+        if (result) |err| {
+            traceFS(.{ .call = "close", .fd = args.fd.cast(), .errno = err.errno });
+            return .{ .err = err };
+        } else {
+            traceFS(.{ .call = "close", .fd = args.fd.cast() });
+            return .success;
+        }
     }
 
     pub fn uv_close(_: *NodeFS, args: Arguments.Close, rc: i64) Maybe(Return.Close) {
         if (rc < 0) {
+            traceFS(.{ .call = "close", .fd = args.fd.cast(), .errno = @as(bun.sys.Error.Int, @intCast(-rc)) });
             return Maybe(Return.Close){ .err = .{
                 .errno = @intCast(-rc),
                 .syscall = .close,
```

```diff
@@ -3404,6 +3409,7 @@ pub const NodeFS = struct {
                 .from_libuv = true,
             } };
         }
+        traceFS(.{ .call = "close", .fd = args.fd.cast() });
         return .success;
     }
```
```diff
@@ -3809,15 +3815,29 @@ pub const NodeFS = struct {
     }
 
     pub fn fstat(_: *NodeFS, args: Arguments.Fstat, _: Flavor) Maybe(Return.Fstat) {
+        traceFS(.{ .call = "fstat", .fd = args.fd.cast() });
+
         if (Environment.isLinux and Syscall.supports_statx_on_linux.load(.monotonic)) {
             return switch (Syscall.fstatx(args.fd, &.{ .type, .mode, .nlink, .uid, .gid, .atime, .mtime, .ctime, .btime, .ino, .size, .blocks })) {
-                .result => |result| .{ .result = .init(&result, args.big_int) },
-                .err => |err| .{ .err = err },
+                .result => |result| {
+                    traceFS(.{ .call = "fstat", .fd = args.fd.cast(), .size = result.size, .mode = result.mode });
+                    return .{ .result = .init(&result, args.big_int) };
+                },
+                .err => |err| {
+                    traceFS(.{ .call = "fstat", .fd = args.fd.cast(), .errno = err.errno });
+                    return .{ .err = err };
+                },
             };
         } else {
             return switch (Syscall.fstat(args.fd)) {
-                .result => |result| .{ .result = .init(&Syscall.PosixStat.init(&result), args.big_int) },
-                .err => |err| .{ .err = err },
+                .result => |result| {
+                    traceFS(.{ .call = "fstat", .fd = args.fd.cast(), .size = result.size, .mode = result.mode });
+                    return .{ .result = .init(&Syscall.PosixStat.init(&result), args.big_int) };
+                },
+                .err => |err| {
+                    traceFS(.{ .call = "fstat", .fd = args.fd.cast(), .errno = err.errno });
+                    return .{ .err = err };
+                },
             };
         }
     }
```
```diff
@@ -3893,10 +3913,17 @@ pub const NodeFS = struct {
     }
 
     pub fn lstat(this: *NodeFS, args: Arguments.Lstat, _: Flavor) Maybe(Return.Lstat) {
+        const path = args.path.sliceZ(&this.sync_error_buf);
+        traceFS(.{ .call = "lstat", .path = path });
+
         if (Environment.isLinux and Syscall.supports_statx_on_linux.load(.monotonic)) {
-            return switch (Syscall.lstatx(args.path.sliceZ(&this.sync_error_buf), &.{ .type, .mode, .nlink, .uid, .gid, .atime, .mtime, .ctime, .btime, .ino, .size, .blocks })) {
-                .result => |result| Maybe(Return.Lstat){ .result = .{ .stats = .init(&result, args.big_int) } },
+            return switch (Syscall.lstatx(path, &.{ .type, .mode, .nlink, .uid, .gid, .atime, .mtime, .ctime, .btime, .ino, .size, .blocks })) {
+                .result => |result| {
+                    traceFS(.{ .call = "lstat", .path = path, .size = result.size, .mode = result.mode });
+                    return Maybe(Return.Lstat){ .result = .{ .stats = .init(&result, args.big_int) } };
+                },
                 .err => |err| brk: {
+                    traceFS(.{ .call = "lstat", .path = path, .errno = err.errno });
                     if (!args.throw_if_no_entry and err.getErrno() == .NOENT) {
                         return Maybe(Return.Lstat){ .result = .{ .not_found = {} } };
                     }
```

```diff
@@ -3904,9 +3931,13 @@ pub const NodeFS = struct {
                 },
             };
         } else {
-            return switch (Syscall.lstat(args.path.sliceZ(&this.sync_error_buf))) {
-                .result => |result| Maybe(Return.Lstat){ .result = .{ .stats = .init(&Syscall.PosixStat.init(&result), args.big_int) } },
+            return switch (Syscall.lstat(path)) {
+                .result => |result| {
+                    traceFS(.{ .call = "lstat", .path = path, .size = result.size, .mode = result.mode });
+                    return Maybe(Return.Lstat){ .result = .{ .stats = .init(&Syscall.PosixStat.init(&result), args.big_int) } };
+                },
                 .err => |err| brk: {
+                    traceFS(.{ .call = "lstat", .path = path, .errno = err.errno });
                     if (!args.throw_if_no_entry and err.getErrno() == .NOENT) {
                         return Maybe(Return.Lstat){ .result = .{ .not_found = {} } };
                     }
```
```diff
@@ -3928,9 +3959,16 @@ pub const NodeFS = struct {
     // Node doesn't absolute the path so we don't have to either
     pub fn mkdirNonRecursive(this: *NodeFS, args: Arguments.Mkdir) Maybe(Return.Mkdir) {
         const path = args.path.sliceZ(&this.sync_error_buf);
+        traceFS(.{ .call = "mkdir", .path = path, .mode = args.mode, .recursive = false });
         return switch (Syscall.mkdir(path, args.mode)) {
-            .result => Maybe(Return.Mkdir){ .result = .{ .none = {} } },
-            .err => |err| Maybe(Return.Mkdir){ .err = err.withPath(args.path.slice()) },
+            .result => {
+                traceFS(.{ .call = "mkdir", .path = path, .success = true });
+                return Maybe(Return.Mkdir){ .result = .{ .none = {} } };
+            },
+            .err => |err| {
+                traceFS(.{ .call = "mkdir", .path = path, .errno = err.errno });
+                return Maybe(Return.Mkdir){ .err = err.withPath(args.path.slice()) };
+            },
         };
     }
```

```diff
@@ -3942,6 +3980,7 @@ pub const NodeFS = struct {
         const buf = bun.path_buffer_pool.get();
         defer bun.path_buffer_pool.put(buf);
         const path = args.path.osPathKernel32(buf);
+        traceFS(.{ .call = "mkdir", .path = bun.sliceTo(path, 0), .mode = args.mode, .recursive = true });
 
         return switch (args.always_return_none) {
             inline else => |always_return_none| this.mkdirRecursiveOSPathImpl(Ctx, ctx, path, args.mode, !always_return_none),
```
```diff
@@ -4182,17 +4221,24 @@ pub const NodeFS = struct {
         else
             args.path.sliceZ(&this.sync_error_buf);
 
-        return switch (Syscall.open(path, args.flags.asInt(), args.mode)) {
-            .err => |err| .{
-                .err = err.withPath(args.path.slice()),
-            },
-            .result => |fd| .{ .result = fd },
-        };
+        const result = Syscall.open(path, args.flags.asInt(), args.mode);
+
+        switch (result) {
+            .result => |fd| {
+                traceFS(.{ .call = "open", .path = args.path.slice(), .flags = args.flags.asInt(), .mode = args.mode, .fd = fd.cast() });
+                return .{ .result = fd };
+            },
+            .err => |err| {
+                traceFS(.{ .call = "open", .path = args.path.slice(), .flags = args.flags.asInt(), .mode = args.mode, .errno = err.errno });
+                return .{ .err = err.withPath(args.path.slice()) };
+            },
+        }
     }
 
     pub fn uv_open(this: *NodeFS, args: Arguments.Open, rc: i64) Maybe(Return.Open) {
         _ = this;
         if (rc < 0) {
+            traceFS(.{ .call = "open", .path = args.path.slice(), .flags = args.flags.asInt(), .mode = args.mode, .errno = @as(bun.sys.Error.Int, @intCast(-rc)) });
             return Maybe(Return.Open){ .err = .{
                 .errno = @intCast(-rc),
                 .syscall = .open,
```

```diff
@@ -4200,7 +4246,9 @@ pub const NodeFS = struct {
                 .from_libuv = true,
             } };
         }
-        return Maybe(Return.Open).initResult(.fromUV(@intCast(rc)));
+        const fd = bun.FileDescriptor.fromUV(@intCast(rc));
+        traceFS(.{ .call = "open", .path = args.path.slice(), .flags = args.flags.asInt(), .mode = args.mode, .fd = fd.cast() });
+        return Maybe(Return.Open).initResult(fd);
     }
 
     pub fn uv_statfs(_: *NodeFS, args: Arguments.StatFS, req: *uv.fs_t, rc: i64) Maybe(Return.StatFS) {
```
```diff
@@ -4226,12 +4274,19 @@ pub const NodeFS = struct {
         buf = buf[@min(args.offset, buf.len)..];
         buf = buf[0..@min(buf.len, args.length)];
 
-        return switch (Syscall.read(args.fd, buf)) {
-            .err => |err| .{ .err = err },
-            .result => |amt| .{ .result = .{
-                .bytes_read = @as(u52, @truncate(amt)),
-            } },
-        };
+        const result = Syscall.read(args.fd, buf);
+
+        switch (result) {
+            .result => |amt| {
+                const bytes_read = @as(u52, @truncate(amt));
+                traceFS(.{ .call = "read", .fd = args.fd.cast(), .offset = args.offset, .length = args.length, .bytes_read = bytes_read });
+                return .{ .result = .{ .bytes_read = bytes_read } };
+            },
+            .err => |err| {
+                traceFS(.{ .call = "read", .fd = args.fd.cast(), .offset = args.offset, .length = args.length, .errno = err.errno });
+                return .{ .err = err };
+            },
+        }
     }
 
     fn preadInner(_: *NodeFS, args: Arguments.Read) Maybe(Return.Read) {
```

```diff
@@ -4239,16 +4294,23 @@ pub const NodeFS = struct {
         buf = buf[@min(args.offset, buf.len)..];
         buf = buf[0..@min(buf.len, args.length)];
 
-        return switch (Syscall.pread(args.fd, buf, args.position.?)) {
-            .err => |err| .{ .err = .{
-                .errno = err.errno,
-                .fd = args.fd,
-                .syscall = .read,
-            } },
-            .result => |amt| .{ .result = .{
-                .bytes_read = @as(u52, @truncate(amt)),
-            } },
-        };
+        const result = Syscall.pread(args.fd, buf, args.position.?);
+
+        switch (result) {
+            .result => |amt| {
+                const bytes_read = @as(u52, @truncate(amt));
+                traceFS(.{ .call = "read", .fd = args.fd.cast(), .offset = args.offset, .length = args.length, .position = args.position.?, .bytes_read = bytes_read });
+                return .{ .result = .{ .bytes_read = bytes_read } };
+            },
+            .err => |err| {
+                traceFS(.{ .call = "read", .fd = args.fd.cast(), .offset = args.offset, .length = args.length, .position = args.position.?, .errno = err.errno });
+                return .{ .err = .{
+                    .errno = err.errno,
+                    .fd = args.fd,
+                    .syscall = .read,
+                } };
+            },
+        }
     }
 
     pub fn read(this: *NodeFS, args: Arguments.Read, _: Flavor) Maybe(Return.Read) {
```
```diff
@@ -4266,6 +4328,7 @@ pub const NodeFS = struct {
     pub fn uv_read(this: *NodeFS, args: Arguments.Read, rc: i64) Maybe(Return.Read) {
         _ = this;
         if (rc < 0) {
+            traceFS(.{ .call = "read", .fd = args.fd.cast(), .offset = args.offset, .length = args.length, .errno = @as(bun.sys.Error.Int, @intCast(-rc)) });
             return Maybe(Return.Read){ .err = .{
                 .errno = @intCast(-rc),
                 .syscall = .read,
```

```diff
@@ -4273,7 +4336,9 @@ pub const NodeFS = struct {
                 .from_libuv = true,
             } };
         }
-        return Maybe(Return.Read).initResult(.{ .bytes_read = @intCast(rc) });
+        const bytes_read: u52 = @intCast(rc);
+        traceFS(.{ .call = "read", .fd = args.fd.cast(), .offset = args.offset, .length = args.length, .bytes_read = bytes_read });
+        return Maybe(Return.Read).initResult(.{ .bytes_read = bytes_read });
     }
 
     pub fn uv_readv(this: *NodeFS, args: Arguments.Readv, rc: i64) Maybe(Return.Readv) {
```

```diff
@@ -4310,6 +4375,7 @@ pub const NodeFS = struct {
     pub fn uv_write(this: *NodeFS, args: Arguments.Write, rc: i64) Maybe(Return.Write) {
         _ = this;
         if (rc < 0) {
+            traceFS(.{ .call = "write", .fd = args.fd.cast(), .offset = args.offset, .length = args.length, .errno = @as(bun.sys.Error.Int, @intCast(-rc)) });
             return Maybe(Return.Write){ .err = .{
                 .errno = @intCast(-rc),
                 .syscall = .write,
```

```diff
@@ -4317,7 +4383,9 @@ pub const NodeFS = struct {
                 .from_libuv = true,
             } };
         }
-        return Maybe(Return.Write).initResult(.{ .bytes_written = @intCast(rc) });
+        const bytes_written: u52 = @intCast(rc);
+        traceFS(.{ .call = "write", .fd = args.fd.cast(), .offset = args.offset, .length = args.length, .bytes_written = bytes_written });
+        return Maybe(Return.Write).initResult(.{ .bytes_written = bytes_written });
     }
 
     pub fn uv_writev(this: *NodeFS, args: Arguments.Writev, rc: i64) Maybe(Return.Writev) {
```
```diff
@@ -4338,16 +4406,19 @@ pub const NodeFS = struct {
         buf = buf[@min(args.offset, buf.len)..];
         buf = buf[0..@min(buf.len, args.length)];
 
-        return switch (Syscall.write(args.fd, buf)) {
-            .err => |err| .{
-                .err = err,
-            },
-            .result => |amt| .{
-                .result = .{
-                    .bytes_written = @as(u52, @truncate(amt)),
-                },
-            },
-        };
+        const result = Syscall.write(args.fd, buf);
+
+        switch (result) {
+            .result => |amt| {
+                const bytes_written = @as(u52, @truncate(amt));
+                traceFS(.{ .call = "write", .fd = args.fd.cast(), .offset = args.offset, .length = args.length, .bytes_written = bytes_written });
+                return .{ .result = .{ .bytes_written = bytes_written } };
+            },
+            .err => |err| {
+                traceFS(.{ .call = "write", .fd = args.fd.cast(), .offset = args.offset, .length = args.length, .errno = err.errno });
+                return .{ .err = err };
+            },
+        }
     }
 
     fn pwriteInner(_: *NodeFS, args: Arguments.Write) Maybe(Return.Write) {
```

```diff
@@ -4357,16 +4428,23 @@ pub const NodeFS = struct {
         buf = buf[@min(args.offset, buf.len)..];
         buf = buf[0..@min(args.length, buf.len)];
 
-        return switch (Syscall.pwrite(args.fd, buf, position)) {
-            .err => |err| .{ .err = .{
-                .errno = err.errno,
-                .fd = args.fd,
-                .syscall = .write,
-            } },
-            .result => |amt| .{ .result = .{
-                .bytes_written = @as(u52, @truncate(amt)),
-            } },
-        };
+        const result = Syscall.pwrite(args.fd, buf, position);
+
+        switch (result) {
+            .result => |amt| {
+                const bytes_written = @as(u52, @truncate(amt));
+                traceFS(.{ .call = "write", .fd = args.fd.cast(), .offset = args.offset, .length = args.length, .position = position, .bytes_written = bytes_written });
+                return .{ .result = .{ .bytes_written = bytes_written } };
+            },
+            .err => |err| {
+                traceFS(.{ .call = "write", .fd = args.fd.cast(), .offset = args.offset, .length = args.length, .position = position, .errno = err.errno });
+                return .{ .err = .{
+                    .errno = err.errno,
+                    .fd = args.fd,
+                    .syscall = .write,
+                } };
+            },
+        }
     }
 
     fn preadvInner(_: *NodeFS, args: Arguments.Readv) Maybe(Return.Readv) {
```
```diff
@@ -4417,6 +4495,8 @@ pub const NodeFS = struct {
     }
 
     pub fn readdir(this: *NodeFS, args: Arguments.Readdir, comptime flavor: Flavor) Maybe(Return.Readdir) {
+        traceFS(.{ .call = "readdir", .path = args.path.slice(), .recursive = args.recursive });
+
         if (comptime flavor != .sync) {
             if (args.recursive) {
                 @panic("Assertion failure: this code path should never be reached.");
```

```diff
@@ -4431,12 +4511,23 @@ pub const NodeFS = struct {
             },
         };
         return switch (maybe) {
-            .err => |err| .{ .err = .{
-                .syscall = .scandir,
-                .errno = err.errno,
-                .path = args.path.slice(),
-            } },
-            .result => |result| .{ .result = result },
+            .err => |err| {
+                traceFS(.{ .call = "readdir", .path = args.path.slice(), .errno = err.errno });
+                return .{ .err = .{
+                    .syscall = .scandir,
+                    .errno = err.errno,
+                    .path = args.path.slice(),
+                } };
+            },
+            .result => |result| {
+                const count = switch (result) {
+                    .buffers => |buffers| buffers.len,
+                    .with_file_types => |dirents| dirents.len,
+                    .files => |files| files.len,
+                };
+                traceFS(.{ .call = "readdir", .path = args.path.slice(), .entries = count });
+                return .{ .result = result };
+            },
         };
     }
```
```diff
@@ -4954,9 +5045,11 @@ pub const NodeFS = struct {
     const fd_maybe_windows: FileDescriptor = switch (args.path) {
         .path => brk: {
             path = args.path.path.sliceZ(&this.sync_error_buf);
+            traceFS(.{ .call = "readFile", .path = path, .encoding = @tagName(args.encoding) });
 
             if (bun.StandaloneModuleGraph.get()) |graph| {
                 if (graph.find(path)) |file| {
+                    traceFS(.{ .call = "readFile", .path = path, .bytes_read = file.contents.len, .source = "standalone" });
                     if (args.encoding == .buffer) {
                         return .{
                             .result = .{
```

```diff
@@ -5054,6 +5147,7 @@ pub const NodeFS = struct {
     };
 
     if (did_succeed) {
+        traceFS(.{ .call = "readFile", .path = if (args.path == .path) args.path.path.slice() else "<fd>", .bytes_read = temporary_read_buffer.len, .fast_path = true });
         switch (args.encoding) {
             .buffer => {
                 if (comptime flavor == .sync and string_type == .default) {
```

```diff
@@ -5229,6 +5323,7 @@ pub const NodeFS = struct {
     buf.items.len = if (comptime string_type == .null_terminated) total + 1 else total;
     if (total == 0) {
         buf.clearAndFree();
+        traceFS(.{ .call = "readFile", .path = if (args.path == .path) args.path.path.slice() else "<fd>", .bytes_read = 0 });
         return switch (args.encoding) {
             .buffer => .{
                 .result = .{
```

```diff
@@ -5253,6 +5348,7 @@ pub const NodeFS = struct {
         };
     }
 
+    traceFS(.{ .call = "readFile", .path = if (args.path == .path) args.path.path.slice() else "<fd>", .bytes_read = total });
     return switch (args.encoding) {
         .buffer => .{
             .result = .{
```
```diff
@@ -5284,6 +5380,7 @@ pub const NodeFS = struct {
     const fd = switch (args.file) {
         .path => brk: {
             const path = args.file.path.sliceZWithForceCopy(pathbuf, true);
+            traceFS(.{ .call = "writeFile", .path = path, .length = args.data.slice().len });
 
             const open_result = bun.sys.openat(
                 args.dirfd,
```

```diff
@@ -5293,13 +5390,19 @@ pub const NodeFS = struct {
             );
 
             break :brk switch (open_result) {
-                .err => |err| return .{
-                    .err = err.withPath(args.file.path.slice()),
-                },
+                .err => |err| {
+                    traceFS(.{ .call = "writeFile", .path = path, .errno = err.errno });
+                    return .{
+                        .err = err.withPath(args.file.path.slice()),
+                    };
+                },
                 .result => |fd| fd,
             };
         },
-        .fd => |fd| fd,
+        .fd => |fd| brk: {
+            traceFS(.{ .call = "writeFile", .fd = fd.cast(), .length = args.data.slice().len });
+            break :brk fd;
+        },
     };
 
     defer {
```

```diff
@@ -5374,6 +5477,7 @@ pub const NodeFS = struct {
         }
     }
 
+    traceFS(.{ .call = "writeFile", .path = if (args.file == .path) args.file.path.slice() else "<fd>", .bytes_written = written });
     return .success;
 }
```
@@ -5544,13 +5648,22 @@ pub const NodeFS = struct {

        const from = args.old_path.sliceZ(from_buf);
        const to = args.new_path.sliceZ(&to_buf);
        traceFS(.{ .call = "rename", .from = from, .to = to });
        return switch (Syscall.rename(from, to)) {
            .result => |result| .{ .result = result },
            .err => |err| .{ .err = err.withPathDest(args.old_path.slice(), args.new_path.slice()) },
            .result => |result| {
                traceFS(.{ .call = "rename", .from = from, .to = to, .success = true });
                return .{ .result = result };
            },
            .err => |err| {
                traceFS(.{ .call = "rename", .from = from, .to = to, .errno = err.errno });
                return .{ .err = err.withPathDest(args.old_path.slice(), args.new_path.slice()) };
            },
        };
    }

    pub fn rmdir(this: *NodeFS, args: Arguments.RmDir, _: Flavor) Maybe(Return.Rmdir) {
        traceFS(.{ .call = "rmdir", .path = args.path.slice(), .recursive = args.recursive });

        if (args.recursive) {
            zigDeleteTree(std.fs.cwd(), args.path.slice(), .directory) catch |err| {
                var errno: bun.sys.E = switch (@as(anyerror, err)) {
@@ -5585,23 +5698,35 @@ pub const NodeFS = struct {
                if (Environment.isWindows and errno == .NOTDIR) {
                    errno = .NOENT;
                }
                traceFS(.{ .call = "rmdir", .path = args.path.slice(), .errno = @intFromEnum(errno) });
                return Maybe(Return.Rm){
                    .err = bun.sys.Error.fromCode(errno, .rmdir),
                };
            };

            traceFS(.{ .call = "rmdir", .path = args.path.slice(), .success = true });
            return .success;
        }

        if (comptime Environment.isWindows) {
            return switch (Syscall.rmdir(args.path.sliceZ(&this.sync_error_buf))) {
                .err => |err| .{ .err = err.withPath(args.path.slice()) },
                .result => |result| .{ .result = result },
                .err => |err| {
                    traceFS(.{ .call = "rmdir", .path = args.path.slice(), .errno = err.errno });
                    return .{ .err = err.withPath(args.path.slice()) };
                },
                .result => |result| {
                    traceFS(.{ .call = "rmdir", .path = args.path.slice(), .success = true });
                    return .{ .result = result };
                },
            };
        }

        return Maybe(Return.Rmdir).errnoSysP(system.rmdir(args.path.sliceZ(&this.sync_error_buf)), .rmdir, args.path.slice()) orelse
            .success;
        if (Maybe(Return.Rmdir).errnoSysP(system.rmdir(args.path.sliceZ(&this.sync_error_buf)), .rmdir, args.path.slice())) |err| {
            traceFS(.{ .call = "rmdir", .path = args.path.slice(), .errno = err.err.errno });
            return err;
        }
        traceFS(.{ .call = "rmdir", .path = args.path.slice(), .success = true });
        return .success;
    }

    pub fn rm(this: *NodeFS, args: Arguments.Rm, _: Flavor) Maybe(Return.Rm) {
@@ -5728,18 +5853,25 @@ pub const NodeFS = struct {

    pub fn stat(this: *NodeFS, args: Arguments.Stat, _: Flavor) Maybe(Return.Stat) {
        const path = args.path.sliceZ(&this.sync_error_buf);
        traceFS(.{ .call = "stat", .path = path });

        if (bun.StandaloneModuleGraph.get()) |graph| {
            if (graph.stat(path)) |*result| {
                traceFS(.{ .call = "stat", .path = path, .size = result.size, .source = "standalone" });
                return .{ .result = .{ .stats = .init(&Syscall.PosixStat.init(result), args.big_int) } };
            }
        }

        if (Environment.isLinux and Syscall.supports_statx_on_linux.load(.monotonic)) {
            return switch (Syscall.statx(path, &.{ .type, .mode, .nlink, .uid, .gid, .atime, .mtime, .ctime, .btime, .ino, .size, .blocks })) {
                .result => |result| .{
                    .result = .{ .stats = .init(&result, args.big_int) },
                .result => |result| {
                    traceFS(.{ .call = "stat", .path = path, .size = result.size, .mode = result.mode });
                    return .{
                        .result = .{ .stats = .init(&result, args.big_int) },
                    };
                },
                .err => |err| brk: {
                    traceFS(.{ .call = "stat", .path = path, .errno = err.errno });
                    if (!args.throw_if_no_entry and err.getErrno() == .NOENT) {
                        return .{ .result = .{ .not_found = {} } };
                    }
@@ -5748,10 +5880,14 @@ pub const NodeFS = struct {
            };
        } else {
            return switch (Syscall.stat(path)) {
                .result => |result| .{
                    .result = .{ .stats = .init(&Syscall.PosixStat.init(&result), args.big_int) },
                .result => |result| {
                    traceFS(.{ .call = "stat", .path = path, .size = result.size, .mode = result.mode });
                    return .{
                        .result = .{ .stats = .init(&Syscall.PosixStat.init(&result), args.big_int) },
                    };
                },
                .err => |err| brk: {
                    traceFS(.{ .call = "stat", .path = path, .errno = err.errno });
                    if (!args.throw_if_no_entry and err.getErrno() == .NOENT) {
                        return .{ .result = .{ .not_found = {} } };
                    }
@@ -5885,14 +6021,26 @@ pub const NodeFS = struct {
    }

    pub fn unlink(this: *NodeFS, args: Arguments.Unlink, _: Flavor) Maybe(Return.Unlink) {
        traceFS(.{ .call = "unlink", .path = args.path.slice() });

        if (Environment.isWindows) {
            return switch (Syscall.unlink(args.path.sliceZ(&this.sync_error_buf))) {
                .err => |err| .{ .err = err.withPath(args.path.slice()) },
                .result => |result| .{ .result = result },
                .err => |err| {
                    traceFS(.{ .call = "unlink", .path = args.path.slice(), .errno = err.errno });
                    return .{ .err = err.withPath(args.path.slice()) };
                },
                .result => |result| {
                    traceFS(.{ .call = "unlink", .path = args.path.slice(), .success = true });
                    return .{ .result = result };
                },
            };
        }
        return Maybe(Return.Unlink).errnoSysP(system.unlink(args.path.sliceZ(&this.sync_error_buf)), .unlink, args.path.slice()) orelse
            .success;
        if (Maybe(Return.Unlink).errnoSysP(system.unlink(args.path.sliceZ(&this.sync_error_buf)), .unlink, args.path.slice())) |err| {
            traceFS(.{ .call = "unlink", .path = args.path.slice(), .errno = err.err.errno });
            return err;
        }
        traceFS(.{ .call = "unlink", .path = args.path.slice(), .success = true });
        return .success;
    }

    pub fn watchFile(_: *NodeFS, args: Arguments.WatchFile, comptime flavor: Flavor) Maybe(Return.WatchFile) {
@@ -6958,6 +7106,14 @@ const NodeFSFunctionEnum = std.meta.DeclEnum(NodeFS);

const string = []const u8;

// Tracing helper
inline fn traceFS(args: anytype) void {
    if (Output.trace_enabled) {
        const tracer = Output.tracer("fs");
        tracer.trace(args);
    }
}

const DirIterator = @import("./dir_iterator.zig");
const FileSystem = @import("../../fs.zig").FileSystem;

@@ -6966,6 +7122,7 @@ const Environment = bun.Environment;
const FD = bun.FD;
const FileDescriptor = bun.FileDescriptor;
const Mode = bun.Mode;
const Output = bun.Output;
const PathString = bun.PathString;
const c = bun.c;
const strings = bun.strings;

@@ -1103,6 +1103,34 @@ pub fn Mixin(comptime Type: type) type {

    pub fn getText(this: *Type, globalObject: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!jsc.JSValue {
        var value: *Body.Value = this.getBodyValue();

        if (Output.trace_enabled) {
            const trace = Output.tracer("response_body");

            if (value.* == .Used) {
                trace.trace(.{ .call = "text", .status = "already_used" });
            } else if (value.* == .Locked) {
                if (@hasDecl(Type, "getBodyReadableStream")) {
                    if (this.getBodyReadableStream(globalObject)) |readable| {
                        if (readable.isDisturbed(globalObject)) {
                            trace.trace(.{ .call = "text", .status = "already_used" });
                        } else {
                            trace.trace(.{ .call = "text", .status = "streaming" });
                        }
                    }
                } else if (value.Locked.action != .none or value.Locked.isDisturbed(Type, globalObject, callframe.this())) {
                    trace.trace(.{ .call = "text", .status = "already_used" });
                } else {
                    trace.trace(.{ .call = "text", .status = "locked" });
                }
            } else {
                const blob = value.*;
                if (blob == .Blob or blob == .WTFStringImpl or blob == .InternalBlob) {
                    trace.trace(.{ .call = "text", .status = "immediate" });
                }
            }
        }

        if (value.* == .Used) {
            return handleBodyAlreadyUsed(globalObject);
        }
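The branches above classify the body before `getText` proceeds: `already_used`, `streaming`, `locked`, or `immediate`. A minimal sketch of the two most common states from the JavaScript side, assuming standard fetch `Response` semantics (the status labels are the ones emitted by the Zig code above):

```typescript
// The Response starts unconsumed; the first .text() is the "immediate" case,
// after which the body is marked used ("already_used" on a second read).
const res = new Response("hello");
const before = res.bodyUsed; // false: a .text() here is traced as "immediate"
await res.text();
const after = res.bodyUsed; // true: another .text() would be traced as "already_used"
console.log(before, after);
```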
@@ -130,6 +130,15 @@ pub const WriteFile = struct {
        // non-seekable file.
        bun.sys.write(fd, buffer);

        // Trace the write operation
        if (Output.trace_enabled) {
            const tracer = Output.tracer("bun_write");
            switch (result) {
                .result => |bytes| tracer.trace(.{ .call = "write", .fd = fd.cast(), .length = buffer.len, .bytes_written = bytes }),
                .err => |err| tracer.trace(.{ .call = "write", .fd = fd.cast(), .length = buffer.len, .errno = err.errno }),
            }
        }

        while (true) {
            switch (result) {
                .result => |res| {
@@ -739,6 +748,7 @@ const std = @import("std");

const bun = @import("bun");
const Environment = bun.Environment;
const Output = bun.Output;
const invalid_fd = bun.invalid_fd;
const io = bun.io;
const uv = bun.windows.libuv;

@@ -118,6 +118,7 @@ pub const runtime_params_ = [_]ParamType{
    clap.parseParam("--unhandled-rejections <STR> One of \"strict\", \"throw\", \"warn\", \"none\", or \"warn-with-error-code\"") catch unreachable,
    clap.parseParam("--console-depth <NUMBER> Set the default depth for console.log object inspection (default: 2)") catch unreachable,
    clap.parseParam("--user-agent <STR> Set the default User-Agent header for HTTP requests") catch unreachable,
    clap.parseParam("--trace <PATH> Write structured trace logs to the specified file (JSON lines format)") catch unreachable,
};

pub const auto_or_run_params = [_]ParamType{
@@ -737,6 +738,14 @@ pub fn parse(allocator: std.mem.Allocator, ctx: Command.Context, comptime cmd: C
        bun.http.overridden_default_user_agent = user_agent;
    }

    if (args.option("--trace")) |trace_path| {
        Output.initTrace(trace_path) catch |err| {
            Output.errGeneric("Failed to open trace file \"{s}\": {s}\n", .{ trace_path, @errorName(err) });
            Global.exit(1);
        };
        Global.addExitCallback(Output.closeTraceCallback);
    }

    ctx.debug.offline_mode_setting = if (args.flag("--prefer-offline"))
        Bunfig.OfflineMode.offline
    else if (args.flag("--prefer-latest"))
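`Output.initTrace`, called above, opens the trace path with `WRONLY | CREAT | TRUNC` (see `src/output.zig` further down), so each run starts a fresh trace rather than appending. A quick illustration of that open mode using Node's `fs`, where the `"w"` flag is the same `O_WRONLY | O_CREAT | O_TRUNC` combination (the file name here is illustrative):

```typescript
import { closeSync, openSync, readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

const path = join(tmpdir(), "trace-demo.jsonl");
writeFileSync(path, '["fs",1,"open",{}]\n'); // pretend this is last run's trace

// "w" == O_WRONLY | O_CREAT | O_TRUNC: opening alone discards the old contents.
closeSync(openSync(path, "w"));

console.log(readFileSync(path, "utf8").length); // 0
```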
136
src/output.zig
@@ -1322,6 +1322,142 @@ pub const Synchronized = struct {
    }
};

/// Structured tracer for logging events to a JSON file
/// Used by agents like Claude to receive logs about what applications are doing
/// without having to manually add console.log everywhere.
///
/// Usage:
///     const tracer = Output.tracer("fs");
///     tracer.trace(.{ .call = "open", .path = path, .rc = errno });
///
/// Enable with: --trace=file.json
///
pub fn tracer(comptime namespace: []const u8) type {
    return Tracer(namespace);
}

fn Tracer(comptime ns: []const u8) type {
    return struct {
        pub fn trace(args: anytype) void {
            if (!trace_enabled) return;

            const file = trace_file orelse return;

            trace_lock.lock();
            defer trace_lock.unlock();

            // Get timestamp
            const timestamp = std.time.milliTimestamp();

            // Extract the "call" field from args to use as operation name
            const ArgsType = @TypeOf(args);
            const args_info = @typeInfo(ArgsType);

            // Write array format: [namespace, timestamp, operation, data]
            var buffer: [4096]u8 = undefined;
            var buffered = file.writer().adaptToNewApi(&buffer);
            const w = &buffered.new_interface;

            w.writeAll("[\"") catch return;
            w.writeAll(ns) catch return;
            w.writeAll("\",") catch return;
            w.print("{d}", .{timestamp}) catch return;
            w.writeAll(",\"") catch return;

            // Write the operation name from the "call" field
            if (args_info == .@"struct") {
                inline for (args_info.@"struct".fields) |field| {
                    if (comptime std.mem.eql(u8, field.name, "call")) {
                        const call_value = @field(args, "call");
                        w.writeAll(call_value) catch return;
                        break;
                    }
                }
            }

            w.writeAll("\",") catch return;

            // Write the data (args minus the "call" field)
            w.writeAll("{") catch return;
            if (args_info == .@"struct") {
                var first = true;
                inline for (args_info.@"struct".fields) |field| {
                    if (comptime !std.mem.eql(u8, field.name, "call")) {
                        const field_value = @field(args, field.name);
                        const FieldType = @TypeOf(field_value);
                        const field_info = @typeInfo(FieldType);

                        // Skip fields that can't be serialized (function pointers, opaque pointers, etc.)
                        const should_skip = comptime blk: {
                            if (field_info == .pointer) {
                                // Skip if pointing to opaque or function types
                                const child_info = @typeInfo(field_info.pointer.child);
                                if (child_info == .@"opaque" or child_info == .@"fn") {
                                    break :blk true;
                                }
                            } else if (field_info == .@"fn") {
                                break :blk true;
                            }
                            break :blk false;
                        };

                        if (!should_skip) {
                            if (!first) {
                                w.writeAll(",") catch return;
                            }
                            first = false;
                            w.writeAll("\"") catch return;
                            w.writeAll(field.name) catch return;
                            w.writeAll("\":") catch return;

                            // Stringify the field value using the new API
                            var json_stringify: std.json.Stringify = .{ .writer = w };
                            json_stringify.write(field_value) catch return;
                        }
                    }
                }
            }
            w.writeAll("}]\n") catch return;
            buffered.new_interface.flush() catch return;
        }
    };
}

var trace_file: ?bun.sys.File = null;
pub var trace_enabled: bool = false;
var trace_lock = bun.Mutex{};

/// Initialize trace file from CLI argument
pub fn initTrace(path: []const u8) !void {
    if (trace_file != null) return;

    const flags = bun.O.WRONLY | bun.O.CREAT | bun.O.TRUNC;
    const mode = 0o644;
    switch (bun.sys.openA(path, flags, mode)) {
        .result => |fd| {
            trace_file = bun.sys.File{ .handle = fd };
            trace_enabled = true;
        },
        .err => {
            return error.FailedToOpenTraceFile;
        },
    }
}

/// Close trace file on exit
pub fn closeTrace() void {
    if (trace_file) |file| {
        _ = file.close();
        trace_file = null;
        trace_enabled = false;
    }
}

/// Callback for Global.addExitCallback
pub fn closeTraceCallback() callconv(.c) void {
    closeTrace();
}

const Environment = @import("./env.zig");
const root = @import("root");
const std = @import("std");
304
test/js/bun/trace/README.md
Normal file
@@ -0,0 +1,304 @@
# Trace Tests

Tests for the `--trace` flag functionality that provides structured logging of I/O operations.

## Test Files

### `trace.test.ts`

Basic functionality tests covering:

- Trace file creation and format
- Low-level fs operations (open, read, write, close)
- High-level fs operations (readFile, writeFile, stat, mkdir, rmdir, unlink, rename, readdir)
- HTTP/fetch operations
- Response body consumption
- Error handling (invalid trace file path)

### `extensive-trace.test.ts`

Comprehensive integration tests covering:

- Multiple sequential fs operations
- Multiple sequential HTTP requests
- Mixed fs + HTTP operations
- Namespace filtering
- Chronological ordering
- Complete trace format validation

## Running Tests

```bash
# Run all trace tests
bun bd test test/js/bun/trace/

# Run specific test file
bun bd test test/js/bun/trace/trace.test.ts

# Run specific test
bun bd test test/js/bun/trace/trace.test.ts -t "trace high-level"
```

## Writing New Trace Tests

### Test Structure

```typescript
import { test, expect } from "bun:test";
import { readFileSync } from "fs";
import { bunEnv, bunExe, tempDir } from "harness";
import { join } from "path";

test("your test name", async () => {
  // 1. Create test files in temp directory
  using dir = tempDir("test-prefix", {
    "test.js": `
      // Your test script here
      import { writeFileSync } from "fs";
      writeFileSync("test.txt", "data");
      console.log("done");
    `,
  });

  // 2. Set up trace file path
  const traceFile = join(String(dir), "trace.jsonl");

  // 3. Spawn bun with --trace flag
  await using proc = Bun.spawn({
    cmd: [bunExe(), "--trace", traceFile, "test.js"],
    env: bunEnv,
    cwd: String(dir),
    stderr: "pipe",
    stdout: "pipe",
  });

  // 4. Wait for completion
  const [stdout, stderr, exitCode] = await Promise.all([
    proc.stdout.text(),
    proc.stderr.text(),
    proc.exited
  ]);

  // 5. Verify execution
  expect(exitCode).toBe(0);
  expect(stdout).toContain("done");

  // 6. Parse and validate trace
  const traceContent = readFileSync(traceFile, "utf8");
  const traces = traceContent
    .trim()
    .split("\n")
    .filter(l => l.length > 0)
    .map(l => JSON.parse(l));

  // 7. Assert trace properties
  const fsTraces = traces.filter(t => t.ns === "fs");
  expect(fsTraces.length).toBeGreaterThan(0);

  const writeCalls = fsTraces.filter(t => t.data.call === "writeFile");
  expect(writeCalls.length).toBeGreaterThan(0);
});
```

### Important Test Patterns

#### Use Instrumented Operations

Not all fs operations go through instrumented code paths. Use these patterns:

**✅ DO** - These operations ARE traced:
```javascript
// Low-level sync operations
import { openSync, readSync, writeSync, closeSync } from "fs";
const fd = openSync("file.txt", "w");
writeSync(fd, "data");
closeSync(fd);

// High-level sync operations
import { readFileSync, writeFileSync, statSync } from "fs";
writeFileSync("file.txt", "data");
const content = readFileSync("file.txt", "utf8");
const stats = statSync("file.txt");

// Directory operations
import { mkdirSync, readdirSync, rmdirSync, unlinkSync, renameSync } from "fs";
mkdirSync("dir");
const files = readdirSync("dir");
```

**❌ DON'T** - These operations may NOT be traced:
```javascript
// Some async operations may use different code paths
import { readFile, writeFile } from "fs/promises";
await writeFile("file.txt", "data"); // May not be traced
```

#### Testing Fetch Operations

Fetch tests can be slow. Set appropriate timeouts:

```typescript
test(
  "fetch test",
  async () => {
    // Test code that makes HTTP requests
  },
  10_000, // 10 second timeout
);
```

Use `example.com` for test fetches (fast, reliable):
```javascript
const response = await fetch("https://example.com");
```

#### Namespace Filtering

Test that traces are properly namespaced:

```typescript
const fsTraces = traces.filter(t => t.ns === "fs");
const fetchTraces = traces.filter(t => t.ns === "fetch");
const bodyTraces = traces.filter(t => t.ns === "response_body");
const bunWriteTraces = traces.filter(t => t.ns === "bun_write");
```

#### Entry/Exit Traces

Many operations have both entry and exit traces:

```typescript
// Entry trace: operation starting
{"ns":"fs","ts":1761595797038,"data":{"call":"writeFile","path":"test.txt","length":5}}

// Exit trace: operation complete
{"ns":"fs","ts":1761595797038,"data":{"call":"writeFile","path":"test.txt","bytes_written":5}}
```

Test both:
```typescript
const writeCalls = fsTraces.filter(t => t.data.call === "writeFile");
const entryTrace = writeCalls.find(t => t.data.length !== undefined);
const exitTrace = writeCalls.find(t => t.data.bytes_written !== undefined);
expect(entryTrace).toBeDefined();
expect(exitTrace).toBeDefined();
```

### Common Assertions

```typescript
// Trace file exists and has content
expect(traceContent.length).toBeGreaterThan(0);

// Valid JSON lines
const traces = traceContent.trim().split("\n").map(l => JSON.parse(l));

// Required fields
traces.forEach(t => {
  expect(t).toHaveProperty("ns");
  expect(t).toHaveProperty("ts");
  expect(t).toHaveProperty("data");
  expect(t.data).toHaveProperty("call");
});

// Specific operation traced
expect(traces.some(t => t.data.call === "writeFile")).toBe(true);

// Path included in trace
expect(traces.some(t => t.data.path?.includes("test.txt"))).toBe(true);

// Chronological ordering
for (let i = 1; i < traces.length; i++) {
  expect(traces[i].ts).toBeGreaterThanOrEqual(traces[i-1].ts);
}
```

## Debugging Failed Tests

### Test Fails: Empty Trace File

**Cause**: Operation doesn't go through an instrumented code path

**Solution**: Use sync operations or high-level APIs that are instrumented:
```typescript
// Instead of this:
await Bun.write("file.txt", "data");

// Use this:
import { writeFileSync } from "fs";
writeFileSync("file.txt", "data");
```

### Test Fails: Missing Expected Operations

**Cause**: Wrong namespace filter or operation name

**Solution**: Print all traces to see what's actually logged:
```typescript
console.log("All traces:", JSON.stringify(traces, null, 2));
```

### Test Fails: Timeout on Fetch Tests

**Cause**: External HTTP requests can be slow

**Solution**: Increase the timeout or use faster endpoints:
```typescript
test(
  "fetch test",
  async () => { /* ... */ },
  15_000, // Increase timeout
);
```

## Adding New Trace Points

If you add new trace instrumentation to Bun's source code:

1. **Add the trace call** in the relevant source file:
   ```zig
   traceFS(.{ .call = "newOperation", .path = path, .result = result });
   ```

2. **Rebuild Bun**:
   ```bash
   bun bd
   ```

3. **Add a test** in `trace.test.ts`:
   ```typescript
   test("trace new operation", async () => {
     // Test code that triggers the new operation
     // Verify trace includes expected data
   });
   ```

4. **Verify the test** passes with your changes and fails without:
   ```bash
   # Should pass
   bun bd test test/js/bun/trace/trace.test.ts -t "new operation"

   # Should fail (operation not traced)
   USE_SYSTEM_BUN=1 bun test test/js/bun/trace/trace.test.ts -t "new operation"
   ```

## Coverage Matrix

Current test coverage:

| Operation | Low-level API | High-level API |
|-----------|---------------|----------------|
| File Read | ✅ `readSync` | ✅ `readFileSync` |
| File Write | ✅ `writeSync` | ✅ `writeFileSync` |
| File Open | ✅ `openSync` | N/A |
| File Close | ✅ `closeSync` | N/A |
| File Stat | N/A | ✅ `statSync` |
| Dir Create | N/A | ✅ `mkdirSync` |
| Dir Remove | N/A | ✅ `rmdirSync` |
| Dir List | N/A | ✅ `readdirSync` |
| File Delete | N/A | ✅ `unlinkSync` |
| File Rename | N/A | ✅ `renameSync` |
| HTTP Request | N/A | ✅ `fetch()` |
| Response Body | N/A | ✅ `.text()`, `.json()` |
| Bun.write | N/A | ⚠️ Instrumented but not tested |

**Legend:**

- ✅ Fully tested
- ⚠️ Instrumented but needs tests
- N/A Not applicable
333
test/js/bun/trace/extensive-trace.test.ts
Normal file
@@ -0,0 +1,333 @@
import { describe, expect, test } from "bun:test";
import { readFileSync } from "fs";
import { bunEnv, bunExe, tempDir } from "harness";
import { join } from "path";

describe("extensive tracing", () => {
  test("comprehensive fs operations tracing", async () => {
    using dir = tempDir("trace-fs-extensive", {
      "test.js": `
        import { openSync, writeSync, readSync, closeSync, unlinkSync } from "fs";

        // Test open, write, close
        const fd1 = openSync("file1.txt", "w");
        writeSync(fd1, "hello world");
        closeSync(fd1);

        // Test open for read, read, close
        const fd2 = openSync("file1.txt", "r");
        const buf = Buffer.alloc(100);
        const bytesRead = readSync(fd2, buf, 0, 100, 0);
        console.log("read", bytesRead, "bytes");
        closeSync(fd2);

        // Test multiple writes
        const fd3 = openSync("file2.txt", "w");
        writeSync(fd3, "line 1\\n");
        writeSync(fd3, "line 2\\n");
        writeSync(fd3, "line 3\\n");
        closeSync(fd3);

        console.log("fs operations complete");
      `,
    });

    const traceFile = join(String(dir), "trace.jsonl");

    await using proc = Bun.spawn({
      cmd: [bunExe(), "--trace", traceFile, "test.js"],
      env: bunEnv,
      cwd: String(dir),
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    expect(exitCode).toBe(0);
    expect(stdout).toContain("fs operations complete");

    const traceContent = readFileSync(traceFile, "utf8");
    const traces = traceContent
      .trim()
      .split("\n")
      .filter(l => l.length > 0)
      .map(l => JSON.parse(l));

    // Should have fs namespace
    const fsTraces = traces.filter(t => t[0] === "fs");
    expect(fsTraces.length).toBeGreaterThan(0);

    // Check we have open operations
    const opens = fsTraces.filter(t => t[2] === "open");
    expect(opens.length).toBeGreaterThanOrEqual(3);

    // Verify open trace has required fields
    expect(opens[0][3]).toHaveProperty("path");
    expect(opens[0][3]).toHaveProperty("flags");
    expect(opens[0][3]).toHaveProperty("mode");
    expect(opens[0][3]).toHaveProperty("fd");

    // Check we have write operations
    const writes = fsTraces.filter(t => t[2] === "write");
    expect(writes.length).toBeGreaterThanOrEqual(4);

    // Verify write trace has required fields
    expect(writes[0][3]).toHaveProperty("fd");
    expect(writes[0][3]).toHaveProperty("length");
    expect(writes[0][3]).toHaveProperty("bytes_written");

    // Check we have read operations
    const reads = fsTraces.filter(t => t[2] === "read");
    expect(reads.length).toBeGreaterThanOrEqual(1);

    // Verify read trace has required fields
    expect(reads[0][3]).toHaveProperty("fd");
    expect(reads[0][3]).toHaveProperty("length");
    expect(reads[0][3]).toHaveProperty("bytes_read");

    // Check we have close operations
    const closes = fsTraces.filter(t => t[2] === "close");
    expect(closes.length).toBeGreaterThanOrEqual(3);

    // Verify close trace has required fields
    expect(closes[0][3]).toHaveProperty("fd");
  });

  test("comprehensive fetch/HTTP tracing", async () => {
    using dir = tempDir("trace-fetch-extensive", {
      "test.js": `
        // Multiple sequential fetches
        const r1 = await fetch("https://example.com");
        const t1 = await r1.text();
        console.log("fetch 1:", t1.length, "bytes");

        const r2 = await fetch("https://example.com");
        const t2 = await r2.text();
        console.log("fetch 2:", t2.length, "bytes");

        console.log("fetches complete");
      `,
    });

    const traceFile = join(String(dir), "trace.jsonl");

    await using proc = Bun.spawn({
      cmd: [bunExe(), "--trace", traceFile, "test.js"],
      env: bunEnv,
      cwd: String(dir),
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    expect(exitCode).toBe(0);
    expect(stdout).toContain("fetches complete");

    const traceContent = readFileSync(traceFile, "utf8");
    const traces = traceContent
      .trim()
      .split("\n")
      .filter(l => l.length > 0)
      .map(l => JSON.parse(l));

    // Should have fetch namespace
    const fetchTraces = traces.filter(t => t[0] === "fetch");
    expect(fetchTraces.length).toBeGreaterThan(0);

    // Check we have request initiations
    const requests = fetchTraces.filter(t => t[2] === "request");
    expect(requests.length).toBeGreaterThanOrEqual(2);

    // Verify request trace has required fields
    expect(requests[0][3]).toHaveProperty("url");
    expect(requests[0][3]).toHaveProperty("method");
    expect(requests[0][3].method).toBe("GET");

    // Check we have responses
    const responses = fetchTraces.filter(t => t[2] === "response");
    expect(responses.length).toBeGreaterThanOrEqual(2);

    // Verify response trace has required fields
    expect(responses[0][3]).toHaveProperty("url");
    expect(responses[0][3]).toHaveProperty("status");
    expect(responses[0][3]).toHaveProperty("body_size");

    // Should have response_body namespace
    const bodyTraces = traces.filter(t => t[0] === "response_body");
    expect(bodyTraces.length).toBeGreaterThan(0);

    // Check body consumption
    const textCalls = bodyTraces.filter(t => t[2] === "text");
    expect(textCalls.length).toBeGreaterThanOrEqual(2);
  }, 10_000);

  test("mixed operations tracing", async () => {
    using dir = tempDir("trace-mixed", {
      "test.js": `
        import { openSync, writeSync, closeSync, readFileSync } from "fs";

        // File operation
        const fd = openSync("data.txt", "w");
        writeSync(fd, "initial data");
        closeSync(fd);

        // HTTP operation
        const response = await fetch("https://example.com");
        const html = await response.text();

        // Write HTTP response to file
        const fd2 = openSync("output.html", "w");
        writeSync(fd2, html);
        closeSync(fd2);

        // Read it back
        const content = readFileSync("output.html", "utf8");
        console.log("wrote and read", content.length, "bytes");
      `,
    });

    const traceFile = join(String(dir), "trace.jsonl");

    await using proc = Bun.spawn({
      cmd: [bunExe(), "--trace", traceFile, "test.js"],
      env: bunEnv,
      cwd: String(dir),
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    expect(exitCode).toBe(0);

    const traceContent = readFileSync(traceFile, "utf8");
    const traces = traceContent
      .trim()
      .split("\n")
      .filter(l => l.length > 0)
      .map(l => JSON.parse(l));

    // Should have both fs and fetch traces
    const namespaces = new Set(traces.map(t => t[0]));
    expect(namespaces.has("fs")).toBe(true);
    expect(namespaces.has("fetch")).toBe(true);
    expect(namespaces.has("response_body")).toBe(true);

    // All traces should have timestamp
    traces.forEach(t => {
      expect(t[1]).toBeDefined();
      expect(typeof t[1]).toBe("number");
      expect(t[1]).toBeGreaterThan(0);
    });

    // All traces should have namespace
    traces.forEach(t => {
      expect(t[0]).toBeDefined();
      expect(typeof t[0]).toBe("string");
|
||||
expect(t[0].length).toBeGreaterThan(0);
|
||||
});
|
||||
|
||||
// All traces should have data
|
||||
traces.forEach(t => {
|
||||
expect(t[3]).toBeDefined();
|
||||
expect(typeof t[3]).toBe("object");
|
||||
});
|
||||
});
|
||||
|
||||
test("trace namespace filtering", async () => {
|
||||
using dir = tempDir("trace-filter", {
|
||||
"test.js": `
|
||||
import { openSync, writeSync, closeSync } from "fs";
|
||||
const fd = openSync("test.txt", "w");
|
||||
writeSync(fd, "hello");
|
||||
closeSync(fd);
|
||||
const r = await fetch("https://example.com");
|
||||
await r.text();
|
||||
console.log("done");
|
||||
`,
|
||||
});
|
||||
|
||||
const traceFile = join(String(dir), "trace.jsonl");
|
||||
|
||||
await using proc = Bun.spawn({
|
||||
cmd: [bunExe(), "--trace", traceFile, "test.js"],
|
||||
env: bunEnv,
|
||||
cwd: String(dir),
|
||||
stderr: "pipe",
|
||||
stdout: "pipe",
|
||||
});
|
||||
|
||||
await proc.exited;
|
||||
|
||||
const traceContent = readFileSync(traceFile, "utf8");
|
||||
const traces = traceContent
|
||||
.trim()
|
||||
.split("\n")
|
||||
.filter(l => l.length > 0)
|
||||
.map(l => JSON.parse(l));
|
||||
|
||||
// Can filter by namespace
|
||||
const fsOnly = traces.filter(t => t[0] === "fs");
|
||||
const fetchOnly = traces.filter(t => t[0] === "fetch");
|
||||
const bodyOnly = traces.filter(t => t[0] === "response_body");
|
||||
|
||||
expect(fsOnly.length).toBeGreaterThan(0);
|
||||
expect(fetchOnly.length).toBeGreaterThan(0);
|
||||
expect(bodyOnly.length).toBeGreaterThan(0);
|
||||
|
||||
// Each namespace should only have its own traces
|
||||
fsOnly.forEach(t => expect(t[0]).toBe("fs"));
|
||||
fetchOnly.forEach(t => expect(t[0]).toBe("fetch"));
|
||||
bodyOnly.forEach(t => expect(t[0]).toBe("response_body"));
|
||||
});
|
||||
|
||||
test("trace chronological ordering", async () => {
|
||||
using dir = tempDir("trace-ordering", {
|
||||
"test.js": `
|
||||
import { openSync, writeSync, closeSync } from "fs";
|
||||
|
||||
const fd1 = openSync("file1.txt", "w");
|
||||
writeSync(fd1, "first");
|
||||
closeSync(fd1);
|
||||
await new Promise(r => setTimeout(r, 10));
|
||||
|
||||
const fd2 = openSync("file2.txt", "w");
|
||||
writeSync(fd2, "second");
|
||||
closeSync(fd2);
|
||||
await new Promise(r => setTimeout(r, 10));
|
||||
|
||||
const fd3 = openSync("file3.txt", "w");
|
||||
writeSync(fd3, "third");
|
||||
closeSync(fd3);
|
||||
|
||||
console.log("done");
|
||||
`,
|
||||
});
|
||||
|
||||
const traceFile = join(String(dir), "trace.jsonl");
|
||||
|
||||
await using proc = Bun.spawn({
|
||||
cmd: [bunExe(), "--trace", traceFile, "test.js"],
|
||||
env: bunEnv,
|
||||
cwd: String(dir),
|
||||
stderr: "pipe",
|
||||
stdout: "pipe",
|
||||
});
|
||||
|
||||
await proc.exited;
|
||||
|
||||
const traceContent = readFileSync(traceFile, "utf8");
|
||||
const traces = traceContent
|
||||
.trim()
|
||||
.split("\n")
|
||||
.filter(l => l.length > 0)
|
||||
.map(l => JSON.parse(l));
|
||||
|
||||
// Timestamps should be monotonically increasing
|
||||
for (let i = 1; i < traces.length; i++) {
|
||||
expect(traces[i][1]).toBeGreaterThanOrEqual(traces[i - 1][1]);
|
||||
}
|
||||
});
|
||||
});
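The chronological-ordering check above is a reusable property of any trace event list. A minimal standalone sketch: the `[ns, ts, call, data]` tuple shape is inferred from how the tests index events positionally, and `isMonotonic` is a hypothetical helper, not part of the Bun test harness.

```typescript
// Tuple shape inferred from the tests: t[0] = namespace, t[1] = timestamp,
// t[2] = call name, t[3] = data object.
type TraceEvent = [string, number, string, Record<string, unknown>];

// Returns true when timestamps (index 1) never decrease,
// mirroring the loop in the "trace chronological ordering" test.
function isMonotonic(events: TraceEvent[]): boolean {
  for (let i = 1; i < events.length; i++) {
    if (events[i][1] < events[i - 1][1]) return false;
  }
  return true;
}

const ordered: TraceEvent[] = [
  ["fs", 10, "open", {}],
  ["fs", 10, "write", {}],
  ["fs", 12, "close", {}],
];
const unordered: TraceEvent[] = [
  ["fs", 12, "open", {}],
  ["fs", 10, "close", {}],
];
console.log(isMonotonic(ordered), isMonotonic(unordered)); // → true false
```

Equal adjacent timestamps are deliberately accepted (`<`, not `<=`), matching the test's `toBeGreaterThanOrEqual` assertion.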
135
test/js/bun/trace/subprocess-trace.test.ts
Normal file
@@ -0,0 +1,135 @@
import { describe, expect, test } from "bun:test";
import { readFileSync } from "fs";
import { bunEnv, bunExe, tempDir } from "harness";
import { join } from "path";

describe("subprocess tracing", () => {
  test("trace Bun.spawn", async () => {
    using dir = tempDir("trace-bun-spawn", {
      "test.js": `
        const proc = Bun.spawn(["echo", "hello", "world"]);
        const text = await new Response(proc.stdout).text();
        console.log("output:", text);
        const exitCode = await proc.exited;
        console.log("exit:", exitCode);
      `,
    });

    const traceFile = join(String(dir), "trace.jsonl");

    await using proc = Bun.spawn({
      cmd: [bunExe(), "--trace", traceFile, "test.js"],
      env: bunEnv,
      cwd: String(dir),
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    expect(exitCode).toBe(0);
    expect(stdout).toContain("output: hello world");

    const traceContent = readFileSync(traceFile, "utf8");
    const traces = traceContent
      .trim()
      .split("\n")
      .filter(l => l.length > 0)
      .map(l => JSON.parse(l));

    // Should have subprocess namespace entries
    const subprocessTraces = traces.filter(t => t[0] === "subprocess");
    expect(subprocessTraces.length).toBeGreaterThan(0);

    // Check for spawn
    const spawnCalls = subprocessTraces.filter(t => t[2] === "spawn");
    expect(spawnCalls.length).toBeGreaterThan(0);
    expect(spawnCalls[0][3].cmd).toBe("echo");
    expect(spawnCalls[0][3]).toHaveProperty("pid");
    expect(spawnCalls[0][3]).toHaveProperty("cwd");

    // Check for exit
    const exitCalls = subprocessTraces.filter(t => t[2] === "exit");
    expect(exitCalls.length).toBeGreaterThan(0);
    expect(exitCalls[0][3]).toHaveProperty("pid");
    expect(exitCalls[0][3].exit_code).toBe(0);
  });

  test("trace child_process spawn", async () => {
    using dir = tempDir("trace-child-process", {
      "test.js": `
        import { spawn } from "child_process";
        const child = spawn("echo", ["hello"]);
        child.stdout.on("data", (data) => console.log("stdout:", data.toString()));
        child.on("close", (code) => console.log("closed:", code));
        await new Promise(resolve => setTimeout(resolve, 100));
      `,
    });

    const traceFile = join(String(dir), "trace.jsonl");

    await using proc = Bun.spawn({
      cmd: [bunExe(), "--trace", traceFile, "test.js"],
      env: bunEnv,
      cwd: String(dir),
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    expect(exitCode).toBe(0);

    const traceContent = readFileSync(traceFile, "utf8");
    const traces = traceContent
      .trim()
      .split("\n")
      .filter(l => l.length > 0)
      .map(l => JSON.parse(l));

    const subprocessTraces = traces.filter(t => t[0] === "subprocess");
    expect(subprocessTraces.length).toBeGreaterThan(0);

    // Check for spawn with echo command
    const spawnCalls = subprocessTraces.filter(t => t[2] === "spawn" && t[3].cmd === "echo");
    expect(spawnCalls.length).toBeGreaterThan(0);
  });

  test("trace subprocess with arguments", async () => {
    using dir = tempDir("trace-subprocess-args", {
      "test.js": `
        const proc = Bun.spawn(["echo", "arg1", "arg2", "arg3"]);
        await proc.exited;
        console.log("done");
      `,
    });

    const traceFile = join(String(dir), "trace.jsonl");

    await using proc = Bun.spawn({
      cmd: [bunExe(), "--trace", traceFile, "test.js"],
      env: bunEnv,
      cwd: String(dir),
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    expect(exitCode).toBe(0);

    const traceContent = readFileSync(traceFile, "utf8");
    const traces = traceContent
      .trim()
      .split("\n")
      .filter(l => l.length > 0)
      .map(l => JSON.parse(l));

    const subprocessTraces = traces.filter(t => t[0] === "subprocess");
    const spawnCalls = subprocessTraces.filter(t => t[2] === "spawn");

    expect(spawnCalls.length).toBeGreaterThan(0);
    // Check that args count is tracked (4 args: echo, arg1, arg2, arg3)
    expect(spawnCalls[0][3].args || spawnCalls[0][3].args_count).toBeGreaterThanOrEqual(4);
  });
});
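The last assertion accepts either an `args` or an `args_count` field, so the spawn-event schema is evidently not pinned down by these tests. A small normalizer sketch under that same assumption: the field names come from the test, while `SpawnData` and `argCount` are hypothetical names, not part of Bun's trace format.

```typescript
// Spawn-event data as the test observes it: `cmd` plus either `args`
// (count or array) or `args_count` (count). Shape is assumed, not specified.
interface SpawnData {
  cmd: string;
  pid?: number;
  cwd?: string;
  args?: number | unknown[];
  args_count?: number;
}

// Normalize to a single argument count, whichever field the trace emitted.
function argCount(data: SpawnData): number {
  if (typeof data.args_count === "number") return data.args_count;
  if (Array.isArray(data.args)) return data.args.length;
  if (typeof data.args === "number") return data.args;
  return 0;
}

console.log(argCount({ cmd: "echo", args_count: 4 }), argCount({ cmd: "echo", args: ["a", "b"] })); // → 4 2
```

A helper like this would let the test assert on one field instead of the `||` expression, which silently passes if `args` is a truthy non-number.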
475
test/js/bun/trace/trace.test.ts
Normal file
@@ -0,0 +1,475 @@
import { describe, expect, test } from "bun:test";
import { readFileSync } from "fs";
import { bunEnv, bunExe, tempDir } from "harness";
import { join } from "path";

describe("--trace flag", () => {
  test("basic trace file creation", async () => {
    using dir = tempDir("trace-basic", {
      "test.js": `
        import { openSync, writeSync, closeSync } from "fs";
        const fd = openSync("test.txt", "w");
        writeSync(fd, "hello");
        closeSync(fd);
        console.log("done");
      `,
    });

    const traceFile = join(String(dir), "trace.jsonl");

    await using proc = Bun.spawn({
      cmd: [bunExe(), "--trace", traceFile, "test.js"],
      env: bunEnv,
      cwd: String(dir),
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    expect(stdout).toBe("done\n");
    expect(exitCode).toBe(0);

    // Trace file should exist and have content
    const traceContent = readFileSync(traceFile, "utf8");
    expect(traceContent.length).toBeGreaterThan(0);

    // Should have at least 3 trace lines (open, write, close)
    const lines = traceContent
      .trim()
      .split("\n")
      .filter(l => l.length > 0);
    expect(lines.length).toBeGreaterThanOrEqual(3);
  });

  test("trace fs operations", async () => {
    using dir = tempDir("trace-fs", {
      "test.js": `
        import { openSync, writeSync, readSync, closeSync } from "fs";
        const fd1 = openSync("output.txt", "w");
        writeSync(fd1, "test data");
        closeSync(fd1);

        const fd2 = openSync("output.txt", "r");
        const buf = Buffer.alloc(100);
        readSync(fd2, buf, 0, 100, 0);
        closeSync(fd2);
        console.log("done");
      `,
    });

    const traceFile = join(String(dir), "trace.jsonl");

    await using proc = Bun.spawn({
      cmd: [bunExe(), "--trace", traceFile, "test.js"],
      env: bunEnv,
      cwd: String(dir),
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    expect(stdout).toBe("done\n");
    expect(exitCode).toBe(0);

    // Parse trace file
    const traceContent = readFileSync(traceFile, "utf8");
    const traceLines = traceContent
      .trim()
      .split("\n")
      .filter(line => line.length > 0);

    expect(traceLines.length).toBeGreaterThan(0);

    // Parse each line as JSON
    const traces = traceLines.map(line => JSON.parse(line));

    // Should have fs namespace entries
    const fsTraces = traces.filter(t => t[0] === "fs");
    expect(fsTraces.length).toBeGreaterThan(0);

    // Check for open/read/write operations
    const calls = fsTraces.map(t => t[2]);
    expect(calls).toContain("open");
  });

  test("trace open and close operations", async () => {
    using dir = tempDir("trace-open-close", {
      "test.js": `
        import { openSync, closeSync } from "fs";
        const fd = openSync("output.txt", "w");
        closeSync(fd);
        console.log("done");
      `,
    });

    const traceFile = join(String(dir), "trace.jsonl");

    await using proc = Bun.spawn({
      cmd: [bunExe(), "--trace", traceFile, "test.js"],
      env: bunEnv,
      cwd: String(dir),
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    expect(stdout).toBe("done\n");
    expect(exitCode).toBe(0);

    // Parse trace file
    const traceContent = readFileSync(traceFile, "utf8");
    const traceLines = traceContent
      .trim()
      .split("\n")
      .filter(line => line.length > 0);

    const traces = traceLines.map(line => JSON.parse(line));

    // Should have fs namespace entries
    const fsTraces = traces.filter(t => t[0] === "fs");
    expect(fsTraces.length).toBeGreaterThan(0);

    // Check that we logged open and close operations
    const calls = fsTraces.map(t => t[2]);
    expect(calls).toContain("open");
    expect(calls).toContain("close");
  });

  test("trace fetch operations", async () => {
    using dir = tempDir("trace-fetch", {
      "test.js": `
        const response = await fetch("https://example.com");
        const text = await response.text();
        console.log("fetched");
      `,
    });

    const traceFile = join(String(dir), "trace.jsonl");

    await using proc = Bun.spawn({
      cmd: [bunExe(), "--trace", traceFile, "test.js"],
      env: bunEnv,
      cwd: String(dir),
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    expect(stdout).toBe("fetched\n");
    expect(exitCode).toBe(0);

    // Parse trace file
    const traceContent = readFileSync(traceFile, "utf8");
    const traceLines = traceContent
      .trim()
      .split("\n")
      .filter(line => line.length > 0);

    const traces = traceLines.map(line => JSON.parse(line));

    // Should have fetch namespace entries
    const fetchTraces = traces.filter(t => t[0] === "fetch");
    expect(fetchTraces.length).toBeGreaterThan(0);

    // Check for response
    const responseCalls = fetchTraces.filter(t => t[2] === "response");
    expect(responseCalls.length).toBeGreaterThan(0);

    // Should have URL
    expect(responseCalls[0][3].url).toContain("example.com");
  });

  test("trace response body operations", async () => {
    using dir = tempDir("trace-response-body", {
      "test.js": `
        const response = await fetch("https://example.com");
        const text = await response.text();
        console.log("done");
      `,
    });

    const traceFile = join(String(dir), "trace.jsonl");

    await using proc = Bun.spawn({
      cmd: [bunExe(), "--trace", traceFile, "test.js"],
      env: bunEnv,
      cwd: String(dir),
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    expect(stdout).toBe("done\n");
    expect(exitCode).toBe(0);

    // Parse trace file
    const traceContent = readFileSync(traceFile, "utf8");
    const traceLines = traceContent
      .trim()
      .split("\n")
      .filter(line => line.length > 0);

    const traces = traceLines.map(line => JSON.parse(line));

    // Should have response_body namespace entries
    const bodyTraces = traces.filter(t => t[0] === "response_body");
    expect(bodyTraces.length).toBeGreaterThan(0);

    // Check for text call
    const textCalls = bodyTraces.filter(t => t[2] === "text");
    expect(textCalls.length).toBeGreaterThan(0);
  });

  test("trace format validation", async () => {
    using dir = tempDir("trace-format", {
      "test.js": `
        import { readFileSync } from "fs";
        const data = readFileSync("test.js", "utf8");
      `,
    });

    const traceFile = join(String(dir), "trace.jsonl");

    await using proc = Bun.spawn({
      cmd: [bunExe(), "--trace", traceFile, "test.js"],
      env: bunEnv,
      cwd: String(dir),
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    expect(exitCode).toBe(0);

    // Parse trace file
    const traceContent = readFileSync(traceFile, "utf8");
    const traceLines = traceContent
      .trim()
      .split("\n")
      .filter(line => line.length > 0);

    // Each line should be valid JSON
    for (const line of traceLines) {
      const trace = JSON.parse(line);

      // Should have required fields
      expect(trace[0]).toBeDefined();
      expect(trace[1]).toBeDefined();
      expect(trace[3]).toBeDefined();

      // ns should be a string
      expect(typeof trace[0]).toBe("string");

      // ts should be a number (timestamp)
      expect(typeof trace[1]).toBe("number");

      // data should be an object
      expect(typeof trace[3]).toBe("object");
    }
  });

  test("trace file error handling", async () => {
    using dir = tempDir("trace-error", {
      "test.js": `console.log("hello");`,
    });

    const traceFile = "/invalid/path/trace.jsonl";

    await using proc = Bun.spawn({
      cmd: [bunExe(), "--trace", traceFile, "test.js"],
      env: bunEnv,
      cwd: String(dir),
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    // Should fail to open trace file
    expect(exitCode).toBe(1);
    expect(stderr).toContain("Failed to open trace file");
  });

  test("trace high-level readFile/writeFile", async () => {
    using dir = tempDir("trace-highlevel-rw", {
      "test.js": `
        import { readFileSync, writeFileSync } from "fs";
        writeFileSync("test.txt", "hello world");
        const content = readFileSync("test.txt", "utf8");
        console.log("content:", content);
      `,
    });

    const traceFile = join(String(dir), "trace.jsonl");

    await using proc = Bun.spawn({
      cmd: [bunExe(), "--trace", traceFile, "test.js"],
      env: bunEnv,
      cwd: String(dir),
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    expect(exitCode).toBe(0);
    expect(stdout).toContain("hello world");

    const traceContent = readFileSync(traceFile, "utf8");
    const traces = traceContent
      .trim()
      .split("\n")
      .filter(l => l.length > 0)
      .map(l => JSON.parse(l));

    const fsTraces = traces.filter(t => t[0] === "fs");
    expect(fsTraces.length).toBeGreaterThan(0);

    // Check for writeFile
    const writeCalls = fsTraces.filter(t => t[2] === "writeFile");
    expect(writeCalls.length).toBeGreaterThan(0);
    expect(writeCalls.some(t => t[3].path && t[3].path.includes("test.txt"))).toBe(true);

    // Check for readFile
    const readCalls = fsTraces.filter(t => t[2] === "readFile");
    expect(readCalls.length).toBeGreaterThan(0);
    expect(readCalls.some(t => t[3].path && t[3].path.includes("test.txt"))).toBe(true);
  });

  test("trace stat operations", async () => {
    using dir = tempDir("trace-stat", {
      "test.js": `
        import { writeFileSync, statSync } from "fs";
        writeFileSync("test.txt", "data");
        const stats = statSync("test.txt");
        console.log("size:", stats.size);
      `,
    });

    const traceFile = join(String(dir), "trace.jsonl");

    await using proc = Bun.spawn({
      cmd: [bunExe(), "--trace", traceFile, "test.js"],
      env: bunEnv,
      cwd: String(dir),
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    expect(exitCode).toBe(0);

    const traceContent = readFileSync(traceFile, "utf8");
    const traces = traceContent
      .trim()
      .split("\n")
      .filter(l => l.length > 0)
      .map(l => JSON.parse(l));

    const fsTraces = traces.filter(t => t[0] === "fs");

    // Check for stat
    const statCalls = fsTraces.filter(t => t[2] === "stat");
    expect(statCalls.length).toBeGreaterThan(0);
    expect(statCalls.some(t => t[3].path && t[3].path.includes("test.txt"))).toBe(true);
  });

  test("trace directory operations", async () => {
    using dir = tempDir("trace-dir-ops", {
      "test.js": `
        import { mkdirSync, rmdirSync, readdirSync, writeFileSync, unlinkSync } from "fs";
        mkdirSync("testdir");
        writeFileSync("testdir/file.txt", "data");
        const files = readdirSync("testdir");
        console.log("files:", files);
        unlinkSync("testdir/file.txt");
        rmdirSync("testdir");
      `,
    });

    const traceFile = join(String(dir), "trace.jsonl");

    await using proc = Bun.spawn({
      cmd: [bunExe(), "--trace", traceFile, "test.js"],
      env: bunEnv,
      cwd: String(dir),
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    expect(exitCode).toBe(0);

    const traceContent = readFileSync(traceFile, "utf8");
    const traces = traceContent
      .trim()
      .split("\n")
      .filter(l => l.length > 0)
      .map(l => JSON.parse(l));

    const fsTraces = traces.filter(t => t[0] === "fs");

    // Check for mkdir
    const mkdirCalls = fsTraces.filter(t => t[2] === "mkdir");
    expect(mkdirCalls.length).toBeGreaterThan(0);

    // Check for readdir
    const readdirCalls = fsTraces.filter(t => t[2] === "readdir");
    expect(readdirCalls.length).toBeGreaterThan(0);

    // Check for unlink
    const unlinkCalls = fsTraces.filter(t => t[2] === "unlink");
    expect(unlinkCalls.length).toBeGreaterThan(0);

    // Check for rmdir
    const rmdirCalls = fsTraces.filter(t => t[2] === "rmdir");
    expect(rmdirCalls.length).toBeGreaterThan(0);
  });

  test("trace rename operations", async () => {
    using dir = tempDir("trace-rename", {
      "test.js": `
        import { writeFileSync, renameSync, unlinkSync } from "fs";
        writeFileSync("old.txt", "data");
        renameSync("old.txt", "new.txt");
        unlinkSync("new.txt");
        console.log("done");
      `,
    });

    const traceFile = join(String(dir), "trace.jsonl");

    await using proc = Bun.spawn({
      cmd: [bunExe(), "--trace", traceFile, "test.js"],
      env: bunEnv,
      cwd: String(dir),
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    expect(exitCode).toBe(0);

    const traceContent = readFileSync(traceFile, "utf8");
    const traces = traceContent
      .trim()
      .split("\n")
      .filter(l => l.length > 0)
      .map(l => JSON.parse(l));

    const fsTraces = traces.filter(t => t[0] === "fs");

    // Check for rename
    const renameCalls = fsTraces.filter(t => t[2] === "rename");
    expect(renameCalls.length).toBeGreaterThan(0);
  });
});
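Every test above repeats the same parse-and-filter boilerplate for the JSONL trace file, indexing each decoded line positionally. Under the tuple shape those assertions imply (`[ns, ts, call, data]`), the boilerplate could be factored into a small typed helper. A sketch, where `TraceEvent`, `parseTraces`, and `byNamespace` are illustrative names, not existing harness utilities:

```typescript
// Event shape inferred from the tests: [namespace, timestamp, call, data].
type TraceEvent = [string, number, string, Record<string, unknown>];

// Parse JSONL trace-file contents into typed events, skipping blank lines,
// exactly as each test does inline with trim/split/filter/map.
function parseTraces(content: string): TraceEvent[] {
  return content
    .trim()
    .split("\n")
    .filter(l => l.length > 0)
    .map(l => JSON.parse(l) as TraceEvent);
}

// Group events by namespace, mirroring the repeated t[0] === "ns" filters.
function byNamespace(events: TraceEvent[]): Map<string, TraceEvent[]> {
  const groups = new Map<string, TraceEvent[]>();
  for (const e of events) {
    const list = groups.get(e[0]) ?? [];
    list.push(e);
    groups.set(e[0], list);
  }
  return groups;
}

const sample = [
  JSON.stringify(["fs", 1, "open", { path: "test.txt" }]),
  JSON.stringify(["fetch", 2, "request", { url: "https://example.com", method: "GET" }]),
].join("\n");

const events = parseTraces(sample);
console.log(events.length, byNamespace(events).get("fs")?.length); // → 2 1
```

With such a helper, a test body reduces to `byNamespace(parseTraces(readFileSync(traceFile, "utf8")))` followed by its assertions.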