Mirror of https://github.com/oven-sh/bun, synced 2026-02-09 10:28:47 +00:00
Add comprehensive documentation for --trace flag
Added two documentation files to help users and future developers understand and use the trace functionality:

1. docs/cli/trace.md - User-facing documentation covering:
   - Usage examples and CLI syntax
   - Complete list of traced operations
   - Trace event format and fields
   - Analysis examples with jq
   - Common use cases (debugging, testing, AI analysis)
   - Performance impact details
   - Implementation details

2. test/js/bun/trace/README.md - Developer documentation covering:
   - Test file organization
   - How to write new trace tests
   - Important test patterns (instrumented vs non-instrumented ops)
   - Common assertions and debugging tips
   - Coverage matrix showing what's tested
   - Guidelines for adding new trace points

These docs will help future Claudes (and humans) effectively use and extend the trace functionality.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
docs/pm/cli/trace.md (new file, 230 lines):
# Trace Flag

The `--trace` flag enables structured logging of I/O operations to help you understand what your application is doing without manual instrumentation.

## Usage

```bash
bun --trace=trace.jsonl your-script.js
```

This creates a trace file in line-delimited JSON (JSONL) format, where each line is a complete trace event.
## What Gets Traced

### File System Operations (`ns: "fs"`)

**Low-level operations:**

- `open` - File descriptor allocation
- `read` / `write` - Data transfer operations
- `close` - File descriptor release

**High-level operations:**

- `readFile` / `writeFile` - Complete file read/write
- `stat` / `lstat` / `fstat` - File metadata queries
- `mkdir` / `rmdir` - Directory creation/removal
- `unlink` - File deletion
- `rename` - File/directory renaming
- `readdir` - Directory listing
### HTTP Operations (`ns: "fetch"`)

- `request` - HTTP request initiation
- `response` - HTTP response received

### Response Body Consumption (`ns: "response_body"`)

- `text` / `json` / `arrayBuffer` / `blob` - Response body parsing

### Bun.write Operations (`ns: "bun_write"`)

- `write` - File write operations via `Bun.write()`
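For illustration, a single `fetch()` might produce events like the following. The values here are representative only, not captured output; the field names follow the HTTP fields documented under "Common Trace Data Fields" below:

```jsonl
{"ns":"fetch","ts":1761595797100,"data":{"call":"request","url":"https://example.com/","method":"GET"}}
{"ns":"fetch","ts":1761595797250,"data":{"call":"response","url":"https://example.com/","status":"OK","body_size":1256,"has_more":false}}
{"ns":"response_body","ts":1761595797251,"data":{"call":"text","status":"immediate"}}
```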
## Trace Event Format

Each trace event is a JSON object with:

```json
{
  "ns": "fs",           // Namespace (fs, fetch, response_body, bun_write)
  "ts": 1761595797038,  // Timestamp in milliseconds
  "data": {             // Operation-specific data
    "call": "readFile",
    "path": "/path/to/file",
    "bytes_read": 1234
  }
}
```

Only applicable fields are included - no null/undefined values.
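Because inapplicable fields are omitted rather than set to null, analysis code should treat every `data` field as optional. A minimal sketch (assuming a `trace.jsonl` produced by `--trace`):

```javascript
const { readFileSync } = require("fs");

// Parse the JSONL trace and sum bytes_read, defaulting omitted fields to 0.
const traces = readFileSync("trace.jsonl", "utf8").trim().split("\n").map(JSON.parse);
const totalBytesRead = traces
  .filter(t => t.ns === "fs")
  .reduce((sum, t) => sum + (t.data.bytes_read ?? 0), 0);
console.log({ totalBytesRead });
```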
## Example: Analyzing a Trace

```bash
# Run your script with tracing
bun --trace=trace.jsonl my-app.js

# View all filesystem operations
cat trace.jsonl | jq 'select(.ns == "fs")'

# Count operations by type
cat trace.jsonl | jq -r '.data.call' | sort | uniq -c

# List fetch events with timestamps (finding slow operations requires pairing entry/exit events and calculating deltas)
cat trace.jsonl | jq -c 'select(.ns == "fetch") | {ts, url: .data.url}'

# Total bytes read/written
cat trace.jsonl | jq -s 'map(select(.ns == "fs")) |
  {
    bytes_read: map(.data.bytes_read // 0) | add,
    bytes_written: map(.data.bytes_written // 0) | add
  }'
```
## Example Trace Output

```bash
$ bun --trace=trace.jsonl -e 'import fs from "fs"; fs.writeFileSync("test.txt", "hello"); console.log(fs.readFileSync("test.txt", "utf8"))'
hello
$ cat trace.jsonl
{"ns":"fs","ts":1761595797038,"data":{"call":"writeFile","path":"test.txt","length":5}}
{"ns":"fs","ts":1761595797038,"data":{"call":"writeFile","path":"test.txt","bytes_written":5}}
{"ns":"fs","ts":1761595797038,"data":{"call":"readFile","path":"test.txt","encoding":"utf8"}}
{"ns":"fs","ts":1761595797039,"data":{"call":"readFile","path":"test.txt","bytes_read":5,"fast_path":true}}
```
## Common Trace Data Fields

### Filesystem Operations

**Entry traces** (operation starting):
- `call` - Operation name
- `path` - File/directory path
- `fd` - File descriptor (for fd-based ops)
- `flags` - Open flags (for open)
- `mode` - File permissions
- `length` - Buffer/data length
- `offset` - Buffer offset
- `position` - File position (for positioned reads/writes)
- `encoding` - Character encoding (for readFile)
- `recursive` - Recursive flag (for mkdir/rmdir/readdir)

**Exit traces** (operation complete):
- `call` - Operation name (same as entry)
- `path` / `fd` - Identifier (same as entry)
- `bytes_read` - Bytes successfully read
- `bytes_written` - Bytes successfully written
- `size` - File size (for stat)
- `mode` - File mode bits (for stat)
- `entries` - Number of entries (for readdir)
- `success` - Boolean success flag
- `errno` - Error number (on failure)
- `fast_path` - Used optimized path (readFile)
### HTTP Operations

**Request trace:**
- `call: "request"`
- `url` - Request URL
- `method` - HTTP method (GET, POST, etc.)

**Response trace:**
- `call: "response"`
- `url` - Request URL
- `status` - HTTP status (OK, NotFound, etc.)
- `body_size` - Response body size in bytes
- `has_more` - Whether the response is still streaming
- `err` - Error name (on failure)

**Body consumption trace:**
- `call` - Consumption method (text, json, arrayBuffer, blob)
- `status` - Consumption status (immediate, streaming, already_used)
## Use Cases

### Debugging I/O Performance

Find which operations are slow by analyzing timestamps:

```javascript
const traces = require('fs').readFileSync('trace.jsonl', 'utf8')
  .trim().split('\n').map(JSON.parse);

// Find operations and their durations
const ops = new Map();
traces.forEach(t => {
  const key = `${t.ns}.${t.data.call}.${t.data.path || t.data.url}`;
  if (!ops.has(key)) {
    ops.set(key, { start: t.ts, events: [] });
  }
  ops.get(key).events.push(t);
  ops.get(key).end = t.ts;
});

// Show slowest operations
Array.from(ops.entries())
  .map(([k, v]) => ({ op: k, duration: v.end - v.start }))
  .sort((a, b) => b.duration - a.duration)
  .slice(0, 10)
  .forEach(x => console.log(`${x.op}: ${x.duration}ms`));
```
### Understanding Application Behavior

AI agents can analyze traces to understand:
- What files the application reads/writes
- What external services it contacts
- Order of operations
- Error patterns
- Resource usage (bytes read/written, number of requests)
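As a concrete starting point, the sketch below (plain JavaScript, assuming a `trace.jsonl` produced by `--trace`; the chosen summary fields are just one possibility) builds such a summary:

```javascript
// Minimal sketch: summarize files touched and hosts contacted from trace.jsonl
const { readFileSync } = require("fs");

const traces = readFileSync("trace.jsonl", "utf8")
  .trim()
  .split("\n")
  .map(line => JSON.parse(line));

const filesTouched = new Set();
const hostsContacted = new Set();
let bytesRead = 0;
let bytesWritten = 0;

for (const t of traces) {
  if (t.ns === "fs") {
    if (t.data.path) filesTouched.add(t.data.path);
    bytesRead += t.data.bytes_read ?? 0;
    bytesWritten += t.data.bytes_written ?? 0;
  } else if (t.ns === "fetch" && t.data.url) {
    hostsContacted.add(new URL(t.data.url).host);
  }
}

console.log("Files touched:", [...filesTouched]);
console.log("Hosts contacted:", [...hostsContacted]);
console.log(`Bytes read: ${bytesRead}, bytes written: ${bytesWritten}`);
```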
### Testing and Validation

Verify your application's I/O behavior:

```javascript
const assert = require("node:assert");

// Check that the app only writes to the temp directory
// (getTraces() stands in for parsing trace.jsonl, as shown in the analysis example above)
const traces = getTraces();
const writes = traces.filter(t =>
  t.ns === 'fs' &&
  ['write', 'writeFile'].includes(t.data.call)
);
writes.forEach(w => {
  assert(w.data.path.startsWith('/tmp'),
    `Unexpected write to ${w.data.path}`);
});
```
## Performance Impact

Tracing has **minimal overhead** when disabled (the default):
- Single boolean check per instrumentation point
- No file I/O, JSON serialization, or string allocation

When enabled, tracing adds:
- JSON serialization overhead per event
- Mutex locking for thread-safe file writes
- Buffered I/O to the trace file

For CPU-bound applications, the overhead is negligible. For I/O-bound applications with high operation counts (thousands of operations per second), expect roughly 1-5% overhead.
## Limitations

- Trace file uses append-only writes (no rotation)
- Large traces can consume significant disk space
- No built-in filtering (logs all operations in traced namespaces)
- Timestamps are system time (not monotonic/high-resolution)
- No support for trace compression
- No built-in trace analysis tools (use jq, custom scripts, or tools like Jaeger)
## Implementation Details

The trace implementation is integrated into Bun's core I/O layer:
- `src/output.zig` - Core tracing infrastructure
- `src/cli/Arguments.zig` - CLI flag parsing
- `src/bun.js/node/node_fs.zig` - Filesystem operation tracing
- `src/bun.js/webcore/fetch.zig` - HTTP request/response tracing
- `src/bun.js/webcore/Body.zig` - Response body tracing
- `src/bun.js/webcore/blob/write_file.zig` - Bun.write tracing

Each instrumentation point checks `Output.trace_enabled` before tracing, ensuring zero overhead when disabled.
test/js/bun/trace/README.md (new file, 304 lines):
# Trace Tests

Tests for the `--trace` flag functionality that provides structured logging of I/O operations.

## Test Files

### `trace.test.ts`
Basic functionality tests covering:
- Trace file creation and format
- Low-level fs operations (open, read, write, close)
- High-level fs operations (readFile, writeFile, stat, mkdir, rmdir, unlink, rename, readdir)
- HTTP/fetch operations
- Response body consumption
- Error handling (invalid trace file path)

### `extensive-trace.test.ts`
Comprehensive integration tests covering:
- Multiple sequential fs operations
- Multiple sequential HTTP requests
- Mixed fs + HTTP operations
- Namespace filtering
- Chronological ordering
- Complete trace format validation
## Running Tests

```bash
# Run all trace tests
bun bd test test/js/bun/trace/

# Run specific test file
bun bd test test/js/bun/trace/trace.test.ts

# Run specific test
bun bd test test/js/bun/trace/trace.test.ts -t "trace high-level"
```
## Writing New Trace Tests

### Test Structure

```typescript
import { test, expect } from "bun:test";
import { readFileSync } from "fs";
import { bunEnv, bunExe, tempDir } from "harness";
import { join } from "path";

test("your test name", async () => {
  // 1. Create test files in temp directory
  using dir = tempDir("test-prefix", {
    "test.js": `
      // Your test script here
      import { writeFileSync } from "fs";
      writeFileSync("test.txt", "data");
      console.log("done");
    `,
  });

  // 2. Set up trace file path
  const traceFile = join(String(dir), "trace.jsonl");

  // 3. Spawn bun with --trace flag
  await using proc = Bun.spawn({
    cmd: [bunExe(), "--trace", traceFile, "test.js"],
    env: bunEnv,
    cwd: String(dir),
    stderr: "pipe",
    stdout: "pipe",
  });

  // 4. Wait for completion
  const [stdout, stderr, exitCode] = await Promise.all([
    proc.stdout.text(),
    proc.stderr.text(),
    proc.exited,
  ]);

  // 5. Verify execution
  expect(exitCode).toBe(0);
  expect(stdout).toContain("done");

  // 6. Parse and validate trace
  const traceContent = readFileSync(traceFile, "utf8");
  const traces = traceContent
    .trim()
    .split("\n")
    .filter(l => l.length > 0)
    .map(l => JSON.parse(l));

  // 7. Assert trace properties
  const fsTraces = traces.filter(t => t.ns === "fs");
  expect(fsTraces.length).toBeGreaterThan(0);

  const writeCalls = fsTraces.filter(t => t.data.call === "writeFile");
  expect(writeCalls.length).toBeGreaterThan(0);
});
```
### Important Test Patterns

#### Use Instrumented Operations

Not all fs operations go through instrumented code paths. Use these patterns:

**✅ DO** - These operations ARE traced:
```javascript
// Low-level sync operations
import { openSync, readSync, writeSync, closeSync } from "fs";
const fd = openSync("file.txt", "w");
writeSync(fd, "data");
closeSync(fd);

// High-level sync operations
import { readFileSync, writeFileSync, statSync } from "fs";
writeFileSync("file.txt", "data");
const content = readFileSync("file.txt", "utf8");
const stats = statSync("file.txt");

// Directory operations
import { mkdirSync, readdirSync, rmdirSync, unlinkSync, renameSync } from "fs";
mkdirSync("dir");
const files = readdirSync("dir");
```
**❌ DON'T** - These operations may NOT be traced:
```javascript
// Some async operations may use different code paths
import { readFile, writeFile } from "fs/promises";
await writeFile("file.txt", "data"); // May not be traced
```
#### Testing Fetch Operations

Fetch tests can be slow. Set appropriate timeouts:

```typescript
test(
  "fetch test",
  async () => {
    // Test code that makes HTTP requests
  },
  10_000, // 10 second timeout
);
```

Use `example.com` for test fetches (fast, reliable):
```javascript
const response = await fetch("https://example.com");
```
#### Namespace Filtering

Test that traces are properly namespaced:

```typescript
const fsTraces = traces.filter(t => t.ns === "fs");
const fetchTraces = traces.filter(t => t.ns === "fetch");
const bodyTraces = traces.filter(t => t.ns === "response_body");
const bunWriteTraces = traces.filter(t => t.ns === "bun_write");
```
#### Entry/Exit Traces

Many operations have both entry and exit traces:

```
// Entry trace: operation starting
{"ns":"fs","ts":1761595797038,"data":{"call":"writeFile","path":"test.txt","length":5}}

// Exit trace: operation complete
{"ns":"fs","ts":1761595797038,"data":{"call":"writeFile","path":"test.txt","bytes_written":5}}
```

Test both:
```typescript
const writeCalls = fsTraces.filter(t => t.data.call === "writeFile");
const entryTrace = writeCalls.find(t => t.data.length !== undefined);
const exitTrace = writeCalls.find(t => t.data.bytes_written !== undefined);
expect(entryTrace).toBeDefined();
expect(exitTrace).toBeDefined();
```
### Common Assertions

```typescript
// Trace file exists and has content
expect(traceContent.length).toBeGreaterThan(0);

// Valid JSON lines
const traces = traceContent.trim().split("\n").map(l => JSON.parse(l));

// Required fields
traces.forEach(t => {
  expect(t).toHaveProperty("ns");
  expect(t).toHaveProperty("ts");
  expect(t).toHaveProperty("data");
  expect(t.data).toHaveProperty("call");
});

// Specific operation traced
expect(traces.some(t => t.data.call === "writeFile")).toBe(true);

// Path included in trace
expect(traces.some(t => t.data.path?.includes("test.txt"))).toBe(true);

// Chronological ordering
for (let i = 1; i < traces.length; i++) {
  expect(traces[i].ts).toBeGreaterThanOrEqual(traces[i - 1].ts);
}
```
## Debugging Failed Tests

### Test Fails: Empty Trace File

**Cause**: Operation doesn't go through an instrumented code path

**Solution**: Use sync operations or high-level APIs that are instrumented:
```typescript
// Instead of this:
await Bun.write("file.txt", "data");

// Use this:
import { writeFileSync } from "fs";
writeFileSync("file.txt", "data");
```
### Test Fails: Missing Expected Operations

**Cause**: Wrong namespace filter or operation name

**Solution**: Print all traces to see what's actually logged:
```typescript
console.log("All traces:", JSON.stringify(traces, null, 2));
```
### Test Fails: Timeout on Fetch Tests

**Cause**: External HTTP requests can be slow

**Solution**: Increase the timeout or use faster endpoints:
```typescript
test(
  "fetch test",
  async () => { /* ... */ },
  15_000, // Increase timeout
);
```
## Adding New Trace Points

If you add new trace instrumentation to Bun's source code:

1. **Add the trace call** in the relevant source file:
   ```zig
   traceFS(.{ .call = "newOperation", .path = path, .result = result });
   ```

2. **Rebuild Bun**:
   ```bash
   bun bd
   ```

3. **Add a test** in `trace.test.ts`:
   ```typescript
   test("trace new operation", async () => {
     // Test code that triggers the new operation
     // Verify trace includes expected data
   });
   ```

4. **Verify the test** passes with your changes and fails without them:
   ```bash
   # Should pass
   bun bd test test/js/bun/trace/trace.test.ts -t "new operation"

   # Should fail (operation not traced)
   USE_SYSTEM_BUN=1 bun test test/js/bun/trace/trace.test.ts -t "new operation"
   ```
## Coverage Matrix

Current test coverage:

| Operation | Low-level API | High-level API |
|-----------|---------------|----------------|
| File Read | ✅ `readSync` | ✅ `readFileSync` |
| File Write | ✅ `writeSync` | ✅ `writeFileSync` |
| File Open | ✅ `openSync` | N/A |
| File Close | ✅ `closeSync` | N/A |
| File Stat | N/A | ✅ `statSync` |
| Dir Create | N/A | ✅ `mkdirSync` |
| Dir Remove | N/A | ✅ `rmdirSync` |
| Dir List | N/A | ✅ `readdirSync` |
| File Delete | N/A | ✅ `unlinkSync` |
| File Rename | N/A | ✅ `renameSync` |
| HTTP Request | N/A | ✅ `fetch()` |
| Response Body | N/A | ✅ `.text()`, `.json()` |
| Bun.write | N/A | ⚠️ Instrumented but not tested |

**Legend:**
- ✅ Fully tested
- ⚠️ Instrumented but needs tests
- N/A - Not applicable
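If you pick up the missing `Bun.write` coverage, a possible starting point is sketched below. It reuses the test structure above; the assertion that `Bun.write` emits `ns: "bun_write"` events with `call: "write"` follows the trace flag documentation, while the remaining data fields are left unchecked because they are not documented:

```typescript
import { test, expect } from "bun:test";
import { readFileSync } from "fs";
import { bunEnv, bunExe, tempDir } from "harness";
import { join } from "path";

test("trace Bun.write", async () => {
  using dir = tempDir("trace-bun-write", {
    "test.js": `await Bun.write("out.txt", "hello"); console.log("done");`,
  });
  const traceFile = join(String(dir), "trace.jsonl");

  await using proc = Bun.spawn({
    cmd: [bunExe(), "--trace", traceFile, "test.js"],
    env: bunEnv,
    cwd: String(dir),
    stdout: "pipe",
    stderr: "pipe",
  });
  expect(await proc.exited).toBe(0);

  const traces = readFileSync(traceFile, "utf8")
    .trim()
    .split("\n")
    .filter(l => l.length > 0)
    .map(l => JSON.parse(l));

  // Assumption: Bun.write emits events in the "bun_write" namespace with call "write"
  const bunWriteTraces = traces.filter(t => t.ns === "bun_write");
  expect(bunWriteTraces.length).toBeGreaterThan(0);
  expect(bunWriteTraces.some(t => t.data.call === "write")).toBe(true);
});
```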