Compare commits

...

6 Commits

Author SHA1 Message Date
Jarred Sumner
9e6a6148e2 Include --feature flags in RuntimeTranspilerCache hash
The feature("NAME") builtin from bun:bundle is replaced at transpile time
based on --feature CLI flags, but these flags weren't part of the cache
key. This caused stale output when toggling flags between runs.

- Sort feature flag keys on init so CLI flag order doesn't affect the hash
- Append each flag (null-terminated) to the transpiler cache hasher
- When no flags are set, nothing is added, preserving existing cache hits
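The bullets above amount to: sort the flags, then hash each one with a terminator. A minimal TypeScript sketch of that idea (`featureHash` and the use of SHA-256 are illustrative; Bun's actual hasher is an incremental hasher in Zig):

```typescript
import { createHash } from "node:crypto";

// Illustrative sketch of the cache-key fix: sort the flags so CLI order
// can't change the key, then feed each flag null-terminated into the hash.
// An empty flag list adds nothing, mirroring the "preserve cache hits" bullet.
export function featureHash(flags: string[]): string {
  const h = createHash("sha256");
  for (const flag of [...flags].sort()) {
    h.update(flag);
    h.update("\x00"); // terminator so ["ab", "c"] and ["a", "bc"] hash differently
  }
  return h.digest("hex");
}
```

With this shape, `featureHash(["B", "A"])` equals `featureHash(["A", "B"])`, and toggling a flag between runs changes the digest, which is exactly what invalidates the stale cache entry.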
2026-03-10 18:05:42 -07:00
robobun
120f06e581 Add GitHub Action to find issues a PR might fix (#27991)
## Summary
- Adds a Claude-powered bot that runs when PRs are opened and searches
for open issues the PR may resolve
- Uses the same pattern as the existing dedupe issues bot: a workflow
triggers Claude Code with a `/find-issues` slash command
- The command reads the PR diff, launches 5 parallel search agents with
diverse keyword strategies, filters false positives, and comments on the
PR with up to 5 related open issues

## Test plan
- [ ] Open a test PR and verify the workflow triggers
- [ ] Verify the bot comments with relevant issues and doesn't duplicate
comments on re-runs
- [ ] Verify issues already referenced in the PR body are excluded from
results

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-10 16:07:19 -07:00
igorkofman
94292f276b fix(process): check listener count before uninstalling signal handler (#27986)
## What does this PR do?

Fixes a bug where removing **one** of multiple signal listeners via
`process.off("SIGxxx", handler)` would unconditionally uninstall the OS
signal handler, breaking any remaining listeners for that signal.

### Root cause

`onDidChangeListeners` in BunProcess.cpp fires on **every** listener
add/remove (not just the first add or last remove). There, the
signal-removal path checked only whether the signal was registered in
`signalToContextIdsMap`, then unconditionally tore down the OS handler —
even if other JS listeners remained.

The IPC case a few lines above handles this correctly by checking
`totalListenerCount == 0`. This PR applies the same guard to the signal
path.

### Fix

Add `&& eventEmitter.listenerCount(eventName) == 0` to the removal
condition so the OS signal handler is only uninstalled when no listeners
remain.
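As a self-contained illustration of that guard (a plain Node-style `EventEmitter` in TypeScript, not the actual C++ in BunProcess.cpp; the helper names are hypothetical):

```typescript
import { EventEmitter } from "node:events";

// Sketch: track a stand-in "OS handler" flag and only install/uninstall it
// on the 0 -> 1 and 1 -> 0 listener-count transitions. The bug was the
// removal path skipping the listenerCount() check.
const emitter = new EventEmitter();
export let osHandlerInstalled = false;

export function addSignalListener(signal: string, fn: () => void): void {
  emitter.on(signal, fn);
  if (emitter.listenerCount(signal) === 1) osHandlerInstalled = true;
}

export function removeSignalListener(signal: string, fn: () => void): void {
  emitter.off(signal, fn);
  // The guard this PR adds: only tear down when no listeners remain.
  if (emitter.listenerCount(signal) === 0) osHandlerInstalled = false;
}
```

Removing one of two listeners leaves `osHandlerInstalled` true; removing both flips it false, mirroring tests 1 and 2 in the test section.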

### Test

`test/js/node/process/process-signal-listener-count.test.ts` verifies:
1. Removing one of multiple listeners keeps the handler installed (fails
on `main`, passes with fix)
2. Removing all listeners properly uninstalls (default signal behavior
kills the process)
3. Re-adding a listener after removing all reinstalls the handler

## How did you verify your code works?

- `USE_SYSTEM_BUN=1 bun test` → test 1 fails (confirms bug exists in
current release)
- `bun bd test` → all 3 tests pass
- Ran existing signal tests (`ctrl-c.test.ts`) — all pass

Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
2026-03-10 15:24:48 -07:00
Dylan Conway
f1a040d331 Revert "fix(windows): use TerminateProcess to prevent NAPI module segfault on exit (#27829)"
This reverts commit b2e657ec51.
2026-03-10 14:04:52 -07:00
kashyap murali
2eb2b01823 docs: add Oxford comma to platform support list in README (#27953)
Adds a serial (Oxford) comma before "and Windows" in the platform
support line for grammatical consistency.
2026-03-10 00:21:31 -07:00
robobun
05026087b3 fix(watch): fix off-by-one in file:// URL prefix stripping (#27970)
## Summary

- Fix off-by-one error when stripping `file://` prefix in
`node_fs_watcher.zig` and `node_fs_stat_watcher.zig`
- `"file://"` is 7 characters, but `slice[6..]` was used instead of
`slice[7..]`, retaining the second `/`
- Use `slice["file://".len..]` for clarity, matching the existing
pattern in `VirtualMachine.zig:1750`

The bug was masked by downstream path normalization in
`joinAbsStringBufZ` which collapses the duplicate leading slash (e.g.
`//tmp/foo` → `/tmp/foo`).
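The off-by-one is easy to reproduce in a few lines of TypeScript (variable names are illustrative):

```typescript
// "file://" is 7 characters; slicing at index 6 keeps the second slash.
const url = "file:///tmp/foo";
const buggy = url.slice(6);                // "//tmp/foo" (extra leading slash)
const fixed = url.slice("file://".length); // "/tmp/foo"
```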

## Test plan

- [x] Existing `fs.watch` URL tests pass
(`test/js/node/watch/fs.watch.test.ts`)
- [x] Existing `fs.watchFile` URL tests pass
(`test/js/node/watch/fs.watchFile.test.ts`)
- [x] Debug build compiles successfully

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-09 19:52:09 -07:00
10 changed files with 280 additions and 23 deletions


@@ -0,0 +1,50 @@
---
allowed-tools: Bash(gh pr view:*), Bash(gh pr diff:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh issue view:*), Bash(gh api:*), Bash(gh pr comment:*)
description: Find GitHub issues that a PR might fix
---
# Find issues for PR command
Find open GitHub issues that a pull request might fix. Include all likely matches — do not artificially limit the number of results.
To do this, follow these steps precisely:
1. Use an agent to check if the PR (a) is closed/merged, or (b) already has a related-issues comment (check for the exact HTML marker `<!-- find-issues-bot:marker -->` in the PR comments - ignore other bot comments). If so, do not proceed.
2. Use an agent to view the PR title, body, and diff (`gh pr view` and `gh pr diff`), and ask the agent to return a summary of:
- What the PR changes (files modified, functions changed, features added/fixed)
- Key technical terms, error messages, API names, or module names involved
- Any issue numbers already referenced in the PR body or commit messages
3. Then, launch 5 parallel agents to search GitHub for open issues that this PR might fix, using diverse keywords and search approaches derived from the summary in Step 2. **IMPORTANT**: Always scope searches with `repo:owner/repo` to constrain results to the current repository only. Each agent should try a different search strategy:
- Agent 1: Search using error messages or symptoms described in the diff
- Agent 2: Search using feature/module names from the changed files
- Agent 3: Search using API names or function names that were modified
- Agent 4: Search using keywords from the PR title and description
- Agent 5: Search using broader terms related to the area of code changed
4. Next, feed the results from Steps 2 and 3 into another agent, so that it can filter out false positives that are likely not actually related to the PR's changes. Exclude issues already referenced in the PR body (e.g. "fixes #123", "closes #456", "resolves #789"). Only keep issues where the PR changes are clearly relevant to the issue. If there are no related issues remaining, do not proceed.
5. Finally, comment on the PR with all related open issues found (or zero, if there are no likely matches). Do not cap the number — list every issue that is a likely match.
Notes (be sure to tell this to your agents, too):
- Use `gh` to interact with GitHub, rather than web fetch
- Do not use tools other than `gh` (e.g. don't use other MCP servers, file edit, etc.)
- Make a todo list first
- Always scope searches with `repo:owner/repo` to prevent cross-repo false positives
- Only match against **open** issues - do not suggest closed issues
- Exclude issues that are already linked in the PR description
- For your comment, follow the following format precisely (assuming for this example that you found 3 related issues):
---
Found 3 issues this PR may fix:
1. <link to issue> - <one-line summary of why this PR is relevant>
2. <link to issue> - <one-line summary of why this PR is relevant>
3. <link to issue> - <one-line summary of why this PR is relevant>
> If this is helpful, consider adding `Fixes #<number>` to the PR description to auto-close the issue on merge.
🤖 Generated with [Claude Code](https://claude.ai/code)
<!-- find-issues-bot:marker -->
---


@@ -0,0 +1,35 @@
name: Claude Find Issues for PR
on:
pull_request:
types: [opened]
workflow_dispatch:
inputs:
pr_number:
description: 'PR number to find related issues for'
required: true
type: string
jobs:
claude-find-issues:
runs-on: ubuntu-latest
timeout-minutes: 10
concurrency:
group: claude-find-issues-${{ github.event.pull_request.number || inputs.pr_number }}
cancel-in-progress: true
permissions:
contents: read
pull-requests: write
issues: read
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Run Claude Code slash command
uses: anthropics/claude-code-base-action@beta
with:
prompt: "/find-issues ${{ github.repository }}/pull/${{ github.event.pull_request.number || inputs.pr_number }}"
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: "--model claude-opus-4-6[1m]"
claude_env: |
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -43,7 +43,7 @@ bunx cowsay 'Hello, world!' # execute a package
## Install
-Bun supports Linux (x64 & arm64), macOS (x64 & Apple Silicon) and Windows (x64 & arm64).
+Bun supports Linux (x64 & arm64), macOS (x64 & Apple Silicon), and Windows (x64 & arm64).
> **Linux users** — Kernel version 5.6 or higher is strongly recommended, but the minimum is 5.1.


@@ -121,25 +121,7 @@ pub fn exit(code: u32) noreturn {
.mac => std.c.exit(@bitCast(code)),
.windows => {
Bun__onExit();
-// Use TerminateProcess instead of ExitProcess to skip
-// DLL_PROCESS_DETACH notifications. ExitProcess terminates all
-// threads first, then calls DllMain(DLL_PROCESS_DETACH) for every
-// loaded DLL. Native addons (NAPI modules such as skia-canvas)
-// can crash during this phase because their worker threads have
-// been killed and GC-dependent buffer finalizers haven't run.
-// TerminateProcess skips DLL cleanup entirely, matching the
-// behavior on Linux where quick_exit() also skips library
-// teardown. Bun's own cleanup has already run via Bun__onExit().
-const rc = std.os.windows.kernel32.TerminateProcess(
-std.os.windows.kernel32.GetCurrentProcess(),
-code,
-);
-// TerminateProcess should not return on the current process, but
-// if it somehow fails, fall back to ExitProcess.
-if (rc == 0) {
-std.os.windows.kernel32.ExitProcess(code);
-}
-unreachable;
+std.os.windows.kernel32.ExitProcess(code);
},
else => {
if (Environment.enable_asan) {


@@ -1507,7 +1507,7 @@ static void onDidChangeListeners(EventEmitter& eventEmitter, const Identifier& e
signalToContextIdsMap->set(signalNumber, signal_handle);
}
} else {
-if (signalToContextIdsMap->find(signalNumber) != signalToContextIdsMap->end()) {
+if (signalToContextIdsMap->find(signalNumber) != signalToContextIdsMap->end() && eventEmitter.listenerCount(eventName) == 0) {
#if !OS(WINDOWS)
if (void (*oldHandler)(int) = signal(signalNumber, SIG_DFL); oldHandler != forwardSignal) {


@@ -509,7 +509,7 @@ pub const StatWatcher = struct {
defer bun.path_buffer_pool.put(buf);
var slice = args.path.slice();
if (bun.strings.startsWith(slice, "file://")) {
-slice = slice[6..];
+slice = slice["file://".len..];
}
var parts = [_]string{slice};


@@ -632,7 +632,7 @@ pub const FSWatcher = struct {
const file_path: [:0]const u8 = brk: {
var slice = args.path.slice();
if (bun.strings.startsWith(slice, "file://")) {
-slice = slice[6..];
+slice = slice["file://".len..];
}
const cwd = bun.fs.FileSystem.instance.top_level_dir;


@@ -227,6 +227,7 @@ pub const Runtime = struct {
/// Initialize bundler feature flags for dead-code elimination via `import { feature } from "bun:bundle"`.
/// Returns a pointer to a StringSet containing the enabled flags, or the empty set if no flags are provided.
/// Keys are kept sorted so iteration order is deterministic (for RuntimeTranspilerCache hashing).
pub fn initBundlerFeatureFlags(allocator: std.mem.Allocator, feature_flags: []const []const u8) *const bun.StringSet {
if (feature_flags.len == 0) {
return &empty_bundler_feature_flags;
@@ -237,6 +238,12 @@ pub const Runtime = struct {
for (feature_flags) |flag| {
bun.handleOom(set.insert(flag));
}
set.map.sort(struct {
keys: []const []const u8,
pub fn lessThan(ctx: @This(), a: usize, b: usize) bool {
return std.mem.lessThan(u8, ctx.keys[a], ctx.keys[b]);
}
}{ .keys = set.map.keys() });
return set;
}
@@ -271,6 +278,15 @@ pub const Runtime = struct {
}
hasher.update(std.mem.asBytes(&bools));
// Hash --feature flags. These directly affect transpiled output via
// feature("NAME") replacement in visitExpr.zig. When empty, we add
// nothing to the hash so existing cache entries remain valid.
// Keys are sorted in initBundlerFeatureFlags so flag order on the CLI doesn't matter.
for (this.bundler_feature_flags.keys()) |flag| {
hasher.update(flag);
hasher.update("\x00");
}
}
pub fn shouldUnwrapRequire(this: *const Features, package_name: string) bool {


@@ -186,4 +186,42 @@ describe("transpiler cache", () => {
expect(b.stdout == "production 5");
expect(newCacheCount()).toBe(0);
});
test("--feature flag invalidates cache", () => {
// feature() can only appear in an if/ternary, so wrap it
const code = `import { feature } from "bun:bundle";\nif (feature("SUPER_SECRET")) console.log("enabled"); else console.log("disabled");`;
const filler = Buffer.alloc((50 * 1024 * 1.5) | 0, "/").toString();
writeFileSync(join(temp_dir, "a.js"), code + "\n//" + filler);
const run = (extra: string[]) => {
const result = Bun.spawnSync({
cmd: [bunExe(), ...extra, "a.js"],
cwd: temp_dir,
env,
});
if (!result.success) throw new Error(result.stderr.toString());
return result.stdout.toString().trim();
};
// First run with flag: cache miss, write entry
expect(run(["--feature=SUPER_SECRET"])).toBe("enabled");
expect(newCacheCount()).toBe(1);
// Same flag: cache hit
expect(run(["--feature=SUPER_SECRET"])).toBe("enabled");
expect(newCacheCount()).toBe(0);
// No flag: features_hash differs -> old entry deleted, new entry written
expect(run([])).toBe("disabled");
expect(newCacheCount()).toBe(0); // deleted + written = net 0
// Flag again: another delete + write
expect(run(["--feature=SUPER_SECRET"])).toBe("enabled");
expect(newCacheCount()).toBe(0);
// Multiple flags, different order: same hash, cache hit
expect(run(["--feature=SUPER_SECRET", "--feature=OTHER"])).toBe("enabled");
expect(newCacheCount()).toBe(0); // delete + write
expect(run(["--feature=OTHER", "--feature=SUPER_SECRET"])).toBe("enabled");
expect(newCacheCount()).toBe(0); // cache hit, order doesn't matter
});
});


@@ -0,0 +1,136 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe, isWindows, normalizeBunSnapshot } from "harness";
// When multiple listeners are registered for the same signal, removing one
// listener must NOT uninstall the underlying OS signal handler while other
// listeners remain.
test.skipIf(isWindows)("removing one of multiple signal listeners keeps the handler installed", async () => {
const script = /*js*/ `
const { promise, resolve } = Promise.withResolvers();
let handlerBCount = 0;
function handlerA() {
console.log("handlerA fired (bug: I was removed!)");
}
function handlerB() {
handlerBCount++;
console.log("handlerB fired", handlerBCount);
if (handlerBCount === 2) {
resolve();
}
}
process.on("SIGUSR2", handlerA);
process.on("SIGUSR2", handlerB);
// Remove handlerA - handlerB should still receive signals.
process.off("SIGUSR2", handlerA);
// Send ourselves the signal twice.
process.kill(process.pid, "SIGUSR2");
// Wait for first signal, then send again.
await new Promise(r => setImmediate(r));
await new Promise(r => setImmediate(r));
process.kill(process.pid, "SIGUSR2");
await promise;
console.log("done");
`;
await using proc = Bun.spawn({
cmd: [bunExe(), "-e", script],
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(stderr).toBe("");
expect(normalizeBunSnapshot(stdout)).toMatchInlineSnapshot(`
"handlerB fired 1
handlerB fired 2
done"
`);
expect(exitCode).toBe(0);
});
// Verify that removing ALL listeners does properly uninstall the handler,
// so the process dies with the default signal behavior.
test.skipIf(isWindows)("removing all signal listeners uninstalls the handler (default signal behavior)", async () => {
const script = /*js*/ `
function handlerA() {}
function handlerB() {}
process.on("SIGUSR2", handlerA);
process.on("SIGUSR2", handlerB);
process.off("SIGUSR2", handlerA);
process.off("SIGUSR2", handlerB);
// Keep event loop alive briefly so signal can be delivered
setTimeout(() => {
// If we get here, the signal handler was incorrectly still installed
// (or signal was ignored). Exit with a distinct code.
process.exit(42);
}, 1000);
process.kill(process.pid, "SIGUSR2");
`;
await using proc = Bun.spawn({
cmd: [bunExe(), "-e", script],
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(stdout).toBe("");
// Default SIGUSR2 behavior is to terminate the process with a signal.
// If the handler was correctly uninstalled, the process dies via signal (not exit code 42).
expect(exitCode).not.toBe(42);
expect(exitCode).not.toBe(0);
expect(proc.signalCode).not.toBeNull();
});
// Re-adding a listener after all were removed should reinstall the handler.
test.skipIf(isWindows)("re-adding a listener after removing all reinstalls the handler", async () => {
const script = /*js*/ `
const { promise, resolve } = Promise.withResolvers();
function handlerA() {}
function handlerB() {
console.log("handlerB fired");
resolve();
}
process.on("SIGUSR2", handlerA);
process.off("SIGUSR2", handlerA);
process.on("SIGUSR2", handlerB);
process.kill(process.pid, "SIGUSR2");
await promise;
console.log("done");
`;
await using proc = Bun.spawn({
cmd: [bunExe(), "-e", script],
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(stderr).toBe("");
expect(normalizeBunSnapshot(stdout)).toMatchInlineSnapshot(`
"handlerB fired
done"
`);
expect(exitCode).toBe(0);
});