Compare commits

...

11 Commits

Author SHA1 Message Date
Claude Bot
feebc81063 fix(bundler): don't inline process.env.NODE_ENV in --compile builds
When using `bun build --compile`, `process.env.NODE_ENV` and
`process.env.BUN_ENV` were replaced with compile-time constants
(defaulting to "development"), making it impossible to set these
values at runtime via environment variables.

Skip the compile-time define insertion for these two variables when
building standalone executables, so the compiled binary reads them
from the runtime environment as expected.
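
A minimal illustration (file name and values are examples):

```ts
// index.ts, compiled with: bun build --compile index.ts --outfile app
// Before this fix, the expression below was inlined at bundle time
// (defaulting to "development"); after it, the compiled binary reads the
// variable from the process environment at startup.
console.log(process.env.NODE_ENV);
```

Running `NODE_ENV=production ./app` now prints "production".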

Fixes #26994

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-13 09:52:05 +00:00
robobun
7a801fcf93 fix(ini): prevent OOB read and UB on truncated/invalid UTF-8 in INI parser (#26947)
## Summary

- Fix out-of-bounds read in the INI parser's `prepareStr` function when
a multi-byte UTF-8 lead byte appears at the end of a value with
insufficient continuation bytes
- Fix undefined behavior when bare continuation bytes (0x80-0xBF) cause
`utf8ByteSequenceLength` to return 0, hitting an `unreachable` branch
(UB in ReleaseFast builds)
- Add bounds checking before accessing `val[i+1]`, `val[i+2]`,
`val[i+3]` in both escaped and non-escaped code paths

The vulnerability could be triggered by a crafted `.npmrc` file
containing truncated UTF-8 sequences. In release builds, this could
cause OOB heap reads (potential info leak) or undefined behavior.
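
A sketch of the trigger input, taken from the new regression tests:

```ts
// 0xE0 announces a 3-byte UTF-8 sequence, but the value ends right after
// the lead byte. Pre-fix, prepareStr read val[i+1] and val[i+2] past the
// end of the buffer; post-fix the truncated bytes are copied through safely.
const ini = Buffer.concat([Buffer.from("key = "), Buffer.from([0xe0])]).toString("latin1");
```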

## Test plan

- [x] Added 9 tests covering truncated 2/3/4-byte sequences, bare
continuation bytes, and escaped contexts
- [x] All 52 INI parser tests pass (`bun bd test
test/js/bun/ini/ini.test.ts`)
- [x] No regressions in npmrc tests (failures are pre-existing Verdaccio
connectivity issues)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-12 00:28:44 -08:00
robobun
44541eb574 fix(sql): reject null bytes in connection parameters to prevent protocol injection (#26952)
## Summary

- Reject null bytes in `username`, `password`, `database`, and `path`
connection parameters for both PostgreSQL and MySQL to prevent wire
protocol parameter injection
- Both the Postgres and MySQL wire protocols use null-terminated strings
in their startup/handshake messages, so embedded null bytes in these
fields act as field terminators, allowing injection of arbitrary
protocol parameters (e.g. `search_path` for schema hijacking); see the
sketch below
- The fix validates these fields immediately after UTF-8 conversion and
throws `InvalidArguments` error with a clear message if null bytes are
found
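
A sketch of the injection this blocks, based on the new regression test:

```ts
import { SQL } from "bun";

const sql = new SQL({
  hostname: "127.0.0.1",
  port: 5432,
  // The embedded null bytes would terminate the username field early in the
  // startup message and smuggle in an extra search_path parameter.
  username: "alice\x00search_path\x00evil_schema,public",
  database: "testdb",
});

// Post-fix this rejects with "username must not contain null bytes"
// before any bytes reach the server.
await sql`SELECT 1`;
```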

## Test plan

- [x] New test
`test/regression/issue/postgres-null-byte-injection.test.ts` verifies:
- Null bytes in username are rejected with an error before any data is
sent
- Null bytes in database are rejected with an error before any data is
sent
- Null bytes in password are rejected with an error before any data is
sent
  - Normal connections without null bytes still work correctly
- [x] Test verified to fail with `USE_SYSTEM_BUN=1` (unfixed bun) and
pass with `bun bd test` (fixed build)
- [x] Existing SQL tests pass (`adapter-env-var-precedence.test.ts`,
`postgres-stringbuilder-assertion-aggressive.test.ts`)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-12 00:27:00 -08:00
robobun
993be3f931 fix(plugin): set virtualModules to nullptr after delete in clearAll (#26940)
## Summary

- Fix double-free in `Bun.plugin.clearAll()` by setting `virtualModules
= nullptr` after `delete`
- In `jsFunctionBunPluginClear` (`BunPlugin.cpp:956`), `delete
global->onLoadPlugins.virtualModules` freed the pointer without
nullifying it. When the `OnLoad` destructor later runs (during Worker
termination or VM destruction), it checks `if (virtualModules)` — the
dangling non-null pointer passes the check and is deleted again,
corrupting the heap allocator. A repro sketch follows.
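
A repro sketch based on the regression test:

```ts
// Register a virtual module, clear all plugins, then exit with
// BUN_DESTRUCT_VM_ON_EXIT=1 so the OnLoad destructor runs. Pre-fix, the
// dangling virtualModules pointer was freed a second time on that path.
Bun.plugin({
  name: "virtual",
  setup(build) {
    build.module("my-virtual-module", () => ({
      exports: { hello: "world" },
      loader: "object",
    }));
  },
});
Bun.plugin.clearAll();
```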

## Test plan

- [ ] New test
`test/regression/issue/plugin-clearall-double-free.test.ts` spawns a
subprocess that registers a virtual module, calls
`Bun.plugin.clearAll()`, and exits with `BUN_DESTRUCT_VM_ON_EXIT=1` to
trigger the destructor path
- [ ] Verified the test fails on the system bun (pre-fix) with `pas
panic: deallocation did fail ... Alloc bit not set`
- [ ] Verified the test passes with the debug build (post-fix)
- [ ] Existing plugin tests (`test/js/bun/plugin/plugins.test.ts`) all
pass (29/29)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-02-11 23:14:43 -08:00
robobun
a68393926b fix(ws): handle fragmented pong frames and validate control frame size (#26944)
## Summary

- Fix WebSocket client pong frame handler to properly handle payloads
split across TCP segments, preventing frame desync that could cause
protocol confusion
- Add missing RFC 6455 Section 5.5 validation: control frame payloads
must not exceed 125 bytes (pong handler lacked this check, unlike ping
and close handlers)

## Details

The pong handler (lines 652-663) had two issues:

1. **Frame desync on fragmented delivery**: When a pong payload was
split across TCP segments (`data.len < receive_body_remain`), the
handler consumed only the available bytes but unconditionally reset
`receive_state = .need_header` and `receive_body_remain = 0`. The
remaining payload bytes in the next TCP delivery were then
misinterpreted as WebSocket frame headers.

2. **Missing payload length validation**: Unlike the ping handler (line
615) and close handler (line 680), the pong handler did not validate the
7-bit payload length against the RFC 6455 limit of 125 bytes for control
frames.

The fix models the pong handler after the existing ping handler pattern:
track partial delivery state with a `pong_received` boolean, buffer
incoming data into `ping_frame_bytes`, and only reset to `need_header`
after the complete payload has been consumed.
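
A sketch of the trigger, mirroring the regression test (`socket` is the
raw TCP socket of the test's mock server):

```ts
// A complete 50-byte pong frame: FIN + pong opcode, then the 7-bit payload
// length (control frames must stay at or below 125 bytes).
const pong = new Uint8Array(2 + 50);
pong[0] = 0x8a;
pong[1] = 50;

// Deliver it across two TCP writes. Pre-fix, the client reset to
// need_header after the first write, so the remaining 48 payload bytes
// were parsed as a new frame header, desyncing the stream.
socket.write(pong.subarray(0, 4)); // header + first 2 payload bytes
setTimeout(() => socket.write(pong.subarray(4)), 50); // remaining 48 bytes
```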

## Test plan

- [x] New test `websocket-pong-fragmented.test.ts` verifies:
- Fragmented pong delivery (50-byte payload split into 2+48 bytes) does
not cause frame desync, and a subsequent text frame is received
correctly
- Pong frames with >125 byte payloads are rejected as invalid control
frames
- [x] Test fails with `USE_SYSTEM_BUN=1` (reproduces the bug) and passes
with `bun bd test`
- [x] Existing WebSocket tests pass: `websocket-client.test.ts`,
`websocket-close-fragmented.test.ts`,
`websocket-client-short-read.test.ts`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-11 23:12:28 -08:00
robobun
e8a5f23385 fix(s3): reject CRLF characters in header values to prevent header injection (#26942)
## Summary

- Fixes an HTTP header injection vulnerability in the S3 client where
user-controlled options (`contentDisposition`, `contentEncoding`,
`type`) were passed into HTTP headers without CRLF validation
- Adds input validation at the JS-to-Zig boundary in
`src/s3/credentials.zig` that throws a `TypeError` if `\r` or `\n`
characters are detected
- An attacker could previously inject arbitrary headers (e.g.
`X-Amz-Security-Token`) by embedding `\r\n` in these string fields; see
the sketch below
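
A sketch of the rejected input (credentials and bucket are placeholders):

```ts
import { S3Client } from "bun";

const client = new S3Client({
  accessKeyId: "key",
  secretAccessKey: "secret",
  bucket: "my-bucket",
});

// Post-fix this throws a TypeError before any request is built, instead of
// writing the injected line into the signed request headers:
client.write("report.txt", "hello", {
  contentDisposition: 'attachment; filename="evil"\r\nX-Amz-Security-Token: forged',
});
```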

## Test plan

- [x] Added `test/regression/issue/s3-header-injection.test.ts` with 6
tests:
  - CRLF in `contentDisposition` throws
  - CRLF in `contentEncoding` throws
  - CRLF in `type` (content-type) throws
  - Lone CR in `contentDisposition` throws
  - Lone LF in `contentDisposition` throws
  - Valid `contentDisposition` without CRLF still works correctly
- [x] Tests fail with `USE_SYSTEM_BUN=1` (confirming vulnerability
exists in current release)
- [x] Tests pass with `bun bd test` (confirming fix works)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-11 23:02:39 -08:00
robobun
16b3e7cde7 fix(libarchive): use normalized path in mkdiratZ to prevent directory traversal (#26956)
## Summary

- Fix a path traversal vulnerability in tarball directory extraction on
POSIX systems where `mkdiratZ` used the un-normalized `pathname` (raw
from tarball) instead of the normalized `path` variable, allowing `../`
components to escape the extraction root via kernel path resolution
(sketch below)
- The Windows directory creation, symlink creation, and file creation
code paths already correctly used the normalized path — only the two
POSIX `mkdiratZ` calls were affected (lines 463 and 469)
- `bun install` is not affected because npm mode skips directory
entries; affected callers include `bun create`, GitHub tarball
extraction, and `compile_target`
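
A sketch of the attack (`tarballBytes` stands for the crafted archive
built in the regression test):

```ts
// The tarball contains a directory entry named "safe_dir/../../escaped_dir/".
// Normalization reduces it to "escaped_dir" (inside the root); pre-fix, the
// raw pathname reached mkdirat and the kernel resolved the ".." components,
// creating escaped_dir outside the extraction root.
const archive = new Bun.Archive(tarballBytes);
await archive.extract("./extract-root");
```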

## Test plan

- [x] Added regression test that crafts a tarball with
`safe_dir/../../escaped_dir/` directory entry and verifies it cannot
create directories outside the extraction root
- [x] Verified test **fails** with system bun (vulnerable) and
**passes** with debug build (fixed)
- [x] Full `archive.test.ts` suite passes (99/99 tests)
- [x] `symlink-path-traversal.test.ts` continues to pass (3/3 tests)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-11 22:47:41 -08:00
robobun
4c32f15339 fix(sql): use constant-time comparison for SCRAM server signature (#26937)
## Summary

- Replace `bun.strings.eqlLong` with BoringSSL's `CRYPTO_memcmp` for
SCRAM-SHA-256 server signature verification in the PostgreSQL client
- The previous comparison (`eqlLong`) returned early on the first
mismatching byte, potentially leaking information about the expected
server signature via a timing side channel (illustrated below)
- `CRYPTO_memcmp` is already used elsewhere in the codebase for
constant-time comparisons (CSRF tokens, `crypto.timingSafeEqual`,
KeyObject comparison)
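
For illustration, the same distinction in JavaScript terms (the fix
itself uses `CRYPTO_memcmp` on the Zig side):

```ts
import { timingSafeEqual } from "node:crypto";

// Early-exit comparison: returns sooner the earlier the first mismatch
// occurs, leaking how many leading bytes of a guess matched.
function earlyExitEqual(a: Buffer, b: Buffer): boolean {
  if (a.length !== b.length) return false;
  for (let i = 0; i < a.length; i++) {
    if (a[i] !== b[i]) return false;
  }
  return true;
}

// Constant-time comparison: inspects every byte regardless of mismatches.
const constantTimeEqual = (a: Buffer, b: Buffer) =>
  a.length === b.length && timingSafeEqual(a, b);
```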

## Test plan

- [x] `bun bd` compiles successfully
- [ ] Existing SCRAM-SHA-256 integration tests in
`test/js/sql/sql.test.ts` pass (require Docker/PostgreSQL)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-11 22:45:47 -08:00
robobun
635034ee33 fix(shell): use-after-free in runFromJS when setupIOBeforeRun fails (#26920)
## Summary

- Fixes #26918 — segfault at address `0x28189480080` caused by
use-after-free in the shell interpreter
- When `setupIOBeforeRun()` fails (e.g., stdout handle unavailable on
Windows), the `runFromJS` error path called `deinitFromExec()` which
directly freed the GC-managed interpreter object with
`allocator.destroy(this)`. When the GC later swept and called
`deinitFromFinalizer()` on the already-freed memory, it caused a
segfault.
- Replaced `deinitFromExec()` with `derefRootShellAndIOIfNeeded(true)`
which properly cleans up runtime resources (IO handles, shell
environment) while leaving final object destruction to the GC finalizer
— matching the pattern already used in `finish()`.

## Test plan

- [x] Added regression test in `test/regression/issue/26918.test.ts`
that verifies the shell interpreter handles closed stdout gracefully
without crashing
- [x] Test passes with `bun bd test test/regression/issue/26918.test.ts`
- [ ] The actual crash is primarily reproducible on Windows where stdout
handles can be truly unavailable — CI Windows tests should validate the
fix

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2026-02-11 17:51:10 -08:00
robobun
3e792d0d2e fix(test): write JUnit reporter outfile when --bail triggers early exit (#26852)
## Summary
- When `--bail` caused an early exit after a test failure, the JUnit
reporter output file (`--reporter-outfile`) was never written because
`Global.exit()` was called before the normal completion path
- Extracted the JUnit write logic into a `writeJUnitReportIfNeeded()`
method on `CommandLineReporter` and called it in both bail exit paths
(test failure and unhandled rejection) as well as the normal completion
path

Closes #26851

## Test plan
- [x] Added regression test `test/regression/issue/26851.test.ts` with
two cases:
  - Single failing test file with `--bail` produces JUnit XML output
- A multi-file run where bail triggers on the second file still writes
the report
- [x] Verified test fails with system bun (`USE_SYSTEM_BUN=1`)
- [x] Verified test passes with `bun bd test`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-11 17:41:45 -08:00
robobun
b7d505b6c1 deflake: make HMR rapid edits test event-driven (#26890)
## Summary
- Add `expectMessageEventually(value)` to the bake test harness `Client`
class — waits for a specific message to appear, draining any
intermediate messages that arrived before it (usage sketch below)
- Rewrite "hmr handles rapid consecutive edits" test to use raw
`Bun.write` + sleep for intermediate edits and `expectMessageEventually`
for the final assertion, avoiding flaky failures when HMR batches
updates non-deterministically across platforms

Fixes flaky failure on Windows where an extra "render 10" message
arrived after `expectMessage` consumed its expected messages but before
client disposal.
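
A usage sketch of the new helper (exact harness signature assumed):

```ts
// Waits until "render 10" arrives, draining intermediate renders and
// duplicate reloads instead of asserting an exact message sequence.
await client.expectMessageEventually("render 10");
```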

## Test plan
- [x] `bun bd test test/bake/dev-and-prod.test.ts` — all 12 tests pass
- [x] Ran the specific test multiple times to confirm no flakiness

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Alistair Smith <alistair@anthropic.com>
2026-02-11 16:05:25 -08:00
18 changed files with 1081 additions and 81 deletions

View File

@@ -954,6 +954,7 @@ BUN_DEFINE_HOST_FUNCTION(jsFunctionBunPluginClear, (JSC::JSGlobalObject * global
global->onResolvePlugins.namespaces.clear();
delete global->onLoadPlugins.virtualModules;
global->onLoadPlugins.virtualModules = nullptr;
return JSC::JSValue::encode(JSC::jsUndefined());
}

View File

@@ -948,6 +948,7 @@ pub const CommandLineReporter = struct {
this.printSummary();
Output.prettyError("\nBailed out after {d} failure{s}<r>\n", .{ this.jest.bail, if (this.jest.bail == 1) "" else "s" });
Output.flush();
this.writeJUnitReportIfNeeded();
Global.exit(1);
}
},
@@ -970,6 +971,20 @@ pub const CommandLineReporter = struct {
Output.printStartEnd(bun.start_time, std.time.nanoTimestamp());
}
/// Writes the JUnit reporter output file if a JUnit reporter is active and
/// an outfile path was configured. This must be called before any early exit
/// (e.g. bail) so that the report is not lost.
pub fn writeJUnitReportIfNeeded(this: *CommandLineReporter) void {
if (this.reporters.junit) |junit| {
if (this.jest.test_options.reporter_outfile) |outfile| {
if (junit.current_file.len > 0) {
junit.endTestSuite() catch {};
}
junit.writeToFile(outfile) catch {};
}
}
}
pub fn generateCodeCoverage(this: *CommandLineReporter, vm: *jsc.VirtualMachine, opts: *TestCommand.CodeCoverageOptions, comptime reporters: TestCommand.Reporters, comptime enable_ansi_colors: bool) !void {
if (comptime !reporters.text and !reporters.lcov) {
return;
@@ -1772,12 +1787,7 @@ pub const TestCommand = struct {
Output.prettyError("\n", .{});
Output.flush();
if (reporter.reporters.junit) |junit| {
if (junit.current_file.len > 0) {
junit.endTestSuite() catch {};
}
junit.writeToFile(ctx.test_options.reporter_outfile.?) catch {};
}
reporter.writeJUnitReportIfNeeded();
if (vm.hot_reload == .watch) {
vm.runWithAPILock(jsc.VirtualMachine, vm, runEventLoopForWatch);
@@ -1920,6 +1930,7 @@ pub const TestCommand = struct {
if (reporter.jest.bail == reporter.summary().fail) {
reporter.printSummary();
Output.prettyError("\nBailed out after {d} failure{s}<r>\n", .{ reporter.jest.bail, if (reporter.jest.bail == 1) "" else "s" });
reporter.writeJUnitReportIfNeeded();
vm.exit_handler.exit_code = 1;
vm.is_shutting_down = true;

View File

@@ -27,6 +27,7 @@ pub fn NewWebSocketClient(comptime ssl: bool) type {
ping_frame_bytes: [128 + 6]u8 = [_]u8{0} ** (128 + 6),
ping_len: u8 = 0,
ping_received: bool = false,
pong_received: bool = false,
close_received: bool = false,
close_frame_buffering: bool = false,
@@ -120,6 +121,7 @@ pub fn NewWebSocketClient(comptime ssl: bool) type {
this.clearReceiveBuffers(true);
this.clearSendBuffers(true);
this.ping_received = false;
this.pong_received = false;
this.ping_len = 0;
this.close_frame_buffering = false;
this.receive_pending_chunk_len = 0;
@@ -650,14 +652,38 @@ pub fn NewWebSocketClient(comptime ssl: bool) type {
if (data.len == 0) break;
},
.pong => {
const pong_len = @min(data.len, @min(receive_body_remain, this.ping_frame_bytes.len));
if (!this.pong_received) {
if (receive_body_remain > 125) {
this.terminate(ErrorCode.invalid_control_frame);
terminated = true;
break;
}
this.ping_len = @truncate(receive_body_remain);
receive_body_remain = 0;
this.pong_received = true;
}
const pong_len = this.ping_len;
this.dispatchData(data[0..pong_len], .Pong);
if (data.len > 0) {
const total_received = @min(pong_len, receive_body_remain + data.len);
const slice = this.ping_frame_bytes[6..][receive_body_remain..total_received];
@memcpy(slice, data[0..slice.len]);
receive_body_remain = total_received;
data = data[slice.len..];
}
const pending_body = pong_len - receive_body_remain;
if (pending_body > 0) {
// wait for more data - pong payload is fragmented across TCP segments
break;
}
const pong_data = this.ping_frame_bytes[6..][0..pong_len];
this.dispatchData(pong_data, .Pong);
data = data[pong_len..];
receive_state = .need_header;
receive_body_remain = 0;
receiving_type = last_receive_data_type;
this.pong_received = false;
if (data.len == 0) break;
},

View File

@@ -291,25 +291,32 @@ pub const Parser = struct {
}
},
else => {
try unesc.appendSlice(switch (bun.strings.utf8ByteSequenceLength(c)) {
1 => brk: {
break :brk &[_]u8{ '\\', c };
switch (bun.strings.utf8ByteSequenceLength(c)) {
0, 1 => try unesc.appendSlice(&[_]u8{ '\\', c }),
2 => if (val.len - i >= 2) {
try unesc.appendSlice(&[_]u8{ '\\', c, val[i + 1] });
i += 1;
} else {
try unesc.appendSlice(&[_]u8{ '\\', c });
},
2 => brk: {
defer i += 1;
break :brk &[_]u8{ '\\', c, val[i + 1] };
3 => if (val.len - i >= 3) {
try unesc.appendSlice(&[_]u8{ '\\', c, val[i + 1], val[i + 2] });
i += 2;
} else {
try unesc.append('\\');
try unesc.appendSlice(val[i..val.len]);
i = val.len - 1;
},
3 => brk: {
defer i += 2;
break :brk &[_]u8{ '\\', c, val[i + 1], val[i + 2] };
4 => if (val.len - i >= 4) {
try unesc.appendSlice(&[_]u8{ '\\', c, val[i + 1], val[i + 2], val[i + 3] });
i += 3;
} else {
try unesc.append('\\');
try unesc.appendSlice(val[i..val.len]);
i = val.len - 1;
},
4 => brk: {
defer i += 3;
break :brk &[_]u8{ '\\', c, val[i + 1], val[i + 2], val[i + 3] };
},
// this means invalid utf8
else => unreachable,
});
}
},
}
@@ -342,25 +349,30 @@ pub const Parser = struct {
try unesc.append('.');
}
},
else => try unesc.appendSlice(switch (bun.strings.utf8ByteSequenceLength(c)) {
1 => brk: {
break :brk &[_]u8{c};
else => switch (bun.strings.utf8ByteSequenceLength(c)) {
0, 1 => try unesc.append(c),
2 => if (val.len - i >= 2) {
try unesc.appendSlice(&[_]u8{ c, val[i + 1] });
i += 1;
} else {
try unesc.append(c);
},
2 => brk: {
defer i += 1;
break :brk &[_]u8{ c, val[i + 1] };
3 => if (val.len - i >= 3) {
try unesc.appendSlice(&[_]u8{ c, val[i + 1], val[i + 2] });
i += 2;
} else {
try unesc.appendSlice(val[i..val.len]);
i = val.len - 1;
},
3 => brk: {
defer i += 2;
break :brk &[_]u8{ c, val[i + 1], val[i + 2] };
4 => if (val.len - i >= 4) {
try unesc.appendSlice(&[_]u8{ c, val[i + 1], val[i + 2], val[i + 3] });
i += 3;
} else {
try unesc.appendSlice(val[i..val.len]);
i = val.len - 1;
},
4 => brk: {
defer i += 3;
break :brk &[_]u8{ c, val[i + 1], val[i + 2], val[i + 3] };
},
// this means invalid utf8
else => unreachable,
}),
},
}
}

View File

@@ -460,13 +460,13 @@ pub const Archiver = struct {
if (comptime Environment.isWindows) {
try bun.MakePath.makePath(u16, dir, path);
} else {
std.posix.mkdiratZ(dir_fd, pathname, @intCast(mode)) catch |err| {
std.posix.mkdiratZ(dir_fd, path, @intCast(mode)) catch |err| {
// It's possible for some tarballs to return a directory twice, with and
// without `./` in the beginning. So if it already exists, continue to the
// next entry.
if (err == error.PathAlreadyExists or err == error.NotDir) continue;
bun.makePath(dir, std.fs.path.dirname(path_slice) orelse return err) catch {};
std.posix.mkdiratZ(dir_fd, pathname, 0o777) catch {};
std.posix.mkdiratZ(dir_fd, path, 0o777) catch {};
};
}
},

View File

@@ -1442,6 +1442,7 @@ pub fn definesFromTransformOptions(
NODE_ENV: ?string,
drop: []const []const u8,
omit_unused_global_calls: bool,
is_standalone: bool,
) !*defines.Define {
const input_user_define = maybe_input_define orelse std.mem.zeroes(api.StringMap);
@@ -1484,38 +1485,43 @@ pub fn definesFromTransformOptions(
}
if (behavior != .load_all_without_inlining) {
const quoted_node_env: string = brk: {
if (NODE_ENV) |node_env| {
if (node_env.len > 0) {
if ((strings.startsWithChar(node_env, '"') and strings.endsWithChar(node_env, '"')) or
(strings.startsWithChar(node_env, '\'') and strings.endsWithChar(node_env, '\'')))
{
break :brk node_env;
}
// Don't inline process.env.NODE_ENV/BUN_ENV for standalone executables (--compile),
// since the compiled binary should read these from the runtime environment.
// User-specified --define overrides are still respected (already in user_defines).
if (!is_standalone) {
const quoted_node_env: string = brk: {
if (NODE_ENV) |node_env| {
if (node_env.len > 0) {
if ((strings.startsWithChar(node_env, '"') and strings.endsWithChar(node_env, '"')) or
(strings.startsWithChar(node_env, '\'') and strings.endsWithChar(node_env, '\'')))
{
break :brk node_env;
}
// avoid allocating if we can
if (strings.eqlComptime(node_env, "production")) {
break :brk "\"production\"";
} else if (strings.eqlComptime(node_env, "development")) {
break :brk "\"development\"";
} else if (strings.eqlComptime(node_env, "test")) {
break :brk "\"test\"";
} else {
break :brk try std.fmt.allocPrint(allocator, "\"{s}\"", .{node_env});
// avoid allocating if we can
if (strings.eqlComptime(node_env, "production")) {
break :brk "\"production\"";
} else if (strings.eqlComptime(node_env, "development")) {
break :brk "\"development\"";
} else if (strings.eqlComptime(node_env, "test")) {
break :brk "\"test\"";
} else {
break :brk try std.fmt.allocPrint(allocator, "\"{s}\"", .{node_env});
}
}
}
}
break :brk "\"development\"";
};
break :brk "\"development\"";
};
_ = try user_defines.getOrPutValue(
"process.env.NODE_ENV",
quoted_node_env,
);
_ = try user_defines.getOrPutValue(
"process.env.BUN_ENV",
quoted_node_env,
);
_ = try user_defines.getOrPutValue(
"process.env.NODE_ENV",
quoted_node_env,
);
_ = try user_defines.getOrPutValue(
"process.env.BUN_ENV",
quoted_node_env,
);
}
// Automatically set `process.browser` to `true` for browsers and false for node+js
// This enables some extra dead code elimination
@@ -1932,6 +1938,7 @@ pub const BundleOptions = struct {
},
this.drop,
this.dead_code_elimination and this.minify_syntax,
this.compile,
);
this.defines_loaded = true;
}

View File

@@ -221,7 +221,11 @@ pub const S3Credentials = struct {
defer str.deref();
if (str.tag != .Empty and str.tag != .Dead) {
new_credentials._contentDispositionSlice = str.toUTF8(bun.default_allocator);
new_credentials.content_disposition = new_credentials._contentDispositionSlice.?.slice();
const slice = new_credentials._contentDispositionSlice.?.slice();
if (containsNewlineOrCR(slice)) {
return globalObject.throwInvalidArguments("contentDisposition must not contain newline characters (CR/LF)", .{});
}
new_credentials.content_disposition = slice;
}
} else {
return globalObject.throwInvalidArgumentTypeValue("contentDisposition", "string", js_value);
@@ -236,7 +240,11 @@ pub const S3Credentials = struct {
defer str.deref();
if (str.tag != .Empty and str.tag != .Dead) {
new_credentials._contentTypeSlice = str.toUTF8(bun.default_allocator);
new_credentials.content_type = new_credentials._contentTypeSlice.?.slice();
const slice = new_credentials._contentTypeSlice.?.slice();
if (containsNewlineOrCR(slice)) {
return globalObject.throwInvalidArguments("type must not contain newline characters (CR/LF)", .{});
}
new_credentials.content_type = slice;
}
} else {
return globalObject.throwInvalidArgumentTypeValue("type", "string", js_value);
@@ -251,7 +259,11 @@ pub const S3Credentials = struct {
defer str.deref();
if (str.tag != .Empty and str.tag != .Dead) {
new_credentials._contentEncodingSlice = str.toUTF8(bun.default_allocator);
new_credentials.content_encoding = new_credentials._contentEncodingSlice.?.slice();
const slice = new_credentials._contentEncodingSlice.?.slice();
if (containsNewlineOrCR(slice)) {
return globalObject.throwInvalidArguments("contentEncoding must not contain newline characters (CR/LF)", .{});
}
new_credentials.content_encoding = slice;
}
} else {
return globalObject.throwInvalidArgumentTypeValue("contentEncoding", "string", js_value);
@@ -1150,6 +1162,12 @@ const CanonicalRequest = struct {
}
};
/// Returns true if the given slice contains any CR (\r) or LF (\n) characters,
/// which would allow HTTP header injection if used in a header value.
fn containsNewlineOrCR(value: []const u8) bool {
return std.mem.indexOfAny(u8, value, "\r\n") != null;
}
const std = @import("std");
const ACL = @import("./acl.zig").ACL;
const MultiPartUploadOptions = @import("./multipart_options.zig").MultiPartUploadOptions;

View File

@@ -1154,7 +1154,7 @@ pub const Interpreter = struct {
_ = callframe; // autofix
if (this.setupIOBeforeRun().asErr()) |e| {
defer this.#deinitFromExec();
defer this.#derefRootShellAndIOIfNeeded(true);
const shellerr = bun.shell.ShellErr.newSys(e);
return try throwShellErr(&shellerr, .{ .js = globalThis.bunVM().event_loop });
}

View File

@@ -422,6 +422,19 @@ pub fn createInstance(globalObject: *jsc.JSGlobalObject, callframe: *jsc.CallFra
break :brk b.allocatedSlice();
};
// Reject null bytes in connection parameters to prevent protocol injection
// (null bytes act as field terminators in the MySQL wire protocol).
inline for (.{ .{ username, "username" }, .{ password, "password" }, .{ database, "database" }, .{ path, "path" } }) |entry| {
if (entry[0].len > 0 and std.mem.indexOfScalar(u8, entry[0], 0) != null) {
bun.default_allocator.free(options_buf);
tls_config.deinit();
if (tls_ctx) |tls| {
tls.deinit(true);
}
return globalObject.throwInvalidArguments(entry[1] ++ " must not contain null bytes", .{});
}
}
const on_connect = arguments[9];
const on_close = arguments[10];
const idle_timeout = arguments[11].toInt32();

View File

@@ -680,6 +680,20 @@ pub fn call(globalObject: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JS
break :brk b.allocatedSlice();
};
// Reject null bytes in connection parameters to prevent Postgres startup
// message parameter injection (null bytes act as field terminators in the
// wire protocol's key\0value\0 format).
inline for (.{ .{ username, "username" }, .{ password, "password" }, .{ database, "database" }, .{ path, "path" } }) |entry| {
if (entry[0].len > 0 and std.mem.indexOfScalar(u8, entry[0], 0) != null) {
bun.default_allocator.free(options_buf);
tls_config.deinit();
if (tls_ctx) |tls| {
tls.deinit(true);
}
return globalObject.throwInvalidArguments(entry[1] ++ " must not contain null bytes", .{});
}
}
const on_connect = arguments[9];
const on_close = arguments[10];
const idle_timeout = arguments[11].toInt32();
@@ -1626,7 +1640,10 @@ pub fn on(this: *PostgresSQLConnection, comptime MessageType: @Type(.enum_litera
// This will usually start with "v="
const comparison_signature = final.data.slice();
if (comparison_signature.len < 2 or !bun.strings.eqlLong(server_signature, comparison_signature[2..], true)) {
if (comparison_signature.len < 2 or
server_signature.len != comparison_signature.len - 2 or
BoringSSL.c.CRYPTO_memcmp(server_signature.ptr, comparison_signature[2..].ptr, server_signature.len) != 0)
{
debug("SASLFinal - SASL Server signature mismatch\nExpected: {s}\nActual: {s}", .{ server_signature, comparison_signature[2..] });
this.fail("The server did not return the correct signature", error.SASL_SIGNATURE_MISMATCH);
} else {

View File

@@ -260,14 +260,35 @@ devTest("hmr handles rapid consecutive edits", {
await Bun.sleep(1);
}
// Wait event-driven for "render 10" to appear. Intermediate renders may
// be skipped (watcher coalescing) and the final render may fire multiple
// times (duplicate reloads), so we just listen for any occurrence.
const finalRender = "render 10";
while (true) {
const message = await client.getStringMessage();
if (message === finalRender) break;
if (typeof message === "string" && message.includes("HMR_ERROR")) {
throw new Error("Unexpected HMR error message: " + message);
}
}
await new Promise<void>((resolve, reject) => {
const check = () => {
for (const msg of client.messages) {
if (typeof msg === "string" && msg.includes("HMR_ERROR")) {
cleanup();
reject(new Error("Unexpected HMR error message: " + msg));
return;
}
if (msg === finalRender) {
cleanup();
resolve();
return;
}
}
};
const cleanup = () => {
client.off("message", check);
};
client.on("message", check);
// Check messages already buffered.
check();
});
// Drain all buffered messages — intermediate renders and possible
// duplicates of the final render are expected and harmless.
client.messages.length = 0;
const hmrErrors = await client.js`return globalThis.__hmrErrors ? [...globalThis.__hmrErrors] : [];`;
if (hmrErrors.length > 0) {

View File

@@ -611,6 +611,82 @@ describe("Bun.Archive", () => {
// Very deep paths might fail on some systems - that's acceptable
}
});
test("directory entries with path traversal components cannot escape extraction root", async () => {
// Manually craft a tar archive containing directory entries with "../" traversal
// components in their pathnames. This tests that the extraction code uses the
// normalized path (which strips "..") rather than the raw pathname from the tarball.
function createTarHeader(
name: string,
size: number,
type: "0" | "5", // 0=file, 5=directory
): Uint8Array {
const header = new Uint8Array(512);
const enc = new TextEncoder();
header.set(enc.encode(name).slice(0, 100), 0);
header.set(enc.encode(type === "5" ? "0000755 " : "0000644 "), 100);
header.set(enc.encode("0000000 "), 108);
header.set(enc.encode("0000000 "), 116);
header.set(enc.encode(size.toString(8).padStart(11, "0") + " "), 124);
const mtime = Math.floor(Date.now() / 1000)
.toString(8)
.padStart(11, "0");
header.set(enc.encode(mtime + " "), 136);
header.set(enc.encode("        "), 148); // checksum placeholder: 8 spaces
header[156] = type.charCodeAt(0);
header.set(enc.encode("ustar"), 257);
header[262] = 0;
header.set(enc.encode("00"), 263);
let checksum = 0;
for (let i = 0; i < 512; i++) checksum += header[i];
header.set(enc.encode(checksum.toString(8).padStart(6, "0") + "\0 "), 148);
return header;
}
const blocks: Uint8Array[] = [];
const enc = new TextEncoder();
// A legitimate directory
blocks.push(createTarHeader("safe_dir/", 0, "5"));
// A directory entry with traversal: "safe_dir/../../escaped_dir/"
// After normalization this becomes "escaped_dir" (safe),
// but the raw pathname resolves ".." via the kernel in mkdirat.
blocks.push(createTarHeader("safe_dir/../../escaped_dir/", 0, "5"));
// A normal file
const content = enc.encode("hello");
blocks.push(createTarHeader("safe_dir/file.txt", content.length, "0"));
blocks.push(content);
const pad = 512 - (content.length % 512);
if (pad < 512) blocks.push(new Uint8Array(pad));
// End-of-archive markers
blocks.push(new Uint8Array(1024));
const totalLen = blocks.reduce((s, b) => s + b.length, 0);
const tarball = new Uint8Array(totalLen);
let offset = 0;
for (const b of blocks) {
tarball.set(b, offset);
offset += b.length;
}
// Create a parent directory so we can check if "escaped_dir" appears outside extractDir
using parentDir = tempDir("archive-traversal-parent", {});
const extractPath = join(String(parentDir), "extract");
const { mkdirSync, existsSync } = require("fs");
mkdirSync(extractPath, { recursive: true });
const archive = new Bun.Archive(tarball);
await archive.extract(extractPath);
// The "escaped_dir" should NOT exist in the parent directory (outside extraction root)
const escapedOutside = join(String(parentDir), "escaped_dir");
expect(existsSync(escapedOutside)).toBe(false);
// The "safe_dir" should exist inside the extraction directory
expect(existsSync(join(extractPath, "safe_dir"))).toBe(true);
// The normalized "escaped_dir" may or may not exist inside extractPath
// (depending on whether normalization keeps it), but it must NOT be outside
});
});
describe("Archive.write()", () => {

View File

@@ -489,6 +489,61 @@ brr = 3
"zr": ["deedee"],
});
});
describe("truncated/invalid utf-8", () => {
test("bare continuation byte (0x80) should not crash", () => {
// 0x80 is a continuation byte without a leading byte
// utf8ByteSequenceLength returns 0, which must not hit unreachable
const ini = Buffer.concat([Buffer.from("key = "), Buffer.from([0x80])]).toString("latin1");
// Should not crash - just parse gracefully
expect(() => parse(ini)).not.toThrow();
});
test("truncated 2-byte sequence at end of value", () => {
// 0xC0 is a 2-byte lead byte, but there's no continuation byte following
const ini = Buffer.concat([Buffer.from("key = "), Buffer.from([0xc0])]).toString("latin1");
expect(() => parse(ini)).not.toThrow();
});
test("truncated 3-byte sequence at end of value", () => {
// 0xE0 is a 3-byte lead byte, but only 0 continuation bytes follow
const ini = Buffer.concat([Buffer.from("key = "), Buffer.from([0xe0])]).toString("latin1");
expect(() => parse(ini)).not.toThrow();
});
test("truncated 3-byte sequence with 1 continuation byte at end", () => {
// 0xE0 is a 3-byte lead byte, but only 1 continuation byte follows
const ini = Buffer.concat([Buffer.from("key = "), Buffer.from([0xe0, 0x80])]).toString("latin1");
expect(() => parse(ini)).not.toThrow();
});
test("truncated 4-byte sequence at end of value", () => {
// 0xF0 is a 4-byte lead byte, but only 0 continuation bytes follow
const ini = Buffer.concat([Buffer.from("key = "), Buffer.from([0xf0])]).toString("latin1");
expect(() => parse(ini)).not.toThrow();
});
test("truncated 4-byte sequence with 1 continuation byte at end", () => {
const ini = Buffer.concat([Buffer.from("key = "), Buffer.from([0xf0, 0x80])]).toString("latin1");
expect(() => parse(ini)).not.toThrow();
});
test("truncated 4-byte sequence with 2 continuation bytes at end", () => {
const ini = Buffer.concat([Buffer.from("key = "), Buffer.from([0xf0, 0x80, 0x80])]).toString("latin1");
expect(() => parse(ini)).not.toThrow();
});
test("truncated 2-byte sequence in escaped context", () => {
// Backslash followed by a 2-byte lead byte at end of value
const ini = Buffer.concat([Buffer.from("key = \\"), Buffer.from([0xc0])]).toString("latin1");
expect(() => parse(ini)).not.toThrow();
});
test("bare continuation byte in escaped context", () => {
const ini = Buffer.concat([Buffer.from("key = \\"), Buffer.from([0x80])]).toString("latin1");
expect(() => parse(ini)).not.toThrow();
});
});
});
const wtf = {

View File

@@ -0,0 +1,222 @@
import { TCPSocketListener } from "bun";
import { describe, expect, test } from "bun:test";
const hostname = "127.0.0.1";
const MAX_HEADER_SIZE = 16 * 1024;
function doHandshake(
socket: any,
handshakeBuffer: Uint8Array,
data: Uint8Array,
): { buffer: Uint8Array; done: boolean } {
const newBuffer = new Uint8Array(handshakeBuffer.length + data.length);
newBuffer.set(handshakeBuffer);
newBuffer.set(data, handshakeBuffer.length);
if (newBuffer.length > MAX_HEADER_SIZE) {
socket.end();
throw new Error("Handshake headers too large");
}
const dataStr = new TextDecoder("utf-8").decode(newBuffer);
const endOfHeaders = dataStr.indexOf("\r\n\r\n");
if (endOfHeaders === -1) {
return { buffer: newBuffer, done: false };
}
if (!dataStr.startsWith("GET")) {
throw new Error("Invalid handshake");
}
const magic = /Sec-WebSocket-Key:\s*(.*)\r\n/i.exec(dataStr);
if (!magic) {
throw new Error("Missing Sec-WebSocket-Key");
}
const hasher = new Bun.CryptoHasher("sha1");
hasher.update(magic[1].trim());
hasher.update("258EAFA5-E914-47DA-95CA-C5AB0DC85B11");
const accept = hasher.digest("base64");
socket.write(
"HTTP/1.1 101 Switching Protocols\r\n" +
"Upgrade: websocket\r\n" +
"Connection: Upgrade\r\n" +
`Sec-WebSocket-Accept: ${accept}\r\n` +
"\r\n",
);
socket.flush();
return { buffer: newBuffer, done: true };
}
function makeTextFrame(text: string): Uint8Array {
const payload = new TextEncoder().encode(text);
const len = payload.length;
let header: Uint8Array;
if (len < 126) {
header = new Uint8Array([0x81, len]);
} else if (len < 65536) {
header = new Uint8Array([0x81, 126, (len >> 8) & 0xff, len & 0xff]);
} else {
throw new Error("Message too large for this test");
}
const frame = new Uint8Array(header.length + len);
frame.set(header);
frame.set(payload, header.length);
return frame;
}
describe("WebSocket", () => {
test("fragmented pong frame does not cause frame desync", async () => {
let server: TCPSocketListener | undefined;
let client: WebSocket | undefined;
let handshakeBuffer = new Uint8Array(0);
let handshakeComplete = false;
try {
const { promise, resolve, reject } = Promise.withResolvers<void>();
server = Bun.listen({
socket: {
data(socket, data) {
if (handshakeComplete) {
// After handshake, we just receive client frames (like close) - ignore them
return;
}
const result = doHandshake(socket, handshakeBuffer, new Uint8Array(data));
handshakeBuffer = result.buffer;
if (!result.done) return;
handshakeComplete = true;
// Build a pong frame with a 50-byte payload, but deliver it in two parts.
// Pong opcode = 0x8A, FIN=1
const pongPayload = new Uint8Array(50);
for (let i = 0; i < 50; i++) pongPayload[i] = 0x41 + (i % 26); // 'A'-'Z' repeated
const pongFrame = new Uint8Array(2 + 50);
pongFrame[0] = 0x8a; // FIN + Pong opcode
pongFrame[1] = 50; // payload length
pongFrame.set(pongPayload, 2);
// Part 1 of pong: header (2 bytes) + first 2 bytes of payload = 4 bytes
// This leaves 48 bytes of pong payload undelivered.
const pongPart1 = pongFrame.slice(0, 4);
// Part 2: remaining 48 bytes of pong payload
const pongPart2 = pongFrame.slice(4);
// A text message to send after the pong completes.
const textFrame = makeTextFrame("hello after pong");
// Send part 1 of pong
socket.write(pongPart1);
socket.flush();
// After a delay, send part 2 of pong + the follow-up text message
setTimeout(() => {
// Concatenate part2 + text frame to simulate them arriving together
const combined = new Uint8Array(pongPart2.length + textFrame.length);
combined.set(pongPart2);
combined.set(textFrame, pongPart2.length);
socket.write(combined);
socket.flush();
}, 50);
},
},
hostname,
port: 0,
});
const messages: string[] = [];
client = new WebSocket(`ws://${server.hostname}:${server.port}`);
client.addEventListener("error", event => {
reject(new Error("WebSocket error"));
});
client.addEventListener("close", event => {
// If the connection closes unexpectedly due to frame desync, the test should fail
reject(new Error(`WebSocket closed unexpectedly: code=${event.code} reason=${event.reason}`));
});
client.addEventListener("message", event => {
messages.push(event.data as string);
if (messages.length === 1) {
// We got the text message after the fragmented pong
try {
expect(messages[0]).toBe("hello after pong");
resolve();
} catch (err) {
reject(err);
}
}
});
await promise;
} finally {
client?.close();
server?.stop(true);
}
});
test("pong frame with payload > 125 bytes is rejected", async () => {
let server: TCPSocketListener | undefined;
let client: WebSocket | undefined;
let handshakeBuffer = new Uint8Array(0);
let handshakeComplete = false;
try {
const { promise, resolve, reject } = Promise.withResolvers<void>();
server = Bun.listen({
socket: {
data(socket, data) {
if (handshakeComplete) return;
const result = doHandshake(socket, handshakeBuffer, new Uint8Array(data));
handshakeBuffer = result.buffer;
if (!result.done) return;
handshakeComplete = true;
// Send a pong frame with a 126-byte payload, which violates RFC 6455
// Section 5.5: control frames MUST have a payload length of 125 bytes
// or less. The 7-bit length field in byte[1] only encodes 0-125, so a
// value of 126 signals that a 16-bit extended length follows in bytes
// [2..3]; the frame is invalid regardless of how the length is encoded.
const pongFrame = new Uint8Array(4 + 126);
pongFrame[0] = 0x8a; // FIN + Pong
pongFrame[1] = 126; // Signals 16-bit extended length follows
pongFrame[2] = 0; // High byte of length
pongFrame[3] = 126; // Low byte of length = 126
// Fill payload with arbitrary data
for (let i = 0; i < 126; i++) pongFrame[4 + i] = 0x42;
socket.write(pongFrame);
socket.flush();
},
},
hostname,
port: 0,
});
client = new WebSocket(`ws://${server.hostname}:${server.port}`);
client.addEventListener("error", () => {
// Expected - the connection should error due to invalid control frame
resolve();
});
client.addEventListener("close", () => {
// Also acceptable - connection closes due to protocol error
resolve();
});
client.addEventListener("message", () => {
reject(new Error("Should not receive a message from an invalid pong frame"));
});
await promise;
} finally {
client?.close();
server?.stop(true);
}
});
});

View File

@@ -0,0 +1,77 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe, tempDir } from "harness";
import { join } from "path";
test("--bail writes JUnit reporter outfile", async () => {
using dir = tempDir("bail-junit", {
"fail.test.ts": `
import { test, expect } from "bun:test";
test("failing test", () => { expect(1).toBe(2); });
`,
});
const outfile = join(String(dir), "results.xml");
await using proc = Bun.spawn({
cmd: [bunExe(), "test", "--bail", "--reporter=junit", `--reporter-outfile=${outfile}`, "fail.test.ts"],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const exitCode = await proc.exited;
// The test should fail and bail
expect(exitCode).not.toBe(0);
// The JUnit report file should still be written despite bail
const file = Bun.file(outfile);
expect(await file.exists()).toBe(true);
const xml = await file.text();
expect(xml).toContain("<?xml");
expect(xml).toContain("<testsuites");
expect(xml).toContain("</testsuites>");
expect(xml).toContain("failing test");
});
test("--bail writes JUnit reporter outfile with multiple files", async () => {
using dir = tempDir("bail-junit-multi", {
"a_pass.test.ts": `
import { test, expect } from "bun:test";
test("passing test", () => { expect(1).toBe(1); });
`,
"b_fail.test.ts": `
import { test, expect } from "bun:test";
test("another failing test", () => { expect(1).toBe(2); });
`,
});
const outfile = join(String(dir), "results.xml");
await using proc = Bun.spawn({
cmd: [bunExe(), "test", "--bail", "--reporter=junit", `--reporter-outfile=${outfile}`],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const exitCode = await proc.exited;
// The test should fail and bail
expect(exitCode).not.toBe(0);
// The JUnit report file should still be written despite bail
const file = Bun.file(outfile);
expect(await file.exists()).toBe(true);
const xml = await file.text();
expect(xml).toContain("<?xml");
expect(xml).toContain("<testsuites");
expect(xml).toContain("</testsuites>");
// Both the passing and failing tests should be recorded
expect(xml).toContain("passing test");
expect(xml).toContain("another failing test");
});

View File

@@ -0,0 +1,109 @@
import { describe, expect, test } from "bun:test";
import { mkdtempSync, rmSync, writeFileSync } from "fs";
import { bunEnv, bunExe } from "harness";
import { join } from "path";
// Use a directory on the root filesystem rather than /tmp (tmpfs) since
// debug-mode standalone binaries are ~1.3GB each.
function makeTempDir(prefix: string): string {
return mkdtempSync(join("/workspace", `.tmp-${prefix}-`));
}
describe("issue #26994 - NODE_ENV not baked into --compile output", () => {
test("process.env.NODE_ENV and BUN_ENV read from runtime environment", () => {
const dir = makeTempDir("issue-26994");
try {
writeFileSync(join(dir, "index.ts"), `console.log(process.env.NODE_ENV + "," + process.env.BUN_ENV);`);
const binPath = join(dir, "test-bin");
// Build standalone executable
const buildResult = Bun.spawnSync({
cmd: [bunExe(), "build", "--compile", join(dir, "index.ts"), "--outfile", binPath],
env: bunEnv,
stderr: "pipe",
});
expect(buildResult.stderr.toString()).toBe("");
expect(buildResult.exitCode).toBe(0);
// Run with NODE_ENV=production, BUN_ENV=production
const runProduction = Bun.spawnSync({
cmd: [binPath],
env: { ...bunEnv, NODE_ENV: "production", BUN_ENV: "production" },
stdout: "pipe",
stderr: "pipe",
});
expect(runProduction.stdout.toString().trim()).toBe("production,production");
expect(runProduction.exitCode).toBe(0);
// Run with NODE_ENV=development, BUN_ENV=staging
const runDev = Bun.spawnSync({
cmd: [binPath],
env: { ...bunEnv, NODE_ENV: "development", BUN_ENV: "staging" },
stdout: "pipe",
stderr: "pipe",
});
expect(runDev.stdout.toString().trim()).toBe("development,staging");
expect(runDev.exitCode).toBe(0);
// Run with NODE_ENV=test
const runTest = Bun.spawnSync({
cmd: [binPath],
env: { ...bunEnv, NODE_ENV: "test", BUN_ENV: "test" },
stdout: "pipe",
stderr: "pipe",
});
expect(runTest.stdout.toString().trim()).toBe("test,test");
expect(runTest.exitCode).toBe(0);
} finally {
rmSync(dir, { recursive: true, force: true });
}
}, 30_000);
test("NODE_ENV comparison works at runtime in compiled binary", () => {
const dir = makeTempDir("issue-26994-cond");
try {
writeFileSync(
join(dir, "index.ts"),
`
if (process.env.NODE_ENV === "production") {
console.log("prod mode");
} else {
console.log("dev mode");
}
`,
);
const binPath = join(dir, "test-bin");
// Build standalone executable
const buildResult = Bun.spawnSync({
cmd: [bunExe(), "build", "--compile", join(dir, "index.ts"), "--outfile", binPath],
env: bunEnv,
stderr: "pipe",
});
expect(buildResult.stderr.toString()).toBe("");
expect(buildResult.exitCode).toBe(0);
// Run with production
const runProd = Bun.spawnSync({
cmd: [binPath],
env: { ...bunEnv, NODE_ENV: "production" },
stdout: "pipe",
stderr: "pipe",
});
expect(runProd.stdout.toString().trim()).toBe("prod mode");
expect(runProd.exitCode).toBe(0);
// Run with development
const runDev = Bun.spawnSync({
cmd: [binPath],
env: { ...bunEnv, NODE_ENV: "development" },
stdout: "pipe",
stderr: "pipe",
});
expect(runDev.stdout.toString().trim()).toBe("dev mode");
expect(runDev.exitCode).toBe(0);
} finally {
rmSync(dir, { recursive: true, force: true });
}
}, 30_000);
});

View File

@@ -0,0 +1,187 @@
import { SQL } from "bun";
import { expect, test } from "bun:test";
import net from "net";
test("postgres connection rejects null bytes in username", async () => {
let serverReceivedData = false;
const server = net.createServer(socket => {
serverReceivedData = true;
socket.destroy();
});
await new Promise<void>(r => server.listen(0, "127.0.0.1", () => r()));
const port = (server.address() as net.AddressInfo).port;
try {
const sql = new SQL({
hostname: "127.0.0.1",
port,
username: "alice\x00search_path\x00evil_schema,public",
database: "testdb",
max: 1,
idleTimeout: 1,
connectionTimeout: 2,
});
await sql`SELECT 1`;
expect.unreachable();
} catch (e: any) {
expect(e.message).toContain("null bytes");
} finally {
server.close();
}
// The server should never have received any data because the null byte
// should be rejected before the connection is established.
expect(serverReceivedData).toBe(false);
});
test("postgres connection rejects null bytes in database", async () => {
let serverReceivedData = false;
const server = net.createServer(socket => {
serverReceivedData = true;
socket.destroy();
});
await new Promise<void>(r => server.listen(0, "127.0.0.1", () => r()));
const port = (server.address() as net.AddressInfo).port;
try {
const sql = new SQL({
hostname: "127.0.0.1",
port,
username: "alice",
database: "testdb\x00search_path\x00evil_schema,public",
max: 1,
idleTimeout: 1,
connectionTimeout: 2,
});
await sql`SELECT 1`;
expect.unreachable();
} catch (e: any) {
expect(e.message).toContain("null bytes");
} finally {
server.close();
}
expect(serverReceivedData).toBe(false);
});
test("postgres connection rejects null bytes in password", async () => {
let serverReceivedData = false;
const server = net.createServer(socket => {
serverReceivedData = true;
socket.destroy();
});
await new Promise<void>(r => server.listen(0, "127.0.0.1", () => r()));
const port = (server.address() as net.AddressInfo).port;
try {
const sql = new SQL({
hostname: "127.0.0.1",
port,
username: "alice",
password: "pass\x00search_path\x00evil_schema",
database: "testdb",
max: 1,
idleTimeout: 1,
connectionTimeout: 2,
});
await sql`SELECT 1`;
expect.unreachable();
} catch (e: any) {
expect(e.message).toContain("null bytes");
} finally {
server.close();
}
expect(serverReceivedData).toBe(false);
});
test("postgres connection does not use truncated path with null bytes", async () => {
// The JS layer's fs.existsSync() rejects paths containing null bytes,
// so the path is dropped before reaching the native layer. Verify that a
// path with null bytes doesn't silently connect via a truncated path.
let serverReceivedData = false;
const server = net.createServer(socket => {
serverReceivedData = true;
socket.destroy();
});
await new Promise<void>(r => server.listen(0, "127.0.0.1", () => r()));
const port = (server.address() as net.AddressInfo).port;
try {
const sql = new SQL({
hostname: "127.0.0.1",
port,
username: "alice",
database: "testdb",
path: "/tmp\x00injected",
max: 1,
idleTimeout: 1,
connectionTimeout: 2,
});
await sql`SELECT 1`;
} catch {
// Expected to fail
} finally {
server.close();
}
// The path had null bytes so it should have been dropped by the JS layer,
// falling back to TCP where it hits our mock server (not a truncated Unix socket).
expect(serverReceivedData).toBe(true);
});
test("postgres connection works with normal parameters (no null bytes)", async () => {
// Verify that normal connections without null bytes still work.
// Use a mock server that sends an auth error so we can verify the
// startup message is sent correctly.
let receivedData = false;
const server = net.createServer(socket => {
socket.once("data", () => {
receivedData = true;
const errMsg = Buffer.from("SFATAL\0VFATAL\0C28000\0Mauthentication failed\0\0");
const len = errMsg.length + 4;
const header = Buffer.alloc(5);
header.write("E", 0);
header.writeInt32BE(len, 1);
socket.write(Buffer.concat([header, errMsg]));
socket.destroy();
});
});
await new Promise<void>(r => server.listen(0, "127.0.0.1", () => r()));
const port = (server.address() as net.AddressInfo).port;
try {
const sql = new SQL({
hostname: "127.0.0.1",
port,
username: "alice",
database: "testdb",
max: 1,
idleTimeout: 1,
connectionTimeout: 2,
});
await sql`SELECT 1`;
} catch {
// Expected - mock server sends auth error
} finally {
server.close();
}
// Normal parameters should connect fine - the server should receive data
expect(receivedData).toBe(true);
});

View File

@@ -0,0 +1,148 @@
import { S3Client } from "bun";
import { describe, expect, test } from "bun:test";
// Test that CRLF characters in S3 options are rejected to prevent header injection.
// See: HTTP Header Injection via S3 Content-Disposition Value
describe("S3 header injection prevention", () => {
test("contentDisposition with CRLF should throw", () => {
using server = Bun.serve({
port: 0,
fetch() {
return new Response("OK", { status: 200 });
},
});
const client = new S3Client({
accessKeyId: "test-key",
secretAccessKey: "test-secret",
endpoint: server.url.href,
bucket: "test-bucket",
});
expect(() =>
client.write("test-file.txt", "Hello", {
contentDisposition: 'attachment; filename="evil"\r\nX-Injected: value',
}),
).toThrow(/CR\/LF/);
});
test("contentEncoding with CRLF should throw", () => {
using server = Bun.serve({
port: 0,
fetch() {
return new Response("OK", { status: 200 });
},
});
const client = new S3Client({
accessKeyId: "test-key",
secretAccessKey: "test-secret",
endpoint: server.url.href,
bucket: "test-bucket",
});
expect(() =>
client.write("test-file.txt", "Hello", {
contentEncoding: "gzip\r\nX-Injected: value",
}),
).toThrow(/CR\/LF/);
});
test("type (content-type) with CRLF should throw", () => {
using server = Bun.serve({
port: 0,
fetch() {
return new Response("OK", { status: 200 });
},
});
const client = new S3Client({
accessKeyId: "test-key",
secretAccessKey: "test-secret",
endpoint: server.url.href,
bucket: "test-bucket",
});
expect(() =>
client.write("test-file.txt", "Hello", {
type: "text/plain\r\nX-Injected: value",
}),
).toThrow(/CR\/LF/);
});
test("contentDisposition with only CR should throw", () => {
using server = Bun.serve({
port: 0,
fetch() {
return new Response("OK", { status: 200 });
},
});
const client = new S3Client({
accessKeyId: "test-key",
secretAccessKey: "test-secret",
endpoint: server.url.href,
bucket: "test-bucket",
});
expect(() =>
client.write("test-file.txt", "Hello", {
contentDisposition: "attachment\rinjected",
}),
).toThrow(/CR\/LF/);
});
test("contentDisposition with only LF should throw", () => {
using server = Bun.serve({
port: 0,
fetch() {
return new Response("OK", { status: 200 });
},
});
const client = new S3Client({
accessKeyId: "test-key",
secretAccessKey: "test-secret",
endpoint: server.url.href,
bucket: "test-bucket",
});
expect(() =>
client.write("test-file.txt", "Hello", {
contentDisposition: "attachment\ninjected",
}),
).toThrow(/CR\/LF/);
});
test("valid contentDisposition without CRLF should not throw", async () => {
const { promise: requestReceived, resolve: onRequestReceived } = Promise.withResolvers<Headers>();
using server = Bun.serve({
port: 0,
async fetch(req) {
onRequestReceived(req.headers);
return new Response("OK", { status: 200 });
},
});
const client = new S3Client({
accessKeyId: "test-key",
secretAccessKey: "test-secret",
endpoint: server.url.href,
bucket: "test-bucket",
});
// Valid content-disposition values should not throw synchronously.
// The write may eventually fail because the mock server doesn't speak S3 protocol,
// but the option parsing should succeed and a request should be initiated.
expect(() =>
client.write("test-file.txt", "Hello", {
contentDisposition: 'attachment; filename="report.pdf"',
}),
).not.toThrow();
const receivedHeaders = await requestReceived;
expect(receivedHeaders.get("content-disposition")).toBe('attachment; filename="report.pdf"');
});
});