Compare commits

...

12 Commits

Author SHA1 Message Date
Claude Bot
4b10b907fc Replace std.ArrayList with bun.collections.ArrayListDefault
Replace 5 instances of `std.ArrayList(...).init(bun.default_allocator)`
with `bun.collections.ArrayListDefault(...)` for better memory management.

Changes:
- src/string.zig: ArrayList(u8) for string formatting
- src/install/lockfile.zig: ArrayList(u8) for lockfile serialization
- src/io/PipeWriter.zig: ArrayList(u8) in StreamBuffer struct
- src/s3/list_objects.zig: ArrayList for S3 contents and prefixes
- src/bundler/bundle_v2.zig: ArrayList(OutputFile) return type

ArrayListDefault uses the default allocator with zero overhead while
providing consistent deinit semantics. Used deinitShallow() where
element types don't have deinit methods (e.g., []const u8).
2025-11-08 03:44:00 +00:00
Jarred Sumner
6f8138b6e4 in build Add NO_SCCACHE env var 2025-11-07 04:40:29 -08:00
taylor.fish
23a2b2129c Use std.debug.captureStackTrace on all platforms (#24456)
In the crash reporter, we currently use glibc's `backtrace()` function
on glibc Linux targets. However, this has resulted in poor stack traces
in many scenarios, particularly when a JSC signal handler is involved,
in which case the stack trace tends to have only one frame: the signal
handler itself. Considering that JSC installs a signal handler for SEGV,
this is particularly bad.

Zig's `std.debug.captureStackTrace` generates considerably more complete
stack traces, but it has an issue where the top frame is missing when a
signal handler is involved. This is unfortunate, but it's still the
better option for now. Note that our stack traces on macOS also have
this missing frame issue.

In the future, we will investigate backporting the changes to stack
trace capturing that were recently made in Zig's `master` branch, since
that seems to have fixed the missing frame issue.

This PR still uses the stack trace provided by `backtrace()` if it
returns more frames than `captureStackTrace`. In particular, ARM may
need this behavior.

(For internal tracking: fixes ENG-21406)
2025-11-07 04:07:53 -08:00
Jarred Sumner
8ec856124c Add ccache back, with fallback for sccache 2025-11-07 04:01:10 -08:00
Jarred Sumner
94bc68f72c Ignore maxBuffer when not piped (#24440)
### What does this PR do?

### How did you verify your code works?
2025-11-07 00:54:01 -08:00
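Per the title, `maxBuffer` is now only recorded when at least one of stdin/stdout/stderr is actually piped (see the `maxBuffer` handling in the spawn diff further down). A minimal sketch of the resulting behavior, assuming the `spawnSync` result shape shown in that diff (`exitedDueToMaxBuffer`); POSIX commands are used purely for illustration:

```ts
// Sketch: maxBuffer only applies when the corresponding stdio is piped.
const limited = Bun.spawnSync({
  cmd: ["head", "-c", "1048576", "/dev/urandom"], // ~1 MiB of output
  stdout: "pipe",
  maxBuffer: 64 * 1024, // the child should be terminated once 64 KiB has been buffered
});
console.log(limited.exitedDueToMaxBuffer); // expected: true

const ignored = Bun.spawnSync({
  cmd: ["head", "-c", "1048576", "/dev/urandom"],
  stdout: "ignore", // nothing is piped back to the parent...
  stderr: "ignore",
  maxBuffer: 64 * 1024, // ...so the limit is never applied
});
console.log(ignored.exitedDueToMaxBuffer); // expected: undefined (property not set)
```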
Marko Vejnovic
75f271a306 ENG-21473: Fix installations without sccache (#24453) 2025-11-06 17:26:28 -08:00
Marko Vejnovic
267be9a54a ci(ENG-21474): Minor Cleanup (#24450) 2025-11-06 17:26:19 -08:00
Alistair Smith
44402ad27a Document & cover some missing spawn/spawnSync options (#24417) 2025-11-06 14:37:26 -08:00
pfg
e01f454635 Fix #23865 (#24355)
Fixes #23865, Fixes ENG-21446

Previously, a termination exception would be thrown. We didn't handle it
properly and eventually it got caught by a `catch @panic()` handler.
Now, no termination exception is thrown.

```
drainMicrotasksWithGlobal calls JSC__JSGlobalObject__drainMicrotasks
JSC__JSGlobalObject__drainMicrotasks returns m_terminationException
-> drainMicrotasksWithGlobal
-> event_loop.zig:exit, which catches the error and discards it
-> ...
```

For workers, we will need to handle termination exceptions in this
codepath.

~~Previously, it would see the exception, call
reportUncaughtExceptionAtEventLoop, but the exception would still
survive and escape the catch scope. You're not supposed to still have an
exception signaled at the exit of a catch scope. The exception checker
may not have caught it because the branch wasn't taken.~~

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-05 22:04:14 -08:00
Jarred Sumner
f56232a810 Move Bun.spawn & Bun.spawnSync into a separate file (#24425)
### What does this PR do?



### How did you verify your code works?

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-05 22:03:27 -08:00
Marko Vejnovic
cf0ae19c2a ENG-21468: RELEASE=1 disables sccache (#24428)
### What does this PR do?

What the title says

### How did you verify your code works?

Tested locally:

```bash
killall sccache
RELEASE=1 bun run build
sccache --show-stats
```

```
marko@fedora:~/Desktop/bun-2$ sccache --show-stats
Compile requests                      0
Compile requests executed             0
Cache hits                            0
Cache misses                          0
Cache hits rate                       -
Cache timeouts                        0
Cache read errors                     0
Forced recaches                       0
Cache write errors                    0
Cache errors                          0
Compilations                          0
Compilation failures                  0
Non-cacheable compilations            0
Non-cacheable calls                   0
Non-compilation calls                 0
Unsupported compiler calls            0
Average cache write               0.000 s
Average compiler                  0.000 s
Average cache read hit            0.000 s
Failed distributed compilations       0
Cache location                  Local disk: "/home/marko/.cache/sccache"
Use direct/preprocessor mode?   yes
Version (client)                0.12.0
Max cache size                       10 GiB
```

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-05 22:03:10 -08:00
Jarred Sumner
4ac293bf01 Add error when loading known unsupported v8 c++ api (#24384)
### What does this PR do?

### How did you verify your code works?
2025-11-05 19:17:03 -08:00
28 changed files with 1658 additions and 1213 deletions

View File

@@ -223,7 +223,7 @@ function getImageName(platform, options) {
* @param {number} [limit]
* @link https://buildkite.com/docs/pipelines/command-step#retry-attributes
*/
function getRetry(limit = 0) {
function getRetry() {
return {
manual: {
permit_on_passed: true,
@@ -292,7 +292,7 @@ function getEc2Agent(platform, options, ec2Options) {
* @returns {string}
*/
function getCppAgent(platform, options) {
const { os, arch, distro } = platform;
const { os, arch } = platform;
if (os === "darwin") {
return {
@@ -313,7 +313,7 @@ function getCppAgent(platform, options) {
* @returns {string}
*/
function getLinkBunAgent(platform, options) {
const { os, arch, distro } = platform;
const { os, arch } = platform;
if (os === "darwin") {
return {
@@ -352,14 +352,7 @@ function getZigPlatform() {
* @param {PipelineOptions} options
* @returns {Agent}
*/
function getZigAgent(platform, options) {
const { arch } = platform;
// Uncomment to restore to using macOS on-prem for Zig.
// return {
// queue: "build-zig",
// };
function getZigAgent(_platform, options) {
return getEc2Agent(getZigPlatform(), options, {
instanceType: "r8g.large",
});
@@ -461,23 +454,6 @@ function getBuildCommand(target, options, label) {
return `bun run build:${buildProfile}`;
}
/**
* @param {Platform} platform
* @param {PipelineOptions} options
* @returns {Step}
*/
function getBuildVendorStep(platform, options) {
return {
key: `${getTargetKey(platform)}-build-vendor`,
label: `${getTargetLabel(platform)} - build-vendor`,
agents: getCppAgent(platform, options),
retry: getRetry(),
cancel_on_build_failing: isMergeQueue(),
env: getBuildEnv(platform, options),
command: `${getBuildCommand(platform, options)} --target dependencies`,
};
}
/**
* @param {Platform} platform
* @param {PipelineOptions} options
@@ -527,9 +503,9 @@ function getBuildZigStep(platform, options) {
const toolchain = getBuildToolchain(platform);
return {
key: `${getTargetKey(platform)}-build-zig`,
retry: getRetry(),
label: `${getTargetLabel(platform)} - build-zig`,
agents: getZigAgent(platform, options),
retry: getRetry(),
cancel_on_build_failing: isMergeQueue(),
env: getBuildEnv(platform, options),
command: `${getBuildCommand(platform, options)} --target bun-zig --toolchain ${toolchain}`,

View File

@@ -24,7 +24,16 @@ if(CMAKE_HOST_APPLE)
include(SetupMacSDK)
endif()
include(SetupLLVM)
include(SetupSccache)
find_program(SCCACHE_PROGRAM sccache)
if(SCCACHE_PROGRAM AND NOT DEFINED ENV{NO_SCCACHE})
include(SetupSccache)
else()
find_program(CCACHE_PROGRAM ccache)
if(CCACHE_PROGRAM)
include(SetupCcache)
endif()
endif()
# --- Project ---

View File

@@ -310,7 +310,7 @@ function(find_command)
${FIND_VALIDATOR}
)
if(NOT FIND_REQUIRED STREQUAL "OFF" AND ${FIND_VARIABLE} MATCHES "NOTFOUND")
if(FIND_REQUIRED AND ${FIND_VARIABLE} MATCHES "NOTFOUND")
set(error "Command not found: \"${FIND_NAME}\"")
if(FIND_VERSION)

View File

@@ -0,0 +1,54 @@
optionx(ENABLE_CCACHE BOOL "If ccache should be enabled" DEFAULT ON)
if(NOT ENABLE_CCACHE OR CACHE_STRATEGY STREQUAL "none")
setenv(CCACHE_DISABLE 1)
return()
endif()
if (CI AND NOT APPLE)
setenv(CCACHE_DISABLE 1)
return()
endif()
find_command(
VARIABLE
CCACHE_PROGRAM
COMMAND
ccache
REQUIRED
${CI}
)
if(NOT CCACHE_PROGRAM)
return()
endif()
set(CCACHE_ARGS CMAKE_C_COMPILER_LAUNCHER CMAKE_CXX_COMPILER_LAUNCHER)
foreach(arg ${CCACHE_ARGS})
setx(${arg} ${CCACHE_PROGRAM})
list(APPEND CMAKE_ARGS -D${arg}=${${arg}})
endforeach()
setenv(CCACHE_DIR ${CACHE_PATH}/ccache)
setenv(CCACHE_BASEDIR ${CWD})
setenv(CCACHE_NOHASHDIR 1)
if(CACHE_STRATEGY STREQUAL "read-only")
setenv(CCACHE_READONLY 1)
elseif(CACHE_STRATEGY STREQUAL "write-only")
setenv(CCACHE_RECACHE 1)
endif()
setenv(CCACHE_FILECLONE 1)
setenv(CCACHE_STATSLOG ${BUILD_PATH}/ccache.log)
if(CI)
# FIXME: Does not work on Ubuntu 18.04
# setenv(CCACHE_SLOPPINESS "pch_defines,time_macros,locale,clang_index_store,gcno_cwd,include_file_ctime,include_file_mtime")
else()
setenv(CCACHE_MAXSIZE 100G)
setenv(CCACHE_SLOPPINESS "pch_defines,time_macros,locale,random_seed,clang_index_store,gcno_cwd")
endif()

View File

@@ -5289,7 +5289,12 @@ declare module "bun" {
options: udp.ConnectSocketOptions<DataBinaryType>,
): Promise<udp.ConnectedSocket<DataBinaryType>>;
namespace SpawnOptions {
/**
* @deprecated use {@link Bun.Spawn} instead
*/
export import SpawnOptions = Spawn;
namespace Spawn {
/**
* Option for stdout/stderr
*/
@@ -5320,7 +5325,12 @@ declare module "bun" {
| Response
| Request;
interface OptionsObject<In extends Writable, Out extends Readable, Err extends Readable> {
/**
* @deprecated use BaseOptions or the specific options for the specific {@link spawn} or {@link spawnSync} usage
*/
type OptionsObject<In extends Writable, Out extends Readable, Err extends Readable> = BaseOptions<In, Out, Err>;
interface BaseOptions<In extends Writable, Out extends Readable, Err extends Readable> {
/**
* The current working directory of the process
*
@@ -5328,6 +5338,22 @@ declare module "bun" {
*/
cwd?: string;
/**
* Run the child in a separate process group, detached from the parent.
*
* - POSIX: calls `setsid()` so the child starts a new session and becomes
* the process group leader. It can outlive the parent and receive
* signals independently of the parent's terminal/process group.
* - Windows: sets `UV_PROCESS_DETACHED`, allowing the child to outlive
* the parent and receive signals independently.
*
* Note: stdio may keep the parent process alive. Pass `stdio: ["ignore",
* "ignore", "ignore"]` to the spawn constructor to prevent this.
*
* @default false
*/
detached?: boolean;
/**
* The environment variables of the process
*
@@ -5430,6 +5456,48 @@ declare module "bun" {
error?: ErrorLike,
): void | Promise<void>;
/**
* Called exactly once when the IPC channel between the parent and this
* subprocess is closed. After this runs, no further IPC messages will be
* delivered.
*
* When it fires:
* - The child called `process.disconnect()` or the parent called
* `subprocess.disconnect()`.
* - The child exited for any reason (normal exit or due to a signal like
* `SIGILL`, `SIGKILL`, etc.).
* - The child replaced itself with a program that does not support Bun
* IPC.
*
* Notes:
* - This callback indicates that the pipe is closed; it is not an error
* by itself. Use {@link onExit} or {@link Subprocess.exited} to
* determine why the process ended.
* - It may occur before or after {@link onExit} depending on timing; do
* not rely on ordering. Typically, if you or the child call
* `disconnect()` first, this fires before {@link onExit}; if the
* process exits without an explicit disconnect, either may happen
* first.
* - Only runs when {@link ipc} is enabled and runs at most once per
* subprocess.
* - If the child becomes a zombie (exited but not yet reaped), the IPC is
* already closed, and this callback will fire (or may already have
* fired).
*
* @example
*
* ```ts
* const subprocess = spawn({
* cmd: ["echo", "hello"],
* ipc: (message) => console.log(message),
* onDisconnect: () => {
* console.log("IPC channel disconnected");
* },
* });
* ```
*/
onDisconnect?(): void | Promise<void>;
/**
* When specified, Bun will open an IPC channel to the subprocess. The passed callback is called for
* incoming messages, and `subprocess.send` can send messages to the subprocess. Messages are serialized
@@ -5549,6 +5617,34 @@ declare module "bun" {
maxBuffer?: number;
}
interface SpawnSyncOptions<In extends Writable, Out extends Readable, Err extends Readable>
extends BaseOptions<In, Out, Err> {}
interface SpawnOptions<In extends Writable, Out extends Readable, Err extends Readable>
extends BaseOptions<In, Out, Err> {
/**
* If true, stdout and stderr pipes will not automatically start reading
* data. Reading will only begin when you access the `stdout` or `stderr`
* properties.
*
* This can improve performance when you don't need to read output
* immediately.
*
* @default false
*
* @example
* ```ts
* const subprocess = Bun.spawn({
* cmd: ["echo", "hello"],
* lazy: true, // Don't start reading stdout until accessed
* });
* // stdout reading hasn't started yet
* await subprocess.stdout.text(); // Now reading starts
* ```
*/
lazy?: boolean;
}
type ReadableToIO<X extends Readable> = X extends "pipe" | undefined
? ReadableStream<Uint8Array<ArrayBuffer>>
: X extends BunFile | ArrayBufferView | number
@@ -5806,7 +5902,7 @@ declare module "bun" {
const Out extends SpawnOptions.Readable = "pipe",
const Err extends SpawnOptions.Readable = "inherit",
>(
options: SpawnOptions.OptionsObject<In, Out, Err> & {
options: SpawnOptions.SpawnOptions<In, Out, Err> & {
/**
* The command to run
*
@@ -5856,7 +5952,7 @@ declare module "bun" {
* ```
*/
cmds: string[],
options?: SpawnOptions.OptionsObject<In, Out, Err>,
options?: SpawnOptions.SpawnOptions<In, Out, Err>,
): Subprocess<In, Out, Err>;
/**
@@ -5878,7 +5974,7 @@ declare module "bun" {
const Out extends SpawnOptions.Readable = "pipe",
const Err extends SpawnOptions.Readable = "pipe",
>(
options: SpawnOptions.OptionsObject<In, Out, Err> & {
options: SpawnOptions.SpawnSyncOptions<In, Out, Err> & {
/**
* The command to run
*
@@ -5929,7 +6025,7 @@ declare module "bun" {
* ```
*/
cmds: string[],
options?: SpawnOptions.OptionsObject<In, Out, Err>,
options?: SpawnOptions.SpawnSyncOptions<In, Out, Err>,
): SyncSubprocess<Out, Err>;
/** Utility type for any process from {@link Bun.spawn()} with both stdout and stderr set to `"pipe"` */
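A combined usage sketch of the options documented in this hunk (`detached`, `onDisconnect`, `lazy`); the `./worker.ts` path is hypothetical, and the behavior notes come from the TSDoc above:

```ts
// Hypothetical ./worker.ts: any script that receives and sends Bun IPC messages.
const child = Bun.spawn({
  cmd: ["bun", "./worker.ts"],
  detached: true, // new session / process group; the child can outlive the parent
  stdio: ["ignore", "ignore", "ignore"], // per the docs, open stdio can keep the parent alive
  ipc(message) {
    console.log("from child:", message);
  },
  onDisconnect() {
    // Fires at most once, whether the channel was closed explicitly or the child exited.
    console.log("IPC channel closed");
  },
});
child.send("hello");

const lazyChild = Bun.spawn({
  cmd: ["echo", "hello"],
  lazy: true, // stdout is not read until accessed
});
console.log(await lazyChild.stdout.text()); // reading starts here
```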

View File

@@ -262,7 +262,7 @@ declare module "bun:test" {
*/
each<T extends Readonly<[any, ...any[]]>>(table: readonly T[]): Describe<[...T]>;
each<T extends any[]>(table: readonly T[]): Describe<[...T]>;
each<T>(table: T[]): Describe<[T]>;
each<const T>(table: T[]): Describe<[T]>;
}
/**
* Describes a group of related tests.
@@ -552,7 +552,7 @@ declare module "bun:test" {
*/
each<T extends Readonly<[unknown, ...unknown[]]>>(table: readonly T[]): Test<T>;
each<T extends unknown[]>(table: readonly T[]): Test<T>;
each<T>(table: T[]): Test<[T]>;
each<const T>(table: T[]): Test<[T]>;
}
/**
* Runs a test.
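The `each<T>` to `each<const T>` change lets table rows keep their literal types without an `as const` assertion; a small sketch of the effect:

```ts
import { expect, test } from "bun:test";

// With `each<const T>`, each row keeps its literal object type,
// so `word` and `length` are inferred from the table itself.
test.each([
  { word: "bun", length: 3 },
  { word: "spawn", length: 5 },
])("string lengths", ({ word, length }) => {
  expect(word.length).toBe(length);
});
```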

View File

@@ -3,10 +3,12 @@ import { existsSync, readFileSync } from "node:fs";
import { basename, join, relative, resolve } from "node:path";
import {
formatAnnotationToHtml,
getEnv,
getSecret,
isCI,
isWindows,
parseAnnotations,
parseBoolean,
printEnvironment,
reportAnnotationToBuildKite,
startGroup,
@@ -71,7 +73,7 @@ async function build(args) {
}
if (!generateOptions["-DCACHE_STRATEGY"]) {
generateOptions["-DCACHE_STRATEGY"] = "read-write";
generateOptions["-DCACHE_STRATEGY"] = parseBoolean(getEnv("RELEASE", false) || "false") ? "none" : "read-write";
}
const toolchain = generateOptions["--toolchain"];

View File

@@ -2494,7 +2494,7 @@ export function formatAnnotationToHtml(annotation, options = {}) {
* @param {AnnotationOptions} [options]
* @returns {AnnotationResult}
*/
export function parseAnnotations(content, options = {}) {
export function parseAnnotations(content) {
/** @type {Annotation[]} */
const annotations = [];

src/SignalCode.zig (new file, 169 lines)
View File

@@ -0,0 +1,169 @@
pub const SignalCode = enum(u8) {
SIGHUP = 1,
SIGINT = 2,
SIGQUIT = 3,
SIGILL = 4,
SIGTRAP = 5,
SIGABRT = 6,
SIGBUS = 7,
SIGFPE = 8,
SIGKILL = 9,
SIGUSR1 = 10,
SIGSEGV = 11,
SIGUSR2 = 12,
SIGPIPE = 13,
SIGALRM = 14,
SIGTERM = 15,
SIG16 = 16,
SIGCHLD = 17,
SIGCONT = 18,
SIGSTOP = 19,
SIGTSTP = 20,
SIGTTIN = 21,
SIGTTOU = 22,
SIGURG = 23,
SIGXCPU = 24,
SIGXFSZ = 25,
SIGVTALRM = 26,
SIGPROF = 27,
SIGWINCH = 28,
SIGIO = 29,
SIGPWR = 30,
SIGSYS = 31,
_,
// The `subprocess.kill()` method sends a signal to the child process. If no
// argument is given, the process will be sent the 'SIGTERM' signal.
pub const default = SignalCode.SIGTERM;
pub const Map = ComptimeEnumMap(SignalCode);
pub fn name(value: SignalCode) ?[]const u8 {
if (@intFromEnum(value) <= @intFromEnum(SignalCode.SIGSYS)) {
return asByteSlice(@tagName(value));
}
return null;
}
pub fn valid(value: SignalCode) bool {
return @intFromEnum(value) <= @intFromEnum(SignalCode.SIGSYS) and @intFromEnum(value) >= @intFromEnum(SignalCode.SIGHUP);
}
/// Shell scripts use exit codes 128 + signal number
/// https://tldp.org/LDP/abs/html/exitcodes.html
pub fn toExitCode(value: SignalCode) ?u8 {
return switch (@intFromEnum(value)) {
1...31 => 128 +% @intFromEnum(value),
else => null,
};
}
pub fn description(signal: SignalCode) ?[]const u8 {
// Description names copied from fish
// https://github.com/fish-shell/fish-shell/blob/00ffc397b493f67e28f18640d3de808af29b1434/fish-rust/src/signal.rs#L420
return switch (signal) {
.SIGHUP => "Terminal hung up",
.SIGINT => "Quit request",
.SIGQUIT => "Quit request",
.SIGILL => "Illegal instruction",
.SIGTRAP => "Trace or breakpoint trap",
.SIGABRT => "Abort",
.SIGBUS => "Misaligned address error",
.SIGFPE => "Floating point exception",
.SIGKILL => "Forced quit",
.SIGUSR1 => "User defined signal 1",
.SIGUSR2 => "User defined signal 2",
.SIGSEGV => "Address boundary error",
.SIGPIPE => "Broken pipe",
.SIGALRM => "Timer expired",
.SIGTERM => "Polite quit request",
.SIGCHLD => "Child process status changed",
.SIGCONT => "Continue previously stopped process",
.SIGSTOP => "Forced stop",
.SIGTSTP => "Stop request from job control (^Z)",
.SIGTTIN => "Stop from terminal input",
.SIGTTOU => "Stop from terminal output",
.SIGURG => "Urgent socket condition",
.SIGXCPU => "CPU time limit exceeded",
.SIGXFSZ => "File size limit exceeded",
.SIGVTALRM => "Virtual timefr expired",
.SIGPROF => "Profiling timer expired",
.SIGWINCH => "Window size change",
.SIGIO => "I/O on asynchronous file descriptor is possible",
.SIGSYS => "Bad system call",
.SIGPWR => "Power failure",
else => null,
};
}
pub fn from(value: anytype) SignalCode {
return @enumFromInt(std.mem.asBytes(&value)[0]);
}
// This wrapper struct is lame, what if bun's color formatter was more versatile
const Fmt = struct {
signal: SignalCode,
enable_ansi_colors: bool,
pub fn format(this: Fmt, comptime _: []const u8, _: std.fmt.FormatOptions, writer: anytype) !void {
const signal = this.signal;
switch (this.enable_ansi_colors) {
inline else => |enable_ansi_colors| {
if (signal.name()) |str| if (signal.description()) |desc| {
try writer.print(Output.prettyFmt("{s} <d>({s})<r>", enable_ansi_colors), .{ str, desc });
return;
};
try writer.print("code {d}", .{@intFromEnum(signal)});
},
}
}
};
pub fn fmt(signal: SignalCode, enable_ansi_colors: bool) Fmt {
return .{ .signal = signal, .enable_ansi_colors = enable_ansi_colors };
}
pub fn fromJS(arg: jsc.JSValue, globalThis: *jsc.JSGlobalObject) !SignalCode {
if (arg.getNumber()) |sig64| {
// Node does this:
if (std.math.isNan(sig64)) {
return SignalCode.default;
}
// This matches node behavior, minus some details with the error messages: https://gist.github.com/Jarred-Sumner/23ba38682bf9d84dff2f67eb35c42ab6
if (std.math.isInf(sig64) or @trunc(sig64) != sig64) {
return globalThis.throwInvalidArguments("Unknown signal", .{});
}
if (sig64 < 0) {
return globalThis.throwInvalidArguments("Invalid signal: must be >= 0", .{});
}
if (sig64 > 31) {
return globalThis.throwInvalidArguments("Invalid signal: must be < 32", .{});
}
const code: SignalCode = @enumFromInt(@as(u8, @intFromFloat(sig64)));
return code;
} else if (arg.isString()) {
if (arg.asString().length() == 0) {
return SignalCode.default;
}
const signal_code = try arg.toEnum(globalThis, "signal", SignalCode);
return signal_code;
} else if (!arg.isEmptyOrUndefinedOrNull()) {
return globalThis.throwInvalidArguments("Invalid signal: must be a string or an integer", .{});
}
return SignalCode.default;
}
};
const std = @import("std");
const bun = @import("bun");
const ComptimeEnumMap = bun.ComptimeEnumMap;
const Output = bun.Output;
const asByteSlice = bun.asByteSlice;
const jsc = bun.jsc;
const JSGlobalObject = jsc.JSGlobalObject;
const JSValue = jsc.JSValue;
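The `toExitCode` convention above (128 + signal number) and the `SIGTERM` default are observable from JavaScript; a POSIX-only sketch using the public `Subprocess` API:

```ts
// SIGKILL is signal 9; shells report such deaths as exit code 128 + 9 = 137.
const child = Bun.spawn({ cmd: ["sleep", "10"] });
child.kill("SIGKILL"); // with no argument, kill() defaults to SIGTERM
await child.exited;
console.log(child.signalCode); // "SIGKILL"
console.log(child.exitCode);   // null: the process was terminated by a signal
```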

View File

@@ -322,7 +322,7 @@ pub fn buildWithVm(ctx: bun.cli.Command.Context, cwd: []const u8, vm: *VirtualMa
allocator,
.{ .js = vm.event_loop },
);
const bundled_outputs = bundled_outputs_list.items;
const bundled_outputs = bundled_outputs_list.items();
if (bundled_outputs.len == 0) {
Output.prettyln("done", .{});
Output.flush();

View File

@@ -0,0 +1,934 @@
// This is split into a separate function to conserve stack space.
// On Windows, a single path buffer can take 64 KB.
fn getArgv0(globalThis: *jsc.JSGlobalObject, PATH: []const u8, cwd: []const u8, pretend_argv0: ?[*:0]const u8, first_cmd: JSValue, allocator: std.mem.Allocator) bun.JSError!struct {
argv0: [:0]const u8,
arg0: [:0]u8,
} {
var arg0 = try first_cmd.toSliceOrNullWithAllocator(globalThis, allocator);
defer arg0.deinit();
// Heap allocate it to ensure we don't run out of stack space.
const path_buf: *bun.PathBuffer = try bun.default_allocator.create(bun.PathBuffer);
defer bun.default_allocator.destroy(path_buf);
var actual_argv0: [:0]const u8 = "";
const argv0_to_use: []const u8 = arg0.slice();
// This mimics libuv's behavior, which mimics execvpe
// Only resolve from $PATH when the command is not an absolute path
const PATH_to_use: []const u8 = if (strings.containsChar(argv0_to_use, '/'))
""
// If no $PATH is provided, we fall back to the one from environ
// This is already the behavior of the PATH passed in here.
else if (PATH.len > 0)
PATH
else if (comptime Environment.isPosix)
// If the user explicitly passed an empty $PATH, we fall back to the OS-specific default (which libuv also does)
bun.sliceTo(BUN_DEFAULT_PATH_FOR_SPAWN, 0)
else
"";
if (PATH_to_use.len == 0) {
actual_argv0 = try allocator.dupeZ(u8, argv0_to_use);
} else {
const resolved = which(path_buf, PATH_to_use, cwd, argv0_to_use) orelse {
return throwCommandNotFound(globalThis, argv0_to_use);
};
actual_argv0 = try allocator.dupeZ(u8, resolved);
}
return .{
.argv0 = actual_argv0,
.arg0 = if (pretend_argv0) |p| try allocator.dupeZ(u8, bun.sliceTo(p, 0)) else try allocator.dupeZ(u8, arg0.slice()),
};
}
/// `argv` for `Bun.spawn` & `Bun.spawnSync`
fn getArgv(globalThis: *jsc.JSGlobalObject, args: JSValue, PATH: []const u8, cwd: []const u8, argv0: *?[*:0]const u8, allocator: std.mem.Allocator, argv: *std.ArrayList(?[*:0]const u8)) bun.JSError!void {
var cmds_array = try args.arrayIterator(globalThis);
// + 1 for argv0
// + 1 for null terminator
argv.* = try @TypeOf(argv.*).initCapacity(allocator, cmds_array.len + 2);
if (args.isEmptyOrUndefinedOrNull()) {
return globalThis.throwInvalidArguments("cmd must be an array of strings", .{});
}
if (cmds_array.len == 0) {
return globalThis.throwInvalidArguments("cmd must not be empty", .{});
}
const argv0_result = try getArgv0(globalThis, PATH, cwd, argv0.*, (try cmds_array.next()).?, allocator);
argv0.* = argv0_result.argv0.ptr;
argv.appendAssumeCapacity(argv0_result.arg0.ptr);
while (try cmds_array.next()) |value| {
const arg = try value.toBunString(globalThis);
defer arg.deref();
argv.appendAssumeCapacity(try arg.toOwnedSliceZ(allocator));
}
if (argv.items.len == 0) {
return globalThis.throwInvalidArguments("cmd must be an array of strings", .{});
}
}
/// Bun.spawn() calls this.
pub fn spawn(globalThis: *jsc.JSGlobalObject, args: JSValue, secondaryArgsValue: ?JSValue) bun.JSError!JSValue {
return spawnMaybeSync(globalThis, args, secondaryArgsValue, false);
}
/// Bun.spawnSync() calls this.
pub fn spawnSync(globalThis: *jsc.JSGlobalObject, args: JSValue, secondaryArgsValue: ?JSValue) bun.JSError!JSValue {
return spawnMaybeSync(globalThis, args, secondaryArgsValue, true);
}
pub fn spawnMaybeSync(
globalThis: *jsc.JSGlobalObject,
args_: JSValue,
secondaryArgsValue: ?JSValue,
comptime is_sync: bool,
) bun.JSError!JSValue {
if (comptime is_sync) {
// We skip this on Windows due to test failures.
if (comptime !Environment.isWindows) {
// Since the event loop is recursively called, we need to check if it's safe to recurse.
if (!bun.StackCheck.init().isSafeToRecurse()) {
return globalThis.throwStackOverflow();
}
}
}
var arena = bun.ArenaAllocator.init(bun.default_allocator);
defer arena.deinit();
const allocator = arena.allocator();
var override_env = false;
var env_array = std.ArrayListUnmanaged(?[*:0]const u8){};
var jsc_vm = globalThis.bunVM();
var cwd = jsc_vm.transpiler.fs.top_level_dir;
var stdio = [3]Stdio{
.{ .ignore = {} },
.{ .pipe = {} },
.{ .inherit = {} },
};
if (comptime is_sync) {
stdio[1] = .{ .pipe = {} };
stdio[2] = .{ .pipe = {} };
}
var lazy = false;
var on_exit_callback = JSValue.zero;
var on_disconnect_callback = JSValue.zero;
var PATH = jsc_vm.transpiler.env.get("PATH") orelse "";
var argv = std.ArrayList(?[*:0]const u8).init(allocator);
var cmd_value = JSValue.zero;
var detached = false;
var args = args_;
var maybe_ipc_mode: if (is_sync) void else ?IPC.Mode = if (is_sync) {} else null;
var ipc_callback: JSValue = .zero;
var extra_fds = std.ArrayList(bun.spawn.SpawnOptions.Stdio).init(bun.default_allocator);
var argv0: ?[*:0]const u8 = null;
var ipc_channel: i32 = -1;
var timeout: ?i32 = null;
var killSignal: SignalCode = SignalCode.default;
var maxBuffer: ?i64 = null;
var windows_hide: bool = false;
var windows_verbatim_arguments: bool = false;
var abort_signal: ?*jsc.WebCore.AbortSignal = null;
defer {
// Ensure we clean it up on error.
if (abort_signal) |signal| {
signal.unref();
}
}
{
if (args.isEmptyOrUndefinedOrNull()) {
return globalThis.throwInvalidArguments("cmd must be an array", .{});
}
const args_type = args.jsType();
if (args_type.isArray()) {
cmd_value = args;
args = secondaryArgsValue orelse JSValue.zero;
} else if (!args.isObject()) {
return globalThis.throwInvalidArguments("cmd must be an array", .{});
} else if (try args.getTruthy(globalThis, "cmd")) |cmd_value_| {
cmd_value = cmd_value_;
} else {
return globalThis.throwInvalidArguments("cmd must be an array", .{});
}
if (args.isObject()) {
if (try args.getTruthy(globalThis, "argv0")) |argv0_| {
const argv0_str = try argv0_.getZigString(globalThis);
if (argv0_str.len > 0) {
argv0 = try argv0_str.toOwnedSliceZ(allocator);
}
}
// need to update `cwd` before searching for executable with `Which.which`
if (try args.getTruthy(globalThis, "cwd")) |cwd_| {
const cwd_str = try cwd_.getZigString(globalThis);
if (cwd_str.len > 0) {
cwd = try cwd_str.toOwnedSliceZ(allocator);
}
}
}
if (args != .zero and args.isObject()) {
// This must run before the stdio parsing happens
if (!is_sync) {
if (try args.getTruthy(globalThis, "ipc")) |val| {
if (val.isCell() and val.isCallable()) {
maybe_ipc_mode = ipc_mode: {
if (try args.getTruthy(globalThis, "serialization")) |mode_val| {
if (mode_val.isString()) {
break :ipc_mode try IPC.Mode.fromJS(globalThis, mode_val) orelse {
return globalThis.throwInvalidArguments("serialization must be \"json\" or \"advanced\"", .{});
};
} else {
if (!globalThis.hasException()) {
return globalThis.throwInvalidArgumentType("spawn", "serialization", "string");
}
return .zero;
}
}
break :ipc_mode .advanced;
};
ipc_callback = val.withAsyncContextIfNeeded(globalThis);
}
}
}
if (try args.getTruthy(globalThis, "signal")) |signal_val| {
if (signal_val.as(jsc.WebCore.AbortSignal)) |signal| {
abort_signal = signal.ref();
} else {
return globalThis.throwInvalidArgumentTypeValue("signal", "AbortSignal", signal_val);
}
}
if (try args.getTruthy(globalThis, "onDisconnect")) |onDisconnect_| {
if (!onDisconnect_.isCell() or !onDisconnect_.isCallable()) {
return globalThis.throwInvalidArguments("onDisconnect must be a function or undefined", .{});
}
on_disconnect_callback = if (comptime is_sync)
onDisconnect_
else
onDisconnect_.withAsyncContextIfNeeded(globalThis);
}
if (try args.getTruthy(globalThis, "onExit")) |onExit_| {
if (!onExit_.isCell() or !onExit_.isCallable()) {
return globalThis.throwInvalidArguments("onExit must be a function or undefined", .{});
}
on_exit_callback = if (comptime is_sync)
onExit_
else
onExit_.withAsyncContextIfNeeded(globalThis);
}
if (try args.getTruthy(globalThis, "env")) |env_arg| {
env_arg.ensureStillAlive();
const object = env_arg.getObject() orelse {
return globalThis.throwInvalidArguments("env must be an object", .{});
};
override_env = true;
// If the env object does not include a $PATH, it must disable path lookup for argv[0]
var NEW_PATH: []const u8 = "";
var envp_managed = env_array.toManaged(allocator);
try appendEnvpFromJS(globalThis, object, &envp_managed, &NEW_PATH);
env_array = envp_managed.moveToUnmanaged();
PATH = NEW_PATH;
}
try getArgv(globalThis, cmd_value, PATH, cwd, &argv0, allocator, &argv);
if (try args.get(globalThis, "stdio")) |stdio_val| {
if (!stdio_val.isEmptyOrUndefinedOrNull()) {
if (stdio_val.jsType().isArray()) {
var stdio_iter = try stdio_val.arrayIterator(globalThis);
var i: u31 = 0;
while (try stdio_iter.next()) |value| : (i += 1) {
try stdio[i].extract(globalThis, i, value, is_sync);
if (i == 2)
break;
}
i += 1;
while (try stdio_iter.next()) |value| : (i += 1) {
var new_item: Stdio = undefined;
try new_item.extract(globalThis, i, value, is_sync);
const opt = switch (new_item.asSpawnOption(i)) {
.result => |opt| opt,
.err => |e| {
return e.throwJS(globalThis);
},
};
if (opt == .ipc) {
ipc_channel = @intCast(extra_fds.items.len);
}
try extra_fds.append(opt);
}
} else {
return globalThis.throwInvalidArguments("stdio must be an array", .{});
}
}
} else {
if (try args.get(globalThis, "stdin")) |value| {
try stdio[0].extract(globalThis, 0, value, is_sync);
}
if (try args.get(globalThis, "stderr")) |value| {
try stdio[2].extract(globalThis, 2, value, is_sync);
}
if (try args.get(globalThis, "stdout")) |value| {
try stdio[1].extract(globalThis, 1, value, is_sync);
}
}
if (comptime !is_sync) {
if (try args.get(globalThis, "lazy")) |lazy_val| {
if (lazy_val.isBoolean()) {
lazy = lazy_val.toBoolean();
}
}
}
if (try args.get(globalThis, "detached")) |detached_val| {
if (detached_val.isBoolean()) {
detached = detached_val.toBoolean();
}
}
if (Environment.isWindows) {
if (try args.get(globalThis, "windowsHide")) |val| {
if (val.isBoolean()) {
windows_hide = val.asBoolean();
}
}
if (try args.get(globalThis, "windowsVerbatimArguments")) |val| {
if (val.isBoolean()) {
windows_verbatim_arguments = val.asBoolean();
}
}
}
if (try args.get(globalThis, "timeout")) |timeout_value| brk: {
if (timeout_value != .null) {
if (timeout_value.isNumber() and std.math.isPositiveInf(timeout_value.asNumber())) {
break :brk;
}
const timeout_int = try globalThis.validateIntegerRange(timeout_value, u64, 0, .{ .min = 0, .field_name = "timeout" });
if (timeout_int > 0)
timeout = @intCast(@as(u31, @truncate(timeout_int)));
}
}
if (try args.get(globalThis, "killSignal")) |val| {
killSignal = try bun.SignalCode.fromJS(val, globalThis);
}
if (try args.get(globalThis, "maxBuffer")) |val| {
if (val.isNumber() and val.isFinite()) { // 'Infinity' does not set maxBuffer
const value = try val.coerce(i64, globalThis);
if (value > 0 and (stdio[0].isPiped() or stdio[1].isPiped() or stdio[2].isPiped())) {
maxBuffer = value;
}
}
}
} else {
try getArgv(globalThis, cmd_value, PATH, cwd, &argv0, allocator, &argv);
}
}
log("spawn maxBuffer: {?d}", .{maxBuffer});
if (!override_env and env_array.items.len == 0) {
env_array.items = jsc_vm.transpiler.env.map.createNullDelimitedEnvMap(allocator) catch |err| return globalThis.throwError(err, "in Bun.spawn") catch return .zero;
env_array.capacity = env_array.items.len;
}
inline for (0..stdio.len) |fd_index| {
if (stdio[fd_index].canUseMemfd(is_sync, fd_index > 0 and maxBuffer != null)) {
if (stdio[fd_index].useMemfd(fd_index)) {
jsc_vm.counters.mark(.spawn_memfd);
}
}
}
var should_close_memfd = Environment.isLinux;
defer {
if (should_close_memfd) {
inline for (0..stdio.len) |fd_index| {
if (stdio[fd_index] == .memfd) {
stdio[fd_index].memfd.close();
stdio[fd_index] = .ignore;
}
}
}
}
//"NODE_CHANNEL_FD=" is 16 bytes long, 15 bytes for the number, and 1 byte for the null terminator should be enough/safe
var ipc_env_buf: [32]u8 = undefined;
if (!is_sync) if (maybe_ipc_mode) |ipc_mode| {
// IPC is currently implemented in a very limited way.
//
// Node lets you pass as many fds as you want; they all become sockets. Then, IPC is just a special
// runtime-owned version of "pipe" (in which pipe is a misleading name since they're bidirectional sockets).
//
// Bun currently only supports three fds: stdin, stdout, and stderr, which are all unidirectional
//
// And then one fd is assigned specifically and only for IPC. If the user doesn't specify one, we add one (default: 3).
//
// When Bun.spawn() is given an `.ipc` callback, it enables IPC as follows:
env_array.ensureUnusedCapacity(allocator, 3) catch |err| return globalThis.throwError(err, "in Bun.spawn") catch return .zero;
const ipc_fd: i32 = brk: {
if (ipc_channel == -1) {
// If the user didn't specify an IPC channel, we need to add one
ipc_channel = @intCast(extra_fds.items.len);
var ipc_extra_fd_default = Stdio{ .ipc = {} };
const fd: i32 = ipc_channel + 3;
switch (ipc_extra_fd_default.asSpawnOption(fd)) {
.result => |opt| {
try extra_fds.append(opt);
},
.err => |e| {
return e.throwJS(globalThis);
},
}
break :brk fd;
} else {
break :brk @intCast(ipc_channel + 3);
}
};
const pipe_env = std.fmt.bufPrintZ(
&ipc_env_buf,
"NODE_CHANNEL_FD={d}",
.{ipc_fd},
) catch {
return globalThis.throwOutOfMemory();
};
env_array.appendAssumeCapacity(pipe_env);
env_array.appendAssumeCapacity(switch (ipc_mode) {
inline else => |t| "NODE_CHANNEL_SERIALIZATION_MODE=" ++ @tagName(t),
});
};
try env_array.append(allocator, null);
try argv.append(null);
if (comptime is_sync) {
for (&stdio, 0..) |*io, i| {
io.toSync(@truncate(i));
}
}
// If the whole thread is supposed to do absolutely nothing while waiting,
// we can block the thread which reduces CPU usage.
//
// That means:
// - No maximum buffer
// - No timeout
// - No abort signal
// - No stdin, stdout, stderr pipes
// - No extra fds
// - No auto killer (for tests)
// - No execution time limit (for tests)
// - No IPC
// - No inspector (since they might want to press pause or step)
const can_block_entire_thread_to_reduce_cpu_usage_in_fast_path = (comptime Environment.isPosix and is_sync) and
abort_signal == null and
timeout == null and
maxBuffer == null and
!stdio[0].isPiped() and
!stdio[1].isPiped() and
!stdio[2].isPiped() and
extra_fds.items.len == 0 and
!jsc_vm.auto_killer.enabled and
!jsc_vm.jsc_vm.hasExecutionTimeLimit() and
!jsc_vm.isInspectorEnabled() and
!bun.feature_flag.BUN_FEATURE_FLAG_DISABLE_SPAWNSYNC_FAST_PATH.get();
const spawn_options = bun.spawn.SpawnOptions{
.cwd = cwd,
.detached = detached,
.stdin = switch (stdio[0].asSpawnOption(0)) {
.result => |opt| opt,
.err => |e| return e.throwJS(globalThis),
},
.stdout = switch (stdio[1].asSpawnOption(1)) {
.result => |opt| opt,
.err => |e| return e.throwJS(globalThis),
},
.stderr = switch (stdio[2].asSpawnOption(2)) {
.result => |opt| opt,
.err => |e| return e.throwJS(globalThis),
},
.extra_fds = extra_fds.items,
.argv0 = argv0,
.can_block_entire_thread_to_reduce_cpu_usage_in_fast_path = can_block_entire_thread_to_reduce_cpu_usage_in_fast_path,
.windows = if (Environment.isWindows) .{
.hide_window = windows_hide,
.verbatim_arguments = windows_verbatim_arguments,
.loop = jsc.EventLoopHandle.init(jsc_vm),
},
};
var spawned = switch (bun.spawn.spawnProcess(
&spawn_options,
@ptrCast(argv.items.ptr),
@ptrCast(env_array.items.ptr),
) catch |err| switch (err) {
error.EMFILE, error.ENFILE => {
spawn_options.deinit();
const display_path: [:0]const u8 = if (argv.items.len > 0 and argv.items[0] != null)
std.mem.sliceTo(argv.items[0].?, 0)
else
"";
var systemerror = bun.sys.Error.fromCode(if (err == error.EMFILE) .MFILE else .NFILE, .posix_spawn).withPath(display_path).toSystemError();
systemerror.errno = if (err == error.EMFILE) -bun.sys.UV_E.MFILE else -bun.sys.UV_E.NFILE;
return globalThis.throwValue(systemerror.toErrorInstance(globalThis));
},
else => {
spawn_options.deinit();
return globalThis.throwError(err, ": failed to spawn process") catch return .zero;
},
}) {
.err => |err| {
spawn_options.deinit();
switch (err.getErrno()) {
.ACCES, .NOENT, .PERM, .ISDIR, .NOTDIR => |errno| {
const display_path: [:0]const u8 = if (argv.items.len > 0 and argv.items[0] != null)
std.mem.sliceTo(argv.items[0].?, 0)
else
"";
if (display_path.len > 0) {
var systemerror = err.withPath(display_path).toSystemError();
if (errno == .NOENT) systemerror.errno = -bun.sys.UV_E.NOENT;
return globalThis.throwValue(systemerror.toErrorInstance(globalThis));
}
},
else => {},
}
return globalThis.throwValue(err.toJS(globalThis));
},
.result => |result| result,
};
const loop = jsc_vm.eventLoop();
const process = spawned.toProcess(loop, is_sync);
var subprocess = bun.new(Subprocess, .{
.ref_count = .init(),
.globalThis = globalThis,
.process = process,
.pid_rusage = null,
.stdin = .{ .ignore = {} },
.stdout = .{ .ignore = {} },
.stderr = .{ .ignore = {} },
.stdio_pipes = .{},
.ipc_data = null,
.flags = .{
.is_sync = is_sync,
},
.killSignal = undefined,
});
const posix_ipc_fd = if (Environment.isPosix and !is_sync and maybe_ipc_mode != null)
spawned.extra_pipes.items[@intCast(ipc_channel)]
else
bun.invalid_fd;
MaxBuf.createForSubprocess(subprocess, &subprocess.stderr_maxbuf, maxBuffer);
MaxBuf.createForSubprocess(subprocess, &subprocess.stdout_maxbuf, maxBuffer);
var promise_for_stream: jsc.JSValue = .zero;
// When run synchronously, subprocess isn't garbage collected
subprocess.* = Subprocess{
.globalThis = globalThis,
.process = process,
.pid_rusage = null,
.stdin = Writable.init(
&stdio[0],
loop,
subprocess,
spawned.stdin,
&promise_for_stream,
) catch {
subprocess.deref();
return globalThis.throwOutOfMemory();
},
.stdout = Readable.init(
stdio[1],
loop,
subprocess,
spawned.stdout,
jsc_vm.allocator,
subprocess.stdout_maxbuf,
is_sync,
),
.stderr = Readable.init(
stdio[2],
loop,
subprocess,
spawned.stderr,
jsc_vm.allocator,
subprocess.stderr_maxbuf,
is_sync,
),
// 1. JavaScript.
// 2. Process.
.ref_count = .initExactRefs(2),
.stdio_pipes = spawned.extra_pipes.moveToUnmanaged(),
.ipc_data = if (!is_sync and comptime Environment.isWindows)
if (maybe_ipc_mode) |ipc_mode| ( //
.init(ipc_mode, .{ .subprocess = subprocess }, .uninitialized) //
) else null
else
null,
.flags = .{
.is_sync = is_sync,
},
.killSignal = killSignal,
.stderr_maxbuf = subprocess.stderr_maxbuf,
.stdout_maxbuf = subprocess.stdout_maxbuf,
};
subprocess.process.setExitHandler(subprocess);
promise_for_stream.ensureStillAlive();
subprocess.flags.is_stdin_a_readable_stream = promise_for_stream != .zero;
if (promise_for_stream != .zero and !globalThis.hasException()) {
if (promise_for_stream.toError()) |err| {
_ = globalThis.throwValue(err) catch {};
}
}
if (globalThis.hasException()) {
const err = globalThis.takeException(error.JSError);
// Ensure we kill the process so we don't leave things in an unexpected state.
_ = subprocess.tryKill(subprocess.killSignal);
if (globalThis.hasException()) {
return error.JSError;
}
return globalThis.throwValue(err);
}
var posix_ipc_info: if (Environment.isPosix) IPC.Socket else void = undefined;
if (Environment.isPosix and !is_sync) {
if (maybe_ipc_mode) |mode| {
if (uws.us_socket_t.fromFd(
jsc_vm.rareData().spawnIPCContext(jsc_vm),
@sizeOf(*IPC.SendQueue),
posix_ipc_fd.cast(),
1,
)) |socket| {
subprocess.ipc_data = .init(mode, .{ .subprocess = subprocess }, .uninitialized);
posix_ipc_info = IPC.Socket.from(socket);
}
}
}
if (subprocess.ipc_data) |*ipc_data| {
if (Environment.isPosix) {
if (posix_ipc_info.ext(*IPC.SendQueue)) |ctx| {
ctx.* = &subprocess.ipc_data.?;
subprocess.ipc_data.?.socket = .{ .open = posix_ipc_info };
}
} else {
if (ipc_data.windowsConfigureServer(
subprocess.stdio_pipes.items[@intCast(ipc_channel)].buffer,
).asErr()) |err| {
subprocess.deref();
return globalThis.throwValue(err.toJS(globalThis));
}
subprocess.stdio_pipes.items[@intCast(ipc_channel)] = .unavailable;
}
ipc_data.writeVersionPacket(globalThis);
}
if (subprocess.stdin == .pipe and promise_for_stream == .zero) {
subprocess.stdin.pipe.signal = jsc.WebCore.streams.Signal.init(&subprocess.stdin);
}
const out = if (comptime !is_sync)
subprocess.toJS(globalThis)
else
JSValue.zero;
if (out != .zero) {
subprocess.this_value.setWeak(out);
// Immediately upgrade to strong if there's pending activity to prevent premature GC
subprocess.updateHasPendingActivity();
}
var send_exit_notification = false;
// This must go before other things happen so that the exit handler is registered before onProcessExit can potentially be called.
if (timeout) |timeout_val| {
subprocess.event_loop_timer.next = bun.timespec.msFromNow(timeout_val);
globalThis.bunVM().timer.insert(&subprocess.event_loop_timer);
subprocess.setEventLoopTimerRefd(true);
}
if (comptime !is_sync) {
bun.debugAssert(out != .zero);
if (on_exit_callback.isCell()) {
jsc.Codegen.JSSubprocess.onExitCallbackSetCached(out, globalThis, on_exit_callback);
}
if (on_disconnect_callback.isCell()) {
jsc.Codegen.JSSubprocess.onDisconnectCallbackSetCached(out, globalThis, on_disconnect_callback);
}
if (ipc_callback.isCell()) {
jsc.Codegen.JSSubprocess.ipcCallbackSetCached(out, globalThis, ipc_callback);
}
if (stdio[0] == .readable_stream) {
jsc.Codegen.JSSubprocess.stdinSetCached(out, globalThis, stdio[0].readable_stream.value);
}
switch (subprocess.process.watch()) {
.result => {},
.err => {
send_exit_notification = true;
lazy = false;
},
}
}
defer {
if (send_exit_notification) {
if (subprocess.process.hasExited()) {
// process has already exited, we called wait4(), but we did not call onProcessExit()
subprocess.process.onExit(subprocess.process.status, &std.mem.zeroes(Rusage));
} else {
// process has already exited, but we haven't called wait4() yet
// https://cs.github.com/libuv/libuv/blob/b00d1bd225b602570baee82a6152eaa823a84fa6/src/unix/process.c#L1007
subprocess.process.wait(is_sync);
}
}
}
if (subprocess.stdin == .buffer) {
if (subprocess.stdin.buffer.start().asErr()) |err| {
_ = subprocess.tryKill(subprocess.killSignal);
_ = globalThis.throwValue(err.toJS(globalThis)) catch {};
return error.JSError;
}
}
if (subprocess.stdout == .pipe) {
if (subprocess.stdout.pipe.start(subprocess, loop).asErr()) |err| {
_ = subprocess.tryKill(subprocess.killSignal);
_ = globalThis.throwValue(err.toJS(globalThis)) catch {};
return error.JSError;
}
if ((is_sync or !lazy) and subprocess.stdout == .pipe) {
subprocess.stdout.pipe.readAll();
}
}
if (subprocess.stderr == .pipe) {
if (subprocess.stderr.pipe.start(subprocess, loop).asErr()) |err| {
_ = subprocess.tryKill(subprocess.killSignal);
_ = globalThis.throwValue(err.toJS(globalThis)) catch {};
return error.JSError;
}
if ((is_sync or !lazy) and subprocess.stderr == .pipe) {
subprocess.stderr.pipe.readAll();
}
}
should_close_memfd = false;
if (comptime !is_sync) {
// Once everything is set up, we can add the abort listener
// Adding the abort listener may call the onAbortSignal callback immediately if it was already aborted
// Therefore, we must do this at the very end.
if (abort_signal) |signal| {
signal.pendingActivityRef();
subprocess.abort_signal = signal.addListener(subprocess, Subprocess.onAbortSignal);
abort_signal = null;
}
if (!subprocess.process.hasExited()) {
jsc_vm.onSubprocessSpawn(subprocess.process);
}
return out;
}
comptime bun.assert(is_sync);
if (can_block_entire_thread_to_reduce_cpu_usage_in_fast_path) {
jsc_vm.counters.mark(.spawnSync_blocking);
const debug_timer = Output.DebugTimer.start();
subprocess.process.wait(true);
log("spawnSync fast path took {}", .{debug_timer});
// watchOrReap will handle the already exited case for us.
}
switch (subprocess.process.watchOrReap()) {
.result => {
// Once everything is set up, we can add the abort listener
// Adding the abort listener may call the onAbortSignal callback immediately if it was already aborted
// Therefore, we must do this at the very end.
if (abort_signal) |signal| {
signal.pendingActivityRef();
subprocess.abort_signal = signal.addListener(subprocess, Subprocess.onAbortSignal);
abort_signal = null;
}
},
.err => {
subprocess.process.wait(true);
},
}
if (!subprocess.process.hasExited()) {
jsc_vm.onSubprocessSpawn(subprocess.process);
}
// We cannot release heap access while JS is running
{
const old_vm = jsc_vm.uwsLoop().internal_loop_data.jsc_vm;
jsc_vm.uwsLoop().internal_loop_data.jsc_vm = null;
defer {
jsc_vm.uwsLoop().internal_loop_data.jsc_vm = old_vm;
}
while (subprocess.computeHasPendingActivity()) {
if (subprocess.stdin == .buffer) {
subprocess.stdin.buffer.watch();
}
if (subprocess.stderr == .pipe) {
subprocess.stderr.pipe.watch();
}
if (subprocess.stdout == .pipe) {
subprocess.stdout.pipe.watch();
}
jsc_vm.tick();
jsc_vm.eventLoop().autoTick();
}
}
subprocess.updateHasPendingActivity();
const signalCode = subprocess.getSignalCode(globalThis);
const exitCode = subprocess.getExitCode(globalThis);
const stdout = try subprocess.stdout.toBufferedValue(globalThis);
const stderr = try subprocess.stderr.toBufferedValue(globalThis);
const resource_usage: JSValue = if (!globalThis.hasException()) try subprocess.createResourceUsageObject(globalThis) else .zero;
const exitedDueToTimeout = subprocess.event_loop_timer.state == .FIRED;
const exitedDueToMaxBuffer = subprocess.exited_due_to_maxbuf;
const resultPid = jsc.JSValue.jsNumberFromInt32(subprocess.pid());
subprocess.finalize();
if (globalThis.hasException()) {
// e.g. a termination exception.
return .zero;
}
const sync_value = jsc.JSValue.createEmptyObject(globalThis, 5 + @as(usize, @intFromBool(!signalCode.isEmptyOrUndefinedOrNull())));
sync_value.put(globalThis, jsc.ZigString.static("exitCode"), exitCode);
if (!signalCode.isEmptyOrUndefinedOrNull()) {
sync_value.put(globalThis, jsc.ZigString.static("signalCode"), signalCode);
}
sync_value.put(globalThis, jsc.ZigString.static("stdout"), stdout);
sync_value.put(globalThis, jsc.ZigString.static("stderr"), stderr);
sync_value.put(globalThis, jsc.ZigString.static("success"), JSValue.jsBoolean(exitCode.isInt32() and exitCode.asInt32() == 0));
sync_value.put(globalThis, jsc.ZigString.static("resourceUsage"), resource_usage);
if (timeout != null) sync_value.put(globalThis, jsc.ZigString.static("exitedDueToTimeout"), if (exitedDueToTimeout) .true else .false);
if (maxBuffer != null) sync_value.put(globalThis, jsc.ZigString.static("exitedDueToMaxBuffer"), if (exitedDueToMaxBuffer != null) .true else .false);
sync_value.put(globalThis, jsc.ZigString.static("pid"), resultPid);
return sync_value;
}
fn throwCommandNotFound(globalThis: *jsc.JSGlobalObject, command: []const u8) bun.JSError {
const err = jsc.SystemError{
.message = bun.handleOom(bun.String.createFormat("Executable not found in $PATH: \"{s}\"", .{command})),
.code = bun.String.static("ENOENT"),
.errno = -bun.sys.UV_E.NOENT,
.path = bun.String.cloneUTF8(command),
};
return globalThis.throwValue(err.toErrorInstance(globalThis));
}
pub fn appendEnvpFromJS(globalThis: *jsc.JSGlobalObject, object: *jsc.JSObject, envp: *std.ArrayList(?[*:0]const u8), PATH: *[]const u8) bun.JSError!void {
var object_iter = try jsc.JSPropertyIterator(.{ .skip_empty_name = false, .include_value = true }).init(globalThis, object);
defer object_iter.deinit();
try envp.ensureTotalCapacityPrecise(object_iter.len +
// +1 in case there's IPC
// +1 for null terminator
2);
while (try object_iter.next()) |key| {
var value = object_iter.value;
if (value.isUndefined()) continue;
const line = try std.fmt.allocPrintZ(envp.allocator, "{}={}", .{ key, try value.getZigString(globalThis) });
if (key.eqlComptime("PATH")) {
PATH.* = bun.asByteSlice(line["PATH=".len..]);
}
try envp.append(line);
}
}
const log = Output.scoped(.Subprocess, .hidden);
extern "C" const BUN_DEFAULT_PATH_FOR_SPAWN: [*:0]const u8;
const IPC = @import("../../ipc.zig");
const std = @import("std");
const Allocator = std.mem.Allocator;
const bun = @import("bun");
const Environment = bun.Environment;
const Output = bun.Output;
const SignalCode = bun.SignalCode;
const default_allocator = bun.default_allocator;
const strings = bun.strings;
const uws = bun.uws;
const which = bun.which;
const windows = bun.windows;
const MaxBuf = bun.io.MaxBuf;
const jsc = bun.jsc;
const JSGlobalObject = jsc.JSGlobalObject;
const JSValue = jsc.JSValue;
const Subprocess = jsc.Subprocess;
const Readable = Subprocess.Readable;
const Writable = Subprocess.Writable;
const Process = bun.spawn.Process;
const Rusage = bun.spawn.Rusage;
const Stdio = bun.spawn.Stdio;
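For reference, a sketch exercising most of the option surface parsed above (`cwd`, `env`, `argv0`, `timeout`, `killSignal`, `signal`, `onExit`); the values are illustrative only:

```ts
const controller = new AbortController();

const child = Bun.spawn({
  cmd: ["sleep", "30"],
  cwd: "/tmp",
  env: { ...process.env, EXAMPLE: "1" },
  argv0: "my-sleep",         // what the child observes as argv[0]
  timeout: 5_000,            // after 5 seconds, Bun sends `killSignal`
  killSignal: "SIGTERM",
  signal: controller.signal, // aborting also terminates the child
  stdout: "ignore",
  stderr: "ignore",
  onExit(subprocess, exitCode, signalCode) {
    console.log("exited", { exitCode, signalCode });
  },
});

controller.abort(); // exercises the AbortSignal path
await child.exited;
```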

File diff suppressed because it is too large

View File

@@ -487,6 +487,16 @@ JSC_DEFINE_HOST_FUNCTION(Process_functionDlopen, (JSC::JSGlobalObject * globalOb
#endif
};
// Handle known yet-to-be-working in Bun
{
static constexpr ASCIILiteral better_sqlite3_node = "better_sqlite3.node"_s;
static constexpr ASCIILiteral better_sqlite3_message = "'better-sqlite3' is not yet supported in Bun.\nTrack the status in https://github.com/oven-sh/bun/issues/4290\nIn the meantime, you could try bun:sqlite which has a similar API."_s;
if (filename.endsWith(better_sqlite3_node)) {
return throwError(globalObject, scope, ErrorCode::ERR_DLOPEN_FAILED,
better_sqlite3_message);
}
}
{
auto utf8_filename = filename.tryGetUTF8(ConversionMode::LenientConversion);
if (!utf8_filename) [[unlikely]] {
@@ -496,6 +506,8 @@ JSC_DEFINE_HOST_FUNCTION(Process_functionDlopen, (JSC::JSGlobalObject * globalOb
utf8 = *utf8_filename;
}
Bun__process_dlopen_count++;
#if OS(WINDOWS)
BunString filename_str = Bun::toString(filename);
HMODULE handle = Bun__LoadLibraryBunString(&filename_str);
@@ -511,8 +523,6 @@ JSC_DEFINE_HOST_FUNCTION(Process_functionDlopen, (JSC::JSGlobalObject * globalOb
globalObject->m_pendingNapiModuleDlopenHandle = handle;
Bun__process_dlopen_count++;
if (!handle) {
#if OS(WINDOWS)
DWORD errorId = GetLastError();
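The new error message points `better-sqlite3` users at `bun:sqlite`; a minimal equivalent of the usual prepared-statement pattern looks roughly like this:

```ts
import { Database } from "bun:sqlite";

const db = new Database(":memory:");
db.run("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)");

const insert = db.prepare("INSERT INTO users (name) VALUES (?)");
insert.run("alice");

const row = db.query("SELECT * FROM users WHERE name = ?").get("alice");
console.log(row); // { id: 1, name: "alice" }
```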

View File

@@ -2973,7 +2973,10 @@ void GlobalObject::handleRejectedPromises()
continue;
Bun__handleRejectedPromise(this, promise);
if (auto ex = scope.exception()) this->reportUncaughtExceptionAtEventLoop(this, ex);
if (auto ex = scope.exception()) {
scope.clearException();
this->reportUncaughtExceptionAtEventLoop(this, ex);
}
}
}

View File

@@ -177,7 +177,7 @@ pub fn handleTimeout(this: *Execution, globalThis: *jsc.JSGlobalObject) bun.JSEr
defer groupLog.end();
// if the concurrent group has one sequence and the sequence has an active entry that has timed out,
// request a termination exception and kill any dangling processes
// kill any dangling processes
// when using test.concurrent(), we can't do this because it could kill multiple tests at once.
if (this.activeGroup()) |current_group| {
const sequences = current_group.sequences(this);
@@ -186,7 +186,6 @@ pub fn handleTimeout(this: *Execution, globalThis: *jsc.JSGlobalObject) bun.JSEr
if (sequence.active_entry) |entry| {
const now = bun.timespec.now();
if (entry.timespec.order(&now) == .lt) {
globalThis.requestTermination();
const kill_count = globalThis.bunVM().auto_killer.kill();
if (kill_count.processes > 0) {
bun.Output.prettyErrorln("<d>killed {d} dangling process{s}<r>", .{ kill_count.processes, if (kill_count.processes != 1) "es" else "" });

View File

@@ -1069,129 +1069,7 @@ pub fn parseDouble(input: []const u8) !f64 {
return jsc.wtf.parseDouble(input);
}
pub const SignalCode = enum(u8) {
SIGHUP = 1,
SIGINT = 2,
SIGQUIT = 3,
SIGILL = 4,
SIGTRAP = 5,
SIGABRT = 6,
SIGBUS = 7,
SIGFPE = 8,
SIGKILL = 9,
SIGUSR1 = 10,
SIGSEGV = 11,
SIGUSR2 = 12,
SIGPIPE = 13,
SIGALRM = 14,
SIGTERM = 15,
SIG16 = 16,
SIGCHLD = 17,
SIGCONT = 18,
SIGSTOP = 19,
SIGTSTP = 20,
SIGTTIN = 21,
SIGTTOU = 22,
SIGURG = 23,
SIGXCPU = 24,
SIGXFSZ = 25,
SIGVTALRM = 26,
SIGPROF = 27,
SIGWINCH = 28,
SIGIO = 29,
SIGPWR = 30,
SIGSYS = 31,
_,
// The `subprocess.kill()` method sends a signal to the child process. If no
// argument is given, the process will be sent the 'SIGTERM' signal.
pub const default = SignalCode.SIGTERM;
pub const Map = ComptimeEnumMap(SignalCode);
pub fn name(value: SignalCode) ?[]const u8 {
if (@intFromEnum(value) <= @intFromEnum(SignalCode.SIGSYS)) {
return asByteSlice(@tagName(value));
}
return null;
}
pub fn valid(value: SignalCode) bool {
return @intFromEnum(value) <= @intFromEnum(SignalCode.SIGSYS) and @intFromEnum(value) >= @intFromEnum(SignalCode.SIGHUP);
}
/// Shell scripts use exit codes 128 + signal number
/// https://tldp.org/LDP/abs/html/exitcodes.html
pub fn toExitCode(value: SignalCode) ?u8 {
return switch (@intFromEnum(value)) {
1...31 => 128 +% @intFromEnum(value),
else => null,
};
}
pub fn description(signal: SignalCode) ?[]const u8 {
// Description names copied from fish
// https://github.com/fish-shell/fish-shell/blob/00ffc397b493f67e28f18640d3de808af29b1434/fish-rust/src/signal.rs#L420
return switch (signal) {
.SIGHUP => "Terminal hung up",
.SIGINT => "Quit request",
.SIGQUIT => "Quit request",
.SIGILL => "Illegal instruction",
.SIGTRAP => "Trace or breakpoint trap",
.SIGABRT => "Abort",
.SIGBUS => "Misaligned address error",
.SIGFPE => "Floating point exception",
.SIGKILL => "Forced quit",
.SIGUSR1 => "User defined signal 1",
.SIGUSR2 => "User defined signal 2",
.SIGSEGV => "Address boundary error",
.SIGPIPE => "Broken pipe",
.SIGALRM => "Timer expired",
.SIGTERM => "Polite quit request",
.SIGCHLD => "Child process status changed",
.SIGCONT => "Continue previously stopped process",
.SIGSTOP => "Forced stop",
.SIGTSTP => "Stop request from job control (^Z)",
.SIGTTIN => "Stop from terminal input",
.SIGTTOU => "Stop from terminal output",
.SIGURG => "Urgent socket condition",
.SIGXCPU => "CPU time limit exceeded",
.SIGXFSZ => "File size limit exceeded",
.SIGVTALRM => "Virtual timefr expired",
.SIGPROF => "Profiling timer expired",
.SIGWINCH => "Window size change",
.SIGIO => "I/O on asynchronous file descriptor is possible",
.SIGSYS => "Bad system call",
.SIGPWR => "Power failure",
else => null,
};
}
pub fn from(value: anytype) SignalCode {
return @enumFromInt(std.mem.asBytes(&value)[0]);
}
// This wrapper struct is lame, what if bun's color formatter was more versatile
const Fmt = struct {
signal: SignalCode,
enable_ansi_colors: bool,
pub fn format(this: Fmt, comptime _: []const u8, _: std.fmt.FormatOptions, writer: anytype) !void {
const signal = this.signal;
switch (this.enable_ansi_colors) {
inline else => |enable_ansi_colors| {
if (signal.name()) |str| if (signal.description()) |desc| {
try writer.print(Output.prettyFmt("{s} <d>({s})<r>", enable_ansi_colors), .{ str, desc });
return;
};
try writer.print("code {d}", .{@intFromEnum(signal)});
},
}
}
};
pub fn fmt(signal: SignalCode, enable_ansi_colors: bool) Fmt {
return .{ .signal = signal, .enable_ansi_colors = enable_ansi_colors };
}
};
pub const SignalCode = @import("./SignalCode.zig").SignalCode;
pub fn isMissingIOUring() bool {
if (comptime !Environment.isLinux)

View File

@@ -1548,7 +1548,7 @@ pub const BundleV2 = struct {
bake_options: BakeOptions,
alloc: std.mem.Allocator,
event_loop: EventLoop,
) !std.ArrayList(options.OutputFile) {
) !bun.collections.ArrayListDefault(options.OutputFile) {
var this = try BundleV2.init(
server_transpiler,
bake_options,
@@ -1596,7 +1596,7 @@ pub const BundleV2 = struct {
);
if (chunks.len == 0) {
return std.ArrayList(options.OutputFile).init(bun.default_allocator);
return bun.collections.ArrayListDefault(options.OutputFile).init();
}
return try this.linker.generateChunksInParallel(chunks, false);

View File

@@ -2,7 +2,7 @@ pub fn generateChunksInParallel(
c: *LinkerContext,
chunks: []Chunk,
comptime is_dev_server: bool,
) !if (is_dev_server) void else std.ArrayList(options.OutputFile) {
) !if (is_dev_server) void else bun.collections.ArrayListDefault(options.OutputFile) {
const trace = bun.perf.trace("Bundler.generateChunksInParallel");
defer trace.end();


@@ -163,6 +163,37 @@ pub const Action = union(enum) {
}
};
fn captureLibcBacktrace(begin_addr: usize, stack_trace: *std.builtin.StackTrace) void {
const backtrace = struct {
extern "c" fn backtrace(buffer: [*]*anyopaque, size: c_int) c_int;
}.backtrace;
const addrs = stack_trace.instruction_addresses;
const count = backtrace(@ptrCast(addrs), @intCast(addrs.len));
stack_trace.index = @intCast(count);
// Skip frames until we find begin_addr (or close to it)
// backtrace() captures everything including crash handler frames
const tolerance: usize = 128;
const skip: usize = for (addrs[0..stack_trace.index], 0..) |addr, i| {
// Check if this address is close to begin_addr (within tolerance)
const delta = if (addr >= begin_addr)
addr - begin_addr
else
begin_addr - addr;
if (delta <= tolerance) break i;
// Give up searching after 8 frames
if (i >= 8) break 0;
} else 0;
// Shift the addresses to skip crash handler frames
// If begin_addr was not found, use the complete backtrace
if (skip > 0) {
std.mem.copyForwards(usize, addrs, addrs[skip..stack_trace.index]);
stack_trace.index -= skip;
}
}
/// This function is invoked when a crash happens. A crash is classified in `CrashReason`.
pub fn crashHandler(
reason: CrashReason,
@@ -308,67 +339,31 @@ pub fn crashHandler(
var trace_buf: std.builtin.StackTrace = undefined;
// If a trace was not provided, compute one now
const trace = @as(?*std.builtin.StackTrace, if (error_return_trace) |ert|
if (ert.index > 0)
ert
else
null
else
null) orelse get_backtrace: {
const trace = blk: {
if (error_return_trace) |ert| {
if (ert.index > 0) break :blk ert;
}
trace_buf = std.builtin.StackTrace{
.index = 0,
.instruction_addresses = &addr_buf,
};
const desired_begin_addr = begin_addr orelse @returnAddress();
std.debug.captureStackTrace(desired_begin_addr, &trace_buf);
// On Linux with glibc, always use backtrace() instead of Zig's StackIterator
// because Zig's frame pointer-based unwinding doesn't work reliably,
// especially on aarch64. glibc's backtrace() uses DWARF unwinding.
if (bun.Environment.isLinux and !bun.Environment.isMusl) {
const backtrace_fn = struct {
extern "c" fn backtrace(buffer: [*]?*anyopaque, size: c_int) c_int;
}.backtrace;
const count = backtrace_fn(@ptrCast(&addr_buf), addr_buf.len);
if (count > 0) {
trace_buf.index = @intCast(count);
// Skip frames until we find begin_addr (or close to it)
// backtrace() captures everything including crash handler frames
var skip: usize = 0;
var found_begin = false;
const tolerance: usize = 128;
for (addr_buf[0..trace_buf.index], 0..) |addr, i| {
// Check if this address is close to begin_addr (within tolerance)
const delta = if (addr >= desired_begin_addr)
addr - desired_begin_addr
else
desired_begin_addr - addr;
if (delta <= tolerance) {
skip = i;
found_begin = true;
break;
}
// Give up searching after 8 frames
if (i >= 8) break;
}
// Shift the addresses to skip crash handler frames
// If begin_addr was not found, use the complete backtrace
if (found_begin and skip > 0 and skip < trace_buf.index) {
const remaining = trace_buf.index - skip;
var j: usize = 0;
while (j < remaining) : (j += 1) {
addr_buf[j] = addr_buf[skip + j];
}
trace_buf.index = remaining;
}
if (comptime bun.Environment.isLinux and !bun.Environment.isMusl) {
var addr_buf_libc: [20]usize = undefined;
var trace_buf_libc: std.builtin.StackTrace = .{
.index = 0,
.instruction_addresses = &addr_buf_libc,
};
captureLibcBacktrace(desired_begin_addr, &trace_buf_libc);
// Use stack trace from glibc's backtrace() if it has more frames
if (trace_buf_libc.index > trace_buf.index) {
addr_buf = addr_buf_libc;
trace_buf.index = trace_buf_libc.index;
}
} else {
// Fall back to Zig's stack capture on other platforms
std.debug.captureStackTrace(desired_begin_addr, &trace_buf);
}
break :get_backtrace &trace_buf;
break :blk &trace_buf;
};
if (debug_trace) {


@@ -1257,7 +1257,7 @@ pub fn saveToDisk(this: *Lockfile, load_result: *const LoadResult, options: *con
break :bytes writer_buf.list.items;
}
var bytes = std.ArrayList(u8).init(bun.default_allocator);
var bytes = bun.collections.ArrayListDefault(u8).init();
var total_size: usize = 0;
var end_pos: usize = 0;
@@ -1265,9 +1265,9 @@ pub fn saveToDisk(this: *Lockfile, load_result: *const LoadResult, options: *con
Output.err(err, "failed to serialize lockfile", .{});
Global.crash();
};
if (bytes.items.len >= end_pos)
bytes.items[end_pos..][0..@sizeOf(usize)].* = @bitCast(total_size);
break :bytes bytes.items;
if (bytes.items().len >= end_pos)
bytes.items()[end_pos..][0..@sizeOf(usize)].* = @bitCast(total_size);
break :bytes bytes.toOwnedSlice() catch bun.outOfMemory();
};
defer bun.default_allocator.free(bytes);


@@ -1097,7 +1097,7 @@ pub fn WindowsBufferedWriter(Parent: type, function_table: anytype) type {
/// Basic std.ArrayList(u8) + usize cursor wrapper
pub const StreamBuffer = struct {
list: std.ArrayList(u8) = std.ArrayList(u8).init(bun.default_allocator),
list: bun.collections.ArrayListDefault(u8) = bun.collections.ArrayListDefault(u8).init(),
cursor: usize = 0,
pub fn reset(this: *StreamBuffer) void {
@@ -1107,19 +1107,19 @@ pub const StreamBuffer = struct {
}
pub fn maybeShrink(this: *StreamBuffer) void {
if (this.list.capacity > std.heap.pageSize()) {
if (this.list.capacity() > std.heap.pageSize()) {
// workaround insane zig decision to make it undefined behavior to resize .len < .capacity
this.list.expandToCapacity();
this.list.expandToCapacity(undefined);
this.list.shrinkAndFree(std.heap.pageSize());
}
}
pub fn memoryCost(this: *const StreamBuffer) usize {
return this.list.capacity;
return this.list.capacity();
}
pub fn size(this: *const StreamBuffer) usize {
return this.list.items.len - this.cursor;
return this.list.items().len - this.cursor;
}
pub fn isEmpty(this: *const StreamBuffer) bool {
@@ -1152,7 +1152,7 @@ pub const StreamBuffer = struct {
pub fn writeTypeAsBytesAssumeCapacity(this: *StreamBuffer, comptime T: type, data: T) void {
var byte_list = bun.ByteList.moveFromList(&this.list);
defer this.list = byte_list.moveToListManaged(this.list.allocator);
defer this.list = byte_list.moveToListManaged(this.list.allocator());
byte_list.writeTypeAsBytesAssumeCapacity(T, data);
}
@@ -1164,20 +1164,20 @@ pub const StreamBuffer = struct {
{
var byte_list = bun.ByteList.moveFromList(&this.list);
defer this.list = byte_list.moveToListManaged(this.list.allocator);
_ = try byte_list.writeLatin1(this.list.allocator, buffer);
defer this.list = byte_list.moveToListManaged(this.list.allocator());
_ = try byte_list.writeLatin1(this.list.allocator(), buffer);
}
return this.list.items[this.cursor..];
return this.list.items()[this.cursor..];
} else if (comptime @TypeOf(writeFn) == @TypeOf(&writeUTF16) and writeFn == &writeUTF16) {
{
var byte_list = bun.ByteList.moveFromList(&this.list);
defer this.list = byte_list.moveToListManaged(this.list.allocator);
defer this.list = byte_list.moveToListManaged(this.list.allocator());
_ = try byte_list.writeUTF16(this.list.allocator, buffer);
_ = try byte_list.writeUTF16(this.list.allocator(), buffer);
}
return this.list.items[this.cursor..];
return this.list.items()[this.cursor..];
} else if (comptime @TypeOf(writeFn) == @TypeOf(&write) and writeFn == &write) {
return buffer;
} else {
@@ -1193,25 +1193,25 @@ pub const StreamBuffer = struct {
}
var byte_list = bun.ByteList.moveFromList(&this.list);
defer this.list = byte_list.moveToListManaged(this.list.allocator);
defer this.list = byte_list.moveToListManaged(this.list.allocator());
_ = try byte_list.writeLatin1(this.list.allocator, buffer);
_ = try byte_list.writeLatin1(this.list.allocator(), buffer);
}
pub fn writeUTF16(this: *StreamBuffer, buffer: []const u16) OOM!void {
var byte_list = bun.ByteList.moveFromList(&this.list);
defer this.list = byte_list.moveToListManaged(this.list.allocator);
defer this.list = byte_list.moveToListManaged(this.list.allocator());
_ = try byte_list.writeUTF16(this.list.allocator, buffer);
_ = try byte_list.writeUTF16(this.list.allocator(), buffer);
}
pub fn slice(this: *const StreamBuffer) []const u8 {
return this.list.items[this.cursor..];
return this.list.items()[this.cursor..];
}
pub fn deinit(this: *StreamBuffer) void {
this.cursor = 0;
if (this.list.capacity > 0) {
if (this.list.capacity() > 0) {
this.list.clearAndFree();
}
}


@@ -59,16 +59,16 @@ pub const S3ListObjectsV2Result = struct {
continuation_token: ?[]const u8,
next_continuation_token: ?[]const u8,
start_after: ?[]const u8,
common_prefixes: ?std.ArrayList([]const u8),
contents: ?std.ArrayList(S3ListObjectsContents),
common_prefixes: ?bun.collections.ArrayListDefault([]const u8),
contents: ?bun.collections.ArrayListDefault(S3ListObjectsContents),
pub fn deinit(this: *const @This()) void {
if (this.contents) |contents| {
for (contents.items) |*item| item.deinit();
for (contents.items()) |*item| item.deinit();
contents.deinit();
}
if (this.common_prefixes) |common_prefixes| {
common_prefixes.deinit();
common_prefixes.deinitShallow();
}
}
@@ -115,9 +115,9 @@ pub const S3ListObjectsV2Result = struct {
}
if (this.contents) |contents| {
const jsContents = try JSValue.createEmptyArray(globalObject, contents.items.len);
const jsContents = try JSValue.createEmptyArray(globalObject, contents.items().len);
for (contents.items, 0..) |item, i| {
for (contents.items(), 0..) |item, i| {
const objectInfo = JSValue.createEmptyObject(globalObject, 1);
objectInfo.put(globalObject, jsc.ZigString.static("key"), try bun.String.createUTF8ForJS(globalObject, item.key));
@@ -165,9 +165,9 @@ pub const S3ListObjectsV2Result = struct {
}
if (this.common_prefixes) |common_prefixes| {
const jsCommonPrefixes = try JSValue.createEmptyArray(globalObject, common_prefixes.items.len);
const jsCommonPrefixes = try JSValue.createEmptyArray(globalObject, common_prefixes.items().len);
for (common_prefixes.items, 0..) |prefix, i| {
for (common_prefixes.items(), 0..) |prefix, i| {
const jsPrefix = JSValue.createEmptyObject(globalObject, 1);
jsPrefix.put(globalObject, jsc.ZigString.static("prefix"), try bun.String.createUTF8ForJS(globalObject, prefix));
try jsCommonPrefixes.putIndex(globalObject, @intCast(i), jsPrefix);
@@ -196,8 +196,8 @@ pub fn parseS3ListObjectsResult(xml: []const u8) !S3ListObjectsV2Result {
.start_after = null,
};
var contents = std.ArrayList(S3ListObjectsContents).init(bun.default_allocator);
var common_prefixes = std.ArrayList([]const u8).init(bun.default_allocator);
var contents = bun.collections.ArrayListDefault(S3ListObjectsContents).init();
var common_prefixes = bun.collections.ArrayListDefault([]const u8).init();
// we don't use trailing ">" as it may finish with xmlns=...
if (strings.indexOf(xml, "<ListBucketResult")) |delete_result_pos| {
@@ -482,17 +482,17 @@ pub fn parseS3ListObjectsResult(xml: []const u8) !S3ListObjectsV2Result {
}
}
if (contents.items.len != 0) {
if (contents.items().len != 0) {
result.contents = contents;
} else {
for (contents.items) |*item| item.deinit();
for (contents.items()) |*item| item.deinit();
contents.deinit();
}
if (common_prefixes.items.len != 0) {
if (common_prefixes.items().len != 0) {
result.common_prefixes = common_prefixes;
} else {
common_prefixes.deinit();
common_prefixes.deinitShallow();
}
}


@@ -857,10 +857,10 @@ pub const String = extern struct {
pub fn createFormatForJS(globalObject: *jsc.JSGlobalObject, comptime fmt: [:0]const u8, args: anytype) bun.JSError!jsc.JSValue {
jsc.markBinding(@src());
var builder = std.ArrayList(u8).init(bun.default_allocator);
var builder = bun.collections.ArrayListDefault(u8).init();
defer builder.deinit();
bun.handleOom(builder.writer().print(fmt, args));
return bun.cpp.BunString__createUTF8ForJS(globalObject, builder.items.ptr, builder.items.len);
return bun.cpp.BunString__createUTF8ForJS(globalObject, builder.items().ptr, builder.items().len);
}
pub fn parseDate(this: *String, globalObject: *jsc.JSGlobalObject) bun.JSError!f64 {

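The hunk above is the smallest instance of the pattern the other ArrayListDefault hunks in this compare repeat: construction no longer takes an allocator, and `items`, `capacity`, and `allocator` become method calls, with `deinitShallow()` used where the elements own no memory. A minimal before/after sketch, limited to calls that actually appear in these hunks (anything beyond them would be an assumption):

// Before
var builder = std.ArrayList(u8).init(bun.default_allocator);
defer builder.deinit();
bun.handleOom(builder.writer().print("{d}", .{42}));
const len = builder.items.len;

// After
var builder = bun.collections.ArrayListDefault(u8).init();
defer builder.deinit(); // deinitShallow() when elements hold no allocations
bun.handleOom(builder.writer().print("{d}", .{42}));
const len = builder.items().len;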

@@ -187,3 +187,45 @@ tsd.expectAssignable<NullSubprocess>(Bun.spawn([], { stdio: ["ignore", "inherit"
tsd.expectAssignable<NullSubprocess>(Bun.spawn([], { stdio: [null, null, null] }));
tsd.expectAssignable<SyncSubprocess<Bun.SpawnOptions.Readable, Bun.SpawnOptions.Readable>>(Bun.spawnSync([], {}));
// Lazy option types (async only)
{
// valid: lazy usable with async spawn
const p1 = Bun.spawn(["echo", "hello"], {
stdout: "pipe",
stderr: "pipe",
lazy: true,
});
tsd.expectType(p1.stdout).is<ReadableStream<Uint8Array<ArrayBuffer>>>();
}
{
// valid: lazy false is also allowed
const p2 = Bun.spawn(["echo", "hello"], {
stdout: "pipe",
stderr: "pipe",
lazy: false,
});
tsd.expectType(p2.stderr).is<ReadableStream<Uint8Array<ArrayBuffer>>>();
}
{
// invalid: lazy is not supported in spawnSync
Bun.spawnSync(["echo", "hello"], {
stdout: "pipe",
stderr: "pipe",
// @ts-expect-error lazy applies only to async spawn
lazy: true,
});
}
{
// invalid: lazy is not supported in spawnSync (object overload)
// @ts-expect-error lazy applies only to async spawn
Bun.spawnSync({
cmd: ["echo", "hello"],
stdout: "pipe",
stderr: "pipe",
lazy: true,
});
}


@@ -79,6 +79,10 @@ describe("bun:test", () => {
});
});
test.each([1, 2, 3])("test.each", a => {
expectType<1 | 2 | 3>(a);
});
// inference should work when data is passed directly in
test.each([
["a", true, 5],


@@ -480,6 +480,7 @@ for (let [gcTick, label] of [
}
resolve && resolve();
// @ts-expect-error
resolve = undefined;
})();
await promise;
@@ -679,9 +680,10 @@ describe("should not hang", () => {
it(
"sleep " + sleep,
async () => {
const runs = [];
const runs: Promise<void>[] = [];
let initialMaxFD = -1;
for (let order of [
for (const order of [
["sleep", "kill", "unref", "exited"],
["sleep", "unref", "kill", "exited"],
["kill", "sleep", "unref", "exited"],
@@ -789,6 +791,7 @@ describe("close handling", () => {
await exitPromise;
})();
Bun.gc(false);
await Bun.sleep(0);
@@ -837,3 +840,182 @@ it("error does not UAF", async () => {
}
expect(emsg).toInclude(" ");
});
describe("onDisconnect", () => {
it.todoIf(isWindows)("ipc delivers message", async () => {
const msg = Promise.withResolvers<void>();
let ipcMessage: unknown;
await using proc = spawn({
cmd: [
bunExe(),
"-e",
`
process.send("hello");
Promise.resolve().then(() => process.exit(0));
`,
],
ipc: message => {
ipcMessage = message;
msg.resolve();
},
stdio: ["inherit", "inherit", "inherit"],
env: bunEnv,
});
await msg.promise;
expect(ipcMessage).toBe("hello");
expect(await proc.exited).toBe(0);
});
it.todoIf(isWindows)("onDisconnect callback is called when IPC disconnects", async () => {
const disc = Promise.withResolvers<void>();
let disconnectCalled = false;
await using proc = spawn({
cmd: [
bunExe(),
"-e",
`
Promise.resolve().then(() => {
process.disconnect();
process.exit(0);
});
`,
],
// Ensure IPC channel is opened without relying on a message
ipc: () => {},
onDisconnect: () => {
disconnectCalled = true;
disc.resolve();
},
stdio: ["inherit", "inherit", "inherit"],
env: bunEnv,
});
await disc.promise;
expect(disconnectCalled).toBe(true);
expect(await proc.exited).toBe(0);
});
it("onDisconnect is not called when IPC is not used", async () => {
await using proc = spawn({
cmd: [bunExe(), "-e", "console.log('hello')"],
onDisconnect: () => {
expect().fail("onDisconnect was called()");
},
stdout: "pipe",
stderr: "ignore",
stdin: "ignore",
});
expect(await proc.exited).toBe(0);
});
});
describe("argv0", () => {
it("argv0 option changes process.argv0 but not executable", async () => {
await using proc = spawn({
cmd: [bunExe(), "-e", "console.log(process.argv0); console.log(process.execPath)"],
argv0: "custom-argv0",
stdout: "pipe",
stderr: "ignore",
stdin: "ignore",
env: bunEnv,
});
const output = await proc.stdout.text();
const lines = output.trim().split(/\r?\n/);
expect(lines[0]).toBe("custom-argv0");
expect(path.normalize(lines[1])).toBe(path.normalize(bunExe()));
await proc.exited;
});
it("argv0 option works with spawnSync", () => {
const argv0 = "custom-argv0-sync";
const proc = spawnSync({
cmd: [bunExe(), "-e", "console.log(JSON.stringify({ argv0: process.argv0, execPath: process.execPath }))"],
argv0,
stdout: "pipe",
stderr: "ignore",
stdin: "ignore",
env: bunEnv,
});
const output = JSON.parse(proc.stdout.toString().trim());
expect(output).toEqual({ argv0, execPath: path.normalize(bunExe()) });
});
it("argv0 defaults to cmd[0] when not specified", async () => {
await using proc = spawn({
cmd: [bunExe(), "-e", "console.log(process.argv0)"],
stdout: "pipe",
stderr: "ignore",
stdin: "ignore",
env: bunEnv,
});
const output = await proc.stdout.text();
expect(output.trim()).toBe(bunExe());
await proc.exited;
});
});
describe("option combinations", () => {
it("detached + argv0 works together", async () => {
await using proc = spawn({
cmd: [bunExe(), "-e", "console.log(process.argv0)"],
detached: true,
argv0: "custom-name",
stdout: "pipe",
stderr: "ignore",
stdin: "ignore",
env: bunEnv,
});
const output = await proc.stdout.text();
expect(output.trim()).toBe("custom-name");
await proc.exited;
});
it.todoIf(isWindows)("onDisconnect + ipc + serialization works together", async () => {
let messageReceived = false;
let disconnectCalled = false;
const msg = Promise.withResolvers<void>();
const disc = Promise.withResolvers<void>();
await using proc = spawn({
cmd: [
bunExe(),
"-e",
`
process.send({type: "hello", data: "world"});
Promise.resolve().then(() => {
process.disconnect();
process.exit(0);
});
`,
],
ipc: message => {
expect(message).toEqual({ type: "hello", data: "world" });
messageReceived = true;
msg.resolve();
},
onDisconnect: () => {
disconnectCalled = true;
disc.resolve();
},
serialization: "advanced",
stdio: ["inherit", "inherit", "inherit"],
env: bunEnv,
});
await Promise.all([msg.promise, disc.promise]);
expect(messageReceived).toBe(true);
expect(disconnectCalled).toBe(true);
expect(await proc.exited).toBe(0);
});
});


@@ -0,0 +1,7 @@
// Should not crash
test("abc", () => {
expect(async () => {
await Bun.sleep(100);
throw new Error("uh oh!");
}).toThrow("uh oh!");
}, 50);


@@ -0,0 +1,27 @@
import { bunEnv, bunExe, normalizeBunSnapshot } from "harness";
// the test should time out, not crash
test("23865", async () => {
const proc = Bun.spawn({
cmd: [bunExe(), "test", "./23865.fixture.ts"],
env: bunEnv,
cwd: import.meta.dir,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(exitCode).not.toBe(0);
expect(normalizeBunSnapshot(stdout)).toMatchInlineSnapshot(`"bun test <version> (<revision>)"`);
expect(normalizeBunSnapshot(stderr)).toMatchInlineSnapshot(`
"23865.fixture.ts:
(fail) abc
^ this test timed out after 50ms.
0 pass
1 fail
1 expect() calls
Ran 1 test across 1 file."
`);
});