Compare commits

..

10 Commits

Author SHA1 Message Date
autofix-ci[bot]
fae1e98802 [autofix.ci] apply automated fixes 2025-09-15 07:57:20 +00:00
Claude Bot
6dbdae7267 feat(pm): add --json and --depth support to bun pm ls
Implements npm-compatible JSON output for dependency listing with:
- --json flag for JSON output matching npm's schema
- --depth flag to limit tree depth (works for both JSON and tree output)
- Proper dependency type separation (dependencies, devDependencies, optionalDependencies, peerDependencies)
- Accurate resolved URLs from lockfile resolution data
- "from" field showing original version specs (only at root level)

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-15 07:55:12 +00:00
Jarred Sumner
6bafe2602e Fix Windows shell crash with && operator and external commands (#22651)
## What does this PR do?

Fixes https://github.com/oven-sh/bun/issues/22650
Fixes https://github.com/oven-sh/bun/issues/22615
Fixes https://github.com/oven-sh/bun/issues/22603
Fixes https://github.com/oven-sh/bun/issues/22602

Fixes a crash on Windows when a shell command run through `bun run`
(package.json scripts) uses the `&&` operator followed by an external
command.

### The Problem

The minimal reproduction was:
```bash
bun exec 'echo && node --version'
```

This would crash with: `panic(main thread): attempt to use null value`

### Root Causes

Two issues were causing the crash:

1. **Missing top_level_dir**: When `runPackageScriptForeground` creates
a MiniEventLoop for running package scripts, it wasn't setting the
`top_level_dir` field. This caused a null pointer dereference when the
shell tried to access it.

2. **MovableIfWindowsFd handling**: After PR #21800 introduced
`MovableIfWindowsFd` to handle file descriptor ownership on Windows, the
`IOWriter.fd` could be moved to libuv, leaving it null. When the shell
tried to spawn an external command after a `&&` operator, it would crash
trying to access this null fd.

### The Fix

1. Set `mini.top_level_dir = cwd` after initializing the MiniEventLoop
in `run_command.zig`
2. In `IO.zig`, when the fd has been moved to libuv (is null), use
`.inherit` for stdio instead of trying to pass the null fd
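The second fix boils down to a fallback rule: never hand a moved (null) fd to the spawned process. A minimal language-neutral sketch of that rule (Python here; the real code is Zig, and these names are illustrative):

```python
def stdio_for_writer(fd):
    """Decide what stdio to give a spawned external command.

    `fd is None` models a MovableIfWindowsFd whose descriptor has been
    moved to libuv; in that case fall back to inheriting the parent's
    stdio instead of passing a null fd.
    """
    if fd is None:
        return ("inherit",)
    return ("fd", fd)
```

Under this rule the `&&` chain's second command still gets usable stdio even after libuv has taken ownership of the writer's descriptor.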

### How did you verify your code works?

- Added a regression test that reproduces the issue
- Verified the test fails without the fix and passes with it
- Tested the minimal reproduction command directly
- The fix correctly allows both commands in the `&&` chain to execute

```bash
# Before fix: crashes
> bun exec 'echo test && node --version'
panic(main thread): attempt to use null value

# After fix: works correctly
> bun exec 'echo test && node --version'
test
v22.4.1
```
<sub>
also probably fixes #22615 and fixes #22603 and fixes #22602
</sub>

---------

Co-authored-by: Zack Radisic <zack@theradisic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-14 04:14:48 -07:00
Jarred Sumner
9411c62756 Make the bundler tests use the API by default in most cases (#22646)
### What does this PR do?

This branch:
> Ran 1600 tests across 46 files. [63.24s]

Main:
> Ran 1600 tests across 46 files. [137.05s]

This makes the bundler tests run roughly 74 seconds (about 2.2×) faster.

### How did you verify your code works?
2025-09-14 02:27:53 -07:00
Jarred Sumner
8614b98f6b Disable glibcxx assertions in WebKit release non-asan builds (#22647)
### What does this PR do?

Evidently, Ubuntu turns them on by default. 

Claude:
```
> What is the typical overhead for glibcxx assertions?

⏺ The overhead of glibc/libstdc++ assertions can be significant, especially in
  performance-critical code:

  Typical Overhead Ranges:

  libstdc++ (C++ standard library) assertions:
  - 5-15% overall performance impact in typical applications
  - Up to 30-50% in container-heavy code (vectors, maps, etc.)
  - Iterator validity checks are particularly expensive

  glibc (C library) assertions:
  - 2-8% overall impact in typical applications
  - Up to 20% in malloc/free intensive code
  - Heap consistency checks, buffer overflow detection
 ```
2025-09-14 02:25:23 -07:00
Jarred Sumner
ecd23df4ca Fix banner positioning with --format=cjs --target=bun (#22641)
## Summary
- Fixes incorrect banner positioning when using `--banner` with
`--format=cjs` and `--target=bun`
- Ensures Bun-specific comments (`// @bun @bun-cjs`) appear before user
banner content
- Properly extracts and positions hashbangs from banner content

## Problem
When using `--banner` with `--format=cjs --target=bun`, the banner was
incorrectly placed before the `// @bun @bun-cjs` comment and CJS wrapper
function, breaking the module format that Bun expects.

## Solution
Implemented proper ordering:
1. **Hashbang** (from source file or extracted from banner if it starts
with `#!`)
2. **@bun comments** (e.g., `// @bun`, `// @bun @bun-cjs`, `// @bun
@bytecode`)
3. **CJS wrapper** `(function(exports, require, module, __filename,
__dirname) {`
4. **Banner content** (excluding any extracted hashbang)
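The ordering rules above can be sketched as a small header assembler. This is a hypothetical Python mirror of the documented order, not the bundler's code:

```python
def assemble_header(banner, source_hashbang, bun_comment):
    """Assemble the CJS output header in the documented order:
    1. hashbang (from the source, or extracted from the banner)
    2. @bun comment(s), e.g. "// @bun @bun-cjs"
    3. CJS wrapper function
    4. remaining banner content
    """
    hashbang, body = source_hashbang, banner
    if hashbang is None and banner.startswith("#!"):
        # Extract a hashbang that the user put at the top of the banner.
        first, _, rest = banner.partition("\n")
        hashbang, body = first, rest
    parts = []
    if hashbang:
        parts.append(hashbang)
    parts.append(bun_comment)
    parts.append("(function(exports, require, module, __filename, __dirname) {")
    if body:
        parts.append(body)
    return "\n".join(parts)

out = assemble_header("#!/usr/bin/env bun\n// my banner", None, "// @bun @bun-cjs")
```

Here the hashbang is pulled out of the banner, the `@bun` comment and wrapper come next, and the rest of the banner follows, so Bun's module-format markers stay where the runtime expects them.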

## Test plan
- [x] Added comprehensive tests for banner positioning with CJS/ESM and
Bun target
- [x] Tests cover hashbang extraction from banners
- [x] Tests verify proper ordering with bytecode generation
- [x] All existing tests pass

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-14 01:01:22 -07:00
Jarred Sumner
3d8139dc27 fix(bundler): propagate TLA through importers (#22229)
(For internal tracking: fixes ENG-20351)

---------

Co-authored-by: Dylan Conway <dylan.conway567@gmail.com>
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: taylor.fish <contact@taylor.fish>
2025-09-13 16:15:03 -07:00
Ciro Spaciari
beea7180f3 refactor(MySQL) (#22619)
### What does this PR do?

### How did you verify your code works?

---------

Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-13 14:52:19 -07:00
Jarred Sumner
8e786c1cfc Fix bug causing truncated stack traces (#22624)
### What does this PR do?

Fixes https://github.com/oven-sh/bun/issues/21593 (very likely)

### How did you verify your code works?

Regression test
2025-09-13 03:25:34 -07:00
robobun
9b97dd11e2 Fix TTY reopening after stdin EOF (#22591)
## Summary
- Fixes ENXIO error when reopening `/dev/tty` after stdin reaches EOF
- Fixes ESPIPE error when reading from reopened TTY streams  
- Adds ref/unref methods to tty.ReadStream for socket-like behavior
- Enables TUI applications that read piped input then switch to
interactive TTY mode

## The Problem
TUI applications and interactive CLI tools have a pattern where they:
1. Read piped input as initial data: `echo "data" | tui-app`
2. After stdin ends, reopen `/dev/tty` for interactive session
3. Use the TTY for interactive input/output

This didn't work in Bun due to missing functionality:
- **ESPIPE error**: TTY ReadStreams incorrectly had `pos=0`, causing
`pread()` to be used, which fails on character devices
- **Missing methods**: tty.ReadStream lacked ref/unref methods that TUI
apps expect for socket-like behavior
- **Hardcoded isTTY**: tty.ReadStream always set `isTTY = true` even for
non-TTY file descriptors

## The Solution
1. **Fix ReadStream position**: For fd-based streams (like TTY), don't
default `start` to 0. This keeps `pos` undefined, ensuring `read()`
syscall is used instead of `pread()`.

2. **Add ref/unref methods**: Implement ref/unref on tty.ReadStream
prototype to match Node.js socket-like behavior, allowing TUI apps to
control event loop behavior.

3. **Dynamic isTTY check**: Use `isatty(fd)` to properly detect if the
file descriptor is actually a TTY.
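The `read()` vs `pread()` distinction behind fix 1 is easy to reproduce with a plain pipe, which, like a character device, is not seekable (POSIX-only Python sketch):

```python
import errno
import os

r, w = os.pipe()                 # a pipe, like a TTY, cannot seek
os.write(w, b"hi")
sequential = os.read(r, 2)       # read(): works on any readable fd

os.write(w, b"yo")
try:
    os.pread(r, 2, 0)            # pread(): requires a seekable fd
    pread_errno = None
except OSError as e:
    pread_errno = e.errno        # ESPIPE on pipes and character devices
```

This is why leaving `pos` undefined matters: a defined position makes the stream take the positional-read path, which the kernel rejects with ESPIPE on such descriptors.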

## Test Results
```bash
$ bun test test/regression/issue/tty-reopen-after-stdin-eof.test.ts
✓ can reopen /dev/tty after stdin EOF for interactive session
✓ TTY ReadStream should not set position for character devices

$ bun test test/regression/issue/tty-readstream-ref-unref.test.ts
✓ tty.ReadStream should have ref/unref methods when opened on /dev/tty
✓ tty.ReadStream ref/unref should behave like Node.js

$ bun test test/regression/issue/tui-app-tty-pattern.test.ts
✓ TUI app pattern: read piped stdin then reopen /dev/tty
✓ tty.ReadStream handles non-TTY file descriptors correctly
```

## Compatibility
Tested against Node.js v24.3.0; our behavior now matches:
- Can reopen `/dev/tty` after stdin EOF
- TTY ReadStream has `pos: undefined` and `start: undefined`
- tty.ReadStream has ref/unref methods for socket-like behavior
- `isTTY` is properly determined using `isatty(fd)`

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jarred Sumner <jarred@jarredsumner.com>
2025-09-13 01:00:57 -07:00
76 changed files with 4429 additions and 2544 deletions

View File

@@ -2,7 +2,7 @@ option(WEBKIT_VERSION "The version of WebKit to use")
option(WEBKIT_LOCAL "If a local version of WebKit should be used instead of downloading")
if(NOT WEBKIT_VERSION)
set(WEBKIT_VERSION 2d2e8dd5b020cc165e2bc1d284461b4504d624e5)
set(WEBKIT_VERSION 495c25e24927ba03277ae225cd42811588d03ff8)
endif()
string(SUBSTRING ${WEBKIT_VERSION} 0 16 WEBKIT_VERSION_PREFIX)

View File

@@ -135,7 +135,7 @@ pub const Run = struct {
null,
);
try bundle.runEnvLoader(false);
const mini = jsc.MiniEventLoop.initGlobal(bundle.env);
const mini = jsc.MiniEventLoop.initGlobal(bundle.env, null);
mini.top_level_dir = ctx.args.absolute_working_dir orelse "";
return bun.shell.Interpreter.initAndRunFromFile(ctx, mini, entry_path);
}

View File

@@ -627,8 +627,8 @@ pub fn NewServer(protocol_enum: enum { http, https }, development_kind: enum { d
}
pub fn jsValueAssertAlive(server: *ThisServer) jsc.JSValue {
// With JSRef, we can safely access the JS value even after stop() via weak reference
return server.js_value.get();
bun.assert(server.js_value.isNotEmpty());
return server.js_value.tryGet().?;
}
pub fn requestIP(this: *ThisServer, request: *jsc.WebCore.Request) bun.JSError!jsc.JSValue {
@@ -1124,7 +1124,7 @@ pub fn NewServer(protocol_enum: enum { http, https }, development_kind: enum { d
this.onReloadFromZig(&new_config, globalThis);
return this.js_value.get();
return this.js_value.tryGet() orelse .js_undefined;
}
pub fn onFetch(this: *ThisServer, ctx: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!jsc.JSValue {
@@ -1426,7 +1426,7 @@ pub fn NewServer(protocol_enum: enum { http, https }, development_kind: enum { d
pub fn finalize(this: *ThisServer) void {
httplog("finalize", .{});
this.js_value.deinit();
this.js_value.finalize();
this.flags.has_js_deinited = true;
this.deinitIfWeCan();
}
@@ -1539,8 +1539,7 @@ pub fn NewServer(protocol_enum: enum { http, https }, development_kind: enum { d
}
pub fn stop(this: *ThisServer, abrupt: bool) void {
const current_value = this.js_value.get();
this.js_value.setWeak(current_value);
this.js_value.downgrade();
if (this.config.allow_hot and this.config.id.len > 0) {
if (this.globalThis.bunVM().hotMap()) |hot| {

View File

@@ -1981,7 +1981,7 @@ pub fn NewRequestContext(comptime ssl_enabled: bool, comptime debug_mode: bool,
this.flags.has_called_error_handler = true;
const result = server.config.onError.call(
server.globalThis,
server.js_value.get(),
server.js_value.tryGet() orelse .js_undefined,
&.{value},
) catch |err| server.globalThis.takeException(err);
defer result.ensureStillAlive();

View File

@@ -9,7 +9,7 @@ for (const type of types) {
construct: true,
finalize: true,
configurable: false,
hasPendingActivity: true,
hasPendingActivity: type === "PostgresSQL",
klass: {
// escapeString: {
// fn: "escapeString",
@@ -60,7 +60,6 @@ for (const type of types) {
construct: true,
finalize: true,
configurable: false,
JSType: "0b11101110",
klass: {},
proto: {
@@ -77,11 +76,11 @@ for (const type of types) {
length: 0,
},
setMode: {
fn: "setMode",
fn: "setModeFromJS",
length: 1,
},
setPendingValue: {
fn: "setPendingValue",
fn: "setPendingValueFromJS",
length: 1,
},
},

View File

@@ -4,33 +4,28 @@ pub const JSRef = union(enum) {
finalized: void,
pub fn initWeak(value: jsc.JSValue) @This() {
bun.assert(!value.isEmptyOrUndefinedOrNull());
return .{ .weak = value };
}
pub fn initStrong(value: jsc.JSValue, globalThis: *jsc.JSGlobalObject) @This() {
bun.assert(!value.isEmptyOrUndefinedOrNull());
return .{ .strong = .create(value, globalThis) };
}
pub fn empty() @This() {
return .{ .weak = .zero };
return .{ .weak = .js_undefined };
}
pub fn get(this: *@This()) jsc.JSValue {
pub fn tryGet(this: *const @This()) ?jsc.JSValue {
return switch (this.*) {
.weak => this.weak,
.strong => this.strong.get() orelse .zero,
.finalized => .zero,
};
}
pub fn tryGet(this: *@This()) ?jsc.JSValue {
return switch (this.*) {
.weak => if (this.weak != .zero) this.weak else null,
.weak => if (this.weak.isEmptyOrUndefinedOrNull()) null else this.weak,
.strong => this.strong.get(),
.finalized => null,
};
}
pub fn setWeak(this: *@This(), value: jsc.JSValue) void {
bun.assert(!value.isEmptyOrUndefinedOrNull());
switch (this.*) {
.weak => {},
.strong => {
@@ -44,6 +39,7 @@ pub const JSRef = union(enum) {
}
pub fn setStrong(this: *@This(), value: jsc.JSValue, globalThis: *jsc.JSGlobalObject) void {
bun.assert(!value.isEmptyOrUndefinedOrNull());
if (this.* == .strong) {
this.strong.set(globalThis, value);
return;
@@ -54,7 +50,7 @@ pub const JSRef = union(enum) {
pub fn upgrade(this: *@This(), globalThis: *jsc.JSGlobalObject) void {
switch (this.*) {
.weak => {
bun.assert(this.weak != .zero);
bun.assert(!this.weak.isEmptyOrUndefinedOrNull());
this.* = .{ .strong = .create(this.weak, globalThis) };
},
.strong => {},
@@ -64,10 +60,41 @@ pub const JSRef = union(enum) {
}
}
pub fn downgrade(this: *@This()) void {
switch (this.*) {
.weak => {},
.strong => |*strong| {
const value = strong.trySwap() orelse .js_undefined;
value.ensureStillAlive();
strong.deinit();
this.* = .{ .weak = value };
},
.finalized => {
bun.debugAssert(false);
},
}
}
pub fn isEmpty(this: *const @This()) bool {
return switch (this.*) {
.weak => this.weak.isEmptyOrUndefinedOrNull(),
.strong => !this.strong.has(),
.finalized => true,
};
}
pub fn isNotEmpty(this: *const @This()) bool {
return switch (this.*) {
.weak => !this.weak.isEmptyOrUndefinedOrNull(),
.strong => this.strong.has(),
.finalized => false,
};
}
pub fn deinit(this: *@This()) void {
switch (this.*) {
.weak => {
this.weak = .zero;
this.weak = .js_undefined;
},
.strong => {
this.strong.deinit();
@@ -75,6 +102,11 @@ pub const JSRef = union(enum) {
.finalized => {},
}
}
pub fn finalize(this: *@This()) void {
this.deinit();
this.* = .{ .finalized = {} };
}
};
const bun = @import("bun");

View File

@@ -109,7 +109,8 @@ pub const ZigException = extern struct {
.source_lines_len = source_lines_count,
.source_lines_to_collect = source_lines_count,
.frames_ptr = &this.frames,
.frames_len = this.frames.len,
.frames_len = 0,
.frames_cap = this.frames.len,
},
};
this.loaded = true;

View File

@@ -1552,9 +1552,18 @@ JSC_DEFINE_HOST_FUNCTION(functionQueueMicrotask,
auto* globalObject = defaultGlobalObject(lexicalGlobalObject);
JSC::JSValue asyncContext = globalObject->m_asyncContextData.get()->getInternalField(0);
auto function = globalObject->performMicrotaskFunction();
#if ASSERT_ENABLED
ASSERT_WITH_MESSAGE(function, "Invalid microtask function");
ASSERT_WITH_MESSAGE(!callback.isEmpty(), "Invalid microtask callback");
#endif
if (asyncContext.isEmpty()) {
asyncContext = JSC::jsUndefined();
}
// This is a JSC builtin function
lexicalGlobalObject->queueMicrotask(globalObject->performMicrotaskFunction(), callback, asyncContext,
lexicalGlobalObject->queueMicrotask(function, callback, asyncContext,
JSC::JSValue {}, JSC::JSValue {});
return JSC::JSValue::encode(JSC::jsUndefined());
@@ -4151,6 +4160,12 @@ extern "C" void JSC__JSGlobalObject__queueMicrotaskCallback(Zig::GlobalObject* g
{
JSFunction* function = globalObject->nativeMicrotaskTrampoline();
#if ASSERT_ENABLED
ASSERT_WITH_MESSAGE(function, "Invalid microtask function");
ASSERT_WITH_MESSAGE(ptr, "Invalid microtask context");
ASSERT_WITH_MESSAGE(callback, "Invalid microtask callback");
#endif
// Do not use JSCell* here because the GC will try to visit it.
globalObject->queueMicrotask(function, JSValue(std::bit_cast<double>(reinterpret_cast<uintptr_t>(ptr))), JSValue(std::bit_cast<double>(reinterpret_cast<uintptr_t>(callback))), jsUndefined(), jsUndefined());
}

View File

@@ -9,6 +9,7 @@ pub const ZigStackTrace = extern struct {
frames_ptr: [*]ZigStackFrame,
frames_len: u8,
frames_cap: u8,
/// Non-null if `source_lines_*` points into data owned by a JSC::SourceProvider.
/// If so, then .deref must be called on it to release the memory.
@@ -23,6 +24,7 @@ pub const ZigStackTrace = extern struct {
.frames_ptr = frames_slice.ptr,
.frames_len = @min(frames_slice.len, std.math.maxInt(u8)),
.frames_cap = @min(frames_slice.len, std.math.maxInt(u8)),
.referenced_source_provider = null,
};

View File

@@ -3453,6 +3453,7 @@ void JSC__JSPromise__rejectOnNextTickWithHandled(JSC::JSPromise* promise, JSC::J
JSC::EncodedJSValue encoedValue, bool handled)
{
JSC::JSValue value = JSC::JSValue::decode(encoedValue);
auto& vm = JSC::getVM(lexicalGlobalObject);
auto scope = DECLARE_THROW_SCOPE(vm);
uint32_t flags = promise->internalField(JSC::JSPromise::Field::Flags).get().asUInt32();
@@ -3463,10 +3464,28 @@ void JSC__JSPromise__rejectOnNextTickWithHandled(JSC::JSPromise* promise, JSC::J
promise->internalField(JSC::JSPromise::Field::Flags).set(vm, promise, jsNumber(flags | JSC::JSPromise::isFirstResolvingFunctionCalledFlag));
auto* globalObject = jsCast<Zig::GlobalObject*>(promise->globalObject());
auto microtaskFunction = globalObject->performMicrotaskFunction();
auto rejectPromiseFunction = globalObject->rejectPromiseFunction();
auto asyncContext = globalObject->m_asyncContextData.get()->getInternalField(0);
#if ASSERT_ENABLED
ASSERT_WITH_MESSAGE(microtaskFunction, "Invalid microtask function");
ASSERT_WITH_MESSAGE(rejectPromiseFunction, "Invalid microtask callback");
ASSERT_WITH_MESSAGE(!value.isEmpty(), "Invalid microtask value");
#endif
if (asyncContext.isEmpty()) {
asyncContext = jsUndefined();
}
if (value.isEmpty()) {
value = jsUndefined();
}
globalObject->queueMicrotask(
globalObject->performMicrotaskFunction(),
globalObject->rejectPromiseFunction(),
microtaskFunction,
rejectPromiseFunction,
globalObject->m_asyncContextData.get()->getInternalField(0),
promise,
value);
@@ -4369,45 +4388,45 @@ bool JSC__JSValue__stringIncludes(JSC::EncodedJSValue value, JSC::JSGlobalObject
return stringToSearchIn.find(searchString, 0) != WTF::notFound;
}
static void populateStackFrameMetadata(JSC::VM& vm, JSC::JSGlobalObject* globalObject, const JSC::StackFrame* stackFrame, ZigStackFrame* frame)
static void populateStackFrameMetadata(JSC::VM& vm, JSC::JSGlobalObject* globalObject, const JSC::StackFrame& stackFrame, ZigStackFrame& frame)
{
if (stackFrame->isWasmFrame()) {
frame->code_type = ZigStackFrameCodeWasm;
if (stackFrame.isWasmFrame()) {
frame.code_type = ZigStackFrameCodeWasm;
auto name = Zig::functionName(vm, globalObject, *stackFrame, false, nullptr);
auto name = Zig::functionName(vm, globalObject, stackFrame, false, nullptr);
if (!name.isEmpty()) {
frame->function_name = Bun::toStringRef(name);
frame.function_name = Bun::toStringRef(name);
}
auto sourceURL = Zig::sourceURL(vm, *stackFrame);
auto sourceURL = Zig::sourceURL(vm, stackFrame);
if (sourceURL != "[wasm code]"_s) {
// [wasm code] is a useless source URL, so we don't bother to set it.
// It is the default value JSC returns.
frame->source_url = Bun::toStringRef(sourceURL);
frame.source_url = Bun::toStringRef(sourceURL);
}
return;
}
auto sourceURL = Zig::sourceURL(vm, *stackFrame);
frame->source_url = Bun::toStringRef(sourceURL);
auto m_codeBlock = stackFrame->codeBlock();
auto sourceURL = Zig::sourceURL(vm, stackFrame);
frame.source_url = Bun::toStringRef(sourceURL);
auto m_codeBlock = stackFrame.codeBlock();
if (m_codeBlock) {
switch (m_codeBlock->codeType()) {
case JSC::EvalCode: {
frame->code_type = ZigStackFrameCodeEval;
frame.code_type = ZigStackFrameCodeEval;
return;
}
case JSC::ModuleCode: {
frame->code_type = ZigStackFrameCodeModule;
frame.code_type = ZigStackFrameCodeModule;
return;
}
case JSC::GlobalCode: {
frame->code_type = ZigStackFrameCodeGlobal;
frame.code_type = ZigStackFrameCodeGlobal;
return;
}
case JSC::FunctionCode: {
frame->code_type = !m_codeBlock->isConstructor() ? ZigStackFrameCodeFunction : ZigStackFrameCodeConstructor;
frame.code_type = !m_codeBlock->isConstructor() ? ZigStackFrameCodeFunction : ZigStackFrameCodeConstructor;
break;
}
default:
@@ -4415,7 +4434,7 @@ static void populateStackFrameMetadata(JSC::VM& vm, JSC::JSGlobalObject* globalO
}
}
auto calleeCell = stackFrame->callee();
auto calleeCell = stackFrame.callee();
if (!calleeCell)
return;
@@ -4425,17 +4444,17 @@ static void populateStackFrameMetadata(JSC::VM& vm, JSC::JSGlobalObject* globalO
WTF::String functionName = Zig::functionName(vm, globalObject, callee);
if (!functionName.isEmpty()) {
frame->function_name = Bun::toStringRef(functionName);
frame.function_name = Bun::toStringRef(functionName);
}
frame->is_async = stackFrame->isAsyncFrame();
frame.is_async = stackFrame.isAsyncFrame();
}
static void populateStackFramePosition(const JSC::StackFrame* stackFrame, BunString* source_lines,
static void populateStackFramePosition(const JSC::StackFrame& stackFrame, BunString* source_lines,
OrdinalNumber* source_line_numbers, uint8_t source_lines_count,
ZigStackFramePosition* position, JSC::SourceProvider** referenced_source_provider, PopulateStackTraceFlags flags)
ZigStackFramePosition& position, JSC::SourceProvider** referenced_source_provider, PopulateStackTraceFlags flags)
{
auto code = stackFrame->codeBlock();
auto code = stackFrame.codeBlock();
if (!code)
return;
@@ -4448,19 +4467,19 @@ static void populateStackFramePosition(const JSC::StackFrame* stackFrame, BunStr
if (sourceString.isNull()) [[unlikely]]
return;
if (!stackFrame->hasBytecodeIndex()) {
if (stackFrame->hasLineAndColumnInfo()) {
auto lineColumn = stackFrame->computeLineAndColumn();
position->line_zero_based = OrdinalNumber::fromOneBasedInt(lineColumn.line).zeroBasedInt();
position->column_zero_based = OrdinalNumber::fromOneBasedInt(lineColumn.column).zeroBasedInt();
if (!stackFrame.hasBytecodeIndex()) {
if (stackFrame.hasLineAndColumnInfo()) {
auto lineColumn = stackFrame.computeLineAndColumn();
position.line_zero_based = OrdinalNumber::fromOneBasedInt(lineColumn.line).zeroBasedInt();
position.column_zero_based = OrdinalNumber::fromOneBasedInt(lineColumn.column).zeroBasedInt();
}
position->byte_position = -1;
position.byte_position = -1;
return;
}
auto location = Bun::getAdjustedPositionForBytecode(code, stackFrame->bytecodeIndex());
*position = location;
auto location = Bun::getAdjustedPositionForBytecode(code, stackFrame.bytecodeIndex());
memcpy(&position, &location, sizeof(ZigStackFramePosition));
if (flags == PopulateStackTraceFlags::OnlyPosition)
return;
@@ -4528,18 +4547,18 @@ static void populateStackFramePosition(const JSC::StackFrame* stackFrame, BunStr
}
}
static void populateStackFrame(JSC::VM& vm, ZigStackTrace* trace, const JSC::StackFrame* stackFrame,
ZigStackFrame* frame, bool is_top, JSC::SourceProvider** referenced_source_provider, JSC::JSGlobalObject* globalObject, PopulateStackTraceFlags flags)
static void populateStackFrame(JSC::VM& vm, ZigStackTrace& trace, const JSC::StackFrame& stackFrame,
ZigStackFrame& frame, bool is_top, JSC::SourceProvider** referenced_source_provider, JSC::JSGlobalObject* globalObject, PopulateStackTraceFlags flags)
{
if (flags == PopulateStackTraceFlags::OnlyPosition) {
populateStackFrameMetadata(vm, globalObject, stackFrame, frame);
populateStackFramePosition(stackFrame, nullptr,
nullptr,
0, &frame->position, referenced_source_provider, flags);
0, frame.position, referenced_source_provider, flags);
} else if (flags == PopulateStackTraceFlags::OnlySourceLines) {
populateStackFramePosition(stackFrame, is_top ? trace->source_lines_ptr : nullptr,
is_top ? trace->source_lines_numbers : nullptr,
is_top ? trace->source_lines_to_collect : 0, &frame->position, referenced_source_provider, flags);
populateStackFramePosition(stackFrame, is_top ? trace.source_lines_ptr : nullptr,
is_top ? trace.source_lines_numbers : nullptr,
is_top ? trace.source_lines_to_collect : 0, frame.position, referenced_source_provider, flags);
}
}
@@ -4716,27 +4735,27 @@ public:
}
};
static void populateStackTrace(JSC::VM& vm, const WTF::Vector<JSC::StackFrame>& frames, ZigStackTrace* trace, JSC::JSGlobalObject* globalObject, PopulateStackTraceFlags flags)
static void populateStackTrace(JSC::VM& vm, const WTF::Vector<JSC::StackFrame>& frames, ZigStackTrace& trace, JSC::JSGlobalObject* globalObject, PopulateStackTraceFlags flags)
{
uint8_t frame_i = 0;
size_t stack_frame_i = 0;
const size_t total_frame_count = frames.size();
const uint8_t frame_count = total_frame_count < trace->frames_len ? total_frame_count : trace->frames_len;
const uint8_t frame_count = total_frame_count < trace.frames_cap ? total_frame_count : trace.frames_cap;
while (frame_i < frame_count && stack_frame_i < total_frame_count) {
// Skip native frames
while (stack_frame_i < total_frame_count && !(&frames.at(stack_frame_i))->hasLineAndColumnInfo() && !(&frames.at(stack_frame_i))->isWasmFrame()) {
while (stack_frame_i < total_frame_count && !(frames.at(stack_frame_i).hasLineAndColumnInfo()) && !(frames.at(stack_frame_i).isWasmFrame())) {
stack_frame_i++;
}
if (stack_frame_i >= total_frame_count)
break;
ZigStackFrame* frame = &trace->frames_ptr[frame_i];
populateStackFrame(vm, trace, &frames[stack_frame_i], frame, frame_i == 0, &trace->referenced_source_provider, globalObject, flags);
ZigStackFrame& frame = trace.frames_ptr[frame_i];
populateStackFrame(vm, trace, frames[stack_frame_i], frame, frame_i == 0, &trace.referenced_source_provider, globalObject, flags);
stack_frame_i++;
frame_i++;
}
trace->frames_len = frame_i;
trace.frames_len = frame_i;
}
static JSC::JSValue getNonObservable(JSC::VM& vm, JSC::JSGlobalObject* global, JSC::JSObject* obj, const JSC::PropertyName& propertyName)
@@ -4758,7 +4777,7 @@ static JSC::JSValue getNonObservable(JSC::VM& vm, JSC::JSGlobalObject* global, J
#define SYNTAX_ERROR_CODE 4
static void fromErrorInstance(ZigException* except, JSC::JSGlobalObject* global,
static void fromErrorInstance(ZigException& except, JSC::JSGlobalObject* global,
JSC::ErrorInstance* err, const Vector<JSC::StackFrame>* stackTrace,
JSC::JSValue val, PopulateStackTraceFlags flags)
{
@@ -4768,53 +4787,53 @@ static void fromErrorInstance(ZigException* except, JSC::JSGlobalObject* global,
bool getFromSourceURL = false;
if (stackTrace != nullptr && stackTrace->size() > 0) {
populateStackTrace(vm, *stackTrace, &except->stack, global, flags);
populateStackTrace(vm, *stackTrace, except.stack, global, flags);
} else if (err->stackTrace() != nullptr && err->stackTrace()->size() > 0) {
populateStackTrace(vm, *err->stackTrace(), &except->stack, global, flags);
populateStackTrace(vm, *err->stackTrace(), except.stack, global, flags);
} else {
getFromSourceURL = true;
}
except->type = (unsigned char)err->errorType();
except.type = (unsigned char)err->errorType();
if (err->isStackOverflowError()) {
except->type = 253;
except.type = 253;
}
if (err->isOutOfMemoryError()) {
except->type = 8;
except.type = 8;
}
if (except->type == SYNTAX_ERROR_CODE) {
except->message = Bun::toStringRef(err->sanitizedMessageString(global));
if (except.type == SYNTAX_ERROR_CODE) {
except.message = Bun::toStringRef(err->sanitizedMessageString(global));
} else if (JSC::JSValue message = obj->getIfPropertyExists(global, vm.propertyNames->message)) {
except->message = Bun::toStringRef(global, message);
except.message = Bun::toStringRef(global, message);
if (!scope.clearExceptionExceptTermination()) [[unlikely]]
return;
} else {
except->message = Bun::toStringRef(err->sanitizedMessageString(global));
except.message = Bun::toStringRef(err->sanitizedMessageString(global));
}
if (!scope.clearExceptionExceptTermination()) [[unlikely]] {
return;
}
except->name = Bun::toStringRef(err->sanitizedNameString(global));
except.name = Bun::toStringRef(err->sanitizedNameString(global));
if (!scope.clearExceptionExceptTermination()) [[unlikely]] {
return;
}
except->runtime_type = err->runtimeTypeForCause();
except.runtime_type = err->runtimeTypeForCause();
const auto& names = builtinNames(vm);
if (except->type != SYNTAX_ERROR_CODE) {
if (except.type != SYNTAX_ERROR_CODE) {
JSC::JSValue syscall = getNonObservable(vm, global, obj, names.syscallPublicName());
if (!scope.clearExceptionExceptTermination()) [[unlikely]]
return;
if (syscall) {
if (syscall.isString()) {
except->syscall = Bun::toStringRef(global, syscall);
except.syscall = Bun::toStringRef(global, syscall);
if (!scope.clearExceptionExceptTermination()) [[unlikely]]
return;
}
@@ -4825,7 +4844,7 @@ static void fromErrorInstance(ZigException* except, JSC::JSGlobalObject* global,
return;
if (code) {
if (code.isString() || code.isNumber()) {
except->system_code = Bun::toStringRef(global, code);
except.system_code = Bun::toStringRef(global, code);
if (!scope.clearExceptionExceptTermination()) [[unlikely]]
return;
}
@@ -4836,7 +4855,7 @@ static void fromErrorInstance(ZigException* except, JSC::JSGlobalObject* global,
return;
if (path) {
if (path.isString()) {
except->path = Bun::toStringRef(global, path);
except.path = Bun::toStringRef(global, path);
if (!scope.clearExceptionExceptTermination()) [[unlikely]]
return;
}
@@ -4847,7 +4866,7 @@ static void fromErrorInstance(ZigException* except, JSC::JSGlobalObject* global,
return;
if (fd) {
if (fd.isNumber()) {
except->fd = fd.toInt32(global);
except.fd = fd.toInt32(global);
}
}
@@ -4856,77 +4875,76 @@ static void fromErrorInstance(ZigException* except, JSC::JSGlobalObject* global,
return;
if (errno_) {
if (errno_.isNumber()) {
-except->errno_ = errno_.toInt32(global);
+except.errno_ = errno_.toInt32(global);
}
}
}
if (getFromSourceURL) {
{
-// we don't want to serialize JSC::StackFrame longer than we need to
-// so in this case, we parse the stack trace as a string
+// we don't want to serialize JSC::StackFrame longer than we need to
+// so in this case, we parse the stack trace as a string
-// This one intentionally calls getters.
-JSC::JSValue stackValue = obj->getIfPropertyExists(global, vm.propertyNames->stack);
-if (!scope.clearExceptionExceptTermination()) [[unlikely]]
-return;
-if (stackValue) {
-if (stackValue.isString()) {
-WTF::String stack = stackValue.toWTFString(global);
-if (!scope.clearExceptionExceptTermination()) [[unlikely]] {
-return;
-}
-if (!stack.isEmpty()) {
+// This one intentionally calls getters.
+JSC::JSValue stackValue = obj->getIfPropertyExists(global, vm.propertyNames->stack);
+if (!scope.clearExceptionExceptTermination()) [[unlikely]]
+return;
+if (stackValue) {
+if (stackValue.isString()) {
+WTF::String stack = stackValue.toWTFString(global);
+if (!scope.clearExceptionExceptTermination()) [[unlikely]] {
+return;
+}
+if (!stack.isEmpty()) {
-V8StackTraceIterator iterator(stack);
-const uint8_t frame_count = except->stack.frames_len;
+V8StackTraceIterator iterator(stack);
+const uint8_t frame_count = except.stack.frames_cap;
-except->stack.frames_len = 0;
+except.stack.frames_len = 0;
-iterator.forEachFrame([&](const V8StackTraceIterator::StackFrame& frame, bool& stop) -> void {
-ASSERT(except->stack.frames_len < frame_count);
-auto& current = except->stack.frames_ptr[except->stack.frames_len];
-current = {};
+iterator.forEachFrame([&](const V8StackTraceIterator::StackFrame& frame, bool& stop) -> void {
+ASSERT(except.stack.frames_len < frame_count);
+auto& current = except.stack.frames_ptr[except.stack.frames_len];
+current = {};
-String functionName = frame.functionName.toString();
-String sourceURL = frame.sourceURL.toString();
-current.function_name = Bun::toStringRef(functionName);
-current.source_url = Bun::toStringRef(sourceURL);
-current.position.line_zero_based = frame.lineNumber.zeroBasedInt();
-current.position.column_zero_based = frame.columnNumber.zeroBasedInt();
+String functionName = frame.functionName.toString();
+String sourceURL = frame.sourceURL.toString();
+current.function_name = Bun::toStringRef(functionName);
+current.source_url = Bun::toStringRef(sourceURL);
+current.position.line_zero_based = frame.lineNumber.zeroBasedInt();
+current.position.column_zero_based = frame.columnNumber.zeroBasedInt();
-current.remapped = true;
-current.is_async = frame.isAsync;
+current.remapped = true;
+current.is_async = frame.isAsync;
-if (frame.isConstructor) {
-current.code_type = ZigStackFrameCodeConstructor;
-} else if (frame.isGlobalCode) {
-current.code_type = ZigStackFrameCodeGlobal;
-}
-except->stack.frames_len += 1;
-stop = except->stack.frames_len >= frame_count;
-});
-if (except->stack.frames_len > 0) {
-getFromSourceURL = false;
-except->remapped = true;
-} else {
-except->stack.frames_len = frame_count;
+if (frame.isConstructor) {
+current.code_type = ZigStackFrameCodeConstructor;
+} else if (frame.isGlobalCode) {
+current.code_type = ZigStackFrameCodeGlobal;
+}
+except.stack.frames_len += 1;
+stop = except.stack.frames_len >= frame_count;
+});
+if (except.stack.frames_len > 0) {
+getFromSourceURL = false;
+except.remapped = true;
+}
}
}
}
}
}
+if (except.stack.frames_len == 0 && getFromSourceURL) {
JSC::JSValue sourceURL = getNonObservable(vm, global, obj, vm.propertyNames->sourceURL);
if (!scope.clearExceptionExceptTermination()) [[unlikely]]
return;
if (sourceURL) {
if (sourceURL.isString()) {
-except->stack.frames_ptr[0].source_url = Bun::toStringRef(global, sourceURL);
+except.stack.frames_ptr[0].source_url = Bun::toStringRef(global, sourceURL);
if (!scope.clearExceptionExceptTermination()) [[unlikely]]
return;
@@ -4937,7 +4955,7 @@ static void fromErrorInstance(ZigException* except, JSC::JSGlobalObject* global,
return;
if (column) {
if (column.isNumber()) {
-except->stack.frames_ptr[0].position.column_zero_based = OrdinalNumber::fromOneBasedInt(column.toInt32(global)).zeroBasedInt();
+except.stack.frames_ptr[0].position.column_zero_based = OrdinalNumber::fromOneBasedInt(column.toInt32(global)).zeroBasedInt();
}
}
@@ -4946,7 +4964,7 @@ static void fromErrorInstance(ZigException* except, JSC::JSGlobalObject* global,
return;
if (line) {
if (line.isNumber()) {
-except->stack.frames_ptr[0].position.line_zero_based = OrdinalNumber::fromOneBasedInt(line.toInt32(global)).zeroBasedInt();
+except.stack.frames_ptr[0].position.line_zero_based = OrdinalNumber::fromOneBasedInt(line.toInt32(global)).zeroBasedInt();
JSC::JSValue lineText = getNonObservable(vm, global, obj, builtinNames(vm).lineTextPublicName());
if (!scope.clearExceptionExceptTermination()) [[unlikely]]
@@ -4955,10 +4973,10 @@ static void fromErrorInstance(ZigException* except, JSC::JSGlobalObject* global,
if (lineText.isString()) {
if (JSC::JSString* jsStr = lineText.toStringOrNull(global)) {
auto str = jsStr->value(global);
-except->stack.source_lines_ptr[0] = Bun::toStringRef(str);
-except->stack.source_lines_numbers[0] = except->stack.frames_ptr[0].position.line();
-except->stack.source_lines_len = 1;
-except->remapped = true;
+except.stack.source_lines_ptr[0] = Bun::toStringRef(str);
+except.stack.source_lines_numbers[0] = except.stack.frames_ptr[0].position.line();
+except.stack.source_lines_len = 1;
+except.remapped = true;
}
}
}
@@ -4967,19 +4985,17 @@ static void fromErrorInstance(ZigException* except, JSC::JSGlobalObject* global,
}
{
-except->stack.frames_len = 1;
+except.stack.frames_len = 1;
PropertySlot slot = PropertySlot(obj, PropertySlot::InternalMethodType::VMInquiry, &vm);
-except->stack.frames_ptr[0].remapped = obj->getNonIndexPropertySlot(global, names.originalLinePublicName(), slot);
+except.stack.frames_ptr[0].remapped = obj->getNonIndexPropertySlot(global, names.originalLinePublicName(), slot);
if (!scope.clearExceptionExceptTermination()) [[unlikely]]
return;
}
}
}
except->exception = err;
}
-void exceptionFromString(ZigException* except, JSC::JSValue value, JSC::JSGlobalObject* global)
+void exceptionFromString(ZigException& except, JSC::JSValue value, JSC::JSGlobalObject* global)
{
auto& vm = JSC::getVM(global);
if (vm.hasPendingTerminationException()) [[unlikely]] {
@@ -4998,23 +5014,23 @@ void exceptionFromString(ZigException* except, JSC::JSValue value, JSC::JSGlobal
if (name_value) {
if (name_value.isString()) {
auto name_str = name_value.toWTFString(global);
-except->name = Bun::toStringRef(name_str);
+except.name = Bun::toStringRef(name_str);
if (name_str == "Error"_s) {
-except->type = JSErrorCodeError;
+except.type = JSErrorCodeError;
} else if (name_str == "EvalError"_s) {
-except->type = JSErrorCodeEvalError;
+except.type = JSErrorCodeEvalError;
} else if (name_str == "RangeError"_s) {
-except->type = JSErrorCodeRangeError;
+except.type = JSErrorCodeRangeError;
} else if (name_str == "ReferenceError"_s) {
-except->type = JSErrorCodeReferenceError;
+except.type = JSErrorCodeReferenceError;
} else if (name_str == "SyntaxError"_s) {
-except->type = JSErrorCodeSyntaxError;
+except.type = JSErrorCodeSyntaxError;
} else if (name_str == "TypeError"_s) {
-except->type = JSErrorCodeTypeError;
+except.type = JSErrorCodeTypeError;
} else if (name_str == "URIError"_s) {
-except->type = JSErrorCodeURIError;
+except.type = JSErrorCodeURIError;
} else if (name_str == "AggregateError"_s) {
-except->type = JSErrorCodeAggregateError;
+except.type = JSErrorCodeAggregateError;
}
}
}
}
@@ -5025,44 +5041,46 @@ void exceptionFromString(ZigException* except, JSC::JSValue value, JSC::JSGlobal
}
if (message) {
if (message.isString()) {
-except->message = Bun::toStringRef(message.toWTFString(global));
+except.message = Bun::toStringRef(message.toWTFString(global));
}
}
-auto sourceURL = obj->getIfPropertyExists(global, vm.propertyNames->sourceURL);
-if (scope.exception()) [[unlikely]] {
-scope.clearExceptionExceptTermination();
-}
-if (sourceURL) {
-if (sourceURL.isString()) {
-except->stack.frames_ptr[0].source_url = Bun::toStringRef(sourceURL.toWTFString(global));
-except->stack.frames_len = 1;
+if (except.stack.frames_len == 0) {
+auto sourceURL = obj->getIfPropertyExists(global, vm.propertyNames->sourceURL);
+if (scope.exception()) [[unlikely]] {
+scope.clearExceptionExceptTermination();
+}
-}
-if (scope.exception()) [[unlikely]] {
-scope.clearExceptionExceptTermination();
-}
-auto line = obj->getIfPropertyExists(global, vm.propertyNames->line);
-if (scope.exception()) [[unlikely]] {
-scope.clearExceptionExceptTermination();
-}
-if (line) {
-if (line.isNumber()) {
-except->stack.frames_ptr[0].position.line_zero_based = OrdinalNumber::fromOneBasedInt(line.toInt32(global)).zeroBasedInt();
-// TODO: don't sourcemap it twice
-auto originalLine = obj->getIfPropertyExists(global, builtinNames(vm).originalLinePublicName());
-if (scope.exception()) [[unlikely]] {
-scope.clearExceptionExceptTermination();
+if (sourceURL) {
+if (sourceURL.isString()) {
+except.stack.frames_ptr[0].source_url = Bun::toStringRef(sourceURL.toWTFString(global));
+except.stack.frames_len = 1;
+}
-if (originalLine) {
-if (originalLine.isNumber()) {
-except->stack.frames_ptr[0].position.line_zero_based = OrdinalNumber::fromOneBasedInt(originalLine.toInt32(global)).zeroBasedInt();
-}
+if (scope.exception()) [[unlikely]] {
+scope.clearExceptionExceptTermination();
+}
+auto line = obj->getIfPropertyExists(global, vm.propertyNames->line);
+if (scope.exception()) [[unlikely]] {
+scope.clearExceptionExceptTermination();
+}
+if (line) {
+if (line.isNumber()) {
+except.stack.frames_ptr[0].position.line_zero_based = OrdinalNumber::fromOneBasedInt(line.toInt32(global)).zeroBasedInt();
+// TODO: don't sourcemap it twice
+auto originalLine = obj->getIfPropertyExists(global, builtinNames(vm).originalLinePublicName());
+if (scope.exception()) [[unlikely]] {
+scope.clearExceptionExceptTermination();
+}
+if (originalLine) {
+if (originalLine.isNumber()) {
+except.stack.frames_ptr[0].position.line_zero_based = OrdinalNumber::fromOneBasedInt(originalLine.toInt32(global)).zeroBasedInt();
+}
+}
+except.stack.frames_len = 1;
+}
-except->stack.frames_len = 1;
-}
}
@@ -5082,9 +5100,9 @@ void exceptionFromString(ZigException* except, JSC::JSValue value, JSC::JSGlobal
case JSC::SymbolType: {
auto* symbol = asSymbol(cell);
if (symbol->description().isEmpty()) {
-except->message = BunStringEmpty;
+except.message = BunStringEmpty;
} else {
-except->message = Bun::toStringRef(symbol->description());
+except.message = Bun::toStringRef(symbol->description());
}
return;
}
@@ -5101,7 +5119,7 @@ void exceptionFromString(ZigException* except, JSC::JSValue value, JSC::JSGlobal
return;
}
-except->message = Bun::toStringRef(str);
+except.message = Bun::toStringRef(str);
}
extern "C" JSC::EncodedJSValue JSC__Exception__asJSValue(JSC::Exception* exception)
@@ -5264,24 +5282,24 @@ extern "C" [[ZIG_EXPORT(check_slow)]] void JSC__JSValue__toZigException(JSC::Enc
JSValue unwrapped = jscException->value();
if (JSC::ErrorInstance* error = JSC::jsDynamicCast<JSC::ErrorInstance*>(unwrapped)) {
-fromErrorInstance(exception, global, error, &jscException->stack(), unwrapped, PopulateStackTraceFlags::OnlyPosition);
+fromErrorInstance(*exception, global, error, &jscException->stack(), unwrapped, PopulateStackTraceFlags::OnlyPosition);
return;
}
if (jscException->stack().size() > 0) {
-populateStackTrace(global->vm(), jscException->stack(), &exception->stack, global, PopulateStackTraceFlags::OnlyPosition);
+populateStackTrace(global->vm(), jscException->stack(), exception->stack, global, PopulateStackTraceFlags::OnlyPosition);
}
-exceptionFromString(exception, unwrapped, global);
+exceptionFromString(*exception, unwrapped, global);
return;
}
if (JSC::ErrorInstance* error = JSC::jsDynamicCast<JSC::ErrorInstance*>(value)) {
-fromErrorInstance(exception, global, error, nullptr, value, PopulateStackTraceFlags::OnlyPosition);
+fromErrorInstance(*exception, global, error, nullptr, value, PopulateStackTraceFlags::OnlyPosition);
return;
}
-exceptionFromString(exception, value, global);
+exceptionFromString(*exception, value, global);
}
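The thread running through this file is mechanical: `fromErrorInstance`, `exceptionFromString`, and `populateStackTrace` now take `ZigException&`/`ZigStackTrace&` instead of pointers, so callers dereference exactly once at the `extern "C"` boundary and the callees can no longer receive null. A minimal sketch of the pattern (illustrative names, not Bun's actual types):

```cpp
#include <cassert>
#include <string>

struct Exception {
    std::string message;
};

// After the refactor: the callee takes a reference, so a null check (and the
// possibility of forgetting one) disappears from its body entirely.
inline void fillFromString(Exception& except, const std::string& value) {
    except.message = value;
}

// The extern "C" boundary still receives a pointer; it dereferences exactly
// once, at the edge, the way JSC__JSValue__toZigException now passes *exception.
inline void fillEntryPoint(Exception* exception, const std::string& value) {
    fillFromString(*exception, value);
}
```

The design choice is purely about moving the nullability assumption to one place instead of scattering it through every helper.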
void ZigException__collectSourceLines(JSC::EncodedJSValue jsException, JSC::JSGlobalObject* global, ZigException* exception)
@@ -5296,16 +5314,16 @@ void ZigException__collectSourceLines(JSC::EncodedJSValue jsException, JSC::JSGl
JSValue unwrapped = jscException->value();
if (jscException->stack().size() > 0) {
-populateStackTrace(global->vm(), jscException->stack(), &exception->stack, global, PopulateStackTraceFlags::OnlySourceLines);
+populateStackTrace(global->vm(), jscException->stack(), exception->stack, global, PopulateStackTraceFlags::OnlySourceLines);
}
-exceptionFromString(exception, unwrapped, global);
+exceptionFromString(*exception, unwrapped, global);
return;
}
if (JSC::ErrorInstance* error = JSC::jsDynamicCast<JSC::ErrorInstance*>(value)) {
if (error->stackTrace() != nullptr && error->stackTrace()->size() > 0) {
-populateStackTrace(global->vm(), *error->stackTrace(), &exception->stack, global, PopulateStackTraceFlags::OnlySourceLines);
+populateStackTrace(global->vm(), *error->stackTrace(), exception->stack, global, PopulateStackTraceFlags::OnlySourceLines);
}
return;
}
@@ -5377,7 +5395,7 @@ bool JSC__JSValue__isTerminationException(JSC::EncodedJSValue JSValue0)
extern "C" void JSC__Exception__getStackTrace(JSC::Exception* arg0, JSC::JSGlobalObject* global, ZigStackTrace* trace)
{
-populateStackTrace(arg0->vm(), arg0->stack(), trace, global, PopulateStackTraceFlags::OnlyPosition);
+populateStackTrace(arg0->vm(), arg0->stack(), *trace, global, PopulateStackTraceFlags::OnlyPosition);
}
void JSC__VM__shrinkFootprint(JSC::VM* arg0)
@@ -6130,8 +6148,9 @@ extern "C" void JSC__JSGlobalObject__queueMicrotaskJob(JSC::JSGlobalObject* arg0
if (microtaskArgs[3].isEmpty()) {
microtaskArgs[3] = jsUndefined();
}
+auto microTaskFunction = globalObject->performMicrotaskFunction();
#if ASSERT_ENABLED
+ASSERT_WITH_MESSAGE(microTaskFunction, "Invalid microtask function");
auto& vm = globalObject->vm();
if (microtaskArgs[0].isCell()) {
JSC::Integrity::auditCellFully(vm, microtaskArgs[0].asCell());
@@ -6148,10 +6167,11 @@ extern "C" void JSC__JSGlobalObject__queueMicrotaskJob(JSC::JSGlobalObject* arg0
if (microtaskArgs[3].isCell()) {
JSC::Integrity::auditCellFully(vm, microtaskArgs[3].asCell());
}
#endif
globalObject->queueMicrotask(
-globalObject->performMicrotaskFunction(),
+microTaskFunction,
WTFMove(microtaskArgs[0]),
WTFMove(microtaskArgs[1]),
WTFMove(microtaskArgs[2]),

View File

@@ -192,6 +192,7 @@ typedef struct ZigStackTrace {
uint8_t source_lines_to_collect;
ZigStackFrame* frames_ptr;
uint8_t frames_len;
+uint8_t frames_cap;
JSC::SourceProvider* referenced_source_provider;
} ZigStackTrace;
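The new `frames_cap` field exists because the string-based stack re-parse above resets `frames_len` to 0 before refilling; when `frames_len` doubled as the capacity, reading it after the reset gave a loop bound of zero (or, read before, went stale). A minimal sketch of the fixed pattern, with hypothetical stand-in types rather than Bun's actual structs:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical mirror of the ZigStackTrace change: track how many frame slots
// are allocated (frames_cap) separately from how many are filled (frames_len).
struct StackTrace {
    std::vector<int> frames;   // stand-in for ZigStackFrame* frames_ptr
    uint8_t frames_len = 0;    // frames currently filled
    uint8_t frames_cap = 0;    // frames allocated
};

// Refill the trace from newly parsed frames. Reading the capacity before
// resetting frames_len is exactly what frames_cap enables: with only
// frames_len, `frames_len = 0` would also destroy the loop bound.
inline uint8_t refill(StackTrace& trace, const std::vector<int>& parsed) {
    const uint8_t cap = trace.frames_cap; // was: trace.frames_len (wrong after reset)
    trace.frames_len = 0;
    for (int frame : parsed) {
        if (trace.frames_len >= cap) break; // never write past the allocation
        trace.frames[trace.frames_len++] = frame;
    }
    return trace.frames_len;
}
```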

View File

@@ -40,7 +40,7 @@ pub threadlocal var global: *MiniEventLoop = undefined;
pub const ConcurrentTaskQueue = UnboundedQueue(AnyTaskWithExtraContext, .next);
-pub fn initGlobal(env: ?*bun.DotEnv.Loader) *MiniEventLoop {
+pub fn initGlobal(env: ?*bun.DotEnv.Loader, cwd: ?[]const u8) *MiniEventLoop {
if (globalInitialized) return global;
const loop = MiniEventLoop.init(bun.default_allocator);
global = bun.handleOom(bun.default_allocator.create(MiniEventLoop));
@@ -54,6 +54,22 @@ pub fn initGlobal(env: ?*bun.DotEnv.Loader) *MiniEventLoop {
loader.* = bun.DotEnv.Loader.init(map, bun.default_allocator);
break :env_loader loader;
};
+// Set top_level_dir from provided cwd or get current working directory
+if (cwd) |dir| {
+global.top_level_dir = dir;
+} else if (global.top_level_dir.len == 0) {
+var buf: bun.PathBuffer = undefined;
+switch (bun.sys.getcwd(&buf)) {
+.result => |p| {
+global.top_level_dir = bun.default_allocator.dupe(u8, p) catch "";
+},
+.err => {
+global.top_level_dir = "";
+},
+}
+}
globalInitialized = true;
return global;
}
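This is the fix for the first root cause named above: `top_level_dir` is now filled from an explicit `cwd` or, failing that, from `getcwd`, so the shell never dereferences an unset value. A rough C++ analogue of the same cache-with-fallback pattern (function name is illustrative):

```cpp
#include <cassert>
#include <string>
#include <unistd.h>

// Illustrative analogue of MiniEventLoop.initGlobal's new cwd handling:
// prefer a caller-provided directory; otherwise fill the cached value from
// getcwd() once, falling back to "" instead of leaving it undefined.
inline std::string& topLevelDir(const char* provided = nullptr) {
    static std::string top_level_dir;
    if (provided) {
        top_level_dir = provided;
    } else if (top_level_dir.empty()) {
        char buf[4096];
        top_level_dir = getcwd(buf, sizeof buf) ? buf : "";
    }
    return top_level_dir;
}
```

As in the Zig version, callers that know the working directory (like `bun exec`) pass it in, while callers that don't pass null and get the cached fallback.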

View File

@@ -390,6 +390,7 @@ pub const LinkerContext = struct {
}
}
+try this.graph.propagateAsyncDependencies();
try this.scanImportsAndExports();
// Stop now if there were errors

View File

@@ -472,6 +472,68 @@ pub const File = struct {
pub const List = MultiArrayList(File);
};
+pub fn propagateAsyncDependencies(this: *LinkerGraph) !void {
+const State = struct {
+visited: bun.collections.AutoBitSet,
+import_records: []const ImportRecord.List,
+flags: []JSMeta.Flags,
+pub fn visitAll(self: *@This()) void {
+for (0..self.import_records.len) |i| {
+self.visit(i);
+}
+}
+fn visit(self: *@This(), index: usize) void {
+if (self.visited.isSet(index)) return;
+self.visited.set(index);
+if (self.flags[index].is_async_or_has_async_dependency) return;
+for (self.import_records[index].sliceConst()) |*import_record| {
+switch (import_record.kind) {
+.stmt => {},
+// Any use of `import()` that makes the parent async will necessarily use
+// top-level await, so this will have already been detected by `validateTLA`,
+// and `is_async_or_has_async_dependency` will already be true.
+//
+// We don't want to process these imports here because `import()` can appear in
+// non-top-level contexts (like inside an async function) or in contexts that
+// don't use `await`, which don't necessarily make the parent module async.
+.dynamic => continue,
+// `require()` cannot import async modules.
+.require, .require_resolve => continue,
+// Entry points; not imports from JS
+.entry_point_run, .entry_point_build => continue,
+// CSS imports
+.at, .at_conditional, .url, .composes => continue,
+// Other non-JS imports
+.html_manifest, .internal => continue,
+}
+const import_index: usize = import_record.source_index.get();
+if (import_index >= self.import_records.len) continue;
+self.visit(import_index);
+if (self.flags[import_index].is_async_or_has_async_dependency) {
+self.flags[index].is_async_or_has_async_dependency = true;
+break;
+}
+}
+}
+};
+var state: State = .{
+.visited = try .initEmpty(bun.default_allocator, this.ast.len),
+.import_records = this.ast.items(.import_records),
+.flags = this.meta.items(.flags),
+};
+defer state.visited.deinit(bun.default_allocator);
+state.visitAll();
+}
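The propagation above is a depth-first walk: a module becomes async when any *statically* imported module is (transitively) async, while `import()`, `require()`, CSS, and entry-point records are skipped. A compact C++ sketch of the same visited-set DFS, simplified so that each edge is just a target index plus an "is static import" flag:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Simplified model of LinkerGraph.propagateAsyncDependencies: each module has
// (target, is_static_import) edges; only static `import` statements propagate
// the async flag, mirroring how the Zig code skips .dynamic, .require, CSS,
// and entry-point import records.
struct AsyncPropagator {
    std::vector<std::vector<std::pair<std::size_t, bool>>> edges;
    std::vector<bool> is_async; // in: validateTLA results; out: propagated flags
    std::vector<bool> visited;

    void visitAll() {
        visited.assign(edges.size(), false);
        for (std::size_t i = 0; i < edges.size(); i++) visit(i);
    }

    void visit(std::size_t index) {
        if (visited[index]) return; // each module is processed once (cycle-safe)
        visited[index] = true;
        if (is_async[index]) return; // already async; nothing more to learn
        for (const auto& edge : edges[index]) {
            const std::size_t target = edge.first;
            const bool is_static = edge.second;
            if (!is_static || target >= edges.size()) continue;
            visit(target); // resolve the dependency's flag first
            if (is_async[target]) {
                is_async[index] = true;
                break;
            }
        }
    }
};
```

The `visited` set guarantees termination on import cycles, and the early `break` stops scanning a module's imports as soon as one async dependency is found, matching the Zig implementation.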
const string = []const u8;
const std = @import("std");

View File

@@ -117,49 +117,67 @@ pub fn postProcessJSChunk(ctx: GenerateChunkCtx, worker: *ThreadPool.Worker, chu
var newline_before_comment = false;
var is_executable = false;
-// Start with the hashbang if there is one. This must be done before the
-// banner because it only works if it's literally the first character.
-if (chunk.isEntryPoint()) {
-const is_bun = c.graph.ast.items(.target)[chunk.entry_point.source_index].isBun();
-const hashbang = c.graph.ast.items(.hashbang)[chunk.entry_point.source_index];
+// Extract hashbang and banner for entry points
+const hashbang, const banner = if (chunk.isEntryPoint()) brk: {
+const source_hashbang = c.graph.ast.items(.hashbang)[chunk.entry_point.source_index];
-if (hashbang.len > 0) {
-j.pushStatic(hashbang);
-j.pushStatic("\n");
-line_offset.advance(hashbang);
-line_offset.advance("\n");
-newline_before_comment = true;
-is_executable = true;
+// If source file has a hashbang, use it
+if (source_hashbang.len > 0) {
+break :brk .{ source_hashbang, c.options.banner };
}
-if (is_bun) {
-const cjs_entry_chunk = "(function(exports, require, module, __filename, __dirname) {";
-if (ctx.c.options.generate_bytecode_cache and output_format == .cjs) {
-const input = "// @bun @bytecode @bun-cjs\n" ++ cjs_entry_chunk;
-j.pushStatic(input);
-line_offset.advance(input);
-} else if (ctx.c.options.generate_bytecode_cache) {
-j.pushStatic("// @bun @bytecode\n");
-line_offset.advance("// @bun @bytecode\n");
-} else if (output_format == .cjs) {
-j.pushStatic("// @bun @bun-cjs\n" ++ cjs_entry_chunk);
-line_offset.advance("// @bun @bun-cjs\n" ++ cjs_entry_chunk);
-} else {
-j.pushStatic("// @bun\n");
-line_offset.advance("// @bun\n");
-}
+// Otherwise check if banner starts with hashbang
+if (c.options.banner.len > 0 and strings.hasPrefixComptime(c.options.banner, "#!")) {
+const newline_pos = strings.indexOfChar(c.options.banner, '\n') orelse c.options.banner.len;
+const banner_hashbang = c.options.banner[0..newline_pos];
+break :brk .{ banner_hashbang, std.mem.trimLeft(u8, c.options.banner[newline_pos..], "\r\n") };
+}
+// No hashbang anywhere
+break :brk .{ "", c.options.banner };
+} else .{ "", c.options.banner };
+// Start with the hashbang if there is one. This must be done before the
+// banner because it only works if it's literally the first character.
+if (hashbang.len > 0) {
+j.pushStatic(hashbang);
+j.pushStatic("\n");
+line_offset.advance(hashbang);
+line_offset.advance("\n");
+newline_before_comment = true;
+is_executable = true;
+}
+// Add @bun comments and CJS wrapper start for each chunk when targeting Bun.
+const is_bun = c.graph.ast.items(.target)[chunk.entry_point.source_index].isBun();
+if (is_bun) {
+const cjs_entry_chunk = "(function(exports, require, module, __filename, __dirname) {";
+if (ctx.c.options.generate_bytecode_cache and output_format == .cjs) {
+const input = "// @bun @bytecode @bun-cjs\n" ++ cjs_entry_chunk;
+j.pushStatic(input);
+line_offset.advance(input);
+} else if (ctx.c.options.generate_bytecode_cache) {
+j.pushStatic("// @bun @bytecode\n");
+line_offset.advance("// @bun @bytecode\n");
+} else if (output_format == .cjs) {
+j.pushStatic("// @bun @bun-cjs\n" ++ cjs_entry_chunk);
+line_offset.advance("// @bun @bun-cjs\n" ++ cjs_entry_chunk);
+} else {
+j.pushStatic("// @bun\n");
+line_offset.advance("// @bun\n");
+}
+}
}
-if (c.options.banner.len > 0) {
-if (newline_before_comment) {
+// Add the banner (excluding any hashbang part) for all chunks
+if (banner.len > 0) {
+j.pushStatic(banner);
+line_offset.advance(banner);
+if (!strings.endsWithChar(banner, '\n')) {
+j.pushStatic("\n");
+line_offset.advance("\n");
+}
-j.pushStatic(ctx.c.options.banner);
-line_offset.advance(ctx.c.options.banner);
-j.pushStatic("\n");
-line_offset.advance("\n");
newline_before_comment = true;
}
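The rewritten block resolves which hashbang wins: the source file's own hashbang takes priority; otherwise a banner beginning with `#!` donates its first line as the hashbang (so it lands as literally the first characters of the output) and the remainder stays as the banner. A standalone C++ sketch of that selection logic (the helper name is hypothetical, not the bundler's API):

```cpp
#include <cassert>
#include <string>
#include <utility>

// Illustrative version of the hashbang/banner extraction: returns
// {hashbang, banner}. A source hashbang wins outright; otherwise a banner
// starting with "#!" is split into its first line plus the remaining text.
inline std::pair<std::string, std::string>
splitHashbangAndBanner(const std::string& source_hashbang, const std::string& banner) {
    if (!source_hashbang.empty()) return {source_hashbang, banner};
    if (banner.rfind("#!", 0) == 0) { // startsWith "#!"
        std::size_t nl = banner.find('\n');
        if (nl == std::string::npos) nl = banner.size();
        std::string rest = banner.substr(nl);
        // trim leading \r\n from the remaining banner, like std.mem.trimLeft
        rest.erase(0, rest.find_first_not_of("\r\n"));
        return {banner.substr(0, nl), rest};
    }
    return {"", banner};
}
```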
// Add the top-level directive if present (but omit "use strict" in ES
@@ -372,12 +390,9 @@ pub fn postProcessJSChunk(ctx: GenerateChunkCtx, worker: *ThreadPool.Worker, chu
}
},
.cjs => {
-if (chunk.isEntryPoint()) {
-const is_bun = ctx.c.graph.ast.items(.target)[chunk.entry_point.source_index].isBun();
-if (is_bun) {
-j.pushStatic("})\n");
-line_offset.advance("})\n");
-}
+if (is_bun) {
+j.pushStatic("})\n");
+line_offset.advance("})\n");
}
},
else => {},

View File

@@ -756,7 +756,7 @@ pub const BunxCommand = struct {
.stdin = .inherit,
.windows = if (Environment.isWindows) .{
-.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(this_transpiler.env)),
+.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(this_transpiler.env, null)),
},
}) catch |err| {
Output.prettyErrorln("<r><red>error<r>: bunx failed to install <b>{s}<r> due to error <b>{s}<r>", .{ install_param, @errorName(err) });

View File

@@ -109,7 +109,7 @@ fn execTask(allocator: std.mem.Allocator, task_: string, cwd: string, _: string,
.stdin = .inherit,
.windows = if (Environment.isWindows) .{
-.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null)),
+.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null, null)),
},
}) catch return;
}
@@ -1487,7 +1487,7 @@ pub const CreateCommand = struct {
.stdin = .inherit,
.windows = if (Environment.isWindows) .{
-.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null)),
+.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null, null)),
},
});
_ = try process.unwrap();

View File

@@ -9,9 +9,7 @@ pub const ExecCommand = struct {
null,
);
try bundle.runEnvLoader(false);
-const mini = bun.jsc.MiniEventLoop.initGlobal(bundle.env);
var buf: bun.PathBuffer = undefined;
const cwd = switch (bun.sys.getcwd(&buf)) {
.result => |p| p,
.err => |e| {
@@ -19,6 +17,7 @@ pub const ExecCommand = struct {
Global.exit(1);
},
};
+const mini = bun.jsc.MiniEventLoop.initGlobal(bundle.env, cwd);
const parts: []const []const u8 = &[_][]const u8{
cwd,
"[eval]",

View File

@@ -142,7 +142,6 @@ const State = struct {
handles: []ProcessHandle,
event_loop: *bun.jsc.MiniEventLoop,
remaining_scripts: usize = 0,
-total_expected_scripts: usize = 0,
// buffer for batched output
draw_buf: std.ArrayList(u8) = std.ArrayList(u8).init(bun.default_allocator),
last_lines_written: usize = 0,
@@ -152,7 +151,6 @@ const State = struct {
env: *bun.DotEnv.Loader,
pub fn isDone(this: *This) bool {
// We're done when all scripts that were started have finished
return this.remaining_scripts == 0;
}
@@ -193,29 +191,15 @@ const State = struct {
fn processExit(this: *This, handle: *ProcessHandle) !void {
this.remaining_scripts -= 1;
-// Check if the process exited successfully
-const success = if (handle.process) |proc| switch (proc.status) {
-.exited => |exited| exited.code == 0,
-else => false,
-} else false;
if (!this.aborted) {
-// Only start dependents if this process succeeded
-if (success) {
-for (handle.dependents.items) |dependent| {
-dependent.remaining_dependencies -= 1;
-if (dependent.remaining_dependencies == 0) {
-dependent.start() catch {
-Output.prettyErrorln("<r><red>error<r>: Failed to start process", .{});
-Global.exit(1);
-};
-}
+for (handle.dependents.items) |dependent| {
+dependent.remaining_dependencies -= 1;
+if (dependent.remaining_dependencies == 0) {
+dependent.start() catch {
+Output.prettyErrorln("<r><red>error<r>: Failed to start process", .{});
+Global.exit(1);
+};
+}
-} else {
-// If this process failed, abort all remaining processes
-// This prevents dependents from running
-this.abort();
-}
}
if (this.pretty_output) {
@@ -548,7 +532,7 @@ pub fn runScriptsWithFilter(ctx: Command.Context) !noreturn {
Global.exit(1);
}
-const event_loop = bun.jsc.MiniEventLoop.initGlobal(this_transpiler.env);
+const event_loop = bun.jsc.MiniEventLoop.initGlobal(this_transpiler.env, null);
const shell_bin: [:0]const u8 = if (Environment.isPosix)
RunCommand.findShell(this_transpiler.env.get("PATH") orelse "", fsinstance.top_level_dir) orelse return error.MissingShell
else
@@ -596,22 +580,18 @@ pub fn runScriptsWithFilter(ctx: Command.Context) !noreturn {
}
// compute dependencies (TODO: maybe we should do this only in a workspace?)
for (state.handles) |*handle| {
-// Iterate through the dependency values directly
-const deps = handle.config.deps.map.values();
-for (deps) |dep_value| {
-// Check version tag for workspace dependencies
-const tag = dep_value.version.tag;
-if (tag == .workspace) {
-// This is a workspace dependency
-// Get the dependency name from the dependency itself
-const dep_name = dep_value.name.slice(handle.config.deps.source_buf);
-// Check if this dependency is another package in our workspace
-if (map.get(dep_name)) |pkgs| {
-for (pkgs.items) |pkg| {
-try pkg.dependents.append(handle);
-handle.remaining_dependencies += 1;
-}
+var iter = handle.config.deps.map.iterator();
+while (iter.next()) |entry| {
+var sfa = std.heap.stackFallback(256, ctx.allocator);
+const alloc = sfa.get();
+const buf = try alloc.alloc(u8, entry.key_ptr.len());
+defer alloc.free(buf);
+const name = entry.key_ptr.slice(buf);
+// is it a workspace dependency?
+if (map.get(name)) |pkgs| {
+for (pkgs.items) |dep| {
+try dep.dependents.append(handle);
+handle.remaining_dependencies += 1;
+}
}
}
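The wiring above builds a simple counting scheduler: each script handle records its dependents and how many dependencies it is still waiting on; when a script exits, its dependents' counters are decremented and any that reach zero are started. A minimal C++ sketch of that pattern (types and names are illustrative, not Bun's):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of the scheduling pattern used by `bun run --filter`: each script
// handle counts unmet dependencies; a finished script decrements its
// dependents' counters and starts any that reach zero.
struct Handle {
    std::vector<std::size_t> dependents;   // indices of handles waiting on this one
    std::size_t remaining_dependencies = 0;
    bool started = false;
};

inline void start(std::vector<Handle>& handles, std::size_t i) {
    handles[i].started = true; // stand-in for spawning the script's process
}

inline void onExit(std::vector<Handle>& handles, std::size_t i) {
    for (std::size_t dep : handles[i].dependents) {
        if (--handles[dep].remaining_dependencies == 0) start(handles, dep);
    }
}
```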

View File

@@ -309,75 +309,86 @@ pub const PackageManagerCommand = struct {
const load_lockfile = pm.lockfile.loadFromCwd(pm, ctx.allocator, ctx.log, true);
handleLoadLockfileErrors(load_lockfile, pm);
-Output.flush();
-Output.disableBuffering();
const lockfile = load_lockfile.ok.lockfile;
-var iterator = Lockfile.Tree.Iterator(.node_modules).init(lockfile);
-var max_depth: usize = 0;
+// Determine max depth for traversal
+const max_display_depth = if (pm.options.depth) |d| d else std.math.maxInt(usize);
-var directories = std.ArrayList(NodeModulesFolder).init(ctx.allocator);
-defer directories.deinit();
-while (iterator.next(null)) |node_modules| {
-const path_len = node_modules.relative_path.len;
-const path = try ctx.allocator.alloc(u8, path_len + 1);
-bun.copy(u8, path, node_modules.relative_path);
-path[path_len] = 0;
-const dependencies = try ctx.allocator.alloc(DependencyID, node_modules.dependencies.len);
-bun.copy(DependencyID, dependencies, node_modules.dependencies);
-if (max_depth < node_modules.depth + 1) max_depth = node_modules.depth + 1;
-try directories.append(.{
-.relative_path = path[0..path_len :0],
-.dependencies = dependencies,
-.tree_id = node_modules.tree_id,
-.depth = node_modules.depth,
-});
-}
-const first_directory = directories.orderedRemove(0);
-var more_packages = try ctx.allocator.alloc(bool, max_depth);
-@memset(more_packages, false);
-if (first_directory.dependencies.len > 1) more_packages[0] = true;
-if (strings.leftHasAnyInRight(args, &.{ "-A", "-a", "--all" })) {
-try printNodeModulesFolderStructure(&first_directory, null, 0, &directories, lockfile, more_packages);
+if (pm.options.json_output) {
+// JSON output
+try printJsonDependencyTree(ctx, pm, lockfile, max_display_depth);
} else {
-var cwd_buf: bun.PathBuffer = undefined;
-const path = bun.getcwd(&cwd_buf) catch {
-Output.prettyErrorln("<r><red>error<r>: Could not get current working directory", .{});
-Global.exit(1);
-};
-const dependencies = lockfile.buffers.dependencies.items;
-const slice = lockfile.packages.slice();
-const resolutions = slice.items(.resolution);
-const root_deps = slice.items(.dependencies)[0];
+// Regular tree output
+Output.flush();
+Output.disableBuffering();
-Output.println("{s} node_modules ({d})", .{ path, lockfile.buffers.hoisted_dependencies.items.len });
-const string_bytes = lockfile.buffers.string_bytes.items;
-const sorted_dependencies = try ctx.allocator.alloc(DependencyID, root_deps.len);
-defer ctx.allocator.free(sorted_dependencies);
-for (sorted_dependencies, 0..) |*dep, i| {
-dep.* = @as(DependencyID, @truncate(root_deps.off + i));
+var iterator = Lockfile.Tree.Iterator(.node_modules).init(lockfile);
+var max_depth: usize = 0;
+var directories = std.ArrayList(NodeModulesFolder).init(ctx.allocator);
+defer directories.deinit();
+while (iterator.next(null)) |node_modules| {
+const path_len = node_modules.relative_path.len;
+const path = try ctx.allocator.alloc(u8, path_len + 1);
+bun.copy(u8, path, node_modules.relative_path);
+path[path_len] = 0;
+const dependencies = try ctx.allocator.alloc(DependencyID, node_modules.dependencies.len);
+bun.copy(DependencyID, dependencies, node_modules.dependencies);
+if (max_depth < node_modules.depth + 1) max_depth = node_modules.depth + 1;
+try directories.append(.{
+.relative_path = path[0..path_len :0],
+.dependencies = dependencies,
+.tree_id = node_modules.tree_id,
+.depth = node_modules.depth,
+});
+}
-std.sort.pdq(DependencyID, sorted_dependencies, ByName{
-.dependencies = dependencies,
-.buf = string_bytes,
-}, ByName.isLessThan);
-for (sorted_dependencies, 0..) |dependency_id, index| {
-const package_id = lockfile.buffers.resolutions.items[dependency_id];
-if (package_id >= lockfile.packages.len) continue;
-const name = dependencies[dependency_id].name.slice(string_bytes);
-const resolution = resolutions[package_id].fmt(string_bytes, .auto);
+const first_directory = directories.orderedRemove(0);
-if (index < sorted_dependencies.len - 1) {
-Output.prettyln("<d>├──<r> {s}<r><d>@{any}<r>\n", .{ name, resolution });
-} else {
-Output.prettyln("<d>└──<r> {s}<r><d>@{any}<r>\n", .{ name, resolution });
+var more_packages = try ctx.allocator.alloc(bool, max_depth);
+@memset(more_packages, false);
+if (first_directory.dependencies.len > 1) more_packages[0] = true;
+if (strings.leftHasAnyInRight(args, &.{ "-A", "-a", "--all" })) {
+try printNodeModulesFolderStructure(&first_directory, null, 0, &directories, lockfile, more_packages, max_display_depth);
+} else {
+var cwd_buf: bun.PathBuffer = undefined;
+const path = bun.getcwd(&cwd_buf) catch {
+Output.prettyErrorln("<r><red>error<r>: Could not get current working directory", .{});
+Global.exit(1);
+};
+const dependencies = lockfile.buffers.dependencies.items;
+const slice = lockfile.packages.slice();
+const resolutions = slice.items(.resolution);
+const root_deps = slice.items(.dependencies)[0];
+Output.println("{s} node_modules ({d})", .{ path, lockfile.buffers.hoisted_dependencies.items.len });
+const string_bytes = lockfile.buffers.string_bytes.items;
+const sorted_dependencies = try ctx.allocator.alloc(DependencyID, root_deps.len);
+defer ctx.allocator.free(sorted_dependencies);
+for (sorted_dependencies, 0..) |*dep, i| {
+dep.* = @as(DependencyID, @truncate(root_deps.off + i));
+}
+std.sort.pdq(DependencyID, sorted_dependencies, ByName{
+.dependencies = dependencies,
+.buf = string_bytes,
+}, ByName.isLessThan);
+for (sorted_dependencies, 0..) |dependency_id, index| {
+const package_id = lockfile.buffers.resolutions.items[dependency_id];
+if (package_id >= lockfile.packages.len) continue;
+const name = dependencies[dependency_id].name.slice(string_bytes);
+const resolution = resolutions[package_id].fmt(string_bytes, .auto);
+if (index < sorted_dependencies.len - 1) {
+Output.prettyln("<d>├──<r> {s}<r><d>@{any}<r>\n", .{ name, resolution });
+} else {
+Output.prettyln("<d>└──<r> {s}<r><d>@{any}<r>\n", .{ name, resolution });
+}
+}
}
}
@@ -441,6 +452,360 @@ pub const PackageManagerCommand = struct {
Global.exit(0);
}
}
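For reference, the npm-compatible schema this output aims to match (per the commit message: `dependencies`/`devDependencies`/`optionalDependencies`/`peerDependencies` sections, `resolved` URLs, and a root-level `from` field) looks roughly like the following; the package name and versions here are made up for illustration:

```json
{
  "version": "1.0.0",
  "name": "my-app",
  "dependencies": {
    "left-pad": {
      "version": "1.3.0",
      "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
      "from": "left-pad@^1.3.0"
    }
  }
}
```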
fn printJsonDependencyTree(ctx: Command.Context, pm: *PackageManager, lockfile: *Lockfile, max_depth: usize) !void {
const allocator = ctx.allocator;
const dependencies = lockfile.buffers.dependencies.items;
const string_bytes = lockfile.buffers.string_bytes.items;
const slice = lockfile.packages.slice();
const resolutions = slice.items(.resolution);
const names = slice.items(.name);
const root_deps = slice.items(.dependencies)[0];
// Get root package info from lockfile
const root_package_id = pm.root_package_id.get(lockfile, pm.workspace_name_hash);
const root_name = if (root_package_id < lockfile.packages.len)
names[root_package_id].slice(string_bytes)
else if (pm.root_package_json_name_at_time_of_init.len > 0)
pm.root_package_json_name_at_time_of_init
else
"unknown";
// Get version from root package resolution
// For the root package, we typically have a "root" resolution tag, so we need to check package.json
var version_buf: [512]u8 = undefined;
const version = if (root_package_id < lockfile.packages.len and resolutions[root_package_id].tag == .npm)
try std.fmt.bufPrint(&version_buf, "{}", .{resolutions[root_package_id].value.npm.version.fmt(string_bytes)})
else if (root_package_id < lockfile.packages.len and resolutions[root_package_id].tag != .root)
try std.fmt.bufPrint(&version_buf, "{}", .{resolutions[root_package_id].fmt(string_bytes, .auto)})
else blk: {
// Try to read version from package.json for root packages
var path_buf: bun.PathBuffer = undefined;
const package_json_path = std.fmt.bufPrintZ(&path_buf, "{s}/package.json", .{pm.root_dir.dir}) catch "package.json";
if (std.fs.cwd().openFile(package_json_path, .{})) |file| {
defer file.close();
const content = file.readToEndAlloc(allocator, 1024 * 1024) catch null;
if (content) |c| {
defer allocator.free(c);
// Naive extraction of the "version" field (scans for the key instead of parsing the JSON)
if (std.mem.indexOf(u8, c, "\"version\"")) |pos| {
if (std.mem.indexOfPos(u8, c, pos + 9, "\"")) |start| {
if (std.mem.indexOfPos(u8, c, start + 1, "\"")) |end| {
const v = c[start + 1 .. end];
break :blk try std.fmt.bufPrint(&version_buf, "{s}", .{v});
}
}
}
}
} else |_| {}
break :blk "0.0.1";
};
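The `blk` above does a naive substring scan for the `"version"` key rather than parsing the JSON; the equivalent scan in JavaScript (where `JSON.parse` would normally be preferred) looks like this, including the same `"0.0.1"` fallback:

```javascript
// Naive extraction of the "version" field: find the key, then take the
// text between the next two double quotes. Breaks on escaped quotes or
// a "version" substring appearing earlier in the file.
function extractVersion(json) {
  const pos = json.indexOf('"version"');
  if (pos === -1) return "0.0.1"; // fallback mirrors the Zig code
  const start = json.indexOf('"', pos + '"version"'.length);
  if (start === -1) return "0.0.1";
  const end = json.indexOf('"', start + 1);
  if (end === -1) return "0.0.1";
  return json.slice(start + 1, end);
}

const v = extractVersion('{ "name": "demo", "version": "1.2.3" }');
// v === "1.2.3"
```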
// Start building JSON output
var buffer = std.ArrayList(u8).init(allocator);
defer buffer.deinit();
var writer = buffer.writer();
try writer.writeAll("{\n");
try writer.print(" \"version\": \"{s}\",\n", .{version});
try writer.print(" \"name\": \"{s}\"", .{root_name});
// Separate dependencies by type
var prod_deps = std.ArrayList(DependencyID).init(allocator);
var dev_deps = std.ArrayList(DependencyID).init(allocator);
var peer_deps = std.ArrayList(DependencyID).init(allocator);
var optional_deps = std.ArrayList(DependencyID).init(allocator);
defer prod_deps.deinit();
defer dev_deps.deinit();
defer peer_deps.deinit();
defer optional_deps.deinit();
// Categorize dependencies by type
if (root_deps.len > 0) {
for (0..root_deps.len) |i| {
const dep_id = @as(DependencyID, @truncate(root_deps.off + i));
const dep = dependencies[dep_id];
if (dep.behavior.peer) {
try peer_deps.append(dep_id);
} else if (dep.behavior.dev) {
try dev_deps.append(dep_id);
} else if (dep.behavior.optional) {
try optional_deps.append(dep_id);
} else if (dep.behavior.prod) {
try prod_deps.append(dep_id);
}
}
}
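The categorization above checks the behavior flags in a fixed order (peer, then dev, then optional, then prod), so a dependency carrying multiple flags lands in the first matching bucket. A sketch of that precedence (flag names mirror the Zig struct; the data is hypothetical):

```javascript
// Bucket dependencies by behavior flags, first match wins:
// peer > dev > optional > prod.
function categorize(deps) {
  const buckets = { peer: [], dev: [], optional: [], prod: [] };
  for (const dep of deps) {
    if (dep.behavior.peer) buckets.peer.push(dep.name);
    else if (dep.behavior.dev) buckets.dev.push(dep.name);
    else if (dep.behavior.optional) buckets.optional.push(dep.name);
    else if (dep.behavior.prod) buckets.prod.push(dep.name);
  }
  return buckets;
}

const buckets = categorize([
  { name: "a", behavior: { prod: true } },
  { name: "b", behavior: { dev: true } },
  { name: "c", behavior: { peer: true, optional: true } }, // "c" lands in peer, not optional
]);
```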
// A leading comma is always needed before each section, because "name"
// above is printed without a trailing comma.
if (prod_deps.items.len > 0) {
try writer.writeAll(",\n \"dependencies\": {\n");
try printJsonDependencySection(
writer,
lockfile,
prod_deps.items,
max_depth,
allocator,
true, // include "from" field
);
try writer.writeAll(" }");
}
if (dev_deps.items.len > 0) {
try writer.writeAll(",\n \"devDependencies\": {\n");
try printJsonDependencySection(
writer,
lockfile,
dev_deps.items,
max_depth,
allocator,
true, // include "from" field
);
try writer.writeAll(" }");
}
if (peer_deps.items.len > 0) {
try writer.writeAll(",\n \"peerDependencies\": {\n");
try printJsonDependencySection(
writer,
lockfile,
peer_deps.items,
max_depth,
allocator,
true, // include "from" field
);
try writer.writeAll(" }");
}
if (optional_deps.items.len > 0) {
try writer.writeAll(",\n \"optionalDependencies\": {\n");
try printJsonDependencySection(
writer,
lockfile,
optional_deps.items,
max_depth,
allocator,
true, // include "from" field
);
try writer.writeAll(" }");
}
try writer.writeAll("\n");
try writer.writeAll("}\n");
Output.flush();
Output.disableBuffering();
try Output.writer().writeAll(buffer.items);
Output.enableBuffering();
}
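Assembled, the writer above emits npm-style `ls --json` output. An illustrative (hypothetical) document of the shape being targeted, round-tripped through `JSON.parse` to confirm it is well-formed:

```javascript
// The schema bun pm ls --json aims for (mirroring npm ls --json):
// top-level name/version, then per-type dependency maps whose entries
// carry version, resolved, overridden, and (at the root only) from.
const example = `{
  "version": "1.0.0",
  "name": "demo-app",
  "dependencies": {
    "left-pad": {
      "version": "1.3.0",
      "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
      "overridden": false,
      "from": "^1.3.0"
    }
  }
}`;

const parsed = JSON.parse(example); // throws if the shape above is not valid JSON
```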
fn printJsonDependencySection(
writer: anytype,
lockfile: *Lockfile,
dep_ids: []const DependencyID,
max_depth: usize,
allocator: std.mem.Allocator,
include_from: bool,
) !void {
const dependencies = lockfile.buffers.dependencies.items;
const string_bytes = lockfile.buffers.string_bytes.items;
const slice = lockfile.packages.slice();
const resolutions = slice.items(.resolution);
// Sort dependencies by name
const sorted_deps = try allocator.alloc(DependencyID, dep_ids.len);
defer allocator.free(sorted_deps);
@memcpy(sorted_deps, dep_ids);
std.sort.pdq(DependencyID, sorted_deps, ByName{
.dependencies = dependencies,
.buf = string_bytes,
}, ByName.isLessThan);
for (sorted_deps, 0..) |dependency_id, i| {
const package_id = lockfile.buffers.resolutions.items[dependency_id];
if (package_id >= lockfile.packages.len) continue;
const dep = dependencies[dependency_id];
const dep_name = dep.name.slice(string_bytes);
const resolution = resolutions[package_id];
// Get version string based on resolution type
var version_buf: [512]u8 = undefined;
const version_str = if (resolution.tag == .npm)
try std.fmt.bufPrint(&version_buf, "{}", .{resolution.value.npm.version.fmt(string_bytes)})
else
try std.fmt.bufPrint(&version_buf, "{}", .{resolution.fmt(string_bytes, .auto)});
// Get resolved URL from resolution
var resolved_buf: [1024]u8 = undefined;
const resolved_url = try std.fmt.bufPrint(&resolved_buf, "{}", .{resolution.fmtURL(string_bytes)});
try writer.print(" \"{s}\": {{\n", .{dep_name});
try writer.print(" \"version\": \"{s}\",\n", .{version_str});
try writer.print(" \"resolved\": \"{s}\",\n", .{resolved_url});
try writer.writeAll(" \"overridden\": false");
// Add "from" field only if requested (for root-level deps)
if (include_from) {
const from_str = dep.version.literal.slice(string_bytes);
try writer.print(",\n \"from\": \"{s}\"", .{from_str});
}
// Add nested dependencies if depth allows
if (max_depth > 0) {
const package_deps = slice.items(.dependencies)[package_id];
if (package_deps.len > 0) {
try writer.writeAll(",\n \"dependencies\": {\n");
try printJsonNestedDependencies(
writer,
lockfile,
package_deps,
1,
max_depth,
allocator,
8,
);
try writer.writeAll("\n }");
}
}
try writer.writeAll("\n }");
if (i < sorted_deps.len - 1) {
try writer.writeAll(",");
}
try writer.writeAll("\n");
}
}
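Note how the loop above emits a comma after every entry except the last (`i < sorted_deps.len - 1`). In JavaScript the same effect is usually achieved with `join`, but the streaming-writer version of the pattern looks like this (names hypothetical):

```javascript
// Streaming JSON object body: write each entry, then a comma
// unless it is the last one.
function writeEntries(entries) {
  let out = "";
  entries.forEach((e, i) => {
    out += `"${e.key}": ${JSON.stringify(e.value)}`;
    if (i < entries.length - 1) out += ",";
    out += "\n";
  });
  return out;
}

const body = writeEntries([
  { key: "a", value: 1 },
  { key: "b", value: 2 },
]);
// body === '"a": 1,\n"b": 2\n' — wrapping it in braces yields valid JSON
```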
fn printJsonNestedDependencies(
writer: anytype,
lockfile: *Lockfile,
deps: Lockfile.DependencySlice,
current_depth: usize,
max_depth: usize,
allocator: std.mem.Allocator,
indent: usize,
) !void {
if (current_depth > max_depth) return;
const dependencies = lockfile.buffers.dependencies.items;
const string_bytes = lockfile.buffers.string_bytes.items;
const slice = lockfile.packages.slice();
const resolutions = slice.items(.resolution);
// Sort dependencies
const sorted_dependencies = try allocator.alloc(DependencyID, deps.len);
defer allocator.free(sorted_dependencies);
for (sorted_dependencies, 0..) |*dep, i| {
dep.* = @as(DependencyID, @truncate(deps.off + i));
}
std.sort.pdq(DependencyID, sorted_dependencies, ByName{
.dependencies = dependencies,
.buf = string_bytes,
}, ByName.isLessThan);
for (sorted_dependencies, 0..) |dependency_id, i| {
const package_id = lockfile.buffers.resolutions.items[dependency_id];
if (package_id >= lockfile.packages.len) continue;
const dep_name = dependencies[dependency_id].name.slice(string_bytes);
const resolution = resolutions[package_id];
// Get version string based on resolution type
var version_buf: [512]u8 = undefined;
const version_str = if (resolution.tag == .npm)
try std.fmt.bufPrint(&version_buf, "{}", .{resolution.value.npm.version.fmt(string_bytes)})
else
try std.fmt.bufPrint(&version_buf, "{}", .{resolution.fmt(string_bytes, .auto)});
// Get resolved URL from resolution
var resolved_buf: [1024]u8 = undefined;
const resolved_url = try std.fmt.bufPrint(&resolved_buf, "{}", .{resolution.fmtURL(string_bytes)});
// Indent
var j: usize = 0;
while (j < indent) : (j += 1) {
try writer.writeAll(" ");
}
try writer.print("\"{s}\": {{\n", .{dep_name});
// Indent for properties
j = 0;
while (j < indent + 2) : (j += 1) {
try writer.writeAll(" ");
}
try writer.print("\"version\": \"{s}\",\n", .{version_str});
j = 0;
while (j < indent + 2) : (j += 1) {
try writer.writeAll(" ");
}
try writer.print("\"resolved\": \"{s}\",\n", .{resolved_url});
j = 0;
while (j < indent + 2) : (j += 1) {
try writer.writeAll(" ");
}
try writer.writeAll("\"overridden\": false");
// Add nested dependencies if depth allows (no "from" field for nested)
if (current_depth < max_depth) {
const package_deps = slice.items(.dependencies)[package_id];
if (package_deps.len > 0) {
try writer.writeAll(",\n");
j = 0;
while (j < indent + 2) : (j += 1) {
try writer.writeAll(" ");
}
try writer.writeAll("\"dependencies\": {\n");
try printJsonNestedDependencies(
writer,
lockfile,
package_deps,
current_depth + 1,
max_depth,
allocator,
indent + 4,
);
try writer.writeAll("\n");
j = 0;
while (j < indent + 2) : (j += 1) {
try writer.writeAll(" ");
}
try writer.writeAll("}");
}
}
try writer.writeAll("\n");
j = 0;
while (j < indent) : (j += 1) {
try writer.writeAll(" ");
}
try writer.writeAll("}");
if (i < sorted_dependencies.len - 1) {
try writer.writeAll(",\n");
}
}
}
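The recursion above stops in two places: an early return when `current_depth > max_depth`, and a guard (`current_depth < max_depth`) before descending further. A compact JavaScript sketch of the same depth limiting (data shape hypothetical):

```javascript
// Build a nested dependency object, descending at most maxDepth levels.
function buildTree(pkg, depth, maxDepth) {
  const node = { version: pkg.version };
  // Only descend while strictly below the limit, mirroring the Zig guard.
  if (depth < maxDepth && pkg.deps && pkg.deps.length > 0) {
    node.dependencies = {};
    for (const child of pkg.deps) {
      node.dependencies[child.name] = buildTree(child, depth + 1, maxDepth);
    }
  }
  return node;
}

const graph = {
  version: "1.0.0",
  deps: [{ name: "a", version: "2.0.0", deps: [{ name: "b", version: "3.0.0", deps: [] }] }],
};
const shallow = buildTree(graph, 0, 1); // direct deps only; "a" has no nested dependencies
```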
};
fn printNodeModulesFolderStructure(
@@ -450,7 +815,12 @@ fn printNodeModulesFolderStructure(
directories: *std.ArrayList(NodeModulesFolder),
lockfile: *Lockfile,
more_packages: []bool,
max_display_depth: usize,
) !void {
// Stop if we've exceeded the maximum depth
if (depth > max_display_depth) {
return;
}
const allocator = lockfile.allocator;
const resolutions = lockfile.packages.items(.resolution);
const string_bytes = lockfile.buffers.string_bytes.items;
@@ -539,7 +909,7 @@ fn printNodeModulesFolderStructure(
}
more_packages[new_depth] = true;
try printNodeModulesFolderStructure(&next, package_id, new_depth, directories, lockfile, more_packages);
try printNodeModulesFolderStructure(&next, package_id, new_depth, directories, lockfile, more_packages, max_display_depth);
}
}

View File

@@ -461,7 +461,7 @@ pub const PmVersionCommand = struct {
.cwd = cwd,
.envp = null,
.windows = if (Environment.isWindows) .{
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null)),
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null, null)),
},
}) catch |err| {
Output.errGeneric("Failed to spawn git process: {s}", .{@errorName(err)});
@@ -494,7 +494,7 @@ pub const PmVersionCommand = struct {
.cwd = cwd,
.envp = null,
.windows = if (Environment.isWindows) .{
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null)),
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null, null)),
},
}) catch |err| {
Output.err(err, "Failed to spawn git process", .{});
@@ -541,7 +541,7 @@ pub const PmVersionCommand = struct {
.stdin = .ignore,
.envp = null,
.windows = if (Environment.isWindows) .{
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null)),
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null, null)),
},
}) catch |err| {
Output.errGeneric("Git add failed: {s}", .{@errorName(err)});
@@ -575,7 +575,7 @@ pub const PmVersionCommand = struct {
.stdin = .ignore,
.envp = null,
.windows = if (Environment.isWindows) .{
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null)),
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null, null)),
},
}) catch |err| {
Output.errGeneric("Git commit failed: {s}", .{@errorName(err)});
@@ -606,7 +606,7 @@ pub const PmVersionCommand = struct {
.stdin = .ignore,
.envp = null,
.windows = if (Environment.isWindows) .{
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null)),
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null, null)),
},
}) catch |err| {
Output.errGeneric("Git tag failed: {s}", .{@errorName(err)});

View File

@@ -246,7 +246,7 @@ pub const RunCommand = struct {
}
if (!use_system_shell) {
const mini = bun.jsc.MiniEventLoop.initGlobal(env);
const mini = bun.jsc.MiniEventLoop.initGlobal(env, cwd);
const code = bun.shell.Interpreter.initAndRunFromSource(ctx, mini, name, copy_script.items, cwd) catch |err| {
if (!silent) {
Output.prettyErrorln("<r><red>error<r>: Failed to run script <b>{s}<r> due to error <b>{s}<r>", .{ name, @errorName(err) });
@@ -294,7 +294,7 @@ pub const RunCommand = struct {
.ipc = ipc_fd,
.windows = if (Environment.isWindows) .{
.loop = jsc.EventLoopHandle.init(jsc.MiniEventLoop.initGlobal(env)),
.loop = jsc.EventLoopHandle.init(jsc.MiniEventLoop.initGlobal(env, null)),
},
}) catch |err| {
if (!silent) {
@@ -467,7 +467,7 @@ pub const RunCommand = struct {
.use_execve_on_macos = silent,
.windows = if (Environment.isWindows) .{
.loop = jsc.EventLoopHandle.init(jsc.MiniEventLoop.initGlobal(env)),
.loop = jsc.EventLoopHandle.init(jsc.MiniEventLoop.initGlobal(env, null)),
},
}) catch |err| {
bun.handleErrorReturnTrace(err, @errorReturnTrace());

View File

@@ -601,7 +601,7 @@ pub const UpgradeCommand = struct {
.stdin = .inherit,
.windows = if (Environment.isWindows) .{
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null)),
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null, null)),
},
}) catch |err| {
Output.prettyErrorln("<r><red>error:<r> Failed to spawn Expand-Archive on {s} due to error {s}", .{ tmpname, @errorName(err) });

View File

@@ -98,8 +98,8 @@ for (let i = 0; i < nativeStartIndex; i++) {
// TODO: there is no reason this cannot be converted automatically.
// import { ... } from '...' -> `const { ... } = require('...')`
const scannedImports = t.scanImports(input);
for (const imp of scannedImports) {
const scannedImports = t.scan(input);
for (const imp of scannedImports.imports) {
if (imp.kind === "import-statement") {
var isBuiltin = true;
try {
@@ -120,6 +120,14 @@ for (let i = 0; i < nativeStartIndex; i++) {
}
}
if (scannedImports.exports.includes("default") && scannedImports.exports.length > 1) {
const err = new Error(
`Using \`export default\` AND named exports together in builtin modules is unsupported. See src/js/README.md (from ${moduleList[i]})`,
);
err.name = "BunError";
err.fileName = moduleList[i];
throw err;
}
let importStatements: string[] = [];
const processed = sliceSourceCode(

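The added guard above rejects builtin modules that mix `export default` with named exports. The check itself is plain array logic over the transpiler's scan result; here it is reproduced against a mocked scan result, since `Bun.Transpiler` is not assumed to be available:

```javascript
// Mirror of the guard: a module may use `export default` alone or named
// exports alone, but not both together.
function checkExports(scanned, moduleName) {
  if (scanned.exports.includes("default") && scanned.exports.length > 1) {
    const err = new Error(
      `Using \`export default\` AND named exports together in builtin modules is unsupported (from ${moduleName})`,
    );
    err.name = "BunError";
    throw err;
  }
}

checkExports({ exports: ["default"] }, "ok.ts"); // fine: default only
checkExports({ exports: ["a", "b"] }, "ok2.ts"); // fine: named only
let threw = false;
try {
  checkExports({ exports: ["default", "helper"] }, "bad.ts");
} catch (e) {
  threw = e.name === "BunError";
}
```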
View File

@@ -4,5 +4,6 @@ pub const BabyList = baby_list.BabyList;
pub const ByteList = baby_list.ByteList; // alias of BabyList(u8)
pub const OffsetByteList = baby_list.OffsetByteList;
pub const bit_set = @import("./collections/bit_set.zig");
pub const AutoBitSet = bit_set.AutoBitSet;
pub const HiveArray = @import("./collections/hive_array.zig").HiveArray;
pub const BoundedArray = @import("./collections/bounded_array.zig").BoundedArray;

View File

@@ -248,7 +248,7 @@ pub fn generateFiles(allocator: std.mem.Allocator, entry_point: string, dependen
.stdin = .inherit,
.windows = if (Environment.isWindows) .{
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null)),
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null, null)),
},
}) catch |err| {
Output.err(err, "failed to install dependencies", .{});
@@ -361,7 +361,7 @@ pub fn generateFiles(allocator: std.mem.Allocator, entry_point: string, dependen
.stdin = .inherit,
.windows = if (Environment.isWindows) .{
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null)),
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null, null)),
},
}) catch |err| {
Output.err(err, "failed to start app", .{});

View File

@@ -198,7 +198,7 @@ pub fn onStart(opts: InitOpts) void {
bun.http.default_arena = Arena.init();
bun.http.default_allocator = bun.http.default_arena.allocator();
const loop = bun.jsc.MiniEventLoop.initGlobal(null);
const loop = bun.jsc.MiniEventLoop.initGlobal(null, null);
if (Environment.isWindows) {
_ = std.process.getenvW(comptime bun.strings.w("SystemRoot")) orelse {

View File

@@ -170,6 +170,7 @@ fn JSONLikeParser_(
try p.lexer.next();
var is_single_line = !p.lexer.has_newline_before;
var exprs = std.ArrayList(Expr).init(p.list_allocator);
errdefer exprs.deinit();
while (p.lexer.token != .t_close_bracket) {
if (exprs.items.len > 0) {
@@ -203,6 +204,7 @@ fn JSONLikeParser_(
try p.lexer.next();
var is_single_line = !p.lexer.has_newline_before;
var properties = std.ArrayList(G.Property).init(p.list_allocator);
errdefer properties.deinit();
const DuplicateNodeType = comptime if (opts.json_warn_duplicate_keys) *HashMapPool.LinkedList.Node else void;
const HashMapType = comptime if (opts.json_warn_duplicate_keys) HashMapPool.HashMap else void;

View File

@@ -291,7 +291,7 @@ const SQL: typeof Bun.SQL = function SQL(
reserved_sql.connect = () => {
if (state.connectionState & ReservedConnectionState.closed) {
return Promise.$reject(this.connectionClosedError());
return Promise.$reject(pool.connectionClosedError());
}
return Promise.$resolve(reserved_sql);
};
@@ -322,7 +322,7 @@ const SQL: typeof Bun.SQL = function SQL(
reserved_sql.beginDistributed = (name: string, fn: TransactionCallback) => {
// begin is allowed; the difference is that we need to make sure to use the same connection and never release it
if (state.connectionState & ReservedConnectionState.closed) {
return Promise.$reject(this.connectionClosedError());
return Promise.$reject(pool.connectionClosedError());
}
let callback = fn;
@@ -346,7 +346,7 @@ const SQL: typeof Bun.SQL = function SQL(
state.connectionState & ReservedConnectionState.closed ||
!(state.connectionState & ReservedConnectionState.acceptQueries)
) {
return Promise.$reject(this.connectionClosedError());
return Promise.$reject(pool.connectionClosedError());
}
let callback = fn;
let options: string | undefined = options_or_fn as unknown as string;
@@ -369,7 +369,7 @@ const SQL: typeof Bun.SQL = function SQL(
reserved_sql.flush = () => {
if (state.connectionState & ReservedConnectionState.closed) {
throw this.connectionClosedError();
throw pool.connectionClosedError();
}
// Use pooled connection's flush if available, otherwise use adapter's flush
if (pooledConnection.flush) {
@@ -429,7 +429,7 @@ const SQL: typeof Bun.SQL = function SQL(
state.connectionState & ReservedConnectionState.closed ||
!(state.connectionState & ReservedConnectionState.acceptQueries)
) {
return Promise.$reject(this.connectionClosedError());
return Promise.$reject(pool.connectionClosedError());
}
// just release the connection back to the pool
state.connectionState |= ReservedConnectionState.closed;
@@ -552,7 +552,7 @@ const SQL: typeof Bun.SQL = function SQL(
function run_internal_transaction_sql(string) {
if (state.connectionState & ReservedConnectionState.closed) {
return Promise.$reject(this.connectionClosedError());
return Promise.$reject(pool.connectionClosedError());
}
return unsafeQueryFromTransaction(string, [], pooledConnection, state.queries);
}
@@ -564,7 +564,7 @@ const SQL: typeof Bun.SQL = function SQL(
state.connectionState & ReservedConnectionState.closed ||
!(state.connectionState & ReservedConnectionState.acceptQueries)
) {
return Promise.$reject(this.connectionClosedError());
return Promise.$reject(pool.connectionClosedError());
}
if ($isArray(strings)) {
// detect if is tagged template
@@ -593,7 +593,7 @@ const SQL: typeof Bun.SQL = function SQL(
transaction_sql.connect = () => {
if (state.connectionState & ReservedConnectionState.closed) {
return Promise.$reject(this.connectionClosedError());
return Promise.$reject(pool.connectionClosedError());
}
return Promise.$resolve(transaction_sql);
@@ -732,7 +732,7 @@ const SQL: typeof Bun.SQL = function SQL(
state.connectionState & ReservedConnectionState.closed ||
!(state.connectionState & ReservedConnectionState.acceptQueries)
) {
throw this.connectionClosedError();
throw pool.connectionClosedError();
}
if ($isCallable(name)) {
@@ -816,7 +816,7 @@ const SQL: typeof Bun.SQL = function SQL(
sql.reserve = () => {
if (pool.closed) {
return Promise.$reject(this.connectionClosedError());
return Promise.$reject(pool.connectionClosedError());
}
// Check if adapter supports reserved connections
@@ -831,7 +831,7 @@ const SQL: typeof Bun.SQL = function SQL(
};
sql.rollbackDistributed = async function (name: string) {
if (pool.closed) {
throw this.connectionClosedError();
throw pool.connectionClosedError();
}
if (!pool.getRollbackDistributedSQL) {
@@ -844,7 +844,7 @@ const SQL: typeof Bun.SQL = function SQL(
sql.commitDistributed = async function (name: string) {
if (pool.closed) {
throw this.connectionClosedError();
throw pool.connectionClosedError();
}
if (!pool.getCommitDistributedSQL) {
@@ -857,7 +857,7 @@ const SQL: typeof Bun.SQL = function SQL(
sql.beginDistributed = (name: string, fn: TransactionCallback) => {
if (pool.closed) {
return Promise.$reject(this.connectionClosedError());
return Promise.$reject(pool.connectionClosedError());
}
let callback = fn;
@@ -876,7 +876,7 @@ const SQL: typeof Bun.SQL = function SQL(
sql.begin = (options_or_fn: string | TransactionCallback, fn?: TransactionCallback) => {
if (pool.closed) {
return Promise.$reject(this.connectionClosedError());
return Promise.$reject(pool.connectionClosedError());
}
let callback = fn;
let options: string | undefined = options_or_fn as unknown as string;
@@ -896,7 +896,7 @@ const SQL: typeof Bun.SQL = function SQL(
};
sql.connect = () => {
if (pool.closed) {
return Promise.$reject(this.connectionClosedError());
return Promise.$reject(pool.connectionClosedError());
}
if (pool.isConnected()) {

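The repeated change above from `this.connectionClosedError()` to `pool.connectionClosedError()` fixes a classic JavaScript pitfall: inside these arrow functions and standalone helpers, `this` is not the object that owns `connectionClosedError`. A distilled (hypothetical) reproduction of the bug and the fix:

```javascript
"use strict";
// Arrow functions take `this` from the enclosing scope, so inside
// sql.connect below `this` is NOT `sql` and NOT `pool` — calling
// this.connectionClosedError() fails. Closing over `pool` works.
const pool = {
  connectionClosedError() {
    return new Error("Connection closed");
  },
};

const sql = {};
sql.connect = () => {
  try {
    return this.connectionClosedError(); // TypeError: not a function
  } catch {
    return pool.connectionClosedError(); // fix: explicit reference
  }
};

const err = sql.connect();
// err.message === "Connection closed"
```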
View File

@@ -141,7 +141,8 @@ function ReadStream(this: FSStream, path, options): void {
// Only buffers are supported.
options.decodeStrings = true;
let { fd, autoClose, fs: customFs, start = 0, end = Infinity, encoding } = options;
let { fd, autoClose, fs: customFs, start, end = Infinity, encoding } = options;
if (fd == null) {
this[kFs] = customFs || fs;
this.fd = null;

View File

@@ -274,6 +274,9 @@ function onQueryFinish(this: PooledMySQLConnection, onClose: (err: Error) => voi
this.adapter.release(this);
}
function closeNT(onClose: (err: Error) => void, err: Error | null) {
onClose(err as Error);
}
class PooledMySQLConnection {
private static async createConnection(
options: Bun.SQL.__internal.DefinedPostgresOrMySQLOptions,
@@ -328,7 +331,7 @@ class PooledMySQLConnection {
!prepare,
);
} catch (e) {
onClose(e as Error);
process.nextTick(closeNT, onClose, e);
return null;
}
}
@@ -344,10 +347,13 @@ class PooledMySQLConnection {
/// queryCount indicates the number of queries using the connection; if the connection is reserved or in a transaction, queryCount will be 1 regardless of the number of queries
queryCount: number = 0;
#onConnected(err, _) {
#onConnected(err, connection) {
if (err) {
err = wrapError(err);
} else {
this.connection = connection;
}
const connectionInfo = this.connectionInfo;
if (connectionInfo?.onconnect) {
connectionInfo.onconnect(err);
@@ -413,12 +419,8 @@ class PooledMySQLConnection {
this.#startConnection();
}
async #startConnection() {
this.connection = await PooledMySQLConnection.createConnection(
this.connectionInfo,
this.#onConnected.bind(this),
this.#onClose.bind(this),
);
#startConnection() {
PooledMySQLConnection.createConnection(this.connectionInfo, this.#onConnected.bind(this), this.#onClose.bind(this));
}
onClose(onClose: (err: Error) => void) {
@@ -482,14 +484,14 @@ class PooledMySQLConnection {
}
}
export class MySQLAdapter
class MySQLAdapter
implements
DatabaseAdapter<PooledMySQLConnection, $ZigGeneratedClasses.MySQLConnection, $ZigGeneratedClasses.MySQLQuery>
{
public readonly connectionInfo: Bun.SQL.__internal.DefinedPostgresOrMySQLOptions;
public readonly connections: PooledMySQLConnection[];
public readonly readyConnections: Set<PooledMySQLConnection>;
public readonly readyConnections: Set<PooledMySQLConnection> = new Set();
public waitingQueue: Array<(err: Error | null, result: any) => void> = [];
public reservedQueue: Array<(err: Error | null, result: any) => void> = [];
@@ -502,7 +504,6 @@ export class MySQLAdapter
constructor(connectionInfo: Bun.SQL.__internal.DefinedPostgresOrMySQLOptions) {
this.connectionInfo = connectionInfo;
this.connections = new Array(connectionInfo.max);
this.readyConnections = new Set();
}
escapeIdentifier(str: string) {

View File

@@ -499,7 +499,7 @@ class PooledPostgresConnection {
}
}
export class PostgresAdapter
class PostgresAdapter
implements
DatabaseAdapter<
PooledPostgresConnection,

View File

@@ -293,7 +293,7 @@ function parseSQLQuery(query: string): SQLParsedInfo {
return { command, firstKeyword, hasReturning };
}
export class SQLiteQueryHandle implements BaseQueryHandle<BunSQLiteModule.Database> {
class SQLiteQueryHandle implements BaseQueryHandle<BunSQLiteModule.Database> {
private mode = SQLQueryResultMode.objects;
private readonly sql: string;
@@ -380,9 +380,7 @@ export class SQLiteQueryHandle implements BaseQueryHandle<BunSQLiteModule.Databa
}
}
export class SQLiteAdapter
implements DatabaseAdapter<BunSQLiteModule.Database, BunSQLiteModule.Database, SQLiteQueryHandle>
{
class SQLiteAdapter implements DatabaseAdapter<BunSQLiteModule.Database, BunSQLiteModule.Database, SQLiteQueryHandle> {
public readonly connectionInfo: Bun.SQL.__internal.DefinedSQLiteOptions;
public db: BunSQLiteModule.Database | null = null;
public storedError: Error | null = null;
@@ -807,4 +805,5 @@ export default {
SQLCommand,
commandToString,
parseSQLQuery,
SQLiteQueryHandle,
};

View File

@@ -18,7 +18,8 @@ function ReadStream(fd): void {
}
fs.ReadStream.$apply(this, ["", { fd }]);
this.isRaw = false;
this.isTTY = true;
// Only set isTTY to true if the fd is actually a TTY
this.isTTY = isatty(fd);
}
$toClass(ReadStream, "ReadStream", fs.ReadStream);
@@ -26,6 +27,26 @@ Object.defineProperty(ReadStream, "prototype", {
get() {
const Prototype = Object.create(fs.ReadStream.prototype);
// Add ref/unref methods to make tty.ReadStream behave like Node.js
// where TTY streams have socket-like behavior
Prototype.ref = function () {
// Get the underlying native stream source if available
const source = this.$bunNativePtr;
if (source?.updateRef) {
source.updateRef(true);
}
return this;
};
Prototype.unref = function () {
// Get the underlying native stream source if available
const source = this.$bunNativePtr;
if (source?.updateRef) {
source.updateRef(false);
}
return this;
};
Prototype.setRawMode = function (flag) {
flag = !!flag;

View File

@@ -25,7 +25,7 @@ pub fn openURL(url: stringZ) void {
.stdin = .inherit,
.windows = if (Environment.isWindows) .{
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null)),
.loop = bun.jsc.EventLoopHandle.init(bun.jsc.MiniEventLoop.initGlobal(null, null)),
},
}) catch break :maybe_fallback) {
// don't fallback:

View File

@@ -142,16 +142,14 @@ pub const OutKind = union(enum) {
.capture = .{
.buf = cap,
},
} else if (val.writer.fd.get()) |fd| .{
// We have a valid fd that hasn't been moved to libuv
.fd = fd,
} else .{
// Windows notes:
// Since `val.writer.fd` is `MovableFD`, it could
// technically be moved to libuv for ownership.
//
// But since this file descriptor is never going to be touched by this
// process, except to hand off to the subprocess when we
// spawn it, we don't really care if the file descriptor
// ends up being invalid.
.fd = val.writer.fd.get().?,
// On Windows, the fd might have been moved to libuv
// In this case, the subprocess should inherit the stdio
// since libuv is already managing it
.inherit = {},
};
},
.pipe => .pipe,

View File

@@ -5,21 +5,21 @@ pub fn createBinding(globalObject: *jsc.JSGlobalObject) JSValue {
binding.put(
globalObject,
ZigString.static("createQuery"),
jsc.JSFunction.create(globalObject, "createQuery", MySQLQuery.call, 6, .{}),
jsc.JSFunction.create(globalObject, "createQuery", MySQLQuery.createInstance, 6, .{}),
);
binding.put(
globalObject,
ZigString.static("createConnection"),
jsc.JSFunction.create(globalObject, "createQuery", MySQLConnection.call, 2, .{}),
jsc.JSFunction.create(globalObject, "createConnection", MySQLConnection.createInstance, 2, .{}),
);
return binding;
}
pub const MySQLConnection = @import("./mysql/MySQLConnection.zig");
pub const MySQLConnection = @import("./mysql/js/JSMySQLConnection.zig");
pub const MySQLContext = @import("./mysql/MySQLContext.zig");
pub const MySQLQuery = @import("./mysql/MySQLQuery.zig");
pub const MySQLQuery = @import("./mysql/js/JSMySQLQuery.zig");
const bun = @import("bun");

File diff suppressed because it is too large

View File

@@ -1,133 +1,18 @@
const MySQLQuery = @This();
const RefCount = bun.ptr.ThreadSafeRefCount(@This(), "ref_count", deinit, .{});
statement: ?*MySQLStatement = null,
query: bun.String = bun.String.empty,
cursor_name: bun.String = bun.String.empty,
thisValue: JSRef = JSRef.empty(),
#statement: ?*MySQLStatement = null,
#query: bun.String,
status: Status = Status.pending,
ref_count: RefCount = RefCount.init(),
flags: packed struct(u8) {
is_done: bool = false,
binary: bool = false,
#status: Status,
#flags: packed struct(u8) {
bigint: bool = false,
simple: bool = false,
pipelined: bool = false,
result_mode: SQLQueryResultMode = .objects,
_padding: u1 = 0,
} = .{},
pub const ref = RefCount.ref;
pub const deref = RefCount.deref;
pub const Status = enum(u8) {
/// The query was just enqueued, statement status can be checked for more details
pending,
/// The query is being bound to the statement
binding,
/// The query is running
running,
/// The query is waiting for a partial response
partial_response,
/// The query was successful
success,
/// The query failed
fail,
pub fn isRunning(this: Status) bool {
return @intFromEnum(this) > @intFromEnum(Status.pending) and @intFromEnum(this) < @intFromEnum(Status.success);
}
};
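`isRunning` above works by enum ordering: every status strictly between `pending` and `success` counts as running, which covers `binding`, `running`, and `partial_response` while excluding `fail` (which sits after `success`). The same trick with explicit numbering (values hypothetical but order-preserving):

```javascript
// Enum ordering check: a status is "running" iff it sits strictly
// between pending and success in declaration order.
const Status = { pending: 0, binding: 1, running: 2, partial_response: 3, success: 4, fail: 5 };

function isRunning(status) {
  return status > Status.pending && status < Status.success;
}

const running = [Status.binding, Status.running, Status.partial_response].every(isRunning);
const notRunning = [Status.pending, Status.success, Status.fail].some(isRunning);
// running === true; notRunning === false
```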
pub fn hasPendingActivity(this: *@This()) bool {
return this.ref_count.load(.monotonic) > 1;
}
pub fn deinit(this: *@This()) void {
this.thisValue.deinit();
if (this.statement) |statement| {
statement.deref();
}
this.query.deref();
this.cursor_name.deref();
bun.default_allocator.destroy(this);
}
pub fn finalize(this: *@This()) void {
debug("MySQLQuery finalize", .{});
// Clean up any statement reference
if (this.statement) |statement| {
statement.deref();
this.statement = null;
}
if (this.thisValue == .weak) {
// clean up if this is a weak reference; if it is a strong reference we need to wait until the query is done
// if we hold a strong reference here, it is probably a bug, because a GC should not have happened
this.thisValue.weak = .zero;
}
this.deref();
}
pub fn onWriteFail(
this: *@This(),
err: AnyMySQLError.Error,
globalObject: *jsc.JSGlobalObject,
queries_array: JSValue,
) void {
this.status = .fail;
const thisValue = this.thisValue.get();
defer this.thisValue.deinit();
const targetValue = this.getTarget(globalObject, true);
if (thisValue == .zero or targetValue == .zero) {
return;
}
const instance = AnyMySQLError.mysqlErrorToJS(globalObject, "Failed to bind query", err);
const vm = jsc.VirtualMachine.get();
const function = vm.rareData().mysql_context.onQueryRejectFn.get().?;
const event_loop = vm.eventLoop();
event_loop.runCallback(function, globalObject, thisValue, &.{
targetValue,
// TODO: add mysql error to JS
// postgresErrorToJS(globalObject, null, err),
instance,
queries_array,
});
}
pub fn bindAndExecute(this: *MySQLQuery, writer: anytype, statement: *MySQLStatement, globalObject: *jsc.JSGlobalObject) AnyMySQLError.Error!void {
debug("bindAndExecute", .{});
bun.assertf(statement.params.len == statement.params_received and statement.statement_id > 0, "statement is not prepared", .{});
if (statement.signature.fields.len != statement.params.len) {
return error.WrongNumberOfParametersProvided;
}
var packet = try writer.start(0);
var execute = PreparedStatement.Execute{
.statement_id = statement.statement_id,
.param_types = statement.signature.fields,
.new_params_bind_flag = statement.execution_flags.need_to_send_params,
.iteration_count = 1,
};
statement.execution_flags.need_to_send_params = false;
defer execute.deinit();
try this.bind(&execute, globalObject);
try execute.write(writer);
try packet.end();
this.status = .running;
}
fn bind(this: *MySQLQuery, execute: *PreparedStatement.Execute, globalObject: *jsc.JSGlobalObject) AnyMySQLError.Error!void {
const thisValue = this.thisValue.get();
const binding_value = js.bindingGetCached(thisValue) orelse .zero;
const columns_value = js.columnsGetCached(thisValue) orelse .zero;
_padding: u3 = 0,
},
fn bind(this: *MySQLQuery, execute: *PreparedStatement.Execute, globalObject: *JSGlobalObject, binding_value: JSValue, columns_value: JSValue) AnyMySQLError.Error!void {
var iter = try QueryBindingIterator.init(binding_value, columns_value, globalObject);
var i: u32 = 0;
}
while (try iter.next()) |js_value| {
const param = execute.param_types[i];
debug("param: {s} unsigned? {}", .{ @tagName(param.type), param.flags.UNSIGNED });
params[i] = try Value.fromJS(
js_value,
globalObject,
return error.InvalidQueryBinding;
}
this.#status = .binding;
execute.params = params;
}
pub fn onError(this: *@This(), err: ErrorPacket, globalObject: *jsc.JSGlobalObject) void {
debug("onError", .{});
this.onJSError(err.toJS(globalObject), globalObject);
fn bindAndExecute(this: *MySQLQuery, writer: anytype, statement: *MySQLStatement, globalObject: *JSGlobalObject, binding_value: JSValue, columns_value: JSValue) AnyMySQLError.Error!void {
bun.assertf(statement.params.len == statement.params_received and statement.statement_id > 0, "statement is not prepared", .{});
if (statement.signature.fields.len != statement.params.len) {
return error.WrongNumberOfParametersProvided;
}
var packet = try writer.start(0);
var execute = PreparedStatement.Execute{
.statement_id = statement.statement_id,
.param_types = statement.signature.fields,
.new_params_bind_flag = statement.execution_flags.need_to_send_params,
.iteration_count = 1,
};
statement.execution_flags.need_to_send_params = false;
defer execute.deinit();
try this.bind(&execute, globalObject, binding_value, columns_value);
try execute.write(writer);
try packet.end();
this.#status = .running;
}
pub fn onJSError(this: *@This(), err: jsc.JSValue, globalObject: *jsc.JSGlobalObject) void {
this.ref();
defer this.deref();
this.status = .fail;
const thisValue = this.thisValue.get();
defer this.thisValue.deinit();
const targetValue = this.getTarget(globalObject, true);
if (thisValue == .zero or targetValue == .zero) {
fn runSimpleQuery(this: *@This(), connection: *MySQLConnection) !void {
if (this.#status != .pending or !connection.canExecuteQuery()) {
debug("cannot execute query", .{});
// cannot execute query
return;
}
var vm = jsc.VirtualMachine.get();
const function = vm.rareData().mysql_context.onQueryRejectFn.get().?;
const event_loop = vm.eventLoop();
event_loop.runCallback(function, globalObject, thisValue, &.{
targetValue,
err,
});
}
pub fn getTarget(this: *@This(), globalObject: *jsc.JSGlobalObject, clean_target: bool) jsc.JSValue {
const thisValue = this.thisValue.tryGet() orelse return .zero;
const target = js.targetGetCached(thisValue) orelse return .zero;
if (clean_target) {
js.targetSetCached(thisValue, globalObject, .zero);
var query_str = this.#query.toUTF8(bun.default_allocator);
defer query_str.deinit();
const writer = connection.getWriter();
if (this.#statement == null) {
const stmt = bun.new(MySQLStatement, .{
.signature = Signature.empty(),
.status = .parsing,
.ref_count = .initExactRefs(1),
});
this.#statement = stmt;
}
return target;
try MySQLRequest.executeQuery(query_str.slice(), MySQLConnection.Writer, writer);
this.#status = .running;
}
fn consumePendingValue(thisValue: jsc.JSValue, globalObject: *jsc.JSGlobalObject) ?JSValue {
const pending_value = js.pendingValueGetCached(thisValue) orelse return null;
js.pendingValueSetCached(thisValue, globalObject, .zero);
return pending_value;
}
fn runPreparedQuery(
this: *@This(),
connection: *MySQLConnection,
globalObject: *JSGlobalObject,
columns_value: JSValue,
binding_value: JSValue,
) !void {
var query_str: ?bun.ZigString.Slice = null;
defer if (query_str) |str| str.deinit();
pub fn allowGC(thisValue: jsc.JSValue, globalObject: *jsc.JSGlobalObject) void {
if (thisValue == .zero) {
return;
}
if (this.#statement == null) {
const query = this.#query.toUTF8(bun.default_allocator);
query_str = query;
var signature = Signature.generate(globalObject, query.slice(), binding_value, columns_value) catch |err| {
if (!globalObject.hasException())
return globalObject.throwValue(AnyMySQLError.mysqlErrorToJS(globalObject, "failed to generate signature", err));
return error.JSError;
};
errdefer signature.deinit();
const entry = connection.getStatementFromSignatureHash(bun.hash(signature.name)) catch |err| {
return globalObject.throwError(err, "failed to allocate statement");
};
defer thisValue.ensureStillAlive();
js.bindingSetCached(thisValue, globalObject, .zero);
js.pendingValueSetCached(thisValue, globalObject, .zero);
js.targetSetCached(thisValue, globalObject, .zero);
}
fn u64ToJSValue(value: u64) JSValue {
if (value <= jsc.MAX_SAFE_INTEGER) {
return JSValue.jsNumber(value);
}
return JSValue.jsBigInt(value);
}
pub fn onResult(this: *@This(), result_count: u64, globalObject: *jsc.JSGlobalObject, connection: jsc.JSValue, is_last: bool, last_insert_id: u64, affected_rows: u64) void {
this.ref();
defer this.deref();
const thisValue = this.thisValue.get();
const targetValue = this.getTarget(globalObject, is_last);
if (is_last) {
this.status = .success;
} else {
this.status = .partial_response;
}
defer if (is_last) {
allowGC(thisValue, globalObject);
this.thisValue.deinit();
};
if (thisValue == .zero or targetValue == .zero) {
return;
}
const vm = jsc.VirtualMachine.get();
const function = vm.rareData().mysql_context.onQueryResolveFn.get().?;
const event_loop = vm.eventLoop();
const tag: CommandTag = .{ .SELECT = result_count };
event_loop.runCallback(function, globalObject, thisValue, &.{
targetValue,
consumePendingValue(thisValue, globalObject) orelse .js_undefined,
tag.toJSTag(globalObject),
tag.toJSNumber(),
if (connection == .zero) .js_undefined else MySQLConnection.js.queriesGetCached(connection) orelse .js_undefined,
JSValue.jsBoolean(is_last),
JSValue.jsNumber(last_insert_id),
JSValue.jsNumber(affected_rows),
});
}
pub fn constructor(globalThis: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!*MySQLQuery {
_ = callframe;
return globalThis.throw("MySQLQuery cannot be constructed directly", .{});
}
pub fn estimatedSize(this: *MySQLQuery) usize {
_ = this;
return @sizeOf(MySQLQuery);
}
pub fn call(globalThis: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!jsc.JSValue {
const arguments = callframe.arguments();
var args = jsc.CallFrame.ArgumentsSlice.init(globalThis.bunVM(), arguments);
defer args.deinit();
const query = args.nextEat() orelse {
return globalThis.throw("query must be a string", .{});
};
const values = args.nextEat() orelse {
return globalThis.throw("values must be an array", .{});
};
if (!query.isString()) {
return globalThis.throw("query must be a string", .{});
}
if (values.jsType() != .Array) {
return globalThis.throw("values must be an array", .{});
}
const pending_value: JSValue = args.nextEat() orelse .js_undefined;
const columns: JSValue = args.nextEat() orelse .js_undefined;
const js_bigint: JSValue = args.nextEat() orelse .false;
const js_simple: JSValue = args.nextEat() orelse .false;
const bigint = js_bigint.isBoolean() and js_bigint.asBoolean();
const simple = js_simple.isBoolean() and js_simple.asBoolean();
if (simple) {
if (try values.getLength(globalThis) > 0) {
return globalThis.throwInvalidArguments("simple query cannot have parameters", .{});
}
if (try query.getLength(globalThis) >= std.math.maxInt(i32)) {
return globalThis.throwInvalidArguments("query is too long", .{});
if (entry.found_existing) {
const stmt = entry.value_ptr.*;
if (stmt.status == .failed) {
const error_response = stmt.error_response.toJS(globalObject);
// If the statement failed, we need to throw the error
return globalObject.throwValue(error_response);
}
this.#statement = stmt;
stmt.ref();
signature.deinit();
signature = Signature{};
} else {
const stmt = bun.new(MySQLStatement, .{
.signature = signature,
.ref_count = .initExactRefs(2),
.status = .pending,
.statement_id = 0,
});
this.#statement = stmt;
entry.value_ptr.* = stmt;
}
}
if (!pending_value.jsType().isArrayLike()) {
return globalThis.throwInvalidArgumentType("query", "pendingValue", "Array");
const stmt = this.#statement.?;
switch (stmt.status) {
.failed => {
debug("failed", .{});
const error_response = stmt.error_response.toJS(globalObject);
// If the statement failed, we need to throw the error
return globalObject.throwValue(error_response);
},
.prepared => {
if (connection.canPipeline()) {
debug("bindAndExecute", .{});
const writer = connection.getWriter();
this.bindAndExecute(writer, stmt, globalObject, binding_value, columns_value) catch |err| {
if (!globalObject.hasException())
return globalObject.throwValue(AnyMySQLError.mysqlErrorToJS(globalObject, "failed to bind and execute query", err));
return error.JSError;
};
this.#flags.pipelined = true;
}
},
.parsing => {
debug("parsing", .{});
},
.pending => {
if (connection.canPrepareQuery()) {
debug("prepareRequest", .{});
const writer = connection.getWriter();
const query = query_str orelse this.#query.toUTF8(bun.default_allocator);
MySQLRequest.prepareRequest(query.slice(), MySQLConnection.Writer, writer) catch |err| {
return globalObject.throwError(err, "failed to prepare query");
};
stmt.status = .parsing;
}
},
}
}
var ptr = bun.default_allocator.create(MySQLQuery) catch |err| {
return globalThis.throwError(err, "failed to allocate query");
};
const this_value = ptr.toJS(globalThis);
this_value.ensureStillAlive();
ptr.* = .{
.query = try query.toBunString(globalThis),
.thisValue = JSRef.initWeak(this_value),
.flags = .{
pub fn init(query: bun.String, bigint: bool, simple: bool) @This() {
query.ref();
return .{
.#query = query,
.#status = .pending,
.#flags = .{
.bigint = bigint,
.simple = simple,
},
};
ptr.query.ref();
}
js.bindingSetCached(this_value, globalThis, values);
js.pendingValueSetCached(this_value, globalThis, pending_value);
if (!columns.isUndefined()) {
js.columnsSetCached(this_value, globalThis, columns);
pub fn runQuery(this: *@This(), connection: *MySQLConnection, globalObject: *JSGlobalObject, columns_value: JSValue, binding_value: JSValue) !void {
if (this.#flags.simple) {
debug("runSimpleQuery", .{});
return try this.runSimpleQuery(connection);
}
debug("runPreparedQuery", .{});
return try this.runPreparedQuery(
connection,
globalObject,
if (columns_value == .zero) .js_undefined else columns_value,
if (binding_value == .zero) .js_undefined else binding_value,
);
}
return this_value;
pub inline fn setResultMode(this: *@This(), result_mode: SQLQueryResultMode) void {
this.#flags.result_mode = result_mode;
}
pub fn setPendingValue(this: *@This(), globalObject: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!JSValue {
const result = callframe.argument(0);
const thisValue = this.thisValue.tryGet() orelse return .js_undefined;
js.pendingValueSetCached(thisValue, globalObject, result);
return .js_undefined;
pub inline fn result(this: *@This(), is_last_result: bool) bool {
if (this.#status == .success or this.#status == .fail) return false;
this.#status = if (is_last_result) .success else .partial_response;
return true;
}
pub fn setMode(this: *@This(), globalObject: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!JSValue {
const js_mode = callframe.argument(0);
if (js_mode.isEmptyOrUndefinedOrNull() or !js_mode.isNumber()) {
return globalObject.throwInvalidArgumentType("setMode", "mode", "Number");
pub fn fail(this: *@This()) bool {
if (this.#status == .fail or this.#status == .success) return false;
this.#status = .fail;
return true;
}
pub fn cleanup(this: *@This()) void {
if (this.#statement) |statement| {
statement.deref();
this.#statement = null;
}
const mode = try js_mode.coerce(i32, globalObject);
this.flags.result_mode = std.meta.intToEnum(SQLQueryResultMode, mode) catch {
return globalObject.throwInvalidArgumentTypeValue("mode", "Number", js_mode);
};
return .js_undefined;
var query = this.#query;
defer query.deref();
this.#query = bun.String.empty;
}
pub fn doDone(this: *@This(), globalObject: *jsc.JSGlobalObject, _: *jsc.CallFrame) bun.JSError!JSValue {
_ = globalObject;
this.flags.is_done = true;
return .js_undefined;
pub inline fn isCompleted(this: *const @This()) bool {
return this.#status == .success or this.#status == .fail;
}
pub fn doCancel(this: *MySQLQuery, globalObject: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!JSValue {
_ = callframe;
_ = globalObject;
_ = this;
return .js_undefined;
}
pub fn doRun(this: *MySQLQuery, globalObject: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!JSValue {
debug("doRun", .{});
var arguments = callframe.arguments();
const connection: *MySQLConnection = arguments[0].as(MySQLConnection) orelse {
return globalObject.throw("connection must be a MySQLConnection", .{});
};
connection.poll_ref.ref(globalObject.bunVM());
var query = arguments[1];
if (!query.isObject()) {
return globalObject.throwInvalidArgumentType("run", "query", "Query");
pub inline fn isRunning(this: *const @This()) bool {
switch (this.#status) {
.running, .binding, .partial_response => return true,
.success, .fail, .pending => return false,
}
}
pub inline fn isPending(this: *const @This()) bool {
return this.#status == .pending;
}
const this_value = callframe.this();
const binding_value = js.bindingGetCached(this_value) orelse .zero;
var query_str = this.query.toUTF8(bun.default_allocator);
defer query_str.deinit();
const writer = connection.writer();
// We need a strong reference to the query so that it doesn't get GC'd
this.ref();
const can_execute = connection.canExecuteQuery();
if (this.flags.simple) {
// simple queries are always text in MySQL
this.flags.binary = false;
debug("executeQuery", .{});
pub inline fn isBeingPrepared(this: *@This()) bool {
return this.#status == .pending and this.#statement != null and this.#statement.?.status == .parsing;
}
const stmt = bun.default_allocator.create(MySQLStatement) catch {
this.deref();
return globalObject.throwOutOfMemory();
};
// Query is simple and it's the only owner of the statement
stmt.* = .{
.signature = Signature.empty(),
.status = .parsing,
};
this.statement = stmt;
if (can_execute) {
connection.sequence_id = 0;
MySQLRequest.executeQuery(query_str.slice(), MySQLConnection.Writer, writer) catch |err| {
debug("executeQuery failed: {s}", .{@errorName(err)});
                // failed to run, do cleanup
this.statement = null;
bun.default_allocator.destroy(stmt);
this.deref();
if (!globalObject.hasException())
return globalObject.throwValue(AnyMySQLError.mysqlErrorToJS(globalObject, "failed to execute query", err));
return error.JSError;
};
connection.flags.is_ready_for_query = false;
connection.nonpipelinable_requests += 1;
this.status = .running;
} else {
this.status = .pending;
}
connection.requests.writeItem(this) catch {
            // failed to run, do cleanup
this.statement = null;
bun.default_allocator.destroy(stmt);
this.deref();
return globalObject.throwOutOfMemory();
};
debug("doRun: wrote query to queue", .{});
this.thisValue.upgrade(globalObject);
js.targetSetCached(this_value, globalObject, query);
connection.flushDataAndResetTimeout();
return .js_undefined;
}
// prepared statements are always binary in MySQL
this.flags.binary = true;
const columns_value = js.columnsGetCached(callframe.this()) orelse .js_undefined;
var signature = Signature.generate(globalObject, query_str.slice(), binding_value, columns_value) catch |err| {
this.deref();
if (!globalObject.hasException())
return globalObject.throwValue(AnyMySQLError.mysqlErrorToJS(globalObject, "failed to generate signature", err));
return error.JSError;
};
errdefer signature.deinit();
const entry = connection.statements.getOrPut(bun.default_allocator, bun.hash(signature.name)) catch |err| {
this.deref();
return globalObject.throwError(err, "failed to allocate statement");
};
var did_write = false;
enqueue: {
if (entry.found_existing) {
const stmt = entry.value_ptr.*;
this.statement = stmt;
stmt.ref();
signature.deinit();
signature = Signature{};
switch (stmt.status) {
.failed => {
this.statement = null;
const error_response = stmt.error_response.toJS(globalObject);
stmt.deref();
this.deref();
// If the statement failed, we need to throw the error
return globalObject.throwValue(error_response);
},
.prepared => {
if (can_execute or connection.canPipeline()) {
debug("doRun: binding and executing query", .{});
this.bindAndExecute(writer, this.statement.?, globalObject) catch |err| {
if (!globalObject.hasException())
return globalObject.throwValue(AnyMySQLError.mysqlErrorToJS(globalObject, "failed to bind and execute query", err));
return error.JSError;
};
connection.sequence_id = 0;
this.flags.pipelined = true;
connection.pipelined_requests += 1;
connection.flags.is_ready_for_query = false;
did_write = true;
}
},
.parsing, .pending => {},
pub inline fn isPipelined(this: *const @This()) bool {
return this.#flags.pipelined;
}
pub inline fn isSimple(this: *const @This()) bool {
return this.#flags.simple;
}
pub inline fn isBigintSupported(this: *const @This()) bool {
return this.#flags.bigint;
}
pub inline fn getResultMode(this: *const @This()) SQLQueryResultMode {
return this.#flags.result_mode;
}
pub inline fn markAsPrepared(this: *@This()) void {
if (this.#status == .pending) {
if (this.#statement) |statement| {
if (statement.status == .parsing and
statement.params.len == statement.params_received and
statement.statement_id > 0)
{
statement.status = .prepared;
}
break :enqueue;
}
const stmt = bun.default_allocator.create(MySQLStatement) catch |err| {
this.deref();
return globalObject.throwError(err, "failed to allocate statement");
};
stmt.* = .{
.signature = signature,
.ref_count = .initExactRefs(2),
.status = .pending,
.statement_id = 0,
};
this.statement = stmt;
entry.value_ptr.* = stmt;
}
this.status = if (did_write) .running else .pending;
try connection.requests.writeItem(this);
this.thisValue.upgrade(globalObject);
js.targetSetCached(this_value, globalObject, query);
if (!did_write and can_execute) {
debug("doRun: preparing query", .{});
if (connection.canPrepareQuery()) {
this.statement.?.status = .parsing;
MySQLRequest.prepareRequest(query_str.slice(), MySQLConnection.Writer, writer) catch |err| {
this.deref();
return globalObject.throwError(err, "failed to prepare query");
};
connection.flags.waiting_to_prepare = true;
connection.flags.is_ready_for_query = false;
}
}
connection.flushDataAndResetTimeout();
return .js_undefined;
}
comptime {
@export(&jsc.toJSHostFn(call), .{ .name = "MySQLQuery__createInstance" });
pub inline fn getStatement(this: *const @This()) ?*MySQLStatement {
return this.#statement;
}
pub const js = jsc.Codegen.JSMySQLQuery;
pub const fromJS = js.fromJS;
pub const fromJSDirect = js.fromJSDirect;
pub const toJS = js.toJS;
const debug = bun.Output.scoped(.MySQLQuery, .visible);
// TODO: move to shared IF POSSIBLE
const AnyMySQLError = @import("./protocol/AnyMySQLError.zig");
const ErrorPacket = @import("./protocol/ErrorPacket.zig");
const MySQLConnection = @import("./js/JSMySQLConnection.zig");
const MySQLRequest = @import("./MySQLRequest.zig");
const MySQLStatement = @import("./MySQLStatement.zig");
const PreparedStatement = @import("./protocol/PreparedStatement.zig");
const Signature = @import("./protocol/Signature.zig");
const bun = @import("bun");
const std = @import("std");
const CommandTag = @import("../postgres/CommandTag.zig").CommandTag;
const QueryBindingIterator = @import("../shared/QueryBindingIterator.zig").QueryBindingIterator;
const SQLQueryResultMode = @import("../shared/SQLQueryResultMode.zig").SQLQueryResultMode;
const Status = @import("./QueryStatus.zig").Status;
const Value = @import("./MySQLTypes.zig").Value;
const jsc = bun.jsc;
const JSRef = jsc.JSRef;
const JSValue = jsc.JSValue;
const JSGlobalObject = bun.jsc.JSGlobalObject;


@@ -0,0 +1,4 @@
result_count: u64,
last_insert_id: u64,
affected_rows: u64,
is_last_result: bool,


@@ -0,0 +1,224 @@
pub const MySQLRequestQueue = @This();
#requests: Queue,
#pipelined_requests: u32 = 0,
#nonpipelinable_requests: u32 = 0,
// TODO: refactor to ENUM
#waiting_to_prepare: bool = false,
#is_ready_for_query: bool = true,
pub inline fn canExecuteQuery(this: *@This(), connection: *MySQLConnection) bool {
return connection.isAbleToWrite() and
this.#is_ready_for_query and
this.#nonpipelinable_requests == 0 and
this.#pipelined_requests == 0;
}
pub inline fn canPrepareQuery(this: *@This(), connection: *MySQLConnection) bool {
return connection.isAbleToWrite() and
this.#is_ready_for_query and
!this.#waiting_to_prepare and
this.#pipelined_requests == 0;
}
pub inline fn markAsReadyForQuery(this: *@This()) void {
this.#is_ready_for_query = true;
}
pub inline fn markAsPrepared(this: *@This()) void {
this.#waiting_to_prepare = false;
if (this.current()) |request| {
debug("markAsPrepared markAsPrepared", .{});
request.markAsPrepared();
}
}
pub inline fn canPipeline(this: *@This(), connection: *MySQLConnection) bool {
if (bun.getRuntimeFeatureFlag(.BUN_FEATURE_FLAG_DISABLE_SQL_AUTO_PIPELINING)) {
@branchHint(.unlikely);
return false;
}
return this.#is_ready_for_query and
this.#nonpipelinable_requests == 0 and // need to wait for non pipelinable requests to finish
!this.#waiting_to_prepare and
connection.isAbleToWrite();
}
pub fn markCurrentRequestAsFinished(this: *@This(), item: *JSMySQLQuery) void {
this.#waiting_to_prepare = false;
if (item.isBeingPrepared()) {
debug("markCurrentRequestAsFinished markAsPrepared", .{});
item.markAsPrepared();
} else if (item.isRunning()) {
if (item.isPipelined()) {
this.#pipelined_requests -= 1;
} else {
this.#nonpipelinable_requests -= 1;
}
}
}
pub fn advance(this: *@This(), connection: *MySQLConnection) void {
var offset: usize = 0;
defer {
while (this.#requests.readableLength() > 0) {
const request = this.#requests.peekItem(0);
            // An item may be in the success or failed state and still be inside the queue (see the comments in deinit below),
            // so we do the cleanup here
if (request.isCompleted()) {
debug("isCompleted discard after advance", .{});
this.#requests.discard(1);
request.deref();
continue;
}
break;
}
}
while (this.#requests.readableLength() > offset and connection.isAbleToWrite()) {
var request: *JSMySQLQuery = this.#requests.peekItem(offset);
if (request.isCompleted()) {
if (offset > 0) {
// discard later
offset += 1;
continue;
}
debug("isCompleted", .{});
this.#requests.discard(1);
request.deref();
continue;
}
if (request.isBeingPrepared()) {
debug("isBeingPrepared", .{});
this.#waiting_to_prepare = true;
// cannot continue the queue until the current request is marked as prepared
return;
}
if (request.isRunning()) {
debug("isRunning", .{});
const total_requests_running = this.#pipelined_requests + this.#nonpipelinable_requests;
if (offset < total_requests_running) {
offset += total_requests_running;
} else {
offset += 1;
}
continue;
}
request.run(connection) catch |err| {
debug("run failed", .{});
connection.onError(request, err);
if (offset == 0) {
this.#requests.discard(1);
request.deref();
}
offset += 1;
continue;
};
if (request.isBeingPrepared()) {
debug("isBeingPrepared", .{});
connection.resetConnectionTimeout();
this.#is_ready_for_query = false;
this.#waiting_to_prepare = true;
return;
} else if (request.isRunning()) {
connection.resetConnectionTimeout();
debug("isRunning after run", .{});
this.#is_ready_for_query = false;
if (request.isPipelined()) {
this.#pipelined_requests += 1;
if (this.canPipeline(connection)) {
debug("pipelined requests", .{});
offset += 1;
continue;
}
return;
}
debug("nonpipelinable requests", .{});
this.#nonpipelinable_requests += 1;
}
return;
}
}
pub fn init() @This() {
return .{ .#requests = Queue.init(bun.default_allocator) };
}
pub fn isEmpty(this: *@This()) bool {
return this.#requests.readableLength() == 0;
}
pub fn add(this: *@This(), request: *JSMySQLQuery) void {
debug("add", .{});
if (request.isBeingPrepared()) {
this.#is_ready_for_query = false;
this.#waiting_to_prepare = true;
} else if (request.isRunning()) {
this.#is_ready_for_query = false;
if (request.isPipelined()) {
this.#pipelined_requests += 1;
} else {
this.#nonpipelinable_requests += 1;
}
}
request.ref();
bun.handleOom(this.#requests.writeItem(request));
}
pub fn current(this: *@This()) ?*JSMySQLQuery {
if (this.#requests.readableLength() == 0) {
return null;
}
return this.#requests.peekItem(0);
}
pub fn clean(this: *@This(), reason: ?JSValue, queries_array: JSValue) void {
while (this.current()) |request| {
if (request.isCompleted()) {
request.deref();
this.#requests.discard(1);
continue;
}
if (reason) |r| {
request.rejectWithJSValue(queries_array, r);
} else {
request.reject(queries_array, error.ConnectionClosed);
}
this.#requests.discard(1);
request.deref();
continue;
}
this.#pipelined_requests = 0;
this.#nonpipelinable_requests = 0;
this.#waiting_to_prepare = false;
}
pub fn deinit(this: *@This()) void {
for (this.#requests.readableSlice(0)) |request| {
this.#requests.discard(1);
// We cannot touch JS here
request.markAsFailed();
request.deref();
}
this.#pipelined_requests = 0;
this.#nonpipelinable_requests = 0;
this.#waiting_to_prepare = false;
this.#requests.deinit();
}
const Queue = std.fifo.LinearFifo(*JSMySQLQuery, .Dynamic);
const debug = bun.Output.scoped(.MySQLRequestQueue, .visible);
const JSMySQLQuery = @import("./js/JSMySQLQuery.zig");
const MySQLConnection = @import("./js/JSMySQLConnection.zig");
const bun = @import("bun");
const std = @import("std");
const jsc = bun.jsc;
const JSValue = jsc.JSValue;
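The `MySQLRequestQueue` above gates new work on two counters and two flags: a plain (simple) query needs a fully idle connection, while a prepared query can be pipelined as long as no non-pipelinable request or pending prepare is in flight. A minimal Python model of the `canExecuteQuery` / `canPipeline` predicates (names are illustrative; the writable check stands in for `connection.isAbleToWrite()`):

```python
from dataclasses import dataclass


@dataclass
class QueueState:
    # Mirrors the queue's bookkeeping fields.
    pipelined: int = 0
    nonpipelinable: int = 0
    waiting_to_prepare: bool = False
    ready_for_query: bool = True

    def can_execute_query(self, writable: bool) -> bool:
        # A simple (text-protocol) query requires an idle connection:
        # nothing pipelined and nothing non-pipelinable in flight.
        return (writable and self.ready_for_query
                and self.nonpipelinable == 0 and self.pipelined == 0)

    def can_pipeline(self, writable: bool) -> bool:
        # Pipelining only has to wait for non-pipelinable requests
        # and for any statement that is still being prepared.
        return (writable and self.ready_for_query
                and self.nonpipelinable == 0
                and not self.waiting_to_prepare)
```

Note the asymmetry: an in-flight pipelined request does not block further pipelining, but a single non-pipelinable request blocks both paths until it finishes.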

View File

@@ -55,7 +55,7 @@ pub fn deinit(this: *MySQLStatement) void {
this.cached_structure.deinit();
this.error_response.deinit();
this.signature.deinit();
bun.default_allocator.destroy(this);
bun.destroy(this);
}
pub fn checkForDuplicateFields(this: *@This()) void {


@@ -0,0 +1,18 @@
pub const Status = enum(u8) {
/// The query was just enqueued, statement status can be checked for more details
pending,
/// The query is being bound to the statement
binding,
/// The query is running
running,
/// The query is waiting for a partial response
partial_response,
/// The query was successful
success,
/// The query failed
fail,
pub fn isRunning(this: Status) bool {
return @intFromEnum(this) > @intFromEnum(Status.pending) and @intFromEnum(this) < @intFromEnum(Status.success);
}
};
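The `Status` enum's `isRunning` relies on declaration order: any state strictly between `pending` and `success` counts as running, which covers `binding`, `running`, and `partial_response` in one comparison. A short Python sketch of that ordering-based check (hypothetical names mirroring the Zig enum):

```python
from enum import IntEnum


class Status(IntEnum):
    # Declaration order matters: everything strictly between
    # PENDING and SUCCESS is considered "running".
    PENDING = 0
    BINDING = 1
    RUNNING = 2
    PARTIAL_RESPONSE = 3
    SUCCESS = 4
    FAIL = 5


def is_running(s: Status) -> bool:
    # Mirrors: @intFromEnum(this) > @intFromEnum(.pending)
    #      and @intFromEnum(this) < @intFromEnum(.success)
    return Status.PENDING < s < Status.SUCCESS
```

Because `FAIL` sorts after `SUCCESS`, both terminal states fall outside the open interval, so no per-state switch is needed.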


@@ -0,0 +1,809 @@
const JSMySQLConnection = @This();
__ref_count: RefCount = RefCount.init(),
#js_value: jsc.JSRef = jsc.JSRef.empty(),
#globalObject: *jsc.JSGlobalObject,
#vm: *jsc.VirtualMachine,
#poll_ref: bun.Async.KeepAlive = .{},
#connection: MySQLConnection,
auto_flusher: AutoFlusher = .{},
idle_timeout_interval_ms: u32 = 0,
connection_timeout_ms: u32 = 0,
/// Before being connected, this is a connection timeout timer.
/// After being connected, this is an idle timeout timer.
timer: bun.api.Timer.EventLoopTimer = .{
.tag = .MySQLConnectionTimeout,
.next = .{
.sec = 0,
.nsec = 0,
},
},
/// This timer controls the maximum lifetime of a connection.
/// It starts when the connection successfully starts (i.e. after handshake is complete).
/// It stops when the connection is closed.
max_lifetime_interval_ms: u32 = 0,
max_lifetime_timer: bun.api.Timer.EventLoopTimer = .{
.tag = .MySQLConnectionMaxLifetime,
.next = .{
.sec = 0,
.nsec = 0,
},
},
pub const ref = RefCount.ref;
pub const deref = RefCount.deref;
pub fn onAutoFlush(this: *@This()) bool {
if (this.#connection.hasBackpressure()) {
this.auto_flusher.registered = false;
// if we have backpressure, wait for onWritable
return false;
}
// drain as much as we can
this.drainInternal();
        // if we don't have backpressure and still have data to send, return true; otherwise return false and wait for onWritable
const keep_flusher_registered = this.#connection.canFlush();
this.auto_flusher.registered = keep_flusher_registered;
return keep_flusher_registered;
}
fn registerAutoFlusher(this: *@This()) void {
if (!this.auto_flusher.registered and // should not be registered
this.#connection.canFlush())
{
AutoFlusher.registerDeferredMicrotaskWithTypeUnchecked(@This(), this, this.#vm);
this.auto_flusher.registered = true;
}
}
fn unregisterAutoFlusher(this: *@This()) void {
if (this.auto_flusher.registered) {
AutoFlusher.unregisterDeferredMicrotaskWithType(@This(), this, this.#vm);
this.auto_flusher.registered = false;
}
}
fn stopTimers(this: *@This()) void {
debug("stopTimers", .{});
if (this.timer.state == .ACTIVE) {
this.#vm.timer.remove(&this.timer);
}
if (this.max_lifetime_timer.state == .ACTIVE) {
this.#vm.timer.remove(&this.max_lifetime_timer);
}
}
fn getTimeoutInterval(this: *@This()) u32 {
return switch (this.#connection.status) {
.connected => {
if (this.#connection.isIdle()) {
return this.idle_timeout_interval_ms;
}
return 0;
},
.failed => 0,
else => {
return this.connection_timeout_ms;
},
};
}
pub fn resetConnectionTimeout(this: *@This()) void {
debug("resetConnectionTimeout", .{});
const interval = this.getTimeoutInterval();
if (this.timer.state == .ACTIVE) {
this.#vm.timer.remove(&this.timer);
}
if (this.#connection.status == .failed or
this.#connection.isProcessingData() or
interval == 0) return;
this.timer.next = bun.timespec.msFromNow(@intCast(interval));
this.#vm.timer.insert(&this.timer);
}
pub fn onConnectionTimeout(this: *@This()) bun.api.Timer.EventLoopTimer.Arm {
this.timer.state = .FIRED;
if (this.#connection.isProcessingData()) {
return .disarm;
}
if (this.#connection.status == .failed) return .disarm;
if (this.getTimeoutInterval() == 0) {
this.resetConnectionTimeout();
return .disarm;
}
switch (this.#connection.status) {
.connected => {
this.failFmt(error.IdleTimeout, "Idle timeout reached after {}", .{bun.fmt.fmtDurationOneDecimal(@as(u64, this.idle_timeout_interval_ms) *| std.time.ns_per_ms)});
},
.connecting => {
this.failFmt(error.ConnectionTimedOut, "Connection timeout after {}", .{bun.fmt.fmtDurationOneDecimal(@as(u64, this.connection_timeout_ms) *| std.time.ns_per_ms)});
},
.handshaking,
.authenticating,
.authentication_awaiting_pk,
=> {
this.failFmt(error.ConnectionTimedOut, "Connection timeout after {} (during authentication)", .{bun.fmt.fmtDurationOneDecimal(@as(u64, this.connection_timeout_ms) *| std.time.ns_per_ms)});
},
.disconnected, .failed => {},
}
return .disarm;
}
pub fn onMaxLifetimeTimeout(this: *@This()) bun.api.Timer.EventLoopTimer.Arm {
this.max_lifetime_timer.state = .FIRED;
if (this.#connection.status == .failed) return .disarm;
this.failFmt(error.LifetimeTimeout, "Max lifetime timeout reached after {}", .{bun.fmt.fmtDurationOneDecimal(@as(u64, this.max_lifetime_interval_ms) *| std.time.ns_per_ms)});
return .disarm;
}
fn setupMaxLifetimeTimerIfNecessary(this: *@This()) void {
if (this.max_lifetime_interval_ms == 0) return;
if (this.max_lifetime_timer.state == .ACTIVE) return;
this.max_lifetime_timer.next = bun.timespec.msFromNow(@intCast(this.max_lifetime_interval_ms));
this.#vm.timer.insert(&this.max_lifetime_timer);
}
pub fn constructor(globalObject: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!*@This() {
_ = callframe;
return globalObject.throw("MySQLConnection cannot be constructed directly", .{});
}
pub fn enqueueRequest(this: *@This(), item: *JSMySQLQuery) void {
debug("enqueueRequest", .{});
this.#connection.enqueueRequest(item);
this.resetConnectionTimeout();
this.registerAutoFlusher();
}
pub fn close(this: *@This()) void {
this.ref();
this.stopTimers();
this.unregisterAutoFlusher();
defer {
this.updateReferenceType();
this.deref();
}
if (this.#vm.isShuttingDown()) {
this.#connection.close();
} else {
this.#connection.cleanQueueAndClose(null, this.getQueriesArray());
}
}
fn drainInternal(this: *@This()) void {
if (this.#vm.isShuttingDown()) return this.close();
this.ref();
defer this.deref();
const event_loop = this.#vm.eventLoop();
event_loop.enter();
defer event_loop.exit();
this.ensureJSValueIsAlive();
this.#connection.flushQueue() catch |err| {
bun.assert_eql(err, error.AuthenticationFailed);
this.fail("Authentication failed", err);
return;
};
}
pub fn deinit(this: *@This()) void {
this.stopTimers();
this.#poll_ref.unref(this.#vm);
this.unregisterAutoFlusher();
this.#connection.cleanup();
bun.destroy(this);
}
fn ensureJSValueIsAlive(this: *@This()) void {
if (this.#js_value.tryGet()) |value| {
value.ensureStillAlive();
}
}
pub fn finalize(this: *@This()) void {
debug("finalize", .{});
this.#js_value.finalize();
this.deref();
}
fn SocketHandler(comptime ssl: bool) type {
return struct {
const SocketType = uws.NewSocketHandler(ssl);
fn _socket(s: SocketType) uws.AnySocket {
if (comptime ssl) {
return uws.AnySocket{ .SocketTLS = s };
}
return uws.AnySocket{ .SocketTCP = s };
}
pub fn onOpen(this: *JSMySQLConnection, s: SocketType) void {
const socket = _socket(s);
this.#connection.setSocket(socket);
this.setupMaxLifetimeTimerIfNecessary();
this.resetConnectionTimeout();
if (socket == .SocketTCP) {
// when upgrading to TLS, the onOpen callback will be called again; at that moment we don't want to change the status to handshaking
this.#connection.status = .handshaking;
this.ref(); // keep a ref for the socket
}
this.updateReferenceType();
}
fn onHandshake_(
this: *JSMySQLConnection,
_: anytype,
success: i32,
ssl_error: uws.us_bun_verify_error_t,
) void {
const handshakeWasSuccessful = this.#connection.doHandshake(success, ssl_error) catch |err| return this.failFmt(err, "Failed to send handshake response", .{});
if (!handshakeWasSuccessful) {
this.failWithJSValue(ssl_error.toJS(this.#globalObject));
}
}
pub const onHandshake = if (ssl) onHandshake_ else null;
pub fn onClose(this: *JSMySQLConnection, _: SocketType, _: i32, _: ?*anyopaque) void {
defer this.deref();
this.fail("Connection closed", error.ConnectionClosed);
}
pub fn onEnd(_: *JSMySQLConnection, socket: SocketType) void {
// no half-closed sockets
socket.close(.normal);
}
pub fn onConnectError(this: *JSMySQLConnection, _: SocketType, _: i32) void {
// TODO: proper propagation of the error
this.fail("Connection closed", error.ConnectionClosed);
}
pub fn onTimeout(this: *JSMySQLConnection, _: SocketType) void {
this.fail("Connection timeout", error.ConnectionTimedOut);
}
pub fn onData(this: *JSMySQLConnection, _: SocketType, data: []const u8) void {
this.ref();
defer this.deref();
const vm = this.#vm;
defer {
// reset the connection timeout after we're done processing the data
this.resetConnectionTimeout();
this.updateReferenceType();
this.registerAutoFlusher();
}
if (this.#vm.isShuttingDown()) {
// we are shutting down; don't process the data
return;
}
const event_loop = vm.eventLoop();
event_loop.enter();
defer event_loop.exit();
this.ensureJSValueIsAlive();
this.#connection.readAndProcessData(data) catch |err| {
this.onError(null, err);
};
}
pub fn onWritable(this: *JSMySQLConnection, _: SocketType) void {
this.#connection.resetBackpressure();
this.drainInternal();
}
};
}
fn updateReferenceType(this: *@This()) void {
if (this.#js_value.isNotEmpty()) {
if (this.#connection.isActive()) {
if (this.#js_value == .weak) {
this.#js_value.upgrade(this.#globalObject);
this.#poll_ref.ref(this.#vm);
}
return;
}
if (this.#js_value == .strong) {
this.#js_value.downgrade();
this.#poll_ref.unref(this.#vm);
return;
}
}
this.#poll_ref.unref(this.#vm);
}
pub fn createInstance(globalObject: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!jsc.JSValue {
var vm = globalObject.bunVM();
const arguments = callframe.arguments();
const hostname_str = try arguments[0].toBunString(globalObject);
defer hostname_str.deref();
const port = try arguments[1].coerce(i32, globalObject);
const username_str = try arguments[2].toBunString(globalObject);
defer username_str.deref();
const password_str = try arguments[3].toBunString(globalObject);
defer password_str.deref();
const database_str = try arguments[4].toBunString(globalObject);
defer database_str.deref();
// TODO: update this to match MySQL.
const ssl_mode: SSLMode = switch (arguments[5].toInt32()) {
0 => .disable,
1 => .prefer,
2 => .require,
3 => .verify_ca,
4 => .verify_full,
else => .disable,
};
const tls_object = arguments[6];
var tls_config: jsc.API.ServerConfig.SSLConfig = .{};
var tls_ctx: ?*uws.SocketContext = null;
if (ssl_mode != .disable) {
tls_config = if (tls_object.isBoolean() and tls_object.toBoolean())
.{}
else if (tls_object.isObject())
(jsc.API.ServerConfig.SSLConfig.fromJS(vm, globalObject, tls_object) catch return .zero) orelse .{}
else {
return globalObject.throwInvalidArguments("tls must be a boolean or an object", .{});
};
if (globalObject.hasException()) {
tls_config.deinit();
return .zero;
}
// we always request the cert so we can verify it, and we manually abort the connection if the hostname doesn't match
const original_reject_unauthorized = tls_config.reject_unauthorized;
tls_config.reject_unauthorized = 0;
tls_config.request_cert = 1;
// We create it right here so we can throw errors early.
const context_options = tls_config.asUSockets();
var err: uws.create_bun_socket_error_t = .none;
tls_ctx = uws.SocketContext.createSSLContext(vm.uwsLoop(), @sizeOf(*@This()), context_options, &err) orelse {
tls_config.deinit();
if (err != .none) {
return globalObject.throwValue(err.toJS(globalObject));
}
return globalObject.throw("failed to create TLS context", .{});
};
// restore the original reject_unauthorized
tls_config.reject_unauthorized = original_reject_unauthorized;
if (err != .none) {
tls_config.deinit();
if (tls_ctx) |ctx| {
ctx.deinit(true);
}
return globalObject.throwValue(err.toJS(globalObject));
}
uws.NewSocketHandler(true).configure(tls_ctx.?, true, *@This(), SocketHandler(true));
}
var username: []const u8 = "";
var password: []const u8 = "";
var database: []const u8 = "";
var options: []const u8 = "";
var path: []const u8 = "";
const options_str = try arguments[7].toBunString(globalObject);
defer options_str.deref();
const path_str = try arguments[8].toBunString(globalObject);
defer path_str.deref();
const options_buf: []u8 = brk: {
var b = bun.StringBuilder{};
b.cap += username_str.utf8ByteLength() + 1 + password_str.utf8ByteLength() + 1 + database_str.utf8ByteLength() + 1 + options_str.utf8ByteLength() + 1 + path_str.utf8ByteLength() + 1;
b.allocate(bun.default_allocator) catch {};
var u = username_str.toUTF8WithoutRef(bun.default_allocator);
defer u.deinit();
username = b.append(u.slice());
var p = password_str.toUTF8WithoutRef(bun.default_allocator);
defer p.deinit();
password = b.append(p.slice());
var d = database_str.toUTF8WithoutRef(bun.default_allocator);
defer d.deinit();
database = b.append(d.slice());
var o = options_str.toUTF8WithoutRef(bun.default_allocator);
defer o.deinit();
options = b.append(o.slice());
var _path = path_str.toUTF8WithoutRef(bun.default_allocator);
defer _path.deinit();
path = b.append(_path.slice());
break :brk b.allocatedSlice();
};
const on_connect = arguments[9];
const on_close = arguments[10];
const idle_timeout = arguments[11].toInt32();
const connection_timeout = arguments[12].toInt32();
const max_lifetime = arguments[13].toInt32();
const use_unnamed_prepared_statements = arguments[14].asBoolean();
// MySQL doesn't support unnamed prepared statements
_ = use_unnamed_prepared_statements;
var ptr = bun.new(JSMySQLConnection, .{
.#globalObject = globalObject,
.#vm = vm,
.idle_timeout_interval_ms = @intCast(idle_timeout),
.connection_timeout_ms = @intCast(connection_timeout),
.max_lifetime_interval_ms = @intCast(max_lifetime),
.#connection = MySQLConnection.init(
database,
username,
password,
options,
options_buf,
tls_config,
tls_ctx,
ssl_mode,
),
});
{
const hostname = hostname_str.toUTF8(bun.default_allocator);
defer hostname.deinit();
const ctx = vm.rareData().mysql_context.tcp orelse brk: {
const ctx_ = uws.SocketContext.createNoSSLContext(vm.uwsLoop(), @sizeOf(*@This())).?;
uws.NewSocketHandler(false).configure(ctx_, true, *@This(), SocketHandler(false));
vm.rareData().mysql_context.tcp = ctx_;
break :brk ctx_;
};
if (path.len > 0) {
ptr.#connection.setSocket(.{
.SocketTCP = uws.SocketTCP.connectUnixAnon(path, ctx, ptr, false) catch |err| {
ptr.deref();
return globalObject.throwError(err, "failed to connect to mysql");
},
});
} else {
ptr.#connection.setSocket(.{
.SocketTCP = uws.SocketTCP.connectAnon(hostname.slice(), port, ctx, ptr, false) catch |err| {
ptr.deref();
return globalObject.throwError(err, "failed to connect to mysql");
},
});
}
}
ptr.#connection.status = .connecting;
ptr.resetConnectionTimeout();
ptr.#poll_ref.ref(vm);
const js_value = ptr.toJS(globalObject);
js_value.ensureStillAlive();
ptr.#js_value.setStrong(js_value, globalObject);
js.onconnectSetCached(js_value, globalObject, on_connect);
js.oncloseSetCached(js_value, globalObject, on_close);
return js_value;
}
pub fn getQueries(_: *@This(), thisValue: jsc.JSValue, globalObject: *jsc.JSGlobalObject) bun.JSError!jsc.JSValue {
if (js.queriesGetCached(thisValue)) |value| {
return value;
}
const array = try jsc.JSValue.createEmptyArray(globalObject, 0);
js.queriesSetCached(thisValue, globalObject, array);
return array;
}
pub fn getConnected(this: *@This(), _: *jsc.JSGlobalObject) JSValue {
return JSValue.jsBoolean(this.#connection.status == .connected);
}
pub fn getOnConnect(_: *@This(), thisValue: jsc.JSValue, _: *jsc.JSGlobalObject) jsc.JSValue {
if (js.onconnectGetCached(thisValue)) |value| {
return value;
}
return .js_undefined;
}
pub fn setOnConnect(_: *@This(), thisValue: jsc.JSValue, globalObject: *jsc.JSGlobalObject, value: jsc.JSValue) void {
js.onconnectSetCached(thisValue, globalObject, value);
}
pub fn getOnClose(_: *@This(), thisValue: jsc.JSValue, _: *jsc.JSGlobalObject) jsc.JSValue {
if (js.oncloseGetCached(thisValue)) |value| {
return value;
}
return .js_undefined;
}
pub fn setOnClose(_: *@This(), thisValue: jsc.JSValue, globalObject: *jsc.JSGlobalObject, value: jsc.JSValue) void {
js.oncloseSetCached(thisValue, globalObject, value);
}
pub fn doRef(this: *@This(), _: *jsc.JSGlobalObject, _: *jsc.CallFrame) bun.JSError!JSValue {
this.#poll_ref.ref(this.#vm);
return .js_undefined;
}
pub fn doUnref(this: *@This(), _: *jsc.JSGlobalObject, _: *jsc.CallFrame) bun.JSError!JSValue {
this.#poll_ref.unref(this.#vm);
return .js_undefined;
}
pub fn doFlush(this: *@This(), _: *jsc.JSGlobalObject, _: *jsc.CallFrame) bun.JSError!JSValue {
this.registerAutoFlusher();
return .js_undefined;
}
pub fn doClose(this: *@This(), globalObject: *jsc.JSGlobalObject, _: *jsc.CallFrame) bun.JSError!JSValue {
_ = globalObject;
this.stopTimers();
defer this.updateReferenceType();
this.#connection.cleanQueueAndClose(null, this.getQueriesArray());
return .js_undefined;
}
fn consumeOnConnectCallback(this: *const @This(), globalObject: *jsc.JSGlobalObject) ?jsc.JSValue {
if (this.#vm.isShuttingDown()) return null;
if (this.#js_value.tryGet()) |value| {
const on_connect = js.onconnectGetCached(value) orelse return null;
js.onconnectSetCached(value, globalObject, .zero);
return on_connect;
}
return null;
}
fn consumeOnCloseCallback(this: *const @This(), globalObject: *jsc.JSGlobalObject) ?jsc.JSValue {
if (this.#vm.isShuttingDown()) return null;
if (this.#js_value.tryGet()) |value| {
const on_close = js.oncloseGetCached(value) orelse return null;
js.oncloseSetCached(value, globalObject, .zero);
return on_close;
}
return null;
}
pub fn getQueriesArray(this: *@This()) JSValue {
if (this.#vm.isShuttingDown()) return .js_undefined;
if (this.#js_value.tryGet()) |value| {
return js.queriesGetCached(value) orelse .js_undefined;
}
return .js_undefined;
}
pub inline fn isAbleToWrite(this: *@This()) bool {
return this.#connection.isAbleToWrite();
}
pub inline fn isConnected(this: *@This()) bool {
return this.#connection.status == .connected;
}
pub inline fn canPipeline(this: *@This()) bool {
return this.#connection.canPipeline();
}
pub inline fn canPrepareQuery(this: *@This()) bool {
return this.#connection.canPrepareQuery();
}
pub inline fn canExecuteQuery(this: *@This()) bool {
return this.#connection.canExecuteQuery();
}
pub inline fn getWriter(this: *@This()) NewWriter(MySQLConnection.Writer) {
return this.#connection.writer();
}
fn failFmt(this: *@This(), error_code: AnyMySQLError.Error, comptime fmt: [:0]const u8, args: anytype) void {
const message = bun.handleOom(std.fmt.allocPrint(bun.default_allocator, fmt, args));
defer bun.default_allocator.free(message);
const err = AnyMySQLError.mysqlErrorToJS(this.#globalObject, message, error_code);
this.failWithJSValue(err);
}
fn failWithJSValue(this: *@This(), value: JSValue) void {
this.ref();
defer {
if (this.#vm.isShuttingDown()) {
this.#connection.close();
} else {
this.#connection.cleanQueueAndClose(value, this.getQueriesArray());
}
this.updateReferenceType();
this.deref();
}
this.stopTimers();
if (this.#connection.status == .failed) return;
this.#connection.status = .failed;
if (this.#vm.isShuttingDown()) return;
const on_close = this.consumeOnCloseCallback(this.#globalObject) orelse return;
on_close.ensureStillAlive();
const loop = this.#vm.eventLoop();
this.ensureJSValueIsAlive();
var js_error = value.toError() orelse value;
if (js_error == .zero) {
js_error = AnyMySQLError.mysqlErrorToJS(this.#globalObject, "Connection closed", error.ConnectionClosed);
}
js_error.ensureStillAlive();
const queries_array = this.getQueriesArray();
queries_array.ensureStillAlive();
loop.runCallback(on_close, this.#globalObject, .js_undefined, &[_]JSValue{ js_error, queries_array });
}
fn fail(this: *@This(), message: []const u8, err: AnyMySQLError.Error) void {
const instance = AnyMySQLError.mysqlErrorToJS(this.#globalObject, message, err);
this.failWithJSValue(instance);
}
pub fn onConnectionEstabilished(this: *@This()) void {
if (this.#vm.isShuttingDown()) return;
const on_connect = this.consumeOnConnectCallback(this.#globalObject) orelse return;
on_connect.ensureStillAlive();
var js_value = this.#js_value.tryGet() orelse .js_undefined;
js_value.ensureStillAlive();
const loop = this.#vm.eventLoop();
loop.runCallback(on_connect, this.#globalObject, .js_undefined, &[_]JSValue{ JSValue.jsNull(), js_value });
this.#poll_ref.unref(this.#vm);
}
pub fn onQueryResult(this: *@This(), request: *JSMySQLQuery, result: MySQLQueryResult) void {
request.resolve(this.getQueriesArray(), result);
}
pub fn onResultRow(this: *@This(), request: *JSMySQLQuery, statement: *MySQLStatement, Context: type, reader: NewReader(Context)) error{ShortRead}!void {
const result_mode = request.getResultMode();
var stack_fallback = std.heap.stackFallback(4096, bun.default_allocator);
const allocator = stack_fallback.get();
var row = ResultSet.Row{
.globalObject = this.#globalObject,
.columns = statement.columns,
.binary = !request.isSimple(),
.raw = result_mode == .raw,
.bigint = request.isBigintSupported(),
};
var structure: JSValue = .js_undefined;
var cached_structure: ?CachedStructure = null;
switch (result_mode) {
.objects => {
cached_structure = if (this.#js_value.tryGet()) |value| statement.structure(value, this.#globalObject) else null;
structure = if (cached_structure) |cs| (cs.jsValue() orelse .js_undefined) else .js_undefined;
},
.raw, .values => {
// no need to check for duplicate fields or structure
},
}
defer row.deinit(allocator);
row.decode(allocator, reader) catch |err| {
if (err == error.ShortRead) {
return error.ShortRead;
}
this.#connection.queue.markCurrentRequestAsFinished(request);
request.reject(this.getQueriesArray(), err);
return;
};
const pending_value = request.getPendingValue() orelse .js_undefined;
// Process row data
const row_value = row.toJS(
this.#globalObject,
pending_value,
structure,
statement.fields_flags,
result_mode,
cached_structure,
);
if (this.#globalObject.tryTakeException()) |err| {
this.#connection.queue.markCurrentRequestAsFinished(request);
request.rejectWithJSValue(this.getQueriesArray(), err);
return;
}
statement.result_count += 1;
if (pending_value.isEmptyOrUndefinedOrNull()) {
request.setPendingValue(row_value);
}
}
pub fn onError(this: *@This(), request: ?*JSMySQLQuery, err: AnyMySQLError.Error) void {
if (request) |_request| {
if (this.#vm.isShuttingDown()) {
_request.markAsFailed();
return;
}
if (this.#globalObject.tryTakeException()) |err_| {
_request.rejectWithJSValue(this.getQueriesArray(), err_);
} else {
_request.reject(this.getQueriesArray(), err);
}
} else {
if (this.#vm.isShuttingDown()) {
this.close();
return;
}
if (this.#globalObject.tryTakeException()) |err_| {
this.failWithJSValue(err_);
} else {
this.fail("Connection closed", err);
}
}
}
pub fn onErrorPacket(
this: *@This(),
request: ?*JSMySQLQuery,
err: ErrorPacket,
) void {
if (request) |_request| {
if (this.#vm.isShuttingDown()) {
_request.markAsFailed();
} else {
if (this.#globalObject.tryTakeException()) |err_| {
_request.rejectWithJSValue(this.getQueriesArray(), err_);
} else {
_request.rejectWithJSValue(this.getQueriesArray(), err.toJS(this.#globalObject));
}
}
} else {
if (this.#vm.isShuttingDown()) {
this.close();
return;
}
if (this.#globalObject.tryTakeException()) |err_| {
this.failWithJSValue(err_);
} else {
this.failWithJSValue(err.toJS(this.#globalObject));
}
}
}
pub fn getStatementFromSignatureHash(this: *@This(), signature_hash: u64) !MySQLConnection.PreparedStatementsMapGetOrPutResult {
return try this.#connection.statements.getOrPut(bun.default_allocator, signature_hash);
}
const RefCount = bun.ptr.RefCount(@This(), "__ref_count", deinit, .{});
pub const js = jsc.Codegen.JSMySQLConnection;
pub const fromJS = js.fromJS;
pub const fromJSDirect = js.fromJSDirect;
pub const toJS = js.toJS;
pub const Writer = MySQLConnection.Writer;
const debug = bun.Output.scoped(.MySQLConnection, .visible);
const AnyMySQLError = @import("../protocol/AnyMySQLError.zig");
const CachedStructure = @import("../../shared/CachedStructure.zig");
const ErrorPacket = @import("../protocol/ErrorPacket.zig");
const JSMySQLQuery = @import("./JSMySQLQuery.zig");
const MySQLConnection = @import("../MySQLConnection.zig");
const MySQLQueryResult = @import("../MySQLQueryResult.zig");
const MySQLStatement = @import("../MySQLStatement.zig");
const ResultSet = @import("../protocol/ResultSet.zig");
const std = @import("std");
const NewReader = @import("../protocol/NewReader.zig").NewReader;
const NewWriter = @import("../protocol/NewWriter.zig").NewWriter;
const SSLMode = @import("../SSLMode.zig").SSLMode;
const bun = @import("bun");
const uws = bun.uws;
const jsc = bun.jsc;
const JSGlobalObject = jsc.JSGlobalObject;
const JSValue = jsc.JSValue;
const AutoFlusher = jsc.WebCore.AutoFlusher;


@@ -0,0 +1,402 @@
const JSMySQLQuery = @This();
const RefCount = bun.ptr.RefCount(@This(), "__ref_count", deinit, .{});
#thisValue: JSRef = JSRef.empty(),
// unfortunately we cannot use #ref_count here
__ref_count: RefCount = RefCount.init(),
#vm: *jsc.VirtualMachine,
#globalObject: *jsc.JSGlobalObject,
#query: MySQLQuery,
pub const ref = RefCount.ref;
pub const deref = RefCount.deref;
pub fn estimatedSize(this: *@This()) usize {
_ = this;
return @sizeOf(@This());
}
pub fn constructor(globalThis: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!*@This() {
_ = callframe;
return globalThis.throwInvalidArguments("MySQLQuery cannot be constructed directly", .{});
}
fn deinit(this: *@This()) void {
this.#query.cleanup();
bun.destroy(this);
}
pub fn finalize(this: *@This()) void {
debug("MySQLQuery finalize", .{});
this.#thisValue.finalize();
this.deref();
}
pub fn createInstance(globalThis: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!jsc.JSValue {
const arguments = callframe.arguments();
var args = jsc.CallFrame.ArgumentsSlice.init(globalThis.bunVM(), arguments);
defer args.deinit();
const query = args.nextEat() orelse {
return globalThis.throw("query must be a string", .{});
};
const values = args.nextEat() orelse {
return globalThis.throw("values must be an array", .{});
};
if (!query.isString()) {
return globalThis.throw("query must be a string", .{});
}
if (values.jsType() != .Array) {
return globalThis.throw("values must be an array", .{});
}
const pending_value: JSValue = args.nextEat() orelse .js_undefined;
const columns: JSValue = args.nextEat() orelse .js_undefined;
const js_bigint: JSValue = args.nextEat() orelse .false;
const js_simple: JSValue = args.nextEat() orelse .false;
const bigint = js_bigint.isBoolean() and js_bigint.asBoolean();
const simple = js_simple.isBoolean() and js_simple.asBoolean();
if (simple) {
if (try values.getLength(globalThis) > 0) {
return globalThis.throwInvalidArguments("simple query cannot have parameters", .{});
}
if (try query.getLength(globalThis) >= std.math.maxInt(i32)) {
return globalThis.throwInvalidArguments("query is too long", .{});
}
}
if (!pending_value.jsType().isArrayLike()) {
return globalThis.throwInvalidArgumentType("query", "pendingValue", "Array");
}
var this = bun.new(@This(), .{
.#query = MySQLQuery.init(
try query.toBunString(globalThis),
bigint,
simple,
),
.#globalObject = globalThis,
.#vm = globalThis.bunVM(),
});
const this_value = this.toJS(globalThis);
this_value.ensureStillAlive();
this.#thisValue.setWeak(this_value);
this.setBinding(values);
this.setPendingValue(pending_value);
if (!columns.isUndefined()) {
this.setColumns(columns);
}
return this_value;
}
pub fn doRun(this: *@This(), globalObject: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!JSValue {
debug("doRun", .{});
this.ref();
defer this.deref();
var arguments = callframe.arguments();
if (arguments.len < 2) {
return globalObject.throwInvalidArguments("run must be called with 2 arguments: connection and target", .{});
}
const connection: *MySQLConnection = arguments[0].as(MySQLConnection) orelse {
return globalObject.throw("connection must be a MySQLConnection", .{});
};
var target = arguments[1];
if (!target.isObject()) {
return globalObject.throwInvalidArgumentType("run", "query", "Query");
}
this.setTarget(target);
this.run(connection) catch |err| {
if (!globalObject.hasException()) {
return globalObject.throwValue(AnyMySQLError.mysqlErrorToJS(globalObject, "failed to execute query", err));
}
return error.JSError;
};
connection.enqueueRequest(this);
return .js_undefined;
}
pub fn doCancel(_: *@This(), _: *jsc.JSGlobalObject, _: *jsc.CallFrame) bun.JSError!JSValue {
// TODO: we can cancel a query that is still pending (i.e. not yet pipelined); we just need to fail it.
// If the query is already running, it is not worth/viable to cancel the whole connection.
return .js_undefined;
}
pub fn doDone(_: *@This(), _: *jsc.JSGlobalObject, _: *jsc.CallFrame) bun.JSError!JSValue {
// TODO: investigate why this function is needed
return .js_undefined;
}
pub fn setModeFromJS(this: *@This(), globalObject: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!JSValue {
const js_mode = callframe.argument(0);
if (js_mode.isEmptyOrUndefinedOrNull() or !js_mode.isNumber()) {
return globalObject.throwInvalidArgumentType("setMode", "mode", "Number");
}
const mode_value = try js_mode.coerce(i32, globalObject);
const mode = std.meta.intToEnum(SQLQueryResultMode, mode_value) catch {
return globalObject.throwInvalidArgumentTypeValue("mode", "Number", js_mode);
};
this.#query.setResultMode(mode);
return .js_undefined;
}
pub fn setPendingValueFromJS(this: *@This(), _: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!JSValue {
const result = callframe.argument(0);
this.setPendingValue(result);
return .js_undefined;
}
pub fn resolve(
this: *@This(),
queries_array: JSValue,
result: MySQLQueryResult,
) void {
this.ref();
const is_last_result = result.is_last_result;
defer {
if (this.#thisValue.isNotEmpty() and is_last_result) {
this.#thisValue.downgrade();
}
this.deref();
}
if (!this.#query.result(is_last_result)) {
return;
}
if (this.#vm.isShuttingDown()) {
return;
}
const targetValue = this.getTarget() orelse return;
const thisValue = this.#thisValue.tryGet() orelse return;
thisValue.ensureStillAlive();
const tag: CommandTag = .{ .SELECT = result.result_count };
const js_tag = tag.toJSTag(this.#globalObject) catch return bun.assertf(false, "in MySQLQuery, the tag should always be a number", .{});
js_tag.ensureStillAlive();
const function = this.#vm.rareData().mysql_context.onQueryResolveFn.get() orelse return;
bun.assertf(function.isCallable(), "onQueryResolveFn is not callable", .{});
const event_loop = this.#vm.eventLoop();
const pending_value = this.getPendingValue() orelse .js_undefined;
pending_value.ensureStillAlive();
this.setPendingValue(.js_undefined);
event_loop.runCallback(function, this.#globalObject, thisValue, &.{
targetValue,
pending_value,
js_tag,
tag.toJSNumber(),
if (queries_array == .zero) .js_undefined else queries_array,
JSValue.jsBoolean(is_last_result),
JSValue.jsNumber(result.last_insert_id),
JSValue.jsNumber(result.affected_rows),
});
}
pub fn markAsFailed(this: *@This()) void {
// Attention: we cannot touch JS here
// If you need to touch JS, use reject or rejectWithJSValue instead.
this.ref();
defer this.deref();
if (this.#thisValue.isNotEmpty()) {
this.#thisValue.downgrade();
}
_ = this.#query.fail();
}
pub fn reject(this: *@This(), queries_array: JSValue, err: AnyMySQLError.Error) void {
if (this.#vm.isShuttingDown()) {
this.markAsFailed();
return;
}
if (this.#globalObject.tryTakeException()) |err_| {
this.rejectWithJSValue(queries_array, err_);
} else {
const instance = AnyMySQLError.mysqlErrorToJS(this.#globalObject, "Failed to bind query", err);
instance.ensureStillAlive();
this.rejectWithJSValue(queries_array, instance);
}
}
pub fn rejectWithJSValue(this: *@This(), queries_array: JSValue, err: JSValue) void {
this.ref();
defer {
if (this.#thisValue.isNotEmpty()) {
this.#thisValue.downgrade();
}
this.deref();
}
if (!this.#query.fail()) {
return;
}
if (this.#vm.isShuttingDown()) {
return;
}
const targetValue = this.getTarget() orelse return;
var js_error = err.toError() orelse err;
if (js_error == .zero) {
js_error = AnyMySQLError.mysqlErrorToJS(this.#globalObject, "Query failed", error.UnknownError);
}
bun.assertf(js_error != .zero, "js_error is zero", .{});
js_error.ensureStillAlive();
const function = this.#vm.rareData().mysql_context.onQueryRejectFn.get() orelse return;
bun.assertf(function.isCallable(), "onQueryRejectFn is not callable", .{});
const event_loop = this.#vm.eventLoop();
const js_array = if (queries_array == .zero) .js_undefined else queries_array;
js_array.ensureStillAlive();
event_loop.runCallback(function, this.#globalObject, this.#thisValue.tryGet() orelse return, &.{
targetValue,
js_error,
js_array,
});
}
pub fn run(this: *@This(), connection: *MySQLConnection) AnyMySQLError.Error!void {
if (this.#vm.isShuttingDown()) {
debug("run cannot run a query if the VM is shutting down", .{});
// cannot run a query if the VM is shutting down
return;
}
if (!this.#query.isPending() or this.#query.isBeingPrepared()) {
debug("run already running or being prepared", .{});
// already running, being prepared, or completed
return;
}
const globalObject = this.#globalObject;
this.#thisValue.upgrade(globalObject);
errdefer {
this.#thisValue.downgrade();
_ = this.#query.fail();
}
const columns_value = this.getColumns() orelse .js_undefined;
const binding_value = this.getBinding() orelse .js_undefined;
this.#query.runQuery(connection, globalObject, columns_value, binding_value) catch |err| {
debug("run failed to execute query", .{});
if (!globalObject.hasException())
return globalObject.throwValue(AnyMySQLError.mysqlErrorToJS(globalObject, "failed to execute query", err));
return error.JSError;
};
}
pub inline fn isCompleted(this: *@This()) bool {
return this.#query.isCompleted();
}
pub inline fn isRunning(this: *@This()) bool {
return this.#query.isRunning();
}
pub inline fn isPending(this: *@This()) bool {
return this.#query.isPending();
}
pub inline fn isBeingPrepared(this: *@This()) bool {
return this.#query.isBeingPrepared();
}
pub inline fn isPipelined(this: *@This()) bool {
return this.#query.isPipelined();
}
pub inline fn isSimple(this: *@This()) bool {
return this.#query.isSimple();
}
pub inline fn isBigintSupported(this: *@This()) bool {
return this.#query.isBigintSupported();
}
pub inline fn getResultMode(this: *@This()) SQLQueryResultMode {
return this.#query.getResultMode();
}
// TODO: isolate statement modification away from the connection
pub fn getStatement(this: *@This()) ?*MySQLStatement {
return this.#query.getStatement();
}
pub fn markAsPrepared(this: *@This()) void {
this.#query.markAsPrepared();
}
pub inline fn setPendingValue(this: *@This(), result: JSValue) void {
if (this.#vm.isShuttingDown()) return;
if (this.#thisValue.tryGet()) |value| {
js.pendingValueSetCached(value, this.#globalObject, result);
}
}
pub inline fn getPendingValue(this: *@This()) ?JSValue {
if (this.#vm.isShuttingDown()) return null;
if (this.#thisValue.tryGet()) |value| {
return js.pendingValueGetCached(value);
}
return null;
}
inline fn setTarget(this: *@This(), result: JSValue) void {
if (this.#vm.isShuttingDown()) return;
if (this.#thisValue.tryGet()) |value| {
js.targetSetCached(value, this.#globalObject, result);
}
}
inline fn getTarget(this: *@This()) ?JSValue {
if (this.#vm.isShuttingDown()) return null;
if (this.#thisValue.tryGet()) |value| {
return js.targetGetCached(value);
}
return null;
}
inline fn setColumns(this: *@This(), result: JSValue) void {
if (this.#vm.isShuttingDown()) return;
if (this.#thisValue.tryGet()) |value| {
js.columnsSetCached(value, this.#globalObject, result);
}
}
inline fn getColumns(this: *@This()) ?JSValue {
if (this.#vm.isShuttingDown()) return null;
if (this.#thisValue.tryGet()) |value| {
return js.columnsGetCached(value);
}
return null;
}
inline fn setBinding(this: *@This(), result: JSValue) void {
if (this.#vm.isShuttingDown()) return;
if (this.#thisValue.tryGet()) |value| {
js.bindingSetCached(value, this.#globalObject, result);
}
}
inline fn getBinding(this: *@This()) ?JSValue {
if (this.#vm.isShuttingDown()) return null;
if (this.#thisValue.tryGet()) |value| {
return js.bindingGetCached(value);
}
return null;
}
comptime {
@export(&jsc.toJSHostFn(createInstance), .{ .name = "MySQLQuery__createInstance" });
}
pub const js = jsc.Codegen.JSMySQLQuery;
pub const fromJS = js.fromJS;
pub const fromJSDirect = js.fromJSDirect;
pub const toJS = js.toJS;
const debug = bun.Output.scoped(.MySQLQuery, .visible);
const AnyMySQLError = @import("../protocol/AnyMySQLError.zig");
const MySQLConnection = @import("./JSMySQLConnection.zig");
const MySQLQuery = @import("../MySQLQuery.zig");
const MySQLQueryResult = @import("../MySQLQueryResult.zig");
const MySQLStatement = @import("../MySQLStatement.zig");
const bun = @import("bun");
const std = @import("std");
const CommandTag = @import("../../postgres/CommandTag.zig").CommandTag;
const SQLQueryResultMode = @import("../../shared/SQLQueryResultMode.zig").SQLQueryResultMode;
const jsc = bun.jsc;
const JSRef = jsc.JSRef;
const JSValue = jsc.JSValue;


@@ -33,6 +33,8 @@ pub const Error = error{
InvalidErrorPacket,
UnexpectedPacket,
ShortRead,
UnknownError,
InvalidState,
};
pub fn mysqlErrorToJS(globalObject: *jsc.JSGlobalObject, message: ?[]const u8, err: Error) JSValue {
@@ -64,6 +66,8 @@ pub fn mysqlErrorToJS(globalObject: *jsc.JSGlobalObject, message: ?[]const u8, e
error.MissingAuthData => "ERR_MYSQL_MISSING_AUTH_DATA",
error.FailedToEncryptPassword => "ERR_MYSQL_FAILED_TO_ENCRYPT_PASSWORD",
error.InvalidPublicKey => "ERR_MYSQL_INVALID_PUBLIC_KEY",
error.UnknownError => "ERR_MYSQL_UNKNOWN_ERROR",
error.InvalidState => "ERR_MYSQL_INVALID_STATE",
error.JSError => {
return globalObject.takeException(error.JSError);
},


@@ -19,7 +19,7 @@ pub fn createMySQLError(
message: []const u8,
options: MySQLErrorOptions,
) bun.JSError!JSValue {
const opts_obj = JSValue.createEmptyObject(globalObject, 18);
const opts_obj = JSValue.createEmptyObject(globalObject, 0);
opts_obj.ensureStillAlive();
opts_obj.put(globalObject, JSC.ZigString.static("code"), try bun.String.createUTF8ForJS(globalObject, options.code));
if (options.errno) |errno| {

View File

@@ -29,7 +29,7 @@ pub const CommandTag = union(enum) {
other: []const u8,
pub fn toJSTag(this: CommandTag, globalObject: *jsc.JSGlobalObject) JSValue {
pub fn toJSTag(this: CommandTag, globalObject: *jsc.JSGlobalObject) bun.JSError!jsc.JSValue {
return switch (this) {
.INSERT => JSValue.jsNumber(1),
.DELETE => JSValue.jsNumber(2),
@@ -39,7 +39,7 @@ pub const CommandTag = union(enum) {
.MOVE => JSValue.jsNumber(6),
.FETCH => JSValue.jsNumber(7),
.COPY => JSValue.jsNumber(8),
.other => |tag| jsc.ZigString.init(tag).toJS(globalObject),
.other => |tag| bun.String.createUTF8ForJS(globalObject, tag),
};
}

View File

@@ -219,7 +219,7 @@ pub fn onConnectionTimeout(this: *PostgresSQLConnection) bun.api.Timer.EventLoop
this.failFmt("ERR_POSTGRES_CONNECTION_TIMEOUT", "Connection timeout after {}", .{bun.fmt.fmtDurationOneDecimal(@as(u64, this.connection_timeout_ms) *| std.time.ns_per_ms)});
},
.sent_startup_message => {
this.failFmt("ERR_POSTGRES_CONNECTION_TIMEOUT", "Connection timed out after {} (sent startup message, but never received response)", .{bun.fmt.fmtDurationOneDecimal(@as(u64, this.connection_timeout_ms) *| std.time.ns_per_ms)});
this.failFmt("ERR_POSTGRES_CONNECTION_TIMEOUT", "Connection timeout after {} (sent startup message, but never received response)", .{bun.fmt.fmtDurationOneDecimal(@as(u64, this.connection_timeout_ms) *| std.time.ns_per_ms)});
},
}
return .disarm;
@@ -311,7 +311,7 @@ pub fn failWithJSValue(this: *PostgresSQLConnection, value: JSValue) void {
this.stopTimers();
if (this.status == .failed) return;
this.setStatus(.failed);
this.status = .failed;
this.ref();
defer this.deref();
@@ -321,12 +321,17 @@ pub fn failWithJSValue(this: *PostgresSQLConnection, value: JSValue) void {
const loop = this.vm.eventLoop();
loop.enter();
var js_error = value.toError() orelse value;
if (js_error == .zero) {
js_error = postgresErrorToJS(this.globalObject, "Connection closed", error.ConnectionClosed);
}
js_error.ensureStillAlive();
defer loop.exit();
_ = on_close.call(
this.globalObject,
this.js_value,
.js_undefined,
&[_]JSValue{
value.toError() orelse value,
js_error,
this.getQueriesArray(),
},
) catch |e| this.globalObject.reportActiveExceptionAsUnhandled(e);
@@ -1350,6 +1355,9 @@ fn advance(this: *PostgresSQLConnection) void {
}
pub fn getQueriesArray(this: *const PostgresSQLConnection) JSValue {
if (this.js_value.isEmptyOrUndefinedOrNull()) {
return .js_undefined;
}
return js.queriesGetCached(this.js_value) orelse .js_undefined;
}

View File

@@ -1,5 +1,5 @@
const PostgresSQLQuery = @This();
const RefCount = bun.ptr.ThreadSafeRefCount(@This(), "ref_count", deinit, .{});
const RefCount = bun.ptr.RefCount(@This(), "ref_count", deinit, .{});
statement: ?*PostgresSQLStatement = null,
query: bun.String = bun.String.empty,
cursor_name: bun.String = bun.String.empty,
@@ -23,9 +23,9 @@ flags: packed struct(u8) {
pub const ref = RefCount.ref;
pub const deref = RefCount.deref;
pub fn getTarget(this: *PostgresSQLQuery, globalObject: *jsc.JSGlobalObject, clean_target: bool) jsc.JSValue {
const thisValue = this.thisValue.tryGet() orelse return .zero;
const target = js.targetGetCached(thisValue) orelse return .zero;
pub fn getTarget(this: *PostgresSQLQuery, globalObject: *jsc.JSGlobalObject, clean_target: bool) ?jsc.JSValue {
const thisValue = this.thisValue.tryGet() orelse return null;
const target = js.targetGetCached(thisValue) orelse return null;
if (clean_target) {
js.targetSetCached(thisValue, globalObject, .zero);
}
@@ -51,12 +51,7 @@ pub const Status = enum(u8) {
}
};
pub fn hasPendingActivity(this: *@This()) bool {
return this.ref_count.get() > 1;
}
pub fn deinit(this: *@This()) void {
this.thisValue.deinit();
if (this.statement) |statement| {
statement.deref();
}
@@ -67,11 +62,7 @@ pub fn deinit(this: *@This()) void {
pub fn finalize(this: *@This()) void {
debug("PostgresSQLQuery finalize", .{});
if (this.thisValue == .weak) {
// clean up if is a weak reference, if is a strong reference we need to wait until the query is done
// if we are a strong reference, here is probably a bug because GC'd should not happen
this.thisValue.weak = .zero;
}
this.thisValue.finalize();
this.deref();
}
@@ -84,12 +75,9 @@ pub fn onWriteFail(
this.ref();
defer this.deref();
this.status = .fail;
const thisValue = this.thisValue.get();
defer this.thisValue.deinit();
const targetValue = this.getTarget(globalObject, true);
if (thisValue == .zero or targetValue == .zero) {
return;
}
const thisValue = this.thisValue.tryGet() orelse return;
defer this.thisValue.downgrade();
const targetValue = this.getTarget(globalObject, true) orelse return;
const vm = jsc.VirtualMachine.get();
const function = vm.rareData().postgresql_context.onQueryRejectFn.get().?;
@@ -105,12 +93,9 @@ pub fn onJSError(this: *@This(), err: jsc.JSValue, globalObject: *jsc.JSGlobalOb
this.ref();
defer this.deref();
this.status = .fail;
const thisValue = this.thisValue.get();
defer this.thisValue.deinit();
const targetValue = this.getTarget(globalObject, true);
if (thisValue == .zero or targetValue == .zero) {
return;
}
const thisValue = this.thisValue.tryGet() orelse return;
defer this.thisValue.downgrade();
const targetValue = this.getTarget(globalObject, true) orelse return;
var vm = jsc.VirtualMachine.get();
const function = vm.rareData().postgresql_context.onQueryRejectFn.get().?;
@@ -145,31 +130,30 @@ fn consumePendingValue(thisValue: jsc.JSValue, globalObject: *jsc.JSGlobalObject
pub fn onResult(this: *@This(), command_tag_str: []const u8, globalObject: *jsc.JSGlobalObject, connection: jsc.JSValue, is_last: bool) void {
this.ref();
defer this.deref();
const thisValue = this.thisValue.get();
const targetValue = this.getTarget(globalObject, is_last);
if (is_last) {
this.status = .success;
} else {
this.status = .partial_response;
}
const tag = CommandTag.init(command_tag_str);
const js_tag = tag.toJSTag(globalObject) catch |e| return this.onJSError(globalObject.takeException(e), globalObject);
js_tag.ensureStillAlive();
const thisValue = this.thisValue.tryGet() orelse return;
defer if (is_last) {
allowGC(thisValue, globalObject);
this.thisValue.deinit();
this.thisValue.downgrade();
};
if (thisValue == .zero or targetValue == .zero) {
return;
}
const targetValue = this.getTarget(globalObject, is_last) orelse return;
const vm = jsc.VirtualMachine.get();
const function = vm.rareData().postgresql_context.onQueryResolveFn.get().?;
const event_loop = vm.eventLoop();
const tag = CommandTag.init(command_tag_str);
event_loop.runCallback(function, globalObject, thisValue, &.{
targetValue,
consumePendingValue(thisValue, globalObject) orelse .js_undefined,
tag.toJSTag(globalObject),
js_tag,
tag.toJSNumber(),
if (connection == .zero) .js_undefined else PostgresSQLConnection.js.queriesGetCached(connection) orelse .js_undefined,
JSValue.jsBoolean(is_last),
@@ -257,13 +241,13 @@ pub fn doDone(this: *@This(), globalObject: *jsc.JSGlobalObject, _: *jsc.CallFra
this.flags.is_done = true;
return .js_undefined;
}
pub fn setPendingValue(this: *PostgresSQLQuery, globalObject: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!JSValue {
pub fn setPendingValueFromJS(_: *PostgresSQLQuery, globalObject: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!JSValue {
const result = callframe.argument(0);
const thisValue = this.thisValue.tryGet() orelse return .js_undefined;
const thisValue = callframe.this();
js.pendingValueSetCached(thisValue, globalObject, result);
return .js_undefined;
}
pub fn setMode(this: *PostgresSQLQuery, globalObject: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!JSValue {
pub fn setModeFromJS(this: *PostgresSQLQuery, globalObject: *jsc.JSGlobalObject, callframe: *jsc.CallFrame) bun.JSError!JSValue {
const js_mode = callframe.argument(0);
if (js_mode.isEmptyOrUndefinedOrNull() or !js_mode.isNumber()) {
return globalObject.throwInvalidArgumentType("setMode", "mode", "Number");

View File

@@ -11,6 +11,10 @@ array_length: usize = 0,
any_failed: bool = false,
pub fn next(this: *ObjectIterator) ?jsc.JSValue {
if (this.array.isEmptyOrUndefinedOrNull() or this.columns.isEmptyOrUndefinedOrNull()) {
this.any_failed = true;
return null;
}
if (this.row_i >= this.array_length) {
return null;
}

View File

@@ -1,4 +1,4 @@
import { describe } from "bun:test";
import { describe, expect } from "bun:test";
import { itBundled } from "./expectBundled";
describe("bundler", () => {
@@ -33,4 +33,174 @@ describe("bundler", () => {
api.expectFile("out.js").toContain('"use client";');
},
});
itBundled("banner/BannerWithCJSAndTargetBun", {
banner: "// Copyright 2024 Example Corp",
format: "cjs",
target: "bun",
backend: "api",
outdir: "/out",
minifyWhitespace: true,
files: {
"a.js": `module.exports = 1;`,
},
onAfterBundle(api) {
const content = api.readFile("/out/a.js");
expect(content).toMatchInlineSnapshot(`
"// @bun @bun-cjs
(function(exports, require, module, __filename, __dirname) {// Copyright 2024 Example Corp
module.exports=1;})
"
`);
},
});
itBundled("banner/HashbangBannerWithCJSAndTargetBun", {
banner: "#!/usr/bin/env -S node --enable-source-maps\n// Additional banner content",
format: "cjs",
target: "bun",
backend: "api",
outdir: "/out",
minifyWhitespace: true,
files: {
"/a.js": `module.exports = 1;`,
},
onAfterBundle(api) {
const content = api.readFile("/out/a.js");
expect(content).toMatchInlineSnapshot(`
"#!/usr/bin/env -S node --enable-source-maps
// @bun @bun-cjs
(function(exports, require, module, __filename, __dirname) {// Additional banner content
module.exports=1;})
"
`);
},
});
itBundled("banner/SourceHashbangWithBannerAndCJSTargetBun", {
banner: "// Copyright 2024 Example Corp",
format: "cjs",
target: "bun",
outdir: "/out",
minifyWhitespace: true,
backend: "api",
files: {
"/a.js": `#!/usr/bin/env node
module.exports = 1;`,
},
onAfterBundle(api) {
const content = api.readFile("/out/a.js");
expect(content).toMatchInlineSnapshot(`
"#!/usr/bin/env node
// @bun @bun-cjs
(function(exports, require, module, __filename, __dirname) {// Copyright 2024 Example Corp
module.exports=1;})
"
`);
},
});
itBundled("banner/BannerWithESMAndTargetBun", {
banner: "// Copyright 2024 Example Corp",
format: "esm",
target: "bun",
backend: "api",
minifyWhitespace: true,
files: {
"/a.js": `export default 1;`,
},
onAfterBundle(api) {
const content = api.readFile("out.js");
// @bun comment should come first, then banner
const bunCommentIndex = content.indexOf("// @bun");
const bannerIndex = content.indexOf("// Copyright 2024 Example Corp");
expect(bunCommentIndex).toBe(0);
expect(bannerIndex).toBeGreaterThan(bunCommentIndex);
// No CJS wrapper in ESM format
expect(content).not.toContain("(function(exports, require, module, __filename, __dirname)");
expect(content).toMatchInlineSnapshot(`
"// @bun
// Copyright 2024 Example Corp
var a_default=1;export{a_default as default};
"
`);
},
});
itBundled("banner/HashbangBannerWithESMAndTargetBun", {
banner: "#!/usr/bin/env -S node --enable-source-maps\n// Additional banner content",
format: "esm",
target: "bun",
backend: "api",
outdir: "/out",
minifyWhitespace: true,
files: {
"/a.js": `export default 1;`,
},
onAfterBundle(api) {
const content = api.readFile("/out/a.js");
expect(content).toMatchInlineSnapshot(`
"#!/usr/bin/env -S node --enable-source-maps
// @bun
// Additional banner content
var a_default=1;export{a_default as default};
"
`);
},
});
itBundled("banner/BannerWithBytecodeAndCJSTargetBun", {
banner: "// Copyright 2024 Example Corp",
format: "cjs",
target: "bun",
backend: "api",
bytecode: true,
minifyWhitespace: true,
outdir: "/out",
files: {
"/a.js": `module.exports = 1;`,
},
onAfterBundle(api) {
const content = api.readFile("/out/a.js");
expect(content).toMatchInlineSnapshot(`
"// @bun @bytecode @bun-cjs
(function(exports, require, module, __filename, __dirname) {// Copyright 2024 Example Corp
module.exports=1;})
"
`);
// @bun @bytecode @bun-cjs comment should come first, then CJS wrapper, then banner
const bunBytecodeIndex = content.indexOf("// @bun @bytecode @bun-cjs");
const wrapperIndex = content.indexOf("(function(exports, require, module, __filename, __dirname) {");
const bannerIndex = content.indexOf("// Copyright 2024 Example Corp");
expect(bunBytecodeIndex).toBe(0);
expect(wrapperIndex).toBeGreaterThan(bunBytecodeIndex);
expect(bannerIndex).toBeGreaterThan(wrapperIndex);
},
});
itBundled("banner/HashbangBannerWithBytecodeAndCJSTargetBun", {
banner: "#!/usr/bin/env bun\n// Production build",
format: "cjs",
target: "bun",
bytecode: true,
backend: "api",
outdir: "/out",
minifyWhitespace: true,
files: {
"/a.js": `module.exports = 1;`,
},
onAfterBundle(api) {
const content = api.readFile("/out/a.js");
expect(content).toMatchInlineSnapshot(`
"#!/usr/bin/env bun
// @bun @bytecode @bun-cjs
(function(exports, require, module, __filename, __dirname) {// Production build
module.exports=1;})
"
`);
},
});
});

View File

@@ -106,6 +106,7 @@ describe("bundler", () => {
run: { stdout: "Hello, world!" },
});
itBundled("compile/WorkerRelativePathNoExtension", {
backend: "cli",
compile: true,
files: {
"/entry.ts": /* js */ `
@@ -125,6 +126,7 @@ describe("bundler", () => {
run: { stdout: "Hello, world!\nWorker loaded!\n", file: "dist/out", setCwd: true },
});
itBundled("compile/WorkerRelativePathTSExtension", {
backend: "cli",
compile: true,
files: {
"/entry.ts": /* js */ `
@@ -143,6 +145,7 @@ describe("bundler", () => {
run: { stdout: "Hello, world!\nWorker loaded!\n", file: "dist/out", setCwd: true },
});
itBundled("compile/WorkerRelativePathTSExtensionBytecode", {
backend: "cli",
compile: true,
bytecode: true,
files: {
@@ -558,6 +561,7 @@ describe("bundler", () => {
});
itBundled("compile/ImportMetaMain", {
compile: true,
backend: "cli",
files: {
"/entry.ts": /* js */ `
// test toString on function to observe what the inlined value was

View File

@@ -274,6 +274,7 @@ describe("bundler", () => {
"/entry.js": /* js */ `console.log(1)`,
},
outdir: "/out",
backend: "cli",
loader: {
".cool": "wtf",
},
@@ -2071,6 +2072,7 @@ describe("bundler", () => {
});
itBundled("edgecase/OutWithTwoFiles", {
backend: "cli",
files: {
"/entry.ts": `
import index from './index.html' with { type: 'file' }

View File

@@ -4,6 +4,7 @@ import { itBundled } from "./expectBundled";
describe("bundler", () => {
itBundled("footer/CommentFooter", {
footer: "// developed with love in SF",
backend: "cli",
files: {
"/a.js": `console.log("Hello, world!")`,
},
@@ -16,6 +17,7 @@ describe("bundler", () => {
* This is copyright of [...] ${new Date().getFullYear()}
* do not redistribute without consent of [...]
*/`,
backend: "cli",
files: {
"index.js": `console.log("Hello, world!")`,
},

View File

@@ -237,6 +237,7 @@ describe("bundler", () => {
},
});
itBundled("naming/NonexistantRoot", ({ root }) => ({
backend: "cli",
files: {
"/src/entry.js": /* js */ `
import asset1 from "./asset1.file";

View File

@@ -820,12 +820,13 @@ describe("bundler", () => {
},
external: ["esbuild"],
entryPoints: ["./index.ts"],
backend: "api",
plugins(build) {
const opts = (build as any).initialOptions;
expect(opts.bundle).toEqual(true);
expect(opts.entryPoints).toEqual([join(root, "index.ts")]);
expect(opts.external).toEqual(["esbuild"]);
expect(opts.format).toEqual(undefined);
expect(opts.format).toEqual("esm");
expect(opts.minify).toEqual(false);
expect(opts.minifyIdentifiers).toEqual(undefined);
expect(opts.minifySyntax).toEqual(undefined);

View File

@@ -216,6 +216,7 @@ describe("bundler", () => {
});
itBundled("regression/WindowsBackslashAssertion1#9974", {
backend: "cli",
files: {
"/test/entry.ts": `
import { loadFonts } from "../base";

View File

@@ -7,6 +7,7 @@ describe("bundler", () => {
compile: {
execArgv: ["--title=CompileExecArgvDualBehavior", "--smol"],
},
backend: "cli",
files: {
"/entry.ts": /* js */ `
// Test that --compile-exec-argv both processes flags AND populates execArgv
@@ -52,6 +53,7 @@ describe("bundler", () => {
compile: {
execArgv: ["--user-agent=test-agent", "--smol"],
},
backend: "cli",
files: {
"/entry.ts": /* js */ `
// Test that compile-exec-argv options don't appear in process.argv
@@ -115,6 +117,7 @@ describe("bundler", () => {
compile: {
execArgv: ["--user-agent=test-agent", "--smol"],
},
backend: "cli",
files: {
"/entry.ts": /* js */ `
// Test that user arguments are properly included when exec argv is present

View File

@@ -1241,6 +1241,7 @@ describe("bundler", () => {
},
minifyWhitespace: minify,
emitDCEAnnotations: emitDCEAnnotations,
backend: "cli",
onAfterBundle(api) {
const code = api.readFile("/out.js");
expect(code).not.toContain("_yes"); // should not contain any *_yes variables

View File

@@ -1604,6 +1604,46 @@ describe("bundler", () => {
stdout: "hi\n",
},
});
itBundled("default/CircularTLADependency2", {
files: {
"/entry.ts": /* ts */ `
await import("./b.ts");
`,
"/b.ts": /* ts */ `
import { c } from "./c.ts";
console.log(c);
export const b = "b";
`,
"/c.ts": /* ts */ `
import { d } from "./d.ts";
console.log(d);
export const c = "c";
`,
"/d.ts": /* ts */ `
const { e } = await import("./e.ts");
console.log(e);
import { f } from "./f.ts";
console.log(f);
export const d = "d";
`,
"/e.ts": /* ts */ `
export const e = "e";
`,
"/f.ts": /* ts */ `
import { g } from "./g.ts";
console.log(g);
export const f = "f";
`,
"/g.ts": /* ts */ `
import { c } from "./c.ts";
console.log(c);
export const g = "g";
`,
},
run: {
stdout: "c\ng\ne\nf\nd\nc\n",
},
});
itBundled("default/ThisOutsideFunctionRenamedToExports", {
files: {
"/entry.js": /* js */ `

View File

@@ -572,7 +572,22 @@ function expectBundled(
return (async () => {
if (!backend) {
backend = plugins !== undefined ? "api" : "cli";
backend =
dotenv ||
jsx.factory ||
jsx.fragment ||
jsx.runtime ||
jsx.importSource ||
typeof production !== "undefined" ||
bundling === false ||
(run && target === "node") ||
emitDCEAnnotations ||
bundleWarnings ||
env ||
run?.validate ||
define
? "cli"
: "api";
}
let root = path.join(
@@ -1043,7 +1058,12 @@ function expectBundled(
const buildConfig: BuildConfig = {
entrypoints: [...entryPaths, ...(entryPointsRaw ?? [])],
external,
banner,
format,
footer,
root: outbase,
packages,
loader,
minify: {
whitespace: minifyWhitespace,
identifiers: minifyIdentifiers,
@@ -1102,7 +1122,13 @@ for (const [key, blob] of build.outputs) {
configRef = buildConfig;
let build: BuildOutput;
try {
build = await Bun.build(buildConfig);
const cwd = process.cwd();
process.chdir(root);
try {
build = await Bun.build(buildConfig);
} finally {
process.chdir(cwd);
}
} catch (e) {
if (e instanceof AggregateError) {
build = {

View File

@@ -152,6 +152,7 @@ console.log(favicon);
// Test manifest with multiple HTML imports
itBundled("html-import/multiple-manifests", {
outdir: "out/",
backend: "cli",
files: {
"/server.js": `
import homeHtml from "./home.html";
@@ -301,6 +302,7 @@ console.log("About manifest:", aboutHtml);
// Test that import with {type: 'file'} still works as a file import
itBundled("html-import/with-type-file-attribute", {
outdir: "out/",
backend: "cli",
files: {
"/entry.js": `
import htmlUrl from "./page.html" with { type: 'file' };

View File

@@ -448,3 +448,81 @@ test.todo("junit reporter", async () => {
.trim();
expect(stripAnsi(report)).toMatchSnapshot();
});
// This test checks that Bun.inspect / console.log output for an Error instance is
// ~the same whether or not you accessed `error.stack` first.
//
// Since, the second time around, we re-parse the output of the `error.stack` getter,
// we need to make sure it doesn't lose frames.
test("error.stack doesn't lose frames", () => {
function top() {
function middle() {
function bottom() {
throw new Error("test");
}
bottom();
}
middle();
}
function accessErrorStackProperty(yes: boolean): Error {
try {
top();
expect.unreachable();
} catch (e: any) {
if (yes) {
e.stack;
}
return e as Error;
}
}
function bottom(yes: boolean) {
return accessErrorStackProperty(yes);
}
Object.defineProperty(top, "name", { value: "IGNORE_ME_BEFORE_THIS_LINE" });
Object.defineProperty(bottom, "name", { value: "IGNORE_ME_AFTER_THIS_LINE" });
let yes = Bun.inspect(bottom(true));
yes = yes.slice(yes.indexOf("^") + 1);
yes = yes.slice(yes.indexOf("\n"));
yes = yes
.replaceAll(import.meta.dirname, "<dir>")
.replaceAll("\\", "/")
.replace(/\d+/gim, "<num>");
let no = Bun.inspect(bottom(false));
no = no.slice(no.indexOf("^") + 1);
no = no.slice(no.indexOf("\n"));
no = no
.replaceAll(import.meta.dirname, "<dir>")
.replaceAll("\\", "/")
.replace(/\d+/gim, "<num>");
expect(no).toMatchInlineSnapshot(`
"
error: test
at bottom (<dir>/inspect.test.ts:<num>:<num>)
at middle (<dir>/inspect.test.ts:<num>:<num>)
at IGNORE_ME_BEFORE_THIS_LINE (<dir>/inspect.test.ts:<num>:<num>)
at accessErrorStackProperty (<dir>/inspect.test.ts:<num>:<num>)
at <anonymous> (<dir>/inspect.test.ts:<num>:<num>)
"
`);
// In Bun v1.2.20 and lower, we would only have the first frame here.
expect(yes).toMatchInlineSnapshot(`
"
error: test
at bottom (<dir>/inspect.test.ts:<num>:<num>)
at middle (<dir>/inspect.test.ts:<num>:<num>)
at IGNORE_ME_BEFORE_THIS_LINE (<dir>/inspect.test.ts:<num>:<num>)
at accessErrorStackProperty (<dir>/inspect.test.ts:<num>:<num>)
at <dir>/inspect.test.ts:<num>:<num>
"
`);
// We allow it to differ by the existence of <anonymous> as a string. But that's it.
expect(no.split("\n").slice(0, -2).join("\n").trim()).toBe(yes.split("\n").slice(0, -2).join("\n").trim());
});
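A minimal standalone sketch of the invariant the test above asserts — that materializing `error.stack` early does not drop frames — might look like this (function names here are illustrative, not Bun internals):

```typescript
function inner(): never {
  throw new Error("test");
}
function outer(): void {
  inner();
}

// Capture the same error twice: once touching `.stack` inside the catch,
// once leaving it untouched until later.
function capture(touchStack: boolean): Error {
  try {
    outer();
  } catch (e) {
    if (touchStack) void (e as Error).stack; // force early stack materialization
    return e as Error;
  }
  throw new Error("unreachable");
}

const countFrames = (err: Error): number =>
  (err.stack ?? "").split("\n").filter(line => line.trim().startsWith("at ")).length;

// Both traces should retain the same number of frames.
console.log(countFrames(capture(true)) === countFrames(capture(false)));
```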

View File

@@ -0,0 +1,117 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe, tempDir } from "harness";
test("bun pm ls --json separates dependency types correctly", async () => {
using dir = tempDir("pm-ls-types", {
"package.json": JSON.stringify({
name: "test-dep-types",
version: "1.0.0",
dependencies: {
"is-number": "7.0.0",
},
devDependencies: {
"is-odd": "3.0.1",
},
optionalDependencies: {
"is-even": "1.0.0",
},
}),
});
// Install dependencies
await using installProc = Bun.spawn({
cmd: [bunExe(), "install"],
cwd: String(dir),
env: bunEnv,
stderr: "pipe",
});
const installExitCode = await installProc.exited;
expect(installExitCode).toBe(0);
// Test JSON output with separated dependency types
await using proc = Bun.spawn({
cmd: [bunExe(), "pm", "ls", "--json"],
cwd: String(dir),
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(exitCode).toBe(0);
expect(stderr).toBe("");
const json = JSON.parse(stdout);
// Check that dependencies are in the right sections
expect(json).toHaveProperty("dependencies");
expect(json.dependencies).toHaveProperty("is-number");
expect(json.dependencies["is-number"]).toHaveProperty("from", "7.0.0");
expect(json).toHaveProperty("devDependencies");
expect(json.devDependencies).toHaveProperty("is-odd");
expect(json.devDependencies["is-odd"]).toHaveProperty("from", "3.0.1");
expect(json).toHaveProperty("optionalDependencies");
expect(json.optionalDependencies).toHaveProperty("is-even");
expect(json.optionalDependencies["is-even"]).toHaveProperty("from", "1.0.0");
// Ensure no mixing between sections
expect(json.dependencies).not.toHaveProperty("is-odd");
expect(json.dependencies).not.toHaveProperty("is-even");
expect(json.devDependencies).not.toHaveProperty("is-number");
expect(json.devDependencies).not.toHaveProperty("is-even");
expect(json.optionalDependencies).not.toHaveProperty("is-number");
expect(json.optionalDependencies).not.toHaveProperty("is-odd");
});
test("bun pm ls --json --depth=1 includes nested deps without 'from' field", async () => {
using dir = tempDir("pm-ls-nested-from", {
"package.json": JSON.stringify({
name: "test-nested",
version: "1.0.0",
dependencies: {
"is-odd": "3.0.1", // This depends on is-number
},
}),
});
// Install dependencies
await using installProc = Bun.spawn({
cmd: [bunExe(), "install"],
cwd: String(dir),
env: bunEnv,
stderr: "pipe",
});
const installExitCode = await installProc.exited;
expect(installExitCode).toBe(0);
// Test JSON output with depth=1
await using proc = Bun.spawn({
cmd: [bunExe(), "pm", "ls", "--json", "--depth=1"],
cwd: String(dir),
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(exitCode).toBe(0);
expect(stderr).toBe("");
const json = JSON.parse(stdout);
// Root dependency should have 'from' field
expect(json.dependencies["is-odd"]).toHaveProperty("from", "3.0.1");
// Nested dependencies should NOT have 'from' field
if (json.dependencies["is-odd"].dependencies?.["is-number"]) {
expect(json.dependencies["is-odd"].dependencies["is-number"]).not.toHaveProperty("from");
expect(json.dependencies["is-odd"].dependencies["is-number"]).toHaveProperty("version");
expect(json.dependencies["is-odd"].dependencies["is-number"]).toHaveProperty("resolved");
}
});
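The npm-compatible JSON shape these tests assert can be sketched as a TypeScript type — a reconstruction from the assertions above, not an official schema:

```typescript
// Shape of a single entry in `bun pm ls --json` output, inferred from the tests.
interface LsNode {
  version: string;
  resolved?: string; // resolved URL from the lockfile's resolution data
  overridden?: boolean;
  from?: string; // original version spec; only present at the root level
  dependencies?: Record<string, LsNode>; // present only within the --depth limit
}

// The top level mirrors package.json's dependency groupings.
interface LsOutput {
  name: string;
  version: string;
  dependencies?: Record<string, LsNode>;
  devDependencies?: Record<string, LsNode>;
  optionalDependencies?: Record<string, LsNode>;
  peerDependencies?: Record<string, LsNode>;
}

const example: LsOutput = {
  name: "test-nested",
  version: "1.0.0",
  dependencies: {
    "is-odd": {
      version: "3.0.1",
      from: "3.0.1",
      dependencies: {
        "is-number": { version: "6.0.0" }, // nested: no "from" field
      },
    },
  },
};

console.log(example.dependencies?.["is-odd"].dependencies?.["is-number"].version);
```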

test/cli/pm/pm-ls.test.ts (new file, 135 lines)
View File

@@ -0,0 +1,135 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe, tempDir } from "harness";
test("bun pm ls --json outputs valid JSON", async () => {
using dir = tempDir("pm-ls-json", {
"package.json": JSON.stringify({
name: "test-project",
version: "1.0.0",
dependencies: {
"is-number": "7.0.0",
},
}),
});
// Install dependencies
await using installProc = Bun.spawn({
cmd: [bunExe(), "install"],
cwd: String(dir),
env: bunEnv,
stderr: "pipe",
});
const installExitCode = await installProc.exited;
expect(installExitCode).toBe(0);
// Test JSON output
await using proc = Bun.spawn({
cmd: [bunExe(), "pm", "ls", "--json"],
cwd: String(dir),
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(exitCode).toBe(0);
expect(stderr).toBe("");
// Parse JSON output
const json = JSON.parse(stdout);
expect(json).toHaveProperty("name", "test-project");
expect(json).toHaveProperty("version", "1.0.0");
expect(json).toHaveProperty("dependencies");
expect(json.dependencies).toHaveProperty("is-number");
expect(json.dependencies["is-number"]).toHaveProperty("version", "7.0.0");
expect(json.dependencies["is-number"]).toHaveProperty("resolved");
expect(json.dependencies["is-number"]).toHaveProperty("overridden", false);
});
test("bun pm ls --json --depth=0 limits depth", async () => {
using dir = tempDir("pm-ls-depth", {
"package.json": JSON.stringify({
name: "test-project",
version: "1.0.0",
dependencies: {
"is-number": "7.0.0", // This has no dependencies itself
},
}),
});
// Install dependencies
await using installProc = Bun.spawn({
cmd: [bunExe(), "install"],
cwd: String(dir),
env: bunEnv,
stderr: "pipe",
});
const installExitCode = await installProc.exited;
expect(installExitCode).toBe(0);
// Test depth=0 (no nested dependencies)
await using proc = Bun.spawn({
cmd: [bunExe(), "pm", "ls", "--json", "--depth=0"],
cwd: String(dir),
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(exitCode).toBe(0);
expect(stderr).toBe("");
const json = JSON.parse(stdout);
expect(json.dependencies["is-number"]).toHaveProperty("version");
expect(json.dependencies["is-number"]).not.toHaveProperty("dependencies");
});
test("bun pm ls --depth limits tree output", async () => {
using dir = tempDir("pm-ls-tree-depth", {
"package.json": JSON.stringify({
name: "test-project",
version: "1.0.0",
dependencies: {
"is-number": "7.0.0",
},
}),
});
// Install dependencies
await using installProc = Bun.spawn({
cmd: [bunExe(), "install"],
cwd: String(dir),
env: bunEnv,
stderr: "pipe",
});
const installExitCode = await installProc.exited;
expect(installExitCode).toBe(0);
// Test regular tree with depth=0
await using proc = Bun.spawn({
cmd: [bunExe(), "pm", "ls", "--depth=0"],
cwd: String(dir),
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(exitCode).toBe(0);
expect(stderr).toBe("");
// Should only show direct dependencies
const lines = stdout.trim().split("\n");
expect(lines.length).toBeGreaterThan(0);
expect(stdout).toContain("is-number@7.0.0");
// Should not show any nested tree structure (no │, ├──, or └── connectors)
const hasNestedDeps = lines.some(line => line.includes("│"));
expect(hasNestedDeps).toBe(false);
});
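The depth limiting these tests exercise can be illustrated with a generic tree printer — a hedged sketch, not Bun's actual implementation (the names and connector characters are assumptions):

```typescript
interface Dep {
  name: string;
  version: string;
  deps?: Dep[];
}

// Print a dependency tree, recursing only while `depth < maxDepth`.
function printTree(dep: Dep, maxDepth: number, depth = 0, prefix = ""): string[] {
  const lines = [`${prefix}${dep.name}@${dep.version}`];
  if (depth < maxDepth) {
    for (const child of dep.deps ?? []) {
      lines.push(...printTree(child, maxDepth, depth + 1, prefix + "├── "));
    }
  }
  return lines;
}

const root: Dep = {
  name: "is-odd",
  version: "3.0.1",
  deps: [{ name: "is-number", version: "6.0.0" }],
};

console.log(printTree(root, 0).join("\n")); // --depth=0: only the root line
console.log(printTree(root, 1).join("\n")); // --depth=1: includes the nested dep
```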

View File

@@ -1,297 +0,0 @@
import { test, expect } from "bun:test";
import { bunEnv, bunExe, tempDir, normalizeBunSnapshot } from "harness";
import path from "path";
test("bun --filter respects workspace dependency order", async () => {
using dir = tempDir("filter-deps", {
"package.json": JSON.stringify({
name: "monorepo",
private: true,
workspaces: ["packages/*"],
}),
"packages/a/package.json": JSON.stringify({
name: "a",
version: "1.0.0",
scripts: {
build: "echo 'Building A' && sleep 0.5 && echo 'A built' && echo 'export const value = 42;' > dist/index.js",
prebuild: "mkdir -p dist",
},
}),
"packages/b/package.json": JSON.stringify({
name: "b",
version: "1.0.0",
dependencies: {
a: "workspace:*",
},
scripts: {
build: "echo 'Building B' && test -f ../a/dist/index.js && echo 'B built successfully'",
},
}),
});
await using proc = Bun.spawn({
cmd: [bunExe(), "--filter", "*", "build"],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [stdoutText, stderrText, exitCode] = await Promise.all([
proc.stdout.text(),
proc.stderr.text(),
proc.exited,
]);
expect(exitCode).toBe(0);
expect(stderrText).toBe("");
// Check that A builds before B
const aBuiltIndex = stdoutText.indexOf("A built");
const bBuildingIndex = stdoutText.indexOf("Building B");
expect(aBuiltIndex).toBeGreaterThan(-1);
expect(bBuildingIndex).toBeGreaterThan(-1);
expect(aBuiltIndex).toBeLessThan(bBuildingIndex);
});
test("bun --filter handles complex dependency chains", async () => {
using dir = tempDir("filter-deps-chain", {
"package.json": JSON.stringify({
name: "monorepo",
private: true,
workspaces: ["packages/*"],
}),
"packages/a/package.json": JSON.stringify({
name: "a",
version: "1.0.0",
scripts: {
build: "echo 'Building A' && sleep 0.3 && echo 'A built' && echo 'export const a = 1;' > index.js",
},
}),
"packages/b/package.json": JSON.stringify({
name: "b",
version: "1.0.0",
dependencies: {
a: "workspace:*",
},
scripts: {
build: "echo 'Building B' && test -f ../a/index.js && sleep 0.3 && echo 'B built' && echo 'export const b = 2;' > index.js",
},
}),
"packages/c/package.json": JSON.stringify({
name: "c",
version: "1.0.0",
dependencies: {
b: "workspace:*",
},
scripts: {
build: "echo 'Building C' && test -f ../b/index.js && echo 'C built'",
},
}),
});
await using proc = Bun.spawn({
cmd: [bunExe(), "--filter", "*", "build"],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [stdoutText, stderrText, exitCode] = await Promise.all([
proc.stdout.text(),
proc.stderr.text(),
proc.exited,
]);
expect(exitCode).toBe(0);
expect(stderrText).toBe("");
// Check the build order
const aBuiltIndex = stdoutText.indexOf("A built");
const bBuildingIndex = stdoutText.indexOf("Building B");
const bBuiltIndex = stdoutText.indexOf("B built");
const cBuildingIndex = stdoutText.indexOf("Building C");
expect(aBuiltIndex).toBeGreaterThan(-1);
expect(bBuildingIndex).toBeGreaterThan(-1);
expect(bBuiltIndex).toBeGreaterThan(-1);
expect(cBuildingIndex).toBeGreaterThan(-1);
// A must be built before B starts
expect(aBuiltIndex).toBeLessThan(bBuildingIndex);
// B must be built before C starts
expect(bBuiltIndex).toBeLessThan(cBuildingIndex);
});
test("bun --filter handles parallel execution of independent packages", async () => {
using dir = tempDir("filter-deps-parallel", {
"package.json": JSON.stringify({
name: "monorepo",
private: true,
workspaces: ["packages/*"],
}),
"packages/a/package.json": JSON.stringify({
name: "a",
version: "1.0.0",
scripts: {
build: "echo 'Building A' && sleep 0.3 && echo 'A built'",
},
}),
"packages/b/package.json": JSON.stringify({
name: "b",
version: "1.0.0",
scripts: {
build: "echo 'Building B' && sleep 0.3 && echo 'B built'",
},
}),
"packages/c/package.json": JSON.stringify({
name: "c",
version: "1.0.0",
dependencies: {
a: "workspace:*",
b: "workspace:*",
},
scripts: {
build: "echo 'Building C' && echo 'C built'",
},
}),
});
const startTime = Date.now();
await using proc = Bun.spawn({
cmd: [bunExe(), "--filter", "*", "build"],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [stdoutText, stderrText, exitCode] = await Promise.all([
proc.stdout.text(),
proc.stderr.text(),
proc.exited,
]);
const endTime = Date.now();
expect(exitCode).toBe(0);
expect(stderrText).toBe("");
// Check that A and B ran in parallel (total time should be ~300ms, not ~600ms)
const duration = endTime - startTime;
expect(duration).toBeLessThan(500);
// Check that C runs after both A and B
const aBuiltIndex = stdoutText.indexOf("A built");
const bBuiltIndex = stdoutText.indexOf("B built");
const cBuildingIndex = stdoutText.indexOf("Building C");
expect(aBuiltIndex).toBeGreaterThan(-1);
expect(bBuiltIndex).toBeGreaterThan(-1);
expect(cBuildingIndex).toBeGreaterThan(-1);
expect(aBuiltIndex).toBeLessThan(cBuildingIndex);
expect(bBuiltIndex).toBeLessThan(cBuildingIndex);
});
test("bun --filter fails when dependency fails", async () => {
using dir = tempDir("filter-deps-failure", {
"package.json": JSON.stringify({
name: "monorepo",
private: true,
workspaces: ["packages/*"],
}),
"packages/a/package.json": JSON.stringify({
name: "a",
version: "1.0.0",
scripts: {
build: "echo 'Building A' && exit 1",
},
}),
"packages/b/package.json": JSON.stringify({
name: "b",
version: "1.0.0",
dependencies: {
a: "workspace:*",
},
scripts: {
build: "echo 'Should not run'",
},
}),
});
await using proc = Bun.spawn({
cmd: [bunExe(), "--filter", "*", "build"],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [stdoutText, stderrText, exitCode] = await Promise.all([
proc.stdout.text(),
proc.stderr.text(),
proc.exited,
]);
expect(exitCode).toBe(1);
expect(stdoutText).toContain("Building A");
expect(stdoutText).not.toContain("Should not run");
});
test.skip("bun --filter with workspace: protocol dependency", async () => {
using dir = tempDir("filter-workspace-protocol", {
"package.json": JSON.stringify({
name: "monorepo",
private: true,
workspaces: ["packages/*"],
}),
"packages/lib/package.json": JSON.stringify({
name: "@test/lib",
version: "1.0.0",
scripts: {
build: "echo 'Building lib' && mkdir -p dist && echo 'done' > dist/lib.txt && sleep 0.1 && echo 'Lib built'",
},
}),
"packages/app/package.json": JSON.stringify({
name: "@test/app",
version: "1.0.0",
dependencies: {
"@test/lib": "workspace:^1.0.0",
},
scripts: {
build: "echo 'Building app' && test -f ../lib/dist/lib.txt && echo 'App built'",
},
}),
});
await using proc = Bun.spawn({
cmd: [bunExe(), "--filter", "*", "build"],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [stdoutText, stderrText, exitCode] = await Promise.all([
proc.stdout.text(),
proc.stderr.text(),
proc.exited,
]);
expect(exitCode).toBe(0);
expect(stderrText).toBe("");
// Check that lib builds before app
const libBuildingIndex = stdoutText.indexOf("Building lib");
const appBuildingIndex = stdoutText.indexOf("Building app");
expect(libBuildingIndex).toBeGreaterThan(-1);
expect(appBuildingIndex).toBeGreaterThan(-1);
// Since lib must complete before app starts, app should start after lib builds
const libBuiltIndex = stdoutText.indexOf("Lib built");
expect(libBuiltIndex).toBeGreaterThan(-1);
expect(libBuiltIndex).toBeLessThan(appBuildingIndex);
});


@@ -862,11 +862,6 @@ export function isDockerEnabled(): boolean {
return false;
}
// TODO: investigate why its not starting on Linux arm64
if ((isLinux && process.arch === "arm64") || isMacOS) {
return false;
}
try {
const info = execSync(`${dockerCLI} info`, { stdio: ["ignore", "pipe", "inherit"] });
return info.toString().indexOf("Server Version:") !== -1;
@@ -924,7 +919,7 @@ export async function describeWithContainer(
return;
}
const { arch, platform } = process;
if ((archs && !archs?.includes(arch)) || platform === "win32" || platform === "darwin") {
if ((archs && !archs?.includes(arch)) || platform === "win32") {
test.skip(`docker image is not supported on ${platform}/${arch}, skipped: ${image}`, () => {});
return false;
}


@@ -22,7 +22,7 @@
"allocator.ptr !=": 1,
"allocator.ptr ==": 0,
"global.hasException": 28,
"globalObject.hasException": 47,
"globalObject.hasException": 48,
"globalThis.hasException": 133,
"std.StringArrayHashMap(": 1,
"std.StringArrayHashMapUnmanaged(": 11,


@@ -0,0 +1,277 @@
// Copyright Joyent, Inc. and other Node contributors.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the
// "Software"), to deal in the Software without restriction, including
// without limitation the rights to use, copy, modify, merge, publish,
// distribute, sublicense, and/or sell copies of the Software, and to permit
// persons to whom the Software is furnished to do so, subject to the
// following conditions:
//
// The above copyright notice and this permission notice shall be included
// in all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
// USE OR OTHER DEALINGS IN THE SOFTWARE.
'use strict';
const common = require('../common');
const tmpdir = require('../common/tmpdir');
const child_process = require('child_process');
const assert = require('assert');
const fs = require('fs');
const fixtures = require('../common/fixtures');
const fn = fixtures.path('elipses.txt');
const rangeFile = fixtures.path('x.txt');
function test1(options) {
let paused = false;
let bytesRead = 0;
const file = fs.createReadStream(fn, options);
const fileSize = fs.statSync(fn).size;
assert.strictEqual(file.bytesRead, 0);
file.on('open', common.mustCall(function(fd) {
file.length = 0;
assert.strictEqual(typeof fd, 'number');
assert.strictEqual(file.bytesRead, 0);
assert.ok(file.readable);
// GH-535
file.pause();
file.resume();
file.pause();
file.resume();
}));
file.on('data', function(data) {
assert.ok(data instanceof Buffer);
assert.ok(data.byteOffset % 8 === 0);
assert.ok(!paused);
file.length += data.length;
bytesRead += data.length;
assert.strictEqual(file.bytesRead, bytesRead);
paused = true;
file.pause();
setTimeout(function() {
paused = false;
file.resume();
}, 10);
});
file.on('end', common.mustCall(function(chunk) {
assert.strictEqual(bytesRead, fileSize);
assert.strictEqual(file.bytesRead, fileSize);
}));
file.on('close', common.mustCall(function() {
assert.strictEqual(bytesRead, fileSize);
assert.strictEqual(file.bytesRead, fileSize);
}));
process.on('exit', function() {
assert.strictEqual(file.length, 30000);
});
}
test1({});
test1({
fs: {
open: common.mustCall(fs.open),
read: common.mustCallAtLeast(fs.read, 1),
close: common.mustCall(fs.close),
}
});
{
const file = fs.createReadStream(fn, common.mustNotMutateObjectDeep({ encoding: 'utf8' }));
file.length = 0;
file.on('data', function(data) {
assert.strictEqual(typeof data, 'string');
file.length += data.length;
for (let i = 0; i < data.length; i++) {
// http://www.fileformat.info/info/unicode/char/2026/index.htm
assert.strictEqual(data[i], '\u2026');
}
});
file.on('close', common.mustCall());
process.on('exit', function() {
assert.strictEqual(file.length, 10000);
});
}
{
const file =
fs.createReadStream(rangeFile, common.mustNotMutateObjectDeep({ bufferSize: 1, start: 1, end: 2 }));
let contentRead = '';
file.on('data', function(data) {
contentRead += data.toString('utf-8');
});
file.on('end', common.mustCall(function(data) {
assert.strictEqual(contentRead, 'yz');
}));
}
{
const file = fs.createReadStream(rangeFile, common.mustNotMutateObjectDeep({ bufferSize: 1, start: 1 }));
file.data = '';
file.on('data', function(data) {
file.data += data.toString('utf-8');
});
file.on('end', common.mustCall(function() {
assert.strictEqual(file.data, 'yz\n');
}));
}
{
// Ref: https://github.com/nodejs/node-v0.x-archive/issues/2320
const file = fs.createReadStream(rangeFile, common.mustNotMutateObjectDeep({ bufferSize: 1.23, start: 1 }));
file.data = '';
file.on('data', function(data) {
file.data += data.toString('utf-8');
});
file.on('end', common.mustCall(function() {
assert.strictEqual(file.data, 'yz\n');
}));
}
assert.throws(
() => {
fs.createReadStream(rangeFile, common.mustNotMutateObjectDeep({ start: 10, end: 2 }));
},
{
code: 'ERR_OUT_OF_RANGE',
message: 'The value of "start" is out of range. It must be <= "end"' +
' (here: 2). Received 10',
name: 'RangeError'
});
{
const stream = fs.createReadStream(rangeFile, common.mustNotMutateObjectDeep({ start: 0, end: 0 }));
stream.data = '';
stream.on('data', function(chunk) {
stream.data += chunk;
});
stream.on('end', common.mustCall(function() {
assert.strictEqual(stream.data, 'x');
}));
}
{
// Verify that end works when start is not specified.
const stream = new fs.createReadStream(rangeFile, common.mustNotMutateObjectDeep({ end: 1 }));
stream.data = '';
stream.on('data', function(chunk) {
stream.data += chunk;
});
stream.on('end', common.mustCall(function() {
assert.strictEqual(stream.data, 'xy');
}));
}
if (!common.isWindows) {
// Verify that end works when start is not specified, and we do not try to
// use positioned reads. This makes sure that this keeps working for
// non-seekable file descriptors.
tmpdir.refresh();
const filename = `${tmpdir.path}/foo.pipe`;
const mkfifoResult = child_process.spawnSync('mkfifo', [filename]);
if (!mkfifoResult.error) {
child_process.exec(...common.escapePOSIXShell`echo "xyz foobar" > "${filename}"`);
const stream = new fs.createReadStream(filename, common.mustNotMutateObjectDeep({ end: 1 }));
stream.data = '';
stream.on('data', function(chunk) {
stream.data += chunk;
});
stream.on('end', common.mustCall(function() {
assert.strictEqual(stream.data, 'xy');
fs.unlinkSync(filename);
}));
} else {
common.printSkipMessage('mkfifo not available');
}
}
{
// Pause and then resume immediately.
const pauseRes = fs.createReadStream(rangeFile);
pauseRes.pause();
pauseRes.resume();
}
{
let file = fs.createReadStream(rangeFile, common.mustNotMutateObjectDeep({ autoClose: false }));
let data = '';
file.on('data', function(chunk) { data += chunk; });
file.on('end', common.mustCall(function() {
assert.strictEqual(data, 'xyz\n');
process.nextTick(function() {
assert(!file.closed);
assert(!file.destroyed);
fileNext();
});
}));
function fileNext() {
// This will tell us if the fd is usable again or not.
file = fs.createReadStream(null, common.mustNotMutateObjectDeep({ fd: file.fd, start: 0 }));
file.data = '';
file.on('data', function(data) {
file.data += data;
});
file.on('end', common.mustCall(function(err) {
assert.strictEqual(file.data, 'xyz\n');
}));
process.on('exit', function() {
assert(file.closed);
assert(file.destroyed);
});
}
}
{
// Just to make sure autoClose won't close the stream because of error.
const file = fs.createReadStream(null, common.mustNotMutateObjectDeep({ fd: 13337, autoClose: false }));
file.on('data', common.mustNotCall());
file.on('error', common.mustCall());
process.on('exit', function() {
assert(!file.closed);
assert(!file.destroyed);
assert(file.fd);
});
}
{
// Make sure stream is destroyed when file does not exist.
const file = fs.createReadStream('/path/to/file/that/does/not/exist');
file.on('data', common.mustNotCall());
file.on('error', common.mustCall());
process.on('exit', function() {
assert(file.closed);
assert(file.destroyed);
});
}


@@ -112,21 +112,23 @@ if (docker) {
});
test("Idle timeout works at start", async () => {
const onclose = mock();
const onClosePromise = Promise.withResolvers();
const onclose = mock(err => {
onClosePromise.resolve(err);
});
const onconnect = mock();
await using sql = new SQL({
...options,
idle_timeout: 1,
onconnect,
onclose,
max: 1,
});
let error: any;
try {
await sql`select SLEEP(2)`;
} catch (e) {
error = e;
}
expect(error.code).toBe(`ERR_MYSQL_IDLE_TIMEOUT`);
await sql.connect();
const err = await onClosePromise.promise;
expect(err).toBeInstanceOf(SQL.SQLError);
expect(err).toBeInstanceOf(SQL.MySQLError);
expect((err as SQL.MySQLError).code).toBe(`ERR_MYSQL_IDLE_TIMEOUT`);
expect(onconnect).toHaveBeenCalled();
expect(onclose).toHaveBeenCalledTimes(1);
});
@@ -140,8 +142,10 @@ if (docker) {
await using sql = new SQL({
...options,
idle_timeout: 1,
connection_timeout: 5,
onconnect,
onclose,
max: 1,
});
expect<[{ x: number }]>(await sql`select 123 as x`).toEqual([{ x: 123 }]);
expect(onconnect).toHaveBeenCalledTimes(1);
@@ -158,11 +162,12 @@ if (docker) {
onClosePromise.resolve(err);
});
const onconnect = mock();
const sql = new SQL({
await using sql = new SQL({
...options,
max_lifetime: 1,
onconnect,
onclose,
max: 1,
});
let error: unknown;
expect<[{ x: number }]>(await sql`select 1 as x`).toEqual([{ x: 1 }]);
@@ -616,6 +621,7 @@ if (docker) {
expect(e.message).toBe("password error");
}
});
test("Support dynamic async password function that throws", async () => {
await using sql = new SQL({
...options,
@@ -633,6 +639,7 @@ if (docker) {
expect(e.message).toBe("password error");
}
});
test("sql file", async () => {
await using sql = new SQL(options);
expect((await sql.file(rel("select.sql")))[0].x).toBe(1);
@@ -869,31 +876,33 @@ if (docker) {
sql.flush();
});
test.each(["connect_timeout", "connectTimeout", "connectionTimeout", "connection_timeout"] as const)(
"connection timeout key %p throws",
async key => {
const server = net.createServer().listen();
describe("timeouts", () => {
test.each(["connect_timeout", "connectTimeout", "connectionTimeout", "connection_timeout"] as const)(
"connection timeout key %p throws",
async key => {
const server = net.createServer().listen();
const port = (server.address() as import("node:net").AddressInfo).port;
const port = (server.address() as import("node:net").AddressInfo).port;
const sql = new SQL({ adapter: "mysql", port, host: "127.0.0.1", [key]: 0.2 });
const sql = new SQL({ adapter: "mysql", port, host: "127.0.0.1", max: 1, [key]: 0.2 });
try {
await sql`select 1`;
throw new Error("should not reach");
} catch (e) {
expect(e).toBeInstanceOf(Error);
expect(e.code).toBe("ERR_MYSQL_CONNECTION_TIMEOUT");
expect(e.message).toMatch(/Connection timed out after 200ms/);
} finally {
sql.close();
server.close();
}
},
{
timeout: 1000,
},
);
try {
await sql`select 1`;
throw new Error("should not reach");
} catch (e) {
expect(e).toBeInstanceOf(Error);
expect(e.code).toBe("ERR_MYSQL_CONNECTION_TIMEOUT");
expect(e.message).toMatch(/Connection timeout after 200ms/);
} finally {
sql.close();
server.close();
}
},
{
timeout: 1000,
},
);
});
test("Array returns rows as arrays of columns", async () => {
await using sql = new SQL(options);
return [(await sql`select CAST(1 AS SIGNED) as x`.values())[0][0], 1];


@@ -2685,7 +2685,7 @@ if (isDockerEnabled()) {
expect(e).toBeInstanceOf(SQL.SQLError);
expect(e).toBeInstanceOf(SQL.PostgresError);
expect(e.code).toBe("ERR_POSTGRES_CONNECTION_TIMEOUT");
expect(e.message).toMatch(/Connection timed out after 200ms/);
expect(e.message).toMatch(/Connection timeout after 200ms/);
} finally {
sql.close();
server.close();


@@ -0,0 +1,24 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe } from "harness";
test("issue #22650 - shell crash with && operator followed by external command", async () => {
// Minimal reproduction: echo && <external command>
// This triggers the crash because after the first command succeeds,
// the shell tries to spawn an external process but top_level_dir is not set
await using proc = Bun.spawn({
cmd: [bunExe(), "exec", "echo test && node --version"],
env: bunEnv,
stderr: "pipe",
stdout: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
// Should not have any errors
expect(stderr).toBe("");
// Should execute both commands successfully
expect(stdout).toContain("test");
expect(stdout).toMatch(/v\d+\.\d+\.\d+/); // Node version pattern
expect(exitCode).toBe(0);
});


@@ -0,0 +1,109 @@
import { expect, test } from "bun:test";
import { openSync } from "fs";
import { bunEnv, bunExe, normalizeBunSnapshot } from "harness";
import tty from "tty";
test("tty.ReadStream should have ref/unref methods when opened on /dev/tty", () => {
// Skip this test if /dev/tty is not available (e.g., in CI without TTY)
let ttyFd: number;
try {
ttyFd = openSync("/dev/tty", "r");
} catch (err: any) {
if (err.code === "ENXIO" || err.code === "ENOENT") {
// No TTY available, skip the test
return;
}
throw err;
}
try {
// Create a tty.ReadStream with the /dev/tty file descriptor
const stream = new tty.ReadStream(ttyFd);
// Verify the stream is recognized as a TTY
expect(stream.isTTY).toBe(true);
// Verify ref/unref methods exist
expect(typeof stream.ref).toBe("function");
expect(typeof stream.unref).toBe("function");
// Verify ref/unref return the stream for chaining
expect(stream.ref()).toBe(stream);
expect(stream.unref()).toBe(stream);
// Clean up - destroy will close the fd
stream.destroy();
} finally {
// Don't double-close the fd - stream.destroy() already closed it
}
});
test("tty.ReadStream ref/unref should behave like Node.js", async () => {
// Skip on Windows - no /dev/tty
if (process.platform === "win32") {
return;
}
// Create a test script that uses tty.ReadStream with ref/unref
const script = `
const fs = require('fs');
const tty = require('tty');
let ttyFd;
try {
ttyFd = fs.openSync('/dev/tty', 'r');
} catch (err) {
// No TTY available
console.log('NO_TTY');
process.exit(0);
}
const stream = new tty.ReadStream(ttyFd);
// Test that ref/unref methods exist and work
if (typeof stream.ref !== 'function' || typeof stream.unref !== 'function') {
console.error('ref/unref methods missing');
process.exit(1);
}
// Unref should allow process to exit
stream.unref();
// Set a timer that would keep process alive if ref() was called
const timer = setTimeout(() => {
console.log('TIMEOUT');
}, 100);
timer.unref();
// Process should exit immediately since both stream and timer are unref'd
console.log('SUCCESS');
// Clean up properly
stream.destroy();
`;
// Write the test script to a temporary file
const path = require("path");
const os = require("os");
const tempFile = path.join(os.tmpdir(), "test-tty-ref-unref-" + Date.now() + ".js");
await Bun.write(tempFile, script);
// Run the script with bun
const proc = Bun.spawn({
cmd: [bunExe(), tempFile],
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [exitCode, stdout, stderr] = await Promise.all([proc.exited, proc.stdout.text(), proc.stderr.text()]);
if (stdout.includes("NO_TTY")) {
// No TTY available in test environment, skip
return;
}
expect(stderr).toBe("");
expect(exitCode).toBe(0);
expect(normalizeBunSnapshot(stdout)).toMatchInlineSnapshot(`"SUCCESS"`);
});


@@ -0,0 +1,242 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe, isWindows, normalizeBunSnapshot, tempDir } from "harness";
import { join } from "path";
// Skip on Windows as it doesn't have /dev/tty
test.skipIf(isWindows)("can reopen /dev/tty after stdin EOF for interactive session", async () => {
// This test ensures that Bun can reopen /dev/tty after stdin reaches EOF,
// which is needed for tools like Claude Code that read piped input then
// switch to interactive mode.
// Create test script that reads piped input then reopens TTY
const testScript = `
const fs = require('fs');
const tty = require('tty');
// Read piped input
let inputData = '';
process.stdin.on('data', (chunk) => {
inputData += chunk;
});
process.stdin.on('end', () => {
console.log('GOT_INPUT:' + inputData.trim());
// After stdin ends, reopen TTY for interaction
try {
const fd = fs.openSync('/dev/tty', 'r+');
console.log('OPENED_TTY:true');
const ttyStream = new tty.ReadStream(fd);
console.log('CREATED_STREAM:true');
console.log('POS:' + ttyStream.pos);
console.log('START:' + ttyStream.start);
// Verify we can set raw mode
if (typeof ttyStream.setRawMode === 'function') {
ttyStream.setRawMode(true);
console.log('SET_RAW_MODE:true');
ttyStream.setRawMode(false);
}
ttyStream.destroy();
fs.closeSync(fd);
console.log('SUCCESS:true');
process.exit(0);
} catch (err) {
console.log('ERROR:' + err.code);
process.exit(1);
}
});
if (process.stdin.isTTY) {
console.log('ERROR:NO_PIPED_INPUT');
process.exit(1);
}
`;
using dir = tempDir("tty-reopen", {});
const scriptPath = join(String(dir), "test.js");
await Bun.write(scriptPath, testScript);
// Check if script command is available (might not be on Alpine by default)
const hasScript = Bun.which("script");
if (!hasScript) {
// Try without script - if /dev/tty isn't available, test will fail appropriately
await using proc = Bun.spawn({
cmd: ["sh", "-c", `echo "test input" | ${bunExe()} ${scriptPath}`],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
// If it fails with ENXIO, skip the test
if (exitCode !== 0 && stdout.includes("ERROR:ENXIO")) {
console.log("Skipping test: requires 'script' command for PTY simulation");
return;
}
// Otherwise check results - snapshot first to see what happened
const output = stdout + (stderr ? "\nSTDERR:\n" + stderr : "");
expect(normalizeBunSnapshot(output, dir)).toMatchInlineSnapshot(`
"GOT_INPUT:test input
OPENED_TTY:true
CREATED_STREAM:true
POS:undefined
START:undefined
SET_RAW_MODE:true
SUCCESS:true"
`);
expect(exitCode).toBe(0);
return;
}
// Use script command to provide a PTY environment
// This simulates a real terminal where /dev/tty is available
// macOS and Linux have different script command syntax
const isMacOS = process.platform === "darwin";
const scriptCmd = isMacOS
? ["script", "-q", "/dev/null", "sh", "-c", `echo "test input" | ${bunExe()} ${scriptPath}`]
: ["script", "-q", "-c", `echo "test input" | ${bunExe()} ${scriptPath}`, "/dev/null"];
await using proc = Bun.spawn({
cmd: scriptCmd,
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
// First snapshot the combined output to see what actually happened
const output = stdout + (stderr ? "\nSTDERR:\n" + stderr : "");
// Use JSON.stringify to make control characters visible
const jsonOutput = JSON.stringify(normalizeBunSnapshot(output, dir));
// macOS script adds control characters, Linux doesn't
const expected = isMacOS
? `"^D\\b\\bGOT_INPUT:test input\\nOPENED_TTY:true\\nCREATED_STREAM:true\\nPOS:undefined\\nSTART:undefined\\nSET_RAW_MODE:true\\nSUCCESS:true"`
: `"GOT_INPUT:test input\\nOPENED_TTY:true\\nCREATED_STREAM:true\\nPOS:undefined\\nSTART:undefined\\nSET_RAW_MODE:true\\nSUCCESS:true"`;
expect(jsonOutput).toBe(expected);
// Then check exit code
expect(exitCode).toBe(0);
});
// Skip on Windows as it doesn't have /dev/tty
test.skipIf(isWindows)("TTY ReadStream should not set position for character devices", async () => {
// This test ensures that when creating a ReadStream with an fd (like for TTY),
// the position remains undefined so that fs.read uses read() syscall instead
// of pread() which would fail with ESPIPE on character devices.
const testScript = `
const fs = require('fs');
const tty = require('tty');
try {
const fd = fs.openSync('/dev/tty', 'r+');
const ttyStream = new tty.ReadStream(fd);
// These should be undefined for TTY streams
console.log('POS_TYPE:' + typeof ttyStream.pos);
console.log('START_TYPE:' + typeof ttyStream.start);
// Monkey-patch fs.read to check what position is passed
const originalRead = fs.read;
let capturedPosition = 'NOT_CALLED';
let readCalled = false;
fs.read = function(fd, buffer, offset, length, position, callback) {
capturedPosition = position;
readCalled = true;
// Don't actually read, just call callback with 0 bytes
process.nextTick(() => callback(null, 0, buffer));
return originalRead;
};
// Set up data handler to trigger read
ttyStream.on('data', () => {});
ttyStream.on('error', () => {});
// Immediately log the state since we don't actually need to wait for a real read
console.log('POSITION_PASSED:' + capturedPosition);
console.log('POSITION_TYPE:' + typeof capturedPosition);
console.log('READ_CALLED:' + readCalled);
ttyStream.destroy();
fs.closeSync(fd);
process.exit(0);
} catch (err) {
console.log('ERROR:' + err.code);
process.exit(1);
}
`;
using dir = tempDir("tty-position", {});
const scriptPath = join(String(dir), "test.js");
await Bun.write(scriptPath, testScript);
// Check if script command is available
const hasScript = Bun.which("script");
if (!hasScript) {
// Try without script
await using proc = Bun.spawn({
cmd: [bunExe(), scriptPath],
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
if (exitCode !== 0 && stdout.includes("ERROR:ENXIO")) {
console.log("Skipping test: requires 'script' command for PTY simulation");
return;
}
// Snapshot first to see what happened
const output = stdout + (stderr ? "\nSTDERR:\n" + stderr : "");
expect(normalizeBunSnapshot(output, dir)).toMatchInlineSnapshot(`
"POS_TYPE:undefined
START_TYPE:undefined
POSITION_PASSED:NOT_CALLED
POSITION_TYPE:string
READ_CALLED:false"
`);
expect(exitCode).toBe(0);
return;
}
// Use script command to provide a PTY environment
// macOS and Linux have different script command syntax
const isMacOS = process.platform === "darwin";
const scriptCmd = isMacOS
? ["script", "-q", "/dev/null", bunExe(), scriptPath]
: ["script", "-q", "-c", `${bunExe()} ${scriptPath}`, "/dev/null"];
await using proc = Bun.spawn({
cmd: scriptCmd,
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
// First snapshot the combined output to see what actually happened
const output = stdout + (stderr ? "\nSTDERR:\n" + stderr : "");
// Use JSON.stringify to make control characters visible
const jsonOutput = JSON.stringify(normalizeBunSnapshot(output, dir));
// macOS script adds control characters, Linux doesn't
const expected = isMacOS
? `"^D\\b\\bPOS_TYPE:undefined\\nSTART_TYPE:undefined\\nPOSITION_PASSED:NOT_CALLED\\nPOSITION_TYPE:string\\nREAD_CALLED:false"`
: `"POS_TYPE:undefined\\nSTART_TYPE:undefined\\nPOSITION_PASSED:NOT_CALLED\\nPOSITION_TYPE:string\\nREAD_CALLED:false"`;
expect(jsonOutput).toBe(expected);
// Then check exit code
expect(exitCode).toBe(0);
});


@@ -0,0 +1,141 @@
import { expect, test } from "bun:test";
import { bunEnv, bunExe, normalizeBunSnapshot, tempDir } from "harness";
// This test replicates the pattern used by TUI apps
// where they read piped stdin first, then reopen /dev/tty for interactive input
test("TUI app pattern: read piped stdin then reopen /dev/tty", async () => {
// Skip on Windows - no /dev/tty
if (process.platform === "win32") {
return;
}
// Check if 'script' command is available for TTY simulation
const scriptPath = Bun.which("script");
if (!scriptPath) {
// Skip test on platforms without 'script' command
return;
}
// Create a simpler test script that mimics TUI app behavior
const tuiAppPattern = `
const fs = require('fs');
const tty = require('tty');
async function main() {
// Step 1: Check if stdin is piped
if (!process.stdin.isTTY) {
// Read all piped input
let input = '';
for await (const chunk of process.stdin) {
input += chunk;
}
console.log('PIPED_INPUT:' + input.trim());
// Step 2: After stdin EOF, try to reopen /dev/tty
try {
const ttyFd = fs.openSync('/dev/tty', 'r');
const ttyStream = new tty.ReadStream(ttyFd);
// Verify TTY stream has expected properties
if (!ttyStream.isTTY) {
console.error('ERROR: tty.ReadStream not recognized as TTY');
process.exit(1);
}
// Verify ref/unref methods exist and work
if (typeof ttyStream.ref !== 'function' || typeof ttyStream.unref !== 'function') {
console.error('ERROR: ref/unref methods missing');
process.exit(1);
}
// Test that we can call ref/unref without errors
ttyStream.unref();
ttyStream.ref();
console.log('TTY_REOPENED:SUCCESS');
// Clean up - only destroy the stream, don't double-close the fd
ttyStream.destroy();
} catch (err) {
console.error('ERROR:' + err.code + ':' + err.message);
process.exit(1);
}
} else {
console.log('NO_PIPE');
}
}
main().catch(err => {
console.error('UNCAUGHT:' + err.message);
process.exit(1);
});
`;
using dir = tempDir("tui-app-test", {
"tui-app-sim.js": tuiAppPattern,
});
// Create a simple test that pipes input
// macOS and Linux have different script command syntax
const isMacOS = process.platform === "darwin";
const cmd = isMacOS
? [scriptPath, "-q", "/dev/null", "sh", "-c", `echo "piped content" | ${bunExe()} tui-app-sim.js`]
: [scriptPath, "-q", "-c", `echo "piped content" | ${bunExe()} tui-app-sim.js`, "/dev/null"];
const proc = Bun.spawn({
cmd,
env: bunEnv,
cwd: String(dir),
stdout: "pipe",
stderr: "pipe",
});
const [exitCode, stdout, stderr] = await Promise.all([proc.exited, proc.stdout.text(), proc.stderr.text()]);
// First snapshot the combined output to see what actually happened
const output = stdout + (stderr ? "\nSTDERR:\n" + stderr : "");
// Use JSON.stringify to make control characters visible
const jsonOutput = JSON.stringify(normalizeBunSnapshot(output, dir));
// macOS script adds control characters, Linux doesn't
const expected = isMacOS
? `"^D\\b\\bPIPED_INPUT:piped content\\nTTY_REOPENED:SUCCESS"`
: `"PIPED_INPUT:piped content\\nTTY_REOPENED:SUCCESS"`;
expect(jsonOutput).toBe(expected);
// Then check exit code
expect(exitCode).toBe(0);
});
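The first step of the TUI pattern above is the stdin check itself. As a standalone sketch (the script name is hypothetical; note that Node leaves `process.stdin.isTTY` undefined rather than `false` when stdin is not a terminal):

```javascript
// Piped:       `echo hi | node detect.js` -> isTTY is undefined
// Interactive: `node detect.js` in a terminal -> isTTY is true
if (process.stdin.isTTY) {
  console.log("interactive: stdin is a terminal, read from it directly");
} else {
  console.log("piped: drain stdin first, then reopen /dev/tty if needed");
}
```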
// Test that tty.ReadStream works correctly with various file descriptors
test("tty.ReadStream handles non-TTY file descriptors correctly", () => {
const fs = require("fs");
const tty = require("tty");
const path = require("path");
const os = require("os");
// Create a regular file in the system temp directory
const tempFile = path.join(os.tmpdir(), "test-regular-file-" + Date.now() + ".txt");
fs.writeFileSync(tempFile, "test content");
try {
const fd = fs.openSync(tempFile, "r");
const stream = new tty.ReadStream(fd);
// Regular file should not be identified as TTY
expect(stream.isTTY).toBe(false);
// ref/unref should still exist (for compatibility) but may be no-ops
expect(typeof stream.ref).toBe("function");
expect(typeof stream.unref).toBe("function");
// Clean up - only destroy the stream, don't double-close the fd
stream.destroy();
} finally {
try {
fs.unlinkSync(tempFile);
} catch (e) {
// Ignore cleanup errors
}
}
});