Compare commits

...

8 Commits

Author SHA1 Message Date
Claude Bot
33b9a33be6 fix(diagnostics_channel): retain channels with active subscribers
The WeakReference class in diagnostics_channel was allowing channels to
be garbage collected even when they had active subscribers. This caused
subscribers registered in preload scripts to be lost when the main
application script ran, breaking OpenTelemetry instrumentation for
frameworks like Fastify and Express.

The fix aligns WeakReference behavior with Node.js by maintaining a
strong reference when refCount > 0. When incRef() is called and refCount
transitions from 0 to 1, a strong reference is stored. When decRef()
brings refCount back to 0, the strong reference is cleared, allowing GC.

The get() method now returns the strong reference when available,
falling back to the weak reference otherwise.
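
A condensed TypeScript sketch of the intended behavior, mirroring the patched class in the diff below (illustrative, not the full module):

```ts
// Ref-counted WeakReference: strong while refCount > 0, weak otherwise.
class WeakReference<T extends WeakKey> {
  #weak: WeakRef<T>;
  #strong: T | null = null;
  #refCount = 0;

  constructor(object: T) {
    this.#weak = new WeakRef(object);
  }

  get(): T | undefined {
    // Prefer the strong reference while subscribers exist.
    return this.#strong ?? this.#weak.deref();
  }

  incRef(): number {
    if (++this.#refCount === 1) {
      // First subscriber: pin the channel so GC cannot collect it.
      this.#strong = this.#weak.deref() ?? null;
    }
    return this.#refCount;
  }

  decRef(): number {
    if (--this.#refCount === 0) {
      // Last subscriber gone: drop the strong reference and allow GC again.
      this.#strong = null;
    }
    return this.#refCount;
  }
}
```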

Fixes #26536

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 19:48:13 +00:00
Dylan Conway
7ebfdf97a8 fix(npm): remove shebang from placeholder scripts to fix npm i -g bun on Windows (#26517)
## Summary
- Removes the `#!/bin/sh` shebang from placeholder `bin/bun.exe` and
`bin/bunx.exe` scripts in the npm package
- Fixes `npm i -g bun` being completely broken on Windows since v1.3.7

## Problem
PR #26259 added a `#!/bin/sh` shebang to the placeholder scripts to show
a helpful error when postinstall hasn't run. However, npm's `cmd-shim`
reads shebangs to generate `.ps1`/`.cmd` wrappers **before** postinstall
runs, and bakes the interpreter path into them. On Windows, the wrappers
referenced `/bin/sh` which doesn't exist, causing:

```
& "/bin/sh$exe"  "$basedir/node_modules/bun/bin/bun.exe" $args
   ~~~~~~~~~~~~~
The term '/bin/sh.exe' is not recognized...
```

Even after postinstall successfully replaced the placeholder with the
real binary, the stale wrappers still tried to invoke `/bin/sh`.

## Fix
Remove the shebang. Without it, `cmd-shim` generates a direct invocation
wrapper that works after postinstall replaces the placeholder. On Unix,
bash/zsh still execute shebang-less files as shell scripts via ENOEXEC
fallback, so the helpful error message is preserved.
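
A minimal TypeScript sketch of the fixed placeholder, condensed from the publish-script diff further down (the script wording and bin paths mirror that diff; the write calls are illustrative only):

```ts
import { writeFileSync } from "node:fs";

// Note: intentionally NO "#!/bin/sh" first line, so npm's cmd-shim emits a
// direct-invocation wrapper instead of baking /bin/sh into the .ps1/.cmd shims.
const placeholderScript = `echo "Error: Bun's postinstall script was not run." >&2
echo "" >&2
echo "This occurs when using --ignore-scripts during installation, or when using a" >&2
echo "package manager like pnpm that does not run postinstall scripts by default." >&2
exit 1
`;

// Placeholders sit at the published bin paths until postinstall swaps in the real binary.
writeFileSync("bin/bun.exe", placeholderScript, { mode: 0o755 });
writeFileSync("bin/bunx.exe", placeholderScript, { mode: 0o755 });
```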

## Test plan
- [x] `bun bd test test/regression/issue/24329.test.ts` passes (2/2
tests)
- Manually verify `npm i -g bun` works on Windows

Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-28 00:00:50 -08:00
SUZUKI Sosuke
4cd3b241bc Upgrade WebKit to cc5e0bddf7ea (#26526)
Upgrade WebKit from `0e6527f24783ea83` to `cc5e0bddf7eae1d8` (77
commits)

This brings in the latest changes from oven-sh/WebKit (2026-01-27).
2026-01-27 23:33:47 -08:00
robobun
cae67a17e2 chore: update mimalloc to latest dev3 (#26519)
## Summary
- Updates oven-sh/mimalloc bun-dev3 branch to latest upstream
microsoft/mimalloc dev3 (ffa38ab8)
- Merged 12 new commits from upstream

### Key upstream changes included:
- fix re-initialization of threads on macOS
- add lock for sub-pagemap allocations
- fix peak commit stat
- fix use of continue in bitmap find_and_clear (fixes rare case of not
finding space while it exists)

## Test plan
- [ ] CI passes
- [ ] Memory allocation tests pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 23:33:00 -08:00
robobun
a394063a7d refactor(test): use container-based postgres_tls for TLS SQL tests (#26518)
## Summary
- Refactors `tls-sql.test.ts` to use `describeWithContainer` with a
local Docker container instead of external Neon secrets (condensed usage
sketch after this list)
- Updates `postgres_tls` service to build from Dockerfile (fixes SSL key
permission issues)
- Fixes pg_hba.conf to allow local socket connections for init scripts
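
A condensed usage sketch of the new setup (it mirrors the `tls-sql.test.ts` diff below, with options trimmed for brevity):

```ts
import { SQL } from "bun";
import { expect, test } from "bun:test";
import { describeWithContainer, isDockerEnabled } from "harness";

if (isDockerEnabled()) {
  describeWithContainer("PostgreSQL TLS", { image: "postgres_tls" }, container => {
    test("tls (explicit)", async () => {
      // Wait for the Docker container to be healthy before connecting.
      await container.ready;
      await using sql = new SQL({
        url: `postgres://postgres@${container.host}:${container.port}/bun_sql_test`,
        tls: true,
        adapter: "postgres",
        max: 1,
      });
      const [{ one, two }] = await sql`SELECT 1 as one, '2' as two`;
      expect(one).toBe(1);
      expect(two).toBe("2");
    });
  });
}
```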

## Test plan
- [x] Verified tests pass locally with `bun bd test
test/js/sql/tls-sql.test.ts` (30 tests pass)
- [ ] CI passes on x64 Linux (arm64 Docker tests are currently disabled)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 23:32:39 -08:00
Jarred Sumner
c9ebb17921 Bump 2026-01-27 17:06:36 -08:00
Dylan Conway
2f510724a9 fix(napi): return napi_function for AsyncContextFrame in napi_typeof (#26511)
## Summary
- `napi_typeof` was returning `napi_object` for `AsyncContextFrame`
values, which are internally callable JSObjects
- Native addons that check callback types (e.g. encore.dev's runtime)
would fail with `expect Function, got: Object` and panic
- Added a `jsDynamicCast<AsyncContextFrame*>` check before the final
`napi_object` fallback so these values are correctly reported as
`napi_function` (a minimal JS-side reproduction is sketched below)

Closes #25933
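
A minimal JS-side reproduction of the scenario, assuming a hypothetical native binding `addon.checkCallbackType` that calls `napi_typeof` on its argument (the real regression test uses the napi test addon shown in the diffs below):

```ts
import { AsyncLocalStorage } from "node:async_hooks";

// Hypothetical native binding, used purely for illustration: it calls
// napi_typeof on the callback it receives and expects napi_function.
declare const addon: { checkCallbackType(cb: () => void): void };

const als = new AsyncLocalStorage<{ key: string }>();
als.run({ key: "value" }, () => {
  // Inside AsyncLocalStorage.run(), Bun wraps this callback in an
  // AsyncContextFrame; before this fix the addon saw napi_object and panicked.
  addon.checkCallbackType(() => {});
});
```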

## Test plan
- [x] Verify encore.dev + supertokens reproduction from the issue no
longer panics
- [ ] Existing napi tests continue to pass

Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-27 13:35:15 -08:00
robobun
9a16f4c345 fix(http2): correct canReceiveData logic per RFC 7540 (#26491)
## Summary

- Fixed inverted logic in `canReceiveData` function in HTTP/2 stream
state handling
- Added gRPC streaming tests to verify correct behavior

## Problem

The `canReceiveData` function had completely inverted logic that
reported incorrect `remoteClose` status:

| Stream State | Before (Wrong) | After (Correct) |
|--------------|----------------|-----------------|
| OPEN | `false` (can't receive) | `true` (can receive) |
| HALF_CLOSED_LOCAL | `false` (can't receive) | `true` (can receive from remote) |
| HALF_CLOSED_REMOTE | `true` (can receive) | `false` (remote closed) |
| CLOSED | `true` (can receive) | `false` (stream done) |

Per RFC 7540 Section 5.1 (see the sketch after this list):
- In `HALF_CLOSED_LOCAL` state, the local endpoint has sent END_STREAM
but can still **receive** data from the remote peer
- In `HALF_CLOSED_REMOTE` state, the remote endpoint has sent END_STREAM
so no more data will be received
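
A TypeScript rendering of the corrected mapping (the real change is the Zig switch in `H2FrameParser` shown in the diff below; state names follow RFC 7540):

```ts
type StreamState =
  | "IDLE"
  | "RESERVED_LOCAL"
  | "RESERVED_REMOTE"
  | "OPEN"
  | "HALF_CLOSED_LOCAL"
  | "HALF_CLOSED_REMOTE"
  | "CLOSED";

function canReceiveData(state: StreamState): boolean {
  switch (state) {
    case "IDLE":
    case "RESERVED_LOCAL":
    case "RESERVED_REMOTE":
    case "OPEN": // both endpoints can still send and receive
    case "HALF_CLOSED_LOCAL": // local sent END_STREAM but can still receive from remote
      return true;
    case "HALF_CLOSED_REMOTE": // remote sent END_STREAM, no more data will arrive
    case "CLOSED": // stream is finished
      return false;
  }
}
```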

## Test plan

- [x] Added gRPC streaming tests covering unary, server streaming,
client streaming, and bidirectional streaming calls
- [x] Verified HTTP/2 test suite passes (same or fewer failures than
before)
- [x] Verified the gRPC test suite improves (7 failures, down from 9 failures
plus 2 errors before)

Closes #20875

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 10:29:34 -08:00
20 changed files with 913 additions and 301 deletions

2
LATEST
View File

@@ -1 +1 @@
1.3.6
1.3.7

View File

@@ -4,7 +4,7 @@ register_repository(
REPOSITORY
oven-sh/mimalloc
COMMIT
989115cefb6915baa13788cb8252d83aac5330ad
ffa38ab8ac914f9eb7af75c1f8ad457643dc14f2
)
set(MIMALLOC_CMAKE_ARGS

View File

@@ -2,7 +2,7 @@ option(WEBKIT_VERSION "The version of WebKit to use")
option(WEBKIT_LOCAL "If a local version of WebKit should be used instead of downloading")
if(NOT WEBKIT_VERSION)
set(WEBKIT_VERSION 0e6527f24783ea832fa58f696437829cdcbc3c7c)
set(WEBKIT_VERSION cc5e0bddf7eae1d820cf673158845fe9bd83c094)
endif()
# Use preview build URL for Windows ARM64 until the fix is merged to main

View File

@@ -1,7 +1,7 @@
{
"private": true,
"name": "bun",
"version": "1.3.7",
"version": "1.3.8",
"workspaces": [
"./packages/bun-types",
"./packages/@types/bun"

View File

@@ -73,9 +73,11 @@ async function buildRootModule(dryRun?: boolean) {
});
// Create placeholder scripts that print an error message if postinstall hasn't run.
// On Unix, these are executed as shell scripts despite the .exe extension.
// On Windows, npm creates .cmd wrappers that would fail anyway if the binary isn't valid.
const placeholderScript = `#!/bin/sh
echo "Error: Bun's postinstall script was not run." >&2
// Do NOT add a shebang (#!/bin/sh) here — npm's cmd-shim reads shebangs to generate
// .ps1/.cmd wrappers BEFORE postinstall runs, and bakes the interpreter path in.
// A #!/bin/sh shebang breaks Windows because the wrappers reference /bin/sh which
// doesn't exist, even after postinstall replaces the placeholder with the real binary.
const placeholderScript = `echo "Error: Bun's postinstall script was not run." >&2
echo "" >&2
echo "This occurs when using --ignore-scripts during installation, or when using a" >&2
echo "package manager like pnpm that does not run postinstall scripts by default." >&2

View File

@@ -1129,10 +1129,16 @@ pub const H2FrameParser = struct {
return stream;
}
/// Returns true if the stream can still receive data from the remote peer.
/// Per RFC 7540 Section 5.1:
/// - OPEN: both endpoints can send and receive
/// - HALF_CLOSED_LOCAL: local sent END_STREAM, but can still receive from remote
/// - HALF_CLOSED_REMOTE: remote sent END_STREAM, no more data to receive
/// - CLOSED: stream is finished
pub fn canReceiveData(this: *Stream) bool {
return switch (this.state) {
.IDLE, .RESERVED_LOCAL, .RESERVED_REMOTE, .OPEN, .HALF_CLOSED_LOCAL => false,
.HALF_CLOSED_REMOTE, .CLOSED => true,
.IDLE, .RESERVED_LOCAL, .RESERVED_REMOTE, .OPEN, .HALF_CLOSED_LOCAL => true,
.HALF_CLOSED_REMOTE, .CLOSED => false,
};
}

View File

@@ -43,6 +43,11 @@ JSValue AsyncContextFrame::withAsyncContextIfNeeded(JSGlobalObject* globalObject
return callback;
}
// If already wrapped in an AsyncContextFrame, return as-is to avoid double-wrapping.
if (jsDynamicCast<AsyncContextFrame*>(callback)) {
return callback;
}
// Construct a low-overhead wrapper
auto& vm = JSC::getVM(globalObject);
return AsyncContextFrame::create(

View File

@@ -2427,6 +2427,11 @@ extern "C" napi_status napi_typeof(napi_env env, napi_value val,
return napi_clear_last_error(env);
}
if (JSC::jsDynamicCast<AsyncContextFrame*>(value)) {
*result = napi_function;
return napi_clear_last_error(env);
}
*result = napi_object;
return napi_clear_last_error(env);

View File

@@ -16,20 +16,40 @@ const PromiseResolve = Promise.$resolve.bind(Promise);
const PromiseReject = Promise.$reject.bind(Promise);
const PromisePrototypeThen = (promise, onFulfilled, onRejected) => promise.then(onFulfilled, onRejected);
// TODO: https://github.com/nodejs/node/blob/fb47afc335ef78a8cef7eac52b8ee7f045300696/src/node_util.h#L13
class WeakReference<T extends WeakKey> extends WeakRef<T> {
#refs = 0;
// WeakReference that maintains a strong reference when refCount > 0.
// This ensures channels with active subscribers aren't garbage collected.
// See: https://github.com/nodejs/node/blob/fb47afc335ef78a8cef7eac52b8ee7f045300696/lib/internal/util.js#L888
class WeakReference<T extends WeakKey> {
#weak: WeakRef<T>;
#strong: T | null = null;
#refCount = 0;
constructor(object: T) {
this.#weak = new WeakRef(object);
}
get() {
return this.deref();
// Return strong reference if available (when refCount > 0), otherwise try weak
return this.#strong ?? this.#weak.deref();
}
incRef() {
return ++this.#refs;
this.#refCount++;
if (this.#refCount === 1) {
const derefed = this.#weak.deref();
if (derefed !== undefined) {
this.#strong = derefed;
}
}
return this.#refCount;
}
decRef() {
return --this.#refs;
this.#refCount--;
if (this.#refCount === 0) {
this.#strong = null;
}
return this.#refCount;
}
}

View File

@@ -677,7 +677,7 @@ pub export fn napi_make_callback(env_: napi_env, _: *anyopaque, recv_: napi_valu
return envIsNull();
};
const recv, const func = .{ recv_.get(), func_.get() };
if (func.isEmptyOrUndefinedOrNull() or !func.isCallable()) {
if (func.isEmptyOrUndefinedOrNull() or (!func.isCallable() and !func.isAsyncContextFrame())) {
return env.setLastError(.function_expected);
}
@@ -1762,7 +1762,7 @@ pub export fn napi_create_threadsafe_function(
};
const func = func_.get();
if (call_js_cb == null and (func.isEmptyOrUndefinedOrNull() or !func.isCallable())) {
if (call_js_cb == null and (func.isEmptyOrUndefinedOrNull() or (!func.isCallable() and !func.isAsyncContextFrame()))) {
return env.setLastError(.function_expected);
}

View File

@@ -21,20 +21,17 @@ services:
start_period: 5s
postgres_tls:
image: postgres:15
environment:
POSTGRES_HOST_AUTH_METHOD: trust
POSTGRES_USER: postgres
volumes:
- ./init-scripts/postgres:/docker-entrypoint-initdb.d:ro
- ../js/sql/docker-tls/server.crt:/etc/postgresql/ssl/server.crt:ro
- ../js/sql/docker-tls/server.key:/etc/postgresql/ssl/server.key:ro
build:
context: ../js/sql/docker-tls
dockerfile: Dockerfile
image: bun-postgres-tls:local
ports:
- target: 5432
published: 0
protocol: tcp
command: >
postgres
-c hba_file=/etc/postgresql/pg_hba.conf
-c ssl=on
-c ssl_cert_file=/etc/postgresql/ssl/server.crt
-c ssl_key_file=/etc/postgresql/ssl/server.key
@@ -47,7 +44,8 @@ services:
interval: 1h # Effectively disable after startup
timeout: 5s
retries: 30
start_period: 5s
start_period: 60s
start_interval: 1s
postgres_auth:
image: postgres:15

View File

@@ -109,8 +109,8 @@ class DockerComposeHelper {
return;
}
// Build the service if needed (for services like mysql_tls that need building)
if (service === "mysql_tls" || service === "redis_unified") {
// Build the service if needed (for services like mysql_tls, postgres_tls that need building)
if (service === "mysql_tls" || service === "redis_unified" || service === "postgres_tls") {
const buildResult = await this.exec(["build", service]);
if (buildResult.exitCode !== 0) {
throw new Error(`Failed to build service ${service}: ${buildResult.stderr}`);

View File

@@ -51,16 +51,20 @@ RUN chmod +x /docker-entrypoint-initdb.d/init-users-db.sh
# Create pg_hba.conf with SSL requirements
RUN mkdir -p /etc/postgresql && touch /etc/postgresql/pg_hba.conf && \
echo "hostssl all postgres 127.0.0.1/32 trust" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl all bun_sql_test 127.0.0.1/32 trust" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl all bun_sql_test_md5 127.0.0.1/32 md5" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl all bun_sql_test_scram 127.0.0.1/32 scram-sha-256" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl all postgres ::1/128 trust" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl all bun_sql_test ::1/128 trust" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl all bun_sql_test_md5 ::1/128 md5" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl all bun_sql_test_scram ::1/128 scram-sha-256" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl replication all 127.0.0.1/32 trust" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl replication all ::1/128 trust" >> /etc/postgresql/pg_hba.conf && \
echo "# Allow local socket connections for init scripts" >> /etc/postgresql/pg_hba.conf && \
echo "local all postgres trust" >> /etc/postgresql/pg_hba.conf && \
echo "local all all trust" >> /etc/postgresql/pg_hba.conf && \
echo "# Remote TLS connections" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl all postgres 0.0.0.0/0 trust" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl all bun_sql_test 0.0.0.0/0 trust" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl all bun_sql_test_md5 0.0.0.0/0 md5" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl all bun_sql_test_scram 0.0.0.0/0 scram-sha-256" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl all postgres ::/0 trust" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl all bun_sql_test ::/0 trust" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl all bun_sql_test_md5 ::/0 md5" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl all bun_sql_test_scram ::/0 scram-sha-256" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl replication all 0.0.0.0/0 trust" >> /etc/postgresql/pg_hba.conf && \
echo "hostssl replication all ::/0 trust" >> /etc/postgresql/pg_hba.conf && \
echo "host all all all reject" >> /etc/postgresql/pg_hba.conf
# Configure PostgreSQL for SSL

View File

@@ -1,280 +1,241 @@
import { postgres, randomUUIDv7, SQL, sql } from "bun";
import { SQL, randomUUIDv7 } from "bun";
import { describe, expect, test } from "bun:test";
import { getSecret } from "harness";
import { describeWithContainer, isDockerEnabled } from "harness";
const TLS_POSTGRES_DATABASE_URL = getSecret("TLS_POSTGRES_DATABASE_URL");
const PG_TRANSACTION_POOL_SUPABASE_URL = getSecret("PG_TRANSACTION_POOL_SUPABASE_URL");
for (const options of [
{
url: TLS_POSTGRES_DATABASE_URL,
tls: true,
adapter: "postgres",
max: 1,
bigint: true,
prepare: true,
transactionPool: false,
},
{
url: PG_TRANSACTION_POOL_SUPABASE_URL,
tls: true,
adapter: "postgres",
max: 1,
bigint: true,
prepare: false,
transactionPool: true,
},
{
url: TLS_POSTGRES_DATABASE_URL,
tls: true,
adapter: "postgres",
max: 1,
bigint: true,
prepare: false,
transactionPool: false,
},
] satisfies (Bun.SQL.Options & { transactionPool?: boolean })[]) {
if (options.url === undefined) {
console.log("SKIPPING TEST", JSON.stringify(options), "BECAUSE MISSING THE URL SECRET");
continue;
}
describe.concurrent(
`${options.transactionPool ? "Transaction Pooling" : `Prepared Statements (${options.prepare ? "on" : "off"})`}`,
() => {
test("default sql", async () => {
expect(sql.reserve).toBeDefined();
expect(sql.options).toBeDefined();
expect(sql[Symbol.asyncDispose]).toBeDefined();
expect(sql.begin).toBeDefined();
expect(sql.beginDistributed).toBeDefined();
expect(sql.distributed).toBeDefined();
expect(sql.unsafe).toBeDefined();
expect(sql.end).toBeDefined();
expect(sql.close).toBeDefined();
expect(sql.transaction).toBeDefined();
expect(sql.distributed).toBeDefined();
expect(sql.unsafe).toBeDefined();
expect(sql.commitDistributed).toBeDefined();
expect(sql.rollbackDistributed).toBeDefined();
});
test("default postgres", async () => {
expect(postgres.reserve).toBeDefined();
expect(postgres.options).toBeDefined();
expect(postgres[Symbol.asyncDispose]).toBeDefined();
expect(postgres.begin).toBeDefined();
expect(postgres.beginDistributed).toBeDefined();
expect(postgres.distributed).toBeDefined();
expect(postgres.unsafe).toBeDefined();
expect(postgres.end).toBeDefined();
expect(postgres.close).toBeDefined();
expect(postgres.transaction).toBeDefined();
expect(postgres.distributed).toBeDefined();
expect(postgres.unsafe).toBeDefined();
expect(postgres.commitDistributed).toBeDefined();
expect(postgres.rollbackDistributed).toBeDefined();
});
test("tls (explicit)", async () => {
await using sql = new SQL(options);
const [{ one, two }] = await sql`SELECT 1 as one, '2' as two`;
expect(one).toBe(1);
expect(two).toBe("2");
await sql.close();
});
test("Throws on illegal transactions", async () => {
await using sql = new SQL({ ...options, max: 2 });
const error = await sql`BEGIN`.catch(e => e);
expect(error).toBeInstanceOf(SQL.SQLError);
expect(error).toBeInstanceOf(SQL.PostgresError);
return expect(error.code).toBe("ERR_POSTGRES_UNSAFE_TRANSACTION");
});
test.skipIf(options.transactionPool)("Transaction throws", async () => {
await using sql = new SQL(options);
const random_name = ("t_" + randomUUIDv7("hex").replaceAll("-", "")).toLowerCase();
await sql`CREATE TEMPORARY TABLE IF NOT EXISTS ${sql(random_name)} (a int)`;
expect(
await sql
.begin(async sql => {
await sql`insert into ${sql(random_name)} values(1)`;
await sql`insert into ${sql(random_name)} values('hej')`;
})
.catch(e => e.errno),
).toBe("22P02");
});
test.skipIf(options.transactionPool)("Transaction rolls back", async () => {
await using sql = new SQL(options);
const random_name = ("t_" + randomUUIDv7("hex").replaceAll("-", "")).toLowerCase();
await sql`CREATE TEMPORARY TABLE IF NOT EXISTS ${sql(random_name)} (a int)`;
await sql
.begin(async sql => {
await sql`insert into ${sql(random_name)} values(1)`;
await sql`insert into ${sql(random_name)} values('hej')`;
})
.catch(() => {
/* ignore */
if (!isDockerEnabled()) {
test.skip("skipping TLS SQL tests - Docker is not available", () => {});
} else {
describeWithContainer(
"PostgreSQL TLS",
{
image: "postgres_tls",
},
container => {
// Test with prepared statements on and off
for (const prepare of [true, false]) {
describe(`prepared: ${prepare}`, () => {
const getOptions = (): Bun.SQL.Options => ({
url: `postgres://postgres@${container.host}:${container.port}/bun_sql_test`,
tls: true,
adapter: "postgres",
max: 1,
bigint: true,
prepare,
});
expect((await sql`select a from ${sql(random_name)}`).count).toBe(0);
});
test("tls (explicit)", async () => {
await container.ready;
await using sql = new SQL(getOptions());
const [{ one, two }] = await sql`SELECT 1 as one, '2' as two`;
expect(one).toBe(1);
expect(two).toBe("2");
});
test.skipIf(options.transactionPool)("Transaction throws on uncaught savepoint", async () => {
await using sql = new SQL(options);
const random_name = ("t_" + randomUUIDv7("hex").replaceAll("-", "")).toLowerCase();
await sql`CREATE TEMPORARY TABLE IF NOT EXISTS ${sql(random_name)} (a int)`;
expect(
await sql
.begin(async sql => {
await sql`insert into ${sql(random_name)} values(1)`;
await sql.savepoint(async sql => {
await sql`insert into ${sql(random_name)} values(2)`;
throw new Error("fail");
});
})
.catch(err => err.message),
).toBe("fail");
});
test("Throws on illegal transactions", async () => {
await container.ready;
await using sql = new SQL({ ...getOptions(), max: 2 });
const error = await sql`BEGIN`.catch(e => e);
expect(error).toBeInstanceOf(SQL.SQLError);
expect(error).toBeInstanceOf(SQL.PostgresError);
return expect(error.code).toBe("ERR_POSTGRES_UNSAFE_TRANSACTION");
});
test.skipIf(options.transactionPool)("Transaction throws on uncaught named savepoint", async () => {
await using sql = new SQL(options);
const random_name = ("t_" + randomUUIDv7("hex").replaceAll("-", "")).toLowerCase();
await sql`CREATE TEMPORARY TABLE IF NOT EXISTS ${sql(random_name)} (a int)`;
expect(
await sql
.begin(async sql => {
await sql`insert into ${sql(random_name)} values(1)`;
await sql.savepoint("watpoint", async sql => {
await sql`insert into ${sql(random_name)} values(2)`;
throw new Error("fail");
});
})
.catch(() => "fail"),
).toBe("fail");
});
test("Transaction throws", async () => {
await container.ready;
await using sql = new SQL(getOptions());
const random_name = ("t_" + randomUUIDv7("hex").replaceAll("-", "")).toLowerCase();
await sql`CREATE TEMPORARY TABLE IF NOT EXISTS ${sql(random_name)} (a int)`;
expect(
await sql
.begin(async sql => {
await sql`insert into ${sql(random_name)} values(1)`;
await sql`insert into ${sql(random_name)} values('hej')`;
})
.catch(e => e.errno),
).toBe("22P02");
});
test("Transaction rolls back", async () => {
await container.ready;
await using sql = new SQL(getOptions());
const random_name = ("t_" + randomUUIDv7("hex").replaceAll("-", "")).toLowerCase();
await sql`CREATE TEMPORARY TABLE IF NOT EXISTS ${sql(random_name)} (a int)`;
test("Transaction succeeds on caught savepoint", async () => {
await using sql = new SQL(options);
const random_name = ("t_" + randomUUIDv7("hex").replaceAll("-", "")).toLowerCase();
await sql`CREATE TABLE IF NOT EXISTS ${sql(random_name)} (a int)`;
try {
await sql.begin(async sql => {
await sql`insert into ${sql(random_name)} values(1)`;
await sql
.savepoint(async sql => {
await sql`insert into ${sql(random_name)} values(2)`;
throw new Error("please rollback");
.begin(async sql => {
await sql`insert into ${sql(random_name)} values(1)`;
await sql`insert into ${sql(random_name)} values('hej')`;
})
.catch(() => {
/* ignore */
});
await sql`insert into ${sql(random_name)} values(3)`;
expect((await sql`select a from ${sql(random_name)}`).count).toBe(0);
});
expect((await sql`select count(1) from ${sql(random_name)}`)[0].count).toBe(2n);
} finally {
await sql`DROP TABLE IF EXISTS ${sql(random_name)}`;
}
});
test("Savepoint returns Result", async () => {
let result;
await using sql = new SQL(options);
await sql.begin(async t => {
result = await t.savepoint(s => s`select 1 as x`);
});
expect(result[0]?.x).toBe(1);
});
test("Transaction throws on uncaught savepoint", async () => {
await container.ready;
await using sql = new SQL(getOptions());
const random_name = ("t_" + randomUUIDv7("hex").replaceAll("-", "")).toLowerCase();
await sql`CREATE TEMPORARY TABLE IF NOT EXISTS ${sql(random_name)} (a int)`;
expect(
await sql
.begin(async sql => {
await sql`insert into ${sql(random_name)} values(1)`;
await sql.savepoint(async sql => {
await sql`insert into ${sql(random_name)} values(2)`;
throw new Error("fail");
});
})
.catch(err => err.message),
).toBe("fail");
});
test("Transaction requests are executed implicitly", async () => {
await using sql = new SQL(options);
expect(
(
await sql.begin(sql => [
sql`select set_config('bun_sql.test', 'testing', true)`,
sql`select current_setting('bun_sql.test') as x`,
])
)[1][0].x,
).toBe("testing");
});
test("Transaction throws on uncaught named savepoint", async () => {
await container.ready;
await using sql = new SQL(getOptions());
const random_name = ("t_" + randomUUIDv7("hex").replaceAll("-", "")).toLowerCase();
await sql`CREATE TEMPORARY TABLE IF NOT EXISTS ${sql(random_name)} (a int)`;
expect(
await sql
.begin(async sql => {
await sql`insert into ${sql(random_name)} values(1)`;
await sql.savepoint("watpoint", async sql => {
await sql`insert into ${sql(random_name)} values(2)`;
throw new Error("fail");
});
})
.catch(e => e.message),
).toBe("fail");
});
test("Uncaught transaction request errors bubbles to transaction", async () => {
await using sql = new SQL(options);
expect(
await sql
.begin(sql => [sql`select wat`, sql`select current_setting('bun_sql.test') as x, ${1} as a`])
.catch(e => e.errno),
).toBe("42703");
});
test("Transaction rejects with rethrown error", async () => {
await using sql = new SQL(options);
expect(
await sql
.begin(async sql => {
try {
await sql`select exception`;
} catch (ex) {
throw new Error("WAT");
}
})
.catch(e => e.message),
).toBe("WAT");
});
test("Parallel transactions", async () => {
await using sql = new SQL({ ...options, max: 2 });
expect(
(await Promise.all([sql.begin(sql => sql`select 1 as count`), sql.begin(sql => sql`select 1 as count`)]))
.map(x => x[0].count)
.join(""),
).toBe("11");
});
test("Many transactions at beginning of connection", async () => {
await using sql = new SQL({ ...options, max: 2 });
const xs = await Promise.all(Array.from({ length: 30 }, () => sql.begin(sql => sql`select 1`)));
return expect(xs.length).toBe(30);
});
test("Transactions array", async () => {
await using sql = new SQL(options);
expect(
(await sql.begin(sql => [sql`select 1 as count`, sql`select 1 as count`])).map(x => x[0].count).join(""),
).toBe("11");
});
test.skipIf(options.transactionPool)("Transaction waits", async () => {
await using sql = new SQL({ ...options, max: 2 });
const random_name = ("t_" + randomUUIDv7("hex").replaceAll("-", "")).toLowerCase();
await sql`CREATE TEMPORARY TABLE IF NOT EXISTS ${sql(random_name)} (a int)`;
await sql.begin(async sql => {
await sql`insert into ${sql(random_name)} values(1)`;
await sql
.savepoint(async sql => {
await sql`insert into ${sql(random_name)} values(2)`;
throw new Error("please rollback");
})
.catch(() => {
/* ignore */
test("Transaction succeeds on caught savepoint", async () => {
await container.ready;
await using sql = new SQL(getOptions());
const random_name = ("t_" + randomUUIDv7("hex").replaceAll("-", "")).toLowerCase();
await sql`CREATE TEMPORARY TABLE IF NOT EXISTS ${sql(random_name)} (a int)`;
await sql.begin(async sql => {
await sql`insert into ${sql(random_name)} values(1)`;
await sql
.savepoint(async sql => {
await sql`insert into ${sql(random_name)} values(2)`;
throw new Error("please rollback");
})
.catch(() => {
/* ignore */
});
await sql`insert into ${sql(random_name)} values(3)`;
});
await sql`insert into ${sql(random_name)} values(3)`;
expect((await sql`select count(1) from ${sql(random_name)}`)[0].count).toBe(2n);
});
test("Savepoint returns Result", async () => {
await container.ready;
let result;
await using sql = new SQL(getOptions());
await sql.begin(async t => {
result = await t.savepoint(s => s`select 1 as x`);
});
expect(result[0]?.x).toBe(1);
});
test("Transaction requests are executed implicitly", async () => {
await container.ready;
await using sql = new SQL(getOptions());
expect(
(
await sql.begin(sql => [
sql`select set_config('bun_sql.test', 'testing', true)`,
sql`select current_setting('bun_sql.test') as x`,
])
)[1][0].x,
).toBe("testing");
});
test("Uncaught transaction request errors bubbles to transaction", async () => {
await container.ready;
await using sql = new SQL(getOptions());
expect(
await sql
.begin(sql => [sql`select wat`, sql`select current_setting('bun_sql.test') as x, ${1} as a`])
.catch(e => e.errno),
).toBe("42703");
});
test("Transaction rejects with rethrown error", async () => {
await container.ready;
await using sql = new SQL(getOptions());
expect(
await sql
.begin(async sql => {
try {
await sql`select exception`;
} catch (ex) {
throw new Error("WAT");
}
})
.catch(e => e.message),
).toBe("WAT");
});
test("Parallel transactions", async () => {
await container.ready;
await using sql = new SQL({ ...getOptions(), max: 2 });
expect(
(await Promise.all([sql.begin(sql => sql`select 1 as count`), sql.begin(sql => sql`select 1 as count`)]))
.map(x => x[0].count)
.join(""),
).toBe("11");
});
test("Many transactions at beginning of connection", async () => {
await container.ready;
await using sql = new SQL({ ...getOptions(), max: 2 });
const xs = await Promise.all(Array.from({ length: 30 }, () => sql.begin(sql => sql`select 1`)));
return expect(xs.length).toBe(30);
});
test("Transactions array", async () => {
await container.ready;
await using sql = new SQL(getOptions());
expect(
(await sql.begin(sql => [sql`select 1 as count`, sql`select 1 as count`])).map(x => x[0].count).join(""),
).toBe("11");
});
test("Transaction waits", async () => {
await container.ready;
await using sql = new SQL({ ...getOptions(), max: 2 });
const random_name = ("t_" + randomUUIDv7("hex").replaceAll("-", "")).toLowerCase();
await sql`CREATE TEMPORARY TABLE IF NOT EXISTS ${sql(random_name)} (a int)`;
await sql.begin(async sql => {
await sql`insert into ${sql(random_name)} values(1)`;
await sql
.savepoint(async sql => {
await sql`insert into ${sql(random_name)} values(2)`;
throw new Error("please rollback");
})
.catch(() => {
/* ignore */
});
await sql`insert into ${sql(random_name)} values(3)`;
});
expect(
(
await Promise.all([
sql.begin(async sql => await sql`select 1 as count`),
sql.begin(async sql => await sql`select 1 as count`),
])
)
.map(x => x[0].count)
.join(""),
).toBe("11");
});
});
expect(
(
await Promise.all([
sql.begin(async sql => await sql`select 1 as count`),
sql.begin(async sql => await sql`select 1 as count`),
])
)
.map(x => x[0].count)
.join(""),
).toBe("11");
});
}
},
);
}

View File

@@ -895,4 +895,52 @@ nativeTests.test_napi_typeof_boxed_primitives = () => {
console.log("All boxed primitive tests passed!");
};
// https://github.com/oven-sh/bun/issues/25933
// Test that napi_typeof returns napi_function for callbacks wrapped in
// AsyncContextFrame (which happens inside AsyncLocalStorage.run()).
nativeTests.test_napi_typeof_async_context_frame = async () => {
const { AsyncLocalStorage } = require("node:async_hooks");
const als = new AsyncLocalStorage();
await als.run({ key: "value" }, () => {
return new Promise(resolve => {
// Pass a callback to the native addon. Because we're inside
// AsyncLocalStorage.run(), Bun wraps it in AsyncContextFrame.
// The native call_js_cb will call napi_typeof on the received
// js_callback and print the result.
nativeTests.test_issue_25933(() => {});
// The threadsafe function callback fires asynchronously.
setTimeout(resolve, 50);
});
});
};
// Test that napi_make_callback works when the func is an AsyncContextFrame
// (received by a threadsafe function's call_js_cb inside AsyncLocalStorage.run()).
nativeTests.test_make_callback_with_async_context = async () => {
const { AsyncLocalStorage } = require("node:async_hooks");
const als = new AsyncLocalStorage();
await als.run({ key: "value" }, () => {
return new Promise(resolve => {
nativeTests.test_napi_make_callback_async_context_frame(() => {});
setTimeout(resolve, 50);
});
});
};
// Test that napi_create_threadsafe_function with call_js_cb=NULL accepts an
// AsyncContextFrame as the func (received from another threadsafe function's call_js_cb).
nativeTests.test_create_tsfn_with_async_context = async () => {
const { AsyncLocalStorage } = require("node:async_hooks");
const als = new AsyncLocalStorage();
await als.run({ key: "value" }, () => {
return new Promise(resolve => {
nativeTests.test_napi_create_tsfn_async_context_frame(() => {});
setTimeout(resolve, 100);
});
});
};
module.exports = nativeTests;

View File

@@ -1998,6 +1998,127 @@ static napi_value test_napi_get_named_property_copied_string(const Napi::Callbac
return ok(env);
}
// https://github.com/oven-sh/bun/issues/25933
// When a threadsafe function is created inside AsyncLocalStorage.run(),
// the js_callback gets wrapped in AsyncContextFrame. napi_typeof must
// still report it as napi_function, not napi_object.
static napi_threadsafe_function tsfn_25933 = nullptr;
static void test_issue_25933_callback(napi_env env, napi_value js_callback,
void *context, void *data) {
napi_valuetype type;
napi_status status = napi_typeof(env, js_callback, &type);
if (status != napi_ok) {
printf("FAIL: napi_typeof returned error status %d\n", status);
} else if (type == napi_function) {
printf("PASS: napi_typeof returned napi_function\n");
} else {
printf("FAIL: napi_typeof returned %d, expected napi_function (%d)\n",
type, napi_function);
}
napi_release_threadsafe_function(tsfn_25933, napi_tsfn_release);
tsfn_25933 = nullptr;
}
static napi_value test_issue_25933(const Napi::CallbackInfo &info) {
Napi::Env env = info.Env();
Napi::HandleScope scope(env);
// The first argument is the JS callback function.
// When called inside AsyncLocalStorage.run(), Bun wraps this in
// AsyncContextFrame via withAsyncContextIfNeeded.
napi_value js_cb = info[0];
napi_value name = Napi::String::New(env, "tsfn_typeof_test");
NODE_API_CALL(env,
napi_create_threadsafe_function(
env, js_cb, nullptr, name, 0, 1, nullptr, nullptr,
nullptr, &test_issue_25933_callback, &tsfn_25933));
NODE_API_CALL(env, napi_call_threadsafe_function(tsfn_25933, nullptr,
napi_tsfn_nonblocking));
return env.Undefined();
}
// When a threadsafe function's call_js_cb receives a js_callback that is an
// AsyncContextFrame, calling napi_make_callback on it should work (not fail
// with function_expected).
static napi_threadsafe_function tsfn_make_callback = nullptr;
static void test_make_callback_tsfn_cb(napi_env env, napi_value js_callback,
void *context, void *data) {
napi_value recv;
napi_get_global(env, &recv);
napi_value result;
napi_status status = napi_make_callback(env, nullptr, recv, js_callback, 0, nullptr, &result);
if (status == napi_ok) {
printf("PASS: napi_make_callback succeeded\n");
} else {
printf("FAIL: napi_make_callback returned status %d\n", status);
}
napi_release_threadsafe_function(tsfn_make_callback, napi_tsfn_release);
tsfn_make_callback = nullptr;
}
static napi_value test_napi_make_callback_async_context_frame(const Napi::CallbackInfo &info) {
Napi::Env env = info.Env();
Napi::HandleScope scope(env);
napi_value js_cb = info[0];
napi_value name = Napi::String::New(env, "tsfn_make_callback_test");
NODE_API_CALL(env,
napi_create_threadsafe_function(
env, js_cb, nullptr, name, 0, 1, nullptr, nullptr,
nullptr, &test_make_callback_tsfn_cb, &tsfn_make_callback));
NODE_API_CALL(env, napi_call_threadsafe_function(tsfn_make_callback, nullptr,
napi_tsfn_nonblocking));
return env.Undefined();
}
// When a threadsafe function's call_js_cb receives a js_callback that is an
// AsyncContextFrame, passing it to a second napi_create_threadsafe_function
// with call_js_cb=NULL should succeed (not fail with function_expected).
static napi_threadsafe_function tsfn_create_outer = nullptr;
static void test_create_tsfn_outer_cb(napi_env env, napi_value js_callback,
void *context, void *data) {
// js_callback here is an AsyncContextFrame in Bun.
// Try to create a new threadsafe function with it and call_js_cb=NULL.
napi_value name;
napi_create_string_utf8(env, "inner_tsfn", NAPI_AUTO_LENGTH, &name);
napi_threadsafe_function inner_tsfn = nullptr;
napi_status status = napi_create_threadsafe_function(
env, js_callback, nullptr, name, 0, 1, nullptr, nullptr,
nullptr, /* call_js_cb */ nullptr, &inner_tsfn);
if (status != napi_ok) {
printf("FAIL: napi_create_threadsafe_function returned status %d\n", status);
} else {
printf("PASS: napi_create_threadsafe_function accepted AsyncContextFrame\n");
// Release immediately — we only needed to verify creation succeeds.
napi_release_threadsafe_function(inner_tsfn, napi_tsfn_release);
}
napi_release_threadsafe_function(tsfn_create_outer, napi_tsfn_release);
tsfn_create_outer = nullptr;
}
static napi_value test_napi_create_tsfn_async_context_frame(const Napi::CallbackInfo &info) {
Napi::Env env = info.Env();
Napi::HandleScope scope(env);
napi_value js_cb = info[0];
napi_value name = Napi::String::New(env, "tsfn_create_test");
NODE_API_CALL(env,
napi_create_threadsafe_function(
env, js_cb, nullptr, name, 0, 1, nullptr, nullptr,
nullptr, &test_create_tsfn_outer_cb, &tsfn_create_outer));
NODE_API_CALL(env, napi_call_threadsafe_function(tsfn_create_outer, nullptr,
napi_tsfn_nonblocking));
return env.Undefined();
}
void register_standalone_tests(Napi::Env env, Napi::Object exports) {
REGISTER_FUNCTION(env, exports, test_issue_7685);
REGISTER_FUNCTION(env, exports, test_issue_11949);
@@ -2033,6 +2154,9 @@ void register_standalone_tests(Napi::Env env, Napi::Object exports) {
REGISTER_FUNCTION(env, exports, napi_get_typeof);
REGISTER_FUNCTION(env, exports, test_external_buffer_data_lifetime);
REGISTER_FUNCTION(env, exports, test_napi_get_named_property_copied_string);
REGISTER_FUNCTION(env, exports, test_issue_25933);
REGISTER_FUNCTION(env, exports, test_napi_make_callback_async_context_frame);
REGISTER_FUNCTION(env, exports, test_napi_create_tsfn_async_context_frame);
}
} // namespace napitests

View File

@@ -798,6 +798,30 @@ describe("cleanup hooks", () => {
expect(output).toContain("napi_typeof");
});
it("should return napi_function for AsyncContextFrame in threadsafe callback", async () => {
// Test for https://github.com/oven-sh/bun/issues/25933
// When a threadsafe function is created inside AsyncLocalStorage.run(),
// the callback gets wrapped in AsyncContextFrame. napi_typeof must
// report it as napi_function, not napi_object.
const output = await checkSameOutput("test_napi_typeof_async_context_frame", []);
expect(output).toContain("PASS: napi_typeof returned napi_function");
});
it("should handle AsyncContextFrame in napi_make_callback", async () => {
// When a threadsafe function's call_js_cb receives an AsyncContextFrame
// as js_callback and passes it to napi_make_callback, it should succeed.
const output = await checkSameOutput("test_make_callback_with_async_context", []);
expect(output).toContain("PASS: napi_make_callback succeeded");
});
it("should accept AsyncContextFrame in napi_create_threadsafe_function with null call_js_cb", async () => {
// When a threadsafe function's call_js_cb receives an AsyncContextFrame
// and passes it to a second napi_create_threadsafe_function with
// call_js_cb=NULL, it should not reject with function_expected.
const output = await checkSameOutput("test_create_tsfn_with_async_context", []);
expect(output).toContain("PASS: napi_create_threadsafe_function accepted AsyncContextFrame");
});
it("should return napi_object for boxed primitives (String, Number, Boolean)", async () => {
// Regression test for https://github.com/oven-sh/bun/issues/25351
// napi_typeof was incorrectly returning napi_string for String objects (new String("hello"))

View File

@@ -0,0 +1,287 @@
/**
* Test for GitHub Issue #20875: gRPC regression - DEADLINE_EXCEEDED errors
* with streaming calls when using @grpc/grpc-js
*
* This test verifies that Bun's HTTP/2 client correctly handles:
* 1. Server streaming gRPC calls (like BatchGetDocuments)
* 2. Proper handling of streams in HALF_CLOSED_LOCAL state
*/
import * as grpc from "@grpc/grpc-js";
import * as loader from "@grpc/proto-loader";
import { afterAll, beforeAll, describe, expect, test } from "bun:test";
import { readFileSync } from "node:fs";
import { join } from "node:path";
const __dirname = import.meta.dirname;
const protoLoaderOptions = {
keepCase: true,
longs: String,
enums: String,
defaults: true,
oneofs: true,
};
function loadProtoFile(file: string) {
const packageDefinition = loader.loadSync(file, protoLoaderOptions);
return grpc.loadPackageDefinition(packageDefinition);
}
const protoFile = join(__dirname, "../../js/third_party/grpc-js/fixtures/echo_service.proto");
const echoService = loadProtoFile(protoFile).EchoService as grpc.ServiceClientConstructor;
const ca = readFileSync(join(__dirname, "../../js/third_party/grpc-js/fixtures/ca.pem"));
const key = readFileSync(join(__dirname, "../../js/third_party/grpc-js/fixtures/server1.key"));
const cert = readFileSync(join(__dirname, "../../js/third_party/grpc-js/fixtures/server1.pem"));
let server: grpc.Server;
let client: InstanceType<typeof echoService>;
let serverPort: number;
describe("gRPC streaming calls", () => {
beforeAll(async () => {
server = new grpc.Server();
// Implement both unary and streaming methods
server.addService(echoService.service, {
// Unary call - works fine in the original issue
echo(call: grpc.ServerUnaryCall<any, any>, callback: grpc.sendUnaryData<any>) {
callback(null, call.request);
},
// Server streaming - this is what BatchGetDocuments uses
echoServerStream(call: grpc.ServerWritableStream<any, any>) {
const request = call.request;
// Simulate a streaming response (like BatchGetDocuments)
// Send multiple messages with a small delay
call.write({ value: "response1", value2: 1 });
call.write({ value: "response2", value2: 2 });
call.write({ value: request.value, value2: request.value2 });
call.end();
},
// Client streaming
echoClientStream(call: grpc.ServerReadableStream<any, any>, callback: grpc.sendUnaryData<any>) {
const messages: any[] = [];
call.on("data", data => {
messages.push(data);
});
call.on("end", () => {
callback(null, { value: `received ${messages.length} messages`, value2: messages.length });
});
},
// Bidirectional streaming
echoBidiStream(call: grpc.ServerDuplexStream<any, any>) {
call.on("data", data => {
call.write(data);
});
call.on("end", () => {
call.end();
});
},
});
const serverCreds = grpc.ServerCredentials.createSsl(ca, [{ private_key: key, cert_chain: cert }], false);
await new Promise<void>((resolve, reject) => {
server.bindAsync("127.0.0.1:0", serverCreds, (err, port) => {
if (err) {
reject(err);
return;
}
serverPort = port;
resolve();
});
});
const clientCreds = grpc.credentials.createSsl(ca);
client = new echoService(`127.0.0.1:${serverPort}`, clientCreds, {
"grpc.ssl_target_name_override": "foo.test.google.fr",
"grpc.default_authority": "foo.test.google.fr",
});
});
afterAll(() => {
client?.close();
server?.forceShutdown();
});
test("unary call should work", async () => {
const result = await new Promise<any>((resolve, reject) => {
const deadline = new Date();
deadline.setSeconds(deadline.getSeconds() + 10);
client.echo({ value: "test", value2: 42 }, { deadline }, (err: Error | null, response: any) => {
if (err) reject(err);
else resolve(response);
});
});
expect(result).toEqual({ value: "test", value2: 42 });
});
test("server streaming call should work (like BatchGetDocuments)", async () => {
const messages: any[] = [];
await new Promise<void>((resolve, reject) => {
const deadline = new Date();
deadline.setSeconds(deadline.getSeconds() + 10);
const stream = client.echoServerStream({ value: "request", value2: 100 }, { deadline });
stream.on("data", (data: any) => {
messages.push(data);
});
stream.on("error", (err: Error) => {
reject(err);
});
stream.on("end", () => {
resolve();
});
});
expect(messages).toHaveLength(3);
expect(messages[0]).toEqual({ value: "response1", value2: 1 });
expect(messages[1]).toEqual({ value: "response2", value2: 2 });
expect(messages[2]).toEqual({ value: "request", value2: 100 });
});
test("client streaming call should work", async () => {
const result = await new Promise<any>((resolve, reject) => {
const deadline = new Date();
deadline.setSeconds(deadline.getSeconds() + 10);
const stream = client.echoClientStream({ deadline }, (err: Error | null, response: any) => {
if (err) reject(err);
else resolve(response);
});
stream.write({ value: "msg1", value2: 1 });
stream.write({ value: "msg2", value2: 2 });
stream.write({ value: "msg3", value2: 3 });
stream.end();
});
expect(result).toEqual({ value: "received 3 messages", value2: 3 });
});
test("bidirectional streaming call should work", async () => {
const receivedMessages: any[] = [];
await new Promise<void>((resolve, reject) => {
const deadline = new Date();
deadline.setSeconds(deadline.getSeconds() + 10);
const stream = client.echoBidiStream({ deadline });
stream.on("data", (data: any) => {
receivedMessages.push(data);
});
stream.on("error", (err: Error) => {
reject(err);
});
stream.on("end", () => {
resolve();
});
// Send some messages
stream.write({ value: "msg1", value2: 1 });
stream.write({ value: "msg2", value2: 2 });
stream.end();
});
expect(receivedMessages).toHaveLength(2);
expect(receivedMessages[0]).toEqual({ value: "msg1", value2: 1 });
expect(receivedMessages[1]).toEqual({ value: "msg2", value2: 2 });
});
test("multiple concurrent calls with mixed types (reproduces #20875)", async () => {
// This test simulates the Firestore scenario:
// 1. Multiple unary Commit calls
// 2. Followed by a server streaming BatchGetDocuments call
// The issue is that the streaming call fails with DEADLINE_EXCEEDED
const results: any[] = [];
// First, make a few unary calls (like Commit)
for (let i = 0; i < 3; i++) {
const result = await new Promise<any>((resolve, reject) => {
const deadline = new Date();
deadline.setSeconds(deadline.getSeconds() + 10);
client.echo({ value: `commit${i}`, value2: i }, { deadline }, (err: Error | null, response: any) => {
if (err) reject(err);
else resolve(response);
});
});
results.push(result);
}
expect(results).toHaveLength(3);
// Now make a server streaming call (like BatchGetDocuments)
const streamingResults: any[] = [];
await new Promise<void>((resolve, reject) => {
const deadline = new Date();
deadline.setSeconds(deadline.getSeconds() + 10);
const stream = client.echoServerStream({ value: "batchGet", value2: 999 }, { deadline });
stream.on("data", (data: any) => {
streamingResults.push(data);
});
stream.on("error", (err: Error) => {
reject(err);
});
stream.on("end", () => {
resolve();
});
});
expect(streamingResults).toHaveLength(3);
expect(streamingResults[2]).toEqual({ value: "batchGet", value2: 999 });
});
test("rapid successive streaming calls", async () => {
// Make many streaming calls in rapid succession
const promises = [];
for (let i = 0; i < 10; i++) {
promises.push(
new Promise<any[]>((resolve, reject) => {
const messages: any[] = [];
const deadline = new Date();
deadline.setSeconds(deadline.getSeconds() + 10);
const stream = client.echoServerStream({ value: `batch${i}`, value2: i }, { deadline });
stream.on("data", (data: any) => {
messages.push(data);
});
stream.on("error", (err: Error) => {
reject(err);
});
stream.on("end", () => {
resolve(messages);
});
}),
);
}
const results = await Promise.all(promises);
expect(results).toHaveLength(10);
for (let i = 0; i < 10; i++) {
expect(results[i]).toHaveLength(3);
expect(results[i][2]).toEqual({ value: `batch${i}`, value2: i });
}
});
});

View File

@@ -13,8 +13,9 @@ test("bun npm placeholder script should exit with error if postinstall hasn't ru
// This is the placeholder script content that gets written to bin/bun.exe
// during npm package build (see packages/bun-release/scripts/upload-npm.ts)
const placeholderScript = `#!/bin/sh
echo "Error: Bun's postinstall script was not run." >&2
// Note: no shebang — a #!/bin/sh shebang breaks Windows because npm's cmd-shim
// bakes the interpreter path into .ps1/.cmd wrappers before postinstall runs.
const placeholderScript = `echo "Error: Bun's postinstall script was not run." >&2
echo "" >&2
echo "This occurs when using --ignore-scripts during installation, or when using a" >&2
echo "package manager like pnpm that does not run postinstall scripts by default." >&2
@@ -38,9 +39,11 @@ exit 1
});
expect(chmodExitCode).toBe(0);
// Run the placeholder script
// Run via sh explicitly — in real usage, bash/zsh automatically fall back to sh
// interpretation when execve returns ENOEXEC for a shebang-less executable file.
// Bun.spawn doesn't have that fallback, so we invoke sh directly here.
await using proc = Bun.spawn({
cmd: ["./bun-placeholder", "--version"],
cmd: ["sh", "./bun-placeholder"],
cwd: String(dir),
env: bunEnv,
stdout: "pipe",

View File

@@ -0,0 +1,125 @@
// Test for https://github.com/oven-sh/bun/issues/26536
// diagnostics_channel subscribers should persist across preload and app scripts
import { expect, test } from "bun:test";
import { bunEnv, bunExe, tempDir } from "harness";
test("diagnostics_channel subscribers persist from preload to main script", async () => {
// Create temp directory with test files
using dir = tempDir("issue-26536", {
"preload.mjs": `
import dc from 'node:diagnostics_channel';
const channel = dc.channel('test.channel.26536');
channel.subscribe((msg) => {
console.log("HOOK CALLED:", JSON.stringify(msg));
});
console.log("[preload] hasSubscribers:", channel.hasSubscribers);
`,
"app.mjs": `
import dc from 'node:diagnostics_channel';
const channel = dc.channel('test.channel.26536');
console.log("[app] hasSubscribers:", channel.hasSubscribers);
// Publish a message - should trigger the subscriber from preload
channel.publish({ test: true });
`,
});
// Run with preload
await using proc = Bun.spawn({
cmd: [bunExe(), "--preload", "./preload.mjs", "./app.mjs"],
cwd: String(dir),
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(stderr).toBe("");
expect(stdout).toContain("[preload] hasSubscribers: true");
expect(stdout).toContain("[app] hasSubscribers: true");
expect(stdout).toContain('HOOK CALLED: {"test":true}');
expect(exitCode).toBe(0);
});
test("diagnostics_channel subscribers persist with CJS preload", async () => {
// Create temp directory with test files
using dir = tempDir("issue-26536-cjs", {
"preload.cjs": `
const dc = require('node:diagnostics_channel');
const channel = dc.channel('test.channel.26536.cjs');
channel.subscribe((msg) => {
console.log("HOOK CALLED:", JSON.stringify(msg));
});
console.log("[preload] hasSubscribers:", channel.hasSubscribers);
`,
"app.mjs": `
import dc from 'node:diagnostics_channel';
const channel = dc.channel('test.channel.26536.cjs');
console.log("[app] hasSubscribers:", channel.hasSubscribers);
// Publish a message - should trigger the subscriber from preload
channel.publish({ fromApp: "hello" });
`,
});
// Run with CJS preload
await using proc = Bun.spawn({
cmd: [bunExe(), "--preload", "./preload.cjs", "./app.mjs"],
cwd: String(dir),
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(stderr).toBe("");
expect(stdout).toContain("[preload] hasSubscribers: true");
expect(stdout).toContain("[app] hasSubscribers: true");
expect(stdout).toContain('HOOK CALLED: {"fromApp":"hello"}');
expect(exitCode).toBe(0);
});
test("diagnostics_channel channel() returns same instance", async () => {
// Create temp directory with test files
using dir = tempDir("issue-26536-same-instance", {
"preload.mjs": `
import dc from 'node:diagnostics_channel';
const channel = dc.channel('test.channel.26536.same');
channel.subscribe(() => {});
// Store reference on globalThis
globalThis.__testChannel = channel;
console.log("[preload] stored channel");
`,
"app.mjs": `
import dc from 'node:diagnostics_channel';
const channel = dc.channel('test.channel.26536.same');
console.log("[app] same channel:", channel === globalThis.__testChannel);
`,
});
// Run with preload
await using proc = Bun.spawn({
cmd: [bunExe(), "--preload", "./preload.mjs", "./app.mjs"],
cwd: String(dir),
env: bunEnv,
stdout: "pipe",
stderr: "pipe",
});
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(stderr).toBe("");
expect(stdout).toContain("[preload] stored channel");
expect(stdout).toContain("[app] same channel: true");
expect(exitCode).toBe(0);
});