Compare commits

5 Commits

Author SHA1 Message Date
Claude Bot
153afee569 fix(cli): error when using --bytecode with cross-compilation
Fixes #24144

When using `--bytecode` with a cross-compilation target like
`--target bun-linux-x64-musl`, the bytecode would be generated on the
host machine but would be incompatible with the target platform because
the JSC bytecode format depends on the specific build configuration
(platform, architecture, libc).

Previously, this would compile successfully but cause a segfault when
running the resulting binary on the target platform.

Now, Bun will error at build time with a clear message explaining that
--bytecode is not supported with cross-compilation.
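
As an illustration (the file name and `--outfile` value here are
placeholders, not part of this commit):

```ts
// Sketch: trigger the guarded case by invoking the CLI via Bun.spawnSync.
const result = Bun.spawnSync({
  cmd: ["bun", "build", "--compile", "--bytecode",
        "--target=bun-linux-x64-musl", "index.ts", "--outfile=app"],
  stderr: "pipe",
});

// Before this fix: exit code 0, then a segfault running the binary on musl.
// After this fix: a build-time error and a non-zero exit code.
console.log(result.exitCode); // 1
console.log(result.stderr.toString()); // "--bytecode is not supported with cross-compilation..."
```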

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 22:05:01 +00:00
wovw
716801e92d gitignore: add .direnv dir (#26198)
### What does this PR do?

The `.direnv` folder is created by [direnv](https://direnv.net/) when
using `use flake` in `.envrc` to automatically load the Nix development
shell. Since the repo already includes a flake.nix, developers on NixOS
commonly use direnv (via nix-direnv) to auto-load the environment. This
folder contains cached environment data and should not be committed.
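
For reference, the `.envrc` in question is typically a one-liner (a
sketch; this file is itself gitignored rather than committed):

```
# .envrc — with nix-direnv installed, auto-loads the flake dev shell
use flake
```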
2026-01-18 00:17:14 -08:00
wovw
939f5cf7af fix(nix): disable fortify hardening for debug builds (#26199)
### What does this PR do?

NixOS enables security hardening flags by default in `mkShell` /
`devShells`, e.g. `_FORTIFY_SOURCE=2`. This flag adds runtime buffer
overflow checks but requires compiler optimization (`-O1` or higher) to
work, since it needs to inline functions to insert checks.
Debug builds use `-O0` (no optimization), which causes this compilation
error:
`error: _FORTIFY_SOURCE requires compiling with optimization (-O)
[-Werror,-W#warnings]`

This patch uses the standard Nix way to disable this specific flag while
keeping other hardening features intact. It doesn't affect release
builds since it's scoped to `devShells`.
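
As a sketch of the mechanism (assuming a `pkgs.mkShell` dev shell; the
actual change is the one-line `hardeningDisable` hunk in the flake.nix
diff below):

```nix
pkgs.mkShell {
  # Drop only the fortify flag (-D_FORTIFY_SOURCE=2), which requires -O1+;
  # all other NixOS hardening flags stay enabled.
  hardeningDisable = [ "fortify" ];
}
```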

### How did you verify your code works?

`bun bd test` successfully runs test cases.
2026-01-18 00:17:01 -08:00
SUZUKI Sosuke
496aeb97f9 refactor(wrapAnsi): use WTF::find for character searches (#26200)
## Summary

This PR addresses the review feedback from #26061
([comment](https://github.com/oven-sh/bun/pull/26061#discussion_r2697257836))
requesting the use of `WTF::find` for newline searches in
`wrapAnsi.cpp`.

## Changes

### 1. CRLF Normalization (lines 628-639)
Replaced manual loop with `WTF::findNextNewline` which provides
SIMD-optimized detection for `\r`, `\n`, and `\r\n` sequences.

**Before:**
```cpp
for (size_t i = 0; i < input.size(); ++i) {
    if (i + 1 < input.size() && input[i] == '\r' && input[i + 1] == '\n') {
        normalized.append(static_cast<Char>('\n'));
        i++;
    } else {
        normalized.append(input[i]);
    }
}
```

**After:**
```cpp
size_t pos = 0;
while (pos < input.size()) {
    auto newline = WTF::findNextNewline(input, pos);
    if (newline.position == WTF::notFound) {
        normalized.append(std::span { input.data() + pos, input.size() - pos });
        break;
    }
    if (newline.position > pos)
        normalized.append(std::span { input.data() + pos, newline.position - pos });
    normalized.append(static_cast<Char>('\n'));
    pos = newline.position + newline.length;
}
```

### 2. Word Length Calculation (lines 524-533)
Replaced manual loop with `WTF::find` for space character detection.

**Before:**
```cpp
for (const Char* it = lineStart; it <= lineEnd; ++it) {
    if (it == lineEnd || *it == ' ') {
        // word boundary logic
    }
}
```

**After:**
```cpp
auto lineSpan = std::span<const Char>(lineStart, lineEnd);
size_t wordStartIdx = 0;
while (wordStartIdx <= lineSpan.size()) {
    size_t spacePos = WTF::find(lineSpan, static_cast<Char>(' '), wordStartIdx);
    // word boundary logic using spacePos
}
```

## Benchmark Results

Tested on an Apple M4 Max. No performance regression was observed; most
benchmarks show slight improvements.

| Benchmark | Before | After | Change |
|-----------|--------|-------|--------|
| Short text (45 chars) | 613 ns | 583 ns | -4.9% |
| Medium text (810 chars) | 10.85 µs | 10.31 µs | -5.0% |
| Long text (8100 chars) | 684 µs | 102 µs | -85% * |
| Colored short | 1.26 µs | 806 ns | -36% |
| Colored medium | 19.24 µs | 13.80 µs | -28% |
| Japanese (full-width) | 7.74 µs | 7.43 µs | -4.0% |
| Emoji text | 9.35 µs | 9.27 µs | -0.9% |
| Hyperlink (OSC 8) | 5.73 µs | 5.58 µs | -2.6% |

\* Large variance in baseline measurement

## Testing

- All 35 existing tests pass
- Manual verification of CRLF normalization and word wrapping edge cases

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-17 23:43:02 -08:00
robobun
3b5f2fe756 chore(deps): update BoringSSL fork to latest upstream (#26212)
## Summary

Updates the BoringSSL fork to the latest upstream (337 commits since
last update) with bug fixes for Node.js crypto compatibility.

### Upstream BoringSSL Changes (337 commits)

| Category | Count |
|----------|-------|
| API Changes (including namespacing) | 42 |
| Code Cleanup/Refactoring | 35 |
| Testing/CI | 32 |
| Build System (Bazel, CMake) | 27 |
| Bug Fixes | 25 |
| Post-Quantum Cryptography | 14 |
| TLS/SSL Changes | 12 |
| Rust Bindings/Wrappers | 9 |
| Performance Improvements | 8 |
| Documentation | 8 |

#### Highlights

**Post-Quantum Cryptography**
- ML-DSA (Module-Lattice Digital Signature Algorithm): Full EVP
integration, Wycheproof tests, external mu verification
- SLH-DSA: Implementation of pure SLH-DSA-SHAKE-256f
- Merkle Tree Certificates: New support for verifying signatureless MTCs

**Major API Changes**
- New `CRYPTO_IOVEC`-based AEAD APIs for zero-copy I/O across all
ciphers
- Massive namespacing effort moving internal symbols into the `bssl`
namespace
- `bssl::Span` modernization to match `std::span` behavior

**TLS/SSL**
- Added `TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256` support
- HMAC on SHA-384 for TLS 1.3
- Improved Lucky 13 mitigation

**Build System**
- Bazel 8.x and 9.0.0 compatibility
- CI upgrades: Ubuntu 24.04, Android NDK r29

---

### Bun-specific Patches (in oven-sh/boringssl)

1. **Fix SHA512-224 EVP final buffer size** (`digests.cc.inc`)
   - `BCM_sha512_224_final` writes 32 bytes, but `EVP_MD.md_size` is 28 bytes
   - Now uses a temporary buffer to avoid overrunning the caller's buffer

2. **Fix `EVP_do_all_sorted` to return only lowercase names** (`evp_do_all.cc`)
   - `EVP_CIPHER_do_all_sorted` and `EVP_MD_do_all_sorted` now return only lowercase names
   - Matches Node.js behavior for `crypto.getCiphers()` and `crypto.getHashes()`
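
As a quick sanity check of both patched behaviors (a sketch using the
`node:crypto` compatibility layer in Bun):

```ts
import { createHash, getCiphers, getHashes } from "node:crypto";

// SHA-512/224 digests are 28 bytes (56 hex chars); the EVP fix keeps the
// 32-byte intermediate write inside BoringSSL rather than in the caller's
// 28-byte output buffer.
console.log(createHash("sha512-224").update("hello").digest("hex").length); // 56

// With the EVP_do_all_sorted fix, only lowercase names are reported.
console.log(getHashes().every(name => name === name.toLowerCase())); // true
console.log(getCiphers().every(name => name === name.toLowerCase())); // true
```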

---

### Changes in Bun

- Updated BoringSSL commit hash to
`4f4f5ef8ebc6e23cbf393428f0ab1b526773f7ac`
- Removed `ignoreSHA512_224` parameter from `ncrypto::getDigestByName()`
to enable SHA512-224 support
- Removed special SHA512-224 buffer handling in `JSHash.cpp` (no longer
needed after BoringSSL fix)

## Test plan
- [x] `crypto.createHash('sha512-224')` works correctly
- [x] `crypto.getHashes()` returns lowercase names (md4, md5, sha1,
sha256, etc.)
- [x] `crypto.getCiphers()` returns lowercase names (aes-128-cbc,
aes-256-gcm, etc.)
- [x] `test/regression/issue/crypto-names.test.ts` passes
- [x] All CI tests pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-17 23:39:04 -08:00
15 changed files with 161 additions and 373 deletions

.gitignore
View File

@@ -1,4 +1,5 @@
 .claude/settings.local.json
+.direnv
 .DS_Store
 .env
 .envrc

View File

@@ -1,19 +0,0 @@
import { bench, run } from "../runner.mjs";

const N100 = Array.from({ length: 100 }, (_, i) => `chunk-${i}`);
const N1000 = Array.from({ length: 1000 }, (_, i) => `data-${i}`);
const N10000 = Array.from({ length: 10000 }, (_, i) => `x${i}`);

bench("new Blob([100 strings])", () => new Blob(N100));
bench("new Blob([1000 strings])", () => new Blob(N1000));
bench("new Blob([10000 strings])", () => new Blob(N10000));

// Mixed: strings + buffers
const mixed = [];
for (let i = 0; i < 100; i++) {
  mixed.push(`text-${i}`);
  mixed.push(new Uint8Array([i, i + 1, i + 2]));
}
bench("new Blob([100 strings + 100 buffers])", () => new Blob(mixed));

await run();

View File

@@ -4,7 +4,7 @@ register_repository(
   REPOSITORY
     oven-sh/boringssl
   COMMIT
-    f1ffd9e83d4f5c28a9c70d73f9a4e6fcf310062f
+    4f4f5ef8ebc6e23cbf393428f0ab1b526773f7ac
 )
 register_cmake_command(

View File

@@ -131,6 +131,7 @@
           stdenv = pkgs.clangStdenv;
         }) {
           inherit packages;
+          hardeningDisable = [ "fortify" ];
           shellHook = ''
             # Set up build environment

View File

@@ -1,28 +0,0 @@
pub const ContiguousArrayView = struct {
    elements: [*]const JSValue,
    len: u32,
    i: u32 = 0,

    pub fn init(value: JSValue, global: *JSGlobalObject) ?ContiguousArrayView {
        var length: u32 = 0;
        const ptr = Bun__JSArray__getContiguousVector(value, global, &length);
        if (ptr == null) return null;
        return .{ .elements = @ptrCast(ptr.?), .len = length };
    }

    pub inline fn next(self: *ContiguousArrayView) ?JSValue {
        if (self.i >= self.len) return null;
        const val = self.elements[self.i];
        self.i += 1;
        if (val == .zero) return .js_undefined; // hole
        return val;
    }

    extern fn Bun__JSArray__getContiguousVector(JSValue, *JSGlobalObject, *u32) ?[*]const JSValue;
};

const bun = @import("bun");
const jsc = bun.jsc;
const JSGlobalObject = jsc.JSGlobalObject;
const JSValue = jsc.JSValue;

View File

@@ -46,7 +46,6 @@
 #include "JavaScriptCore/JSArray.h"
 #include "JavaScriptCore/JSArrayBuffer.h"
 #include "JavaScriptCore/JSArrayInlines.h"
-#include "JavaScriptCore/JSGlobalObjectInlines.h"
 #include "JavaScriptCore/JSFunction.h"
 #include "JavaScriptCore/ErrorInstanceInlines.h"
 #include "JavaScriptCore/BigIntObject.h"
@@ -6110,45 +6109,6 @@ CPP_DECL [[ZIG_EXPORT(nothrow)]] unsigned int Bun__CallFrame__getLineNumber(JSC:
     return lineColumn.line;
 }
-extern "C" const JSC::EncodedJSValue* Bun__JSArray__getContiguousVector(
-    JSC::EncodedJSValue encodedValue,
-    JSC::JSGlobalObject* globalObject,
-    uint32_t* outLength)
-{
-    JSC::JSValue value = JSC::JSValue::decode(encodedValue);
-    if (!value.isCell())
-        return nullptr;
-    JSC::JSCell* cell = value.asCell();
-    if (!isJSArray(cell))
-        return nullptr;
-    JSC::JSArray* array = jsCast<JSC::JSArray*>(cell);
-    JSC::IndexingType indexing = array->indexingType();
-    // Only support Int32 and Contiguous shapes (not Double, ArrayStorage, etc.)
-    if (!hasInt32(indexing) && !hasContiguous(indexing))
-        return nullptr;
-    // Verify prototype chain is healthy and no indexed accessors are installed
-    if (!array->canDoFastIndexedAccess())
-        return nullptr;
-    ASSERT(!globalObject->isHavingABadTime());
-    JSC::Butterfly* butterfly = array->butterfly();
-    uint32_t length = butterfly->publicLength();
-    ASSERT(length <= butterfly->vectorLength());
-    if (length == 0)
-        return nullptr;
-    *outLength = length;
-    return reinterpret_cast<const JSC::EncodedJSValue*>(butterfly->contiguous().data());
-}
 extern "C" void JSC__ArrayBuffer__ref(JSC::ArrayBuffer* self) { self->ref(); }
 extern "C" void JSC__ArrayBuffer__deref(JSC::ArrayBuffer* self) { self->deref(); }
 extern "C" void JSC__ArrayBuffer__asBunArrayBuffer(JSC::ArrayBuffer* self, Bun__ArrayBuffer* out)

View File

@@ -1901,7 +1901,7 @@ DataPointer DHPointer::stateless(const EVPKeyPointer& ourKey,
 // ============================================================================
 // KDF
-const EVP_MD* getDigestByName(const WTF::StringView name, bool ignoreSHA512_224)
+const EVP_MD* getDigestByName(const WTF::StringView name)
 {
     // Historically, "dss1" and "DSS1" were DSA aliases for SHA-1
     // exposed through the public API.
@@ -1955,9 +1955,6 @@ const EVP_MD* getDigestByName(const WTF::StringView name, bool ignoreSHA512_224)
             return EVP_sha512();
         }
         if (WTF::equalIgnoringASCIICase(moreBits, "/224"_s)) {
-            if (ignoreSHA512_224) {
-                return nullptr;
-            }
             return EVP_sha512_224();
         }
         if (WTF::equalIgnoringASCIICase(moreBits, "/256"_s)) {
@@ -1979,10 +1976,6 @@ const EVP_MD* getDigestByName(const WTF::StringView name, bool ignoreSHA512_224)
         }
     }
-    if (ignoreSHA512_224 && WTF::equalIgnoringASCIICase(name, "sha512-224"_s)) {
-        return nullptr;
-    }
     // if (name == "ripemd160WithRSA"_s || name == "RSA-RIPEMD160"_s) {
     //     return EVP_ripemd160();
     // }

View File

@@ -1575,7 +1575,7 @@ Buffer<char> ExportChallenge(const char* input, size_t length);
 // ============================================================================
 // KDF
-const EVP_MD* getDigestByName(const WTF::StringView name, bool ignoreSHA512_224 = false);
+const EVP_MD* getDigestByName(const WTF::StringView name);
 const EVP_CIPHER* getCipherByName(const WTF::StringView name);
 // Verify that the specified HKDF output length is valid for the given digest.

View File

@@ -251,15 +251,7 @@ JSC_DEFINE_HOST_FUNCTION(jsHashProtoFuncDigest, (JSC::JSGlobalObject * lexicalGl
     // Only compute the digest if it hasn't been cached yet
     if (!hash->m_digest && len > 0) {
         const EVP_MD* md = hash->m_ctx.getDigest();
-        uint32_t bufLen = len;
-        if (md == EVP_sha512_224()) {
-            // SHA-512/224 expects buffer length of length % 8. can be truncated afterwards
-            bufLen = SHA512_224_DIGEST_BUFFER_LENGTH;
-        }
-        auto data = hash->m_ctx.digestFinal(bufLen);
+        auto data = hash->m_ctx.digestFinal(len);
         if (!data) {
             throwCryptoError(lexicalGlobalObject, scope, ERR_get_error(), "Failed to finalize digest"_s);
             return {};
@@ -325,7 +317,7 @@ JSC_DEFINE_HOST_FUNCTION(constructHash, (JSC::JSGlobalObject * globalObject, JSC
         WTF::String algorithm = algorithmOrHashInstanceValue.toWTFString(globalObject);
         RETURN_IF_EXCEPTION(scope, {});
-        md = ncrypto::getDigestByName(algorithm, true);
+        md = ncrypto::getDigestByName(algorithm);
         if (!md) {
             zigHasher = ExternZigHash::getByName(zigGlobalObject, algorithm);
         }

View File

@@ -518,25 +518,32 @@ static void processLine(const Char* lineStart, const Char* lineEnd, size_t colum
         return;
     }
-    // Calculate word lengths
+    // Calculate word lengths using WTF::find for space detection
     Vector<size_t> wordLengths;
-    const Char* wordStart = lineStart;
-    for (const Char* it = lineStart; it <= lineEnd; ++it) {
-        if (it == lineEnd || *it == ' ') {
-            if (wordStart < it) {
-                wordLengths.append(stringWidth(wordStart, it, options.ambiguousIsNarrow));
-            } else {
-                wordLengths.append(0);
-            }
-            wordStart = it + 1;
-        }
+    auto lineSpan = std::span<const Char>(lineStart, lineEnd);
+    size_t wordStartIdx = 0;
+    while (wordStartIdx <= lineSpan.size()) {
+        size_t spacePos = WTF::find(lineSpan, static_cast<Char>(' '), wordStartIdx);
+        size_t wordEndIdx = (spacePos == WTF::notFound) ? lineSpan.size() : spacePos;
+        if (wordStartIdx < wordEndIdx) {
+            wordLengths.append(stringWidth(lineSpan.data() + wordStartIdx,
+                lineSpan.data() + wordEndIdx,
+                options.ambiguousIsNarrow));
+        } else {
+            wordLengths.append(0);
+        }
+        if (spacePos == WTF::notFound)
+            break;
+        wordStartIdx = wordEndIdx + 1;
     }
     // Start with empty first row
     rows.append(Row<Char>());
     // Process each word
-    wordStart = lineStart;
+    const Char* wordStart = lineStart;
     size_t wordIndex = 0;
     for (const Char* it = lineStart; it <= lineEnd; ++it) {
@@ -625,17 +632,24 @@ static WTF::String wrapAnsiImpl(std::span<const Char> input, size_t columns, con
         return result.toString();
     }
-    // Normalize \r\n to \n
+    // Normalize \r\n to \n using WTF::findNextNewline
     Vector<Char> normalized;
     normalized.reserveCapacity(input.size());
-    for (size_t i = 0; i < input.size(); ++i) {
-        if (i + 1 < input.size() && input[i] == '\r' && input[i + 1] == '\n') {
-            normalized.append(static_cast<Char>('\n'));
-            i++; // Skip next char
-        } else {
-            normalized.append(input[i]);
+    size_t pos = 0;
+    while (pos < input.size()) {
+        auto newline = WTF::findNextNewline(input, pos);
+        if (newline.position == WTF::notFound) {
+            // Append remaining content
+            normalized.append(std::span { input.data() + pos, input.size() - pos });
+            break;
         }
+        // Append content before newline
+        if (newline.position > pos)
+            normalized.append(std::span { input.data() + pos, newline.position - pos });
+        // Always append \n regardless of original (\r, \n, or \r\n)
+        normalized.append(static_cast<Char>('\n'));
+        pos = newline.position + newline.length;
     }
     // Process each line separately

View File

@@ -56,7 +56,6 @@ pub const DeferredError = @import("./bindings/DeferredError.zig").DeferredError;
 pub const GetterSetter = @import("./bindings/GetterSetter.zig").GetterSetter;
 pub const JSArray = @import("./bindings/JSArray.zig").JSArray;
 pub const JSArrayIterator = @import("./bindings/JSArrayIterator.zig").JSArrayIterator;
-pub const ContiguousArrayView = @import("./bindings/ContiguousArrayView.zig").ContiguousArrayView;
 pub const JSCell = @import("./bindings/JSCell.zig").JSCell;
 pub const JSFunction = @import("./bindings/JSFunction.zig").JSFunction;
 pub const JSGlobalObject = @import("./bindings/JSGlobalObject.zig").JSGlobalObject;

View File

@@ -3967,138 +3967,74 @@ fn fromJSWithoutDeferGC(
},
.Array, .DerivedArray => {
if (jsc.ContiguousArrayView.init(current, global)) |view_init| {
// Fast path: direct butterfly memory access
var fast_view = view_init;
try stack.ensureUnusedCapacity(fast_view.len);
var any_arrays = false;
while (fast_view.next()) |item| {
if (item.isUndefinedOrNull()) continue;
var iter = try jsc.JSArrayIterator.init(current, global);
try stack.ensureUnusedCapacity(iter.len);
var any_arrays = false;
while (try iter.next()) |item| {
if (item.isUndefinedOrNull()) continue;
if (!any_arrays) {
switch (item.jsTypeLoose()) {
.NumberObject,
.Cell,
.String,
.StringObject,
.DerivedStringObject,
=> {
var sliced = try item.toSlice(global, bun.default_allocator);
// When it's a string or ArrayBuffer inside an array, we can avoid the extra push/pop
// we only really want this for nested arrays
// However, we must preserve the order
// That means if there are any arrays
// we have to restart the loop
if (!any_arrays) {
switch (item.jsTypeLoose()) {
.NumberObject,
.Cell,
.String,
.StringObject,
.DerivedStringObject,
=> {
var sliced = try item.toSlice(global, bun.default_allocator);
const allocator = sliced.allocator.get();
could_have_non_ascii = could_have_non_ascii or !sliced.allocator.isWTFAllocator();
joiner.push(sliced.slice(), allocator);
continue;
},
.ArrayBuffer,
.Int8Array,
.Uint8Array,
.Uint8ClampedArray,
.Int16Array,
.Uint16Array,
.Int32Array,
.Uint32Array,
.Float16Array,
.Float32Array,
.Float64Array,
.BigInt64Array,
.BigUint64Array,
.DataView,
=> {
could_have_non_ascii = true;
var buf = item.asArrayBuffer(global).?;
joiner.pushStatic(buf.byteSlice());
continue;
},
.Array, .DerivedArray => {
any_arrays = true;
could_have_non_ascii = true;
break;
},
.DOMWrapper => {
if (item.as(Blob)) |blob| {
could_have_non_ascii = could_have_non_ascii or blob.charset != .all_ascii;
joiner.pushStatic(blob.sharedView());
continue;
} else {
const sliced = try current.toSliceClone(global);
const allocator = sliced.allocator.get();
could_have_non_ascii = could_have_non_ascii or !sliced.allocator.isWTFAllocator();
could_have_non_ascii = could_have_non_ascii or allocator != null;
joiner.push(sliced.slice(), allocator);
continue;
},
.ArrayBuffer,
.Int8Array,
.Uint8Array,
.Uint8ClampedArray,
.Int16Array,
.Uint16Array,
.Int32Array,
.Uint32Array,
.Float16Array,
.Float32Array,
.Float64Array,
.BigInt64Array,
.BigUint64Array,
.DataView,
=> {
could_have_non_ascii = true;
var buf = item.asArrayBuffer(global).?;
joiner.pushStatic(buf.byteSlice());
continue;
},
.Array, .DerivedArray => {
any_arrays = true;
could_have_non_ascii = true;
break;
},
.DOMWrapper => {
if (item.as(Blob)) |blob| {
could_have_non_ascii = could_have_non_ascii or blob.charset != .all_ascii;
joiner.pushStatic(blob.sharedView());
continue;
} else {
const sliced = try current.toSliceClone(global);
const allocator = sliced.allocator.get();
could_have_non_ascii = could_have_non_ascii or allocator != null;
joiner.push(sliced.slice(), allocator);
}
},
else => {},
}
}
},
else => {},
}
stack.appendAssumeCapacity(item);
}
} else {
// Slow path fallback: use indexed access
var iter = try jsc.JSArrayIterator.init(current, global);
try stack.ensureUnusedCapacity(iter.len);
var any_arrays = false;
while (try iter.next()) |item| {
if (item.isUndefinedOrNull()) continue;
if (!any_arrays) {
switch (item.jsTypeLoose()) {
.NumberObject,
.Cell,
.String,
.StringObject,
.DerivedStringObject,
=> {
var sliced = try item.toSlice(global, bun.default_allocator);
const allocator = sliced.allocator.get();
could_have_non_ascii = could_have_non_ascii or !sliced.allocator.isWTFAllocator();
joiner.push(sliced.slice(), allocator);
continue;
},
.ArrayBuffer,
.Int8Array,
.Uint8Array,
.Uint8ClampedArray,
.Int16Array,
.Uint16Array,
.Int32Array,
.Uint32Array,
.Float16Array,
.Float32Array,
.Float64Array,
.BigInt64Array,
.BigUint64Array,
.DataView,
=> {
could_have_non_ascii = true;
var buf = item.asArrayBuffer(global).?;
joiner.pushStatic(buf.byteSlice());
continue;
},
.Array, .DerivedArray => {
any_arrays = true;
could_have_non_ascii = true;
break;
},
.DOMWrapper => {
if (item.as(Blob)) |blob| {
could_have_non_ascii = could_have_non_ascii or blob.charset != .all_ascii;
joiner.pushStatic(blob.sharedView());
continue;
} else {
const sliced = try current.toSliceClone(global);
const allocator = sliced.allocator.get();
could_have_non_ascii = could_have_non_ascii or allocator != null;
joiner.push(sliced.slice(), allocator);
}
},
else => {},
}
}
stack.appendAssumeCapacity(item);
}
stack.appendAssumeCapacity(item);
}
},

View File

@@ -995,6 +995,12 @@ pub fn parse(allocator: std.mem.Allocator, ctx: Command.Context, comptime cmd: C
                 Output.errGeneric("Unsupported compile target: {f}\n", .{ctx.bundler_options.compile_target});
                 Global.exit(1);
             }
+            // Bytecode is not portable across different platforms/architectures/libcs
+            // because JSC bytecode format depends on the specific build configuration.
+            if (ctx.bundler_options.bytecode and !ctx.bundler_options.compile_target.isDefault()) {
+                Output.errGeneric("--bytecode is not supported with cross-compilation. The target platform ({f}) differs from the host platform.", .{ctx.bundler_options.compile_target});
+                Global.exit(1);
+            }
             opts.target = .bun;
             break :brk;
         }

View File

@@ -1,121 +0,0 @@
import { expect, test } from "bun:test";

test("basic string array", async () => {
  const blob = new Blob(["hello", " ", "world"]);
  expect(await blob.text()).toBe("hello world");
});

test("large array (10000 elements)", async () => {
  const parts = Array.from({ length: 10000 }, (_, i) => `${i},`);
  const blob = new Blob(parts);
  const text = await blob.text();
  expect(text).toBe(parts.join(""));
});

test("array with holes is handled", async () => {
  const arr = ["a", , "b", , "c"] as unknown as string[];
  const blob = new Blob(arr);
  // holes become undefined which are skipped
  expect(await blob.text()).toBe("abc");
});

test("undefined and null elements are skipped", async () => {
  const blob = new Blob(["start", undefined as any, null as any, "end"]);
  expect(await blob.text()).toBe("startend");
});

test("Proxy array is rejected", async () => {
  const arr = new Proxy(["a", "b", "c"], {
    get(target, prop) {
      return Reflect.get(target, prop);
    },
  });
  expect(() => new Blob(arr as any)).toThrow("new Blob() expects an Array");
});

test("prototype getter causes slow path fallback", async () => {
  const arr = ["x", "y", "z"];
  Object.defineProperty(Array.prototype, "1000", {
    get() {
      return "intercepted";
    },
    configurable: true,
  });
  try {
    const blob = new Blob(arr);
    expect(await blob.text()).toBe("xyz");
  } finally {
    delete (Array.prototype as any)["1000"];
  }
});

test("nested arrays in blob parts", async () => {
  // Nested arrays are not valid BlobParts per spec; elements before
  // the nested array are processed inline
  const blob = new Blob(["before", ["a", "b"] as any, "after"]);
  expect(await blob.text()).toBe("before");
});

test("mixed types: string + TypedArray + Blob", async () => {
  const innerBlob = new Blob(["inner"]);
  const arr = ["start-", new Uint8Array([65, 66, 67]), innerBlob, "-end"];
  const blob = new Blob(arr as any);
  expect(await blob.text()).toBe("start-ABCinner-end");
});

test("toString side effects with custom objects", async () => {
  const order: number[] = [];
  const items = [1, 2, 3].map(n => ({
    toString() {
      order.push(n);
      return `item${n}`;
    },
  }));
  const blob = new Blob(items as any);
  // Objects with toString are processed via stack (LIFO order)
  expect(await blob.text()).toBe("item3item2item1");
  expect(order).toEqual([3, 2, 1]);
});

test("empty array", async () => {
  const blob = new Blob([]);
  expect(blob.size).toBe(0);
  expect(await blob.text()).toBe("");
});

test("DerivedArray (class extending Array)", async () => {
  class MyArray extends Array {
    constructor(...items: any[]) {
      super(...items);
    }
  }
  const arr = new MyArray("hello", " ", "derived");
  const blob = new Blob(arr);
  expect(await blob.text()).toBe("hello derived");
});

test("COW (Copy-on-Write) array from literal", async () => {
  // Array literals may start as COW in JSC
  const blob = new Blob(["cow", "test"]);
  expect(await blob.text()).toBe("cowtest");
});

test("frozen array works correctly", async () => {
  const arr = Object.freeze(["frozen", "-", "array"]);
  const blob = new Blob(arr as any);
  expect(await blob.text()).toBe("frozen-array");
});

test("sparse array (ArrayStorage) uses slow path correctly", async () => {
  const arr: string[] = [];
  arr[0] = "first";
  arr[100] = "last";
  const blob = new Blob(arr);
  const text = await blob.text();
  expect(text).toBe("firstlast");
});

test("single-element array optimization", async () => {
  const blob = new Blob(["only"]);
  expect(await blob.text()).toBe("only");
});

View File

@@ -0,0 +1,54 @@
import { describe, expect, test } from "bun:test";
import { bunEnv, bunExe, isLinux, isMusl, tempDir } from "harness";

// Issue #24144: Using --bytecode with --target bun-linux-x64-musl causes a segfault
// because bytecode is not portable across different platforms/architectures/libcs.
// The fix is to error out at build time when --bytecode is combined with cross-compilation.
describe("issue #24144: bytecode with cross-compilation", () => {
  test("--bytecode with cross-compilation target should error", async () => {
    using dir = tempDir("issue-24144", {
      "index.ts": `console.log("Hello, world!");`,
    });

    // Use a cross-compilation target that differs from current platform
    // We pick a musl target if we're on glibc, or glibc target if we're on musl
    const crossTarget = isLinux
      ? isMusl
        ? "bun-linux-x64" // glibc target if we're on musl
        : "bun-linux-x64-musl" // musl target if we're on glibc
      : "bun-linux-x64-musl"; // any linux target if we're not on linux

    await using proc = Bun.spawn({
      cmd: [bunExe(), "build", "--compile", "--bytecode", `--target=${crossTarget}`, "index.ts", "--outfile=server"],
      cwd: String(dir),
      env: bunEnv,
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    expect(stderr).toContain("--bytecode is not supported with cross-compilation");
    expect(exitCode).toBe(1);
  });

  test("--bytecode without cross-compilation should work", async () => {
    using dir = tempDir("issue-24144-same-platform", {
      "index.ts": `console.log("Hello, world!");`,
    });

    await using proc = Bun.spawn({
      cmd: [bunExe(), "build", "--compile", "--bytecode", "index.ts", "--outfile=server"],
      cwd: String(dir),
      env: bunEnv,
      stderr: "pipe",
      stdout: "pipe",
    });

    const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

    // Should succeed without the cross-compilation error
    expect(stderr).not.toContain("--bytecode is not supported with cross-compilation");
    expect(exitCode).toBe(0);
  });
});