Replaces Azure Run Command approach with Packer for Windows CI images.
Packer connects via WinRM (native, no x64 emulation on ARM64),
handles sysprep automatically, and provides full output logging.
- scripts/packer/windows-x64.pkr.hcl: Windows Server 2019 x64
- scripts/packer/windows-arm64.pkr.hcl: Windows 11 ARM64 (direct to gallery)
- scripts/packer/variables.pkr.hcl: shared variables
- machine.mjs: routes Azure Windows builds through Packer
[build images]
Only use Azure Run Command displayStatus to detect real failures.
stderr contains non-error output from rustup, cargo, PowerShell
warnings, etc.
[build images]
Azure Run Command uses x64-emulated PowerShell on ARM64 VMs, which
causes all tools to think they're on x64. Use Sysnative path to
re-launch bootstrap as native ARM64 so PROCESSOR_ARCHITECTURE,
package installs, cmake, and everything else see the real arch.
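A minimal sketch of the re-launch, assuming bootstrap.ps1 can simply hand off
to itself (argument handling in the real script may differ):

```powershell
# Sketch: if the Sysnative alias exists, this process is running emulated;
# re-run the same script through the native PowerShell and exit.
$nativePwsh = "$env:WINDIR\Sysnative\WindowsPowerShell\v1.0\powershell.exe"
if (Test-Path $nativePwsh) {
  & $nativePwsh -NoProfile -ExecutionPolicy Bypass -File $PSCommandPath @args
  exit $LASTEXITCODE
}
```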
[build images]
Azure Run Command runs x64-emulated PowerShell on ARM64 VMs, so
$env:PROCESSOR_ARCHITECTURE reports AMD64. The registry value at
HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment
always reports the real architecture.
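A sketch of reading the real value (registry path from the commit; variable
names are illustrative):

```powershell
# The registry copy of PROCESSOR_ARCHITECTURE is not rewritten for emulated
# processes, unlike $env:PROCESSOR_ARCHITECTURE.
$sessionEnv = "HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment"
$realArch = (Get-ItemProperty -Path $sessionEnv -Name PROCESSOR_ARCHITECTURE).PROCESSOR_ARCHITECTURE
$IsARM64 = ($realArch -eq "ARM64")
```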
[build images]
RuntimeInformation.OSArchitecture doesn't exist in PowerShell 5.1
(which Azure Run Command uses), so IsARM64 was always false. This
caused ALL ARM64-specific code paths to be skipped.
[build images]
- machine.mjs: use C:\Scoop\apps\nodejs\current\node.exe instead of
bare 'node', which isn't on the Azure Run Command PATH
- azure.mjs: spawnSafeFn now throws on non-zero exit code so bootstrap
failures actually stop the build instead of capturing broken images
[build images]
Child process scoop installs break PATH/shims. Only use child process
for 7zip on ARM64 (post_install error). Everything else runs in-process.
nssm falls back to a blob storage mirror when nssm.cc is down.
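A hedged sketch of the nssm fallback (the mirror URL below is a placeholder,
not the real blob storage account):

```powershell
# Try nssm.cc first, then fall back to a mirror when the primary host is down.
$nssmUrls = @(
  "https://nssm.cc/release/nssm-2.24.zip",
  "https://example.blob.core.windows.net/mirrors/nssm-2.24.zip"  # hypothetical mirror
)
$zipPath = Join-Path $env:TEMP "nssm.zip"
$ok = $false
foreach ($url in $nssmUrls) {
  try {
    Invoke-WebRequest -Uri $url -OutFile $zipPath -UseBasicParsing
    $ok = $true
    break
  } catch {
    Write-Warning "nssm download failed from $url, trying next source"
  }
}
if (-not $ok) { throw "Could not download nssm from any source" }
```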
[build images]
- VS installer exit code 3010 (reboot required) is not a real error; treat it as success (sketch below)
- the mingw package's command is gcc, not mingw
- Use powershell -Command for scoop install isolation
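A sketch of the 3010 handling (installer path and arguments are placeholders):

```powershell
# 3010 means "success, reboot required" - the image build reboots later anyway.
$vsArgs = "--quiet --wait --norestart"   # placeholder; workload args omitted
$vs = Start-Process -FilePath "$env:TEMP\vs_buildtools.exe" -ArgumentList $vsArgs -Wait -PassThru
if ($vs.ExitCode -ne 0 -and $vs.ExitCode -ne 3010) {
  throw "VS installer failed with exit code $($vs.ExitCode)"
}
```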
[build images]
Scoop's 7zip post_install throws a terminating error on ARM64 that
propagates regardless of ErrorActionPreference. Running scoop in a
child PowerShell process isolates the error completely.
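A sketch of the isolation, assuming scoop is already on PATH in the child
process:

```powershell
# Run the install in a separate powershell.exe so the terminating error from
# 7zip's post_install dies with the child, then verify the result ourselves.
& powershell -NoProfile -Command "scoop install 7zip" 2>&1 | Write-Host
if (-not (Get-Command 7z -ErrorAction SilentlyContinue)) {
  throw "7zip did not install correctly"
}
```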
[build images]
7zip is a Scoop dependency of git. When running as SYSTEM on ARM64, 7zip's
post_install fails trying to delete 7zr.exe from TEMP. Fix by:
1. Moving Install-7zip before Install-Git so it's already installed
2. Using SilentlyContinue in Install-Scoop-Package then verifying
[build images]
The Remove-Item error in Scoop's 7zip post_install is a terminating
error that propagates through try/catch. Use SilentlyContinue to
suppress it, then verify 7z is actually installed.
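A sketch of that pattern; Install-Scoop-Package is the helper named above, but
this body is illustrative rather than the actual implementation:

```powershell
function Install-Scoop-Package {
  param([string]$Package, [string]$Command = $Package)
  # Suppress the terminating cleanup error from 7zip's post_install...
  $previous = $ErrorActionPreference
  $ErrorActionPreference = "SilentlyContinue"
  try { scoop install $Package } finally { $ErrorActionPreference = $previous }
  # ...then verify the command actually landed.
  if (-not (Get-Command $Command -ErrorAction SilentlyContinue)) {
    throw "scoop failed to install $Package"
  }
}
Install-Scoop-Package -Package "7zip" -Command "7z"
```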
[build images]
- 7zip: tolerate post_install cleanup error (access denied on TEMP)
- Rust: set CARGO_HOME/RUSTUP_HOME before rustup-init to avoid the
  SYSTEM profile path issue (Move-Item failure); sketch below
- nssm: fall back to blob storage mirror when nssm.cc returns 503
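A sketch of the Rust bullet above (install locations are illustrative):

```powershell
# Keep cargo/rustup out of the SYSTEM user profile so rustup-init's Move-Item
# doesn't fail; paths are illustrative.
$env:CARGO_HOME  = "C:\cargo"
$env:RUSTUP_HOME = "C:\rustup"
# win.rustup.rs/aarch64 exists for ARM64 hosts.
Invoke-WebRequest -Uri "https://win.rustup.rs/x86_64" -OutFile "$env:TEMP\rustup-init.exe" -UseBasicParsing
& "$env:TEMP\rustup-init.exe" -y --default-toolchain stable --no-modify-path
```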
On ARM64 Windows, Chocolatey installs x64 cmake and LLVM. x64 cmake
reads PROCESSOR_ARCHITECTURE=AMD64 (overridden by OS for x64 processes),
causing CMAKE_SYSTEM_PROCESSOR=AMD64 and -march=haswell to be passed to
ARM64 clang-cl.
- Add ARM64 detection to bootstrap.ps1
- Install the cmake ARM64 MSI directly from GitHub on ARM64 (sketch below)
- Install LLVM ARM64 (woa64) installer directly from GitHub on ARM64
- Skip nasm (x86 assembler) and mingw on ARM64
- Warn about bun ARM64 not being available via Chocolatey
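A sketch of the cmake part (version, URL, and MSI property are illustrative,
not pinned to what bootstrap.ps1 actually uses):

```powershell
if ($IsARM64) {
  # Chocolatey would hand us the x64 build, so fetch the ARM64 MSI directly.
  $cmakeMsi = Join-Path $env:TEMP "cmake-arm64.msi"
  $cmakeUrl = "https://github.com/Kitware/CMake/releases/download/v3.30.0/cmake-3.30.0-windows-arm64.msi"
  Invoke-WebRequest -Uri $cmakeUrl -OutFile $cmakeMsi -UseBasicParsing
  Start-Process msiexec.exe -ArgumentList @("/i", $cmakeMsi, "/qn", "ADD_CMAKE_TO_PATH=System") -Wait
}
```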
cmake.exe is an x64 binary, so the OS sets PROCESSOR_ARCHITECTURE=AMD64
when it runs on ARM64 Windows. This causes cmake to misdetect as x64
and use -march=haswell. Explicitly set CMAKE_SYSTEM_PROCESSOR=ARM64.
VS dev shell with -HostArch amd64 sets PROCESSOR_ARCHITECTURE=AMD64,
causing CMake to misdetect the system as x64 and use -march=haswell
on ARM64. Restore PROCESSOR_ARCHITECTURE=ARM64 after VS dev shell.
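A sketch of the restore, assuming a recent VS 2022 DevShell module ($vsPath is
a placeholder for the detected VS install path):

```powershell
Import-Module "$vsPath\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"
Enter-VsDevShell -VsInstallPath $vsPath -Arch arm64 -HostArch amd64 -SkipAutomaticLocation
# The dev shell exported PROCESSOR_ARCHITECTURE=AMD64; put the real value back
# so CMake's host detection doesn't fall back to -march=haswell.
$env:PROCESSOR_ARCHITECTURE = "ARM64"
```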
## Summary
- Update LLVM version references across build scripts, Dockerfiles, CI,
Nix configs, and documentation
- Fix LLVM 21 `-Wcharacter-conversion` errors in WebKit bindings:
- `EncodingTables.h`: pragma for intentional char32_t/char16_t
comparisons
- `TextCodecCJK.cpp`: widen `gb18030AsymmetricEncode` param to char32_t
- `URLPatternParser`: widen `isValidNameCodepoint` param to char32_t,
cast for `startsWith`
- Fix `__libcpp_verbose_abort` noexcept mismatch (LLVM 21 uses
`_NOEXCEPT`)
- Fix dangling pointer in `BunJSCModule.h` (`toCString` temporary
lifetime)
- Remove `useMathSumPreciseMethod` (removed upstream in JSC)
**Before merging:** Merge https://github.com/oven-sh/WebKit/pull/153
first, then update `WEBKIT_VERSION` in `cmake/tools/SetupWebKit.cmake`
to point to the merged commit.
## Test plan
- [ ] Build bun debug on macOS with LLVM 21
- [ ] Build bun on Linux (glibc)
- [ ] Build bun on Linux (musl)
- [ ] Build bun on Windows
- [ ] Run test suite
Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
## Summary
Add a CI step that runs JSC JIT stress tests under QEMU when
`SetupWebKit.cmake` is modified. This complements #26571 (basic baseline
CPU verification) by also testing JIT-generated code.
## Motivation
PR #26571 added QEMU-based verification that catches illegal
instructions in:
- Startup code
- Static initialization
- Basic interpreter execution
However, JIT compilers (DFG, FTL, Wasm BBQ/OMG) generate code at runtime
that could emit AVX or LSE instructions even if the compiled binary
doesn't. The JSC stress tests from #26380 exercise all JIT tiers through
hot loops that trigger tier-up.
## How it works
1. Detects if `cmake/tools/SetupWebKit.cmake` is modified in the PR
2. If WebKit changes are detected, runs `verify-jit-stress-qemu.sh`
after the build
3. Executes all 78 JIT stress test fixtures under QEMU with restricted
CPU features:
- x64: `qemu-x86_64 -cpu Nehalem` (SSE4.2, no AVX)
- aarch64: `qemu-aarch64 -cpu cortex-a53` (ARMv8.0-A, no LSE)
4. Any SIGILL from JIT-generated code fails the build
## Platforms tested
| Target | CPU Model | What it catches |
|---|---|---|
| `linux-x64-baseline` | Nehalem | JIT emitting AVX/AVX2/AVX512 |
| `linux-x64-musl-baseline` | Nehalem | JIT emitting AVX/AVX2/AVX512 |
| `linux-aarch64` | Cortex-A53 | JIT emitting LSE atomics, SVE |
| `linux-aarch64-musl` | Cortex-A53 | JIT emitting LSE atomics, SVE |
## Timeout
The step has a 30-minute timeout since QEMU emulation is ~10-50x slower
than native. This only runs on WebKit update PRs, so it won't affect
most CI runs.
## Refs
- #26380 - Added JSC JIT stress tests
- #26571 - Added basic QEMU baseline verification
## Summary
Add CI steps that verify baseline builds don't use CPU instructions
beyond their target. Uses QEMU user-mode emulation with restricted CPU
features — any illegal instruction causes SIGILL and fails the build.
## Platforms verified
| Build Target | QEMU Command | What it catches |
|---|---|---|
| `linux-x64-baseline` (glibc) | `qemu-x86_64 -cpu Nehalem` | AVX, AVX2, AVX512 |
| `linux-x64-musl-baseline` | `qemu-x86_64 -cpu Nehalem` | AVX, AVX2, AVX512 |
| `linux-aarch64` (glibc) | `qemu-aarch64 -cpu cortex-a35` | LSE atomics, SVE, dotprod |
| `linux-aarch64-musl` | `qemu-aarch64 -cpu cortex-a35` | LSE atomics, SVE, dotprod |
## How it works
Each verify step:
1. Downloads the built binary artifact from the `build-bun` step
2. Installs `qemu-user-static` on-the-fly (dnf/apk/apt-get)
3. Runs two smoke tests under QEMU with restricted CPU features:
- `bun --version` — validates startup, linker, static init code
- `bun -e eval` — validates JSC initialization and basic execution
4. Hard fails on SIGILL (exit code 132)
The verify step runs in the build group after `build-bun`, with a
5-minute timeout.
## Known issue this will surface
**mimalloc on aarch64**: Built with `MI_OPT_ARCH=ON` which adds
`-march=armv8.1-a`, enabling LSE atomics. This will SIGILL on
Cortex-A35/A53 CPUs. The aarch64 verify steps are expected to fail
initially, confirming the test catches real issues. Fix can be done
separately in `cmake/targets/BuildMimalloc.cmake`.
### What does this PR do?
Move `-DENABLE_REMOTE_INSPECTOR=ON` from the debug-only flags to the
macOS common flags so it applies to all build configurations (debug,
release, lto). This was already the case for Linux and Windows.
Without this, `build:release:local` fails because BunDebugger.cpp and
InspectorLifecycleAgent.cpp unconditionally use JSC inspector APIs that
are only available when REMOTE_INSPECTOR is enabled.
### How did you verify your code works?
Build locally
## Summary
- Fetches complete logs from BuildKite's public API (no token required)
- Saves logs to `/tmp/bun-build-{number}-{platform}-{step}.log`
- Shows log file path in output for each failed job
- Displays brief error summary (unique errors, max 5)
- Adds help text with usage examples (`--help`)
- Groups failures by type (build/test/other)
- Shows annotation counts with link to view full annotations
- Documents usage in CLAUDE.md
## Test plan
- [x] Tested with build #35051 (9 failed jobs)
- [x] Verified logs saved to `/tmp/bun-build-35051-*.log`
- [x] Verified error extraction and deduplication works
- [x] Verified `--help` flag shows usage
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary
- `isNodeTest()` was only checking if the path included the node test
directories but not verifying the file was actually a JavaScript file
- This caused `test/js/node/test/parallel/CLAUDE.md` to be incorrectly
treated as a test file
- Added `isJavaScript(path)` check to filter out non-JS files
## Test plan
- [x] Verify CLAUDE.md is no longer picked up as a test file
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Bot <claude-bot@bun.sh>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>