---
description: Writing tests for Bun
globs:
---

# Writing tests for Bun

## Where tests are found

You'll find all of Bun's tests in the `test/` directory.

* `test/`
  * `cli/` - CLI command tests, like `bun install` or `bun init`
  * `js/` - JavaScript & TypeScript tests
    * `bun/` - `Bun` API tests, separated by category, for example: `glob/` for `Bun.Glob` tests
    * `node/` - Node.js module tests, separated by module, for example: `assert/` for `node:assert` tests
      * `test/` - Vendored Node.js tests, taken from the Node.js repository (these do not conform to Bun's test style)
    * `web/` - Web API tests, separated by category, for example: `fetch/` for `Request` and `Response` tests
    * `third_party/` - npm package tests, to validate that basic usage works in Bun
  * `napi/` - N-API tests
  * `v8/` - V8 C++ API tests
  * `bundler/` - Bundler, transpiler, CSS, and `bun build` tests
  * `regression/issue/[number]` - Regression tests; always add one when fixing a particular issue

## How tests are written

Bun's tests are written as JavaScript and TypeScript files using Jest-style APIs like `test`, `describe`, and `expect`. They are run with Bun's own test runner, `bun test`.

```js
import { describe, test, expect } from "bun:test";
import assert, { AssertionError } from "assert";

describe("assert(expr)", () => {
  test.each([true, 1, "foo"])(`assert(%p) does not throw`, expr => {
    expect(() => assert(expr)).not.toThrow();
  });

  test.each([false, 0, "", null, undefined])(`assert(%p) throws`, expr => {
    expect(() => assert(expr)).toThrow(AssertionError);
  });
});
```

## Testing conventions

* See `test/harness.ts` for common test utilities and helpers (an example that uses one of them follows the CLI pattern below)
* Be rigorous and test for edge cases and unexpected inputs
* Use data-driven tests, e.g. `test.each`, to reduce boilerplate when possible
* When you need to test Bun as a CLI, use the following pattern:

```js
import { test, expect } from "bun:test";
import { spawn } from "bun";
import { bunExe, bunEnv } from "harness";

test("bun --version", async () => {
  const { exited, stdout: stdoutStream, stderr: stderrStream } = spawn({
    cmd: [bunExe(), "--version"],
    env: bunEnv,
    stdout: "pipe",
    stderr: "pipe",
  });
  const [exitCode, stdout, stderr] = await Promise.all([
    exited,
    new Response(stdoutStream).text(),
    new Response(stderrStream).text(),
  ]);
  expect({ exitCode, stdout, stderr }).toMatchObject({
    exitCode: 0,
    stdout: expect.stringContaining(Bun.version),
    stderr: "",
  });
});
```
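
As a companion to the pattern above, here is a rough sketch of combining the CLI pattern with a fixture directory built through `test/harness.ts`. The `tempDirWithFiles` helper and its exact signature are assumptions; check `test/harness.ts` for the helpers that actually exist before copying this.

```js
import { test, expect } from "bun:test";
import { spawn } from "bun";
// bunExe and bunEnv are shown above; tempDirWithFiles is assumed to be exported by test/harness.ts.
import { bunExe, bunEnv, tempDirWithFiles } from "harness";

test("bun run executes a script from a fixture directory", async () => {
  // Create a throw-away directory pre-populated with the given files (assumed helper).
  const dir = tempDirWithFiles("run-fixture", {
    "index.js": `console.log("hello from fixture");`,
  });

  const { exited, stdout } = spawn({
    cmd: [bunExe(), "run", "index.js"],
    cwd: dir,
    env: bunEnv,
    stdout: "pipe",
    stderr: "pipe",
  });

  const [exitCode, out] = await Promise.all([exited, new Response(stdout).text()]);
  expect(exitCode).toBe(0);
  expect(out).toContain("hello from fixture");
});
```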

## Before writing a test

* If you are fixing a bug, write the test first and make sure it fails (as expected) with the canary version of Bun (see the sketch after this list)
* If you are fixing a Node.js compatibility bug, create a throw-away snippet of code, verify that it behaves as you expect in Node.js, and then confirm that it fails (as expected) with the canary version of Bun
* When the expected behaviour is ambiguous, defer to matching what Node.js does
* Always try to find related tests in an existing test file before creating a new one
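
To make the first bullet concrete, here is a minimal sketch of what a regression test might look like. The file path, issue number, and the behaviour being pinned are hypothetical placeholders, not a real issue:

```js
// Hypothetical path: test/regression/issue/12345.test.ts ("12345" is a placeholder issue number).
import { test, expect } from "bun:test";

test("#12345: Buffer round-trips base64url encoding", () => {
  const original = Buffer.from("hello", "utf8");
  const roundTripped = Buffer.from(original.toString("base64url"), "base64url");
  expect(roundTripped.toString("utf8")).toBe("hello");
});
```

For Node.js compatibility bugs, the same snippet (minus the `bun:test` imports) can be run directly with `node` and with the canary `bun` build to compare behaviour before turning it into a test.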