Add more documentation on bun test

This commit is contained in:
Jarred Sumner
2025-04-03 13:34:06 -07:00
parent 11f2b5fb55
commit c40663bdf1
9 changed files with 717 additions and 14 deletions

View File

@@ -265,12 +265,25 @@ export default {
page("test/time", "Dates and times", {
description: "Control the date & time in your tests for more reliable and deterministic tests",
}),
page("test/dom", "DOM testing", {
description: "Write headless tests for UI and React/Vue/Svelte/Lit components with happy-dom",
}),
page("test/coverage", "Code coverage", {
description: "Generate code coverage reports with `bun test --coverage`",
}),
page("test/reporters", "Test reporters", {
description: "Add a junit reporter to your test runs",
}),
page("test/configuration", "Test configuration", {
description: "Configure the test runner with bunfig.toml",
}),
page("test/runtime-behavior", "Runtime behavior", {
description: "Learn how the test runner affects Bun's runtime behavior",
}),
page("test/discovery", "Finding tests", {
description: "Learn how the test runner discovers tests",
}),
page("test/dom", "DOM testing", {
description: "Write headless tests for UI and React/Vue/Svelte/Lit components with happy-dom",
}),
divider("Package runner"),
page("cli/bunx", "`bunx`", {

View File

@@ -0,0 +1,87 @@
`bun test` can be configured via the `bunfig.toml` file and command-line options. This page documents the available configuration options.
## bunfig.toml options
You can configure `bun test` behavior by adding a `[test]` section to your `bunfig.toml` file:
```toml
[test]
# Options go here
```
### Test discovery
#### root
The `root` option specifies a root directory for test discovery, overriding the default behavior of scanning from the project root.
```toml
[test]
root = "src" # Only scan for tests in the src directory
```
### Reporters
#### reporter.junit
Configure the JUnit reporter output file path directly in the config file:
```toml
[test.reporter]
junit = "path/to/junit.xml" # Output path for JUnit XML report
```
This complements the `--reporter=junit` and `--reporter-outfile` CLI flags.
### Memory usage
#### smol
Enable the `--smol` memory-saving mode specifically for the test runner:
```toml
[test]
smol = true # Reduce memory usage during test runs
```
This is equivalent to using the `--smol` flag on the command line.
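For example, this invocation behaves the same as the `bunfig.toml` setting above:
```sh
$ bun test --smol
```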
### Coverage options
In addition to the options documented in the [coverage documentation](./coverage.md), the following options are available:
#### coverageSkipTestFiles
Exclude files matching test patterns (e.g., `*.test.ts`) from the coverage report:
```toml
[test]
coverageSkipTestFiles = true # Exclude test files from coverage reports
```
#### coverageThreshold (Object form)
The coverage threshold can be specified either as a number (as shown in the coverage documentation) or as an object with specific thresholds:
```toml
[test]
# Set specific thresholds for different coverage metrics
coverageThreshold = { lines = 0.9, functions = 0.8, statements = 0.85 }
```
Setting any of these enables `fail_on_low_coverage`, causing the test run to fail if coverage is below the threshold.
#### coverageIgnoreSourcemaps
Internally, Bun transpiles every file. That means coverage results must be mapped back through sourcemaps before they can be reported. This flag lets you opt out of that remapping, but the output can be confusing: during transpilation, Bun may move code around and rename variables. This option is mostly useful for debugging coverage issues.
```toml
[test]
coverageIgnoreSourcemaps = true # Don't use sourcemaps for coverage analysis
```
When using this option, you probably want to stick a `// @bun` comment at the top of the source file to opt out of the transpilation process.
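As a sketch, a plain JavaScript file that opts out of transpilation might start like this (the `add` function is purely illustrative):
```js
// @bun
// The pragma above tells Bun to skip transpiling this file,
// so coverage is reported against these exact lines.
export function add(a, b) {
  return a + b;
}
```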
### Install settings inheritance
The `bun test` command inherits relevant network and installation configuration (registry, cafile, prefer, exact, etc.) from the `[install]` section of `bunfig.toml`. This is important if tests need to interact with private registries or require specific install behaviors triggered during the test run.
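For instance, a hypothetical `bunfig.toml` that points test-time auto-installs at a private registry might look like this (the registry URL is illustrative):
```toml
[install]
# Inherited by `bun test` for any network requests or auto-installs
registry = "https://registry.example.com/"
exact = true
```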

docs/test/discovery.md Normal file
View File

@@ -0,0 +1,86 @@
`bun test`'s file discovery mechanism determines which files to run as tests. Understanding how it works helps you structure your test files effectively.
## Default Discovery Logic
By default, `bun test` recursively searches the project directory for files that match specific patterns:
- `*.test.{js|jsx|ts|tsx}` - Files ending with `.test.js`, `.test.jsx`, `.test.ts`, or `.test.tsx`
- `*_test.{js|jsx|ts|tsx}` - Files ending with `_test.js`, `_test.jsx`, `_test.ts`, or `_test.tsx`
- `*.spec.{js|jsx|ts|tsx}` - Files ending with `.spec.js`, `.spec.jsx`, `.spec.ts`, or `.spec.tsx`
- `*_spec.{js|jsx|ts|tsx}` - Files ending with `_spec.js`, `_spec.jsx`, `_spec.ts`, or `_spec.tsx`
## Exclusions
By default, `bun test` ignores:
- `node_modules` directories
- Hidden directories (those starting with a period `.`)
- Files that don't have JavaScript-like extensions (based on available loaders)
## Customizing Test Discovery
### Positional Arguments as Filters
You can filter which test files run by passing additional positional arguments to `bun test`:
```bash
$ bun test <filter> <filter> ...
```
Any test file with a path that contains one of the filters will run. These filters are simple substring matches, not glob patterns.
For example, to run all tests in a `utils` directory:
```bash
$ bun test utils
```
This would match files like `src/utils/string.test.ts` and `lib/utils/array_test.js`.
### Specifying Exact File Paths
To run a specific file in the test runner, make sure the path starts with `./` or `/` to distinguish it from a filter name:
```bash
$ bun test ./test/specific-file.test.ts
```
### Filter by Test Name
To filter tests by name rather than file path, use the `-t`/`--test-name-pattern` flag with a regex pattern:
```sh
# run all tests with "addition" in the name
$ bun test --test-name-pattern addition
```
The pattern is matched against a string consisting of the test name prefixed with the labels of all of its parent `describe` blocks, separated by spaces. For example, a test defined as:
```js
describe("Math", () => {
describe("operations", () => {
test("should add correctly", () => {
// ...
});
});
});
```
Would be matched against the string "Math operations should add correctly".
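Given the example above, any of the following patterns would match that test (the pattern is a regular expression tested against the full string):
```sh
$ bun test -t "should add"
$ bun test -t "Math operations"
$ bun test --test-name-pattern "add correctly"
```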
### Changing the Root Directory
By default, Bun looks for test files starting from the current working directory. You can change this with the `root` option in your `bunfig.toml`:
```toml
[test]
root = "src" # Only scan for tests in the src directory
```
## Execution Order
Tests are run in the following order:
1. Test files are executed sequentially (not in parallel)
2. Within each file, tests run sequentially based on their definition order
3. Tests defined with `test.only()` or within a `describe.only()` block will cause other tests to be skipped when using `--only`, as shown in the sketch below
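A minimal sketch of the third point:
```ts
import { test, expect } from "bun:test";

test.only("focused test", () => {
  expect(1 + 1).toBe(2);
});

test("other test", () => {
  // Skipped when running `bun test --only`
});
```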

View File

@@ -56,9 +56,9 @@ The following properties and methods are implemented on mock functions.
- [x] [mockFn.mock.instances](https://jestjs.io/docs/mock-function-api#mockfnmockinstances)
- [x] [mockFn.mock.contexts](https://jestjs.io/docs/mock-function-api#mockfnmockcontexts)
- [x] [mockFn.mock.lastCall](https://jestjs.io/docs/mock-function-api#mockfnmocklastcall)
- [x] [mockFn.mockClear()](https://jestjs.io/docs/mock-function-api#mockfnmockclear) - Clears call history
- [x] [mockFn.mockReset()](https://jestjs.io/docs/mock-function-api#mockfnmockreset) - Clears call history and removes implementation
- [x] [mockFn.mockRestore()](https://jestjs.io/docs/mock-function-api#mockfnmockrestore) - Restores original implementation
- [x] [mockFn.mockImplementation(fn)](https://jestjs.io/docs/mock-function-api#mockfnmockimplementationfn)
- [x] [mockFn.mockImplementationOnce(fn)](https://jestjs.io/docs/mock-function-api#mockfnmockimplementationoncefn)
- [x] [mockFn.mockName(name)](https://jestjs.io/docs/mock-function-api#mockfnmocknamename)
@@ -197,7 +197,59 @@ After resolution, the mocked module is stored in the ES Module registry **and**
The callback function is called lazily, only if the module is imported or required. This means you can use `mock.module()` to mock modules that don't exist yet, as well as modules that are imported by other modules.
### Module Mock Implementation Details
Understanding how `mock.module()` works helps you use it more effectively:
1. **Cache Interaction**: Module mocks interact with both the ESM and CommonJS module caches.
2. **Lazy Evaluation**: The mock factory callback is only evaluated when the module is actually imported or required.
3. **Path Resolution**: Bun automatically resolves the module specifier as though you were doing an import, supporting:
- Relative paths (`'./module'`)
- Absolute paths (`'/path/to/module'`)
- Package names (`'lodash'`)
4. **Import Timing Effects**:
- When mocking before first import: No side effects from the original module occur
- When mocking after import: The original module's side effects have already happened
- For this reason, using `--preload` is recommended for mocks that need to prevent side effects
5. **Live Bindings**: Mocked ESM modules maintain live bindings, so changing the mock will update all existing imports (see the sketch below)
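A minimal sketch tying these points together (the `./greeter` module path is hypothetical):
```ts
import { mock, test, expect } from "bun:test";

// Registered before the first import, so the real module's
// side effects never run.
mock.module("./greeter", () => {
  return {
    greet: (name: string) => `mocked hello, ${name}`,
  };
});

test("uses the mocked module", async () => {
  // The factory above is evaluated lazily, at this import.
  const { greet } = await import("./greeter");
  expect(greet("bun")).toBe("mocked hello, bun");
});
```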
## Global Mock Functions
### Clear all mocks with `mock.clearAllMocks()`
Reset all mock function state (calls, results, etc.) without restoring their original implementation:
```ts
import { expect, mock, test } from "bun:test";

const random1 = mock(() => Math.random());
const random2 = mock(() => Math.random());

test("clearing all mocks", () => {
  random1();
  random2();

  expect(random1).toHaveBeenCalledTimes(1);
  expect(random2).toHaveBeenCalledTimes(1);

  mock.clearAllMocks();

  expect(random1).toHaveBeenCalledTimes(0);
  expect(random2).toHaveBeenCalledTimes(0);

  // Note: implementations are preserved
  expect(typeof random1()).toBe("number");
  expect(typeof random2()).toBe("number");
});
```
This resets the `.mock.calls`, `.mock.instances`, `.mock.contexts`, and `.mock.results` properties of all mocks, but unlike `mock.restore()`, it does not restore the original implementation.
### Restore all function mocks with `mock.restore()`
Instead of manually restoring each mock individually with `mockFn.mockRestore()`, restore all mocks with one command by calling `mock.restore()`. Doing so does not reset the value of modules overridden with `mock.module()`.
@@ -234,3 +286,28 @@ test('foo, bar, baz', () => {
expect(bazSpy).toBe('baz');
});
```
## Vitest Compatibility
For added compatibility with tests written for [Vitest](https://vitest.dev/), Bun provides the `vi` global object as an alias for parts of the Jest mocking API:
```ts
import { test, expect } from "bun:test";
// Using the 'vi' alias similar to Vitest
test("vitest compatibility", () => {
const mockFn = vi.fn(() => 42);
mockFn();
expect(mockFn).toHaveBeenCalled();
// The following functions are available on the vi object:
// vi.fn
// vi.spyOn
// vi.mock
// vi.restoreAllMocks
// vi.clearAllMocks
});
```
This makes it easier to port tests from Vitest to Bun without having to rewrite all your mocks.

docs/test/reporters.md Normal file
View File

@@ -0,0 +1,108 @@
`bun test` supports different output formats through reporters. This document covers both built-in reporters and how to implement your own custom reporters.
## Built-in Reporters
### Default Console Reporter
By default, `bun test` outputs results to the console in a human-readable format:
```sh
test/package-json-lint.test.ts:
✓ test/package.json [0.88ms]
✓ test/js/third_party/grpc-js/package.json [0.18ms]
✓ test/js/third_party/svelte/package.json [0.21ms]
✓ test/js/third_party/express/package.json [1.05ms]
4 pass
0 fail
4 expect() calls
Ran 4 tests in 1.44ms
```
When a terminal doesn't support colors, the output avoids non-ASCII characters:
```sh
test/package-json-lint.test.ts:
(pass) test/package.json [0.48ms]
(pass) test/js/third_party/grpc-js/package.json [0.10ms]
(pass) test/js/third_party/svelte/package.json [0.04ms]
(pass) test/js/third_party/express/package.json [0.04ms]
4 pass
0 fail
4 expect() calls
Ran 4 tests across 1 files. [0.66ms]
```
### JUnit XML Reporter
For CI/CD environments, Bun supports generating JUnit XML reports. JUnit XML is a widely-adopted format for test results that can be parsed by many CI/CD systems, including GitLab, Jenkins, and others.
#### Using the JUnit Reporter
To generate a JUnit XML report, use the `--reporter=junit` flag along with `--reporter-outfile` to specify the output file:
```sh
$ bun test --reporter=junit --reporter-outfile=./junit.xml
```
This continues to output to the console as usual while also writing the JUnit XML report to the specified path at the end of the test run.
#### Configuring via bunfig.toml
You can also configure the JUnit reporter in your `bunfig.toml` file:
```toml
[test.reporter]
junit = "path/to/junit.xml" # Output path for JUnit XML report
```
#### Environment Variables in JUnit Reports
The JUnit reporter automatically includes environment information as `<properties>` in the XML output. This can be helpful for tracking test runs in CI environments.
Specifically, it includes the following environment variables when available:
| Environment Variable | Property Name | Description |
| ----------------------------------------------------------------------- | ------------- | ---------------------- |
| `GITHUB_RUN_ID`, `GITHUB_SERVER_URL`, `GITHUB_REPOSITORY`, `CI_JOB_URL` | `ci` | CI build information |
| `GITHUB_SHA`, `CI_COMMIT_SHA`, `GIT_SHA` | `commit` | Git commit identifiers |
| System hostname | `hostname` | Machine hostname |
This makes it easier to track which environment and commit a particular test run corresponds to.
#### Current Limitations
The JUnit reporter currently has a few limitations that will be addressed in future updates:
- `stdout` and `stderr` output from individual tests are not included in the report
- Precise timestamp fields per test case are not included
### GitHub Actions reporter
`bun test` automatically detects when it's running inside GitHub Actions and emits GitHub Actions annotations directly to the console. No special configuration is needed beyond installing Bun and running `bun test`.
For a GitHub Actions workflow configuration example, see the [CI/CD integration](../cli/test.md#cicd-integration) section of the CLI documentation.
## Custom Reporters
Bun allows developers to implement custom test reporters by extending the WebKit Inspector Protocol with additional testing-specific domains.
### Inspector Protocol for Testing
To support test reporting, Bun extends the standard WebKit Inspector Protocol with two custom domains:
1. **TestReporter**: Reports test discovery, execution start, and completion events
2. **LifecycleReporter**: Reports errors and exceptions during test execution
These extensions allow you to build custom reporting tools that can receive detailed information about test execution in real-time.
### Key Events
Custom reporters can listen for these key events:
- `TestReporter.found`: Emitted when a test is discovered
- `TestReporter.start`: Emitted when a test starts running
- `TestReporter.end`: Emitted when a test completes
- `Console.messageAdded`: Emitted when console output occurs during a test
- `LifecycleReporter.error`: Emitted when an error or exception occurs

View File

@@ -0,0 +1,93 @@
`bun test` is deeply integrated with Bun's runtime. This is part of what makes `bun test` fast and simple to use.
#### `$NODE_ENV` environment variable
`bun test` automatically sets `$NODE_ENV` to `"test"` unless it's already set in the environment or via .env files. This is standard behavior for most test runners and helps ensure consistent test behavior.
```ts
import { test, expect } from "bun:test";
test("NODE_ENV is set to test", () => {
expect(process.env.NODE_ENV).toBe("test");
});
```
#### `$TZ` environment variable
By default, all `bun test` runs use UTC (`Etc/UTC`) as the time zone unless overridden by the `TZ` environment variable. This ensures consistent date and time behavior across different development environments.
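A sketch of a test that relies on this default:
```ts
import { test, expect } from "bun:test";

test("runs in UTC by default", () => {
  // With no TZ override, bun test uses Etc/UTC,
  // so the offset from UTC is zero minutes.
  expect(new Date().getTimezoneOffset()).toBe(0);
});
```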
#### Test Timeouts
Each test has a default timeout of 5000ms (5 seconds) if not explicitly overridden. Tests that exceed this timeout will fail. This can be changed globally with the `--timeout` flag or per-test as the third parameter to the test function.
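For example, a per-test override is passed as the third argument, in milliseconds:
```ts
import { test, expect } from "bun:test";

test("slow operation", async () => {
  const result = await new Promise((resolve) =>
    setTimeout(() => resolve(42), 100),
  );
  expect(result).toBe(42);
}, 10_000); // 10-second timeout for this test only
```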
## Error Handling
### Unhandled Errors
`bun test` tracks unhandled promise rejections and errors that occur between tests. If such errors occur, the final exit code will be non-zero (specifically, the count of such errors), even if all tests pass.
This helps catch errors in asynchronous code that might otherwise go unnoticed:
```ts
import { test } from "bun:test";
test("test 1", () => {
// This test passes
});
// This error happens outside any test
setTimeout(() => {
throw new Error("Unhandled error");
}, 0);
test("test 2", () => {
// This test also passes
});
// The test run will still fail with a non-zero exit code
// because of the unhandled error
```
Internally, this occurs with a higher precedence than `process.on("unhandledRejection")` or `process.on("uncaughtException")`, which makes it simpler to integrate with existing code.
## Using General CLI Flags with Tests
Several Bun CLI flags can be used with `bun test` to modify its behavior:
### Memory Usage
- `--smol`: Reduces memory usage for the test runner VM
### Debugging
- `--inspect`, `--inspect-brk`: Attaches the debugger to the test runner process
### Module Loading
- `--preload`: Runs scripts before test files (useful for global setup/mocks)
- `--define`: Sets compile-time constants
- `--loader`: Configures custom loaders
- `--tsconfig-override`: Uses a different tsconfig
- `--conditions`: Sets package.json conditions for module resolution
- `--env-file`: Loads environment variables for tests
### Installation-related Flags
- `--prefer-offline`, `--frozen-lockfile`, etc.: Affect any network requests or auto-installs during test execution
## Watch and Hot Reloading
When running `bun test` with the `--watch` flag, the test runner will watch for file changes and re-run affected tests.
The `--hot` flag provides similar functionality but is more aggressive about trying to preserve state between runs. For most test scenarios, `--watch` is the recommended option.
## Global Variables
The following globals are automatically available in test files without importing them (though they can be imported from `bun:test` if preferred); a short example follows the list:
- `test`, `it`: Define tests
- `describe`: Group tests
- `expect`: Make assertions
- `beforeAll`, `beforeEach`, `afterAll`, `afterEach`: Lifecycle hooks
- `jest`: Jest global object
- `vi`: Vitest compatibility alias for common jest methods
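For instance, this file runs under `bun test` without any imports:
```ts
// No import from "bun:test" is required; these are injected globals.
describe("globals", () => {
  beforeEach(() => {
    // runs before each test in this block
  });

  test("expect is available", () => {
    expect(2 + 2).toBe(4);
  });
});
```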

View File

@@ -1,3 +1,7 @@
Snapshot testing saves the output of a value and compares it against future test runs. This is particularly useful for UI components, complex objects, or any output that needs to remain consistent.
## Basic snapshots
Snapshot tests are written using the `.toMatchSnapshot()` matcher:
```ts
@@ -13,3 +17,52 @@ The first time this test is run, the argument to `expect` will be serialized and
```bash
$ bun test --update-snapshots
```
## Inline snapshots
For smaller values, you can use inline snapshots with `.toMatchInlineSnapshot()`. These snapshots are stored directly in your test file:
```ts
import { test, expect } from "bun:test";
test("inline snapshot", () => {
// First run: snapshot will be inserted automatically
expect({ hello: "world" }).toMatchInlineSnapshot();
// After first run, the test file will be updated to:
// expect({ hello: "world" }).toMatchInlineSnapshot(`
// {
// "hello": "world",
// }
// `);
});
```
When you run the test, Bun automatically updates the test file itself with the generated snapshot string. This makes the tests more portable and easier to understand, since the expected output is right next to the test.
### Using inline snapshots
1. Write your test with `.toMatchInlineSnapshot()`
2. Run the test once
3. Bun automatically updates your test file with the snapshot
4. On subsequent runs, the value will be compared against the inline snapshot
Inline snapshots are particularly useful for small, simple values where it's helpful to see the expected output right in the test file.
## Error snapshots
You can also snapshot error messages using `.toThrowErrorMatchingSnapshot()` and `.toThrowErrorMatchingInlineSnapshot()`:
```ts
import { test, expect } from "bun:test";

test("error snapshot", () => {
  expect(() => {
    throw new Error("Something went wrong");
  }).toThrowErrorMatchingSnapshot();

  expect(() => {
    throw new Error("Another error");
  }).toThrowErrorMatchingInlineSnapshot();
});
```

View File

@@ -74,9 +74,29 @@ test("it was 2020, for a moment.", () => {
});
```
## Get mocked time with `jest.now()`
When you're using mocked time (with `setSystemTime` or `useFakeTimers`), you can use `jest.now()` to get the current mocked timestamp:
```ts
import { test, expect, jest } from "bun:test";

test("get the current mocked time", () => {
  jest.useFakeTimers();
  jest.setSystemTime(new Date("2020-01-01T00:00:00.000Z"));

  expect(Date.now()).toBe(1577836800000); // Jan 1, 2020 timestamp
  expect(jest.now()).toBe(1577836800000); // Same value

  jest.useRealTimers();
});
```
This is useful when you need to access the mocked time directly without creating a new Date object.
## Set the time zone
By default, the time zone for all `bun test` runs is set to UTC (`Etc/UTC`) unless overridden. To change the time zone, either pass the `$TZ` environment variable to `bun test`.
```sh
TZ=America/Los_Angeles bun test

View File

@@ -78,9 +78,11 @@ test("wat", async () => {
In `bun:test`, test timeouts throw an uncatchable exception to force the test to stop running and fail. We also kill any child processes that were spawned in the test to avoid leaving behind zombie processes lurking in the background.
The default timeout for each test is 5000ms (5 seconds) if not overridden by this timeout option or `jest.setDefaultTimeout()`.
### 🧟 Zombie process killer
When a test times out and processes spawned in the test via `Bun.spawn`, `Bun.spawnSync`, or `node:child_process` are not killed, they will be automatically killed and a message will be logged to the console. This prevents zombie processes from lingering in the background after timed-out tests.
## `test.skip`
@@ -197,22 +199,121 @@ test.todoIf(macOS)("runs on posix", () => {
});
```
## `test.failing`
Use `test.failing()` when you know a test is currently failing but you want to track it and be notified when it starts passing. This inverts the test result:
- A failing test marked with `.failing()` will pass
- A passing test marked with `.failing()` will fail (with a message indicating it's now passing and should be fixed)
```ts
import { test, expect } from "bun:test";

// This will pass because the test is failing as expected
test.failing("math is broken", () => {
  expect(0.1 + 0.2).toBe(0.3); // fails due to floating point precision
});

// This will fail with a message that the test is now passing
test.failing("fixed bug", () => {
  expect(1 + 1).toBe(2); // passes, but we expected it to fail
});
```
This is useful for tracking known bugs that you plan to fix later, or for implementing test-driven development.
## Conditional Tests for Describe Blocks
The conditional modifiers `.if()`, `.skipIf()`, and `.todoIf()` can also be applied to `describe` blocks, affecting all tests within the suite:
```ts
import { describe, test } from "bun:test";

const isMacOS = process.platform === "darwin";

// Only runs the entire suite on macOS
describe.if(isMacOS)("macOS-specific features", () => {
  test("feature A", () => {
    // only runs on macOS
  });

  test("feature B", () => {
    // only runs on macOS
  });
});

// Skips the entire suite on Windows
describe.skipIf(process.platform === "win32")("Unix features", () => {
  test("feature C", () => {
    // skipped on Windows
  });
});

// Marks the entire suite as TODO on Linux
describe.todoIf(process.platform === "linux")("Upcoming Linux support", () => {
  test("feature D", () => {
    // marked as TODO on Linux
  });
});
```
## `test.each` and `describe.each`
To run the same test with multiple sets of data, use `test.each`. This creates a parametrized test that runs once for each test case provided.
```ts
import { test, expect } from "bun:test";

const cases = [
  [1, 2, 3],
  [2, 3, 5],
  [3, 4, 7],
];

test.each(cases)("%p + %p should be %p", (a, b, expected) => {
  // runs once for each test case provided
  expect(a + b).toBe(expected);
});
```
There are a number of options available for formatting the case label depending on its type.
You can also use `describe.each` to create a parametrized suite that runs once for each test case:
```ts
import { describe, test, expect } from "bun:test";

describe.each([
  [1, 2, 3],
  [3, 4, 7],
])("add(%i, %i)", (a, b, expected) => {
  test(`returns ${expected}`, () => {
    expect(a + b).toBe(expected);
  });

  test(`sum is greater than each value`, () => {
    expect(a + b).toBeGreaterThan(a);
    expect(a + b).toBeGreaterThan(b);
  });
});
```
### Argument Passing
How arguments are passed to your test function depends on the structure of your test cases:
- If a table row is an array (like `[1, 2, 3]`), each element is passed as an individual argument
- If a row is not an array (like an object), it's passed as a single argument
```ts
import { test, expect } from "bun:test";

// Array items passed as individual arguments
test.each([
  [1, 2, 3],
  [4, 5, 9],
])("add(%i, %i) = %i", (a, b, expected) => {
  expect(a + b).toBe(expected);
});

// Object items passed as a single argument
test.each([
  { a: 1, b: 2, expected: 3 },
  { a: 4, b: 5, expected: 9 },
])("add($a, $b) = $expected", (data) => {
  expect(data.a + data.b).toBe(data.expected);
});
```
### Format Specifiers
There are a number of options available for formatting the test title:
{% table %}
@@ -263,6 +364,71 @@ There are a number of options available for formatting the case label depending
{% /table %}
#### Examples
```ts
import { test } from "bun:test";

// Basic specifiers
test.each([
  ["hello", 123],
  ["world", 456],
])("string: %s, number: %i", (str, num) => {
  // "string: hello, number: 123"
  // "string: world, number: 456"
});

// %p for pretty-format output
test.each([
  [{ name: "Alice" }, { a: 1, b: 2 }],
  [{ name: "Bob" }, { x: 5, y: 10 }],
])("user %p with data %p", (user, data) => {
  // "user { name: 'Alice' } with data { a: 1, b: 2 }"
  // "user { name: 'Bob' } with data { x: 5, y: 10 }"
});

// %# for index
test.each([
  "apple",
  "banana",
])("fruit #%# is %s", (fruit) => {
  // "fruit #0 is apple"
  // "fruit #1 is banana"
});
```
## Assertion Counting
Bun supports verifying that a specific number of assertions were called during a test:
### expect.hasAssertions()
Use `expect.hasAssertions()` to verify that at least one assertion is called during a test:
```ts
test("async work calls assertions", async () => {
expect.hasAssertions(); // Will fail if no assertions are called
const data = await fetchData();
expect(data).toBeDefined();
});
```
This is especially useful for async tests to ensure your assertions actually run.
### expect.assertions(count)
Use `expect.assertions(count)` to verify that a specific number of assertions are called during a test:
```ts
test("exactly two assertions", () => {
expect.assertions(2); // Will fail if not exactly 2 assertions are called
expect(1 + 1).toBe(2);
expect("hello").toContain("ell");
});
```
This helps ensure all your assertions run, especially in complex async code with multiple code paths.
## Matchers
Bun implements the following matchers. Full Jest compatibility is on the roadmap; track progress [here](https://github.com/oven-sh/bun/issues/1825).