Outline of this article
Pt 1: Why do we write tests?
* To prove our code works
* To protect our code from breaking when we work on it
* To document behavior
* To help make design decisions
Pt 2: Testing practices
* TDD
* Watch mode
Pt 3: Testing React
* Recommended libraries
* Snapshots vs assertions
* Rules of thumb for assertions
* Rules of thumb for snapshots
* Black Box testing
* What is the public API for a React component?
* What is not part of the public API for a React component?
Pt 4: More testing practices
* Testing async code
* Testing library code
* Quotes about testing
If you are working from a spec, user story, or design doc, the acceptance criteria are a good starting point for a test suite.
Example: describing what the app should do
describe('MyList', () => {
  test('should sort alphabetically when the "name" header is clicked')
})

describe('An admin', () => {
  describe('with no teams', () => {
    test('should see the empty teams page')
    test('should see the New Team button')
  })

  describe('with existing teams', () => {
    test('should see a list of teams')
  })
})
A comprehensive test suite allows us to move quickly:
I’ve seen the continuous delivery process running smoothly at dozens of organizations — but I’ve never seen it work anywhere without a quality array of test suites that includes both unit tests and functional tests, and frequently includes integration tests, as well.
— Eric Elliott
Example: describing expected states
import React from "react";
import { mount } from "enzyme";
import Header from "./Header";

it("should render signed out by default", () => {
  expect.assertions(2);
  const tree = mount(<Header />);
  const markers = {
    in: tree.find({ "data-test": "signedin" }),
    out: tree.find({ "data-test": "signedout" })
  };
  expect(markers.in).toHaveLength(0);
  expect(markers.out).toHaveLength(1);
});
Sometimes tests are the only place a function is fully documented. This is particularly true of edge cases that might not show up in normal usage. Make sure to cover those cases! Snapshots in particular can help quickly document a range of possible scenarios (at the cost of some specificity).
Example
import React from "react";
import { shallow } from "enzyme";
import Foo from "./Foo";

// dummyItemsList and dummyInvalidItemsList are fixture data defined elsewhere in the suite

describe("Component states", () => {
  test("loading state snapshot should match", () => {
    const snap = shallow(<Foo loading />);
    expect(snap).toMatchSnapshot();
  });
  test("empty state snapshot should match", () => {
    const snap = shallow(<Foo items={[]} />);
    expect(snap).toMatchSnapshot();
  });
  test("happy path state snapshot should match", () => {
    const snap = shallow(<Foo items={dummyItemsList} />);
    expect(snap).toMatchSnapshot();
  });
  test("error state snapshot should match", () => {
    const snap = shallow(<Foo items={dummyInvalidItemsList} />);
    expect(snap).toMatchSnapshot();
  });
})
Writing tests early in the development process helps to clarify the relationships between components and identify problems like coupling, hard-to-use APIs, and separation-of-concerns violations.
Test-Driven Development
Test-driven development (TDD) advocates recommend writing tests before code: write a test, then write just enough code to make it pass; write another test, make it pass, and so on, one assertion at a time. Even if you aren’t practicing TDD, it’s a good idea to be aware of the practice and what it is trying to achieve.
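A minimal sketch of that rhythm, using a hypothetical slugify helper: each test is written first and fails, then just enough code is added to make it pass before moving on to the next test.

// Hypothetical helper, grown one failing test at a time
const slugify = title => title.toLowerCase().trim().replace(/\s+/g, "-");

test("should lowercase and hyphenate a title", () => {
  expect(slugify("Hello World")).toEqual("hello-world");
});

test("should trim surrounding whitespace", () => {
  expect(slugify("  Hello World  ")).toEqual("hello-world");
});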
Watch Mode
It’s a good idea to use your test runner’s “watch mode” to continuously run tests while you work. In Jest you can pass a flag: jest --watch. Many editors also have plugins to display test output within the IDE.
* Jest — test runner, assertions, mocks
* react-testing-library — renders components for testing (Update 2018–07–02: I now recommend react-testing-library over Enzyme for testing components)
* Sinon — stubs and mocks (Jest has built-ins for some of this functionality so you might not need it)
yarn add --dev jest react-testing-library sinon
Render components:
// importing renderers for snapshotting and DOM assertions
import { render } from "react-testing-library";
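For example, render returns the container DOM node along with query helpers; this sketch assumes a Header component that renders the data-test="signedout" marker from the earlier example:

import React from "react";
import { render } from "react-testing-library";
import Header from "./Header";

test("should show the signed-out marker by default", () => {
  const { container } = render(<Header />);
  // Query the rendered DOM directly rather than the component instance
  expect(container.querySelector('[data-test="signedout"]')).not.toBeNull();
});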
Jest comes with a toMatchSnapshot expectation that lets you compare a data structure to a stored copy checked into source control. Snapshots have advantages: they are easy to create, easy to update, and can act as a red flag if updating one component causes snapshots in other areas to fail. On the other hand, it's very easy to accidentally update a snapshot without verifying that the change makes sense. It's also hard to tell which details in a snapshot matter and which don't, whereas focused assertions can allow implementation changes while still preventing regressions.
Rules of thumb for assertions:
Rules of thumb for snapshots:
Tests should use public APIs. This means that tests use the same interfaces as “real” (application) code.
A component under test should be treated as a “black box” that takes inputs and produces outputs; the test should verify that, given a certain input or combination of inputs, the component produces the correct output.
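As a sketch (the ItemList component here is hypothetical): props go in, and the assertions look only at the rendered output, never at state or instance methods.

import React from "react";
import { render } from "react-testing-library";
import ItemList from "./ItemList";

test("should render one row per item", () => {
  const items = [{ id: 1, name: "Alpha" }, { id: 2, name: "Beta" }];
  const { container, getByText } = render(<ItemList items={items} />);

  // Inputs: props. Outputs: rendered DOM.
  expect(container.querySelectorAll("li")).toHaveLength(2);
  expect(getByText("Alpha")).toBeTruthy();
});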
What is the public API for a React Component?
Inputs
Outputs
What is not part of the public API for a React Component?
Not Inputs
Not Outputs
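Tying these categories together in a sketch (the NewTeamButton component and its onCreate prop are hypothetical): the input is a user event on the rendered DOM, and the observable output is a call to a callback prop, with no assertions against internal state or methods.

import React from "react";
import { render, fireEvent } from "react-testing-library";
import NewTeamButton from "./NewTeamButton";

test("should call onCreate when clicked", () => {
  const onCreate = jest.fn();
  const { getByText } = render(<NewTeamButton onCreate={onCreate} />);

  // Input: a click on the rendered button
  fireEvent.click(getByText("New Team"));

  // Output: the callback prop was called
  expect(onCreate).toHaveBeenCalledTimes(1);
});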
Testing async code requires special care to ensure that all of the assertions actually run. Jest tests can be set up to fail if the number of received assertions doesn’t match the expected number with a special expectation: expect.assertions(count). Jest will also fail a test that returns a rejected Promise.
If you are testing error-handling behavior, it is especially important to add the assertion count so that returning a resolved Promise does not make the test pass without running the .catch code and its assertions.
Callback style with done
it("should get item from API", done => {
expect.assertions(1) // ⚠️
fetch('/url/1')
.then(data => expect(data).toHaveLength(1); done)
.catch(err => done)
})
Returning a Promise
it("should get item from API", () => {
return fetch('/url/1')
.then(data => expect(data).toHaveLength(1))
})
Using async/await syntax with Promises
it("should get item from API", async () => {
const data = await fetch('/url/1')
expect(data).toHaveLength(1)
})
Testing error handling with async/await
it("should not get item when unauthorized", async () => {
expect.assertions(1) // ⚠️
try {
const data = await fetch('/url/2')
} catch(err) {
expect(err.message).toEqual('Not Authorized')
}
})
Avoid writing unit tests that would be covered by a library’s own test suite. Functional/end-to-end tests against the app in a staging environment act as a sanity check on the assumption that the library works, ensuring that both the libraries and the app code work together.
The tests should halt the delivery pipeline on failure and produce a good bug report when they fail.
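For instance (formatPrice is a hypothetical wrapper around a built-in formatting API), test the behavior your own code adds rather than re-testing the library underneath:

// The formatting library's own suite already covers currency formatting,
// so this unit test only covers the fallback behavior the wrapper adds.
const formatPrice = price =>
  price == null
    ? "N/A"
    : new Intl.NumberFormat("en-US", { style: "currency", currency: "USD" }).format(price);

test("should fall back to N/A when the price is missing", () => {
  expect(formatPrice(undefined)).toEqual("N/A");
});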
Never write test code that assumes it knows how the method under test should be implemented.
See also Tautological Tests (Fabio Pereira)
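A sketch of the anti-pattern (loadUser and its api dependency are hypothetical): the assertions only restate the mock and the implementation, so the test passes by construction and would not catch a real regression.

// Hypothetical unit under test
const loadUser = (api, id) => api.getUser(id);

// ⚠️ Tautological: the test mirrors the mock it just set up
test("should load the user", async () => {
  const api = { getUser: jest.fn().mockResolvedValue({ name: "Ada" }) };
  const user = await loadUser(api, 1);
  expect(api.getUser).toHaveBeenCalledWith(1);
  expect(user).toEqual({ name: "Ada" });
});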
Behavior-driven development specifies that tests of any unit of software should be specified in terms of the desired behavior of the unit. The “desired behavior” in this case consists of the requirements set by the business.