Frontend
Medium
What tools would you use for testing, and why?
Vitest/Jest + React Testing Library for unit/integration tests focused on user behavior, Playwright/Cypress for end-to-end critical flows, MSW for mocking the network. Follow the testing trophy — invest most in integration tests.
6 min read · ~8 min to think through
The goal isn't "100% coverage" — it's confidence that shipping won't break users, at a sustainable cost. Tool choice follows the testing trophy: a little static analysis, some unit, a lot of integration, a few E2E.
Static (cheapest, always on)
- TypeScript — catches whole classes of bugs before tests run.
- ESLint + Prettier — consistency and common-mistake detection.
Unit / integration — Vitest (or Jest) + React Testing Library
- Vitest for new projects: fast, ESM-native, Jest-compatible API, shares Vite config. Jest is fine for existing setups.
- React Testing Library — tests components the way users use them: query by role/label/text, fire real events, assert on visible output. Don't test implementation details (state, instance methods) — those tests break on every refactor.
- Most of your investment goes here: integration tests that render a feature, interact with it, and assert behavior. High confidence, refactor-resilient.
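As a sketch, an RTL integration test for a hypothetical LoginForm might look like this (the component, labels, and error copy are all assumptions, and it presumes @testing-library/jest-dom matchers are registered in setup):

```tsx
// LoginForm is a hypothetical component; labels and messages are assumptions.
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { expect, test } from "vitest";
import { LoginForm } from "./LoginForm";

test("rejects a too-short password with a visible error", async () => {
  const user = userEvent.setup();
  render(<LoginForm />);

  // Query the way a user would: by label and accessible role, not by test IDs
  await user.type(screen.getByLabelText(/email/i), "ada@example.com");
  await user.type(screen.getByLabelText(/password/i), "123");
  await user.click(screen.getByRole("button", { name: /sign in/i }));

  // Assert on what the user sees, not on component internals
  expect(await screen.findByText(/password is too short/i)).toBeInTheDocument();
});
```

Note the test never touches state or props directly, so it survives a rewrite from useState to a reducer unchanged.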
Network mocking — MSW (Mock Service Worker)
- Intercepts at the network layer, so components use their real fetch code. Same handlers work in tests and in the browser dev environment. Far better than mocking fetch or axios per test.
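A minimal MSW sketch using the v2 API (the /api/user endpoint and payload are assumptions; the same handlers can also back the browser via setupWorker):

```typescript
import { setupServer } from "msw/node";
import { http, HttpResponse } from "msw";
import { beforeAll, afterEach, afterAll } from "vitest";

// Hypothetical endpoint: components hit it with their real fetch code
export const server = setupServer(
  http.get("/api/user", () => HttpResponse.json({ id: 1, name: "Ada" }))
);

// Typical test-runner wiring (e.g. a Vitest setup file)
beforeAll(() => server.listen({ onUnhandledRequest: "error" }));
afterEach(() => server.resetHandlers());
afterAll(() => server.close());
```

Failing on unhandled requests keeps the mock surface honest: a new fetch call in a component breaks loudly instead of silently returning nothing.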
End-to-end — Playwright (or Cypress)
- Playwright — fast, multi-browser, great parallelism and trace viewer. Cypress — excellent DX, time-travel debugging.
- Reserve E2E for critical user journeys (sign-up, checkout, login) — they're slow and flakier, so keep the suite small and high-value.
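One such critical journey might be sketched with Playwright like this (routes, labels, and on-screen copy are all assumptions):

```typescript
import { test, expect } from "@playwright/test";

// Hypothetical checkout happy path: the kind of journey worth a full browser
test("user can complete a purchase", async ({ page }) => {
  await page.goto("/products/espresso-machine");
  await page.getByRole("button", { name: "Add to cart" }).click();
  await page.getByRole("link", { name: "Checkout" }).click();
  await page.getByLabel("Card number").fill("4242 4242 4242 4242");
  await page.getByRole("button", { name: "Pay now" }).click();

  // Web-first assertion: auto-waits, which avoids arbitrary sleeps and flake
  await expect(page.getByText("Order confirmed")).toBeVisible();
});
```

Role- and label-based locators plus auto-waiting assertions are the main defenses against the flakiness the bullet above warns about.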
Supporting tools
- Storybook + interaction tests / Chromatic for visual regression on components.
- axe / jest-axe for automated accessibility checks.
- Codecov or similar to track coverage trends (a signal, not a target).
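For instance, a jest-axe check on a hypothetical SignupForm (the component name is an assumption) can run alongside the behavior tests:

```tsx
import { render } from "@testing-library/react";
import { axe, toHaveNoViolations } from "jest-axe";
import { expect, test } from "vitest";
import { SignupForm } from "./SignupForm"; // assumed component

expect.extend(toHaveNoViolations);

test("signup form has no detectable accessibility violations", async () => {
  const { container } = render(<SignupForm />);
  expect(await axe(container)).toHaveNoViolations();
});
```

Automated axe scans catch only a subset of accessibility issues, so this complements rather than replaces manual review.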
Why this shape
Unit tests are cheap but low-confidence; E2E is high-confidence but slow and flaky. Integration tests (RTL) hit the sweet spot — they exercise real component composition and user flows without a full browser. Optimize for confidence per minute of CI and per hour of maintenance.
Follow-up questions
- Why does the testing trophy favor integration over unit tests?
- Why is MSW better than mocking fetch directly?
- What does "testing implementation details" mean, and why is it bad?
- How do you keep an E2E suite from becoming flaky and slow?
Common mistakes
- Chasing 100% coverage with brittle tests that assert on internal state.
- Too many E2E tests — slow, flaky CI that the team starts ignoring.
- Mocking fetch/axios per test instead of intercepting at the network layer with MSW.
- Testing component internals so every refactor breaks the suite.
Performance considerations
- CI time is the budget. Unit/integration with Vitest run in seconds; E2E in minutes. Parallelize, shard, and keep E2E small. Flaky tests cost more than missing tests because they erode trust in the whole suite.
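As one concrete lever, Playwright can shard a suite across CI machines with its built-in --shard flag:

```shell
# Each CI job runs one slice of the suite; Playwright distributes tests across shards
npx playwright test --shard=1/4   # job 1 of 4
npx playwright test --shard=2/4   # job 2 of 4, and so on
```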
Edge cases
- Flaky tests from timers, animations, and race conditions.
- Testing code that depends on real time/dates (fake timers).
- Visual regressions that functional tests can't catch.
- Accessibility regressions slipping through behavior tests.
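The timer-related cases above are usually handled with fake timers. A Vitest sketch (the debounce helper is defined inline purely for illustration):

```typescript
import { afterEach, beforeEach, expect, test, vi } from "vitest";

// Minimal debounce, defined inline so the example is self-contained
function debounce(fn: () => void, ms: number) {
  let t: ReturnType<typeof setTimeout>;
  return () => {
    clearTimeout(t);
    t = setTimeout(fn, ms);
  };
}

beforeEach(() => vi.useFakeTimers());
afterEach(() => vi.useRealTimers());

test("debounced save fires once after the delay, with no real waiting", () => {
  const save = vi.fn();
  const debouncedSave = debounce(save, 500);

  debouncedSave();
  debouncedSave(); // rapid second call resets the timer
  expect(save).not.toHaveBeenCalled();

  vi.advanceTimersByTime(500); // jump virtual time forward deterministically
  expect(save).toHaveBeenCalledTimes(1);
});
```

Advancing virtual time makes the test both instant and deterministic — the two properties real timers deny you.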
Real-world examples
- RTL integration tests covering a checkout form, MSW mocking the payment API, one Playwright test for the full happy-path purchase.
- jest-axe in CI catching a missing label before it ships.
Senior engineer discussion
Seniors talk about confidence-per-cost and the testing trophy, not coverage numbers. They emphasize testing behavior over implementation (the RTL philosophy), MSW for realistic network boundaries, and keeping E2E small to fight flake. They also tie testing strategy to CI economics and team trust — a suite people ignore is worse than no suite.