Code reviews & quality standards
Reviews exist to catch defects, share knowledge, and maintain consistency — not to gatekeep. Automate the objective stuff (lint, format, types, tests, CI), keep PRs small, review for correctness/design/readability, give kind, specific, actionable feedback, and distinguish blocking issues from preferences.
Code review is a quality and knowledge-sharing process, not a gate or a power trip. Good standards make reviews fast, consistent, and genuinely useful.
What reviews are for
- Catch defects — bugs, edge cases, security/perf issues.
- Maintain consistency — patterns, architecture, conventions.
- Share knowledge — the reviewer learns the change; the author learns from feedback; the bus factor goes up.
- Collective ownership — more than one person understands each part of the code.
Automate everything objective
Humans should never spend review time on what a machine can enforce:
- Formatting — Prettier.
- Lint — ESLint (incl. a11y, hooks rules).
- Types — TypeScript in CI.
- Tests — must pass; coverage thresholds where it makes sense.
- CI — build, bundle-size checks, visual regression.
This frees human review for design, correctness, and clarity — the things only humans can judge.
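As a concrete sketch of what this looks like in a TypeScript codebase: a minimal ESLint flat config wiring in the a11y and hooks rules mentioned above. This is an illustrative sketch, assuming ESLint 9's flat config (TypeScript config files currently need the jiti package) and flat-config-compatible versions of the named plugins; it is not a complete setup.

```ts
// eslint.config.ts: a minimal sketch, not a complete setup.
import tseslint from "typescript-eslint";
import reactHooks from "eslint-plugin-react-hooks";
import jsxA11y from "eslint-plugin-jsx-a11y";

export default tseslint.config(
  // Baseline TypeScript rules, so type-level mistakes fail in CI, not in review.
  ...tseslint.configs.recommended,
  {
    files: ["**/*.{ts,tsx}"],
    plugins: {
      "react-hooks": reactHooks,
      "jsx-a11y": jsxA11y,
    },
    rules: {
      // Hooks rules: catch conditional hook calls and stale dependency arrays.
      "react-hooks/rules-of-hooks": "error",
      "react-hooks/exhaustive-deps": "warn",
      // a11y rules: enforce basics like alt text before a human ever looks.
      "jsx-a11y/alt-text": "error",
    },
  },
);
```

Run this in CI alongside `prettier --check .` and `tsc --noEmit`, and no reviewer ever has to comment on formatting, hook misuse, or a missing type again.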
What humans review for
- Correctness — does it work, including edge cases and error paths? (A concrete example follows this list.)
- Design/architecture — right abstraction, fits the codebase, not over/under-engineered.
- Readability/maintainability — will someone understand this in 6 months?
- Tests — meaningful coverage of the new behavior.
- Security & performance — obvious risks.
- Naming, API surface, accessibility.
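To make the correctness bullet concrete, here is the kind of defect that lint, types, and tests can all miss: a hypothetical handler whose error path a reviewer should question.

```ts
// Hypothetical code under review: parse a user id from a request header.
// Types pass and the happy-path test passes, but the error path is wrong.

// Before: the review question "what happens if this is null?" applies.
// Number(null) is 0, so a missing header silently becomes user 0.
function parseUserIdBefore(header: string | null): number {
  return Number(header);
}

// After addressing the comment: missing and malformed headers are explicit
// error paths instead of silently producing a bogus id.
function parseUserId(header: string | null): number {
  if (header === null) {
    throw new Error("missing user id header");
  }
  const id = Number(header);
  if (!Number.isInteger(id) || id <= 0) {
    throw new Error(`invalid user id: ${header}`);
  }
  return id;
}
```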
Practices that make it work
- Small PRs. A 100-line PR gets a real review; a 2,000-line PR gets a rubber stamp. This is the single biggest lever (one way to enforce it mechanically is sketched after this list).
- Clear PR description — what, why, how to test, screenshots.
- Author self-reviews first — catch your own obvious stuff.
- Fast turnaround — a review that sits for days kills momentum; agree on a turnaround SLA.
- Distinguish blocking from non-blocking — prefix nits ("nit:", "optional:") so the author knows what actually gates merge.
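The first two practices can themselves be nudged mechanically. Here is a sketch using Danger JS, which runs TypeScript rules against a PR in CI; the 400-line threshold is an arbitrary example, not a recommendation.

```ts
// dangerfile.ts: a sketch of checking PR size and description with Danger JS.
import { danger, fail, warn } from "danger";

const pr = danger.github.pr;
const changedLines = pr.additions + pr.deletions;

// Small PRs: warn rather than hard-fail, since some diffs (generated code,
// lockfiles) are legitimately large.
if (changedLines > 400) {
  warn(
    `This PR changes ${changedLines} lines. Large PRs tend to get ` +
      `rubber-stamped; consider splitting it.`,
  );
}

// Clear PR description: an empty body blocks merge outright.
if (!pr.body || pr.body.trim().length === 0) {
  fail("Please add a PR description: what, why, and how to test.");
}
```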
Feedback culture
- Kind, specific, actionable — "consider extracting this into a hook so X can reuse it" beats "this is messy."
- Critique the code, not the person. Ask questions ("what happens if this is null?") rather than commands.
- Explain the why — feedback that teaches.
- Praise good stuff too.
- For deep disagreements, talk — sync up instead of a 20-comment thread.
Standards as docs, not tribal knowledge
Write the standards down: a style guide, an architecture doc, a PR template, and a review checklist. That way they are explicit and consistent across reviewers, not dependent on who happens to review.
Follow-up questions
- What should be automated vs left to human reviewers?
- Why are small PRs the biggest lever for review quality?
- How do you give critical feedback without demoralizing the author?
- How do you handle a disagreement that a comment thread can't resolve?
Common mistakes
- Using review as a gatekeeping or ego exercise.
- Huge PRs that get rubber-stamped because nobody can review them properly.
- Bikeshedding style that should have been automated.
- Vague or harsh feedback; not separating blocking issues from nits.
Edge cases
- Urgent hotfixes vs normal review rigor.
- Reviewing a domain you don't know well.
- Author and reviewer fundamentally disagree on approach.
- Junior reviewing senior, or vice versa.
Real-world examples
- CI enforcing format/lint/types/tests so reviewers focus only on design and correctness.
- A team PR template + review checklist making standards explicit instead of tribal.