Behavioral: deep dive into past projects, SDLC, debugging production issues, observability, knowing when not to optimize
Be ready to go several layers deep on a real project: the problem and your role, the architecture and why, your SDLC (branching, reviews, CI/CD, testing), how you debug production issues (error tracking, logs, repro, rollback), and your observability (metrics, alerts, RUM). Pick a project you know cold and whose every decision you can defend.
This is the project deep-dive round. The interviewer picks a project from your resume and drills down — SDLC, debugging, observability — until they hit the limit of your knowledge. The only way to do well is to pick a project you know cold and be honest about boundaries.
Prepare one project end-to-end
Choose a project where you owned a meaningful slice. Be ready to discuss:
The problem and your role
- What problem did it solve, for whom? What was your specific contribution vs. the team's?
Architecture and decisions
- The component structure, state management, data flow, key libraries.
- Why each choice — and what you'd do differently now. Interviewers love "what would you change?"
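When you describe state management and data flow, be ready to sketch the shape concretely. A minimal reducer-style example (all names hypothetical, no particular library implied):

```typescript
// A tiny reducer: UI events become typed actions, a pure function derives
// the next state, and the view re-renders from state. All names are
// hypothetical placeholders for whatever your project actually used.
type State = { items: string[]; loading: boolean };

type Action =
  | { type: "loadStart" }
  | { type: "loadSuccess"; items: string[] }
  | { type: "add"; item: string };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "loadStart":
      return { ...state, loading: true };
    case "loadSuccess":
      return { items: action.items, loading: false };
    case "add":
      return { ...state, items: [...state.items, action.item] };
  }
}
```

Being able to explain *why* this shape (a pure function is easy to unit-test and easy to replay when debugging) is exactly the kind of "why" interviewers probe.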
SDLC — how you actually shipped
- Branching strategy, PR/review process, CI/CD pipeline, environments.
- Testing strategy: unit / integration / e2e, what you tested and what you deliberately didn't.
- How features went from idea → spec → build → review → release (feature flags? staged rollout?).
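If you mention staged rollouts, expect a "how did that work?" follow-up. Under the hood it is often just a deterministic percentage gate; a minimal sketch (all names hypothetical, real flag services add targeting rules and kill switches):

```typescript
// Deterministic percentage rollout: hash the user id into a bucket 0-99
// and enable the feature when the bucket falls below the rollout percent.
// Hypothetical sketch; a hosted flag service does far more than this.
function bucketFor(userId: string): number {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return hash % 100;
}

function inRollout(userId: string, percent: number): boolean {
  return bucketFor(userId) < percent;
}
```

Because the bucket is derived from the user id, a given user stays in the same variant as the ramp moves 1% → 10% → 100%, instead of flipping back and forth between releases.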
Debugging production issues
- Concrete example: a real bug, how you found it (error tracking, logs, the recent diff), how you mitigated (rollback / flag / hotfix), and how you prevented recurrence.
- Your mental model: mitigate → diagnose with data → fix forward → add a regression test.
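The last step of that loop, the regression test, can be as small as pinning the exact input that broke. A hypothetical example: a currency formatter that dropped the leading zero on sub-10-cent remainders, fixed and locked in:

```typescript
// Hypothetical bug: formatCents(1005) once rendered "$10.5" because the
// cents remainder was not zero-padded. The fix pads to two digits; the
// regression test pins the exact failing input so the bug cannot return.
function formatCents(cents: number): string {
  const dollars = Math.floor(cents / 100);
  const remainder = String(cents % 100).padStart(2, "0");
  return `$${dollars}.${remainder}`;
}

// Regression test (any test runner works; shown as plain assertions):
console.assert(formatCents(1005) === "$10.05", "pads sub-10-cent remainders");
console.assert(formatCents(99) === "$0.99", "handles amounts under a dollar");
```

Naming the concrete input that failed, and the test that now guards it, is a much stronger answer than "we added tests."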
Observability
- Error tracking (Sentry), performance monitoring / RUM (Core Web Vitals in the field), logging, custom metrics, alerts.
- What you actually watched and what an alert would page you for.
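Alerting on field metrics usually means comparing a percentile against a budget, since averages hide tail pain; Core Web Vitals guidance judges field data at p75. A minimal sketch (function names and thresholds are hypothetical):

```typescript
// Nearest-rank 75th percentile of collected samples (e.g. LCP in ms),
// compared against a budget. Names and wiring are hypothetical; the p75
// convention comes from Core Web Vitals field-data guidance.
function p75(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil(0.75 * sorted.length) - 1; // nearest-rank percentile
  return sorted[idx];
}

function breachesBudget(samples: number[], budgetMs: number): boolean {
  return p75(samples) > budgetMs;
}

// e.g. page when a day's LCP samples breach the 2500 ms "good" threshold:
// breachesBudget(lcpSamples, 2500)
```

Being able to say "we alerted when p75 LCP crossed 2.5 s for an hour" is a far crisper answer than "we monitored performance."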
How to handle the depth
- Go as deep as you genuinely can, then say so. "I owned the frontend; the deploy pipeline was platform-team — here's my understanding of it." Honesty about boundaries beats bluffing, which collapses on the next question.
- Bring numbers — bundle size, LCP, error rate, deploy frequency.
- Volunteer the trade-offs — every decision had a downside; naming it shows maturity.
What interviewers are assessing
- Depth of real ownership — did you actually build it, or just touch it?
- Engineering judgment — can you defend decisions and critique them?
- Operational maturity — do you think about debugging and observability, or just feature code?
- Honesty — calibrated confidence, no bluffing.
Senior framing
For senior roles this round separates "wrote features" from "owned a system." The signal is that you can move fluidly from a UI component down to the CI pipeline and the production alerting — and that you have opinions, backed by experience, on the trade-offs at every layer. Rehearse your deep-dive project out loud; the holes show up fast.
Follow-up questions
- What would you architect differently if you rebuilt that project today?
- Walk me through the hardest production bug you debugged on it.
- What did you choose NOT to test, and why?
- What did your alerting page you for?
Common mistakes
- Picking a project you only partly remember — depth questions expose it.
- Claiming credit for team-wide work without distinguishing your part.
- Bluffing on areas you didn't own instead of stating the boundary.
- Only talking feature code — no SDLC, debugging, or observability story.
Edge cases
- If parts of the project were owned by other teams, be precise about the line and what you know secondhand.
Real-world examples
- Resume project deep-dives are standard at mid-to-senior frontend interviews.