Optimize brute-force approach
A general technique question: start with a correct brute force, analyze its complexity to find the bottleneck, then apply a pattern — hashing for lookups, sorting, two pointers, sliding window, precomputation/prefix sums, memoization/DP, or better data structures — to cut redundant work.
"Optimize the brute force" is testing a process, not one trick. The interviewer wants to see how you go from a working-but-slow solution to an efficient one.
The process
- Get a correct brute force first. Never skip this — a working O(n²) beats a broken O(n). It also gives you a baseline and proves you understand the problem.
- Analyze its complexity. State the time/space. Find what's expensive — usually a nested loop, repeated recomputation, or repeated scans/lookups.
- Identify the redundancy. Brute force is slow because it does the same work repeatedly or explores more than it needs to. Name that specific waste.
- Apply the matching pattern to eliminate it.
- Re-analyze and confirm the improvement; check edge cases still hold.
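The process above can be sketched on a tiny example. This is a hypothetical illustration (a "contains duplicate" check, not from the original text): the brute force compares every pair, the bottleneck is the inner membership scan, and a hash set removes it.

```python
def has_duplicate_brute(nums):
    # Step 1: correct brute force. O(n^2) time -- compare every pair.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] == nums[j]:
                return True
    return False

def has_duplicate_fast(nums):
    # Steps 2-4: the redundancy is the repeated "have I seen this?" scan.
    # A set gives O(1) lookups, so the whole pass is O(n) time, O(n) space.
    seen = set()
    for x in nums:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Step 5 is re-analysis: O(n²) time became O(n) time at the cost of O(n) extra space, and both versions agree on edge cases like the empty list (no duplicates).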
The optimization toolbox — match the pattern to the bottleneck
| Bottleneck | Technique |
|---|---|
| Repeated lookups / "have I seen this?" in a loop → O(n²) | Hash map / set — O(1) lookup. (e.g. Two Sum: n² → n) |
| Need order or pairs/relationships | Sort first (O(n log n)), then a linear pass |
| Searching a sorted space | Binary search — O(n) → O(log n) |
| Pairs from both ends of sorted data | Two pointers — O(n²) → O(n) |
| Contiguous subarray/substring | Sliding window — O(n·k) → O(n) |
| Repeated range sums/queries | Prefix sums / precomputation |
| Overlapping subproblems in recursion | Memoization → DP — exponential → polynomial |
| Repeated min/max/priority access | Heap |
| Prefix/string queries | Trie |
| Exploring a graph/tree | BFS/DFS, prune the search space |
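One row of the table made concrete: the sliding-window O(n·k) → O(n) speedup for contiguous subarrays. This is a sketch on an assumed example (maximum sum of a window of length k), using the "add one, remove one" incremental update.

```python
def max_window_sum_brute(nums, k):
    # O(n*k): every window's sum is recomputed from scratch.
    return max(sum(nums[i:i + k]) for i in range(len(nums) - k + 1))

def max_window_sum(nums, k):
    # O(n): adjacent windows share k-1 elements, so instead of re-summing,
    # add the element that enters and subtract the one that leaves.
    window = sum(nums[:k])
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]
        best = max(best, window)
    return best
```

The named redundancy is the overlap between adjacent windows; the incremental update eliminates exactly that.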
The recurring meta-insight
Most optimization is trading space for time (a hash map, a memo table, a precomputed array) or avoiding recomputation (caching, incremental updates, the sliding-window "add one, remove one" trick).
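The space-for-time trade in its purest form is memoization. A minimal sketch, using Fibonacci as the assumed example: the naive recursion recomputes overlapping subproblems exponentially many times; a memo table spends O(n) space to make it O(n) time.

```python
from functools import lru_cache

def fib_slow(n):
    # Exponential time: fib_slow(n-2) is recomputed inside fib_slow(n-1).
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib(n):
    # Same recursion, but each subproblem is solved once and cached:
    # O(n) time, O(n) space for the memo table.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Same code shape, different complexity class; the only change is remembering answers instead of recomputing them.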
Worked mini-example
Two Sum, brute force: nested loop, O(n²). Bottleneck: for each element you re-scan for its complement. Redundancy: those scans repeat work. Fix: a hash set/map — as you iterate, check if target - num was already seen. One pass, O(n) time, O(n) space. Space-for-time.
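The worked example above in code, both versions side by side (returning a pair of indices, a common convention for this problem):

```python
def two_sum_brute(nums, target):
    # O(n^2): for each element, re-scan the rest for its complement.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return [i, j]
    return None

def two_sum(nums, target):
    # O(n) time, O(n) space: the hash map remembers every value seen so far,
    # so the complement check is a single O(1) lookup instead of a scan.
    seen = {}  # value -> index
    for i, num in enumerate(nums):
        if target - num in seen:
            return [seen[target - num], i]
        seen[num] = i
    return None
```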
How to answer in an interview
Always: state the brute force and its complexity → name the specific wasted work → pick the pattern that kills that waste → state the new complexity. Verbalizing that chain is the actual signal — more than landing the optimal solution silently.
Follow-up questions
- Why is it important to write a correct brute force first?
- How do you decide which optimization technique applies?
- What's the recurring "space for time" tradeoff?
- Walk through optimizing a specific brute-force problem.
Common mistakes
- Jumping to optimize before having a correct solution.
- Not analyzing complexity, so you can't name the bottleneck.
- Applying a technique without identifying the actual redundancy.
- Optimizing time while ignoring the space cost (or vice versa).
Performance considerations
- Most speedups trade memory for time (hash maps, memo tables, prefix arrays) or eliminate recomputation (sliding window, incremental updates). Always re-state the new time AND space after optimizing.
Edge cases
- When the brute force is already optimal (don't over-engineer).
- When constraints are small enough that brute force is fine.
- Optimizations that help time but blow up space unacceptably.
Real-world examples
- Two Sum: O(n²) nested scan → O(n) with a hash map.
- Range-sum queries: recompute each time → O(1) per query with a prefix-sum array.
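The second example in code. A minimal sketch of the prefix-sum idea: pay O(n) once to precompute running sums, then answer any range-sum query in O(1) by subtracting two prefixes (class and method names are illustrative, not from the original text).

```python
class PrefixSums:
    """Precompute running sums so any range sum is O(1) per query."""

    def __init__(self, nums):
        # prefix[i] holds the sum of nums[0:i]; built once in O(n).
        self.prefix = [0]
        for x in nums:
            self.prefix.append(self.prefix[-1] + x)

    def range_sum(self, lo, hi):
        # Sum of nums[lo..hi] inclusive: difference of two prefixes, O(1).
        return self.prefix[hi + 1] - self.prefix[lo]
```

Without the precomputation, each query costs O(hi − lo) to re-scan; with it, q queries cost O(n + q) total instead of O(n·q).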