For each element, count the numbers to its right that are smaller
Naive: nested loops, O(n²). Optimal: process right-to-left while maintaining a sorted structure (a Fenwick tree/BIT, or counting during merge sort) for O(n log n). Explain both; the brute force is usually accepted, but mention the optimization.
"Count smaller elements to the right" — classic problem testing whether you can go beyond the obvious O(n²).
Brute force — O(n²)
function countSmallerRight(arr) {
  const res = Array(arr.length).fill(0);
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[j] < arr[i]) res[i]++;
    }
  }
  return res;
}
countSmallerRight([5, 2, 6, 1]); // [2, 1, 1, 0]
Two nested loops. Fine for small inputs; state the complexity up front, then offer the better approach.
Optimal — O(n log n)
The trick: process right to left and maintain a sorted view of everything seen so far. For each element, "how many already-seen elements are smaller" = its rank in that sorted set.
Approach A — binary insertion into a sorted array:
function countSmallerRight(arr) {
  const sorted = [];
  const res = [];
  for (let i = arr.length - 1; i >= 0; i--) {
    // find insertion index = count of elements smaller than arr[i]
    let lo = 0, hi = sorted.length;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (sorted[mid] < arr[i]) lo = mid + 1;
      else hi = mid;
    }
    res[i] = lo;
    sorted.splice(lo, 0, arr[i]); // splice is O(n) — see caveat
  }
  return res;
}
The binary search is O(log n), but each splice insertion is O(n), so this is still O(n²) in the worst case. To get true O(n log n):
Approach B — Fenwick tree (BIT): coordinate-compress the values, then for each element right-to-left, query the prefix sum of counts below it and update. Both query and update are O(log n) → O(n log n) total.
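A minimal sketch of Approach B in JavaScript; the rank map and the update/query helpers are illustrative names, not library calls. It counts strictly smaller values, since query(r - 1) stops just below the element's own rank:
function countSmallerRightBIT(arr) {
  // Coordinate compression: map each distinct value to a 1-based rank.
  const sortedUnique = [...new Set(arr)].sort((a, b) => a - b);
  const rank = new Map(sortedUnique.map((v, i) => [v, i + 1]));

  // Fenwick tree over ranks; tree[i] holds partial counts of seen values.
  const tree = new Array(sortedUnique.length + 1).fill(0);
  const update = (i) => { for (; i < tree.length; i += i & -i) tree[i]++; };
  const query = (i) => { // prefix sum of counts for ranks 1..i
    let s = 0;
    for (; i > 0; i -= i & -i) s += tree[i];
    return s;
  };

  const res = new Array(arr.length).fill(0);
  for (let i = arr.length - 1; i >= 0; i--) {
    const r = rank.get(arr[i]);
    res[i] = query(r - 1); // how many already-seen values are strictly smaller
    update(r);             // mark this value as seen
  }
  return res;
}

countSmallerRightBIT([5, 2, 6, 1]); // [2, 1, 1, 0]
Compression sizes the tree to the number of distinct values, which is also why negative numbers and huge value ranges cost nothing extra.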
Approach C — merge sort: count "inversions to the right" while merging — also O(n log n).
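And a sketch of Approach C, one possible shape rather than the canonical one: sort indices instead of values so each count can be written back to its original position. Whenever an element from the left half is merged, every right-half element merged before it is strictly smaller and was originally to its right:
function countSmallerRightMerge(arr) {
  const res = new Array(arr.length).fill(0);

  function sortIndices(indices) {
    if (indices.length <= 1) return indices;
    const mid = indices.length >> 1;
    const left = sortIndices(indices.slice(0, mid));
    const right = sortIndices(indices.slice(mid));
    const merged = [];
    let i = 0, j = 0;
    while (i < left.length || j < right.length) {
      if (j === right.length || (i < left.length && arr[left[i]] <= arr[right[j]])) {
        // j right-half elements were merged first: each is strictly smaller
        // than arr[left[i]] and sits to its right in the original array.
        res[left[i]] += j;
        merged.push(left[i++]);
      } else {
        merged.push(right[j++]);
      }
    }
    return merged;
  }

  sortIndices(arr.map((_, i) => i));
  return res;
}

countSmallerRightMerge([5, 2, 6, 1]); // [2, 1, 1, 0]
The <= tie-break pulls the left element first on equal values, so duplicates aren't miscounted as smaller.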
The framing
"Brute force is the nested loop, O(n²) — I'd write that first and confirm it works. The insight for the optimal solution is to scan right-to-left and ask 'what's the rank of this element among everything to its right.' A Fenwick tree or count-during-merge-sort answers that in O(log n) each, giving O(n log n) overall."
Follow-up questions
- Why is the sorted-array-with-splice approach still O(n²)?
- How does a Fenwick tree give O(log n) updates and queries?
- How does merge sort count inversions?
- How would coordinate compression help with large value ranges?
Common mistakes
- Claiming binary-insertion into an array is O(n log n) — splice makes it O(n²).
- Processing left-to-right, which doesn't naturally give 'to the right' counts.
- Off-by-one in the binary search insertion index.
- Not handling duplicates correctly (smaller vs smaller-or-equal).
Performance considerations
- Brute force O(n²) is fine up to a few thousand elements. For large n, the Fenwick-tree or merge-sort solution at O(n log n) is the expected optimization; both need O(n) extra space.
Edge cases
- All equal elements — every count is 0.
- Strictly decreasing array — counts are n-1, n-2, ..., 0.
- Negative numbers and large value ranges (need coordinate compression for a BIT).
- Empty or single-element array.
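A few quick checks against the implementations above:
countSmallerRight([]);           // []
countSmallerRight([7]);          // [0]
countSmallerRight([3, 3, 3]);    // [0, 0, 0]
countSmallerRight([4, 3, 2, 1]); // [3, 2, 1, 0]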
Real-world examples
- Counting inversions to measure how 'unsorted' a dataset is.
- Rank/percentile computations in analytics.