Scenario Based: You're building a real-time chat application where multiple messages arrive simultaneously. How would you ensure the DOM updates efficiently without blocking the main thread?
Batch incoming messages instead of one setState per message; React 18 auto-batches but a buffer + flush on rAF/interval helps for bursts. Virtualize the message list. Keep updates off the critical path: process/parse in chunks or a Web Worker, use stable keys, and memoize message rows.
A burst of simultaneous messages is a render-thrash problem: naively, each message = a setState = a re-render = a DOM update, and the main thread drowns. The fixes layer up.
1. Batch the state updates
Don't call setState per message. Buffer incoming messages and flush them together:

```javascript
const buffer = useRef([]);

useEffect(() => {
  const onMessage = (msg) => {
    buffer.current.push(msg); // collect, don't render yet
  };
  socket.on("message", onMessage);

  const id = setInterval(() => {
    if (buffer.current.length) {
      const batch = buffer.current; // drain before setState, so the
      buffer.current = [];          // functional updater stays pure
      setMessages((prev) => [...prev, ...batch]); // one update
    }
  }, 100); // flush ~10x/sec

  return () => {
    socket.off("message", onMessage); // don't leak the listener
    clearInterval(id);
  };
}, []);
```

React 18 auto-batches updates within the same event/tick — but messages arriving across separate socket events/ticks aren't auto-batched, so an explicit buffer + flush (on an interval or requestAnimationFrame) collapses a burst into one render.
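The same buffer-and-flush idea, framework-agnostic — `MessageBuffer` is a hypothetical helper for illustration, not a library API:

```javascript
// Collect messages as they arrive; drain them in one batch on a timer
// or requestAnimationFrame tick, then do ONE setState with the batch.
class MessageBuffer {
  constructor() {
    this.pending = [];
  }
  push(msg) {
    this.pending.push(msg); // O(1) per message; no render triggered
  }
  // Hand back everything collected so far and reset the buffer.
  drain() {
    const batch = this.pending;
    this.pending = [];
    return batch;
  }
}

const buf = new MessageBuffer();
buf.push({ id: 1, text: "hi" });
buf.push({ id: 2, text: "hello" });
const batch = buf.drain(); // one batch of 2 → one render
```

Draining (swap-and-reset) rather than copying avoids mutating the array a pending setState might still be reading.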
2. Virtualize the message list
A chat can have thousands of messages. Rendering them all is the real killer. Windowing (react-window/react-virtuoso) renders only the visible messages — the DOM stays small no matter how long the history. react-virtuoso handles the chat-specific hard parts (variable heights, stick-to-bottom, prepend-on-scroll-up).
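A minimal sketch of the windowing math such libraries do internally (fixed row height for simplicity; real chat needs variable heights, which react-virtuoso handles by measuring):

```javascript
// Given the scroll position, compute which rows need DOM nodes.
// `overscan` renders a few extra rows above/below to hide gaps
// during fast scrolling.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { first, last }; // only rows in [first, last] get rendered
}

// 10,000 messages, 600px viewport, 40px rows, scrolled to row 5000:
const r = visibleRange(5000 * 40, 600, 40, 10000);
// → about 22 rows rendered instead of 10,000
```

The DOM cost is now a function of viewport size, not history length.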
3. Keep work off the main thread / critical path
- Heavy per-message work — parsing markdown, syntax highlighting, link previews — done eagerly on the main thread blocks rendering. Move it to a Web Worker, or do it lazily/incrementally (only for visible messages).
- `startTransition` — mark non-urgent updates (rendering the big list) as transitions so React can keep the input responsive and interrupt rendering.
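A sketch of chunked processing — `renderMarkdown` is a stand-in for real parsing/highlighting, and the same function body could run inside a Web Worker behind postMessage:

```javascript
// Placeholder transform; a real app would use a markdown parser.
function renderMarkdown(text) {
  return text.replace(/\*\*(.+?)\*\*/g, "<b>$1</b>");
}

// Process messages in slices so no single task hogs the main thread.
function processInChunks(messages, chunkSize, onChunk) {
  let i = 0;
  function step() {
    const chunk = messages.slice(i, i + chunkSize).map((m) => ({
      ...m,
      html: renderMarkdown(m.text),
    }));
    onChunk(chunk);
    i += chunkSize;
    if (i < messages.length) {
      // Yield to the event loop between chunks so input stays responsive;
      // inside a Worker you would just loop and postMessage per chunk.
      setTimeout(step, 0);
    }
  }
  step();
}
```

Lazy variant: only run this for messages the virtualizer says are visible.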
4. Render efficiently
- Stable keys — a message id, never the index — so React reconciles minimally and doesn't re-render the whole list.
- `React.memo` the message row — so adding messages at the bottom doesn't re-render existing rows.
- Immutable appends with a functional updater.
5. UX details
- Scroll management — auto-scroll to the newest only if the user is already at the bottom; otherwise show a "N new messages" pill.
- Avoid layout thrash from measuring on every message.
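The "stick to bottom only if the user is already there" check can be sketched as a pure predicate (`threshold` gives a small tolerance for sub-pixel scroll positions):

```javascript
// True if the viewport is within `threshold` px of the bottom,
// i.e. it's safe to auto-scroll on a new message.
function shouldAutoScroll(scrollTop, clientHeight, scrollHeight, threshold = 40) {
  return scrollHeight - (scrollTop + clientHeight) <= threshold;
}

// At the bottom: follow new messages.
shouldAutoScroll(1400, 600, 2000); // → true
// Scrolled up reading history: don't yank the view; show a "N new" pill.
shouldAutoScroll(200, 600, 2000); // → false
```

Evaluate this before appending the batch, since appending changes `scrollHeight`.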
The framing
"It's a render-thrash problem — one setState per message floods the main thread. So I buffer incoming messages and flush them on an interval or rAF, collapsing a burst into a single render — React 18 auto-batches within a tick but not across separate socket events. Then I virtualize the list so the DOM stays small regardless of history length, memoize message rows with stable id keys so existing rows don't re-render, and push heavy per-message work — markdown, highlighting — into a Web Worker or do it lazily. startTransition keeps the input responsive while the list renders."
Follow-up questions
- How does React 18's automatic batching help, and where does it fall short?
- Why is virtualization essential for a long chat?
- When would you move work to a Web Worker here?
- How do you handle auto-scrolling without fighting the user?
Common mistakes
- One setState per incoming message — a render per message.
- Rendering the entire message history with no virtualization.
- Index keys, so the whole list re-renders on every append.
- Doing markdown/highlighting synchronously on the main thread.
- Always force-scrolling to the bottom, hijacking the user's scroll.
Performance considerations
- Batching cuts render count; virtualization caps DOM size; memoized rows avoid re-rendering unchanged messages; Workers and startTransition keep the main thread free for input. Together they keep the app at 60fps under message bursts.
Edge cases
- A sudden burst of hundreds of messages.
- User scrolled up reading history when new messages arrive.
- Variable-height messages breaking naive virtualization.
- Messages arriving out of order.
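For the out-of-order case, a sketch of inserting by server timestamp instead of blindly appending — this assumes each message carries a numeric `ts` field:

```javascript
// Insert `msg` into a ts-sorted array, returning a NEW array so React
// sees an immutable update. Binary search keeps it O(log n) + copy.
function insertByTimestamp(messages, msg) {
  let lo = 0, hi = messages.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (messages[mid].ts <= msg.ts) lo = mid + 1;
    else hi = mid;
  }
  return [...messages.slice(0, lo), msg, ...messages.slice(lo)];
}

const history = [{ ts: 1 }, { ts: 3 }];
const next = insertByTimestamp(history, { ts: 2 }); // lands between 1 and 3
```

In the common case (message newer than everything) this degenerates to an append, so the happy path stays cheap.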
Real-world examples
- Slack/Discord/WhatsApp Web rendering high-volume channels smoothly.
- Live-event chat handling thousands of messages per minute.