Implement limitConcurrency(tasks, limit) using a worker-pool pattern
`limitConcurrency(tasks, limit)` runs `limit` workers in parallel; each worker pulls the next task index until all tasks are done, and results come back in original order. Cleaner than queueing tasks one by one. Pattern: a shared index, `limit` async worker functions started in parallel, each looping until the index is past the end.
limitConcurrency is a clean variant of the [[implement-a-task-queue-with-controlled-concurrency-successerror-callbacks-custom]] problem: instead of a queue + counter, use N workers pulling from a shared index.
The implementation
```js
async function limitConcurrency(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;

  async function worker() {
    while (true) {
      const i = next++; // claim next index atomically (JS is single-threaded)
      if (i >= tasks.length) return;
      try {
        results[i] = await tasks[i]();
      } catch (err) {
        results[i] = { error: err }; // or rethrow, depending on policy
      }
    }
  }

  const workers = Array.from({ length: Math.min(limit, tasks.length) }, worker);
  await Promise.all(workers);
  return results;
}
```

Usage
```js
const urls = [/* 100 URLs */];
const results = await limitConcurrency(
  urls.map((u) => () => fetch(u).then((r) => r.json())),
  6
);
```

How it works
- `next` is a shared counter starting at 0.
- Start `limit` async workers in parallel.
- Each worker loops:
  - Claim the next index by reading-then-incrementing `next`.
  - If past the end, exit.
  - Otherwise execute the task at that index and store the result.
- `Promise.all` waits for all workers to finish.
- Results are in original order (each worker writes to its claimed index).
The "atomic claim" works because JS is single-threaded — `next++` can't interleave with another worker between the read and the increment. Async functions can only suspend at `await` points, and there is no `await` inside `next++`, so there are no race conditions to worry about.
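This can be checked with a small standalone sketch (a Node.js demo written for this note, not part of the implementation above): two workers share one counter, yield to each other at every `await`, and still never claim the same index.

```js
// Demo: concurrent workers claiming from a shared counter never collide,
// because the read-then-increment in `next++` has no await inside it.
let next = 0;
const claimed = [];

async function worker() {
  while (next < 5) {
    const i = next++; // synchronous claim: no suspension between read and increment
    claimed.push(i);
    await Promise.resolve(); // yield to the other worker before the next claim
  }
}

async function demo() {
  await Promise.all([worker(), worker()]);
  return claimed;
}

demo().then((c) => console.log(c.sort((a, b) => a - b).join(","))); // 0,1,2,3,4
```

Each of the five indices is claimed exactly once, even though the two workers interleave on every iteration.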
Why this is cleaner than queueing one-by-one
The [[implement-a-task-queue-with-controlled-concurrency-successerror-callbacks-custom]] approach uses a queue with `add(task)` semantics and re-dequeues after each task settles. `limitConcurrency` is better when you have a fixed batch of work upfront: no enqueue/dequeue ceremony, just N workers draining the shared index.
Variants
Fail-fast — bail on first error
```js
async function worker() {
  while (next < tasks.length) {
    const i = next++;
    results[i] = await tasks[i](); // throws propagate up
  }
}
```

A throw in any worker rejects `Promise.all`, and the whole function rejects.
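Putting the fail-fast worker into a full function shows the behavior end to end (the name `limitConcurrencyFailFast` is made up for this sketch):

```js
// Fail-fast sketch: no try/catch in the worker, so the first rejection
// rejects that worker's promise and therefore Promise.all.
async function limitConcurrencyFailFast(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i](); // throws propagate up
    }
  }
  const workers = Array.from({ length: Math.min(limit, tasks.length) }, worker);
  await Promise.all(workers);
  return results;
}

// Usage: one rejecting task rejects the whole call.
const demoTasks = [
  async () => 1,
  async () => { throw new Error("boom"); },
  async () => 3,
];
limitConcurrencyFailFast(demoTasks, 2)
  .catch((err) => console.log("rejected:", err.message)); // rejected: boom
```

Note that tasks already in flight keep running after the rejection; `Promise.all` just stops waiting for them.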
Settle all (don't fail-fast)
```js
results[i] = await tasks[i]().catch((err) => ({ error: err }));
```

Or use `Promise.allSettled` semantics — each result is `{ status, value | reason }`.
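One way to get the `allSettled` shape is to wrap each task before awaiting it (the `runSettled` helper name is invented for this sketch):

```js
// Wrap a task so it always fulfills with an allSettled-style record.
async function runSettled(task) {
  try {
    return { status: "fulfilled", value: await task() };
  } catch (reason) {
    return { status: "rejected", reason };
  }
}

// Inside the worker, replace the try/catch with:
//   results[i] = await runSettled(tasks[i]);

runSettled(async () => 42).then((r) => console.log(r.status, r.value)); // fulfilled 42
runSettled(async () => { throw new Error("nope"); })
  .then((r) => console.log(r.status, r.reason.message)); // rejected nope
```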
With cancellation
Accept an `AbortSignal`:

```js
async function worker() {
  while (next < tasks.length && !signal.aborted) {
    const i = next++;
    results[i] = await tasks[i](signal);
  }
}
```

Each task is responsible for honoring the signal (e.g., `fetch(url, { signal })`).
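Wired into the full function, it might look like this (the extra `signal` parameter and the `limitConcurrencyAbortable` name are assumptions for this sketch; the original `limitConcurrency` takes only `tasks` and `limit`):

```js
// Cancellation sketch: workers stop claiming new indices once the signal
// aborts; slots for unclaimed tasks are simply left undefined.
async function limitConcurrencyAbortable(tasks, limit, signal) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length && !signal.aborted) {
      const i = next++;
      results[i] = await tasks[i](signal); // task decides how to honor the signal
    }
  }
  const workers = Array.from({ length: Math.min(limit, tasks.length) }, worker);
  await Promise.all(workers);
  return results;
}

// Usage: the first task aborts, so the single worker claims nothing further.
const controller = new AbortController();
const demoTasks = Array.from({ length: 4 }, (_, n) => async () => {
  if (n === 0) controller.abort();
  return n;
});
limitConcurrencyAbortable(demoTasks, 1, controller.signal)
  .then((r) => console.log(r.length, r[0], r[1])); // 4 0 undefined
```

The results array keeps its full length after an abort, which lets the caller tell completed slots apart from skipped ones.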
Progress callback
```js
async function worker() {
  while (true) {
    const i = next++;
    if (i >= tasks.length) return;
    results[i] = await tasks[i]();
    onProgress?.(i, results[i]);
  }
}
```

Pitfalls
- Synchronous throws in the task function — wrap with `Promise.resolve().then(task)` if a task might throw before its first `await`.
- Tasks that never settle — without a timeout, a worker is stuck forever; add a per-task timeout if needed.
- Order preservation — works because each worker writes to its claimed index, not via `push`.
- Backpressure — if tasks are added dynamically rather than upfront, switch to the queue-based pattern.
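The "tasks that never settle" pitfall can be handled with a small wrapper (the `withTimeout` helper is a sketch written for this note, not an existing API):

```js
// Race each task against a timer so a hung task can't pin a worker forever.
// Note: the losing timer is not cleared here; fine for a sketch, but a
// production version should clearTimeout once the task settles.
function withTimeout(task, ms) {
  return () =>
    Promise.race([
      task(),
      new Promise((_, reject) =>
        setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms)
      ),
    ]);
}

// Usage: wrap tasks before handing them to limitConcurrency.
const slow = () => new Promise((resolve) => setTimeout(resolve, 1000, "done"));
withTimeout(slow, 50)().catch((err) => console.log(err.message)); // timed out after 50ms
```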
Interview framing
"Worker-pool pattern: a shared index counter, N async workers running in parallel, each looping to claim the next index and run that task. When the index is past the end, the worker exits. Promise.all waits for all workers; results are in original order because each worker writes to its claimed index. The claim is safe because JS is single-threaded — next++ doesn't interleave between awaits. This is cleaner than the enqueue/dequeue queue pattern when you have a fixed batch upfront. Variants: fail-fast vs settle-all, cancellation via AbortSignal, progress callbacks, per-task timeouts."
Follow-up questions
- Why is `next++` safe without locks?
- When would you prefer the queue pattern over the worker-pool pattern?
- How would you add cancellation?
- How would you preserve order when tasks finish at different times?
Common mistakes
- Using `for await` in a worker on a shared iterator — order surprises.
- Returning results in completion order instead of submission order.
- Not catching synchronous throws in tasks.
- Starting more workers than tasks (waste).
Performance considerations
- Bounded parallelism — important for I/O-bound work (HTTP, DB) where the server or network can't absorb unlimited concurrency.
Edge cases
- Empty tasks array.
- Limit larger than `tasks.length`.
- Tasks throwing synchronously.
- Tasks that hang forever.
Real-world examples
- `p-limit`, `p-map` (both use this pattern internally).
- Image upload concurrency limits, request fan-out caps.