Queueing

To avoid overwhelming anyone, we might add a buffer between the clients and the dispatcher. This new worker is responsible for managing customer relations. Instead of speaking directly with the dispatcher, the client speaks to the services manager, handing over requests and, at some point in the future, getting a call that the task has been completed. Requests for work are added to a prioritized work queue (a stack of orders with the most important one on top), and the manager then waits for the next client to walk through the door.

The following figure describes this situation:

The dispatcher tries to keep all workers busy by pulling tasks from this queue, passing back any packages the workers have completed, and generally maintaining a sane work environment where nothing gets dropped or lost. Rather than proceeding task by task along a single timeline, multiple simultaneous jobs, each on its own timeline, run in parallel. If all the workers are idle and the task queue is empty, the office can sleep for a while, until the next client arrives.
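This office is only an analogy for machinery that Node and libuv implement internally, but a small sketch can make the moving parts concrete. Everything below (taskQueue, submit, dispatch, the worker count) is hypothetical and purely illustrative, not a Node API; it is just one way such a queue and dispatcher could be modeled in JavaScript:

```javascript
// A toy model of the office: clients leave orders with the manager, the
// dispatcher hands them to workers, and results come back via callbacks.
// All names here (taskQueue, submit, dispatch, WORKERS) are illustrative.
const taskQueue = [];   // prioritized orders, most important first
const WORKERS = 4;      // how many workers the office employs
let busy = 0;           // workers currently occupied

// The client never talks to a worker directly; it just leaves an order
// and is called back when the work is done.
function submit(task, priority, callback) {
  taskQueue.push({ task, priority, callback });
  taskQueue.sort((a, b) => b.priority - a.priority); // most important on top
  dispatch();
}

// The dispatcher keeps workers busy while there is work and spare capacity.
function dispatch() {
  while (busy < WORKERS && taskQueue.length > 0) {
    const { task, callback } = taskQueue.shift();
    busy += 1;
    // setImmediate defers the job to a later turn of the event loop, so the
    // current timeline is never blocked by queue management. (In Node itself,
    // genuinely blocking work is handed off to libuv's thread pool instead.)
    setImmediate(() => {
      const result = task();
      busy -= 1;
      callback(result); // "calling the client back"
      dispatch();       // look for the next order
    });
  }
  // When the queue is empty and every worker is idle, nothing is scheduled:
  // the office sleeps until the next client walks through the door.
}

// Example usage: leave two orders and hear back only through the callbacks.
submit(() => 'routine parcel', 1, (r) => console.log('delivered:', r));
submit(() => 'urgent parcel', 9, (r) => console.log('delivered:', r));
```

In real Node programs you rarely write this plumbing yourself; the runtime's event loop and libuv's queues play the roles of the manager and dispatcher for you.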

This is a rough schematic of how Node gains speed by working asynchronously rather than synchronously. Now, let's dive deeper into how Node's event loop works.
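Before we do, the contrast between the two modes is easy to see with Node's built-in fs module: readFileSync holds up the single timeline until the read completes, while readFile leaves the order and gets called back later (the file path here is just a placeholder):

```javascript
const fs = require('fs');

// Synchronous: the single timeline stops until the whole file has been read;
// nothing else can be served in the meantime.
const data = fs.readFileSync('./orders.txt'); // placeholder path
console.log('sync read finished:', data.length, 'bytes');

// Asynchronous: the request is handed off, the timeline moves on immediately,
// and the callback fires once the work is done.
fs.readFile('./orders.txt', (err, contents) => {
  if (err) throw err;
  console.log('async read finished:', contents.length, 'bytes');
});

console.log('this line runs before the async read completes');
```

The synchronous call is simpler to read, but while it runs the whole office stands still; the asynchronous call lets the dispatcher keep everyone else working in the meantime.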