Queueing is an important technique that Node.js uses to handle asynchronous operations effectively.
In this article, we’ll take a deep dive into queues in Node.js: what they are, how they work (with the event loop), and their various types.
A queue is a data structure used in Node.js to appropriately organize asynchronous operations. These operations exist in different forms, including HTTP requests, read or write file operations, streams, and more.
Handling asynchronous operations in Node.js can be challenging.
There can be unpredictable delays (or, at worst, no results at all) during HTTP requests, depending on network strength. There may also be delays while trying to read or write a file with Node.js, depending on the size of the file.
As with timers and many other operations, the time an asynchronous operation takes to complete can be indefinite.
With these different durations, Node.js needs to be able to handle all these operations effectively.
Node.js cannot handle operations based on first-start-first-handle or first-finish-first-handle.
One reason why this may not be a good choice is because an asynchronous operation might contain another asynchronous operation.
Waiting on the first asynchronous operation would mean that its inner asynchronous operation must also complete before any of the other asynchronous operations in the queue can be considered.
There are many scenarios to consider, so the best option is to have a rule. This rule influences how the event loop and queues work in Node.js.
Let’s briefly look at how Node.js handles asynchronous operations.
The call stack keeps track of the function currently being executed and where it is run from. A function is added to the call stack when it is about to be executed.
This helps JavaScript retrace its steps after executing a function.
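As a minimal sketch (the function names here are only for illustration):

```js
function greet() {
  // greet() is pushed onto the call stack when it is about to execute
  sayHello(); // sayHello() is then pushed on top of greet()
}

function sayHello() {
  console.log('hello'); // console.log runs and is popped off the stack
}

greet();
// after sayHello() returns, JavaScript retraces its steps back to greet(),
// finishes it, and pops it off the call stack
```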
The callback queues are queues that hold the callback functions of asynchronous operations after those operations have been completed in the background.
They work in a first-in-first-out (FIFO) manner. We’ll look at different types of callback queues later in this article.
Note that Node.js is responsible for carrying out every asynchronous activity, because JavaScript, being single-threaded, would otherwise block the thread.
It is also responsible for adding functions to the callback queues after completing the background operations. JavaScript has nothing to do with the callback queue.
Meanwhile, the event loop continually checks whether the call stack is empty so it can pick up a function from a callback queue and add it to the call stack. The event loop only checks the queues after all synchronous operations have been executed.
So, what order does the event loop follow to select callback functions from the queues?
First, let’s look at the five main types of callback queues.
The first is the IO queue. IO operations are operations that interact with the computer's internals and external systems, such as the disk and the network. Common examples include read and write file operations, network requests, and so on.
These operations are asynchronous because they are left to Node.js to handle; JavaScript itself does not have access to the computer's internals.
When such operations are to be carried out, JavaScript transfers them to Node.js to handle in the background.
Upon completion, their callback functions are added to the IO queue, from which the event loop transfers them to the call stack for execution.
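For instance, here is a minimal sketch (the file path is only for illustration):

```js
const fs = require('fs');

// JavaScript hands this operation over to Node.js; once the file has been
// read in the background, the callback below is added to the IO queue
fs.readFile('./file.json', function (err, data) {
  if (err) throw err;
  console.log('readFile completed');
});

// this synchronous log runs first, before the callback above
console.log('reading file...');
```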
Next is the timer queue. Every operation involving a Node.js timer (like `setTimeout()` and `setInterval()`) is added to this queue.
Note that JavaScript does not have a timer feature of its own. It uses the timer API (which includes `setTimeout`) provided by Node.js to perform time-related operations, which is why timer operations are asynchronous.
Whether the duration is two seconds or zero seconds, JavaScript hands the time-related operation over to Node.js, which completes it and adds its callback to the timer queue.
For example:
```js
setTimeout(function() {
  console.log('setTimeout');
}, 0)

console.log('yeah')

// result
// yeah
// setTimeout
```
JavaScript proceeds with other operations while the asynchronous operation is being processed. The event loop goes to the callback queues only when all synchronous operations have been handled.
The microtask queue is broken down into two queues:

- The first queue holds functions deferred with the `process.nextTick` function (see the sketch after this list). Each iteration performed by the event loop is called a tick, and `process.nextTick` is a function that schedules another function to run at the next tick, that is, the next iteration of the event loop. The microtask queue stores such functions so that they can be executed at the next tick, which means the event loop has to keep checking the microtask queue for these functions before proceeding to the other queues.
- The second queue holds functions created from `promises`.
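Here's a minimal sketch of `process.nextTick` at work; the logged strings are only for illustration:

```js
console.log('start');

process.nextTick(function () {
  // deferred to the next tick, ahead of any timer or IO callbacks
  console.log('nextTick callback');
});

setTimeout(function () {
  console.log('setTimeout callback');
}, 0);

console.log('end');

// result
// start
// end
// nextTick callback
// setTimeout callback
```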
In the IO and timer queues, as we've seen, everything concerning the asynchronous operation is handed over to Node.js to handle.
Promises are different. With promises, an initial variable is first stored in JavaScript memory (the `<pending>` value you may have seen). When the asynchronous operation completes, Node.js places the function attached to the promise in the microtask queue. At the same time, it updates the variable in JavaScript memory with the result, so that the function does not run with `<pending>`.
The following code explains how promises work:
```js
let prom = new Promise(function (resolve, reject) {
  // delay execution
  setTimeout(function () {
    return resolve("hello");
  }, 2000);
});

console.log(prom);
// Promise { <pending> }

prom.then(function (response) {
  console.log(response);
});

// after 2000ms,
// hello
```
One important feature to note about the microtask queue is that the event loop repeatedly checks and executes its functions before attending to any other queue.
For instance, once the microtask queue has been emptied and, say, a timer callback creates a promise, the event loop attends to that promise's callback before moving on to the other functions in the timer queue.
The microtask queue therefore takes the highest priority over all other queues.
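As a minimal sketch of this priority (the behavior shown is for recent Node.js versions, and the logged strings are only for illustration):

```js
setTimeout(function () {
  console.log('timer 1');
  // a promise created inside a timer callback
  Promise.resolve().then(function () {
    console.log('promise inside timer 1');
  });
}, 0);

setTimeout(function () {
  console.log('timer 2');
}, 0);

// result
// timer 1
// promise inside timer 1
// timer 2
```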
Next is the check queue. The callback functions in this queue are executed immediately after all the callback functions in the IO queue have been executed. `setImmediate` is the function used to add functions to this queue.
For example:
```js
const fs = require('fs');

setImmediate(function() {
  console.log('setImmediate');
})

// assume this operation takes 1ms
fs.readFile('path-to-file', function() {
  console.log('readFile')
})

// assume this operation takes 3ms
do { ... } while (...)
```
When this program is executed, Node.js adds the callback function of `setImmediate` to the check queue. Since the whole program has not been completed, the event loop does not check any of the queues.
The `readFile` operation is asynchronous, so it's handed over to Node.js, and the program continues execution.
The `do...while` operation lasts for 3ms. During this time, the `readFile` operation completes in the background and its callback is pushed to the IO queue. Once the loop completes, the event loop begins to check the queues.
Although the check queue was populated first, it is only considered after the IO queue is empty. Hence, `readFile` is logged to the console before `setImmediate`.
Finally, the close queue stores callback functions associated with close event operations, such as the `close` event emitted by a stream, socket, or server.
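As a minimal sketch (the file path is only for illustration), the callback attached to a readable stream's `close` event ends up in this queue:

```js
const fs = require('fs');

const stream = fs.createReadStream('./file.json');

// this callback is handled in the close queue once the stream emits 'close'
stream.on('close', function () {
  console.log('stream closed');
});

// closing the stream eventually emits the close event
stream.close();
```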
This is considered the least-prioritized queue because its operations happen at a later time.
You wouldn't want to execute a callback function in a `close` event before a promise's function is handled. What would that promise function do when the server is closed already?
The microtask queue is given top priority, followed by the timer queue, the IO queue, the check queue, and, lastly, the close queue.
Let’s look at a bigger example to illustrate the types and order of the queues:
```js
const fs = require("fs");

// assume this operation takes 2ms
fs.writeFile('./new-file.json', '...', function() {
  console.log('writeFile')
})

// assume this takes 10ms to complete
fs.readFile("./file.json", function(err, data) {
  console.log("readFile");
});

// don't assume, this actually takes 1ms
setTimeout(function() {
  console.log("setTimeout");
}, 1);

// assume this operation takes 3ms
while(...) { ... }

setImmediate(function() {
  console.log("setImmediate");
});

// promise that takes 4ms to resolve
let promise = new Promise(function (resolve, reject) {
  setTimeout(function () {
    return resolve("promise");
  }, 4);
});

promise.then(function(response) {
  console.log(response)
})

console.log("last line");
```
Here’s the program flow:
At 0ms, the program begins.
`fs.writeFile` takes 2ms in the background before Node.js adds the callback function to the IO queue.
`fs.readFile` takes 10ms in the background before Node.js adds the callback function to the IO queue.
`setTimeout` takes 1ms in the background before Node.js adds the callback function to the timer queue.
Now, the while operation (which is synchronous) takes 3ms. During this time, the thread is blocked (remember that JavaScript is single-threaded).
Also during this time, the `setTimeout` and `fs.writeFile` operations are completed, and their callback functions are added to the timer and IO queues, respectively.
Now the queues are:
```js
// queues
Timer = [
  function () { console.log("setTimeout"); },
];
IO = [
  function () { console.log("writeFile"); },
];
```
`setImmediate` adds its callback function to the check queue:
```js
// queues
Timer...
IO...
Check = [
  function() { console.log("setImmediate") }
]
```
The promise operation takes 4ms to resolve (in the background) before its callback is added to the microtask queue.
The last line is synchronous, so it is immediately executed:
```js
// results
"last line"
```
All synchronous activities are done, so the event loop starts checking the queues. It starts from the timer queue since the microtask queue is empty:
```js
// queues
Timer = [] // now empty
IO...
Check...

// results
"last line"
"setTimeout"
```
While the event loop continues executing the callback functions in the queues, the promise operation finishes and its callback is added to the microtask queue:
```js
// queues
Timer = [];
Microtask = [
  function (response) { console.log(response); },
];
IO = []; // now empty
Check = []; // now empty immediately after IO

// results
"last line"
"setTimeout"
"writeFile"
"setImmediate"
```
A few milliseconds later, the `readFile` operation finishes and its callback is added to the IO queue:
```js
// queues
Timer = [];
Microtask = []; // now empty
IO = [
  function () { console.log("readFile"); },
];
Check = [];

// results
"last line"
"setTimeout"
"writeFile"
"setImmediate"
"promise"
```
Finally, all callback functions are executed:
```js
// queues
Timer = []
Microtask = []
IO = [] // now empty again
Check = [];

// results
"last line"
"setTimeout"
"writeFile"
"setImmediate"
"promise"
"readFile"
```
Three things to note here:

- The event loop executed the callback function in the check queue (`setImmediate`) while an IO operation (`readFile`) was still running in the background. It did so because, at that point, the IO queue was empty. Remember that the check queue callbacks are run immediately after all the functions in the IO queue have been executed.
- JavaScript is single-threaded. Every asynchronous function is handled by Node.js working with the internal features of the computer.
- Node.js is responsible for adding callback functions (attached to an asynchronous operation by JavaScript) to the callback queues. The event loop determines which callback function will be executed next at every iteration.
Understanding how queues work in Node.js gives you a better understanding of the environment itself, since queues are one of its core features. Node.js is most popularly described as non-blocking, meaning asynchronous operations are handled properly, and this behavior is made possible by the event loop and the callback queues.