Just to clarify, I don’t hate Node.js. I actually like Node.js and enjoy being a full-stack JavaScript developer. However, that does not mean I never get frustrated by it. Before I get into some of my frustrations with Node.js, let me say some of the things Node.js is awesome at:
However, there are some quirks about Node.js you should know:
I find myself putting console.log and debugger statements all over my code just to debug, which is not ideal.

The above pain points are not limited to Node.js by any means. However, in my experience with Node.js as of today, two prominent frustrations stand out that I think need to be clarified in more detail. Please also comment if you have felt similar or additional frustrations with Node.js and how you manage to cope with them.
Throwing errors in Node.js is not as straightforward as in other languages (and frameworks). A lot of code in Node.js is asynchronous, which requires you to pass errors along in your callbacks and promises instead of throwing exceptions or simply using try/catch blocks. Debugging the true nature of an error becomes much more difficult when you have to dig a few callbacks deep, or when you cannot figure out how an unhandled exception caused your app to fail silently; that is when you wish for a smoother error handling process.
Before diving into error handling, we need to define some basics.
Node.js is built on top of JavaScript, which is a single-threaded language. Function calls are placed on a call stack, and if any of them takes a long time to resolve, the whole thread is blocked while we wait for the result to come back. That is not ideal in scenarios like a web application in the browser: the user still wants to work with the app while we are waiting for some data to come back to us.
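Here is a minimal sketch of that blocking problem (my own illustration, not from the article; blockFor is a hypothetical helper):

```js
// a synchronous busy-wait: nothing else can run until it finishes
const blockFor = (ms) => {
  const end = Date.now() + ms;
  while (Date.now() < end) {} // keeps the single thread occupied
};

console.log("start");
blockFor(2000);                   // the whole program waits here for 2 seconds
console.log("finally unblocked"); // only printed after the busy-wait ends
```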
Here is where we get to the concept of asynchronous JavaScript, which helps us handle blocking code. Put simply, it is a mechanism to register a callback to be performed when your function call is resolved. There are a few options to handle this:
Function callback — for example, addEventListener, which takes a callback as its second parameter:

```js
function clickHandler() {
  alert('Button is clicked');
}

btn.addEventListener('click', clickHandler);
```
Promise — when calling a promise-based async function, you get an object representing the state of the operation. We don’t know when the promise will come back to us with either a result or an error, but we have a mechanism to handle either scenario. For example, calling node-fetch generates a promise object which we can handle with its methods:

```js
const fetch = require("node-fetch");

fetch("https://jsonplaceholder.typicode.com/todos/1")
  .then(res => res.json())
  .then(json => console.log(json))
  .catch(error => console.log("error", error));

// { userId: 1, id: 1, title: 'delectus aut autem', completed: false }
```
We have other options like async iterators and generators, or the new async/await feature in ES2017, which is just syntactic sugar on top of promises.
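As a quick illustration (not part of the original example; getTodo is a hypothetical wrapper), the node-fetch call from above could be rewritten with async/await, and it is still the same promise underneath:

```js
const fetch = require("node-fetch");

// async/await is syntactic sugar over the same promise chain
const getTodo = async () => {
  try {
    const res = await fetch("https://jsonplaceholder.typicode.com/todos/1");
    const json = await res.json();
    console.log(json);
  } catch (error) {
    console.log("error", error);
  }
};

getTodo();
```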
But for simplicity, we will stick with the two options above. Let’s see how error handling is maintained for both callbacks and promises.
Function callback — error handling with this approach is done using an error-first callback. When the async function comes back with a result, the callback gets called with an Error object as its first argument. If there is no error, this argument is set to null. Let’s look at an example:
```js
// setTimeout is faking an async call which returns an error after 0.5 seconds
const asyncFunction = (callback) => {
  setTimeout(() => {
    callback(new Error('I got an error'));
  }, 500);
};

// callback for our async function
const callbackFunction = (err, data) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log(data);
};

asyncFunction(callbackFunction);
```
When we call asyncFunction above, it reaches setTimeout first and cannot handle it synchronously. It therefore hands the timer off to the runtime (the Web API in browsers, the timer facilities provided by libuv in Node.js) and continues with the program. When the result comes back (which in this case is an Error object), the callback is called. Here come the frustrating parts.
We cannot use a try/catch in the context of asynchronous function calls to catch errors. So we cannot just throw an error in our error-first callback approach:
```js
const callbackFunction = (err, data) => {
  if (err) {
    throw err;
  }
  console.log(data);
};

try {
  asyncFunction(callbackFunction);
} catch (err) {
  // we are not catching the error here
  // and the Node.js process will crash
  console.error(err);
}
```
Forgetting to return in our callback function after handling the error will let the program continue and cause more errors, so we always have to remember this pattern:

```js
if (err) {
  console.error(err);
  return;
}
```

The main point here is that there are so many quirks to remember and handle that the code can easily get into a state that is hard to reason about and debug.
Promises are amazing at chaining multiple async functions together and help you avoid the callback hell that the previous method can cause. For error handling, promises use a .catch method in the chain to handle exceptions, as sketched below.
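Here is a minimal sketch (stepOne and stepTwo are made up for illustration) of how a single .catch at the end of a chain picks up an error thrown in any earlier step:

```js
// two hypothetical async steps chained together
const stepOne = () => Promise.resolve(1);
const stepTwo = (n) => {
  if (n !== 2) {
    // a rejection here skips the remaining .then handlers...
    return Promise.reject(new Error("stepTwo expected 2, got " + n));
  }
  return Promise.resolve(n * 10);
};

stepOne()
  .then(stepTwo)
  .then(result => console.log("result:", result))            // never runs in this sketch
  .catch(error => console.error("caught:", error.message));  // ...and lands here
```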
However, handling errors in promises still comes with some concerns:
You might forget to add .catch methods in your promise chain, which causes such an error to be categorized as an unhandled rejection. In that case, we need a mechanism in Node.js for dealing with promise rejections that were never handled. This happens when the unhandledRejection event is emitted in Node.js:

```js
const fetch = require("node-fetch");

const url = "https://wrongAPI.github.com/users/github";

const unhandledRejections = new Map();
process.on("unhandledRejection", (reason, promise) => {
  unhandledRejections.set(promise, reason);
  console.log("unhandledRejections", unhandledRejections);
});

const asyncFunction = () => fetch(url);

asyncFunction()
  .then(res => res.json())
  .then(json => console.log(json));
```
It is not straightforward how this needs to be handled in Node.js, but one common pattern is to add an immediate .catch method to the async task in higher-level components and re-throw the error there. This helps massively in tracing an error if it happens in any of their children, since we chain another .catch onto the instances that call the higher-level async task. Let’s see this with an example:
```js
const fetch = require("node-fetch");

const url = "https://wrongAPI.github.com/users/github";

// higher-level async task
const asyncFunction = () => {
  return fetch(url).catch(error => {
    // re-throwing the error
    throw new Error(error);
  });
};

// the error thrown in this instance 1 is much better traceable
// returns: instance 1 error: invalid json response body at https://wrongapi.github.com/users/github reason: Unexpected token < in JSON at position 0
(async () => {
  try {
    return await asyncFunction();
  } catch (error) {
    console.error("instance 1 error:", error.message);
  }
})();
```
There are several tools for package management in Node.js, like npm, Yarn, and pnpm, which help you install the tools, packages, and dependencies your application needs, making the process of software development faster and easier.
However, as is common in the JavaScript community, good and universal standards are defined less often than in other languages and frameworks. Just Googling “JavaScript standards” shows the lack of standards, as people tend not to agree on how to approach JavaScript, except in a few cases like the Mozilla JS reference, which is very solid. Therefore, it is easy to feel confused about which package manager to pick for your Node.js project.
Additionally, there are complaints about the low quality of packages in the Node.js community, which makes it harder for developers to decide whether they need to re-invent the wheel and build the tooling they need themselves, or whether they can trust maintained packages.
Finally, with JavaScript’s rapid changes, it is no surprise that many of the packages our applications depend on are changing as well. This requires smoother package version management in Node.js, which can sometimes be troublesome.
This by no means indicates that Node.js is any worse than other frameworks when it comes to packages and package management; it is merely a reflection of some frustrations that come with Node.js package managers. We will discuss a few of these frustrations, like the lack of standards, the quality of packages, and version management, in more detail, but first we need some background on how the most popular Node.js package managers handle dependencies.
Node.js package managers use a package.json document to manage your project dependencies and handle version management for them. They follow semver to handle the versioning of packages. With this approach, a package version looks like Major.Minor.Patch, for example 1.0.0. Let’s see an actual package.json and its list of dependencies and their versions in action:

```json
{
  "name": "app",
  "version": "1.0.0",
  "description": "Node.js example",
  "main": "src/index.js",
  "scripts": {
    "start": "nodemon src/index.js"
  },
  "dependencies": {
    "node-fetch": "~2.6.0"
  },
  "devDependencies": {
    "nodemon": "^1.18.4"
  }
}
```
This is already confusing as we get two different symbols in front of package versions. What do they mean?
~ or tilde shows the range of acceptable patch versions for a package. For example, ~2.6.0 means we will accept all future patch updates for node-fetch, from 2.6.0 up to (but not including) 2.7.0.
^ or caret shows the range of acceptable minor and patch versions for a package. For example, ^1.18.4 means we will accept all future minor and patch updates for nodemon, from 1.18.4 up to (but not including) 2.0.0. A quick way to check these ranges is sketched below.
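If the range semantics are hard to keep straight, the semver package (an assumption here; the article itself does not use it) can check them programmatically:

```js
const semver = require("semver");

// tilde: only patch updates are allowed
console.log(semver.satisfies("2.6.7", "~2.6.0")); // true
console.log(semver.satisfies("2.7.0", "~2.6.0")); // false

// caret: minor and patch updates are allowed
console.log(semver.satisfies("1.19.0", "^1.18.4")); // true
console.log(semver.satisfies("2.0.0", "^1.18.4"));  // false
```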
This already seems like a lot of hassle for such a simple task. Additionally, we need to consider the fact that a mistake in defining the correct range of dependency versions can break the app at some point. Fortunately, concepts like package-lock.json or yarn.lock exist to help avoid such mistakes by making dependency installs consistent across machines. Still, I wish there were more standard approaches to making sure severe problems do not happen due to a flawed version control and management system in Node.js.
These are some frustrations I have experienced with Node.js. But here are some things to remember:
Deploying a Node-based web app or website is the easy part. Making sure your Node instance continues to serve resources to your app is where things get tougher. If you’re interested in ensuring requests to the backend or third-party services are successful, try LogRocket.
LogRocket is like a DVR for web and mobile apps, recording literally everything that happens while a user interacts with your app. Instead of guessing why problems happen, you can aggregate and report on problematic network requests to quickly understand the root cause.
LogRocket instruments your app to record baseline performance timings such as page load time, time to first byte, slow network requests, and also logs Redux, NgRx, and Vuex actions/state. Start monitoring for free.