Starting with a new programming language can be a daunting task. In the case of Node.js, it can be difficult even if you come from a front-end background and use JavaScript.
Getting started in Node.js involves learning the entire npm ecosystem, getting used to the quirks of JavaScript, and getting to know and love asynchronous logic. These all take time when you’re new and can drive you one step closer to insanity if you’re not careful.
In this article, I will throw some “newcomer” tips your way to alleviate some of the pains of Node.js.
Let’s start off with a simple, yet very sought-after tip: how to serialize a JavaScript object in Node.js (in other words, how to turn an object into something you can send over the wire to another process).
Basically, serializing means turning an entity into something you can transfer. This mainly applies to objects, since they can be quite difficult to move between services: objects can have methods, inherited behavior, and links to other complex objects, to name just a few of the major problems.
Lucky for us, JSON objects get rid of most of the difficulties I just mentioned, because they’re a special kind of object: they can only hold attributes (no methods), and their values are limited to strings, numbers, booleans, arrays, other plain objects, and null.
It’s also important to note that JSON is just a standard; it’s not exclusive to JavaScript. You may have had to deal with it in something like C# (even though that language is very different from JavaScript). There are libraries for working with JSON in pretty much every major language out there, but the main difference is that in Node.js (and hence, JavaScript), you don’t have to translate a JSON document into a “proper structure” inside your language. You can simply load it and start manipulating it. This is one of my favorite things about using JSON files in Node.js.
Let’s look now at what options we have for serialization of these entities within Node.js.
Out of the box, Node.js gives you access to the global JSON object. With it, you can easily parse and serialize any JSON you might need.
Essentially, the stringify method turns your object (and since you’re in JavaScript, almost any object can be serialized this way) into its string representation.
There is, however, a caveat: stringify will ignore some properties, since you’re trying to transform your complex objects into a language-agnostic format (I know JSON stands for JavaScript Object Notation, but then again, it’s meant to work with any language that wants to support it, so there are limitations to what you can serialize).
Specifically, the stringify method will ignore:
- Properties whose value is a function (i.e., your object’s methods)
- Properties whose value is undefined
- Properties keyed by (or containing) a Symbol
Here is a quick example of how to use this method in your code; notice how we don’t need to require anything special in order to use it.
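A minimal sketch along those lines, with the speak method and the undefined address property as illustrative choices:

const obj = {
  name: "Fernando",
  age: 35,
  speak: function() {
    return "Hello world!"
  },
  address: undefined // ignored by stringify, just like the method above
}

console.log(obj.speak(), JSON.stringify(obj), typeof JSON.stringify(obj))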
By executing the above code, you’ll get the following output:
Hello world! {"name":"Fernando","age":35} string
In other words, as I mentioned, two properties are being ignored due to their content, and I added the final value to show you that the actual type of the serialized version is a string, even though it doesn’t look like one when written out to stdout (standard output / your terminal window).
If you come from another, more object-oriented language, such as Java or maybe C# (just to name two examples), you’ll be missing the toString method right about now. In those languages, that method gets called every time you try to serialize an object, and it allows you to customize the string resulting from that operation.
In the case of JavaScript, when you’re using the stringify method, there is a special toJSON method you can define to customize the object’s JSON representation. Just remember that if you define that method, it has to return something; otherwise, the output of serializing your object will be undefined.
Let’s look at another example.
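The following sketch is consistent with the output shown below: the first object’s toJSON logs a message but returns nothing, while the second one returns a handcrafted string (both objects are illustrative):

const obj1 = {
  name: "Fernando",
  age: 35,
  speak: function() {
    return "Hello world!"
  },
  // this toJSON returns nothing, so stringify will yield undefined
  toJSON: function() {
    console.log("toJSON called")
  }
}

const obj2 = {
  name: "Fernando",
  age: 35,
  // this one returns an actual value for stringify to work with
  toJSON: function() {
    return '{ "name": "' + this.name + '", "age": ' + this.age + ' }'
  }
}

// stringify runs twice on obj1 here, so "toJSON called" prints twice
console.log(obj1.speak(), JSON.stringify(obj1), typeof JSON.stringify(obj1))
console.log("-------------------------")
console.log(JSON.stringify(obj2), typeof JSON.stringify(obj2))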
Now when you execute this code, you get the following output:
toJSON called
toJSON called
Hello world! undefined undefined
-------------------------
"{ \"name\": \"Fernando\", \"age\": 35 }" string
Notice the two undefined values; those come from the first object, whose toJSON method was defined but doesn’t return anything. The last line, though, represents the output you would expect from a serialization process. One final thing to highlight: this time around, we were the ones who had to manually leave out the methods and undefined properties. If we wanted to include them, we would need to find a correct and valid mapping for them within the JSON standard.
For your usual JSON serialization needs, the stringify method should be more than enough. There are, however, some uncommon cases where that’s not true. Two particular scenarios come to mind: wanting to serialize methods safely enough that you can deserialize them and use them at the destination, and dealing with a huge amount of data inside your JSONs (I’m talking about gigabyte-sized JSONs), where the good old stringify method won’t really work.
You might have other edge cases where the above two solutions won’t work; it’s just a matter of either adding the right logic to your process or finding the right module for it (if the problem is common enough, chances are there is already an npm module that takes care of it).
If you’re looking to achieve method serialization, you might want to take a look at node-serialize, which allows you to do this easily. Note, however, that sending code over the wire to be executed at the destination is a big security risk, since an attacker could provide a self-executing function and trigger unwanted execution of malicious code.
Let me show you how to use this module to serialize and execute a method.
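A sketch of that, using the module’s serialize and unserialize functions (the say method is illustrative):

const serialize = require("node-serialize")

const obj = {
  name: "Bob",
  say: function() {
    return "hi " + this.name
  }
}

const objS = serialize.serialize(obj)

console.log(typeof objS === "string") // the serialized version is a plain string
console.log(objS) // the method is encoded with a special function marker
console.log(serialize.unserialize(objS).say() === "hi " + obj.name) // the method works again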
And the output of this should be:
true
{"name":"Bob","say":"_$$ND_FUNC$$_function() {\n return 'hi ' + this.name;\n }"}
true
The three lines written by the script tell us three things:
- That the serialized version of the object is, indeed, a simple string
- How the say method gets encoded inside that string, prefixed by a special function marker
- That once deserialized, the say method works just as expected
Finally, if instead what you’re dealing with is a really big JSON, something you can’t simply parse or serialize with the JSON object, then you might want to look into the JSONStream module.
With this one, you can use streams to handle the serialization process, meaning you can open a stream and gradually write items to it. So instead of turning your gigabyte-sized in-memory structure into a huge string (which will probably require too much memory and crash your script), you can write into a file (in string format, of course) item by item.
Here is a basic example of how to use this library and the streams mechanics.
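The following sketch writes a list of books into a file item by item, then streams it back; the books list and file name are illustrative:

const fs = require("fs")
const JSONStream = require("JSONStream")

const books = [
  { name: "The Philosopher's Stone", year: 1997 },
  { name: "The Chamber of Secrets", year: 1998 },
  { name: "The Prisoner of Azkaban", year: 1999 },
  { name: "The Goblet of Fire", year: 2000 },
  { name: "The Order of the Phoenix", year: 2003 },
  { name: "The Half-Blood Prince", year: 2005 },
  { name: "The Deathly Hallows", year: 2007 }
]

// set up the serialization stream and pipe it into a file
const transformStream = JSONStream.stringify()
const outputStream = fs.createWriteStream(__dirname + "/books.json")

transformStream.pipe(outputStream)

// the actual writing, item by item
books.forEach( transformStream.write );
transformStream.end()

outputStream.on("finish", function() {
  console.log("JSONStream serialization complete!")

  // now read the file back, getting one 'data' event per stored record
  fs.createReadStream(__dirname + "/books.json")
    .pipe(JSONStream.parse("*"))
    .on("data", record => console.log("Record (event):", record))
    .on("end", () => console.log("JSONStream parsing complete!"))
})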
The actual writing is done just in one line (books.forEach( transformStream.write );), the rest is just stream setup and event configuration.
The output from the script is as follows:
JSONStream serialization complete!
Record (event): { name: "The Philosopher's Stone", year: 1997 }
Record (event): { name: 'The Chamber of Secrets', year: 1998 }
Record (event): { name: 'The Prisoner of Azkaban', year: 1999 }
Record (event): { name: 'The Goblet of Fire', year: 2000 }
Record (event): { name: 'The Order of the Phoenix', year: 2003 }
Record (event): { name: 'The Half-Blood Prince', year: 2005 }
Record (event): { name: 'The Deathly Hallows', year: 2007 }
JSONStream parsing complete!
Eventually, the way you handle these tasks is up to you; these modules simply wrap the native tools provided by Node.js, which you could use yourself if you wanted to avoid having a dependency on third-party libraries.
Node.js tends to be thought of as a platform specifically for developing microservices, due to the myriad of benefits it provides. But it’s important to note that every time you execute your code, you’re just running a script from your terminal. Well, either you or the automated process you set up to do so, but in either case, whatever triggers the execution will eventually run a command like this:
$ node yourscript.js
It’s that simple, and when that happens, your script is capable of receiving parameters, just like any other command-line tool (heck, just like the node command, which receives your script’s filename as a parameter).
This is relevant not just when you’re developing a command-line tool; you could also be accepting command-line parameters on the main script that boots up a set of microservices, or simply on your main API file. Anything you do in Node can benefit from this: you could receive configuration overrides, or even enable different behavior depending on the attributes you receive.
And the best part is that reading these parameters is quite simple. Here is a quick sample that illustrates it.
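A sketch of that script, saved as cliparams.js to match the examples below; it simply iterates over the global process.argv array:

process.argv.forEach( (val, index) => {
  console.log(index + ": " + val)
})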
That’s it! Just copy that into your test file and execute it. Here is an example:
$ node cliparams.js test test2 test 3
And its output:
0: /path/to/node.js/bin/node
1: /path/to/your/script/cliparams.js
2: test
3: test2
4: test
5: 3
Notice how we meant to pass three parameters to our script, yet the list shows six entries. This is because the first entry is the interpreter being executed (in this case, my node interpreter), the second one is the full path to the script being executed, and from there onwards, you’ll see the actual parameters you passed. There are four of those instead of three because the space in test 3 split it in two (more on that in a second).
This is the standard behavior, so you could add one more line in order to normalize the list and remove the (usually) unnecessary entries.
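One way to do it is a small tweak to the previous sketch, slicing the interpreter and script path off the list before iterating:

process.argv.slice(2).forEach( (val, index) => {
  console.log((index + 1) + ": " + val)
})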
With the following output:
1: test
2: test2
3: test
4: 3
Also note that, by default, the space character is the delimiter used to determine where a parameter starts and where it ends. If we want our values to contain spaces, we simply surround them with double quotes, like this:
$ node cliparams.js “test test2 test 3”
Now, the output from the same script will be:
0: /path/to/your/bin/node
1: /path/to/your/script/cliparams.js
2: test test2 test 3
As you can see, catching CLI parameters on your scripts is incredibly easy and a very powerful tool to add to your tool belt.
This one is a quick one, yet very interesting and useful. Usually, scripting languages provide developers with a way to capture the path of the currently executing script. It can come in handy when dealing with relative paths, since conditions might not be the same in your development environment as in production, and that could cause real problems. So instead, you want to start from a full, valid path: that way, you know exactly where you’re standing, and from there you can move to wherever you need to go.
There are two variants for getting this information: you either get the full path up to, and including, the filename of the script, or just the path up to the folder where the file resides, without the filename.
You get this information through two global variables called __dirname and __filename: the first one contains the path up to the folder, and the second one, as you might’ve guessed, also contains the actual filename of the script using it.
These are just global variables, so in order to use them, you simply reference them, like this:
console.log(__dirname)
console.log(__filename)
Note that these variables can be modified by you, so make sure you don’t modify them; otherwise, you’ll lose the reference. Also, these variables are not accessible in Node’s REPL, so you won’t be able to verify this there.
This is something I’ve had to look up several times in the past, because I tend to use objects in JavaScript as maps (this comes from before we had access to actual Maps in the language). It’s a bit frustrating to be able to do something like this:
yourArray.forEach( iteratorFn )
But not, something like this:
yourMap.forEach( iteratorFn )
Or even:
yourMap.keys().forEach( iteratorFn )
And that’s because the variable yourMap actually contains a plain object, and that’s it. So, playing devil’s advocate for a bit here, it makes sense that there are no such methods on it.
There are, however, a couple of quick workarounds for this: the global object so eloquently called Object gives us access to the keys method, which does exactly what we want by returning a list of our object’s keys. And we can also use the for...in variation of the for loop, which iterates over the enumerable properties of an object.
Here is a quick code sample to show what I mean.
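In this sketch (my own reconstruction), the object gets its test method through class syntax; class methods live on the prototype and are non-enumerable, which is what keeps them out of both iterations:

class MyMap {
  constructor() {
    this.a = 1
    this.b = 2
    this.c = 3
  }

  // class methods are non-enumerable, so neither option below will list them
  test() {
    return 0
  }
}

const myMap = new MyMap()

for (const key in myMap) {
  console.log(key)
}

console.log(Object.keys(myMap))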
With the following output:
a
b
c
[ 'a', 'b', 'c' ]
Notice how both options ignored the method name. But that changes if we define our object as a simple object literal.
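Something like this, where test is now an own, enumerable property of the object:

const myMap = {
  a: 1,
  b: 2,
  c: 3,
  test: function() {
    return 0
  }
}

for (const key in myMap) {
  console.log(key)
}

console.log(Object.keys(myMap))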
The output this time around is:
a
b
c
test
[ 'a', 'b', 'c', 'test' ]
This time around, the method was returned, and that may or may not be what you’re looking for. So make sure you check the type of each property’s content before using it. By that I mean doing something like this:
for (const m in myMap) {
  console.log(typeof myMap[m]) // should print number, number, number, function
}
Given that with Node.js you usually build your own web server instead of using an already-built one (as you would with PHP or Java, for example), there might be some restrictions when it comes to deploying your web applications onto a remote server, especially in a production environment.
Specifically, a web server needs to listen on a specific port in order to receive standard web traffic, such as 80 for normal HTTP traffic or 443 for secure traffic (i.e., HTTPS). The problem? You can’t simply start a program that listens on one of these ports if your user doesn’t have enough permissions.
Here is a quick example of what I mean: the following code will error out if you try to run it without enough privileges (usually, unless you’re root or an administrator on your system, you won’t be able to).
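A minimal server along those lines (the response text is just a placeholder):

const http = require("http")

const server = http.createServer((req, res) => {
  res.end("Hello world!")
})

// binding to a port below 1024 requires elevated privileges on most systems
server.listen(80, () => {
  console.log("Listening on port 80...")
})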
And here is the error I get on my Linux box (Windows might throw a slightly different error, but the gist of it should be the same).
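The full stack trace varies with your Node version, but the key line is the EACCES error, something like this:

Error: listen EACCES 0.0.0.0:80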
In order to work around this problem, you’ll usually want to set up another web server or, more specifically, a reverse proxy that will handle the incoming traffic and redirect it internally to your application’s port. In practice, you’ll still be listening on a non-standard port, but the outside world will never know about it.
Nginx specifically is a great option, whether as a web server or simply as a reverse proxy, due to its use of async I/O to handle requests. This allows it to scale up to tens of thousands of requests without an increase in resource consumption (unlike others, such as the Apache web server, which spawns a new process for every new request).
For this particular article, I won’t cover how to install Nginx itself, if you’re looking into that as well, you may want to check out other articles and then come back here.
As for the actual configuration, you simply need to edit the config file at /etc/nginx/conf.d/sysmon.conf and add the following code:
server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:5000;
    }
}
After that, you simply restart Nginx and you’re done. What the above configuration does, essentially, is make sure the web server listens on port 80 for requests aimed at www.example.com, and redirects whatever it receives to the address you configure in the proxy_pass attribute, which should be the IP and port where your Node.js web server is listening.
A quick tip: if you want to test this with a fake URL (such as www.example.com), you can add the following line to your /etc/hosts file:
127.0.0.1 www.example.com
While that line remains in the file, you’ll always be redirected to your localhost when using that URL.
When dealing with asynchronous code, you might be tempted to look for some external libraries to help you ease the pain of having to track its behavior. And if your code relies heavily on them, no one can blame you. But if you’re just adding a few calls here and there, adding a whole library and its dependencies for just a few lines of code might be considered overkill.
One particular case is dealing with a set of asynchronous calls that need to be executed in a loop. How can you gather the output of all those calls and ensure the correct execution of your code with a simple for loop? That construct wasn’t meant to handle asynchronous calls (which is ironic, if you think about it, considering how Node.js’ main selling point was, in fact, its support for async I/O).
It’s actually not that hard, really; you just have to look past the syntactic sugar added by libraries such as Async.js and consider how asynchronous calls work.
Essentially, what you have to do is build a function that acts as a for loop: it receives the list of calls to make and a single callback to execute once everything is done (or once one of the calls errors out).
For example, the following code would take care of that.
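Here is one possible sketch of such a helper. The asyncLoop name matches the usage below, and each item in the list is assumed to be a function that takes a standard Node-style callback:

function asyncLoop(asyncFns, done) {
  const results = []
  let index = 0

  function next() {
    if (index === asyncFns.length) {
      return done(null, results) // every call finished correctly
    }

    asyncFns[index++]((err, result) => {
      if (err) return done(err) // the first error ends the loop early
      results.push(result)
      next()
    })
  }

  next()
}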
The moment the first asynchronous call returns an error, our asyncLoop function will do the same. Otherwise, it’ll gather all results until all calls have finished. Once that happens, we call the final callback to continue with the logical flow.
You could use the above code as follows.
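For instance, something along these lines, using the request module to perform the HTTP calls (the URLs are just placeholders):

const request = require("request")

const urls = [
  "http://www.example.com",
  "http://www.example.org",
  "http://www.example.net"
]

// wrap each call in a function that receives a single Node-style callback
const calls = urls.map( url => done => {
  request.get(url, (err, response, body) => {
    done(err, body)
  })
})

asyncLoop(calls, (err, results) => {
  if (err) return console.error("One of the calls failed:", err)
  // results holds the response bodies, in the same order as the URLs
  console.log(results.length + " calls performed correctly")
})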
Basically, you’re looping through three different HTTP calls and gathering their results, without having to use any external libraries for that (other than request, to simplify the requesting code).
The last tip I want to cover is a simple one, yet it can be very handy, especially when debugging or logging error information into your log files.
The console object is probably one of the most used objects in Node.js (if not the most used), since it’s so easy and useful. But those of us just starting to play around with the language tend to only go with the log method, which is fine, but there is so much more to this object that rarely gets used. Let me explain.
Usually, your terminal has two different streams you can write into. You, as a user, will see both written to your screen, but with the right command-line magic, you can redirect either one to wherever you want. But how do you choose which one to write to?
The log method writes into stdout, and the error method is what you would use to write into stderr (or standard error if you will).
console.error("Test error")
console.log("Test standard out")
That code, if executed, will simply print both strings on your screen, with no visible difference between them. But if you execute the script like this:
$ node script.js 1> out.log 2> err.log
Now, that’s a different story: you’re redirecting the output of each stream into a different file, out.log for stdout and err.log for stderr.
Another useful thing to do when logging is to print the stack trace; that gives you an idea of what was happening when the error occurred. In other languages, doing this is pretty straightforward. It’s also straightforward in Node.js, only not everyone is aware of it.
function triggerTrace() {
  console.trace("Error log")
}
triggerTrace()
By executing this code, you would get something like the following as the output:
Trace: Error log
    at triggerTrace (/path/to/your/project/node-tips/console-tips.js:7:10)
    at Object.<anonymous> (/path/to/your/project/node-tips/console-tips.js:10:1)
    at Module._compile (internal/modules/cjs/loader.js:702:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:713:10)
    at Module.load (internal/modules/cjs/loader.js:612:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:551:12)
    at Function.Module._load (internal/modules/cjs/loader.js:543:3)
    at Function.Module.runMain (internal/modules/cjs/loader.js:744:10)
    at startup (internal/bootstrap/node.js:240:19)
    at bootstrapNodeJSCore (internal/bootstrap/node.js:564:3)
Notice how you’re getting the function name where the trace was triggered, as well as line numbers and file names. You wouldn’t be getting this with a simple console.log.
This is one that’s very useful when profiling your own code. If you wanted to understand how long a function call takes (or any piece of code, to be honest), you would usually do something like the following.
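A sketch of the manual approach, measuring elapsed time by hand with Date objects:

const start = (new Date()).getTime()

setTimeout(() => {
  const end = (new Date()).getTime()
  console.log(end - start) // prints the elapsed milliseconds
}, 1000)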
With that, you’ll see something like 1002 printed out (which, by the way, is also proof that setTimeout doesn’t execute the code exactly when the timeout expires, but as soon as possible after that).
Now, instead of doing that, you can also use the console object to create a timer without you having to worry about variables, subtractions or any other extra code you might want/need to add.
Just like this.
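A sketch using console.time and console.timeEnd with a named timer (the label can be anything, as long as both calls share it):

console.time("timer")

setTimeout(() => {
  console.timeEnd("timer") // prints the label followed by the elapsed time
}, 1000)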
You can see how we’re doing the same, but with less (and cleaner) code. In fact, you can use other names and keep several timers working at the same time.
As an added bonus, the output, in this case, is better formatted:
timer: 1002.814ms
That’s it for this set of tips for newcomers to Node.js. I hope they’ve been useful and that, even if you’re not new to the tech stack, you were able to pick up something as well.
Leave your comments below if you have any tips I’ve missed and would like to share or feel free to expand on the tips I have added.
Until the next one!