Every release of Node.js comes with exciting new features, and v20 is no exception. Node.js v20 was released on April 18, 2023. This version ships with features that aim to make Node.js more secure and reduce third-party dependencies, including a stable built-in test runner. Node.js v20 also adds the ability to create single executable applications that run on Windows, macOS, and Linux without users having to install Node.js on their systems.
In this tutorial, we’ll explore some of the features available in Node.js v20. You will need Node.js v20 or higher installed on your machine and familiarity with creating and running Node.js programs to follow along.
One of the major features introduced in Node.js v20 is the experimental Permission Model, which aims to make Node.js more secure. For a long time, Node.js had no permission system in place: any application could interact with the file system or even spawn processes on a user's machine.
This opened the door to attacks in which third-party packages accessed a user's resources without consent. To mitigate these risks, the Permission Model restricts Node.js applications from accessing the file system, creating worker threads, and spawning child processes.
With the Permission Model enabled, users can run applications without worrying that malicious third-party packages can access confidential files, delete or encrypt files, or even run harmful programs. The Permission Model also allows the user to grant specific permissions to a Node.js app when launching it, and to check for those permissions at runtime.
Let’s look at how to use the Permission Model. Create a directory with a name of your choosing:
mkdir example_app
Create a package.json file:
npm init -y
Add "type": "module" to package.json to support ES modules:
{ ... "type": "module" }
Then, create a data.txt file with the following content:
Text content that will be read in a Node.js program.
Next, create an index.js file and add the following code to read the data.txt file:
import { readFile } from "fs/promises";

async function readFileContents(filename) {
  const content = await readFile(filename, "utf8");
  console.log(content);
}

readFileContents("./data.txt");
Here, we define a readFileContents function that accepts a filename and reads the file from the file system. In the function, we invoke the readFile() method of the fs module to read the data.txt file's contents and then log them to the console.
Now, run the file using the node command:
node index.js
We will see output in the console that looks like the following:
Text content that will be read in a Node.js program.
To enable the experimental Permission Model, run the file with the --experimental-permission flag:
node --experimental-permission index.js
This time we receive an error that looks like the following:
node:internal/modules/cjs/loader:179
  const result = internalModuleStat(filename);
                 ^

Error: Access to this API has been restricted
    at stat (node:internal/modules/cjs/loader:179:18)
    at Module._findPath (node:internal/modules/cjs/loader:651:16)
    at resolveMainPath (node:internal/modules/run_main:15:25)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:76:24)
    at node:internal/main/run_main_module:23:47 {
  code: 'ERR_ACCESS_DENIED',
  permission: 'FileSystemRead',
  resource: '/home/<your_username>/example_app/index.js'
}
The error message lets us know we don't have permission to read the file; the Permission Model has restricted file system access. We would also get an error if we tried to spawn worker threads or child processes.
To grant read access to a file or directory, we can use the --allow-fs-read flag. The following are some of the options we can use:

--allow-fs-read=*: The wildcard * provides read access to all directories and files on the file system
--allow-fs-read=/home/<username>/: Grants read access to the /home/<username> directory
--allow-fs-read=/tmp/filename.txt: Grants read access only to the given file, filename.txt in this case

We can also grant write access using the --allow-fs-write flag. It also accepts wildcards, directory paths, or filenames, as demonstrated above. As mentioned, the Permission Model also prevents Node.js programs from creating child processes. To grant that permission, we need to pass the --allow-child-process flag:
node --experimental-permission --allow-child-process index.js
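For context, here's a minimal sketch of the kind of script that needs this flag — the file name spawn_child.js and the command it runs are purely illustrative:

// spawn_child.js (illustrative sketch)
import { execSync } from "node:child_process";

// With the Permission Model enabled, spawning a child process requires
// --allow-child-process (plus read access to the script itself)
const output = execSync("ls -la").toString();
console.log(output);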
To allow worker threads to be created to execute tasks in parallel, we can use the --allow-worker flag instead:
node --experimental-permission --allow-worker index.js
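Similarly, a script that creates a worker thread — sketched below with an inline, eval'd worker purely for illustration — would be rejected unless --allow-worker is granted:

// worker_demo.js (illustrative sketch)
import { Worker } from "node:worker_threads";

// Creating a worker requires --allow-worker when the Permission Model is enabled
const worker = new Worker(
  `const { parentPort } = require("node:worker_threads");
   parentPort.postMessage("hello from the worker");`,
  { eval: true }
);

worker.on("message", (message) => console.log(message));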
So, back to our earlier example, let's grant Node.js permission to read data.txt. A more flexible way to do this is to provide the full path of the directory where data.txt resides, as follows:
node --experimental-permission --allow-fs-read=/home/<your_username>/example_app index.js
Now, the program can read the file without any issues, though it prints a warning:
(node:8506) ExperimentalWarning: Permission is an experimental feature
(Use `node --trace-warnings ...` to show where the warning was created)
Text content that will be read in a Node.js program.
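The flags can also be combined. For example, to grant the app both read and write access to its own directory (the path below is illustrative), we could run:

node --experimental-permission \
  --allow-fs-read=/home/<your_username>/example_app \
  --allow-fs-write=/home/<your_username>/example_app \
  index.js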
With the Permission Model enabled, we might not know whether the application has permission to read or write to the file system. To guard against runtime ERR_ACCESS_DENIED errors, the Permission Model allows us to check for permissions at runtime.
To check if we have read permissions, we can do it as follows:
if (process.permission.has('fs.read')) {
  // proceed to read the file
}
Or, we can check for permission on directories. In the following code, we check if we have write access to the given directory:
if (process.permission.has('fs.write', '/home/username/')) {
  // do your thing
}
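Putting this together with the earlier example, here is a minimal sketch — the file name guarded_read.js is hypothetical — that checks for read access before touching the file system and fails gracefully otherwise:

// guarded_read.js (illustrative sketch)
import { readFile } from "node:fs/promises";
import { resolve } from "node:path";

async function safeRead(filename) {
  const target = resolve(filename);

  // process.permission is only available when the Permission Model is enabled
  const allowed =
    !process.permission || process.permission.has("fs.read", target);

  if (!allowed) {
    console.error(`No read permission for ${target}`);
    return;
  }

  const content = await readFile(target, "utf8");
  console.log(content);
}

safeRead("./data.txt");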
With that, we are now ready to create secure applications and protect our machine’s resources from being accessed without our consent. To explore more features in the permission model, visit the documentation.
Before the release of Node.js v18, all test runners in Node.js were third-party packages, such as Jest and Mocha. While they have served the Node.js community well, third-party libraries can be unpredictable.
For one, Jest has a bug where it breaks the instanceof operator, generating false positives; the solution is to install yet another third-party package. Built-in tools tend to work as expected and are more standardized, as with Python or Go, which both ship with a built-in test runner.
Even newer JavaScript runtimes like Deno and Bun come with test runners. Node.js was left behind until the release of Node.js v18, which shipped with an experimental test runner. Now, with the Node.js v20 release, the test runner is stable and can be used in production. Some of the features available in the test runner include the familiar describe/it syntax, assertions via node:assert, test skipping, mocking, and an experimental watch mode.

Let's explore the test runner in detail. First, create a calculator.js file with the following code:
// calculator.js
export function add(x, y) {
  return x + y;
}

export function divide(x, y) {
  return x / y;
}
Following that, create a test directory with a calculator_test.js file inside it. In the file, add the following code to test the functions using the built-in test runner:
// calculator_test.js
import { describe, it } from "node:test";
import assert from "node:assert/strict";
import { add, divide } from "../calculator.js";

describe("Calculator", () => {
  it("can add two numbers", () => {
    const result = add(2, 5);
    assert.strictEqual(result, 7);
  });

  it("can divide two numbers", () => {
    const result = divide(15, 5);
    assert.strictEqual(result, 3);
  });
});
In the preceding code, we import the describe and it functions from node:test, which should be familiar if you have used Jest. We also import assert from node:assert/strict. We then test that the add() and divide() functions work as intended. Run the tests as follows:
node --test
Running the tests yields output that matches the following:
▶ Calculator
  ✔ can add two numbers (0.984478ms)
  ✔ can divide two numbers (0.291951ms)
▶ Calculator (5.135785ms)

ℹ tests 2
ℹ suites 1
ℹ pass 2
ℹ fail 0
ℹ cancelled 0
ℹ skipped 0
ℹ todo 0
ℹ duration_ms 158.853226
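As an aside, the output above comes from the default reporter used in a terminal; the runner also ships with other built-in reporters, such as tap and dot, which can be selected with the --test-reporter flag:

node --test --test-reporter tap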
When we run the tests, the built-in runner searches for JavaScript test files with the .js, .cjs, and .mjs extensions and executes them, provided that one of the following is true:

The file name is test (for example, test.js)
The file name starts with test-
The file name ends with .test, -test, or _test

We can also provide the directory containing the tests when we run node --test.
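For example, to run only the tests in the test directory we created earlier:

node --test test/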
If we want to skip some tests, we will need to provide the skip: true option as a second argument to the it block:
... describe("Calculator", () => { it("can add two numbers", () => { const result = add(2, 5); assert.strictEqual(result, 7); }); // skip test it("can divide two numbers", { skip: true }, () => { const result = divide(15, 5); assert.strictEqual(result, 3); }); } )
When we rerun the tests, we will see that only one test runs:
▶ Calculator
  ✔ can add two numbers (0.954955ms)
  ﹣ can divide two numbers (0.214886ms) # SKIP
▶ Calculator (5.111238ms)
...
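As an alternative to the options object, the test module also exposes shorthand helpers like it.skip() and it.todo() — a quick sketch of both:

import { describe, it } from "node:test";
import assert from "node:assert/strict";
import { divide } from "../calculator.js";

describe("Calculator", () => {
  // Skipped without running
  it.skip("can divide two numbers", () => {
    assert.strictEqual(divide(15, 5), 3);
  });

  // Recorded as a test still to be written
  it.todo("can divide by zero safely");
});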
Node.js v20 has also shipped with an experimental watch mode that can automatically rerun the tests once it detects changes in the test files. We need to pass the --watch flag and also provide the directory to run the tests in watch mode:
node --test --watch test/*.js
If you change the file, Node.js will automatically pick up the changes and rerun the test.
We have only scratched the surface of what the test runner can do. Check out the documentation to continue exploring it.
Node.js is built upon the high-performance V8 JavaScript engine, which also powers Google Chrome. It implements newer ECMAScript features. When a new version of Node.js is released, it ships with the latest version of the V8 JavaScript engine. The newest version is V8 v11.3, which has some notable features including:
ArrayBuffer.prototype.resize(): Resizes an ArrayBuffer to the given size in bytes
SharedArrayBuffer.prototype.grow(): Grows a SharedArrayBuffer to the given size in bytes
String.prototype.isWellFormed(): Returns true if a string is well-formed and doesn't contain lone surrogates
String.prototype.toWellFormed(): Fixes and returns a string without lone surrogate issues
RegExp v flag: Improves case-insensitive matching

Let's explore the first two features: resizing an ArrayBuffer and growing a SharedArrayBuffer. One of the most exciting new additions is the ability to resize an ArrayBuffer. Before Node.js v20, resizing a buffer to hold more data after creating it was impossible. With Node.js v20, we can resize it using the resize() method, as shown in the following example:
// resize_buffer.js
const buffer = new ArrayBuffer(4, { maxByteLength: 10 });

if (buffer.resizable) {
  console.log("The Buffer can be resized!");
  buffer.resize(8); // resize the buffer
}

console.log(`New Buffer Size: ${buffer.byteLength}`);
First, we create a buffer with a size of 4 bytes and a limit of 10 bytes, specified using the maxByteLength option. Because of this limit, resizing the buffer to more than 10 bytes would fail. If you require more bytes, you can set maxByteLength to the value you want.
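Here's a quick sketch of what happens if we go past maxByteLength — the resize() call throws a RangeError (the file name resize_limit.js is just for illustration):

// resize_limit.js (illustrative sketch)
const buffer = new ArrayBuffer(4, { maxByteLength: 10 });

try {
  buffer.resize(16); // 16 > maxByteLength, so this throws
} catch (error) {
  console.log(error.name); // RangeError
}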
Next, we check if the buffer is resizable, then invoke the resize() method to resize the buffer from 4 bytes to 8 bytes. Finally, we log the buffer size. Run the file like so:
node resize_buffer.js
Here’s the output:
The Buffer can be resized!
New Buffer Size: 8
The buffer has been resized from 4 bytes to 8 bytes successfully. The SharedArrayBuffer also had the same limitation the ArrayBuffer had, but now we can grow it to the size of our choosing using the grow() method:
// grow_buffer.js
const buffer = new SharedArrayBuffer(4, { maxByteLength: 10 });

if (buffer.growable) {
  console.log("The SharedArrayBuffer can grow!");
  buffer.grow(8);
}

console.log(`New Buffer Size: ${buffer.byteLength}`);
The output looks like so:
The SharedArrayBuffer can grow!
New Buffer Size: 8
In the example, we check if the buffer is growable. If it evaluates to true, we invoke the grow() method to grow the SharedArrayBuffer to 8 bytes. Similar to the ArrayBuffer, we should not grow it beyond the maxByteLength.
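Although this article focuses on the buffer changes, the new string methods from the V8 feature list above are also easy to try — a short sketch:

// well_formed_strings.js (illustrative sketch)
const broken = "example\uD800"; // ends with a lone surrogate

console.log(broken.isWellFormed());    // false
console.log(broken.toWellFormed());    // lone surrogate replaced with U+FFFD
console.log("example".isWellFormed()); // true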
Directories are often thought of as tree structures because they contain subdirectories, which in turn contain subdirectories of their own. Historically, the readdir() method of the fs module has been limited to listing the contents of the given directory; it did not recursively go through subdirectories and list their contents. As a result, developers turned to third-party libraries such as readdirp, recursive-readdir, klaw, and fs-readdir-recursive.
Node.js v20 has added a recursive option to both the readdir() and readdirSync() methods, which allows them to read the given directory and its subdirectories recursively. Assuming we have a directory structure similar to the following:
├── dir1
│   ├── dir2
│   │   └── file4.txt
│   └── file3.txt
├── file1.txt
└── file2.txt
We can add the recursive: true option to list all the files, including those in subdirectories, as follows:
// list_directories.js
import { readdir } from "node:fs/promises";

async function readFiles(dirname) {
  const entries = await readdir(dirname, { recursive: true });
  console.log(entries);
}

readFiles("data"); // <- "data" is the root directory name
Once we run the file, the output will match the following:
[
  'dir1',
  'file1.txt',
  'file2.txt',
  'dir1/dir2',
  'dir1/file3.txt',
  'dir1/dir2/file4.txt'
]
Without the recursive option passed to the readdir() method, the output will look like the following:
[ 'dir1', 'file1.txt', 'file2.txt' ]
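For completeness, the synchronous variant accepts the same option — a minimal sketch:

// list_directories_sync.js (illustrative sketch)
import { readdirSync } from "node:fs";

const entries = readdirSync("data", { recursive: true });
console.log(entries);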
While this is a minor addition, it helps us reduce dependencies in our projects.
The last feature we will explore in this article is experimental single executable applications (SEA), introduced in Node.js v20. This feature allows us to bundle an app into a single executable .exe file on Windows, or a binary that can run on macOS or Linux, without users having to install Node.js on their systems. As of this writing, it only supports scripts that use the CommonJS module system.
Let’s create a binary. The instructions in this section will only work on Linux. On macOS and Windows, some steps will differ, so it would be best to consult the documentation. First, create a different directory to contain the code and move into it:
mkdir sea_demo && cd sea_demo
Following this, create a list_items.js file with the following code:
const items = ["cameras", "chargers", "phones"]; console.log("The following are the items:"); for (const item of items) { console.log(item); }
Next, create a sea-config.json configuration file, which is used to create a blob that can be injected into the executable:
{ "main": "list_items.js", "output": "sea-prep.blob" }
Generate the blob like so:
node --experimental-sea-config sea-config.json

The command outputs the following:

Wrote single executable preparation blob to sea-prep.blob
Copy the node executable and give the copy a name that works for you:
cp $(command -v node) list_items
Inject the blob into the binary:
npx postject list_items NODE_SEA_BLOB sea-prep.blob \
  --sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2
Run the binary on the system like this:
./list_items
Our program produces output that looks similar to this:
The following are the items:
cameras
chargers
phones
(node:41515) ExperimentalWarning: Single executable application is an experimental feature and might change at any time
(Use `list_items --trace-warnings ...` to show where the warning was created)
That takes care of creating SEAs. If you want to learn how to create binaries for macOS or Windows, take a look at the documentation.
In this post, we explored some of the features introduced in Node.js v20. First, we looked at how to use the experimental Permission Model. We then peeked into how to use the now stable built-in test runner. From there, we learned about the new features available in the V8 JavaScript engine.
Following that, we explored how to read directories recursively, and finally, we created a binary using the experimental Single Executable Application (SEA) feature that allows users to run Node.js programs without installing Node.js.
I hope you have found interesting features that you can incorporate into your projects. Don’t forget to check out the Node.js documentation for more information.