Jordan Irabor is an innovative software developer with over five years of experience building software to high standards of clarity and quality. He also follows the latest blogs and writes technical articles as a guest author on several platforms.

How to build a web crawler with Node



A web crawler, often shortened to crawler or sometimes called a spider-bot, is a bot that systematically browses the internet, typically for the purpose of web indexing. These internet bots can be used by search engines to improve the quality of search results for users. In addition to indexing the World Wide Web, crawling can also be used to gather data (known as web scraping).

The process of web scraping can be quite taxing on the CPU, depending on the site’s structure and the complexity of the data being extracted. To optimize and speed up this process, we will make use of Node workers (threads), which are useful for CPU-intensive operations.

In this article, we will learn how to build a web crawler that scrapes a website and stores the data in a database. This crawler bot will perform both operations using Node workers.

Prerequisites

  1. Basic knowledge of Node.js
  2. Yarn or NPM (we’ll be using Yarn)
  3. A system configured to run Node code (preferably version 10.5.0 or later)


Launch a terminal and create a new directory for this tutorial:

$ mkdir worker-tutorial
$ cd worker-tutorial

Initialize the directory by running the following command:

$ yarn init -y

We need the following packages to build the crawler:

  • Axios — a promise-based HTTP client for the browser and Node.js
  • Cheerio — a lightweight implementation of jQuery which gives us access to the DOM on the server
  • Firebase database — a cloud-hosted NoSQL database. If you’re not familiar with setting up a Firebase database, check out the documentation and follow steps 1-3 to get started

Let’s install the packages listed above with the following command:

$ yarn add axios cheerio firebase-admin

Hello workers

Before we start building the crawler using workers, let’s go over some basics. You can create a test file hello.js in the root of the project to run the following snippets.

Registering a worker

A worker can be initialized (registered) by importing the Worker class from the worker_threads module like this:

// hello.js

const { Worker } = require('worker_threads');

new Worker("./worker.js");
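
For the snippet above to run, a worker.js file must exist alongside hello.js. Its contents are not shown here, so the sketch below is an assumed, minimal version just to make the example runnable:

// worker.js (assumed contents: any code placed here runs on the new thread)
console.log('Hello from the worker thread!');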

Hello world

Printing out Hello World with workers is as simple as running the snippet below:

// hello.js

const { Worker, isMainThread } = require('worker_threads');

if (isMainThread) {
    new Worker(__filename);
} else {
    console.log("Worker says: Hello World"); // prints 'Worker says: Hello World'
}
This snippet pulls in the Worker class and the isMainThread flag from the worker_threads module:

  • isMainThread helps us know when we are either running inside the main thread or a worker thread
  • new Worker(__filename) registers a new worker with the __filename variable which, in this case, is hello.js
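
To see the two execution contexts more concretely, here is a small sketch (an illustration, not part of the tutorial’s code) in which each thread logs its own role. threadId is another export of the worker_threads module, and the ids in the comments are the typical values:

// roles.js (illustrative sketch)

const { Worker, isMainThread, threadId } = require('worker_threads');

if (isMainThread) {
    console.log(`Main thread (threadId: ${threadId})`); // typically prints 'Main thread (threadId: 0)'
    new Worker(__filename);
} else {
    console.log(`Worker thread (threadId: ${threadId})`); // typically prints 'Worker thread (threadId: 1)'
}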

Communication with workers

When a new worker (thread) is spawned, there is a messaging port that allows inter-thread communications. Below is a snippet which shows how to pass messages between workers (threads):

// hello.js

const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
    const worker = new Worker(__filename);
    worker.once('message', (message) => {
        console.log(message); // prints 'Worker thread: Hello!'
    });
    worker.postMessage('Main Thread: Hi!');
} else {
    parentPort.once('message', (message) => {
        console.log(message); // prints 'Main Thread: Hi!'
        parentPort.postMessage("Worker thread: Hello!");
    });
}

In the snippet above, the main thread spawns a worker, listens for its reply using worker.once(), and sends it a greeting using worker.postMessage(). Inside the worker thread, parentPort.once() listens for the message from the main thread, and parentPort.postMessage() sends the reply back.

Running the code produces the following output:

Main Thread: Hi!
Worker thread: Hello!

Building the crawler

Let’s build a basic web crawler that uses Node workers to crawl and write to a database. The crawler will complete its task in the following order:

  1. Fetch (request) HTML from the website
  2. Extract the HTML from the response
  3. Traverse the DOM and extract the table containing exchange rates
  4. Format table elements (tbody, tr, and td) and extract exchange rate values
  5. Store the exchange rate values in an object and send it to a worker thread using worker.postMessage()
  6. Accept the message from the parent thread in the worker thread using parentPort.once()
  7. Store the message in Firestore (Firebase database)

Let’s create two new files in our project directory:

  1. main.js – for the main thread
  2. dbWorker.js – for the worker thread

The source code for this tutorial is available here on GitHub. Feel free to clone it, fork it or submit an issue.

Main thread (main.js)

In the main thread, we will scrape the IBAN website for the current exchange rates of popular currencies against the US dollar. We will import axios and use it to fetch the HTML from the site using a simple GET request.

We will also use cheerio to traverse the DOM and extract data from the table element. To know the exact elements to extract, we will open the IBAN website in our browser and inspect it with dev tools.


Inspecting the page, we can see the table element with the classes — table table-bordered table-hover downloads. This will be a great starting point, and we can feed that into our cheerio root element selector:

// main.js

const axios = require('axios');
const cheerio = require('cheerio');
const url = ""; // the IBAN exchange rate page goes here

fetchData(url).then((res) => {
    if (!res) return; // bail out if the fetch failed
    const html = res.data;
    const $ = cheerio.load(html);
    const statsTable = $('.table.table-bordered.table-hover.downloads > tbody > tr');
    statsTable.each(function() {
        let title = $(this).find('td').text();
        console.log(title); // log each row's text content
    });
});

async function fetchData(url){
    console.log("Crawling data...");
    // make http call to url
    let response = await axios(url).catch((err) => console.log(err));

    if(!response || response.status !== 200){
        console.log("Error occurred while fetching data");
        return;
    }
    return response;
}

Running the code above with Node will give the following output:

[Image: terminal output listing the crawled exchange rate data]

Going forward, we will update the main.js file so that we can properly format our output and send it to our worker thread.

Updating the main thread

To properly format our output, we need to get rid of white space and tabs since we will be storing the final output in JSON. Let’s update the main.js file accordingly:

// main.js
const { Worker } = require('worker_threads');
const axios = require('axios');
const cheerio = require('cheerio');
let workDir = __dirname + "/dbWorker.js";

// fetchData() remains unchanged from the previous snippet

const mainFunc = async () => {
  const url = ""; // the IBAN exchange rate page goes here
  // fetch html data from iban website
  let res = await fetchData(url);
  if (!res || !res.data) {
    console.log("Invalid data Obj");
    return null;
  }
  const html = res.data;
  // mount html page to the root element
  const $ = cheerio.load(html);

  let dataObj = new Object();
  const statsTable = $('.table.table-bordered.table-hover.downloads > tbody > tr');
  // loop through all table rows and get table data
  statsTable.each(function() {
    let title = $(this).find('td').text(); // get the text in all the td elements
    let newStr = title.split("\t"); // convert text (string) into an array
    newStr.shift(); // strip off empty array element at index 0
    formatStr(newStr, dataObj); // format array string and store in an object
  });

  return dataObj;
}

mainFunc().then((res) => {
    // start worker
    const worker = new Worker(workDir);
    console.log("Sending crawled data to dbWorker...");
    // send formatted data to worker thread
    worker.postMessage(res);
    // listen to message from worker thread
    worker.on("message", (message) => {
        console.log(message);
    });
});

function formatStr(arr, dataObj){
    // regex to match all the words before the first digit
    let regExp = /[^A-Z]*(^\D+)/
    let newArr = arr[0].split(regExp); // split array element 0 using the regExp rule
    dataObj[newArr[1]] = newArr[2]; // store the currency name and rate as a key-value pair
}

In the snippet above, we are doing more than data formatting; after mainFunc() resolves, we pass the formatted data to the worker thread for storage.
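
To make formatStr() more concrete, here is a quick illustration of what it does to a single row; the sample string is made up for demonstration, since the exact text depends on the site’s markup:

// illustrative only: a row's td text might reduce to something like "US Dollar 1.086"
let dataObj = {};
formatStr(["US Dollar 1.086"], dataObj);
console.log(dataObj); // { 'US Dollar ': '1.086' } (note the trailing space captured by the regex)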

Worker thread (dbWorker.js)

In this worker thread, we will initialize firebase and listen for the crawled data from the main thread. When the data arrives, we will store it in the database and send a message back to the main thread to confirm that data storage was successful.

The snippet that takes care of the aforementioned operations can be seen below:

// dbWorker.js

const { parentPort } = require('worker_threads');
const admin = require("firebase-admin");

//firebase credentials
let firebaseConfig = {
    authDomain: "XXXXXXXXXXXX-XXX-XXX",
    projectId: "XXXXXXXXXXXX-XXX-XXX",
    storageBucket: "XXXXXXXXXXXX-XXX-XXX",
    messagingSenderId: "XXXXXXXXXXXX-XXX-XXX",
};

// Initialize Firebase
admin.initializeApp(firebaseConfig);
let db = admin.firestore();
// get the current date in DD-MM-YYYY format (getMonth() is zero-based, hence the + 1)
let date = new Date();
let currDate = `${date.getDate()}-${date.getMonth() + 1}-${date.getFullYear()}`;
// receive crawled data from the main thread
parentPort.once("message", (message) => {
    console.log("Received data from mainWorker...");
    // store data received from the main thread in the database
    db.collection("rates").doc(currDate).set({ // "rates" is an assumed collection name
        rates: JSON.stringify(message)
    }).then(() => {
        // send data back to main thread if operation was successful
        parentPort.postMessage("Data saved successfully");
    })
    .catch((err) => console.log(err));
});

Note: To set up a database on Firebase, please visit the Firebase documentation and follow steps 1-3 to get started.
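
A side note on initialization: the config-object style used above mirrors the client SDK. With firebase-admin, a common alternative is to authenticate with a service account key instead, roughly as sketched below (the key file path is a placeholder; you would download the actual key from your Firebase project settings):

// alternative firebase-admin initialization (sketch)
const admin = require("firebase-admin");
// placeholder path; download the real key from your Firebase console
const serviceAccount = require("./serviceAccountKey.json");

admin.initializeApp({
    credential: admin.credential.cert(serviceAccount)
});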

Running main.js (which spawns dbWorker.js) with Node will give the following output:

[Image: terminal output showing the crawled data being sent to dbWorker and the save confirmation]

You can now check your firebase database and will see the following crawled data:

[Image: the crawled exchange rate data displayed in the Firebase console]

Final notes

Although web crawling can be fun, it can also be against the law if you use the data in a way that infringes copyright. It is generally advised that you read the terms and conditions of the site you intend to crawl beforehand, to know its data crawling policy. You can learn more in the Crawling Policy section of this page.
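
One lightweight way to check a site’s stance programmatically is to read its robots.txt file, the standard convention through which sites publish their crawling rules. A minimal sketch (the domain here is purely illustrative):

// robots.js: print a site's robots.txt (sketch; the domain is illustrative)
const axios = require('axios');

axios.get('https://example.com/robots.txt')
    .then((res) => console.log(res.data))
    .catch((err) => console.log(`Could not fetch robots.txt: ${err.message}`));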

The use of worker threads does not guarantee that your application will be faster, but it can create that impression when used efficiently, because offloading CPU-intensive tasks frees up the main thread to handle other work.

Conclusion

In this tutorial, we learned how to build a web crawler that scrapes currency exchange rates and saves them to a database. We also learned how to use worker threads to run these operations.

The source code for each of the snippets in this tutorial is available on GitHub. Feel free to clone it, fork it or submit an issue.

Further reading

Interested in learning more about worker threads? You can check out the official worker_threads documentation on the Node.js website.

