In the past five years, both Docker and Node.js have become very popular. In this step-by-step guide, we will see how we can use Node.js with Docker to improve the developer experience.
This tutorial will detail how to use docker build
efficiently and leverage Docker Compose to achieve a seamless local development environment. We will use a demo Express.js application as an example. Let’s get started.
Prerequisites
- You should be familiar with the basics of Node.js and npm. We will use Node 14, the latest LTS version.
- You should be aware of the basics of the Express.js framework.
- You should have some working knowledge of Docker.
This tutorial will use commands that will run on Unix-like systems like Linux or macOS with a shell. This is going to be a condensed post where we dive directly into setting up the app, so you may want to read up on Docker and Node if you feel so inclined.
You can dissect the way I have built the app in the public GitHub repository as a sequence of six pull requests.
Create a new Express.js project with Express generator
To generate our demo application, we will use the Express application generator. For this, we will run the npx script below:

```shell
npx express-generator --view=pug --git <app-name>
```
To analyze the command we ran: we asked the Express generator to generate an Express app. The `--view=pug` parameter tells the generator to use the Pug view engine, and `--git` asks it to add a `.gitignore` file. Of course, you will need to replace `<app-name>` with your application name. I am using `nodejs-docker-express` as an example.
It will render something like below:
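For reference, express-generator prints the files it creates along with the suggested next steps, roughly like this (a sketch; the exact file list may vary by generator version):

```
   create : nodejs-docker-express/
   create : nodejs-docker-express/app.js
   create : nodejs-docker-express/package.json
   create : nodejs-docker-express/routes/index.js
   create : nodejs-docker-express/views/index.pug
   create : nodejs-docker-express/bin/www

   install dependencies:
     $ cd nodejs-docker-express && npm install

   run the app:
     $ DEBUG=nodejs-docker-express:* npm start
```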
Test the Express app
To test the app, first run `npm install` to install all the necessary npm modules. After that, start the app with:

```shell
DEBUG=nodejs-docker-express:* npm start
```

You should then see a message like `nodejs-docker-express:server Listening on port 3000`. The command is pretty simple: it ran `npm start` with an environment variable called `DEBUG` set to `nodejs-docker-express:*`, which instructs the server to print verbose debug output.

If you are on Windows, use `set DEBUG=nodejs-docker-express:* & npm start` instead. You can read more about Express debugging to learn about other available options.
Now open your browser and go to http://localhost:3000 to see an output like the one below:
Hurray! Your bare-bones Express.js app is already running. You can now stop the server with Ctrl+C in that command-line window. Next, we will dockerize our Node.js Express application.
Dockerize the app with Docker multi-stage build
Containerizing our application has numerous upsides. To start with, it will behave the same regardless of the platform on which it is run. With Docker containers, the app can be easily deployed to platforms like AWS Fargate, Google Cloud Run, or even your own Kubernetes cluster.
We will start with a Dockerfile. A Dockerfile is a blueprint on which the Docker image is built. When the built image is running, it is called a container. This post does a good job of explaining the differences between these three terms. Here is a quick visual explanation:
So the process is pretty straightforward: we build a Docker image from a Dockerfile. A running Docker image is called a Docker container.
Setting up the base stage
Time to see how our Dockerfile looks — as a bonus, we will utilize multi-stage builds to make our builds faster and more efficient.
```dockerfile
FROM node:14-alpine as base

WORKDIR /src
COPY package*.json /
EXPOSE 3000

FROM base as production
ENV NODE_ENV=production
RUN npm ci
COPY . /
CMD ["node", "bin/www"]

FROM base as dev
ENV NODE_ENV=development
RUN npm install -g nodemon && npm install
COPY . /
CMD ["nodemon", "bin/www"]
```
In the above Dockerfile, we make use of multi-stage builds. It has three stages: base, production, and dev. The base stage holds things common to both dev and production. Graphically, it can be portrayed like this:
Does this remind you of inheritance? It is a kind of inheritance for Docker images. We are using a slim production stage and a more feature-rich, development-focused dev stage.
Let’s go through it line by line:
```dockerfile
FROM node:14-alpine as base
```

First, we tell Docker to use the official Node.js Alpine image, version 14, the latest LTS release. This image is available publicly on DockerHub. We are using the Alpine variant of the official Node.js Docker image because it is just under 40MB, compared to 345MB for the main one.

We also specify `as base` because this Dockerfile uses a multi-stage build. The naming is up to you; we are using base as it will be "extended" later in the build process.
```dockerfile
WORKDIR /src
COPY package*.json /
EXPOSE 3000
```

`WORKDIR` sets the context for the `RUN` commands that execute after it. We copy only the `package.json` and `package-lock.json` files into the container to get faster builds with better Docker build caching.

The next line `EXPOSE`s port `3000` on the container. This is the port where the Node.js Express web server runs by default. The above steps are common to both the dev and production stages.
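For context, the generated bin/www resolves this port with a small helper; below is a sketch along the lines of express-generator's output (your generated file may differ slightly):

```javascript
// Normalize a port value into a number, a named pipe string,
// or false when the value is invalid.
function normalizePort(val) {
  const port = parseInt(val, 10);

  if (isNaN(port)) {
    // Not a number: treat it as a named pipe
    return val;
  }
  if (port >= 0) {
    // A valid port number
    return port;
  }
  return false;
}

// bin/www reads PORT from the environment, defaulting to 3000
const port = normalizePort(process.env.PORT || '3000');
```

This is why setting a `PORT` environment variable on the container would override the default of 3000.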
Now we can take a look at how the production target stage is built.
Setting up the production stage
```dockerfile
FROM base as production
ENV NODE_ENV=production
RUN npm ci
COPY . /
CMD ["node", "bin/www"]
```

In the production stage, we continue where we left off in the base stage, as the first line instructs Docker to start from base. Subsequently, we ask Docker to set the environment variable `NODE_ENV` to `production`.

Setting this variable to `production` is said to perform three times better and has other benefits, like cached views. With `NODE_ENV` set to `production`, npm installs only the main dependencies, leaving out the dev dependencies. These settings are perfect for a production environment.
Next, we run `npm ci` instead of `npm install`. `npm ci` is targeted at continuous integration and deployment. It is also much faster than `npm install` because it bypasses some user-oriented features. Note that `npm ci` needs a `package-lock.json` file to work.

After that, we copy the code to `/src`, as this is our workdir; this copies our custom code into the container. Finally, the `CMD` runs the `bin/www` script with the node command to start the web server.
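To try the production stage on its own, you could build and run just that target; a sketch (the image name myapp is arbitrary):

```shell
# Build only the production stage of the multi-stage Dockerfile
docker build --target production -t myapp:prod .

# Run it, mapping the container's port 3000 to the host
docker run --rm -p 3000:3000 myapp:prod
```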
Because we are leveraging the multi-stage build, we can add components necessary for development only in the development stage. Let’s look at how that is done.
```dockerfile
FROM base as dev
ENV NODE_ENV=development
RUN npm install -g nodemon && npm install
COPY . /
CMD ["nodemon", "bin/www"]
```
Similar to production, dev is also "extending" from the base stage. In this stage, we set the `NODE_ENV` environment variable to `development`. After that, we install nodemon: whenever a file changes, nodemon will restart the server, making our development experience much smoother.

Then, we do the regular `npm install`, which will also install dev dependencies, if there are any. In our current `package.json`, there are no dev dependencies. If we were testing our app with Jest, for example, that would be one of the dev dependencies. Notice the two commands are joined with `&&`. This creates fewer Docker layers, which is good for build caching.

Same as the earlier stage, we copy our code to the container at `/src`. This time, however, we run the web server with nodemon to restart it on each file change, as this is the development environment.
Don't ignore `.dockerignore`!

Just as we wouldn't use Git without `.gitignore`, it is highly advisable to add a `.dockerignore` file when using Docker. `.dockerignore` is used to exclude files that you don't want to land in your Docker image. It helps keep the Docker image small and keeps the build cache efficient by ignoring irrelevant file changes. This is how our `.dockerignore` file looks:
```
.git
node_modules
```
It's very simple: we are instructing Docker not to copy the `.git` folder and `node_modules` from the host to the Docker container. Since we run `npm ci` or `npm install` inside the container, this helps keep things consistent.
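Depending on the project, you might also want to keep logs and local env files out of the image; a possible extension of the file (the entries beyond the first two are suggestions, not part of this tutorial's repo):

```
.git
node_modules
npm-debug.log
.env
```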
Add Docker Compose — and don’t forget the build target
By now we have most of the things we’ll need to run our Node.js Express app with Docker. The next thing we’ll need to glue it all together is Docker Compose.
Compose makes it easy to run applications with one or even multiple containers. We don't need to remember very long commands to build or run containers. As long as you can run `docker-compose build` and `docker-compose up`, your applications will run effortlessly.
On the bright side, it comes pre-installed with your Docker installation. It is mostly used in the development environment.
Below is our `docker-compose.yml` file, which lives at the root of the project for this tutorial:
```yaml
version: '3.8'
services:
  web:
    build:
      context: ./
      target: dev
    volumes:
      - .:/src
    command: npm run start:dev
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: development
      DEBUG: nodejs-docker-express:*
```
First, we specify the version of the Compose file format, which, in our case, is 3.8, supported by Docker Engine 19.03.0 and newer. This also lets us use multi-stage Docker build targets.
Next, we specify the services we are using. For this tutorial, we only have one service, named `web`. It has a build `context` of the current directory and an important build parameter, `target`, set to `dev`. This tells Docker that we want to build the Docker image with the dev stage.
After that, we specify the Docker volume. It instructs Docker to mount and sync the local directory `./` of the host with `/src` on the Docker container. This is useful when we change a file on the host machine: it is reflected instantly inside the container, too.
Consequently, we use the command `npm run start:dev`, which is added to the `package.json` file as:

```json
"start:dev": "nodemon ./bin/www"
```
So, we want to start the web server with nodemon. As it is our development environment, it will restart the server on each file save.
Next, we map the host machine’s port 3000 with the container port 3000. We exposed port 3000 when we built the container, and our web server runs on 3000, too.
Finally, we set a couple of environment variables. First, `NODE_ENV` is set to `development`, as we want to see verbose errors and not do any view caching. Second, `DEBUG` is set to `nodejs-docker-express:*`, which tells the Express web server to print out verbose debug messages.
Test the app with Docker and Docker Compose
We have set up all the needed parts, so now let's carry on to building the Docker image. We will optimize our Docker builds with BuildKit; Docker images are built much faster with BuildKit enabled. Time to see it in action: run the following command:

```shell
COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build
```

Here we are telling Compose to build the Docker image with BuildKit on. After a short wait, it should build the Docker image like below:

So our Docker image was built in around 14 seconds, much faster with BuildKit. Let's run the image:

```shell
docker-compose up
```
It should result in something like below:
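The output should include the nodemon banner and the debug line from Express; roughly like this (a sketch, exact versions and prefixes will differ):

```
web_1  | [nodemon] watching path(s): *.*
web_1  | [nodemon] starting `node ./bin/www`
web_1  | nodejs-docker-express:server Listening on port 3000
```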
After that, if you hit http://localhost:3000 in your browser, you should see the following:
Great! Our app is running well with Docker. Now let’s make a file change and see if it reflects correctly.
Restart on file change: nodemon to the rescue
Our aim is to change "Welcome to Express" to "Welcome to Express with Docker" as a test. To do this, we will need to change line 6 of the routes/index.js file to look like below:

```javascript
res.render('index', { title: 'Express with Docker' });
```
As soon as we save the file, we can see the web server restart. This clearly shows that our Docker volume and nodemon are functioning as expected. At this juncture, if you refresh the browser tab running http://localhost:3000, you will see:
Hurray! You have successfully run an Express app with Docker in your local environment with Docker Compose configured. Give yourself a pat on the back!
Next steps and conclusion
Docker Compose is very useful for starting multiple containers. So if we want to add Mongo or MySQL as a data source for the application, we can easily do it as another service in the docker-compose file. For the purposes of this tutorial, we focused only on Node.js with Docker, with a single container running.

Node.js and Docker play along very well. With the use of docker-compose, the development experience is much smoother. You can use this tutorial as a base to try out more advanced things with Docker and Node.js. Happy coding!
Nice article, thanks! One question: in the development stage of the Docker file, you copy the sources to /src in the image. But in the compose file with the dev target, you mount the current directory (.) to that same location as a volume. Are both really necessary? Isn’t it enough to do the volume mount, for the dev stage?
Thanks for the good article!
What’s the difference between `CMD [“nodemon”, “bin/www”]` for dev target inside Dockerfile and `command: npm run start:dev` inside docker-compose.yml? Will it override the one from dockerfile? And what’s the reason to specify both then?
@Tom thanks for the comment. They are essentially the same thing. There are two advantages of doing it: first, when the files are changed they are synced inside the container too, without the need to rebuild it. Second, if some npm dependencies are installed on the host machine (not the container), they will also be synced into the container. I hope this clarifies your query!
Seems like a reasonable, detailed post. There’s one main thing that is missing that caused me to skip most of the content – *why* do I want to move to a dockerized dev environment? What are the benefits from your perspective? How has it improved, or what has it enabled in your development workflow? What are the drawbacks?
Hey @Dale, great question. Below are the benefits from my point of view:
1. Let's say Node 16 is out; in a new branch you could test your Node app by changing literally one line. Build, then docker-compose up, and you can see the changes.
2. You don't even need to install Node locally on your machine if you don't want to.
3. You may use a Mac/Windows machine, but the app is deployed on a Linux server. If you use Docker, the same(ish) container goes to prod, so the binaries and other things will work as expected.
Some more reasons: https://geshan.com.np/blog/2018/10/why-use-docker-3-reasons-from-a-development-perspective/
It has improved a lot in the past years. It helps you have a more streamlined workflow, as you ship not only the code but essentially the whole stack with each deployment.

Drawbacks: I don't see many, for the good things it provides, but the need to build and the time it takes to build the container might be one. On Mac, the file sync becomes a bit slow at times, and yes, it adds a bit more complexity to things like line-by-line debugging, which is a one-time setup. But these are tradeoffs one should take for the portability and flexibility Docker on dev provides. That's my point of view, thanks!
Great article, thanks!
Just one question: what about the restart policy in production, in case of crash? Since you use nodemon only for development, how do you restart the app in production? Do you suggest using something like nodemon or pm2 in production or relying on the container restart policy?
@Francessco, thanks for the comment.
About restarting the container on prod, there are two ways to deal with it IMO. First, let a container orchestrator like Kubernetes handle it: if a pod is down, which is less than desired, K8s will spin up a new one. Or you can try something like PM2 (https://pm2.keymetrics.io/docs/usage/quick-start/) to get the job done. Analyze both and use the one that fits your need and use case. For a general Node app, I would go for Kubernetes to take care of it.
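If you are not on an orchestrator, Compose itself has a restart policy you can lean on; a sketch of adding it to the web service from the tutorial (`restart` is a standard Compose option):

```yaml
services:
  web:
    # Restart the container if it crashes, unless it was stopped manually
    restart: unless-stopped
```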
Hi, great article, it helps me a lot. May I know how to run the image for production? Thanks
Do we still need a .dockerignore if we explicitly copy the required directories and exclude node_modules and the .git folder?
Hello Yaz,
On production we use Kubernetes. If you want to quickly try out your docker images as running containers Google Cloud Run is a great service. Thanks!
Depends on what you want to do; if there is a .dockerignore, you don't need to remember to do it. Let's say you want to exclude your env files or logs from getting into Docker: it would be easier to add them to .dockerignore than to exclude them in the Dockerfile. Hope it helps!
What's your preferred way of updating the image when new Node dependencies are installed locally?