Docker has long been the go-to tool to create easily distributable and deployable artifacts.
A Docker image can host code written in almost any language, every major operating system can run Docker containers, and all the major cloud providers offer at least one platform that can deploy Docker images.
However, creating a Docker image from your custom application code requires a little expertise, especially if you regularly rebuild images as you make changes to your code.
It is very easy to unnecessarily download thousands of packages each time an image is built, wasting time, consuming bandwidth, and costing money.
Cloud Native Buildpacks emerged as a convenient way to build Docker images, capturing the decade of experience that large hosting providers have gained generating and hosting Docker images. By encoding these best practices, Buildpacks ensure your Docker image builds are quick and efficient.
In this post, we’ll look at how a Docker image can host a simple Node.js application, review some common pitfalls you may run into, and explore how Buildpacks allow you to efficiently create Docker images, often with no additional configuration.
The sample application we’ll build in this post is extremely rudimentary. It is simply the end result of running the Express application generator to generate a web page displaying “Welcome to Express.”
The source code for this application can be found on GitHub.
Although simple, the sample application demonstrates how to build a Docker image hosting a Node.js application and some of the inefficiencies that a naive approach to building Docker images can lead to.
To follow along with the post, you’ll need a number of tools installed.
First, you’ll need Node.js, and the Node.js website provides downloads and instructions. The latest long-term support (LTS) release is suitable.
Next, you’ll need Docker, and you can find downloads and instructions on its website. While Windows has recently gained native container support, this post focuses on building Linux Docker images.
Both Windows and macOS support Linux Docker images via a seamless virtual machine layer that you mostly don’t have to think about as a developer.
Finally, to use Buildpacks, you must install the pack CLI tool, which is available from the Buildpacks website.
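Once everything is installed, it's worth confirming the tools are available from a terminal. A quick sanity check (the version numbers you see will vary):

node --version
docker --version
pack version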
The traditional method for building Docker images is to define the process in a file called Dockerfile. We can see an example of building the sample Node.js application below:
FROM node
WORKDIR /usr/src/app
COPY . .
RUN npm install
EXPOSE 3000
CMD [ "npm", "start" ]
The complete Dockerfile reference information is available on the Docker website. Our example uses only a small subset of the available commands, but this is enough to Dockerize our sample application.
The FROM command defines the base image that you build on top of. All major tools and programming languages offer supported Docker images that developers can base their own images on, and Node.js is no exception.
The complete list of available Node.js Docker images can be found on Docker Hub, and here, we use the default node image:
FROM node
A Docker image is essentially a filesystem containing all the files required to run the Linux processes that support an application. On this filesystem, we’ll create a directory called /usr/src/app to hold our Node.js application.
The WORKDIR command creates the supplied directory and uses it as the working directory for any subsequent commands:
WORKDIR /usr/src/app
You can then copy the Node.js application source code from your local PC into the Docker image with the COPY command. The first argument is the location of the local files, where a period (.) indicates the current working directory.
The second argument is the destination in the Docker image that the files are copied to. Thanks to the WORKDIR command above, a period passed as the second argument results in the files being copied to /usr/src/app:
COPY . .
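One caveat worth noting: COPY . . copies everything in the current directory into the image, including a local node_modules directory or .git folder if they exist, which bloats the image and slows the build. A .dockerignore file placed next to the Dockerfile excludes such paths from the build context; a minimal sketch might look like this:

node_modules
npm-debug.log
.git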
At this point, you can test and build the application like you would locally. For this simple example, building the application means downloading any dependencies with npm. You can run commands in the context of the Docker image with the RUN command:
RUN npm install
The EXPOSE command documents the port that a container created from the resulting Docker image listens on. Note that it does not publish the port by itself; that happens with the -p argument to docker run, shown below. The sample application defaults to port 3000:
EXPOSE 3000
Finally, you define the command to run when a container based on this image starts. The package.json file defines npm start as a shortcut for starting the Node web server, and the CMD command runs that same command when the container launches:
CMD [ "npm", "start" ]
To build the Docker image, run the following command from the directory holding the Dockerfile. This instructs Docker to use the Dockerfile in the current directory to build an image called expressapp:
docker build . -t expressapp
Once the image is built, it can be run with the following command:
docker run -p 3000:3000 expressapp
This creates a container from the expressapp image and maps port 3000 on the local PC to port 3000 inside the container. You can now open the sample application at http://localhost:3000.
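If you prefer the command line, you can confirm the container is serving requests with curl; the returned HTML should contain the welcome message:

curl http://localhost:3000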
You have now successfully Dockerized the sample Node.js application. However, the approach shown above does have some significant downsides.
Behind the scenes, a Docker image is made up of multiple layers. Each layer represents an incremental change to the image, and the layers combine to generate the resulting file system executed by a container.
Every command in the Dockerfile creates a new layer. To improve performance when building a Docker image, these layers are reused as long as neither the instructions in the Dockerfile nor the external files copied into the image have changed.
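You can see the individual layers that make up the image, along with the command that created each one, using docker history:

docker history expressapp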
You can see this by rebuilding the image again with the following command:
docker build . -t expressapp
Notice that this time the image was built far faster than the first time. This is because neither the Dockerfile nor the files copied into the image changed, allowing Docker to reuse the previously generated layers.
Let’s now change a file to force Docker to rebuild the layers that make up the image. First, edit the file views/index.jade to change the welcome message to Welcome to Express from Docker:
extends layout

block content
  h1= title
  p Welcome to #{title} from Docker
Now, rebuild the image with the command:
docker build . -t expressapp
Notice that the Node dependencies are downloaded again while building the image. This is because the COPY . . command detected that the copied files changed, meaning the layer previously generated by this command could not be reused. It also means that none of the subsequent layers can be reused, so the RUN npm install command is forced to download the dependencies again.
This is a minor inconvenience for such a small application, but larger Node.js applications may need to download many hundreds of megabytes of dependencies with each change to the application source code. This is not particularly efficient, and not a sustainable long-term approach.
The typical workaround to this problem is to copy only the package.json file, run npm install, and then copy the remaining application source code. We can see an example of this below:
FROM node
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
This is an improvement, as changes to the application source code no longer force the dependencies to be downloaded again. So long as the package.json file doesn’t change, the layers generated by the COPY package.json . and RUN npm install commands can be reused.
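As a side note, if your project includes a package-lock.json file, you would typically copy it alongside package.json and install with npm ci for reproducible builds. A hedged variation on the two commands above:

COPY package*.json ./
RUN npm ci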
Still, any change to the package.json file results in all dependencies being redownloaded. Wouldn’t it be nice if you could share the node_modules directory between image builds, in the same way it persists on disk when building directly on your local PC?
This is where Buildpacks can help.
You can think of Buildpacks as build scripts evolved and maintained over many years by companies like Google, Heroku, and Cloud Foundry to compile your applications into Docker images in the most convenient and efficient way possible.
To prove this, delete the Dockerfile, because you don’t need it anymore. The Node.js application now has no special Docker configuration files at all.
Now, build a Docker image with the following command:
pack build expressappbuildpack
If you run the pack command for the first time, it prompts you to select a default builder:
Please select a default builder with:

    pack config default-builder <builder-image>

Suggested builders:
    Google:                gcr.io/buildpacks/builder:v1      Ubuntu 18 base image with buildpacks for .NET, Go, Java, Node.js, and Python
    Heroku:                heroku/buildpacks:18              Base builder for Heroku-18 stack, based on ubuntu:18.04 base image
    Heroku:                heroku/buildpacks:20              Base builder for Heroku-20 stack, based on ubuntu:20.04 base image
    Paketo Buildpacks:     paketobuildpacks/builder:base     Ubuntu bionic base image with buildpacks for Java, .NET Core, NodeJS, Go, Ruby, NGINX and Procfile
    Paketo Buildpacks:     paketobuildpacks/builder:full     Ubuntu bionic base image with buildpacks for Java, .NET Core, NodeJS, Go, PHP, Ruby, Apache HTTPD, NGINX and Procfile
    Paketo Buildpacks:     paketobuildpacks/builder:tiny     Tiny base image (bionic build image, distroless-like run image) with buildpacks for Java Native Image and Go

Tip: Learn more about a specific builder with:
    pack builder inspect <builder-image>
Many high-quality builders are provided by teams like Google, Heroku, and Paketo. I tend to stick to the builders provided by Heroku, which can be configured as the default with the following command:
pack config default-builder heroku/buildpacks:20
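If you want to see exactly which buildpacks a builder ships with before committing to it, the tip from the pack output above applies:

pack builder inspect heroku/buildpacks:20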
Now, run the build command again:
pack build expressappbuildpack
Watch as the pack tool detects that the application is written in Node.js, installs all the dependencies, and produces the Docker image. Run the new Docker image with this command:
docker run -p 3000:3000 expressappbuildpack
As before, the sample application is available from http://localhost:3000.
It is worth taking a moment to consider what you just achieved. With no Dockerfile or other Docker configuration, and no special flags or settings to indicate that this is a Node.js application, the pack command successfully produced a Docker image embedding the Node.js application.
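You can also ask pack to describe the image it produced, including which buildpacks contributed to it (in older pack releases, the equivalent command was pack inspect-image):

pack inspect expressappbuildpack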
What’s more, Buildpacks are smarter in the way they handle application dependencies. To demonstrate this, add some white space (such as a new line) to the end of the package.json file and save the changes.
Previously, any change to the package.json file, even an insignificant change like added white space, left Docker unable to reuse any previously generated layers, which in turn resulted in all dependencies being redownloaded.
But if you rebuild the Docker image with pack, the existing dependencies are reused. You will see a log message like [INFO] Reusing node modules, indicating that the previously downloaded dependencies are still available.
This is because Buildpacks cleverly use Docker volumes to persist files like dependencies between builds, which is much more robust than relying on layer caching.
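You can see these cache volumes with the standard Docker CLI; the exact names are an implementation detail of pack, but they typically incorporate the image name:

docker volume ls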
So, with one simple command, you can Dockerize your Node.js applications without worrying about hacking a Dockerfile to prevent unnecessary dependency downloads.
Docker is an incredibly powerful tool for building and distributing applications, but to make the most of it, developers must have a reasonably detailed understanding of how the commands in a Dockerfile relate to Docker layers, and the circumstances under which layers can, and cannot, be reused.
Buildpacks abstract away much of this knowledge behind high-quality, battle-tested scripts that leverage advanced features like volumes to build images quickly and efficiently.
Buildpacks make it possible to build a Docker image with nothing more than a single call to pack.
This post demonstrated manually building Docker images from a custom Dockerfile and explored situations where application dependencies are redownloaded unnecessarily.
We then used the pack command to build a Docker image, demonstrating how dependencies are reused even in situations where Docker layers would typically be rebuilt.