Rust has been the most admired programming language for seven years in a row, according to Stack Overflow's developer survey. But it is also infamous for slow compile times. This is by design and for good reason: the compiler enforces constraints that make Rust safe at runtime.
But if slow compilation is ignored, the waiting time compounds, the cost of running CI/CD increases, and the feedback loop gradually lengthens. We don't want the Rust compiler to steal our productivity! Ultimately, we want to use less electricity and pay less for CI/CD. And if you ever have to hotfix a bug in production, you'll appreciate the value of fast iteration and a fast CI/CD pipeline.
In this article, we'll focus on different strategies we can use to optimize CI/CD pipelines in Rust projects. We'll review them in order of their potential impact on performance. For our demo, we'll use a simple "Hello World!" REST API.
Let’s make our Rust projects faster!
CI/CD, or continuous integration/continuous delivery, is a collection of pipelines that run code through various steps and platforms, helping ensure that code quality standards are met, tests pass, and builds succeed on each target platform.
In most cases, the pipelines are executed on a server, though it is also possible to run them locally. Most code hosting platforms have a customizable, integrated CI/CD feature. Pipelines can be triggered by any event or run at a scheduled time, and there are several operating systems to choose from.
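To make the trigger options concrete, here is a minimal sketch of the trigger section of a GitHub Actions workflow; the branch name and schedule are just illustrative assumptions:

```yaml
# Sketch: a GitHub Actions trigger section — pipelines can run on events or on a schedule
on:
  push:
    branches: [main]   # run on every push to main
  pull_request:        # run on every pull request
  schedule:
    - cron: "0 3 * * *"  # also run daily at 03:00 UTC
```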
A Dockerfile is simply a text file with a list of instructions for building a Docker image, which is then used to run a container. To look at different ways to optimize the build process, we'll use a simple "Hello World!" project. We will implement a variety of methods with the goal of reducing the Docker build time and producing a slimmer image.
In each iteration, we’ll build the image four times. Before each build step, we’ll change the code to ensure it gets recompiled. The first build is considered a warm-up, so the result will be excluded. We will look at the results of the last three builds for each iteration.
Note that I am using Podman instead of Docker; it is a drop-in replacement for Docker with a compatible CLI.
For our first round, we'll start with the official Rust Docker image; installing and setting up Rust manually on a plain base image would take longer to build:
```dockerfile
# Dockerfile.1
ARG BASE_IMAGE=rust:1.70.0-slim-bullseye
FROM $BASE_IMAGE
WORKDIR app
COPY . .
RUN cargo build --release
CMD ["./target/release/hello"]
```
Now, we'll run the build using the following command. The `time` prefix is added to measure the execution time of the `podman build` process: it outputs the elapsed real time, user CPU time, and system CPU time taken by the command, giving us valuable insight into the build's performance:
```sh
time podman build -f Dockerfile.1 -t hello-1:0.1.0
```
Next, let's check what ended up inside the image; ideally, it shouldn't contain anything beyond what's needed to run the app:
```sh
# Run the image
$ podman run -p 7000:7000/tcp hello-1:0.1.0

# Check what is inside
$ podman exec -it container_name bash
root@2d3c937d40f0:/app# ls
Cargo.lock  Cargo.toml  src  target
```
Before exploring other methods, the crucial first step is to fine-tune the `.dockerignore` file. Docker doesn't use `.gitignore`; `.dockerignore` is the file Docker uses to exclude files unrelated to the build, such as documentation, changelogs, and the local target directory. Later, we'll employ multi-stage builds, where `.dockerignore` plays a minor role because only the binary enters the final image. Even so, skipping `.dockerignore` entirely can still hurt overall build time, since everything in the build context gets copied into the first build stage.
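As a starting point, a minimal `.dockerignore` for this project might look like the following sketch; the exact entries depend on what lives in your repository:

```
# .dockerignore — keep the build context small
target
.git
docs
CHANGELOG.md
README.md
```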
This approach produces a 1.12GB image and took 59 seconds to build:
```
154.76s user 15.64s system 280% cpu 1:00.84 total
162.36s user 15.93s system 299% cpu 59.557 total
173.79s user 17.33s system 300% cpu 1:03.53 total
```
For our second round, we’ll take advantage of Docker’s multi-stage builds to optimize the image size. Before multi-stage builds were available, we needed to remove and clean up the build resources manually to keep the Docker image small.
As the name suggests, this approach works with multiple stages of a build. We can selectively choose what to pick from the previous build, resulting in a smaller final image:
```dockerfile
# Dockerfile.2
ARG BASE_IMAGE=rust:1.70.0-slim-bullseye

FROM $BASE_IMAGE as builder
WORKDIR app
COPY . .
RUN cargo build --release

FROM $BASE_IMAGE
COPY --from=builder /app/target/release/hello /
CMD ["./hello"]
```
This approach produces an 838MB image and took 1 minute to build:
```
156.80s user 15.98s system 282% cpu 1:01.10 total
174.36s user 17.60s system 305% cpu 1:02.92 total
174.25s user 17.74s system 304% cpu 1:03.03 total
```
There is some improvement in the image size with this approach, but no reduction in build time.
Rust provides options to reduce the size of the binaries it produces. These are opt-in because, by default, Rust favors fast compilation and ease of debugging.
Stripping symbols from the binary, optimizing for size, and enabling Link Time Optimization (LTO) are some of the options to consider:
```toml
# Cargo.toml
[profile.release]
strip = true     # Automatically strip symbols from the binary.
opt-level = "z"  # Optimize for size.
```
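The measurements below use only the two settings above. If you also want to experiment with LTO, a more aggressive release profile could look like this sketch; note that `lto` and `codegen-units = 1` usually increase compile time in exchange for a smaller, faster binary, so they were deliberately left out of the timed builds:

```toml
# Sketch: a more aggressive release profile (not used in the measurements below)
[profile.release]
strip = true
opt-level = "z"
lto = true         # enable Link Time Optimization across crates
codegen-units = 1  # fewer codegen units allow more cross-crate optimization
```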
This approach produces an 832MB image and took 48 seconds to build. Interestingly, adding the stripping and size-optimization steps took less time than the previous approach, likely because `opt-level = "z"` applies fewer optimizations than the default release level of 3:
```
99.40s user 12.71s system 230% cpu 48.728 total
106.82s user 13.66s system 216% cpu 55.738 total
117.68s user 15.26s system 236% cpu 56.221 total
```
Although the Docker image is only 6MB smaller than with the previous approach, the entire reduction comes from the binary itself, which is a big win for such a small application.
In an effort to further improve build time and file size, let’s look at different image files that we can use.
First, let's try `scratch`, Docker's official minimal image, which is completely empty. The `scratch` image does not contain `glibc`, so we'll need to statically link against the `musl` C library to run the "Hello World!" app in the `scratch` image:
```dockerfile
# Dockerfile.3
FROM rust:1.70.0-alpine3.18 as builder
# fixes `cannot find crti.o`
RUN apk add musl-dev
WORKDIR app
COPY . .
RUN cargo build --release

FROM scratch as runtime
COPY --from=builder /app/target/release/hello /
CMD ["./hello"]
```
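If you want to confirm that the binary really is statically linked before shipping it to `scratch`, a quick check inside the builder stage looks something like this sketch; the exact output wording varies by toolchain, and `file` may need to be installed first on Alpine:

```sh
# Sketch: verifying static linking inside the builder image
apk add file                 # the `file` utility is not installed by default on Alpine
file target/release/hello    # should mention "statically linked"
ldd target/release/hello     # for a static binary, typically reports it is not a dynamic executable
```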
Here's the error we'll get if we try to build the app against `glibc` and run it in a `scratch` image:
```sh
$ podman run -p 7000:7000/tcp hello-3:0.1.0
{"msg":"exec container process (missing dynamic library?) `//./hello`: No such file or directory","level":"error"}
```
This approach produces an 878kB image and took 1 minute, 22 seconds to build:
```
178.83s user 28.14s system 244% cpu 1:24.67 total
170.81s user 27.31s system 239% cpu 1:22.73 total
167.39s user 27.25s system 225% cpu 1:26.16 total
```
This iteration produced a large improvement in image size, but the build took about 34 seconds longer than the previous approach.
Rather than using the bare-bones `scratch` image, which contains nothing at all, it's easier to use an Alpine image as the final base. Let's give the `alpine` image a try:
```dockerfile
# Dockerfile.4
FROM rust:1.70.0-alpine3.18 as builder
RUN apk add musl-dev
WORKDIR app
COPY . .
RUN cargo build --release

FROM alpine:3.18.0
COPY --from=builder /app/target/release/hello /
CMD ["./hello"]
```
This approach produces an 8.5MB image and took 1 minute, 23 seconds to build:
```
175.92s user 27.78s system 241% cpu 1:24.36 total
170.38s user 27.52s system 231% cpu 1:25.62 total
178.53s user 28.50s system 249% cpu 1:23.07 total
```
Next, let's try a distroless image as the runtime base. With distroless, we get a bigger image than with scratch or Alpine, but a faster build time:
```dockerfile
# Dockerfile.5
FROM rust:1.70.0-slim-bullseye as builder
WORKDIR app
COPY . .
RUN cargo build --release

FROM gcr.io/distroless/cc-debian11
COPY --from=builder /app/target/release/hello /
CMD ["./hello"]
```
This approach produces a 25.4MB image and took 45 seconds to build:
```
98.97s user 12.84s system 232% cpu 48.012 total
98.96s user 12.68s system 237% cpu 47.092 total
104.45s user 13.26s system 220% cpu 53.389 total
```
Another way to speed up the build is to use cargo-chef, which leverages the Docker layer cache so that dependencies are only recompiled when they actually change.
First, let’s try applying cargo-chef to the distroless image:
```dockerfile
# Dockerfile.6
FROM lukemathwalker/cargo-chef:latest-rust-1.70.0 as chef
WORKDIR app

FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json
COPY . .
RUN cargo build --release

FROM gcr.io/distroless/cc-debian11
COPY --from=builder /app/target/release/hello /
CMD ["./hello"]
```
This approach produces a 25.4MB image and took an astonishing 14 seconds to build:
```
16.07s user 1.80s system 121% cpu 14.651 total
16.02s user 1.83s system 120% cpu 14.802 total
16.24s user 1.77s system 121% cpu 14.851 total
```
By inspecting the build log, I can see that compiling the app is the only slow process. When I run the build command, I don’t see any dependencies being recompiled.
Now, let's try applying cargo-chef with the `scratch` image. This approach produces an 878kB image and took only 22 seconds to build:
```dockerfile
# Dockerfile.7
FROM lukemathwalker/cargo-chef:latest-rust-1.70.0-alpine3.18 as chef
WORKDIR app

FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json
COPY . .
RUN cargo build --release

FROM scratch
COPY --from=builder /app/target/release/hello /
CMD ["./hello"]
```

```
25.77s user 3.55s system 132% cpu 22.190 total
25.67s user 3.71s system 131% cpu 22.329 total
25.83s user 3.64s system 131% cpu 22.346 total
```
Below is a summary of the performance gains of the different approaches we’ve reviewed in this article.
| Approach | Image size | Build time |
| --- | --- | --- |
| Basic Dockerfile | 1.12GB | 59 seconds |
| Multi-stage builds | 838MB | 1 minute |
| Minimizing binary size | 832MB | 48 seconds |
| scratch image | 878kB | 1 minute, 22 seconds |
| alpine image | 8.5MB | 1 minute, 23 seconds |
| distroless image | 25.4MB | 45 seconds |
| cargo-chef with distroless image | 25.4MB | 14 seconds |
| cargo-chef with scratch image | 878kB | 22 seconds |
Of the options we reviewed, the best approach for getting a tiny image and a fast build is using `cargo-chef` with a `scratch` image. If you can't use `musl` due to the particular requirements of your application, opt for `cargo-chef` with a distroless image. Then, use SlimToolkit (previously DockerSlim) to minify your final image; in our case, it trimmed the distroless image by up to 33.07%.
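For reference, the basic SlimToolkit invocation looks roughly like the following sketch; the image tag is an assumption following the naming pattern used earlier, and SlimToolkit expects a Docker-compatible engine to be available:

```sh
# Sketch: minifying the distroless-based image with SlimToolkit
slim build hello-6:0.1.0
```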
Using a caching tool in CI, such as rust-cache or sccache, will greatly improve your Rust app's build time, since the cache restores the previous build's artifacts for the next build. In GitHub Actions, `rust-cache` is easier to set up and more commonly used.
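Here is a minimal sketch of a GitHub Actions job that uses `Swatinem/rust-cache`; the workflow and job names are illustrative, and you may want to add your own toolchain-setup step before the cache step:

```yaml
# Sketch: caching Cargo artifacts between CI runs with rust-cache
name: ci
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: Swatinem/rust-cache@v2   # restores and saves the Cargo caches and target directory
      - run: cargo build --release
      - run: cargo test
```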
Another strategy for speeding up app build time is to switch to a faster linker, such as `lld` or `mold`. `mold` is faster, but `lld` is more stable and mature. You can also consider splitting your application into smaller crates to further improve build time.
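As a rough illustration, switching the linker is usually done through `.cargo/config.toml`. The target triple and the use of `clang` as the linker driver below are assumptions you'd adjust for your platform, and `mold` (or `lld`) must already be installed:

```toml
# .cargo/config.toml — sketch of using mold as the linker on x86_64 Linux
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```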
Another opportunity for CI/CD optimization is the application testing phase. cargo-nextest claims up to 3x faster test execution than `cargo test`. For a simple "Hello World!" application, the difference is small, but it can have a significant impact on large applications, as shown on its benchmarks page.
Here, `cargo-nextest` is actually slower than a regular `cargo test`:
```
# cargo
0.09s user 0.04s system 98% cpu 0.137 total
0.09s user 0.03s system 98% cpu 0.123 total
0.10s user 0.04s system 98% cpu 0.143 total

# cargo-nextest
0.11s user 0.05s system 105% cpu 0.150 total
0.11s user 0.05s system 105% cpu 0.155 total
0.11s user 0.05s system 105% cpu 0.157 total
```
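If you want to try it, the switch is straightforward. Here is a minimal sketch; in CI, you'd typically install the binary with one of the tools discussed next rather than compiling it:

```sh
# Sketch: running the test suite with cargo-nextest instead of cargo test
cargo install cargo-nextest --locked
cargo nextest run
```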
Installing a binary crate in CI using `cargo install <app name>` is time consuming. We can speed up the installation by using cargo-binstall instead; it pulls a prebuilt binary directly and doesn't require any compilation.
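As a rough sketch, replacing `cargo install` with cargo-binstall looks like this; `cargo-nextest` is just an example crate, and `--no-confirm` skips the interactive prompt so it works in CI:

```sh
# Sketch: installing a binary crate without compiling it
cargo install cargo-binstall              # one-time bootstrap (or download its release binary)
cargo binstall cargo-nextest --no-confirm
```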
To speed up binary crate installation in GitHub Actions, use install-action. This is my personal favorite; most of the contributions I make to speed up GitHub Actions workflows come down to simply changing `cargo install` to `install-action`.
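For reference, here is a minimal sketch of an install-action step; `cargo-nextest` is again just an example of a tool you might install:

```yaml
# Sketch: installing a prebuilt tool in GitHub Actions with install-action
- uses: taiki-e/install-action@v2
  with:
    tool: cargo-nextest
- run: cargo nextest run
```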
In this article, we discussed different strategies for building the smallest and fastest Docker image for our Rust projects. The most effective method is using `cargo-chef` with a `scratch` image. We also reviewed other portions of the CI/CD process that can be optimized, such as testing, building the app, and binary crate installation.
I hope you enjoyed this article. If you have questions, please feel free to leave a comment.
Debugging Rust applications can be difficult, especially when users experience issues that are hard to reproduce. If you’re interested in monitoring and tracking the performance of your Rust apps, automatically surfacing errors, and tracking slow network requests and load time, try LogRocket.
LogRocket is like a DVR for web and mobile apps, recording literally everything that happens on your Rust application. Instead of guessing why problems happen, you can aggregate and report on what state your application was in when an issue occurred. LogRocket also monitors your app’s performance, reporting metrics like client CPU load, client memory usage, and more.
Modernize how you debug your Rust apps — start monitoring for free.