According to the 2021 Rust Survey, Rust’s long compile time is still a big concern for developers and an area for further improvement. Especially when it comes to large projects or crates with many dependencies, Rust’s focus on runtime performance over compile time performance becomes rather punishing, even in debug builds. This negatively impacts developer experience and is a reason why some developers are still unwilling to try Rust.
In any case, Rust’s build times will continue to be on the slower side for the foreseeable future, especially for larger projects. While there are some tweaks one can make to improve this situation, setting them up and keeping up to date with new developments, such as flags and configuration options for improved build times, is cumbersome.
In this article, we’ll explore Fleet, a build tool that is essentially a one-tool-solution for improving your Rust build times, both for local development and for CI/CD pipelines.
Fleet’s focus is on ease of use. Fleet doesn’t aim to reinvent the wheel and completely overhaul or restructure the way Rust builds work; rather, it wraps the existing build tools and bundles optimization tweaks into a configurable, intuitive tool that takes care of speeding up builds. It works on Linux, Windows, and Mac OS.
Unfortunately, at the time of writing, Fleet is still in beta and only supports nightly `rustc`. However, it is being actively developed, and moving it to the stable toolchain is on the short list of upcoming improvements. That said, if you don’t feel comfortable using Fleet right away, or your current project setup is incompatible with Fleet, there’s some good news: you can do most of the optimizations manually. Later in this article, we’ll go over them quickly and share some resources where you can learn more about them.
First, let’s start by learning how to install Fleet and use it in a project.
To install Fleet, you’ll need Rust installed on your machine. Once that’s taken care of, simply open your terminal and execute the respective installation script:
For Linux:

```shell
curl -L get.fleet.rs | sh
```

For Windows:

```shell
iwr -useb windows.fleet.rs | iex
```
Once that’s done, you can set up Fleet with one of four command line arguments:

- `-h` / `--help`: Print help information
- `-V` / `--version`: Print version information
- `build`: Build a Fleet project
- `run`: Run a Fleet project

You can check out the additional, optional arguments for `run` and `build` in the Fleet docs. These are somewhat similar to Cargo’s, but Fleet is not a 1:1 replacement, so be sure to check out the different configuration options if you have particular needs in terms of your project.
If you plan to benchmark build times with and without Fleet, be sure to run clean builds and keep caching and preloading in mind. While Fleet claims to be up to five times faster than Cargo on some builds, the actual performance gains for your project in terms of compilation speed will depend on many different factors, including the code you’re trying to compile and its dependencies, as well as your hardware: an SSD, a conventional hard disk, or WSL (Windows Subsystem for Linux).
In any case, if you currently feel that your project builds very slowly, install Fleet and give it a try to see if it improves the situation. In terms of setup, Fleet takes no time at all.
In addition to local development improvements, another important goal of Fleet is to improve CI/CD pipelines. If you’re interested in trying out Fleet for your automated builds, be sure to check out their docs on setting it up with GitHub for Linux and Windows.
At the time of writing this article, Fleet focuses on four different optimizations: Ramdisk, optimizing the build through settings, sccache, and a custom linker. You can find a short description in this GitHub ticket, but it’s likely that this list will change over time, especially as Fleet moves to stable and is further developed.
Let’s go over the different optimizations one-by-one and see what they actually do. The following will not be an extensive description, but rather a superficial overview of the different techniques with some tips and resources on how to use them. At the end of this article, there is also a link to a fantastic article describing how to manually improve compile times in Rust.
A Ramdisk, or Ramdrive, is essentially a block of RAM that’s used as if it were a hard disk, improving speed and, in some cases, putting less stress on the actual disk.
The idea of this optimization is to put the `/target` folder of your build onto a Ramdisk to speed up the build. If you already have an SSD, this will only marginally improve build times. But if you use WSL (Windows Subsystem for Linux) or a non-SSD hard disk, a Ramdisk has the potential to massively improve performance.
There are plenty of tutorials on how to create Ramdisks for the different operating systems, but as a starting point, you can use the following two articles on Mac OS and on Linux.
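To sketch the idea manually on Linux without root privileges, you can point Cargo’s target directory at `/dev/shm`, which is a RAM-backed tmpfs on most distributions. The project name below is a hypothetical placeholder:

```shell
# Place Cargo's target directory on the RAM-backed /dev/shm tmpfs and
# leave a symlink behind so tooling still finds ./target.
# "my-project" is a hypothetical placeholder name.
RAMDIR=/dev/shm/my-project-target
mkdir -p "$RAMDIR"
rm -rf target            # safe to discard: a clean build follows anyway
ln -s "$RAMDIR" target

# Confirm the directory really lives on tmpfs
df -T "$RAMDIR"
```

Note that tmpfs contents vanish on reboot, which is acceptable for build artifacts. Alternatively, setting the `CARGO_TARGET_DIR` environment variable to the same path achieves the effect without a symlink.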
Fleet manipulates the build configuration by using compiler options and flags to boost performance.
One example of this is increasing `codegen-units`. This essentially increases parallelism in LLVM when it comes to compiling your code, but it comes at the potential cost of runtime performance. This is usually not an issue for debug builds, where developer experience and faster builds are important, but it definitely matters for release builds. You can read more about this flag in the docs.
Setting `codegen-units` manually is rather easy; just add it to the `rustflags` in your `~/.cargo/config.toml`:

```toml
[target.x86_64-unknown-linux-gnu]
rustflags = ["-C", "codegen-units=256"]
```
However, as mentioned above, you should definitely override this back to `1` for release builds.
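One way to keep the fast-build value out of your optimized binaries is to set `codegen-units` per profile in your project’s `Cargo.toml` rather than globally in `rustflags`; a minimal sketch:

```toml
# Cargo.toml — per-profile settings, so the high codegen-units value
# used for fast debug builds never leaks into release binaries
[profile.dev]
codegen-units = 256

[profile.release]
codegen-units = 1
```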
Another option is to lower the optimization level for your debug builds. While this means that the runtime performance will suffer, the compiler has less work to do, which is usually what you want for iterating on your codebase. However, there might be exceptions to this; you can read more about optimization levels in the docs.
To set the optimization level to the lowest possible setting, add the code below to your `~/.cargo/config.toml` file:

```toml
[target.x86_64-unknown-linux-gnu]
rustflags = ["-C", "opt-level=0"]
```
Again, be sure to only set this for debug builds and not for release builds. You wouldn’t want to have entirely unoptimized code in your production binary.
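As with `codegen-units`, a per-profile entry in your project’s `Cargo.toml` keeps this confined to debug builds; note that `opt-level = 0` is already Cargo’s default for the dev profile, so this sketch mainly makes the intent explicit:

```toml
# Cargo.toml
[profile.dev]
opt-level = 0   # fastest compiles, slowest runtime

[profile.release]
opt-level = 3   # full optimizations for production builds
```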
For lower optimization levels, as mentioned, you can try adding the `share-generics` flag, which enables the sharing of generics between multiple crates in your project, potentially saving the compiler from doing duplicate work. Note that, as a `-Z` flag, it requires the nightly toolchain.

For example, on Linux, you could add this to your `~/.cargo/config.toml`:

```toml
[target.x86_64-unknown-linux-gnu]
rustflags = ["-Z", "share-generics=y"]
```
The next optimization is using Mozilla’s sccache. Sccache is a compiler-caching tool: it attempts to cache compilation results, for example across projects or crates, storing them on disk, either locally or in cloud storage.
This is particularly useful if you have several projects with many and sometimes large dependencies. Caching the results of compiling these different projects can prevent the compiler from duplicating work.
Especially in the context of CI/CD-pipelines, where builds are usually executed in the context of a freshly spawned instance or container without any locally existing cache, cloud-backed sccache can drastically improve build times. Every time a build runs, the cache is updated and can be reused by subsequent builds.
Fleet seamlessly introduces sccache into its builds, but doing this manually is not particularly difficult either. Simply follow the instructions for installation and usage for sccache.
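As a sketch of the manual setup, assuming sccache is already installed and on your `PATH` (e.g. via `cargo install sccache`), you can register it as the compiler wrapper in `~/.cargo/config.toml`:

```toml
# ~/.cargo/config.toml
# Assumes the sccache binary is installed and on your PATH
[build]
rustc-wrapper = "sccache"
```

Alternatively, setting the `RUSTC_WRAPPER=sccache` environment variable achieves the same thing, and `sccache --show-stats` lets you check your cache hit rate.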
Finally, Fleet also configures and uses a custom linker to improve build performance. Especially for large projects with deep dependency trees, the compiler spends a lot of time linking. In these cases, using the fastest possible linker can greatly improve compilation times.
The list below includes the linker Fleet uses for each operating system:

- Linux: `clang` + `lld` (though Linux may potentially use `mold` soon)
- Windows: `rust-lld.exe`
- Mac OS: `zld`
Configuring a custom linker is not particularly difficult. Essentially, it boils down to installing the linker and then configuring Cargo to use it. For example, using `zld` on Mac OS can be implemented by adding the following config to your `~/.cargo/config`:

```toml
[target.x86_64-apple-darwin]
rustflags = ["-C", "link-arg=-fuse-ld=<path to zld>"]
```
On Linux, `lld` or `mold` are the best choices for Rust. Fleet doesn’t use `mold` yet due to license issues, but you can use it in your build locally by simply following the steps for Rust in the mold docs.
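As a sketch of the manual setup on Linux, assuming `clang` and `mold` are both installed, you can tell Cargo to link through `clang` with `mold` as the actual linker:

```toml
# ~/.cargo/config.toml
# Assumes clang and mold are installed on the system
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```

The mold docs also describe wrapping a single build with `mold -run cargo build` instead of changing the config.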
After this short overview, another fantastic resource for improving your build times if you’re reluctant to use Fleet at this point is Matthias Endler’s blog post about the topic.
Fleet has great potential, especially for developers who do not enjoy fussing around with build pipelines or build processes in general. It provides a powerful, all-in-one package of multi-platform and multi-environment optimizations for build speed, so it’s well worth a try if you’re struggling with build times.
Beyond that, we touched on some of the optimizations Fleet performs in the background and how those will help alleviate your compile-speed pain if you’re willing to put in a little time to figure out what they do and how to introduce them in your setup.
That said, oftentimes the reason behind slow build times is that a project depends on many, or very large, crates.
Managing your dependencies well and with a minimalist mindset, meaning introducing only the minimal version of whatever you need, or building the required functionality from scratch instead of adding an existing crate, will not only keep your build times low, but also reduce complexity and improve the maintainability of your code.
Debugging Rust applications can be difficult, especially when users experience issues that are hard to reproduce. If you’re interested in monitoring and tracking the performance of your Rust apps, automatically surfacing errors, and tracking slow network requests and load time, try LogRocket.
LogRocket is like a DVR for web and mobile apps, recording literally everything that happens on your Rust application. Instead of guessing why problems happen, you can aggregate and report on what state your application was in when an issue occurred. LogRocket also monitors your app’s performance, reporting metrics like client CPU load, client memory usage, and more.
Modernize how you debug your Rust apps — start monitoring for free.