Thomas Eizinger I like to think about systems as a whole instead of just dealing with the front or backend. Quality matters a lot to me, concerning both the end product and the engineering process that leads to it.

How to use the Rust compiler as your integration testing framework


When I started developing in Rust 2.5 years ago, I was fascinated by how often my program would just work once it compiled. The Rust compiler is known for being a bit pedantic, which can be quite frustrating in the beginning. Over time, though, I actually learned to love this aspect. These days, I often find myself deliberately thinking about how I can use Rust’s type system in clever ways that allow me to catch or prevent bugs early on in the development cycle.

In this guide, I’ll share some of the knowledge I’ve built up on this topic and present some techniques to make integration testing almost redundant. Ultimately, the question is: Could “It compiles, let’s ship it!” actually be true for Rust?

Reviewing some of Rust’s features

As a starting point, let’s talk about some select features of Rust and its compiler.

Strong types

Rust is strongly typed. Strong typing means that functions clearly express the types you’re allowed to pass to them. The program fails to compile if you’re trying to pass a type that’s not compatible.

Strong typing is the fundamental building block of the idea I’m going to outline in this guide. It allows the compiler to point out situations in which we contradict ourselves in what we expressed. The program will only compile if all parts of the code are in harmony.


Monads
Rust’s standard library contains several types that are instances of monads, such as Result, Future, and Option. Unlike Haskell, though, Rust doesn’t have a way to abstract over all monad instances. Nevertheless, these types are still monads, which means we can abstract over them individually.

For example, the design choice to include the Option monad within the standard library for modeling empty states allows programs to abstract over the notion of empty state in general. In other words, you can write code that operates on Option<T> without having to be specific about what T is. The same is true for Result in the context of error handling and Future in the context of async computation.
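As a minimal sketch of this idea (the helper names here are illustrative, not from the standard library), we can write functions that are generic over the "maybe empty" shape of Option<T> without caring what T is, and use the same combinator style on Result:

```rust
// Generic over T: the "empty state" logic works for any payload type.
fn or_fallback<T>(value: Option<T>, fallback: T) -> T {
    value.unwrap_or(fallback)
}

// The same combinator style on Result, specialised to error handling.
fn describe(result: Result<i32, String>) -> String {
    result
        .map(|n| format!("got {n}"))
        .unwrap_or_else(|e| format!("failed: {e}"))
}
```

`or_fallback(Some(5), 0)` yields `5` for integers just as `or_fallback(None, "x")` yields `"x"` for strings — the empty-state handling is written once, independently of the payload type.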

The use of monads for these problems removes the need for language features such as null pointers and exceptions. As we’ll see later, not having these two features greatly improves the robustness of Rust programs.

Tagged unions

Rust’s enums are implemented as tagged unions. The official name aside, the important point is that, in addition to plain C-like variants, Rust’s enums can store data with each variant.

Take the Result type from the standard library, for instance:


pub enum Result<T, E> {
    Ok(T),
    Err(E),
}

Believe it or not, this is just an enum. No fancy language features whatsoever.

When defining an enum, you can choose between tuple variants and struct variants. In other words, you can choose to access the fields within a variant by index (.0, .1) or by name (.foo, .bar).
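An illustrative enum (the names here are made up for the example) can mix both styles:

```rust
enum Shape {
    // Tuple variant: fields are positional when destructured
    Circle(f64),
    // Struct variant: fields are accessed by name
    Rectangle { width: f64, height: f64 },
}

fn area(shape: &Shape) -> f64 {
    match shape {
        Shape::Circle(radius) => std::f64::consts::PI * radius * radius,
        Shape::Rectangle { width, height } => width * height,
    }
}
```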

Tagged unions are powerful because they allow you to easily model orthogonal aspects of a domain alongside relevant data.

Sticking to the example of Result, let’s say a function returns Result::Err(error). Not only do we communicate to our caller that the function failed, but we also communicate how by returning the error inside the Err variant. At the same time, Ok and Err are orthogonal. The variable/slot for an error simply doesn’t exist in the Ok case.

Exhaustiveness checks

A feature that plays well with tagged unions is exhaustiveness checks. To handle an enum in Rust, you have to match on it. Every time you match on an enum, the compiler makes sure you’re handling all the variants. Not handling a variant is a compile-time error unless you explicitly opt into this behavior using _ => ....
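A small sketch of this guarantee (the enum is hypothetical): the match below must cover every variant, or the program will not compile.

```rust
enum PaymentStatus {
    Pending,
    Settled,
    Refunded,
}

fn label(status: &PaymentStatus) -> &'static str {
    // Adding a fourth variant to PaymentStatus later turns this match
    // into a compile-time error until the new variant is handled here.
    match status {
        PaymentStatus::Pending => "pending",
        PaymentStatus::Settled => "settled",
        PaymentStatus::Refunded => "refunded",
    }
}
```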


The #[must_use] attribute
Types and functions can be annotated with #[must_use] in Rust. For a function, this makes the compiler emit a warning if a caller is not assigning the return value to a variable. For types, the warning is emitted every time the type is returned from a function and a caller is not using it.

Together with the on-by-default warning of unused variables, this is another kind of exhaustiveness checking. The compiler makes sure you don’t forget about variables and important return values.

For example, Result is annotated with #[must_use]. Not assigning or matching on a return value of type Result will emit a warning. This prevents you from forgetting about a potential error case.
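You can apply the same attribute to your own functions. In this hypothetical example, any caller that ignores the return value gets a compiler warning carrying the given message:

```rust
#[must_use = "the new balance must be stored, otherwise the charge is lost"]
fn apply_charge(balance_cents: i64, amount_cents: i64) -> i64 {
    balance_cents - amount_cents
}
```

Writing `apply_charge(10_000, 2_500);` as a bare statement would trigger the warning; binding the result with `let new_balance = apply_charge(10_000, 2_500);` satisfies it.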

Moving toward zero integration tests

We’ve discussed how some of Rust’s language features help us avoid forgetting about errors and properly handle them. But what if you encounter a logic bug? For example, let’s say you accidentally used < instead of > for a comparison.

Decision logic code like this can typically only be checked at runtime because it often depends on user input. To make sure you’ve got those things right, it’s a good idea to write tests.

But how exactly should we test it, and how can Rust and its compiler help us here?

Integration tests vs. unit tests

In my experience, there’s no general agreement on what exactly a unit test is. I personally recommend Ian Cooper’s talk on TDD for more on this topic.

For the sake of this tutorial, I’ll go with the following definitions:

  • Integration tests ensure that multiple modules work correctly together
  • Unit tests ensure that an individual module works correctly in isolation

Newtypes, newtypes, newtypes

Declaring types in Rust is cheap. They are effectively erased at runtime, so there is no runtime-performance overhead from having more of them.

It’s a good idea to take advantage of this through newtypes. A newtype is, by convention, a type definition that wraps exactly one other type. For example, if you have semantically different IDs in your system, modeling them as different types is like saying to the compiler, “Please warn me if I mix any of those up!”

struct UserId(Uuid);

struct OrderId(Uuid);

Passing a UserId where an OrderId is expected will generate a compilation error thanks to Rust having strong typing. This, of course, is only true if you parsed and validated the ID correctly when it entered your system — i.e., when it was read from the database or from an HTTP request.
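A dependency-free sketch of the mix-up protection (u64 stands in for Uuid here, and describe_order is a hypothetical function that only accepts an OrderId):

```rust
struct UserId(u64);
struct OrderId(u64);

fn describe_order(order: &OrderId) -> String {
    format!("order #{}", order.0)
}

// describe_order(&UserId(7)) would fail to compile:
// expected `&OrderId`, found `&UserId`
```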

Parse and validate early

Early, in this case, refers to early in the program’s data-flow hierarchy. What you want to avoid is carrying around data types that allow for more values than are actually permitted per the business rules of your software — for example, a String representing “MasterCard” or “VISA” if those two are the only possible credit card brands that users can use to make payments in your system.

Instead, you should create an enum that represents the valid range and parse the string into this enum early on.
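For the credit card example above, a sketch of that parsing step (the enum and error type are illustrative) could implement FromStr so the conversion happens once, at the system boundary:

```rust
use std::str::FromStr;

#[derive(Debug, PartialEq)]
enum CardBrand {
    MasterCard,
    Visa,
}

impl FromStr for CardBrand {
    type Err = String;

    // Parse once at the boundary; everything downstream can rely on
    // the value being one of the two valid brands.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "MasterCard" => Ok(CardBrand::MasterCard),
            "VISA" => Ok(CardBrand::Visa),
            other => Err(format!("unknown card brand: {other}")),
        }
    }
}
```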

The same is true for other invariants, such as number ranges. Let’s say a given number must be between 1 and 100. The closest primitive data type we can use to embed the invariant “must be between 1 and 100” is a u8. But a u8 also allows the value 0 as well as everything from 101 to 255.

We can create our own type that enforces these invariants.

struct Percent(u8);

impl Percent {
    fn new(value: u8) -> anyhow::Result<Self> {
        if value > 100 {
            anyhow::bail!("number is too big!")
        }

        if value == 0 {
            anyhow::bail!("number is zero!")
        }

        Ok(Self(value))
    }
}

Because the field within Percent is private, the only way to construct an instance of Percent is through the new function.

As a result, the usage of Percent in any type signature within the program guarantees that if we get to this stage, the value within Percent has been checked against those invariants. Using data privacy and encapsulation allows us to embed invariants within the type system.

Separate the act of making decisions from their execution

To achieve our goal of not writing integration tests, we have to accept another constraint on how we design our software: a function should either make decisions or execute them, but not both.

The reasoning is that executing a decision likely involves other modules, whereas making a decision likely involves just data.

For example, a decision could be, “This credit card should be charged with $110.” The module making this decision can and should be a very different one from the module that executes it. Execution will likely involve rather boring code, like talking to the API of a payment processor. Making that decision, on the other hand, is what is often termed the “business logic,” or “core logic,” of a system.

How can we achieve this split? To quote David Wheeler, “All problems in computer science can be solved by another level of indirection.”

We can separate the acts of making decisions and executing them by adding a level of indirection. More concretely, instead of acting directly on a decision, we can simply model what happened or is supposed to happen.

Recall Rust’s enums, aka tagged unions? They’re a great fit for modeling the outcome of a decision. Common terminology here also includes “event” or “action.”

enum Action {
    ChargeCreditCard {
        credit_card: CardDetails,
        amount: Amount,
    },
}
With this level of indirection, we can create a module that encapsulates the decision logic of our application. This module also lends itself nicely to being unit tested. There’s nothing easier than throwing some data at a function and asserting that it returns the correct variant of an enum! Assuming that our caller reacts appropriately to receiving such an action, the system should work correctly.
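A sketch of what such a decision module and its unit test could look like (the types are simplified stand-ins for the richer ones above — a String for the card and an integer amount in cents):

```rust
#[derive(Debug, PartialEq)]
enum Action {
    ChargeCreditCard { credit_card: String, amount_cents: u64 },
    DoNothing,
}

// Pure decision logic: data in, Action out, no side effects.
fn decide(credit_card: String, amount_owed_cents: u64) -> Action {
    if amount_owed_cents > 0 {
        Action::ChargeCreditCard {
            credit_card,
            amount_cents: amount_owed_cents,
        }
    } else {
        Action::DoNothing
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn charges_when_money_is_owed() {
        let action = decide("test-card".to_string(), 11_000);
        assert!(matches!(action, Action::ChargeCreditCard { .. }));
    }
}
```

Because decide touches no I/O, the test needs no mocks or setup: feed in data, assert on the variant that comes back out.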

Integrating modules

Let’s put ourselves in the shoes of the caller that is being returned an Action like the one above. What do we do with it? To handle enums in Rust, we have to match on them.

match action {
    Action::ChargeCreditCard { credit_card, amount } => {
        payment_processor_client.charge_credit_card(credit_card, amount);
    }
}
What is interesting is the mapping between the enum variant and the function we are calling. They use the same terminology and the data is simply passed on. But that’s it in terms of “integration” work.

Putting it all together

If you’ve been following along closely, you might already see where this is going:

  • Splitting our code into decision and execution allows you to unit test your decision code — in this case, verifying that it emits the correct variant of Action
  • Rust’s exhaustive matching ensures that you handle every variant of Action
  • The lint against unused variables ensures that you don’t forget about data that you are being passed
  • Rust’s strong typing ensures that what you pass to the charge_credit_card function is compatible
  • The #[must_use] attribute ensures that you’re not simply dropping the Action altogether
  • Parsing and validating data early ensures that you’re not dealing with invalid data, so the decision module can assume that certain invariants of the data hold
  • The use of monadic types such as Result ensures that you deal with every error along the way. If exceptions were the primary error-handling mechanism, your program could easily abort unintentionally due to an uncaught exception

Individually, these points are not that exciting, but when combined they pack a potent punch. Overall, there is a fairly low risk of writing buggy integration code here. I would even go so far as to say the risk is so low that the effort of writing and maintaining integration tests isn’t worth it. In my experience, that’s especially true for integration tests because they tend to be long with lots of setup code, quirky mocks, and often questionable assertions.

Instead, I would recommend having more end-to-end tests on an abstraction level that isn’t going to change any time soon, such as the HTTP API of a service. Such end-to-end tests, paired with unit tests on a module level and good use of the type system for integration, can make for a pretty easily maintainable code base.


Conclusion
In this guide, we explored some specific features of Rust today — namely, its enums and the enforced exhaustive matching. Paired with good use of newtypes and some indirection sprinkled here and there, it’s relatively easy to bundle most of your application’s crucial logic into a module that can be neatly tested in isolation and, at the same time, have very low risk in the integration code with other modules.

“It compiles, let’s ship it!” doesn’t quite apply to Rust. A better way to put it is: “It compiles without warnings and my unit tests pass, so let’s ship it!”

