Editor’s note: This guide on organizing your Rust tests was last updated by Abhinav Anshul on 16 June 2023 to reflect recent changes to Rust. This update also includes new sections on the different types of Rust tests, a more detailed look at how to organize them, and the benefits of failing tests. Before you get started, make sure you’re up to speed with Rust.
Whenever you write any kind of code, it’s critical to put it to the test. In this guide, we’ll walk you through how to test Rust code. But, before we get to that, I want to explain why it’s so important to test. To put it plainly, code has bugs. This unfortunate truth was uncovered by the earliest programmers, and it continues to vex programmers to this day. Our only hope is to find and fix the bugs before we bring our code to production.
Testing is a cheap and easy way to find bugs. The great thing about unit tests is that they are inexpensive to set up and can be rerun at a modest cost. Think of testing as a game of hide-and-seek on a playground after dark. You could bring a flashlight, which is highly portable and durable but only illuminates a small area at any given time.
You could even combine it with a motor to rotate the light to reveal more random spots. Or, you could bring a large industrial lamp, which would be heavy to lug around, difficult to set up, and more temporary, but it would light up half the playground on its own. Even so, there would still be some dark corners.
Rust has a built-in test harness. This means the code required for running tests comes bundled with the Rust toolchain in the default Rust installation. Rust tests run with the cargo test command, which allows us to write and run three types of tests.
Unit tests are used to test a specific part of the code and ensure that it behaves as expected. These tests are also used in test-driven development. Here’s an example of a unit test:
fn add(left: usize, right: usize) -> usize { // <-- code that needs to be tested
    left + right
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_works() { // <-- actual test
        let result = add(2, 2);
        assert_eq!(result, 4);
    }
}
Unit tests mostly exercise functions or methods by providing inputs and checking the output against an expected value. They also serve as regression tests, for example, to catch unintended changes in a function’s results.
As the name suggests, integration tests exercise the public API of a service or crate. These tests are used to test the functionality of a program as a whole. Unlike unit tests, which live in the same file as the code they test, integration tests are placed in separate files. Additionally, integration tests are useful for ensuring the public API doesn’t change and the expected usage of the program is not broken. These tests reside in the tests folder in the root of the crate, as shown below:
// file -> tests/check_sum.rs
#[test]
fn check_sum() {
    assert_eq!(test_example::add(21, 21), 42); // <--- only public APIs are available
}
Doc tests are similar to integration tests in that they verify that the examples in the API documentation are correct and don’t break over time. Doc tests are mostly used in library crates, and almost every popular crate includes them. These tests serve a dual purpose: they act as usage examples in the documentation and as tests of the most common usage.
/// Function to add two numbers
///
/// ```
/// # use test_example::add;
/// let result = add(2, 2);
/// assert_eq!(result, 4);
/// ```
pub fn add(left: usize, right: usize) -> usize {
    left + right
}
This doc test is from the std::iter::Iterator trait. Here, the test shows the most common usage of the map method on Iterator:
///
/// [`for`]: ../../book/ch03-05-control-flow.html#looping-through-a-collection-with-for
///
/// # Examples
///
/// Basic usage:
///
/// ```
/// let a = [1, 2, 3];
///
/// let mut iter = a.iter().map(|x| 2 * x);
///
/// assert_eq!(iter.next(), Some(2)); // <-- test the expected output against the actual output
/// assert_eq!(iter.next(), Some(4));
/// assert_eq!(iter.next(), Some(6));
/// assert_eq!(iter.next(), None);
/// ```
///
/// If you're doing some sort of side effect, prefer [`for`] to `map()`:

Organizing your Rust tests
Tests are a very important part of software development: they force developers to write APIs that are easier to test, which in turn tend to be easier to read and use. The kinds of tests you need, and how many, depend on what the program does and who uses it. If you are writing a library, doc tests and integration tests are important, including tests for the error cases. If you are writing a program with complex logic, unit tests are very important, along with fuzz tests to ensure edge cases aren’t missed. Things to consider when organizing your tests include the complexity of your program, its end users, how critical incorrect behavior would be, and the type of program.
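To make this concrete, here is a common (purely conventional) layout showing where each kind of test usually lives in a crate; the file names are placeholders:

my_crate/
    src/
        lib.rs          <-- unit tests live here, inside a #[cfg(test)] mod tests
    tests/
        check_sum.rs    <-- integration tests; one file per module is a good convention
    examples/
        basic.rs        <-- example programs, run with cargo run --example basic
    Cargo.toml

Doc tests live directly in the /// and //! comments inside src/.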
There is a whole spectrum of test methodologies, from test-driven design, to full-coverage testing, to the test-whenever-you-feel-like-it approach. I won’t judge you no matter what your preference, but I’ll note that my testing strategy depends heavily on the task at hand.
If the function under review has preconditions that must be true for all possible inputs or post-conditions that must hold when the function returns, it’s acceptable to assert those conditions within the method, provided checking the condition is not prohibitively expensive in terms of time and/or memory.
In some cases, a debug_assert! will be OK. Doing this is usually a win because those assertions will be checked whether the code is under test or not. You can always remove them later once you no longer need them. Consider them part of the scaffolding to build a program. Tests usually know their outputs and can assert the exact equivalence. That said, it’s sometimes OK to just have the code under test check itself.
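Here is a minimal sketch of that idea; the average function and its invariants are made up for illustration:

/// Returns the average of a non-empty slice.
fn average(values: &[f64]) -> f64 {
    // Precondition: checked in debug and test builds, compiled out in release builds.
    debug_assert!(!values.is_empty(), "average() requires a non-empty slice");
    let avg = values.iter().sum::<f64>() / values.len() as f64;
    // Postcondition: the average lies between the minimum and maximum input
    // (ignoring NaN, which this sketch does not handle).
    debug_assert!(avg >= values.iter().cloned().fold(f64::INFINITY, f64::min));
    debug_assert!(avg <= values.iter().cloned().fold(f64::NEG_INFINITY, f64::max));
    avg
}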
Your first round of testing should usually consist of doctests for the happy paths of your API. It’s often acceptable to not assert anything, provided the code has sufficient self-checks. It’s OK to unwrap() or use the ? operator here.
A little-known feature of doctests is that you can still omit the main method if the last line is Ok::<_, T>(()) (for some type T). For example, my aleph-alpha-tokenizer crate has the following module-level doctest:
//! ```
//!# use std::error::Error;
//! use aleph_alpha_tokenizer::AlephAlphaTokenizer;
//!
//! let source_text = "Ein interessantes Beispiel";
//! let tokenizer = AlephAlphaTokenizer::from_vocab("vocab.txt")?;
//! let mut ids: Vec<i64> = Vec::new();
//! let mut ranges = Vec::new();
//! tokenizer.tokens_into(source_text, &mut ids, &mut ranges, None);
//! for (id, range) in ids.iter().zip(ranges.iter()) {
//!     let _token_source = &source_text[range.clone()];
//!     let _token_text = tokenizer.text_of(*id);
//!     let _is_special = tokenizer.is_special(*id);
//!     // etc.
//! }
//!# Ok::<_, Box<dyn Error + Send + Sync>>(())
//! ```
We see a few things here:
- Module-level doc comments start with //! to refer to the outer scope. Other item comments usually start with /// to refer to the item below
- You can add a # hash mark to the line prefix to have rustdoc omit the line when rendering the example
- The doctest's last line is Ok(()) with an explicit error type ascribed via the famous turbofish syntax

Having doctests for all public APIs is a very good start. It proves both that the methods work as intended in the non-error case and that the API is at least somewhat usable. To return to our hide-and-seek analogy, it’s a very handy flashlight. You’ll only see a small area of the playground, but it’s bright and in good detail.
It can also be useful, especially for library crates, to provide example programs that show typical usage. Those go into the examples subdirectory, and while they are not executed by cargo test, they can be executed via cargo run --example <name>. Those examples can be fully fledged and are especially helpful for libraries to give possible users a good starting point.
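For instance, a minimal example program for the hypothetical test_example crate from earlier could look like this:

// examples/basic.rs -- run with `cargo run --example basic`
use test_example::add;

fn main() {
    let sum = add(2, 2);
    println!("2 + 2 = {sum}");
}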
Depending on the required level of assurance, this may be far from enough. Bugs tend to lurk in the various error cases. So, it makes sense to add tests for all error classes that may happen to an extent that is economical.
For example, for a parser, I would write a test to verify that it behaves well with empty inputs. Those tests add nothing to the documentation, so instead of a doctest, I would add a test method. Here’s what that looks like:
#[test]
#[should_panic(expected = "empty input")]
fn empty_input() {
    parse("").unwrap();
}
The expected parameter takes a substring of the expected panic message. Without it, any panic is deemed a successful test, so you won’t be notified when your test breaks. Note that should_panic doesn’t work with Result-returning test functions. In the context of our playground game, this is a spotlight pointing toward the bushes.
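If you would rather assert on the error value itself than on a panic, you can use a plain test that inspects the Err. This is a minimal sketch, assuming the same hypothetical parse function with an error type that implements Display:

#[test]
fn empty_input_is_an_error() {
    // expect_err fails the test if parse unexpectedly succeeds
    let err = parse("").expect_err("empty input should not parse");
    assert!(err.to_string().contains("empty input"));
}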
In this case, the parse method belongs to the public interface, so the test is a black-box test, meaning it only uses the public API of your crate. Black-box tests usually belong in one or more files in the tests subdirectory of your crate. A good convention is to have one test file per module to make it easy to find the corresponding tests.
Sometimes, it makes sense to test private functionality to better pinpoint a bug or regression. Those tests are called white-box tests. Because they need to access the crate internals, they must be defined within the crate. The best practice is to include a test submodule directly in your crate and only compile it under test, like so:
#[cfg(test)]
mod test {
    use super::{parse_inner, check};

    #[test]
    fn test_parse_inner() { .. }

    #[test]
    fn test_check() { .. }
}
You might even want to verify that something doesn’t compile. There are two crates to enable that. The first one, called compiletest, is part of the testing infrastructure of the Rust compiler and is maintained by the Rust developers.
With UI testing, you simply write a Rust program that should fail to compile, and compiletest runs the compiler, writing the error output into a .stderr file per test file. If the error output changes, the test fails. The Rustc dev guide has good setup documentation. For example, my compact_arena crate has a number of UI tests, one of which I’ll reproduce below:
use compact_arena::mk_nano_arena;

fn main() {
    mk_nano_arena!(a);
    mk_nano_arena!(b);
    let i = a.add(1usize);
    let j = b.add(1usize);
    let _ = b[i];
}
Running the test will create a .stderr file in the target/debug/ui_tests crate subdirectories (you can configure this). Copying those files next to the test programs will make the test pass as long as the compiler output stays the same. That means it will fail when error messages are improved, which happens quite often.
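For completeness, the harness that drives such UI tests from cargo test looks roughly like the following. This is a sketch assuming the compiletest_rs crate as a dev-dependency; the Config fields shown follow its README and may differ between versions:

// tests/ui.rs -- hypothetical harness wiring compiletest into `cargo test`
use std::path::PathBuf;

fn run_mode(mode: &'static str) {
    let mut config = compiletest_rs::Config::default();
    config.mode = mode.parse().expect("invalid mode");
    // Directory that contains the test programs and their .stderr files
    config.src_base = PathBuf::from(format!("tests/{}", mode));
    compiletest_rs::run_tests(&config);
}

#[test]
fn ui() {
    run_mode("ui");
}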
Incidentally, when trying this out with a new Rustc, all the tests failed due to improved diagnostics, which motivated me to port the tests to compile_fail tests. Those embed matchers for the various error, warning, and help messages as comments, which makes them a bit more resilient against changes. The above test would look like this as a compile test:
use compact_arena::mk_nano_arena;

fn main() {
    mk_nano_arena!(a);
    mk_nano_arena!(b);
    //~^ ERROR `tag` does not live long enough [E0597]
    let i = a.add(1usize);
    let j = b.add(1usize);
    let _ = b[i];
}
Each matcher must start with //~, optionally followed by any number of ^s (each one moves the expected line of the error message up by one) and a substring of the actual error message. Again, you can read more about this in the Rustc dev guide.
The second crate, called trybuild, is younger and smaller. Written by the unstoppable David Tolnay, it’s quite similar to the UI tests in compiletest, but it only prints the expected and actual output, not the difference.
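Using it is a single integration test that points at the files that must fail to compile; TestCases::new and compile_fail are trybuild's documented entry points, and the tests/ui path is just a convention:

// tests/compiletest.rs
#[test]
fn ui() {
    let t = trybuild::TestCases::new();
    // Every file matching the glob must fail to compile, and its error
    // output must match the neighboring .stderr file.
    t.compile_fail("tests/ui/*.rs");
}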
Note that doctests can also be marked compile_fail (optionally, with an error message substring), but I’d only use that if making some things unrepresentable in working code is one of the main selling points of your crate.
Keep in mind that failing tests can be extremely helpful. First, they help ensure the public API is not broken: errors are also part of the public API, and they should carry sensible messages and enough context to debug the problem. Failing tests also help ensure that new error messages aren’t introduced unnoticed as the code changes, and they document common errors and their causes.
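A sketch of what such a test might look like, again with the hypothetical parse function and a made-up message fragment:

#[test]
fn error_message_stays_stable() {
    let err = parse("(unclosed").unwrap_err();
    let message = err.to_string();
    // This substring is treated as part of the public API;
    // if a refactor changes it, this test fails and flags the change.
    assert!(message.contains("unexpected end of input"));
}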
Snapshot tests are tests for setup-kind behavior where you call some method(s) to set up some data and compare it against a static representation of the expected result. The hardest part of writing snapshot tests is setting up the expected result. However, you don’t need to do this work by hand; you can let the test harness do it for you. If your type under test implements Default, you can simply compare it with YourType::default() and copy the difference.
Note that this requires the Debug output of your type to be representative of the type in the actual code. If that is impossible, you can use insta, which offers a convenient way to do snapshot tests. The crate uses cargo-insta, a cargo subcommand, to review snapshot tests, making it extremely easy to check the results.
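As a rough sketch of what that looks like with insta (the Config type here is made up for illustration; assert_debug_snapshot! is insta's macro for snapshotting a value's Debug output):

#[test]
fn default_config_snapshot() {
    let config = Config::default();
    // The first run records the Debug representation as a snapshot file;
    // later runs compare against it, and `cargo insta review` lets you
    // accept or reject any differences.
    insta::assert_debug_snapshot!(config);
}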
Now that we’ve spotlighted a few different areas, it’s time to bring in the big guns. Remember that motor we talked about that rotates the flashlight to various spots on the playground? That’s our property test. The idea is to define a property that must hold and then randomly generate inputs to test with.
Of course, this only works if you have a clearly defined property to test with. Depending on your use case, it may be very simple or very difficult to come up with suitable properties. One thing to keep in mind is that invariants make good properties; for example, a Vec has a length and a capacity, and the length is always less than or equal to the capacity. Good property testing tools will not only come up with a failing test case but also shrink the input to something minimal that still exhibits the failure, which is very helpful if you don’t want to look through 50,000 integers to find the three-integer sequence that triggered a bug.
There are two crates to help with property testing. One is Andrew Gallant’s QuickCheck. You can call it from a unit test, and it will quickly generate 100 inputs to test with. For example, my [bytecount](https://docs.rs/bytecount) crate has both a simple and a fast count function and tests them against each other:
#[cfg(test)]
mod tests {
    use bytecount::{count, naive_count};

    #[quickcheck_macros::quickcheck]
    fn check_count_correct((haystack, needle): (Vec<u8>, u8)) -> bool {
        count(&haystack, needle) == naive_count(&haystack, needle)
    }
}
In this case, the inputs are fairly simple, but you can either derive values from bytes or implement QuickCheck’s Arbitrary trait for the types to test. The other crate for random property testing is [proptest](https://docs.rs/proptest). Compared to QuickCheck, it has a much more refined shrinking machinery that, on the other hand, also takes a bit more time to run.
Otherwise, they are quite similar. In my experience, both produce good results. Our example as proptest would look like this:
use bytecount::{count, naive_count}; // <-- import added so the example is self-contained
use proptest::prelude::*;

proptest! {
    #[test]
    fn check_count_correct(haystack: Vec<u8>, needle: u8) {
        prop_assert_eq!(count(&haystack, needle), naive_count(&haystack, needle));
    }
}
One benefit of proptest, besides the higher flexibility regarding strategies, is failure persistence: all found errors are written into files to be run automatically the next time. This creates a simple regression test harness.
Let’s say we have a clever apparatus that will bias our light motor’s random gyrations to shine more light on previously dark places. That is the idea behind coverage-guided fuzzing: code coverage is used to steer the randomness away from already-tried code paths. This simple idea makes for a surprisingly powerful technique, one that has the potential to uncover a whole lot of bugs.
The easiest way to do this from Rust is to use cargo-fuzz, which is installed with cargo install cargo-fuzz. Here’s the code:
$ cargo fuzz init
$ cargo fuzz add count
$ # edit the fuzz target file
$ cargo fuzz run count
Note that running the fuzz tests will require a nightly Rust compiler for now.
For my bytecount crate, one fuzz target looks like this:
#![no_main]
use libfuzzer_sys::fuzz_target;
use bytecount::{count, naive_count};

fuzz_target!(|data: &[u8]| {
    if data.is_empty() { // <-- skip empty inputs; we need at least one byte for the needle
        return;
    }
    let needle = data[0];
    let haystack = &data[1..];
    assert_eq!(count(&haystack, needle), naive_count(&haystack, needle));
});
I can happily attest that even after two days of running, libfuzzer didn’t find any errors in this implementation. This fills me with great confidence in the correctness of my code.
Complex systems make for complex tests, so it’s often useful to have some test helpers. For unit tests, you can define helper functions under #[cfg(test)] so they won’t turn up in production code. Files in the tests subdirectory can, of course, contain arbitrary functions. In rare cases, tests can benefit from mocking. There are several mocking crates for Rust.
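For example, a test-only helper might look like this; sample_input is made up, and count and naive_count are the functions from the earlier examples:

#[cfg(test)]
mod tests {
    use super::*;

    // Compiled only for tests; never part of a release build.
    fn sample_input() -> Vec<u8> {
        b"some representative test data".to_vec()
    }

    #[test]
    fn count_matches_naive_on_sample() {
        let data = sample_input();
        assert_eq!(count(&data, b' '), naive_count(&data, b' '));
    }
}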
Of course, unit tests only look at components in isolation. Integration tests often uncover problems in the definition of the interface between components, even when the components themselves are tested successfully. So, don’t spend all your budgeted time on unit tests; leave some for integration tests, too.
Testing is a pleasant activity: the goals are clearly defined, and the test code is usually simple and easy to write. Still, it’s vital to remember that the goal of tests is to find bugs. A test that never fails is pretty much worthless.
Unfortunately, you can’t foresee when a test will start to fail, so I wouldn’t suggest deleting tests that haven’t failed in a while. You’ll have to weigh the trade-off between having a fast test suite and guarding against regressions.