Why doesn't println! work in Rust unit tests?

I've implemented the following method and unit test:
use std::fs::File;
use std::path::Path;
use std::io::prelude::*;

fn read_file(path: &Path) {
    let mut file = File::open(path).unwrap();
    let mut contents = String::new();
    file.read_to_string(&mut contents).unwrap();
    println!("{}", contents);
}

#[test]
fn test_read_file() {
    let path = &Path::new("/etc/hosts");
    println!("{:?}", path);
    read_file(path);
}
I run the unit test this way:
rustc --test app.rs; ./app
I could also run this with
cargo test
I get a message back saying the test passed but the println! is never displayed on screen. Why not?

This happens because Rust test programs hide the stdout of successful tests so that the test output stays tidy. You can disable this behavior by passing the --nocapture option to the test binary or to cargo test (in the latter case, it must come after --; see below):
#[test]
fn test() {
println!("Hidden output")
}
Invoking tests:
% rustc --test main.rs; ./main
running 1 test
test test ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured
% ./main --nocapture
running 1 test
Hidden output
test test ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured
% cargo test -- --nocapture
running 1 test
Hidden output
test test ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured
If tests fail, however, their stdout will be printed regardless of whether this option is present.
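For example, a failing test's captured output shows up in the failure report even without the flag (a minimal sketch):
#[test]
fn failing() {
    println!("You will see this, because the test fails");
    assert_eq!(1 + 1, 3); // fails, so the captured stdout above is printed
}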

TL;DR
$ cargo test -- --nocapture
With the following code:
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum PieceShape {
    King, Queen, Rook, Bishop, Knight, Pawn
}

fn main() {
    println!("Hello, world!");
}

#[test]
fn demo_debug_format() {
    let q = PieceShape::Queen;
    let p = PieceShape::Pawn;
    let k = PieceShape::King;
    println!("q={:?} p={:?} k={:?}", q, p, k);
}
Then run the following:
$ cargo test -- --nocapture
And you should see
Running target/debug/chess-5d475d8baa0176e4
running 1 test
q=Queen p=Pawn k=King
test demo_debug_format ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured

As mentioned by L. F., --show-output is the way to go.
$ cargo test -- --show-output
Other display flags are mentioned in the documentation of cargo test in display-options.
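With --show-output, the stdout of passing tests is collected into a separate section of the report, roughly like this (a sketch reusing the earlier "Hidden output" test):
running 1 test
test test ... ok

successes:

---- test stdout ----
Hidden output


successes:
    test

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out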

To include output printed with println!() and keep colors in the test results, use the color and nocapture flags with cargo test.
$ cargo test -- --color always --nocapture
(cargo version: 0.13.0 nightly)

While testing, standard output is not displayed. Don't rely on printed text to check test behavior; use assert!, assert_eq!, and fail! instead. Rust's test harness understands those, but not text messages.
The test you have written will pass even if something goes wrong. Let's see why:
read_to_end's signature is
fn read_to_end(&mut self) -> IoResult<Vec<u8>>
It returns an IoResult to indicate success or error. This is just a type alias for a Result whose error type is IoError. It's up to you to decide how an error should be handled; in this case, we want the test to fail, which is done by calling unwrap on the Result.
This will work:
let contents = File::open(&Path::new("message.txt"))
    .read_to_end()
    .unwrap();
unwrap should not be overused though.
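The answer above predates Rust 1.0; a rough modern equivalent of the same idea (a hypothetical test using today's std::fs API) lets unwrap() and assert! drive the pass/fail result instead of printed text:
use std::fs;

#[test]
fn reads_hosts_file() {
    // unwrap() makes the test fail loudly if the file cannot be read
    let contents = fs::read_to_string("/etc/hosts").unwrap();
    // assert on the contents instead of printing them
    assert!(!contents.is_empty());
}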

Note that the modern solution (cargo test -- --show-output) doesn't apply to doc-tests, i.e. tests defined in code fences inside your functions' doc comments. Only println! (etc.) calls made in a concrete #[test] function will be shown.
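For example, a doc-test like the following (hypothetical crate and function names) will have its assertion checked, but its println! output will not be surfaced by --show-output:
/// Doubles a number (hypothetical example).
///
/// ```
/// println!("not shown, even with --show-output");
/// assert_eq!(my_crate::double(2), 4);
/// ```
pub fn double(x: i32) -> i32 {
    x * 2
}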

It's likely that the test output is being captured by the testing framework and not being printed to the standard output. When running tests with cargo test, the output of each test is captured and displayed only if the test fails. If you want to see the output of a test, you can use the --nocapture flag when running the test with cargo test. Like so:
cargo test -- --nocapture
With that flag in place, println! calls inside the test function will be printed to standard output. Like so:
#[test]
fn test_read_file() {
    let path = &Path::new("/etc/hosts");
    println!("{:?}", path);
    read_file(path);
    println!("The test passed!");
}

Why? I don't know, but there is a small hack: eprintln!("will print in {}", "tests")

In case you want to run the tests and display the printed output every time a file changes:
sudo cargo watch -x "test -- --nocapture"
sudo might be optional depending on your set-up.

Related

How can I get Rust code coverage to ignore unreachable lines?

Using the following as an example:
fn get_num(input: i32) -> i32 {
    input * 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_it() {
        for i in 1i32..72 {
            match get_num(i) {
                x if x % 2 == 0 => { println!("even"); }
                _ => { println!("odd"); }
            }
        }
    }
}
and implementing the code coverage like so:
RUSTFLAGS="-C instrument-coverage" cargo test;
llvm-profdata merge -sparse default_*.profraw -o default.profdata;
executable=$(RUSTFLAGS="-C instrument-coverage" cargo test --no-run --message-format=json | \
jq -r "select(.profile.test == true) | .filenames[]" | \
\grep -v dSY);
llvm-cov show --instr-profile=default.profdata --object "$executable";
I would like the println!("odd"); not to be marked as "not covered".
In JavaScript, I would use istanbul-ignore-next to ignore a line (mark it as "exercised"). In C#, one can use ExcludeFromCodeCoverage. With lcov (for C or C++), you can use LCOV_EXCL_LINE. In Python, use # pragma: no cover. In Ruby, use # :nocov:. Is there a way to do this in Rust?
Don't write unreachable code. If your test panics, it failed; if it doesn't panic, it succeeded. You don't need to examine the test's output to see whether it succeeded.
As such, don't use match; use assert_eq!.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_it() {
        for i in 1i32..72 {
            assert_eq!(get_num(i) % 2, 0)
        }
    }
}
If the assertion fails, you'll get information in the test results about why it failed.
#[no_coverage] can be applied to functions, but is not yet stable: https://github.com/rust-lang/rust/issues/84605. From that same ticket:
Ignoring a statement or block is not currently supported.
That's not generally possible, because it would require the compiler to do some very smart reasoning. In your particular case it's "obvious" that the match arm for odd numbers can never be reached, but what you're asking of the compiler (and the coverage tool) is to be a general theorem prover.
Basically the tool would need to come with some logic that can prove statements about the code. This is actually an unsolvable problem (undecidable, to be precise). You can make tools that can make pretty good attempts (the aforementioned theorem provers) but the complexity of such a tool would far eclipse what you'd want in a simple code coverage tool.
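For the function-level attribute mentioned above, a nightly-only sketch might look like this (assuming the no_coverage feature gate from that tracking issue; check the issue for its current status):
#![feature(no_coverage)] // nightly only

#[no_coverage] // excludes the whole function from coverage instrumentation
fn get_num(input: i32) -> i32 {
    input * 2
}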

Using a structure as a command line argument in clap

Trying to use a struct within a struct in clap:
use clap::{Args, Parser};
use std::path::PathBuf;

#[derive(Parser, Debug)]
enum Command {
    Foo(Foo),
}

#[derive(Args, Debug)]
struct Foo {
    bar: Option<Bar>,
    path: PathBuf,
}

#[derive(Parser, Clone, Debug)]
struct Bar {
    bla: u8,
    bla_2: String,
}

fn main() {
    let cli = Command::parse();
    println!("cli {:#?}", cli);
}
So I could call the app with the following options: cargo run -- foo bar 42 baz /tmp/a or just cargo run -- foo /tmp/a since the bar argument is optional.
However, currently it does not build:
--> src/main.rs:11:5
|
11 | bar: Option<Bar>,
| ^^^ the trait `FromStr` is not implemented for `Bar`
|
And since the values within Bar have to be space-separated, implementing FromStr would not do the trick anyway.
Is it even possible to do something of this fashion in clap currently?
There are several problems with your code. The biggest one is:
An optional positional item can never come before a required positional argument
This is a problem in your case because your command line looks like this:
cargo run -- <required> [optional] /tmp/a
If you have a required path at the end, there cannot be an optional positional argument before it.
Further problems:
#[derive(Parser)] should be attached to a struct, not an enum.
There should only be one #[derive(Parser)], which represents the entry object of your arguments parser.
I'm unsure how else to help you, except pointing out your problems. If the invocations cargo run -- foo bar 42 baz /tmp/a and cargo run -- foo /tmp/a are non-negotiable, I don't think clap is the right library for you; I think you should parse by hand.
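If the exact invocations are negotiable, one possible restructuring (a sketch assuming clap 4's derive attributes, not a drop-in for the original CLI shape) is to turn Bar's fields into optional named flags so that the required positional path can stay last:
use clap::{Args, Parser, Subcommand};
use std::path::PathBuf;

#[derive(Parser, Debug)]
struct Cli {
    #[command(subcommand)]
    command: Command,
}

#[derive(Subcommand, Debug)]
enum Command {
    Foo(Foo),
}

#[derive(Args, Debug)]
struct Foo {
    // formerly Bar's fields, now optional named flags
    #[arg(long)]
    bla: Option<u8>,
    #[arg(long)]
    bla_2: Option<String>,
    // the required positional argument comes last
    path: PathBuf,
}

fn main() {
    let cli = Cli::parse();
    println!("cli {:#?}", cli);
}
This would be invoked as cargo run -- foo --bla 42 --bla-2 baz /tmp/a or simply cargo run -- foo /tmp/a.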

Why does invalid code annotated by `#[cfg(test)]` still cause the build to fail?

Running cargo build will succeed for the following code:
#[cfg(test)]
mod tests {
    #[test]
    fn func() {
        let x = 1;
        sss
    }
}
but will fail for this code:
#[cfg(test)]
mod tests {
    #[test]
    fn func() {
        sss
        let x = 1;
    }
}
error: expected `;`, found keyword `let`
--> src/lib.rs:5:12
|
5 | sss
| ^ help: add `;` here
6 | let x = 1;
| --- unexpected token
A section of the Rust Book on Test Organization says:
The #[cfg(test)] annotation on the tests module tells Rust to compile
and run the test code only when you run cargo test, not when you run
cargo build.
So why does Rust still compile mod tests that is annotated with #[cfg(test)]?
Code that is not compiled due to cfg must still be syntactically valid (i.e. parse successfully), but nothing more than that.
In the first snippet, let x = 1; is a normal variable declaration statement, and sss is a trailing expression returning the value of sss from the function. sss is not defined, so of course this code is not valid, but it is syntactically valid.
In the second snippet, however, sss is followed by another statement, so as an expression statement it must be terminated with a semicolon, but it isn't. This code is not syntactically valid, and the fact that it is cfg-gated does not matter.
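A minimal way to see the rule in isolation (a small sketch): the following builds with cargo build because the cfg'd-out function only has to parse, even though the call inside it could never resolve:
#[cfg(test)]
fn never_built_outside_tests() {
    // Syntactically valid but semantically nonsense: cargo build accepts it,
    // while cargo test would reject it because the function doesn't exist.
    this_function_does_not_exist(42);
}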

Why is running cargo bench faster than running release build?

I want to benchmark my Rust programs, and was comparing some alternatives to do that. I noted, however, that when running a benchmark with cargo bench and the bencher crate, the code runs consistently faster than when running a production build (cargo build --release) with the same code. For example:
Main code:
use dot_product;
use std::time;

const N: usize = 1000000;

fn main() {
    let start = time::Instant::now();
    dot_product::rayon_parallel([1; N].to_vec(), [2; N].to_vec());
    println!("Time: {:?}", start.elapsed());
}
Average time: ~20ms
Benchmark code:
#[macro_use]
extern crate bencher;
use dot_product;
use bencher::Bencher;

const N: usize = 1000000;

fn parallel(bench: &mut Bencher) {
    bench.iter(|| dot_product::rayon_parallel([1; N].to_vec(), [2; N].to_vec()))
}

benchmark_group!(benches, sequential, parallel);
benchmark_main!(benches);
Time: 5,006,199 ns/iter (+/- 1,320,975)
I tried the same with some other programs and cargo bench gives consistently faster results. Why could this happen?
As the comments suggested, you should use criterion::black_box() on all (final) results in the benchmarking code. This function does nothing - and simply gives back its only parameter - but is opaque to the optimizer, so the compiler has to assume the function does something with the input.
When not using black_box(), the benchmarking code doesn't actually do anything, as the compiler is able to figure out that the results of your code are unused and no side-effects can be observed. So it removes all your code during dead-code elimination and what you end up benchmarking is the benchmarking-suite itself.
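A sketch of the posted benchmark with the result wrapped, assuming the same surrounding items (N, Bencher, dot_product) as in the question; std::hint::black_box (stable in std since Rust 1.66) is used here, though bencher and criterion expose their own equivalents:
use std::hint::black_box;

fn parallel(bench: &mut Bencher) {
    bench.iter(|| {
        // black_box() forces the compiler to treat the value as used,
        // so the measured work cannot be removed by dead-code elimination.
        black_box(dot_product::rayon_parallel(
            black_box([1; N].to_vec()),
            black_box([2; N].to_vec()),
        ))
    })
}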

Cargo build --release option ignored when testing overflow

I've been stepping through the Programming Rust book and wanted to observe the two's complement wrapping, so simple code of:
fn main() {
    let mut x: u8 = 255;
    println!("the value of x is {}", x);
    x = 255 + 1;
    println!("The value of x now is {}", x);
}
when I try and compile this with Cargo as per the guide, I run
cargo build --release
which the book says will let it compile without overflow protection, but it won't compile. I get the overflow error:
|
6 | x = 255 + 1 ;
| ^^^^^^^^^^^ attempt to compute u8::MAX + 1_u8, which would overflow
Can you explain what I'm doing wrong, please?
I believe the value is not checked dynamically at run time (it won't panic; it will overflow), but it is still checked statically (where possible) at compile time.
In this case the compiler is able to determine at compile time what you're trying to do and prevents you from doing it.
That being said if you look at the compiler output you can see the following message:
note: #[deny(arithmetic_overflow)] on by default
You'll see this message regardless of the optimization level.
If you'd like to observe the overflow put the following inner attribute at the top of your source file.
#![allow(arithmetic_overflow)]
Or, if you're compiling with rustc directly you can pass the following flags:
-O -A arithmetic_overflow
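A sketch of the original program with the lint silenced; runtime behavior is then governed by overflow checks, so one would expect a panic in a debug build and wrapping to 0 with --release (worth verifying):
#![allow(arithmetic_overflow)]

fn main() {
    let mut x: u8 = 255;
    println!("the value of x is {}", x);
    x = 255 + 1; // expected: panic in a debug build, wrap to 0 with --release
    println!("The value of x now is {}", x);
}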
The rustc docs show that the following lints are on by default (regardless of optimization level)
ambiguous_associated_items
arithmetic_overflow
conflicting_repr_hints
const_err
ill_formed_attribute_input
incomplete_include
invalid_type_param_default
macro_expanded_macro_exports_accessed_by_absolute_paths
missing_fragment_specifier
mutable_transmutes
no_mangle_const_items
order_dependent_trait_objects
overflowing_literals
patterns_in_fns_without_body
pub_use_of_private_extern_crate
soft_unstable
unconditional_panic
unknown_crate_types
useless_deprecated
When you write the literal expression 255 + 1 in your code, the compiler evaluates it at compile time and sees the overflow immediately, whether in debug or release mode. When the book says that --release disables overflow protection, it's talking about runtime checks. You can see the difference with this code:
fn increment(x: u8) -> u8 { x + 1 }

fn main() {
    let x = 255;
    println!("x: {}, x+1: {}", x, increment(x));
}
Playground
If you run this code in debug mode, you get:
thread 'main' panicked at 'attempt to add with overflow', src/main.rs:1:30
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
But if you run it in release mode, you get:
x: 255, x+1: 0
