assert_cmd experience report #63

Open
@matklad

Hi! Today, I've written a bunch of tests for https://github.com/ferrous-systems/cargo-review-deps/blob/3ba1523f3b2c2cfe807bc76f42bf972d0efdc113/tests/test_cli.rs.

Initially I started with assert_cmd, then switched to assert_cli, and now I have "write my own minimal testing harness on top of std::process::Command" on my todo list. I'd like to document my reasoning behind these decisions. Note that these are just my personal preferences though!

The first (but very minor) issue with assert_cmd was its prelude/extension-trait based design. It makes for an API which reads well, but it is definitely a speed bump for new users who want to understand what API surface is available to them. I think that long term such a design might be an overall win, but it is a slight disadvantage while learning the library.
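For readers unfamiliar with the pattern, here is a minimal, generic sketch of what an extension trait looks like. The trait and method names here are made up for illustration, not assert_cmd's actual API:

```rust
use std::process::Command;

// A hypothetical extension trait: it bolts an `assert_success` method
// onto std's `Command`. The method is invisible until the trait is in scope.
trait CommandAssertExt {
    fn assert_success(&mut self);
}

impl CommandAssertExt for Command {
    fn assert_success(&mut self) {
        let status = self.status().expect("failed to spawn process");
        assert!(status.success(), "process exited with {}", status);
    }
}

fn main() {
    // Reads nicely, but `assert_success` shows up in the docs for the trait,
    // not for `Command` itself, which is the discoverability speed bump.
    Command::new("true").assert_success();
}
```

The ergonomics are real (the call site reads like a native method), but a new user browsing the docs for `Command` will never see `assert_success` there.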

The issue which fundamentally made me think "I'll stick with assert_cli for now" was the predicate-based design of the assertions for stdout. It has two aspects that don't fit my style of testing. The first is the same issue with the prelude. The first example for .stdout starts with

extern crate assert_cmd;
extern crate predicates;

use assert_cmd::prelude::*;

use std::process::Command;
use predicates::prelude::*;

That is, for me, an uncomfortable amount of things to import just to do "quick & dirty smoke testing".

The second thing is extremely personal: I just hate assertion libraries with a passion :) Especially fluent-assertion ones :-) I've tried them many times (because they are extremely popular), but every time the conclusion was that they made me less productive.

When I write tests, I usually go through the following stages.

  • A simple test with assert!(a == b), without a custom message. This is the lowest-friction thing you can write, and making it easy to add tests is super important.
  • The second stage happens when the test eventually fails. I see what amounts to assert!(false) in the console, go to the test, and write an elaborate hand-crafted error message with all kinds of useful information to help me debug the test.
  • The third stage (which I call data-driven testing) happens when I have a bunch of tests with similar setups/asserts. What I do then is remove all the asserts from all the tests by introducing a special function fn do_test(input: Input, expected: Expected), which turns the arrange and assert phases of a test into data (some JSON-like PODs, a builder DSL, or just a string with some regex-based DSL). Internally, do_test does all kinds of fancy validation of the input and custom comparisons of expected and actual outcomes.
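To make the third stage concrete, here is a minimal sketch of the do_test pattern. The toy domain (whitespace normalization) and all names besides do_test are mine, invented purely for illustration:

```rust
// The function under test: collapse runs of whitespace into single spaces.
fn normalize(input: &str) -> String {
    input.split_whitespace().collect::<Vec<_>>().join(" ")
}

// A toy data-driven harness: each test is just data, and `do_test`
// owns all the assertion and error-reporting logic in one place.
fn do_test(input: &str, expected: &str) {
    let actual = normalize(input);
    assert!(
        actual == expected,
        "normalize failed\n  input:    {:?}\n  expected: {:?}\n  actual:   {:?}",
        input, expected, actual
    );
}

fn main() {
    // The arrange and assert phases of each test are now plain data:
    do_test("hello   world", "hello world");
    do_test("  a\tb ", "a b");
}
```

The payoff is that improving the failure message or the comparison logic in do_test improves every test at once.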

I feel that assertion libraries sit somewhere between 1 and 2 here: they are significantly more difficult to use than plain assertions, but are not as good, error-message-quality-wise, as hand-crafted messages. And they don't really help with 3, where you need some kind of domain-specific diffing. EDIT: what works great as a midpoint between 1 & 3 is the pytest / better_assertions style of thing, which generates a reasonable diff from a plain assert.

So, TL;DR, the reason I switched to assert_cli is to be able to do .stdout().contains("substring").

The reason I want to write my own harness instead of using assert_cli mostly has to do with technical issues:

  • It doesn't handle current_dir nicely, so I have to write custom code anyway.
  • When stdout does not match, it doesn't show stderr, which usually contains the clue as to why stdout is wrong :)

I also want to import a couple of niceties from Cargo's test harness. The API I'd love to have would look like this:

// split by space to get a list of args. Not general, but covers 99% of cases
cmd("review-deps diff rand:0.6.0 rand:0.6.1")
    // *runtime* arguments can be handled with the usual builder pattern
    .arg("-d").arg(&tmp_dir)
    // a builder for expectations, not too fluent and easily extendable (b/c this lives in the same crate)
    .stdout_contains("< version = \"0.6.1\"")
    // An optional debug helper. If streaming is enabled, all output from the subprocess goes to the terminal as well. Super-valuable for eprintln! debugging
    // .stream()
    // the call that actually runs the thing.
    // on error, it unconditionally prints process status, stdout & stderr, and only then a specific assert message.
    .status(101)
    // that's it, but I can imagine this returning something you can use for fine-grained checking of the output, so
    // .output()      // to get std::process::Output
    // .text_output() // to get something isomorphic to `Output`, but with Strings instead of Vec<u8>
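A std-only harness along these lines could be sketched roughly as follows. Everything here is my own guess at an implementation, not code from the report or from any crate; .stream() and .text_output() are omitted for brevity:

```rust
use std::process::{Command, Output};

// A hypothetical minimal harness matching the API sketched above.
struct Cmd {
    cmd: Command,
    stdout_contains: Vec<String>,
}

// Split the command line by whitespace. Not general, but covers the common case.
fn cmd(line: &str) -> Cmd {
    let mut parts = line.split_whitespace();
    let mut command = Command::new(parts.next().expect("empty command line"));
    command.args(parts);
    Cmd { cmd: command, stdout_contains: Vec::new() }
}

impl Cmd {
    fn arg(mut self, a: &str) -> Self {
        self.cmd.arg(a);
        self
    }

    fn stdout_contains(mut self, needle: &str) -> Self {
        self.stdout_contains.push(needle.to_string());
        self
    }

    // Runs the command and checks the expectations. On any mismatch it first
    // prints status, stdout AND stderr unconditionally, then panics with the
    // specific assertion message.
    fn status(self, expected: i32) -> Output {
        let Cmd { cmd: mut command, stdout_contains } = self;
        let output = command.output().expect("failed to spawn process");
        let stdout = String::from_utf8_lossy(&output.stdout);
        let stderr = String::from_utf8_lossy(&output.stderr);
        let dump = || {
            eprintln!("status: {}", output.status);
            eprintln!("stdout:\n{}", stdout);
            eprintln!("stderr:\n{}", stderr);
        };
        if output.status.code() != Some(expected) {
            dump();
            panic!("expected exit code {}, got {:?}", expected, output.status.code());
        }
        for needle in &stdout_contains {
            if !stdout.contains(needle.as_str()) {
                dump();
                panic!("stdout does not contain {:?}", needle);
            }
        }
        output
    }
}

fn main() {
    // Usage example (Unix `echo` stands in for the real binary under test):
    cmd("echo hello world")
        .arg("!")
        .stdout_contains("hello")
        .status(0);
}
```

The point of the design is that a failed expectation never hides the subprocess's stderr, which is usually where the actual clue lives.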
