Dealing with optional tests


The absence of a way to skip a test in CATCH, Google Test and other frameworks (at least in the traditional sense, where you specify the reason for skipping and see it in the output) made me wonder whether I need that ability at all (I've been using UnitTest++ in my past projects).

Normally, yeah, there shouldn't be any reason to skip anything in a desktop app - you either test it or you don't. But when it comes to hardware, some things can't be guaranteed.

For example, I have two devices: one comes with an embedded beeper, the other without. In UnitTest++ I would query the system, find out that the beeper is not available, and just skip the tests that depend on it. In CATCH, of course, I can do something similar: query the system during initialization, and then exclude all tests with the tag "beeper" (tags are a special feature in CATCH).
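For illustration, a minimal sketch of that approach (the test name, the test body and the tests binary name are placeholders; tagging and the ~[tag] exclusion syntax are standard CATCH features):

#define CATCH_CONFIG_MAIN
#include "catch.hpp"

// Placeholder test that depends on the physical beeper.
TEST_CASE( "beeper sounds at the requested frequency", "[beeper]" ) {
    // test code that needs the hardware goes here
}

Run with the beeper tests excluded:

./tests "~[beeper]"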

However, there's a slight difference: a tester (someone other than me) would read the output and not find those optional tests mentioned (whereas in UnitTest++ they'd be marked as skipped, and the reason would be provided as a part of the output). His first thoughts:

  • This must be some old version of the testing app.
  • Maybe I forgot to enable suite X.
  • Something is probably broken, I should ask the developer.
  • Wait, maybe they were just skipped. But why? I'll ask the developer anyway.

Moreover, he could simply NOT notice that those tests were skipped, even though they shouldn't have been (e.g. the OS reports that there is no beeper regardless of whether one is actually present, which would be a major bug). One option would be to mark "skipped" tests as passed, but that feels like an unnecessary workaround.

Is there some clever technique I'm not aware of (say, separating the optional tests into a standalone program altogether)? If not, should I stick to UnitTest++? It does the job, but I really like CATCH's SECTIONs and tags, which help in avoiding code repetition.
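For reference, this is the kind of repetition SECTIONs avoid; a minimal sketch, where Beeper is a hypothetical device wrapper stubbed out so the example is self-contained:

#define CATCH_CONFIG_MAIN
#include "catch.hpp"

// Hypothetical device wrapper, stubbed for illustration only.
struct Beeper {
    Beeper() : vol(0) {}
    void setVolume( int v ) { vol = v; }
    void mute() { vol = 0; }
    int volume() const { return vol; }
    int vol;
};

TEST_CASE( "volume control", "[beeper]" ) {
    Beeper beeper;          // this setup runs fresh for every SECTION below
    beeper.setVolume( 5 );

    SECTION( "raising the volume" ) {
        beeper.setVolume( 7 );
        REQUIRE( beeper.volume() == 7 );
    }
    SECTION( "muting" ) {
        beeper.mute();
        REQUIRE( beeper.volume() == 0 );
    }
}

Each SECTION re-runs the enclosing TEST_CASE from the top, so the shared setup is written once instead of once per test.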


There are 2 answers

philsquared (BEST ANSWER)

If you're detecting the availability of the beeper programmatically, then you also have a place to print out the tests you're skipping.

You can get the set of tests that match a given test spec with something like the following:

  std::vector<TestCase> matchedTestCases;
  getRegistryHub().getTestCaseRegistry().getFilteredTests( testSpec, config, matchedTestCases );

testSpec is an instance of TestSpec. You can get the current one from config.testSpec() - or you can create one on the fly (which you may need to do if you're filtering tests programmatically). This isn't really documented at the moment, as I had wanted to go back over the whole test spec mechanism and rework it. As it happens, I did that last week. Hopefully it should be fairly stable now - but I'm letting it settle in before committing documentation for it.

You should be able to work it out if you search for "class TestSpec" in the code - although you may find it easier to parse one out of a string using parseTestSpec().

You can get the config object with getCurrentContext().getConfig().
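Putting those pieces together, a rough end-to-end sketch. It leans on the undocumented internals described above, so treat the exact signatures (e.g. whether getFilteredTests takes an IConfig reference, and the name member on TestCase) as assumptions that may differ between CATCH versions:

#include "catch.hpp"

#include <iostream>
#include <vector>

// Sketch: list the beeper tests that are being skipped, so they still
// appear in the output. Assumes CATCH has already been configured
// (e.g. this is called from a custom main using Catch::Session).
void reportSkippedBeeperTests() {
    Catch::Ptr<Catch::IConfig const> config = Catch::getCurrentContext().getConfig();
    Catch::TestSpec testSpec = Catch::parseTestSpec( "[beeper]" );

    std::vector<Catch::TestCase> matchedTestCases;
    Catch::getRegistryHub().getTestCaseRegistry()
        .getFilteredTests( testSpec, *config, matchedTestCases );

    for( std::size_t i = 0; i < matchedTestCases.size(); ++i )
        std::cout << "skipped (no beeper): " << matchedTestCases[i].name << "\n";
}

Calling that from a custom main, after deciding the beeper is absent, gives the tester an explicit list of skipped tests alongside the normal results.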

Mike Kinghan

It's unclear whether you're asking for a technique that applies to googletest, or to CATCH, or either, or both. This answer applies to googletest.

The customary technique for skipping unwanted tests is to use the command-line option provided for the purpose, --gtest_filter. See the documentation.

Here's an example of its use for a test suite in which a beeper might or might not be enabled:

test_runner.cpp

#include "gtest/gtest.h"

TEST(t_with_beeper, foo) {
    SUCCEED(); // <- Your test code here
}

TEST(t_without_beeper, foo) {
    SUCCEED(); // <- Your test code here
}

int main(int argc, char **argv)
{
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}

Run:

./test_runner --gtest_filter=t_with_beeper*

Output:

Note: Google Test filter = t_with_beeper*
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from t_with_beeper
[ RUN      ] t_with_beeper.foo
[       OK ] t_with_beeper.foo (0 ms)
[----------] 1 test from t_with_beeper (0 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (1 ms total)
[  PASSED  ] 1 test.

Run:

./test_runner --gtest_filter=t_without_beeper*

Output:

Note: Google Test filter = t_without_beeper*
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from t_without_beeper
[ RUN      ] t_without_beeper.foo
[       OK ] t_without_beeper.foo (0 ms)
[----------] 1 test from t_without_beeper (0 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (1 ms total)
[  PASSED  ] 1 test.

The report does not itemize the skipped tests, but it makes it fairly obvious whether or not the beeper tests are enabled, which should be enough to pre-empt the misconceptions and doubts you want to avoid.

To enable or disable the beeper tests from within test_runner itself, you can use something like:

#include "gtest/gtest.h"

#include <vector>

bool have_beeper();  // defined elsewhere; queries the presence of a beeper

using namespace std;

int main(int argc, char **argv)
{
    vector<char const *> args(argv, argv + argc);
    int nargs = argc + 1;  // one filter argument will be appended below
    if (have_beeper()) {
        args.push_back("--gtest_filter=t_with_beeper*");
    } else {
        args.push_back("--gtest_filter=t_without_beeper*");
    }
    ::testing::InitGoogleTest(&nargs, const_cast<char **>(args.data()));
    return RUN_ALL_TESTS();
}

where have_beeper() is a boolean function that queries the presence of a beeper.
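
For completeness, a hypothetical stub of have_beeper(). Real code would query the OS or the device driver; the environment variable used here is just a stand-in so the sketch is runnable:

#include <cstdlib>

// Hypothetical stand-in for a real hardware query: real code would ask
// the OS or the driver; here an environment variable simulates the check.
bool have_beeper()
{
    return std::getenv("HAS_BEEPER") != 0;
}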