Examples » Testing and benchmarking

Creating automated unit tests, integration tests and benchmarks.

Corrade's TestSuite took its initial inspiration from the QtTest framework. Unlike most other test libraries out there, it is not designed around opaque macros but rather tries to use standard C++ features to manage most of the test code. At first it may seem that this involves more typing, but it allows for much greater flexibility, easier debugging and more predictable control flow in real-world cases of data-driven testing and benchmarking.

Below is a simple introduction to writing tests and benchmarks; you can find more detailed information about all the features in the TestSuite::Tester class documentation.

Tester class

The first step in creating a unit test is to subclass TestSuite::Tester and add test cases and benchmarks to it.

#include <cmath>
#include <list>
#include <vector>

#include <Corrade/TestSuite/Tester.h>
#include <Corrade/Utility/Endianness.h>

using namespace Corrade;

namespace {

struct MyTest: TestSuite::Tester {
    explicit MyTest();

    void commutativity();
    void associativity();
    void pi();
    void sin();
    void bigEndian();

    void prepend1kItemsVector();
    void prepend1kItemsList();
};

In the constructor we register our test cases and benchmarks using addTests() and addBenchmarks():

MyTest::MyTest() {
    addTests({&MyTest::commutativity,
              &MyTest::associativity,
              &MyTest::sin,
              &MyTest::pi,
              &MyTest::bigEndian});

    addBenchmarks({&MyTest::prepend1kItemsVector,
                   &MyTest::prepend1kItemsList}, 100);
}

Now we implement our test cases. The simplest macro is CORRADE_VERIFY(), which only verifies that the given expression is true; if not, it exits the current test case (i.e. skips processing the rest of the function) and prints a diagnostic on the error output.

void MyTest::commutativity() {
    double a = 5.0;
    double b = 3.0;

    CORRADE_VERIFY(a*b == b*a);
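    // division is not commutative -- this check deliberately fails, as shown
    // in the example output below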
    CORRADE_VERIFY(a/b == b/a);
}

The next macro is CORRADE_COMPARE(), which takes an actual value and compares it to an expected value. Its advantage over CORRADE_VERIFY() is that it prints the contents of both arguments via Utility::Debug if the comparison fails:

void MyTest::associativity() {
    CORRADE_COMPARE(2*(3 + 4), 14);
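    // (2*3) + 4 is 10, not 14 -- this check deliberately fails, as shown in
    // the example output below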
    CORRADE_COMPARE((2*3) + 4, 14);
}

If the two values have different types or if we want to force a particular type for the comparison, we can use CORRADE_COMPARE_AS(). This macro can also be used for more involved comparisons using "pseudo-types"; see the TestSuite::Comparator class documentation for more information.

void MyTest::sin() {
    CORRADE_COMPARE_AS(std::sin(0), 0.0f, float);
}
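
As an illustration of such a pseudo-type comparison, here is a minimal sketch that is not part of the test above — it assumes a hypothetical containers() test case and uses the Compare::Container helper from <Corrade/TestSuite/Compare/Container.h>:

#include <Corrade/TestSuite/Compare/Container.h>

void MyTest::containers() {
    std::vector<int> actual{1, 2, 3};
    std::vector<int> expected{1, 2, 3};

    // compares the containers item by item and prints both if they differ
    CORRADE_COMPARE_AS(actual, expected, TestSuite::Compare::Container);
}

Like the other test cases, such a function would need to be declared in the class and registered via addTests() in the constructor.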

If you have some unimplemented or broken functionality and want to document that fact in the test instead of just ignoring it, you can use the CORRADE_EXPECT_FAIL() macro, which expects all following checks until the end of the scope to fail. An important property of this macro is that if any of the checks unexpectedly starts passing again, the test case fails, and the test code needs to be updated to avoid stale assumptions.

void MyTest::pi() {
    CORRADE_EXPECT_FAIL("Need better approximation.");
    double pi = 22/7.0;
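    // 22/7 is only ~3.1429, so the comparison below fails -- as expected here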
    CORRADE_COMPARE(pi, 3.14159265);
}

For things that can't be tested on a given platform, you can use the CORRADE_SKIP() macro to explicitly mark the test case as skipped:

void MyTest::bigEndian() {
    if(!Utility::Endianness::isBigEndian())
        CORRADE_SKIP("Need big-endian machine for this.");

    union {
        short a = 64;
        char data[2];
    } a;
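    // 64 is 0x0040, so on a big-endian machine the zero byte is stored first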
    CORRADE_COMPARE(a.data[0], 0);
    CORRADE_COMPARE(a.data[1], 64);
}

Besides test cases that provide a clear passed/failed result, it's possible to create benchmarks, where interpreting the results is left up to the user. The most valuable benchmarks are ones that compare various approaches against each other, so one can immediately see the difference:

void MyTest::prepend1kItemsVector() {
    double a{};
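    // the loop body below is executed 100 times in every measured batch; the
    // number of batches was given to addBenchmarks() above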
    CORRADE_BENCHMARK(100) {
        std::vector<double> container;
        for(std::size_t i = 0; i != 1000; ++i)
            container.insert(container.begin(), 1.0);
        a += container.back();
    }
    CORRADE_VERIFY(a); // to avoid the benchmark loop being optimized out
}

void MyTest::prepend1kItemsList() {
    double a{};
    CORRADE_BENCHMARK(100) {
        std::list<double> container;
        for(std::size_t i = 0; i != 1000; ++i)
            container.push_front(1.0);
        a += container.back();
    }
    CORRADE_VERIFY(a); // to avoid the benchmark loop being optimized out
}

Lastly, we create the main() function using the CORRADE_TEST_MAIN() macro. It conveniently abstracts platform differences and takes care of command-line argument parsing for us.

}

CORRADE_TEST_MAIN(MyTest)

Compilation and running

Now we can compile and run our test using CMake and the corrade_add_test() macro. It compiles the executable and links it to all required libraries so we don't have to take care of that ourselves. Don't forget to call enable_testing() first so ctest is able to collect and run all the tests.

find_package(Corrade REQUIRED TestSuite)
set_directory_properties(PROPERTIES CORRADE_USE_PEDANTIC_FLAGS ON)

enable_testing()
corrade_add_test(MyTest MyTest.cpp)
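
Assuming an out-of-source build (a rough sketch; the exact commands depend on how your project is set up), configuring and building the test could look like this:

mkdir build && cd build
cmake .. && cmake --build .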

The test executable can be run either manually or in a batch with all other tests using ctest. When executed, it produces output similar to this:

Starting Corrade::Examples::MyTest with 7 test cases...
  FAIL [1] commutativity() at …/MyTest.cpp:62
        Expression a/b == b/a failed.
  FAIL [2] associativity() at …/MyTest.cpp:67
        Values (2*3) + 4 and 14 are not the same, actual is
        10
        but expected
        14
    OK [3] sin()
 XFAIL [4] pi() at …/MyTest.cpp:77
        Need better approximation. pi and 3.14159265 failed the comparison.
    OK [4] pi()
  SKIP [5] bigEndian()
        Need big-endian machine for this.
 BENCH [6] 220.69 ± 7.38   µs prepend1kItemsVector()@99x100 (wall time)
 BENCH [7] 128.33 ± 5.46   µs prepend1kItemsList()@99x100 (wall time)
Finished Corrade::Examples::MyTest with 2 errors out of 206 checks.

The test executable accepts various arguments to control test and benchmark execution; pass --help to it to see all the options, or head over to the documentation. The full file contents are linked below; the full source code is also available in the GitHub repository.