Corrade::TestSuite::Tester class

Base class for tests and benchmarks.

Supports colored output, instanced (or data-driven) tests, repeated tests (e.g. for testing race conditions) and benchmarks, which can either use one of the builtin measurement functions (such as wall time, CPU time or CPU cycle count) or any user-provided custom measurement function (for example measuring allocations, memory usage, GPU timings etc.). In addition, the behavior of the test execution can be configured via many command-line and environment options.

Make sure to first go through the Testing and benchmarking tutorial for an initial overview and a step-by-step introduction. Below is a more detailed description of all provided functionality.

Basic testing workflow

A test starts with deriving the Tester class. The test cases are parameterless void member functions that are added using addTests() in the constructor, and the main() function is created using CORRADE_TEST_MAIN(). The goal is to have as little boilerplate as possible, so a test usually consists of just a single *.cpp file with no header, and the derived type is a struct to avoid having to write public keywords. It's also advised to wrap everything except the CORRADE_TEST_MAIN() macro in an unnamed namespace, as that will make the compiler warn you about accidentally unused test cases or variables.

#include <Corrade/TestSuite/Tester.h>

using namespace Corrade;

namespace {

struct MyTest: TestSuite::Tester {
    explicit MyTest();

    void addTwo();
    void subtractThree();
};

MyTest::MyTest() {
    addTests({&MyTest::addTwo,
              &MyTest::subtractThree});
}

void MyTest::addTwo() {
    int a = 3;
    CORRADE_COMPARE(a + 2, 5);
}

void MyTest::subtractThree() {
    int b = 5;
    CORRADE_COMPARE(b - 3, 2);
}

}

CORRADE_TEST_MAIN(MyTest)

The above gives the following output:

Starting MyTest with 2 test cases...
    OK [1] addTwo()
    OK [2] subtractThree()
Finished MyTest with 0 errors out of 2 checks.

Actual testing is done via various CORRADE_VERIFY(), CORRADE_COMPARE(), CORRADE_COMPARE_AS() and other macros. If some comparison in a given test case fails, a FAIL with the concrete file, line and additional diagnostics is printed to the output and the test case is exited without executing the remaining statements. Otherwise, if all comparisons in the test case pass, an OK is printed. The main difference between these macros is the kind of diagnostic output they print when a comparison fails — for example a simple expression failure reported by CORRADE_VERIFY() is enough when checking for a non-nullptr value, but for comparing two strings you may want to use CORRADE_COMPARE() so you can not only see that they differ, but also how they differ.

Additionally there are the CORRADE_SKIP(), CORRADE_EXPECT_FAIL() and CORRADE_EXPECT_FAIL_IF() control flow helpers that allow you to say, for example, that a particular test was skipped due to missing functionality on the given platform (printing a SKIP in the output and exiting the test case right after the statement), or to document that some algorithm produces an incorrect result due to a bug, printing an XFAIL. Passing a check while a failure is expected is treated as an error (XPASS), which helps ensure the assumptions in the tests don't get stale. Expected failures can also be disabled globally via the --no-xfail command-line option or the corresponding environment variable, see below.
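
For illustration, a hypothetical sketch combining both; the allocate() helper and the bug it references are made up:

void MyTest::hugeAllocation() {
    #ifdef CORRADE_TARGET_EMSCRIPTEN
    CORRADE_SKIP("Not enough memory on this platform.");
    #endif

    {
        /* Known bug, the (made-up) allocator rounds the size up. Remove
           the expectation once fixed. */
        CORRADE_EXPECT_FAIL("Allocator rounds up to a multiple of 8.");
        CORRADE_COMPARE(allocate(7).size(), 7);
    }

    /* Outside of the scope above, failures are again treated as errors */
    CORRADE_COMPARE(allocate(8).size(), 8);
}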

Finally, while it's possible to use Utility::Debug and any other APIs for printing to the standard output, using the CORRADE_INFO() or CORRADE_WARN() macros will prefix the output with INFO or WARN, the name of the test case and file/line information. The CORRADE_FAIL_IF() macro is then useful as an alternative to CORRADE_VERIFY() / CORRADE_COMPARE() when the implicit diagnostic message is insufficient — if the condition fails, it prints the given message prefixed with FAIL and the test case is exited.
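
A small sketch, with a made-up loadConfig() helper and Config type:

void MyTest::configDefaults() {
    Config config = loadConfig(); /* hypothetical */
    if(config.isLegacy())
        CORRADE_WARN("Testing against a legacy config format.");

    CORRADE_INFO("Config loaded, verifying defaults.");
    CORRADE_FAIL_IF(config.size() == 0,
        "The config was loaded but contains no entries.");
}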

The only reason why those are macros and not member functions is the ability to gather class/function/file/line/expression information via the preprocessor for printing the test output and the exact location of a possible test failure. If none of the CORRADE_VERIFY(), CORRADE_COMPARE() plus variants, CORRADE_FAIL_IF() or CORRADE_SKIP() macros is encountered when running a test case, the test case is reported as invalid, with ? in the output, and that causes the whole test run to fail as well. This is done in order to prevent accidents where nothing actually gets verified.

The test cases are numbered in the output and those numbers can be used on the command line to whitelist/blacklist the test cases with --only / --skip, randomly reorder them using --shuffle and more, see below for details. When all test cases pass, the executable exits with a 0 return code; in case of a failure or an invalid test case it exits with 1 to make it possible to run the tests in a batch (such as with CMake CTest). By default the testing continues with the remaining test cases after a failure; you can abort after the first failure using the --abort-on-fail command-line option.

Useful but not immediately obvious is the possibility to use templated member functions as test cases, for example when testing a certain algorithm on different data types:

struct PiTest: TestSuite::Tester {
    explicit PiTest();

    template<class T> void calculate();
};

PiTest::PiTest() {
    addTests<PiTest>({
        &PiTest::calculate<float>,
        &PiTest::calculate<double>});
}

template<class T> void PiTest::calculate() {
    setTestCaseTemplateName(std::is_same<T, float>::value ? "float" : "double");

    CORRADE_COMPARE(calculatePi<T>(), T(3.141592653589793));
}

And the corresponding output:

Starting PiTest with 2 test cases...
    OK [1] calculate<float>()
    OK [2] calculate<double>()
Finished PiTest with 0 errors out of 2 checks.

This works with all add*() functions, though note that current C++11 compilers (at least GCC and Clang) are not able to properly detect the class type when only templated functions are passed to them, so you may need to specify the type explicitly. Also, there is no easy and portable way to get a function name including its template parameters, so by default only the plain function name is shown, but you can call setTestCaseTemplateName() as in the example above to add the template parameter, or setTestCaseName() to override the full name.

Instanced tests

Often you have an algorithm that you need to test on a variety of inputs or corner cases. One solution is to use a for loop inside the test case to iterate over all inputs, but then the diagnostic on error will not report which input is to blame. Another solution is to duplicate the test case for each of the different inputs, but that becomes a maintenance nightmare pretty quickly. Turning the function into a template with a non-type template parameter is also an option, but that's not possible for all types and with huge input sizes it is often not worth the increased compilation times. Fortunately, there is addInstancedTests() that comes to the rescue:

struct RoundTest: TestSuite::Tester {
    explicit RoundTest();

    void test();
};

constexpr const struct {
    const char* name;
    float input;
    float expected;
} RoundData[] {
    {"positive down", 3.3f, 3.0f},
    {"positive up", 3.5f, 4.0f},
    {"zero", 0.0f, 0.0f},
    {"negative down", -3.5f, -4.0f},
    {"negative up", -3.3f, -3.0f}
};

RoundTest::RoundTest() {
    addInstancedTests({&RoundTest::test},
        Containers::arraySize(RoundData));
}

void RoundTest::test() {
    auto&& data = RoundData[testCaseInstanceId()];
    setTestCaseDescription(data.name);

    CORRADE_COMPARE(round(data.input), data.expected);
}

Corresponding output:

Starting RoundTest with 5 test cases...
    OK [1] test(positive down)
    OK [2] test(positive up)
    OK [3] test(zero)
    OK [4] test(negative down)
    OK [5] test(negative up)
Finished RoundTest with 0 errors out of 5 checks.

The tester class just gives you an instance index via testCaseInstanceId() and it's up to you whether you use it as an offset into some data array or generate an input from it; the above example is just a hint how one might use it. Each instance is printed to the output separately, and if one instance fails, it doesn't stop the other instances from being executed. Similarly to the templated tests, setTestCaseDescription() allows you to set a human-readable description of a given instance. If not called, the instances are just numbered in the output.

See also the TestCaseDescriptionSourceLocation class for improved file/line diagnostics for instanced test cases.

Testing in a loop

While instanced tests are usually the go-to solution when testing on a larger set of data, sometimes you need to loop over a few values and check them one by one. When such a test fails, it's often hard to know which particular value caused the failure. To fix that, you can use the CORRADE_ITERATION() macro to annotate the current iteration in case of a failure. It works with any type printable via Utility::Debug and handles nested loops as well. A silly example:

void NameTest::noYellingAllowed() {
    for(Containers::StringView name: {"Lucy", "JOHN", "Ed"}) {
        CORRADE_ITERATION(name);
        for(std::size_t i = 1; i != name.size(); ++i) {
            CORRADE_ITERATION(i);
            CORRADE_VERIFY(!std::isupper(name[i]));
        }
    }
}

On failure, the iteration value(s) will be printed next to the file/line info:

Starting NameTest with 1 test cases...
  FAIL [1] noYellingAllowed() at …/NameTest.cpp:47 (iteration JOHN, 1)
        Expression !std::isupper(name[i]) failed.
Finished NameTest with 1 errors out of 4 checks.

This macro isn't limited to just loops; it can be used to provide more context to any check. See also Compare::Container for a convenient way of comparing container contents.

Repeated tests

A complementary feature to instanced tests are repeated tests, added using addRepeatedTests(), useful for example for calling one function 10000 times to increase the probability of hitting a potential race condition. The difference from instanced tests is that all repeats are treated as executing the same code, and thus only the overall result is reported in the output. Also, unlike instanced tests, if a particular repeat fails, no further repeats are executed. The test output contains the number of executed repeats after the test case name, prefixed by @. Example of testing race conditions with multiple threads accessing the same variable:

struct RaceTest: TestSuite::Tester {
    explicit RaceTest();

    template<class T> void threadedIncrement();
};

RaceTest::RaceTest() {
    addRepeatedTests<RaceTest>({
        &RaceTest::threadedIncrement<int>,
        &RaceTest::threadedIncrement<std::atomic_int>}, 10000);
}

template<class T> void RaceTest::threadedIncrement() {
    setTestCaseTemplateName(std::is_same<T, int>::value ?
        "int" : "std::atomic_int");

    T x{0};
    int y = 1;
    auto fun = [&x, &y] {
        for(std::size_t i = 0; i != 500; ++i) x += y;
    };
    std::thread a{fun}, b{fun}, c{fun};

    a.join();
    b.join();
    c.join();

    CORRADE_COMPARE(x, 1500);
}

Depending on various factors, here is one possible output:

Starting RaceTest with 2 test cases...
  FAIL [1] threadedIncrement<int>()@167 at …/RaceTest.cpp:60
        Values x and 1500 are not the same, actual is
        1000
        but expected
        1500
    OK [2] threadedIncrement<std::atomic_int>()@10000
Finished RaceTest with 1 errors out of 10167 checks.

Similarly to testCaseInstanceId() there is testCaseRepeatId(), which gives the repeat index. Use it with care, however, as repeated tests are assumed to execute the same code every time. On the command line it is possible to increase the repeat count via --repeat-every. In addition there is --repeat-all, which behaves as if all add*() functions in the constructor were called multiple times in a loop. Combined with --shuffle this can be used to run the test cases multiple times in a random order to uncover potential unwanted interactions and order-dependent bugs.
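
For example, one might stress a single racy test case and then shake out ordering issues across the whole suite with something like:

./RaceTest --only 1 --repeat-every 100000
./RaceTest --repeat-all 10 --shuffle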

It's also possible to combine instanced and repeated tests using addRepeatedInstancedTests().

Advanced comparisons

While the diagnostic provided by CORRADE_COMPARE() is definitely better than just knowing that something failed, the CORRADE_COMPARE_AS() and CORRADE_COMPARE_WITH() macros allow for advanced comparison features in specialized cases. The Compare namespace contains various builtin comparators, some of which are listed below. It's also possible to implement custom comparators for your own use cases — see the Comparator class for details.
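
As a sketch of how they might be used (generate() and elapsedTime() are hypothetical helpers, while Compare::Container and Compare::Around are among the builtin comparators):

#include <vector>

#include <Corrade/TestSuite/Compare/Container.h>
#include <Corrade/TestSuite/Compare/Numeric.h>

void MyTest::compareFancy() {
    /* On failure, prints both containers along with the first differing
       position, instead of just saying they're different */
    std::vector<int> actual = generate();
    std::vector<int> expected{1, 2, 3, 5, 8};
    CORRADE_COMPARE_AS(actual, expected, TestSuite::Compare::Container);

    /* Fuzzy comparison with an explicit epsilon, passed as a comparator
       instance */
    CORRADE_COMPARE_WITH(elapsedTime(), 16.7f,
        TestSuite::Compare::Around<float>{0.5f});
}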

Saving diagnostic files

On comparison failure, it's sometimes desirable to inspect the generated data with an external tool. Or, in case the expected test data need to be updated, it's easier to copy over the generated data to the original file than applying changes manually. To make this easier without needing to add file-saving to the test itself, pass a path to the --save-diagnostic command-line option. Comparators that operate with files (such as Compare::File or Compare::StringToFile) will then use this path to save the actual data under the same filename as the expected file, notifying you about the operation with a SAVED message:

$ ./MyTest --save-diagnostic some/path
Starting MyTest with 1 test cases...
  FAIL [1] generateFile() at MyTest.cpp:73
        Files "a.txt" and "expected.txt" are different, actual ABC but expected abc
 SAVED [1] generateFile() -> some/path/expected.txt
Finished MyTest with 1 errors out of 1 checks. 1 check saved a diagnostic file.

Note that this functionality is not restricted to just saving the actual compared data (or to comparison failures) — third-party comparators can use it for generating diffs or providing further diagnostic meant to be viewed externally. See the Comparator class for further information.

Benchmarks

Besides verifying code correctness, it's possible to measure code performance. Unlike correctness tests, benchmark results are hard to reason about using only automated means, so there are no macros for verifying benchmark results; instead the measured values are just printed to the output for users to see. Benchmarks can be added using addBenchmarks(), the actual benchmark loop is marked by CORRADE_BENCHMARK() and the results are printed to the output with a BENCH identifier. Example benchmark comparing performance of inverse square root implementations:

struct InvSqrtBenchmark: TestSuite::Tester {
    explicit InvSqrtBenchmark();

    void naive();
    void fast();
};

InvSqrtBenchmark::InvSqrtBenchmark() {
    for(auto fn: {&InvSqrtBenchmark::naive, &InvSqrtBenchmark::fast}) {
        addBenchmarks({fn}, 500, BenchmarkType::WallTime);
        addBenchmarks({fn}, 500, BenchmarkType::CpuTime);
    }
}

void InvSqrtBenchmark::naive() {
    volatile float a; /* to avoid optimizers removing the benchmark code */
    CORRADE_BENCHMARK(1000000)
        a = 1.0f/std::sqrt(float(testCaseRepeatId()));
    CORRADE_VERIFY(a);
}

void InvSqrtBenchmark::fast() {
    volatile float a; /* to avoid optimizers removing the benchmark code */
    CORRADE_BENCHMARK(1000000)
        a = fastinvsqrt(float(testCaseRepeatId()));
    CORRADE_VERIFY(a);
}

Note that it's not an error to add one test/benchmark multiple times — here it is used to have the same code benchmarked with different timers. Possible output:

Starting InvSqrtBenchmark with 4 test cases...
 BENCH [1]   8.24 ± 0.19   ns naive()@499x1000000 (wall time)
 BENCH [2]   8.27 ± 0.19   ns naive()@499x1000000 (CPU time)
 BENCH [3]   0.31 ± 0.01   ns fast()@499x1000000 (wall time)
 BENCH [4]   0.31 ± 0.01   ns fast()@499x1000000 (CPU time)
Finished InvSqrtBenchmark with 0 errors out of 0 checks.

The number passed to addBenchmarks() is equivalent to the repeat count passed to addRepeatedTests() and specifies the measurement sample count. The number passed to CORRADE_BENCHMARK() is the number of iterations of the inner loop in one sample measurement, used to amortize the overhead and error caused by clock precision — the faster the measured code is, the more iterations it needs. The measured value is then divided by that number to represent the cost of a single iteration. testCaseRepeatId() returns the current sample index and can be used to give some input variation to the test. By default the benchmarks measure wall clock time, see BenchmarkType for other types of builtin benchmarks. The default benchmark type can also be overridden on the command line via --benchmark.

It's possible to use all CORRADE_VERIFY(), CORRADE_COMPARE() etc. verification macros inside a benchmark to check pre/post-conditions. If one of them fails, the benchmark is treated in the output just like a failing test, with no benchmark results being printed out. Keep in mind, however, that those macros have some overhead, so try not to use them inside the benchmark loop.

The benchmark output is calculated from all samples except the initial discarded ones. By default one sample is discarded; the --benchmark-discard and --repeat-every command-line options can be used to override how many samples are taken and how many of them are discarded at the start. In the output, the used sample count and sample size are printed after the test case name, prefixed with @. The output contains the mean value and the sample standard deviation, calculated as:

\[ \begin{array}{rcl} \bar{x} & = & \dfrac{1}{N} \sum\limits_{i=1}^N x_i \\ \\ \sigma_x & = & \sqrt{\dfrac{1}{N-1} \sum\limits_{i=1}^N \left( x_i - \bar{x} \right)^2} \end{array} \]

Different benchmark types have different units. Depending on value magnitude, larger units may be used as documented in BenchmarkUnits. For easier visual recognition of the values, by default the sample standard deviation is colored yellow if it is larger than 5% of the absolute value of the mean and red if it is larger than 25% of the absolute value of the mean. This can be overridden on the command line via --benchmark-yellow and --benchmark-red. See Mitigating noise in CPU benchmark results below for various ways of achieving more stable results.

It's possible to have instanced benchmarks as well, see addInstancedBenchmarks().

Custom benchmarks

It's possible to specify a custom pair of functions for initiating the benchmark and returning its result using addCustomBenchmarks(). The benchmark end function returns an unsigned 64-bit integer indicating the measured amount, in units given by BenchmarkUnits. To further describe the value being measured you can call setBenchmarkName() in the benchmark begin function. A contrived example of benchmarking the number of copies made when using std::vector::push_back():

struct VectorBenchmark: TestSuite::Tester {
    explicit VectorBenchmark();

    void insert();

    void copyCountBegin();
    std::uint64_t copyCountEnd();
};

namespace {
    std::uint64_t count = 0;

    struct CopyCounter {
        CopyCounter() = default;
        CopyCounter(const CopyCounter&) {
            ++count;
        }
    };

    enum: std::size_t { InsertDataCount = 3 };

    constexpr const struct {
        const char* name;
        std::size_t count;
    } InsertData[InsertDataCount]{
        {"100", 100},
        {"1k", 1000},
        {"10k", 10000}
    };
}

VectorBenchmark::VectorBenchmark() {
    addCustomInstancedBenchmarks({&VectorBenchmark::insert}, 1, InsertDataCount,
        &VectorBenchmark::copyCountBegin,
        &VectorBenchmark::copyCountEnd,
        BenchmarkUnits::Count);
}

void VectorBenchmark::insert() {
    auto&& data = InsertData[testCaseInstanceId()];
    setTestCaseDescription(data.name);

    std::vector<CopyCounter> v;
    CORRADE_BENCHMARK(1)
        for(std::size_t i = 0; i != data.count; ++i)
            v.push_back({});
}

void VectorBenchmark::copyCountBegin() {
    setBenchmarkName("copy count");
    count = 0;
}

std::uint64_t VectorBenchmark::copyCountEnd() {
    return count;
}

Running the benchmark shows that calling push_back() for 10 thousand elements actually causes the copy constructor to be called 26 thousand times:

Starting VectorBenchmark with 3 test cases...
 BENCH [1] 227.00             insert(100)@1x1 (copy count)
 BENCH [2]   2.02          k  insert(1k)@1x1 (copy count)
 BENCH [3]  26.38          k  insert(10k)@1x1 (copy count)
Finished VectorBenchmark with 0 errors out of 0 checks.

Specifying setup/teardown routines

While the common practice in C++ is to use RAII for resource lifetime management, sometimes you may need to execute arbitrary code at the beginning and end of each test case. For this, all of addTests(), addInstancedTests(), addRepeatedTests(), addRepeatedInstancedTests(), addBenchmarks(), addInstancedBenchmarks(), addCustomBenchmarks() and addCustomInstancedBenchmarks() have an overload that additionally takes a pair of parameterless void functions for setup and teardown. The setup function is called before and the teardown function after each test case run, regardless of whether the test case passed or failed.
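
A minimal sketch of the overload taking the setup/teardown pair; the test cases and the state they would manage are made up:

struct BufferTest: TestSuite::Tester {
    explicit BufferTest();

    void setUp();       /* called before each test case below */
    void tearDown();    /* called after each test case, even on failure */

    void fill();
    void map();
};

BufferTest::BufferTest() {
    addTests({&BufferTest::fill,
              &BufferTest::map},
        &BufferTest::setUp, &BufferTest::tearDown);
}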

Catching exceptions

If a test case fails with an unhandled exception, a THROW is printed to the output, together with a (platform-specific, mangled) name of the exception type and the contents of std::exception::what(). No file/line info is provided in this case, as it's not easily possible to know where the exception originated. Only exceptions derived from std::exception are caught, to avoid interfering with serious issues such as memory access errors. If catching unhandled exceptions is not desired (for example when you want to do post-mortem debugging of the stack trace leading to the exception), it can be disabled with the --no-catch command-line option.

Apart from the above, the test suite doesn't provide any builtin exception support — if you need to verify that an exception was or wasn't thrown, you're expected to implement a try/catch block inside the test case and verify the desired properties directly.
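
A possible sketch of such a check, assuming a hypothetical parse() function that's expected to throw and <stdexcept> being included:

void MyTest::parseThrowsOnEmptyInput() {
    bool thrown = false;
    try {
        parse("");
    } catch(const std::invalid_argument&) {
        thrown = true;
    }
    CORRADE_VERIFY(thrown);
}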

Command-line options

Command-line options that make sense to be set globally for multiple test cases are also configurable via environment variables for greater flexibility, for example when running the tests in a batch via ctest.

Usage:

./my-test [-h|--help] [-c|--color on|off|auto] [--skip N1,N2-N3…]
    [--skip-tests] [--skip-benchmarks] [--only N1,N2-N3…] [--shuffle]
    [--repeat-every N] [--repeat-all N] [--abort-on-fail] [--no-xfail]
    [--no-catch] [--save-diagnostic PATH] [--verbose] [--benchmark TYPE]
    [--benchmark-discard N] [--benchmark-yellow N] [--benchmark-red N]

Arguments:

  • -h, --help — display this help message and exit
  • -c, --color on|off|auto — colored output (environment: CORRADE_TEST_COLOR, default: auto). The auto option enables color output in case an interactive terminal is detected. Note that on Windows it is possible to output colors only directly to an interactive terminal unless CORRADE_UTILITY_USE_ANSI_COLORS is defined.
  • --skip N1,N2-N3… — skip test cases with given numbers. See Utility::String::parseNumberSequence() for syntax description.
  • --skip-tests — skip all tests (environment: CORRADE_TEST_SKIP_TESTS=ON|OFF)
  • --skip-benchmarks — skip all benchmarks (environment: CORRADE_TEST_SKIP_BENCHMARKS=ON|OFF)
  • --only N1,N2-N3… — run only test cases with given numbers. See Utility::String::parseNumberSequence() for syntax description.
  • --shuffle — randomly shuffle test case order (environment: CORRADE_TEST_SHUFFLE=ON|OFF)
  • --repeat-every N — repeat every test case N times (environment: CORRADE_TEST_REPEAT_EVERY, default: 1)
  • --repeat-all N — repeat all test cases N times (environment: CORRADE_TEST_REPEAT_ALL, default: 1)
  • -X, --abort-on-fail — abort after first failure (environment: CORRADE_TEST_ABORT_ON_FAIL=ON|OFF)
  • --no-xfail — disallow expected failures (environment: CORRADE_TEST_NO_XFAIL=ON|OFF)
  • --no-catch — don't catch standard exceptions (environment: CORRADE_TEST_NO_CATCH=ON|OFF)
  • -S, --save-diagnostic PATH — save diagnostic files to given path (environment: CORRADE_TEST_SAVE_DIAGNOSTIC)
  • -v, --verbose — enable verbose output (environment: CORRADE_TEST_VERBOSE=ON|OFF). Note that there isn't any corresponding "quiet" option; if you want to see just the failures, redirect the standard output away.
  • --benchmark TYPE — default benchmark type (environment: CORRADE_TEST_BENCHMARK). Supported benchmark types:
    • wall-time — wall time spent
    • cpu-time — CPU time spent
    • cpu-cycles — CPU cycles spent (x86 only, gives zero result elsewhere)
  • --benchmark-discard N — discard first N measurements of each benchmark (environment: CORRADE_TEST_BENCHMARK_DISCARD, default: 1)
  • --benchmark-yellow N — deviation threshold for marking benchmark yellow (environment: CORRADE_TEST_BENCHMARK_YELLOW, default: 0.05)
  • --benchmark-red N — deviation threshold for marking benchmark red (environment: CORRADE_TEST_BENCHMARK_RED, default: 0.25)

Compiling and running tests

In general, just compiling the executable and linking it to the TestSuite library is enough; no further setup is needed. When running, the test produces output on standard output / standard error and exits with a non-zero code in case of a test failure.

Using CMake

If you are using CMake, there's a convenience corrade_add_test() CMake macro that creates the executable, links the Corrade::TestSuite library to it and adds it to CTest. Besides that it is able to link other arbitrary libraries to the executable and specify a list of files used by the tests. It provides additional useful features on various platforms:

  • On Windows, the macro links the test executable to the Corrade::Main library for ANSI color support, UTF-8 argument parsing and UTF-8 output encoding.
  • If compiling for Emscripten, using corrade_add_test() makes CTest run the resulting *.js file via Node.js. Also it is able to bundle all files specified in FILES into the virtual Emscripten filesystem, making it easy to run file-based tests on this platform; all environment options are passed through as well. The macro also creates a runner for manual testing in a browser, see below for more information.
  • If Xcode projects are generated via CMake and CORRADE_TESTSUITE_TARGET_XCTEST is enabled, corrade_add_test() makes the test executables in a way compatible with XCTest, making it easy to run them directly from Xcode. Running the tests via ctest will also use XCTest.
  • If building for Android, using corrade_add_test() will make CTest upload the test executables and all files specified in FILES onto the device or emulator via adb, run them there with all environment options passed through as well, and transfer the test results back to the host.

An example of using the corrade_add_test() macro is below. The test executable gets built from the specified source with the libJPEG library linked, and the *.jpg files will be available on desktop, Emscripten and Android in the path specified by JPEG_TEST_DIR, which is saved into a configure.h file inside the current build directory:

if(CORRADE_TARGET_EMSCRIPTEN OR CORRADE_TARGET_ANDROID)
    set(JPEG_TEST_DIR ".")
else()
    set(JPEG_TEST_DIR ${CMAKE_CURRENT_SOURCE_DIR})
endif()

# Contains just
#  #define JPEG_TEST_DIR "${JPEG_TEST_DIR}"
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/configure.h.cmake
               ${CMAKE_CURRENT_BINARY_DIR}/configure.h)

corrade_add_test(JpegTest JpegTest.cpp
    LIBRARIES ${JPEG_LIBRARIES}
    FILES rgb.jpg rgba.jpg grayscale.jpg)
target_include_directories(JpegTest PRIVATE ${CMAKE_CURRENT_BINARY_DIR})

Manually running the tests on Android

If not using CMake CTest, Android tests can be run manually. With a developer-enabled Android device connected or an Android emulator running, you can use adb to upload the built test to the device's temporary directory and run it there:

adb push <path-to-the-test-build>/MyTest /data/local/tmp
adb shell /data/local/tmp/MyTest

You can also use adb shell to log directly into the device shell and continue from there. All command-line arguments are supported.

Manually running the tests on Emscripten

When not using CMake CTest, Emscripten tests can be run directly using Node.js. Emscripten sideloads the WebAssembly or asm.js binary files from the current working directory, so you need to cd into the test build directory first:

cd <test-build-directory>
node MyTest.js

See also the --embed-files emcc option for a possibility to bundle test files with the executable.

Running Emscripten tests in a browser

Besides running tests using Node.js, it's possible to run each test case manually in a browser. Browsers require the executables to be accessed via a webserver — if you have Python installed, you can simply start serving the contents of your build directory using the following command:

cd <test-build-directory>
python -m http.server

The webserver is then available at http://localhost:8000. It supports directory listing, so you can navigate to each test case runner HTML file (look for e.g. MyTest.html). Unfortunately it's currently not possible to run all browser tests in a batch or automate the process in any other way.

Mitigating noise in CPU benchmark results

CPU frequency scaling, which is often enabled by default for power saving reasons, can add a lot of noise to benchmarks that measure time. Picking a higher iteration and repeat count in CORRADE_BENCHMARK() and addBenchmarks() puts more strain on the system, forcing it to run at a higher frequency for a longer period of time, which together with having more data to average tends to produce more stable results. However it's often impractical to hardcode these to very high values, as that hurts iteration times, and the repeat count needed for stable results may vary wildly between debug and release builds.

To quickly increase repeat count when running the test it's possible to use the --repeat-every command-line option or the corresponding environment variable. The --repeat-all option, possibly combined with --shuffle, will result in benchmarks being run and appearing in the output several times and possibly in random order, which could uncover various otherwise hard-to-detect implicit dependencies between the code being measured and application or system state such as cold caches.

On Linux or Android the test runner will attempt to query the CPU frequency scaling governor. If it's not set to performance, the benchmark output will contain a warning, with -v / --verbose showing a concrete suggestion for fixing it. Switching to a performance governor can be done with cpupower on Linux:

sudo cpupower frequency-set --governor performance

An equivalent command on Android is the following, which requires a rooted device:

echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

Public types

class TesterConfiguration
Tester configuration.
enum class BenchmarkType { Default = 1, WallTime = 2, CpuTime = 3, CpuCycles = 4 }
Benchmark type.
enum class BenchmarkUnits { Nanoseconds = 100, Cycles = 101, Instructions = 102, Bytes = 103, Count = 104, RatioThousandths = 105 new in Git master, PercentageThousandths = 106 new in Git master }
Custom benchmark units.
using Debug = Corrade::Utility::Debug
Alias for debug output.
using Warning = Corrade::Utility::Warning
Alias for warning output.
using Error = Corrade::Utility::Error
Alias for error output.

Constructors, destructors, conversion operators

Tester(const TesterConfiguration& configuration = TesterConfiguration{}) explicit
Constructor.

Public functions

auto arguments() -> std::pair<int&, char**>
Command-line arguments.
template<class Derived>
void addTests(std::initializer_list<void(Derived::*)()> tests)
Add test cases.
template<class Derived>
void addRepeatedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t repeatCount)
Add repeated test cases.
template<class Derived>
void addTests(std::initializer_list<void(Derived::*)()> tests, void(Derived::*)() setup, void(Derived::*)() teardown)
Add test cases with explicit setup and teardown functions.
template<class Derived>
void addRepeatedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t repeatCount, void(Derived::*)() setup, void(Derived::*)() teardown)
Add repeated test cases with explicit setup and teardown functions.
template<class Derived>
void addInstancedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t instanceCount)
Add instanced test cases.
template<class Derived>
void addRepeatedInstancedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t repeatCount, std::size_t instanceCount)
Add repeated instanced test cases.
template<class Derived>
void addInstancedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t instanceCount, void(Derived::*)() setup, void(Derived::*)() teardown)
Add instanced test cases with explicit setup and teardown functions.
template<class Derived>
void addRepeatedInstancedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t repeatCount, std::size_t instanceCount, void(Derived::*)() setup, void(Derived::*)() teardown)
Add repeated instanced test cases with explicit setup and teardown functions.
template<class Derived>
void addBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, BenchmarkType benchmarkType = BenchmarkType::Default)
Add benchmarks.
template<class Derived>
void addBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, void(Derived::*)() setup, void(Derived::*)() teardown, BenchmarkType benchmarkType = BenchmarkType::Default)
Add benchmarks with explicit setup and teardown functions.
template<class Derived>
void addCustomBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, void(Derived::*)() benchmarkBegin, std::uint64_t(Derived::*)() benchmarkEnd, BenchmarkUnits benchmarkUnits)
Add custom benchmarks.
template<class Derived>
void addCustomBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, void(Derived::*)() setup, void(Derived::*)() teardown, void(Derived::*)() benchmarkBegin, std::uint64_t(Derived::*)() benchmarkEnd, BenchmarkUnits benchmarkUnits)
Add custom benchmarks with explicit setup and teardown functions.
template<class Derived>
void addInstancedBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, std::size_t instanceCount, BenchmarkType benchmarkType = BenchmarkType::Default)
Add instanced benchmarks.
template<class Derived>
void addInstancedBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, std::size_t instanceCount, void(Derived::*)() setup, void(Derived::*)() teardown, BenchmarkType benchmarkType = BenchmarkType::Default)
Add instanced benchmarks with explicit setup and teardown functions.
template<class Derived>
void addCustomInstancedBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, std::size_t instanceCount, void(Derived::*)() benchmarkBegin, std::uint64_t(Derived::*)() benchmarkEnd, BenchmarkUnits benchmarkUnits)
Add custom instanced benchmarks.
template<class Derived>
void addCustomInstancedBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, std::size_t instanceCount, void(Derived::*)() setup, void(Derived::*)() teardown, void(Derived::*)() benchmarkBegin, std::uint64_t(Derived::*)() benchmarkEnd, BenchmarkUnits benchmarkUnits)
Add custom instanced benchmarks with explicit setup and teardown functions.
auto testCaseId() const -> std::size_t
Test case ID.
auto testCaseInstanceId() const -> std::size_t
Test case instance ID.
auto testCaseRepeatId() const -> std::size_t
Test case repeat ID.
auto testName() const -> Containers::StringView new in Git master
Test name.
void setTestName(Containers::StringView name)
Set custom test name.
void setTestName(const char* name)
auto testCaseName() const -> Containers::StringView new in Git master
Test case name.
void setTestCaseName(Containers::StringView name)
Set custom test case name.
void setTestCaseName(const char* name)
auto testCaseTemplateName() const -> Containers::StringView new in Git master
Test case template name.
void setTestCaseTemplateName(Containers::StringView name) new in 2019.10
Set test case template name.
void setTestCaseTemplateName(const char* name)
void setTestCaseTemplateName(Containers::ArrayView<const Containers::StringView> names) new in Git master
void setTestCaseTemplateName(std::initializer_list<Containers::StringView> names)
void setTestCaseTemplateName(Containers::ArrayView<const char*const> names)
void setTestCaseTemplateName(std::initializer_list<const char*> names)
auto testCaseDescription() const -> Containers::StringView new in Git master
Test case description.
void setTestCaseDescription(Containers::StringView description)
Set test case description.
void setTestCaseDescription(const char* description)
void setTestCaseDescription(const TestCaseDescriptionSourceLocation& description) new in Git master
Set test case description with source location.
auto benchmarkName() const -> Containers::StringView new in Git master
Benchmark name.
void setBenchmarkName(Containers::StringView name)
Set benchmark name.
void setBenchmarkName(const char* name)

Enum documentation

enum class Corrade::TestSuite::Tester::BenchmarkType

Benchmark type.

Enumerators
Default

Default. Equivalent to BenchmarkType::WallTime, but can be overridden on command-line using the --benchmark option.

WallTime

Wall time. Suitable for measuring events in microseconds and up. While the reported time is in nanoseconds, the actual timer granularity may differ from platform to platform. To measure shorter events, increase the number of iterations passed to CORRADE_BENCHMARK() to amortize the error or use a different benchmark type.

CpuTime

CPU time. Suitable for measuring most events (microseconds and up). While the reported time is in nanoseconds, the actual timer granularity may differ from platform to platform (for example on Windows the CPU clock is reported in multiples of 100 ns). To measure shorter events, increase the number of iterations passed to CORRADE_BENCHMARK() to amortize the error or use a different clock.

CpuCycles

CPU cycle count. Suitable for measuring sub-millisecond events, but note that on newer architectures the cycle counter frequency is constant and thus the measured value is independent of the CPU frequency, so it in fact measures time and not the actual cycles spent. See for example https://randomascii.wordpress.com/2011/07/29/rdtsc-in-the-age-of-sandybridge/ for more information.

enum class Corrade::TestSuite::Tester::BenchmarkUnits

Custom benchmark units.

Unit of measurement output by custom benchmarks.

Enumerators
Nanoseconds

Time in nanoseconds. Depending on the magnitude, the value is shown as ns, µs, ms and s.

Cycles

Processor cycle count. Depending on the magnitude, the value is shown as C, kC, MC and GC (with a multiplier of 1000).

Instructions

Processor instruction count. Depending on the magnitude, the value is shown as I, kI, MI and GI (with a multiplier of 1000).

Bytes

Memory (in bytes). Depending on the magnitude, the value is shown as B, kB, MB and GB (with a multiplier of 1024).

Count

Generic count. Depending on the magnitude, the value is shown with no suffix or with k, M or G (with a multiplier of 1000).

RatioThousandths new in Git master

Ratio expressed in 1/1000s. The value is shown divided by 1000 and depending on the magnitude it's shown with no suffix or with k, M or G (with a multiplier of 1000).

PercentageThousandths new in Git master

Percentage expressed in 1/1000s. The value is shown divided by 1000 and with a % suffix. In the unfortunate scenario where the magnitude reaches 1000 and more, it's shown with k, M or G (with a multiplier of 1000).

Typedef documentation

typedef Corrade::Utility::Debug Corrade::TestSuite::Tester::Debug

Alias for debug output.

For convenient debug output inside test cases (instead of using the fully qualified name):

void myTestCase() {
    int a = 4;
    Debug{} << a;
    CORRADE_COMPARE(a + a, 8);
}

typedef Corrade::Utility::Warning Corrade::TestSuite::Tester::Warning

Alias for warning output.

See Debug for more information.

typedef Corrade::Utility::Error Corrade::TestSuite::Tester::Error

Alias for error output.

See Debug for more information.

Function documentation

Corrade::TestSuite::Tester::Tester(const TesterConfiguration& configuration = TesterConfiguration{}) explicit

Constructor.

Parameters
configuration Optional configuration

std::pair<int&, char**> Corrade::TestSuite::Tester::arguments()

Command-line arguments.

Populated by CORRADE_TEST_MAIN(). Note that the argument values are usually immutable (thus const char* const*); they're however exposed as just char** to make passing them to 3rd party APIs easier.
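
A sketch of forwarding them from the test constructor to a hypothetical third-party API taking argc/argv:

MyTest::MyTest() {
    /* thirdPartyInit() is a made-up function */
    thirdPartyInit(arguments().first, arguments().second);

    addTests({&MyTest::something});
}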

template<class Derived>
void Corrade::TestSuite::Tester::addTests(std::initializer_list<void(Derived::*)()> tests)

Add test cases.

Adds one or more test cases to be executed. It's not an error to call this function multiple times or add one test case more than once.

template<class Derived>
void Corrade::TestSuite::Tester::addRepeatedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t repeatCount)

Add repeated test cases.

Unlike the function above, this one repeats each of the test cases until it fails or repeatCount is reached. Useful for stability or resource leak checking. Each test case appears in the output log only once. It's not an error to call this function multiple times or add a particular test case more than once — in that case it will appear in the output log once for each occurrence in the list.

template<class Derived>
void Corrade::TestSuite::Tester::addTests(std::initializer_list<void(Derived::*)()> tests, void(Derived::*)() setup, void(Derived::*)() teardown)

Add test cases with explicit setup and teardown functions.

Parameters
tests List of test cases to run
setup Setup function
teardown Teardown function

In addition to the behavior of addTests() above, the setup function is called before every test case in the list and the teardown function is called after every test case in the list, regardless of whether it passed, failed or was skipped. Using verification macros in setup or teardown function is not allowed. It's not an error to call this function multiple times or add one test case more than once.

template<class Derived>
void Corrade::TestSuite::Tester::addRepeatedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t repeatCount, void(Derived::*)() setup, void(Derived::*)() teardown)

Add repeated test cases with explicit setup and teardown functions.

Unlike the function above, this one repeats each of the test cases until it fails or repeatCount is reached. Useful for stability or resource leak checking. The setup and teardown functions are called again for each repeat of each test case. Each test case appears in the output log only once. It's not an error to call this function multiple times or add a particular test case more than once — in that case it will appear in the output log once for each occurrence in the list.

template<class Derived>
void Corrade::TestSuite::Tester::addInstancedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t instanceCount)

Add instanced test cases.

Unlike addTests(), this function runs each of the test cases instanceCount times. Useful for data-driven tests. Each test case appears in the output once for each instance. It's not an error to call this function multiple times or add one test case more than once — in that case it will appear once for each instance of each occurrence in the list.

template<class Derived>
void Corrade::TestSuite::Tester::addRepeatedInstancedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t repeatCount, std::size_t instanceCount)

Add repeated instanced test cases.

Unlike the function above, this one repeats each of the test case instances until it fails or repeatCount is reached. Useful for stability or resource leak checking. Each test case appears in the output once for each instance. It's not an error to call this function multiple times or add one test case more than once — in that case it will appear once for each instance of each occurrence in the list.

template<class Derived>
void Corrade::TestSuite::Tester::addInstancedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t instanceCount, void(Derived::*)() setup, void(Derived::*)() teardown)

Add instanced test cases with explicit setup and teardown functions.

Parameters
tests List of test cases to run
instanceCount Instance count
setup Setup function
teardown Teardown function

In addition to the behavior of addInstancedTests() above, the setup function is called before every instance of every test case in the list and the teardown function is called after every instance of every test case in the list, regardless of whether it passed, failed or was skipped. Using verification macros in setup or teardown function is not allowed. It's not an error to call this function multiple times or add one test case more than once — in that case it will appear once for each instance of each occurrence in the list.

template<class Derived>
void Corrade::TestSuite::Tester::addRepeatedInstancedTests(std::initializer_list<void(Derived::*)()> tests, std::size_t repeatCount, std::size_t instanceCount, void(Derived::*)() setup, void(Derived::*)() teardown)

Add repeated instanced test cases with explicit setup and teardown functions.

Unlike the function above, this one repeats each of the test case instances until it fails or repeatCount is reached. Useful for stability or resource leak checking. The setup and teardown functions are called again for each repeat of each instance of each test case. The test case appears in the output once for each instance. It's not an error to call this function multiple times or add one test case more than once — in that case it will appear once for each instance of each occurrence in the list.

template<class Derived>
void Corrade::TestSuite::Tester::addBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, BenchmarkType benchmarkType = BenchmarkType::Default)

Add benchmarks.

Parameters
benchmarks List of benchmarks to run
batchCount Batch count
benchmarkType Benchmark type

For each added benchmark measures the time spent executing code inside a statement or block denoted by CORRADE_BENCHMARK(). It is possible to use all verification macros inside the benchmark. The batchCount parameter specifies how many batches will be run to make the measurement more precise, while the batch size parameter passed to CORRADE_BENCHMARK() specifies how many iterations will be done in each batch to minimize overhead. It's not an error to call this function multiple times or add one benchmark more than once.

template<class Derived>
void Corrade::TestSuite::Tester::addBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, void(Derived::*)() setup, void(Derived::*)() teardown, BenchmarkType benchmarkType = BenchmarkType::Default)

Add benchmarks with explicit setup and teardown functions.

Parameters
benchmarks List of benchmarks to run
batchCount Batch count
setup Setup function
teardown Teardown function
benchmarkType Benchmark type

In addition to the behavior of addBenchmarks() above, the setup function is called before every batch of every benchmark in the list and the teardown function is called after every batch of every benchmark in the list, regardless of whether it passed, failed or was skipped. Using verification macros in setup or teardown function is not allowed. It's not an error to call this function multiple times or add one benchmark more than once.

template<class Derived>
void Corrade::TestSuite::Tester::addCustomBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, void(Derived::*)() benchmarkBegin, std::uint64_t(Derived::*)() benchmarkEnd, BenchmarkUnits benchmarkUnits)

Add custom benchmarks.

Parameters
benchmarks List of benchmarks to run
batchCount Batch count
benchmarkBegin Benchmark begin function
benchmarkEnd Benchmark end function
benchmarkUnits Benchmark units

Unlike the functions above, this one uses user-supplied measurement functions. The benchmarkBegin parameter starts the measurement, the benchmarkEnd parameter ends the measurement and returns the measured value, which is in units given by benchmarkUnits. It's not an error to call this function multiple times or add one benchmark more than once.

template<class Derived>
void Corrade::TestSuite::Tester::addCustomBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, void(Derived::*)() setup, void(Derived::*)() teardown, void(Derived::*)() benchmarkBegin, std::uint64_t(Derived::*)() benchmarkEnd, BenchmarkUnits benchmarkUnits)

Add custom benchmarks with explicit setup and teardown functions.

Parameters
benchmarks List of benchmarks to run
batchCount Batch count
setup Setup function
teardown Teardown function
benchmarkBegin Benchmark begin function
benchmarkEnd Benchmark end function
benchmarkUnits Benchmark units

In addition to the behavior of addCustomBenchmarks() above, the setup function is called before every batch of every benchmark in the list and the teardown function is called after every batch of every benchmark in the list, regardless of whether it passed, failed or was skipped. Using verification macros in setup or teardown function is not allowed. It's not an error to call this function multiple times or add one benchmark more than once.

template<class Derived>
void Corrade::TestSuite::Tester::addInstancedBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, std::size_t instanceCount, BenchmarkType benchmarkType = BenchmarkType::Default)

Add instanced benchmarks.

Parameters
benchmarks List of benchmarks to run
batchCount Batch count
instanceCount Instance count
benchmarkType Benchmark type

Unlike addBenchmarks(), this function runs each of the benchmarks instanceCount times. Useful for data-driven tests. Each test case appears in the output once for each instance. It's not an error to call this function multiple times or add one benchmark more than once — in that case it will appear once for each instance of each occurrence in the list.

template<class Derived>
void Corrade::TestSuite::Tester::addInstancedBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, std::size_t instanceCount, void(Derived::*)() setup, void(Derived::*)() teardown, BenchmarkType benchmarkType = BenchmarkType::Default)

Add instanced benchmarks with explicit setup and teardown functions.

Parameters
benchmarks List of benchmarks to run
batchCount Batch count
instanceCount Instance count
setup Setup function
teardown Teardown function
benchmarkType Benchmark type

In addition to the behavior of the above function, the setup function is called before every instance of every batch of every benchmark in the list and the teardown function is called after every instance of every batch of every benchmark in the list, regardless of whether it passed, failed or was skipped. Using verification macros in setup or teardown function is not allowed. It's not an error to call this function multiple times or add one benchmark more than once — in that case it will appear once for each instance of each occurrence in the list.

template<class Derived>
void Corrade::TestSuite::Tester::addCustomInstancedBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, std::size_t instanceCount, void(Derived::*)() benchmarkBegin, std::uint64_t(Derived::*)() benchmarkEnd, BenchmarkUnits benchmarkUnits)

Add custom instanced benchmarks.

Parameters
benchmarks List of benchmarks to run
batchCount Batch count
instanceCount Instance count
benchmarkBegin Benchmark begin function
benchmarkEnd Benchmark end function
benchmarkUnits Benchmark units

Unlike the functions above, this one uses user-supplied measurement functions. The benchmarkBegin parameter starts the measurement, the benchmarkEnd parameter ends the measurement and returns the measured value, which is in units given by benchmarkUnits. It's not an error to call this function multiple times or add one benchmark more than once — in that case it will appear once for each instance of each occurrence in the list.

template<class Derived>
void Corrade::TestSuite::Tester::addCustomInstancedBenchmarks(std::initializer_list<void(Derived::*)()> benchmarks, std::size_t batchCount, std::size_t instanceCount, void(Derived::*)() setup, void(Derived::*)() teardown, void(Derived::*)() benchmarkBegin, std::uint64_t(Derived::*)() benchmarkEnd, BenchmarkUnits benchmarkUnits)

Add custom instanced benchmarks with explicit setup and teardown functions.

Parameters
benchmarks List of benchmarks to run
batchCount Batch count
instanceCount Instance count
setup Setup function
teardown Teardown function
benchmarkBegin Benchmark begin function
benchmarkEnd Benchmark end function
benchmarkUnits Benchmark units

In addition to the behavior of addCustomInstancedBenchmarks() above, the setup function is called before every instance of every batch of every benchmark in the list and the teardown function is called after every instance of every batch of every benchmark in the list, regardless of whether it passed, failed or was skipped. Using verification macros in setup or teardown function is not allowed. It's not an error to call this function multiple times or add one benchmark more than once — in that case it will appear once for each instance of each occurrence in the list.

std::size_t Corrade::TestSuite::Tester::testCaseId() const

Test case ID.

Returns ID of the test case that is currently executing, starting from 1. Expects that this function is called from within a test case or its corresponding setup/teardown function.

std::size_t Corrade::TestSuite::Tester::testCaseInstanceId() const

Test case instance ID.

Returns instance ID of the instanced test case that is currently executing, starting from 0. Expects that this function is called from within an instanced test case or its corresponding setup/teardown function.

std::size_t Corrade::TestSuite::Tester::testCaseRepeatId() const

Test case repeat ID.

Returns repeat ID of the repeated test case that is currently executing, starting from 0. Expects that this function is called from within a repeated test case or its corresponding setup/teardown function.

void Corrade::TestSuite::Tester::setTestName(Containers::StringView name)

Set custom test name.

By default the test name is gathered together with the test filename by the CORRADE_TEST_MAIN() macro and is equivalent to the fully qualified class name.

A view that has both Containers::StringViewFlag::Global and Containers::StringViewFlag::NullTerminated set (such as coming from a Containers::StringView literal) will be used without having to make an owned string copy internally.

void Corrade::TestSuite::Tester::setTestName(const char* name)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

void Corrade::TestSuite::Tester::setTestCaseName(Containers::StringView name)

Set custom test case name.

By default the test case name is gathered in the check macros and is equivalent to the following:

setTestCaseName(CORRADE_FUNCTION);

A view that has both Containers::StringViewFlag::Global and Containers::StringViewFlag::NullTerminated set (such as coming from a Containers::StringView literal) will be used without having to make an owned string copy internally.

void Corrade::TestSuite::Tester::setTestCaseName(const char* name)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

void Corrade::TestSuite::Tester::setTestCaseTemplateName(Containers::StringView name) new in 2019.10

Set test case template name.

Useful to distinguish different specializations of the same templated test case. Equivalent to the following called from inside the test case:

setTestCaseName(Utility::format("{}<{}>", CORRADE_FUNCTION, name));

A view that has both Containers::StringViewFlag::Global and Containers::StringViewFlag::NullTerminated set (such as coming from a Containers::StringView literal) will be used without having to make an owned string copy internally.

void Corrade::TestSuite::Tester::setTestCaseTemplateName(const char* name)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

void Corrade::TestSuite::Tester::setTestCaseTemplateName(Containers::ArrayView<const Containers::StringView> names) new in Git master

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Useful for test cases that are templated with more than one parameter. The names are joined with a comma (,).

Unlike with setTestCaseTemplateName(Containers::StringView), a new string for the joined result is always created, so the presence of any Containers::StringViewFlags in the passed views doesn't matter.
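
For example, a test case templated on two parameters might do the following (the type-to-name mapping is simplified):

template<class T, class U> void MyTest::convert() {
    setTestCaseTemplateName({
        std::is_same<T, float>::value ? "float" : "double",
        std::is_same<U, int>::value ? "int" : "long"});

    // …
}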

void Corrade::TestSuite::Tester::setTestCaseTemplateName(std::initializer_list<Containers::StringView> names)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

void Corrade::TestSuite::Tester::setTestCaseTemplateName(Containers::ArrayView<const char*const> names)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

void Corrade::TestSuite::Tester::setTestCaseTemplateName(std::initializer_list<const char*> names)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

void Corrade::TestSuite::Tester::setTestCaseDescription(Containers::StringView description)

Set test case description.

Additional text displayed after the test case name. By default the description is empty for non-instanced test cases and the instance ID for instanced test cases. If you use setTestCaseDescription(const TestCaseDescriptionSourceLocation&) instead, output messages will also contain the file/line where the instanced test case data were defined. See the TestCaseDescriptionSourceLocation class documentation for an example.

A view that has both Containers::StringViewFlag::Global and Containers::StringViewFlag::NullTerminated set (such as coming from a Containers::StringView literal) will be used without having to make an owned string copy internally.

void Corrade::TestSuite::Tester::setTestCaseDescription(const char* description)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

void Corrade::TestSuite::Tester::setTestCaseDescription(const TestCaseDescriptionSourceLocation& description) new in Git master

Set test case description with source location.

Compared to setTestCaseDescription(Containers::StringView), output messages printed for the test case will also contain the file/line where the instanced test case data were defined. See the TestCaseDescriptionSourceLocation class documentation for an example.

void Corrade::TestSuite::Tester::setBenchmarkName(Containers::StringView name)

Set benchmark name.

In case of addCustomBenchmarks() and addCustomInstancedBenchmarks() provides the name for the unit measured, for example "wall time".

A view that has both Containers::StringViewFlag::Global and Containers::StringViewFlag::NullTerminated set (such as coming from a Containers::StringView literal) will be used without having to make an owned string copy internally.

void Corrade::TestSuite::Tester::setBenchmarkName(const char* name)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.