CLion 2019.1 Help

Unit Testing Tutorial

This tutorial gives an overview of the unit testing approach and discusses three popular frameworks: Google Test, Boost.Test, and Catch. The second part guides you through the process of including these frameworks into your project in CLion, and then we will take a look at the tools that CLion provides to help you work with unit testing.

Basics

Unit testing aims to check individual units of your source code separately. A unit here is the smallest part of code that can be tested in isolation, for example, a free function or a class method. Unit testing helps:

  1. Modularize your code

    Code's testability depends on its design, and unit tests facilitate breaking the code into specialized pieces.

  2. Avoid regressions

    When you have a suite of unit tests, you can run them iteratively to ensure that everything keeps working correctly every time you add new functionality or introduce changes.

  3. Document your code

    Running, debugging, or even just reading tests can give a lot of information about how the original code works, so you can use them as implicit documentation.

A single unit test is a method that checks some specific functionality and has clear pass/fail criteria. The generalized structure of a single test looks like this:

TEST(TestGroupName, TestName) {
    // 1 - setup block
    // 2 - run the under-test functionality
    // 3 - check the results (the block of asserts)
}

Good practices for unit testing include:

  • Creating tests for all publicly exposed functions, including class constructors and operators.

  • Covering all code paths and checking both trivial and edge cases, including those with incorrect input data (see negative testing).

  • Ensuring that each test works independently and doesn't prevent other tests from executing.

  • Organizing tests in a way that the order in which you run them doesn't affect the results.

It's useful to group test cases when they are logically connected or use the same data. Suites combine tests with common functionality (for example, when they perform different cases for the same function). Fixture classes let you organize shared resources for multiple tests. Fixtures are used to set up and clean up the environment for each test within a group and avoid code duplication.

Unit testing is often combined with mocking. Mock objects are lightweight implementations of test targets, used when the under-test functionality involves complex dependencies and it is difficult to construct a viable test case using the actual object.

Frameworks

Unit testing involves a lot of routine operations: writing stub test code, implementing main(), printing output messages, and so on. Unit testing frameworks not only help automate these tasks, but also let you benefit from:

  • Manageable assertion behavior

    With a framework, you can specify whether or not a failure of a single case should cancel the whole test/suite execution. Along with the regular ASSERT, frameworks include EXPECT/CHECK macros that don't interrupt your test program on failure.

  • Various checkers

    Checkers are macros for comparing the expected and the actual results. Checkers provided by testing frameworks usually have configurable severity (warning, regular expectation, or a requirement). Also, they can include tolerances for floating point comparisons and even pre-implemented exception handlers to check raising of an exception under certain conditions.

  • Tests organization

    With frameworks, it's easy to create and run subsets of tests grouped by common functionality (suites) or shared data (fixtures). Also, modern frameworks automatically register tests, so you don't need to do that manually.

  • Customizable messages

    Frameworks can show verbose descriptive output, user-defined messages, or only brief pass/fail results (the latter is especially useful for regression testing).

  • XML reports

    Most of the testing frameworks can export results in XML format. This is useful when you need to further pass the results to a continuous integration system (like TeamCity or Jenkins).

There are many unit testing frameworks for C++. Some of the most popular are Google Test, Boost.Test, and Catch(2). All three are integrated in CLion, but before we dive into the integration details, let's briefly cover essential points of each framework.

Google Test

Google Test and Google Mock are a powerful pair of unit testing tools: the framework is portable, includes a rich set of fatal and non-fatal assertions, provides instruments for creating fixtures and test groups, gives informative messages, and exports the results in XML. Probably the only drawback is the need to build gtest/gmock in your project in order to use it.

Assertions

In Google Test, the statements that check whether a condition is true are referred to as assertions. Non-fatal assertions have the EXPECT_ prefix in their names, and assertions that cause fatal failure and abort the execution are named starting with ASSERT_. For example:

TEST(SquareTest /*test suite name*/, PosZeroNeg /*test name*/) {
    EXPECT_EQ(9.0, (3.0*2.0));   // fail, test continues
    ASSERT_EQ(0.0, (0.0));       // success
    ASSERT_EQ(9, (3)*(-3.0));    // fail, test interrupts
    ASSERT_EQ(-9, (-3)*(-3.0));  // not executed due to the previous assert
}

Some of the asserts available in Google Test are listed below (in this table, ASSERT_ can be switched with EXPECT_):

Logical

ASSERT_TRUE(condition)
ASSERT_FALSE(condition)

General comparison

ASSERT_EQ(expected, actual) / ASSERT_NE(val1, val2)
ASSERT_LT(val1, val2) / ASSERT_LE(val1, val2)
ASSERT_GT(val1, val2) / ASSERT_GE(val1, val2)

Floating-point comparison

ASSERT_FLOAT_EQ(expected, actual)
ASSERT_DOUBLE_EQ(expected, actual)
ASSERT_NEAR(val1, val2, abs_error)

String comparison

ASSERT_STREQ(expected_str, actual_str) / ASSERT_STRNE(str1, str2)
ASSERT_STRCASEEQ(expected_str, actual_str) / ASSERT_STRCASENE(str1, str2)

Exception checking

ASSERT_THROW(statement, exception_type)
ASSERT_ANY_THROW(statement)
ASSERT_NO_THROW(statement)

Besides, Google Test supports predicate assertions, which help make output messages more informative. For example, instead of EXPECT_EQ(a, b) you can use a predicate function that checks a and b for equivalency and returns a boolean result. In case of failure, the assertion will print the values of the function arguments:

Predicate assertion example
bool IsEq(int a, int b) {
    return a == b;
}

TEST(BasicChecks, TestEq) {
    int a = 0;
    int b = 1;
    EXPECT_EQ(a, b);
    EXPECT_PRED2(IsEq, a, b);
}
Output
Failure
Value of: b
  Actual: 1
Expected: a
Which is: 0

Failure
IsEq(a, b) evaluates to false, where
a evaluates to 0
b evaluates to 1

In EXPECT_PRED2 above, predN is a predicate function with N arguments. Google Test currently supports predicate assertions of arity up to 5.

Fixtures

Google tests that share common objects or subroutines can be grouped into fixtures. Here is what a generalized fixture looks like:

class myTestFixture : public ::testing::Test {
public:
    myTestFixture() {
        // initialization;
        // can also be done in SetUp()
    }

    void SetUp() {
        // initialization or some code to run before each test
    }

    void TearDown() {
        // code to run after each test;
        // can be used instead of a destructor,
        // but exceptions can be handled in this function only
    }

    ~myTestFixture() {
        // resources cleanup, no exceptions allowed
    }

    // shared user data
};

When used for a fixture, a TEST() macro should be replaced with TEST_F() to allow the test to access the fixture's members and functions:

TEST_F( myTestFixture, TestName) {/*...*/}

To learn more about Google Test, explore the samples in the framework's repository. Also, take a look at Advanced options for details of other notable Google Test features, such as value- and type-parameterized tests.

Boost.Test

The Boost unit testing framework (Boost.Test) is a part of the Boost library. It is a fully functional and scalable framework, with a wide range of assertion macros, XML output, and other features. However, you need to build it on your platform, so it is not suitable when you are limited to header-only libraries. Also, Boost.Test itself lacks mocking functionality, but it can be combined with stand-alone mocking frameworks such as gmock (see Using Google Mock with Any Testing Framework).

Checkers

For most of the Boost.Test checkers, you can set a severity level:

  • WARN produces a warning message if the check fails, but the error counter isn't increased and the test case continues;

  • CHECK reports an error and increases the error counter when the check fails, but the test case continues;

  • REQUIRE is used for reporting fatal errors, when the execution of the test case should be aborted (for example, to check whether an object that will be used later was created successfully).

Basic macros are BOOST_WARN, BOOST_CHECK, and BOOST_REQUIRE. They take one argument of an expression to check, for example:

BOOST_WARN(sizeof(int) == sizeof(long));
BOOST_CHECK( i == 1 );
BOOST_REQUIRE( j > 5 );

In general, a Boost checker is a macro of the form BOOST_[level]_[checkname] that takes one or more arguments. A few examples are given below:

General comparison

BOOST_[level]_EQUAL, BOOST_[level]_NE, BOOST_[level]_GT

In case of failure, these macros not only report that the check failed, but also show the expected and the actual values:

int i = 2;
int j = 1;
BOOST_CHECK( i == j );      // reports the fact of failure only: "check i == j failed"
BOOST_CHECK_EQUAL( i, j );  // reports "check i == j failed [2 != 1]"

Floating-point comparison

BOOST_[level]_CLOSE / BOOST_[level]_CLOSE_FRACTION / BOOST_[level]_SMALL

Exception checking

BOOST_[level]_THROW / BOOST_[level]_NO_THROW / BOOST_[level]_EXCEPTION

Suites

You can organize Boost tests into suites using the pair of BOOST_AUTO_TEST_SUITE(suite_name) and BOOST_AUTO_TEST_SUITE_END() macros. A simple test suite looks like this:

#define BOOST_TEST_MODULE Suite_example
#include <boost/test/unit_test.hpp>

BOOST_AUTO_TEST_SUITE(TwoTwoFour_suite)

BOOST_AUTO_TEST_CASE(testPlus) {
    BOOST_CHECK_EQUAL(2+2, 4);
}

BOOST_AUTO_TEST_CASE(testMult) {
    BOOST_CHECK_EQUAL(2*2, 4);
}

BOOST_AUTO_TEST_SUITE_END()

Fixtures

To write a fixture with Boost, you can use either a regular BOOST_AUTO_TEST_CASE macro written after a fixture class declaration or a special BOOST_FIXTURE_TEST_CASE macro:

struct SampleF {
    SampleF() : i(1) { }
    ~SampleF() { }
    int i;
};

BOOST_FIXTURE_TEST_CASE(SampleF_test, SampleF) {
    // accessing i from SampleF directly
    BOOST_CHECK_EQUAL(i, 1);
    BOOST_CHECK_EQUAL(i, 2);
    BOOST_CHECK_EQUAL(i, 3);
}

Catch(2)

The main difference between Catch(2) and Google Test or Boost.Test is that it's a header-only testing system: to create tests with Catch, you need to download and include only one header file, catch.hpp. The framework's name stands for C++ Automated Test Cases in Headers.

Like Boost.Test, Catch doesn't provide mocking functionality. However, you can combine it with stand-alone mocking frameworks such as gmock, FakeIt, or Trompeloeil.

Sample test

The example below shows a simple test written with Catch:

#define CATCH_CONFIG_MAIN  // provides main(); this line is required in only one .cpp file
#include "catch.hpp"

int theAnswer() { return 6*9; }  // function to be tested

TEST_CASE( "Life, the universe and everything", "[42][theAnswer]" ) {
    REQUIRE(theAnswer() == 42);
}

In the above example, Life, the universe and everything is a free-form test name, which is required to be unique. The second argument of the TEST_CASE macro is a combination of two tags, [42] and [theAnswer]. Both the test name and the tags are regular strings that are not required to be valid C++ identifiers. You can run collections of tests by specifying a wildcarded test name or a tag expression.

Notice the assertion line REQUIRE(theAnswer() == 42). Unlike other frameworks, Catch does not have a set of various asserts for different cases. Instead, it uses the actual C/C++ code to describe the assert:

...
Failure:
  REQUIRE( theAnswer() == 42 )
with expansion:
  54 == 42

REQUIRE aborts a test on failure, while the alternative CHECK macro only reports the failure and lets the test carry on. Within both of these macros, you can use all C++ comparison operators and pass the arguments in any order.

Sections

Another important feature of Catch is the way to organize tests in test cases with tags and sections (while a class-based fixture mechanism is also supported). Take a look at this example from the documentation:

TEST_CASE( "vectors can be sized and resized", "[vector]" ) {
    // initialization block executed for each section
    std::vector<int> v( 5 );
    REQUIRE( v.size() == 5 );
    REQUIRE( v.capacity() >= 5 );
    // end of initialization block

    SECTION( "resizing bigger changes size and capacity" ) {
        v.resize( 10 );
        REQUIRE( v.size() == 10 );
        REQUIRE( v.capacity() >= 10 );
    }
    SECTION( "resizing smaller changes size but not capacity" ) {
        v.resize( 0 );
        REQUIRE( v.size() == 0 );
        REQUIRE( v.capacity() >= 5 );
    }
}

In the above snippet, TEST_CASE is executed from the start for each SECTION. The two REQUIRE statements at the top of the TEST_CASE ensure that size is 5 and capacity is at least 5 before each section is entered. This way, shared objects are allocated on the stack, and there is no need to create a fixture class for them. On each run through a TEST_CASE, Catch executes one leaf section and skips the others; on the next run, it executes the second section, and so on.

Sections can also be nested to an arbitrary depth and form a tree structure. Each leaf section (a section with no nested sections inside) is executed once. When a parent section fails, it prevents child sections from running. For example:

SECTION( "reserving bigger changes capacity but not size" ) {
    v.reserve( 10 );
    REQUIRE( v.size() == 5 );
    REQUIRE( v.capacity() >= 10 );
    // verify that attempting to reserve a smaller capacity changes nothing
    SECTION( "reserving smaller again does not change capacity" ) {
        v.reserve( 7 );
        REQUIRE( v.capacity() >= 10 );
    }
}

Catch also supports the alternative BDD-style syntax for test cases and sections.

Unit testing in CLion

CLion's integration of Google Test, Boost.Test, and Catch includes: full code insight for framework libraries, code generation for tests and fixture classes (available for Google Tests), dedicated run/debug configurations with auto-completion in the settings editor, gutter icons to run or debug tests/suites/fixtures and check their status, and the specialized test runner.

Setting up a testing framework for your project

In this chapter, we will discuss how to add all three testing frameworks to a project in CLion and write a simple set of tests using each of them.

Let's take an example of the DateConverter project. This program calculates the absolute value of a date given in the Gregorian calendar format and converts it into a Julian calendar date. We will test the functionality of this data converter using Google Test, Boost.Test, and Catch.

For each framework, we will do the following:

  1. Take the steps required to include a framework into the project.

  2. Create two test files, AbsoluteDateTest.cpp and ConverterTests.cpp. These files will contain test code written using the framework's syntax.

As a result, we will have three sets of equivalent tests written with Google Test, Boost.Test, and Catch respectively. You can explore the final version of the project with tests in the DateConverter_TestingDemo folder of the repository. Here is how the project structure will be transformed:

sample project with and without tests

Switch between the tabs below for detailed instructions for each framework:

Including the Google Test framework

  1. Create a folder for Google Tests under the project root. Inside it, create another folder for the framework's files.

    In our example, these are Google_tests and Google_tests/lib, respectively.

  2. Download Google Test from the official repository.

  3. Place the downloaded files into Google_tests/lib.

  4. Add a CMakeLists.txt file to Google_tests (right-click it in the project tree and select New | CMakeLists.txt). Add the following lines:

    project(Google_tests)
    add_subdirectory(lib)
    include_directories(${gtest_SOURCE_DIR}/include ${gtest_SOURCE_DIR})

  5. In the root CMakeLists.txt script, add the add_subdirectory(Google_tests) line at the end.
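For reference, a root CMakeLists.txt satisfying the step above might look like the following sketch. The DateConverter library target is what the test targets link against; the exact source file names (DateConverter.cpp, main.cpp) are assumptions about the sample project's layout and may differ in your setup:

```cmake
cmake_minimum_required(VERSION 3.14)
project(DateConverter)

set(CMAKE_CXX_STANDARD 14)

# The library under test; the test targets link against it
add_library(DateConverter STATIC DateConverter.cpp DateConverter.h)

# The application itself
add_executable(DateConverter_run main.cpp)
target_link_libraries(DateConverter_run DateConverter)

# Test subproject (added last, as described in the steps above)
add_subdirectory(Google_tests)
```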

Adding Google tests

  1. Click Google_tests in the project tree and select New | C/C++ Source File, call it AbsoluteDateTest.cpp.

    CLion will prompt you to add this file to an existing target. We don't need to do that, since we will create a new target for this file in the next step.

    Repeat for ConverterTests.cpp.

  2. With two source files added, we can create a test target for them and link it with DateConverter. Google_tests/CMakeLists.txt should look like this:

    project(Google_tests)
    add_subdirectory(lib)
    include_directories(${gtest_SOURCE_DIR}/include ${gtest_SOURCE_DIR})

    # adding the Google_Tests_run target
    add_executable(Google_Tests_run ConverterTests.cpp AbsoluteDateTest.cpp)

    # linking Google_Tests_run with the DateConverter library which will be tested
    target_link_libraries(Google_Tests_run DateConverter)
    target_link_libraries(Google_Tests_run gtest gtest_main)
  3. Now everything is ready to put the Google Test version of our checks in AbsoluteDateTest.cpp and ConverterTests.cpp. After that, the tests are ready to run:

    Google tests results

Including the Boost.Test framework

  1. Install and build the Boost testing framework following these instructions (further in the tests, we will use the shared-library variant to link the framework).

  2. Create a folder for Boost tests under the project root. In our example, it's called Boost_tests.

  3. Add a CMakeLists.txt file to Boost_tests (right-click it in the project tree and select New | CMakeLists.txt). Add the following lines:

    set(Boost_USE_STATIC_LIBS OFF)
    find_package(Boost REQUIRED COMPONENTS unit_test_framework)
    include_directories(${Boost_INCLUDE_DIRS})

  4. In the root CMakeLists.txt script, add the add_subdirectory(Boost_tests) line at the end.

Adding Boost tests

  1. Click Boost_tests in the project tree and select New | C/C++ Source File, call it AbsoluteDateTest.cpp.

    CLion will prompt you to add this file to an existing target. We don't need to do that, since we will create a new target for this file in the next step.

    Repeat for ConverterTests.cpp.

  2. With two source files added, we can create a test target for them and link it with DateConverter. Add the following lines to Boost_tests/CMakeLists.txt:

    add_executable(Boost_Tests_run ConverterTests.cpp AbsoluteDateTest.cpp)
    target_link_libraries(Boost_Tests_run ${Boost_LIBRARIES})
    target_link_libraries(Boost_Tests_run DateConverter)
  3. Now we can put the Boost.Test version of our checks in AbsoluteDateTest.cpp and ConverterTests.cpp. After that, the tests are ready to run:

    Boost tests results

Including the Catch framework

  1. Create a folder for Catch tests under the project root. In our example, it's called Catch_tests.

  2. Download the catch.hpp header using the link from the documentation and place it in Catch_tests.

Adding Catch tests

  1. Click Catch_tests in the project tree and select New | C/C++ Source File, call it AbsoluteDateTest.cpp.

    CLion will prompt you to add this file to an existing target. We don't need to do that, since we will create a new target for this file in the next step.

    Repeat for ConverterTests.cpp.

  2. Add a CMakeLists.txt file to Catch_tests (right-click the folder in the project tree and select New | CMakeLists.txt). Add the following lines:

    add_executable(Catch_tests_run ConverterTests.cpp AbsoluteDateTest.cpp)
    target_link_libraries(Catch_tests_run DateConverter)

  3. In the root CMakeLists.txt script, add the add_subdirectory(Catch_tests) line at the end.

  4. Now everything is ready to put the Catch version of our checks in AbsoluteDateTest.cpp and ConverterTests.cpp. After that, the tests are ready to run:

    Catch tests results

Run/Debug configurations for tests

Test frameworks provide their own main() entry, so it's possible to run tests as regular applications in CLion. However, we recommend using the dedicated Run/Debug configurations for Google Test, Boost.Test, and Catch, as they include test-related settings and let you benefit from the built-in test runner (which is unavailable if you run tests as regular applications):

test run/debug configuration templates

Note that if your CMake target is linked with gtest or gmock, a Google Test configuration for this target is created automatically.

To add a Run/Debug configuration for your tests, go to Run | Edit Configurations, click + and select the desired template. Next, depending on the framework, specify the test pattern, suite, or tags (for Catch). Note that auto-completion is available in the settings fields to help you quickly fill them in:

auto-completion in configuration fields

You can use wildcards when specifying test patterns. For example, set the following pattern to run only the PlusOneDiff and PlusFour_Leap tests from the sample project:

using wildcards in test patterns

In other fields of the configuration settings, you can set environment variables or command-line options. For example, in the Program arguments field, you can set -s for Catch tests to show the full output even for passing tests, or --gtest_repeat to run a Google test multiple times:

flags in program arguments

The output will be:

Repeating all tests (iteration 1) . . .
Repeating all tests (iteration 2) . . .
Repeating all tests (iteration 3) . . .

Gutter icons for tests

In CLion, there are several ways to start a run/debug session for tests, one of which is using special gutter icons. These icons help quickly run or debug a single test or a whole suite/fixture:

gutter icons for tests

Gutter icons also show the test results (if already available), marking each test with a success or failure icon.

When you run a test/suite/fixture using gutter icons, CLion creates temporary Run/Debug configurations of the corresponding type. You can see these configurations in the list, but they are greyed out. To save a temporary configuration, select it in the Edit Configurations dialog and click the Save button:

saving temporary test configuration

Test runner

When you run a test configuration, the results (and the process) are shown in the test runner window that includes:

  • a progress bar with the percentage of tests executed so far,

  • tree view of all the running tests with their status and duration,

  • tests' output stream,

  • a toolbar with options to rerun failed tests, export or open previous results (saved automatically), and sort the tests alphabetically (to easily find a particular test) or by duration (to see which tests ran longer than others).

For example, here is what the test runner window looks like if we intentionally break some of the Catch tests in the sample project:

test runner

Code generation for Google tests

If you are using the Google Test framework, CLion's Generate menu can help you save time when writing test code. In a test file where you have gtest included, press Alt+Insert to see the code generation options.

When called from a fixture, the menu additionally includes SetUp Method and TearDown Method:

generate menu for tests

For fixture tests, code generation converts TEST() macros into the appropriate TEST_F(), TEST_P(), TYPED_TEST(), or TYPED_TEST_P() (see Typed Tests).

Other features

Quick Documentation for test macros

To help you explore the macros provided by testing frameworks, Quick Documentation pop-up shows the final macro replacement and formats it properly. It also highlights the strings and keywords used in the result substitution:

Formatted macro expansion in quick documentation popup

Show Test List

To reduce the time of initial indexing, CLion uses lazy test detection: tests are excluded from indexing until you open one of the test files or run/debug a test configuration. To check which tests are currently detected in your project, call Show Test List from Help | Find Action. Note that calling this action doesn't trigger indexing.

Last modified: 12 June 2019