C++ unit testing with Qt Test – part 2 – advanced testing

This tutorial explores more advanced topics in C++ unit testing with Qt Test. A working example is discussed and analysed in detail. Full qmake project and source code are provided.

More C++ unit testing with Qt Test

In this tutorial I am going to introduce more advanced features of Qt Test, the Qt framework for C++ unit testing. In particular I will explain how to handle a project with multiple unit tests and how to implement data driven testing. I will also give examples of more testing macros and I will show you the integration offered by Qt Creator.

This is the second post of a series of three dedicated to Qt Test.

Creating a better project

Last time I showed how to create a unit test project using the “Auto Test Project” template. Another (slightly more advanced) option to do the same is the “Qt Unit Test” template:

Qt Creator Qt Unit Test template project details

This wizard lets you choose which Qt modules to include in the project and offers more options in the Details section.

As seen in the first tutorial, the Qt Test philosophy is that every test case is an independent executable. In real projects you usually have hundreds or even thousands of different unit tests and running them all manually is definitely not an option. The best way to run and manage them is to create a “parent” project. The right choice in this case is the “Subdirs Project” template, which is listed in the “Other Project” group of the “New Project” dialog.

Qt Creator subdirs project

After creating the project you will be taken back to the templates dialog to create a first subproject to include. You can cancel that and add your existing projects instead. In the end you will get something like this:

Qt Creator projects

For this tutorial I extended the TestCalculator unit test project and created a new one called TestIntBitset. The new project tests a simplified bitset implementation. Once again, the code to test (the IntBitset class) is included in the unit test project for simplicity.

Data driven testing

An advanced feature of Qt Test is data driven testing. The idea is to separate tests and data, which avoids long lists of similar QVERIFY or QCOMPARE macros and avoids replicating the code needed to initialise each test.

To provide data to a test function you have to create another private slot with the same name as the test function plus the “_data” suffix. For example, the data function for testDiff() is testDiff_data().

Implementing a data function is a bit like inserting data into a database. First you define your data as if you were defining a table:

void TestCalculator::testDiff_data()
{
    // define the columns of the data table
    QTest::addColumn<int>("a");
    QTest::addColumn<int>("b");
    QTest::addColumn<int>("result");

Then you add rows of values:

    QTest::newRow("all 0") << 0 << 0 << 0;
    QTest::newRow("same number") << 10 << 10 << 0;
    // ... more data ...

Each row contains a name and a list of values. You can imagine the previous code being converted to something like the following table:

INDEX  NAME           a   b   result
0      “all 0”        0   0   0
1      “same number”  10  10  0

Once we have defined the data function we can write the test function, which is divided into two parts.

The first part retrieves the data:

void TestCalculator::testDiff()
{
    // retrieve data
    QFETCH(int, a);
    QFETCH(int, b);
    QFETCH(int, result);

    // set values

The second part uses the data to perform checks:

    // test
    QCOMPARE(mCalc.Diff(), result);

Without a data driven approach we would have had to repeat the instructions to set the values and the QCOMPARE check many times.

When a data driven test is executed, the test function is called once per data set and the log looks like this:

PASS : TestCalculator::testDiff(all 0)
PASS : TestCalculator::testDiff(same number)
... more lines ...

As you can see, the name of the data row is reported in the log to help you tell the cases apart.

Other useful macros

Qt Test offers a few extra macros to help you handle different situations in your unit tests.

Failing a test

One of these macros is QFAIL, which makes the current test fail. It can be used when you know in advance that something will make the test fail. In that case there’s no point in wasting time executing the test: you can just fail and move on.

void TestIntBitset::initTestCase()
{
    if(sizeof(int) != 4)
        QFAIL("Int size is not 4 on this platform.");
}

In my example project I used QFAIL in the initTestCase and cleanupTestCase functions, which are special functions executed before and after all the test functions. When initTestCase fails, none of the tests in the test case is executed.

Failing a single check

If you know that a particular QVERIFY or QCOMPARE is going to fail, but you still want to continue executing the test, you can precede the check with the QEXPECT_FAIL macro:

void TestIntBitset::testSetOff()
{
	unsigned int bitsOff = 0;
	mBS.setBitOff(BITS_IN_BYTE * bitsOff++);
	mBS.setBitOff(BITS_IN_BYTE * bitsOff++);

	QEXPECT_FAIL("", "isAnyOff not implemented yet", Continue);

	// ... more test code ...

Its first parameter identifies a row of data when doing data driven testing; it can be set to an empty string during normal testing. The second is a log message and the third lets you decide whether to Continue or Abort the test on failure.

When running the previous test the output log will show something like this:

XFAIL  : TestIntBitset::testSetOff() isAnyOff not implemented yet
   Loc: [../../UnitTests/TestIntBitset/TestIntBitset.cpp(67)]

Skipping a test

If you want to skip a test, or part of it, you can use the QSKIP macro, which marks the test as skipped and stops its execution:

void TestIntBitset::testOperators()
{
    QSKIP("Operators have not been implemented yet...");
    // code here is never executed
}

No code after QSKIP is executed, but the code before it is, so if any check there fails the test is considered failed.

Running a skipped test will show the following text in the logs:

SKIP : TestIntBitset::testOperators() Operators have not been implemented yet...
    Loc: [../../UnitTests/TestIntBitset/TestIntBitset.cpp(28)]

Deciding when to use QFAIL and when to use QSKIP can be debatable. In general there is no precise rule and it’s all about your design choices. Personally I tend to use QFAIL when I know in advance that something is going to fail and I want to highlight that, and QSKIP when executing a test, or part of it, doesn’t matter.

Warning messages

If you want to print a warning message in the test log you can use QWARN. This macro is useful to notify that something is not going as expected in a test.

void TestIntBitset::testSetOff()
{
	unsigned int bitsOff = 0;
	mBS.setBitOff(BITS_IN_BYTE * bitsOff++);
	// ... more test code ...

	// this check will trigger a warning instead of a failure
	if((BITS_IN_BYTE * bitsOff) < BITS_IN_INT)
		QVERIFY(!mBS.isBitOff(BITS_IN_BYTE * bitsOff));
	else
		QWARN("trying to verify bit out of set bounds");

	// ... more test code ...

In this case the QVERIFY check would fail because the input is outside the set’s bounds. It would be unfair to fail the test because of a possible bug in the test code itself, but the situation still needs to be highlighted. A warning is a good way to achieve that.

When running a test containing a warning, the output log will show something like this:

WARNING: TestIntBitset::testSetOff() trying to verify bit out of set bounds
   Loc: [../../UnitTests/TestIntBitset/TestIntBitset.cpp(75)]

The warning will always include a message and show where it was issued.

Qt Creator integration

Not surprisingly, Qt Creator offers excellent integration with Qt Test.

One of the panels in the left sidebar is called “Tests” and it shows all the unit tests found in your container project.

Qt Creator tests panel

Using this panel you can disable some tests, run them all or run only a specific one. When doing data driven testing it also allows you to select which data sets are enabled. All this is extremely useful in a real project, where you can have hundreds or even thousands of unit tests and you need to check or debug only one or a few of them.

When running unit tests from the Tests panel, the results are shown in the Test Results panel, which you can also open with Alt+8.

Qt Creator test results panel

This panel clearly shows which tests passed and which did not, along with other useful information, especially in case of failure. In particular, you can click on a result to jump to the corresponding failure or warning in the code. Furthermore, the panel lets you filter which events you want to see in the log, so for example you can show only failures or warnings.

The two panels combined make a great addition to Qt Creator and Qt Test and offer a very powerful tool for free.

Source code

The full source code of this tutorial is available on GitHub and released under the Unlicense license.

In this case you will find three qmake projects, but you only need to load the top-level one (UnitTests.pro) to build everything.


To know more about Qt Test you can check out the latest documentation of the QTest namespace.

If you want to learn more about Qt have a look at the other Qt tutorials I posted.


The features discussed in this tutorial make Qt Test a more usable and complete framework than what emerged from the first part of this series. In particular, the integration with Qt Creator is extremely useful and effective. Data driven testing is also a powerful way to reduce testing code and to cover many different cases easily.

Things will get even more interesting when I discuss GUI testing in the next tutorial, so stay tuned for more.

Stay connected

Don’t forget to subscribe to the blog newsletter to get notified of future posts.

You can also get updates by following me on Google+, LinkedIn and Twitter.
