Writing Tests 🔗
Now that you understand the basic concepts, it's time to start writing tests! Before you write your first test case, you need to create a test runner. The test runner is a dedicated executable you create for the purpose of running tests. Your executable becomes a test runner by virtue of linking against the Audition library (or framework if you're on macOS).
Audition provides its own main function, so you don't have to write one. Audition handles the entry point for your test runner program, processes command-line arguments, executes fixtures and tests, and reports the results.
When you execute a test runner without any test cases, it displays “no tests found” to indicate there aren't any tests available. If you see this output, then you know Audition is configured correctly and you're ready to start writing tests!
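For example, a minimal test runner can be a single source file that does nothing but include the Audition header. The file name and compiler invocation below are illustrative; adjust them for your toolchain and library paths.
// runner.c: a minimal test runner with no tests yet.
// Audition supplies main(), so nothing else is required here.
#include "audition.h"
Compiling this file and linking it against the Audition library (for example, something like cc runner.c -laudition -o runner, depending on how Audition is installed) produces a runner that prints “no tests found” when executed.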
Architecting Your Project for Testing 🔗
It’s important to design your program with modularity in mind to facilitate testing. In C, you will likely be testing either a static or dynamic library or an application.
When testing a static or dynamic library, you can create a dedicated test runner executable that links both your library and the Audition framework. When testing an application, you must prevent the main function of your application and the main function of Audition from clashing. Your approach for preventing this clash will vary depending on your code structure. Here, we'll discuss two strategies:
- Recompiling without an entry point: Recompile your application sources without its main function and link them to your test runner. With this approach you can optionally compile your source code as a static library, which can then be linked by both your application and the test runner.
- Testing in source files: Another approach is to write test cases directly in the program's source files. You can toggle the main function and the test cases on or off using preprocessor directives, as sketched below.
The advantage of keeping tests and program code in the same files is that it promotes tighter integration between them, making it easier to ensure they both remain up to date. The disadvantage is that it can clutter the program's implementation, especially if you have many tests. It can also cause issues with mocking. It's important to weigh these benefits against the potential drawbacks in terms of clarity, maintenance, and build complexity.
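As an illustration of the second strategy, the following sketch toggles between the application's main function and the test cases with a preprocessor symbol. The macro name AUDITION_TESTING is an arbitrary choice for this example, not something Audition defines; you would define it (for example, with -DAUDITION_TESTING) only when building the test runner.
// calculator.c: program code and test cases in one file.
int sum(int x, int y) {
    return x + y;
}

#ifdef AUDITION_TESTING
// Test runner build: Audition provides main().
#include "audition.h"

TEST(arithmetic, addition) {
    ASSERT_EQ(sum(1, 2), 3);
}
#else
// Application build: our own main() is the entry point.
#include <stdio.h>

int main(void) {
    printf("1 + 2 = %d\n", sum(1, 2));
    return 0;
}
#endif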
Writing a Simple Test Case 🔗
Let’s write a test case for a simple function that adds two integers and returns their sum. Our test case will verify the correctness of this function.
First, let’s define the function we intend to test:
int sum(int x, int y) {
return x + y;
}
To test this function, you'll need to create an executable program that links against the Audition library. We won't cover the compilation process here, as it’s assumed you know how to compile an executable with your C compiler.
Before you can write a test case for sum, you must include the audition.h header file.
#include "audition.h"
With the header included, you are now ready to write test cases.
To test the sum function, you must define a test case. In Audition, you define a test case using the TEST macro, as shown below.
TEST(arithmetic, addition) {
// ...
}
The TEST macro defines a test entry point, much like how main is used as the entry point for executables. In this example, the first argument, arithmetic, specifies the name of the test suite, while addition specifies the name of the test case. The code within the curly braces contains the test assertions. Let's add a test assertion to verify that sum behaves as expected:
TEST(arithmetic, addition) {
ASSERT_EQ(sum(1, 2), 3);
}
In this example, the ASSERT_EQ macro accepts two arguments and verifies they are equal. If they are not equal, then the test case fails. While this example uses one assertion, real-world test cases often contain many assertions.
Here's the complete test program, with the sum function defined in the same source file as the test case:
#include "audition.h"
int sum(int x, int y) {
return x + y;
}
TEST(arithmetic, addition) {
ASSERT_EQ(sum(1, 2), 3);
}
When you compile and run this code, you should see the following output:
[ 50% ] arithmetic.addition
[ 100% ] arithmetic.addition (pass)
1 passing [100%] (0ms)
Note that Audition reports how many test cases passed and the elapsed time. The elapsed time only includes the time spent in test cases; it does not reflect the time spent in fixtures or by the Audition library itself.
By default, Audition runs every test case, but you can filter which suites and tests run by using pattern matching.
Writing a Parameterized Test Case 🔗
Parameterized test cases, or generators, allow you to run the same test case with different inputs, reducing duplication and enhancing maintainability. In Audition, there are two ways to specify the number of iterations: statically and dynamically. These are detailed in the following subsections:
- Static Parameterized Tests - The number of iterations is known at compile time.
- Dynamic Parameterized Tests - The number of iterations is computed at runtime.
In Audition, a test case becomes a parameterized test case when the iterations option is specified.
TEST(foo, bar, .iterations=5) {
// ...
}
In the test case itself, you refer to the current iteration with the TEST_ITERATION macro. Iterations are zero-indexed: the first iteration corresponds to 0, the second to 1, the third to 2, and so on. You can use TEST_ITERATION to access data specific to the current iteration. Typically, this data comes from an array, which can be constructed from hard-coded values or data read from external sources.
Static Parameterized Tests 🔗
Static parameterized test cases are those where the number of iterations is known at compile time. This approach is preferred when the dataset is hard-coded in a C source file.
In Audition, the number of static iterations is specified using the iterations option. In the following example, the number of iterations is set to 3, and the TEST_ITERATION macro retrieves the current iteration's data from the produce array.
const char *produce[] = {"apple", "orange", "banana"};
TEST(grocery, basket, .iterations=3) {
ASSERT_TRUE(isFruit(produce[TEST_ITERATION]));
}
Dynamic Parameterized Tests 🔗
Dynamic parameterized test cases are those where the number of iterations is not known at compile time. This approach is necessary when test data is computed at runtime, such as when reading from external files.
You can set the number of iterations for a test case with the SET_TEST_ITERATIONS macro. This macro accepts two parameters: the test case name and the number of iterations. The macro must be called from a SUITE_SETUP fixture. Calling it elsewhere, like from a test case or TEST_SETUP fixture, will cause Audition to terminate the test runner with an error. In addition to calling the SET_TEST_ITERATIONS macro, you must also set the iterations option of the parameterized test case to DYNAMIC_ITERATIONS to inform Audition that the number of iterations is set at runtime.
In the following example, the number of iterations is set to 3 at runtime. The test data is allocated dynamically, but in practice, it could come from any source, such as the files in a directory or rows in a database.
#include <stdlib.h>
#include "audition.h"

static int *primes;
SUITE_SETUP(number_theory) {
// In this example the data is allocated dynamically, but you could
// generate it from each file in a directory, a single markup file,
// or rows in a database.
primes = calloc(3, sizeof(primes[0]));
primes[0] = 2;
primes[1] = 5;
primes[2] = 11;
SET_TEST_ITERATIONS(primality_testing, 3);
}
SUITE_TEARDOWN(number_theory) {
free(primes);
}
TEST(number_theory, primality_testing, .iterations=DYNAMIC_ITERATIONS) {
ASSERT_TRUE(isPrime(primes[TEST_ITERATION]));
}
Writing a Sandboxed Test Case 🔗
When a test is sandboxed, it executes in an isolated address space. This isolation prevents fatal errors, such as segmentation faults, from terminating the test runner. The sandbox is also useful for testing intentional termination, raised signals, capturing stdout and stderr, simulating stdin, and aborting tests that exceed a timeout.
The sandbox is implemented by creating a separate process for tests to execute in, using CreateProcess on Windows and fork on Linux and macOS. The main test runner manages the sandbox process, so if a test case crashes within the sandbox, the test runner “catches” the crash, reports it, and continues testing.
However, running tests in the sandbox is slower than executing them in the main test runner process due to the overhead of inter-process communication (IPC). Additionally, initializing the sandbox process is slower because it must re-run all fixtures.
The sandboxing feature is opt-in; tests are not run in the sandbox by default. To enable the sandbox for a test case, set the sandbox option to true:
TEST(yourSuite, yourTest, .sandbox=true) {
// ...
}
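As a sketch of what the sandbox guards against, the following contrived test (the suite and test names are arbitrary) dereferences a null pointer. With sandboxing enabled, the resulting segmentation fault is reported as a failure and the runner continues with the remaining tests:
#include <stddef.h>

TEST(sandboxDemo, survivesCrash, .sandbox=true) {
    // Write through a null pointer; the volatile cast keeps the
    // compiler from optimizing the faulting store away. The sandbox
    // catches the crash and reports it as a test failure instead of
    // killing the runner.
    *(volatile int *)NULL = 42;
}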
Catching Timeouts 🔗
If a test might run longer than expected or could hang, you can specify a timeout that instructs the test runner to terminate the sandbox if the test duration exceeds this limit. To specify a timeout for a test case, set the timeout option to a duration in milliseconds. In the following test case, the timeout is set to 3000 milliseconds (3 seconds); if the test exceeds that limit, it is terminated and marked as failed.
TEST(yourSuite, yourTest, .timeout=3000) {
// ...
}
When the timeout option is set, the sandbox is implicitly enabled, so sandbox=true does not need to be explicitly specified.
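For instance, a test body that never returns, simulated here with an intentional infinite loop, would be terminated after 3 seconds and reported as a failure (names are again arbitrary):
TEST(timeoutDemo, abortsHangingTest, .timeout=3000) {
    // Simulate a hang; the sandbox terminates this test once the
    // 3000 ms timeout elapses, and the runner marks it as failed.
    for (;;) {}
}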
Death Testing 🔗
Death tests assert that a test case terminates the program with a specified status code. If the test does not terminate the program or if it terminates with the wrong status code, the test fails.
To indicate that a test is a death test, set the exit status option to the expected status code returned upon termination. The following test case expects the sandbox to terminate with an exit status of 7; otherwise, the test fails.
TEST(yourSuite, yourTest, .exit_status=7) {
// ...
}
When the exit status option is set, the sandbox is implicitly enabled, so sandbox=true does not need to be explicitly specified.
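As a contrived sketch, a test body that calls the standard exit function with status 7 satisfies this expectation; in practice, the termination would come from the code under test:
#include <stdlib.h>

TEST(deathDemo, exitsWithSeven, .exit_status=7) {
    // Terminate the sandbox process with the expected status code.
    exit(7);
}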
Signal Testing 🔗
On POSIX systems, Audition allows you to assert that a specific signal was raised during the execution of a test case. To indicate that a test case should raise a specific signal, set the signal option to the expected signal. If the test does not raise the specified signal, the test fails. The following test case expects the sandbox to raise the POSIX signal SIGABRT; otherwise, the test fails.
TEST(yourSuite, yourTest, .signal=SIGABRT) {
// ...
}
When the signal option is set, the sandbox is implicitly enabled, so sandbox=true does not need to be explicitly specified.
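As a sketch, a test body that calls the standard abort function raises SIGABRT and would therefore pass; again, in practice the signal would come from the code under test:
#include <signal.h>
#include <stdlib.h>

TEST(signalDemo, raisesAbort, .signal=SIGABRT) {
    // abort() raises SIGABRT, terminating the sandbox process.
    abort();
}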
Debugging the Sandbox 🔗
To debug sandboxed test cases, you must configure your debugger to attach to a child process. The following subsections explain how to do this for popular debuggers.
GNU Debugger 🔗
By default, the GNU Debugger (GDB) follows the parent process and detaches from the child after a fork. To keep both the parent and child processes under GDB's control, change the following setting after GDB starts:
(gdb) set detach-on-fork off
If this doesn’t work, you can choose to debug either the parent or child process with the following commands:
(gdb) set follow-fork-mode child
(gdb) set follow-fork-mode parent
LLDB Debugger 🔗
LLVM 14.0.0 added support for following a fork. You can enable this behavior with the following command:
(lldb) settings set target.process.follow-fork-mode child
For users of older versions of LLVM, you can set a breakpoint on calls to fork and then attach the debugger to the new process, replacing 123 below with the child's process ID:
(lldb) b fork
(lldb) attach -p 123
Visual Studio Debugger 🔗
For users of Visual Studio 2017 or newer, you can debug child processes by installing the Child Process Debugging Power Tool extension. This extension, developed by Microsoft, automatically attaches the Visual Studio debugger to child processes.