Regression testing of the code base is performed daily on a dedicated set of machines at our disposal. The TestWeb dashboard provides access to the results of these regression test runs. All testing is tied to specific branches in the source repository, which is maintained with Mercurial ("hg"). The branches of MonetDB that are tested vary over time. In general, three branch types can be distinguished:
- the "cutting edge" development version, which is Mercurial's default branch,
- the release candidate for the next feature release,
- the latest bugfix version of the last stable release.
The TestWeb dashboard gives a quick overview of the last (by default) 5 runs. For each run, it shows a grid of the targets we test on, together with their failure/success results. A target is an identifier for a particular system configuration. It typically consists of the compiler vendor, operating system, system architecture, and a set of properties that control how the software is built (e.g., assert) or tested (e.g., propcheck).
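As an illustration, such a target identifier could be decomposed into its components as sketched below; the exact naming scheme (`compiler-OS-arch[-property...]`) is a hypothetical assumption, not the actual TestWeb format:

```python
# Hypothetical target identifier format: compiler-OS-arch[-property...]
# (the real TestWeb naming scheme may differ).
def parse_target(target: str) -> dict:
    """Split a target string into its configuration components."""
    parts = target.split("-")
    compiler, system, arch = parts[:3]
    return {
        "compiler": compiler,
        "os": system,
        "arch": arch,
        # trailing parts are build/test options such as assert or propcheck
        "properties": parts[3:],
    }

info = parse_target("gcc-Linux-x86_64-assert-propcheck")
print(info["compiler"], info["os"], info["properties"])
```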
Furthermore, each run produces several outputs, the most important of which is the test grid. A link to this grid is provided right on top of the list of targets and their individual outputs. The test grid gives an overview of the results for all targets and the tested modules in the MonetDB code base. The cross grid is built on-the-fly using as much information as is available; targets whose output is not yet available are simply ignored. A full grid is only available once the associated run has finished, as indicated right above the list of targets on the dashboard.
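The on-the-fly construction of such a cross grid from partial results can be sketched as follows; the data layout and the status values ("OK"/"FAIL") are illustrative assumptions, not TestWeb internals:

```python
# Sketch: build a module-by-target grid from whatever results are
# already available, skipping targets that have produced no output yet.
# The input layout and status values are assumptions for illustration.
def build_grid(results: dict) -> dict:
    """results maps target -> {module: status}; targets that have not
    yet produced output map to None and are left out of the grid."""
    grid = {}
    for target, modules in results.items():
        if modules is None:        # target not finished: ignore it
            continue
        for module, status in modules.items():
            grid.setdefault(module, {})[target] = status
    return grid

results = {
    "gcc-Linux": {"sql": "OK", "monetdb5": "FAIL"},
    "clang-macOS": None,           # still running, no output available
}
print(build_grid(results))
```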
With a (code-wise) complex system like MonetDB, modifying the source code — be it for fixing bugs or for adding new features — always bears the risk of breaking or at least altering some existing functionality. To facilitate the task of detecting such changes, small test scripts together with their respective correct/expected ("stable") output are collected within the code repository of MonetDB. Given the complexity of MonetDB, there is no way to do anything close to "exhaustive" testing; hence, the idea is to continuously extend the test collection. For example, each developer should add tests as soon as they add new functionality. Likewise, a test script should be added for each bug report, both to monitor whether and when the bug is fixed, and to prevent (or at least detect) future occurrences of the same bug. The test grid for individual components lists all these tests and their results.
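The core idea of comparing a test's current output against its recorded stable output can be sketched with Python's difflib; the file names and the query/output content below are made up for illustration:

```python
import difflib

# Sketch of the stable-output idea: a test's current output is compared
# line by line against the recorded ("stable") output; any difference
# flags a potential regression. All content here is illustrative.
stable = "SELECT 1;\n1\n"        # expected output recorded in the repo
current = "SELECT 1;\n2\n"       # output produced by the current build

diff = list(difflib.unified_diff(
    stable.splitlines(keepends=True),
    current.splitlines(keepends=True),
    fromfile="test.stable.out",
    tofile="test.out",
))
passed = not diff
print("PASS" if passed else "FAIL")
for line in diff:
    print(line, end="")
```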
To run all the tests and compare their current output to their stable output, a simple tool called Mtest.py is included in the MonetDB code base. Mtest recursively walks through the source tree, runs the tests, and checks for differences between the stable and the current output. As a result, Mtest creates a web interface that allows convenient access to the differences encountered during testing. Each developer is expected to run "Mtest" (or "make check") on their favorite development platform and check the results before committing changes. During the automatic nightly tests, "make check" and "Mtest" are run on all testing platforms, and the test grid is generated from the resulting data to provide convenient comparative access to the results.
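Mtest's overall loop — walking the source tree, running each test, and checking its current output against the stable output — might be sketched like this; the directory layout, the `.test`/`.test.stable.out` suffixes, and the shell-based runner are assumptions for illustration, not Mtest.py's actual conventions:

```python
import os
import subprocess

# Assumed layout for this sketch: each test is a shell script foo.test
# with its expected output stored alongside as foo.test.stable.out.
def run_tests(root: str) -> dict:
    """Walk the tree, run every test, and record whether its current
    output matches the recorded stable output."""
    outcomes = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in sorted(files):
            if not name.endswith(".test"):
                continue
            path = os.path.join(dirpath, name)
            # run the test and capture its current output
            current = subprocess.run(
                ["sh", path], capture_output=True, text=True
            ).stdout
            # compare against the stable output stored in the repo
            with open(path + ".stable.out") as f:
                stable = f.read()
            outcomes[path] = "OK" if current == stable else "FAIL"
    return outcomes
```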
Though Fedora Linux is our main development platform at CWI, we do not limit our attention to this single platform. Supporting a broad range of hardware and software platforms is an important concern of MonetDB. Using standard configuration tools such as automake, autoconf, and libtool, we have the same code base compiling not only on various flavors of Unix (e.g., Linux, Solaris, Mac OS X) but also on native Windows. Furthermore, this same code base compiles with a wide spectrum of C compilers, ranging from GNU's gcc and Clang to Intel's compiler and Microsoft's Visual Studio and Visual Studio .NET compilers on Windows.