KDE PIM/Akonadi/Testing
This page describes the Akonadi test and benchmark facilities.
Akonadi Testrunner
Igor's GSoC project, found in kdepimlibs/akonadi/tests/testrunner. The Akonadi Testrunner sets up an isolated Akonadi server (which implies a separate D-Bus server) based on an environment configuration file.
Creating Testrunner Environments
A testrunner environment consists of two components: a set of configuration and data files, and an XML description file of the environment.
Here is an example listing based on the environment used for the libakonadi unittests:
unittestenv/
unittestenv/config.xml
unittestenv/xdglocal
unittestenv/kdehome
unittestenv/kdehome/share
unittestenv/kdehome/share/config
unittestenv/kdehome/share/config/akonadi-firstrunrc
unittestenv/kdehome/share/config/akonadi_knut_resource_0rc
unittestenv/kdehome/testdata.xml
unittestenv/xdgconfig
The environment description file (config.xml) looks like this:
<config>
  <kdehome>kdehome</kdehome>
  <confighome>xdgconfig</confighome>
  <datahome>xdgdata</datahome>
  <agent synchronize="true">akonadi_knut_resource</agent>
</config>
The first three elements define the relevant paths inside the environment data, relative to the config.xml file. The <agent> element can be used to create instances of the specified agent (multiple such elements are allowed). If the agent is a resource, it can also be synchronized initially by adding the synchronize="true" attribute; in that case the tests are not started until the synchronization has completed.
Agents set up in this way can be configured by simply providing the corresponding configuration file in $KDEHOME, such as akonadi_knut_resource_0rc in our example.
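For the Knut resource in this example, such a file could look like the following sketch. The key names here are an assumption; the authoritative names are defined in the resource's .kcfg file:
[General]
DataFile=testdata.xml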
Global configuration files can be provided in the same way; akonadi-firstrunrc, as shown below, is particularly useful to prevent the Akonadi default setup mechanism from interfering with the test:
[ProcessedDefaults]
defaultaddressbook=done
defaultcalendar=done
Using the Testrunner
Interactive Use
For manual usage, the testrunner provides an interactive mode in which it sets up the environment and provides a way to "switch" into it.
First, start the testrunner:
$ akonaditest -c config.xml &
Once the setup is complete, it creates a shell script containing the necessary environment variable changes to switch into the test environment:
$ source testenvironment.sh
The environment variables of the current shell are then changed to point to the test environment (e.g. KDEHOME, DBUS_*, etc.). Every Akonadi application run in that shell operates on the Akonadi server of the test environment.
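For example, you could start Akonadiconsole from that shell to inspect the test server (assuming it is installed):
$ akonadiconsole &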
To terminate and cleanup the test environment, run:
$ shutdown-testenvironment
Note that afterwards your shell still points to the (now no longer existing) test environment and might no longer work as expected.
Non-Interactive Use
kdepimlibs/akonadi/tests uses the Akonadi Testrunner to run unittests in an isolated environment. For automated usage, the testrunner can be used in a non-interactive way:
$ akonaditest -c config.xml <command> <params>
The testrunner will run the given command with its parameters inside the isolated environment and terminate afterwards.
This can be used from within CMake (example based on kdepimlibs/akonadi/tests):
macro(add_akonadi_isolated_test _source)
  set(_test ${_source})
  get_filename_component(_name ${_source} NAME_WE)
  kde4_add_executable(${_name} TEST ${_test})
  target_link_libraries(${_name}
    akonadi-kde akonadi-kmime ${QT_QTTEST_LIBRARY}
    ${QT_QTGUI_LIBRARY} ${KDE4_KDECORE_LIBS}
  )

  # based on kde4_add_unit_test
  if (WIN32)
    get_target_property( _loc ${_name} LOCATION )
    set(_executable ${_loc}.bat)
  else (WIN32)
    set(_executable ${EXECUTABLE_OUTPUT_PATH}/${_name})
  endif (WIN32)
  if (UNIX)
    set(_executable ${_executable}.shell)
  endif (UNIX)

  find_program(_testrunner akonaditest)

  add_test( libakonadi-${_name}
    ${_testrunner} -c
    ${CMAKE_CURRENT_SOURCE_DIR}/unittestenv/config.xml
    ${_executable}
  )
endmacro(add_akonadi_isolated_test)
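The macro is then invoked once per test source file; the file name below is merely an illustration:
add_akonadi_isolated_test( itemfetchtest.cpp )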
Using QtTest unittests with KDE extensions (QTEST_KDEMAIN) together with the testrunner is problematic as they modify some of the environment variables set by the testrunner. Instead, use the following:
#include <qtest_akonadi.h>
QTEST_AKONADIMAIN( MyTest, NoGUI )
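As a rough sketch, a complete isolated test could look like this; class, slot, and file names are only illustrative, and the collection fetch merely demonstrates talking to the test server:
#include <akonadi/collectionfetchjob.h>
#include <qtest_akonadi.h>

using namespace Akonadi;

class MyTest : public QObject
{
  Q_OBJECT
  private slots:
    // Runs against the isolated Akonadi server set up by akonaditest.
    void testFetchCollections()
    {
      CollectionFetchJob *job = new CollectionFetchJob( Collection::root(),
                                                        CollectionFetchJob::Recursive );
      QVERIFY( job->exec() );
      QVERIFY( !job->collections().isEmpty() );
    }
};

QTEST_AKONADIMAIN( MyTest, NoGUI )

#include "mytest.moc"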
KNUT Test Data Resource
Found in kdepim/akonadi/resources, this is a fully featured resource that operates on a single XML file. The file format is described in knut.xsd and closely follows the internal structure of Akonadi. New files can be created, e.g. in Akonadiconsole, by creating a Knut resource and specifying a non-existing file.
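To give a rough idea of the format, a test data file might look something like the sketch below; the element and attribute names here are not guaranteed, so consult knut.xsd for the authoritative schema:
<?xml version="1.0" encoding="UTF-8"?>
<!-- hypothetical sketch only, see knut.xsd for the real schema -->
<knut>
 <collection rid="c1" name="Test Contacts" content="text/directory">
  <item rid="i1" mimetype="text/directory">
   <payload>BEGIN:VCARD
VERSION:3.0
FN:Test Contact
END:VCARD</payload>
  </item>
 </collection>
</knut>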
Akonadi Scriptable Resource
Second part of Igor's GSoC project, currently in playground/pim/akonaditest.
Akonadi Benchmarker
In kdepimlibs/akonadi/test, part of Robert's thesis.
Unittests
Akonadi Server
Usable without installation; run with ctest or make test as usual.
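For example, to run all server tests with verbose output from the build directory:
$ ctest -V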
kdepimlibs/akonadi
These tests use the Akonadi Testrunner; the test environment is found in kdepimlibs/akonadi/tests/unittestenv.
Setup
The tests do not yet work completely without certain components being installed, namely:
- Akonadi Server
- KNUT resource
Running the tests
The tests can be run automatically using ctest/make test as usual. To run a single test manually, it needs to be executed using the Akonadi testrunner:
$ cd kdepimlibs/akonadi/tests
$ akonaditest -c unittestenv/config.xml $BUILDDIR/test_executable.shell
kdepim/akonadi
Are there any?
Resource Testing
Tools to automatically test Akonadi resources are currently in development in playground/pim/akonaditest/resourcetester. There are two basic modes of operation, read tests and write tests. The resourcetester tool provides convenience methods for common operations needed to perform those tests.
Read Tests
To verify that the read code of a resource works correctly, we read pre-defined test data through the resource and compare it with independently provided reference data.
Write Tests
Once the reading code is verified, it can be used to verify the writing code. This is done by writing a change to the resource, re-creating the resource to ensure the change was persisted, and finally comparing the re-read data with the expected result.