KDE PIM/Akonadi/Testing
This page describes the Akonadi test and benchmark facilities.
Akonadi Testrunner
Igor Trindade Oliveira's GSoC project, found in Akonadi's autotests/libs/testrunner. The Akonadi Testrunner sets up an isolated Akonadi server based on an environment configuration file.
Until version 4.11 this meant starting a separate D-Bus daemon (and KDE infrastructure as needed, such as kdeinit). Since version 4.12 this uses the Akonadi Multi-Instance feature. The latter provides less strict isolation, but considerably improved performance and convenience when interacting with the test.
Creating Testrunner Environments
A testrunner environment consists of two components: a set of configuration and data files and an XML description file of the environment.
Here is an example listing based on the environment used for the libakonadi unittests:
unittestenv/
unittestenv/config.xml
unittestenv/xdglocal
unittestenv/kdehome
unittestenv/kdehome/share
unittestenv/kdehome/share/config
unittestenv/kdehome/share/config/akonadi-firstrunrc
unittestenv/kdehome/share/config/akonadi_knut_resource_0rc
unittestenv/kdehome/testdata.xml
unittestenv/xdgconfig
The environment description file (config.xml) looks like this:
<config>
<!-- path to KDE configuration ($KDEHOME) -->
<kdehome>kdehome</kdehome>
<!-- path to Akonadi configuration, ie. the stuff that
usually goes into ~/.config/akonadi/ -->
<confighome>xdgconfig</confighome>
<!-- path to Akonadi data, ie. the stuff that usually
goes into ~/.local/share/akonadi/ -->
<datahome>xdgdata</datahome>
<!-- load resources of the specified types -->
<agent synchronize="true">akonadi_knut_resource</agent>
<!-- set environment variables -->
<envvar name="AKONADI_DISABLE_AGENT_AUTOSTART">true</envvar>
</config>
The first three elements define the relevant paths inside the environment data, relative to the config.xml file. The <agent> element can be used to create instances of the specified agent (multiple such elements are allowed). If the agent is a resource, it can also be synchronized initially by adding the synchronize="true" attribute; in that case, tests will not be launched before the synchronization has completed.
Agents set up in this way can be configured by simply providing the corresponding configuration file in $KDEHOME, such as akonadi_knut_resource_0rc in our example.
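For example, a minimal configuration for the Knut resource instance created above might be dropped into the environment like this. The group and key names below are an assumption for illustration only; check the resource's .kcfg (or an existing unittestenv) for the settings it actually expects:
# NOTE: the [General]/DataFile entry is an illustrative assumption, not the verified format
$ cat > unittestenv/kdehome/share/config/akonadi_knut_resource_0rc <<'EOF'
[General]
DataFile=testdata.xml
EOF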
Global configuration files can be provided in the same way. akonadi-firstrunrc, as shown below, is particularly useful to prevent the Akonadi default setup mechanism from interfering with the test:
[ProcessedDefaults]
defaultaddressbook=done
defaultcalendar=done
The same applies to kdedrc, which allows disabling kbuildsycoca4. That can greatly speed up tests.
[General]
CheckSycoca=false
CheckFileStamps=false
The <envvar> element allows you to set arbitrary environment variables inside the test environment. One useful example is AKONADI_DISABLE_AGENT_AUTOSTART which will prevent the Akonadi server from starting autostart agents, which can further speed up the setup process.
Using the Testrunner
Interactive Use
Setting up the environment manually is useful for running tests in gdb, valgrind, etc.
For such usage, the testrunner provides an interactive mode in which it sets up the environment and provides a way to "switch" into it.
First, start the testrunner:
$ akonaditest -c config.xml --testenv /tmp/testenvironment.sh &
Note: Although the --testenv parameter is not required, it makes life a bit easier when testing manually. If you don't pass it, the script will be generated in a temporary directory and is therefore a bit harder to find.
Once the setup is complete, it creates a shell script containing the necessary environment variable changes to switch into the test environment:
$ source /tmp/testenvironment.sh
The environment variables of the current shell are then changed to point to the test environment (eg. XDG_CONFIG_HOME, DBUS_*, etc.). Every Akonadi application run in that shell operates on the Akonadi server of the test environment.
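For example, after sourcing the script you can inspect the isolated instance and debug a test against it; the test binary name below is purely illustrative:
$ akonadictl status                    # talks to the Akonadi server of the test environment
$ akonadiconsole &                     # optionally browse the test data interactively
$ gdb --args ./itemfetchtest.shell     # illustrative test binary, run under gdb against the isolated server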
To terminate and cleanup the test environment, run:
$ shutdown-testenvironment
Note that afterwards your shell still points to the (no longer existing) test environment and might not work as expected anymore.
Non-Interactive Use
kdepimlibs/akonadi/tests uses the Akonadi Testrunner to run unittests in an isolated environment. For automated usage, the testrunner can be used in a non-interactive way:
$ akonaditest -c config.xml <command> <params>
The testrunner will run <command> <params> within the isolated environment and terminate afterwards.
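For example, to run a single QTest-based unittest binary this way (the binary name is purely illustrative; -xml and -o are standard QTestLib options):
$ akonaditest -c config.xml ./itemfetchtest.shell -xml -o itemfetchtest-result.xml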
This can be used from within CMake (example based on kdepimlibs/akonadi/tests):
macro(add_akonadi_isolated_test _source)
set(_test ${_source})
get_filename_component(_name ${_source} NAME_WE)
kde4_add_executable(${_name} TEST ${_test})
target_link_libraries(${_name}
akonadi-kde akonadi-kmime ${QT_QTTEST_LIBRARY}
${QT_QTGUI_LIBRARY} ${KDE4_KDECORE_LIBS}
)
# based on kde4_add_unit_test
if (WIN32)
get_target_property( _loc ${_name} LOCATION )
set(_executable ${_loc}.bat)
else (WIN32)
set(_executable ${EXECUTABLE_OUTPUT_PATH}/${_name})
endif (WIN32)
if (UNIX)
set(_executable ${_executable}.shell)
endif (UNIX)
find_program(_testrunner akonaditest)
add_test( libakonadi-${_name}
${_testrunner} -c
${CMAKE_CURRENT_SOURCE_DIR}/unittestenv/config.xml
${_executable}
)
endmacro(add_akonadi_isolated_test)
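Tests registered by this macro then run through CTest like any other test, for example (the test name is illustrative; the libakonadi- prefix comes from the add_test call above):
$ cd $BUILDDIR              # the build directory corresponding to kdepimlibs/akonadi/tests
$ ctest -R libakonadi-itemfetchtest --output-on-failure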
Using QtTest unittests with KDE extensions (QTEST_KDEMAIN) together with the testrunner is problematic as they modify some of the environment variables set by the testrunner. Instead, use the following:
#include <qtest_akonadi.h>
QTEST_AKONADIMAIN( MyTest, NoGUI )
KNUT Test Data Resource
Found in kdepim/akonadi/resources, this is a fully featured resource that operates on a single XML file. The file format is described in knut.xsd and closely follows the internal structure of Akonadi. New files can be created in e.g. Akonadiconsole by creating a resource and specifying a non-existing file.
Akonadi Benchmarker
Found in kdepimlibs/akonadi/test, this is part of Robert's thesis. It is a set of tests that shows the time needed to process many item/collection operations.
Unittests
Akonadi Server
Usable without installation; run with ctest/make test as usual.
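For example, from the server's build directory:
$ cd $BUILDDIR              # the Akonadi server build directory
$ ctest --output-on-failure # or: make test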
kdepimlibs/akonadi
These tests use the Akonadi Testrunner, the test environment is found in kdepimlibs/akonadi/tests/unittestenv.
Setup
The tests do not yet work completely without certain components being installed, namely:
- Akonadi Server
- KNUT resource
Running the tests
The tests can be run automatically using ctest/make test as usual. To run a single test manually, it needs to be executed using the Akonadi testrunner:
$ cd kdepimlibs/akonadi/tests
$ akonaditest -c unittestenv/config.xml $BUILDDIR/test_executable.shell
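The same approach works for wrapping the test in another tool such as valgrind, since the testrunner simply executes whatever command line it is given inside the environment (paths are illustrative):
$ akonaditest -c unittestenv/config.xml valgrind --leak-check=full $BUILDDIR/test_executable.shell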
kdepim/akonadi
Are there any?
Resource Testing
Tools to automatically test Akonadi resources are currently in development in playground/pim/akonaditest/resourcetester. There are two basic modes of operation: read tests and write tests. The resourcetester tool provides convenience methods for common operations needed to perform those tests.
Read Tests
To verify that the read code in a resource works correctly, we read pre-defined test data from the resource and compare it with independently provided reference data.
Write Tests
Once the reading code is verified we can use that to verify the writing code. This is done by writing a change to the resource, re-creating it to ensure the change was persistent and finally comparing the re-read change with the expected result.