Reproducibility

The test directory contains tests that ensure updates to notebooks and libraries do not result in unintended changes to notebook output. This keeps correction results consistent across subsequent versions.

Note

Tests can be quite resource-intensive and should therefore be run on a dedicated cluster node, allocated using salloc.
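
For example, a node can be allocated interactively with salloc; the partition name and time limit below are placeholders and should be adapted to your cluster setup:

salloc --partition=exfel --time=08:00:00

Run the tests from within the resulting allocation.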

Running Tests

Before you run tests, commit your changes, so that the test run can be assigned to that commit:

git add ...
git commit -m "Added test section to docs"

To run all tests, navigate to the test directory and execute:

python -m unittest discover

This will usually entail executing a notebook under test via SLURM first, then checking its output against the last committed artefacts of that test type.
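
If you want more detailed progress output while the full suite runs, the standard unittest verbosity flag can be added (this is generic unittest behaviour, not specific to these tests):

python -m unittest discover -v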

If individual tests are run, e.g. for debugging, additional options exist to skip tests or notebook execution. Running:

python test_XXX.py --help

where test_XXX.py is the test name, will give you a list of options available for that test.

If all tests pass, you can commit and push your updates. If you have failures, either review your changes or, if the changes are intended, generate new artefacts.

Note

Running tests will generate entries for test reports in the artefacts directory under the most recent commit. Reviewers should check that such updates are present in the list of changed files.
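
One way to check this locally, assuming the artefacts live under tests/artefacts/ as in the commit commands later in this section, is to inspect the working tree after a test run:

git status tests/artefacts/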

Generating new Artefacts

If an update intends to change output, the tests can be used to generate new artefacts against which subsequent tests will then run.

First, commit the changes for which you want to produce new artefacts:

git add ...
git commit -m "AGIPD corrections handle baseline shift"

Unlike running the tests, new artefacts need to be generated for each affected test individually:

python test_XXX.py --generate

replacing test_XXX.py with the test you’d like to run. This will execute the notebook, create artefact entries in the artefact directory, and then check for consistency by executing the test against these new artefacts. This last part is important: a test should not fail on its own input. If it does, something is very likely wrong!
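
As an illustration, the sequence for a hypothetical test file test_agipd.py might look as follows; the file name is only an example, and the second invocation is an optional extra check that runs the test again without generating artefacts:

python test_agipd.py --generate
python test_agipd.py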

After artefacts are created and the tests using them have passed, commit the new artefacts and create a merge request for your branch:

git add tests/artefacts/
git commit -m "Added new artefacts for changes related to baseline shifts"

Please also add comments in the MR description on why artefacts have changed.

Note

Reviewers should always evaluate if the changes in test artefacts are appropriate, intended and acceptable.

Test Reports

Test reports are automatically generated when the documentation is built, from all XML report files found in sub-directories of the artefact directory.
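
To see which report files would be picked up, you can list them manually; the path below assumes the artefact directory is tests/artefacts/, as in the commit example above:

find tests/artefacts -name "*.xml"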

Note

Please make sure not to commit any additional files into the test_rsts subfolder of this documentation. Also, do not commit test_results.rst. It is autogenerated.

Test Data

In order to perform the described tests, both detector data and calibration constants are required. Detector data for use in testing can be found in:

/gpfs/exfel/exp/XMPL/201750/p700001/raw/

Tests should be configured to output into a common location:

/gpfs/exfel/exp/XMPL/201750/p700001/scratch/

Repositories of calibration constants used in testing can be found at:

/gpfs/exfel/exp/XMPL/201750/p700001/usr
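
A quick way to verify that these locations are accessible from your allocated node is to list them:

ls /gpfs/exfel/exp/XMPL/201750/p700001/raw/
ls /gpfs/exfel/exp/XMPL/201750/p700001/scratch/
ls /gpfs/exfel/exp/XMPL/201750/p700001/usr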