# Data rates and throttling

## Network limits and data rates
calng correction devices can typically process data faster than it can reliably be sent over the network. Running into network limits is not immediately obvious: typically, some trains just seem to silently go missing. Consider the layout of a typical correction pipeline for a large detector.
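A rough sketch of the fast data path, with the throttling point on each hop (exact topology and downstream consumers vary by detector and deployment):

```
detector modules ---> DAQ ---> correction devices ---> group matcher ---> downstream (e.g. preview)
                 (1)      (2)                     (3)

(1) throttled via frames per train, configured at the detector
(2) throttled via DAQ train stride on the monitoring output
(3) throttled via frame filter on the correction devices
```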
Each data stream can be throttled in some way. On the fast data path, the options are as follows:
- From detector module to DAQ (outside scope of calng)
    - The number of frames per train is set here; keep it in mind for later steps
- From DAQ to correction device
    - Can tweak DAQ train stride via calibration manager: positive integer *n* where DAQ only sends every *n*th train on its monitoring output (higher value means slower data)
- From correction device to group matcher
    - Can tweak the number of frames processed and forwarded using a frame filter (see the sketch after this list)
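The two knobs scale a stream's data rate multiplicatively: the train stride divides the train rate, while the frame filter reduces the number of frames per train. A minimal back-of-the-envelope sketch, assuming illustrative numbers (the 10 Hz train rate is the European XFEL repetition rate; frame size and counts are made up, not calng defaults):

```python
from typing import Optional

TRAIN_RATE_HZ = 10  # European XFEL train repetition rate


def stream_rate_mb_s(
    frames_per_train: int,
    bytes_per_frame: int,
    daq_train_stride: int = 1,
    filtered_frames: Optional[int] = None,
) -> float:
    """Estimate one stream's data rate in MB/s after both throttling knobs.

    daq_train_stride: DAQ forwards only every n-th train on its monitoring output.
    filtered_frames:  frames per train kept by a frame filter (None keeps all).
    """
    frames = filtered_frames if filtered_frames is not None else frames_per_train
    trains_per_second = TRAIN_RATE_HZ / daq_train_stride
    return frames * bytes_per_frame * trains_per_second / 1e6


# Hypothetical module stream: 512 frames per train, 1 MB per corrected frame
print(stream_rate_mb_s(512, 1_000_000))                      # 5120.0 MB/s unthrottled
print(stream_rate_mb_s(512, 1_000_000, daq_train_stride=4))  # 1280.0 MB/s
print(stream_rate_mb_s(512, 1_000_000, daq_train_stride=4,
                       filtered_frames=128))                 # 320.0 MB/s
```

For scale, a single 10 GbE link tops out around 1250 MB/s, so an unthrottled stream of this (hypothetical) size would already oversubscribe it.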
For full detector matching in particular, it is important to choose the DAQ train stride appropriately given the desired number of frames per train (whether set at the detector or via a frame filter on the correction side).
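One concrete way to pick a stride is to take the smallest *n* that brings the unthrottled rate under a per-stream budget; a sketch with made-up numbers (the budget is an assumption, not a measured limit of any cluster):

```python
import math

TRAIN_RATE_HZ = 10  # European XFEL train repetition rate

# Hypothetical: 512 frames per train of 1 MB each, 1000 MB/s per-stream budget
frames_per_train = 512
bytes_per_frame = 1_000_000
budget_mb_s = 1000.0

unthrottled_mb_s = frames_per_train * bytes_per_frame * TRAIN_RATE_HZ / 1e6
train_stride = math.ceil(unthrottled_mb_s / budget_mb_s)  # ceil(5.12) -> 6
print(train_stride)  # DAQ forwards only every 6th train
```

Equivalently, halving the frames per train with a frame filter halves the required stride (here from 6 to 3).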
## Tested stable configurations
The measurement tables previously in this section have been removed, as all listed measurements were outdated by updates and upgrades to (depending on the cluster) compute nodes, network infrastructure, calng, the DAQ, and the Karabo framework itself. During summer maintenance 2024, testing with LPD1M at FXE suggested that train stride 1 (no throttling) and no veto pattern (512 frames per train) worked with a stable 100 % preview matching ratio. With this in mind, we should redo all measurements.