toolbox_scs.detectors
¶
Submodules¶
toolbox_scs.detectors.azimuthal_integrator
toolbox_scs.detectors.bam_detectors
toolbox_scs.detectors.digitizers
toolbox_scs.detectors.dssc
toolbox_scs.detectors.dssc_data
toolbox_scs.detectors.dssc_misc
toolbox_scs.detectors.dssc_plot
toolbox_scs.detectors.dssc_processing
toolbox_scs.detectors.fccd
toolbox_scs.detectors.hrixs
toolbox_scs.detectors.pes
toolbox_scs.detectors.viking
toolbox_scs.detectors.xgm
Package Contents¶
Classes¶
hRIXS – The hRIXS analysis, especially curvature correction
Viking – The Viking analysis (spectrometer used in combination with Andor Newton camera)
Functions¶
get_bam – Load beam arrival monitor (BAM) data and align their pulse ID according to the bunch pattern.
get_bam_params – Extract the run values of bamStatus[1-3] and bamError.
check_peak_params – Checks and plots the peak parameters (pulse window and baseline window of a raw digitizer trace) used to compute the peak integration.
get_digitizer_peaks – Automatically computes digitizer peaks. Sources can be loaded on the fly via the mnemonics argument, or processed from an existing data set (merge_with).
get_laser_peaks – Extracts laser photodiode signal (peak intensity) from Fast ADC digitizer.
get_peaks – Extract peaks from one source (channel) of a digitizer.
get_tim_peaks – Automatically computes TIM peaks. Sources can be loaded on the fly via the mnemonics argument, or processed from an existing data set (merge_with).
digitizer_signal_description – Check for the existence of signal description and return all corresponding channels in a dictionary.
get_dig_avg_trace – Compute the average over ntrains evenly spaced across all trains of a digitizer trace.
get_data_formatted – Combines the given data into one dataset. For any of extra_data's data types, an xarray.Dataset is returned.
load_xarray – Load stored xarray Dataset.
save_attributes_h5 – Adding attributes to a hdf5 file, intended to attach metadata to a processed run.
save_xarray – Store xarray Dataset in the specified location.
create_dssc_bins – Creates a single entry for the dssc binner dictionary.
get_xgm_formatted – Load the xgm data and define coordinates along the pulse dimension.
load_dssc_info – Loads the first data file for DSSC module 0 (this is hardcoded) and returns the detector_info dictionary.
load_mask – Load a DSSC mask file.
quickmask_DSSC_ASIC – Returns a mask for the given DSSC geometry with ASICs given in poslist blanked.
process_dssc_data – Collects and reduces DSSC data for a single module.
get_pes_params – Extract PES parameters for a given extra_data DataCollection.
get_pes_tof – Extracts time-of-flight spectra from raw digitizer traces.
calibrate_xgm – Calculates the calibration factor F between the photon flux (slow signal) and the fast signal (pulse-resolved) of the sase 3 pulses.
get_xgm – Load and/or computes XGM data. Sources can be loaded on the fly via the mnemonics argument, or processed from an existing dataset (merge_with).
Attributes¶
- class toolbox_scs.detectors.AzimuthalIntegrator(imageshape, center, polar_range, aspect=204 / 236, **kwargs)[source]¶
Bases:
object
- class toolbox_scs.detectors.AzimuthalIntegratorDSSC(geom, polar_range, dxdy=(0, 0), **kwargs)[source]¶
Bases:
AzimuthalIntegrator
- toolbox_scs.detectors.get_bam(run, mnemonics=None, merge_with=None, bunchPattern='sase3', pulseIds=None)[source]¶
Load beam arrival monitor (BAM) data and align their pulse ID according to the bunch pattern. Sources can be loaded on the fly via the mnemonics argument, or processed from an existing data set (merge_with).
- Parameters
run (extra_data.DataCollection) – DataCollection containing the bam data.
mnemonics (str or list of str) – mnemonics for BAM, e.g. “BAM1932M” or [“BAM414”, “BAM1932M”]. If None, defaults to “BAM1932M” in case no merge_with dataset is provided.
merge_with (xarray Dataset) – If provided, the resulting Dataset will be merged with this one. The BAM variables of merge_with (if any) will also be selected, aligned and merged.
bunchPattern (str) – ‘sase1’ or ‘sase3’ or ‘scs_ppl’, bunch pattern used to extract peaks. The pulse ID dimension will be named ‘sa1_pId’, ‘sa3_pId’ or ‘ol_pId’, respectively.
pulseIds (list, 1D array) – Pulse Ids. If None, they are automatically loaded.
- Returns
merged with Dataset merge_with if provided.
- Return type
xarray Dataset with pulse-resolved BAM variables aligned,
Example
>>> import toolbox_scs as tb
>>> run, ds = tb.load(2711, 303, 'BAM1932S')
>>> ds['BAM1932S']
- toolbox_scs.detectors.get_bam_params(run, mnemo_or_source='BAM1932S')[source]¶
Extract the run values of bamStatus[1-3] and bamError.
- Parameters
run (extra_data.DataCollection) – DataCollection containing the bam data.
mnemo_or_source (str) – mnemonic of the BAM, e.g. ‘BAM414’, or source name, e.g. ‘SCS_ILH_LAS/DOOCS/BAM_414_B2’.
- Returns
params – dictionary containing the extracted parameters.
- Return type
dict
Note
The extracted parameters are run values; they do not reflect any possible change during the run.
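Example
An illustrative usage sketch; the proposal, run number and BAM mnemonic are placeholders.
>>> import toolbox_scs as tb
>>> import toolbox_scs.detectors as tbdet
>>> run = tb.open_run(2711, 303)
>>> params = tbdet.get_bam_params(run, mnemo_or_source='BAM1932S')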
- toolbox_scs.detectors.check_peak_params(run, mnemonic, raw_trace=None, ntrains=200, params=None, plot=True, show_all=False, bunchPattern='sase3')[source]¶
Checks and plots the peak parameters (pulse window and baseline window of a raw digitizer trace) used to compute the peak integration. These parameters are either set by the digitizer peak-integration settings, or are determined by a peak finding algorithm (used in get_tim_peaks or get_fast_adc_peaks) when the inputs are raw traces. The parameters can also be provided manually for visual inspection. The plot either shows the first and last pulse of the trace or the entire trace.
- Parameters
run (extra_data.DataCollection) – DataCollection containing the digitizer data.
mnemonic (str) – ToolBox mnemonic of the digitizer data, e.g. ‘MCP2apd’.
raw_trace (optional, 1D numpy array or xarray DataArray) – Raw trace to display. If None, the average raw trace over ntrains of the corresponding channel is loaded (this can be time-consuming).
ntrains (optional, int) – Only used if raw_trace is None. Number of trains used to calculate the average raw trace of the corresponding channel.
plot (bool) – If True, displays the raw trace and peak integration regions.
show_all (bool) – If True, displays the entire raw trace and all peak integration regions (this can be time-consuming). If False, shows the first and last pulse according to the bunchPattern.
bunchPattern (optional, str) – Only used if plot is True. Checks the bunch pattern against the digitizer peak parameters and shows potential mismatch.
- Return type
dictionary of peak integration parameters
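Example
An illustrative sketch; the proposal, run number and digitizer mnemonic are placeholders.
>>> import toolbox_scs as tb
>>> import toolbox_scs.detectors as tbdet
>>> run = tb.open_run(2212, 235)
>>> params = tbdet.check_peak_params(run, 'MCP2apd', bunchPattern='sase3')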
- toolbox_scs.detectors.get_digitizer_peaks(run, mnemonics=None, merge_with=None, bunchPattern='None', integParams=None, digitizer=None, keepAllSase=False)[source]¶
Automatically computes digitizer peaks. Sources can be loaded on the fly via the mnemonics argument, or processed from an existing data set (merge_with). The bunch pattern table is used to assign the pulse id coordinates.
- Parameters
run (extra_data.DataCollection) – DataCollection containing the digitizer data.
mnemonics (str or list of str) – mnemonics for FastADC or TIM, e.g. “FastADC2raw” or [“MCP2raw”, “MCP3apd”]. If None and no merge_with dataset is provided, defaults to “MCP2apd” if digitizer is ADQ412 or “FastADC5raw” if digitizer is FastADC.
merge_with (xarray Dataset) – If provided, the resulting Dataset will be merged with this one. The FastADC variables of merge_with (if any) will also be computed and merged.
bunchPattern (str) – ‘sase1’, ‘sase3’, ‘scs_ppl’ or ‘None’: bunch pattern used to extract peaks.
integParams (dict) – dictionary for raw trace integration, e.g. {‘pulseStart’:100, ‘pulseStop’:200, ‘baseStart’:50, ‘baseStop’:99, ‘period’:24, ‘npulses’:500}. If None, integration parameters are computed automatically.
keepAllSase (bool) – Only relevant in case of sase-dedicated trains. If True, all trains are kept, else only those of the bunchPattern are kept.
- Returns
xarray Dataset with all Fast ADC variables substituted by the computed peak values (e.g. “FastADC2raw” becomes “FastADC2peaks”).
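Example
An illustrative sketch; the proposal, run number and mnemonic are placeholders.
>>> import toolbox_scs as tb
>>> import toolbox_scs.detectors as tbdet
>>> run, ds = tb.load(2212, 235, 'FastADC5raw')
>>> ds = tbdet.get_digitizer_peaks(run, merge_with=ds, bunchPattern='scs_ppl')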
- toolbox_scs.detectors.get_laser_peaks(run, mnemonics=None, merge_with=None, bunchPattern='scs_ppl', integParams=None)[source]¶
Extracts laser photodiode signal (peak intensity) from Fast ADC digitizer. Sources can be loaded on the fly via the mnemonics argument, and/or processed from an existing data set (merge_with). The PP laser bunch pattern is used to assign the pulse id coordinates.
- Parameters
run (extra_data.DataCollection) – DataCollection containing the digitizer data.
mnemonics (str or list of str) – mnemonics for FastADC corresponding to laser signal, e.g. “FastADC2peaks” or [“FastADC2raw”, “FastADC3peaks”]. If None, defaults to “MCP2apd” in case no merge_with dataset is provided.
merge_with (xarray Dataset) – If provided, the resulting Dataset will be merged with this one. The FastADC variables of merge_with (if any) will also be computed and merged.
bunchPattern (str) – ‘sase1’ or ‘sase3’ or ‘scs_ppl’, bunch pattern used to extract peaks.
integParams (dict) – dictionary for raw trace integration, e.g. {‘pulseStart’:100, ‘pulseStop’:200, ‘baseStart’:50, ‘baseStop’:99, ‘period’:24, ‘npulses’:500}. If None, integration parameters are computed automatically.
- Returns
xarray Dataset with all Fast ADC variables substituted by the computed peak values (e.g. “FastADC2raw” becomes “FastADC2peaks”).
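Example
An illustrative sketch; the proposal, run number and mnemonic are placeholders.
>>> import toolbox_scs as tb
>>> import toolbox_scs.detectors as tbdet
>>> run = tb.open_run(2212, 235)
>>> ds = tbdet.get_laser_peaks(run, 'FastADC2raw', bunchPattern='scs_ppl')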
- toolbox_scs.detectors.get_peaks(run, data=None, source=None, key=None, digitizer='ADQ412', useRaw=True, autoFind=True, integParams=None, bunchPattern='sase3', bpt=None, extra_dim=None, indices=None)[source]¶
Extract peaks from one source (channel) of a digitizer.
- Parameters
run (extra_data.DataCollection) – DataCollection containing the digitizer data
data (xarray DataArray or str) – array containing the raw traces or peak-integrated values from the digitizer. If str, must be one of the ToolBox mnemonics. If None, the data is loaded via the source and key arguments.
source (str) – Name of digitizer source, e.g. ‘SCS_UTC1_ADQ/ADC/1:network’. Only required if data is a DataArray or None.
key (str) – Key for digitizer data, e.g. ‘digitizers.channel_1_A.raw.samples’. Only required if data is a DataArray or is None.
digitizer (string) – name of digitizer, e.g. ‘FastADC’ or ‘ADQ412’. Used to determine the sampling rate.
useRaw (bool) – If True, extract peaks from raw traces. If False, uses the APD (or peaks) data from the digitizer.
autoFind (bool) – If True, finds integration parameters by inspecting the average raw trace. Only valid if useRaw is True.
integParams (dict) – dictionary containing the integration parameters for raw trace integration: ‘pulseStart’, ‘pulseStop’, ‘baseStart’, ‘baseStop’, ‘period’, ‘npulses’. Not used if autoFind is True. All keys are required when bunch pattern is missing.
bunchPattern (string or dict) – match the peaks to the bunch pattern: ‘sase1’, ‘sase3’, ‘scs_ppl’. This will dictate the name of the pulse ID coordinates: ‘sa1_pId’, ‘sa3_pId’ or ‘scs_ppl’. Alternatively, a dict with source, key and pattern can be provided, e.g. {‘source’:’SCS_RR_UTC/TSYS/TIMESERVER’, ‘key’:’bunchPatternTable.value’, ‘pattern’:’sase3’}
bpt (xarray DataArray) – bunch pattern table
extra_dim (str) – Name given to the dimension along the peaks. If None, the name is given according to the bunchPattern.
indices (array, slice) – indices from the peak-integrated data to retrieve. Only required when bunch pattern is missing and useRaw is False.
- Return type
xarray.DataArray containing digitizer peaks with pulse coordinates
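Example
An illustrative sketch; the proposal, run number and mnemonic are placeholders.
>>> import toolbox_scs as tb
>>> import toolbox_scs.detectors as tbdet
>>> run = tb.open_run(2212, 235)
>>> peaks = tbdet.get_peaks(run, data='MCP2apd', digitizer='ADQ412',
...                         useRaw=False, bunchPattern='sase3')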
- toolbox_scs.detectors.get_tim_peaks(run, mnemonics=None, merge_with=None, bunchPattern='sase3', integParams=None, keepAllSase=False)[source]¶
Automatically computes TIM peaks. Sources can be loaded on the fly via the mnemonics argument, or processed from an existing data set (merge_with). The bunch pattern table is used to assign the pulse id coordinates.
- Parameters
run (extra_data.DataCollection) – DataCollection containing the digitizer data.
mnemonics (str or list of str) – mnemonics for TIM, e.g. “MCP2apd” or [“MCP2apd”, “MCP3raw”]. If None, defaults to “MCP2apd” in case no merge_with dataset is provided.
merge_with (xarray Dataset) – If provided, the resulting Dataset will be merged with this one. The TIM variables of merge_with (if any) will also be computed and merged.
bunchPattern (str) – ‘sase1’ or ‘sase3’ or ‘scs_ppl’, bunch pattern used to extract peaks. The pulse ID dimension will be named ‘sa1_pId’, ‘sa3_pId’ or ‘ol_pId’, respectively.
integParams (dict) – dictionary for raw trace integration, e.g. {‘pulseStart’:100, ‘pulseStop’:200, ‘baseStart’:50, ‘baseStop’:99, ‘period’:24, ‘npulses’:500}. If None, integration parameters are computed automatically.
keepAllSase (bool) – Only relevant in case of sase-dedicated trains. If True, all trains are kept, else only those of the bunchPattern are kept.
- Returns
xarray Dataset with all TIM variables substituted by the computed peak values (e.g. “MCP2raw” becomes “MCP2peaks”), merged with Dataset merge_with if provided.
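Example
An illustrative sketch; the proposal, run number and mnemonic are placeholders.
>>> import toolbox_scs as tb
>>> import toolbox_scs.detectors as tbdet
>>> run, ds = tb.load(2212, 235, 'MCP2apd')
>>> ds = tbdet.get_tim_peaks(run, merge_with=ds, bunchPattern='sase3')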
- toolbox_scs.detectors.digitizer_signal_description(run, digitizer=None)[source]¶
Check for the existence of signal description and return all corresponding channels in a dictionary.
- Parameters
run (extra_data.DataCollection) – DataCollection containing the digitizer data.
digitizer (str or list of str (default=None)) – Name of the digitizer: one in [‘FastADC’, ‘FastADC2’, ‘ADQ412’, ‘ADQ412_2’]. If None, all digitizers are used.
- Returns
signal_description – dictionary containing the signal description of the digitizer channels.
- Return type
dict
Example
import toolbox_scs as tb
run = tb.open_run(3481, 100)
signals = tb.digitizer_signal_description(run)
signals_fadc2 = tb.digitizer_signal_description(run, 'FastADC2')
- toolbox_scs.detectors.get_dig_avg_trace(run, mnemonic, ntrains=None)[source]¶
Compute the average over ntrains evenly spaced across all trains of a digitizer trace.
- Parameters
run (extra_data.DataCollection) – DataCollection containing the digitizer data.
mnemonic (str) – ToolBox mnemonic of the digitizer data, e.g. ‘MCP2apd’.
ntrains (int) – Number of trains used to calculate the average raw trace. If None, all trains are used.
- Returns
trace – The average digitizer trace
- Return type
DataArray
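Example
An illustrative sketch; the proposal, run number and mnemonic are placeholders.
>>> import toolbox_scs as tb
>>> import toolbox_scs.detectors as tbdet
>>> run = tb.open_run(2212, 235)
>>> trace = tbdet.get_dig_avg_trace(run, 'MCP2apd', ntrains=200)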
- class toolbox_scs.detectors.DSSCBinner(proposal_nr, run_nr, binners={}, xgm_name='SCS_SA3', tim_names=['MCP1apd', 'MCP2apd', 'MCP3apd'], dssc_coords_stride=2)[source]¶
-
- add_binner(name, binner)[source]¶
Add additional binner to internal dictionary
- Parameters
name (str) – name of binner to be created
binner (xarray.DataArray) – An array that represents a map how the respective coordinate should be binned.
- Raises
ToolBoxValueError – raised in case the name does not correspond to a valid binner name. To be generalized.
- load_xgm()[source]¶
load xgm data and construct coordinate array according to corresponding dssc frame number.
- load_tim()[source]¶
load tim data and construct coordinate array according to corresponding dssc frame number.
- create_pulsemask(use_data='xgm', threshold=(0, np.inf))[source]¶
creates a mask for dssc frames according to measured xgm intensity. Once such a mask has been constructed, it will be used in the data reduction process to drop out-of-bounds pulses.
- get_info()[source]¶
Returns the expected shape of the binned dataset, in case binners have been defined.
- get_xgm_binned()[source]¶
Bin the xgm data according to the binners of the dssc data. The result can eventually be merged into the final dataset by the DSSCFormatter.
- Returns
xgm_data – xarray dataset containing the binned xgm data
- Return type
xarray.DataSet
- get_tim_binned()[source]¶
Bin the tim data according to the binners of the dssc data. The result can eventually be merged into the final dataset by the DSSCFormatter.
- Returns
tim_data – xarray dataset containing the binned tim data
- Return type
xarray.DataSet
- process_data(modules=[], filepath='./', chunksize=512, backend='loky', n_jobs=None, dark_image=None, xgm_normalization=False, normevery=1)[source]¶
Load and bin dssc data according to self.bins. No data is returned by this method. The condensed data is written to file by the worker processes directly.
- Parameters
modules (list of ints) – a list containing the module numbers that should be processed. If empty, all modules are processed.
filepath (str) – the path where the files containing the reduced data should be stored.
chunksize (int) – The number of trains that should be read in one iterative step.
backend (str) – joblib multiprocessing backend to be used. At the moment it can be any of joblib’s standard backends: ‘loky’ (default), ‘multiprocessing’, ‘threading’. Anything other than the default is experimental and not appropriately implemented in the dbdet member function ‘bin_data’.
n_jobs (int) – inversely proportional to the number of CPUs available for one job. Tasks within one job can use a maximum of n_CPU_tot/n_jobs CPUs. Note that when using the default backend there is no need to adjust this parameter with the current implementation.
dark_image (xarray.DataArray) – DataArray with dimensions compatible with the loaded dssc data. If given, it will be subtracted from the dssc data before the binning. The dark image needs to be of dimension module, trainId, pulse, x and y.
xgm_normalization (boolean) – if true, the dssc data is normalized by the xgm data before the binning.
normevery (int) – integer indicating that one out of every normevery DSSC frames will be normalized.
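Example
An illustrative sketch of a typical DSSCBinner workflow ending with process_data; the proposal, run number and bin definitions are placeholders, and the binner dictionary key is assumed to match the binned dimension.
>>> import numpy as np
>>> import toolbox_scs as tb
>>> import toolbox_scs.detectors as tbdet
>>> # group the 20 DSSC frames of each train into 'pumped'/'unpumped'
>>> bins_pulse = ['pumped', 'unpumped'] * 10
>>> binner_pulse = tb.create_dssc_bins('pulse',
...                                    np.linspace(0, 19, 20, dtype=int),
...                                    bins_pulse)
>>> bnr = tbdet.DSSCBinner(2212, 235, binners={'pulse': binner_pulse})
>>> bnr.get_info()                # expected shape of the binned dataset
>>> bnr.process_data(modules=[0], filepath='./')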
- class toolbox_scs.detectors.DSSCFormatter(filepath)[source]¶
- combine_files(filenames=[])[source]¶
Read the files given in filenames, and store the data in the class variable ‘data’. If no filenames are given, it tries to read the files stored in the class-internal variable ‘_filenames’.
- Parameters
filenames (list) – list of strings containing the names of the files to be combined.
- add_dataArray(groups=[])[source]¶
Reads additional xarray data from the first file given in the list of filenames. This assumes that all the files in the folder contain the same additional data. To be generalized.
- Parameters
groups (list) – list of strings with the names of the groups in the h5 file, containing additional xarray data.
- add_attributes(attributes={})[source]¶
Add additional information, such as run-type, as attributes to the formatted .h5 file.
- Parameters
attributes (dictionary) – a dictionary, containing information or data of any kind, that will be added to the formatted .h5 file as attributes.
- save_formatted_data(filename)[source]¶
Create a .h5 file containing the main dataset in the group called ‘data’. Additional groups will be created for the content of the variable ‘data_array’. Metadata about the file is added in the form of attributes.
- Parameters
filename (str) – the name of the file to be created
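Example
An illustrative sketch; the file path, attribute contents and output file name are placeholders.
>>> import toolbox_scs.detectors as tbdet
>>> formatter = tbdet.DSSCFormatter('./processed_runs/')
>>> formatter.combine_files()
>>> formatter.add_attributes({'run_type': 'pump-probe delay scan'})
>>> formatter.save_formatted_data('./processed_runs/run235.h5')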
- toolbox_scs.detectors.get_data_formatted(filenames=[], data_list=[])[source]¶
Combines the given data into one dataset. For any of extra_data’s data types, an xarray.Dataset is returned. The data is sorted along the ‘module’ dimension. The array dimensions have the order ‘trainId’, ‘pulse’, ‘module’, ‘x’, ‘y’. This order is required by the extra_geometry package.
- Parameters
filenames (list of str) – files to be combined as a list of names. Calls ‘_data_from_list’ to actually load the data.
data_list (list) – list containing the already loaded data
- Returns
data – A xarray.Dataset containing the combined data.
- Return type
xarray.Dataset
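Example
An illustrative sketch; the file names are placeholders for per-module files written by process_dssc_data.
>>> import toolbox_scs.detectors as tbdet
>>> data = tbdet.get_data_formatted(['run235_module0.h5', 'run235_module1.h5'])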
- toolbox_scs.detectors.load_xarray(fname, group='data', form='dataset')[source]¶
Load stored xarray Dataset. Comment: This function exists because of a problem with the standard netcdf engine that is malfunctioning due to related software installed in the exfel-python environment. May be dropped at some point.
- Parameters
fname (str) – filename as string
group (str) – the name of the xarray dataset (group in h5 file).
form (str) – specify whether the data to be loaded is a ‘dataset’ or an ‘array’.
- toolbox_scs.detectors.save_attributes_h5(fname, data={})[source]¶
Adding attributes to a hdf5 file. This function is intended to be used to attach metadata to a processed run.
- Parameters
fname (str) – filename as string
data (dictionary) – the data that should be added to the file in form of a dictionary.
- toolbox_scs.detectors.save_xarray(fname, data, group='data', mode='a')[source]¶
Store xarray Dataset in the specified location
- Parameters
data (xarray.DataSet) – The data to be stored
fname (str, int) – filename
overwrite (bool) – overwrite existing data
- Raises
ToolBoxFileError – Exception: File existed, but overwrite was set to False.
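Example
An illustrative sketch of a save/load round trip using save_xarray, save_attributes_h5 and load_xarray; the file name and dataset content are placeholders.
>>> import xarray as xr
>>> import toolbox_scs.detectors as tbdet
>>> ds = xr.Dataset({'data': ('trainId', [1, 2, 3])})
>>> tbdet.save_xarray('run235.h5', ds, group='data')
>>> tbdet.save_attributes_h5('run235.h5', {'run_type': 'static'})
>>> ds_loaded = tbdet.load_xarray('run235.h5', group='data', form='dataset')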
- toolbox_scs.detectors.create_dssc_bins(name, coordinates, bins)[source]¶
Creates a single entry for the dssc binner dictionary. The produced xarray data-array will later be used to perform grouping operations according to the given bins.
- Parameters
name (str) – name of the coordinate to be binned.
coordinates (numpy.ndarray) – the original coordinate values (1D)
bins (numpy.ndarray) – the bins according to which the corresponding dimension should be grouped.
- Returns
da – A pre-formatted xarray.DataArray relating the specified dimension with its bins.
- Return type
xarray.DataArray
Examples
>>> import toolbox_scs as tb
>>> run = tb.open_run(2212, 235, include='*DA*')

1.) binner along ‘pulse’ dimension. Group data into two bins.
>>> bins_pulse = ['pumped', 'unpumped'] * 10
>>> binner_pulse = tb.create_dssc_bins("pulse",
...                                    np.linspace(0, 19, 20, dtype=int),
...                                    bins_pulse)

2.) binner along ‘train’ dimension. Group data into bins corresponding to the positions of a delay stage for instance.
>>> bins_trainId = tb.get_array(run, 'PP800_PhaseShifter', 0.04)
>>> binner_train = tb.create_dssc_bins("trainId", run.trainIds,
...                                    bins_trainId.values)
- toolbox_scs.detectors.get_xgm_formatted(run_obj, xgm_name, dssc_frame_coords)[source]¶
Load the xgm data and define coordinates along the pulse dimension.
- Parameters
run_obj (extra_data.DataCollection) – DataCollection object providing access to the xgm data to be loaded
xgm_name (str) – valid mnemonic of a xgm source
dssc_frame_coords (int, list) – defines which dssc frames should be normalized using data from the xgm.
- Returns
xgm – xgm data with coordinate ‘pulse’.
- Return type
xarray.DataArray
- toolbox_scs.detectors.load_dssc_info(proposal, run_nr)[source]¶
Loads the first data file for DSSC module 0 (this is hardcoded) and returns the detector_info dictionary
- Parameters
proposal (str, int) – number of proposal
run_nr (str, int) – number of run
- Returns
info – {‘dims’: tuple, ‘frames_per_train’: int, ‘total_frames’: int}
- Return type
dictionary
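Example
An illustrative sketch; the proposal and run number are placeholders.
>>> import toolbox_scs.detectors as tbdet
>>> info = tbdet.load_dssc_info(2212, 235)
>>> info['frames_per_train']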
- toolbox_scs.detectors.load_mask(fname, dssc_mask)[source]¶
Load a DSSC mask file.
Copyright (c) 2019, Michael Schneider
Copyright (c) 2020, SCS-team
license: BSD 3-Clause License (see LICENSE_BSD for more info)
- Parameters
fname (str) – string of the filename of the mask file
- Return type
dssc_mask
- toolbox_scs.detectors.quickmask_DSSC_ASIC(poslist)[source]¶
Returns a mask for the given DSSC geometry with ASICs given in poslist blanked. poslist is a list of (module, row, column) tuples. Each module consists of 2 rows and 8 columns of individual ASICS.
Copyright (c) 2019, Michael Schneider
Copyright (c) 2020, SCS-team
license: BSD 3-Clause License (see LICENSE_BSD for more info)
- toolbox_scs.detectors.process_dssc_data(proposal, run_nr, module, chunksize, info, dssc_binners, path='./', pulsemask=None, dark_image=None, xgm_mnemonic='SCS_SA3', xgm_normalization=False, normevery=1)[source]¶
Collects and reduces DSSC data for a single module.
Copyright (c) 2020, SCS-team
- Parameters
proposal (int) – proposal number
run_nr (int) – run number
module (int) – DSSC module to process
chunksize (int) – number of trains to load simultaneously
info (dictionary) – dictionary containing keys ‘dims’, ‘frames_per_train’, ‘total_frames’, ‘trainIds’, ‘number_of_trains’.
dssc_binners (dictionary) – a dictionary containing binner objects created by the ToolBox member function “create_binner()”
path (str) – location in which the .h5 files, containing the binned data, should be stored.
pulsemask (numpy.ndarray) – array of booleans to be used to mask dssc data according to xgm data.
dark_image (xarray.DataArray) – an xarray DataArray with coordinates matching the loaded data. If dark_image is not None it will be subtracted from each individual dssc frame.
xgm_normalization (bool) – true if the data should be divided by the corresponding xgm value.
xgm_mnemonic (str) – Mnemonic of the xgm data to be used for normalization.
normevery (int) – One out of normevery dssc frames will be normalized.
- Returns
module_data – xarray datastructure containing data binned according to bins.
- Return type
xarray.Dataset
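Example
An illustrative sketch; the proposal, run number and binning are placeholders, and in practice the info and binner objects come from load_dssc_info and create_dssc_bins as documented above.
>>> import numpy as np
>>> import toolbox_scs as tb
>>> import toolbox_scs.detectors as tbdet
>>> info = tbdet.load_dssc_info(2212, 235)
>>> fpt = info['frames_per_train']
>>> bins_pulse = ['pumped', 'unpumped'] * (fpt // 2)
>>> binner_pulse = tb.create_dssc_bins('pulse', np.arange(fpt), bins_pulse)
>>> data = tbdet.process_dssc_data(2212, 235, module=0, chunksize=512,
...                                info=info,
...                                dssc_binners={'pulse': binner_pulse})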
- class toolbox_scs.detectors.hRIXS(proposalNB)[source]¶
The hRIXS analysis, especially curvature correction
The objects of this class contain the meta-information about the settings of the spectrometer, not the actual data, except possibly a dark image for background subtraction.
The actual data is loaded into xarrays, and stays there.
- PROPOSAL¶
the number of the proposal
- Type
int
- X_RANGE¶
the slice to take in the dispersive direction, in pixels. Defaults to the entire width.
- Type
slice
- Y_RANGE¶
the slice to take in the energy direction
- Type
slice
- THRESHOLD¶
pixel counts above which a hit candidate is assumed, for centroiding. Use None if you want to give it in standard deviations instead.
- Type
float
- STD_THRESHOLD¶
same as THRESHOLD, in standard deviations.
- DBL_THRESHOLD¶
threshold controlling whether a detected hit is considered to be a double hit.
- BINS¶
the number of bins used in centroiding
- Type
int
- CURVE_A, CURVE_B
the coefficients of the parabola for the curvature correction
- Type
float
- USE_DARK¶
whether to do dark subtraction. Is initially False, magically switches to True if a dark has been loaded, but may be reset.
- Type
bool
- ENERGY_INTERCEPT, ENERGY_SLOPE
The calibration from pixel to energy
- FIELDS¶
the fields to be loaded from the data. Add additional fields if so desired.
Example
proposal = 3145
h = hRIXS(proposal)
h.Y_RANGE = slice(700, 900)
h.CURVE_B = -3.695346575286939e-07
h.CURVE_A = 0.024084479232443695
h.ENERGY_SLOPE = 0.018387
h.ENERGY_INTERCEPT = 498.27
h.STD_THRESHOLD = 3.5
- aggregators¶
- from_run(runNB, proposal=None, extra_fields=())[source]¶
load a run
Load the run runNB. A thin wrapper around toolbox.load.
Example
data = h.from_run(145)  # load run 145

data1 = h.from_run(145)  # load run 145
data2 = h.from_run(155)  # load run 155
data = xarray.concat([data1, data2], 'trainId')  # combine both
- load_dark(runNB, proposal=None)[source]¶
load a dark run
Load the dark run runNB from proposal. The latter defaults to the current proposal. The dark is stored in this hRIXS object, and subsequent analyses use it for background subtraction.
Example
h.load_dark(166) # load dark run 166
- find_curvature(runNB, proposal=None, plot=True, args=None, **kwargs)[source]¶
find the curvature correction coefficients
The hRIXS has some aberrations which lead to the spectroscopic lines being curved on the detector. We approximate these aberrations with a parabola for later correction.
Load a run and determine the curvature. The curvature is set in self, and returned as a pair of floats.
- Parameters
runNB (int) – the run number to use
proposal (int) – the proposal to use, default to the current proposal
plot (bool) – whether to plot the found curvature onto the data
args (pair of float, optional) – a starting value to prime the fitting routine
Example
h.find_curvature(155) # use run 155 to fit the curvature
- centroid_one(image)[source]¶
find the position of photons with sub-pixel precision
A photon is supposed to have hit the detector if the intensity within a 2-by-2 square exceeds a threshold. In this case the position of the photon is calculated as the center-of-mass in a 4-by-4 square.
Return the list of x, y coordinate pairs, corrected by the curvature.
- centroid_two(image, energy)[source]¶
determine position of photon hits on detector
The algorithm is taken from the ESRF RIXS toolbox. The thresholds for determining photon hits are given by the incident photon energy.
The function returns arrays containing the single and double hits as x and y coordinates.
- centroid(data, bins=None, method='auto')[source]¶
calculate a spectrum by finding the centroid of individual photons
This takes the xarray.Dataset data and returns a copy of it, with a new xarray.DataArray named spectrum added, which contains the energy spectrum calculated for each hRIXS image.
The method argument switches between algorithms: the choices are “auto” and “manual”, which select how the thresholds for deciding whether there is a photon hit are determined. It changes whether centroid_one or centroid_two is used.
Example
h.centroid(data)            # find photons in all images of the run
data.spectrum[0, :].plot()  # plot the spectrum of the first image
- integrate(data)[source]¶
calculate a spectrum by integration
This takes the xarray data and returns a copy of it, with a new dataarray named spectrum added, which contains the energy spectrum calculated for each hRIXS image.
First the energy that corresponds to each pixel is calculated. Then all pixels within an energy range are summed, where the intensity of one pixel is distributed among the two energy ranges the pixel spans, proportionally to the overlap between the pixel and bin energy ranges.
The resulting data is normalized to one pixel, i.e. it corresponds to the average intensity that arrived on one pixel.
Example
h.integrate(data)           # create spectrum by summing pixels
data.spectrum[0, :].plot()  # plot the spectrum of the first image
- aggregate(ds, dim='trainId')[source]¶
aggregate (i.e. mostly sum) all data within one dataset
take all images in a dataset and aggregate them and their metadata. For images, spectra and normalizations that means adding them; for others (e.g. delays) adding would not make sense, so we treat them appropriately.
Example
h.centroid(data)          # create spectra from finding photons
agg = h.aggregate(data)   # sum all spectra
agg.spectrum.plot()       # plot the resulting spectrum

groups = data.groupby('hRIXS_index')   # group data by a variable
agg = groups.map(h.aggregate)          # sum corresponding spectra
agg.spectrum[0, :].plot()              # plot the spectrum for first value
- toolbox_scs.detectors.get_pes_params(run)[source]¶
Extract PES parameters for a given extra_data DataCollection. Parameters are gas, binding energy, voltages of the MPOD.
- Parameters
run (extra_data.DataCollection) – DataCollection containing the digitizer data
- Returns
params – dictionary of PES parameters
- Return type
dict
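Example
An illustrative sketch; the proposal and run number are placeholders.
>>> import toolbox_scs as tb
>>> import toolbox_scs.detectors as tbdet
>>> run = tb.open_run(2927, 100)
>>> params = tbdet.get_pes_params(run)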
- toolbox_scs.detectors.get_pes_tof(run, mnemonics=None, merge_with=None, start=31390, width=300, origin=None, width_ns=None, subtract_baseline=True, baseStart=None, baseWidth=80, sample_rate=2000000000.0)[source]¶
Extracts time-of-flight spectra from raw digitizer traces. The traces are either loaded via ToolBox mnemonics or taken from the optionally provided merge_with dataset. The spectra are aligned by pulse Id using the SASE 3 bunch pattern, and have time coordinates in nanoseconds.
- Parameters
run (extra_data.DataCollection) – DataCollection containing the digitizer data
mnemonics (str or list of str) – mnemonics for PES, e.g. “PES_W_raw” or [“PES_W_raw”, “PES_ENE_raw”]. If None and no merge_with dataset is provided, defaults to “PES_W_raw”.
merge_with (xarray Dataset) – If provided, the resulting Dataset will be merged with this one. The PES variables of merge_with (if any) will also be computed and merged.
start (int) – starting sample of the first spectrum in the raw trace.
width (int) – number of samples per spectrum.
origin (int) – sample of the raw trace that corresponds to time-of-flight origin. If None, origin is equal to start.
width_ns (float) – time window for one spectrum. If None, the time window is defined by width / sample rate.
subtract_baseline (bool) – If True, subtract the baseline defined by baseStart and baseWidth from each spectrum.
baseStart (int) – starting sample of the baseline.
baseWidth (int) – number of samples to average (starting from baseStart) for baseline calculation.
sample_rate (float) – sample rate of the digitizer.
- Returns
pes – Dataset containing the PES time-of-flight spectra (e.g. “PES_W_tof”), merged with the optionally provided merge_with dataset.
- Return type
xarray Dataset
Example
>>> import toolbox_scs as tb
>>> import toolbox_scs.detectors as tbdet
>>> run, ds = tb.load(2927, 100, "PES_W_raw")
>>> pes = tbdet.get_pes_tof(run, merge_with=ds)
- class toolbox_scs.detectors.Viking(proposalNB)[source]¶
The Viking analysis (spectrometer used in combination with Andor Newton camera)
The objects of this class contain the meta-information about the settings of the spectrometer, not the actual data, except possibly a dark image for background subtraction.
The actual data is loaded into xarrays via the method from_run(), and stays there.
- PROPOSAL¶
the number of the proposal
- Type
int
- X_RANGE¶
the slice to take in the non-dispersive direction, in pixels. Defaults to the entire width.
- Type
slice
- Y_RANGE¶
the slice to take in the energy dispersive direction
- Type
slice
- USE_DARK¶
whether to do dark subtraction. Is initially False, magically switches to True if a dark has been loaded, but may be reset.
- Type
bool
- ENERGY_CALIB¶
The 2nd degree polynomial coefficients for calibration from pixel to energy. Defaults to [0, 1, 0] (no calibration applied).
- Type
1D array (len=3)
- BL_POLY_DEG¶
the degree of the polynomial used for baseline subtraction. Defaults to 1.
- Type
int
- BL_SIGNAL_RANGE¶
the dispersive-axis range, defined by an interval [min, max], to avoid when fitting a polynomial for baseline subtraction. Multiple ranges can be provided in the form [[min1, max1], [min2, max2], …].
- Type
list
- FIELDS¶
the fields to be loaded from the data. Add additional fields if so desired.
- Type
list of str
Example
proposal = 2953
v = Viking(proposal)
v.X_RANGE = slice(0, 1900)
v.Y_RANGE = slice(38, 80)
v.ENERGY_CALIB = [1.47802667e-06, 2.30600328e-02, 5.15884589e+02]
v.BL_SIGNAL_RANGE = [500, 545]
- from_run(runNB, add_attrs=True)[source]¶
load a run
Load the run runNB. A thin wrapper around toolbox_scs.load.
- Parameters
runNB (int) – the run number
add_attrs (bool) – if True, adds the camera parameters as attributes to the dataset (see get_camera_params())
- Output
ds (xarray Dataset) – the dataset containing the camera images
Example
data = v.from_run(145)  # load run 145

data1 = v.from_run(145)  # load run 145
data2 = v.from_run(155)  # load run 155
data = xarray.concat([data1, data2], 'trainId')  # combine both
- integrate(data)[source]¶
This function calculates the mean over the non-dispersive dimension to create a spectrum. If the camera parameters are known, the spectrum is multiplied by the number of photoelectrons per ADC count. A new variable “spectrum” is added to the data.
- get_camera_gain(run)[source]¶
Get the preamp gain of the camera in the Viking spectrometer for a specified run.
- Parameters
run (extra_data DataCollection) – information on the run
- Output
gain (int) –
- e_per_counts(run, gain=None)[source]¶
Conversion factor from camera digital counts to photoelectrons per count. The values can be found in the camera datasheet (Andor Newton) but they have been slightly corrected for High Sensitivity mode after analysis of runs 1204, 1207 and 1208, proposal 2937.
- Parameters
run (extra_data DataCollection) – information on the run
gain (int) – the camera preamp gain
- Output
ret (float) – photoelectrons per count
- removePolyBaseline(data)[source]¶
Removes a polynomial baseline from a spectrum, assuming a fixed position for the signal.
- Parameters
data (xarray Dataset) – The Viking data containing the variable “spectrum”
- Output
data – the original dataset with the added variable “spectrum_nobl” containing the baseline subtracted spectra.
- xas(data, data_ref, thickness=1, plot=False, plot_errors=True, xas_ylim=(-1, 3))[source]¶
Given two independent datasets (one with sample and one reference), this calculates the average XAS spectrum (absorption coefficient), associated standard deviation and standard error. The absorption coefficient is defined as -log(It/I0)/thickness.
- Parameters
data (xarray Dataset) – the dataset containing the spectra with sample
data_ref (xarray Dataset) – the dataset containing the spectra without sample
thickness (float) – the thickness used for the calculation of the absorption coefficient
plot (bool) – If True, plot the resulting average spectra.
plot_errors (bool) – If True, adds the 95% confidence interval on the spectra.
xas_ylim (tuple or list of float) – the y limits for the XAS plot.
- Output
xas (xarray Dataset) – the dataset containing the computed XAS quantities: I0, It, absorptionCoef and their associated errors.
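Example
An illustrative sketch; the run numbers are placeholders, and it assumes the “spectrum” variable has already been extracted from both datasets (e.g. via integrate() and removePolyBaseline()).
data = v.from_run(145)       # run with sample
data_ref = v.from_run(146)   # reference run without sample
xas = v.xas(data, data_ref, thickness=1, plot=True)
xas['absorptionCoef'].plot()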
- calibrate(runList, plot=True)[source]¶
This routine determines the calibration coefficients to translate the camera pixels into energy in eV. The Viking spectrometer is calibrated using the beamline monochromator: runs with various monochromatized photon energy are recorded and their peak position on the detector are determined by Gaussian fitting. The energy vs. position data is then fitted to a second degree polynomial.
- Parameters
runList (list of int) – the list of runs containing the monochromatized spectra
plot (bool) – if True, the spectra, their Gaussian fits and the calibration curve are plotted.
- Output
energy_calib (np.array) – the calibration coefficients (2nd degree polynomial)
- toolbox_scs.detectors.calibrate_xgm(run, data, xgm='SCS', plot=False)[source]¶
Calculates the calibration factor F between the photon flux (slow signal) and the fast signal (pulse-resolved) of the sase 3 pulses. The calibrated fast signal is equal to the uncalibrated one multiplied by F.
- Parameters
run (extra_data.DataCollection) – DataCollection containing the digitizer data.
data (xarray Dataset) – dataset containing the pulse-resolved sase 3 signal, e.g. ‘SCS_SA3’
xgm (str) – one in {‘XTD10’, ‘SCS’}
plot (bool) – If True, shows a plot of the photon flux, averaged fast signal and calibrated fast signal.
- Returns
F – calibration factor F defined as: calibrated XGM [microJ] = F * fast XGM array (‘SCS_SA3’ or ‘XTD10_SA3’)
- Return type
float
Example
>>> import toolbox_scs as tb
>>> import toolbox_scs.detectors as tbdet
>>> run, data = tb.load(900074, 69, ['SCS_XGM'])
>>> ds = tbdet.get_xgm(run, merge_with=data)
>>> F = tbdet.calibrate_xgm(run, ds, plot=True)
>>> # Add calibrated XGM to the dataset:
>>> ds['SCS_SA3_uJ'] = F * ds['SCS_SA3']
- toolbox_scs.detectors.get_xgm(run, mnemonics=None, merge_with=None, indices=slice(0, None))[source]¶
Load and/or computes XGM data. Sources can be loaded on the fly via the mnemonics argument, or processed from an existing dataset (merge_with). The bunch pattern table is used to assign the pulse id coordinates if the number of pulses has changed during the run.
- Parameters
run (extra_data.DataCollection) – DataCollection containing the xgm data.
mnemonics (str or list of str) – mnemonics for XGM, e.g. “SCS_SA3” or [“XTD10_XGM”, “SCS_XGM”]. If None, defaults to “SCS_SA3” in case no merge_with dataset is provided.
merge_with (xarray Dataset) – If provided, the resulting Dataset will be merged with this one. The XGM variables of merge_with (if any) will also be computed and merged.
indices (slice, list, 1D array) – Pulse indices of the XGM array in case bunch pattern is missing.
- Returns
merged with Dataset merge_with if provided.
- Return type
xarray Dataset with pulse-resolved XGM variables aligned,
Example
>>> import toolbox_scs as tb
>>> run, ds = tb.load(2212, 213, 'SCS_SA3')
>>> ds['SCS_SA3']