toolbox_scs.detectors.hrixs

Module Contents

Classes

hRIXS

The hRIXS analysis, especially curvature correction

MaranaX

A spin-off of the hRIXS class: with parallelized centroiding

class toolbox_scs.detectors.hrixs.hRIXS(proposalNB, detector='MaranaX')[source]

The hRIXS analysis, especially curvature correction

The objects of this class contain the meta-information about the settings of the spectrometer, not the actual data, except possibly a dark image for background subtraction.

The actual data is loaded into `xarray`s, and stays there.

PROPOSAL

the number of the proposal

Type:

int

DETECTOR

the detector to be used. One of 'hRIXS_det' or 'MaranaX'; defaults to 'hRIXS_det' for backward compatibility.

Type:

str

X_RANGE

the slice to take in the dispersive direction, in pixels. Defaults to the entire width.

Type:

slice

Y_RANGE

the slice to take in the energy direction

Type:

slice

THRESHOLD

pixel counts above which a hit candidate is assumed, for centroiding. Use None if you want to give the threshold in standard deviations instead.

Type:

float

STD_THRESHOLD

same as THRESHOLD, in standard deviations.

DBL_THRESHOLD

threshold controlling whether a detected hit is considered a double hit.

BINS

the number of bins used in centroiding

Type:

int

CURVE_A, CURVE_B

the coefficients of the parabola for the curvature correction

Type:

float

USE_DARK

whether to do dark subtraction. Is initially False, magically switches to True if a dark has been loaded, but may be reset.

Type:

bool

ENERGY_INTERCEPT, ENERGY_SLOPE

The calibration from pixel to energy

FIELDS

the fields to be loaded from the data. Add additional fields if so desired.

Example

proposal = 3145
h = hRIXS(proposal)
h.Y_RANGE = slice(700, 900)
h.CURVE_B = -3.695346575286939e-07
h.CURVE_A = 0.024084479232443695
h.ENERGY_SLOPE = 0.018387
h.ENERGY_INTERCEPT = 498.27
h.STD_THRESHOLD = 3.5
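Given ENERGY_SLOPE and ENERGY_INTERCEPT, the pixel-to-energy mapping is linear. A minimal sketch, assuming the convention energy = ENERGY_INTERCEPT + ENERGY_SLOPE * pixel (inferred from the attribute names, not stated explicitly in this documentation):

```python
import numpy as np

# Assumed linear pixel-to-energy calibration (the exact convention used
# by hRIXS may differ): energy = ENERGY_INTERCEPT + ENERGY_SLOPE * pixel
ENERGY_INTERCEPT = 498.27  # eV, value from the class example
ENERGY_SLOPE = 0.018387    # eV per pixel, value from the class example

def pixel_to_energy(pixel):
    """Map pixel indices to energies (eV) with the linear calibration."""
    return ENERGY_INTERCEPT + ENERGY_SLOPE * np.asarray(pixel, dtype=float)

energies = pixel_to_energy([0, 100])
```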

DETECTOR_FIELDS[source]
aggregators[source]
set_params(**params)[source]
get_params(*params)[source]
from_run(runNB, proposal=None, extra_fields=(), drop_first=False, subset=None)[source]

load a run

Load the run runNB. A thin wrapper around toolbox.load.

Parameters:
  • drop_first (bool) – if True, the first image in the run is removed from the dataset.

Example

data = h.from_run(145)  # load run 145

data1 = h.from_run(145)  # load run 145
data2 = h.from_run(155)  # load run 155
data = xarray.concat([data1, data2], 'trainId')  # combine both

load_dark(runNB, proposal=None)[source]

load a dark run

Load the dark run runNB from proposal. The latter defaults to the current proposal. The dark is stored in this hRIXS object, and subsequent analyses use it for background subtraction.

Example

h.load_dark(166) # load dark run 166

find_curvature(runNB, proposal=None, plot=True, args=None, **kwargs)[source]

find the curvature correction coefficients

The hRIXS has some aberrations which lead to the spectroscopic lines being curved on the detector. We approximate these aberrations with a parabola for later correction.

Load a run and determine the curvature. The curvature is set in self, and returned as a pair of floats.

Parameters:
  • runNB (int) – the run number to use

  • proposal (int) – the proposal to use, default to the current proposal

  • plot (bool) – whether to plot the found curvature onto the data

  • args (pair of float, optional) – a starting value to prime the fitting routine

Example

h.find_curvature(155) # use run 155 to fit the curvature
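Once the coefficients are known, the correction amounts to shifting each detector column by the value of the fitted parabola. A simplified integer-pixel sketch (the actual hRIXS implementation may interpolate, and may use a different sign convention):

```python
import numpy as np

def correct_curvature(image, curve_a, curve_b):
    """Shift each detector column by shift(x) = curve_a*x + curve_b*x**2,
    so that a curved spectroscopic line becomes straight.  Integer-pixel
    shifts only, for illustration."""
    corrected = np.empty_like(image)
    ny, nx = image.shape
    for x in range(nx):
        shift = int(round(curve_a * x + curve_b * x ** 2))
        corrected[:, x] = np.roll(image[:, x], -shift)
    return corrected

# synthetic image with a parabolically curved line at row ~10
img = np.zeros((50, 40))
for x in range(40):
    img[10 + int(round(0.1 * x)), x] = 1.0

flat = correct_curvature(img, 0.1, 0.0)  # line is straight afterwards
```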

centroid_one(image)[source]

find the position of photons with sub-pixel precision

A photon is supposed to have hit the detector if the intensity within a 2-by-2 square exceeds a threshold. In this case the position of the photon is calculated as the center-of-mass in a 4-by-4 square.

Return the list of x, y coordinate pairs, corrected by the curvature.
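The two-step scheme described above can be sketched as follows (a simplified illustration, not the actual hRIXS code; curvature correction and double-hit handling are omitted):

```python
import numpy as np

def centroid_one_sketch(image, threshold):
    """Find photon hits: a 2-by-2 square whose summed intensity exceeds
    `threshold` is a hit candidate; its position is then refined as the
    centre of mass of the surrounding 4-by-4 square."""
    hits = []
    ny, nx = image.shape
    for y in range(1, ny - 2):
        for x in range(1, nx - 2):
            if image[y:y + 2, x:x + 2].sum() > threshold:
                patch = image[y - 1:y + 3, x - 1:x + 3]
                total = patch.sum()
                ys, xs = np.mgrid[y - 1:y + 3, x - 1:x + 3]
                hits.append(((xs * patch).sum() / total,
                             (ys * patch).sum() / total))
    return hits

img = np.zeros((10, 10))
img[4:6, 4:6] = [[1, 2], [2, 1]]  # one photon split over four pixels
hits = centroid_one_sketch(img, threshold=5)
```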

centroid_two(image, energy)[source]

determine position of photon hits on detector

The algorithm is taken from the ESRF RIXS toolbox. The thresholds for determining photon hits are given by the incident photon energy.

The function returns arrays containing the single and double hits as x and y coordinates

centroid(data, bins=None, method='auto')[source]

calculate a spectrum by finding the centroid of individual photons

This takes the xarray.Dataset data and returns a copy of it, with a new xarray.DataArray named spectrum added, which contains the energy spectrum calculated for each hRIXS image.

The method argument switches between algorithms: the choices "auto" and "manual" select how the photon-hit thresholds are determined, and hence whether centroid_one or centroid_two is used.

Example

h.centroid(data)  # find photons in all images of the run
data.spectrum[0, :].plot()  # plot the spectrum of the first image

parabola(x)[source]
integrate(data)[source]

calculate a spectrum by integration

This takes the xarray data and returns a copy of it, with a new dataarray named spectrum added, which contains the energy spectrum calculated for each hRIXS image.

First the energy that corresponds to each pixel is calculated. Then all pixels within an energy range are summed, where the intensity of one pixel is distributed among the two energy ranges the pixel spans, proportionally to the overlap between the pixel and bin energy ranges.

The resulting data is normalized per pixel, i.e. it gives the average intensity that arrived on one pixel.

Example

h.integrate(data)  # create spectrum by summing pixels
data.spectrum[0, :].plot()  # plot the spectrum of the first image
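The overlap-weighted binning can be sketched in plain NumPy (a simplified illustration assuming a linear pixel-to-energy calibration; the bin edges and normalization details here are assumptions, not the toolbox's exact code):

```python
import numpy as np

def integrate_sketch(counts, slope, intercept, bin_edges):
    """Distribute each pixel's counts over the energy bins it overlaps,
    proportionally to the overlap, then normalize each bin by the
    (fractional) number of pixels that contributed to it."""
    spectrum = np.zeros(len(bin_edges) - 1)
    npix = np.zeros(len(bin_edges) - 1)
    for i, c in enumerate(counts):
        lo = intercept + slope * (i - 0.5)  # energy span of pixel i
        hi = intercept + slope * (i + 0.5)
        for b in range(len(bin_edges) - 1):
            overlap = max(0.0, min(hi, bin_edges[b + 1]) - max(lo, bin_edges[b]))
            frac = overlap / (hi - lo)
            spectrum[b] += c * frac
            npix[b] += frac
    return spectrum / np.where(npix > 0, npix, 1.0)

# bins aligned with pixel boundaries: the spectrum equals the counts
spec = integrate_sketch([2.0, 4.0], slope=1.0, intercept=0.0,
                        bin_edges=np.array([-0.5, 0.5, 1.5]))

# bins not aligned with pixels: intensity is split by overlap
spec2 = integrate_sketch([2.0, 4.0], slope=1.0, intercept=0.0,
                         bin_edges=np.array([-0.5, 0.0, 1.5]))
```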

aggregator(da, dim)[source]
aggregate(ds, var=None, dim='trainId')[source]

aggregate (i.e. mostly sum) all data within one dataset

Take all images in a dataset and aggregate them together with their metadata. Images, spectra and normalizations are summed; for other variables (e.g. delays) summing would not make sense, so each is aggregated according to its own rule. The aggregation function for each variable is defined in the aggregators attribute of the class. If var is specified, the dataset is grouped by var prior to aggregation. A new variable "counts" gives the number of frames aggregated in each group.

Parameters:
  • ds (xarray Dataset) – the dataset containing RIXS data

  • var (string) – One of the variables in the dataset. If var is specified, the dataset is grouped by var prior to aggregation. This is useful for sorting e.g. a dataset that contains multiple delays.

  • dim (string) – the dimension over which to aggregate the data

Example

h.centroid(data)  # create spectra from finding photons
agg = h.aggregate(data)  # sum all spectra
agg.spectrum.plot()  # plot the resulting spectrum

agg2 = h.aggregate(data, 'hRIXS_delay')  # group data by delay
agg2.spectrum[0, :].plot()  # plot the spectrum for first value
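The grouping behaviour can be pictured with a plain-NumPy sketch (hypothetical names; the real method operates on the full xarray Dataset and uses the per-variable aggregators):

```python
import numpy as np

def aggregate_sketch(spectra, group_values=None):
    """Sum spectra over the first axis; if group_values is given, sum
    within each group instead and also return the number of frames
    ('counts') that went into each group."""
    spectra = np.asarray(spectra)
    if group_values is None:
        return spectra.sum(axis=0), len(spectra)
    group_values = np.asarray(group_values)
    groups = np.unique(group_values)
    summed = np.array([spectra[group_values == g].sum(axis=0)
                       for g in groups])
    counts = np.array([(group_values == g).sum() for g in groups])
    return summed, counts

spectra = np.ones((4, 3))                # four spectra of three bins each
delays = np.array([0.0, 0.0, 1.0, 1.0])  # e.g. one 'hRIXS_delay' per train

total, nframes = aggregate_sketch(spectra)             # sum everything
per_delay, counts = aggregate_sketch(spectra, delays)  # one sum per delay
```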

aggregate_ds(ds, dim='trainId')[source]
normalize(data, which='hRIXS_norm')[source]

Adds a 'normalized' variable to the dataset, defined as the ratio between 'spectrum' and 'which'.

Parameters:
  • data (xarray Dataset) – the dataset containing hRIXS data

  • which (string, default="hRIXS_norm") – one of the variables of the dataset, usually “hRIXS_norm” or “counts”

class toolbox_scs.detectors.hrixs.MaranaX(*args, **kwargs)[source]

Bases: hRIXS

A spin-off of the hRIXS class: with parallelized centroiding

NUM_MAX_HITS = 30[source]
centroid(data, bins=None, **kwargs)[source]

calculate a spectrum by finding the centroid of individual photons

This takes the xarray.Dataset data and returns a copy of it, with a new xarray.DataArray named spectrum added, which contains the energy spectrum calculated for each hRIXS image.

The method argument switches between algorithms: the choices "auto" and "manual" select how the photon-hit thresholds are determined, and hence whether centroid_one or centroid_two is used.

Example

h.centroid(data)  # find photons in all images of the run
data.spectrum[0, :].plot()  # plot the spectrum of the first image

_centroid_tb_map(_, index, data)[source]
_centroid_map(index, *, image, energy)[source]
_centroid_task(index, image, energy)[source]
_histogram_task(index, total, double, default_range)[source]
centroid_from_run(runNB, proposal=None, extra_fields=(), drop_first=False, subset=None, bins=None, return_hits=False)[source]

A combined function of from_run() and centroid(), which uses extra_data and pasha to avoid bulk loading of files.

_centroid_ed_map(_, index, trainId, data)[source]
static _mnemo_to_prop(mnemo)[source]
_is_mnemo_in_run(mnemo, run)[source]