# iris.PowderDiffractionDataset

class iris.PowderDiffractionDataset(*args, **kwargs)

Abstraction of HDF5 files for powder diffraction datasets.

__init__(*args, **kwargs)

Create a new file object.

See the h5py user guide for a detailed explanation of the options.

name
Name of the file on disk, or file-like object. Note: for files created with the ‘core’ driver, HDF5 still requires this be non-empty.
mode
- r – Read-only, file must exist
- r+ – Read/write, file must exist
- w – Create file, truncate if exists
- w- or x – Create file, fail if exists
- a – Read/write if exists, create otherwise (default)
driver
Name of the driver to use. Legal values are None (default, recommended), ‘core’, ‘sec2’, ‘stdio’, ‘mpio’.
libver
Library version bounds. Currently only the strings ‘earliest’ and ‘latest’ are defined.
userblock
Desired size of user block. Only allowed when creating a new file (mode w, w- or x).
swmr
Open the file in SWMR read mode. Only used when mode = ‘r’.
rdcc_nbytes
Total size of the raw data chunk cache in bytes. The default size is 1024**2 (1 MB) per dataset.
rdcc_w0
The chunk preemption policy for all datasets. This must be between 0 and 1 inclusive and indicates the weighting according to which chunks which have been fully read or written are penalized when determining which chunks to flush from cache. A value of 0 means fully read or written chunks are treated no differently than other chunks (the preemption is strictly LRU) while a value of 1 means fully read or written chunks are always preempted before other chunks. If your application only reads or writes data once, this can be safely set to 1. Otherwise, this should be set lower depending on how often you re-read or re-write the same data. The default value is 0.75.
rdcc_nslots
The number of chunk slots in the raw data chunk cache for this file. Increasing this value reduces the number of cache collisions, but slightly increases the memory used. Due to the hashing strategy, this value should ideally be a prime number. As a rule of thumb, this value should be at least 10 times the number of chunks that can fit in rdcc_nbytes bytes. For maximum performance, this value should be set approximately 100 times that number of chunks. The default value is 521.
track_order
Track dataset/group/attribute creation order under root group if True. If None use global default h5.get_config().track_order.
Additional keywords
Passed on to the selected file driver.
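The rule of thumb for rdcc_nslots above can be turned into a small helper. The sketch below is illustrative only — suggest_rdcc_nslots is a hypothetical name, not part of iris or h5py:

```python
def is_prime(n):
    """Trial-division primality check, sufficient for cache-slot sizes."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True


def suggest_rdcc_nslots(rdcc_nbytes, chunk_nbytes, factor=100):
    """Suggest a prime number of hash slots for the HDF5 chunk cache.

    Implements the rule of thumb stated above: roughly `factor` times
    the number of chunks that fit in `rdcc_nbytes`, rounded up to the
    next prime to suit the hashing strategy.
    """
    n_chunks = max(1, rdcc_nbytes // chunk_nbytes)
    candidate = factor * n_chunks
    while not is_prime(candidate):
        candidate += 1
    return candidate


# Example: 1 MiB cache with 64 KiB chunks -> 16 chunks -> prime >= 1600
print(suggest_rdcc_nslots(1024**2, 64 * 1024))  # → 1601
```

The computed value would then be passed as rdcc_nslots when opening the file.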
compute_angular_averages(center=None, normalized=False, angular_bounds=None, trim=True, callback=None)

Compute the angular averages.

Parameters:

- center (2-tuple or None, optional) – Center of the diffraction patterns. If None (default), the dataset attribute will be used instead.
- normalized (bool, optional) – If True, each pattern is normalized to its integral.
- angular_bounds (2-tuple of float or None, optional) – Angle bounds, specified in degrees. 0 degrees is defined as the positive x-axis. Angle bounds outside [0, 360) are mapped back to [0, 360).
- trim (bool, optional) – If True, leading/trailing zeros (possibly due to masks) are trimmed.
- callback (callable or None, optional) – Callable of a single argument, to which the calculation progress will be passed as an integer between 0 and 100.
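Two of these parameters are easy to illustrate without any diffraction data. The wrapping helper below is hypothetical — iris performs the mapping to [0, 360) internally, and its exact convention may differ from a plain modulo — while the callback shows the shape of function the method expects:

```python
def wrap_angular_bounds(angular_bounds):
    """Map angle bounds (in degrees) back into [0, 360).

    Hypothetical sketch of the wrapping described above, using a
    plain modulo; iris's internal convention may differ.
    """
    lo, hi = angular_bounds
    return lo % 360, hi % 360


def print_progress(percent):
    """Example callback: receives an integer between 0 and 100."""
    print(f"angular average: {percent}%")


print(wrap_angular_bounds((-90, 450)))  # → (270, 90)
```

A function like print_progress would be passed as the callback argument to drive a progress bar or log line.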
compute_baseline(first_stage, wavelet, max_iter=50, level=None, **kwargs)

Compute and save the baseline computed based on the dual-tree complex wavelet transform. All keyword arguments are passed to scikit-ued’s baseline_dt function.

Parameters:

- first_stage (str, optional) – Wavelet to use for the first stage. See skued.available_first_stage_filters() for a list of suitable arguments.
- wavelet (str, optional) – Wavelet to use in stages > 1. Must be appropriate for the dual-tree complex wavelet transform. See skued.available_dt_filters() for possible values.
- max_iter (int, optional) – Maximum number of iterations.
- level (int or None, optional) – If None (default), the maximum decomposition level is used.
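Wavelet-based baseline removal of this kind iteratively computes a low-frequency approximation of the signal and clips it from above by the data, so that sharp diffraction peaks are excluded from the baseline. The toy sketch below substitutes a moving average for the dual-tree complex wavelet approximation; it illustrates the role of max_iter in such an iterative scheme, and is not the scikit-ued algorithm:

```python
import numpy as np


def toy_baseline(signal, max_iter=50, window=15):
    """Illustrative iterative baseline estimate.

    Each iteration smooths the current estimate (moving average standing
    in for the wavelet approximation), then clips it from above by the
    original signal so that peaks do not pull the baseline upward.
    """
    kernel = np.ones(window) / window
    signal = np.asarray(signal, dtype=float)
    baseline = signal.copy()
    for _ in range(max_iter):
        smoothed = np.convolve(baseline, kernel, mode="same")
        baseline = np.minimum(smoothed, signal)
    return baseline


# Flat background with one sharp peak: the estimated baseline stays
# near the background level and dips below the peak.
x = np.linspace(0, 1, 200)
signal = 1.0 + np.exp(-((x - 0.5) ** 2) / 0.001)
bg = toy_baseline(signal)
```

More iterations let the estimate settle further under broad peaks, which is why max_iter trades accuracy against computation time.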
classmethod from_dataset(dataset, center, normalized=True, angular_bounds=None, callback=None)

Transform a DiffractionDataset instance into a PowderDiffractionDataset. This requires computing the azimuthal averages as well.

Parameters:

- dataset (DiffractionDataset) – DiffractionDataset instance.
- center (2-tuple or None, optional) – Center of the diffraction patterns. If None, the dataset attribute will be used instead.
- normalized (bool, optional) – If True, each pattern is normalized to its integral. Default is True.
- angular_bounds (2-tuple of float or None, optional) – Angle bounds, specified in degrees. 0 degrees is defined as the positive x-axis. Angle bounds outside [0, 360) are mapped back to [0, 360).
- callback (callable or None, optional) – Callable of a single argument, to which the calculation progress will be passed as an integer between 0 and 100.

Returns: powder (PowderDiffractionDataset)
powder_baseline(timedelay, out=None)

Returns the baseline data.

Parameters:

- timedelay (float or None) – Time-delay [ps]. If None, the entire block is returned.
- out (ndarray or None, optional) – If an out ndarray is provided, h5py can avoid making intermediate copies.

Returns: out (ndarray) – Baseline data. If a baseline hasn’t been computed yet, an array of zeros is returned.
powder_calq(crystal, peak_indices, miller_indices)

Determine the scattering vector q corresponding to a polycrystalline diffraction pattern and a known crystal structure.

For best results, multiple peaks (and corresponding Miller indices) should be provided; the absolute minimum is two.

Parameters:

- crystal (skued.Crystal instance) – Crystal that gave rise to the diffraction data.
- peak_indices (n-tuple of ints) – Array index locations of diffraction peaks. For best results, peaks should be well-separated. More than two peaks can be used.
- miller_indices (iterable of 3-tuples) – Miller indices associated with the peaks in peak_indices, e.g. indices = [(2,2,0), (-3,0,2)]. More than two peaks can be used.

Raises:

- ValueError – if the number of peak indices does not match the number of Miller indices.
- ValueError – if fewer than two peaks are provided.
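The underlying idea can be sketched without iris or scikit-ued, under the assumption that the scattering vector varies linearly with array index. Everything here is hypothetical illustration — a simple cubic lattice with a = 4 Å, invented peak positions — and the actual skued.powder_calq implementation may differ in detail:

```python
import numpy as np

# Hypothetical simple cubic lattice, a = 4 Å:
# |G_hkl| = (2π / a) · sqrt(h² + k² + l²)
a = 4.0
miller_indices = [(1, 0, 0), (2, 0, 0)]
q_known = [2 * np.pi / a * np.sqrt(h**2 + k**2 + l**2)
           for (h, k, l) in miller_indices]

# Suppose those reflections were measured at these array indices:
peak_indices = [120, 240]

# Assume q is linear in array index: q = slope·i + intercept.
# Two peaks determine the line exactly; with more peaks, polyfit
# performs a least-squares fit, which is why extra peaks help.
slope, intercept = np.polyfit(peak_indices, q_known, deg=1)

# Calibrated scattering-vector axis for the whole pattern:
q = slope * np.arange(512) + intercept
```

This also makes the two ValueError conditions intuitive: each peak must pair with one set of Miller indices, and a line cannot be determined from a single point.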
powder_data(timedelay, bgr=False, relative=False, out=None)

Returns the angular average data from scan-averaged diffraction patterns.

Parameters:

- timedelay (float or None) – Time-delay [ps]. If None, the entire block is returned.
- bgr (bool, optional) – If True, background is removed.
- relative (bool, optional) – If True, data is returned relative to the average of all diffraction patterns before photoexcitation.
- out (ndarray or None, optional) – If an out ndarray is provided, h5py can avoid making intermediate copies.

Returns: I (ndarray, shape (N,) or (N, M)) – Diffracted intensity [counts].
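One plausible reading of relative=True — the exact convention is not spelled out here, so treat this as an assumption — is subtraction of the pre-photoexcitation average from each pattern. With toy NumPy arrays:

```python
import numpy as np

# Toy stack of angular averages: 5 time points × 4 q-bins.
time_points = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # [ps]
data = np.array([
    [10.0, 20.0, 30.0, 40.0],
    [10.0, 20.0, 30.0, 40.0],
    [10.0, 20.0, 30.0, 40.0],
    [12.0, 18.0, 33.0, 40.0],
    [14.0, 16.0, 36.0, 40.0],
])

# Equilibrium pattern: average of all patterns before photoexcitation
# (t < 0), broadcast against the full stack.
eq = data[time_points < 0].mean(axis=0)

# Assumed convention for relative data: difference from equilibrium,
# so pre-photoexcitation rows are ~0 and later rows show the dynamics.
relative = data - eq
```

Patterns before time zero then read as flat zeros, and any photoinduced intensity change stands out directly.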
powder_eq(bgr=False)

Returns the average powder diffraction pattern for all times before photoexcitation. In case no data is available before photoexcitation, an array of zeros is returned.

Parameters:

- bgr (bool) – If True, background is removed.

Returns: I (ndarray, shape (N,)) – Diffracted intensity [counts].
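The described behaviour — average over all pre-photoexcitation patterns, falling back to zeros when none exist — can be sketched in a few lines. The function name and shape handling below are hypothetical, not iris internals:

```python
import numpy as np


def powder_eq_sketch(time_points, data):
    """Average of patterns recorded before photoexcitation (t < 0).

    Hypothetical re-implementation of the behaviour described above:
    if no data is available before photoexcitation, an array of zeros
    of the right shape is returned instead.
    """
    before = np.asarray(time_points) < 0
    data = np.asarray(data)
    if not before.any():
        return np.zeros(data.shape[1])
    return data[before].mean(axis=0)


# No negative time-delays -> zeros fallback.
print(powder_eq_sketch([0.0, 1.0], np.ones((2, 3))))  # → [0. 0. 0.]
```

The zeros fallback is why relative data and baselines remain well-defined even for scans that start at or after time zero.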
powder_time_series(rmin, rmax, bgr=False, relative=False, units='pixels', out=None)

Average intensity over time. Diffracted intensity is integrated over the closed interval [rmin, rmax].

Parameters:

- rmin (float) – Lower scattering vector bound [1/Å].
- rmax (float) – Upper scattering vector bound [1/Å].
- bgr (bool, optional) – If True, background is removed. Default is False.
- relative (bool, optional) – If True, data is returned relative to the average of all diffraction patterns before photoexcitation.
- units (str, {'pixels', 'momentum'}) – Units of the bounds rmin and rmax.
- out (ndarray or None, optional) – 1-D ndarray in which to store the results. The shape should be compatible with (len(time_points),).

Returns: out (ndarray, shape (N,)) – Average diffracted intensity over time.
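The reduction itself is straightforward to sketch with NumPy: select the q-bins inside [rmin, rmax] and collapse them to one number per time point. Whether iris integrates or averages over the interval is not specified here; the sketch below averages, and all arrays are invented toy data:

```python
import numpy as np

# Toy data: a Gaussian peak at q = 2.0 Å⁻¹ whose amplitude decays
# over three time points (simulating a photoinduced intensity drop).
q = np.linspace(0.5, 5.0, 256)  # scattering vector [1/Å]
I = np.stack([np.exp(-((q - 2.0) ** 2) / 0.01) * s
              for s in (1.0, 0.8, 0.6)])

# Closed interval [rmin, rmax] around the peak of interest.
rmin, rmax = 1.8, 2.2
mask = (q >= rmin) & (q <= rmax)

# One value per time point: mean intensity over the selected bins.
time_series = I[:, mask].mean(axis=1)
```

The resulting 1-D array has one entry per time point, matching the shape-(N,) return value documented above.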
px_radius

scattering_vector
Array of scattering vector norms $|q|$ [1/Å].
shift_time_zero(*args, **kwargs)