Reference/API

Opening raw datasets

To open any raw dataset, use the open_raw() function.

iris.open_raw(path)

Open a raw data item, guessing the appropriate AbstractRawDataset subclass based on available plug-ins.

This function can also be used as a context manager:

with open_raw('.') as dset:
    ...
Parameters

path (path-like) – Path to the file/folder containing the raw data.

Returns

raw – The raw dataset. If no format could be guessed, a RuntimeError is raised.

Return type

AbstractRawDataset instance

Raises

RuntimeError – if the data format could not be guessed.
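The plug-in guessing behaviour can be pictured with a small sketch. This is not the library's actual implementation; the class names and the handles() hook below are hypothetical, but the control flow (try each registered plug-in, raise RuntimeError if none matches) mirrors the description above.

```python
# Illustrative sketch of plug-in-based format guessing, as described for
# open_raw(). FormatA, FormatB and handles() are hypothetical names, not
# part of the iris API.
class FormatA:
    @staticmethod
    def handles(path):
        return str(path).endswith(".fmta")

class FormatB:
    @staticmethod
    def handles(path):
        return str(path).endswith(".fmtb")

PLUGINS = [FormatA, FormatB]

def guess_format(path):
    """Return the first plug-in class that recognizes `path`."""
    for cls in PLUGINS:
        if cls.handles(path):
            return cls
    # Mirrors open_raw(): no matching format is a RuntimeError
    raise RuntimeError(f"Data format of {path!r} could not be guessed")
```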

Raw Dataset Classes

class iris.AbstractRawDataset(source=None, metadata=None)

Abstract base class for ultrafast electron diffraction datasets. AbstractRawDataset allows for enforced metadata types and values, as well as a standard interface. For example, AbstractRawDataset implements the context manager interface.

Minimally, the following method must be implemented in subclasses:

  • raw_data

It is suggested to also implement the following magic methods:

  • __init__

  • __exit__

Optionally, the display_name class attribute can be specified.

For better results or performance during reduction, the following methods can be specialized:

  • reduced

A list of concrete implementations of AbstractRawDataset is available in the implementations class attribute. Subclasses are automatically added.

The call signature must remain the same for all overridden methods.

Parameters
  • source (object) – Data source, for example a directory or external file.

  • metadata (dict or None, optional) – Metadata and experimental parameters. Dictionary keys that are not valid metadata are ignored. Metadata can also be set directly later.

Raises

TypeError – if an item from the metadata has an unexpected type.
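A subclass, then, minimally implements raw_data. The base class below is a simplified stand-in for illustration only (real subclasses would inherit from iris.AbstractRawDataset, which provides the context manager interface and metadata handling); the in-memory storage is hypothetical.

```python
# Sketch of the subclass shape described above. RawDatasetBase is a
# stand-in for iris.AbstractRawDataset, not the real base class.
from abc import ABC, abstractmethod
import numpy as np

class RawDatasetBase(ABC):
    """Simplified stand-in for iris.AbstractRawDataset."""
    def __init__(self, source=None, metadata=None):
        self.source = source
    # Context manager interface, as AbstractRawDataset provides
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        return False
    @abstractmethod
    def raw_data(self, timedelay, scan=1, **kwargs):
        """Return the diffraction pattern at `timedelay` and `scan`."""

class InMemoryDataset(RawDatasetBase):
    display_name = "In-memory example"
    def __init__(self, frames, source=None, metadata=None):
        super().__init__(source, metadata)
        self._frames = frames  # {(timedelay, scan): 2D ndarray}
    def raw_data(self, timedelay, scan=1, **kwargs):
        try:
            return self._frames[(timedelay, scan)]
        except KeyError:
            raise ValueError(f"No frame at timedelay={timedelay}, scan={scan}")
```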

iterscan(scan, **kwargs)

Generator function of diffraction patterns as part of a scan, in time-delay order.

Parameters
  • scan (int) – Scan from which to yield the data.

  • kwargs – Keyword-arguments are passed to the raw_data method.

Yields

data (numpy.ndarray, ndim 2)

See also

itertime

generator of diffraction patterns for a single time-delay, in scan order

itertime(timedelay, exclude_scans=None, **kwargs)

Generator function of diffraction patterns of the same time-delay, in scan order.

Parameters
  • timedelay (float) – Time-delay from which to yield the data.

  • exclude_scans (iterable or None, optional) – These scans will be skipped.

  • kwargs – Keyword-arguments are passed to the raw_data method.

Yields

data (numpy.ndarray, ndim 2)

See also

iterscan

generator of diffraction patterns for a single scan, in time-delay order
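The two iteration orders can be sketched with plain generators. The frames mapping below is a hypothetical in-memory store, not part of the iris API; the point is only the ordering: iterscan fixes the scan and walks time-delays, while itertime fixes the time-delay and walks scans.

```python
import numpy as np

# Hypothetical in-memory store: {(timedelay, scan): 2D pattern}
frames = {(t, s): np.full((2, 2), 10 * s + t)
          for t in (0.0, 1.0, 2.0) for s in (1, 2)}

def iterscan(scan):
    """Patterns for one scan, in time-delay order."""
    for t in sorted({t for (t, _) in frames}):
        yield frames[(t, scan)]

def itertime(timedelay, exclude_scans=None):
    """Patterns for one time-delay, in scan order, skipping excluded scans."""
    skip = set(exclude_scans or ())
    for s in sorted({s for (_, s) in frames}):
        if s not in skip:
            yield frames[(timedelay, s)]
```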

property metadata

Experimental parameters and dataset metadata as a dictionary.

abstract raw_data(timedelay, scan=1, **kwargs)

Returns an array of the image at a given time-delay and scan.

Parameters
  • timedelay (float) – Acquisition time-delay.

  • scan (int, optional) – Scan number. Default is 1.

  • kwargs – Keyword-arguments are ignored.

Returns

arr

Return type

numpy.ndarray, ndim 2

Raises
  • ValueError – if timedelay or scan are invalid / out of bounds.

  • IOError – if the filename is not associated with an image or does not exist.

reduced(exclude_scans=None, align=True, normalize=True, mask=None, processes=1, dtype=<class 'float'>)

Generator of reduced dataset. The reduced diffraction patterns are generated in order of time-delay.

This particular implementation normalizes diffracted intensity of pictures acquired at the same time-delay while rejecting masked pixels.

Parameters
  • exclude_scans (iterable or None, optional) – These scans will be skipped when reducing the dataset.

  • align (bool, optional) – If True (default), raw diffraction patterns will be aligned using the masked normalized cross-correlation approach. See skued.align for more information.

  • normalize (bool, optional) – If True (default), equivalent diffraction pictures (e.g. same time-delay, different scans) are normalized to the same diffracted intensity.

  • mask (array-like of bool or None, optional) – If not None, pixels where mask = True are ignored for certain operations (e.g. alignment).

  • processes (int or None, optional) – Number of processes to spawn for processing.

  • dtype (numpy.dtype or None, optional) – Reduced patterns will be cast to dtype.

Yields

pattern (numpy.ndarray, ndim 2)
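The normalization step performed during reduction can be sketched as follows: scale each equivalent pattern (same time-delay, different scans) so that its total valid-pixel intensity matches the group mean, then average. This is an illustrative numpy stand-in for the semantics described above, not the library's implementation; reduce_group is a hypothetical name.

```python
import numpy as np

def reduce_group(patterns, mask=None):
    """Normalize equivalent patterns to the same diffracted intensity
    (ignoring pixels where mask is True), then average them."""
    patterns = np.asarray(patterns, dtype=float)
    valid = np.ones(patterns.shape[1:], dtype=bool) if mask is None else ~mask
    totals = patterns[:, valid].sum(axis=1)   # total intensity per pattern
    scales = totals.mean() / totals           # bring each to the group mean
    return (patterns * scales[:, None, None]).mean(axis=0)
```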

update_metadata(metadata)

Update metadata from a dictionary. Only appropriate keys are used; irrelevant keys are ignored.

Parameters

metadata (dictionary) – See AbstractRawDataset.valid_metadata for valid keys.

Diffraction Dataset Classes

DiffractionDataset

class iris.DiffractionDataset(*args, **kwargs)

Bases: File

Abstraction of an HDF5 file to represent diffraction datasets.

Create a new file object.

See the h5py user guide for a detailed explanation of the options.

name

Name of the file on disk, or file-like object. Note: for files created with the ‘core’ driver, HDF5 still requires this be non-empty.

mode

  • r – Readonly, file must exist (default)

  • r+ – Read/write, file must exist

  • w – Create file, truncate if exists

  • w- or x – Create file, fail if exists

  • a – Read/write if exists, create otherwise

driver

Name of the driver to use. Legal values are None (default, recommended), ‘core’, ‘sec2’, ‘direct’, ‘stdio’, ‘mpio’, ‘ros3’.

libver

Library version bounds. Supported values: ‘earliest’, ‘v108’, ‘v110’, ‘v112’ and ‘latest’. The ‘v108’, ‘v110’ and ‘v112’ options can only be specified with the HDF5 1.10.2 library or later.

userblock_size

Desired size of user block. Only allowed when creating a new file (mode w, w- or x).

swmr

Open the file in SWMR read mode. Only used when mode = ‘r’.

rdcc_nbytes

Total size of the raw data chunk cache in bytes. The default size is 1024**2 (1 MB) per dataset.

rdcc_w0

The chunk preemption policy for all datasets. This must be between 0 and 1 inclusive and indicates the weighting according to which chunks which have been fully read or written are penalized when determining which chunks to flush from cache. A value of 0 means fully read or written chunks are treated no differently than other chunks (the preemption is strictly LRU) while a value of 1 means fully read or written chunks are always preempted before other chunks. If your application only reads or writes data once, this can be safely set to 1. Otherwise, this should be set lower depending on how often you re-read or re-write the same data. The default value is 0.75.

rdcc_nslots

The number of chunk slots in the raw data chunk cache for this file. Increasing this value reduces the number of cache collisions, but slightly increases the memory used. Due to the hashing strategy, this value should ideally be a prime number. As a rule of thumb, this value should be at least 10 times the number of chunks that can fit in rdcc_nbytes bytes. For maximum performance, this value should be set approximately 100 times that number of chunks. The default value is 521.

track_order

Track dataset/group/attribute creation order under root group if True. If None use global default h5.get_config().track_order.

fs_strategy

The file space handling strategy to be used. Only allowed when creating a new file (mode w, w- or x). Defined as:

  • “fsm” – FSM, Aggregators, VFD

  • “page” – Paged FSM, VFD

  • “aggregate” – Aggregators, VFD

  • “none” – VFD

If None, use HDF5 defaults.

fs_page_size

File space page size in bytes. Only used when fs_strategy=”page”. If None use the HDF5 default (4096 bytes).

fs_persist

A boolean value to indicate whether free space should be persistent or not. Only allowed when creating a new file. The default value is False.

fs_threshold

The smallest free-space section size that the free space manager will track. Only allowed when creating a new file. The default value is 1.

page_buf_size

Page buffer size in bytes. Only allowed for HDF5 files created with fs_strategy=”page”. Must be a power of two value and greater or equal than the file space page size when creating the file. It is not used by default.

min_meta_keep

Minimum percentage of metadata to keep in the page buffer before allowing pages containing metadata to be evicted. Applicable only if page_buf_size is set. Default value is zero.

min_raw_keep

Minimum percentage of raw data to keep in the page buffer before allowing pages containing raw data to be evicted. Applicable only if page_buf_size is set. Default value is zero.

locking

The file locking behavior. Defined as:

  • False (or “false”) – Disable file locking

  • True (or “true”) – Enable file locking

  • “best-effort” – Enable file locking but ignore some errors

  • None – Use HDF5 defaults

Warning: The HDF5_USE_FILE_LOCKING environment variable can override this parameter. Only available with HDF5 >= 1.12.1 or 1.10.x >= 1.10.7.

alignment_threshold

Together with alignment_interval, this property ensures that any file object greater than or equal in size to the alignment threshold (in bytes) will be aligned on an address which is a multiple of alignment interval.

alignment_interval

This property should be used in conjunction with alignment_threshold. See the description above. For more details, see https://portal.hdfgroup.org/display/HDF5/H5P_SET_ALIGNMENT

Additional keywords

Passed on to the selected file driver.

property compression_params

Compression options in the form of a dictionary

diff_apply(func, callback=None, processes=1)

Apply a function to each diffraction pattern, possibly in parallel. The diffraction patterns will be modified in-place.

Warning

This is an irreversible in-place operation.

New in version 5.0.3.

Parameters
  • func (callable) – Function that takes in an array (diffraction pattern) and returns an array of the exact same shape, with the same data-type.

  • callback (callable or None, optional) – Callable that takes an int between 0 and 99. This can be used for progress update.

  • processes (int or None, optional) –

    Number of parallel processes to use. If None, all available processes will be used. In case Single Writer Multiple Reader mode is not available, processes is ignored.

    New in version 5.0.6.

Raises
  • TypeError – if func is not a proper callable

  • PermissionError – if the dataset has not been opened with write access.

diff_data(timedelay, relative=False, out=None)

Returns diffraction data at a specific time-delay.

Parameters
  • timedelay (float or None) – Time-delay [ps]. If None, the entire block is returned.

  • relative (bool, optional) – If True, data is returned relative to the average of all diffraction patterns before photoexcitation.

  • out (ndarray or None, optional) – If an out ndarray is provided, h5py can avoid making intermediate copies.

Returns

arr – Time-delay data. If out is provided, arr is a view into out.

Return type

ndarray

Raises

ValueError – If timedelay does not exist.

diff_eq()

Returns the averaged diffraction pattern for all times before photoexcitation. In case no data is available before photoexcitation, an array of zeros is returned.

If the dataset was opened with write access, the result of this function is cached to file. It will be recomputed as needed.

Time-zero can be adjusted using the shift_time_zero method.

Returns

I – Diffracted intensity [counts]

Return type

ndarray, ndim 2
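The relationship between diff_eq() and relative=True in diff_data() amounts to: average all patterns acquired before time-zero, and subtract. A minimal numpy sketch of that semantics (hypothetical variable names; not iris code):

```python
import numpy as np

time_points = np.array([-2.0, -1.0, 0.5, 1.0])              # picoseconds
stack = np.stack([np.full((2, 2), v) for v in (1.0, 3.0, 5.0, 7.0)])

# diff_eq(): average before photoexcitation, zeros if no such data
before = stack[time_points < 0]
diff_eq = before.mean(axis=0) if len(before) else np.zeros(stack.shape[1:])

# diff_data(t, relative=True): pattern minus the equilibrium pattern
relative = stack[2] - diff_eq
```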

classmethod from_collection(patterns, filename, time_points, metadata, valid_mask=None, dtype=None, ckwargs=None, callback=None, **kwargs)

Create a DiffractionDataset from a collection of diffraction patterns and metadata.

Parameters
  • patterns (iterable of ndarray or ndarray) – Diffraction patterns. These should be in the same order as time_points. Note that the iterable can be a generator, in which case it will be consumed.

  • filename (str or path-like) – Path to the assembled DiffractionDataset.

  • time_points (array_like, shape (N,)) – Time-points of the diffraction patterns, in picoseconds.

  • metadata (dict) – Valid keys are contained in DiffractionDataset.valid_metadata.

  • valid_mask (ndarray or None, optional) – Boolean array that evaluates to True on valid pixels. This information is useful in cases where a beamblock is used.

  • dtype (dtype or None, optional) – Patterns will be cast to dtype. If None (default), dtype will be set to the same data-type as the first pattern in patterns.

  • ckwargs (dict, optional) – HDF5 compression keyword arguments. Refer to h5py’s documentation for details. Default is to use the lzf compression pipeline.

  • callback (callable or None, optional) – Callable that takes an int between 0 and 99. This can be used for progress update when patterns is a generator and involves large computations.

  • kwargs – Keywords are passed to h5py.File constructor. Default is file-mode ‘x’, which raises error if file already exists. Default libver is ‘latest’.

Returns

dataset

Return type

DiffractionDataset

classmethod from_raw(raw, filename, exclude_scans=None, valid_mask=None, processes=1, callback=None, align=True, normalize=True, ckwargs=None, dtype=None, **kwargs)

Create a DiffractionDataset from a subclass of AbstractRawDataset.

Parameters
  • raw (AbstractRawDataset instance) – Raw dataset instance.

  • filename (str or path-like) – Path to the assembled DiffractionDataset.

  • exclude_scans (iterable of ints or None, optional) – Scans to exclude from the processing. Default is to include all scans.

  • valid_mask (ndarray or None, optional) – Boolean array that evaluates to True on valid pixels. This information is useful in cases where a beamblock is used.

  • processes (int or None, optional) – Number of processes to spawn for processing. Default is the number of available CPU cores.

  • callback (callable or None, optional) – Callable that takes an int between 0 and 99. This can be used for progress update.

  • align (bool, optional) – If True (default), raw images will be aligned on a per-scan basis.

  • normalize (bool, optional) – If True, images within a scan are normalized to the same integrated diffracted intensity.

  • ckwargs (dict or None, optional) – HDF5 compression keyword arguments. Refer to h5py’s documentation for details.

  • dtype (dtype or None, optional) – Patterns will be cast to dtype. If None (default), dtype will be set to the same data-type as the first pattern in patterns.

  • kwargs – Keywords are passed to h5py.File constructor. Default is file-mode ‘x’, which raises error if file already exists.

Returns

dataset

Return type

DiffractionDataset

See also

open_raw

open raw datasets by guessing the appropriate format based on available plug-ins.

Raises

IOError – If the filename is already associated with a file.

property invalid_mask

Array that evaluates to True on invalid pixels (i.e. on beam-block, hot pixels, etc.)

mask_apply(func)

Modify the diffraction pattern mask m to func(m).

Parameters

func (callable) – Function that takes in the diffraction pattern mask, which evaluates to True on valid pixels, and returns an array of the exact same shape, with the same data-type.

Raises
  • TypeError – if func is not a proper callable, or if the result of func(m) is not boolean.

  • ValueError – if the result of func(m) does not have the right shape.

  • PermissionError – if the dataset has not been opened with write access.

property metadata

Dictionary of the dataset’s metadata. Dictionary is sorted alphabetically by keys.

property resolution

Resolution of diffraction patterns (px, px)

shift_time_zero(shift)

Insert a shift in time points. Reset the shift by setting it to zero. Shifts are not cumulative: calling shift_time_zero(20) twice will result in a shift of 20 ps, not 40 ps.

Parameters

shift (float) – Shift [ps]. A positive value of shift will move all time-points forward in time, whereas a negative value of shift will move all time-points backwards in time.

Raises

PermissionError – if the dataset has not been opened with write access.
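The non-cumulative behaviour of shift_time_zero can be pictured with a toy class (hypothetical, not iris code): the shift is always applied to the original time axis, so repeated calls replace rather than add the shift, and a shift of zero restores the original time-points.

```python
import numpy as np

class TimeAxis:
    """Toy model of the shift_time_zero() semantics described above."""
    def __init__(self, time_points):
        self._original = np.asarray(time_points, dtype=float)
        self.time_points = self._original.copy()

    def shift_time_zero(self, shift):
        # Non-cumulative: always applied relative to the original axis
        self.time_points = self._original + shift
```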

symmetrize(mod, center=None, kernel_size=None, callback=None, processes=1)

Symmetrize diffraction images based on n-fold rotational symmetry.

Warning

This is an irreversible in-place operation.

Parameters
  • mod (int) – Fold symmetry number.

  • center (array-like, shape (2,) or None) – Coordinates of the center (in pixels). If None (default), the center will be automatically determined.

  • kernel_size (float or None, optional) – If not None, every diffraction pattern will be smoothed with a Gaussian kernel. kernel_size is the standard deviation of the Gaussian kernel in units of pixels.

  • callback (callable or None, optional) – Callable that takes an int between 0 and 99. This can be used for progress update.

  • processes (int or None, optional) –

    Number of parallel processes to use. If None, all available processes will be used. In case Single Writer Multiple Reader mode is not available, processes is ignored.

    New in version 5.0.6.

See also

diff_apply

apply an operation to each diffraction pattern one-by-one

time_series(rect, relative=False, out=None)

Integrated intensity over time inside bounds.

Parameters
  • rect (4-tuple of ints) – Bounds of the region in px. Bounds are specified as [row1, row2, col1, col2]

  • relative (bool, optional) – If True, data is returned relative to the average of all diffraction patterns before photoexcitation.

  • out (ndarray or None, optional) – 1-D ndarray in which to store the results. The shape should be compatible with (len(time_points),)

Returns

out

Return type

ndarray, ndim 1

See also

time_series_selection

intensity integration using arbitrary selections.

time_series_selection(selection, relative=False, out=None)

Integrated intensity over time according to some arbitrary selection. This is a generalization of the DiffractionDataset.time_series method; time_series is much faster, but limited to rectangular selections.

New in version 5.2.1.

Parameters
  • selection (skued.Selection or ndarray of bool, shape (N,M)) – A selection mask that dictates the regions to integrate in each scattering pattern. The mask must evaluate to True in the regions to integrate and must have the same shape as a single scattering pattern (i.e. two-dimensional). If selection is a boolean array, an ArbitrarySelection will be used, and performance may be degraded.

  • relative (bool, optional) – If True, data is returned relative to the average of all diffraction patterns before photoexcitation.

  • out (ndarray or None, optional) – 1-D ndarray in which to store the results. The shape should be compatible with (len(time_points),)

Returns

out

Return type

ndarray, ndim 1

Raises

ValueError – if the shape of selection does not match the scattering patterns.

See also

time_series

integrated intensity in a rectangle.
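Both time-series methods reduce to summing pixels inside a region for every time-point. A numpy sketch of the two flavours (assuming, for illustration, half-open bounds [row1, row2) and [col1, col2) for rect; the library's exact bound convention should be checked against its source):

```python
import numpy as np

def time_series(stack, rect):
    """Integrated intensity inside a rectangle for each time-point."""
    r1, r2, c1, c2 = rect
    return stack[:, r1:r2, c1:c2].sum(axis=(1, 2))

def time_series_selection(stack, selection):
    """Same, with an arbitrary boolean mask (True = integrate)."""
    if selection.shape != stack.shape[1:]:
        raise ValueError("selection shape does not match the patterns")
    return stack[:, selection].sum(axis=1)
```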

property valid_mask

Array that evaluates to True on valid pixels (i.e. not on beam-block, not hot pixels, etc.)

PowderDiffractionDataset

class iris.PowderDiffractionDataset(*args, **kwargs)

Bases: DiffractionDataset

Abstraction of HDF5 files for powder diffraction datasets.

Create a new file object. The constructor options are identical to those of DiffractionDataset above; see the h5py user guide for a detailed explanation.

compute_angular_averages(center=None, normalized=False, angular_bounds=None, trim=True, callback=None)

Compute the angular averages.

Parameters
  • center (2-tuple or None, optional) – Center of the diffraction patterns. If None (default), the dataset attribute will be used instead. If that is not possible, the center will be automatically determined. See DiffractionDataset.autocenter().

  • normalized (bool, optional) – If True, each pattern is normalized to its integral.

  • angular_bounds (2-tuple of float or None, optional) – Angle bounds are specified in degrees. 0 degrees is defined as the positive x-axis. Angle bounds outside [0, 360) are mapped back to [0, 360).

  • trim (bool, optional) – If True, leading/trailing zeros - possibly due to masks - are trimmed.

  • callback (callable or None, optional) – Callable of a single argument, to which the calculation progress will be passed as an integer between 0 and 100.

Raises

IOError – If the filename is already associated with a file.
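An azimuthal (angular) average of the kind computed here bins pixels by their integer distance from the center and averages each bin. An illustrative numpy sketch, without masking or angular bounds (angular_average is a hypothetical helper name, not the library's code):

```python
import numpy as np

def angular_average(image, center):
    """Mean intensity as a function of integer pixel radius from `center`."""
    rows, cols = np.indices(image.shape)
    radius = np.hypot(rows - center[0], cols - center[1]).astype(int)
    sums = np.bincount(radius.ravel(), weights=image.ravel())
    counts = np.bincount(radius.ravel())
    return sums / counts
```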

compute_baseline(first_stage, wavelet, max_iter=50, level=None, **kwargs)

Compute and save the baseline, based on the dual-tree complex wavelet transform. All keyword arguments are passed to scikit-ued’s baseline_dt function.

Parameters
  • first_stage (str) – Wavelet to use for the first stage. See skued.available_first_stage_filters() for a list of suitable arguments.

  • wavelet (str, optional) – Wavelet to use in stages > 1. Must be appropriate for the dual-tree complex wavelet transform. See skued.available_dt_filters() for possible values.

  • max_iter (int, optional) – Maximum number of iterations. Default is 50.

  • level (int or None, optional) – If None (default), maximum level is used.

Raises

IOError – If the filename is already associated with a file.

classmethod from_dataset(dataset, center=None, normalized=True, angular_bounds=None, callback=None)

Transform a DiffractionDataset instance into a PowderDiffractionDataset. This requires computing the azimuthal averages as well.

Parameters
  • dataset (DiffractionDataset) – DiffractionDataset instance.

  • center (2-tuple or None, optional) – Center of the diffraction patterns. If None (default), center will be automatically-determined.

  • normalized (bool, optional) – If True (default), each pattern is normalized to its integral.

  • angular_bounds (2-tuple of float or None, optional) – Angle bounds are specified in degrees. 0 degrees is defined as the positive x-axis. Angle bounds outside [0, 360) are mapped back to [0, 360).

  • callback (callable or None, optional) – Callable of a single argument, to which the calculation progress will be passed as an integer between 0 and 100.

Returns

powder

Return type

PowderDiffractionDataset

powder_baseline(timedelay, out=None)

Returns the baseline data.

Parameters
  • timedelay (float or None) – Time-delay [ps]. If None, the entire block is returned.

  • out (ndarray or None, optional) – If an out ndarray is provided, h5py can avoid making intermediate copies.

Returns

out – If a baseline hasn’t been computed yet, the returned array is an array of zeros.

Return type

ndarray

powder_calq(crystal, peak_indices, miller_indices)

Determine the scattering vector q corresponding to a polycrystalline diffraction pattern and a known crystal structure.

For best results, multiple peaks (and corresponding Miller indices) should be provided; the absolute minimum is two.

Parameters
  • crystal (skued.Crystal instance) – Crystal that gave rise to the diffraction data.

  • peak_indices (n-tuple of ints) – Array index location of diffraction peaks. For best results, peaks should be well-separated. More than two peaks can be used.

  • miller_indices (iterable of 3-tuples) – Miller indices associated with the peaks in peak_indices, e.g. miller_indices = [(2,2,0), (-3,0,2)]. More than two peaks can be used.

Raises
  • ValueError – if the number of peak indices does not match the number of Miller indices, or if the number of peaks given is lower than two.

  • IOError – If the filename is already associated with a file.
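The calibration performed here can be sketched as fitting a straight line q = a·i + b through the known (pixel index, |q|) pairs and evaluating it over the whole trace. This is a conceptual numpy stand-in: the library derives |q| from the Crystal and the Miller indices, whereas here the norms are supplied directly, and calibrate_q is a hypothetical name.

```python
import numpy as np

def calibrate_q(peak_indices, q_norms, n_points):
    """Least-squares line through known peaks, evaluated on every index."""
    if len(peak_indices) != len(q_norms):
        raise ValueError("number of peaks must match number of |q| values")
    if len(peak_indices) < 2:
        raise ValueError("at least two peaks are required")
    a, b = np.polyfit(peak_indices, q_norms, deg=1)
    return a * np.arange(n_points) + b
```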

powder_data(timedelay, bgr=False, relative=False, out=None)

Returns the angular average data from scan-averaged diffraction patterns.

Parameters
  • timedelay (float or None) – Time-delay [ps]. If None, the entire block is returned.

  • bgr (bool, optional) – If True, background is removed.

  • relative (bool, optional) – If True, data is returned relative to the average of all diffraction patterns before photoexcitation.

  • out (ndarray or None, optional) – If an out ndarray is provided, h5py can avoid making intermediate copies.

Returns

I – Diffracted intensity [counts]

Return type

ndarray, shape (N,) or (N,M)

powder_eq(bgr=False)

Returns the average powder diffraction pattern for all times before photoexcitation. In case no data is available before photoexcitation, an array of zeros is returned.

Parameters

bgr (bool) – If True, background is removed.

Returns

I – Diffracted intensity [counts]

Return type

ndarray, shape (N,)

powder_time_series(rmin, rmax, bgr=False, relative=False, units='pixels', out=None)

Average intensity over time. Diffracted intensity is integrated in the closed interval [rmin, rmax].

Parameters
  • rmin (float) – Lower scattering vector bound [1/A]

  • rmax (float) – Higher scattering vector bound [1/A].

  • bgr (bool, optional) – If True, background is removed. Default is False.

  • relative (bool, optional) – If True, data is returned relative to the average of all diffraction patterns before photoexcitation.

  • units (str, {'pixels', 'momentum'}) – Units of the bounds rmin and rmax.

  • out (ndarray or None, optional) – 1-D ndarray in which to store the results. The shape should be compatible with (len(time_points),)

Returns

out – Average diffracted intensity over time.

Return type

ndarray, shape (N,)
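The closed integration bound is made explicit by a short sketch (hypothetical names; traces is a (time, radius) array, not an iris object):

```python
import numpy as np

def powder_time_series(traces, radius, rmin, rmax):
    """Integrate 1D powder traces over the closed interval [rmin, rmax]."""
    selected = (radius >= rmin) & (radius <= rmax)   # closed on both ends
    return traces[:, selected].sum(axis=1)
```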

property px_radius

Pixel-radius of azimuthal average

property scattering_vector

Array of scattering vector norm |q| [1/Å]

shift_time_zero(*args, **kwargs)

Shift time-zero uniformly across time-points.

Parameters

shift (float) – Shift [ps]. A positive value of shift will move all time-points forward in time, whereas a negative value of shift will move all time-points backwards in time.

Migrating older datasets

The word “migration” here is used to signify that a particular dataset needs to be migrated to a slightly updated form. This is done automatically if the dataset is opened with write permissions.

class iris.MigrationWarning

Bases: UserWarning

Warning class for warnings involving the migration of datasets to a newer version.

class iris.MigrationError

Bases: Exception

Thrown if a particular dataset requires migration.