ezmsg.sigproc.util.array#

Portable helpers for Array API interoperability.

These utilities smooth over differences between Array API libraries (NumPy, PyTorch, MLX, CuPy, etc.) — in particular around device placement and dtype introspection, which are not uniformly supported.

Design rule for xp_* helpers: prefer the native op only when it differs semantically from the fallback.

- For pure stride/metadata tricks (reshape, transpose, slicing), every backend's implementation is equivalent in cost, so the simplest path wins.
- For ops that do real work (empty vs. zeros, compiled kernels), we route to the native op when available.
- When backends disagree on API (e.g. torch.flip(dims=...) vs numpy.flip(axis=...), or torch's refusal of negative-step slicing), we absorb the difference here rather than leaking it to callers.

Functions

array_device(x)[source]#

Return the device of an array, or None for device-less libraries.
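
A minimal sketch of the documented behavior (illustrative only; the real helper may probe more carefully):

```python
import numpy as np

def array_device(x):
    # Return x.device when the backend exposes one, else None.
    # NumPy >= 2.0 arrays expose .device ("cpu"); older NumPy and
    # device-less libraries simply lack the attribute.
    return getattr(x, "device", None)

dev = array_device(np.zeros(3))  # "cpu" on NumPy >= 2.0, None on older NumPy
```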

is_complex_dtype(dtype)[source]#

Check whether dtype is a complex type, portably across backends.

Return type:

bool
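
One portable way to realize this check is a name-based probe, sketched below (an assumption for illustration; the real helper may use backend-specific introspection instead):

```python
import numpy as np

def is_complex_dtype(dtype) -> bool:
    # Name-based sketch: numpy dtype instances stringify as "complex64",
    # numpy scalar classes as "<class 'numpy.complex64'>", and torch/MLX
    # dtypes as "torch.complex64" / "mlx.core.complex64" -- all of which
    # contain the substring "complex".
    return "complex" in str(dtype)
```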

is_float_dtype(xp, dtype)[source]#

Check whether dtype is a real floating-point type, portably.

Return type:

bool
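
A hedged sketch, assuming the helper prefers the Array API `isdtype` query when the namespace provides it and falls back to a name check otherwise:

```python
import numpy as np

def is_float_dtype(xp, dtype) -> bool:
    # Prefer the standard Array API query (NumPy >= 2.0, torch, etc.).
    if hasattr(xp, "isdtype"):
        try:
            return bool(xp.isdtype(dtype, "real floating"))
        except TypeError:
            pass  # namespace's isdtype rejected this dtype object
    # Fallback: name probe ("float64" matches, "complex64" must not).
    name = str(dtype)
    return "float" in name and "complex" not in name
```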

xp_asarray(xp, obj, *, dtype=None, device=None)[source]#

Portable xp.asarray that omits unsupported kwargs.

Some Array API libraries (e.g. MLX) don’t accept a device keyword. This helper builds the kwargs dict dynamically so that only supported arguments are forwarded.
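
The kwargs-building idea can be sketched as follows (a simplification: only non-None arguments are forwarded; the real helper may additionally probe which keywords the namespace accepts):

```python
import numpy as np

def xp_asarray(xp, obj, *, dtype=None, device=None):
    # Build kwargs dynamically so an unsupported keyword (e.g. device
    # on MLX) is never forwarded when the caller didn't ask for it.
    kwargs = {}
    if dtype is not None:
        kwargs["dtype"] = dtype
    if device is not None:
        kwargs["device"] = device
    return xp.asarray(obj, **kwargs)

a = xp_asarray(np, [1, 2, 3], dtype=np.float32)
```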

xp_create(fn, *args, dtype=None, device=None, **extra)[source]#

Call a creation function (zeros, ones, eye) portably.

Omits device if it is None (for libraries that don’t support it).
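
A minimal sketch of the documented rule (dtype is treated the same way for symmetry; this is an assumption, not the library's exact code):

```python
import numpy as np

def xp_create(fn, *args, dtype=None, device=None, **extra):
    # Forward device (and dtype) only when not None, so creation
    # functions on device-less backends never see the keyword.
    kwargs = dict(extra)
    if dtype is not None:
        kwargs["dtype"] = dtype
    if device is not None:
        kwargs["device"] = device
    return fn(*args, **kwargs)

z = xp_create(np.zeros, (2, 3), dtype=np.float32)
```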

xp_empty(xp, shape, *, dtype=None)[source]#

Portable xp.empty with a zeros fallback for backends (e.g. MLX) that don’t expose empty. MLX is lazy, so the extra zero initialization is near-free; on eager backends, empty is preferred when available.
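
The empty-or-zeros choice can be sketched like this (illustrative; the `_Lazy` namespace in the test stands in for a backend without empty):

```python
import numpy as np

def xp_empty(xp, shape, *, dtype=None):
    # Prefer xp.empty (no initialization cost on eager backends);
    # fall back to xp.zeros for namespaces that lack empty (e.g. MLX).
    kwargs = {"dtype": dtype} if dtype is not None else {}
    fn = getattr(xp, "empty", None)
    if fn is None:
        fn = xp.zeros
    return fn(shape, **kwargs)

buf = xp_empty(np, (4,), dtype=np.int16)  # contents uninitialized
```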

xp_flip(arr, axis)[source]#

Reverse arr along axis, portable across backends.

Dispatches to numpy.flip(axis=...) / cupy.flip(axis=...) / torch.flip(dims=...) when the namespace exposes flip; otherwise falls back to negative-step slicing (MLX). Torch is the reason slicing cannot be the universal path: it rejects negative slice steps with a ValueError.

Note on cost: numpy/cupy return a strided view (O(1)); torch’s flip materializes a copy (no view equivalent exists there); MLX’s slicing returns a view.
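
The dispatch can be sketched as below. The explicit `xp` keyword is a simplification for the sketch (the real helper resolves arr's namespace itself):

```python
import numpy as np

def xp_flip(arr, axis, xp=np):
    flip = getattr(xp, "flip", None)
    if flip is not None:
        try:
            return flip(arr, axis=axis)     # numpy / cupy spelling
        except TypeError:
            return flip(arr, dims=(axis,))  # torch spelling
    # No flip in the namespace (MLX): negative-step slicing, which
    # returns a view there but is rejected by torch.
    index = [slice(None)] * arr.ndim
    index[axis] = slice(None, None, -1)
    return arr[tuple(index)]

a = np.arange(6).reshape(2, 3)
b = xp_flip(a, axis=1)
```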

xp_itemsize(dtype)[source]#

Bytes per element of dtype, portable across backends.

numpy/cupy dtype instances and torch dtypes expose .itemsize as an int; MLX dtypes expose .size instead. NumPy scalar types (e.g. the class np.float32 itself, as opposed to a dtype instance) expose .itemsize only as an attribute descriptor, not a concrete int; we detect that case and round-trip through np.dtype(...) to obtain the instance.

Return type:

int
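
One way to realize the described probing order (a sketch under the stated assumptions, not the library's exact code):

```python
import numpy as np

def xp_itemsize(dtype) -> int:
    # numpy/cupy dtype instances and torch dtypes: .itemsize is an int.
    size = getattr(dtype, "itemsize", None)
    if isinstance(size, int):
        return size
    # MLX dtypes: .size is an int.
    size = getattr(dtype, "size", None)
    if isinstance(size, int):
        return size
    # NumPy scalar *classes* (e.g. np.float32): .itemsize is a descriptor,
    # so round-trip through np.dtype(...) to get a concrete instance.
    return np.dtype(dtype).itemsize
```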