ezmsg.sigproc.util.array
Portable helpers for Array API interoperability.
These utilities smooth over differences between Array API libraries
(NumPy, PyTorch, MLX, CuPy, etc.) — in particular around device
placement and dtype introspection, which are not uniformly supported.
Design rule for xp_* helpers: prefer the native op only when it
differs semantically from the fallback. For pure stride/metadata
tricks (reshape, transpose, slicing) every backend’s implementation is
equivalent in cost, so the simplest path wins. For ops that do real
work (empty vs. zeros, compiled kernels) we route to the native op when
available. When backends disagree on API — e.g. torch.flip(dims=...)
vs numpy.flip(axis=...), or torch’s refusal of negative-step slicing
— we absorb that here rather than leaking it to callers.
Functions
- is_complex_dtype(dtype)
Check whether dtype is a complex type, portably across backends.
- Return type: bool
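A minimal sketch of such a check (hypothetical implementation; the `is_complex` attribute path is an assumption about torch-style dtypes):

```python
import numpy as np

def is_complex_dtype_sketch(dtype):
    # numpy/cupy dtype instances carry a `kind` code; 'c' means complex.
    kind = getattr(dtype, "kind", None)
    if kind is not None:
        return kind == "c"
    # torch dtypes instead expose a boolean `is_complex` attribute (assumed).
    return bool(getattr(dtype, "is_complex", False))
```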
- is_float_dtype(xp, dtype)
Check whether dtype is a real floating-point type, portably.
- Return type: bool
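One way such a check might be written (a sketch, not the module's actual code), assuming the namespace may or may not provide the Array API `isdtype` inspection function:

```python
import numpy as np

def is_float_dtype_sketch(xp, dtype):
    # Array API namespaces (e.g. NumPy >= 2.0) provide `isdtype`.
    if hasattr(xp, "isdtype"):
        return bool(xp.isdtype(dtype, "real floating"))
    # Fallback: numpy-style kind codes ('f' means real floating point).
    return getattr(dtype, "kind", None) == "f"
```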
- xp_asarray(xp, obj, *, dtype=None, device=None)
Portable `xp.asarray` that omits unsupported kwargs. Some Array API libraries (e.g. MLX) don't accept a `device` keyword. This helper builds the kwargs dict dynamically so that only supported arguments are forwarded.
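The kwargs-building pattern described above might look like this (a sketch under the assumption that `None` marks an argument to omit):

```python
import numpy as np

def xp_asarray_sketch(xp, obj, *, dtype=None, device=None):
    # Only forward keywords that were actually given, so backends that
    # lack e.g. `device` never see an unsupported argument.
    kwargs = {}
    if dtype is not None:
        kwargs["dtype"] = dtype
    if device is not None:
        kwargs["device"] = device
    return xp.asarray(obj, **kwargs)

a = xp_asarray_sketch(np, [1, 2, 3], dtype=np.float32)
```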
- xp_create(fn, *args, dtype=None, device=None, **extra)
Call a creation function (`zeros`, `ones`, `eye`) portably. Omits `device` if it is None (for libraries that don't support it).
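A sketch of this wrapper (hypothetical implementation following the same omit-if-None rule):

```python
import numpy as np

def xp_create_sketch(fn, *args, dtype=None, device=None, **extra):
    kwargs = dict(extra)
    if dtype is not None:
        kwargs["dtype"] = dtype
    if device is not None:  # omit `device` for libraries that lack it
        kwargs["device"] = device
    return fn(*args, **kwargs)

eye = xp_create_sketch(np.eye, 3, dtype=np.float64)
```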
- xp_empty(xp, shape, *, dtype=None)
Portable `xp.empty` with a `zeros` fallback for backends (e.g. MLX) that don't expose `empty`. MLX is lazy, so the extra zero init is near-free; on eager backends `empty` is preferred when available.
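The empty-or-zeros routing could be sketched as follows (an illustration, assuming `zeros` exists on every backend):

```python
import numpy as np

def xp_empty_sketch(xp, shape, *, dtype=None):
    # Prefer `empty` when the backend exposes it; otherwise fall back to
    # `zeros` (near-free on a lazy backend such as MLX).
    fn = getattr(xp, "empty", xp.zeros)
    if dtype is not None:
        return fn(shape, dtype=dtype)
    return fn(shape)
```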
- xp_flip(arr, axis)
Reverse `arr` along `axis`, portable across backends. Dispatches to `numpy.flip(axis=)` / `cupy.flip(axis=)` / `torch.flip(dims=)` when the namespace exposes `flip`, else negative-step slicing (MLX). Torch is the reason we can't make slicing the universal path: it rejects negative steps with ValueError. Note on cost: numpy/cupy return a strided view (O(1)); torch's `flip` materializes a copy (no view equivalent exists there); MLX's slicing returns a view.
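The dispatch described above might be sketched like this (a hypothetical implementation; the try/except over keyword spellings is one way to absorb the `axis=` vs `dims=` disagreement, not necessarily how the module does it):

```python
import numpy as np

def xp_flip_sketch(xp, arr, axis):
    flip = getattr(xp, "flip", None)
    if flip is None:
        # No native flip (e.g. MLX): negative-step slicing returns a view.
        index = [slice(None)] * arr.ndim
        index[axis] = slice(None, None, -1)
        return arr[tuple(index)]
    try:
        return flip(arr, axis=axis)     # numpy/cupy spelling
    except TypeError:
        return flip(arr, dims=(axis,))  # torch spelling
```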
- xp_itemsize(dtype)
Bytes per element of `dtype`, portable across backends. numpy/cupy dtype instances expose `.itemsize` as an int; torch dtypes also expose `.itemsize` as an int; MLX dtypes expose `.size`. NumPy scalar types (e.g. the class `np.float32`) expose `.itemsize` as an attribute descriptor, not a concrete int; we detect that and round-trip through `np.dtype(...)` to get the instance.
- Return type: int
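A sketch of that attribute probing (hypothetical implementation; the `isinstance(..., int)` test is the descriptor-detection assumption described above):

```python
import numpy as np

def xp_itemsize_sketch(dtype):
    size = getattr(dtype, "itemsize", None)
    if isinstance(size, int):
        return size  # numpy/cupy dtype instances, torch dtypes
    size = getattr(dtype, "size", None)
    if isinstance(size, int):
        return size  # MLX dtypes
    # NumPy scalar classes (e.g. np.float32): `.itemsize` on the class is a
    # descriptor, not an int, so round-trip through np.dtype for an instance.
    return np.dtype(dtype).itemsize
```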