ezmsg.event.kernel_activation#

Compute binned kernel activation from sparse events.

This module provides efficient computation of kernel-convolved features at a lower output rate than the input. For exponential and alpha kernels, uses a state-based approach that is O(n_events + n_bins) instead of O(n_samples).

Classes

class ActivationKernelType(*values)[source]#

Bases: str, Enum

Supported kernel types for efficient binned activation.

EXPONENTIAL = 'exponential'#

Exponential decay: k(t) = exp(-t/tau) for t >= 0.

ALPHA = 'alpha'#

Alpha function: k(t) = (t/tau) * exp(-t/tau) for t >= 0.

COUNT = 'count'#

Simple event counting (no kernel, just count events per bin).

class BinAggregation(*values)[source]#

Bases: str, Enum

How to aggregate activation within each bin.

LAST = 'last'#

Use activation value at end of bin (default for activation features).

MEAN = 'mean'#

Average activation over the bin.

SUM = 'sum'#

Sum of activation over the bin (for count, this gives total count).

MAX = 'max'#

Maximum activation in the bin.
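For intuition, here is how each aggregation mode would reduce a short within-bin activation trace. The trace values are purely illustrative; the mapping to NumPy reductions is an assumption about the semantics described above, not the library's code.

```python
import numpy as np

# Hypothetical dense activation samples falling inside one output bin.
trace = np.array([0.2, 0.9, 0.5, 0.4])

agg = {
    "last": trace[-1],      # value at the end of the bin (default)
    "mean": trace.mean(),   # average activation over the bin
    "sum": trace.sum(),     # total over the bin (per-bin count for COUNT)
    "max": trace.max(),     # peak activation within the bin
}
```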

class BinnedKernelActivation(*args, **kwargs)[source]#

Bases: BaseStatefulTransformer[BinnedKernelActivationSettings, AxisArray, AxisArray, BinnedKernelActivationState]

Compute binned kernel activation from sparse events.

For exponential and alpha kernels, uses an efficient state-based algorithm:
  • Exponential: activation[t] = sum_i exp(-(t - t_i) / tau)

  • Alpha: activation[t] = sum_i (t - t_i) / tau * exp(-(t - t_i) / tau)

The algorithm only computes at event times and bin boundaries, giving O(n_events + n_bins) complexity instead of O(n_samples).
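The exponential case can be sketched as follows. This is a minimal standalone illustration of the event/bin-edge update, not the library's implementation; the function name and signature are hypothetical, and it uses LAST aggregation with unit event weights.

```python
import numpy as np

def binned_exp_activation(event_times, t_end, bin_duration, tau):
    """Sketch: exponential-kernel activation sampled at bin edges.

    Between updates the running sum decays by exp(-dt / tau), so only
    events and bin edges are visited: O(n_events + n_bins).
    """
    n_bins = int(round(t_end / bin_duration))
    out = np.zeros(n_bins)
    activation = 0.0          # sum_i exp(-(t - t_i) / tau), valid at t_prev
    t_prev = 0.0
    events = iter(np.sort(event_times))
    ev = next(events, None)
    for b in range(n_bins):
        t_edge = (b + 1) * bin_duration
        # Fold in every event before this bin's right edge.
        while ev is not None and ev < t_edge:
            activation *= np.exp(-(ev - t_prev) / tau)  # decay up to the event
            activation += 1.0                           # event adds k(0) = 1
            t_prev = ev
            ev = next(events, None)
        activation *= np.exp(-(t_edge - t_prev) / tau)  # decay to the bin edge
        t_prev = t_edge
        out[b] = activation                             # LAST aggregation
    return out
```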

Input: AxisArray with sparse.COO data (event times and values)

Output: AxisArray with dense binned activation features

Features:
  • Efficient for sparse events (much faster than dense convolution)

  • Handles chunk boundaries seamlessly

  • Supports exponential, alpha, and count kernels

  • Configurable bin aggregation (last, mean, sum, max)
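The chunk-boundary property can be seen directly from the exponential recurrence: carrying the decayed activation value across a chunk split reproduces single-pass processing exactly. The arithmetic below is a sketch of that assumed mechanism, not the transformer's actual state handling.

```python
import numpy as np

tau = 0.05

def decay(a, dt):
    # Advance the exponential-kernel sum by dt seconds.
    return a * np.exp(-dt / tau)

# Events at t = 0.00 s and t = 0.07 s, read out at t = 0.10 s.
# Single pass over the whole stream:
full = decay(1.0, 0.07)         # event at 0.00, decay to the next event
full = decay(full + 1.0, 0.03)  # event at 0.07, decay to 0.10

# Same stream split into chunks [0, 0.05) and [0.05, 0.10):
state = decay(1.0, 0.05)          # chunk 1 ends; state carried over
state = decay(state, 0.02)        # chunk 2: decay to the event at 0.07
state = decay(state + 1.0, 0.03)  # decay to the readout at 0.10
# full and state agree to machine precision.
```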

class BinnedKernelActivationSettings(kernel_type=ActivationKernelType.EXPONENTIAL, tau=0.05, bin_duration=0.02, aggregation=BinAggregation.LAST, scale_by_value=False, normalize=True, rate_normalize=False)[source]#

Bases: Settings

Settings for BinnedKernelActivation.

Parameters:
kernel_type: ActivationKernelType = 'exponential'#

Type of kernel to apply.

tau: float = 0.05#

Time constant in seconds. For exponential: decay rate. For alpha: peak time.

bin_duration: float = 0.02#

Output bin duration in seconds.

aggregation: BinAggregation = 'last'#

How to aggregate activation within each bin.

scale_by_value: bool = False#

If True, weight each event by its value. If False, all events contribute 1.

normalize: bool = True#

If True, normalize kernel so integral equals 1.

rate_normalize: bool = False#

If True, divide output by bin_duration to get events/second (for COUNT kernel).

__init__(kernel_type=ActivationKernelType.EXPONENTIAL, tau=0.05, bin_duration=0.02, aggregation=BinAggregation.LAST, scale_by_value=False, normalize=True, rate_normalize=False)#
Return type:

None
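As a rough illustration of the two normalization flags (the exact scaling the library applies is assumed here, derived from the field descriptions above):

```python
import numpy as np

tau, bin_duration = 0.05, 0.02

# normalize=True: scale the kernel so its integral is 1. For the
# exponential kernel the integral of exp(-t/tau) over t >= 0 is tau,
# so the normalized kernel is (1/tau) * exp(-t/tau).
norm_peak = 1.0 / tau  # value each unit event contributes at t = t_i

# rate_normalize=True with the COUNT kernel: divide per-bin counts by
# bin_duration so the output is events/second rather than events/bin.
counts = np.array([3.0, 0.0, 1.0])  # events per 20 ms bin
rates = counts / bin_duration
```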

class BinnedKernelActivationState[source]#

Bases: object

State for BinnedKernelActivation.

activation: ndarray[tuple[Any, ...], dtype[float64]] | None = None#
alpha_aux: ndarray[tuple[Any, ...], dtype[float64]] | None = None#
samples_since_update: ndarray[tuple[Any, ...], dtype[int64]] | None = None#
fs: float | None = None#
bin_accumulator: float = 0.0#
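The paired activation/alpha_aux arrays suggest the standard two-variable decomposition for the alpha kernel. A hypothetical scalar version of that update is sketched below; the function names are illustrative and the closed-form step is an assumption consistent with the kernel formulas above, not the library's code.

```python
import numpy as np

def advance_alpha_state(activation, alpha_aux, dt, tau):
    """Advance the alpha-kernel state by dt seconds.

    With aux(t) = sum_i exp(-(t - t_i)/tau) and
    activation(t) = sum_i (t - t_i)/tau * exp(-(t - t_i)/tau),
    a step of dt has a closed-form linear update.
    """
    d = np.exp(-dt / tau)
    new_activation = d * (activation + (dt / tau) * alpha_aux)
    new_aux = d * alpha_aux
    return new_activation, new_aux

def add_alpha_event(activation, alpha_aux, value=1.0):
    # At the event time itself (t - t_i = 0) the alpha term is zero,
    # so only the auxiliary exponential sum is incremented.
    return activation, alpha_aux + value
```

This is why the alpha kernel needs the extra `alpha_aux` state while the exponential kernel needs only `activation`.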
class BinnedKernelActivationUnit(*args, settings=None, **kwargs)[source]#

Bases: BaseTransformerUnit[BinnedKernelActivationSettings, AxisArray, AxisArray, BinnedKernelActivation]

Unit for BinnedKernelActivation.

Parameters:

settings (Settings | None)

SETTINGS#

alias of BinnedKernelActivationSettings