ezmsg.sigproc.ewma#

Functions

ewma_step(sample, zi, alpha, beta=None)[source]#

Do an exponentially weighted moving average step.

Parameters:
  • sample (ndarray[tuple[Any, ...], dtype[_ScalarT]]) – The new sample.

  • zi (ndarray[tuple[Any, ...], dtype[_ScalarT]]) – The output of the previous step.

  • alpha (float) – Fading factor.

  • beta (float | None) – Persisting factor. If None, it is calculated as 1-alpha.

Returns:

alpha * sample + beta * zi
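
A minimal NumPy sketch of the documented behaviour (illustrative only, not the library source): a single update that blends the new sample with the previous output.

import numpy as np

def ewma_step_sketch(sample, zi, alpha, beta=None):
    # beta defaults to the complement of alpha, as documented above
    beta = 1.0 - alpha if beta is None else beta
    return alpha * sample + beta * zi

prev = np.zeros(4)
new = np.ones(4)
print(ewma_step_sketch(new, prev, alpha=0.2))  # [0.2 0.2 0.2 0.2]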

Classes

class EWMASettings(time_constant: float = 1.0, axis: str | None = None, accumulate: bool = True)[source]#

Bases: Settings

time_constant: float = 1.0#

The time for the smoothed response to a unit step to reach 1 - 1/e ≈ 63.2% of its final value.

axis: str | None = None#
accumulate: bool = True#

If True, update the EWMA state with each sample. If False, only apply the current EWMA estimate without updating state (useful for inference periods where you don’t want to adapt statistics).

__init__(time_constant=1.0, axis=None, accumulate=True)#
Return type:

None
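
The exact time_constant-to-alpha conversion used by the module is not reproduced on this page. A sketch consistent with the definition above (the helper name is hypothetical, and fs stands for the sampling rate along the smoothed axis):

import numpy as np

def alpha_from_time_constant(time_constant, fs):
    # Hypothetical helper, not part of the module's API: choose alpha so the
    # step response reaches 1 - 1/e after `time_constant` seconds.
    dt = 1.0 / fs  # seconds per sample
    return 1.0 - np.exp(-dt / time_constant)

print(alpha_from_time_constant(time_constant=1.0, fs=100.0))  # ~0.00995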

class EWMAState[source]#

Bases: object

alpha: float#
zi: ndarray[tuple[Any, ...], dtype[_ScalarT]] | None = None#
class EWMATransformer(*args, **kwargs)[source]#

Bases: BaseStatefulTransformer[EWMASettings, AxisArray, AxisArray, EWMAState]
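
A sketch of the recursion such a transformer applies over each chunk (illustrative only, not the module's source): the EWMA is written as a first-order IIR filter so the filter state carries smoothing across chunk boundaries, mirroring the role of EWMAState.zi.

import numpy as np
import scipy.signal

def ewma_chunk(x, alpha, zi=None):
    # y[n] = alpha * x[n] + (1 - alpha) * y[n - 1], filtered along axis 0 (time)
    b, a = [alpha], [1.0, -(1.0 - alpha)]
    if zi is None:
        zi = np.zeros((1,) + x.shape[1:])  # one state value per non-time element
    y, zf = scipy.signal.lfilter(b, a, x, axis=0, zi=zi)
    return y, zf  # pass zf into the next call so smoothing is continuous

chunk1 = np.random.randn(50, 8)  # 50 samples x 8 channels
chunk2 = np.random.randn(50, 8)
y1, state = ewma_chunk(chunk1, alpha=0.1)
y2, state = ewma_chunk(chunk2, alpha=0.1, zi=state)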

class EWMAUnit(*args, settings=None, **kwargs)[source]#

Bases: BaseTransformerUnit[EWMASettings, AxisArray, AxisArray, EWMATransformer]

Parameters:

settings (Settings | None)

SETTINGS#

alias of EWMASettings

async on_settings(msg)[source]#

Handle settings updates with smart reset behavior.

State is reset only when axis changes (a structural change); changes to time_constant or accumulate are applied without resetting the accumulated state.

Parameters:

msg (EWMASettings)

Return type:

None

class EWMA_Deprecated(alpha, max_len)[source]#

Bases: object

These methods were adapted from https://stackoverflow.com/a/70998068 and other answers in that thread, but they turned out to be slower than the scipy.signal.lfilter approach. Additionally, compute and compute2 are prone to numerical errors as the vector length increases and beta**n approaches zero.

__init__(alpha, max_len)[source]#
prev: ndarray[tuple[Any, ...], dtype[_ScalarT]] | None#
compute(arr, out=None)[source]#
Return type:

ndarray[tuple[Any, …], dtype[_ScalarT]]

compute2(arr)[source]#

Compute the Exponentially Weighted Moving Average (EWMA) of the input array.

Parameters:

arr (ndarray[tuple[Any, ...], dtype[_ScalarT]]) – The input array to be smoothed.

Returns:

The smoothed array.

Return type:

ndarray[tuple[Any, …], dtype[_ScalarT]]

compute_sample(new_sample)[source]#
Parameters:

new_sample (ndarray[tuple[Any, ...], dtype[_ScalarT]])

Return type:

ndarray[tuple[Any, …], dtype[_ScalarT]]
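
A quick illustration of the numerical issue noted in the class description: the vectorized formulations rely on powers of beta over the whole chunk, and beta ** n underflows to zero once the chunk is long enough, after which divisions by those weights are no longer meaningful.

beta = 0.99
print(beta ** 1_000)    # ~4.3e-5, still representable
print(beta ** 100_000)  # 0.0 in float64 -- underflow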
