Torch Feature Calculators
Experimental
These feature calculators are experimental reimplementations of tsfresh features in pure PyTorch. Individual implementations have not been deeply validated against tsfresh for correctness in all cases; results are close to, but not identical with, tsfresh.
All feature functions follow a consistent tensor shape convention:
- Input: `(N, B, S)` where `N` = timesteps, `B` = batch size, `S` = state variables
- Output: `(B, S)` for scalar features, or `(K, B, S)` for multi-valued features where `K` is the number of values
Features are computed along the time dimension (dim=0), preserving batch and state dimensions. Functions suffixed with _batched compute several parameter variations in a single pass and return shape (K, B, S).
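A minimal sketch of this convention (illustrative only; the tensor contents are arbitrary):

```python
import torch

# Arbitrary example: N=100 timesteps, B=4 trajectories in the batch, S=2 state variables
x = torch.randn(100, 4, 2)

# A scalar feature reduces over the time dimension (dim=0) and keeps (B, S)
mean_feat = x.mean(dim=0)
print(mean_feat.shape)  # torch.Size([4, 2])

# A multi-valued feature stacks its K values in front: (K, B, S)
qs = torch.quantile(x, torch.tensor([0.25, 0.5, 0.75]), dim=0)
print(qs.shape)  # torch.Size([3, 4, 2])
```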
Statistical
pybasin.ts_torch.calculators.torch_features_statistical
Statistical feature calculators for time series.
All feature functions follow a consistent tensor shape convention:

- Input: `(N, B, S)` where `N` = timesteps, `B` = batch size, `S` = state variables
- Output: `(B, S)` for scalar features, or `(K, B, S)` for multi-valued features where `K` is the number of values
Features are computed along the time dimension (dim=0), preserving batch and state dimensions.
Functions
sum_values
Sum of all values along the time dimension.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input time series tensor of shape (N, B, S) where N=timesteps, B=batch size, S=states. | required |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Tensor of shape (B, S) containing the sum across all N timesteps. |
median
Median of the time series along the time dimension.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input time series tensor of shape (N, B, S) where N=timesteps, B=batch size, S=states. | required |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Tensor of shape (B, S) containing the median value across all N timesteps. |
mean
Mean of the time series along the time dimension.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input time series tensor of shape (N, B, S) where N=timesteps, B=batch size, S=states. | required |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Tensor of shape (B, S) containing the mean value across all N timesteps. |
length
Length of the time series (number of timesteps).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input time series tensor of shape (N, B, S) where N=timesteps, B=batch size, S=states. | required |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Tensor of shape (B, S) where all values equal N (the number of timesteps). |
standard_deviation
Standard deviation (population, ddof=0) along the time dimension.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input time series tensor of shape (N, B, S) where N=timesteps, B=batch size, S=states. | required |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Tensor of shape (B, S) containing the standard deviation across all N timesteps. |
variation_coefficient
Coefficient of variation (std / mean).
mean_n_absolute_max
Mean of n largest absolute values (optimized with topk).
ratio_beyond_r_sigma
Ratio of values beyond r standard deviations.
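As a sketch of what this computes (an illustrative reimplementation, not the library's code), using the population standard deviation per the convention above:

```python
import torch

def ratio_beyond_r_sigma_sketch(x: torch.Tensor, r: float) -> torch.Tensor:
    # x: (N, B, S); fraction of timesteps farther than r population stds from the mean
    mean = x.mean(dim=0)
    std = x.std(dim=0, unbiased=False)
    return ((x - mean).abs() > r * std).float().mean(dim=0)  # (B, S)
```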
symmetry_looking
Check if distribution looks symmetric.
quantile_batched
Compute multiple quantiles at once.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input time series tensor of shape (N, B, S) where N=timesteps, B=batch size, S=states. | required |
| `qs` | `list[float]` | List of quantile values (0.0 to 1.0) to compute. | required |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Tensor of shape (len(qs), B, S) containing the requested quantiles. The first dimension corresponds to the different quantile values in the same order as the input list. |
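`torch.quantile` already stacks the requested quantiles in a leading dimension, so an equivalent computation can be sketched as (illustrative, not the library implementation):

```python
import torch

def quantile_batched_sketch(x: torch.Tensor, qs: list[float]) -> torch.Tensor:
    # torch.quantile with a 1-D q tensor returns shape (len(qs), B, S)
    return torch.quantile(x, torch.tensor(qs, dtype=x.dtype), dim=0)
```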
large_standard_deviation_batched
Check if std > r * range for multiple r values at once.
Args:

- x: Input tensor of shape (N, B, S)
- rs: List of r threshold values

Returns: Tensor of shape (len(rs), B, S)
symmetry_looking_batched
Check if distribution looks symmetric for multiple r values.
Args:

- x: Input tensor of shape (N, B, S)
- rs: List of r threshold values

Returns: Tensor of shape (len(rs), B, S)
ratio_beyond_r_sigma_batched
Compute ratio of values beyond r standard deviations for multiple r.
Args:

- x: Input tensor of shape (N, B, S)
- rs: List of r multiplier values

Returns: Tensor of shape (len(rs), B, S)
mean_n_absolute_max_batched
Compute mean_n_absolute_max for multiple n values at once.
Args:

- x: Input tensor of shape (N, B, S)
- ns: List of number_of_maxima values

Returns: Tensor of shape (len(ns), B, S)
amplitude
Peak-to-peak amplitude (max - min) of the time series.
Useful for distinguishing limit cycles with different oscillation amplitudes.
Args: x: Input tensor of shape (N, B, S) where N is timesteps, B is batch, S is states.
Returns: Tensor of shape (B, S) with the amplitude for each batch/state.
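A one-line sketch of this computation (illustrative, not the library's code):

```python
import torch

def amplitude_sketch(x: torch.Tensor) -> torch.Tensor:
    # Peak-to-peak along the time dimension: (N, B, S) -> (B, S)
    return x.amax(dim=0) - x.amin(dim=0)
```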
Change / Difference
pybasin.ts_torch.calculators.torch_features_change
Change and difference-based feature calculators for time series.
All feature functions follow a consistent tensor shape convention:

- Input: `(N, B, S)` where `N` = timesteps, `B` = batch size, `S` = state variables
- Output: `(B, S)` for scalar features, or `(K, B, S)` for multi-valued features where `K` is the number of values
Features are computed along the time dimension (dim=0), preserving batch and state dimensions.
Functions
absolute_sum_of_changes
Sum of absolute differences between consecutive values.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input time series tensor of shape (N, B, S) where N=timesteps, B=batch size, S=states. | required |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Tensor of shape (B, S) containing the sum of \|x[i+1] - x[i]\| for all consecutive pairs. |
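A sketch of the computation (illustrative, not the library's code):

```python
import torch

def absolute_sum_of_changes_sketch(x: torch.Tensor) -> torch.Tensor:
    # sum_i |x[i+1] - x[i]| along the time dimension: (N, B, S) -> (B, S)
    return x.diff(dim=0).abs().sum(dim=0)
```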
mean_abs_change
Mean of absolute differences between consecutive values.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input time series tensor of shape (N, B, S) where N=timesteps, B=batch size, S=states. | required |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Tensor of shape (B, S) containing the mean of \|x[i+1] - x[i]\| across all consecutive pairs. |
mean_second_derivative_central
Mean of second derivative (central difference): (x[-1] - x[-2] - x[1] + x[0]) / (2 * (n-2)).
change_quantiles
change_quantiles(
x: Tensor,
ql: float,
qh: float,
isabs: bool = True,
f_agg: str = "mean",
) -> Tensor
Statistics of changes within quantile corridor.
Computes statistics of consecutive value changes where both values fall within the [ql, qh] quantile range.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input time series tensor of shape (N, B, S) where N=timesteps, B=batch size, S=states. | required |
| `ql` | `float` | Lower quantile (0.0 to 1.0) defining the corridor. | required |
| `qh` | `float` | Upper quantile (0.0 to 1.0), must be > ql. | required |
| `isabs` | `bool` | If True, use absolute differences. | `True` |
| `f_agg` | `str` | Aggregation function, "mean" or "var". | `'mean'` |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Tensor of shape (B, S) containing the requested statistic (mean or variance) of changes within the quantile corridor. |
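A rough sketch of the corridor logic (illustrative; tsfresh's exact handling of ties and empty corridors may differ):

```python
import torch

def change_quantiles_sketch(x, ql, qh, isabs=True, f_agg="mean"):
    # Corridor bounds per (B, S) series
    lo = torch.quantile(x, ql, dim=0)
    hi = torch.quantile(x, qh, dim=0)
    inside = (x >= lo) & (x <= hi)          # (N, B, S)
    both_in = inside[:-1] & inside[1:]      # both ends of a step must be in the corridor
    d = x.diff(dim=0)
    if isabs:
        d = d.abs()
    d = torch.where(both_in, d, torch.zeros_like(d))
    n = both_in.sum(dim=0).clamp(min=1)     # avoid division by zero; empty corridor -> 0
    mean = d.sum(dim=0) / n
    if f_agg == "mean":
        return mean
    # variance of the selected changes
    return torch.where(both_in, (d - mean) ** 2, torch.zeros_like(d)).sum(dim=0) / n
```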
change_quantiles_batched
Compute change_quantiles for multiple parameter combinations using vmap.
This function pre-computes all unique quantiles once, then uses vmap to efficiently process all parameter combinations in a single kernel.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input time series tensor of shape (N, B, S) where N=timesteps, B=batch size, S=states. | required |
| `params` | `list[dict]` | List of parameter dicts, each with keys "ql" (float, lower quantile, 0.0 to 1.0), "qh" (float, upper quantile, 0.0 to 1.0), "isabs" (bool, whether to use absolute differences), and "f_agg" (str, "mean" or "var"). | required |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Tensor of shape (len(params), B, S) containing the results for each parameter combination. The first dimension corresponds to the different parameter sets in the same order as the input list. |

Example:

    params = [
        {"ql": 0.0, "qh": 0.2, "isabs": True, "f_agg": "mean"},
        {"ql": 0.0, "qh": 0.2, "isabs": True, "f_agg": "var"},
        {"ql": 0.0, "qh": 0.2, "isabs": False, "f_agg": "mean"},
        ...
    ]
    result = change_quantiles_batched(x, params)  # shape: (80, B, S)
Counting
pybasin.ts_torch.calculators.torch_features_count
Functions
count_in_range
Count of values in range [min_val, max_val].
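A sketch of the described computation (illustrative; per the description above, the interval is closed on both ends):

```python
import torch

def count_in_range_sketch(x: torch.Tensor, min_val: float, max_val: float) -> torch.Tensor:
    # Number of timesteps with min_val <= x <= max_val: (N, B, S) -> (B, S)
    return ((x >= min_val) & (x <= max_val)).sum(dim=0)
```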
value_count_batched
Compute value_count for multiple values at once.
Args:

- x: Input tensor of shape (N, B, S)
- values: List of values to count

Returns: Tensor of shape (len(values), B, S)
range_count_batched
Compute range_count for multiple (min_val, max_val) pairs at once.
Args:

- x: Input tensor of shape (N, B, S)
- params: List of parameter dicts, each with keys "min_val" and "max_val"

Returns: Tensor of shape (len(params), B, S)
Boolean
pybasin.ts_torch.calculators.torch_features_boolean
Functions
has_duplicate
Check if any value occurs more than once (optimized with sorting).
has_duplicate_max
Check if maximum value occurs more than once.
has_duplicate_min
Check if minimum value occurs more than once.
has_variance_larger_than_standard_deviation
Check if variance > standard deviation (equivalent to std > 1).
Location
pybasin.ts_torch.calculators.torch_features_location
Functions
first_location_of_maximum
Relative first location of maximum value.
first_location_of_minimum
Relative first location of minimum value.
last_location_of_maximum
Relative last location of maximum value.
last_location_of_minimum
Relative last location of minimum value.
index_mass_quantile
Index where q% of cumulative mass is reached.
Pattern / Streak
pybasin.ts_torch.calculators.torch_features_pattern
Functions
longest_strike_above_mean
Longest consecutive sequence above mean.
longest_strike_below_mean
Longest consecutive sequence below mean.
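The library versions are vectorized; this naive O(N) scan is just a sketch of the run-length semantics for the above-mean case:

```python
import torch

def longest_strike_above_mean_sketch(x: torch.Tensor) -> torch.Tensor:
    m = x > x.mean(dim=0)                        # (N, B, S) boolean mask
    longest = torch.zeros_like(m[0], dtype=torch.long)
    run = torch.zeros_like(longest)
    for step in m:                               # scan over the time dimension
        run = torch.where(step, run + 1, torch.zeros_like(run))
        longest = torch.maximum(longest, run)
    return longest                               # (B, S)
```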
find_peak_mask
Find local maxima with strict inequality.
A point is a peak if it is strictly greater than all neighbors in a window of size 2n+1. Edge points (first n and last n) are excluded.
Note: This matches scipy.signal.argrelmax behavior (strict inequality), not scipy.signal.find_peaks, which also handles flat plateaus by returning their middle index. Flat plateaus return no peaks here.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input tensor of shape (N, B, S) where N=time, B=batch, S=states. | required |
| `n` | `int` | Support on each side (window size = 2n+1). | `1` |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Boolean mask of shape (N, B, S) indicating peak positions. |
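A sketch of strict-inequality peak detection consistent with the description above (illustrative, not the library's implementation):

```python
import torch

def find_peak_mask_sketch(x: torch.Tensor, n: int = 1) -> torch.Tensor:
    # A point is a peak if strictly greater than every neighbor within distance n.
    # Edge points (first n and last n) are excluded.
    N = x.shape[0]
    mask = torch.zeros_like(x, dtype=torch.bool)
    center = x[n:N - n]
    is_peak = torch.ones_like(center, dtype=torch.bool)
    for k in range(1, n + 1):
        is_peak &= (center > x[n - k:N - n - k]) & (center > x[n + k:N - n + k])
    mask[n:N - n] = is_peak
    return mask
```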
number_peaks
Count peaks with support n on each side (vectorized).
extract_peak_values
Extract peak amplitude values for orbit diagrams.
Returns the y-values at detected peaks, useful for visualizing period-N orbits where N distinct amplitude levels indicate period-N behavior.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
x
|
Tensor
|
Input tensor of shape (N, B, S) where N=time, B=batch, S=states. |
required |
n
|
int
|
Support on each side for peak detection (window size = 2n+1). |
1
|
Returns:
| Type | Description |
|---|---|
tuple[Tensor, Tensor]
|
Tuple of (peak_values, peak_counts) where: - peak_values: Tensor of shape (max_peaks, B, S) with peak amplitudes, padded with NaN for trajectories with fewer peaks. - peak_counts: Tensor of shape (B, S) with number of peaks per trajectory. |
number_cwt_peaks
Count peaks detected via CWT-like multi-scale analysis (optimized).
Uses integer accumulation and precomputed masks to minimize allocations and avoid unnecessary dtype conversions during the loop.
Input x: (N, B, S) -> returns (B, S)
number_peaks_batched
Compute number_peaks for multiple n values at once.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input tensor of shape (N, B, S). | required |
| `ns` | `list[int]` | List of n values (support on each side). | required |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Tensor of shape (len(ns), B, S). |
Autocorrelation
pybasin.ts_torch.calculators.torch_features_autocorrelation
Functions
partial_autocorrelation
Partial autocorrelation at given lag using Durbin-Levinson (fully vectorized).
agg_autocorrelation
Aggregated autocorrelation over lags 1 to maxlag (FFT-optimized).
autocorrelation_batched
Compute autocorrelation for multiple lags at once using FFT.
Args:

- x: Input tensor of shape (N, B, S)
- lags: List of lag values to compute

Returns: Tensor of shape (len(lags), B, S) with autocorrelation at each lag
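The FFT approach referenced here rests on the Wiener-Khinchin theorem; a sketch using a biased autocovariance estimator (normalizations may differ from the library and from tsfresh):

```python
import torch

def autocorrelation_fft_sketch(x: torch.Tensor, lags: list[int]) -> torch.Tensor:
    # Autocorrelation via inverse FFT of the power spectrum, with zero-padding
    # to avoid circular wrap-around.
    N = x.shape[0]
    xc = x - x.mean(dim=0)
    f = torch.fft.rfft(xc, n=2 * N, dim=0)
    acov = torch.fft.irfft(f * f.conj(), n=2 * N, dim=0)[:N] / N  # biased autocovariance
    ac = acov / acov[0]                                           # normalize by variance
    return torch.stack([ac[lag] for lag in lags])                 # (len(lags), B, S)
```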
partial_autocorrelation_batched
Compute partial autocorrelation for multiple lags at once.
Computes Durbin-Levinson algorithm once for max(lags), then extracts requested lag values. Stores PACF values separately from AR coefficients since the algorithm modifies AR coefficients in-place.
Args:

- x: Input tensor of shape (N, B, S)
- lags: List of lag values to compute

Returns: Tensor of shape (len(lags), B, S) with PACF at each lag
agg_autocorrelation_batched
Compute agg_autocorrelation for multiple (maxlag, f_agg) combinations at once.
Groups by maxlag to minimize FFT computations.
Args:

- x: Input tensor of shape (N, B, S)
- params: List of parameter dicts, each with keys:
    - "maxlag": int (defaults to 40 if not specified)
    - "f_agg": str ("mean", "median", "var")

Returns: Tensor of shape (len(params), B, S)
autocorrelation_periodicity
autocorrelation_periodicity(
x: Tensor,
min_lag: int = 2,
peak_threshold: float = 0.3,
output: str = "strength",
) -> Tensor
Compute autocorrelation-based periodicity measures.
TODO: Support returning multiple outputs (K, B, S) instead of requiring separate calls. This would require updating torch_feature_extractor.py and torch_feature_processors.py to handle 3D output tensors properly.
Returns either the periodicity strength (height of first significant autocorrelation peak) or the period estimate (lag of that peak). This is useful for detecting limit cycles vs chaos vs fixed points.
Uses FFT for efficient autocorrelation computation and local_maxima_1d for robust peak detection.
Args:

- x: Input tensor of shape (N, B, S) where N is timesteps, B is batch, S is states.
- min_lag: Minimum lag to search for peaks (to skip the lag-0 peak). Default 2.
- peak_threshold: Minimum autocorrelation value to consider as a peak. Default 0.3.
- output: Which value to return, "strength" or "period". Default "strength".

Returns: Tensor of shape (B, S) with either periodicity strength or period estimate.
Entropy / Complexity
pybasin.ts_torch.calculators.torch_features_entropy_complexity
Functions
permutation_entropy
Permutation entropy (fully vectorized GPU implementation).
binned_entropy
Entropy of binned distribution (vectorized).
fourier_entropy
Entropy of the power spectral density.
fourier_entropy_batched
Compute fourier_entropy for multiple bins values at once.
Computes FFT and PSD once, then returns entropy for each bins value. Note: The bins parameter is not actually used in the tsfresh implementation (it's always spectral entropy), so this returns the same value for all bins.
Args:

- x: Input tensor of shape (N, B, S)
- bins_list: List of bins values (not actually used in computation)

Returns: Tensor of shape (len(bins_list), B, S)
lempel_ziv_complexity
Lempel-Ziv complexity approximation (optimized).
approximate_entropy
Approximate entropy of the time series.
Frequency Domain
pybasin.ts_torch.calculators.torch_features_frequency
Functions
fft_coefficient
FFT coefficient attributes.
fft_aggregated
Aggregated FFT spectral statistics.
spkt_welch_density
Simplified Welch power spectral density at coefficient.
cwt_coefficients
cwt_coefficients(
x: Tensor,
widths: tuple[int, ...] = (2,),
coeff: int = 0,
w: int = 2,
) -> Tensor
CWT coefficients using Ricker wavelet (vectorized).
This matches tsfresh's cwt_coefficients interface:

- widths: tuple of scale values to compute CWT for
- coeff: coefficient index to extract from the convolution result
- w: which width from the widths tuple to use for the result
Note: This implementation uses a direct Ricker wavelet convolution which differs from tsfresh's pywt.cwt in normalization. Results have the same sign but different scaling. This is acceptable for feature extraction where relative patterns matter.
Args:

- x: Input tensor of shape (N, B, S)
- widths: Tuple of wavelet width (scale) parameters
- coeff: Coefficient index to extract
- w: Which width from widths to use (must be in widths)

Returns: Tensor of shape (B, S) with the CWT coefficient
fft_coefficient_batched
Compute FFT coefficients for multiple indices at once.
Args:

- x: Input tensor of shape (N, B, S)
- coeffs: List of coefficient indices to extract
- attr: Attribute to extract ("real", "imag", "abs", "angle")

Returns: Tensor of shape (len(coeffs), B, S)
fft_coefficient_all_attrs_batched
Compute all FFT attributes for multiple coefficients at once.
Args:

- x: Input tensor of shape (N, B, S)
- coeffs: List of coefficient indices

Returns: Tensor of shape (len(coeffs) * 4, B, S), ordered as [coeff0_real, coeff0_imag, coeff0_abs, coeff0_angle, coeff1_real, ...]
fft_aggregated_batched
Compute fft_aggregated for all aggregation types at once.
Computes FFT and PSD once, then returns all requested aggregations.
Args:

- x: Input tensor of shape (N, B, S)
- aggtypes: List of aggregation types ("centroid", "variance", "skew", "kurtosis")

Returns: Tensor of shape (len(aggtypes), B, S)
spkt_welch_density_batched
Compute spkt_welch_density for multiple coefficient indices at once.
Computes Welch PSD once, then extracts multiple coefficients.
Args:

- x: Input tensor of shape (N, B, S)
- coeffs: List of coefficient indices to extract

Returns: Tensor of shape (len(coeffs), B, S)
cwt_coefficients_batched
Compute CWT coefficients for all parameter combinations at once (GPU-optimized).
This function computes CWT once per unique width value, then uses vectorized advanced indexing to extract all requested coefficients in a single operation. The extraction step is fully vectorized with no Python loops.
Note: This implementation uses a direct Ricker wavelet convolution which differs from tsfresh's pywt.cwt in normalization. Results have the same sign but different scaling.
Args:

- x: Input tensor of shape (N, B, S)
- params: List of parameter dicts, each with keys:
    - "widths": tuple of int (e.g., (2, 5, 10, 20))
    - "coeff": int (coefficient index to extract)
    - "w": int (which width from widths to use)

Returns: Tensor of shape (len(params), B, S) with the CWT coefficient for each param set

Example:

    params = [
        {"widths": (2, 5, 10, 20), "coeff": c, "w": w}
        for c in range(15)
        for w in (2, 5, 10, 20)
    ]
    result = cwt_coefficients_batched(x, params)  # shape: (60, B, S)
spectral_frequency_ratio
Compute the ratio of 2nd to 1st dominant frequency.
This feature is critical for distinguishing period-doubling bifurcations:

- Period-1 limit cycle: ratio ≈ 2.0 (2nd peak is harmonic at 2f)
- Period-2 limit cycle: ratio ≈ 0.5 (2nd peak is subharmonic at f/2)
- Period-3 limit cycle: ratio ≈ 0.33 (2nd peak is subharmonic at f/3)
The function finds the two highest peaks in the power spectrum and returns the ratio of the 2nd dominant frequency to the 1st dominant frequency.
Args: x: Input tensor of shape (N, B, S) where N is timesteps, B is batch, S is states.
Returns: Tensor of shape (B, S) with the frequency ratio. Returns 0 if only one peak found.
Trend / Regression
pybasin.ts_torch.calculators.torch_features_trend
Functions
linear_trend
Linear regression trend attributes.
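The slope attribute, for instance, has a closed form along the time dimension; a hedged sketch (illustrative, not the library's implementation):

```python
import torch

def linear_trend_slope_sketch(x: torch.Tensor) -> torch.Tensor:
    # OLS slope of x against t = 0..N-1, computed in closed form along dim=0
    N = x.shape[0]
    t = torch.arange(N, dtype=x.dtype).view(N, 1, 1)
    t_c = t - t.mean()
    x_c = x - x.mean(dim=0)
    return (t_c * x_c).sum(dim=0) / (t_c ** 2).sum()  # (B, S)
```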
linear_trend_timewise
Linear trend (same as linear_trend for our use case).
agg_linear_trend
agg_linear_trend(
x: Tensor,
chunk_size: int = 10,
f_agg: str = "mean",
attr: str = "slope",
) -> Tensor
Linear trend on aggregated chunks.
ar_coefficient
AR model coefficients using Yule-Walker (optimized with FFT autocorrelation).
augmented_dickey_fuller
Simplified Augmented Dickey-Fuller test (vectorized).
linear_trend_batched
Compute linear regression trend for multiple attributes at once.
Args:

- x: Input tensor of shape (N, B, S)
- attrs: List of attributes ("slope", "intercept", "rvalue", "pvalue", "stderr")

Returns: Tensor of shape (len(attrs), B, S)
agg_linear_trend_batched
Compute agg_linear_trend for multiple parameter combinations at once.
Groups by (chunk_size, f_agg) to minimize redundant chunk aggregation, then computes all 4 trend attributes at once per group.
Args:

- x: Input tensor of shape (N, B, S)
- params: List of parameter dicts, each with keys:
    - "chunk_size": int (chunk size for aggregation, e.g., 5, 10, 50)
    - "f_agg": str ("mean", "var", "min", "max")
    - "attr": str ("slope", "intercept", "rvalue", "stderr")

Returns: Tensor of shape (len(params), B, S)

Example:

    params = [
        {"chunk_size": 5, "f_agg": "mean", "attr": "slope"},
        {"chunk_size": 5, "f_agg": "mean", "attr": "intercept"},
        {"chunk_size": 5, "f_agg": "var", "attr": "slope"},
        ...
    ]
    result = agg_linear_trend_batched(x, params)  # shape: (48, B, S)
ar_coefficient_batched
Compute AR model coefficients for multiple coeff indices at once.
Args:

- x: Input tensor of shape (N, B, S)
- k: AR model order
- coeffs: List of coefficient indices to return

Returns: Tensor of shape (len(coeffs), B, S)
augmented_dickey_fuller_batched
Compute augmented_dickey_fuller for multiple attributes at once.
Computes ADF test once and returns all requested attributes.
Args:

- x: Input tensor of shape (N, B, S)
- attrs: List of attributes ("teststat", "pvalue", "usedlag")

Returns: Tensor of shape (len(attrs), B, S)
Reoccurrence
pybasin.ts_torch.calculators.torch_features_reocurrance
Functions
percentage_of_reoccurring_datapoints_to_all_datapoints
Percentage of datapoints that are reoccurring (fully vectorized).
percentage_of_reoccurring_values_to_all_values
Percentage of unique values that appear more than once (optimized).
sum_of_reoccurring_data_points
Sum of values that appear more than once (optimized).
sum_of_reoccurring_values
Sum of unique values that appear more than once (optimized).
Advanced
pybasin.ts_torch.calculators.torch_features_advanced
Advanced feature calculators that don't fit cleanly into other categories.
These features use specialized algorithms or test unique properties:

- benford_correlation: Tests first-digit distribution
- c3: Non-linearity measure using triple products
- energy_ratio_by_chunks: Temporal energy distribution
- time_reversal_asymmetry_statistic: Temporal asymmetry measure
Functions
benford_correlation
Correlation with Benford's law distribution (vectorized).
energy_ratio_by_chunks
Energy ratio of a segment.
time_reversal_asymmetry_statistic
Time reversal asymmetry statistic.
energy_ratio_by_chunks_batched
energy_ratio_by_chunks_batched(
x: Tensor, num_segments: int, segment_focuses: list[int]
) -> Tensor
Compute energy ratio for multiple segment focuses at once.
Args:

- x: Input tensor of shape (N, B, S)
- num_segments: Number of segments to divide the series into
- segment_focuses: List of segment indices to focus on

Returns: Tensor of shape (len(segment_focuses), B, S)
c3_batched
Compute c3 for multiple lag values at once.
Args:

- x: Input tensor of shape (N, B, S)
- lags: List of lag values

Returns: Tensor of shape (len(lags), B, S)
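The c3 statistic is the average triple product at a given lag; a minimal sketch for a single lag (illustrative, not the library's code):

```python
import torch

def c3_sketch(x: torch.Tensor, lag: int) -> torch.Tensor:
    # E[x(t + 2*lag) * x(t + lag) * x(t)] over the valid time range: (N, B, S) -> (B, S)
    N = x.shape[0]
    return (x[2 * lag:] * x[lag:N - lag] * x[:N - 2 * lag]).mean(dim=0)
```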
time_reversal_asymmetry_statistic_batched
Compute time_reversal_asymmetry_statistic for multiple lag values at once.
Args:

- x: Input tensor of shape (N, B, S)
- lags: List of lag values

Returns: Tensor of shape (len(lags), B, S)
Dynamical Systems
pybasin.ts_torch.calculators.torch_features_dynamical
Dynamical systems feature calculators for time series.
All feature functions follow a consistent tensor shape convention:

- Input: `(N, B, S)` where `N` = timesteps, `B` = batch size, `S` = state variables
- Output: `(B, S)` for scalar features, or `(B, S, K)` for multi-valued features where `K` is the number of values
Features are computed along the time dimension (dim=0), preserving batch and state dimensions.
Functions
lyapunov_r
lyapunov_r(
x: Tensor,
emb_dim: int = 10,
lag: int = 1,
trajectory_len: int = 20,
tau: float = 1.0,
) -> Tensor
Compute largest Lyapunov exponent using Rosenstein algorithm.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input time series tensor of shape (N, B, S) where N=timesteps, B=batch size, S=states. | required |
| `emb_dim` | `int` | Embedding dimension for phase space reconstruction. | `10` |
| `lag` | `int` | Lag for delay embedding. | `1` |
| `trajectory_len` | `int` | Number of steps to follow divergence. | `20` |
| `tau` | `float` | Time step size for normalization. | `1.0` |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Tensor of shape (B, S) containing the largest Lyapunov exponent for each state of each batch. |
lyapunov_e
lyapunov_e(
x: Tensor,
emb_dim: int = 10,
matrix_dim: int = 4,
min_nb: int = 8,
min_tsep: int = 0,
tau: float = 1.0,
) -> Tensor
Compute multiple Lyapunov exponents using Eckmann algorithm.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input time series tensor of shape (N, B, S) where N=timesteps, B=batch size, S=states. | required |
| `emb_dim` | `int` | Embedding dimension for phase space reconstruction. | `10` |
| `matrix_dim` | `int` | Matrix dimension for Jacobian estimation (number of exponents to compute). | `4` |
| `min_nb` | `int` | Minimal number of neighbors required. | `8` |
| `min_tsep` | `int` | Minimal temporal separation between neighbors. | `0` |
| `tau` | `float` | Time step size for normalization. | `1.0` |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Tensor of shape (B, S, matrix_dim) containing the Lyapunov exponents. The third dimension contains matrix_dim exponents sorted from largest to smallest. |
correlation_dimension
Compute correlation dimension using Grassberger-Procaccia algorithm.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input time series tensor of shape (N, B, S) where N=timesteps, B=batch size, S=states. | required |
| `emb_dim` | `int` | Embedding dimension for phase space reconstruction. | `4` |
| `lag` | `int` | Lag for delay embedding. | `1` |
| `n_rvals` | `int` | Number of radius values to use in correlation integral. | `50` |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Tensor of shape (B, S) containing the correlation dimension for each state of each batch. |
friedrich_coefficients
Coefficients of polynomial fit to velocity vs position (fully batch vectorized).
max_langevin_fixed_point
Maximum fixed point of Langevin model (fully batch vectorized).
friedrich_coefficients_batched
Compute friedrich_coefficients for multiple coeff values at once.
Groups by (m, r) combinations and computes polynomial fit once per group, then extracts all requested coefficients.
Args:

- x: Input tensor of shape (N, B, S)
- params: List of parameter dicts, each with keys:
    - "m": int (polynomial degree)
    - "r": float (not used in computation, kept for API compatibility)
    - "coeff": int (coefficient index to extract)

Returns: Tensor of shape (len(params), B, S)