Pro Feature. Requires a Pro or Ultra subscription. Get started at api.mathematicalcompany.com
Markov Regime Detection
Horizon includes a Hidden Markov Model (HMM) with Gaussian emissions, implemented entirely in Rust. It classifies the current market into regimes (e.g., calm vs. volatile) in real time, with a per-tick cost of O(N^2), where N is the number of states (typically 2-3).

Regime detection lets your strategy adapt its behavior to market conditions: widen spreads in volatile regimes, reduce size in crisis regimes, or disable quoting entirely when the model signals a regime change.
Overview
Rust HMM
Full Baum-Welch EM training, Viterbi decoding, forward-backward smoothing. All in Rust with zero Python overhead.
O(N^2) Online Filter
Per-tick forward filter costs ~9 multiplies for 3 states. Effectively zero latency added to your pipeline.
Auto-Train Mode
No pre-trained model? Collect prices during warmup and train inline. Works in both live and backtest.
Pipeline Integration
Drop hz.markov_regime() into any pipeline. It injects regime info into ctx.params for downstream use.

Quick Start
Pre-Trained Model
Train offline on historical returns, then use the model in live trading.

Auto-Train Mode
No historical data? Train inline during warmup. During the warmup period, ctx.params["regime"] is not set.
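Under assumed semantics, the warmup flow looks roughly like this pure-Python sketch. Note that make_regime_fn, the 2-sigma threshold classifier, and the warmup default are all illustrative stand-ins, not the real Gaussian HMM:

```python
import math

def make_regime_fn(warmup=3):
    """Illustrative sketch of the auto-train flow (NOT the Rust HMM):
    buffer log returns during warmup, fit a stand-in model once the
    buffer is full, then classify every subsequent tick."""
    buf = []
    state = {"trained": False, "mu": 0.0, "sigma": 0.0, "last": None}

    def step(price):
        if state["last"] is None:            # first tick: no return yet
            state["last"] = price
            return None
        r = math.log(price / state["last"])  # log return for this tick
        state["last"] = price
        if not state["trained"]:             # warmup: collect, emit nothing
            buf.append(r)
            if len(buf) >= warmup:
                state["mu"] = sum(buf) / len(buf)
                var = sum((x - state["mu"]) ** 2 for x in buf) / len(buf)
                state["sigma"] = var ** 0.5
                state["trained"] = True
            return None
        # stand-in classifier: regime 1 ("volatile") when the return
        # deviates from the warmup mean by more than 2 sigma, else 0 ("calm")
        return 1 if abs(r - state["mu"]) > 2 * state["sigma"] else 0

    return step
```

Until the buffer is full, step() returns None, mirroring how ctx.params["regime"] stays unset during warmup.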
MarkovRegimeModel (Rust)
The core HMM class. Use this directly for offline analysis, or pass it to hz.markov_regime() for pipeline use.
Constructor
| Parameter | Type | Default | Description |
|---|---|---|---|
| n_states | int | required | Number of hidden states (2-8) |
fit()
Train the model using Baum-Welch EM on a series of observations (log returns).

| Parameter | Type | Default | Description |
|---|---|---|---|
| data | list[float] | required | Observation sequence (log returns). Min 10 observations. |
| max_iters | int | 100 | Maximum EM iterations |
| tol | float | 1e-6 | Convergence tolerance on log-likelihood |
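fit() consumes log returns rather than raw prices. The library provides hz.prices_to_returns for the conversion; the function below is an assumed pure-Python equivalent, shown only to make the expected input format concrete:

```python
import math

def prices_to_returns(prices):
    """Convert a price series to log returns: r_t = ln(p_t / p_{t-1}).
    Stand-in for hz.prices_to_returns; the output list is one element
    shorter than the input and is what fit() expects."""
    return [math.log(b / a) for a, b in zip(prices, prices[1:])]
```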
decode()
Find the most likely state sequence using the Viterbi algorithm.

filter_step()

Online forward filter. Process one observation and update state probabilities. This is the hot path for live trading.

predict()

One-step-ahead prediction: given the current filtered state, what are the probabilities for the next time step?

smooth()

Full forward-backward smoothing on a batch of observations. More accurate than filtering alone.

Other Methods
| Method | Returns | Description |
|---|---|---|
| transition_matrix() | list[list[float]] | N x N state transition probabilities |
| emission_params() | list[tuple[float, float]] | (mean, variance) for each state’s Gaussian |
| current_regime() | int | Most likely current state from the filter |
| filtered_probs() | list[float] | Current filtered state probabilities |
| reset_filter() | None | Reset filter to initial state (for re-processing) |
| n_states() | int | Number of states |
| is_trained() | bool | Whether fit() has been called |
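As a reference for what smooth() computes, here is an illustrative pure-Python forward-backward pass for Gaussian emissions. This is a sketch, not the Rust implementation; the parameter layout (A, pi, per-state (mean, variance) pairs) is assumed:

```python
import math

def gauss_pdf(x, mean, var):
    """Gaussian likelihood of observation x under one state."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def smooth(obs, A, pi, emis):
    """Forward-backward smoothing: P(state_t = j | ALL observations).
    A: N x N transition matrix, pi: initial distribution,
    emis: per-state (mean, variance) Gaussian parameters."""
    N, T = len(pi), len(obs)
    B = [[gauss_pdf(o, *emis[j]) for j in range(N)] for o in obs]
    # forward pass (scaled per step to avoid underflow)
    alpha = [[pi[j] * B[0][j] for j in range(N)]]
    for t in range(1, T):
        a = [B[t][j] * sum(alpha[-1][i] * A[i][j] for i in range(N))
             for j in range(N)]
        s = sum(a)
        alpha.append([x / s for x in a])
    # backward pass
    beta = [[1.0] * N for _ in range(T)]
    for t in range(T - 2, -1, -1):
        b = [sum(A[j][k] * B[t + 1][k] * beta[t + 1][k] for k in range(N))
             for j in range(N)]
        s = sum(b)
        beta[t] = [x / s for x in b]
    # combine and normalize: gamma_t(j) proportional to alpha_t(j) * beta_t(j)
    out = []
    for t in range(T):
        g = [alpha[t][j] * beta[t][j] for j in range(N)]
        s = sum(g)
        out.append([x / s for x in g])
    return out
```

Because smoothing conditions on the whole batch, late observations can revise early state estimates, which is why it is more accurate than filtering alone.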
hz.prices_to_returns
Convenience function to convert a price series to log returns.

hz.markov_regime() Pipeline Factory

Creates a pipeline function that classifies the current regime on each tick.

| Parameter | Type | Default | Description |
|---|---|---|---|
| model | MarkovRegimeModel | None | Pre-trained model. If None, auto-trains after warmup. |
| n_states | int | 2 | Number of states (only used if model is None) |
| warmup | int | 100 | Ticks to collect before auto-training |
| feed | str | None | Feed name to read price from. None = first available. |
| param_name | str | "regime" | Key prefix in ctx.params |
Injected Parameters
After training and sufficient data, the function injects the following keys:

| Key | Type | Description |
|---|---|---|
| ctx.params["regime"] | int | Most likely state (0 = calm, highest = volatile) |
| ctx.params["regime_probs"] | list[float] | State probabilities |
| ctx.params["regime_vol_state"] | float | P(highest-volatility state) |
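A downstream consumer might use these keys along the lines of the following sketch. Note that adaptive_spread, widen_factor, and the scaling rule are hypothetical illustrations, not part of the API:

```python
def adaptive_spread(base_spread, params, widen_factor=3.0):
    """Scale a quoted spread by the probability of the highest-volatility
    regime, read from the injected ctx.params["regime_vol_state"] key.
    Quotes base_spread in a fully calm market and widens linearly toward
    widen_factor * base_spread as P(volatile) approaches 1."""
    p_vol = params.get("regime_vol_state", 0.0)  # key absent until trained
    return base_spread * (1.0 + (widen_factor - 1.0) * p_vol)
```

Using .get() with a default of 0.0 keeps the strategy quoting at its base spread during warmup, when the regime keys are not yet set.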
Examples
Regime-Adaptive Market Maker
Widen spreads in volatile regimes.

Regime-Gated Trading
Only trade in calm regimes.

Backtest with Regime Detection
Offline Analysis
Use the HMM directly for research without a pipeline.

Mathematical Background
Hidden Markov Model

An HMM assumes the market moves between a small number of hidden states according to a Markov transition matrix, and that each state emits observations (here, log returns) from its own Gaussian distribution.
Baum-Welch Training
The Baum-Welch algorithm (a special case of EM) iteratively:
- E-step: Run forward-backward to compute state occupation probabilities given current parameters
- M-step: Re-estimate parameters (A, mu, sigma^2) from the occupation probabilities
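The M-step's emission update can be sketched as gamma-weighted moments (illustrative pure Python, not the Rust code; gamma is the E-step occupation matrix, one row per observation):

```python
def m_step_emissions(obs, gamma):
    """M-step for Gaussian emissions: each state's mean and variance
    are re-estimated as moments of the observations, weighted by that
    state's occupation probabilities gamma_t(j) from the E-step."""
    N = len(gamma[0])
    params = []
    for j in range(N):
        w = [g[j] for g in gamma]        # occupation probs for state j
        tot = sum(w)
        mean = sum(wi * o for wi, o in zip(w, obs)) / tot
        var = sum(wi * (o - mean) ** 2 for wi, o in zip(w, obs)) / tot
        params.append((mean, var))
    return params
```

With hard (0/1) occupation probabilities this reduces to per-cluster sample moments; soft probabilities let every observation contribute to every state.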
Viterbi Decoding
The Viterbi algorithm finds the single most likely state sequence using dynamic programming. Runs in O(T * N^2) time where T is the sequence length and N is the number of states.
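A minimal log-space Viterbi sketch (illustrative pure Python; it assumes obs_loglik holds per-state log-likelihoods rather than raw observations):

```python
import math

def viterbi(obs_loglik, A, pi):
    """Most likely state path via dynamic programming, in log space
    to avoid underflow. obs_loglik[t][j] = log P(observation t | state j).
    Runs in O(T * N^2): the max over i for every (t, j) pair."""
    N, T = len(pi), len(obs_loglik)
    logA = [[math.log(a) for a in row] for row in A]
    delta = [[math.log(pi[j]) + obs_loglik[0][j] for j in range(N)]]
    back = []
    for t in range(1, T):
        row, ptr = [], []
        for j in range(N):
            best_i = max(range(N), key=lambda i: delta[-1][i] + logA[i][j])
            ptr.append(best_i)
            row.append(delta[-1][best_i] + logA[best_i][j] + obs_loglik[t][j])
        delta.append(row)
        back.append(ptr)
    # backtrack from the best final state
    path = [max(range(N), key=lambda j: delta[-1][j])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    return path
```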
Online Forward Filter
The forward filter recursively computes P(state_t | observations_1..t). The unnormalized update is:

alpha_t(j) = b_j(o_t) * sum_i alpha_(t-1)(i) * A_ij

where A_ij is the i -> j transition probability and b_j(o_t) is the Gaussian likelihood of observation o_t in state j. Then normalize:

P(state_t = j) = alpha_t(j) / sum_j alpha_t(j)

This is O(N^2) per time step. For N=3 (typical), that’s 9 multiplies plus normalization.
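The update above can be sketched directly (illustrative pure Python, not the Rust hot path; emis holds each state's (mean, variance)):

```python
import math

def gauss_pdf(x, mean, var):
    """Gaussian likelihood of observation x under one state."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def filter_step(probs, obs, A, emis):
    """One O(N^2) forward-filter update: propagate the previous state
    distribution through the transition matrix (N*N multiplies in the
    inner sums), weight by the likelihood of the new observation,
    then normalize."""
    N = len(probs)
    alpha = [gauss_pdf(obs, *emis[j]) * sum(probs[i] * A[i][j] for i in range(N))
             for j in range(N)]
    s = sum(alpha)
    return [a / s for a in alpha]
```

Feeding each new return through filter_step and carrying the output forward as the next call's probs reproduces the online filter that the pipeline runs on every tick.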