Horizon includes a Monte Carlo simulation engine for stress-testing prediction market portfolios. The entire simulation runs in Rust with a custom PRNG (xoshiro256++), Box-Muller normal generation, and Cholesky decomposition for correlated outcomes. No external dependencies.
Monte Carlo simulation answers the question: “Given my current positions and probability estimates, what is the distribution of possible portfolio outcomes?” This helps you understand tail risk (VaR, CVaR), win probability, and the impact of correlation between positions.
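For intuition, VaR 95 and CVaR 95 can be computed directly from a simulated P&L sample. This is a standalone NumPy sketch of the metrics themselves, not Horizon's internal implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
pnl = rng.normal(loc=5.0, scale=50.0, size=10_000)  # toy P&L sample

# VaR 95: the loss threshold exceeded in only 5% of scenarios
var_95 = -np.percentile(pnl, 5)

# CVaR 95: the average loss within the worst 5% of scenarios.
# By construction CVaR is always at least as large as VaR.
tail = pnl[pnl <= np.percentile(pnl, 5)]
cvar_95 = -tail.mean()
```

CVaR answers "how bad is it when things go wrong," which is why it is the better metric for fat-tailed portfolios.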
```python
result = hz.simulate(
    engine=None,        # Engine (auto-extract positions)
    positions=None,     # Explicit positions (overrides engine)
    scenarios=10000,    # Number of scenarios
    correlations=None,  # dict[tuple[str, str], float] correlation pairs
    prices=None,        # dict[str, float] override current prices
    seed=None,          # Random seed
)
```
The wrapper provides two key conveniences:

- **Auto-extraction from Engine:** if `engine` is passed, positions are extracted from `engine.positions()` and current prices from `engine.all_feed_snapshots()`.
- **Correlation dict:** instead of building a full NxN matrix, pass a dict of pairwise correlations:
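To see what the dict form saves you, here is a sketch of expanding pairwise entries into the full symmetric matrix a correlated simulation ultimately needs. The market names are illustrative, and this mirrors, but is not, Horizon's internal logic:

```python
markets = ["FED_CUT_MARCH", "FED_CUT_MAY", "RECESSION_2025"]  # hypothetical markets
pairs = {
    ("FED_CUT_MARCH", "FED_CUT_MAY"): 0.8,      # strongly related outcomes
    ("FED_CUT_MARCH", "RECESSION_2025"): -0.3,  # mildly opposed outcomes
}

# Build the full NxN matrix: 1.0 on the diagonal, pairwise entries
# mirrored across it, and unlisted pairs treated as independent (0.0).
n = len(markets)
idx = {m: i for i, m in enumerate(markets)}
matrix = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
for (a, b), rho in pairs.items():
    matrix[idx[a]][idx[b]] = rho
    matrix[idx[b]][idx[a]] = rho
```

With three markets this saves little typing, but with dozens of positions the dict form lets you state only the correlations you actually have a view on.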
```python
engine = hz.Engine(risk_config=hz.RiskConfig(max_position_per_market=200))

# ... trade for a while, build up positions ...

# Stress test current portfolio
result = hz.simulate(engine=engine, scenarios=50000)
print(f"Portfolio VaR 95: ${result.var_95:.2f}")
```
```python
# Same seed = identical results
r1 = hz.monte_carlo(positions, 10000, None, 42)
r2 = hz.monte_carlo(positions, 10000, None, 42)
assert r1.mean_pnl == r2.mean_pnl  # Exact match

# Different seed = different (but statistically similar) results
r3 = hz.monte_carlo(positions, 10000, None, 99)
# r3.mean_pnl will be close to r1.mean_pnl but not identical
```
Find optimal strategy parameters using a Gaussian Process surrogate with Expected Improvement acquisition.
```python
from horizon import bayesian_optimize

# Define a pipeline factory: takes params, returns pipeline list
def make_pipeline(params):
    return [
        hz.market_maker(feed_name="book", gamma=params["gamma"], size=params["size"]),
    ]

result = bayesian_optimize(
    data=historical_data,           # same format as hz.backtest()
    pipeline_factory=make_pipeline,
    search_space={
        "gamma": (0.1, 1.0),
        "size": (1.0, 20.0),
    },
    objective="sharpe_ratio",       # metric to maximize from BacktestResult
    n_trials=50,
    n_initial=10,
    seed=42,
)

print(result.best_params)  # {"gamma": 0.5, "size": 8.2}
print(result.best_score)   # 1.85 (best Sharpe found)
print(result.n_trials)     # 50
print(result.all_trials)   # List of {"params": ..., "score": ...}
print(result.convergence)  # Best score at each trial index
```
Zero external dependencies. The optimizer fits an RBF-kernel Gaussian Process surrogate, using Cholesky decomposition for its posterior solves.
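For intuition, here is a minimal GP posterior with an RBF kernel, using a Cholesky factorization for the solve. This is a standalone NumPy sketch of the technique (following the standard textbook algorithm), not Horizon's Rust implementation:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.2):
    """RBF (squared-exponential) kernel between two 1-D point sets."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """GP posterior mean/std via Cholesky (Rasmussen & Williams, Alg. 2.1)."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    L = np.linalg.cholesky(K)  # K = L @ L.T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    K_s = rbf_kernel(x_train, x_test)
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = 1.0 - np.sum(v**2, axis=0)  # k(x, x) = 1 for the RBF kernel
    return mean, np.sqrt(np.maximum(var, 0.0))

# Observations of a toy objective
x = np.array([0.1, 0.4, 0.7])
y = np.sin(3 * x)
mean, std = gp_posterior(x, y, np.array([0.4, 0.9]))
# The posterior interpolates observed points (std near zero at x=0.4)
# and is uncertain away from them (larger std at x=0.9).
```

Expected Improvement then favors test points where the posterior mean is high, the posterior std is high, or both, which is what balances exploitation against exploration across the `n_trials` budget.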
Monte Carlo simulation assumes current_price represents the true probability. If your price estimates are poor, the simulation results will be misleading. Consider running simulations across a range of probability assumptions to understand sensitivity.
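A self-contained sketch of that sensitivity check for a single binary position, using only the standard library. This is not Horizon's API; it just shows how the conclusion moves as the assumed true probability drifts away from the entry price:

```python
import random

def simulate_pnl(prob, shares=100, entry_price=0.60, scenarios=10_000, seed=42):
    """Toy Monte Carlo: mean P&L of a long YES position under an assumed true probability."""
    rng = random.Random(seed)
    pnl = []
    for _ in range(scenarios):
        payout = 1.0 if rng.random() < prob else 0.0  # binary market resolution
        pnl.append(shares * (payout - entry_price))
    return sum(pnl) / len(pnl)

# Sweep the assumed probability around the entry price of 0.60
for prob in (0.50, 0.55, 0.60, 0.65, 0.70):
    print(f"assumed prob={prob:.2f}  mean P&L={simulate_pnl(prob):+.1f}")
```

If the portfolio's risk profile flips sign within a plausible range of probability assumptions, the simulation is telling you more about your estimates than about the market.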