Entropy Pooling
Entropy pooling (Meucci, 2008) is a fully general framework for blending subjective views with a prior distribution over scenarios. Unlike Black-Litterman, which is limited to normal distributions and linear views, entropy pooling works with arbitrary scenario sets and supports inequality constraints, conditional views, and non-linear payoffs. Horizon implements the dual optimization in Rust for fast convergence.

View Blending
hz.entropy_pool() tilts a prior distribution to satisfy your views while staying as close as possible to the prior (minimum relative entropy).

Flexible Views
Express equality and inequality constraints on means, variances, correlations, or any linear function of the scenarios.
Posterior Analytics
hz.posterior_mean() and hz.posterior_covariance() extract portfolio-ready moments from the reweighted distribution.

Pipeline Integration
The hz.entropy_pooling() pipeline function updates posterior weights each cycle as views evolve.

hz.entropy_pool
Compute posterior scenario probabilities that satisfy a set of view constraints while minimizing the Kullback-Leibler divergence from the prior.

Parameters

| Parameter | Type | Description |
|---|---|---|
| prior_probs | list[float] | Prior scenario probabilities, length S (must sum to 1.0) |
| scenarios | list[list[float]] | S x N scenario matrix (S scenarios, N markets) |
| views | list[ViewConstraint] | List of view constraints to impose on the posterior |
EntropyPoolResult Type
| Field | Type | Description |
|---|---|---|
| posterior_probs | list[float] | Posterior scenario probabilities, length S, summing to 1.0 |
| relative_entropy | float | KL divergence D(posterior \|\| prior). Lower means the views are more compatible with the prior |
| converged | bool | Whether the dual optimization converged within tolerance |
| iterations | int | Number of iterations taken |
ViewConstraint Type
| Field | Type | Description |
|---|---|---|
| coefficients | list[float] | Linear combination weights, length N. The constraint applies to sum_j coefficients[j] * scenario[j] |
| bound | float | The target value for the view |
| equality | bool | True for an equality constraint (E[g(X)] = bound), False for an inequality (E[g(X)] >= bound) |
Expressing Views
Mean View (Equality)
“I believe the expected return of market 1 is exactly 5%.”

Mean View (Inequality)

“I believe market 2 will return at least 2%.”

Relative View

“Market 0 will outperform market 1 by at least 3%.”

Multiple Views
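The view patterns above can be sketched as follows. This uses a stand-in dataclass mirroring the documented ViewConstraint fields (coefficients, bound, equality) rather than Horizon's own type, and assumes N = 3 markets for illustration:

```python
from dataclasses import dataclass

@dataclass
class ViewConstraint:
    coefficients: list[float]  # length N, weights on each market
    bound: float               # target value for the view
    equality: bool             # True: E[g(X)] = bound; False: E[g(X)] >= bound

# Assume three markets indexed 0, 1, 2.

# Mean view (equality): expected return of market 1 is exactly 5%.
mean_eq = ViewConstraint(coefficients=[0.0, 1.0, 0.0], bound=0.05, equality=True)

# Mean view (inequality): market 2 returns at least 2%.
mean_ineq = ViewConstraint(coefficients=[0.0, 0.0, 1.0], bound=0.02, equality=False)

# Relative view: market 0 outperforms market 1 by at least 3%,
# i.e. E[x_0 - x_1] >= 0.03.
relative = ViewConstraint(coefficients=[1.0, -1.0, 0.0], bound=0.03, equality=False)

# Multiple views: pass them together as one list.
views = [mean_eq, mean_ineq, relative]
```

Each view is a linear functional of the scenario vector, so relative views are expressed purely through the sign pattern of the coefficients.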
Posterior Moments
After computing posterior probabilities, extract the mean and covariance for portfolio construction.

hz.posterior_mean
| Parameter | Type | Description |
|---|---|---|
| probs | list[float] | Scenario probabilities (posterior), length S |
| scenarios | list[list[float]] | S x N scenario matrix |
list[float]: posterior mean vector, length N.
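The computation behind this is a probability-weighted average over scenarios; a minimal pure-Python sketch (not Horizon's Rust implementation):

```python
def posterior_mean(probs: list[float], scenarios: list[list[float]]) -> list[float]:
    """Probability-weighted mean: mean_j = sum_s probs[s] * scenarios[s][j]."""
    n = len(scenarios[0])
    return [sum(p * row[j] for p, row in zip(probs, scenarios)) for j in range(n)]

# Two scenarios over two markets, posterior weights 0.25 / 0.75.
mu = posterior_mean([0.25, 0.75], [[0.04, 0.01], [0.08, 0.03]])
# mu[0] = 0.25*0.04 + 0.75*0.08 = 0.07
```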
hz.posterior_covariance
| Parameter | Type | Description |
|---|---|---|
| probs | list[float] | Scenario probabilities (posterior), length S |
| scenarios | list[list[float]] | S x N scenario matrix |
list[list[float]]: posterior N x N covariance matrix.
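The covariance is likewise a probability-weighted second moment about the posterior mean; a self-contained sketch of the same computation (illustrative, not Horizon's implementation):

```python
def posterior_mean(probs, scenarios):
    n = len(scenarios[0])
    return [sum(p * row[j] for p, row in zip(probs, scenarios)) for j in range(n)]

def posterior_covariance(probs, scenarios):
    """Weighted covariance: C[i][j] = sum_s q_s * (x_si - mu_i) * (x_sj - mu_j)."""
    mu = posterior_mean(probs, scenarios)
    n = len(mu)
    cov = [[0.0] * n for _ in range(n)]
    for q, row in zip(probs, scenarios):
        d = [x - m for x, m in zip(row, mu)]  # deviation of this scenario from the mean
        for i in range(n):
            for j in range(n):
                cov[i][j] += q * d[i] * d[j]
    return cov
```

Note this is the population (probability-weighted) covariance, with no sample-size correction, since the weights are exact scenario probabilities.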
Pipeline Integration
The hz.entropy_pooling() pipeline function recomputes posterior probabilities each cycle as your model views evolve, injecting the result into ctx.params["entropy_pool"].
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| feed | str | required | Feed name to build scenarios from |
| scenario_lookback | int | 100 | Number of historical observations to use as scenarios |
| view_fn | callable | required | Function (ctx) -> list[ViewConstraint] that generates views each cycle |
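A view_fn takes the pipeline context and returns the current list of constraints. A sketch is below; the Ctx stub and its params keys ("model_confidence") are placeholders for illustration, not part of Horizon's documented context:

```python
from dataclasses import dataclass, field

@dataclass
class ViewConstraint:
    coefficients: list[float]
    bound: float
    equality: bool

@dataclass
class Ctx:
    """Stand-in for the pipeline context; only the params dict is assumed here."""
    params: dict = field(default_factory=dict)

def view_fn(ctx) -> list[ViewConstraint]:
    # Hypothetical mapping: tighten the minimum-return view as model confidence grows.
    confidence = ctx.params.get("model_confidence", 0.5)
    floor = 0.01 + 0.04 * confidence
    # Inequality view: market 1's expected return is at least `floor`.
    return [ViewConstraint(coefficients=[0.0, 1.0, 0.0], bound=floor, equality=False)]
```

Because view_fn is re-evaluated each cycle, views can track any state the context exposes, and the posterior weights follow.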
Mathematical Background
Entropy Pooling Algorithm
Given S scenarios with prior probabilities p = (p_1, …, p_S) and K view constraints of the form E_q[g_k(X)] = b_k (or >= b_k), entropy pooling finds the posterior q that minimizes

D(q || p) = sum_s q_s * ln(q_s / p_s)

subject to sum_s q_s = 1 and the view constraints. The dual problem is unconstrained and convex, and is solved via Newton’s method. The posterior has an exponential-family form:

q_s ∝ p_s * exp(sum_k lambda_k * g_k(x_s))

where the lambda_k are the dual variables (Lagrange multipliers).
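The exponential-family form can be verified on a toy problem. The sketch below handles a single equality mean view E_q[x] = b, solving for the one dual variable lambda by Newton's method on the dual first-order condition (gradient E_q[g] - b, curvature Var_q[g]); it is illustrative only, not Horizon's solver:

```python
import math

def entropy_pool_one_view(prior, g, b, iters=50):
    """Find lambda so that q_s ∝ p_s * exp(lambda * g_s) satisfies E_q[g] = b."""
    lam = 0.0
    for _ in range(iters):
        w = [p * math.exp(lam * gs) for p, gs in zip(prior, g)]
        z = sum(w)                                   # normalizing constant
        q = [x / z for x in w]
        mean = sum(qs * gs for qs, gs in zip(q, g))  # current E_q[g]
        var = sum(qs * gs * gs for qs, gs in zip(q, g)) - mean * mean
        if var < 1e-18:
            break
        lam -= (mean - b) / var                      # Newton step on the dual
    return q

# Three equally likely scenarios for one market; impose E_q[x] = 0.03.
q = entropy_pool_one_view([1/3, 1/3, 1/3], [0.00, 0.02, 0.06], 0.03)
```

Since the prior mean here is about 0.027, the view shifts probability toward the high-return scenario, exactly the exponential tilt the formula above prescribes.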
Comparison with Black-Litterman
Black-Litterman is a special case of entropy pooling where: (a) the prior is normal, (b) views are linear equality constraints on expected returns, and (c) the posterior is also normal. Entropy pooling generalizes this to arbitrary scenario distributions, inequality constraints, and non-Gaussian priors. This is important for prediction markets where return distributions are bounded (prices must stay in [0, 1]) and heavy-tailed.
Relative Entropy Interpretation
The relative entropy D(q || p) measures the information cost of the views. Views that are highly inconsistent with the prior will produce a large D, indicating that the posterior has moved far from your baseline. Monitoring D over time can signal when your model views are becoming extreme or contradictory.
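The monitoring quantity is straightforward to compute from the two probability vectors; a minimal sketch (the relative_entropy field of EntropyPoolResult reports the same quantity):

```python
import math

def relative_entropy(q: list[float], p: list[float]) -> float:
    """D(q || p) = sum_s q_s * ln(q_s / p_s); the information cost of the views."""
    # Terms with q_s = 0 contribute 0 by the convention 0 * ln 0 = 0.
    return sum(qs * math.log(qs / ps) for qs, ps in zip(q, p) if qs > 0.0)

# Identical distributions carry zero cost; tilting away from the prior costs > 0.
d0 = relative_entropy([0.5, 0.5], [0.5, 0.5])  # 0.0
d1 = relative_entropy([0.9, 0.1], [0.5, 0.5])  # strictly positive
```

A rising series of these values across cycles is the signal described above: the views are drifting further from the baseline distribution.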