Ultra Feature. Requires an Ultra subscription. Get started at api.mathematicalcompany.com
What is this? You have a view (‘I think candidate X has a 70% chance of winning’) and historical scenario data. Entropy pooling finds the probability distribution closest to your prior that satisfies your views. Unlike Black-Litterman, it works with any distribution, supports inequality views (‘at least 60%’), and handles non-linear constraints. Use it to systematically blend subjective views with data-driven priors.

Entropy Pooling

Entropy pooling (Meucci, 2008) is a fully general framework for blending subjective views with a prior distribution over scenarios. Unlike Black-Litterman, which is limited to normal distributions and linear views, entropy pooling works with arbitrary scenario sets and supports inequality constraints, conditional views, and non-linear payoffs. Horizon implements the dual optimization in Rust for fast convergence.

View Blending

hz.entropy_pool() tilts a prior distribution to satisfy your views while staying as close as possible to the prior (minimum relative entropy).

Flexible Views

Express equality and inequality constraints on means, variances, correlations, or any linear function of the scenarios.

Posterior Analytics

hz.posterior_mean() and hz.posterior_covariance() extract portfolio-ready moments from the reweighted distribution.

Pipeline Integration

The hz.entropy_pooling() pipeline function recomputes posterior weights each cycle as your views evolve.

hz.entropy_pool

Compute posterior scenario probabilities that satisfy a set of view constraints while minimizing the Kullback-Leibler divergence from the prior.
import horizon as hz

# 5 scenarios, 3 markets
scenarios = [
    [0.10, -0.05,  0.02],
    [0.05,  0.03, -0.01],
    [-0.02, 0.08,  0.04],
    [0.03, -0.02,  0.06],
    [-0.08, 0.01,  0.03],
]

# Equal prior (uniform)
prior_probs = [0.2, 0.2, 0.2, 0.2, 0.2]

# Views: "expected return of market 0 is at least 3%"
views = [
    hz.ViewConstraint(
        coefficients=[1.0, 0.0, 0.0],  # select market 0
        bound=0.03,
        equality=False,  # inequality: E[r_0] >= 0.03
    ),
]

result = hz.entropy_pool(prior_probs, scenarios, views)
print(result.posterior_probs)   # reweighted scenario probabilities
print(result.relative_entropy)  # KL divergence from prior
print(result.converged)         # True if optimization converged
Parameters:

prior_probs (list[float]): Prior scenario probabilities, length S (must sum to 1.0)
scenarios (list[list[float]]): S x N scenario matrix (S scenarios, N markets)
views (list[ViewConstraint]): List of view constraints to impose on the posterior

EntropyPoolResult Type

Fields:

posterior_probs (list[float]): Posterior scenario probabilities, length S, summing to 1.0
relative_entropy (float): KL divergence D(posterior || prior). Lower means the views are more compatible with the prior
converged (bool): Whether the dual optimization converged within tolerance
iterations (int): Number of iterations taken

ViewConstraint Type

Fields:

coefficients (list[float]): Linear combination weights, length N. The constraint applies to sum_j coefficients[j] * scenario[j]
bound (float): The target value for the view
equality (bool): True for an equality constraint (E[g(X)] = bound), False for an inequality (E[g(X)] >= bound)

Expressing Views

Mean View (Equality)

“I believe the expected return of market 1 is exactly 5%.”
view = hz.ViewConstraint(
    coefficients=[0.0, 1.0, 0.0],
    bound=0.05,
    equality=True,
)

Mean View (Inequality)

“I believe market 2 will return at least 2%.”
view = hz.ViewConstraint(
    coefficients=[0.0, 0.0, 1.0],
    bound=0.02,
    equality=False,
)

Relative View

“Market 0 will outperform market 1 by at least 3%.”
view = hz.ViewConstraint(
    coefficients=[1.0, -1.0, 0.0],
    bound=0.03,
    equality=False,
)

Multiple Views

views = [
    hz.ViewConstraint([1.0, 0.0, 0.0], bound=0.04, equality=False),
    hz.ViewConstraint([0.0, 1.0, -1.0], bound=0.01, equality=True),
]
result = hz.entropy_pool(prior_probs, scenarios, views)

Posterior Moments

After computing posterior probabilities, extract the mean and covariance for portfolio construction.

hz.posterior_mean

import horizon as hz

mean = hz.posterior_mean(result.posterior_probs, scenarios)
print(mean)  # [0.042, 0.031, 0.025] - posterior expected returns
Parameters:

probs (list[float]): Scenario probabilities (posterior), length S
scenarios (list[list[float]]): S x N scenario matrix

Returns list[float]: posterior mean vector, length N.

hz.posterior_covariance

cov = hz.posterior_covariance(result.posterior_probs, scenarios)
print(cov)  # 3x3 posterior covariance matrix
Parameters:

probs (list[float]): Scenario probabilities (posterior), length S
scenarios (list[list[float]]): S x N scenario matrix

Returns list[list[float]]: posterior N x N covariance matrix.
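Both moments are ordinary probability-weighted moments taken under the reweighted scenario distribution. A minimal NumPy sketch of the computation (illustrative only, not the hz implementation):

```python
import numpy as np

def posterior_mean_sketch(probs, scenarios):
    # E_q[X_j] = sum_s q_s * x_{s,j}
    q = np.asarray(probs, dtype=float)
    X = np.asarray(scenarios, dtype=float)
    return q @ X

def posterior_covariance_sketch(probs, scenarios):
    # Cov_q[X] = E_q[(X - mu)(X - mu)^T] under the posterior weights
    q = np.asarray(probs, dtype=float)
    X = np.asarray(scenarios, dtype=float)
    mu = q @ X
    centered = X - mu
    return (q[:, None] * centered).T @ centered

scenarios = [
    [0.10, -0.05,  0.02],
    [0.05,  0.03, -0.01],
    [-0.02, 0.08,  0.04],
]
mu = posterior_mean_sketch([1/3] * 3, scenarios)
cov = posterior_covariance_sketch([1/3] * 3, scenarios)
```

With uniform weights this reduces to the sample mean and (biased) sample covariance; with non-uniform posterior weights it tilts both toward the scenarios your views favor.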

Pipeline Integration

The hz.entropy_pooling() pipeline function recomputes posterior probabilities each cycle as your model views evolve, injecting the result into ctx.params["entropy_pool"].
import horizon as hz

def view_generator(ctx):
    """Generate views from your model each cycle."""
    price = ctx.feeds["poly"].price
    views = []
    if price < 0.50:
        # Bullish view: expected return >= 5%
        views.append(hz.ViewConstraint([1.0], bound=0.05, equality=False))
    return views

def model(ctx):
    ep = ctx.params.get("entropy_pool")
    if ep is None or not ep.converged:
        return []

    posterior_mean = hz.posterior_mean(ep.posterior_probs, ctx.params["scenarios"])
    fair = 0.50 + posterior_mean[0]
    return hz.quotes(fair=fair, spread=0.04, size=10)

hz.run(
    name="entropy-pooling-mm",
    markets=["election"],
    pipeline=[
        hz.entropy_pooling(
            feed="poly",
            scenario_lookback=100,
            view_fn=view_generator,
        ),
        model,
    ],
    feeds={"poly": hz.PolymarketBook(token_id="0x123...")},
    interval=5.0,
)

Parameters

feed (str, required): Feed name to build scenarios from
scenario_lookback (int, default 100): Number of historical observations to use as scenarios
view_fn (callable, required): Function (ctx) -> list[ViewConstraint] that generates views each cycle

Mathematical Background

Given S scenarios with prior probabilities p = (p_1, …, p_S) and K view constraints of the form E_q[g_k(X)] = b_k (or >= b_k), entropy pooling finds the posterior q that minimizes

D(q || p) = sum_s q_s * ln(q_s / p_s)

subject to sum_s q_s = 1 and the view constraints. The dual problem is unconstrained and convex, and is solved via Newton's method. The posterior has an exponential-family form:

q_s proportional to p_s * exp(sum_k lambda_k * g_k(x_s))

where lambda_k are the dual variables (Lagrange multipliers).
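The dual described above can be sketched in a few lines of NumPy/SciPy: minimize log Z(lambda) - lambda . b, constraining lambda_k >= 0 for inequality views, then recover q from the exponential-family form. This is an illustrative reimplementation under those assumptions, not the Rust solver hz actually ships (and it uses a generic quasi-Newton method rather than hz's Newton iteration):

```python
import numpy as np
from scipy.optimize import minimize

def entropy_pool_sketch(prior, scenarios, coeffs, targets, equalities):
    """Minimize D(q || p) subject to E_q[c_k . x] = b_k (or >= b_k)."""
    p = np.asarray(prior, dtype=float)
    X = np.asarray(scenarios, dtype=float)    # S x N
    C = np.asarray(coeffs, dtype=float)       # K x N view coefficients
    b = np.asarray(targets, dtype=float)      # K view targets
    G = X @ C.T                               # S x K: g_k(x_s) per scenario

    def dual(lam):
        # log Z(lambda) - lambda . b, via log-sum-exp for stability
        logits = np.log(p) + G @ lam
        m = logits.max()
        return m + np.log(np.exp(logits - m).sum()) - lam @ b

    # lambda_k is free for an equality view, >= 0 for a ">= bound" view
    box = [(None, None) if eq else (0.0, None) for eq in equalities]
    res = minimize(dual, np.zeros(len(b)), method="L-BFGS-B", bounds=box)

    # q_s proportional to p_s * exp(sum_k lambda_k * g_k(x_s))
    logits = np.log(p) + G @ res.x
    q = np.exp(logits - logits.max())
    return q / q.sum()

# The running example: uniform prior, view E[r_0] >= 3% (prior mean is 1.6%)
scenarios = [
    [0.10, -0.05,  0.02],
    [0.05,  0.03, -0.01],
    [-0.02, 0.08,  0.04],
    [0.03, -0.02,  0.06],
    [-0.08, 0.01,  0.03],
]
q = entropy_pool_sketch([0.2] * 5, scenarios, [[1.0, 0.0, 0.0]], [0.03], [False])
```

Because the prior mean of market 0 (1.6%) violates the view, the constraint binds and the posterior shifts weight toward scenarios where market 0 performs well until its expected return reaches 3%.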
Black-Litterman is a special case of entropy pooling where: (a) the prior is normal, (b) views are linear equality constraints on expected returns, and (c) the posterior is also normal. Entropy pooling generalizes this to arbitrary scenario distributions, inequality constraints, and non-Gaussian priors. This is important for prediction markets where return distributions are bounded (prices must stay in [0, 1]) and heavy-tailed.
The relative entropy D(q || p) measures the information cost of the views. Views that are highly inconsistent with the prior will produce a large D, indicating that the posterior has moved far from your baseline. Monitoring D over time can signal when your model views are becoming extreme or contradictory.
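As a concrete illustration of that monitoring idea, here is a hypothetical helper (not part of the hz API) that computes D(q || p) directly from two probability vectors:

```python
import numpy as np

def relative_entropy(q, p):
    """D(q || p) = sum_s q_s * ln(q_s / p_s); zero iff q == p."""
    q = np.asarray(q, dtype=float)
    p = np.asarray(p, dtype=float)
    mask = q > 0  # terms with q_s = 0 contribute 0 by convention
    return float((q[mask] * np.log(q[mask] / p[mask])).sum())

uniform = [0.2] * 5
tilted = [0.40, 0.15, 0.15, 0.15, 0.15]
```

Logging this value each cycle (hz exposes it as result.relative_entropy) gives a one-number summary of how far the current views are pulling the posterior from the prior.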
Entropy pooling requires the number of scenarios S to be larger than the number of equality constraints. If you impose too many exact equality views relative to the scenario count, the problem becomes infeasible. Use inequality views when possible, and ensure S >> K.