Pro Feature. Requires a Pro or Ultra subscription. Get started at api.mathematicalcompany.com
What is this? Classical portfolio optimization is fragile: small errors in expected returns produce wildly different weights. Robust optimization assumes the true expected returns lie within an uncertainty set and optimizes for the worst case. The result is a portfolio that performs well even when your return estimates are wrong, which they always are.

Robust Portfolio

Classical mean-variance optimization is notoriously sensitive to estimation error in expected returns. A small change in the mean vector can produce wildly different portfolio weights. Robust optimization addresses this by assuming the true mean lies within an uncertainty set and optimizing for the worst case. Horizon implements ellipsoidal uncertainty sets and worst-case return computation in Rust.

Robust Optimize

hz.robust_optimize() finds weights that maximize worst-case return over an ellipsoidal uncertainty set.

Worst-Case Return

hz.worst_case_return() computes the minimum expected return for given weights under parameter uncertainty.

Robust Frontier

hz.robust_efficient_frontier() traces the robust efficient frontier across risk targets.

Pipeline Integration

hz.robust_allocator() rebalances portfolio weights each cycle using robust optimization.

hz.robust_optimize

Find portfolio weights that maximize the worst-case expected return subject to a volatility constraint, where the true mean vector lies within an ellipsoidal uncertainty set centered on the sample mean.
import horizon as hz

# Expected returns (point estimates)
mu = [0.05, 0.03, 0.07, 0.02]

# Covariance matrix
cov = [
    [0.04, 0.01, 0.02, 0.005],
    [0.01, 0.03, 0.01, 0.003],
    [0.02, 0.01, 0.06, 0.01 ],
    [0.005, 0.003, 0.01, 0.02],
]

result = hz.robust_optimize(
    mu=mu,
    covariance=cov,
    kappa=0.5,           # uncertainty radius (higher = more conservative)
    max_volatility=0.15, # volatility constraint
)

print(result.weights)           # [0.25, 0.30, 0.20, 0.25]
print(result.worst_case_return) # guaranteed minimum expected return
print(result.volatility)        # portfolio volatility
print(result.kappa)             # uncertainty radius used
| Parameter | Type | Description |
|---|---|---|
| mu | list[float] | Estimated expected returns, length N |
| covariance | list[list[float]] | N x N covariance matrix |
| kappa | float | Uncertainty set radius. Controls how conservative the optimization is; higher values tilt the portfolio toward the minimum-variance solution |
| max_volatility | float | Maximum portfolio volatility (standard deviation) constraint |

RobustPortfolioResult Type

| Field | Type | Description |
|---|---|---|
| weights | list[float] | Optimal portfolio weights, length N, summing to 1.0 |
| worst_case_return | float | Minimum expected return over the uncertainty set |
| volatility | float | Portfolio standard deviation at the optimal weights |
| kappa | float | Uncertainty radius used in the optimization |
| iterations | int | Number of iterations for convergence |
The uncertainty set is an ellipsoid centered on the sample mean, with radius controlled by kappa. When kappa = 0, this reduces to standard mean-variance optimization. As kappa increases, the optimizer hedges more against estimation error.
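The membership condition for this ellipsoid can be checked directly. A minimal sketch in plain numpy (not part of the Horizon API), using the sample estimates from the example above:

```python
import numpy as np

mu_hat = np.array([0.05, 0.03, 0.07, 0.02])
Sigma = np.array([
    [0.04, 0.01, 0.02, 0.005],
    [0.01, 0.03, 0.01, 0.003],
    [0.02, 0.01, 0.06, 0.01],
    [0.005, 0.003, 0.01, 0.02],
])
kappa = 0.5

def in_uncertainty_set(mu, mu_hat, Sigma, kappa):
    """True if mu lies in the ellipsoid: delta' Sigma^-1 delta <= kappa^2."""
    delta = mu - mu_hat
    return delta @ np.linalg.solve(Sigma, delta) <= kappa**2

print(in_uncertainty_set(mu_hat, mu_hat, Sigma, kappa))        # center: True
print(in_uncertainty_set(mu_hat + 0.5, mu_hat, Sigma, kappa))  # far point: False
```

Shifting every mean by 0.5 is far outside any reasonable sampling error for these covariances, so the second check fails.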

hz.worst_case_return

For a given set of portfolio weights, compute the worst-case expected return over the uncertainty set.
import horizon as hz

mu = [0.05, 0.03, 0.07]
cov = [
    [0.04, 0.01, 0.02],
    [0.01, 0.03, 0.01],
    [0.02, 0.01, 0.06],
]
weights = [0.4, 0.3, 0.3]

worst = hz.worst_case_return(mu, cov, weights, kappa=0.5)
print(f"Worst-case return: {worst:.4f}")
# Compare with nominal: sum(w*m for w,m in zip(weights, mu))
nominal = sum(w * m for w, m in zip(weights, mu))
print(f"Nominal return: {nominal:.4f}")
print(f"Robustness penalty: {nominal - worst:.4f}")
| Parameter | Type | Description |
|---|---|---|
| mu | list[float] | Estimated expected returns, length N |
| covariance | list[list[float]] | N x N covariance matrix |
| weights | list[float] | Portfolio weights, length N |
| kappa | float | Uncertainty set radius |
Returns float: the worst-case expected return.
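The returned value matches the closed form w' * mu_hat - kappa * sqrt(w' * Sigma * w) described under Mathematical Background. A plain-Python sketch of that formula, independent of Horizon:

```python
import math

def worst_case_return(mu, cov, weights, kappa):
    """Closed-form worst-case return: nominal return minus kappa times portfolio vol."""
    nominal = sum(w * m for w, m in zip(weights, mu))
    var = sum(wi * sum(c * wj for c, wj in zip(row, weights))
              for wi, row in zip(weights, cov))
    return nominal - kappa * math.sqrt(var)

mu = [0.05, 0.03, 0.07]
cov = [[0.04, 0.01, 0.02],
       [0.01, 0.03, 0.01],
       [0.02, 0.01, 0.06]]
weights = [0.4, 0.3, 0.3]

print(worst_case_return(mu, cov, weights, kappa=0.5))
print(worst_case_return(mu, cov, weights, kappa=0.0))  # kappa = 0 recovers the nominal return
```

With kappa = 0 the penalty term vanishes and the function returns the nominal expected return, consistent with the reduction to standard mean-variance noted above.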

hz.robust_efficient_frontier

Trace the robust efficient frontier by solving the robust optimization problem at multiple volatility targets.
import horizon as hz

mu = [0.05, 0.03, 0.07, 0.02]
cov = [
    [0.04, 0.01, 0.02, 0.005],
    [0.01, 0.03, 0.01, 0.003],
    [0.02, 0.01, 0.06, 0.01 ],
    [0.005, 0.003, 0.01, 0.02],
]

frontier = hz.robust_efficient_frontier(
    mu=mu,
    covariance=cov,
    kappa=0.5,
    n_points=20,            # number of points on the frontier
    min_volatility=0.05,
    max_volatility=0.25,
)

for point in frontier:
    print(f"vol={point.volatility:.3f}  worst_ret={point.worst_case_return:.4f}  "
          f"weights={[f'{w:.2f}' for w in point.weights]}")
| Parameter | Type | Description |
|---|---|---|
| mu | list[float] | Estimated expected returns, length N |
| covariance | list[list[float]] | N x N covariance matrix |
| kappa | float | Uncertainty set radius |
| n_points | int | Number of points to compute on the frontier |
| min_volatility | float | Minimum volatility target |
| max_volatility | float | Maximum volatility target |
Returns list[RobustPortfolioResult]: one result per frontier point, sorted by volatility.

Choosing Kappa

The uncertainty radius kappa controls how conservative the portfolio is. Larger kappa means the optimizer assumes more parameter uncertainty and tilts toward diversification.
import horizon as hz

mu = [0.06, 0.03, 0.08]
cov = [
    [0.04, 0.01, 0.02],
    [0.01, 0.03, 0.01],
    [0.02, 0.01, 0.06],
]

for kappa in [0.0, 0.25, 0.5, 1.0, 2.0]:
    result = hz.robust_optimize(mu, cov, kappa=kappa, max_volatility=0.15)
    print(f"kappa={kappa:.2f}  weights={[f'{w:.2f}' for w in result.weights]}  "
          f"worst_ret={result.worst_case_return:.4f}")
| Kappa | Behavior |
|---|---|
| 0.0 | Standard mean-variance (no robustness) |
| 0.1 - 0.3 | Mild robustness, slight tilt toward diversification |
| 0.5 - 1.0 | Moderate robustness, recommended starting range |
| > 1.0 | Aggressive hedging, approaches equal-weight or minimum-variance |
A practical calibration: set kappa proportional to sqrt(N / T) where N is the number of markets and T is the number of observations used to estimate the mean. This scales the uncertainty set to match the statistical uncertainty in the sample mean.
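As a concrete instance of this rule of thumb (plain Python; the N and T values are illustrative, not defaults):

```python
import math

# Hypothetical setup: 4 markets, 100 observations in the estimation window
N, T = 4, 100
kappa = math.sqrt(N / T)
print(kappa)  # 0.2, a mildly robust setting per the table above
```

More markets or fewer observations inflate kappa, matching the intuition that noisier mean estimates warrant a more conservative portfolio.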

Pipeline Integration

The hz.robust_allocator() pipeline function recomputes robust portfolio weights each cycle and injects them into ctx.params["robust_weights"].
import horizon as hz

def quoter(ctx):
    result = ctx.params.get("robust_weights")  # RobustPortfolioResult from the allocator
    if result is None:
        return []

    # Use robust weights to size positions across markets
    bankroll = 10000.0
    orders = []
    for i, market in enumerate(ctx.markets):
        size = result.weights[i] * bankroll
        if size > 1.0:
            price = ctx.feeds["poly"].price
            orders += hz.quotes(fair=price, spread=0.04, size=size)
    return orders

hz.run(
    name="robust-allocator",
    markets=["election", "fed-rate", "recession", "btc-100k"],
    pipeline=[
        hz.robust_allocator(
            feed="poly",
            kappa=0.5,
            max_volatility=0.15,
            lookback=100,
            rebalance_every=50,  # recompute every 50 cycles
        ),
        quoter,
    ],
    feeds={"poly": hz.PolymarketBook(token_id="0x123...")},
    interval=5.0,
)

Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| feed | str | required | Feed name to compute returns from |
| kappa | float | 0.5 | Uncertainty set radius |
| max_volatility | float | 0.15 | Maximum portfolio volatility constraint |
| lookback | int | 100 | Number of historical observations for mean/covariance estimation |
| rebalance_every | int | 50 | Recompute weights every N cycles |

Mathematical Background

The true mean vector mu is assumed to lie within an ellipsoid centered on the sample estimate mu_hat. The ellipsoid is defined as all mu_hat + delta such that delta' * Sigma_inv * delta is at most kappa^2. This is a natural choice because the sampling distribution of the mean is approximately elliptical (via the CLT), with shape governed by the covariance matrix. The radius kappa controls the confidence level.
The robust optimization problem maximizes the worst-case expected return w' * mu over all mu in the uncertainty set, subject to a volatility constraint w' * Sigma * w at most sigma_max^2, weights summing to 1, and non-negativity. The inner minimization has a closed-form solution: the worst-case return for weights w is w' * mu_hat - kappa * sqrt(w' * Sigma * w). So the robust problem reduces to a second-order cone program that can be solved efficiently.
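Horizon solves this cone program directly in Rust, but the reduced problem is simple enough to brute-force for intuition. A plain-Python sketch that grid-searches the 3-asset simplex using the closed-form inner minimum (illustrative only, not the production solver):

```python
import math

mu = [0.05, 0.03, 0.07]
cov = [[0.04, 0.01, 0.02],
       [0.01, 0.03, 0.01],
       [0.02, 0.01, 0.06]]
kappa, max_vol = 0.5, 0.15

def portfolio_vol(w):
    var = sum(wi * sum(c * wj for c, wj in zip(row, w))
              for wi, row in zip(w, cov))
    return math.sqrt(var)

def worst_case(w):
    # Closed-form inner minimum: w' mu_hat - kappa * sqrt(w' Sigma w)
    nominal = sum(wi * m for wi, m in zip(w, mu))
    return nominal - kappa * portfolio_vol(w)

best_w, best_obj = None, -math.inf
steps = 100  # weight granularity of 0.01
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        w = [i / steps, j / steps, (steps - i - j) / steps]
        if portfolio_vol(w) <= max_vol:  # volatility constraint
            obj = worst_case(w)
            if obj > best_obj:
                best_w, best_obj = w, obj

print(best_w, best_obj)
```

The grid search enforces the simplex constraints (non-negative weights summing to 1) by construction; a real solver handles the same feasible set with second-order cone machinery.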
Robust optimization with ellipsoidal uncertainty is closely related to Bayesian shrinkage. As kappa increases, the optimal portfolio shrinks toward the minimum-variance portfolio (which does not depend on the mean). This provides a smooth interpolation between the aggressive mean-variance portfolio and the conservative minimum-variance portfolio.
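The minimum-variance endpoint of that interpolation depends only on the covariance matrix and has the closed form Sigma^-1 1 / (1' Sigma^-1 1). A numpy sketch (not a Horizon API):

```python
import numpy as np

Sigma = np.array([[0.04, 0.01, 0.02],
                  [0.01, 0.03, 0.01],
                  [0.02, 0.01, 0.06]])

ones = np.ones(len(Sigma))
x = np.linalg.solve(Sigma, ones)  # Sigma^-1 * 1
w_minvar = x / x.sum()            # normalize so weights sum to 1

print(w_minvar)  # no dependence on the mean vector appears anywhere
```

Because mu never enters, this portfolio is immune to mean-estimation error, which is exactly why large kappa shrinks toward it.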
The covariance matrix must be positive definite. If you have fewer observations than markets (T less than N), the sample covariance will be singular. Use hz.denoise_covariance() or hz.hrp_weights() as alternatives when data is scarce.
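The rank deficiency with T less than N is easy to see directly: the sample covariance of T observations has rank at most T - 1 after demeaning. A numpy sketch with illustrative data:

```python
import numpy as np

# 3 observations of 5 markets: T < N, so the sample covariance is singular
returns = np.array([
    [0.01, -0.02, 0.03, 0.00, 0.01],
    [0.02,  0.01, -0.01, 0.01, 0.00],
    [-0.01, 0.00, 0.02, -0.02, 0.03],
])

S = np.cov(returns, rowvar=False)  # 5 x 5 sample covariance
print(np.linalg.matrix_rank(S))    # at most T - 1 = 2
print(np.linalg.det(S))            # ~0: not invertible, ellipsoid is degenerate
```

A singular Sigma breaks the Sigma^-1 term in the uncertainty set definition, hence the recommendation to denoise or switch methods when data is scarce.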