Adversarial Simulation
A strategy that works in calm markets may collapse when a whale dumps, a front-runner snipes your orders, or a spoofer fakes liquidity. Horizon’s adversarial module simulates these attacks against your strategy and measures the impact. All adversary logic is pure Rust (stateless); simulation orchestration is in Python.
Overview
- **4 Adversary Types**: whale, front-runner, spoofer, and liquidity drainer.
- **Vulnerability Scoring**: `hz.vulnerability_score()` quantifies strategy weakness to each attack.
- **Baseline vs. Adversarial**: `hz.adversarial_sim()` compares performance with and without adversaries.
- **Anti-Fragile Optimization**: `hz.anti_fragile()` evolves parameters that perform well under attack.
Adversary Types
| Type | Behavior |
| --- | --- |
| `Whale` | Large orders that move the market against you |
| `FrontRunner` | Detects your pending orders and trades ahead |
| `Spoofing` | Places and cancels large orders to fake liquidity |
| `LiquidityDrainer` | Removes liquidity at critical moments |
Core Functions
Adversary Actions
Each adversary type has a pure function that produces actions from market state.
```python
import horizon as hz

action = hz.whale_action(
    config=hz.AdversaryConfig(hz.AdversaryType.Whale, "whale_1", intensity=1.0, size_multiplier=5.0),
    current_price=0.55,
    market_id="btc_100k",
    seed=42,
)
print(f"Side: {action.side}, Size: {action.size}, Price: {action.price}")

action = hz.front_runner_action(config, pending_price=0.55, pending_size=100.0, market_id="btc", seed=42)
action = hz.spoofing_action(config, current_price=0.55, market_id="btc", seed=42)
action = hz.liquidity_drainer_action(config, bid_price=0.54, ask_price=0.56, market_id="btc", seed=42)
```
hz.compute_simulation_metrics
Compare baseline and adversarial PnL curves.
```python
result = hz.compute_simulation_metrics(
    baseline_pnls=[1.0, 2.0, 1.5, 3.0, 2.5],
    adversarial_pnls=[0.8, 1.5, 0.5, 2.0, 1.0],
)
print(f"Baseline Sharpe: {result.baseline_sharpe:.4f}")
print(f"Adversarial Sharpe: {result.adversarial_sharpe:.4f}")
print(f"PnL degradation: {result.pnl_degradation:.2%}")
print(f"Max drawdown increase: {result.max_drawdown_increase:.4f}")
```
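The headline numbers can be sanity-checked by hand from the two PnL series. The sketch below is an illustrative reconstruction, not Horizon's exact formulas: here Sharpe is taken as mean over population standard deviation of the per-cycle PnLs, and degradation as the fractional drop in total PnL.

```python
import statistics

def sharpe(pnls):
    # Per-cycle Sharpe ratio: mean / population std (illustrative definition)
    return statistics.mean(pnls) / statistics.pstdev(pnls)

def pnl_degradation(baseline, adversarial):
    # Fraction of total PnL lost under attack (illustrative definition)
    return (sum(baseline) - sum(adversarial)) / sum(baseline)

baseline = [1.0, 2.0, 1.5, 3.0, 2.5]
adversarial = [0.8, 1.5, 0.5, 2.0, 1.0]

print(f"Baseline Sharpe: {sharpe(baseline):.4f}")
print(f"Adversarial Sharpe: {sharpe(adversarial):.4f}")
print(f"PnL degradation: {pnl_degradation(baseline, adversarial):.2%}")  # 42.00%
```

Under these assumed definitions, the adversaries cut total PnL from 10.0 to 5.8, a 42% degradation; Horizon's internal definitions may differ in detail.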
hz.vulnerability_score
Returns a single score in [0, 1] measuring how vulnerable the strategy is to the attack.
```python
score = hz.vulnerability_score(
    baseline_pnls=[1.0, 2.0, 3.0],
    adversarial_pnls=[0.5, 1.0, 1.5],
)
print(f"Vulnerability: {score:.2f}")  # 0.0 = immune, 1.0 = destroyed
```
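One way to think about the score: it collapses the baseline-vs-adversarial comparison into a single number. A simple stand-in, invented here for illustration and not Horizon's actual formula, is the fractional PnL loss clipped to [0, 1]:

```python
def toy_vulnerability(baseline, adversarial):
    # Illustrative stand-in: fraction of total PnL destroyed by the attack,
    # clipped to [0, 1]. Horizon's internal formula may differ.
    loss = (sum(baseline) - sum(adversarial)) / sum(baseline)
    return min(max(loss, 0.0), 1.0)

print(toy_vulnerability([1.0, 2.0, 3.0], [0.5, 1.0, 1.5]))  # prints 0.5
```

On the documentation example above (total PnL halved from 6.0 to 3.0), this stand-in yields 0.5: halfway between immune and destroyed.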
Types
AdversaryConfig
```python
config = hz.AdversaryConfig(
    adversary_type=hz.AdversaryType.Whale,
    name="big_whale",
    intensity=1.0,        # 0.0-1.0 attack intensity
    size_multiplier=5.0,  # how much larger than typical orders
)
```
Preset Configurations
```python
from horizon.adversarial import WHALE_PRESET, FRONT_RUNNER_PRESET, SPOOFING_PRESET, DRAINER_PRESET
```
Simulation Functions
hz.adversarial_sim
Run baseline vs adversarial comparison.
```python
import horizon as hz

result = hz.adversarial_sim(
    pipeline_factory=lambda params: [my_strategy],
    data=historical_data,
    adversaries=[
        hz.AdversaryConfig(hz.AdversaryType.Whale, "whale", intensity=0.8, size_multiplier=5.0),
        hz.AdversaryConfig(hz.AdversaryType.FrontRunner, "runner", intensity=0.5, size_multiplier=1.0),
    ],
    num_cycles=100,
    seed=42,
)
print(f"PnL degradation: {result.pnl_degradation:.2%}")
print(f"Vulnerability: {result.vulnerability:.4f}")
```
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `pipeline_factory` | `Callable` | required | Function `(params) -> pipeline` |
| `data` | `Any` | `None` | Historical data for backtest |
| `adversaries` | `list[AdversaryConfig]` | `[WHALE_PRESET]` | Adversary configurations |
| `num_cycles` | `int` | `100` | Simulation length |
| `seed` | `int` | `42` | Random seed |
hz.anti_fragile
Evolutionary optimization for adversarial robustness. Finds parameters that perform well even under attack.
```python
result = hz.anti_fragile(
    pipeline_factory=lambda params: [my_strategy_factory(**params)],
    param_bounds=[
        {"name": "spread", "min": 0.01, "max": 0.10},
        {"name": "size", "min": 1, "max": 50, "discrete": True},
    ],
    adversaries=[hz.WHALE_PRESET, hz.FRONT_RUNNER_PRESET],
    data=historical_data,
    pop_size=30,
    generations=50,
    seed=42,
)
print(f"Anti-fragile params: {result['best_genome']}")
print(f"Adversarial fitness: {result['best_fitness']:.4f}")
```
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `pipeline_factory` | `Callable` | required | Strategy factory |
| `param_bounds` | `list[dict]` | required | Parameter bounds |
| `adversaries` | `list[AdversaryConfig]` | whale + front-runner | Attack configuration |
| `data` | `Any` | `None` | Historical data |
| `pop_size` | `int` | `30` | Evolution population size |
| `generations` | `int` | `50` | Evolution generations |
| `seed` | `int` | `42` | Random seed |
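Since `hz.anti_fragile` is described as evolutionary optimization, it helps to see the shape of such a loop. The toy genetic algorithm below is purely illustrative: the fitness function, noise model, and every name in it are invented for this sketch and are not Horizon's implementation. It evolves `(spread, size)` genomes whose fitness is a quadratic "PnL" minus a random "attack" penalty, keeping the elite half of the population each generation.

```python
import random

def fitness(genome, rng):
    # Toy stand-in for PnL under attack: quadratic optimum at spread=0.05,
    # size=20, minus a random adversarial penalty (all invented for this sketch).
    spread, size = genome
    base = -100.0 * (spread - 0.05) ** 2 - 0.001 * (size - 20.0) ** 2
    attack = -abs(rng.gauss(0.0, 0.02))
    return base + attack

def anti_fragile_sketch(bounds, pop_size=20, generations=30, seed=42):
    rng = random.Random(seed)
    # Random initial population within the parameter bounds
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, rng), reverse=True)
        elite = pop[: pop_size // 2]  # keep the fittest half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = []
            for x, y, (lo, hi) in zip(a, b, bounds):
                # Average crossover plus Gaussian mutation, clamped to bounds
                v = (x + y) / 2.0 + rng.gauss(0.0, 0.02 * (hi - lo))
                child.append(min(max(v, lo), hi))
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda g: fitness(g, rng))

best = anti_fragile_sketch([(0.01, 0.10), (1.0, 50.0)])
print(best)
```

Because fitness is re-evaluated with fresh attack noise each generation, the loop favors genomes that score well consistently under attack, which is the core idea behind anti-fragile parameter search.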
Examples
Test Strategy Robustness
```python
import horizon as hz

# Test against all adversary types
for adv_type, name in [
    (hz.AdversaryType.Whale, "Whale"),
    (hz.AdversaryType.FrontRunner, "Front-Runner"),
    (hz.AdversaryType.Spoofing, "Spoofer"),
    (hz.AdversaryType.LiquidityDrainer, "Drainer"),
]:
    config = hz.AdversaryConfig(adv_type, name.lower(), intensity=1.0, size_multiplier=5.0)
    result = hz.adversarial_sim(
        pipeline_factory=lambda p: [my_strategy],
        data=data,
        adversaries=[config],
    )
    print(f"{name}: PnL degradation={result.pnl_degradation:.2%}, "
          f"vulnerability={result.vulnerability:.4f}")
```
Find Anti-Fragile Parameters
```python
import horizon as hz

result = hz.anti_fragile(
    pipeline_factory=lambda p: [make_strategy(p)],
    param_bounds=[
        {"name": "spread", "min": 0.02, "max": 0.15},
        {"name": "size", "min": 5, "max": 100, "discrete": True},
        {"name": "window", "min": 10, "max": 200, "discrete": True},
    ],
    adversaries=[hz.WHALE_PRESET, hz.FRONT_RUNNER_PRESET, hz.SPOOFING_PRESET],
    pop_size=30,
    generations=50,
)
# These parameters are optimized to perform well even under adversarial conditions
print(f"Anti-fragile spread: {result['best_genome'][0]:.4f}")
```
Adversarial agents are heuristic simulations, not full market microstructure simulators. They test directional robustness (how your strategy handles unexpected market actions) but do not replicate the exact mechanics of real adversaries. Use them as stress tests, not as precise predictions.