> **Documentation index:** fetch the complete index at https://mathematicalcompany.mintlify.app/llms.txt to discover all available pages before exploring further.
# LLM Signal Engine

The LLM Signal Engine combines large language models with real-time news to generate probability forecasts for prediction markets. It detects edge opportunities where the LLM's estimate diverges from the current market price.

The engine uses `litellm` for provider-agnostic LLM calls (100+ providers, including OpenRouter, Anthropic, OpenAI, Together, and Groq) and falls back to direct Anthropic/OpenAI SDK calls when `litellm` is not installed.
## Setup

```bash
# Install litellm (recommended -- supports 100+ providers)
pip install litellm

# Anthropic (default)
export ANTHROPIC_API_KEY="sk-ant-..."

# OpenAI
export OPENAI_API_KEY="sk-..."

# OpenRouter (access many models via one key)
export OPENROUTER_API_KEY="sk-or-..."

# NewsAPI (optional, for news enrichment)
export NEWSAPI_KEY="..."

# Exa.ai (optional, semantic search)
export EXA_API_KEY="..."
pip install exa-py

# Tavily (optional, real-time search)
export TAVILY_API_KEY="tvly-..."
pip install tavily-python
```
## Model Selection

With `litellm`, you can use any supported model string:

```python
import horizon as hz

# Anthropic (default)
config = hz.LLMConfig(provider="anthropic")

# OpenAI
config = hz.LLMConfig(provider="openai", model="gpt-4o")

# OpenRouter (any model)
config = hz.LLMConfig(model="openrouter/meta-llama/llama-3-70b-instruct")

# Together AI
config = hz.LLMConfig(model="together_ai/mistralai/Mixtral-8x7B-Instruct-v0.1")

# Groq
config = hz.LLMConfig(model="groq/llama3-70b-8192")
```

When the model string contains a `/`, it is passed directly to `litellm`; otherwise, the provider prefix is added automatically.
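The routing rule can be sketched as follows (a hypothetical helper for illustration; `resolve_model` is not part of the horizon API):

```python
def resolve_model(provider: str, model: str) -> str:
    """Sketch of the model-string routing rule described above."""
    if "/" in model:
        return model                  # litellm-style string, passed through as-is
    return f"{provider}/{model}"      # bare model name gets the provider prefix


# A bare name is prefixed; a litellm string is untouched
print(resolve_model("openai", "gpt-4o"))
print(resolve_model("anthropic", "openrouter/meta-llama/llama-3-70b-instruct"))
```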
## Standalone Forecast

```python
import horizon as hz

# Single market forecast
forecast = hz.llm_forecast(
    market_id="will-btc-hit-100k",
    market_title="Will BTC hit $100k by end of 2026?",
    market_price=0.65,
)

print(f"LLM probability: {forecast.llm_prob:.2%}")
print(f"Edge: {forecast.edge_bps:.0f} bps")
print(f"Reasoning: {forecast.reasoning}")
print(f"Confidence: {forecast.confidence:.2%}")
```
## Scan for Edges

```python
import horizon as hz

# Scan multiple markets for LLM-detected edges
markets = [
    {"id": "market-1", "title": "Will X happen?", "price": 0.50},
    {"id": "market-2", "title": "Will Y happen?", "price": 0.70},
]

edges = hz.llm_scan(markets, min_edge_bps=300)
for fc in edges:
    print(f"{fc.market_title}: LLM={fc.llm_prob:.2%} Market={fc.market_price:.2%} Edge={fc.edge_bps:.0f}bps")
```
## Pipeline Mode

Integrate LLM forecasting into your `hz.run()` pipeline:

```python
import horizon as hz

def my_quoter(ctx):
    forecast = ctx.params.get("llm_forecast")
    edge = ctx.params.get("llm_edge_bps", 0)
    if forecast and abs(edge) > 300:
        return hz.quotes(fair=forecast.llm_prob, spread=0.04)
    return []

hz.run(
    name="llm-strategy",
    markets=["will-btc-hit-100k"],
    pipeline=[
        hz.llm_signal(forecast_interval_cycles=20),
        my_quoter,
    ],
)
```
The `llm_signal()` pipeline function:

- Refreshes the LLM forecast every N cycles
- Injects `ctx.params["llm_forecast"]` (`LLMForecast`) and `ctx.params["llm_edge_bps"]` (`float`)
- Caches results between refreshes
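The refresh-and-cache behavior can be sketched as a closure over the last forecast (illustrative only; the real `llm_signal()` internals may differ, and `ctx` is modeled here as a plain dict of params):

```python
def llm_signal_sketch(forecast_interval_cycles=20, forecast_fn=None):
    """Illustrative stand-in for hz.llm_signal(): refresh every N cycles,
    serve the cached forecast in between."""
    state = {"cycle": 0, "forecast": None}

    def step(ctx):
        if state["cycle"] % forecast_interval_cycles == 0:
            state["forecast"] = forecast_fn(ctx)   # refresh the forecast
        state["cycle"] += 1
        fc = state["forecast"]
        ctx["llm_forecast"] = fc                   # inject cached forecast
        ctx["llm_edge_bps"] = fc.edge_bps if fc else 0.0
        return []                                  # signal steps emit no quotes

    return step
```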
## Configuration

```python
import horizon as hz

config = hz.LLMConfig(
    provider="anthropic",                    # or "openai", or any litellm provider
    model="",                                # empty = provider default; or "openrouter/..."
    news_sources=[                           # RSS feed URLs
        "https://feeds.bbci.co.uk/news/rss.xml",
        "https://rss.nytimes.com/services/xml/rss/nyt/World.xml",
    ],
    newsapi_query="bitcoin crypto",          # NewsAPI search query
    exa_query="bitcoin price prediction",    # Exa.ai semantic search
    tavily_query="bitcoin market analysis",  # Tavily real-time search
    max_news_items=5,
    cache_ttl=300.0,                         # 5 minute cache
    rate_limit_per_minute=10,
    max_tokens=1024,
    temperature=0.2,
)

forecast = hz.llm_forecast("market-1", "Will X?", 0.50, config=config)
```
## News Sources

The engine fetches news from four sources:

| Source | Config | Requirement |
|---|---|---|
| RSS | `news_sources` list of URLs | `feedparser` package |
| NewsAPI | `newsapi_query` string | `NEWSAPI_KEY` env var |
| Exa.ai | `exa_query` string | `EXA_API_KEY` env var + `exa-py` package |
| Tavily | `tavily_query` string | `TAVILY_API_KEY` env var + `tavily-python` package |

All are optional. Without news, the LLM forecasts based on the market question alone. Exa.ai provides semantic search (good for finding contextual articles), while Tavily provides real-time web search.
## Graceful Degradation

| Condition | Behavior |
|---|---|
| No API key | Returns neutral forecast (`prob=0.5`, `confidence=0`) |
| API error | Returns cached result if available |
| News fetch fails | Proceeds without news context |
| Rate limit hit | Returns cached result or neutral |
| Package not installed | Logs warning with install hint |
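Because a missing key or rate limit can yield a neutral forecast with `confidence=0`, a quoter may want to guard against acting on one. A minimal sketch of such a check (`is_actionable` and its thresholds are illustrative, not part of the horizon API):

```python
from types import SimpleNamespace

def is_actionable(forecast, min_confidence=0.3, min_edge_bps=300):
    """Skip neutral or low-confidence forecasts before quoting."""
    # Neutral fallbacks carry confidence=0, so they fail the first check.
    if forecast is None or forecast.confidence < min_confidence:
        return False
    return abs(forecast.edge_bps) >= min_edge_bps

# Neutral fallback (e.g. no API key): never actionable
neutral = SimpleNamespace(confidence=0.0, edge_bps=0.0)
# Confident forecast with a 420 bps edge: actionable
strong = SimpleNamespace(confidence=0.8, edge_bps=420.0)
```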
## Types

### LLMForecast

| Field | Type | Description |
|---|---|---|
| `market_id` | `str` | Market identifier |
| `market_title` | `str` | Market question |
| `llm_prob` | `float` | LLM probability estimate, clamped to [0.01, 0.99] |
| `reasoning` | `str` | LLM reasoning text |
| `confidence` | `float` | Self-reported confidence [0, 1] |
| `edge_bps` | `float` | `(llm_prob - market_price) * 10000` |
| `market_price` | `float` | Market price at forecast time |
| `model` | `str` | Model used |
| `news_sources` | `list[str]` | URLs of news items used |
| `timestamp` | `float` | Unix timestamp |
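For example, an LLM estimate of 0.72 against a market price of 0.65 yields an edge of roughly 700 basis points (values here are arbitrary, for illustration):

```python
llm_prob = 0.72
market_price = 0.65

# edge_bps = (llm_prob - market_price) * 10000
edge_bps = (llm_prob - market_price) * 10000  # ~700 bps: market looks underpriced
```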
### LLMConfig

| Field | Default | Description |
|---|---|---|
| `provider` | `"anthropic"` | LLM provider (or any litellm provider) |
| `model` | `""` | Model name (empty = provider default); accepts litellm strings like `"openrouter/..."` |
| `news_sources` | `[]` | RSS feed URLs |
| `newsapi_query` | `""` | NewsAPI search query |
| `exa_query` | `""` | Exa.ai semantic search query |
| `tavily_query` | `""` | Tavily real-time search query |
| `max_news_items` | `5` | Max news items included in the prompt |
| `cache_ttl` | `300.0` | Cache TTL in seconds |
| `rate_limit_per_minute` | `10` | Max API calls per minute |
| `max_tokens` | `1024` | Max response tokens |
| `temperature` | `0.2` | Sampling temperature |