## Setup

### Model Selection
With litellm, you can use any supported model string. If the string contains a `/`, it is passed directly to litellm; otherwise, the provider prefix is added automatically.
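The rule above can be sketched as a small helper (a minimal sketch of the described behavior; `resolve_model` is an illustrative name, not the library's actual function):

```python
def resolve_model(provider: str, model: str) -> str:
    """Build the litellm model string from a provider/model pair."""
    if "/" in model:
        return model  # full litellm string: passed through unchanged
    return f"{provider}/{model}"  # otherwise the provider prefix is added


# resolve_model("anthropic", "openrouter/deepseek/deepseek-chat")
#   -> passed through unchanged
# resolve_model("anthropic", "claude-sonnet-4")
#   -> "anthropic/claude-sonnet-4"
```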
## Standalone Forecast

## Scan for Edges

## Pipeline Mode
Integrate LLM forecasting into your `hz.run()` pipeline:
The `llm_signal()` pipeline function:

- Refreshes the LLM forecast every N cycles
- Injects `ctx.params["llm_forecast"]` (`LLMForecast`) and `ctx.params["llm_edge_bps"]` (`float`)
- Caches results between refreshes
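The refresh-every-N-cycles caching behavior can be sketched in plain Python (class name and structure are assumptions for illustration, not the library's internals):

```python
class LLMSignal:
    """Refresh an expensive forecast every N cycles; inject cached results between."""

    def __init__(self, forecast_fn, refresh_every: int = 10):
        self.forecast_fn = forecast_fn      # callable returning a forecast dict
        self.refresh_every = refresh_every  # N: cycles between refreshes
        self._cycle = 0
        self._cached = None

    def __call__(self, ctx_params: dict) -> None:
        # Refresh on the first cycle and every N cycles after that;
        # in between, the cached forecast is injected unchanged.
        if self._cached is None or self._cycle % self.refresh_every == 0:
            self._cached = self.forecast_fn()
        self._cycle += 1
        ctx_params["llm_forecast"] = self._cached
        ctx_params["llm_edge_bps"] = self._cached.get("edge_bps", 0.0)
```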
## Configuration

### News Sources

The engine fetches news from four sources:

| Source | Config | Requirement |
|---|---|---|
| RSS | news_sources list of URLs | feedparser package |
| NewsAPI | newsapi_query string | NEWSAPI_KEY env var |
| Exa.ai | exa_query string | EXA_API_KEY env var + exa-py package |
| Tavily | tavily_query string | TAVILY_API_KEY env var + tavily-python package |
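The per-source requirements in the table above (config field set, API key present, optional package installed) can be checked with stdlib tools alone; the function below is an illustrative sketch, not part of the engine's API:

```python
import importlib.util
import os


def available_sources(cfg: dict) -> list[str]:
    """Return which news sources are usable given config, env vars, and packages."""
    sources = []
    if cfg.get("news_sources") and importlib.util.find_spec("feedparser"):
        sources.append("rss")      # RSS needs feed URLs + feedparser package
    if cfg.get("newsapi_query") and os.getenv("NEWSAPI_KEY"):
        sources.append("newsapi")  # NewsAPI needs a query + NEWSAPI_KEY
    if cfg.get("exa_query") and os.getenv("EXA_API_KEY") and importlib.util.find_spec("exa_py"):
        sources.append("exa")      # Exa.ai needs a query + key + exa-py package
    if cfg.get("tavily_query") and os.getenv("TAVILY_API_KEY") and importlib.util.find_spec("tavily"):
        sources.append("tavily")   # Tavily needs a query + key + tavily-python package
    return sources
```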
### Graceful Degradation
| Condition | Behavior |
|---|---|
| No API key | Returns neutral forecast (prob=0.5, confidence=0) |
| API error | Returns cached result if available |
| News fetch fails | Proceeds without news context |
| Rate limit hit | Returns cached result or neutral |
| Package not installed | Logs warning with install hint |
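The fallback ladder in the table above amounts to: no key → neutral, error or rate limit → cached if available, else neutral. A minimal sketch (function name and the neutral-forecast shape are assumptions for illustration):

```python
NEUTRAL = {"llm_prob": 0.5, "confidence": 0.0}  # neutral forecast: prob=0.5, confidence=0


def forecast_with_fallback(call_llm, api_key: str, cache: dict) -> dict:
    """Call the LLM, degrading gracefully per the table above."""
    if not api_key:
        return dict(NEUTRAL)  # no API key: neutral forecast
    try:
        result = call_llm()
        cache["last"] = result  # remember for future failures
        return result
    except Exception:
        # API error or rate limit: cached result if available, else neutral
        return cache.get("last", dict(NEUTRAL))
```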
## Types

### LLMForecast
| Field | Type | Description |
|---|---|---|
| market_id | str | Market identifier |
| market_title | str | Market question |
| llm_prob | float | LLM probability estimate [0.01, 0.99] |
| reasoning | str | LLM reasoning text |
| confidence | float | Self-reported confidence [0, 1] |
| edge_bps | float | (llm_prob - market_price) * 10000 |
| market_price | float | Market price at forecast time |
| model | str | Model used |
| news_sources | list[str] | URLs of news items used |
| timestamp | float | Unix timestamp |
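A field-for-field sketch of the type above (the dataclass form itself is an assumption about how it is defined):

```python
from dataclasses import dataclass, field


@dataclass
class LLMForecast:
    market_id: str           # market identifier
    market_title: str        # market question
    llm_prob: float          # LLM probability estimate, [0.01, 0.99]
    reasoning: str           # LLM reasoning text
    confidence: float        # self-reported, [0, 1]
    edge_bps: float          # (llm_prob - market_price) * 10000
    market_price: float      # market price at forecast time
    model: str               # model used
    news_sources: list[str] = field(default_factory=list)  # URLs of news items used
    timestamp: float = 0.0   # Unix timestamp
```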
### LLMConfig
| Field | Default | Description |
|---|---|---|
| provider | "anthropic" | LLM provider (or any litellm provider) |
| model | "" | Model name (empty = default). Accepts litellm strings like "openrouter/..." |
| news_sources | [] | RSS feed URLs |
| newsapi_query | "" | NewsAPI search query |
| exa_query | "" | Exa.ai semantic search query |
| tavily_query | "" | Tavily real-time search query |
| max_news_items | 5 | Max news items in prompt |
| cache_ttl | 300.0 | Cache TTL in seconds |
| rate_limit_per_minute | 10 | Max API calls per minute |
| max_tokens | 1024 | Max response tokens |
| temperature | 0.2 | Sampling temperature |
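The defaults above map onto a config type like the following (a sketch; the dataclass form is an assumption about how the type is defined):

```python
from dataclasses import dataclass, field


@dataclass
class LLMConfig:
    provider: str = "anthropic"   # LLM provider (or any litellm provider)
    model: str = ""               # empty = provider default; accepts "openrouter/..." strings
    news_sources: list[str] = field(default_factory=list)  # RSS feed URLs
    newsapi_query: str = ""       # NewsAPI search query
    exa_query: str = ""           # Exa.ai semantic search query
    tavily_query: str = ""        # Tavily real-time search query
    max_news_items: int = 5       # max news items in prompt
    cache_ttl: float = 300.0      # cache TTL in seconds
    rate_limit_per_minute: int = 10
    max_tokens: int = 1024
    temperature: float = 0.2
```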