Marius Siegert

LLMs are already better forecasters than humans. I built the infrastructure to leverage that.


After selling my previous AI fintech startup in August 2025, I finally had the time to work on a side project I had been thinking about for a while. I became fascinated by one specific question: Can LLMs meaningfully answer questions about the future?

When I started digging deeper, I found benchmarks and platforms like ForecastBench [1], Prophet Arena [2], and Metaculus [3]. What stood out was that the results consistently suggested something interesting: LLMs are already outperforming many humans in forecasting tasks, and in some cases they are getting surprisingly close to superforecasters (experts with exceptional long-term prediction track records).

That made me think: if the benchmarks already show this potential, then the real missing piece is not the model capability itself, but the infrastructure around it. So I started building exactly that.

The idea is simple: any user can ask a question about the future, and the system continuously tracks that question over time. Instead of giving a one-off answer, the LLM monitors developments, updates its reasoning, and improves the forecast as new information becomes available.
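The update loop described above could be sketched roughly as follows. This is a minimal illustration, not the actual system: all names (`TrackedQuestion`, `update`) are hypothetical, and in practice the LLM's reasoning over new information would determine how strongly each development shifts the forecast. Here that shift is represented as a simple Bayesian update in odds space.

```python
from dataclasses import dataclass, field

@dataclass
class TrackedQuestion:
    """Hypothetical sketch of a continuously tracked forecast question."""
    text: str
    probability: float = 0.5          # current forecast, starts uninformed
    history: list = field(default_factory=list)

    def update(self, likelihood_ratio: float) -> float:
        """Fold in new evidence via a Bayesian odds-space update.

        likelihood_ratio > 1 means the new information supports the event;
        < 1 means it argues against it. In the real system this weight
        would come from the LLM's assessment of each development.
        """
        odds = self.probability / (1 - self.probability)
        odds *= likelihood_ratio
        self.probability = odds / (1 + odds)
        self.history.append(self.probability)
        return self.probability

# Example: one supporting and one contradicting development cancel out.
q = TrackedQuestion("Will X happen by the end of 2026?")
q.update(2.0)   # supporting evidence raises the forecast to 2/3
q.update(0.5)   # equally strong counter-evidence returns it to 1/2
```

The point of the sketch is the shape of the loop: the question persists, each new development produces an update rather than a fresh one-off answer, and the history of revisions is kept.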

I’m building this because I believe forecasting will become a much bigger part of how we use AI — not just for generating content, but for helping people make better decisions under uncertainty.

Would love to hear your thoughts: Oracle Markets | AI-Powered Prediction Markets

Sources:
[1] ForecastBench
[2] Prophet Arena
[3] FutureEval | Metaculus
