Grok 4.20 Tops Prediction Arena With 10% Returns, Leaving Opus 4.5 in the Red
An early checkpoint of xAI's Grok 4.20 has been unmasked as the top-performing model on the Prediction Arena benchmark, posting double-digit simulated returns while Anthropic's Opus 4.5 lost money — a result that reshuffles the frontier model leaderboard.
The mystery model sitting atop the Prediction Arena leaderboard has a name: Grok 4.20. As @grx_xce revealed, an early checkpoint of xAI's next frontier model posted +10% simulated returns on the benchmark, which tests models on their ability to forecast real-world outcomes. The runner-up, Anthropic's Opus 4.5, finished at -2% — not just behind, but in negative territory. The gap is striking not merely in magnitude but in kind: one model made money, the other lost it.
Prediction Arena has emerged as a particularly interesting benchmark because it tests something closer to economic reasoning than the abstract math and coding challenges that dominate most leaderboards. Models are evaluated on their ability to synthesize information, weigh probabilities, and make decisions under uncertainty — skills that map directly to the financial, strategic, and analytical tasks that enterprise customers actually care about. A 12-percentage-point spread between first and second place is unusually large for any benchmark at the frontier.