Anthropic Ships 1 Million Token Context to Everyone — and It's Already Reshaping How Developers Work

Claude Opus 4.6 and Sonnet 4.6 now offer a million-token context window at standard pricing across all plans. With a best-in-class 78.3% score on MRCR v2 and native Claude Code integration, Anthropic is betting that long context isn't a premium feature but table stakes.

Anthropic made its million-token context window generally available for Claude Opus 4.6 and Claude Sonnet 4.6 on Friday, as announced by the official Claude account. The rollout eliminates what had been one of the last meaningful paywalls in frontier AI: the ability to feed a model an entire codebase, a full document corpus, or a lengthy agent session history in a single pass. It's available on all plans, including through Claude Code, at standard pricing — no upcharges, no token tiers.

The benchmark numbers tell one story: Opus 4.6 scores 78.3% on MRCR v2 at a million tokens, which Anthropic claims is the highest among frontier models. MRCR v2 (Multi-Round Co-reference Resolution) is designed to stress-test a model's ability to find and synthesize information scattered across vast inputs. It's not a vanity metric: for developers loading entire repositories or running long-lived agents, it's a proxy for whether the model actually remembers what you told it 800,000 tokens ago.
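Whether a given codebase actually fits in that window is worth sanity-checking before sending anything. Here is a minimal sketch, assuming a rough four-characters-per-token heuristic; the real count depends on Anthropic's tokenizer, and the function names here are hypothetical, not part of any SDK:

```python
from pathlib import Path

# Assumption: ~4 characters per token, a common rule of thumb for
# English text and code. For exact counts, use the provider's own
# token-counting API rather than this heuristic.
CHARS_PER_TOKEN = 4
CONTEXT_BUDGET = 1_000_000  # the million-token window

def estimate_tokens(text: str) -> int:
    """Crude token estimate derived from character count."""
    return len(text) // CHARS_PER_TOKEN

def codebase_fits(root: str, extensions=(".py", ".md")) -> tuple[int, bool]:
    """Sum estimated tokens for matching files under `root` and report
    whether the total fits inside a one-million-token context window."""
    total = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in extensions:
            total += estimate_tokens(path.read_text(errors="ignore"))
    return total, total <= CONTEXT_BUDGET
```

A repository that comes in well under the budget can be sent in a single pass; one that overshoots still needs chunking or retrieval, million-token window or not.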
