Stanford Paper Shows LLMs Run at a Fraction of Their Creative Capacity — and a Single Prompt Fixes It

A new Stanford study demonstrates that RLHF-tuned models suffer from "mode collapse," which suppresses creative output. A technique called Verbalized Sampling boosts measured creativity by 2.1x by prompting models to surface low-probability outputs they would otherwise never produce.
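To make the idea concrete, here is a minimal sketch of what a Verbalized Sampling-style prompt could look like. The exact wording from the paper may differ; the prompt template, the `||` output format, and the helper names below are illustrative assumptions. The core idea: instead of asking for one answer, ask the model to verbalize several candidate responses with probabilities, then sample from that verbalized distribution.

```python
import random

def verbalized_sampling_prompt(task: str, k: int = 5) -> str:
    """Build a prompt asking for k candidate responses with probabilities.

    Illustrative template, not the paper's exact wording.
    """
    return (
        f"{task}\n\n"
        f"Generate {k} different responses to the task above. "
        "For each response, state the probability that you would produce it, "
        "so the probabilities sum to 1. Format each line as:\n"
        "<response> || <probability>"
    )

def sample_response(lines: list[str], rng: random.Random) -> str:
    """Parse '<response> || <probability>' lines and sample one response
    in proportion to its verbalized probability."""
    responses, weights = [], []
    for line in lines:
        text, _, prob = line.rpartition("||")
        responses.append(text.strip())
        weights.append(float(prob))
    return rng.choices(responses, weights=weights, k=1)[0]
```

Because the sampling step draws from the whole verbalized distribution, lower-probability (and typically more unusual) responses are selected some of the time, rather than the single highest-probability answer every time.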
