A Two-Person Lab Open-Sources a Text-to-Video Model Under Apache 2.0
Linum, a self-described two-person AI lab, released an open-weight, 2-billion-parameter text-to-video model under the Apache 2.0 license — the kind of release that would have been unthinkable from a team this size even a year ago.