Google Open-Sources MedGemma 1.5 and MedASR, Putting 3D Medical Imaging and Clinical Dictation AI on Hugging Face

Google Research released two specialized medical AI models — one for analyzing CT and MRI scans in 3D, the other for clinical speech-to-text — both available as open weights on Hugging Face and Vertex AI. The move dramatically lowers the barrier to building healthcare AI that can run offline and on-premise, in environments where patient data can't leave the building.

Google Research announced the release of MedGemma 1.5 and MedASR, two open medical AI models designed for developers building healthcare applications, as @GoogleResearch detailed in a widely shared post. MedGemma 1.5 handles 3D medical imaging — CT scans, MRIs, the volumetric data that radiologists spend their careers learning to interpret. MedASR targets a different but equally critical bottleneck: clinical dictation, converting the rapid-fire speech of physicians documenting patient encounters into structured text. Both are available on Hugging Face and through Google's Vertex AI platform.

The significance here isn't that Google built good medical AI models. It's that they open-sourced them. Medical AI has long been trapped behind proprietary walls, locked into specific vendor ecosystems, and gated by the enormous cost of acquiring and labeling clinical datasets. By releasing these as open weights, Google is effectively subsidizing an entire ecosystem of health tech startups and hospital IT departments that couldn't afford to train foundation models on medical data themselves.
