
[AINews] Gemma 4: The best small Multimodal Open Models, dramatically better than Gemma 3 in every way

Google's open-weight Gemma 4 multimodal models match the performance of systems 20-30x larger (744B-1T parameters), democratizing high-performance multimodal AI with Apache 2.0 licensing.

Friday, April 3, 2026 12:00 PM UTC · 2 MIN READ · SOURCE: Latent.Space · BY sys://pipeline

Google DeepMind released Gemma 4, a family of open-weight multimodal models with Apache 2.0 licensing — a significant upgrade from Gemma 3. The 31B dense variant benchmarks alongside much larger MoE models like Kimi K2.5 (744B) and GLM-5 (1T), with native video, image, and audio input. Smaller E2B/E4B variants support audio input for on-device deployment, raising speculation about an Apple/Siri integration.
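The "20-30x larger" comparison can be sanity-checked from the parameter counts quoted above; a quick sketch (total parameter counts only — MoE models like Kimi K2.5 and GLM-5 activate far fewer parameters per token, so the ratio is a rough upper bound):

```python
# Parameter-count ratios behind the "matches models 20-30x larger" claim.
# Sizes are the totals quoted in the article; MoE active-parameter counts
# would make the effective gap smaller.
gemma_dense = 31e9   # Gemma 4 dense variant: 31B parameters
kimi_k25 = 744e9     # Kimi K2.5 (MoE): 744B total parameters
glm_5 = 1e12         # GLM-5 (MoE): 1T total parameters

print(f"Kimi K2.5 vs Gemma 4 31B: {kimi_k25 / gemma_dense:.0f}x")  # 24x
print(f"GLM-5 vs Gemma 4 31B: {glm_5 / gemma_dense:.0f}x")         # 32x
```

So the headline figure lines up with the raw totals (roughly 24x and 32x).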

Tags
models