Gemma 4 is now available via the Gemini API and Google AI Studio under the Apache 2.0 license. It comes in two variants: a 31B dense model with a 256K-token context window, and a 26B Mixture-of-Experts (MoE) model that activates only ~4B parameters per inference. Both support native multimodal inputs (images + text) and chain-of-thought reasoning.
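To make the "26B total, ~4B active" MoE figure concrete, here is a minimal sketch of the usual active-parameter arithmetic: every token passes through the always-on shared weights plus only the top-k routed experts. The expert count and routing width below are hypothetical illustration values, not published Gemma 4 internals.

```python
def moe_active_params(shared: float, expert_total: float,
                      num_experts: int, top_k: int) -> float:
    """Parameters touched per token in a top-k routed MoE:
    always-on shared weights plus the top_k experts each token is routed to."""
    return shared + expert_total * top_k / num_experts

# Hypothetical split: 2B shared weights + 24B spread over 24 experts,
# top-2 routing -> 2B + 24B * 2/24 = 4B active out of 26B total.
print(moe_active_params(2e9, 24e9, num_experts=24, top_k=2))
```

This is why an MoE model can match a much larger dense model's capacity while keeping per-token compute close to that of a ~4B dense model.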

4 min read · From dev.to
Table of contents

- The Models: Apache 2.0, MoE, and 256k Context
- Multimodal Inputs + Chain of Thought
- Shipping the code
- Go build open-source things!
