Gemma 4 is now available via the Gemini API and Google AI Studio under the Apache 2.0 license. It comes in two variants: a 31B-parameter dense model with a 256K-token context window, and a 26B Mixture-of-Experts model that activates only ~4B parameters per inference. Both support native multimodal inputs (images + text) and chain-of-thought reasoning. Google AI Studio lets you prototype visually and export working code in TypeScript, Python, Go, or cURL with one click, including automatic base64 image encoding and thinkingConfig parameters.
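To make the export concrete, here is a minimal sketch of what such a request body looks like. The field names (`inline_data`, `generationConfig`, `thinkingConfig`, `thinkingBudget`) follow the Gemini REST API's `generateContent` format; whether the Gemma 4 endpoint uses exactly the same shape is an assumption, so treat this as illustrative rather than copy-paste-ready. It builds the JSON payload locally without calling the API:

```python
import base64
import json

def build_generate_content_request(prompt: str, image_bytes: bytes,
                                   mime_type: str = "image/png",
                                   thinking_budget: int = 1024) -> str:
    """Build a Gemini-style generateContent JSON body with an inline
    base64 image and a thinkingConfig block. Field names mirror the
    Gemini REST API; adjust if the Gemma 4 endpoint differs."""
    body = {
        "contents": [{
            "parts": [
                {"text": prompt},
                {"inline_data": {
                    "mime_type": mime_type,
                    # The REST API expects image bytes as base64 text
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                }},
            ],
        }],
        "generationConfig": {
            # thinkingConfig bounds the chain-of-thought token budget
            "thinkingConfig": {"thinkingBudget": thinking_budget},
        },
    }
    return json.dumps(body)

# Stand-in bytes for illustration; in practice, read a real PNG from disk.
payload = build_generate_content_request("Describe this image.", b"\x89PNG fake")
```

This is exactly the plumbing AI Studio's one-click export handles for you: encoding the image and wiring up the thinking parameters.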

4 min read · From dev.to
Table of contents
- The Models: Apache 2.0, MoE, and 256k Context
- Multimodal Inputs + Chain of Thought
- Shipping the code
- Go build open-source things!
