Google Releases VaultGemma 1B: a 1 billion parameter model fully trained with differential privacy

Google released VaultGemma 1B, a 1 billion parameter language model that applies differential privacy throughout the entire training process rather than only during fine-tuning. The model uses a 26-layer decoder-only transformer architecture trained on 13 trillion tokens with strict privacy guarantees (ε ≤ 2.0). While it shows a measurable utility gap compared to non-private models of similar size, Google reports performance comparable to non-private models from roughly five years earlier, demonstrating that privacy-preserving pretraining at this scale is practical.
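Training with differential privacy of this kind typically relies on DP-SGD: each example's gradient is clipped to a fixed norm, the clipped gradients are averaged, and calibrated Gaussian noise is added before the update. The sketch below illustrates that core step in NumPy; the function name and hyperparameter values are illustrative and are not VaultGemma's actual training configuration.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One illustrative DP-SGD update (not VaultGemma's actual code):
    clip each per-example gradient, average, add Gaussian noise, step."""
    rng = rng if rng is not None else np.random.default_rng(0)

    # Clip each per-example gradient to L2 norm <= clip_norm.
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))

    # Average the clipped gradients over the batch.
    avg = np.mean(clipped, axis=0)

    # Add Gaussian noise scaled by the clip norm and batch size;
    # noise_multiplier controls the privacy/utility trade-off.
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_example_grads),
                       size=avg.shape)

    return params - lr * (avg + noise)
```

The clipping bounds any single example's influence on the update, which is what makes the Gaussian noise sufficient to yield a formal (ε, δ) guarantee when accumulated across training steps with a privacy accountant.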
