Zed's team details how they built Zeta2, their improved edit prediction model. Key improvements include richer input context (finer-grained edit history, LSP-resolved type/symbol definitions), a switch from Qwen 2.5 Coder (7B) to Seed Coder (8B) as the base model, and a knowledge distillation pipeline using Claude Sonnet as the teacher.
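Since Claude Sonnet is only reachable through an API (no access to its logits), distillation in this setting is presumably sequence-level: the teacher's predicted edits become supervised fine-tuning targets for the smaller student model. Below is a minimal sketch of that data-collection step; `teacher_predict_edit` and `build_distillation_set` are hypothetical names, not Zed's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class EditExample:
    """One distillation example: editor context in, next edit out."""
    context: str      # e.g. surrounding code, edit history, LSP-resolved symbols
    target_edit: str  # the teacher's predicted edit, used as the training label

def teacher_predict_edit(context: str) -> str:
    """Hypothetical wrapper around the teacher model (here, Claude Sonnet).
    In practice this would call the model's API and parse the edit out
    of its response."""
    raise NotImplementedError

def build_distillation_set(contexts: list[str]) -> list[EditExample]:
    """Sequence-level distillation: with an API-only teacher there are no
    logits to match, so the teacher's generated edits simply become
    supervised fine-tuning targets for the student."""
    return [EditExample(c, teacher_predict_edit(c)) for c in contexts]
```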

6 min read · From zed.dev
Table of contents
- Knowledge distillation
- Collecting the right training data
- The reversal problem
- Switching the base model
- How to know when it's time to ship
- What's next