Sweep Next-Edit is a 1.5B-parameter model that predicts your next code edit before you make it, running locally in under 500 ms. Based on Qwen2.5-Coder and quantized to Q8_0 GGUF format, it supports an 8192-token context window and outperforms models 4x its size on next-edit benchmarks. It is available under the Apache 2.0 license, with a JetBrains plugin for editor integration.

From huggingface.co