# unconstrained Opus-tier coding at ~1000 tok/s

if you read The Agent Labs Thesis, custom model training is the smallest part of what an ideal Agent Lab should do. i was initially skeptical that Cognition should invest much more in SWE-x beyond a defensive maneuver - you can't out-bitter-lesson the big labs, though for "most" or routing tasks you could still do it. i've genuinely been turned around on this. the Cognition for Governments business is scaling to billions in ARR, and having your own model, built on being the first and largest cloud coding agents business, is offense, not defense.

Cognition @cognition
We are sharing an early preview of our ongoing SWE-1.6 training run. It significantly improves upon SWE-1.5 while being post-trained on the same pre-trained model - and it runs equally as fast at 950 tok/s. On SWE-Bench Pro it exceeds top open-source models. The preview model still exhibits some undesirable behaviors like overthinking and excessive self-verification, which we aim to improve. We are rolling out early access to a small subset of users in Windsurf.