Kilo AI made the Kimi K2.5 model free for a week and saw three times the expected usage, exceeding 50 billion tokens daily. The model quickly became a top performer in Architect mode for system design tasks. While automatic context caching reduces input costs by 75%, the model's verbosity undermines those savings: it generates 2.5x more tokens than
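To see how a large caching discount can still lose to verbosity, here is a minimal cost sketch. All prices, token counts, and the helper function are hypothetical assumptions for illustration only, not Kilo AI's or Kimi K2.5's actual pricing; only the 75% cache discount and the 2.5x output multiplier come from the text above.

```python
def request_cost(input_tokens, output_tokens,
                 input_price, output_price, cache_discount=0.0):
    """Cost of one request; cache_discount is the fraction of input cost saved."""
    input_cost = input_tokens * input_price * (1 - cache_discount)
    return input_cost + output_tokens * output_price

# Hypothetical per-token prices (assumption, not real pricing).
IN_PRICE, OUT_PRICE = 1.0e-6, 3.0e-6

# Baseline: no caching, 1x output volume (assumed request shape).
baseline = request_cost(10_000, 2_000, IN_PRICE, OUT_PRICE)

# Verbose model: 75% of input cost cached away, but 2.5x the output tokens.
verbose = request_cost(10_000, int(2_000 * 2.5), IN_PRICE, OUT_PRICE,
                       cache_discount=0.75)

print(f"baseline: ${baseline:.4f}")  # input 0.0100 + output 0.0060
print(f"verbose:  ${verbose:.4f}")   # input 0.0025 + output 0.0150
```

Under these assumed numbers the verbose request costs more overall ($0.0175 vs $0.0160) despite the caching discount, because output tokens, which caching does not discount, dominate the bill.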