Only the Spark model is served through Cerebras; the speed optimizations for GPT-5.3-Codex are something different, and the best is yet to come speed-wise! https://t.co/BsYqWpirU5

Rafael Bittencourt @rafaelobitten
VibeCoders of the world, I only have one thing to say: OpenAI just built a coding monster. The speed jump in gpt-5.3-codex xhigh is massive. It was likely boosted by the OpenAI x Cerebras partnership, plus whatever the OpenAI Codex team cooked behind the scenes. And now that we can run multiple multi-agent setups with real depth (subagents calling subagents), Codex has turned into an absolute beast for shipping high-quality software. Yes, you will burn through your limits much faster. The 2x until April makes it easier for now, but after that I honestly worry the cap will evaporate quickly. Still, Codex’s biggest pain point, slow implementation of larger, more complex features, finally feels solved. Congrats to @OpenAI @OpenAIDevs @sama @gdb and the Codex team @thsottiaux @embirico. This is insane. PS: I’m running seven Pro accounts to squeeze every drop out of the doubled limit until April.