An analysis of why AI progress hasn't slowed despite the theoretical expectation that longer-horizon reinforcement learning tasks require exponentially more compute. Three main explanations are offered: labs may be achieving massive efficiency gains by eliminating training bugs (like GPT-4's FP16 summation error); human intuitions about AI intelligence are unreliable, especially as models approach human-level performance; and raw intelligence is only one of many capability-determining traits — factors like persistence, working memory, and agentic familiarity matter too. The post argues AI development is dominated by discontinuous 'lightning strikes' rather than smooth scaling curves, making general slowdown predictions unreliable.

6-minute read · From seangoedecke.com
Table of contents:
- What’s in a FLOP?
- People are bad at judging intelligence
- Intelligence is not the sole determinant of capability
- Final thoughts
