This post proposes a third scenario beyond the usual AI-adoption success or failure stories: open source models running locally could come to dominate AI. The key arguments: open source models reach parity with frontier models within months of release; frontier providers face unsustainable unit economics (OpenAI projecting $14B in losses in 2026, Anthropic's $200/month subscription costing up to $5,000 in compute); small specialized models are emerging as prices rise; and Apple is making a contrarian bet on local inference rather than datacenter buildout. Data from Epoch AI and Stanford HAI show the gap between open and closed models shrinking to roughly a 3-month lag and a 1.7% Elo difference, respectively. Local models offer a compelling "fast, private, and free" value proposition that remains underappreciated, in part because no one profits from their success.
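The subscription-economics claim above is a simple arithmetic argument. A minimal sketch, using only the figures quoted in the post ($200/month price, up to $5,000/month in compute for a heavy user); the 5% heavy-user share and $50 light-user cost below are illustrative assumptions, not data from the post:

```python
# Back-of-envelope unit economics for a flat-rate AI subscription.
# Price and heavy-user compute cost come from the article; the user mix
# and light-user cost are assumptions for illustration only.

def monthly_margin(subscription_price: float, compute_cost: float) -> float:
    """Provider margin per subscriber per month (negative = loss)."""
    return subscription_price - compute_cost

price = 200.0             # $/month subscription (figure quoted in the post)
heavy_user_cost = 5000.0  # $/month compute, heavy user (post's upper bound)

print(f"Margin on a heavy user: ${monthly_margin(price, heavy_user_cost):,.0f}/month")
# → Margin on a heavy user: $-4,800/month

# Even a small share of heavy users can dominate the blended economics:
heavy_share = 0.05        # assumed: 5% of subscribers are heavy users
light_user_cost = 50.0    # assumed: compute cost for a typical user
blended = (heavy_share * monthly_margin(price, heavy_user_cost)
           + (1 - heavy_share) * monthly_margin(price, light_user_cost))
print(f"Blended margin per subscriber: ${blended:,.2f}/month")
# → Blended margin per subscriber: $-97.50/month
```

Under these assumptions, a flat-rate plan loses money overall even though 95% of users are profitable, which is the unit-economics problem the post points to.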

5 min read · From tombedor.dev
Table of contents

- Open source models keep up
- Remote providers increase prices (or degrade subscription value)
- Apple is betting on local
- Private and free is hard to beat
- Appendix
- Footnotes
