Friends Don't Let Friends Use Ollama

A detailed critique of Ollama, the popular local LLM runner, arguing it should be abandoned in favor of alternatives. The post documents Ollama's long-standing failure to credit llama.cpp (its core inference engine), more than 400 days of MIT license non-compliance, misleading model naming (e.g., labeling distilled DeepSeek-R1 variants as the full model), the shipping of a closed-source GUI app, a poorly executed fork of llama.cpp that introduced regressions and 1.8x performance gaps, a proprietary model registry that creates vendor lock-in, and a pivot toward cloud-hosted models with unclear privacy guarantees. The author recommends using llama.cpp directly, or switching to LM Studio, Jan, llama-swap, LiteLLM, koboldcpp, or Red Hat's ramalama.
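For readers weighing the "use llama.cpp directly" recommendation: llama.cpp ships a built-in server, `llama-server`, that exposes an OpenAI-compatible API. A minimal sketch (the model path and port here are illustrative placeholders, not values from the post):

```shell
# Start the llama.cpp server on a local GGUF model.
# ./models/your-model.gguf is a placeholder path.
llama-server -m ./models/your-model.gguf --port 8080

# Then query it like any OpenAI-compatible endpoint:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```

Because the endpoint speaks the OpenAI chat-completions format, most tooling that works with Ollama's API can be pointed at it with little change.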

14 min read · From sleepingrobots.com
Table of contents

- A llama.cpp Wrapper With Amnesia
- The Fork That Made Things Worse
- Misleading Model Naming
- The Closed-Source App
- The Modelfile: Reinventing a Solved Problem
- The Registry Bottleneck
- The Cloud Pivot
- The VC Pattern
- What To Use Instead
- The Bigger Picture
