Large language models function as randomized algorithms, deliberately introducing non-determinism through probabilistic token selection rather than always choosing the highest-probability output. This design makes LLMs robust against adversarial attacks by preventing repeatable failures on specific inputs, though it trades off
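The probabilistic token selection described above can be sketched as temperature-scaled softmax sampling. This is a minimal illustration, not the article's own code; the logit values and function names are hypothetical.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample a token index from logits via temperature-scaled softmax.

    As temperature -> 0 this approaches greedy (argmax) decoding;
    temperature = 1 samples from the model's distribution unchanged.
    Illustrative sketch only -- names and values are not from the article.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.1]  # toy next-token scores

# Greedy decoding always returns the same token for the same input...
greedy = max(range(len(logits)), key=lambda i: logits[i])

# ...while sampling at temperature 1 may pick any token, which is
# exactly the non-repeatability the article attributes to LLMs.
sampled = sample_token(logits, temperature=1.0)
```

At very low temperature the sampled distribution collapses onto the argmax token, which is why lowering temperature makes outputs more repeatable.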

18 min read · From towardsdatascience.com
Table of contents
- Randomized Algorithms and Adversaries
- Large Language Models
- Analyzing the Randomness
- What About Creativity?
- Obtuseness
- Temperature
- What This Means Practically
- Conclusion
