Large language models function as randomized algorithms, deliberately introducing non-determinism through probabilistic token selection rather than always choosing the highest-probability output. This design makes LLMs robust against adversarial attacks by preventing repeatable failures on specific inputs, though it trades off reproducibility of any individual output.
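As a minimal illustration of probabilistic token selection, the sketch below contrasts greedy decoding (always the highest-scoring token) with temperature-based sampling. It is a toy example over made-up logits, not any model's actual decoder; the function name and values are assumptions for illustration.

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Pick a token index from raw logits (toy sketch).

    temperature == 0 -> greedy: always the highest-logit token.
    temperature > 0  -> softmax sampling; larger values flatten the
    distribution, making lower-probability tokens more likely.
    """
    if temperature == 0.0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max before exp for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=weights)[0]

# Hypothetical scores for a 3-token vocabulary.
logits = [2.0, 1.0, 0.1]
print(sample_token(logits, temperature=0.0))  # deterministic: always 0
print(sample_token(logits, temperature=1.0))  # varies from run to run
```

With temperature 0 the same prompt always yields the same token, so a failure on one input repeats forever; with any positive temperature the choice is randomized, which is the robustness property the article describes.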
Table of contents

Randomized Algorithms and Adversaries
Large Language Models
Analyzing the Randomness
What About Creativity?
Obtuseness
Temperature
What This Means Practically
Conclusion