LLMs do not have human-like memory. Knowledge is encoded into model parameters during training (parametric memory) and frozen afterward. Within a conversation, LLMs use a context window as temporary working memory, discarding it after each response. Applications like ChatGPT simulate memory by re-sending the conversation history with each request.
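This re-sending pattern can be sketched in a few lines of plain Python. The snippet below is a minimal illustration, not a real API: `call_llm` is a hypothetical stand-in for a stateless chat-completion endpoint, and the application-side `history` list is what creates the appearance of memory.

```python
# Minimal sketch of "simulated memory": the model itself is stateless,
# so the application re-sends the full conversation history every turn.
# `call_llm` is a hypothetical stand-in for a real chat-completion API.

def call_llm(messages):
    # Placeholder: a real call would send `messages` to a stateless model
    # and return its reply. Here we just report how much context it saw.
    return f"(model saw {len(messages)} messages)"

history = [{"role": "system", "content": "You are a helpful assistant."}]

for user_turn in ["My name is Ada.", "What is my name?"]:
    history.append({"role": "user", "content": user_turn})
    reply = call_llm(history)  # the entire history is re-sent each turn
    history.append({"role": "assistant", "content": reply})

print(len(history))  # system + 2 user + 2 assistant messages
```

Each turn the prompt grows, which is why long conversations eventually hit the context-window limit: the "memory" is just an ever-larger prompt, not state inside the model.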

From msdevbuild.com
Table of contents

- Training Is Not Memory
- LLMs Learn Patterns, Not Experiences
- Working Memory: The Context Window
- Why ChatGPT Feels Like It Has Memory
- Example of "Simulated Memory"
- How Systems Add Real Memory: RAG
- Key Takeaways: Do LLMs Have Memory?
