The ChatGPT memory package uses Redis as a vector database to cache historical user interactions per session. It embeds the conversation history, stores the embeddings in Redis, and then uses vector search to retrieve the past messages most relevant to the current context of the conversation. Those messages feed an adaptive prompt creation mechanism, so the chatbot's usable history is no longer bounded by the model's fixed context window.
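The flow above can be sketched as follows. This is a minimal, self-contained illustration of the idea, not the project's actual API: an in-memory dictionary stands in for Redis, and the deterministic `embed` function is a placeholder for a real embedding model. All names here (`SessionMemory`, `embed`, `build_prompt`) are illustrative.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    # Placeholder embedder: deterministic pseudo-random unit vector per text.
    # A real deployment would call an embedding model instead.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

class SessionMemory:
    """Caches embedded conversation turns per session and retrieves the
    most similar past turns to build an adaptive prompt."""

    def __init__(self):
        self.turns = {}  # session_id -> list of (text, embedding)

    def add(self, session_id: str, text: str) -> None:
        self.turns.setdefault(session_id, []).append((text, embed(text)))

    def relevant(self, session_id: str, query: str, k: int = 2) -> list:
        # Rank cached turns by cosine similarity (vectors are unit-norm,
        # so the dot product is the cosine similarity).
        q = embed(query)
        scored = sorted(self.turns.get(session_id, []),
                        key=lambda tv: float(tv[1] @ q), reverse=True)
        return [text for text, _ in scored[:k]]

    def build_prompt(self, session_id: str, query: str) -> str:
        # Inject only the most relevant history into the prompt.
        context = "\n".join(self.relevant(session_id, query))
        return f"Relevant history:\n{context}\n\nUser: {query}"

mem = SessionMemory()
mem.add("s1", "My name is Alice.")
mem.add("s1", "I live in Paris.")
prompt = mem.build_prompt("s1", "Where do I live?")
```

In the real project, the dictionary and similarity ranking are replaced by a Redis index with vector search, which scales the same retrieval step beyond a single process.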
Table of contents
- Why context length matters
- The architecture of the ChatGPT memory project
- Code walkthrough
- Example interactions
- Next steps