The ChatGPT memory package uses Redis as a vector database to cache historical user interactions per session. It stores an embedded conversation history in Redis and uses vector similarity search to retrieve the interactions most relevant to the current context, which drives an adaptive prompt-creation mechanism. Pairing Redis' vector database with the LLM chatbot in this way gives the chatbot an effectively unlimited memory of the conversation, despite the model's fixed context window.
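The retrieval step described above can be sketched with an in-memory stand-in for the Redis vector index. This is a minimal illustration, not the package's actual API: the function names (`cache_interaction`, `retrieve_context`) and the toy character-frequency embedding are hypothetical, and a real deployment would use a proper embedding model and let Redis perform the similarity search server-side.

```python
import math

# Hypothetical in-memory stand-in for a Redis vector index:
# each session caches (embedding, text) pairs for past interactions.
session_store: dict[str, list[tuple[list[float], str]]] = {}

def embed(text: str) -> list[float]:
    # Toy embedding (normalized character-frequency vector); a real
    # system would call an embedding model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cache_interaction(session_id: str, text: str) -> None:
    # Embed the interaction and append it to the session's history.
    session_store.setdefault(session_id, []).append((embed(text), text))

def retrieve_context(session_id: str, query: str, k: int = 2) -> list[str]:
    # Rank cached interactions by cosine similarity to the query
    # embedding; Redis vector search would do this server-side.
    q = embed(query)
    scored = [
        (sum(a * b for a, b in zip(vec, q)), text)
        for vec, text in session_store.get(session_id, [])
    ]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [text for _, text in scored[:k]]

cache_interaction("s1", "My favorite color is blue")
cache_interaction("s1", "I live in Berlin")
cache_interaction("s1", "The weather is rainy today")
context = retrieve_context("s1", "What color do I like?", k=1)
```

Only the top-`k` most relevant past interactions are injected into the prompt, which is how a fixed-size context window can front an unbounded conversation history.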

7-minute read · From redis.com
Table of contents
- Why context length matters
- The architecture of the ChatGPT memory project
- Code walkthrough
- Example interactions
- Next steps
