LocalRAG is a self-hosted application for deploying a production-grade RAG chatbot on a local machine or server, keeping your data private and secure. It uses LangChain for backend processing, Streamlit for the frontend, Qdrant as the vector store, and Redis for storing chat messages. Prerequisites include
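To make the retrieve-then-generate flow concrete, here is a minimal, dependency-free sketch of the core RAG loop: embed a query, rank stored chunks by cosine similarity, and assemble a grounded prompt. In LocalRAG the embedding, vector search, and prompt assembly are handled by LangChain and Qdrant; the function and variable names below (`retrieve`, `build_prompt`, the toy vectors) are illustrative, not the project's actual API.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=2):
    # store: list of (chunk_text, embedding) pairs.
    # Return the k chunks whose embeddings are closest to the query.
    ranked = sorted(store, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, contexts):
    # Ground the LLM: answer only from the retrieved chunks.
    ctx = "\n".join(f"- {c}" for c in contexts)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{ctx}\n"
        f"Question: {question}"
    )

# Toy 2-d embeddings standing in for a real embedding model.
store = [
    ("Qdrant stores the document vectors", [1.0, 0.0]),
    ("Redis stores the chat history", [0.0, 1.0]),
]
top = retrieve([0.9, 0.1], store, k=1)
prompt = build_prompt("Where are vectors stored?", top)
```

In the real application, Qdrant replaces the in-memory `store` and performs the nearest-neighbour search server-side, while Redis supplies the prior chat turns that get prepended to the prompt.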

5 min read · From blog.gopenai.com
Code: awesome_llm_apps/LocalRAG at main · HemachandranD/awesome_llm_apps
