Large language models (LLMs) and conversational AI have great potential to make applications easier to use. Conviva shares its experience of building a conversational Q&A solution with LLMs, its choice of open source models, and its hybrid approach of combining fine-tuning with retrieval-augmented generation (RAG) to improve answer quality.

8-minute read, from thenewstack.io
Table of contents

- Host or Use an API?
- What’s the Right Language Model?
- Fine-Tuning, RAG or Both?
- Our Approach: Combining Fine-Tuning With RAG
- Conclusions
