Traditional RAG systems often retrieve irrelevant contexts because questions are not semantically similar to their answers. HyDE mitigates this by having an LLM generate a hypothetical answer to the query and embedding that answer with a dense retriever such as Contriever to fetch more relevant contexts. While this improves retrieval performance, it comes at the cost of higher latency and additional LLM usage per query.
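The idea can be sketched in a few lines. This is a minimal illustration, not a production implementation: `generate_hypothetical_answer` is a stand-in for a real LLM call, and `embed` is a toy bag-of-words encoder standing in for a dense retriever like Contriever.

```python
import numpy as np
from zlib import crc32

def embed(text: str) -> np.ndarray:
    # Toy encoder: hash words into a fixed-size bag-of-words vector,
    # then L2-normalize so dot product equals cosine similarity.
    # A real HyDE setup would use a dense retriever (e.g. Contriever).
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[crc32(word.encode()) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def generate_hypothetical_answer(query: str) -> str:
    # Stand-in for an LLM prompted with something like
    # "Write a passage answering: {query}". Hard-coded for illustration.
    return "Paris is the capital of France and its largest city."

def hyde_retrieve(query: str, corpus: list[str], top_k: int = 1) -> list[str]:
    # 1. Generate a hypothetical answer (the extra LLM call that adds
    #    latency and token cost).
    hypothetical = generate_hypothetical_answer(query)
    # 2. Embed the hypothetical answer instead of the raw query, so the
    #    query vector lives in "answer space" alongside the documents.
    q_vec = embed(hypothetical)
    # 3. Rank corpus passages by cosine similarity to that vector.
    scores = [float(q_vec @ embed(doc)) for doc in corpus]
    ranked = sorted(zip(scores, corpus), key=lambda p: p[0], reverse=True)
    return [doc for _, doc in ranked[:top_k]]

corpus = [
    "Paris is the capital and most populous city of France.",
    "The Great Barrier Reef is the world's largest coral reef system.",
]
print(hyde_retrieve("What is the capital of France?", corpus))
```

The key design point is step 2: because the hypothetical answer shares vocabulary and style with real answer passages, its embedding tends to land closer to the relevant document than the raw question's embedding would.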