A step-by-step guide to building a mood-based movie recommendation engine using voyage-4-nano embeddings, Hugging Face datasets, and MongoDB Atlas Vector Search. The tutorial covers setting up a local MongoDB Atlas deployment, generating text embeddings with Sentence Transformers, indexing 500 movies with plot embeddings, and exposing a FastAPI search endpoint that accepts natural-language mood descriptions. It also explores how Matryoshka embedding dimensions (256, 512, 1024, and 2048) affect the trade-off between retrieval quality and latency, with concrete similarity-score comparisons.
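The Matryoshka property mentioned above means an embedding can be truncated to its leading dimensions and re-normalized, trading accuracy for speed. A minimal sketch of that trade-off, using random vectors as stand-ins for plot embeddings (the data and helper names here are illustrative, not from the tutorial):

```python
import numpy as np

def truncate_embedding(vec, dim):
    """Keep the first `dim` dimensions of a Matryoshka-style embedding
    and re-normalize so cosine similarity remains well-defined."""
    v = np.asarray(vec, dtype=np.float64)[:dim]
    return v / np.linalg.norm(v)

def cosine(a, b):
    # Both inputs are unit-normalized, so the dot product is the cosine.
    return float(np.dot(a, b))

# Toy 2048-d vectors standing in for two similar movie-plot embeddings.
rng = np.random.default_rng(42)
full_a = rng.normal(size=2048)
full_b = full_a + 0.1 * rng.normal(size=2048)  # a near-duplicate of full_a

# Similarity stays stable as dimensions shrink; search gets cheaper.
for dim in (256, 512, 1024, 2048):
    a = truncate_embedding(full_a, dim)
    b = truncate_embedding(full_b, dim)
    print(dim, round(cosine(a, b), 4))
```

With a real Matryoshka-trained model the leading dimensions carry most of the semantic signal, which is why 256-d search is often a reasonable latency/quality compromise.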
Table of contents
- Why mood-based search?
- Architecture overview
- Understanding voyage-4-nano
- Configure the development environment
- Implement system components
- Project structure
- Run and test
- Comparing embedding dimensions
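The search endpoint outlined above ultimately issues a MongoDB Atlas `$vectorSearch` aggregation. A minimal sketch of building that pipeline, where the index name (`plot_vector_index`), field path (`plot_embedding`), and the query vector itself are assumptions you would replace with your own deployment's values:

```python
def build_mood_search_pipeline(query_vector, index_name="plot_vector_index",
                               path="plot_embedding", limit=5):
    """Build an Atlas Vector Search aggregation pipeline for a mood query.

    `query_vector` would come from embedding the user's mood description
    with the same model used to index the movie plots.
    """
    return [
        {"$vectorSearch": {
            "index": index_name,          # hypothetical index name
            "path": path,                 # hypothetical embedding field
            "queryVector": query_vector,
            "numCandidates": limit * 20,  # oversample for better recall
            "limit": limit,
        }},
        {"$project": {
            "_id": 0,
            "title": 1,
            "plot": 1,
            "score": {"$meta": "vectorSearchScore"},
        }},
    ]

# A dummy 256-d query vector; a real one comes from the embedding model.
pipeline = build_mood_search_pipeline([0.1] * 256)
```

The pipeline would then be passed to `collection.aggregate(pipeline)` on a collection whose vector index matches the chosen embedding dimension.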