Two techniques to improve existing finetuned Indic LLMs: train with DoRA (rather than plain LoRA) and train with ORPO.
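To make the first suggestion concrete, here is a minimal NumPy sketch of the core idea behind DoRA (Weight-Decomposed Low-Rank Adaptation): the effective weight is split into a learned per-column magnitude and a direction formed by the frozen base weight plus a LoRA-style low-rank update. This is an illustrative toy, not the `peft` implementation; all names and shapes below are assumptions for the example.

```python
import numpy as np

def dora_merge(W0, A, B, m):
    """Merge a frozen base weight W0 with a low-rank update B @ A,
    then rescale each column to a learned magnitude m (the DoRA idea).

    W0: (d_out, d_in) frozen pretrained weight
    A:  (r, d_in), B: (d_out, r) low-rank adapter factors (LoRA-style)
    m:  (d_in,) learned per-column magnitude vector
    """
    V = W0 + B @ A                       # directional component, as in LoRA
    norms = np.linalg.norm(V, axis=0)    # column-wise L2 norms
    return m * (V / norms)               # magnitude * unit direction

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 2
W0 = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in)) * 0.01
B = np.zeros((d_out, r))                 # B starts at zero, as in LoRA
m = np.linalg.norm(W0, axis=0)           # m initialized to the column norms of W0

W = dora_merge(W0, A, B, m)
print(np.allclose(W, W0))                # prints True: at init, DoRA reproduces W0
```

During finetuning, A, B, and m are trained while W0 stays frozen, so the model can adjust update magnitude and direction separately, which is the property DoRA's authors credit for closing the gap to full finetuning.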

2 min read, from blog.gopenai.com
Table of contents
- 2 Techniques to Try Now
- Answer.AI - Efficient finetuning of Llama 3 with FSDP QDoRA
- Improving LoRA: Implementing Weight-Decomposed Low-Rank Adaptation (DoRA) from Scratch
- Fine-tune Llama 3 with ORPO
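The second suggestion, ORPO (Odds Ratio Preference Optimization), combines the supervised NLL loss on the chosen response with an odds-ratio penalty that pushes the chosen response above the rejected one, so no separate reference model is needed. Below is a minimal NumPy sketch of that loss on length-averaged log-probabilities; the function name and the `lam` weight are illustrative assumptions, not `trl`'s API.

```python
import numpy as np

def orpo_loss(logp_chosen, logp_rejected, lam=0.1):
    """Toy ORPO objective: SFT NLL on the chosen response plus a
    log-sigmoid penalty on the log odds ratio between chosen and
    rejected responses. Inputs are length-averaged log-probabilities.
    """
    def log_odds(logp):
        # odds(y) = p / (1 - p); log1p(-exp(logp)) computes log(1 - p)
        return logp - np.log1p(-np.exp(logp))

    ratio = log_odds(logp_chosen) - log_odds(logp_rejected)
    # -log sigmoid(ratio): small when the chosen response is far more likely
    l_or = -np.log(1.0 / (1.0 + np.exp(-ratio)))
    return -logp_chosen + lam * l_or

# A model that clearly prefers the chosen response incurs a lower loss
confident = orpo_loss(np.log(0.9), np.log(0.1))
indifferent = orpo_loss(np.log(0.5), np.log(0.5))
print(confident < indifferent)  # prints True
```

In practice the chosen/rejected pairs come from a preference dataset, and the same single forward pass serves both the SFT term and the odds-ratio term, which is what makes ORPO cheaper than DPO-style pipelines.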
