This post explains the concept of Low-Rank Adaptation (LoRA) in Stable Diffusion, a lightweight training technique for fine-tuning large language and diffusion models. LoRA reduces the number of trainable parameters, resulting in faster training times and smaller file sizes. It is compared to checkpoint models and Textual Inversion embeddings.
Table of contents
- Overview
- What Is Low-Rank Adaptation
- Checkpoint or LoRA?
- Examples of LoRA models
- Further Readings
- Summary
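To make the parameter savings mentioned above concrete, here is a minimal sketch (not taken from the article) of the low-rank idea: instead of updating a full weight matrix, a LoRA layer freezes the original weights and trains two small matrices whose product forms the update. The class name `LoRALinear` and the dimensions are illustrative assumptions, not the article's code.

```python
# Minimal LoRA sketch: freeze W and learn a low-rank update B @ A with rank r.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # original weights stay frozen
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Only these two small matrices are trained.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), rank=4)
    full_params = 768 * 768  # trainable weights in a full fine-tune of this layer
    lora_params = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(f"full fine-tune: {full_params:,}  LoRA: {lora_params:,}")
    # -> full fine-tune: 589,824  LoRA: 6,144
```

Because only the two low-rank matrices are saved, the resulting file is a small fraction of a full checkpoint, which is the practical appeal described in the post.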