The post provides a comprehensive overview of diffusion models with minimal mathematical notation. It explains how these models learn to generate images from noise, describes their architecture and the forward and reverse diffusion processes, and highlights their applications in text-to-image generation. Illustrations and worked examples help build intuition for the probability distributions and noise transformations involved.

Table of contents
- Intro
- What exactly is it that diffusion models learn?
- Dataset visualization
- How and why do diffusion models work?
- Once you’ve trained a model, how do you get useful stuff out of it?
- Testing the conditioned diffusion model
- Further remarks
- Further reading
- Fun extras