I-DLM (Introspective Diffusion Language Model) is a new approach to diffusion-based language models that closes the quality gap with autoregressive (AR) models. The core insight is that existing DLMs lack "introspective consistency": they generate tokens without verifying them, whereas AR models perform this verification implicitly. I-DLM introduces
Table of contents
- Abstract
- Why Introspective Consistency?
- The I-DLM Method
- Results
- Speedup Factor Explorer
- Documentation & Resources
- Citation