Deep Evidential Regression (DER) is a method for uncertainty quantification in neural networks that enables single-pass estimation of both epistemic and aleatoric uncertainty. Unlike deep ensembles or variational inference, DER trains a network to output parameters of the Normal Inverse Gamma (NIG) distribution, from which uncertainty estimates are derived analytically. The post covers the theory behind epistemic vs. aleatoric uncertainty, the NIG distribution, the evidential loss function, and a PyTorch implementation approximating a cubic function. Limitations include difficulty fully disentangling uncertainty types and sensitivity to the regularization hyperparameter.
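As a minimal sketch of the idea (names and layer sizes are illustrative, not from the post): a regression head outputs the four NIG parameters, with softplus activations enforcing the positivity constraints, and both uncertainty types then follow in closed form from those parameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Hypothetical evidential regression head: maps features to the four
    NIG parameters (gamma, nu, alpha, beta) in a single forward pass."""

    def __init__(self, in_features: int):
        super().__init__()
        self.fc = nn.Linear(in_features, 4)

    def forward(self, x):
        gamma, log_nu, log_alpha, log_beta = self.fc(x).chunk(4, dim=-1)
        nu = F.softplus(log_nu)            # nu > 0
        alpha = F.softplus(log_alpha) + 1  # alpha > 1
        beta = F.softplus(log_beta)        # beta > 0
        return gamma, nu, alpha, beta

def uncertainties(nu, alpha, beta):
    # Analytic NIG moments:
    #   aleatoric = E[sigma^2]  = beta / (alpha - 1)
    #   epistemic = Var[mu]     = beta / (nu * (alpha - 1))
    aleatoric = beta / (alpha - 1)
    epistemic = beta / (nu * (alpha - 1))
    return aleatoric, epistemic
```

This single-pass property is what distinguishes DER from ensembles or MC dropout: no repeated forward passes are needed, since the NIG parameters already encode a distribution over the Gaussian likelihood's mean and variance.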

11 min read · From towardsdatascience.com
Table of contents
- What is Uncertainty and Why is it Important?
- Formalizing Uncertainty and Uncertainty Quantification (UQ) Approaches
- DER Theory
- Evidential Deep Learning Cubic Example
- Conclusions
