A Domino Data Lab engineer walks through building an explainable credit risk AI application that pairs an XGBoost classifier with an agentic AI system. The agent combines SHAP feature importance, population benchmarking, risk threshold flagging, and feature analysis to generate plain-language explanations for loan decisions.
Table of contents
- How can systems that use generative AI have explainability?
- What it actually took to build this foundation
- What does compliance actually look like in practice?
- What a connected platform changes
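To give a feel for the attribution idea behind SHAP before diving in: the sketch below computes exact Shapley values for a toy, hand-written credit score over four features, then prints a plain-language summary of each feature's contribution. All feature names, weights, and the model itself are illustrative assumptions, not the article's actual XGBoost model or the `shap` library.

```python
from itertools import combinations
from math import factorial

# Toy "credit score" over four applicant features (illustrative only).
FEATURES = ["income", "debt_ratio", "credit_history", "num_delinquencies"]
BASELINE = {"income": 50_000, "debt_ratio": 0.4,
            "credit_history": 10, "num_delinquencies": 1}

def model(x):
    """Score an applicant; absent features fall back to population baseline."""
    v = {f: x.get(f, BASELINE[f]) for f in FEATURES}
    return (0.00001 * v["income"] - 2.0 * v["debt_ratio"]
            + 0.05 * v["credit_history"] - 0.8 * v["num_delinquencies"])

def shapley_values(x):
    """Exact Shapley attribution: each feature's average marginal
    contribution across all subsets (feasible here: only 2^4 subsets)."""
    n = len(FEATURES)
    phi = {}
    for f in FEATURES:
        others = [g for g in FEATURES if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = model({g: x[g] for g in subset + (f,)})
                without_f = model({g: x[g] for g in subset})
                total += weight * (with_f - without_f)
        phi[f] = total
    return phi

applicant = {"income": 30_000, "debt_ratio": 0.7,
             "credit_history": 2, "num_delinquencies": 4}
for feat, val in sorted(shapley_values(applicant).items(),
                        key=lambda kv: abs(kv[1]), reverse=True):
    direction = "lowered" if val < 0 else "raised"
    print(f"{feat}: {direction} the score by {abs(val):.3f}")
```

The key property an explanation agent relies on is that the attributions sum exactly to the gap between the applicant's score and the baseline score, so every point of deviation is accounted for by some feature. Production tools like the `shap` package apply the same idea efficiently to tree ensembles such as XGBoost.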