A Domino Data Lab engineer walks through building an explainable credit risk AI application that pairs an XGBoost classifier with an agentic AI system. The agent uses SHAP feature importance, population benchmarking, risk threshold flagging, and feature analysis to generate plain-language explanations for loan decisions.
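The core idea described above is pairing a gradient-boosted classifier with per-feature attributions that can be turned into plain-language explanations. A minimal sketch of that pattern, using scikit-learn's `GradientBoostingClassifier` and permutation importance as a stand-in for the article's XGBoost + SHAP stack (the feature names and dataset here are illustrative, not from the article):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a credit dataset; feature names are made up.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "credit_history_len", "num_late_payments"]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Rank features by how much shuffling each one hurts accuracy --
# a crude global proxy for SHAP-style attribution.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda t: -t[1])

# A plain-language summary an agent could build on top of the ranking.
top_feature, _ = ranked[0]
print(f"The strongest driver of this model's decisions is '{top_feature}'.")
```

In the article's setup, SHAP would replace permutation importance to give per-applicant (local) attributions rather than a single global ranking, which is what makes individual loan decisions explainable.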

12 min read · From domino.ai
Table of contents
- How can systems that use generative AI have explainability?
- What it actually took to build this foundation
- What does compliance actually look like in practice?
- What a connected platform changes
