A Domino Data Lab engineer walks through building an explainable credit risk AI application that pairs an XGBoost classifier with an agentic AI system. The agent combines SHAP feature importance, population benchmarking, risk-threshold flagging, and feature analysis to generate plain-language explanations for loan decisions, going beyond simple score summarization to active reasoning. The build was completed in hours on Domino's integrated platform: Claude Code handled code generation, while Domino provided versioned datasets, reproducible environments, experiment tracking, a model registry, and governed compliance bundles aligned with the EU AI Act's high-risk requirements for credit scoring.
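The agent behaviors named above (risk-threshold flagging, population benchmarking, and turning SHAP attributions into plain language) can be sketched as follows. This is a minimal, stdlib-only illustration under stated assumptions, not the article's implementation: the threshold, population medians, and `explain_decision` helper are all hypothetical, and the per-feature SHAP values are assumed to be precomputed by an explainer over the XGBoost model.

```python
# Hypothetical sketch of the agent's explanation step. Assumes SHAP values
# for one applicant are already computed; all names and numbers below are
# illustrative, not from the original article.

FLAG_THRESHOLD = 0.6  # assumed score cutoff that triggers manual review

# Assumed population medians used to benchmark an applicant's features.
POPULATION_MEDIANS = {"debt_to_income": 0.28, "credit_utilization": 0.30}

def explain_decision(score, shap_values, applicant):
    """Turn a risk score plus per-feature SHAP attributions into notes."""
    notes = []
    # Risk-threshold flagging: surface high scores for human review.
    if score >= FLAG_THRESHOLD:
        notes.append(
            f"Flagged for review: risk score {score:.2f} >= {FLAG_THRESHOLD}."
        )
    # Feature analysis: rank features by absolute SHAP contribution.
    for name, contrib in sorted(shap_values.items(), key=lambda kv: -abs(kv[1])):
        direction = "raised" if contrib > 0 else "lowered"
        line = f"{name} {direction} the risk score (SHAP {contrib:+.2f})"
        # Population benchmarking: compare the applicant to a cohort median.
        if name in POPULATION_MEDIANS:
            median = POPULATION_MEDIANS[name]
            rel = "above" if applicant[name] > median else "below"
            line += (
                f"; applicant value {applicant[name]:.2f} is {rel} "
                f"the population median {median:.2f}"
            )
        notes.append(line + ".")
    return notes

notes = explain_decision(
    score=0.72,
    shap_values={"debt_to_income": 0.15, "credit_utilization": -0.04},
    applicant={"debt_to_income": 0.41, "credit_utilization": 0.22},
)
for n in notes:
    print(n)
```

In a real deployment these notes would be drafted by the agent from live SHAP output rather than templated strings, but the structure (flag, rank, benchmark, verbalize) mirrors the reasoning steps the article describes.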

12-minute read · From domino.ai
Table of contents

- How can systems that use generative AI have explainability?
- What it actually took to build this foundation
- What does compliance actually look like in practice?
- What a connected platform changes
