This piece analyzes SR 26-2, the updated US federal model risk management guidance, through six key gaps it leaves unaddressed. The guidance excludes deterministic rule-based systems, spreadsheets, and generative/agentic AI from its scope, yet all three carry real institutional risk. Aggregate model risk is named but not operationalized. Meanwhile, the EU AI Act and US state laws are tightening in parallel. The shift to principles-based guidance moves the interpretive burden onto institutions, which must now articulate their own standards. Practical recommendations: govern GenAI under a parallel framework, track model dependencies as live graphs, and build governance architecture alongside deployment rather than retrofitting it later.
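To make the "model dependencies as live graphs" recommendation concrete, here is a minimal sketch of a directed dependency graph with a downstream-impact query. All model names (`credit_score`, `loan_pricing`, `capital_reserve`) and the class itself are illustrative assumptions, not anything prescribed by SR 26-2:

```python
from collections import defaultdict

class ModelDependencyGraph:
    """Illustrative directed graph of model-to-model dependencies.
    An edge A -> B means model B consumes model A's output."""

    def __init__(self):
        self.downstream = defaultdict(set)

    def add_dependency(self, upstream, consumer):
        # Record that `consumer` depends on `upstream`'s output.
        self.downstream[upstream].add(consumer)

    def impacted_by(self, model):
        """All models transitively downstream of `model`
        (iterative graph traversal)."""
        seen, stack = set(), [model]
        while stack:
            current = stack.pop()
            for dep in self.downstream[current]:
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen

# Hypothetical inventory: a credit model feeds pricing, which feeds reserves.
g = ModelDependencyGraph()
g.add_dependency("credit_score", "loan_pricing")
g.add_dependency("loan_pricing", "capital_reserve")
print(sorted(g.impacted_by("credit_score")))
# -> ['capital_reserve', 'loan_pricing']
```

Queried this way, a change or failure in one model surfaces every downstream model it can affect, which is one plausible route toward operationalizing the aggregate-risk concern the guidance names but does not frame.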

6 min read · From domino.ai
Table of contents

- Take 1. The model vs. rules engine line is drawn by code type, not by risk
- Take 2. Aggregate risk got a paragraph, not a framework
- Take 3. The spreadsheet carve-out is where errors actually happen
- Take 4. GenAI is out of SR 26-2 scope, not out of governance scope
- Take 5. US governance got lighter while everyone else tightened
- Take 6. The burden of proof just moved to the institution
- If you're leading AI and ML, the same regulation reads differently from your seat
- The real advantage
