This post explores the threat of model deserialization issues in machine learning, specifically backdooring Keras models. It provides a step-by-step guide to detecting and mitigating backdoored model files, and recommends integrating detection tooling into MLOps pipelines.
Table of contents
- Revisiting Husky AI
- Loading the Original Model
- Adding a Custom Layer
- Saving the Backdoored Model File
- Simulating the Attack
- Checking the Result
- Inspecting and Identifying Backdoored Models
- Mitigations and Detections
- Conclusion
- References
- Appendix