Security researcher hxr1 demonstrated how attackers can exploit Windows' native AI capabilities using Living-off-the-Land (LOTL) techniques. The attack hides malicious code inside ONNX model files—a legitimate machine learning format—by embedding payloads in metadata, model components, or weights using steganography. Because these files are signed by Microsoft and treated as benign by default, they evade many endpoint detection systems. Attackers can distribute poisoned models through phishing emails or open-source hubs. Recommended mitigations include adapting security tools to scan AI model files, configuring EDRs to monitor model loading processes, implementing YARA rules for static analysis, and using application controls like AppLocker.
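A production scanner would parse the ONNX protobuf and inspect metadata, graph nodes, and weight tensors individually; as a minimal illustration of the static-analysis idea behind the YARA-style mitigation, the sketch below applies two generic heuristics to a model file's raw bytes: an embedded Windows PE header (`MZ`) and unusually high overall entropy. The function names and threshold are illustrative assumptions, not from the article, and entropy alone is a noisy signal because legitimate weight tensors are themselves high-entropy.

```python
import math
from collections import Counter

PE_MAGIC = b"MZ"  # DOS header magic of Windows executables

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; near 8.0 suggests packed or encrypted data."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def scan_model_bytes(blob: bytes, entropy_threshold: float = 7.5) -> list[str]:
    """Heuristic findings for a raw model file (illustrative sketch, not a full ONNX parser).

    A real tool would decode the ONNX protobuf and check each metadata field
    and initializer tensor separately, since whole-file entropy is dominated
    by legitimate floating-point weights.
    """
    findings = []
    # An MZ header anywhere past the file start hints at an embedded executable.
    if PE_MAGIC in blob[2:]:
        findings.append("embedded PE header (MZ) found")
    if shannon_entropy(blob) > entropy_threshold:
        findings.append("high overall entropy; possible packed payload")
    return findings
```

For example, a buffer with a PE header spliced into its middle triggers the first heuristic, while a clean low-entropy buffer yields no findings. Rules like these map directly onto YARA conditions (`$mz = { 4D 5A }` at a non-zero offset, plus `math.entropy` checks) for deployment in an EDR pipeline.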

3 min read · From aicyberinsights.com