Amazon Q: Now with Helpful AI-Powered Self-Destruct Capabilities
A security researcher successfully injected malicious code into Amazon Q Developer through a pull request that was merged without proper review. The compromised AI coding assistant contained prompts designed to delete local files and AWS cloud resources using shell commands and the AWS CLI. Amazon quietly removed the malicious code in a later release, without a public disclosure.
Table of contents
- “Security Is Our Top Priority,” They Said With a Straight Face
- Let’s Talk About That Prompt
- “No Customer Resources Were Impacted.” According to… What, Exactly?
- The Pull Request That Came From Nowhere
- Amazon’s Response: Delete the Evidence, Issue a Platitude
- “But No Users Were Impacted” Is Doing a Lot of Work
- What We’ve Learned (Absolutely Nothing, But Here’s a List Anyway)
- This Isn’t New—And My Reaction Shouldn’t Be a Surprise