Spyware Injection Into Your ChatGPT's Long-Term Memory (SpAIware) · Embrace The Red
This title could be clearer and more informative.Try out Clickbait Shieldfor free (5 uses left this month).
The post details a security vulnerability in ChatGPT's macOS app, where attackers could exploit prompt injection to insert spyware into ChatGPT’s long-term memory. Though OpenAI has released a fix, users are advised to update their apps and regularly review stored memories for any suspicious content. The method involves injecting malicious instructions via untrusted websites, leading to continuous data exfiltration. Additional measures like reviewing the system’s memory settings and using temporary chats are recommended for added safety.
Table of contents
Background InformationHacking Memories to Store Malicious InstructionsPersisting Data Exfiltration Instructions in ChatGPT’s MemoryEnd-to-End Exploit DemonstrationStep by Step ExplanationIs url_safe a holistic fix?Is Hacking Memories via Prompt Injection Fixed?ConclusionDisclosure TimelineReferencesAppendix2 Comments
Sort: