This post explores the exploitation of generative AI apps using prompt injection techniques, discussing the different versions of prompt injection attacks and the potential for leaking confidential or protected information through these attacks.

8m read timeFrom infosecwriteups.com
Post cover image
Table of contents
Exploiting Generative AI Apps With Prompt InjectionWhat is Prompt Injection?The SetupWutsTheTea v1.0 — Exploitation via Basic PromptsWutsTheTea v2.0 — Exploitation via Prompt InjectionConclusion

Sort: