This post discusses the challenges of prompt injection in large language model (LLM) applications and how attackers can create conditional prompt injection payloads. It explores the use of conditional instructions in emails and the impacts of successful prompt injections.
Table of contents
- Prompt Injection Exploit Development
- Who Am I?
- Copilot And Indirect Prompt Injections
- Conditional Instructions For Specific Users
- More Advanced Conditional Instructions
- The Impact of Successful Prompt Injections
- Responsible Disclosure
- Conclusion