This post discusses the use of Azure OpenAI's Prompt Shield to mitigate prompt injection attacks. It explores the concept of prompt injection, demonstrates how sensitive information can be leaked via basic prompts, and evaluates the effectiveness of Prompt Shield in preventing data leakage. The post concludes by highlighting

15m read time From systemweakness.com
Post cover image
Table of contents
Mitigating Prompt Injection via Azure OpenAI’s Prompt ShieldWhat is Prompt Injection?The SetupBasic OpenAI Chatbot — No ProtectionsBasic OpenAI Chatbot — With Prompt ShieldBasic OpenAI Chatbot — With Prompt Shield & Custom InstructionsConclusion

Sort: