Prompt injection and jailbreaking are two different types of attacks with different implications and risks. Prompt injection is a class of attacks against applications built on Large Language Models (LLMs) that involve concatenating untrusted user input with a trusted prompt. Jailbreaking, on the other hand, is a class of
Sort: