Researchers have conducted a prompt hacking competition to study prompt injection attacks and have identified the most common successful strategy. The results reveal the vulnerabilities of large language models to prompt hacking.
Sort: