How to trick ChatGPT into revealing Windows keys? I give up
This title could be clearer and more informative.Try out Clickbait Shieldfor free (5 uses left this month).
A security researcher discovered a method to bypass ChatGPT's safety guardrails by framing queries as a guessing game, successfully extracting real Windows product keys including one owned by Wells Fargo. The technique exploits the AI's logic flow by using the phrase 'I give up' as a trigger to reveal sensitive information that was inadvertently included in the training data. This highlights broader security concerns about sensitive data accidentally being incorporated into AI models through sources like GitHub repositories.
1 Comment
Sort: