AI-powered code assistants like GPT-4o offer productivity boosts, but their suggestions are not free of vulnerabilities. This post reviews two insecure recommendations GPT-4o made to fix a path traversal vulnerability in a Node.js application, showing how those suggestions fail in practice. It concludes that while LLMs may someday help with security work, they are not reliable security tools yet.
