A developer shares two agentic skills (llm-prompts:reviewer and llm-prompts:builder) designed to help identify and prevent prompt injection vulnerabilities in LLM applications. The reviewer runs 51 checks drawn from OWASP LLM01:2025, MITRE ATLAS, and NVIDIA NeMo Guardrails against any codebase, rating findings as CRITICAL/HIGH/MEDIUM/LOW with remediation guidance. The builder generates secure system prompts and scaffolding code (Ruby, Python, JS/TS, Go) annotated with check IDs so every defense is documented. The author is transparent that this is not a static analysis tool and results can vary, but argues it raises the security floor for solo developers and small teams building LLM features without dedicated security expertise.

8m read timeFrom allaboutcoding.ghinda.com
Post cover image
Table of contents
The ProblemWhat I BuiltThe 51 Checks, BrieflyThe Builder: Focused on securityA Reference ImplementationTestingWhat This Is and Is NotResources

Sort: