The STOIC framework provides a structured approach to securing generative AI applications through five threat categories: Stolen (data/model theft), Tricked (prompt injection and adversarial manipulation), Obstructed (denial of service), Infected (model poisoning and backdoors), and Compromised (infrastructure vulnerabilities).
