Garak is a vulnerability scanner for large language models (LLMs) that probes for weaknesses such as hallucination, data leakage, and prompt injection. It supports a wide range of models from platforms including Hugging Face, OpenAI, and Replicate, and can be installed via PyPI or from its GitHub repository.
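A minimal install-and-run sketch, assuming the package is published on PyPI under the name `garak` and that the model type, model name, and probe names shown (`huggingface`, `gpt2`, `encoding`) are valid options in your installed version; check `garak --help` and the project README for the current flags:

```shell
# Install garak from PyPI (or clone the GitHub repository instead)
pip install garak

# List the available probe plugins
garak --list_probes

# Scan a Hugging Face model with the encoding-based injection probes
# (model/probe names here are illustrative; substitute your own target)
garak --model_type huggingface --model_name gpt2 --probes encoding
```

Results and logs are written to a report file whose location garak prints at the end of the run.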

From github.com · 10 min read
Table of contents

- Get started
- LLM support
- Install
- Getting started
- Examples
- Reading the results
- Intro to generators
- Intro to probes
- Logging
- How is the code structured?
- Developing your own plugin
- FAQ
- Citing garak
