Garak, LLM Vulnerability Scanner

garak, the LLM vulnerability scanner, uses generative AI techniques to probe LLMs for weaknesses such as hallucination, data leakage, misinformation, and more. Acting like 'nmap for LLMs,' garak focuses on triggering failures in LLMs and dialog systems using adaptive probes. The tool is free, under active development, and supports multiple model backends, including Hugging Face and OpenAI, for comprehensive vulnerability assessment. Installation is straightforward via pip, and garak offers detailed documentation and user guides for beginners. With customizable probes and generators, garak provides a robust framework for security testing and continuous improvement.

https://github.com/NVIDIA/garak
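A minimal sketch of typical usage: installing via pip and running a probe against an OpenAI model. Flag names follow garak's documented CLI, but check them against your installed version; the model name here is just an illustrative choice.

```shell
# Install garak from PyPI
python -m pip install garak

# List the probes available in this installation
python -m garak --list_probes

# Example scan: run the encoding-injection probes against an OpenAI model
# (assumes the OPENAI_API_KEY environment variable is set)
python -m garak --model_type openai --model_name gpt-3.5-turbo --probes encoding
```

Results are written to a report file, which can be reviewed to see which probes produced failing responses.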
