HackDB

garak

Garak is an LLM vulnerability scanner that probes for weaknesses like prompt injection, data leakage, hallucination, and toxicity.

Introduction

garak: Generative AI Red-teaming & Assessment Kit — an LLM vulnerability scanner

garak is a command-line tool that checks whether an LLM can be made to fail in undesirable ways. It plays a role for LLMs similar to the one nmap or Metasploit plays for networked systems.
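A typical invocation looks like the following. This is a minimal sketch: the flags are from garak's documented CLI, and the model name (`gpt2`) is only illustrative.

```shell
# Install garak from PyPI (requires a recent Python 3)
python -m pip install -U garak

# Enumerate the available probe plugins
garak --list_probes

# Run the prompt-injection probe family against a Hugging Face model
garak --model_type huggingface --model_name gpt2 --probes promptinject
```

Each probe family bundles many individual attack prompts, so even a single `--probes` selection can generate a substantial number of generations against the target model.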

Key features:

  • Vulnerability Probing: Detects hallucination, data leakage, prompt injection, misinformation, toxicity, and jailbreaks.
  • Flexible LLM Support: Compatible with Hugging Face Hub, Replicate, OpenAI API, litellm, gguf models, and REST endpoints.
  • Customizable Probes: Allows specifying probe families or individual plugins for targeted testing.
  • Detailed Reporting: Generates JSONL reports with probing attempts and evaluation results, including hit logs for identified vulnerabilities.
  • Extensible Plugin Architecture: Supports developing custom probes and detectors.
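The probe selection and reporting features above combine in a single run. A hedged sketch (flag names as exposed by garak's CLI help; the report prefix and model are illustrative, and scanning an OpenAI-hosted model assumes `OPENAI_API_KEY` is set in the environment):

```shell
# Scan an OpenAI-hosted model with the DAN jailbreak probe family and
# write JSONL output under the given report prefix.
garak --model_type openai --model_name gpt-3.5-turbo \
      --probes dan --report_prefix demo_scan

# A run emits a .report.jsonl of probing attempts and evaluation
# results, plus a hit log (JSONL) of prompts that triggered a detector.
```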

Use cases:

  • Security assessments of LLMs and dialog systems.
  • Red teaming to identify failure modes.
  • Evaluating the effectiveness of different LLM configurations.
  • Continuous integration testing for LLM-powered applications.

Information

  • Publisher
  • Website: github.com
  • Created date: 04/11/2025
  • Published date: 04/11/2025

