
AI-Red-Teaming-Playground-Labs

AI Red Teaming Playground Labs: Challenges for AI red teaming training, covering adversarial ML and Responsible AI failures.

Introduction

This repository contains the challenges used in the AI Red Teaming in Practice course, which teaches security professionals to systematically red team AI systems. The challenges cover both adversarial machine learning and Responsible AI (RAI) failures.

Key Features:

  • Challenges covering direct/indirect prompt injection, metaprompt extraction, multi-turn attacks, and safety filter bypasses.
  • Uses Chat Copilot as the base environment.
  • Includes a Docker Compose configuration for easy setup.
  • Kubernetes deployment files are available for reference.

Use Cases:

  • Training security professionals in AI red teaming techniques.
  • Evaluating the robustness and security of AI systems.
  • Understanding and mitigating adversarial machine learning and RAI failures.
