Bugcrowd introduces AI Penetration Testing to uncover vulnerabilities in AI systems, including LLM applications, using vetted pentesters.
NIST's AI Risk Management Framework (AI RMF) is a voluntary framework for managing risks associated with artificial intelligence.
Gandalf is a prompting skills test by Lakera that challenges users to extract secret information from a large language model.
MITRE ATLAS is a knowledge base of adversary tactics and techniques targeting AI systems, helping organizations secure AI deployments.
OWASP unveils the Gen AI Red Teaming Guide, offering a structured approach to evaluating LLM and Generative AI vulnerabilities.
OWASP Gen AI Security Project provides resources, risk strategies, and global collaboration to secure LLMs, AI agents, and generative AI technologies.
OWASP Machine Learning Security Top 10 (2023) identifies the top 10 security risks for machine learning systems, focusing on developers and security experts.
Prompt Airlines AI CTF by Wiz challenges users to manipulate an AI chatbot for a free ticket, focusing on AI security vulnerabilities.
Payloads and techniques for exploiting prompt injection vulnerabilities in AI/NLP models like ChatGPT, including direct and indirect methods.
PromptBench: A unified library for evaluating and understanding large language models, enabling quick model assessment and robustness testing.
PyRIT is an open-source Python framework empowering security pros to proactively identify risks in generative AI systems.