AI-powered cybersecurity automation platform with 150+ tools and autonomous AI agents for pentesting, vulnerability discovery, and bug bounty automation.
Pragmatic guidance for enterprises adopting AI and LLM technology, with practical measures to reduce security risk.
Bugcrowd introduces AI Penetration Testing to uncover vulnerabilities in AI systems, including LLM applications, using vetted pentesters.
Burp extension to fuzz GenAI/LLM prompts for behavioral and prompt injection vulnerabilities, aiding security assessments.
NIST's AI Risk Management Framework (AI RMF) is a voluntary framework for managing risks associated with artificial intelligence.
AI Red Teaming Playground Labs: Challenges for AI red teaming training, covering adversarial ML and Responsible AI failures.
AI-native SAST tool for code security, detecting vulnerabilities, secrets, IaC issues, and AI model security with actionable AI fixes.
Gandalf is a prompting skills test by Lakera that challenges users to extract secret information from a large language model.
Huntr is the first bug bounty platform focused on AI and ML security, rewarding researchers for finding vulnerabilities in open-source AI/ML projects and model-related software.
MITRE ATLAS is a knowledge base of adversary tactics and techniques targeting AI systems, helping organizations secure AI deployments.
ModelScan: scans ML models for unsafe code, supporting H5, Pickle, and SavedModel formats, protecting against serialization attacks.
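The serialization risk ModelScan targets can be shown in a few lines: Python's Pickle format lets any class define `__reduce__`, and `pickle.loads()` will invoke the callable it returns. A minimal sketch (the `NotAModel` class and `record` helper are illustrative names, not part of ModelScan):

```python
import pickle

RAN = []

def record(msg):
    """Stands in for a dangerous call such as os.system."""
    RAN.append(msg)
    return msg

class NotAModel:
    # Any class can hijack unpickling: pickle.loads() calls the callable
    # returned by __reduce__ with the supplied arguments on load.
    def __reduce__(self):
        return (record, ("executed during pickle.loads()",))

payload = pickle.dumps(NotAModel())
result = pickle.loads(payload)  # runs record(...) as a side effect of loading
```

Loading an untrusted model file in this format is therefore equivalent to running untrusted code, which is why scanners flag Pickle-based model weights.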
OWASP unveils the Gen AI Red Teaming Guide, offering a structured approach to evaluating LLM and Generative AI vulnerabilities.