LogoHackDB
RecentCategoryTagPricingSubmit
Sign In
LogoHackDB
Sign In
LogoHackDB

The Ultimate Directory for Offensive Security

Resources
  • Recent
  • Category

Category

AI Security

AI security focuses on attacking LLMs and agents via prompt injection, model extraction, RAG poisoning, tool abuse, and breaking trust boundaries.

  • Tag
  • Listing
    • Pricing
    • FAQ
    • Submit
    Pages
    • Home
    • Support
    • Sitemap
    • llms.txt
    Company
    • About Us
    • Privacy Policy
    • Terms of Service
    Copyright © 2026 All Rights Reserved.
    Back to categories
    image of HexStrike AI
    AI SecurityApplication SecurityBug Bounty
    Visit Website

    HexStrike AI

    Details

    AI-powered cybersecurity automation platform with 150+ tools and autonomous AI agents for pentesting, vulnerability discovery, and bug bounty automation.

    AIBug Bounty
    • Previous
    • 1
    • 2
    • 3
    • 4
    • Next
    image of Vulnetic Hacking Agent
    AI SecurityExploit DevelopmentRed Team OperationsVulnerability Intelligence
    Visit Website

    Vulnetic Hacking Agent

    Details

    Vulnetic AI is a high-performance hacking agent built for serious penetration testing at a fraction of typical costs.

    APICloudExternalInternalServices+1
    image of AI & LLM Security Measures
    AI Security
    Visit Website

    AI & LLM Security Measures

    Details

    Practical measures for enterprises to secure AI and LLM technology adoption, reducing security risks with pragmatic advice.

    AI
    image of AI Penetration Testing
    AI Security
    Visit Website

    AI Penetration Testing

    Details

    Bugcrowd introduces AI Penetration Testing to uncover vulnerabilities in AI systems, including LLM applications, using vetted pentesters.

    AI
    image of AI Prompt Fuzzer
    AI Security
    Visit Website

    AI Prompt Fuzzer

    Details

    Burp extension to fuzz GenAI/LLM prompts for behavioral and prompt injection vulnerabilities, aiding security assessments.

    AI
    image of AI Risk Management Framework
    AI Security
    Visit Website

    AI Risk Management Framework

    Details

    NIST's AI Risk Management Framework (AI RMF) is a voluntary framework for managing risks associated with artificial intelligence.

    AI
    image of AI-Red-Teaming-Playground-Labs
    AI SecurityRed Team OperationsTraining
    Visit Website

    AI-Red-Teaming-Playground-Labs

    Details

    AI Red Teaming Playground Labs: Challenges for AI red teaming training, covering adversarial ML and Responsible AI failures.

    TrainingAIVulnerability IntelligenceExternal
    image of Aikido
    Cloud SecurityApplication SecurityAPI SecurityVulnerability IntelligenceReportingAI Security
    Visit Website

    Aikido

    Details

    Aikido is a security platform for code and cloud, designed to automatically find and fix vulnerabilities in one central system.

    CloudAPIAIReportWeb+5
    image of Cygeniq
    AI Security
    Visit Website

    Cygeniq

    Details

    Cygeniq secures AI systems across their lifecycle with Hexashield AI, GRCortex AI, and CyberTiX AI.

    AI
    image of EnforsterAI
    AI SecurityApplication SecurityAPI SecurityCloud SecurityVulnerability Intelligence
    Visit Website

    EnforsterAI

    Details

    AI-native SAST tool for code security, detecting vulnerabilities, secrets, IaC issues, and AI model security with actionable AI fixes.

    AIStatic AnalysisAPICloudWeb+1
    image of Gandalf
    AI Security
    Visit Website

    Gandalf

    Details

    Gandalf is a prompting skills test by Lakera that challenges users to extract secret information from a large language model.

    AI
    image of Huntr
    AI SecurityBug Bounty
    Visit Website

    Huntr

    Details

    huntr is a bug bounty platform for AI and machine learning. It allows red teams to find and disclose vulnerabilities in open-source models and AI systems.

    Bug BountyAI