LogoHackDB
  • Search
  • Category
  • Tag
  • Pricing
  • Submit
LogoHackDB

Tag

Explore by tags

LogoHackDB

The Ultimate Directory for Offensive Security

RedditX (Twitter)
Product
  • Search
  • Category
  • Tag
Resources
  • Pricing
  • Submit
Pages
  • Home
  • Sitemap
  • Support
Company
  • About Us
  • Privacy Policy
  • Terms of Service
Copyright © 2025 All Rights Reserved.
image of HexStrike AI
AI SecurityApplication SecurityBug Bounty
Visit Website

HexStrike AI

Details

AI-powered cybersecurity automation platform with 150+ tools and autonomous AI agents for pentesting, vulnerability discovery, and bug bounty automation.

AIBug Bounty
image of AI & LLM Security Measures
AI Security
Visit Website

AI & LLM Security Measures

Details

Practical measures for enterprises to secure AI and LLM technology adoption, reducing security risks with pragmatic advice.

AI
image of AI Penetration Testing
AI Security
Visit Website

AI Penetration Testing

Details

Bugcrowd introduces AI Penetration Testing to uncover vulnerabilities in AI systems, including LLM applications, using vetted pentesters.

AI
image of AI Prompt Fuzzer
AI Security
Visit Website

AI Prompt Fuzzer

Details

Burp extension to fuzz GenAI/LLM prompts for behavioral and prompt injection vulnerabilities, aiding security assessments.

AI
image of AI Risk Management Framework
AI Security
Visit Website

AI Risk Management Framework

Details

NIST's AI Risk Management Framework (AI RMF) is a voluntary framework for managing risks associated with artificial intelligence.

AI
image of AI-Red-Teaming-Playground-Labs
AI SecurityRed Team OperationsTraining
Visit Website

AI-Red-Teaming-Playground-Labs

Details

AI Red Teaming Playground Labs: Challenges for AI red teaming training, covering adversarial ML and Responsible AI failures.

TrainingAIVulnerability IntelligenceExternal
image of EnforsterAI
AI SecurityApplication SecurityAPI SecurityCloud SecurityVulnerability Intelligence
Visit Website

EnforsterAI

Details

AI-native SAST tool for code security, detecting vulnerabilities, secrets, IaC issues, and AI model security with actionable AI fixes.

AIStatic AnalysisAPICloudWeb+1
image of Gandalf
AI Security
Visit Website

Gandalf

Details

Gandalf is a prompting skills test by Lakera that challenges users to extract secret information from a large language model.

AI
image of Huntr
AI SecurityBug Bounty
Visit Website

Huntr

Details

Huntr is the first bug bounty platform focused on AI and ML security, rewarding researchers for finding vulnerabilities in open-source and model-related software.

Bug BountyAI
image of Lakera
AI Security
Visit Website

Lakera

Details

Lakera provides an AI-native security platform to accelerate GenAI initiatives, trusted by Fortune 500s and backed by AI red teams.

AI
image of MITRE ATLAS
AI Security
Visit Website

MITRE ATLAS

Details

MITRE ATLAS is a knowledge base of adversary tactics and techniques targeting AI systems, helping organizations secure AI deployments.

AI
image of ModelScan
AI Security
Visit Website

ModelScan

Details

ModelScan: scans ML models for unsafe code, supporting H5, Pickle, and SavedModel formats, protecting against serialization attacks.

AI
  • Previous
  • 1
  • 2
  • Next
  • All

  • AI

  • API

  • Bruteforce

  • Bug Bounty

  • C2

  • Certifications

  • Classification

  • Cloud

  • DoS

  • External

  • Internal

  • Mobile

  • OSINT

  • Phishing

  • Physical

  • Report

  • Services

  • Static Analysis

  • Training

  • Vulnerability Intelligence

  • Web

  • Wireless