ModelScan

ModelScan scans ML models for unsafe code across H5, Pickle, and SavedModel formats, protecting against model serialization attacks.

Introduction

ModelScan is an open-source tool from Protect AI designed to scan machine learning models for potential security vulnerabilities, specifically model serialization attacks. It supports multiple model formats, including H5, Pickle, and SavedModel, making it compatible with frameworks like PyTorch, TensorFlow, Keras, Sklearn, and XGBoost.
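
Model serialization attacks work by embedding an executable payload in the serialized file itself. As a minimal illustration of the attack class ModelScan looks for (the model.pkl filename and the echo payload are hypothetical), Python's pickle format lets an object's __reduce__ method schedule an arbitrary call that runs at load time:

```python
import os
import pickle


class MaliciousModel:
    """Stand-in for a tampered model artifact."""

    def __reduce__(self):
        # pickle serializes this (callable, args) pair; pickle.load()
        # invokes it during deserialization, before any model code runs.
        return (os.system, ("echo pwned",))


# Writing the "model" embeds the os.system call in the file.
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousModel(), f)

# Anyone who later loads the file executes the payload:
# pickle.load(open("model.pkl", "rb"))  # runs `echo pwned`
```

A scanner like ModelScan flags unsafe references such as os.system in the serialized file without deserializing it, which is what makes scanning safe to run on untrusted models.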

Key Features:

  • Vulnerability Scanning: Detects unsafe code signatures within model files.
  • Multi-Format Support: Works with various serialization formats.
  • Risk Ranking: Ranks detected issues as CRITICAL, HIGH, MEDIUM, or LOW.
  • CLI Interface: Easy-to-use command-line interface for scanning models (see the usage sketch after this list).
  • Integration: Designed for integration into ML pipelines and CI/CD systems.
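
As a sketch of the CLI workflow: after installing with pip install modelscan, a scan is a single command pointed at a model file or directory. The -p flag follows the project's documented usage, but treat the exact flags and the model.pkl path here as assumptions to verify against the current docs:

```python
import subprocess

# Invoke the CLI (equivalent to running `modelscan -p model.pkl` in a
# shell) and capture the human-readable report it prints.
result = subprocess.run(
    ["modelscan", "-p", "model.pkl"],
    capture_output=True,
    text=True,
)
print(result.stdout)  # findings are listed with their severity rank
```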

Use Cases:

  • Scanning pre-trained models before use.
  • Scanning models after training to detect supply chain attacks.
  • Scanning models before deployment to production environments (see the gate sketch after this list).
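
For the deployment use case, a minimal CI/CD gate can key off the scanner's exit status. This sketch assumes the conventional behavior of a non-zero exit code when findings are reported, and the artifacts/model.h5 path is hypothetical:

```python
import subprocess
import sys

# Run the scan; a non-zero return code is treated as "issues found"
# (or a scan error), either of which should block promotion.
scan = subprocess.run(["modelscan", "-p", "artifacts/model.h5"])
if scan.returncode != 0:
    sys.exit("ModelScan reported issues; blocking deployment.")
print("Model passed ModelScan; proceeding with deployment.")
```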
