ModelScan: scans ML models for unsafe code across H5, Pickle, and SavedModel formats, protecting against model serialization attacks.
ModelScan is an open-source tool from Protect AI designed to scan machine learning models for potential security vulnerabilities, specifically model serialization attacks. It supports multiple model formats, including H5, Pickle, and SavedModel, making it compatible with frameworks like PyTorch, TensorFlow, Keras, Sklearn, and XGBoost.
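To make the threat concrete, here is a minimal illustrative sketch of the kind of attack ModelScan looks for (this example is ours, not taken from ModelScan's own code): Python's pickle format lets an object define `__reduce__`, and whatever callable it returns is executed when the file is unpickled.

```python
import os
import pickle

# Illustrative serialization attack: pickle records the callable returned
# by __reduce__, and pickle.load() executes it when the file is loaded.
class MaliciousModel:
    def __reduce__(self):
        # Hypothetical payload; a real attack could run any shell command.
        return (os.system, ("echo 'code executed on model load'",))

# The poisoned file looks like any ordinary pickled model artifact.
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousModel(), f)

# Calling pickle.load() on this file would run the payload, which is why
# scanning before loading matters:
#   pip install modelscan
#   modelscan -p model.pkl
```

Running `modelscan -p model.pkl` against a file like this should flag the embedded `os.system` reference without ever deserializing the artifact.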
Key Features:
- Detects model serialization attacks by scanning model files for unsafe code
- Supports multiple serialization formats, including H5, Pickle, and SavedModel
- Works with models from PyTorch, TensorFlow, Keras, Sklearn, and XGBoost
- Open source, maintained by Protect AI
Use Cases:
- Vetting pre-trained models downloaded from external sources before loading them
- Gating model artifacts before deployment as part of an ML pipeline or CI/CD workflow (a minimal sketch follows below)
- Auditing existing model files for embedded malicious code
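As a sketch of the pipeline-gating use case, the snippet below wraps the modelscan CLI in a pre-load check. The file name is hypothetical, and we assume a non-zero exit code signals findings or a scan failure; verify the exact exit-code convention against the tool's documentation.

```python
import subprocess

MODEL_PATH = "model.pkl"  # hypothetical artifact to vet before loading

# Run the modelscan CLI against the artifact. Assumption: a non-zero
# exit code means issues were found or the scan failed.
result = subprocess.run(["modelscan", "-p", MODEL_PATH])
if result.returncode != 0:
    raise RuntimeError(f"modelscan flagged {MODEL_PATH}; refusing to load it")
```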