This component continuously scans and analyzes the security posture of an organization's AI assets, including ML models, data pipelines, and training environments. It checks for misconfigurations, policy violations, and compliance gaps specific to AI systems. For instance, it can detect if a model's API is exposed without proper authentication or if sensitive training data is stored in an unencrypted location. This is the foundational layer that provides visibility into the AI security landscape.
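As a minimal sketch of what two such checks might look like, the snippet below probes a model endpoint for unauthenticated access and an S3 bucket for missing server-side encryption. The endpoint URL and bucket name are hypothetical placeholders, and a real posture scanner would run many more checks than these.

```python
# Minimal posture-scan sketch: two illustrative checks, not a full scanner.
# Assumes the `requests` and `boto3` packages are installed; the endpoint
# and bucket names below are hypothetical placeholders.
import requests
import boto3
from botocore.exceptions import ClientError

MODEL_ENDPOINT = "https://example.com/v1/model/predict"  # hypothetical
TRAINING_BUCKET = "example-training-data"                # hypothetical

def check_unauthenticated_api(url: str) -> str:
    """Flag the endpoint if it answers without any credentials."""
    resp = requests.post(url, json={"inputs": []}, timeout=10)
    if resp.status_code in (401, 403):
        return "OK: endpoint rejects unauthenticated requests"
    return f"FINDING: endpoint answered status {resp.status_code} without auth"

def check_bucket_encryption(bucket: str) -> str:
    """Flag the bucket if no server-side encryption is configured."""
    s3 = boto3.client("s3")
    try:
        s3.get_bucket_encryption(Bucket=bucket)
        return "OK: bucket has server-side encryption"
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            return "FINDING: training-data bucket is unencrypted"
        raise

if __name__ == "__main__":
    print(check_unauthenticated_api(MODEL_ENDPOINT))
    print(check_bucket_encryption(TRAINING_BUCKET))
```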
This is the active defense component of AISPM. It focuses on identifying and responding to malicious activity targeting the AI model itself, using techniques that detect adversarial attacks (where an attacker manipulates input data to trick the model), data poisoning (where malicious data is introduced into the training dataset), and model evasion (where an attacker crafts inputs that a detection model fails to flag). This component goes beyond traditional network- and host-based threat detection to focus on threats unique to AI.
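To make one of these detection signals concrete, the sketch below flags inference inputs that sit far outside the training distribution, a crude indicator of adversarial or poisoned data. Production systems combine many such signals; the z-score threshold here is an illustrative assumption.

```python
# Statistical outlier check: compare incoming inputs against per-feature
# statistics recorded from trusted training data. One simple signal among
# many a real detector would use.
import numpy as np

def fit_baseline(train_features: np.ndarray):
    """Record per-feature mean and std from trusted training data."""
    return train_features.mean(axis=0), train_features.std(axis=0) + 1e-8

def flag_outliers(batch: np.ndarray, mean, std, z_threshold: float = 4.0):
    """Return indices of inputs whose max per-feature z-score is extreme."""
    z = np.abs((batch - mean) / std)
    return np.where(z.max(axis=1) > z_threshold)[0]

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 8))          # stand-in for trusted data
mean, std = fit_baseline(train)

batch = rng.normal(size=(20, 8))
batch[3] += 10.0                            # simulate a manipulated input
print("suspicious input indices:", flag_outliers(batch, mean, std))
```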
Since AI models are only as good as the data they're trained on, this component is crucial. It protects the integrity and confidentiality of both training and inference data through measures such as data anonymization, encryption, and access control policies, preventing sensitive information from being exposed or tampered with. It also verifies data lineage to ensure datasets haven't been corrupted or manipulated along the way.
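A simple way to implement the lineage check mentioned above is to hash each dataset artifact and compare it against a previously recorded manifest, so silent tampering becomes detectable. The manifest format and file names in this sketch are illustrative assumptions.

```python
# Data lineage sketch: hash dataset files and verify them against a manifest
# recorded when the dataset was approved. Standard library only.
import hashlib
import json
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Stream the file through SHA-256 so large datasets fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest_path: Path) -> list[str]:
    """Return the artifacts whose current hash no longer matches."""
    manifest = json.loads(manifest_path.read_text())
    return [
        name for name, recorded in manifest.items()
        if sha256_file(Path(name)) != recorded
    ]

# Usage (illustrative): a manifest like {"train.csv": "<hex digest>"} is
# written when the dataset is approved, then re-checked before each run:
# tampered = verify_manifest(Path("data_manifest.json"))
```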
This component is dedicated to identifying and managing vulnerabilities within the AI models themselves. It involves tools and techniques for model risk assessment and vulnerability scanning, looking for exploitable weaknesses in a model's architecture or its underlying libraries. This includes issues such as bias in the model, susceptibility to model inversion attacks (where an attacker reconstructs sensitive training data from a model's outputs), and weaknesses in the model's API.
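The bias check named above can be made concrete with a demographic parity measurement: does the model's positive-prediction rate differ across groups? The sketch below uses synthetic data, and the ten-percentage-point tolerance is an illustrative assumption, not a standard.

```python
# Demographic parity sketch: one model-level risk check from the list above.
import numpy as np

def demographic_parity_gap(preds: np.ndarray, groups: np.ndarray) -> float:
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [preds[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

rng = np.random.default_rng(1)
groups = rng.integers(0, 2, size=500)               # two synthetic groups
# Synthetic model output that favors group 0, to make the gap visible.
preds = (rng.random(500) < np.where(groups == 0, 0.6, 0.4)).astype(int)

gap = demographic_parity_gap(preds, groups)
print(f"parity gap: {gap:.2f}", "FINDING" if gap > 0.10 else "OK")
```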
The final component provides the overarching governance structure. It helps define and enforce security policies and compliance frameworks specific to AI systems, ensuring that the organization's use of AI aligns with internal security standards and external regulations. It includes auditing and reporting features that generate compliance evidence to demonstrate AI systems are being developed and deployed in a secure and responsible manner.
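One common way to implement such governance is policy-as-code: each AI asset is described by metadata and evaluated against declarative rules, producing an auditable report. The rule names, metadata fields, and assets in this sketch are all illustrative assumptions.

```python
# Policy-as-code sketch: evaluate AI asset metadata against declarative
# rules and emit one auditable row per check. All names are hypothetical.
POLICY = {
    "encryption_at_rest": lambda a: a["encrypted"],
    "human_review_for_high_risk": lambda a: a["risk"] != "high" or a["human_review"],
    "owner_assigned": lambda a: bool(a.get("owner")),
}

ASSETS = [
    {"name": "fraud-model", "encrypted": True, "risk": "high",
     "human_review": True, "owner": "ml-platform"},
    {"name": "chat-assistant", "encrypted": False, "risk": "medium",
     "human_review": False, "owner": ""},
]

def audit(assets, policy):
    """Yield one (asset, rule, result) row per check for the audit trail."""
    for asset in assets:
        for rule, check in policy.items():
            yield asset["name"], rule, "PASS" if check(asset) else "FAIL"

for name, rule, result in audit(ASSETS, POLICY):
    print(f"{result:4} {name}: {rule}")
```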