AICM Atlas · CSA AI Controls Matrix
DSP · Data Security and Privacy Lifecycle Management
DSP-21 · AI-Specific

Data Poisoning Prevention & Detection

Specification

Define, implement, and evaluate processes, procedures, and technical measures to prevent data poisoning in AI models and to continuously detect such attempts.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Data collection, Data curation

Development

Training, Supply Chain

Evaluation

Validation/Red Teaming, Re-evaluation

Deployment

AI Services supply chain

Delivery

Continuous improvement

Retirement

Not applicable

Ownership / SSRM

Physical Infrastructure (PI)

Shared across the supply chain

Shared control ownership refers to responsibilities and activities related to LLM security that are distributed across multiple stakeholders within the AI supply chain, including the Cloud Service Provider (CSP), Model Provider (MP), Orchestrated Service Provider (OSP), Application Provider (AP), and AI Customer (AIC). These controls require coordinated actions, communication, and governance across all involved parties to ensure their effectiveness.

Model

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Orchestrated

Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Shared Application Provider-AI Customer (Shared AP-AIC)

The AP and AIC both share responsibility and accountability for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they offer and consume.

Implementation guidelines

[All Actors]
1. Data Governance and Source Validation
a. Define clear metrics for "fit for purpose" data
b. Develop data source vetting and validation procedures
c. Implement data provenance and lineage tracking
d. Implement digital signature validation to verify the integrity and authenticity of models, datasets, and software components before use in AI pipelines (see the sketch after this list).
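
The sketch below is one minimal way to realize item 1d, assuming the data or model provider publishes detached Ed25519 signatures over artifact digests; the file name, key choice, and key-distribution arrangement are illustrative assumptions, not part of the control.

```python
# Minimal sketch of guideline 1d: verify a detached signature over an
# artifact's digest before admitting it to the pipeline. File names and
# key handling are illustrative assumptions.
import hashlib
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def sha256_digest(path: Path) -> bytes:
    """Stream the file so large datasets/models need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()


def verify_artifact(path: Path, signature: bytes,
                    public_key: Ed25519PublicKey) -> bool:
    """Return True only if the provider's signature over the digest is valid."""
    try:
        public_key.verify(signature, sha256_digest(path))
        return True
    except InvalidSignature:
        return False


# Gate in an ingestion pipeline; `trusted_key` and `sig` are hypothetical
# names for material obtained out of band from the provider:
# if not verify_artifact(Path("train.parquet"), sig, trusted_key):
#     raise RuntimeError("artifact failed signature check; excluded from pipeline")
```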

2. Data Processing and Monitoring
a. Establish a process to periodically audit datasets for integrity.
b. Use anomaly detection and drift monitoring to identify deviations, set thresholds for acceptable deviation, and implement alerts that notify data owners of potential issues (see the sketch after this list).
c. Implement real-time monitoring of data distributions and statistical patterns to detect anomalous changes in training data.
d. Establish baseline metrics and threshold monitoring for early detection of data manipulation attempts.
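
As one way to realize items 2b-2d, the sketch below records per-feature baseline statistics from trusted reference data and flags incoming batches whose feature means drift beyond a z-score threshold. The threshold value, feature layout, and alerting hook are assumptions; a production system would add distribution-level tests and alert routing.

```python
# Minimal drift-monitoring sketch for guidelines 2b-2d. Thresholds and
# feature layout are illustrative assumptions.
import numpy as np


def baseline_stats(reference: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Record per-feature mean/std from trusted reference data (guideline 2d)."""
    return reference.mean(axis=0), reference.std(axis=0) + 1e-12


def drift_alerts(batch: np.ndarray, mean: np.ndarray, std: np.ndarray,
                 z_threshold: float = 4.0) -> np.ndarray:
    """Flag features whose batch mean drifts beyond the z-score threshold (2b)."""
    z = np.abs(batch.mean(axis=0) - mean) / (std / np.sqrt(len(batch)))
    return np.where(z > z_threshold)[0]


# Usage: compare each incoming training batch against the trusted baseline
# and notify data owners when any feature index is flagged.
rng = np.random.default_rng(0)
mean, std = baseline_stats(rng.normal(0.0, 1.0, size=(10_000, 8)))
batch = rng.normal(0.0, 1.0, size=(512, 8))
batch[:, 3] += 0.9            # simulated poisoning shift in one feature
for idx in drift_alerts(batch, mean, std):
    print(f"ALERT: feature {idx} deviates from baseline; notify data owner")
```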

3. Technical Measures
a. Perform data integrity checks on data used in AI systems to detect alteration, corruption, and tampering.
b. Maintain detailed audit logs of who accessed, modified, or submitted data, with timestamps and user authentication to trace potential poisoning attempts (see the sketch after this list).
c. Report unusual patterns or unexpected model behaviors promptly to the model provider.
d. Conduct periodic reviews of model outputs against expected business outcomes.
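
Items 3a and 3b can be combined in a tamper-evident, hash-chained audit log, sketched below. The record fields and in-memory storage are illustrative assumptions; a real deployment would write entries to append-only, access-controlled storage.

```python
# Tamper-evident audit logging sketch for guidelines 3a-3b. Record fields
# and storage are illustrative assumptions.
import hashlib
import json
import time

GENESIS = "0" * 64


def append_entry(log: list[dict], actor: str, action: str, dataset: str) -> None:
    """Append a record chained to the previous entry's hash (guideline 3b)."""
    record = {
        "timestamp": time.time(),
        "actor": actor,           # authenticated user identity
        "action": action,         # e.g. "read", "modify", "submit"
        "dataset": dataset,
        "prev_hash": log[-1]["entry_hash"] if log else GENESIS,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)


def verify_chain(log: list[dict]) -> bool:
    """Recompute the chain; any edited or deleted entry breaks it (guideline 3a)."""
    prev = GENESIS
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "entry_hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev_hash"] != prev or digest != rec["entry_hash"]:
            return False
        prev = rec["entry_hash"]
    return True


log: list[dict] = []
append_entry(log, "alice@example.com", "submit", "train-v3.parquet")
append_entry(log, "bob@example.com", "modify", "train-v3.parquet")
assert verify_chain(log)
log[0]["actor"] = "mallory"       # simulated tampering with a past entry
assert not verify_chain(log)      # chain verification detects it
```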

Auditing guidelines

1. Verify that infrastructure providers validate data sources ingested into AI training and processing pipelines to prevent malicious or poisoned data introduction.

2. Verify data quality checks are embedded in infrastructure services to detect and filter corrupted or suspicious data during AI workload execution.

3. Verify automated anomaly detection systems monitor data flows and storage for unusual patterns indicating data poisoning.

4. Verify infrastructure supports adversarial training and other resilience techniques by providing appropriate compute and tooling capabilities.

5. Verify infrastructure enforces strict access controls to prevent unauthorized dataset modifications at storage or processing layers.

6. Verify that data encryption protects AI workloads’ data at rest and in transit against unauthorized access or tampering.

7. Verify monitoring and alerting systems detect tampering or poisoning signs at the infrastructure level.

8. Verify documented incident response processes exist for infrastructure-related data poisoning threats, with clear escalation and remediation paths.

9. Verify that infrastructure personnel receive training on recognizing and responding to data poisoning risks in AI workloads.

10. Verify that infrastructure providers deploy automated tools to continuously monitor data integrity and detect anomalies across the AI data pipeline.
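
To make guideline 10 concrete, the sketch below shows the kind of scheduled integrity scan an auditor might expect to find: recomputing digests of pipeline artifacts against a recorded manifest. The manifest format, paths, and scan cadence are assumptions.

```python
# Sketch of continuous data-integrity monitoring (auditing guideline 10).
# Manifest format and alerting are illustrative assumptions.
import hashlib
from pathlib import Path


def file_digest(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def scan(manifest: dict[str, str]) -> list[str]:
    """Return paths whose current digest no longer matches the recorded one."""
    return [p for p, expected in manifest.items()
            if not Path(p).exists() or file_digest(Path(p)) != expected]


# A monitoring job would run scan() on a schedule and escalate mismatches
# through the incident response path described in guideline 8, e.g.:
# for path in scan(manifest):
#     alert(f"integrity violation: {path}")   # `alert` is hypothetical
```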

Standards mappings

ISO 42001 · No Gap
42001: A.6.1.2 Objectives for responsible development of AI system
42001: A.6.2.3 Documentation of AI system design and development
42001: A.6.2.6 AI system operation and monitoring
42001: A.6.2.4 AI system verification and validation
42001: A.7.3 Acquisition of data
42001: A.7.4 Quality of data for AI system
Addendum

N/A

EU AI Act · No Gap
Article 9
Article 10 (2)
Article 15
Addendum

N/A

NIST AI 600-1 · Partial Gap
MP-2.3-001
MP-2.3-003
MG-3.2-006
Addendum

NIST AI 600-1 includes no formal requirement for ongoing telemetry or automated detection, and lacks explicit evaluation or metric-based refinement mandates.

BSI AIC4 · No Gap
PF-01
PF-02
DQ-03
SR-06
Addendum

N/A

AI-CAIQ questions (1)

DSP-21.1

Are processes, procedures, and technical measures to prevent data poisoning in AI models, and to continuously detect such attempts, defined, implemented, and evaluated?