AICM Atlas · CSA AI Controls Matrix
DSP · Data Security and Privacy Lifecycle Management
DSP-15 · Cloud & AI Related

Limitation of Production Data Use

Specification

Obtain authorization from data owners and manage the associated risk before replicating or using production data in non-production environments.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Team and expertise

Development

Design, Guardrails

Evaluation

Evaluation, Validation/Red Teaming

Deployment

Orchestration, AI Services supply chain

Delivery

Operations, Maintenance, Continuous monitoring

Retirement

Data deletion, Archiving

Ownership / SSRM

PI

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM)/GenAI technology services or products they consume.

Model

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM)/GenAI technology services or products they consume.

Orchestrated

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM)/GenAI technology services or products they consume.

Application

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM)/GenAI technology services or products they consume.

Implementation guidelines

[Applicable to all service providers]
1. Establish a policy specifying procedures for secure handling, sanitization, and anonymization of production data, and for compliance with applicable regulations when using or replicating it.

2. Conduct risk assessments to identify risks associated with data use.

3. Define and enforce physical and logical boundaries to keep production data secure.

4. Establish and implement policies and procedures for secure development and managing changes.

5. Implement segregation of duties and enforce least-privilege access to production environments, granted only after authorization from the data owner.

6. Implement the principle of least privilege and authorized access to datasets, models, and other sensitive information.

7. Implement security measures to secure datacenters.

8. Provide regular training on security, privacy, and secure coding practices.

9. Implement timely termination and management of employee access.

10. Monitor assets, enforce secure configurations, and manage vulnerabilities on infrastructure and applications processing and storing production data.

11. Enable logging and continuously monitor system access for anomalies.

12. Replicate only the necessary data to minimize potential risks.

13. Anonymize or pseudonymize data to protect privacy and reduce the risk of exposing sensitive information.

14. Implement security measures such as encryption and secure data transfer protocols.

15. Ensure data replication complies with relevant data protection laws and regulations, such as GDPR.

16. Maintain detailed records of the data replication process and perform periodic audits to ensure compliance.

17. Establish a process to inform relevant stakeholders about the data replication process, including its purpose and scope.

18. Detect and remove or remediate sensitive or poisoned data before using it for training.

19. Implement mechanisms to ensure the integrity of data, models, and code used in AI development and deployment.

20. Implement a process for obtaining authorization from the data owner before using the data for non-production purposes.
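Guideline 13's pseudonymization step can be sketched with a keyed hash. The following is a minimal illustration, not a prescribed implementation: it assumes identifiers are pseudonymized with HMAC-SHA256, and the key shown inline is a placeholder for one held in production-side secret storage so that non-production environments cannot reverse the mapping.

```python
import hmac
import hashlib

def pseudonymize(value: str, key: bytes) -> str:
    """Deterministically pseudonymize one identifier with HMAC-SHA256.

    The same (value, key) pair always yields the same token, so joins
    across tables still work, but the mapping cannot be reversed
    without the production-side key.
    """
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

def pseudonymize_records(records, fields, key):
    """Return copies of `records` with the named identifier fields replaced."""
    out = []
    for rec in records:
        rec = dict(rec)  # shallow copy; do not mutate the originals
        for field in fields:
            if field in rec:
                rec[field] = pseudonymize(str(rec[field]), key)
        out.append(rec)
    return out

# Illustrative only: in practice the key comes from a secrets manager,
# never from source code.
key = b"example-key-from-a-secrets-manager"
rows = [{"email": "alice@example.com", "score": 7}]
safe = pseudonymize_records(rows, ["email"], key)
```

Determinism is the design choice here: it preserves referential integrity across replicated tables while keeping raw identifiers out of the non-production environment.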
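One common way to address guideline 19's integrity requirement is a checksum manifest: record a digest per file when data is replicated, then re-verify in the non-production environment. The sketch below is illustrative (the helper names and the throwaway file are assumptions, not part of the control):

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(paths):
    """Record a digest per file at replication time."""
    return {str(p): sha256_file(Path(p)) for p in paths}

def verify_manifest(manifest):
    """Return the files whose current digest no longer matches the manifest."""
    return [p for p, d in manifest.items() if sha256_file(Path(p)) != d]

# Demo with a throwaway file standing in for a replicated dataset.
with tempfile.TemporaryDirectory() as tmp:
    data = Path(tmp) / "train.csv"
    data.write_text("id,label\n1,spam\n")
    manifest = build_manifest([data])
    clean = verify_manifest(manifest)      # nothing has changed yet
    data.write_text("id,label\n1,ham\n")   # simulate tampering
    tampered = verify_manifest(manifest)   # the modified file is flagged
```

The same manifest approach extends to model artifacts and code bundles; in practice the manifest itself should be stored and signed separately from the data it describes.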

Auditing guidelines

1. Verify that infrastructure-level policies and technical safeguards are in place to control the use of production tenant data in test, dev, or benchmarking environments.

2. Verify that infrastructure users (e.g., internal teams or client teams) obtain approval before replicating production workloads or datasets in non-production environments.

3. Verify that mechanisms (e.g., data masking, encryption) are in place to anonymize and secure data during infrastructure provisioning or testing.

4. Verify that any deviations from the infrastructure provider’s standard for handling production data are documented and approved.

5. Verify that infrastructure governance procedures are periodically updated to reflect regulatory, contractual, or service-level agreement changes.

6. Verify that internal teams are trained on policies and practices for securing client or production data when testing or provisioning infrastructure services.
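Audit item 3 can be partially automated with a scan of replicated records for values that still look like raw PII. This is a minimal sketch under stated assumptions: the two regex patterns are illustrative examples, not an exhaustive PII taxonomy, and a real audit would use a dedicated data-discovery tool.

```python
import re

# Illustrative patterns only: email addresses and card-like digit runs.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def find_unmasked_pii(records):
    """Return (record index, field, pattern label) for every apparent hit."""
    findings = []
    for i, rec in enumerate(records):
        for field, value in rec.items():
            for label, pattern in PII_PATTERNS.items():
                if isinstance(value, str) and pattern.search(value):
                    findings.append((i, field, label))
    return findings

# A masked record passes; a raw one is flagged on both fields.
sample = [
    {"email": "j***@*** (masked)", "note": "ok"},
    {"email": "bob@example.com", "card": "4111 1111 1111 1111"},
]
issues = find_unmasked_pii(sample)
```

An empty result is evidence for the auditor that masking was applied; any finding points to a specific record and field for follow-up.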

Standards mappings

ISO 42001 · No Gap
42001: A.6.1.2 Objectives for responsible development of AI system
42001: A.6.1.3 Processes for responsible design and development of AI systems
42001: 6.3.2 – AI control planning must consider environment sensitivity.
42001: A.7.2 Data for development and enhancement of AI system.
42001: A.7.4 Quality of data for AI systems
42001: A.2.3 Alignment with other organizational policies
27001: A.8.31 - Separation of development, test and production environments
27001: A.8.33 - Test information
27001: A.8.27 – Segregation of environments.
27001: A.5.11 – Protection against misuse of systems.
27002: 8.31 Separation of development, test and production environments
27002: 8.33 Test Information
27002: 8.28 – Environment segregation implementation.
27002: 5.9 – Protection against misuse or unintentional data disclosure.
Addendum

N/A

EU AI Act · Partial Gap
Article 10
Article 16
Article 17
Article 28
Article 29
Addendum

The EU AI Act covers data governance and risk management broadly, but it does not specifically address environment-specific risk management or the use of production data in non-production environments.

NIST AI 600-1 · Full Gap
No Mapping
Addendum

NIST AI 600-1 does not cover these DSP-15 topics.

BSI AIC4 · Partial Gap
BC-06
Addendum

In the EU, such topics are covered by the GDPR, which is implemented in the national law of each member state; this is an explicit objective of the GDPR.

AI-CAIQ questions (1)

DSP-15.1

Are authorizations obtained from data owners and associated risks managed before replicating or using production data in non-production environments?