AICM Atlas · CSA AI Controls Matrix
DSP · Data Security and Privacy Lifecycle Management
DSP-08 · Cloud & AI Related

Data Privacy by Design and Default

Specification

Develop systems, products, and business practices based upon a principle of privacy by design and industry best practices. Ensure that systems' privacy settings are configured by default, according to all applicable laws and regulations.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Data collection, Resource provisioning

Development

Design, Guardrails

Evaluation

Evaluation

Deployment

Orchestration, AI Services supply chain

Delivery

Operations, Maintenance, Continuous monitoring

Retirement

Data deletion, Model disposal

Ownership / SSRM

PI

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technology services or products they consume.

Model

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technology services or products they consume.

Orchestrated

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technology services or products they consume.

Application

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technology services or products they consume.

Implementation guidelines

[All Actors]
1. Filter, mask, and auto-delete sensitive inputs/outputs by default where the actor can inspect content; where content is opaque, ensure upstream or downstream services apply equivalent controls.

2. Enforce user consent, minimize data collection, and give users privacy controls for any personal data the actor collects or processes.

3. Enforce enterprise privacy policies and control what data is shared for any data the actor shares or receives, ensuring onward transfers respect those policies.

4. Provide privacy-protective infrastructure defaults and compliance tools.

[Shared among: MP, OSP, CSP]
1. Use privacy-preserving data and consent-based training, and reduce memorization, for any model training the actor performs.

Auditing guidelines

1. Examine whether the CSP’s policy, standards, and procedures create a framework that fosters a culture and expectation of privacy by design. Determine whether these documents set the expected direction for the CSP’s culture and whether actual practices reflect privacy by design and industry best practices.

2. Examine whether the organization’s governance framework, documents, controls, and metrics satisfy this requirement within the organization, and whether its sub-processors comply with it. Establish whether the CSP has documented the roles and responsibilities involved.

3. Obtain and examine supporting documentation maintained as evidence of these metrics to determine whether the responsible office or individual reviews the information and whether identified issues were investigated and remediated appropriately.

4. Obtain evidence of the systems' privacy settings and the laws and regulations that apply to the CSP. Determine if the configurations are implemented as defined by the applicable laws and regulations.

5. Verify that processes, systems, and applications used for the collection and processing (including use, disclosure, retention, transmission, and disposal) are limited to what is necessary for the identified purpose.

6. Verify that the CSP limits data collection to the minimum necessary for the identified purposes.

7. Verify that the CSP limits the data processing to what is accurate, adequate, relevant, and necessary for the identified purposes.

8. Verify that the CSP defines and documents data minimization objectives and uses mechanisms (such as de-identification) to meet those objectives.

9. Verify that the CSP either deletes or renders data in a form that does not permit identification when it no longer requires access to identifiable forms of data unless there is a legal requirement or business justification to retain it in identifiable form.

10. Verify that the CSP ensures that temporary files created during data processing are deleted (e.g., erased or destroyed) following documented procedures within a specified, documented time frame.

11. Verify that the CSP does not retain data for longer than necessary for the purposes for which it was processed.

12. Verify that the CSP follows documented policies, procedures, and/or mechanisms when disposing of data.

13. Verify that the CSP subjects data transmitted over a network (e.g., data sent to another organization) to appropriate controls to ensure it reaches its intended destination.

14. Examine the CSP’s policy, standards, and procedures to determine whether a culture and expectation of privacy by design is defined for third-party providers. Determine whether these documents set the expected direction for the CSP’s culture and whether actual practices reflect privacy by design and industry best practices.

15. Examine infrastructure design documentation to verify that privacy-enabling capabilities, such as data isolation, segregation mechanisms, and privacy-preserving storage options, are incorporated into the infrastructure architecture.

16. Verify that default configurations for infrastructure components include privacy-enhancing settings such as encryption at rest, secure access controls, and logging that minimizes capture of personal data.

17. Review mechanisms provided to support data residency requirements, confirming the infrastructure enables customers to control where their data is physically stored and processed.

18. Assess how the infrastructure supports privacy compliance through capabilities such as data discovery, classification, pseudonymization, or isolation of regulated information.

19. Verify that infrastructure management interfaces and monitoring tools are designed to minimize the exposure of personal data in logs, alerts, and administrative views by default.

20. Examine how the infrastructure enables secure deletion or isolation of personal data when requested by customers or required by regulations.
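Auditing guidelines 8 and 9 reference de-identification mechanisms and rendering data non-identifiable. One common mechanism an auditor may encounter is keyed pseudonymization, sketched below under stated assumptions: the function name, the hard-coded example key, and the record layout are illustrative; in practice the key would live in a managed secret store, and HMAC-SHA256 is just one of several suitable constructions.

```python
import hashlib
import hmac

def pseudonymize(identifier: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed pseudonym (HMAC-SHA256).

    The same (identifier, key) pair always yields the same pseudonym, so
    records remain joinable, but the identifier cannot be recovered without
    the key. Destroying or rotating the key renders the data non-identifiable,
    which supports retention and disposal requirements.
    """
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Example usage: minimize a record before it leaves the trust boundary.
# The key below is a placeholder; never hard-code keys in production.
key = b"example-key-do-not-use-in-production"
record = {"user_id": "alice@example.com", "plan": "pro"}
minimized = {"user_id": pseudonymize(record["user_id"], key), "plan": record["plan"]}
```

A keyed construction is preferable to a plain hash here because an unkeyed hash of a low-entropy identifier (such as an email address) can often be reversed by brute force.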

Standards mappings

ISO 42001 · No Gap
42001: A.2.3 Alignment with other organizational policies
42001: A.5.4 Assessing AI system impact on individuals or groups of individuals
42001: A.6.1.3 Processes for trustworthy AI system design and development
42001: A.6.2.2 AI system requirements and specification
42001: A.6.2.5 AI system deployment
42001: A.6.2.7 AI system technical documentation
42001: A.7.2 Data for development and enhancement of AI system
42001: A.7.3 Acquisition of data
42001: A.9.3 Objectives for responsible use of AI system
27001: A.5.34 - Privacy and protection of personal identifiable information (PII)
27001: A.8.11 - Data masking
27002: 5.34 - Privacy and protection of personal identifiable information (PII)
27002: 8.11 - Data masking
Addendum

N/A

EU AI Act · Partial Gap
Article 10
Article 52 (1)
Addendum

While Article 10(2)(a) covers privacy principles, it does not detail default-setting specifications.

NIST AI 600-1 · Partial Gap
GV-1.1-001
MS-2.2-002
MS-2.2-003
MS-2.2-004
Addendum

NIST AI 600-1 does not clearly address the DSP-08 topic of applying privacy-by-design principles, though it loosely touches on scattered aspects of it.

BSI AIC4 · Full Gap

No specific mapping.

Addendum

Preconfigured hardware is not explicitly mentioned.

AI-CAIQ questions (2)

DSP-08.1

Are systems, products, and business practices developed based upon a principle of privacy by design and industry best practices?

DSP-08.2

Are systems' privacy settings configured by default, according to all applicable laws and regulations?