AICM Atlas · CSA AI Controls Matrix
DSP · Data Security and Privacy Lifecycle Management
DSP-07 · Cloud & AI Related

Data Protection by Design and Default

Specification

Develop systems, products, and business practices based upon a principle of security by design and industry best practices.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Resource provisioning, Team and expertise

Development

Design, Guardrails

Evaluation

Evaluation, Validation/Red Teaming

Deployment

Orchestration, AI Services supply chain

Delivery

Operations, Maintenance, Continuous monitoring

Retirement

Model disposal

Ownership / SSRM

PI

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM)/GenAI technology services or products they consume.

Model

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM)/GenAI technology services or products they consume.

Orchestrated

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM)/GenAI technology services or products they consume.

Application

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM)/GenAI technology services or products they consume.

Implementation guidelines

[All Actors]
1. Apply security controls (encryption, RBAC, logging, vulnerability patching) to every hop identified in the data-flow map for components the actor operates.

2. Verify that protections remain effective after system changes (e.g., new API, new storage location).

3. Document residual risks and mitigation plans.
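The per-hop check in step 1 can be sketched as a simple inventory walk: represent each hop in the data-flow map together with the controls applied to it, then report any hop that falls short of the baseline. This is an illustrative sketch, not part of the control text; the `Hop` structure, control names, and flow entries are all assumptions.

```python
# Illustrative sketch: model a data-flow map as a list of hops and flag
# any hop missing one of the baseline controls named in the guideline
# (encryption, RBAC, logging, vulnerability patching).
from dataclasses import dataclass, field

REQUIRED_CONTROLS = {"encryption", "rbac", "logging", "patching"}

@dataclass
class Hop:
    source: str
    destination: str
    controls: set = field(default_factory=set)

def missing_controls(flow):
    """Return, per hop, the baseline controls it does not yet apply."""
    return {
        f"{hop.source}->{hop.destination}": REQUIRED_CONTROLS - hop.controls
        for hop in flow
        if REQUIRED_CONTROLS - hop.controls
    }

# Hypothetical flow: the second hop lacks RBAC and patching and is flagged.
flow = [
    Hop("ingest-api", "raw-store", {"encryption", "rbac", "logging", "patching"}),
    Hop("raw-store", "training-pipeline", {"encryption", "logging"}),
]
print(missing_controls(flow))
```

A gap report like this also feeds step 3 directly: each flagged hop is a residual risk to document until its mitigation plan closes the gap.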

[Shared among: MP, OSP, CSP]
1. Encrypt training data at rest, restrict access to authorised service roles, and monitor for anomalous reads/writes in training pipelines.
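The monitoring half of this guideline can be sketched as a review of training-pipeline access events: reject principals outside the authorised service roles, and flag authorised roles whose read volume exceeds a baseline. The role names, event fields, and threshold below are illustrative assumptions, not prescribed values.

```python
# Illustrative sketch: scan access-log events for a training-data store,
# separating unauthorised principals from authorised roles with
# anomalously high read volumes.
from collections import Counter

AUTHORISED_ROLES = {"training-service", "eval-service"}  # assumed role names

def review_access_log(events, read_threshold=100):
    """Return (unauthorised principals, authorised roles with excessive reads)."""
    reads = Counter()
    unauthorised = set()
    for event in events:
        if event["role"] not in AUTHORISED_ROLES:
            unauthorised.add(event["role"])
        elif event["op"] == "read":
            reads[event["role"]] += event.get("count", 1)
    noisy = {role for role, n in reads.items() if n > read_threshold}
    return unauthorised, noisy
```

In practice the threshold would be derived from a historical baseline per role rather than fixed, and findings would feed the same incident and action-plan records the auditing guidelines below examine.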

Auditing guidelines

1. Examine whether the CSP’s policy, standards, and procedures create a framework that fosters a culture and expectation of data protection by design and default.

2. Establish whether the CSP has documented the roles and responsibilities involved.

3. Review the CSP’s data breaches log, security incidents log, and project change failure records for examples of this requirement not being followed correctly. Further, confirm that action plans were identified and carried out.

4. Verify that security controls are embedded at every stage of the system development lifecycle.

5. Verify the effectiveness of technical measures such as secure coding practices, encryption, and access controls.

6. Verify that regular assessments and audits are conducted to evaluate the effectiveness of security measures and identify potential risks.

7. Verify that all processes, procedures, and technical measures related to security by design are thoroughly documented and regularly updated to reflect changes in industry best practices and regulations.

8. Examine the CSP's policy, standards, and procedures, and determine if third-party data protection practices are considered.

9. Examine system design documentation to verify that security requirements were incorporated during the infrastructure design phase rather than added later, with particular focus on AI-specific processing requirements.

10. Verify that the infrastructure implements a defense-in-depth strategy with multiple security layers, including network segmentation, access controls, encryption, monitoring, and physical security appropriate for AI workloads.

11. Review the secure configuration baseline for infrastructure components, confirming it aligns with industry standards (e.g., CIS benchmarks, NIST guidelines) and is implemented by default across the environment.

12. Assess the infrastructure design review process, verifying that security assessments are conducted during design phases and that findings are addressed before deployment.

13. Evaluate how security considerations for high-performance computing environments typical in AI workloads are balanced with protection requirements without compromising either.

14. Verify that infrastructure monitoring capabilities are designed to detect security events specific to AI operations, including unusual data access patterns or resource utilization.
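The baseline check in guideline 11 lends itself to automation: compare each infrastructure component's configuration against the secure baseline and report deviations as audit findings. This is a sketch in the spirit of CIS-style benchmarks; the baseline keys, values, and component names are illustrative assumptions, and a real audit would pull configurations from the CSP's own tooling.

```python
# Illustrative sketch: audit component configurations against a secure
# baseline and return per-component deviations as (expected, actual) pairs.
BASELINE = {
    "encryption_at_rest": True,   # assumed baseline settings
    "tls_min_version": "1.2",
    "public_access": False,
}

def audit(components):
    """Return {component: {setting: (expected, actual)}} for deviations only."""
    findings = {}
    for name, config in components.items():
        deviations = {
            key: (expected, config.get(key))
            for key, expected in BASELINE.items()
            if config.get(key) != expected
        }
        if deviations:
            findings[name] = deviations
    return findings
```

An empty result means the sampled components match the baseline; any non-empty entry is a finding to track through the deviation-handling and change-management records reviewed in guidelines 3 and 12.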

Standards mappings

ISO 42001 · No Gap
42001: A.2.3 Alignment with other organizational policies
42001: A.6.1.3 Processes for trustworthy AI system design and development
42001: A.6.2.2 AI system requirements and specification
42001: A.6.2.5 AI system deployment
42001: A.6.2.7 AI system technical documentation
42001: A.7.2 Data for development and enhancement of AI system
42001: A.7.5 Data provenance
27001: A.5.8 Information security in project management
27001: A.5.21 Managing information security in the information and communication technology (ICT) supply chain
27001: A.8.25 Secure development life cycle
27001: A.8.26 Application security requirements
27001: A.8.27 Secure system architecture and engineering principles
27001: A.8.28 Secure coding
27001: A.8.29 Security testing in development and acceptance
27001: A.8.30 Outsourced development
27001: A.8.31 Separation of development, test and production environments
27001: A.8.32 Change management
27001: A.8.33 Test information
27002: 5.8 Information security in project management
27002: 5.21 Managing information security in the information and communication technology (ICT) supply chain
27002: 8.25 Secure development life cycle
27002: 8.26 Application security requirements
27002: 8.27 Secure system architecture and engineering principles
27002: 8.28 Secure coding
27002: 8.29 Security testing in development and acceptance
27002: 8.30 Outsourced development
27002: 8.31 Separation of development, test and production environments
27002: 8.32 Change management
27002: 8.33 Test information
Addendum

N/A

EU AI Act · No Gap
Article 10 (2) (a)
Article 14
Addendum

N/A

NIST AI 600-1 · Partial Gap
GV-4.1-003
GV-5.1-001
MG-2.2-009
MP-2.3-002
MP-2.3-005
MS-1.3-003
Addendum

NIST AI 600-1 partially covers the DSP-07 topic of security by design, but it does not address the development lifecycle; related aspects appear only loosely and in scattered places.

BSI AIC4 · No Gap
DEV-01
SR-06
PC-01
Addendum

N/A

AI-CAIQ questions (1)

DSP-07.1

Are systems, products, and business practices developed based upon a principle of security by design and industry best practices?