AICM Atlas · CSA AI Controls Matrix
CEK · Cryptography, Encryption & Key Management
CEK-03 · Cloud & AI Related

Data Encryption

Specification

Provide data protection at-rest, in-transit and, where applicable, in-use by using cryptographic libraries certified to approved standards.
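The in-transit half of this specification can be sketched with Python's standard-library `ssl` module. The hardcoded TLS 1.3 floor below is an illustrative choice, and a FIPS-validated build of the underlying TLS library would still be needed to satisfy the certified-library requirement:

```python
import ssl

def hardened_client_context() -> ssl.SSLContext:
    """Client-side TLS context that refuses anything below TLS 1.3.

    create_default_context() keeps certificate verification and
    hostname checking enabled; this only raises the protocol floor.
    """
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # reject TLS 1.2 and below
    return ctx
```

Passing this context to `http.client` or `urllib` connections enforces the floor for every outbound call; a server-side context can set the same floor.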

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Data curation, Data storage

Development

Training

Evaluation

Evaluation, Validation/Red Teaming, Re-evaluation

Deployment

AI Services supply chain, AI applications

Delivery

Maintenance, Continuous improvement

Retirement

Archiving, Data deletion

Ownership / SSRM

Physical Infrastructure (PI)

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Model

Owned by the Model Provider (MP)

The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Shared Orchestrated Service Provider-Application Provider (Shared OSP-AP)

The OSP and AP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Implementation guidelines

[All Actors]
Applies to all roles as a baseline, before role-specific context is applied.
1. Establish and document policies and procedures for Cryptography, Encryption, and Key Management.

2. Approve the policies and procedures through formal governance processes (e.g., security committee, CISO).

3. Communicate the policies and procedures to all relevant stakeholders.

4. Apply the approved policies and procedures to all systems, services, and processes under the role’s control.

5. Evaluate the effectiveness of policy and procedure implementation using internal audits, technical reviews, and encryption control validations.

6. Review and update the policies and procedures at least annually, or when significant system, model, or regulatory changes occur.
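The review cadence in steps 5 and 6 can be captured as a simple record check; the field names and dates below are illustrative, not taken from the AICM:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class CryptoPolicy:
    """Minimal record for a CEK policy document (illustrative fields)."""
    name: str
    approved_by: str            # e.g. "CISO" or "security committee"
    last_reviewed: date
    review_interval: timedelta = timedelta(days=365)  # at least annually

    def review_due(self, today: date) -> bool:
        """True when the last review is older than the review interval."""
        return today - self.last_reviewed >= self.review_interval

policy = CryptoPolicy("CEK-03 Data Encryption", "CISO", date(2024, 1, 15))
assert policy.review_due(date(2025, 2, 1))      # over a year old: due
assert not policy.review_due(date(2024, 6, 1))  # reviewed recently: not due
```

A real implementation would also trigger reviews on significant system, model, or regulatory changes, which this date-only check does not model.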

Auditing guidelines

1. Verify that the CSP enforces encryption of data at-rest and in-transit across its cloud infrastructure, including storage services, networking layers, and compute environments, using cryptographic libraries certified to approved standards (e.g., FIPS 140-2/3).

2. Confirm that the encryption algorithms and protocols used (e.g., AES-256, TLS 1.3, RSA-2048) are appropriate for the classification of the protected data and are consistently applied across service layers.

3. Review platform-level configurations, default encryption settings, and service templates to validate the enforcement of encryption for customer workloads, control plane traffic, and metadata.

4. Validate that customer-facing services include capabilities to configure or enforce encryption, including tenant-level encryption policies, key selection (e.g., CSP-managed, BYOK), and automatic encryption toggles.

5. Confirm that AI-related services (e.g., model hosting, inference APIs, storage of prompt/completion data) are protected by the same cryptographic mechanisms, and that LLM-specific data flows are not exempt from encryption policies.

6. Review documentation and service descriptions to ensure that data encryption mechanisms are exposed and clearly explained to customers, including implementation standards, key storage locations, and responsibilities.

7. Verify that cryptographic modules (e.g., KMS, HSMs, TLS libraries) are implemented and maintained according to secure coding practices and approved validation schemes.

8. Confirm that the CSP maintains and updates an inventory of cryptographic libraries and protocols in use and that deprecated or weak algorithms are phased out systematically.

9. Review evidence that encryption enforcement is monitored through automated compliance checks, alerts, and internal audit reviews, with exceptions formally tracked and approved.

10. Verify that the CSP provides customers and downstream roles (e.g., APs, AICs) with tools or APIs to confirm encryption status, configure encryption policies, and receive audit logs related to data protection practices.
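Items 2, 8, and 9 above amount to an automated sweep over a resource inventory. A minimal sketch, assuming a flat list of per-resource configuration dicts; the approved and deprecated algorithm sets are examples, not an authoritative policy:

```python
APPROVED = {"AES-256-GCM", "AES-256", "TLS1.3", "RSA-2048"}   # illustrative
DEPRECATED = {"DES", "3DES", "RC4", "SHA-1", "TLS1.0", "TLS1.1"}

def audit(resources):
    """Return (resource id, finding) pairs for unencrypted or weak-crypto resources."""
    findings = []
    for r in resources:
        if not r.get("encrypted"):
            findings.append((r["id"], "encryption disabled"))
        elif r.get("algorithm") in DEPRECATED:
            findings.append((r["id"], f"deprecated algorithm {r['algorithm']}"))
        elif r.get("algorithm") not in APPROVED:
            findings.append((r["id"], f"unapproved algorithm {r.get('algorithm')}"))
    return findings

inventory = [
    {"id": "bucket-1", "encrypted": True, "algorithm": "AES-256-GCM"},
    {"id": "queue-7", "encrypted": True, "algorithm": "3DES"},
    {"id": "disk-3", "encrypted": False},
]
assert audit(inventory) == [
    ("queue-7", "deprecated algorithm 3DES"),
    ("disk-3", "encryption disabled"),
]
```

In practice such findings would feed the formally tracked exception process described in item 9 rather than a bare list.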

From CCM:

1. Identify data flows within the organization that are in transit.
2. Identify data storages within the organization that are at rest.
3. Confirm that the identified data flows and data storages have been protected by an appropriate cryptographic algorithm aligned to cryptography, encryption, and key management policy and procedures.
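The three CCM steps can be sketched as an inventory check, assuming flows and storages are already enumerated as dicts; the names and the approved set are illustrative:

```python
APPROVED = {"TLS1.3", "AES-256-GCM"}  # stand-in for the policy's approved list

def unprotected(items, mechanism_key):
    """Step 3: names of items whose protection mechanism is not approved."""
    return [i["name"] for i in items if i.get(mechanism_key) not in APPROVED]

# Step 1: data flows in transit
flows = [
    {"name": "api-to-db", "protocol": "TLS1.3"},
    {"name": "etl-export", "protocol": "plaintext"},
]
# Step 2: data storages at rest
stores = [{"name": "training-data", "cipher": "AES-256-GCM"}]

assert unprotected(flows, "protocol") == ["etl-export"]
assert unprotected(stores, "cipher") == []
```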

Standards mappings

ISO 42001 · Partial Gap
No Mapping for ISO 42001
ISO 27001:2022 · A.8.24
ISO 27002:2022 · 8.24
Addendum

Add a control requiring cryptographic protection for data at-rest and in-transit in AI systems, mandating the use of cryptographic libraries certified to approved standards (e.g., FIPS 140-2/3), with implementation guidance and periodic validation, aligning with ISO 27001 (A.8.24) and ISO 27002 (8.24) while addressing AI-specific data security needs.

EU AI Act · Partial Gap
Recital 69, page 20/144
Addendum

The EU AI Act contains no language requiring the use of cryptographic libraries certified to approved standards.

NIST AI 600-1 · Full Gap
No Mapping
Addendum

NIST AI 600-1 makes no explicit or implicit reference to cryptography, encryption, or key management, let alone to the use of cryptographic libraries certified to approved standards to secure data.

BSI AIC4 · No Gap
CRY-02
CRY-03
Addendum

N/A

AI-CAIQ questions (1)

CEK-03.1

Is data protection at-rest, in-transit, and, where applicable, in-use provided by using cryptographic libraries certified to approved standards?