AICM Atlas · CSA AI Controls Matrix
UEM · Universal Endpoint Management
UEM-08 · Cloud & AI Related

Storage Encryption

Specification

Protect information from unauthorized disclosure on managed endpoint devices with storage encryption.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Data storage, Resource provisioning

Development

Training

Evaluation

Evaluation, Validation/Red Teaming, Re-evaluation

Deployment

Orchestration, AI Services supply chain, AI applications

Delivery

Operations, Maintenance, Continuous monitoring, Continuous improvement

Retirement

Archiving, Data deletion, Model disposal

Ownership / SSRM

Physical Infrastructure (PI)

Shared across the supply chain

Shared control ownership refers to responsibilities and activities related to LLM security that are distributed across multiple stakeholders within the AI supply chain, including the Cloud Service Provider (CSP), Model Provider (MP), Orchestrated Service Provider (OSP), Application Provider (AP), and Customer (AIC). These controls require coordinated actions, communication, and governance across all involved parties to ensure their effectiveness.

Model

Owned by the Model Provider (MP)

The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Shared Orchestrated Service Provider-Application Provider (Shared OSP-AP)

The OSP and AP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Implementation guidelines

[Applicable to all actors except CSP]  

1. Mandate full-disk encryption on all managed endpoints for every AI supply chain participant, using approved tools (e.g., BitLocker, FileVault) and verifying compliance via UEM dashboards to ensure data at rest on any device is protected.

2. Set a unified policy that no sensitive AI data (models, training data, customer information) may reside on an endpoint unless storage encryption is active; configure UEM to block or quarantine devices that report as unencrypted.

3. Adopt a consistent cryptographic strength standard across stakeholders (e.g., AES-256 encryption, FIPS 140-2 compliant solutions) for endpoint encryption, so that all data is secured to an equivalent level regardless of which organization’s device it is on.

4. Include encryption status in regular joint security reviews or compliance attestations, requiring each stakeholder to demonstrate that lost or stolen devices from their environment were encrypted (providing mutual assurance that a device breach at one party won't expose plaintext data).
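Guidelines 2 and 3 lend themselves to automated enforcement. The sketch below is a minimal illustration in Python of the decision logic only: the record fields (`hostname`, `encrypted`, `algorithm`) and the approved-algorithm list are assumptions for the example, not the schema of any particular UEM product, which would supply its own reporting API.

```python
# Minimal sketch of the enforcement logic in guidelines 2 and 3.
# The record format (hostname, encrypted, algorithm) is hypothetical;
# a real UEM exposes its own compliance schema.

APPROVED_ALGORITHMS = {"AES-256", "XTS-AES-256"}  # guideline 3: unified strength standard

def evaluate_endpoint(record: dict) -> str:
    """Return 'compliant' or 'quarantine' for one endpoint record."""
    if not record.get("encrypted", False):
        return "quarantine"  # guideline 2: block devices reporting as unencrypted
    if record.get("algorithm") not in APPROVED_ALGORITHMS:
        return "quarantine"  # weaker cipher than the shared standard
    return "compliant"

def partition_fleet(records: list[dict]) -> tuple[list[str], list[str]]:
    """Split a fleet report into compliant hostnames and quarantine candidates."""
    compliant, quarantine = [], []
    for rec in records:
        target = compliant if evaluate_endpoint(rec) == "compliant" else quarantine
        target.append(rec["hostname"])
    return compliant, quarantine

fleet = [
    {"hostname": "dev-01", "encrypted": True, "algorithm": "XTS-AES-256"},
    {"hostname": "dev-02", "encrypted": False, "algorithm": None},
    {"hostname": "dev-03", "encrypted": True, "algorithm": "AES-128"},
]
ok, blocked = partition_fleet(fleet)
print(ok, blocked)
```

In this sketch an unencrypted device and a device using a cipher below the agreed standard are both routed to quarantine, matching the unified policy across stakeholders.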

Auditing guidelines

1. Verify that the CSP has a documented Encryption Policy for endpoints, approved by governance, defining sensitivity tiers, supported encryption levels (file, full‑disk), and roles/responsibilities.

2. Inspect the policy to confirm use of industry‑standard algorithms and strong cryptography for all sensitive data, with key management procedures and allowable exceptions clearly defined.

3. Confirm the policy mandates automated enforcement, such as diagnostic tools that validate encryption status and trigger remediation (patching, upgrades) before network access.

4. Verify that the policy requires testing of encryption workflows in isolated environments and integrates encryption checks into change‑management and orchestration processes.

5. Review system outputs (encryption compliance dashboards, remediation logs, change‑approval tickets, and audit trails) to ensure all endpoint devices adhere to the CSP’s storage encryption requirements.
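Audit step 5 reduces to reconciling the asset inventory against the encryption compliance dashboard. A minimal sketch, assuming both sources are available as plain sets of hostnames (the data shapes are illustrative, not a specific product's export format):

```python
# Sketch of audit step 5: reconcile the asset inventory against the
# encryption compliance dashboard. Hostname sets are illustrative.

def audit_coverage(inventory: set[str], encrypted: set[str]) -> dict:
    """Report endpoints missing from the dashboard or reporting unencrypted."""
    return {
        # in inventory but not reporting encrypted: audit findings
        "unencrypted_or_unreported": sorted(inventory - encrypted),
        # reporting encrypted but unknown to inventory: stale dashboard rows
        "unknown_to_inventory": sorted(encrypted - inventory),
        # True only when every inventoried endpoint reports encrypted
        "clean": inventory <= encrypted,
    }

inventory = {"dev-01", "dev-02", "dev-03"}
dashboard = {"dev-01", "dev-03", "old-99"}
print(audit_coverage(inventory, dashboard))
```

Any hostname in the first list is a finding to raise against the CSP's storage encryption requirements; the second list flags dashboard entries that need inventory cleanup before the attestation is meaningful.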

From CCM: 
1. Examine the organization's asset disposal policy for end-of-life security requirements.
2. Examine the organization's policy on encryption or otherwise protection of data at rest on endpoints.
3. Determine if such controls are in place and evaluated as effective.

Standards mappings

ISO 42001 · Partial Gap
No Mapping for ISO 42001
ISO 27001 A.8.1
ISO 27001 A.8.5
Addendum

No ISO 42001 control addresses the UEM-08 topic of encryption for stored data.

EU AI Act · Partial Gap
Recital 69 (pg.20)
Article 10
Article 15
Article 17
Addendum

Amend the Act, or provide a technical annex, to specify that all managed endpoint devices storing or processing high-risk AI data must implement strong encryption mechanisms.

NIST AI 600-1 · No Gap
MP-4.1-009
MS-2.7-005
Addendum

N/A

BSI AIC4 · No Gap
C4 SR-06
C5 AM-06
C5 CRY-03
Addendum

N/A

AI-CAIQ questions (1)

UEM-08.1

Is information protected from unauthorized disclosure on managed endpoints with storage encryption?