AICM Atlas · CSA AI Controls Matrix
DSP · Data Security and Privacy Lifecycle Management
DSP-22 · AI-Specific

Privacy Enhancing Technologies

Specification

Use Privacy Enhancing Technologies for training data, informed by risk and privacy impact analysis and business use cases.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Data collection, Data storage

Development

Design, Training

Evaluation

Validation/Red Teaming

Deployment

AI Services supply chain, AI applications

Delivery

Operations, Continuous improvement

Retirement

Not applicable

Ownership / SSRM

PI

Shared across the supply chain

Shared control ownership refers to responsibilities and activities related to LLM security that are distributed across multiple stakeholders within the AI supply chain, including the Cloud Service Provider (CSP), Model Provider (MP), Orchestrated Service Provider (OSP), Application Provider (AP), and Customer (AIC). These controls require coordinated actions, communication, and governance across all involved parties to ensure their effectiveness.

Model

Owned by the Model Provider (MP)

The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Shared Application Provider-AI Customer (Shared AP-AIC)

The AP and AIC both share responsibility and accountability for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they offer and consume.

Implementation guidelines

[All Actors]
1. Conduct Risk and Privacy Impact Analysis
a. Define a privacy risk and impact assessment framework that encompasses AI-specific privacy risks.
b. Document the purpose, scope, and intended outcomes of the AI system.
c. Implement a risk classification system (low, moderate, high) based on the use case and data sensitivity.
d. Update the privacy risk and impact assessment framework as new risks are identified and as regulations change.
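The classification step in (c) can be sketched as a small scoring function. This is a minimal illustration: the tier names match the guideline above, but the sensitivity labels, exposure factor, and thresholds are assumptions an organization would calibrate to its own framework.

```python
# Illustrative risk classification for an AI use case.
# Sensitivity labels and score thresholds are hypothetical.
SENSITIVITY_SCORES = {"public": 0, "internal": 1, "pii": 2, "special_category": 3}

def classify_risk(data_sensitivity: str, externally_exposed: bool) -> str:
    """Return 'low', 'moderate', or 'high' from data sensitivity and exposure."""
    score = SENSITIVITY_SCORES[data_sensitivity]
    if externally_exposed:
        score += 1  # external exposure raises the tier
    if score >= 3:
        return "high"
    if score == 2:
        return "moderate"
    return "low"
```

In practice the scoring inputs would come from the documented purpose and scope of the system (step b), so the classification stays traceable to the impact assessment.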

2. Select Privacy Enhancing Technologies (PETs) Aligned with Business Objectives
a. Establish guidelines for selecting PETs based on risk profile and business objective, for example:
  i. Low risk - data minimization, pseudonymization, and anonymization.
  ii. Moderate risk - differential privacy, synthetic data, secure data enclaves.
  iii. High risk - federated learning, secure multi-party computation (SMPC), or homomorphic encryption.
b. Document the selected PETs and engage legal, compliance, and audit stakeholders for review.
c. Re-evaluate the selection periodically as business use cases evolve.
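The tier-to-PET guideline in (a) is essentially a lookup table, which can be captured directly in configuration or code. A minimal sketch, assuming the tier names and PET groupings listed above:

```python
# Hypothetical mapping from risk tier to candidate PETs, mirroring the
# low/moderate/high guideline in the selection step.
PET_BY_RISK = {
    "low": ["data minimization", "pseudonymization", "anonymization"],
    "moderate": ["differential privacy", "synthetic data", "secure data enclaves"],
    "high": ["federated learning", "SMPC", "homomorphic encryption"],
}

def candidate_pets(risk_level: str) -> list[str]:
    """Return the candidate PETs for a given risk tier ('low'/'moderate'/'high')."""
    return PET_BY_RISK[risk_level]
```

Keeping the mapping as data rather than prose makes the periodic re-evaluation in (c) a reviewable diff when business use cases evolve.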

3. List of Privacy Enhancing Technologies for Training Data - Organizations should select PETs aligned with their business use case and the results of the privacy impact assessment to mitigate identified risks:
  a. Differential Privacy - Mathematical noise is added to data or model outputs to prevent leakage of individual data points.
     Business use case: training on sensitive datasets without revealing specific individual records.
  b. Federated Learning - AI models are trained locally and only model updates are aggregated.
     Business use case: raw data cannot be shared due to privacy or proprietary concerns.
  c. Homomorphic Encryption - AI models are trained via computations on encrypted data.
     Business use case: unencrypted data cannot be shared due to privacy concerns.
  d. Confidential Computing - AI models are trained in hardware-based Trusted Execution Environments (TEEs), known as secure enclaves.
     Business use case: data must be processed in secure hardware environments, such as financial transactions.
  e. Secure Multi-Party Computation (SMPC) - AI models are trained on split data contributed by multiple parties.
     Business use case: multiple parties need to collaborate on private data without revealing individual inputs.
  f. Data Anonymization - AI models are trained on datasets from which identifiable information has been removed or masked.
     Business use case: data sharing must satisfy privacy law or regulatory standards.
  g. Data Pseudonymization - AI models are trained on datasets in which identifiable information has been replaced with reversible tokens or identifiers.
     Business use case: data must remain linkable for analysis and business operations.
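Two of the PETs in the list lend themselves to short sketches: differential privacy via Laplace noise on a counting query, and pseudonymization via keyed hashing. Both are minimal illustrations under stated assumptions, not production implementations; note that HMAC-based tokens are consistent and linkable, but re-identification requires the key holder to maintain a token-to-identifier lookup table.

```python
import hashlib
import hmac
import math
import random

def dp_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Differentially private count: add Laplace(1/epsilon) noise.

    A counting query has sensitivity 1, so the Laplace scale is 1/epsilon.
    """
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    scale = 1.0 / epsilon
    # Inverse-CDF sampling of the Laplace distribution.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace an identifier with a consistent keyed token (HMAC-SHA256).

    Records stay linkable across datasets for analysis; raw identifiers
    never appear in the training data.
    """
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()[:16]
```

A smaller epsilon in `dp_count` means more noise and stronger privacy; the pseudonymization key must be stored and access-controlled separately from the tokenized dataset.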

4. Monitoring and Review
a. Monitor privacy metrics and perform periodic compliance assessments to verify that PETs meet privacy objectives, regulatory requirements, and business use case specifications.
b. Evaluate and monitor the effectiveness of each PET implementation.
c. Establish performance baselines and acceptable latency thresholds for each PET implementation.
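One way to operationalize the baseline check in (c) is a simple regression test on observed latencies. The function name and the 25% tolerance below are illustrative assumptions:

```python
import statistics

def latency_regressed(baseline_ms: list[float], current_ms: list[float],
                      tolerance: float = 1.25) -> bool:
    """Return True if the current median latency exceeds the established
    baseline median by more than the tolerance factor (default: 25%)."""
    return statistics.median(current_ms) > tolerance * statistics.median(baseline_ms)
```

Running such a check on each PET implementation turns the "acceptable latency threshold" into an automated alert rather than a manual review item.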

Auditing guidelines

1. Verify that infrastructure-level PETs (e.g., encrypted computation environments, secure aggregation) are implemented based on clearly defined business use cases.

2. Verify that PET infrastructure components are continuously monitored and evaluated to detect degradation or new risks.

3. Verify that PETs offered as services (e.g., secure multiparty computation platforms) meet privacy compliance standards.

4. Verify that metrics and audit reports for infrastructure-level PETs are defined and monitored for effectiveness.

5. Verify that DevOps and infrastructure teams are trained on PET deployment, updates, and security maintenance.

6. Verify that PET systems hosted in the infrastructure are kept up to date with relevant security patches.

7. Verify that PET usage logs (e.g., access to secure enclaves, computation results) are reviewed and analyzed.

8. Verify that periodic third-party penetration tests and vulnerability assessments are conducted on PET-enabled infrastructure offerings.

Standards mappings

ISO 42001 · No Gap
42001: 4.1 Understanding the organization and its context
42001: 6.1.4 AI System Impact Assessment
42001: A.4.2 Resource documentation
42001: A.4.4 Tooling resources
42001: A.2.3 Alignment with other organizational policies
27001: A.5.15 - Access Control
27001: A.5.34 - Privacy and protection of personal identifiable information (PII)
27001: A.8.5 - Secure Authentication
27001: A.8.11 - Data masking
27001: A.8.12 - Data leakage prevention
27001: A.8.20 - Networks security
27001: A.8.24 - Use of cryptography
27002: 5.15 - Access Control
27002: 5.34 - Privacy and protection of personal identifiable information (PII)
27002: 8.5 - Secure Authentication
27002: 8.11 - Data masking
27002: 8.12 - Data leakage prevention
27002: 8.20 - Networks security
27002: 8.24 - Use of cryptography
Addendum

N/A

EU AI Act · Partial Gap
Article 10 (2) (f)
Article 15
Addendum

The EU AI Act requires privacy protection but does not name specific technologies such as differential privacy or federated learning, so individual PETs are not detailed.

NIST AI 600-1 · Partial Gap
MS-2.2-004
Addendum

NIST AI 600-1 does not address training data within the scope of this control (DSP-22).

BSI AIC4 · No Gap
SR-06
Addendum

N/A

AI-CAIQ questions (1)

DSP-22.1

Are Privacy Enhancing Technologies (PET) used for training data informed by risk and privacy impact analysis and business use cases?