Privacy Enhancing Technologies
Specification
Use Privacy Enhancing Technologies for training data, informed by risk and privacy impact analysis and business use cases.
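Where PETs are adopted for training data, the choice of technique and its parameters should follow from the risk and privacy impact analysis. The snippet below is a minimal, non-authoritative sketch of one such technique, differential privacy via the Laplace mechanism, applied to a training-data aggregate; the attribute bounds, epsilon, and dataset are assumptions chosen purely for demonstration.

```python
# Illustrative sketch only: differential privacy (Laplace mechanism) applied
# to a training-data aggregate. Bounds, epsilon, and data are assumed values
# for demonstration, not part of this control.
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of a numeric aggregate."""
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: release a privacy-protected mean of an assumed numeric attribute
# from a training dataset instead of the exact value.
LOWER, UPPER = 18.0, 90.0                              # assumed clipping bounds
ages = np.clip(np.array([34.0, 29.0, 41.0, 52.0, 38.0]), LOWER, UPPER)
sensitivity = (UPPER - LOWER) / len(ages)              # replace-one sensitivity of the mean
dp_mean = laplace_mechanism(float(ages.mean()), sensitivity=sensitivity, epsilon=0.5)
print(f"Exact mean: {ages.mean():.2f}, DP-protected mean: {dp_mean:.2f}")
```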
Threat coverage
Not applicable
Architectural relevance
AI Services supply chain, AI applications
Lifecycle
Data collection, Data storage, Design, Training, Validation/Red Teaming, Operations, Continuous improvement
Ownership / SSRM
PI
Shared across the supply chain
Shared control ownership refers to responsibilities and activities related to LLM security that are distributed across multiple stakeholders within the AI supply chain, including the Cloud Service Provider (CSP), Model Provider (MP), Orchestrated Service Provider (OSP), Application Provider (AP), and AI Customer (AIC). These controls require coordinated actions, communication, and governance across all involved parties to ensure their effectiveness.
Model
Owned by the Model Provider (MP)
The Model Provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.
Orchestrated
Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)
The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.
Application
Shared Application Provider-AI Customer (Shared AP-AIC)
The AP and AIC both share responsibility and accountability for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they offer and consume.
Implementation guidelines
Auditing guidelines
1. Verify that infrastructure-level PETs (e.g., encrypted computation environments, secure aggregation; see the sketch after this list) are implemented based on clearly defined business use cases.
2. Verify that PET infrastructure components are continuously monitored and evaluated to detect degradation or new risks.
3. Verify that PETs offered as services (e.g., secure multiparty computation platforms) meet privacy compliance standards.
4. Verify that metrics and audit reports for infrastructure-level PETs are defined and monitored for effectiveness.
5. Verify that DevOps and infrastructure teams are trained on PET deployment, updates, and security maintenance.
6. Verify that PET systems hosted in the infrastructure are kept up to date with relevant security patches.
7. Verify that PET usage logs (e.g., access to secure enclaves, computation results) are reviewed and analyzed.
8. Verify that periodic third-party penetration tests and vulnerability assessments are conducted on PET-enabled infrastructure offerings.
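To make guideline 1 more concrete, the following is a minimal sketch of secure aggregation using pairwise additive masks, in which an aggregator learns only the sum of the parties' updates. The party names, update values, and modulus are assumptions for illustration only; a production deployment would rely on an established secure aggregation or secure multiparty computation protocol.

```python
# Illustrative sketch of secure aggregation with pairwise additive masks.
# Party names, update values, and the modulus are assumed for demonstration.
import random

MODULUS = 2**32
parties = {"A": 12, "B": 7, "C": 30}  # each party's private model update (assumed)

# Each unordered pair of parties agrees on a shared random mask;
# the lexicographically smaller party adds it, the larger one subtracts it.
names = sorted(parties)
pair_masks = {(i, j): random.randrange(MODULUS)
              for i in names for j in names if i < j}

def masked_update(name: str) -> int:
    """Return the party's update with all of its pairwise masks applied."""
    value = parties[name]
    for (i, j), mask in pair_masks.items():
        if name == i:
            value = (value + mask) % MODULUS
        elif name == j:
            value = (value - mask) % MODULUS
    return value

# The aggregator sees only masked values; the masks cancel in the sum,
# so the total equals the true aggregate without exposing any single update.
aggregate = sum(masked_update(n) for n in names) % MODULUS
assert aggregate == sum(parties.values()) % MODULUS
print("Aggregate of private updates:", aggregate)
```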
Standards mappings
42001: 4.1 Understanding the organization and its context
42001: 6.1.4 AI System Impact Assessment
42001: A.4.2 Resource documentation
42001: A.4.4 Tooling resources
42001: A.2.3 Alignment with other organizational policies
27001: A.5.15 - Access Control
27001: A.5.34 - Privacy and protection of personally identifiable information (PII)
27001: A.8.5 - Secure Authentication
27001: A.8.11 - Data masking
27001: A.8.12 - Data leakage prevention
27001: A.8.20 - Networks security
27001: A.8.24 - Use of cryptography
27002: 5.15 - Access Control
27002: 5.34 - Privacy and protection of personally identifiable information (PII)
27002: 8.5 - Secure Authentication
27002: 8.11 - Data masking
27002: 8.12 - Data leakage prevention
27002: 8.20 - Networks security
27002: 8.24 - Use of cryptography
Addendum
N/A
Article 10(2)(f), Article 15
Addendum
Privacy protection is required, but differential privacy and federated learning are not specifically mentioned, so specific technologies are not detailed.
MS-2.2-004
Addendum
NIST AI 600-1 does not include training data in the DSP-22 topic.
SR-06
Addendum
N/A
AI-CAIQ questions (1)
Are Privacy Enhancing Technologies (PETs) used for training data, informed by risk and privacy impact analysis and business use cases?