AICM Atlas · CSA AI Controls Matrix
TVM · Threat & Vulnerability Management
TVM-13 · Cloud & AI Related

Threat Response

Specification

Use a risk-based method for the prioritization and mitigation of threats, leveraging an industry-recognized framework to guide threat decision-making and protection measures.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Data curation, Data storage, Resource provisioning, Team and expertise

Development

Design, Training, Guardrails

Evaluation

Evaluation, Validation/Red Teaming, Re-evaluation

Deployment

Orchestration, AI Services supply chain, AI applications

Delivery

Operations, Maintenance, Continuous monitoring

Retirement

Data deletion, Model disposal

Ownership / SSRM

Physical Infrastructure (PI)

Owned by the Cloud Service Provider (CSP)

The Cloud Service Provider (CSP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with cloud computing (processing, storage, and networking) technologies in the context of the services or products they develop and offer. The CSP is responsible and accountable for implementing the control within its own infrastructure/environment. The CSP is responsible for enabling the customer and/or upstream partner to implement/configure the control within their risk management approach. The CSP is accountable for ensuring that its providers upstream implement the control related to the service/product developed and offered by the CSP.

Model

Owned by the Model Provider (MP)

The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Shared Orchestrated Service Provider-Application Provider (Shared OSP-AP)

The OSP and AP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Implementation guidelines

[All Actors]
1. Adopt a Risk-Based Threat Management Framework (e.g., NIST SP 800-30, ISO/IEC 27005, MITRE ATT&CK for AI and Cloud, etc.)
- Establish a continuous threat intelligence process.
- Identify relevant AI and cloud-specific threat vectors.
- Classify assets, data flows, and AI components to prioritize risks based on impact and likelihood.
- Use threat modeling (e.g., STRIDE, DREAD) on AI pipeline stages.

2. Prioritize threats based on (see the sketch after this list):
- Potential for model manipulation, poisoning, or inversion attacks.
- Data exposure risk.
- Business impact (e.g., customer-facing models vs. internal tools).

3. Implement risk response options: mitigate, accept, transfer, avoid.

4. Align with the organization’s incident response plan (IRP).

5. Map AI-specific threats (e.g., adversarial ML inputs) to response playbooks.

6. Document and automate detection-and-response logic where feasible.
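
The following is a minimal, illustrative Python sketch of the risk-based prioritization and playbook mapping described in guidelines 2, 5, and 6. The 3x3 likelihood-by-impact scale is loosely modeled on qualitative NIST SP 800-30 scoring, and all threat entries, asset names, and playbook identifiers (e.g., IRP-AI-01) are hypothetical placeholders; organizations should substitute the scoring logic of whichever industry-recognized framework they adopt.

```python
from dataclasses import dataclass

# Qualitative 3x3 scales loosely modeled on NIST SP 800-30; values are illustrative assumptions.
LIKELIHOOD = {"low": 1, "moderate": 2, "high": 3}
IMPACT = {"low": 1, "moderate": 2, "high": 3}

# Hypothetical mapping of AICM threat categories to incident-response playbooks (placeholder names).
PLAYBOOKS = {
    "Data poisoning": "IRP-AI-01: isolate training data, retrain from a trusted snapshot",
    "Model theft": "IRP-AI-02: rotate API credentials, review egress logs, notify stakeholders",
    "Sensitive data disclosure": "IRP-AI-03: revoke access, assess exposure, trigger privacy review",
}

@dataclass
class Threat:
    name: str        # AICM threat category, e.g. "Data poisoning"
    likelihood: str  # "low" | "moderate" | "high"
    impact: str      # "low" | "moderate" | "high"
    asset: str       # affected AI asset or pipeline stage

    @property
    def risk_score(self) -> int:
        # Simple likelihood x impact product (1-9); replace with the organization's chosen framework.
        return LIKELIHOOD[self.likelihood] * IMPACT[self.impact]

def prioritize(register: list[Threat]) -> list[Threat]:
    """Order the threat register from highest to lowest risk score."""
    return sorted(register, key=lambda t: t.risk_score, reverse=True)

if __name__ == "__main__":
    register = [
        Threat("Data poisoning", "moderate", "high", "training pipeline"),
        Threat("Model theft", "low", "high", "inference API"),
        Threat("Sensitive data disclosure", "high", "high", "customer-facing chatbot"),
    ]
    for t in prioritize(register):
        playbook = PLAYBOOKS.get(t.name, "default IRP escalation")
        print(f"{t.risk_score:>2}  {t.name:<28} asset={t.asset:<24} -> {playbook}")
```

The output of such a script can feed the prioritization step of the incident response plan, but the scoring model itself should be documented and reviewed as part of the control evidence.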

Auditing guidelines

1. Review threat modeling for infrastructure supporting AI workloads.

2. Validate CSP’s shared responsibility documentation for AI threat handling.

3. Evaluate services offered for AI threat detection (e.g., GuardDuty ML, Sentinel AI rules).

4. Check scoring and categorization logic for threats affecting AI deployments (a sample automated check follows this list).

5. Assess incident response SLAs for AI-specific threats (e.g., data exfiltration from AI APIs).

6. Confirm existence of collaborative threat response mechanisms with AI customers.
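
As a companion to auditing guideline 4, the sketch below is an assumption rather than part of the control: it checks a machine-readable threat register against this control's threat coverage list, verifying that each category is present, scored within the expected range, and mapped to a response playbook. The field names and the 1-9 range follow the hypothetical scoring model sketched above.

```python
# Threat categories from this control's "Threat coverage" list.
TVM_13_THREATS = {
    "Model manipulation", "Data poisoning", "Sensitive data disclosure",
    "Model theft", "Model/Service Failure", "Insecure supply chain",
    "Insecure apps/plugins", "Denial of Service", "Loss of governance",
}

def audit_threat_register(register: list[dict]) -> list[str]:
    """Return audit findings; an empty list means the register passed all checks."""
    findings = []
    covered = {entry.get("category") for entry in register}
    for missing in sorted(TVM_13_THREATS - covered):
        findings.append(f"No register entry for threat category: {missing}")
    for entry in register:
        category = entry.get("category", "<unnamed>")
        score = entry.get("risk_score")
        if not isinstance(score, (int, float)) or not 1 <= score <= 9:
            findings.append(f"{category}: risk_score missing or outside expected range 1-9")
        if not entry.get("playbook"):
            findings.append(f"{category}: no response playbook assigned")
    return findings

if __name__ == "__main__":
    # Partial register used purely for illustration.
    sample = [
        {"category": "Data poisoning", "risk_score": 6, "playbook": "IRP-AI-01"},
        {"category": "Model theft", "risk_score": 12, "playbook": ""},
    ]
    for finding in audit_threat_register(sample):
        print("FINDING:", finding)
```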

Standards mappings

ISO 42001 · No Gap
27001: 8.8 Management of technical vulnerabilities
42001: A.6.2.6 AI system operation and monitoring
Addendum

N/A

EU AI Act · Partial Gap
Article 9 (2)
Addendum

Provide framework guidance for decision-making, an explicit framework or standard reference, and a structured decision-making process for threats; define and categorize threats such as adversarial attacks, supply chain compromises, and vulnerabilities in AI tooling.

NIST AI 600-1 · Partial Gap
GV-1.1-001
MP-1.1-001 to MP-3.2-003
MG-1.1-001 to MG-3.3-002
Addendum

NIST AI 600-1 does not reference the prioritization of threats, nor the use of an industry-recognized framework to guide threat decision-making and protection measures.

BSI AIC4 · No Gap
C4 SR-02
C5 OPS-18
Addendum

N/A

AI-CAIQ questions (1)

TVM-13.1

Is a risk-based method used for the prioritization and mitigation of threats, leveraging an industry-recognized framework to guide threat decision-making and protection measures?