AICM Atlas · CSA AI Controls Matrix
TVM · Threat & Vulnerability Management
TVM-04 · Cloud & AI Related

Detection Updates

Specification

Define, implement, and evaluate processes, procedures, and technical measures to update detection tools, threat signatures, and indicators of compromise on a weekly or more frequent basis.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Not applicable

Development

Guardrails

Evaluation

Evaluation, Validation/Red Teaming, Re-evaluation

Deployment

Orchestration, AI Services supply chain, AI applications

Delivery

Operations, Maintenance, Continuous monitoring, Continuous improvement

Retirement

Data deletion, Model disposal

Ownership / SSRM

PI

Shared across the supply chain

Shared control ownership refers to responsibilities and activities related to LLM security that are distributed across multiple stakeholders within the AI supply chain, including the Cloud Service Provider (CSP), Model Provider (MP), Orchestrated Service Provider (OSP), Application Provider (AP), and Customer (AIC). These controls require coordinated actions, communication, and governance across all involved parties to ensure their effectiveness.

Model

Owned by the Model Provider (MP)

The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Shared Orchestrated Service Provider-Application Provider (Shared OSP-AP)

The OSP and AP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Implementation guidelines

[All Actors]
1. Set up continuous monitoring systems to track vulnerabilities and patch statuses in real-time across the organization’s systems and applications.

2. Automate the generation of vulnerability reports to provide visibility into the status of identified vulnerabilities, remediation efforts, and patching activities.

3. Define thresholds for reporting vulnerability statuses to senior management and relevant stakeholders.

4. Ensure that vulnerability monitoring systems are integrated with incident response systems for quick action on high-risk vulnerabilities.

5. Implement automated reporting tools to ensure real-time visibility into vulnerability statuses and resolution efforts; regularly review vulnerability reports to ensure that emerging threats are being addressed and to track progress toward remediation goals.

6. Update detection engines, threat signatures, and indicators of compromise regularly or whenever a critical release is issued, to keep monitoring systems effective against emerging threats.
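The update and status-recording steps above (guidelines 1, 2, 5, and 6) can be sketched as a small scheduled job. This is a minimal illustration, not a prescribed implementation: the state-file path, JSON layout, and the seven-day threshold are assumptions chosen to match the weekly cadence TVM-04 requires.

```python
"""Sketch of an automated detection-content update job.

Assumptions (not part of the control): update status is persisted as a
small JSON state file that reporting tools can read, and "weekly or more
frequent" is enforced as a 7-day maximum age.
"""
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path

MAX_AGE = timedelta(days=7)  # TVM-04: weekly or more frequent updates


def update_needed(state_file: Path) -> bool:
    """Return True when the last successful update is older than MAX_AGE."""
    if not state_file.exists():
        return True  # never updated: signatures must be refreshed now
    state = json.loads(state_file.read_text())
    last = datetime.fromisoformat(state["last_update"])
    return datetime.now(timezone.utc) - last > MAX_AGE


def record_update(state_file: Path, source: str) -> dict:
    """Persist the update status so automated reports (guideline 2) can
    surface last-update times and sources to stakeholders."""
    status = {
        "source": source,
        "last_update": datetime.now(timezone.utc).isoformat(),
        "result": "success",
    }
    state_file.write_text(json.dumps(status))
    return status
```

In practice the actual signature download and deployment would sit between these two calls, and a failed update would be escalated through the incident-response integration described in guideline 4.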

Auditing guidelines

1. Verify that the Cloud Service Provider (CSP) has defined processes, procedures, and technical measures to update tools implemented to detect vulnerabilities, threat signatures, and indicators of compromise within the security perimeter at least weekly. Ensure that the processes are documented in detail, covering scope, objectives, and roles and responsibilities.

2. Verify that the above-mentioned processes, procedures, and technical measures are compliant with relevant regulatory requirements and industry best practices.

3. Confirm that the above-mentioned processes, procedures, and technical measures are concretely and appropriately applied by involved parties in their day-to-day operations.

4. Inspect whether the above-mentioned processes, procedures, and technical measures are monitored against sets of industry-standard efficacy and efficiency metrics / indicators.

5. Inspect whether the above-mentioned policies, procedures, and technical measures are periodically reviewed and updated by responsible parties.

6. Verify the CSP has a mature, automated process for continuously updating threat signatures and detection rules across its infrastructure, using a combination of commercial, open-source, and proprietary threat intelligence.
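An auditor checking guideline 1 could test the weekly cadence directly from the CSP's update records. The sketch below assumes those records can be exported as a list of ISO 8601 timestamps of successful updates; that log format is an assumption, not an interface the control prescribes.

```python
"""Sketch of an audit check for the weekly update cadence (auditing
guideline 1). Input: ISO 8601 timestamps of successful detection-content
updates, exported from the CSP's update records (an assumed format)."""
from datetime import datetime, timedelta


def max_update_gap(timestamps: list[str]) -> timedelta:
    """Largest gap between consecutive recorded updates."""
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    if len(times) < 2:
        return timedelta.max  # too few records to demonstrate a cadence
    return max(b - a for a, b in zip(times, times[1:]))


def meets_weekly_cadence(timestamps: list[str]) -> bool:
    """True when no two consecutive updates were more than 7 days apart."""
    return max_update_gap(timestamps) <= timedelta(days=7)
```

A complete check would also compare the most recent timestamp against the audit date, so that a feed that simply stopped updating is flagged as well.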

Standards mappings

ISO 42001 · Partial Gap
42001: A.6.2.6 AI System operation and monitoring
27001: A.8.7 Protection against malware
27001: A.8.8 Management of technical vulnerabilities
27001: A.8.16 Monitoring activities
27002: 5.7 Threat intelligence
27002: 5.17 / 5.18 Monitoring and event logging
Addendum

Define tools, scope, and update frequency
Ensure feeds are current and relevant
Expand A.6.2.6 to include cyber telemetry
Establish metrics and validation checks

EU AI Act · Partial Gap
Article 15 (5)
Addendum

Specify detection tools, and the frequency at which such measures must be updated.

NIST AI 600-1 · Partial Gap
MS-2.6-005
MS-2.7-009
Addendum

NIST AI 600-1 does not cover process evaluation, technical update mechanisms, or the frequency requirement.

BSI AIC4 · Partial Gap
C4 SR-01
C5 OPS-05
Addendum

AICM TVM-04 requires updates at least weekly, while C4 SR-01 suggests quarterly.

AI-CAIQ questions (1)

TVM-04.1

Are processes, procedures, and technical measures defined, implemented, and evaluated to update detection tools, threat signatures, and indicators of compromise weekly or more frequently?