Detection Updates
Specification
Define, implement, and evaluate processes, procedures, and technical measures to update detection tools, threat signatures, and indicators of compromise on a weekly or more frequent basis.
Threat coverage
Architectural relevance
Lifecycle
Not applicable
Guardrails
Evaluation, Validation/Red Teaming, Re-evaluation
Orchestration, AI Services supply chain, AI applications
Operations, Maintenance, Continuous monitoring, Continuous improvement
Data deletion, Model disposal
Ownership / SSRM
PI
Shared across the supply chain
Shared control ownership refers to responsibilities and activities related to LLM security that are distributed across multiple stakeholders within the AI supply chain, including the Cloud Service Provider (CSP), Model Provider (MP), Orchestrated Service Provider (OSP), Application Provider (AP), and Customer (AIC). These controls require coordinated actions, communication, and governance across all involved parties to ensure their effectiveness.
Model
Owned by the Model Provider (MP)
The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.
Orchestrated
Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)
The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.
Application
Shared Orchestrated Service Provider-Application Provider (Shared OSP-AP)
The OSP and AP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.
Implementation guidelines
Auditing guidelines
1. Verify that the Cloud Service Provider (CSP) has defined processes, procedures, and technical measures to update tools implemented to detect vulnerabilities, threat signatures, and indicators of compromise within the security perimeter at least weekly. Ensure that the processes are documented in detail, covering scope, objectives, and roles and responsibilities.
2. Verify that the above-mentioned processes, procedures, and technical measures comply with relevant regulatory requirements and industry best practices.
3. Confirm that the above-mentioned processes, procedures, and technical measures are concretely and appropriately applied by the involved parties in their day-to-day operations.
4. Inspect whether the above-mentioned processes, procedures, and technical measures are monitored against industry-standard efficacy and efficiency metrics and indicators.
5. Inspect whether the above-mentioned processes, procedures, and technical measures are periodically reviewed and updated by the responsible parties.
6. Verify that the CSP has a mature, automated process for continuously updating threat signatures and detection rules across its infrastructure, using a combination of commercial, open-source, and proprietary threat intelligence.
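The weekly-update requirement in steps 1 and 6 above lends itself to an automated evidence check. A minimal sketch, assuming update logs can be reduced to a per-feed "last successful update" timestamp (the feed names, dates, and seven-day threshold below are illustrative, not part of the control text):

```python
from datetime import datetime, timedelta, timezone

UPDATE_SLA = timedelta(days=7)  # "weekly or more frequent" per the control

def stale_feeds(last_updates, now=None):
    """Return the feeds whose last signature update exceeds the weekly SLA.

    last_updates: mapping of feed name -> datetime of last successful update.
    """
    now = now or datetime.now(timezone.utc)
    return sorted(
        name for name, updated in last_updates.items()
        if now - updated > UPDATE_SLA
    )

# Illustrative evidence an auditor might collect from update logs:
feeds = {
    "commercial-ioc-feed": datetime(2024, 6, 1, tzinfo=timezone.utc),
    "open-source-yara-rules": datetime(2024, 6, 9, tzinfo=timezone.utc),
    "proprietary-detections": datetime(2024, 6, 10, tzinfo=timezone.utc),
}
print(stale_feeds(feeds, now=datetime(2024, 6, 10, tzinfo=timezone.utc)))
# -> ['commercial-ioc-feed']
```

A check like this turns step 4's monitoring requirement into a repeatable pass/fail gate rather than a point-in-time document review.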
Standards mappings
42001: A.6.2.6 AI System operation and monitoring
27001: A.8.7 Protection against malware
27001: A.8.8 Management of technical vulnerabilities
27001: A.8.16 Monitoring activities
27002: 5.7 Threat intelligence
27002: 8.15 / 8.16 Logging and monitoring activities
Addendum
Define tools, scope, and update frequency
Ensure feeds are current and relevant
Expand A.6.2.6 to include cyber telemetry
Establish metrics and validation checks
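The "metrics and validation checks" item above could be as simple as tracking inter-update intervals per feed. A hedged sketch, assuming a chronologically sorted update history per feed (the metric names and sample timestamps are assumptions for illustration):

```python
from datetime import datetime, timezone
from statistics import mean

def update_metrics(update_times, sla_days=7):
    """Compute the mean gap (in days) between consecutive updates and
    whether every gap met the weekly SLA, from sorted update timestamps."""
    gaps = [
        (b - a).total_seconds() / 86400
        for a, b in zip(update_times, update_times[1:])
    ]
    return {
        "mean_gap_days": round(mean(gaps), 2),
        "sla_met": all(g <= sla_days for g in gaps),
    }

# Updates on June 1, 6, and 12 -> gaps of 5 and 6 days, both within SLA:
history = [datetime(2024, 6, d, tzinfo=timezone.utc) for d in (1, 6, 12)]
print(update_metrics(history))
# -> {'mean_gap_days': 5.5, 'sla_met': True}
```

Reporting a trailing metric like this alongside the raw logs gives the periodic reviews in the auditing guidelines something quantitative to evaluate.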
Article 15 (5)
Addendum
Specify detection tools, and specify the frequency for updating such measures.
MS-2.6-005 MS-2.7-009
Addendum
NIST AI 600-1 does not address process evaluation, technical update mechanisms, or a frequency requirement.
C4 SR-01 C5 OPS-05
Addendum
AI-CM TVM-04 requires updates at least weekly, while C4 SR-01 suggests quarterly updates.
AI-CAIQ questions (1)
Are processes, procedures, and technical measures defined, implemented, and evaluated to update detection tools, threat signatures, and indicators of compromise weekly or more frequently?