Incident Response
Specification
Define incident categories and severity levels for AI systems, and determine response procedures for each, including automated response where applicable.
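The specification can be made concrete as a machine-readable playbook. The following Python sketch is illustrative only and not part of the AICM control: the category names, severity levels, SLA values, and the isolate_endpoint/notify_oncall automation hooks are hypothetical stand-ins for whatever taxonomy and automated responses an organization actually defines.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Callable, Optional

# Severity levels ordered so comparisons such as "severity >= Severity.HIGH" work directly.
class Severity(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class ResponseProcedure:
    description: str   # human-readable runbook step
    sla_minutes: int   # response-time SLA for this category/severity pair
    automated_action: Optional[Callable[[dict], None]] = None  # optional automation hook

# Hypothetical automated actions; real ones would call SOAR or cloud-provider APIs.
def isolate_endpoint(incident: dict) -> None:
    print(f"[auto] isolating model endpoint for incident {incident['id']}")

def notify_oncall(incident: dict) -> None:
    print(f"[auto] paging on-call for incident {incident['id']}")

# Hypothetical AI incident categories mapped to documented procedures per severity.
PLAYBOOK = {
    "prompt_injection": {
        Severity.HIGH: ResponseProcedure(
            "Block the offending session and capture prompts/outputs for forensics",
            sla_minutes=60,
            automated_action=isolate_endpoint,
        ),
    },
    "training_data_leak": {
        Severity.CRITICAL: ResponseProcedure(
            "Disable the affected model endpoint and notify the privacy team",
            sla_minutes=15,
            automated_action=notify_oncall,
        ),
    },
    "model_output_abuse": {
        Severity.MEDIUM: ResponseProcedure(
            "Review guardrail logs and tune content filters",
            sla_minutes=240,
        ),
    },
}

def respond(incident: dict) -> ResponseProcedure:
    """Look up the documented procedure and trigger automation where one is defined."""
    procedure = PLAYBOOK[incident["category"]][incident["severity"]]
    if procedure.automated_action is not None:
        procedure.automated_action(incident)
    return procedure

if __name__ == "__main__":
    respond({"id": "INC-001", "category": "prompt_injection", "severity": Severity.HIGH})
```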
Threat coverage: Data storage, Guardrails, Re-evaluation
Architectural relevance: Orchestration, AI Services supply chain, AI applications
Lifecycle: Operations, Maintenance, Continuous monitoring, Continuous improvement, Data deletion
Ownership / SSRM
Platform/Infrastructure (PI): Shared across the supply chain
Shared control ownership refers to responsibilities and activities related to LLM security that are distributed across multiple stakeholders within the AI supply chain, including the Cloud Service Provider (CSP), Model Provider (MP), Orchestrated Service Provider (OSP), Application Provider (AP), and Customer (AIC). These controls require coordinated actions, communication, and governance across all involved parties to ensure their effectiveness.
Model: Owned by the Model Provider (MP)
The Model Provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.
Orchestrated: Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)
The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.
Application: Owned by the Application Provider (AP)
The Application Provider (AP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The AP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications for users/customers, the AP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The AP is accountable for carrying out due diligence on its upstream providers (e.g., MPs, Orchestrated Services) to verify that they implement the control as it relates to the service/product developed and offered by the AP. These providers build and offer end-user applications that leverage generative AI models for specific tasks such as content creation, chatbots, code generation, and enterprise automation. These applications are often delivered as software-as-a-service (SaaS) solutions. These providers focus on user interfaces, application logic, domain-specific functionality, and overall user experience rather than underlying model development. Examples: OpenAI (GPTs, Assistants), Zapier, CustomGPT, Microsoft Copilot (integrated into Office products), Jasper (AI-driven content generation), Notion AI (AI-enhanced productivity tools), Adobe Firefly (AI-generated media), and AI-powered customer service solutions like Amazon Rufus, as well as any organization that develops its AI-based application internally.
Implementation guidelines
Auditing guidelines
1. Verify that incident categories and severity levels are clearly documented (e.g., consider impacts to an instance, a region, and single- or multi-tenant environments).
2. Verify approaches for automated detection and response (e.g., AWS Security Hub, GuardDuty, and Lambda; Microsoft Defender for Cloud; Google Chronicle and Security Command Center); see the sketch after this list.
3. Confirm well-defined roles and escalation pathways during incident response.
4. Check documented incident response timelines and service level agreements (SLAs).
5. Ensure regular reviews of incident response activities and outcomes.
6. Verify that clear accountability is documented for incident handling.
7. Confirm that training on incident response processes is provided to relevant stakeholders.
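For item 2, one common AWS pattern is an EventBridge rule that forwards GuardDuty findings to a Lambda function, which triages by severity and escalates high-severity findings. The sketch below is a minimal illustration, not a prescribed implementation: the severity threshold and the SNS topic referenced by the SECURITY_TOPIC_ARN environment variable are assumptions.

```python
import json
import os

import boto3

sns = boto3.client("sns")

# GuardDuty scores findings from 0.1 to 8.9; this escalation threshold is an assumption.
HIGH_SEVERITY = 7.0

def handler(event, context):
    """Triage a GuardDuty finding delivered by an EventBridge rule."""
    finding = event["detail"]
    summary = {
        "id": finding.get("id"),
        "type": finding.get("type"),
        "severity": finding.get("severity", 0),
    }

    if summary["severity"] >= HIGH_SEVERITY:
        # Escalate to the on-call channel; the topic ARN is an assumed deployment parameter.
        sns.publish(
            TopicArn=os.environ["SECURITY_TOPIC_ARN"],
            Subject="High-severity GuardDuty finding",
            Message=json.dumps(summary, indent=2),
        )

    # Lower-severity findings are left for routine review (e.g., in Security Hub).
    return summary
```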
Standards mappings
42001: A.8.4, A.8.5, B.8.4, B.8.5; 27001: A.5.25; 27002: A.5.26
Addendum
ISO 42001 defines types of incidents that must be communicated, rather than incident categories and severity levels as the AICM does.
No Mapping
Addendum
Define incident categories and severity levels for AI systems and determine response procedures; specifically, establish a comprehensive framework for categorizing incidents and determining appropriate response procedures, as outlined in SEF-09.
MG-2.4-002, MG-2.4-003, MG-2.4-004
Addendum
NIST AI 600-1 does not reference incident categories and severity levels, only escalation procedures in general, and does not take automated response into consideration.
C4: RE-05; C5: SIM-01
Addendum
N/A
AI-CAIQ questions (1)
Are incident categories and severity levels defined for AI systems, and response procedures determined for each, including automated response where applicable?