AICM Atlas · CSA AI Controls Matrix
SEF · Security Incident Management, E-Discovery, & Cloud Forensics
SEF-01 · Cloud & AI Related

Security Incident Management Policy and Procedures

Specification

Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for Security Incident Management, E-Discovery, and Forensics. Review and update the policies and procedures at least annually or upon significant changes.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/service failure
Insecure supply chain
Insecure apps/plugins
Denial of service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation: Data storage
Development: Guardrails, Supply chain
Evaluation: Re-evaluation
Deployment: AI applications, Orchestration
Delivery: Operations, Maintenance, Continuous monitoring, Continuous improvement
Retirement: Archiving, Data deletion, Model disposal

Ownership / SSRM

PI: Shared across the supply chain

Shared control ownership refers to responsibilities and activities related to LLM security that are distributed across multiple stakeholders within the AI supply chain, including the Cloud Service Provider (CSP), Model Provider (MP), Orchestrated Service Provider (OSP), Application Provider (AP), and Customer (AIC). These controls require coordinated actions, communication, and governance across all involved parties to ensure their effectiveness.

Model: Owned by the Model Provider (MP)

The Model Provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated: Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application: Owned by the Application Provider (AP)

The Application Provider (AP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The AP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications for users/customers, the AP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The AP is accountable for carrying out due diligence on its upstream providers (e.g., MPs, Orchestrated Services) to verify that they implement the control as it relates to the service/product developed and offered by the AP.

Application Providers build and offer end-user applications that leverage generative AI models for specific tasks such as content creation, chatbots, code generation, and enterprise automation. These applications are often delivered as software-as-a-service (SaaS) solutions. Application Providers focus on user interfaces, application logic, domain-specific functionality, and overall user experience rather than underlying model development. Examples: OpenAI (GPTs, Assistants), Zapier, CustomGPT, Microsoft Copilot (integrated into Office products), Jasper (AI-driven content generation), Notion AI (AI-enhanced productivity tools), Adobe Firefly (AI-generated media), AI-powered customer service solutions like Amazon Rufus, as well as any organization that develops its AI-based application internally.

Implementation guidelines

[All Actors]
1. Establish a unified AI Security Policy Framework encompassing data security, access control, incident response, change management, logging and monitoring, vulnerability management, and responsible AI use.

2. Align AI-related policies with industry frameworks such as ISO/IEC 27001, NIST SP 800-53, CSA CCM, and EU AI Act governance requirements.

3. Maintain a centralized and version-controlled repository of security policies applicable to AI lifecycle stages (e.g., data ingestion, model training, model inference, decommissioning); a minimal register sketch follows this list.

4. Conduct annual reviews of policies to ensure continued relevance with emerging AI risks, technological changes, and regulatory updates.

5. Ensure policy accessibility and enforce policy awareness across all relevant internal teams, including development, operations, compliance, and security.
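
Guidelines 3 and 4 are easier to evidence when each policy in the repository carries machine-readable metadata: an accountable owner, the lifecycle stages it covers, and approval and review dates. The sketch below shows one way to record such an entry in Python; the PolicyRecord schema, field names, and values are hypothetical illustrations, not a format mandated by the AICM.

# Minimal sketch of a policy-register entry for a version-controlled
# repository (implementation guideline 3). The PolicyRecord schema and
# all field names are hypothetical, not an AICM-mandated format.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PolicyRecord:
    policy_id: str                   # internal identifier, e.g., mapped to SEF-01
    title: str
    version: str
    owner: str                       # accountable role rather than an individual
    lifecycle_stages: list[str] = field(default_factory=list)
    approved_on: date | None = None
    last_review: date | None = None  # drives the annual-review check

# Illustrative entry for the policy this control requires.
incident_policy = PolicyRecord(
    policy_id="POL-SEF-01",
    title="Security Incident Management, E-Discovery & Forensics Policy",
    version="2.1",
    owner="CISO",
    lifecycle_stages=["data ingestion", "model training",
                      "model inference", "decommissioning"],
    approved_on=date(2024, 3, 1),
    last_review=date(2025, 2, 20),
)

Keeping the register in the same version-control system as the policy documents themselves provides an audit trail of approvals and reviews at no extra cost.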

Auditing guidelines

1. Verify the Cloud Service Provider (CSP) has a documented and approved security incident management policy, aligned with recognized industry standards such as NIST SP 800-61, NIST SP 800-201, or ISO/IEC 27035.

2. Ensure the Cloud Service Provider (CSP) has procedures for E-Discovery and Forensics, covering deployment, operations, and cloud consumers.

3. Confirm that roles and responsibilities for incident detection, reporting, escalation, and resolution are clearly defined and documented.

4. Check that procedures cover the full incident lifecycle, including initial reporting, triage, escalation criteria, containment, eradication, recovery, and post-incident review.

5. Ensure that the policy and procedures are communicated effectively to all internal and external stakeholders, including third-party service providers, where applicable.

6. Verify that the incident management policy and related procedures are reviewed and updated periodically, or following major incidents, organizational changes, or regulatory updates (see the example check after this list).

7. Confirm that regular training is provided to incident response teams, with materials updated based on emerging threats and lessons learned.

8. Validate that incident response drills or tabletop exercises are conducted regularly, with documentation of scenarios, participants, outcomes, and improvement actions.
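
One way to evidence guideline 6 (and AI-CAIQ question SEF-01.2) is an automated check over a policy register like the one sketched under the implementation guidelines. The function below flags records whose last documented review exceeds the annual cadence; it is an illustrative check against the same hypothetical schema, not a prescribed audit procedure.

# Hypothetical evidence check for auditing guideline 6: flag policies whose
# last documented review is older than the "at least annually" cadence.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)  # "at least annually" per SEF-01

def overdue_policies(register, today=None):
    """Return (policy_id, days_overdue) for records past the review cadence."""
    today = today or date.today()
    findings = []
    for record in register:
        if record.last_review is None:
            findings.append((record.policy_id, None))  # no documented review
            continue
        age = today - record.last_review
        if age > REVIEW_INTERVAL:
            findings.append((record.policy_id, (age - REVIEW_INTERVAL).days))
    return findings

# Usage with the register entry sketched above:
#   overdue_policies([incident_policy])  # empty list if reviewed on time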

Standards mappings

ISO 42001 · No Gap
42001: 5.2
42001: A.2.2
42001: A.2.3
42001: A.2.4
42001: B.2.1
42001: B.2.2
42001: B.2.3
42001: B.2.4
42001: B.6.2.6 AI system operation and monitoring
42001: B.8.4 Communication of incidents
42001: A.8.4 Communication of incidents
27001: 5.1
27001: 5.2
27001: 7.3
27001: 7.4
27001: 7.5
27001: 9.1
27001: 9.3
27002: 5
Addendum

N/A

EU AI Act · Partial Gap
Article 15
Article 17
Article 26
Article 72
Article 73
Addendum

Approval, communication, or scheduled review of security-related procedures; requirements to establish, operate, and maintain formal response programs; support for legal discovery, investigation workflows, or digital evidence integrity; identification of policy ownership, designation of roles, or tying of responsibilities to leadership or specific stakeholders.

NIST AI 600-1 · Partial Gap
GV-1.5-002
MG-2.3-001
MG-2.4-002
MG-2.4-003
Addendum

NIST AI 600-1 does not specifically cover the SEF-01 topics of e-discovery and forensics.

BSI AIC4 · No Gap
C4 PC-02
C5 SIM-01
C5 OPS-11
Addendum

N/A

AI-CAIQ questions (2)

SEF-01.1

Are policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained for Security Incident Management, E-Discovery, and Forensics?

SEF-01.2

Are policies and procedures for Security Incident Management, E-Discovery, and Forensics reviewed and updated at least annually or upon significant changes?