AICM Atlas · CSA AI Controls Matrix
GRC · Governance, Risk and Compliance
GRC-10 · AI-Specific

AI Impact Assessment

Specification

Establish, document, and communicate to all relevant stakeholders an AI Impact Assessment process and its criteria to regularly evaluate the ethical, societal, operational, legal, and security impacts of the AI system throughout its lifecycle.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Team and expertise

Development

Guardrails

Evaluation

Evaluation, Validation/Red Teaming

Deployment

Orchestration, AI Services supply chain

Delivery

Operations, Maintenance, Continuous monitoring, Continuous improvement

Retirement

Archiving, Data deletion, Model disposal

Ownership / SSRM

Physical infrastructure (PI)

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM)/GenAI technologies, services, or products they consume.

Model

Owned by the Model Provider (MP)

The Model Provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Owned by the Orchestrated Service Provider (OSP)

The Orchestrated Service Provider (OSP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The OSP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications on the users/customers, the OSP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The OSP is accountable for ensuring that its upstream providers (e.g., MPs) implement the control as it relates to the service/product developed and offered by the OSP. This refers to entities that create the technical building blocks and management tools that enable AI implementation. This can include platforms, frameworks, and tools that facilitate the integration, deployment, and management of AI models within enterprise workflows. These providers focus on model orchestration and offer services like API access, automated scaling, prompt management, workflow automation, monitoring, and governance rather than end-user functionality or raw infrastructure. They help businesses implement AI in a structured and efficient manner. Examples: AWS, Azure, GCP, OpenAI, Anthropic, LangChain (for AI workflow orchestration), Anyscale (Ray for distributed AI workloads), Databricks (MLflow), IBM Watson Orchestrate, and developer platforms like Google AI Studio.

Application

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM)/GenAI technologies, services, or products they consume.

Implementation guidelines

[All Actors]
1. Conduct a comprehensive impact assessment before deploying AI systems that process sensitive data, affect individuals, or are used in high-risk contexts, for example using the guidance provided in ISO/IEC 42005:2025 on how to conduct AI Impact Assessments.

2. Define the scope of the assessment to include intended use, affected users or communities, risk scenarios, and mitigations.

3. Document potential harms, legal/regulatory implications, fairness concerns, and system limitations.

4. Establish a review and sign-off process involving legal, compliance, risk, and technical stakeholders.

5. Update assessments periodically or when the system’s purpose, data sources, or deployment context changes; a minimal sketch of a structured assessment record capturing items 2 through 5 follows below.
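The guidelines above are process requirements rather than code, but it can help to keep each assessment as a structured, queryable record. The minimal Python sketch below is one illustrative way to do that, capturing the scope, documented harms, sign-offs, and review cadence described in items 2 through 5; every class name, dimension, scoring scale, and the 365-day cadence is an assumption made for illustration, not something prescribed by this control or by ISO/IEC 42005:2025.

# Illustrative sketch only: one possible structured record for an AI Impact
# Assessment, loosely following implementation guidelines 2-5 above.
# All field names, enums, and thresholds are assumptions; they are not
# defined by the AICM control or ISO/IEC 42005:2025.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Dimension(Enum):
    ETHICAL = "ethical"
    SOCIETAL = "societal"
    OPERATIONAL = "operational"
    LEGAL = "legal"
    SECURITY = "security"


@dataclass
class Harm:
    description: str
    dimension: Dimension
    likelihood: int   # e.g., 1 (rare) to 5 (almost certain)
    severity: int     # e.g., 1 (negligible) to 5 (critical)
    mitigation: str

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.severity


@dataclass
class ImpactAssessment:
    system_name: str
    intended_use: str
    affected_groups: list[str]                                # affected users or communities
    harms: list[Harm] = field(default_factory=list)           # documented harms and mitigations
    sign_offs: dict[str, date] = field(default_factory=dict)  # role -> approval date
    last_reviewed: date = field(default_factory=date.today)
    review_interval_days: int = 365                           # cadence set by company policy

    def is_due_for_review(self, today: date) -> bool:
        return (today - self.last_reviewed).days >= self.review_interval_days

    def uncovered_dimensions(self) -> set[Dimension]:
        """Dimensions with no documented harm/mitigation entry."""
        return set(Dimension) - {h.dimension for h in self.harms}

A record along these lines makes the auditing guidelines below easier to evidence, because review dates, sign-offs, and dimension coverage are explicit fields rather than prose buried in a document.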

Auditing guidelines

1. Verify that an approved AI Impact Assessment process exists. The process should define how the methodology and criteria evaluate AI systems on a regular basis, as per the company's policy.

2. Verify that the evaluation process is integrated with the AI system lifecycle (e.g., design, development, deployment, and monitoring phases).

3. Verify that evaluation criteria and a scoring mechanism exist across all dimensions: ethical, societal, legal, operational, and security (one way such checks could be automated is sketched after this list).

4. Assess how the impact assessment methodology evaluates differential impacts across various customer segments or usage patterns within the multi-tenant service environment.

5. Verify the process for identifying stakeholders (both internal and external), how those stakeholders are engaged and informed about the impact assessment process, evaluation procedures, and impact/risk scores, and, most importantly, how their feedback is collected and incorporated.
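As a companion to the record sketch under the implementation guidelines, the following hedged Python sketch shows how auditing guidelines 1 and 3, plus the sign-off expectation from implementation guideline 4, could be turned into repeatable checks against the hypothetical ImpactAssessment record defined above. The required sign-off roles and the wording of the findings are assumptions, not AICM requirements.

# Illustrative audit helper, continuing the hypothetical ImpactAssessment
# sketch above; the required roles and findings wording are assumptions,
# not AICM requirements.
from datetime import date

REQUIRED_SIGN_OFF_ROLES = {"legal", "compliance", "risk", "technical"}


def audit_assessment(a: "ImpactAssessment", today: date) -> list[str]:
    """Return human-readable audit findings (empty list if no issues found)."""
    findings: list[str] = []

    # Auditing guideline 1: the assessment is re-evaluated on a regular cadence.
    if a.is_due_for_review(today):
        findings.append(
            f"{a.system_name}: review overdue (last reviewed {a.last_reviewed.isoformat()})"
        )

    # Implementation guideline 4: sign-off by legal, compliance, risk, and technical stakeholders.
    missing_roles = REQUIRED_SIGN_OFF_ROLES - set(a.sign_offs)
    if missing_roles:
        findings.append(
            f"{a.system_name}: missing sign-offs from {', '.join(sorted(missing_roles))}"
        )

    # Auditing guideline 3: criteria cover the ethical, societal, legal, operational, and security dimensions.
    missing_dims = a.uncovered_dimensions()
    if missing_dims:
        findings.append(
            f"{a.system_name}: no documented impact for {', '.join(sorted(d.value for d in missing_dims))}"
        )

    return findings

In practice such checks would run on a schedule or inside a GRC platform; the point is simply that each auditing guideline can be mapped to concrete evidence drawn from the documented assessment record.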

Standards mappings

ISO 42001 · No Gap
42001: B.5.2 (AI system impact assessment process)
42001: B.5.4 (Assessing AI system impact on individuals or groups of individuals)
Addendum

N/A

EU AI Act · No Gap
Article 9
Article 27
Article 55
Addendum

N/A

NIST AI 600-1 · No Gap
MP-5.1-002
MP-5.1-004
MS-1.3-002
MS-3.3-001
Addendum

N/A

BSI AIC4 · No Gap
BC-03
PF-01
PF-02
PF-05
Addendum

N/A

AI-CAIQ questions (1)

GRC-10.1

Is an AI Impact Assessment process and its criteria to regularly evaluate the ethical, societal, operational, legal, and security impacts of the AI system throughout its lifecycle, established, documented, and communicated to all relevant stakeholders?