AICM Atlas · CSA AI Controls Matrix
GRC · Governance, Risk and Compliance
GRC-13 · AI-Specific

Explainability Requirement

Specification

Establish, document, and communicate the degree of explainability needed for the AI Services.
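
To make the requirement concrete, the sketch below shows one way an organization might establish and document the required degree of explainability as a machine-readable record. It is an illustration only: the level names, fields, and example values are hypothetical and are not defined by the AICM.

    # Hypothetical sketch: recording the required degree of explainability
    # per AI service. All field names, levels, and values are illustrative.
    from dataclasses import dataclass, field
    from enum import Enum

    class ExplainabilityLevel(Enum):
        NONE = "none"                       # no explanation required (low-risk use)
        GLOBAL = "global"                   # aggregate behaviour, e.g. feature importance
        LOCAL = "local"                     # per-decision attributions, e.g. SHAP, LIME
        COUNTERFACTUAL = "counterfactual"   # what input change would flip the outcome

    @dataclass
    class ExplainabilityRequirement:
        service: str
        level: ExplainabilityLevel
        audiences: list[str]                # e.g. end users, regulators, developers
        regulatory_basis: list[str] = field(default_factory=list)
        methods: list[str] = field(default_factory=list)
        known_limitations: str = ""

    # Example record for a hypothetical high-risk service.
    requirement = ExplainabilityRequirement(
        service="credit-scoring-api",
        level=ExplainabilityLevel.LOCAL,
        audiences=["end users", "regulators"],
        regulatory_basis=["EU AI Act Article 13"],
        methods=["SHAP"],
        known_limitations="Attributions describe the model, not real-world causes.",
    )
    print(requirement)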

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Data collection, Data curation, Data storage, Team and expertise

Development

Design, Training, Guardrails, Supply Chain

Evaluation

Evaluation, Validation/Red Teaming, Re-evaluation

Deployment

Orchestration, AI Services supply chain, AI applications

Delivery

Operations, Maintenance, Continuous monitoring, Continuous improvement

Retirement

Archiving, Data deletion, Model disposal

Ownership / SSRM

Physical Infrastructure (PI)

Owned by the Cloud Service Provider (CSP)

The Cloud Service Provider (CSP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with cloud computing (processing, storage, and networking) technologies in the context of the services or products they develop and offer. The CSP is responsible and accountable for implementing the control within its own infrastructure/environment. The CSP is responsible for enabling the customer and/or upstream partner to implement/configure the control within their risk management approach. The CSP is accountable for ensuring that its providers upstream implement the control related to the service/product developed and offered by the CSP.

Model

Owned by the Model Provider (MP)

The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Owned by the Orchestrated Service Provider (OSP)

The Orchestrated Service Provider (OSP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The OSP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications for the users/customers, the OSP is responsible for enabling the customer and/or upstream partner to implement/configure the control within their risk management approach. The OSP is accountable for ensuring that its providers upstream (e.g., MPs) implement the control as it relates to the service/product developed and offered by the OSP. This refers to entities that create the technical building blocks and management tools that enable AI implementation. This can include platforms, frameworks, and tools that facilitate the integration, deployment, and management of AI models within enterprise workflows. These providers focus on model orchestration and offer services like API access, automated scaling, prompt management, workflow automation, monitoring, and governance rather than end-user functionality or raw infrastructure. They help businesses implement AI in a structured and efficient manner. Examples: AWS, Azure, GCP, OpenAI, Anthropic, LangChain (for AI workflow orchestration), Anyscale (Ray for distributed AI workloads), Databricks (MLflow), IBM Watson Orchestrate, and developer platforms like Google AI Studio.

Application

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they consume.

Implementation guidelines

[All Actors]
1. Define explainability expectations for AI systems based on use case criticality, audience, and regulatory context.

2. Use appropriate methods (e.g., LIME, SHAP, saliency maps, rule extraction, counterfactuals, or causal-based methods) to generate model explanations; see the sketch after this list.

3. Document the explainability strategy, limitations, and interpretability metrics.

4. Ensure explanations are accessible to intended audiences (e.g., end users, regulators, developers).

5. Evaluate explainability outputs for clarity, consistency, and fairness implications.
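
The following is a minimal sketch of guideline 2, generating per-prediction explanations with SHAP. It assumes the shap and scikit-learn packages; the dataset and model are placeholders for a real AI service, and any of the other listed methods (LIME, saliency maps, counterfactuals) could be substituted depending on the documented requirement.

    # Minimal sketch: per-prediction feature attributions with SHAP.
    # Dataset and model are placeholders for a real AI service.
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # TreeExplainer computes exact SHAP values for tree ensembles.
    explainer = shap.TreeExplainer(model)
    explanation = explainer(X.iloc[:5])  # explain the first five predictions

    # SHAP values plus the base value sum to the model's raw output, giving an
    # additive per-feature attribution that can be logged as audit evidence.
    for i in range(explanation.values.shape[0]):
        top = sorted(zip(X.columns, explanation.values[i]),
                     key=lambda pair: abs(pair[1]), reverse=True)[:3]
        print(f"sample {i}: top drivers -> {[(name, round(val, 3)) for name, val in top]}")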

Auditing guidelines

1. Verify that the CSP has clearly defined explainability requirements that align with applicable compliance, regulatory, or ethical obligations.

2. Verify that the CSP prioritizes explainability based on risk levels and use cases, ensuring alignment with customer requirements and potential consequences of decision errors.

3. Verify that the CSP maintains consistent and transparent communication with all stakeholders—including customers, integrated service providers, and internal teams—regarding explainability standards and responsibilities.

4. Verify that the CSP has a documented framework for selecting, integrating, or substituting AI components based on explainability factors outlined in its requirements.

5. Verify that the CSP ensures transparency, enabling customers to understand explainability expectations and how decisions are made across the full AI pipeline.

Standards mappings

ISO 42001 · No Gap
42001: B.8.2 (System documentation and information for users)
42001: B.9.3 (Objectives for responsible use of AI system)
Addendum

N/A

EU AI Act · Partial Gap
Article 13
Article 52
Addendum

Degree of explainability needed and specific documentation requirements for explainability levels.

NIST AI 600-1 · No Gap
MP-2.3-003
MS-4.2-003
Addendum

N/A

BSI AIC4 · No Gap
EX-01
Addendum

N/A

AI-CAIQ questions (1)

GRC-13.1

Is the degree of explainability required for the AI Services established, documented, and communicated?