AICM Atlas · CSA AI Controls Matrix
GRC · Governance, Risk and Compliance
GRC-04 · Cloud & AI Related

Policy Exception Process

Specification

Establish and follow an approved exception process as mandated by the governance program whenever a deviation from an established policy occurs.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Team and expertise

Development

Guardrails

Evaluation

Evaluation

Deployment

Orchestration, AI Services supply chain

Delivery

Operations, Maintenance, Continuous monitoring, Continuous improvement

Retirement

Archiving, Data deletion

Ownership / SSRM

PI

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Model

Owned by the Model Provider (MP)

The Model Provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Owned by the Orchestrated Service Provider (OSP)

The Orchestrated Service Provider (OSP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The OSP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications for users/customers, the OSP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The OSP is accountable for ensuring that its upstream providers (e.g., MPs) implement the control as it relates to the services/products the OSP develops and offers. OSPs are entities that create the technical building blocks and management tools that enable AI implementation. This can include platforms, frameworks, and tools that facilitate the integration, deployment, and management of AI models within enterprise workflows. These providers focus on model orchestration and offer services like API access, automated scaling, prompt management, workflow automation, monitoring, and governance rather than end-user functionality or raw infrastructure. They help businesses implement AI in a structured and efficient manner. Examples: AWS, Azure, GCP, OpenAI, Anthropic, LangChain (for AI workflow orchestration), Anyscale (Ray for distributed AI workloads), Databricks (MLflow), IBM Watson Orchestrate, and developer platforms like Google AI Studio.

Application

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM)/GenAI technology services or products they consume.

Implementation guidelines

[All Actors]
1. Operate a formal, documented exception workflow: all deviations from policy must be raised, risk-assessed, approved at the appropriate level, recorded, and time-boxed.

2. Define clear eligibility criteria for when an exception can be requested (e.g., emergency patch, experimental feature, urgent business need).

3. Perform a risk assessment and specify mitigating controls; confirm the exception will not violate security, privacy, or compliance requirements.

4. Log the rationale, approvals, expiry dates, and compensating controls in a system accessible for audit and review (a minimal record sketch follows this list).

5. Notify all affected stakeholders (security, legal, business owners, external partners if relevant) and coordinate any downstream impacts.

6. Review exceptions on a schedule or when context changes; retire or renew them to maintain alignment with policy, regulation, and organizational risk appetite.
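As a minimal sketch of what guidelines 1, 4, and 6 imply, the Python fragment below models a time-boxed exception record and a review check. Every name here (PolicyException, pending_review, the field set) is an illustrative assumption rather than an AICM-mandated schema; in practice the register would live in a GRC tool or ticketing system.

```python
# Minimal sketch of a time-boxed policy-exception record (guidelines 1, 4, 6).
# All names and fields are illustrative assumptions, not a mandated schema.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class PolicyException:
    exception_id: str
    policy_ref: str               # policy the deviation departs from
    rationale: str                # business/technical justification (guideline 2)
    risk_summary: str             # outcome of the risk assessment (guideline 3)
    compensating_controls: list   # mitigations applied while the exception is open
    approver: str                 # approval at the appropriate level (guideline 1)
    granted: date
    expires: date                 # every exception is time-boxed

    def is_expired(self, today=None):
        return (today or date.today()) >= self.expires

def pending_review(register, horizon_days=30):
    """Exceptions expiring within the review horizon, so they can be
    retired or formally renewed (guideline 6)."""
    cutoff = date.today() + timedelta(days=horizon_days)
    return [e for e in register if e.expires <= cutoff]

# Example: record a time-boxed exception for an emergency patch (guideline 2).
register = [PolicyException(
    exception_id="EXC-2025-001",
    policy_ref="SEC-PATCH-01",
    rationale="Emergency patch applied outside the change window",
    risk_summary="Low residual risk; rollback plan tested",
    compensating_controls=["enhanced monitoring", "post-patch review"],
    approver="CISO delegate",
    granted=date(2025, 1, 10),
    expires=date(2025, 2, 10),
)]
print([e.exception_id for e in pending_review(register, horizon_days=60)])
```

The deliberate design point is that expiry is a required field: an exception without an end date is a permanent, unreviewed policy change, which is exactly what guideline 6 exists to prevent.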

Auditing guidelines

1. Policy Examination
a. Verify that a formal, documented exception process exists for deviations from the organization's policies related to AI infrastructure, platform configurations, or customer-facing AI services (e.g., model hosting, API rate limits, data handling constraints).
b. Confirm that the exception process is incorporated into or referenced by the organization's broader governance framework, including internal compliance programs or risk management procedures applicable to AI services.

2. Policy Assessment
a. Verify that the exception process includes documented approval workflows, justification requirements, expiration timelines, and conditions under which exceptions must be renewed or reviewed.
b. Confirm that the exception process covers deviations from the organization's internal operational policies, including scenarios such as bypassing encryption enforcement, extending AI model access beyond standard SLAs, or overriding resource usage limits.
c. Assess whether approved exceptions are communicated to relevant internal teams (e.g., service owners, platform compliance) and documented in a central tracking system for auditability.

3. Review Process Evaluation
a. Determine whether the organization has implemented controls to prevent unauthorized policy deviations (e.g., configuration checks, exception flags in orchestration systems); a minimal automated check is sketched after this list.
b. Confirm that an appropriate governance body (e.g., platform risk team, service compliance board) periodically reviews approved exceptions and monitors adherence to the exception process.

4. Implementation Validation
a. Review a sample of approved exceptions related to CSP-operated AI infrastructure or services to validate that they meet approval, justification, and expiration requirements.
b. Examine recent changes to CSP-managed AI systems (e.g., infrastructure scaling for specific clients, API access changes, or emergency patches) and confirm that appropriate exceptions were documented and approved when deviations from policy occurred.
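Step 3a above mentions automated configuration checks. The sketch below shows one way such a check could reconcile deviations found by a configuration scan against the central exception register, flagging anything without an approved, unexpired exception. The resource names, policy identifiers, and data shapes are hypothetical, for illustration only.

```python
# Hypothetical check for auditing step 3a: flag configuration deviations
# that lack an approved, unexpired entry in the exception register.
from datetime import date

# Deviations found by a configuration scan: (resource, policy deviated from).
detected_deviations = [
    ("ai-gateway-prod", "SEC-ENC-02"),   # encryption enforcement bypassed
    ("model-api-eu",    "OPS-RATE-01"),  # rate limit raised beyond standard
]

# Central exception register: (resource, policy) -> approved expiry date.
exception_register = {
    ("ai-gateway-prod", "SEC-ENC-02"): date(2025, 6, 1),
}

def unauthorized(deviations, register, today=None):
    """Return deviations with no matching exception, or an expired one."""
    today = today or date.today()
    findings = []
    for dev in deviations:
        expiry = register.get(dev)
        if expiry is None:
            findings.append((dev, "no approved exception"))
        elif expiry < today:
            findings.append((dev, f"exception expired {expiry.isoformat()}"))
    return findings

# Fixed "today" keeps the example deterministic: it flags model-api-eu
# (no approved exception) and ai-gateway-prod (exception expired).
for (resource, policy), reason in unauthorized(
        detected_deviations, exception_register, today=date(2025, 7, 1)):
    print(f"FLAG {resource}: deviation from {policy} ({reason})")
```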

From CCM:
1. Examine the policy and/or procedures to determine if the policy exception process has been established.
2. Identify and confirm that exceptions to policies are tracked, authorized, and evidenced.
3. Confirm a review of policy exceptions takes place on a periodic basis by appropriate management.

Standards mappings

ISO 42001 · No Gap
42001: A.2.3 (Alignment with other organizational policies)
42001: B.2.3 (Alignment with other organizational policies)
Addendum

N/A

EU AI Act · Partial Gap
Article 9
Article 17 (4)
Article 25
Article 28
Addendum

Establish and follow an approved exception process that outlines the formal approval and documentation procedures for exceptions and specific governance requirements for policy deviations.

NIST AI 600-1 · No Gap
GV-1.3-007
GV-4.1-003
Addendum

N/A

BSI AIC4 · No Gap
SP-03
Addendum

N/A

AI-CAIQ questions (1)

GRC-04.1

Is an approved exception process mandated by the governance program established and followed whenever a deviation from an established policy occurs?