Policy Exception Process
Specification
Establish and follow an approved exception process as mandated by the governance program whenever a deviation from an established policy occurs.
Threat coverage
Architectural relevance
Lifecycle
Team and expertise
Guardrails
Evaluation
Orchestration, AI Services supply chain
Operations, Maintenance, Continuous monitoring, Continuous improvement
Archiving, Data deletion
Ownership / SSRM
PI
Shared Cloud Service Provider-Model Provider (Shared CSP-MP)
The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.
Model
Owned by the Model Provider (MP)
The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.
Orchestrated
Owned by the Orchestrated Service Provider (OSP)
The Orchestrated Service Provider (OSP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The OSP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications for users/customers, the OSP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The OSP is accountable for ensuring that its upstream providers (e.g., MPs) implement the control as it relates to the services/products developed and offered by the OSP. This refers to entities that create the technical building blocks and management tools that enable AI implementation. This can include platforms, frameworks, and tools that facilitate the integration, deployment, and management of AI models within enterprise workflows. These providers focus on model orchestration and offer services like API access, automated scaling, prompt management, workflow automation, monitoring, and governance rather than end-user functionality or raw infrastructure. They help businesses implement AI in a structured and efficient manner. Examples: AWS, Azure, GCP, OpenAI, Anthropic, LangChain (for AI workflow orchestration), Anyscale (Ray for distributed AI workloads), Databricks (MLflow), IBM Watson Orchestrate, and developer platforms like Google AI Studio.
Application
Owned by the Customer (AIC)
The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they consume.
Implementation guidelines
Auditing guidelines
1. Policy Examination
   a. Verify that a formal, documented exception process exists for deviations from the organization's policies related to AI infrastructure, platform configurations, or customer-facing AI services (e.g., model hosting, API rate limits, data handling constraints).
   b. Confirm that the exception process is incorporated into or referenced by the organization's broader governance framework, including internal compliance programs or risk management procedures applicable to AI services.
2. Policy Assessment
   a. Verify that the exception process includes documented approval workflows, justification requirements, expiration timelines, and conditions under which exceptions must be renewed or reviewed.
   b. Confirm that the exception process covers deviations from the organization's internal operational policies, including scenarios such as bypassing encryption enforcement, extending AI model access beyond standard SLAs, or overriding resource usage limits.
   c. Assess whether approved exceptions are communicated to relevant internal teams (e.g., service owners, platform compliance) and documented in a central tracking system for auditability.
3. Review Process Evaluation
   a. Determine whether the organization has implemented controls to prevent unauthorized policy deviations (e.g., configuration checks, exception flags in orchestration systems).
   b. Confirm that an appropriate governance body (e.g., platform risk team, service compliance board) periodically reviews approved exceptions and monitors adherence to the exception process.
4. Implementation Validation
   a. Review a sample of approved exceptions related to CSP-operated AI infrastructure or services to validate that they meet approval, justification, and expiration requirements.
   b. Examine recent changes to CSP-managed AI systems (e.g., infrastructure scaling for specific clients, API access changes, or emergency patches) and confirm that appropriate exceptions were documented and approved when deviations from policy occurred.

From CCM:
1. Examine the policy and/or procedures to determine if the policy exception process has been established.
2. Identify and confirm that exceptions to policies are tracked, authorized, and evidenced.
3. Confirm a review of policy exceptions takes place on a periodic basis by appropriate management.
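The tracked, authorized, and evidenced requirements above can be sketched as a minimal exception record with expiration and periodic-review checks. This is an illustrative assumption only: the `PolicyException` schema, field names, and 90-day review interval are hypothetical, not prescribed by this control or any specific tracking system.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional, Tuple

@dataclass
class PolicyException:
    """Hypothetical record for one approved policy deviation."""
    exception_id: str
    policy_ref: str                        # policy the deviation applies to
    justification: str                     # documented rationale (required for approval)
    approved_by: str                       # accountable approver, e.g., platform risk team
    approved_on: date
    expires_on: date                       # every exception carries an expiration timeline
    last_reviewed_on: Optional[date] = None

    def is_expired(self, today: date) -> bool:
        return today > self.expires_on

    def needs_review(self, today: date, review_interval_days: int = 90) -> bool:
        # Periodic review by appropriate management (CCM step 3);
        # the interval is an illustrative assumption.
        anchor = self.last_reviewed_on or self.approved_on
        return (today - anchor).days >= review_interval_days

def audit_exceptions(exceptions: List[PolicyException],
                     today: date) -> List[Tuple[str, str]]:
    """Flag exceptions that are expired or overdue for periodic review."""
    findings = []
    for ex in exceptions:
        if ex.is_expired(today):
            findings.append((ex.exception_id, "expired"))
        elif ex.needs_review(today):
            findings.append((ex.exception_id, "review overdue"))
    return findings
```

An auditor sampling exceptions (Implementation Validation, step a) could run such a check over the central tracking system's export to surface records that fail the approval, justification, or expiration requirements.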
Standards mappings
42001: A.2.3 (Alignment with other organizational policies)
42001: B.2.3 (Alignment with other organizational policies)
Addendum
N/A
Article 9, Article 17(4), Article 25, Article 28
Addendum
Establish and follow an approved exception process that outlines the formal approval and documentation procedures for exceptions and specific governance requirements for policy deviations.
GV-1.3-007, GV-4.1-003
Addendum
N/A
SP-03
Addendum
N/A
AI-CAIQ questions (1)
Is an approved exception process mandated by the governance program established and followed whenever a deviation from an established policy occurs?