AICM Atlas · CSA AI Controls Matrix
CCC · Change Control and Configuration Management
CCC-04 · Cloud & AI Related

Change Authorization

Specification

Implement and enforce a procedure to authorize the addition, removal, update, and management of assets owned, controlled, or used by the organization.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Data collection, Data curation, Data storage, Resource provisioning, Team and expertise

Development

Training, Design, Guardrails, Supply Chain

Evaluation

Evaluation, Validation/Red Teaming, Re-evaluation

Deployment

Orchestration, AI Services supply chain, AI applications

Delivery

Operations, Maintenance, Continuous monitoring, Continuous improvement

Retirement

Archiving, Data deletion, Model disposal

Ownership / SSRM

PI

Shared across the supply chain

Shared control ownership refers to responsibilities and activities related to LLM security that are distributed across multiple stakeholders within the AI supply chain, including the Cloud Service Provider (CSP), Model Provider (MP), Orchestrated Service Provider (OSP), Application Provider (AP), and Customer (AIC). These controls require coordinated actions, communication, and governance across all involved parties to ensure their effectiveness.

Model

Owned by the Model Provider (MP)

The Model Provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Owned by the Application Provider (AP)

The Application Provider (AP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The AP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications on the users/customers, the AP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The AP is accountable for carrying out due diligence on its upstream providers (e.g., MPs, Orchestrated Services) to verify that they implement the control as it relates to the service/product developed and offered by the AP. These providers build and offer end-user applications that leverage generative AI models for specific tasks such as content creation, chatbots, code generation, and enterprise automation. These applications are often delivered as software-as-a-service (SaaS) solutions. These providers focus on user interfaces, application logic, domain-specific functionality, and overall user experience rather than underlying model development. Examples: OpenAI (GPTs, Assistants), Zapier, CustomGPT, Microsoft Copilot (integrated into Office products), Jasper (AI-driven content generation), Notion AI (AI-enhanced productivity tools), Adobe Firefly (AI-generated media), and AI-powered customer service solutions like Amazon Rufus, as well as any organization that develops its AI-based application internally.

Implementation guidelines

[All Actors]
1. Define baseline configurations for every critical asset (e.g., systems, containers), AI platforms, model environments, orchestration pipelines and application services.

2. Embed security, compliance and operational requirements (least-privilege, hardened services, logging defaults, resource limits) in each baseline.

3. Route every proposed addition, update or removal through a formal approval workflow with multi-stakeholder sign-off (e.g., engineering, security, compliance, operations).

4. Store approved baselines and change packages in tamper-evident, version-controlled repositories; capture who/what/when metadata for audit.

5. Block or flag unauthorized changes; require rollback or emergency-exception handling per CCC-08 when deviations are detected.

6. Review and revise baselines on a regular basis, or whenever a major architectural change, new regulation, or material threat intelligence emerges.
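As a minimal illustration of guideline 3, the sketch below checks that a proposed change carries sign-off from every required stakeholder group before it is treated as authorized. The `ChangeRequest` shape and the role names are illustrative assumptions, not part of the AICM specification.

```python
# Hedged sketch: multi-stakeholder sign-off check for a change request.
# Roles and fields are hypothetical examples, not mandated by CCC-04.
from dataclasses import dataclass, field

REQUIRED_APPROVERS = {"engineering", "security", "compliance", "operations"}

@dataclass
class ChangeRequest:
    asset_id: str
    action: str                      # e.g. "add", "update", "remove"
    approvals: set = field(default_factory=set)

def is_authorized(change: ChangeRequest) -> bool:
    """A change is authorized only when every required role has signed off."""
    return REQUIRED_APPROVERS.issubset(change.approvals)

cr = ChangeRequest("gpu-cluster-01", "update", {"engineering", "security"})
assert not is_authorized(cr)         # compliance and operations missing
cr.approvals |= {"compliance", "operations"}
assert is_authorized(cr)             # all required sign-offs present
```

In practice the approval record would live in the tamper-evident, version-controlled repository described in guideline 4, with who/what/when metadata attached to each sign-off.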

Auditing guidelines

Focus: The Cloud Service Provider/AI Processing Infrastructure Provider has implemented appropriate access restrictions for customers making changes to AI infrastructure components, including compute resources, accelerator configurations, storage systems, and networking capabilities. 

Applicability: This control applies when the Cloud Service Provider/AI Processing Infrastructure Provider gives customers the ability to perform changes to AI infrastructure components, such as: AI accelerator (GPU/TPU) configuration adjustments, compute cluster scaling or optimization, storage tiering and caching configurations, networking fabric and interconnect settings, resource allocation and scheduling policies, infrastructure-as-code template modifications, and container orchestration settings for AI workloads. Access controls for CSP or AI Processing Infrastructure Provider personnel should be covered in a specific CSP attestation report, such as SOC 1 or SOC 2.

1. Inquiring with Control Owners

1.1 Interview Platform Engineers and Infrastructure Administrators, and Review Access Control Documentation: Interview personnel responsible for managing customer access and examine formal documentation covering the following areas.

Infrastructure Access Management: self-service AI infrastructure provisioning portals and GPU/TPU quota management systems; hardware accelerator firmware and driver management with distributed computing orchestration platforms; high-performance storage configuration interfaces and infrastructure-as-code deployment pipelines; Kubernetes and container orchestration for AI workloads.

Access Control and Security Framework: role-based access control (RBAC) implementation for infrastructure management; customer isolation boundaries in multi-tenant AI environments; quota enforcement mechanisms for high-value compute resources; API access control for infrastructure management interfaces.

Authentication and Governance: authentication requirements for infrastructure configuration changes; service account governance for automated infrastructure management; resource tagging and permission boundaries; escalation paths for quota and access modifications.
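The RBAC implementation reviewed in step 1.1 can be pictured as a role-to-permission mapping; the sketch below uses hypothetical role and action names to show the kind of check an auditor would expect the platform to enforce.

```python
# Hedged sketch of an RBAC permission check for infrastructure management.
# Role names and actions are illustrative assumptions, not a real platform's API.
ROLE_PERMISSIONS = {
    "platform-admin": {"provision", "scale", "configure-network", "delete"},
    "ml-engineer":    {"provision", "scale"},
    "auditor":        {"read-config"},
}

def can_perform(role: str, action: str) -> bool:
    """Allow an action only if the role's permission set includes it."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert can_perform("ml-engineer", "scale")       # within role's permissions
assert not can_perform("auditor", "delete")      # read-only role is denied
assert not can_perform("unknown-role", "provision")  # unknown roles get nothing
```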

1.2 Assess Change Management Policies: Review policies governing: customer self-service capabilities vs. provider-managed changes, approval workflows for infrastructure quota increases, risk assessment processes for customer-initiated infrastructure changes, guardrails and protective limitations on customer capabilities, monitoring of customer infrastructure modification activities, intervention thresholds for performance-impacting changes, automated validation of customer infrastructure templates, and resource utilization monitoring and anomaly detection.

2. Obtaining and Verifying the Population of Records 

2.1 Obtain Complete Asset Population: Gather inventory of infrastructure components customers can modify: AI accelerator (GPU/TPU) pools and configuration interfaces, compute instance types available for AI workloads, storage system configuration options, network fabric settings accessible to customers, resource schedulers and orchestration tools, infrastructure-as-code templates and deployment pipelines, container orchestration platforms and configurations, and customer-configurable monitoring and alerting systems. Verify that the population is complete.
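One way to test the completeness asserted in step 2.1 is to reconcile the inventory supplied by the control owner against an independent source, such as a cloud billing or resource-listing export. The record names below are illustrative assumptions.

```python
# Hedged sketch: population completeness check by two-way set reconciliation.
# Asset identifiers are hypothetical; any independent export would serve.
def reconcile(inventory: set, independent_export: set) -> dict:
    """Return assets missing from each source; two empty sets mean complete."""
    return {
        "missing_from_inventory": independent_export - inventory,
        "unexpected_in_inventory": inventory - independent_export,
    }

inventory = {"gpu-pool-a", "storage-tier-hot", "k8s-ai-cluster"}
export = {"gpu-pool-a", "storage-tier-hot", "k8s-ai-cluster", "tpu-pool-b"}
gaps = reconcile(inventory, export)
# tpu-pool-b appears in the independent export but not in the audited
# inventory, so the population cannot yet be considered complete.
assert gaps["missing_from_inventory"] == {"tpu-pool-b"}
```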

3. Inspecting Records and Documents

3.1 Choose a diverse sample of infrastructure components that customers can modify, such as: high-demand GPU/TPU configurations, specialized AI accelerator hardware, high-performance storage tiers, low-latency network interconnects, auto-scaling compute clusters, custom container environments for AI workloads, infrastructure-as-code deployment pipelines, and resource allocation and scheduling systems.

3.2 Obtain Access Control Lists: For each sampled infrastructure component, collect: user and account access permissions, role definitions and assignments, service principal and API key permissions, customer tenant isolation boundaries, resource quota configurations, permission boundary definitions, service control policies, and administrative access override capabilities.

3.3 Validate Access List Completeness: Verify the completeness of access lists through: reviewing script logic for access report generation, cross-referencing with identity management systems, comparing against role definition repositories, validating against authentication logs, reconciling with customer subscription records, and examining API gateway access configurations.

3.4 Verify Access Restrictions: For each sampled infrastructure component, validate that access is properly restricted across the following areas.

Examining Access Control Mechanisms: review role-based access control implementations, verify tenant isolation in multi-tenant environments, confirm resource hierarchy permission inheritance, validate quota enforcement mechanisms, check API rate limiting and throttling configurations, assess permission boundary implementations, and review network-level access controls.

Reviewing Privileged Access Management: verify separation between provider administrative access and customer access, confirm just-in-time access for privileged operations, check approval workflows for elevated permissions, validate audit logging for privileged operations, assess emergency access procedures, and review service account governance.

Analyzing Deployment Pipeline Controls: examine infrastructure-as-code pipeline authorization checks, verify template validation before deployment, confirm policy-as-code enforcement, review deployment approval workflows, check pipeline execution permissions, and validate pre-deployment security and compliance scanning.

Testing Access Enforcement: verify unauthorized customer accounts cannot exceed quotas, confirm the platform prevents cross-tenant resource access, test that permissions align with documented roles, validate that infrastructure policy guardrails cannot be bypassed, check that service limits are properly enforced, and verify logging and alerting for access control violations.
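The quota-enforcement test in step 3.4 can be exercised by simulating a tenant requesting accelerators beyond its approved allocation; the enforcer below should deny both over-quota requests and requests from tenants with no quota record. Quota values and tenant names are illustrative assumptions.

```python
# Hedged sketch: quota enforcement for scarce GPU resources, as an auditor
# might probe it. Quotas and tenant identifiers are hypothetical.
class QuotaEnforcer:
    def __init__(self, quotas: dict):
        self.quotas = quotas             # tenant -> maximum GPUs allowed
        self.allocated = {}              # tenant -> GPUs currently granted

    def request_gpus(self, tenant: str, count: int) -> bool:
        """Grant the request only if it stays within the tenant's quota."""
        used = self.allocated.get(tenant, 0)
        if tenant not in self.quotas or used + count > self.quotas[tenant]:
            return False                 # denied: unknown tenant or over quota
        self.allocated[tenant] = used + count
        return True

enforcer = QuotaEnforcer({"tenant-a": 8})
assert enforcer.request_gpus("tenant-a", 8)      # within quota: granted
assert not enforcer.request_gpus("tenant-a", 1)  # would exceed quota: denied
assert not enforcer.request_gpus("tenant-b", 1)  # no quota record: denied
```

A passing check here corresponds to the "verify unauthorized customer accounts cannot exceed quotas" test; a real audit would also confirm that each denial is logged and alerted on.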

3.5 Assess AI-Specific Access Controls: Evaluate specialized controls for AI infrastructure: quota management for scarce GPU/TPU resources, cost control mechanisms for expensive accelerators, performance protection for shared infrastructure, data locality and sovereignty enforcement, memory and storage allocation limits, specialized monitoring for AI workload anomalies, and fair use policies for distributed training.

Standards mappings

ISO 42001 · No Gap
42001: Clause 8.1 Operational planning and control
42001: A.10.2 Allocating responsibilities
27001: A.5.15 Access control
27001: A.7.2 Physical entry
27001: A.8.9 Configuration management
27001: A.8.19 Installation of software on operational systems
27001: A.8.20 Networks security
27001: A.8.32 Change management
Addendum

N/A

EU AI Act · Full Gap
No Mapping
Addendum

The EU AI Act does not cover the CCC-04 topic, "Implement and enforce a procedure to authorize the addition, removal, update, and management of assets owned, controlled, or used by the organization," for any of the AI structures defined within the EU AI Act.

NIST AI 600-1 · Partial Gap
MS-2.7-001
MS-2.7-009
GV-6.1-005
GV-6.1-007
Addendum

The requirement of "restricting the unauthorized addition, removal, update, and management of changes" is missing in NIST AI 600-1.

BSI AIC4 · No Gap
DEV-07
DEV-09
DEV-10
AM-03
AM-04
Addendum

N/A

AI-CAIQ questions (1)

CCC-04.1

Are procedures implemented and enforced to authorize the addition, removal, update, and management of assets owned, controlled, or used by the organization?