AICM Atlas · CSA AI Controls Matrix
CCC · Change Control and Configuration Management
CCC-03 · Cloud & AI Related

Change Management Technology

Specification

Implement a change management procedure to manage the risks associated with applying changes to assets owned, controlled or used by the organization.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Data collection, Data curation, Data storage, Resource provisioning

Development

Design, Training, Guardrails

Evaluation

Evaluation, Validation/Red Teaming, Re-evaluation

Deployment

Orchestration, AI Services supply chain, AI applications

Delivery

Operations, Maintenance, Continuous monitoring, Continuous improvement

Retirement

Archiving, Data deletion, Model disposal

Ownership / SSRM

Physical Infrastructure (PI)

Shared across the supply chain

Shared control ownership refers to responsibilities and activities related to LLM security that are distributed across multiple stakeholders within the AI supply chain, including the Cloud Service Provider (CSP), Model Provider (MP), Orchestrated Service Provider (OSP), Application Provider (AP), and Customer (AIC). These controls require coordinated actions, communication, and governance across all involved parties to ensure their effectiveness.

Model

Owned by the Model Provider (MP)

The Model Provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Owned by the Application Provider (AP)

The Application Provider (AP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The AP is responsible and accountable for implementing the control within its own infrastructure/environment. If the control has downstream implications for users or customers, the AP is responsible for enabling the customer and/or upstream partner to implement and configure the control within their risk management approach. The AP is accountable for carrying out due diligence on its upstream providers (e.g., MPs, Orchestrated Services) to verify that they implement the control as it relates to the service/product developed and offered by the AP. These providers build and offer end-user applications that leverage generative AI models for specific tasks such as content creation, chatbots, code generation, and enterprise automation. These applications are often delivered as software-as-a-service (SaaS) solutions. These providers focus on user interfaces, application logic, domain-specific functionality, and overall user experience rather than underlying model development. Examples: OpenAI (GPTs, Assistants), Zapier, CustomGPT, Microsoft Copilot (integrated into Office products), Jasper (AI-driven content generation), Notion AI (AI-enhanced productivity tools), Adobe Firefly (AI-generated media), and AI-powered customer service solutions like Amazon Rufus, as well as any organization that develops its AI-based application internally.

Implementation guidelines

[All Actors]
1. Identify and document potential risks associated with each proposed change, including operational, security, performance, and compliance-related risks.

2. Define the likelihood and impact of each risk, and specify mitigation steps or fallback procedures (e.g., rollback plans, alternative configurations).

3. Include risk evaluation as part of the formal change review and approval workflow.

4. Implement monitoring and logging mechanisms to track the real-time impact of deployed changes and detect early signs of risk materialization.

5. Assign clear ownership and escalation paths for each identified risk, ensuring that designated stakeholders are prepared to intervene if risk thresholds are crossed.

6. Regularly review and update the risk registry to capture lessons learned from previous changes and incorporate them into future change planning.
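The registry-and-gate workflow in steps 1-3 and 5 can be sketched in code. The sketch below is illustrative only: the `ChangeRisk` fields, the 1-5 likelihood/impact scale, and the score threshold are assumptions, not part of the AICM specification, and any real implementation would live inside the organization's change management tooling.

```python
from dataclasses import dataclass

@dataclass
class ChangeRisk:
    """One entry in the change risk registry (guideline steps 1, 2, and 5).
    Scale and fields are illustrative assumptions."""
    description: str
    category: str          # operational | security | performance | compliance
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    impact: int            # 1 (negligible) .. 5 (severe)
    mitigation: str        # fallback procedure, e.g. a rollback plan
    owner: str             # stakeholder accountable for escalation

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, a common risk-matrix convention
        return self.likelihood * self.impact

def review_change(risks: list[ChangeRisk], threshold: int = 15) -> bool:
    """Formal review gate (step 3): approve only if every identified risk
    has a documented mitigation and an assigned owner, and no risk score
    exceeds the organization's threshold."""
    return all(r.mitigation and r.owner and r.score <= threshold
               for r in risks)
```

For example, a GPU driver update might carry a performance-regression risk with a rollback plan and a named owner; the gate would reject the change if any risk lacked either.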

Auditing guidelines

1. Inquiring with Control Owners

1.1 Interview change management leadership to understand how changes to AI-specific infrastructure assets are managed. This includes GPU/TPU/accelerator updates, AI-optimized server and storage configurations, high-bandwidth network fabric modifications, and distributed compute environment updates. Discuss the workflows for hardware introductions, driver and firmware updates, resource allocation changes, and infrastructure optimizations. Examine how risks are assessed, particularly regarding multi-tenant isolation, AI workload performance impacts, hardware-software compatibility, and capacity planning.

2. Inspecting Records and Documents

2.1 Confirm the use of enterprise change management systems such as ServiceNow, Jira, or BMC Remedy for tracking infrastructure changes. Validate that these systems are integrated into the broader governance and approval framework.

2.2 Review configuration management databases (CMDBs) to ensure accurate inventories of infrastructure assets, well-documented baseline configurations for AI environments, relationship mapping between components, and comprehensive change histories.

2.3 Assess the use of automated testing frameworks to validate infrastructure changes. Confirm that performance validation tests are consistently run, benchmark results are evaluated against thresholds, test coverage is documented, and testing is integrated into approval workflows.

2.4 Evaluate cloud infrastructure management practices by verifying enforcement of infrastructure-as-code templates, proper versioning, detection and remediation of configuration drift, and controlled deployment procedures.

2.5 Review API management practices related to infrastructure. Confirm that access to management APIs is controlled, versions are tracked, administrative activity is monitored, and authentication/authorization is enforced.

2.6 Verify oversight of infrastructure components managed by external providers. Check that contracts specify change management obligations, vendor change notifications are integrated into internal workflows, impact assessments are conducted, testing protocols are in place, and post-change SLA monitoring is active.

2.7 Inspect change management for AI-specific infrastructure. Confirm that driver and firmware updates are validated with ML frameworks, performance benchmarking is conducted with relevant AI workloads, capacity planning is aligned with distributed training needs, and hardware optimization configurations are properly tested.

2.8 Review infrastructure-as-code (IaC) practices. Ensure code reviews are conducted, access controls are applied to repositories, templates are tested in staging before production deployment, and syntax and security validations are automated. Confirm version control and change history are documented.
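The configuration-drift check an auditor would look for under 2.4 can be sketched as a baseline comparison between the infrastructure-as-code template and the deployed state. This is a minimal sketch under assumed flat key-value configurations; real IaC tooling (e.g., a plan/diff step in the deployment pipeline) operates on richer resource graphs.

```python
def detect_drift(baseline: dict, deployed: dict) -> dict:
    """Compare a deployed configuration against its IaC baseline
    (guideline 2.4) and report drifted keys as {key: (expected, actual)}.
    A missing key on either side appears with value None."""
    keys = baseline.keys() | deployed.keys()
    return {
        k: (baseline.get(k), deployed.get(k))
        for k in keys
        if baseline.get(k) != deployed.get(k)
    }
```

In an audit context, a non-empty result would trigger the remediation workflow: either the deployed environment is reverted to the baseline, or the change is taken back through the formal approval process and the template updated.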

Standards mappings

ISO 42001 · Partial Gap
42001: Clause 6.3 Planning of changes
42001: Clause 8.1 Operational planning and control
42001: A.10.2 Allocating responsibilities
Addendum

ISO 42001 does not specifically address the technology-driven aspect of the CCC-03 topic, but does support the use of tooling resources.

EU AI Act · Partial Gap
Article 17
Addendum

The EU AI Act does not cover the CCC-03 topic for General-Purpose AI Models or General-Purpose AI Models with Systemic Risk.

NIST AI 600-1 · Partial Gap
GV-6.1-005
GV-6.1-008
MG-4.1-006
GV-1.1-001
MS-2.7-001
Addendum

NIST AI 600-1 does not fully cover a formal change management procedure; operational controls such as approvals, impact analysis, rollback, and documentation; or extension of these controls to cloud infrastructure and general IT assets.

BSI AIC4 · Partial Gap
DEV-05
Addendum

No AIC4 control maps to CCC-03 for change management, nor specifically mentions a technology-driven approach.

AI-CAIQ questions (1)

CCC-03.1

Is a change management procedure implemented to manage the risks associated with applying changes to assets owned, controlled or used by the organization?