AICM Atlas · CSA AI Controls Matrix
STA · Supply Chain Management, Transparency, and Accountability
STA-12 · Cloud & AI Related

Supply Chain Compliance Assessment

Specification

Define and implement a process for conducting internal assessments to confirm conformance and effectiveness of standards, policies, procedures, and service level agreement activities at least annually.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Data storage, Resource provisioning, Team and expertise

Development

Design, Training, Guardrails

Evaluation

Evaluation, Validation/Red Teaming, Re-evaluation

Deployment

Orchestration, AI Services supply chain, AI applications

Delivery

Operations, Maintenance, Continuous monitoring, Continuous improvement

Retirement

Archiving, Data deletion, Model disposal

Ownership / SSRM

PI

Shared across the supply chain

Shared control ownership refers to responsibilities and activities related to LLM security that are distributed across multiple stakeholders within the AI supply chain, including the Cloud Service Provider (CSP), Model Provider (MP), Orchestrated Service Provider (OSP), Application Provider (AP), and Customer (AIC). These controls require coordinated actions, communication, and governance across all involved parties to ensure their effectiveness.

Model

Owned by the Model Provider (MP)

The Model Provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Owned by the Application Provider (AP)

The Application Provider (AP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The AP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications for users/customers, the AP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The AP is accountable for carrying out due diligence on its upstream providers (e.g., MPs, Orchestrated Services) to verify that they implement the control as it relates to the service/product developed and offered by the AP. Application Providers build and offer end-user applications that leverage generative AI models for specific tasks such as content creation, chatbots, code generation, and enterprise automation; these applications are often delivered as software-as-a-service (SaaS) solutions. These providers focus on user interfaces, application logic, domain-specific functionality, and overall user experience rather than underlying model development. Examples: OpenAI (GPTs, Assistants), Zapier, CustomGPT, Microsoft Copilot (integrated into Office products), Jasper (AI-driven content generation), Notion AI (AI-enhanced productivity tools), Adobe Firefly (AI-generated media), and AI-powered customer service solutions like Amazon Rufus, as well as any organization that develops its AI-based applications internally.

Implementation guidelines

[All Actors]
1. Establish annual internal assessment processes to evaluate conformance with AI supply chain security standards, policies, procedures, and service level agreements across all organizational functions.

2. Define assessment scope and criteria covering policy compliance, procedural effectiveness, control implementation, and service level agreement performance for all AI-related activities.

3. Conduct systematic compliance reviews using standardized assessment frameworks, checklists, and metrics to measure adherence to established standards and identify gaps.

4. Document assessment findings including compliance status, control effectiveness ratings, identified deficiencies, and recommendations for improvement across all assessed areas.

5. Implement corrective action processes to address identified non-conformance issues, track remediation progress, and verify effectiveness of corrective measures (a data-model sketch of such tracking follows this list).

6. Report assessment results to appropriate stakeholders including management, audit committees, and relevant governance bodies to support continuous improvement and accountability.
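
To make guidelines 2 through 5 concrete, here is a minimal Python sketch of one way an assessment record with findings and corrective-action tracking could be structured. The class and field names are illustrative assumptions, not part of the AICM specification.

from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Status(Enum):
    CONFORMANT = "conformant"
    NON_CONFORMANT = "non-conformant"
    NOT_ASSESSED = "not assessed"

@dataclass
class Finding:
    control_id: str            # e.g., "STA-12"
    area: str                  # policy, procedure, control, or SLA
    status: Status
    effectiveness: int         # 1 (ineffective) to 5 (fully effective)
    deficiency: str = ""
    recommendation: str = ""
    remediated: bool = False   # flipped as corrective actions close out

@dataclass
class Assessment:
    performed_on: date
    scope: list[str]           # assessed areas, per guideline 2
    findings: list[Finding] = field(default_factory=list)

    def open_actions(self) -> list[Finding]:
        # Corrective actions still open, per guideline 5.
        return [f for f in self.findings
                if f.status is Status.NON_CONFORMANT and not f.remediated]

    def is_current(self, today: date) -> bool:
        # True if the last assessment falls within the annual cadence.
        return (today - self.performed_on).days <= 365

In practice such records would live in the organization's GRC tooling; the sketch only shows the shape of the data that guidelines 4 and 6 would report on.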

Auditing guidelines

1. Verify whether the Cloud Service Provider has a recurring, structured audit process to evaluate governance across cloud infrastructure, data storage, compute services, virtualization, and AI workload support (e.g., container orchestration, GPU provisioning, serverless functions); a minimal cadence check is sketched after these guidelines.

2. Review audit documentation for issues such as misconfigured access controls, data residency violations, service availability risks, or insecure APIs. Confirm that corrective actions are tracked, resolved promptly, and aligned with cloud security standards, regulatory requirements, and internal policies.

3. Determine whether audit results are shared with relevant teams, such as cloud operations, compliance, and security, and whether a feedback mechanism is in place to continuously improve audit effectiveness and ensure responsible cloud service delivery.
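
As one way to operationalize the first auditing guideline, the following Python sketch flags required audit domains that have not been covered within the past year. The domain names and the audit-log format are assumptions made for illustration.

from datetime import date, timedelta

# Domains drawn from auditing guideline 1; adjust to the provider's actual scope.
REQUIRED_DOMAINS = {
    "cloud infrastructure", "data storage", "compute services",
    "virtualization", "AI workload support",
}

def audit_gaps(audit_log: dict[str, date], today: date) -> set[str]:
    # Return the domains with no audit in the last 365 days.
    cutoff = today - timedelta(days=365)
    return {d for d in REQUIRED_DOMAINS
            if d not in audit_log or audit_log[d] < cutoff}

# Usage: an empty result means the annual cadence requirement is met.
log = {"cloud infrastructure": date(2025, 3, 1), "data storage": date(2024, 1, 10)}
stale_or_missing = audit_gaps(log, date(2025, 6, 1))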

Standards mappings

ISO 42001 · No Gap
42001: A.2.3 Alignment with other organizational policies
42001: A.10.3 Suppliers
27001: A.5.22 Monitoring, review and change management of supplier services
27002: 5.22 Monitoring, review and change management of supplier services
Addendum

N/A

EU AI Act · Partial Gap
Article 9 (2)
Article 17
Annex VII 5.3
Addendum

Define a formal internal audit program that includes: supplier and SLA performance reviews, security/privacy policy compliance, a documented process within the QMS or compliance framework, a review schedule (e.g., annually or per risk/event triggers), and confirmation that reviews are extended to all relevant actors, not just high-risk providers.
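
One hedged illustration of capturing those program elements as reviewable configuration, in Python; the field names and trigger values are assumptions, not requirements drawn from the EU AI Act text.

# Hypothetical audit-program definition mirroring the addendum's elements.
AUDIT_PROGRAM = {
    "reviews": ["supplier performance", "SLA performance",
                "security policy compliance", "privacy policy compliance"],
    "documented_in": "QMS",      # or another compliance framework
    "cadence": "annual",         # or per risk/event triggers
    "event_triggers": ["major incident", "supplier change", "risk reassessment"],
    "covers_all_actors": True,   # not only high-risk providers
}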

NIST AI 600-1 · No Gap
GV-6.1-005
Addendum

N/A

BSI AIC4 · No Gap
C4 PC-01
C4 Section 4 (4.4.2.1)
C5 SSO-04
C5 COM-02
C5 COM-03
Addendum

N/A

AI-CAIQ questions (1)

STA-12.1

Is there a process for conducting internal assessments at least annually to confirm the conformance and effectiveness of standards, policies, procedures, and SLA activities?