AICM Atlas · CSA AI Controls Matrix
GRC · Governance, Risk and Compliance
GRC-06 · Cloud & AI Related

Governance Responsibility Model

Specification

Define and document roles and responsibilities for planning, implementing, operating, assessing, and improving governance programs.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Team and expertise

Development

Guardrails

Evaluation

Validation/Red Teaming, Evaluation

Deployment

Orchestration, AI Services supply chain

Delivery

Operations, Maintenance, Continuous monitoring, Continuous improvement

Retirement

Archiving, Data deletion

Ownership / SSRM

Physical Infrastructure (PI)

Shared between the Cloud Service Provider and the Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Model

Owned by the Model Provider (MP)

The Model Provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Owned by the Orchestrated Service Provider (OSP)

The Orchestrated Service Provider (OSP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The OSP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications for users/customers, the OSP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The OSP is accountable for ensuring that its upstream providers (e.g., MPs) implement the control as it relates to the services/products developed and offered by the OSP.

OSPs are entities that create the technical building blocks and management tools that enable AI implementation. This can include platforms, frameworks, and tools that facilitate the integration, deployment, and management of AI models within enterprise workflows. These providers focus on model orchestration and offer services like API access, automated scaling, prompt management, workflow automation, monitoring, and governance rather than end-user functionality or raw infrastructure. They help businesses implement AI in a structured and efficient manner. Examples: AWS, Azure, GCP, OpenAI, Anthropic, LangChain (for AI workflow orchestration), Anyscale (Ray for distributed AI workloads), Databricks (MLflow), IBM Watson Orchestrate, and developer platforms like Google AI Studio.

Application

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies services or products they consume.
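The layer-to-owner assignments above can be sketched as a simple lookup table. This is an illustrative encoding, not part of the AICM itself; the layer keys and helper function are hypothetical names chosen for this example.

```python
# Hypothetical encoding of the GRC-06 SSRM ownership described above:
# each architectural layer of an AI service maps to the party (or parties)
# that own implementation and enforcement of the control at that layer.
SSRM_OWNERSHIP = {
    "physical_infrastructure": ["CSP", "MP"],  # Shared CSP-MP
    "model": ["MP"],                           # Model Provider
    "orchestrated": ["OSP"],                   # Orchestrated Service Provider
    "application": ["AIC"],                    # the Customer
}

def owners(layer: str) -> list[str]:
    """Return the party/parties accountable for the control at a given layer."""
    return SSRM_OWNERSHIP[layer]
```

A table like this makes shared-responsibility boundaries machine-checkable, e.g. when validating that every layer of a deployed service has at least one accountable party.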

Implementation guidelines

[All Actors]
1. Define and document governance responsibilities across the AI lifecycle, including roles for data governance, model oversight, risk acceptance, compliance, and incident response.

2. Capture those assignments in a RACI (Responsible / Accountable / Consulted / Informed) matrix that covers internal staff and external partners.

3. Align role definitions with organizational structure, separation-of-duties requirements, and applicable regulations and standards.

4. Establish escalation paths and communication channels for governance conflicts or non-compliance.

5. Review and update the role catalog at least annually (or after major organizational or regulatory change) and redistribute it to all stakeholders.
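The RACI matrix called for in guideline 2 can be represented as a small data structure. This is a minimal sketch: the activities, role names, and helper function below are hypothetical examples, not prescribed by the control.

```python
# Illustrative RACI matrix (Responsible / Accountable / Consulted / Informed)
# covering a few example AI-lifecycle governance activities. All role and
# activity names are hypothetical.
RACI = {
    "Define data governance policy": {
        "R": "Data Governance Lead", "A": "CISO",
        "C": ["Legal"], "I": ["All staff"],
    },
    "Approve model deployment": {
        "R": "ML Engineering", "A": "AI Risk Owner",
        "C": ["Security"], "I": ["Compliance"],
    },
    "Respond to AI incidents": {
        "R": "Incident Response", "A": "CISO",
        "C": ["Model Provider"], "I": ["Customers"],
    },
}

def accountable_for(activity: str) -> str:
    """Return the single Accountable role for an activity (RACI requires exactly one)."""
    return RACI[activity]["A"]
```

Keeping the matrix in a structured form like this makes the annual review in guideline 5 easier to automate, e.g. flagging activities with no Accountable role after a reorganization.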

Auditing guidelines

1. Policy Examination
a. Verify that the organization has documented roles and responsibilities for governing the secure provisioning and operation of AI services provided to customers, including infrastructure, platform, and support layers.
b. Confirm that responsibilities span the governance lifecycle—planning, implementation, operation, assessment, and continuous improvement—as they relate to delivering secure and compliant AI services.

2. Policy Assessment
a. Assess whether governance roles clearly define accountability for service security, incident response, customer support, and shared responsibility boundaries.
b. Confirm that governance responsibilities account for multi-tenant risks, service-level obligations, and contractual responsibilities to customers using hosted AI solutions.

3. Program Evaluation
a. Determine whether governance responsibilities are embedded in service delivery processes such as security reviews, change control, and customer onboarding.
b. Verify that governance roles are tied to oversight forums (e.g., security councils, compliance teams) that monitor service-level performance, audit readiness, and continuous improvement.
c. Confirm that governance responsibilities explicitly address multi-tenant identity protections, with roles accountable for preventing cross-customer exposure of identity data or model outputs.

4. Implementation Validation
a. Review documentation such as governance charters, audit logs, operational dashboards, or internal/external assurance reports to confirm governance responsibilities are fulfilled.
b. Select a governance function (e.g., customer data isolation, AI model deployment support, shared responsibility guidance) and confirm the responsible role is fulfilling duties as defined in policy.

From CCM:
1. Confirm the organization has established a governance framework which details roles, responsibilities, and accountability.
2. Evidence that governance meetings are reported and documented appropriately.
3. Confirm that individuals/groups responsible for governance are tracking and monitoring progress against the governance program.

Standards mappings

ISO 42001 · No Gap
42001: B.3.2 (AI roles and responsibilities)
42001: A.3.2 (AI roles and responsibilities)
Addendum

N/A

EU AI Act · No Gap
Article 16
Article 23
Article 24
Article 25
Article 26
Article 27
Article 28
Article 29
Addendum

N/A

NIST AI 600-1 · Partial Gap
GV-2.1-001
GV-2.1-002
Addendum

The control provides broader guidance on documenting and defining roles and responsibilities for governance programs as a whole.

BSI AIC4 · No Gap
OIS-01
OIS-02
COM-01
COM-02
COM-03
COM-04
Addendum

N/A

AI-CAIQ questions (1)

GRC-06.1

Are roles and responsibilities defined and documented for planning, implementing, operating, assessing, and improving governance programs?