AICM Atlas · CSA AI Controls Matrix
GRC · Governance, Risk and Compliance
GRC-08 · Cloud & AI Related

Special Interest Groups

Specification

Establish and maintain contact with related special interest groups and other relevant entities in line with business context.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Team and expertise

Development

Design, Guardrails

Evaluation

Evaluation, Validation/Red Teaming

Deployment

Orchestration, AI Services supply chain

Delivery

Continuous monitoring, Maintenance

Retirement

Data deletion, Archiving

Ownership / SSRM

PI

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Model

Owned by the Model Provider (MP)

The Model Provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Owned by the Orchestrated Service Provider (OSP)

The Orchestrated Service Provider (OSP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The OSP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications for users/customers, the OSP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The OSP is accountable for ensuring that its upstream providers (e.g., MPs) implement the control as it relates to the service/product developed and offered by the OSP.

This refers to entities that create the technical building blocks and management tools that enable AI implementation. This can include platforms, frameworks, and tools that facilitate the integration, deployment, and management of AI models within enterprise workflows. These providers focus on model orchestration and offer services like API access, automated scaling, prompt management, workflow automation, monitoring, and governance rather than end-user functionality or raw infrastructure. They help businesses implement AI in a structured and efficient manner. Examples: AWS, Azure, GCP, OpenAI, Anthropic, LangChain (for AI workflow orchestration), Anyscale (Ray for distributed AI workloads), Databricks (MLflow), IBM Watson Orchestrate, and developer platforms like Google AI Studio.

Application

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they consume.

Implementation guidelines

[All Actors]
1. Establish channels for engaging with cloud- and AI-related special interest groups, consortia, or industry bodies.

2. Encourage participation in technical and regulatory discussions to stay ahead of emerging standards and security risks.

3. Assign internal stakeholders to monitor developments from groups like CSA, ISO/IEC JTC 1/SC 42, NIST, and ENISA.

4. Leverage insights from industry engagement to influence internal policy, architecture, and risk practices.

5. Foster partnerships and knowledge-sharing across peer organizations to enhance resilience and governance maturity.

Auditing guidelines

1. Policy Examination: Verify that the organization has a documented strategy, policy, or guideline encouraging participation in external special interest groups related to cloud infrastructure, AI platform governance, multi-tenant security, compliance frameworks, or responsible AI deployment at scale. Confirm that the organization has articulated the purpose of external engagement (e.g., tracking regulatory developments, contributing to cloud/AI standards, addressing emerging risks, promoting transparency in shared responsibility models).

2. Policy Assessment: Assess whether the identified external groups are relevant to the organization's operations, including cloud compliance forums, international standards bodies, infrastructure-focused AI alliances, and regulatory cloud working groups. Verify that roles responsible for external engagement (e.g., cloud governance, compliance, public policy, platform risk) are formally documented and aligned to the organization’s governance structure.

3. Program Evaluation: Determine whether the organization has a structured process to identify, evaluate, and prioritize participation in external groups based on relevance to regulatory, security, and AI infrastructure concerns. Confirm that information gained from external engagement is shared internally with appropriate stakeholders (e.g., engineering, compliance, legal, client-facing teams) and influences the organization's policies, platform development, or assurance practices.

4. Implementation Validation: Review documentation such as working group memberships, standards committee participation, public comment submissions, industry consortium involvement, or internal briefings summarizing external discussions. Select a sample of external groups (e.g., ISO/IEC JTC 1/SC 42, EUCS, Open Compute Project, CSA AI/Cloud working groups) and verify that engagement supports the organization’s AI/cloud governance objectives and shared responsibility obligations.

From CCM:
1. Examine the organization's policy and procedures related to contact with cloud-related special interest groups to determine if membership is required and actively maintained.
2. Identify the individuals responsible for contact with cloud-related special interest groups and determine whether the requirements stipulated at the policy level have been implemented.

Standards mappings

ISO 42001 · No Gap
42001: B.4.6 (Human resources)
27001: A.5.6 (Contact with special interest groups)
Addendum

N/A

EU AI Act · Partial Gap
Article 72
Addendum

Establish and maintain contact with cloud-related special interest groups, in line with business context and with proactive industry engagement requirements.

NIST AI 600-1 · No Gap
GV-6.1-001
GV-6.1-002
MP-5.2-002
Addendum

N/A

BSI AIC4 · No Gap
OIS-05
Addendum

N/A

AI-CAIQ questions (1)

GRC-08.01

Is contact established and maintained with related special interest groups and other relevant entities in line with business context?