AICM Atlas · CSA AI Controls Matrix
I&S · Infrastructure Security
I&S-06 · Cloud & AI Related

Segmentation and Segregation

Specification

Design, develop, deploy and configure applications and infrastructures such that tenant access is appropriately segmented and segregated, monitored and restricted.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Data storage, Resource provisioning

Development

Design, Guardrails

Evaluation

Validation/Red Teaming, Re-evaluation

Deployment

Orchestration, AI Services supply chain, AI applications

Delivery

Operations, Maintenance, Continuous monitoring, Continuous improvement

Retirement

Not applicable

Ownership / SSRM

PI

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Model

Owned by the Model Provider (MP)

The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Shared Orchestrated Service Provider-Application Provider (Shared OSP-AP)

The OSP and AP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Implementation guidelines

[All Actors]
1. Use identity-based policies for resource isolation.

2. Restrict intra-tenant access based on least privilege.
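Guidelines 1 and 2 can be illustrated with a minimal deny-by-default policy check. This is a hypothetical in-memory sketch — names like `Identity`, `Resource`, and `is_access_allowed` are illustrative, not a real IAM API; production deployments would use the platform's policy engine (e.g. cloud IAM or OPA):

```python
# Sketch: identity-based tenant isolation with least privilege.
# Deny by default; allow only same-tenant access with an explicit role grant.
from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    user: str
    tenant: str
    roles: frozenset  # explicitly granted permissions, e.g. {"inference:invoke"}

@dataclass(frozen=True)
class Resource:
    name: str
    tenant: str
    required_role: str

def is_access_allowed(identity: Identity, resource: Resource) -> bool:
    """Allow only when the tenant matches AND the role was explicitly granted."""
    return (identity.tenant == resource.tenant
            and resource.required_role in identity.roles)

alice = Identity("alice", "tenant-a", frozenset({"inference:invoke"}))
model_a = Resource("model-endpoint-a", "tenant-a", "inference:invoke")
model_b = Resource("model-endpoint-b", "tenant-b", "inference:invoke")

assert is_access_allowed(alice, model_a)      # same tenant, role granted
assert not is_access_allowed(alice, model_b)  # cross-tenant access denied
```

The key design choice is that both conditions must hold: tenant membership alone never grants access, and a role grant never crosses a tenant boundary.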

[Shared between the MP, AP, OSP and CSP]
3. Network and Access Segmentation: Segment internal networks and tenant boundaries at the infrastructure level.

4. Implement VLANs and firewall rules to enforce tenant separation. Use virtualization and network controls (e.g., subnets, ACLs) to isolate workloads and environments.
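A basic precondition for the tenant separation in guidelines 3 and 4 is that per-tenant address spaces do not overlap. The check below is a sketch using Python's standard `ipaddress` module; the subnet layout is an illustrative assumption, not a prescribed design:

```python
# Sketch: validate that per-tenant subnets are disjoint and map addresses
# to their tenant segment (a helper for ACL/firewall rule validation).
import ipaddress
from itertools import combinations

tenant_subnets = {
    "tenant-a": ipaddress.ip_network("10.0.1.0/24"),
    "tenant-b": ipaddress.ip_network("10.0.2.0/24"),
    "shared-mgmt": ipaddress.ip_network("10.0.100.0/24"),
}

def find_overlaps(subnets):
    """Return pairs of segments whose address spaces overlap (should be empty)."""
    return [(a, b) for (a, na), (b, nb) in combinations(subnets.items(), 2)
            if na.overlaps(nb)]

def tenant_for(ip, subnets):
    """Map a source address to its tenant segment, or None if unassigned."""
    addr = ipaddress.ip_address(ip)
    return next((t for t, n in subnets.items() if addr in n), None)

assert find_overlaps(tenant_subnets) == []
assert tenant_for("10.0.1.17", tenant_subnets) == "tenant-a"
assert tenant_for("192.168.0.1", tenant_subnets) is None
```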

5. Continuously monitor tenant interactions with logging and AI-driven anomaly detection: Use tools like SIEMs, behavioral analytics, or ML-based threat detection to detect misuse, lateral movement, or data leakage.
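The monitoring in guideline 5 can be sketched as a rule that flags flows crossing tenant segments. Record fields and the subnet map below are illustrative assumptions, not a SIEM API; a real deployment would feed flow logs into a SIEM or ML-based detector:

```python
# Sketch: flag flow-log records whose source and destination sit in
# different tenant segments (possible lateral movement or data leakage).
import ipaddress

SUBNETS = {
    "tenant-a": ipaddress.ip_network("10.0.1.0/24"),
    "tenant-b": ipaddress.ip_network("10.0.2.0/24"),
}

def tenant_of(ip):
    addr = ipaddress.ip_address(ip)
    return next((t for t, n in SUBNETS.items() if addr in n), None)

def cross_tenant_flows(flow_logs):
    """Return records where source and destination tenants differ."""
    return [rec for rec in flow_logs
            if (s := tenant_of(rec["src"])) and (d := tenant_of(rec["dst"]))
            and s != d]

logs = [
    {"src": "10.0.1.5", "dst": "10.0.1.9"},   # intra-tenant: expected
    {"src": "10.0.1.5", "dst": "10.0.2.7"},   # cross-tenant: alert
]
assert cross_tenant_flows(logs) == [logs[1]]
```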

6. Zero Trust Architecture (ZTA) principles: Enforce microsegmentation, continuous verification, and minimal implicit trust across services and users.
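The ZTA principles in guideline 6 can be sketched as a policy decision that re-evaluates every request: identity must be verified, the target must be in the caller's microsegment, and verification must be recent. The field names and the re-verification window are illustrative assumptions:

```python
# Sketch: Zero Trust policy decision with no implicit trust.
# Every request is checked on identity, segment, and verification freshness;
# network location alone never grants access.
import time

MAX_VERIFICATION_AGE = 300  # seconds; illustrative re-verification window

def authorize(request: dict) -> bool:
    """All checks must pass; any missing field fails closed."""
    return all((
        request.get("identity_verified", False),
        request.get("segment") == request.get("target_segment"),  # microsegmentation
        time.time() - request.get("verified_at", 0) < MAX_VERIFICATION_AGE,
    ))

now = time.time()
assert authorize({"identity_verified": True, "segment": "tenant-a",
                  "target_segment": "tenant-a", "verified_at": now})
# Cross-segment request is denied even with a verified identity:
assert not authorize({"identity_verified": True, "segment": "tenant-a",
                      "target_segment": "tenant-b", "verified_at": now})
# Stale verification is denied, forcing continuous re-verification:
assert not authorize({"identity_verified": True, "segment": "tenant-a",
                      "target_segment": "tenant-a", "verified_at": 0})
```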

Auditing guidelines

1. Verify documented policies and procedures clearly outline segmentation and segregation practices.

2. Confirm network segmentation is effectively implemented to isolate sensitive systems and data.

3. Ensure access controls align with segmentation and segregation policies.

4. Check regular testing and validation of segmentation effectiveness.

5. Validate incident response plans specifically address breaches across segmented environments.

6. Confirm logs and monitoring effectively detect unauthorized cross-segment traffic.

7. Verify periodic training provided to relevant staff on segmentation requirements.

Standards mappings

ISO 42001 · Partial Gap
42001: 6.3.2 – Planning of AI-specific controls
42001: 8.2.2 – Operational control
42001: 9.1 / 10.2 – Monitoring and corrective action
ISO/IEC 27001:2022 - A.8.22
ISO/IEC 27001:2022 - A.8.27 – Segregation of environments
27002: 8.28 – Secure development and test environments
27002: 9.4 – Access control
27002: 5.9 – Segregation in networks
27002: 5.17 / 5.18 – Monitoring and logging
27002: 5.13 / 5.14 – Infrastructure hardening
Addendum

ISO 42001 does not directly address tenant isolation or infrastructure-level enforcement; it assumes these are handled via ISO 27001-style controls.

EU AI Act · Partial Gap
Article 15
Addendum

The full control would have to be added, because the EU AI Act does not address these concerns. Add: "Design, develop, deploy and configure applications and infrastructures such that CSP and AIC (tenant) user access and intra-tenant access is appropriately segmented and segregated, monitored and restricted from other tenants."

NIST AI 600-1 · Full Gap
No Mapping
Addendum

The AICM control requires appropriate environment segmentation from an authentication/authorization perspective; NIST AI 600-1 does not address infrastructure access segmentation.

BSI AIC4 · No Gap
C4 DM-02
C4 SR-06
C5 OPS-24
C5 COS-06
Addendum

N/A

AI-CAIQ questions (1)

I&S-06.1

Are applications and infrastructures designed, developed, deployed and configured such that tenant access is appropriately segmented, segregated, monitored, and restricted from other tenants?