AICM Atlas · CSA AI Controls Matrix
DCS · Datacenter Security
DCS-07 · Cloud-Specific

Controlled Physical Access Points

Specification

Design and implement physical security perimeters to safeguard personnel, data, and information systems.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Data storage

Development

Not applicable

Evaluation

Not applicable

Deployment

Not applicable

Delivery

Operations

Retirement

Not applicable

Ownership / SSRM

Physical Infrastructure

Owned by the Cloud Service Provider (CSP)

The Cloud Service Provider (CSP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with cloud computing (processing, storage, and networking) technologies in the context of the services or products they develop and offer. The CSP is responsible and accountable for implementing the control within its own infrastructure/environment. The CSP is responsible for enabling the customer and/or upstream partner to implement/configure the control within their risk management approach. The CSP is accountable for ensuring that its providers upstream implement the control related to the service/product developed and offered by the CSP.

Model

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Orchestrated

Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Owned by the Application Provider (AP)

The Application Provider (AP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The AP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications for users/customers, the AP is responsible for enabling the customer and/or upstream partner to implement/configure the control within their risk management approach. The AP is accountable for carrying out due diligence on its upstream providers (e.g., MPs, Orchestrated Services) to verify that they implement the control as it relates to the service/product developed and offered by the AP. These providers build and offer end-user applications that leverage generative AI models for specific tasks such as content creation, chatbots, code generation, and enterprise automation. These applications are often delivered as software-as-a-service (SaaS) solutions. These providers focus on user interfaces, application logic, domain-specific functionality, and overall user experience rather than underlying model development. Examples: OpenAI (GPTs, Assistants), Zapier, CustomGPT, Microsoft Copilot (integrated into Office products), Jasper (AI-driven content generation), Notion AI (AI-enhanced productivity tools), Adobe Firefly (AI-generated media), and AI-powered customer service solutions like Amazon Rufus, as well as any organization that develops its AI-based application internally.

Implementation guidelines

[All Actors Except AIC]
1. Providers should implement physical security measures to safeguard personnel, data, and information systems involved in tasks such as data processing, inference, and training.

Auditing guidelines

1. Examine the policy relating to physical security perimeters.

2. Examine the list of area types in the organization and the classification of each.

3. Determine if there are appropriate physical security barriers and if monitoring exists between areas.

Standards mappings

ISO 42001 · No Gap
42001: A.2.3 Alignment with other organizational policies
42001: A.4.2 Resource documentation
27001: A.7.1 Physical security perimeters
27001: A.7.2 Physical entry
27001: A.7.3 Securing offices, rooms and facilities
27001: A.7.4 Physical security monitoring
27002: 7.1 Physical security perimeters
27002: 7.2 Physical entry
27002: 7.3 Securing offices, rooms and facilities
27002: 7.4 Physical security monitoring
Addendum

N/A

EU AI Act · Full Gap
No Mapping
Addendum

Ensure that all locations where high-risk AI systems, their supporting infrastructure, and sensitive data are housed are protected by controlled physical access points. Restrict access to authorized personnel only. List physical threats and access controls as factors to consider when identifying, assessing, and mitigating risks to the AI system.

NIST AI 600-1 · Full Gap
No Mapping
Addendum

There are no NIST AI 600-1 compliance action items that address physical security or establishing physical security perimeters around business areas.

BSI AIC4 · No Gap
PS-03
Addendum

N/A

AI-CAIQ questions (1)

DCS-07.1

Are physical security perimeters designed and implemented to safeguard personnel, data, and information systems?