AICM Atlas · CSA AI Controls Matrix
HRS · Human Resources
HRS-15 · AI-Specific

AI Acceptable Use

Specification

Establish, document, and communicate to all personnel the policies and procedures on the acceptable use of AI technologies within the organization.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Team and expertise

Development

Supply Chain

Evaluation

Not applicable

Deployment

AI Services supply chain

Delivery

Not applicable

Retirement

Not applicable

Ownership / SSRM

Physical Infrastructure (PI)

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM)/GenAI technology services or products they consume.

Model

Owned by the Model Provider (MP)

The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Shared across the supply chain

Shared control ownership refers to responsibilities and activities related to LLM security that are distributed across multiple stakeholders within the AI supply chain, including the Cloud Service Provider (CSP), Model Provider (MP), Orchestrated Service Provider (OSP), Application Provider (AP), and Customer (AIC). These controls require coordinated actions, communication, and governance across all involved parties to ensure their effectiveness.

Implementation guidelines

[All Actors]
1. Establish and document an AI acceptable use policy and procedure that addresses fair, inclusive, reliable & safe, secure & private, transparent and accountable use of AI technologies.

2. Ensure coverage of all AI technologies, aligning with organizational values and ethical standards.

3. Require the use of approved AI tools, maintain an updated list of authorized applications, follow procurement rules, and prohibit unauthorized data processing, security bypassing, unapproved automation, and deceptive use of AI.

4. Require comprehensive guidelines to ensure compliance and robust data protection, incorporating stringent security measures and thorough validation of AI outputs.

5. Communicate the policy through periodic training to all employees, contractors, interns, and third parties; require formal acknowledgments and mandate regular evaluations to assess understanding.

6. Establish mechanisms to monitor compliance, require regular policy reviews, and report on effectiveness.

7. Continuously improve the policy by incorporating feedback, trends, and regulatory changes.
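Guideline 3 above calls for an updated list of authorized applications and a prohibition on unauthorized data processing. A minimal sketch of how such an allowlist check might be automated is shown below; the registry contents, tool names, and data classifications are hypothetical illustrations, not part of the AICM control.

```python
"""Sketch of an AI-tool allowlist check (implementation guideline 3).

All tool names and data classes below are hypothetical examples; a real
deployment would back the registry with procurement records.
"""

from dataclasses import dataclass


@dataclass(frozen=True)
class AITool:
    name: str
    approved: bool
    data_classes_allowed: frozenset  # data classifications the tool may process


# Hypothetical registry of authorized applications.
REGISTRY = {
    "internal-llm-gateway": AITool(
        "internal-llm-gateway", True, frozenset({"public", "internal"})
    ),
    "public-chatbot": AITool("public-chatbot", False, frozenset()),
}


def is_use_permitted(tool_name: str, data_class: str) -> bool:
    """Allow use only for approved tools handling permitted data classes."""
    tool = REGISTRY.get(tool_name)
    return (
        tool is not None
        and tool.approved
        and data_class in tool.data_classes_allowed
    )
```

The check fails closed: unknown tools, unapproved tools, and unpermitted data classes are all rejected, which matches the guideline's prohibition on unauthorized data processing.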

Auditing guidelines

1. Confirm the cloud service provider has a documented AI Acceptable Use Policy (AI AUP) for AI services, approved by governance (e.g., prohibiting use of hosted models for mass surveillance or unauthorized facial recognition).

2. Ensure the AI AUP is accessible and clearly defines acceptable use of AI tools, APIs, and compute resources (e.g., restricting use of GPU instances for training models that violate laws or content policies).

3. Verify the policy is communicated through onboarding, documentation, and training (e.g., cloud ops teams trained to detect misuse, customer success teams trained to guide compliant deployments).

4. Assess enforcement mechanisms like usage monitoring and abuse detection (e.g., flagging excessive API calls or attempts to bypass safety filters, with consistent enforcement).

5. Check that the policy is regularly reviewed and updated (e.g., when launching new AI services or enabling access to more powerful foundation models).
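Auditing guideline 4 mentions flagging excessive API calls as one enforcement mechanism. One way such a monitor could work is a sliding-window counter, sketched below; the window size, call limit, and in-memory store are illustrative assumptions, not values from the control.

```python
"""Sketch of a usage monitor that flags excessive API calls
(auditing guideline 4). Thresholds are hypothetical.
"""

from collections import defaultdict, deque

WINDOW_SECONDS = 60        # assumed sliding-window length
MAX_CALLS_PER_WINDOW = 100  # assumed per-caller limit

# caller_id -> timestamps of calls seen within the current window
_calls: dict = defaultdict(deque)


def record_call(caller_id: str, now: float) -> bool:
    """Record one API call; return True if the caller should be flagged."""
    window = _calls[caller_id]
    window.append(now)
    # Drop calls that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_CALLS_PER_WINDOW
```

An auditor would look for evidence that flags like this feed a consistent enforcement process (alerting, throttling, or account review), not merely a log.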

Standards mappings

ISO 42001 · No Gap
42001: 5.3 Roles, responsibilities and authorities
42001: 7.3 Awareness
42001: A.3.2 AI Roles and responsibilities
42001: A.9.2 Processes for responsible use of AI systems
42001: A.9.3 Objectives for responsible use of AI systems
Addendum

N/A

EU AI Act · Partial Gap
Article 11
Article 53(1)
Addendum

The EU AI Act does not mandate an organization-wide acceptable use policy or its communication to all personnel.

NIST AI 600-1 · No Gap
GV-6.1-010
GV-1.4-002
GV-3.2-003
GV-1.3-004
A.1.2
MP-4.1-003
Addendum

N/A

BSI AIC4 · Partial Gap
C5 HR-03
C5 AM-02
Addendum

No AIC4 control addresses the HRS-15 topic of acceptable use of AI for personnel.

AI-CAIQ questions (1)

HRS-15.1

Are the policies and procedures on the acceptable use of AI technologies within the organization established, documented, and communicated to all personnel?