AICM Atlas · CSA AI Controls Matrix
LOG · Logging and Monitoring
LOG-09 · Cloud & AI Related

Log Protection

Specification

Protect audit records from unauthorized access, modification, and deletion.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Data storage

Development

Training

Evaluation

Evaluation, Validation/Red Teaming, Re-evaluation

Deployment

Orchestration, AI Services supply chain, AI applications

Delivery

Operations, Maintenance, Continuous monitoring, Continuous improvement

Retirement

Archiving, Data deletion, Model disposal

Ownership / SSRM

Physical Infrastructure (PI)

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Model

Owned by the Model Provider (MP)

The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Owned by the Application Provider (AP)

The Application Provider (AP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The AP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications for users/customers, the AP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The AP is accountable for carrying out due diligence on its upstream providers (e.g., MPs, Orchestrated Services) to verify that they implement the control as it relates to the service/product developed and offered by the AP. These providers build and offer end-user applications that leverage generative AI models for specific tasks such as content creation, chatbots, code generation, and enterprise automation. These applications are often delivered as software-as-a-service (SaaS) solutions. These providers focus on user interfaces, application logic, domain-specific functionality, and overall user experience rather than underlying model development. Examples: OpenAI (GPTs, Assistants), Zapier, CustomGPT, Microsoft Copilot (integrated into Office products), Jasper (AI-driven content generation), Notion AI (AI-enhanced productivity tools), Adobe Firefly (AI-generated media), and AI-powered customer service solutions like Amazon Rufus, as well as any organization that develops its AI-based application internally.

Implementation guidelines

[All Actors]
1. Secure Logging Mechanisms: Implement robust access control and encryption (at rest and in transit) for AI-related logs covering training, validation, and inference. Capture relevant metadata such as model versions, data sources, input parameters, and system events to facilitate in-depth oversight.
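
The metadata capture described above can be sketched with Python's standard `logging` module and a JSON formatter. This is a minimal illustration, not a prescribed implementation; the field names (`model_version`, `data_source`, `input_tokens`) and the example values are assumptions for demonstration.

```python
import json
import logging
import time

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object, merging AI metadata."""
    def format(self, record):
        payload = {
            "ts": time.time(),
            "level": record.levelname,
            "event": record.getMessage(),
        }
        # Structured metadata attached via logging's `extra` mechanism.
        payload.update(getattr(record, "ai_meta", {}))
        return json.dumps(payload, sort_keys=True)

logger = logging.getLogger("ai_audit")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Example: record an inference event with its model version and data source.
logger.info(
    "inference_request",
    extra={"ai_meta": {
        "model_version": "v2.3.1",              # illustrative
        "data_source": "s3://example/dataset",  # illustrative path
        "input_tokens": 512,
    }},
)
```

In practice the handler would ship these records over an encrypted channel to a centrally access-controlled log store rather than to the console.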

2. Immutable and Tamper-Evident Storage: Store logs in an immutable, tamper-evident repository to protect against unauthorized modification or deletion. Employ cryptographic integrity checks or other mechanisms that enable prompt detection of log tampering.

3. Integration with Security Monitoring: Correlate AI logs with broader security monitoring systems to detect anomalies (e.g., unexpected inference spikes, unapproved dataset uploads). Define workflows to escalate suspicious events for investigation and remediation.
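
An "unexpected inference spike" check can be as simple as comparing the latest request count against a rolling baseline. The sketch below uses a z-score threshold; the window, threshold, and synthetic counts are illustrative assumptions, and a real deployment would forward flagged events to the SIEM rather than return a boolean.

```python
import statistics

def detect_spike(baseline_counts, latest, z_threshold=3.0):
    """Flag `latest` if it sits more than z_threshold sigmas above baseline."""
    mean = statistics.mean(baseline_counts)
    stdev = statistics.pstdev(baseline_counts) or 1.0  # avoid division by zero
    return (latest - mean) / stdev > z_threshold

# Synthetic requests-per-minute baseline.
baseline = [100, 110, 95, 105, 98, 102]
assert not detect_spike(baseline, 115)   # within normal variation
assert detect_spike(baseline, 400)       # far above baseline -> escalate
```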

4. Privacy-Preserving Practices: Apply techniques like pseudonymization or data minimization to reduce exposure of sensitive information within AI logs. Align log retention periods and handling with applicable data protection requirements.
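
Pseudonymization of identifiers in logs is often done with a keyed hash (HMAC): the same user maps to a stable pseudonym for correlation, but the raw identifier cannot be recovered without the secret key. This is a minimal sketch; key storage and rotation are out of scope, and the hard-coded key is purely illustrative.

```python
import hashlib
import hmac

# Assumption: in production this key lives in a secrets manager and rotates.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymize(user_id: str) -> str:
    """Stable, non-reversible pseudonym for a user identifier."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"event": "inference", "user": "alice@example.com", "tokens": 512}
safe = {**record, "user": pseudonymize(record["user"])}
assert safe["user"] != record["user"]                       # raw ID removed
assert safe["user"] == pseudonymize("alice@example.com")    # stable mapping
```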

5. Regular Review and Alerting: Conduct scheduled reviews of AI logs for unauthorized model changes, suspicious hyperparameter modifications, or attempts to bypass controls. Configure automated alerts to notify incident response teams of critical events (e.g., potential model tampering, unauthorized data access).
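
Automated alerting on critical events can start as a simple rule scan over parsed log records. In this sketch, the event names and the `notify` hook are placeholders for a real incident-response or SIEM integration.

```python
# Event names are illustrative assumptions, not a standard taxonomy.
CRITICAL_EVENTS = {"model_tamper", "unauthorized_data_access",
                   "log_delete_attempt"}

alerts = []

def notify(record):
    # Placeholder: in practice, push to an incident-response queue or SIEM.
    alerts.append(record)

def review(records):
    """Escalate records that match critical events or lack authorization."""
    for rec in records:
        if rec.get("event") in CRITICAL_EVENTS or not rec.get("authorized", True):
            notify(rec)

review([
    {"event": "inference", "authorized": True},
    {"event": "model_tamper", "authorized": False},
    {"event": "hyperparam_change", "authorized": False},
])
assert len(alerts) == 2   # tampering and unauthorized change escalated
```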

Auditing guidelines

1. Inquiring with Control Owners

1.1 Conduct interviews with personnel responsible for protecting cloud infrastructure audit records from unauthorized access, modification, and deletion to understand their implementation of security controls and monitoring procedures for infrastructure logs and customer workload records. Verify their understanding of technical safeguards for cloud logging systems, access control mechanisms for customer isolation protection, and procedures for detecting and responding to unauthorized attempts to access, modify, or delete cloud infrastructure audit records.

2. Inspecting Records and Documents

2.1 Confirm technical safeguards are in place to enforce read-only permissions on cloud infrastructure audit logs containing customer workloads and resource allocation data.

2.2 Review documentation for secure cloud infrastructure log storage practices, including encryption of customer data logs and access control for infrastructure operation records.

2.3 Validate that cloud infrastructure logs are retained in tamper-evident storage formats or solutions to protect customer isolation and service availability.

2.4 Check if cloud infrastructure logs are segregated from operational logs and protected independently with customer-focused controls.

2.5 Verify access controls for cloud infrastructure logs follow least privilege and RBAC principles, preventing unauthorized access to customer workload data and infrastructure configurations.

2.6 Review the audit trail for access attempts to cloud infrastructure log storage locations containing sensitive customer and infrastructure information.

2.7 Confirm periodic testing or review of cloud infrastructure log protection mechanisms is performed to ensure customer isolation and infrastructure security compliance.

2.8 Examine backup and recovery procedures for cloud infrastructure audit records to ensure protection extends to archived customer workload logs and infrastructure operation data.

2.9 Validate monitoring and alerting mechanisms for detecting unauthorized access attempts to cloud infrastructure logs, customer data modification attempts, or deletion attempts on infrastructure records.

2.10 Review incident response procedures specifically related to cloud infrastructure audit record compromise, customer isolation breaches, or suspected unauthorized access to infrastructure logging systems.

2.11 Verify logs across all cloud-hosted services are protected using encryption, RBAC, and WORM settings.

2.12 Confirm multi-tenant isolation mechanisms prevent unauthorized access to logs.

2.13 Check that logs are replicated across regions without compromising their integrity.

2.14 Validate continuous monitoring tools alert on unauthorized access attempts to logs.

2.15 Confirm logs of audit services themselves (e.g., CloudTrail, StackDriver) are not modifiable.

2.16 Review lifecycle policies to ensure deletion controls follow compliance retention requirements.

2.17 Ensure third-party SIEM integrations do not compromise protection of native logs.
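
Several of the checks above (read-only storage, tamper evidence, integrity of archived records) lend themselves to automation. The sketch below verifies that a log file is read-only and matches a stored digest; filesystem permissions stand in for cloud WORM/object-lock settings, and the whole routine is an illustration rather than a complete audit tool.

```python
import hashlib
import os
import stat
import tempfile

def audit_log_file(path, expected_sha256):
    """Check a log file is not writable and matches its recorded digest."""
    mode = os.stat(path).st_mode
    writable = bool(mode & (stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))
    with open(path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    return {"read_only": not writable, "integrity_ok": digest == expected_sha256}

# Demonstration against a throwaway file.
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".log") as fh:
    fh.write("2024-01-01T00:00:00Z inference model=v2\n")
    path = fh.name

with open(path, "rb") as fh:
    expected = hashlib.sha256(fh.read()).hexdigest()

os.chmod(path, 0o444)                 # simulate WORM: read-only
result = audit_log_file(path, expected)
assert result == {"read_only": True, "integrity_ok": True}

os.chmod(path, 0o644)                 # cleanup
os.unlink(path)
```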

Standards mappings

ISO 42001 · No Gap
ISO 42001 7.5.3
ISO 42001 B.6.2.8
ISO 27001 A.5.37
ISO 27002 A.8.15
Addendum

N/A

EU AI Act · Full Gap
No Mapping
Addendum

No implicit or explicit mention is made in the EU AI Act of the requirements set by the control.

NIST AI 600-1 · Full Gap
No Mapping
Addendum

NIST AI 600-1 does not address protecting audit records from unauthorized access, modification, and deletion.

BSI AIC4 · No Gap
C4 DM-02
C4 SR-06
C5 OPS-12
C5 OPS-14
C5 OPS-16
Addendum

N/A

AI-CAIQ questions (1)

LOG-09.1

Are audit records protected from unauthorized access, modification, and deletion?