AICM Atlas · CSA AI Controls Matrix
LOG · Logging and Monitoring
LOG-05 · Cloud & AI Related

Audit Logs Monitoring and Response

Specification

Monitor security audit logs to detect activity outside of typical or expected patterns. Establish and follow a defined process to review and take appropriate and timely actions on detected anomalies.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Data storage, Resource provisioning

Development

Guardrails

Evaluation

Re-evaluation, Validation/Red Teaming

Deployment

Orchestration, AI Services supply chain, AI applications

Delivery

Operations, Continuous monitoring, Continuous improvement

Retirement

Not applicable

Ownership / SSRM

PI

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Model

Owned by the Model Provider (MP)

The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Owned by the Application Provider (AP)

The Application Provider (AP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The AP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications for users/customers, the AP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The AP is accountable for carrying out due diligence on its upstream providers (e.g., MPs, Orchestrated Services) to verify that they implement the control as it relates to the services/products developed and offered by the AP. These providers build and offer end-user applications that leverage generative AI models for specific tasks such as content creation, chatbots, code generation, and enterprise automation. These applications are often delivered as software-as-a-service (SaaS) solutions. These providers focus on user interfaces, application logic, domain-specific functionality, and overall user experience rather than underlying model development. Examples: OpenAI (GPTs, Assistants), Zapier, CustomGPT, Microsoft Copilot (integrated into Office products), Jasper (AI-driven content generation), Notion AI (AI-enhanced productivity tools), Adobe Firefly (AI-generated media), and AI-powered customer service solutions like Amazon Rufus, as well as any organization that develops its AI-based application internally.

Implementation guidelines

[All Actors]
1. Establish behavioral baselines for each system layer (infrastructure, orchestration, application, model endpoints) to distinguish normal from anomalous activity.

2. Continuously ingest and analyze audit logs with rule-based and/or ML-driven detection engines.

3. Generate real-time alerts when events deviate from baseline or match high-severity indicators; include the user, asset, action, timestamp, and source (see the sketch after this list).

4. Route alerts to an incident-response workflow with defined roles, investigation steps, containment / remediation actions and escalation timelines.

5. Document every anomaly investigation (findings, root cause, actions taken, lessons learned) and link records to the corresponding log events.

6. Review and tune detection logic, thresholds and playbooks on a regular basis, or immediately after major incidents, architecture changes or new threat intelligence.
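The sketch below is an illustrative Python example of guidelines 1-5, not part of the AICM control text. It assumes audit events carrying the user, asset, action, timestamp, and source fields named in guideline 3; all identifiers (AuditEvent, BASELINES, HIGH_SEVERITY_ACTIONS, evaluate) are hypothetical placeholders rather than the API of any particular SIEM or logging platform.

```python
# Illustrative only: a minimal baseline-plus-rules anomaly check over audit log
# events, loosely following implementation guidelines 1-5 above.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AuditEvent:
    user: str
    asset: str
    action: str
    timestamp: str   # ISO 8601
    source: str      # e.g., originating IP or service

# Guideline 1: per-layer baselines, here expressed as assumed normal hourly
# action counts per (user, asset) pair. Values are placeholders.
BASELINES = {
    ("alice", "model-endpoint"): 50,
    ("batch-svc", "storage"): 500,
}

# Guideline 3: high-severity indicators that always trigger an alert.
HIGH_SEVERITY_ACTIONS = {"privilege_escalation", "tenant_boundary_violation"}

def evaluate(event: AuditEvent, observed_hourly_count: int) -> dict | None:
    """Return an alert record if the event deviates from baseline or matches
    a high-severity indicator; otherwise None."""
    baseline = BASELINES.get((event.user, event.asset))
    over_baseline = baseline is not None and observed_hourly_count > 3 * baseline
    high_severity = event.action in HIGH_SEVERITY_ACTIONS

    if not (over_baseline or high_severity):
        return None

    # The alert carries user, asset, action, timestamp, and source (guideline 3)
    # so the investigation record can be linked back to the log event (guideline 5).
    return {
        "detected_at": datetime.now(timezone.utc).isoformat(),
        "severity": "high" if high_severity else "medium",
        "reason": "high_severity_indicator" if high_severity else "baseline_deviation",
        **asdict(event),
    }

if __name__ == "__main__":
    event = AuditEvent("alice", "model-endpoint", "privilege_escalation",
                       "2025-01-01T10:00:00Z", "10.0.0.7")
    alert = evaluate(event, observed_hourly_count=12)
    if alert:
        # Guideline 4: hand the alert to the incident-response workflow
        # (printed here; in practice routed to ticketing/SOAR tooling).
        print(json.dumps(alert, indent=2))
```

In practice this logic would live in SIEM/SOAR tooling rather than a standalone script; the sketch only illustrates the data an alert is expected to carry and how baseline and rule-based checks can be combined.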

Auditing guidelines

1. Inquiring with Control Owners

1.1 Conduct interviews with personnel responsible for monitoring cloud infrastructure security audit logs and detecting anomalous resource usage patterns to understand their anomaly detection processes for AI processing infrastructure and customer workloads. Verify their understanding of baseline infrastructure behavior, customer usage patterns, and the defined process for reviewing and taking timely actions on detected security anomalies in cloud infrastructure operations.

2. Inspecting Records and Documents

2.1 Verify cloud infrastructure logs are monitored using automated systems for compute resources, storage access, network traffic, and customer workload activities.

2.2 Confirm alerts are configured for events such as unauthorized resource access, abnormal compute usage, suspicious network patterns, or customer workload boundary violations.

2.3 Check documented procedures exist for infrastructure log anomaly triage and root cause analysis of cloud security incidents.

2.4 Review monitoring dashboards for infrastructure layers, resource utilization patterns, customer activities, and access history.

2.5 Validate response actions are documented and tested for infrastructure-specific security incidents through simulations or real events.

2.6 Confirm evidence retention processes are triggered upon infrastructure security alert detection.

2.7 Examine documented baseline patterns for normal infrastructure operations and criteria used to identify security anomalies in compute usage, storage access, and customer workload activities.

2.8 Review the defined process for timely review and action on detected infrastructure anomalies, including escalation to cloud operations teams and response timeframes.

2.9 Validate that cloud infrastructure security audit log monitoring covers hypervisor activities, GPU/TPU usage, network communications, and customer tenant isolation events.

2.10 Confirm periodic review and tuning of infrastructure anomaly detection thresholds to reduce false positives while detecting sophisticated infrastructure attacks and customer security violations.

2.11 Confirm centralized log monitoring systems aggregate and analyze logs from all infrastructure and services.

2.12 Verify alerts are generated for privilege escalations, unauthorized API access, and failed access attempts.

2.13 Validate incident response teams are notified and take action per documented SLAs.

2.14 Check logs are integrated with automated remediation tools where applicable.

2.15 Review historical incident timelines to confirm log-based detection led to timely resolution (see the sketch after this list).

2.16 Confirm CSP's monitoring framework is tested regularly through tabletop or real-world exercises.

2.17 Ensure multi-tenant log monitoring is segregated with independent alert channels per customer.
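As one illustration of auditing steps 2.13 and 2.15, the hedged sketch below checks detection-to-resolution timelines against a documented SLA using exported incident records. The field names (detected, resolved) and the four-hour SLA value are assumptions for the example, not AICM requirements, and would need to match the organization's actual ticketing or SIEM export format.

```python
# Illustrative only: flag incidents whose detection-to-resolution time
# exceeded the documented SLA (auditing steps 2.13 and 2.15).
from datetime import datetime, timedelta

SLA = timedelta(hours=4)  # assumed response SLA; substitute the documented value

# Placeholder export; a real review would load records from the ticketing/SIEM system.
incidents = [
    {"id": "INC-101", "detected": "2025-01-01T10:00:00", "resolved": "2025-01-01T12:30:00"},
    {"id": "INC-102", "detected": "2025-01-02T09:00:00", "resolved": "2025-01-02T15:10:00"},
]

def breaches(records, sla=SLA):
    """Yield (incident id, elapsed time) for incidents that exceeded the SLA."""
    for rec in records:
        detected = datetime.fromisoformat(rec["detected"])
        resolved = datetime.fromisoformat(rec["resolved"])
        if resolved - detected > sla:
            yield rec["id"], resolved - detected

for incident_id, elapsed in breaches(incidents):
    print(f"{incident_id}: resolved in {elapsed}, exceeding the {SLA} SLA")
```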

Standards mappings

ISO 42001 · Partial Gap
No Mapping for ISO 42001
ISO 27001 A.8.16
Addendum

No ISO 42001 control maps to the LOG-05 topic.

EU AI Act · Partial Gap
Article 14 (4)
Article 72 (1)
Article 72 (2)
Addendum

In contrast to the AICM control, which provides a greater level of detail, the EU AI Act establishes more general requirements that encompass the aspects addressed by the control. Some aspects detailed in the control are not explicitly mentioned in the European regulation (e.g., monitoring of security audit logs, and establishing and following a defined process to review and act on detected anomalies in AI systems in a timely manner).

NIST AI 600-1 · Partial Gap
GV-2.1-001
GV-6.1-005
GV-6.1-009
GV-6.2-004
MP-4.1-001
MS-2.6-005
MG-4.1-006
MG-4.1-007
Addendum

The gap concerns the monitoring of audit logs specifically.

BSI AIC4 · No Gap
C4 SR-06
C5 OPS-13
Addendum

N/A

AI-CAIQ questions (2)

LOG-05.1

Are security audit logs monitored to detect activity outside of typical or expected patterns?

LOG-05.2

Is a process for reviewing and taking appropriate and timely actions on detected anomalies defined, established, and followed?