AICM Atlas · CSA AI Controls Matrix
LOG · Logging and Monitoring
LOG-15 · AI-Specific

Output Monitoring

Specification

Log and monitor all output events (content and metadata) to enable auditing and reporting on usage of AI models.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/service failure
Insecure supply chain
Insecure apps/plugins
Denial of service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation: Data storage
Development: Guardrails
Evaluation: Evaluation, Validation/Red Teaming, Re-evaluation
Deployment: Orchestration, AI Services supply chain, AI applications
Delivery: Operations, Maintenance, Continuous monitoring, Continuous improvement
Retirement: Archiving, Data deletion

Ownership / SSRM

PI (Physical Infrastructure)

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Model

Owned by the Model Provider (MP)

The Model Provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Owned by the Application Provider (AP)

The Application Provider (AP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The AP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications for users/customers, the AP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The AP is accountable for carrying out due diligence on its upstream providers (e.g., MPs, Orchestrated Services) to verify that they implement the control as it relates to the service/product developed and offered by the AP. These providers build and offer end-user applications that leverage generative AI models for specific tasks such as content creation, chatbots, code generation, and enterprise automation. These applications are often delivered as software-as-a-service (SaaS) solutions. These providers focus on user interfaces, application logic, domain-specific functionality, and overall user experience rather than underlying model development. Examples: OpenAI (GPTs, Assistants), Zapier, CustomGPT, Microsoft Copilot (integrated into Office products), Jasper (AI-driven content generation), Notion AI (AI-enhanced productivity tools), Adobe Firefly (AI-generated media), and AI-powered customer service solutions like Amazon Rufus, as well as any organization that develops its AI-based application internally.

Implementation guidelines

[All Actors]
1. Comprehensive Output Logging: Capture and store all AI-generated outputs (content and metadata) in structured logs. Include timestamps, user IDs, model versions, and relevant request context for full traceability (see the logging sketch after this list).

2. Secure Storage and Retention: Encrypt logged data in transit and at rest. Define clear retention and purge policies, ensuring logs remain accessible for compliance or incident investigations.

3. Monitoring and Real-Time Alerts: Implement automated pipelines (e.g., SIEM tools) to analyze AI outputs for anomalies, bias, or disallowed content. Configure alerts for high-risk or unusual output events, escalating them to incident response teams when necessary (see the monitoring sketch after this list).

4. Output Validation and Risk Oversight: Periodically review logs and apply additional validation layers for critical or compliance-sensitive use cases (e.g., financial approvals, healthcare insights).

5. Collaboration and Incident Response: Define escalation paths for suspicious or abnormal AI outputs. Coordinate with providers (or other stakeholders) to share logs, investigate root causes, and remediate issues promptly.

6. Audit Trails and Reporting: Maintain detailed audit trails linking outputs to user actions, input data, and model identifiers. Regularly generate usage reports, aligning with regulatory requirements and organizational standards.
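
The control does not prescribe a record format, so the following Python sketch shows only one way an Application or Model Provider might implement item 1: each output event is written as a structured, timestamped JSON record. The logger name `ai_output_audit` and field names such as `user_id`, `model_version`, and `request_context` are illustrative assumptions, not part of the AICM control.

```python
# Minimal sketch of structured output-event logging (illustrative schema).
import hashlib
import json
import logging
import uuid
from datetime import datetime, timezone

logger = logging.getLogger("ai_output_audit")   # assumed logger name
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())      # swap for a file/SIEM handler in practice

def log_output_event(user_id: str, model_version: str,
                     prompt: str, output: str,
                     request_context: dict) -> str:
    """Write one structured audit record for an AI-generated output."""
    event_id = str(uuid.uuid4())
    record = {
        "event_id": event_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model_version": model_version,
        # Hashes link the record to content stored elsewhere without
        # copying potentially sensitive text into every log sink.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "output_length": len(output),
        "request_context": request_context,
    }
    logger.info(json.dumps(record))
    return event_id

# Example usage
log_output_event(
    user_id="u-123",
    model_version="example-model-2024-06",
    prompt="Summarize the quarterly report.",
    output="The report shows ...",
    request_context={"channel": "api", "endpoint": "/v1/chat"},
)
```

Encryption in transit and at rest, retention and purge schedules, and audit-trail reporting (items 2 and 6) would sit in the log pipeline and storage layer behind the handler rather than in this application code.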
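
For item 3, the sketch below illustrates a monitoring pass over the structured records above: it scans for disallowed content patterns and abnormal output size and raises an alert. The regex patterns, the size threshold, and the `raise_alert` hook are placeholders for whatever detection rules and SIEM or incident-response integration the organization actually uses.

```python
# Illustrative output-monitoring pass over newline-delimited JSON audit records.
# Patterns, thresholds, and the alert hook are assumptions to be adapted.
import json
import re
from typing import Iterable

DISALLOWED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # e.g. US-SSN-like strings
    re.compile(r"(?i)api[_-]?key\s*[:=]"),   # possible credential leakage
]
MAX_OUTPUT_LENGTH = 20_000                    # example threshold

def raise_alert(event: dict, reason: str) -> None:
    """Placeholder escalation hook (SIEM rule, ticket, pager, ...)."""
    print(f"ALERT [{reason}] event_id={event.get('event_id')}")

def monitor_records(lines: Iterable[str]) -> None:
    """Flag high-risk output events in JSON audit records (one per line)."""
    for line in lines:
        event = json.loads(line)
        # Raw text is only available if the pipeline logs it; hashed-only
        # records would need the content fetched from its primary store.
        output_text = event.get("output_text", "")
        if event.get("output_length", 0) > MAX_OUTPUT_LENGTH:
            raise_alert(event, "abnormal_output_size")
        for pattern in DISALLOWED_PATTERNS:
            if pattern.search(output_text):
                raise_alert(event, "disallowed_content")
                break

# Example: scan a newline-delimited JSON log file
# with open("ai_output_audit.log") as f:
#     monitor_records(f)
```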

Auditing guidelines

1. Inquiring with Control Owners

1.1 Conduct interviews with personnel responsible for managing time synchronization across AI model development and distribution systems to understand their implementation of reliable time sources for model training logs, research activities, and model versioning. Verify their understanding of time synchronization requirements for training infrastructure, model distribution platforms, and procedures for maintaining accurate timestamps across model lifecycle activities.

2. Inspecting Records and Documents

2.1 Confirm that AI model development systems handling training processes and model distribution use a centralized time source.

2.2 Verify implementation of Network Time Protocol (NTP) or equivalent time synchronization protocols across model training clusters and research infrastructure.

2.3 Check synchronization logs to validate accurate timestamping across model training processes, evaluation metrics, and distribution activities.

2.4 Assess whether unsynchronized model development systems trigger alerts or errors that could affect training reproducibility or research integrity.

2.5 Verify clock drift thresholds are defined and monitored for model training infrastructure and model serving systems.

2.6 Confirm the accuracy of timestamps in model development logs critical for research reproducibility and intellectual property protection.

2.7 Validate that incident response records for model-related issues reference consistent timestamps across training sessions and model deployment events.

2.8 Examine documentation of reliable time source configuration for model development environments and backup time synchronization mechanisms.

2.9 Review time synchronization policies covering model training systems, research platforms, and model distribution infrastructure.

2.10 Validate that time source reliability is monitored for model development activities and backup time sources are available to ensure research continuity.

2.11 Ensure all model-serving environments and pipelines synchronize with a reliable time source.

2.12 Verify timestamp alignment across model logs, inference requests, and security logs (see the sketch after this list).

2.13 Check whether clock sync mechanisms are included in deployment templates.

2.14 Review system logs for anomalies due to clock mismatches during model training or serving.

2.15 Confirm configuration compliance for time synchronization policies.

2.16 Assess whether timing discrepancies impact forensic reconstruction.
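
Several of the checks above (2.3, 2.5, 2.12) can be partly automated. The sketch below compares timestamps for the same request ID across two log sources and reports drift beyond a tolerance; the field names (`request_id`, `timestamp`), file names, and the five-second threshold are assumptions for illustration, not values drawn from the control.

```python
# Illustrative timestamp-alignment check across two log sources, e.g. model
# inference logs versus gateway/security logs. Assumes both sources emit
# newline-delimited JSON with ISO 8601 timestamps of the same kind
# (both timezone-aware or both naive).
import json
from datetime import datetime

DRIFT_THRESHOLD_SECONDS = 5.0   # example tolerance

def load_timestamps(path: str) -> dict:
    """Map request_id -> parsed timestamp for one log file."""
    timestamps = {}
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            timestamps[record["request_id"]] = datetime.fromisoformat(record["timestamp"])
    return timestamps

def report_drift(model_log: str, security_log: str) -> None:
    """Print request IDs whose timestamps diverge across the two sources."""
    model_ts = load_timestamps(model_log)
    security_ts = load_timestamps(security_log)
    for request_id in sorted(model_ts.keys() & security_ts.keys()):
        drift = abs((model_ts[request_id] - security_ts[request_id]).total_seconds())
        if drift > DRIFT_THRESHOLD_SECONDS:
            print(f"Timestamp drift of {drift:.1f}s for request {request_id}")

# Example usage:
# report_drift("model_inference.log", "gateway_security.log")
```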

Standards mappings

ISO 42001: No Gap
ISO 42001 B.6.2.8
ISO 27001 A.5.37
Addendum

N/A

EU AI Act: Partial Gap
Article 15 (5)
Addendum

In contrast to the AICM control, which provides a greater level of detail, the EU AI Act establishes more general requirements that may be interpreted as encompassing the aspects addressed by this control.

NIST AI 600-1: No Gap
MP-5.1-002
Addendum

N/A

BSI AIC4: No Gap
C4 BC-03
C4 RE-02
C4 RE-03
C5 OPS-11
C5 PI-01
Addendum

N/A

AI-CAIQ questions (1)

LOG-15.1

Are all output events (content and metadata) logged and monitored to enable auditing and reporting on the usage of AI models?