CSA AI Controls Matrix (AICM Atlas)
MDS · Model Security
MDS-02 · AI-Specific

Model Artifact Scanning

Specification

Define, implement, and evaluate policies, procedures, and technical measures for the scanning of model artifacts for vulnerabilities and attacks at each step of the service lifecycle and at each handover point. Regularly review and update policies, procedures, and technical measures to address model artifact scanning.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Team and expertise

Development

Design, Training, Supply Chain

Evaluation

Evaluation, Validation/Red Teaming, Re-evaluation

Deployment

AI Services supply chain, AI applications

Delivery

Operations, Maintenance

Retirement

Model disposal

Ownership / SSRM

Physical Infrastructure (PI)

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Model

Owned by the Model Provider (MP)

The Model Provider (MP) designs, develops, and implements the control as part of its services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Owned by the Application Provider (AP)

The Application Provider (AP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The AP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications for users/customers, the AP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The AP is accountable for carrying out due diligence on its upstream providers (e.g., MPs, Orchestrated Services) to verify that they implement the control as it relates to the service/product developed and offered by the AP. These providers build and offer end-user applications that leverage generative AI models for specific tasks such as content creation, chatbots, code generation, and enterprise automation. These applications are often delivered as software-as-a-service (SaaS) solutions. These providers focus on user interfaces, application logic, domain-specific functionality, and overall user experience rather than underlying model development. Examples: OpenAI (GPTs, Assistants), Zapier, CustomGPT, Microsoft Copilot (integrated into Office products), Jasper (AI-driven content generation), Notion AI (AI-enhanced productivity tools), Adobe Firefly (AI-generated media), AI-powered customer service solutions like Amazon Rufus, as well as any organization that develops its AI-based application internally.

Implementation guidelines

[Shared Responsibilities (Applicable to MP, AP)]
1. Determine which scanners will be used: Periodically research which attacks have been proven detectable in model files. Ensure that the scanners in use stay up to date on these attacks and are tested for accuracy. A minimal illustration of what such a scanner checks appears after this list.

2. Periodic checks: Commensurate with risk, deployed models in production should be scanned periodically, especially if they run inference on user data.

3. Documentation: Keep logs of model scan results, including scan configuration, model identifier, timestamp, lifecycle stage, and results. Derive overall metrics from these logs, such as issue detection rate and any errors encountered (see the logging sketch after this list).

4. Resolution processes: Define a response plan for any issues found. For critical issues, this should be immediate model quarantine (stopping the model in the development cycle or removing it from production) and launching an investigation. For lower-severity issues, it may involve limiting the model's deployment scope until the issues are investigated. Applications relying on a compromised model can be rolled back to a previous model version with a verified clean scan (see the quarantine sketch after this list).
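
A minimal sketch of what such a scanner checks, assuming the artifact is a raw pickle stream (common for PyTorch-era checkpoints; zipped formats such as .pt archives would need their embedded .pkl entries scanned instead). The module deny-list is illustrative and intentionally small; a maintained scanner should be preferred over this in practice.

import pickletools

# Illustrative deny-list: modules whose import from inside a pickle
# stream is a strong indicator of an embedded code-execution payload.
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins",
                      "sys", "socket", "shutil"}

def scan_pickle(path: str) -> list[str]:
    """Return a list of findings for one pickle-based model artifact."""
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    try:
        for opcode, arg, pos in pickletools.genops(data):
            if opcode.name == "GLOBAL" and arg:
                # GLOBAL's argument is "module name" joined by a space.
                module = str(arg).split(" ", 1)[0].split(".")[0]
                if module in SUSPICIOUS_MODULES:
                    findings.append(f"suspicious import {arg!r} at byte {pos}")
            elif opcode.name == "STACK_GLOBAL":
                # Module/attribute come off the stack; real scanners resolve
                # them, this sketch only flags the opcode for manual review.
                findings.append(f"STACK_GLOBAL at byte {pos}: needs review")
    except Exception as exc:  # truncated or non-pickle data
        findings.append(f"unparseable pickle stream: {exc}")
    return findings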
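
The log record suggested in item 3 might look like the following; the field names and JSON-lines layout are assumptions for illustration, not a prescribed schema. Issue detection rate then falls out of the log as the share of records whose verdict is "fail".

import datetime
import hashlib
import json

def log_scan(path: str, stage: str, scanner: str, findings: list[str],
             log_file: str = "model_scans.jsonl") -> None:
    """Append one structured scan record for later aggregation."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": digest,          # content hash doubles as a stable ID
        "artifact_path": path,
        "lifecycle_stage": stage,    # e.g. "training", "deployment"
        "scanner": scanner,          # tool name and version used
        "findings": findings,
        "verdict": "fail" if findings else "pass",
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")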
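
For item 4, the quarantine step can be as simple as moving the artifact out of any loadable path and recording the reason; the directory layout here is an assumption. Rollback then means repointing the application at the most recent version whose scan record reads "pass".

import pathlib
import shutil

def quarantine(path: str, reason: str,
               quarantine_dir: str = "quarantine") -> pathlib.Path:
    """Move a failed artifact out of the serving/build path so nothing loads it."""
    dest_dir = pathlib.Path(quarantine_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / pathlib.Path(path).name
    shutil.move(path, str(dest))
    # Keep the reason next to the artifact; a real pipeline would also
    # open an investigation ticket at this point.
    dest.with_name(dest.name + ".reason.txt").write_text(reason)
    return dest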

Auditing guidelines

1. Examine security measures in place for storing model artifacts, including access controls and encryption. 

2. Verify logging and monitoring of access to model artifacts. 

3. Evaluate measures to prevent unauthorized modification or deletion of model artifacts. 

4. Assess compliance with relevant data security standards and regulations. 

5. Check procedures for secure transfer of model artifacts. 

6. Verify backup and recovery procedures for model artifacts.
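
As one concrete way to exercise guideline 3, an auditor (or a scheduled job) can compare current artifact hashes against digests recorded at release time. A sketch follows; the manifest path and format are assumptions for illustration.

import hashlib
import json
import pathlib

def verify_artifacts(manifest_path: str = "artifact_manifest.json") -> list[str]:
    """Flag artifacts whose on-disk hash no longer matches the recorded one."""
    # Expected manifest format: {"models/m1.pt": "<sha256 hex digest>", ...}
    manifest = json.loads(pathlib.Path(manifest_path).read_text())
    issues = []
    for path, expected in manifest.items():
        p = pathlib.Path(path)
        if not p.exists():
            issues.append(f"{path}: missing (deleted or moved)")
        elif hashlib.sha256(p.read_bytes()).hexdigest() != expected:
            issues.append(f"{path}: hash mismatch (modified)")
    return issues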

Standards mappings

ISO 42001 · Partial Gap
No Mapping for ISO 42001
ISO/IEC 27002 - 8.8 Management of technical vulnerabilities
Addendum

No ISO 42001 control covers the topics of MDS-02; however, ISO 27002 supports it with 8.8 Management of technical vulnerabilities.

EU AI Act · No Gap
Article 15 (5)
Addendum

N/A

NIST AI 600-1 · Partial Gap
MP-2.3-005
Addendum

This NIST AI 600-1 control does not mention the MDS-02 topic of scanning "at each step of the service lifecycle and at each handover point."

BSI AIC4 · No Gap
C4 PC-01
C4 SR-05
C5 OPS-18
C5 SP-01
C5 SP-02
Addendum

N/A

AI-CAIQ questions (2)

MDS-02.1

Are processes, procedures, and technical measures defined, implemented, and evaluated for the periodic scanning of model artifacts for vulnerabilities and attacks at each step of the service lifecycle and at each handover point?

MDS-02.2

Are policies, procedures, and technical measures to address model artifact scanning regularly reviewed and updated?