Vulnerability Management Metrics
Specification
Establish, monitor and report metrics for vulnerability identification and remediation at defined intervals.
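By way of illustration only (not part of the control text), the following Python sketch computes the kinds of metrics this control calls for: open vulnerability counts by severity, mean time to remediate, and SLA compliance, derived from scanner-style records. The record fields, function name, and SLA thresholds are assumptions for illustration, not values prescribed by the control.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean

# Illustrative record format; real data would come from a scanner or
# ticketing export. Every field name here is an assumption.
@dataclass
class Vulnerability:
    vuln_id: str
    severity: str                       # "critical" | "high" | "medium" | "low"
    discovered: datetime
    remediated: datetime | None = None  # None means still open

# Hypothetical remediation SLAs per severity, in days; each organization
# defines its own.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 60, "low": 90}

def remediation_metrics(vulns: list[Vulnerability], as_of: datetime) -> dict:
    """Compute example metrics to establish, monitor, and report."""
    closed = [v for v in vulns if v.remediated is not None]
    still_open = [v for v in vulns if v.remediated is None]

    # Mean time to remediate, in whole days, over closed findings.
    mttr = (
        mean((v.remediated - v.discovered).days for v in closed)
        if closed else None
    )

    # Count of closed findings remediated within their severity SLA.
    within_sla = sum(
        (v.remediated - v.discovered) <= timedelta(days=SLA_DAYS[v.severity])
        for v in closed
    )

    # Open findings broken down by severity.
    open_by_severity: dict[str, int] = {}
    for v in still_open:
        open_by_severity[v.severity] = open_by_severity.get(v.severity, 0) + 1

    return {
        "as_of": as_of.isoformat(),
        "open_count": len(still_open),
        "open_by_severity": open_by_severity,
        "mean_time_to_remediate_days": mttr,
        "sla_compliance_pct": 100 * within_sla / len(closed) if closed else None,
    }
```

Run at each defined interval (for example, by a scheduled job), the resulting figures can feed the executive and technical reports that the auditing guidelines below look for.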
Threat coverage
Architectural relevance
Lifecycle
Data collection, Data curation, Data storage, Resource provisioning
Design, Training
Evaluation, Validation/Red Teaming, Re-evaluation
Orchestration, AI Services supply chain, AI applications
Operations, Maintenance, Continuous monitoring, Continuous improvement
Data deletion, Model disposal
Ownership / SSRM
PI
Shared Cloud Service Provider-Model Provider (Shared CSP-MP)
The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.
Model
Owned by the Model Provider (MP)
The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.
Orchestrated
Shared Orchestrated Service Provider-Application Provider (Shared OSP-AP)
The OSP and AP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.
Application
Owned by the Application Provider (AP)
The Application Provider (AP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The AP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications for users/customers, the AP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The AP is accountable for carrying out due diligence on its upstream providers (e.g., MPs, Orchestrated Services) to verify that they implement the control as it relates to the service/product developed and offered by the AP. These providers build and offer end-user applications that leverage generative AI models for specific tasks such as content creation, chatbots, code generation, and enterprise automation. These applications are often delivered as software-as-a-service (SaaS) solutions. These providers focus on user interfaces, application logic, domain-specific functionality, and overall user experience rather than underlying model development. Examples: OpenAI (GPTs, Assistants), Zapier, CustomGPT, Microsoft Copilot (integrated into Office products), Jasper (AI-driven content generation), Notion AI (AI-enhanced productivity tools), Adobe Firefly (AI-generated media), and AI-powered customer service solutions like Amazon Rufus, as well as any organization that develops its AI-based application internally.
Implementation guidelines
Auditing guidelines
1. Verify that the Cloud Service Provider (CSP) has defined metrics and indicators for vulnerability identification and remediation at defined intervals.
2. Inspect whether the above-mentioned metrics and indicators are concretely and continuously monitored (an illustrative cadence check follows this list).
3. Inspect whether the above-mentioned metrics and indicators are periodically reviewed and updated by the responsible parties.
4. Inspect whether the evidence produced while monitoring the above-mentioned metrics and indicators is documented in appropriate executive and technical reports.
5. Inspect whether the above-mentioned reports are shared in a timely manner and actively discussed with all relevant parties to support decision making.
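As one hedged illustration of auditing steps 2 and 5, the sketch below flags lapses in reporting cadence: given the timestamps of the metric reports collected as evidence, it returns every pair of consecutive reports whose gap exceeds the defined interval. The function name and the quarterly interval in the example are assumptions for illustration, not prescribed by the control.

```python
from datetime import datetime, timedelta

def check_reporting_cadence(
    report_dates: list[datetime], defined_interval: timedelta
) -> list[tuple[datetime, datetime]]:
    """Return consecutive report pairs whose gap exceeds the defined
    interval, i.e. evidence of a lapse in monitoring cadence."""
    ordered = sorted(report_dates)
    return [
        (a, b)
        for a, b in zip(ordered, ordered[1:])
        if b - a > defined_interval
    ]

# Example: a quarterly (roughly 92-day) reporting requirement.
gaps = check_reporting_cadence(
    [datetime(2024, 1, 15), datetime(2024, 4, 10), datetime(2024, 9, 1)],
    timedelta(days=92),
)
# gaps == [(datetime(2024, 4, 10), datetime(2024, 9, 1))]: one lapse found.
```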
Standards mappings
ISO/IEC 42001: A.6.2.6 AI system operation and monitoring; ISO/IEC 27001: 8.8 Management of technical vulnerabilities; ISO/IEC 27001: 9.1 Monitoring, measurement, analysis and evaluation
Addendum
N/A
EU AI Act: Article 72; Article 15 (1); Article 15 (2); Article 15 (3); Annex IV (2) (g) (3) (9); Annex XI (2) (1)
Addendum
For the purpose of satisfying the "defined intervals" requirement, TVM-10's wording is close enough to TVM-07's that Article 9 (2) is an appropriate mapping.
NIST AI 600-1: MS-2.7-004
Addendum
NIST AI 600-1 is missing a reference to "monitor and report metrics for vulnerability identification/remediation."
C4 SR-02; C4 SR-03; C5 COM-04
Addendum
N/A
AI-CAIQ questions (1)
Are metrics established, monitored, and reported for vulnerability identification and remediation at defined intervals?