Input Validation
Specification
Validate, filter, modify, or block input, as necessary, against adversarial patterns, failure patterns, and unwanted behaviour, according to organisational policies and applicable laws and regulations.
Threat coverage
Architectural relevance
Lifecycle
Data collection, Data curation
Guardrails, Training
Validation/Red Teaming
Orchestration, AI applications
Continuous monitoring, Operations
Not applicable
Ownership / SSRM
PI
Shared Cloud Service Provider-Model Provider (Shared CSP-MP)
The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.
Model
Shared Cloud Service Provider-Model Provider (Shared CSP-MP)
The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.
Orchestrated
Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)
The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.
Application
Owned by the Application Provider (AP)
The Application Provider (AP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The AP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications for users/customers, the AP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The AP is accountable for carrying out due diligence on its upstream providers (e.g., MPs, Orchestrated Services) to verify that they implement the control as it relates to the service/product developed and offered by the AP. These providers build and offer end-user applications that leverage generative AI models for specific tasks such as content creation, chatbots, code generation, and enterprise automation. These applications are often delivered as software-as-a-service (SaaS) solutions. These providers focus on user interfaces, application logic, domain-specific functionality, and overall user experience rather than underlying model development. Example: OpenAI (GPTs, Assistants), Zapier, CustomGPT, Microsoft Copilot (integrated into Office products), Jasper (AI-driven content generation), Notion AI (AI-enhanced productivity tools), Adobe Firefly (AI-generated media), and AI-powered customer service solutions like Amazon Rufus, as well as any organization that develops its AI-based application internally.
Implementation guidelines
Auditing guidelines
1. Verify that the CSP has clearly defined processes, procedures, and technical measures in place that address AI input validation. The documentation needs to clearly outline the scope as well as objectives, roles, and responsibilities.
2. Inspect these processes for compliance with applicable regulatory frameworks and AI best practices, specifically covering adversarial prompt attacks, including linguistic manipulations, logic manipulation, malicious programming code, adversarial token-based attacks, and multi-language and multi-modal threats.
3. Ensure that the CSP's input validation practices consider the evolving AI threat landscape and that proactive AI red teaming is performed regularly.
4. Confirm that the input validation methods are active and perform detection, rejection, or sanitization of adversarial AI inputs across user-facing and API endpoints.
5. Review documented outputs of input validation assessments to ensure they are systematically analyzed and converted into actionable cybersecurity improvements.
6. Ensure that ongoing monitoring of input validation mechanisms is in place to evaluate their effectiveness through clearly defined, AI-specific security metrics and indicators (such as the rate of detected adversarial inputs or prompt injection attempts).
7. Verify that validation measures are regularly reviewed and that needed updates are implemented in a timely manner to effectively address continuous advancements in AI threat intelligence and adversarial attack methodologies.
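As an illustration only of the detect/reject/sanitize behaviour an auditor would look for in step 4, the sketch below shows a minimal validator. The deny patterns, length limit, and function name are hypothetical examples, not drawn from any provider's actual ruleset; a production deployment would rely on a maintained, regularly updated ruleset informed by AI threat intelligence and red teaming.

```python
import re

# Hypothetical deny patterns for illustration; real rulesets are far larger
# and updated continuously from threat intelligence and red-team findings.
DENY_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"<script\b", re.IGNORECASE),          # embedded markup/code
    re.compile(r"\bsystem prompt\b", re.IGNORECASE),  # prompt-extraction probes
]

MAX_INPUT_CHARS = 4000  # illustrative length limit from organisational policy

def validate_input(text: str) -> tuple[str, str]:
    """Return (verdict, payload): verdict is 'reject', 'sanitize', or 'accept'."""
    if len(text) > MAX_INPUT_CHARS:
        return "reject", "input exceeds maximum length"
    for pattern in DENY_PATTERNS:
        if pattern.search(text):
            return "reject", f"matched adversarial pattern: {pattern.pattern}"
    # Sanitize non-printable control characters that can smuggle hidden instructions
    cleaned = "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")
    if cleaned != text:
        return "sanitize", cleaned
    return "accept", text
```

Each verdict should also be counted and logged, since the monitoring metrics in step 6 (e.g., rate of detected adversarial inputs) are derived from exactly these decisions.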
Standards mappings
42001: A.6.2.4 / B.6.2.4 – AI system verification and validation
42001: A.7.4 / B.7.4 – Quality of data for AI systems
42001: A.7.6 – Data preparation
42001: B.6.2.3 – Technical measures for AI reliability and resilience
42001: A.8.26 – Application security requirements
Addendum
In clause B.6.2 (technical robustness & security) add: explicit control requirement for input validation, adversarial input detection and mitigation. In clauses 9.1 and 10.2: specify testing and monitoring of input validation effectiveness. In clauses A.2.2/A.2.3 (policy alignment): include alignment of validation rules with policies and laws.
Recital 76 Recital 77 Article 9 (2) Article 10 (2) (b) Article 10 (2) (f) Article 10 (2) (g) Article 10 (3) Article 15
Addendum
Input validation pipelines: schema checks, whitelisting, input filtering
Adversarial pattern detection: anomaly detection, adversarial example mitigation
Live input blocking: reject-on-failure logic, policy-based access control
Monitoring for prompt injection / model exploitation attempts
Post-deployment input auditing and update procedures
The technical solutions for AI should include measures to prevent, detect, and respond to threats such as data poisoning, model poisoning, adversarial inputs, confidentiality attacks, and other model flaws.
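The pipeline items above (schema checks, whitelisting, reject-on-failure, monitoring) can be sketched as a chain of checks. All names, whitelist values, and the toy filter rule here are hypothetical placeholders; actual policy values and rules would come from organisational policy configuration.

```python
from dataclasses import dataclass, field

# Illustrative whitelist; real allowed values come from organisational policy.
ALLOWED_LANGUAGES = {"en", "de", "fr"}

@dataclass
class ValidationPipeline:
    """Chain of input checks with reject-on-failure and monitoring counters."""
    metrics: dict = field(default_factory=lambda: {"accepted": 0, "rejected": 0})

    def schema_check(self, request: dict) -> bool:
        # Schema check + whitelisting: prompt must be a string, language allowed
        return (isinstance(request.get("prompt"), str)
                and request.get("lang") in ALLOWED_LANGUAGES)

    def filter_check(self, request: dict) -> bool:
        # Toy input-filtering rule standing in for a real adversarial-pattern filter
        return "DROP TABLE" not in request["prompt"].upper()

    def run(self, request: dict) -> bool:
        # Reject-on-failure: the first failing check blocks the input
        for check in (self.schema_check, self.filter_check):
            if not check(request):
                self.metrics["rejected"] += 1  # feeds monitoring dashboards
                return False
        self.metrics["accepted"] += 1
        return True
```

The counters illustrate how live blocking decisions feed the post-deployment auditing and monitoring items in the list above.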
MP-2.3-002 MS-1.1-007 MS-2.6-006 MS-1.1-002
Addendum
NIST AI 600-1 lacks specific requirements to validate user input against adversarial patterns, failure patterns, unwanted behavior, and outliers.
DQ-01 DQ-03 DQ-06
Addendum
N/A
AI-CAIQ questions (1)
Is input validated, filtered, modified, or blocked, as necessary, against adversarial patterns, failure patterns, and unwanted behaviour, according to organisational policies and applicable laws and regulations?