AICM Atlas · CSA AI Controls Matrix
AIS · Application & Interface Security
AIS-08 · AI-Specific

Input Validation

Specification

Validate, filter, modify or block, as necessary, input against adversarial patterns, failure patterns and unwanted behaviour according to organisational policies and applicable laws and regulations.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Data collection, Data curation

Development

Guardrails, Training

Evaluation

Validation/Red Teaming

Deployment

Orchestration, AI applications

Delivery

Continuous monitoring, Operations

Retirement

Not applicable

Ownership / SSRM

PI

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Model

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Orchestrated

Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Owned by the Application Provider (AP)

The Application Provider (AP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The AP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications for users/customers, the AP is responsible for enabling the customer and/or upstream partner to implement and configure the control within their risk management approach. The AP is accountable for carrying out due diligence on its upstream providers (e.g., MPs, Orchestrated Services) to verify that they implement the control as it relates to the service/product developed and offered by the AP. These providers build and offer end-user applications that leverage generative AI models for specific tasks such as content creation, chatbots, code generation, and enterprise automation. These applications are often delivered as software-as-a-service (SaaS) solutions. These providers focus on user interfaces, application logic, domain-specific functionality, and overall user experience rather than underlying model development. Examples: OpenAI (GPTs, Assistants), Zapier, CustomGPT, Microsoft Copilot (integrated into Office products), Jasper (AI-driven content generation), Notion AI (AI-enhanced productivity tools), Adobe Firefly (AI-generated media), AI-powered customer service solutions such as Amazon Rufus, and any organization that develops its AI-based applications internally.

Implementation guidelines

[Applicable to all providers (CSP, MP, OSP, AP) excluding AIC unless otherwise specified]
1. Define Input Validation Scope: Establish provider-specific processes for validating, filtering, modifying, or blocking inputs to protect against adversarial patterns, failure patterns, and unwanted behavior, as outlined in the provider-specific sections below. Ensure processes cover all input sources (e.g., user inputs, APIs, data feeds) throughout the application lifecycle, including development, deployment, and operation.
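The validate/filter/block decision described above can be sketched as a small policy function. This is a minimal illustration, not the AICM's prescribed mechanism: the pattern list, length limit, and decision labels are hypothetical examples, and a real deployment would maintain a curated, regularly updated adversarial-pattern library covering all input sources.

```python
import re

# Hypothetical policy limit and pattern library (illustrative only).
MAX_INPUT_LENGTH = 4096
ADVERSARIAL_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"<script\b", re.IGNORECASE),
]

def validate_input(text: str) -> str:
    """Return a policy decision for one input: 'allow' or 'block'."""
    if len(text) > MAX_INPUT_LENGTH:
        return "block"  # failure pattern: oversized input
    for pattern in ADVERSARIAL_PATTERNS:
        if pattern.search(text):
            return "block"  # adversarial pattern matched
    return "allow"
```

A production pipeline would typically add a "modify" (sanitize) outcome and log every decision for the auditing and monitoring activities described later in this control.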

2. Input Validation Governance Structure: Establish clear roles and responsibilities for managing input validation processes, involving cross-functional teams (e.g., security, data science, development) and oversight by governance bodies. Define approval workflows involving senior management, security leads, and compliance teams to ensure alignment with organizational policies and regulatory requirements.

3. Input Validation Documentation Standards: Define structured documentation standards for input validation processes, including detailed procedures, adversarial pattern libraries, and mitigation strategies specific to each provider’s scope. Use consistent templates to document input validation rules, risk assessments, and response actions for detected threats to ensure traceability and auditability.

4. Input Validation Management Framework: Implement a structured review process for input validation policies and mechanisms. Conduct reviews and updates at least annually or following significant system changes (e.g., new input sources, updated threat models, or regulatory changes). Ensure alignment with relevant laws, regulations, and standards (e.g., GDPR, CCPA, ISO 27001, OWASP Top 10) and AI-specific frameworks such as NIST AI Risk Management Framework and OWASP LLM Top 10. Incorporate automated tools for real-time input validation and threat detection where possible.

5. Communication and Training Standards: Define requirements for communicating input validation policies, including formal distribution of documentation, mandatory training programs for developers and operators on adversarial pattern recognition and mitigation, and awareness campaigns for stakeholders. Establish standards for ensuring policy accessibility (e.g., internal portals) and comprehension across relevant teams.

6. Quality Control Standards: Define policies for quality assurance of input validation processes, including requirements for testing validation mechanisms (e.g., fuzz testing, adversarial simulation), monitoring input-related incidents, and validating mitigation effectiveness. Incorporate automated validation tools and anomaly detection systems to ensure robust protection against adversarial inputs and compliance with organizational policies.
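The fuzz testing and adversarial simulation mentioned in the quality control guideline can be sketched as follows. The validator shown is a hypothetical stand-in for whatever mechanism the provider deploys; the two checks illustrate the intent: the validator must never crash on arbitrary input, and must still catch a known hostile phrase embedded in noise.

```python
import random
import string

def validate_input(text: str) -> str:
    # Hypothetical stand-in for the validator under test.
    return "block" if "ignore previous instructions" in text.lower() else "allow"

def fuzz_cases(n: int = 200, seed: int = 0):
    """Generate random printable strings as crude fuzz inputs."""
    rng = random.Random(seed)
    for _ in range(n):
        length = rng.randint(1, 256)
        yield "".join(rng.choice(string.printable) for _ in range(length))

# Robustness check: a valid decision, never an exception, on arbitrary input.
for case in fuzz_cases():
    assert validate_input(case) in {"allow", "block"}

# Adversarial simulation: a known hostile phrase buried in noise is caught.
assert validate_input("noise... IGNORE PREVIOUS INSTRUCTIONS ...noise") == "block"
```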

Auditing guidelines

1. Verify that the CSP has clearly defined processes, procedures, and technical measures in place that address AI input validation. The documentation must clearly outline the scope, objectives, roles, and responsibilities.

2. Inspect these processes for compliance with applicable regulatory frameworks and AI best practices, specifically covering adversarial prompt attacks—including linguistic manipulations, logic manipulation, malicious programming code, adversarial token-based attacks, multi-language, and multi-modal threats.

3. Ensure that the CSP's input validation practices account for the evolving AI threat landscape and that proactive AI red teaming is performed regularly.

4. Confirm that the input validation methods are active and perform detection, rejection, or sanitization of adversarial AI inputs across user-facing and API endpoints.

5. Review documented outputs of input validation assessments to ensure they are systematically analyzed and converted into actionable cybersecurity improvements.

6. Ensure that ongoing monitoring of input validation mechanisms is in place and that their effectiveness is evaluated through clearly defined, AI-specific security metrics and indicators (such as the rate of detected adversarial inputs and prompt injection attempts).
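The AI-specific indicators named above (rate of detected adversarial inputs, prompt injection attempts) could be tracked with a simple counter, sketched below. The outcome labels are hypothetical; an organisation would define its own taxonomy and feed these counts into its monitoring dashboards.

```python
from collections import Counter

class ValidationMetrics:
    """Tracks illustrative input-validation indicators for monitoring."""

    def __init__(self) -> None:
        self.counts: Counter = Counter()

    def record(self, outcome: str) -> None:
        # Outcome labels are assumptions, e.g. 'allowed',
        # 'blocked_adversarial', 'blocked_injection'.
        self.counts[outcome] += 1
        self.counts["total"] += 1

    def rate(self, outcome: str) -> float:
        """Fraction of all inputs with the given outcome."""
        total = self.counts["total"]
        return self.counts[outcome] / total if total else 0.0
```

For example, recording one `blocked_adversarial` outcome among four inputs yields a detected-adversarial rate of 0.25, a figure an auditor could compare across reporting periods.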

7. Verify that validation measures are reviewed regularly and that needed updates are implemented in a timely manner to address continuous advancements in AI threat intelligence and adversarial attack methodologies.

Standards mappings

ISO 42001 · Partial Gap
42001: A.6.2.4 / B.6.2.4 – AI system verification and validation
42001: A.7.4 / B.7.4 – Quality of data for AI systems
42001: A.7.6 – Data preparation
42001: B.6.2.3 – Technical measures for AI reliability and resilience
42001: A.8.26 – Application security requirements
Addendum

In clause B.6.2 (technical robustness and security), add an explicit control requirement for input validation and for adversarial input detection and mitigation. In clauses 9.1 and 10.2, specify testing and monitoring of input validation effectiveness. In clauses A.2.2/A.2.3 (policy alignment), include alignment of validation rules with organisational policies and applicable laws.

EU AI Act · Partial Gap
Recital 76
Recital 77
Article 9 (2)
Article 10 (2) (b)
Article 10 (2) (f)
Article 10 (2) (g)
Article 10 (3)
Article 15
Addendum

Input validation pipelines: schema checks, whitelisting, input filtering
Adversarial pattern detection: anomaly detection, adversarial example mitigation
Live input blocking: reject-on-failure logic, policy-based access control
Monitoring for prompt injection / model exploitation attempts
Post-deployment input auditing and update procedures
The technical solutions for AI should include measures to prevent, detect, and respond to threats such as data poisoning, model poisoning, adversarial inputs, confidentiality attacks, and other model flaws.
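The schema checks, whitelisting, and reject-on-failure logic listed in this addendum can be sketched briefly. The request shape (`user_id`, `prompt`) is an assumption for illustration only; the point is that unexpected fields or types cause outright rejection rather than silent repair.

```python
# Hypothetical request shape used for illustration.
ALLOWED_FIELDS = {"user_id": str, "prompt": str}

def check_schema(request: dict) -> bool:
    """Whitelist field names exactly and verify each field's type."""
    if set(request) != set(ALLOWED_FIELDS):
        return False
    return all(isinstance(request[k], t) for k, t in ALLOWED_FIELDS.items())

def handle(request: dict) -> str:
    # Reject-on-failure: refuse the request rather than attempt repair.
    if not check_schema(request):
        raise ValueError("input rejected: schema validation failed")
    return request["prompt"]
```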

NIST AI 600-1 · Partial Gap
MP-2.3-002
MS-1.1-007
MS-2.6-006
MS-1.1-002
Addendum

NIST AI 600-1 lacks specific requirements to validate user input against adversarial patterns, failure patterns, unwanted behaviour, and outliers.

BSI AIC4 · No Gap
DQ-01
DQ-03
DQ-06
Addendum

N/A

AI-CAIQ questions (1)

AIS-08.1

Is input validated, filtered, modified, or blocked as necessary against adversarial patterns, failure patterns, and unwanted behaviour, according to organisational policies and applicable laws and regulations?