AICM AtlasCSA AI Controls Matrix
AIS · Application & Interface Security
AIS-05Cloud & AI Related

Application Security Testing

Specification

Implement a testing strategy, including criteria for acceptance of new information systems, upgrades and new versions, which provides application security assurance and maintains compliance while meeting organizational delivery goals. Automate when applicable and possible.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Resource provisioning, Team and expertise

Development

Guardrails, Training, Supply Chain

Evaluation

Validation/Red Teaming, Re-evaluation

Deployment

AI applications, AI Services supply chain

Delivery

Continuous improvement, Operations, Maintenance

Retirement

Not applicable

Ownership / SSRM

PI

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Model

Owned by the Model Provider (MP)

The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic(Claude), Google(Gemini), Meta(Llama), as well as any customized model.

Orchestrated

Shared Model Provider-Orchestrated Service Provider (Shared MP-OSP)

The MP and OSP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Owned by the Application Provider (AP)

The Application Provider (AP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The AP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications on the users/customers, the AP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The AP is accountable for carrying out the due diligence on its upstream providers (e.g MPs, Orchestrated Services) to verify that they implement the control as it relates to the service/product develop and offered by the AP. These providers build and offer end-user applications that leverage generative AI models for specific tasks such as content creation, chatbots, code generation, and enterprise automation. These applications are often delivered as software-as-a-service (SaaS) solutions. These providers focus on user interfaces, application logic, domain-specific functionality, and overall user experience rather than underlying model development. Example: OpenAI (GPTs,Assistants), Zapier, CustomGPT, Microsoft Copilot (integrated into Office products), Jasper (AI-driven content generation), Notion AI (AI-enhanced productivity tools), Adobe Firefly (AI-generated media), and AI-powered customer service solutions like Amazon Rufus, as well as any organization that develops its AI-based application internally.

Implementation guidelines

[All Actors]
1. Implement comprehensive automated security testing throughout the AI lifecycle.

2. Use secured, controlled environments and infrastructure for developing, testing and releasing AI components.

3. Continuously monitor running AI workloads and supporting services; trigger additional tests when anomalies or new threats are detected.

4. Share test results, threat intelligence and incident information promptly so all stakeholders can coordinate remediation and improve best practices.
(a)        Communicate security requirements, testing results, and incident information in a timely and effective manner.
(b)        Collaborate to develop and implement best practices for AI security.
(c)        Share threat intelligence and lessons learned.
(d)        Work together to address emerging AI security challenges.
(e)        Update the Testing suite as new AI paradigms emerge.

Auditing guidelines

1. Examine the policies and procedures that define security testing strategies, the automation of security testing, and change management for continuous improvements.

2. Determine the security provisions and criteria for new information system(s).

3. Determine that the software release process includes AI/ML-specific provisions and is automated where applicable.

4. Verify that continuous security improvement processes are in place.

Standards mappings

ISO 42001Partial Gap
42001: B.6.2.4
27001: A.8.29
27001: A.8.26
27001: A.8.33
27001: A.5.10
27002: A.8.29 & A.8.33
Addendum

Add dedicated control that mandates: Security-focused AI application testing, Use of automated tools, Definition of acceptance criteria, Specific coverage of AI/ML-specific risk vectors, Alignment with organizational deployment goals.

EU AI ActNo Gap
Article 17 (d)
Article 17 (h)
Article 53
Article 54
Article 55
Article 60
Annex XI (Section 1) (2)
Annex XI (Section 2)
Addendum

N/A

NIST AI 600-1No Gap
MEASURE 2.3
MEASURE 2.5
MEASURE 2.6
Addendum

N/A

BSI AIC4No Gap
DEV-01
DEV-03
DEV-06
PSS-02
PSS-05
PSS-09
PSS-10
Addendum

N/A

AI-CAIQ questions (2)

AIS-05.1

Is a testing strategy implemented, including criteria for acceptance of new information systems, upgrades, and new versions, to provide application security assurance, maintain compliance, and meet organizational delivery goals?

AIS-05.2

Is automation applied where applicable and possible?