AICM Atlas · CSA AI Controls Matrix
AIS · Application & Interface Security
AIS-04 · Cloud & AI Related

Secure Application Development Lifecycle

Specification

Define and implement a secure software development lifecycle (SDLC) process for application requirements analysis, planning, design, development, testing, deployment, and operation in accordance with security requirements defined by the organization.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Resource provisioning, Team and expertise

Development

Guardrails, Supply Chain, Design

Evaluation

Validation/Red Teaming, Evaluation, Re-evaluation

Deployment

AI applications, AI Services supply chain, Orchestration

Delivery

Continuous monitoring, Continuous improvement, Operations, Maintenance

Retirement

Archiving

Ownership / SSRM

Physical Infrastructure (PI)

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Model

Owned by the Model Provider (MP)

The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Owned by the Orchestrated Service Provider (OSP)

The Orchestrated Service Provider (OSP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The OSP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications on the users/customers, the OSP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The OSP is accountable for ensuring that its upstream providers (e.g., MPs) implement the control as it relates to the services/products developed and offered by the OSP. This refers to entities that create the technical building blocks and management tools that enable AI implementation. This can include platforms, frameworks, and tools that facilitate the integration, deployment, and management of AI models within enterprise workflows. These providers focus on model orchestration and offer services like API access, automated scaling, prompt management, workflow automation, monitoring, and governance rather than end-user functionality or raw infrastructure. They help businesses implement AI in a structured and efficient manner. Examples: AWS, Azure, GCP, OpenAI, Anthropic, LangChain (for AI workflow orchestration), Anyscale (Ray for distributed AI workloads), Databricks (MLflow), IBM Watson Orchestrate, and developer platforms like Google AI Studio.

Application

Owned by the Application Provider (AP)

The Application Provider (AP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The AP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications on the users/customers, the AP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The AP is accountable for carrying out due diligence on its upstream providers (e.g., MPs, Orchestrated Services) to verify that they implement the control as it relates to the services/products developed and offered by the AP. These providers build and offer end-user applications that leverage generative AI models for specific tasks such as content creation, chatbots, code generation, and enterprise automation. These applications are often delivered as software-as-a-service (SaaS) solutions. These providers focus on user interfaces, application logic, domain-specific functionality, and overall user experience rather than underlying model development. Examples: OpenAI (GPTs, Assistants), Zapier, CustomGPT, Microsoft Copilot (integrated into Office products), Jasper (AI-driven content generation), Notion AI (AI-enhanced productivity tools), Adobe Firefly (AI-generated media), and AI-powered customer service solutions like Amazon Rufus, as well as any organization that develops its AI-based application internally.

Implementation guidelines

[Applicable to MP, OSP, and AP]
The following best practices should be followed when implementing a secure SDLC:
1. Secure SDLC Governance: A security governance framework should be implemented that is tailored to the secure SDLC and that defines roles, responsibilities, and accountability for security throughout the secure SDLC.
i. The AI Service Provider's (AISP's; e.g., CSP, MP, OSP, or AP) secure SDLC processes should, at a minimum, meet regulatory requirements and the AI Service Customer's (AISC's) business requirements.
ii. The AISP should provide information to AISCs about its secure SDLC processes, to the extent compatible with its disclosure policies.

2. Threat Modeling: AI-specific threat modeling should be incorporated into the early stages of the secure SDLC to identify potential security risks (e.g., poisoning, model inversion, data leakage).
Mitigation strategies should be designed to help developers anticipate and proactively address AI-specific security issues, reducing the likelihood of AI-specific vulnerabilities being introduced later in the development cycle.
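The threat-modeling step above can be sketched as a lightweight, machine-readable threat register. This is an illustrative structure only, assuming the threat names from this control's threat coverage list; the mitigation entries are example values, not prescriptive guidance.

```python
from dataclasses import dataclass, field

# Minimal sketch of an AI-specific threat register for early-SDLC
# threat modeling. Threat names follow the control's threat coverage
# list; phases follow its lifecycle stages; mitigations are examples.

@dataclass
class Threat:
    name: str
    sdlc_phase: str                      # lifecycle stage where the threat arises
    mitigations: list = field(default_factory=list)

def unmitigated(register):
    """Return the names of threats with no recorded mitigation."""
    return [t.name for t in register if not t.mitigations]

register = [
    Threat("Data poisoning", "Development",
           ["dataset provenance checks", "outlier filtering"]),
    Threat("Model inversion", "Deployment",
           ["rate limiting", "output perturbation"]),
    Threat("Sensitive data disclosure", "Delivery", []),
]
```

A register like this lets reviews at each lifecycle gate flag threats that still lack a designed mitigation, e.g. `unmitigated(register)`.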

3. Secure Coding Practices:
i. Comprehensive security design requirements should be defined that specify AI-specific security measures and controls to be implemented throughout the application's architecture.
ii. Industry-standard secure coding practices (e.g., OWASP) should be incorporated into the secure SDLC.
iii. Secure coding guidelines should be followed and implemented (e.g., proper implementation and configuration of security headers, input validation, output handling, error handling, and proper use of cryptographic libraries).
iv. Regular training should be conducted to ensure awareness of and adherence to secure coding practices.
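As a concrete illustration of the input-validation guideline above, the sketch below gates untrusted text before it reaches an AI application. The length limit and character checks are assumed example values, not a vetted policy.

```python
import re

# Illustrative input-validation gate for untrusted text sent to an
# AI application (guideline 3.iii). MAX_INPUT_CHARS and the control-
# character pattern are example values chosen for demonstration.

MAX_INPUT_CHARS = 4000
# Reject ASCII control characters other than tab, newline, carriage return.
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")

def validate_user_input(text: str) -> str:
    """Reject oversized or malformed input; return a normalized copy."""
    if not isinstance(text, str):
        raise TypeError("input must be a string")
    if len(text) > MAX_INPUT_CHARS:
        raise ValueError("input exceeds maximum length")
    if CONTROL_CHARS.search(text):
        raise ValueError("input contains control characters")
    return text.strip()
```

The same pattern (validate early, fail closed, normalize before use) applies regardless of the specific limits an organization chooses.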

4. Open-Source Components Secure Use: Secure practices for managing open-source components should be employed, including thorough vulnerability scanning, dependency management, and continuous monitoring for known vulnerabilities. Use trusted open-source repositories and ensure proper attribution and licensing compliance.
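The dependency-scanning practice above can be reduced to a simple check of pinned requirements against an advisory map. This is a sketch: real programs would query an advisory database such as OSV, and the advisory identifier below is hypothetical.

```python
# Sketch of an open-source dependency check (guideline 4): compare
# pinned requirements against a local advisory map. ADVISORIES and
# the "EXAMPLE-2024-0001" identifier are made up for illustration;
# production tooling queries a real advisory database (e.g., OSV).

ADVISORIES = {
    ("examplelib", "1.2.0"): "EXAMPLE-2024-0001",
}

def parse_requirements(lines):
    """Parse 'name==version' pins, ignoring comments and blank lines."""
    pins = []
    for line in lines:
        line = line.split("#", 1)[0].strip()
        if "==" in line:
            name, version = line.split("==", 1)
            pins.append((name.strip().lower(), version.strip()))
    return pins

def vulnerable(pins):
    """Return (name, version, advisory_id) for each known-bad pin."""
    return [(n, v, ADVISORIES[(n, v)]) for n, v in pins if (n, v) in ADVISORIES]
```

Running such a check in CI against every dependency change operationalizes the "continuous monitoring for known vulnerabilities" requirement.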

5. Vulnerability Management: Continuously scan for and remediate vulnerabilities in code, infrastructure, and third-party components (refer to the TVM domain).

6. Security Testing: Regular security audits, static and dynamic application security testing, and penetration testing should be conducted to identify and address potential security weaknesses (refer to AIS-05 and TVM-06).
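To make the static-testing idea concrete, the sketch below walks a Python module's AST and flags calls to `eval`/`exec`. It is a toy in the spirit of SAST; real tools (e.g., Bandit or commercial SAST products) cover far broader rule sets.

```python
import ast

# Minimal static-analysis sketch in the spirit of SAST (guideline 6):
# walk a module's AST and flag direct calls to eval/exec. This shows
# the mechanism only; it is not a substitute for a real SAST tool.

FLAGGED = {"eval", "exec"}

def flag_dangerous_calls(source: str):
    """Return (function_name, line_number) for each flagged call."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in FLAGGED):
            findings.append((node.func.id, node.lineno))
    return findings
```

Wiring a check like this into a CI gate makes "regular security testing" automatic rather than periodic.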

7. Secure Deployment and Configuration Management: (i) Secure deployment and configuration management practices should be implemented to ensure that AI applications are deployed and configured securely. (ii) Automated tools should be used to enforce consistent configurations and monitor for deviations from security policies.
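The "monitor for deviations" clause above amounts to configuration-drift detection: compare the deployed configuration against a version-controlled baseline and report differences. The keys and values below are assumed examples.

```python
# Sketch of configuration-drift detection (guideline 7): compare a
# deployed configuration against a version-controlled baseline and
# report deviations. The setting names and values are illustrative.

def config_drift(baseline: dict, deployed: dict):
    """Return {key: (expected, actual)} for settings that deviate."""
    drift = {}
    for key, expected in baseline.items():
        actual = deployed.get(key)
        if actual != expected:
            drift[key] = (expected, actual)
    return drift

baseline = {"tls_min_version": "1.2", "debug": False, "audit_logging": True}
deployed = {"tls_min_version": "1.2", "debug": True, "audit_logging": True}
```

In practice the baseline would live alongside IaC templates in version control, so every deviation is auditable against an approved state.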

8. AI Impact Assessment: AI Impact Assessments should be conducted following the standards in GRC-10. Assessments should be updated at key SDLC stages and after major changes.

Auditing guidelines

1. Policy Examination: Verify the following: a documented SDLC process exists; the SDLC process explicitly includes all key phases: design, development, deployment, and operations; this SDLC is approved and maintained under formal governance; and that the SDLC defines security roles and responsibilities throughout all phases.

2. Policy Assessment (content evaluation)
   a. Threat Modeling: Verify that the SDLC includes documented AI-specific threat modeling. If not present, assess whether a documented rationale and alternative risk mitigation strategy exists.
   b. Secure Coding Practices: Verify that the SDLC includes secure coding standards and guidance. If absent, evaluate documented justification and compensating controls.
   c. Open-Source Component Management: Verify whether the SDLC incorporates a documented program for managing open-source components, including vulnerability scanning and license compliance. If not, assess rationale and alternative practices.
   d. Vulnerability Management: Verify that the SDLC integrates vulnerability management processes for application code, infrastructure, and third-party components. If missing, confirm documented rationale and risk mitigation.
   e. Security Testing: Verify that the SDLC includes regular security testing, including AI-specific testing (e.g., adversarial testing, LLM-specific tests). If not conducted, assess rationale and compensating risk controls.
   f. Secure Deployment and Configuration: Verify that the SDLC includes secure deployment pipelines, configuration hardening, and use of version-controlled, auditable configuration tools (e.g., IaC templates). Confirm separation of secrets from application code, use of secure variables, and role-based deployment permissions. Assess whether the SDLC is in line with best-practice guidelines (e.g., OWASP).
   g. Secure Key and Secret Management: Verify that the SDLC explicitly defines secure key management practices, including: secure generation and storage (e.g., HSMs, KMS services); access control and audit logging for key use; regular key rotation, revocation, and recovery mechanisms; scoped usage policies (least privilege for tokens); avoidance of hardcoded keys or secrets in source code.
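One auditable check from item 2.g, the avoidance of hardcoded secrets in source code, can be sketched as a simple pattern scan. The patterns below are crude examples; production secret scanners add entropy checks and provider-specific signatures.

```python
import re

# Sketch of a hardcoded-secret scan (auditing guideline 2.g): flag
# source lines that look like embedded credentials. The single regex
# here is a crude example pattern, not a complete detection rule set.

SECRET_PATTERNS = [
    re.compile(r"""(?i)(api[_-]?key|secret|token)\s*=\s*['"][^'"]{8,}['"]"""),
]

def scan_for_secrets(source: str):
    """Return the line numbers of lines matching any secret pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append(lineno)
    return findings
```

An auditor can apply such a scan to sample repositories as corroborating evidence that the documented key-management practices are actually enforced.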

3. Software Development Lifecycle (SDLC) Evaluation: Evaluate whether the SDLC aligns with the organization's documented security requirements and governance expectations; confirm that the SDLC considers applicable regulatory requirements (e.g., data protection, AI-specific regulations); and assess alignment with those obligations.

4. Implementation Validation: Validate actual implementation of the SDLC by reviewing supporting documentation and corroborating with operational evidence (e.g., staff interviews, ticketing system analysis), and inspect sample implementation artifacts to verify completeness and accuracy of practices, such as: documented threat models; code samples showing secure coding techniques; evidence of open-source component governance (e.g., SBOMs, SCA reports); vulnerability scanning reports, remediation tickets, and patch evidence; logs or reports from security testing tools (e.g., SAST, DAST, IAST, penetration tests); secure deployment evidence (e.g., IaC templates, CI/CD security gates, monitoring alerts).

From CCM:
1. Examine policy and procedures for definition of SDLC, security, and compliance requirements.
2. Examine the state of implementation of the SDLC process.
3. Verify that the SDLC implementation is in accordance with requirements.

Standards mappings

ISO 42001 · Partial Gap
42001: B.6.1.3
42001: B.6.2.3
42001: A.6.2.4
42001: A.6.2.5
42001: A.6.2.6
27001: A.8.25
27001: A.8.27
27001: A.8.28
27001: A.8.29
27001: A.8.30
Addendum

Mandate a documented SDLC process that includes secure coding and security testing requirements, with an explicit linkage to organizational security and risk objectives.

EU AI Act · No Gap
Article 9 (6)
Article 9 (7)
Article 9 (8)
Article 17 (1)
Annex XI (Section 1) (2)
Annex XI (Section 2)
Addendum

N/A

NIST AI 600-1 · Partial Gap
GOVERN 4.1
GOVERN 4.2
MANAGE 3.2
MEASURE 2.8
MEASURE 2.9
Addendum

Define secure SDLC requirements for AI-based applications, covering security-by-design principles, code security reviews, and vulnerability testing before deployment. Also integrate automated security testing into CI/CD pipelines and map application security controls to compliance requirements.

BSI AIC4 · No Gap
DEV-01
Addendum

N/A

AI-CAIQ questions (1)

AIS-04.1

Is a secure software development lifecycle (SDLC) process defined and implemented for application requirements analysis, planning, design, development, testing, deployment, and operation in accordance with security requirements defined by the organization?