Secure Application Development Lifecycle
Specification
Define and implement a secure software development lifecycle (SDLC) process for application requirements analysis, planning, design, development, testing, deployment, and operation in accordance with security requirements defined by the organization.
Threat coverage
Architectural relevance
Lifecycle
Resource provisioning, Team and expertise
Guardrails, Supply Chain, Design
Validation/Red Teaming, Evaluation, Re-evaluation
AI applications, AI Services supply chain, Orchestration
Continuous monitoring, Continuous improvement, Operations, Maintenance
Archiving
Ownership / SSRM
PI
Shared Cloud Service Provider-Model Provider (Shared CSP-MP)
The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.
Model
Owned by the Model Provider (MP)
The Model Provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.
Orchestrated
Owned by the Orchestrated Service Provider (OSP)
The Orchestrated Service Provider (OSP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The OSP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications on the users/customers, the OSP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The OSP is accountable for ensuring that its upstream providers (e.g., MPs) implement the control as it relates to the services/products developed and offered by the OSP. This refers to entities that create the technical building blocks and management tools that enable AI implementation. This can include platforms, frameworks, and tools that facilitate the integration, deployment, and management of AI models within enterprise workflows. These providers focus on model orchestration and offer services like API access, automated scaling, prompt management, workflow automation, monitoring, and governance rather than end-user functionality or raw infrastructure. They help businesses implement AI in a structured and efficient manner. Examples: AWS, Azure, GCP, OpenAI, Anthropic, LangChain (for AI workflow orchestration), Anyscale (Ray for distributed AI workloads), Databricks (MLflow), IBM Watson Orchestrate, and developer platforms like Google AI Studio.
Application
Owned by the Application Provider (AP)
The Application Provider (AP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The AP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications on the users/customers, the AP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The AP is accountable for carrying out due diligence on its upstream providers (e.g., MPs, Orchestrated Services) to verify that they implement the control as it relates to the services/products developed and offered by the AP. These providers build and offer end-user applications that leverage generative AI models for specific tasks such as content creation, chatbots, code generation, and enterprise automation. These applications are often delivered as software-as-a-service (SaaS) solutions. These providers focus on user interfaces, application logic, domain-specific functionality, and overall user experience rather than underlying model development. Examples: OpenAI (GPTs, Assistants), Zapier, CustomGPT, Microsoft Copilot (integrated into Office products), Jasper (AI-driven content generation), Notion AI (AI-enhanced productivity tools), Adobe Firefly (AI-generated media), and AI-powered customer service solutions like Amazon Rufus, as well as any organization that develops its AI-based application internally.
Implementation guidelines
Auditing guidelines
1. Policy Examination: Verify the following: a documented SDLC process exists; the SDLC process explicitly includes all key phases: design, development, deployment, and operations; the SDLC is approved and maintained under formal governance; and the SDLC defines security roles and responsibilities throughout all phases.
2. Policy Assessment (content evaluation):
a. Threat Modeling: Verify that the SDLC includes documented AI-specific threat modeling. If not present, assess whether a documented rationale and alternative risk mitigation strategy exists.
b. Secure Coding Practices: Verify that the SDLC includes secure coding standards and guidance. If absent, evaluate the documented justification and compensating controls.
c. Open-Source Component Management: Verify that the SDLC incorporates a documented program for managing open-source components, including vulnerability scanning and license compliance. If not, assess the rationale and alternative practices.
d. Vulnerability Management: Verify that the SDLC integrates vulnerability management processes for application code, infrastructure, and third-party components. If missing, confirm the documented rationale and risk mitigation.
e. Security Testing: Verify that the SDLC includes regular security testing, including AI-specific testing (e.g., adversarial testing, LLM-specific tests). If not conducted, assess the rationale and compensating risk controls.
f. Secure Deployment: Verify that the SDLC includes secure deployment pipelines, configuration hardening, and use of version-controlled, auditable configuration tools (e.g., IaC templates). Confirm separation of secrets from application code, use of secure variables, and role-based deployment permissions. Best practice is to assess whether the SDLC is in line with recognized guidelines (e.g., OWASP).
g. Secure Key and Secret Management: Verify that the SDLC explicitly defines secure key management practices, including: secure generation and storage (e.g., HSMs, KMS services); access control and audit logging for key use; regular key rotation, revocation, and recovery mechanisms; scoped usage policies (least privilege for tokens); and avoidance of hardcoded keys or secrets in source code.
3. Software Development Lifecycle (SDLC) Evaluation: Evaluate whether the SDLC aligns with the organization's documented security requirements and governance expectations; confirm that the SDLC considers applicable regulatory requirements (e.g., data protection, AI-specific regulations); and assess alignment with those obligations.
4. Implementation Validation: Validate actual implementation of the SDLC by reviewing supporting documentation and corroborating with operational evidence (e.g., staff interviews, ticketing system analysis). Inspect sample implementation artifacts to verify the completeness and accuracy of practices, such as: documented threat models; code samples showing secure coding techniques; evidence of open-source component governance (e.g., SBOMs, SCA reports); vulnerability scanning reports, remediation tickets, and patch evidence; logs or reports from security testing tools (e.g., SAST, DAST, IAST, penetration tests); and secure deployment evidence (e.g., IaC templates, CI/CD security gates, monitoring alerts).
From CCM:
1. Examine policies and procedures for the definition of SDLC, security, and compliance requirements.
2. Examine the state of implementation of the SDLC process.
3. Verify that the SDLC implementation is in accordance with those requirements.
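The "avoidance of hardcoded keys or secrets in source code" practice in item 2.g can be enforced as an automated CI/CD security gate of the kind item 4 asks auditors to look for. The sketch below is a minimal, illustrative Python check; the patterns and function name are hypothetical examples, and a production gate would use a dedicated secrets-scanning tool with a much larger, tuned ruleset.

```python
import re

# Illustrative patterns only: common "key = 'literal'" assignments and the
# AWS access-key-ID prefix format. Real scanners ship far broader rulesets.
SECRET_PATTERNS = [
    re.compile(r"""(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['"][^'"]{8,}['"]"""),
    re.compile(r"AKIA[0-9A-Z]{16}"),
]


def find_hardcoded_secrets(source: str) -> list[str]:
    """Return each line of `source` that matches a secret pattern."""
    return [
        line.strip()
        for line in source.splitlines()
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]
```

Run as a pre-commit hook or pipeline step, a non-empty result would fail the build, producing exactly the kind of auditable gate evidence (logs, failed-build tickets) referenced under Implementation Validation.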
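The separation of secrets from application code called for in items 2.f and 2.g is often implemented by resolving credentials at runtime from an injected environment (populated by a vault, KMS integration, or CI secure variable) rather than from source. A minimal sketch, with a hypothetical variable name `APP_API_KEY`:

```python
import os


def get_api_key(var_name: str = "APP_API_KEY") -> str:
    """Resolve an API key from the runtime environment, never from source.

    The environment variable is expected to be populated by the deployment
    platform's secret store; failing loudly here prevents silently running
    with a missing or empty credential.
    """
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set; provision it via the secret store")
    return key
```

This keeps the credential out of version control and lets rotation and revocation (item 2.g) happen in the secret store without code changes.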
Standards mappings
ISO/IEC 42001: B.6.1.3, B.6.2.3, A.6.2.4, A.6.2.5, A.6.2.6
ISO/IEC 27001: A.8.25, A.8.27, A.8.28, A.8.29, A.8.30
Addendum
Mandates a documented SDLC process, with secure coding and security testing requirements as part of that process, and an explicit linkage to organizational security and risk objectives.
Article 9 (6) Article 9 (7) Article 9 (8) Article 17 (1) Annex XI (Section 1) (2) Annex XI (Section 2)
Addendum
N/A
GOVERN 4.1 GOVERN 4.2 MANAGE 3.2 MEASURE 2.8 MEASURE 2.9
Addendum
Define secure SDLC requirements for AI-based applications covering security-by-design principles, code security reviews, and vulnerability testing before deployment, as well as automated security testing integrated into CI/CD pipelines and compliance mapping for application security.
DEV-01
Addendum
N/A
AI-CAIQ questions (1)
Is a secure software development lifecycle (SDLC) process defined and implemented for application requirements analysis, planning, design, development, testing, deployment, and operation in accordance with security requirements defined by the organization?