AICM AtlasCSA AI Controls Matrix
AIS · Application & Interface Security
AIS-15AI-Specific

Prompt Differentation

Specification

Implement mechanisms enabling the model to clearly distinguish user-provided input instructions from data and system instructions (e.g., system prompts).

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Not applicable

Development

Design, Guardrails

Evaluation

Validation/Red Teaming

Deployment

Orchestration, AI applications

Delivery

Operations, Continuous monitoring

Retirement

Not applicable

Ownership / SSRM

PI

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Model

Owned by the Model Provider (MP)

The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic(Claude), Google(Gemini), Meta(Llama), as well as any customized model.

Orchestrated

Shared Orchestrated Service Provider-Application Provider (Shared OSP-AP)

The OSP and AP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Owned by the Application Provider (AP)

The Application Provider (AP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The AP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications on the users/customers, the AP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The AP is accountable for carrying out the due diligence on its upstream providers (e.g MPs, Orchestrated Services) to verify that they implement the control as it relates to the service/product develop and offered by the AP. These providers build and offer end-user applications that leverage generative AI models for specific tasks such as content creation, chatbots, code generation, and enterprise automation. These applications are often delivered as software-as-a-service (SaaS) solutions. These providers focus on user interfaces, application logic, domain-specific functionality, and overall user experience rather than underlying model development. Example: OpenAI (GPTs,Assistants), Zapier, CustomGPT, Microsoft Copilot (integrated into Office products), Jasper (AI-driven content generation), Notion AI (AI-enhanced productivity tools), Adobe Firefly (AI-generated media), and AI-powered customer service solutions like Amazon Rufus, as well as any organization that develops its AI-based application internally.

Implementation guidelines

[Applicable to all providers (CSP, MP, OSP, AP) excluding AIC unless otherwise specified]
1. Establish provider-specific policy scopes for implementing mechanisms to distinguish user-provided input instructions from data and system instructions. Ensure policies address input formatting techniques (e.g., border strings, data marking) and standard templates to reduce ambiguity and prevent incorrect prioritization across model interactions.

2. Define roles for overseeing input distinction mechanisms, involving cross-functional teams (e.g., AI developers, security engineers, compliance). Set approval workflows with senior management and technical leads to ensure alignment with organizational security and operational standards.

3. Create structured documentation standards for input distinction policies, including procedures for implementing border strings (e.g., ###USER###), data marking (e.g., metadata tags), and standard templates for user/system/data inputs. Use templates to document formatting rules, template designs, and validation checks.

4. Implement a review process for input distinction policies. Conduct reviews at least annually or after significant changes (e.g., new model architectures, updated input handling, identified exploits). Align with standards like OWASP Secure Input Handling, NIST 800-53 (SI-10), and AI-specific frameworks like NIST AI RMF, where applicable.

5. Define requirements for communicating input distinction policies: distribute formal documentation, mandate training for teams developing or managing models (e.g., developers, operators), and run awareness campaigns on secure input handling. Ensure accessibility via internal portals and comprehension across teams.

6. Set policies for quality assurance in input distinction, including requirements for automated validation of input formatting (e.g., regex checks for border strings), testing for ambiguity or misprioritization (e.g., adversarial input tests), and monitoring for incorrect instruction handling (e.g., logging misinterpretations). Require audit logs for input processing events.

Auditing guidelines

Focus: The Cloud Service Provider/AI Processing Infrastructure Provider has implemented effective infrastructure, frameworks, and capabilities that enable their customers to clearly distinguish user-provided input instructions from data and system instructions when deploying AI models on their platforms.

1. Inquiry with Control Owners

1.1 Interview Infrastructure and Platform Leadership: Interview AI platform architects, ML infrastructure engineers, and security specialists responsible for AI service development and model deployment frameworks. Obtain and review the organization's approach to supporting instruction separation, including: model serving framework capabilities for instruction boundary management, infrastructure support for secure prompt handling, AI platform SDK security features, hardware acceleration for secure token processing, virtualization boundaries for multi-tenant instruction isolation, and reference architectures for secure model deployment. Verify documented capabilities exist for enabling deployed models to maintain separation between user inputs and system 
instructions through platform features such as secured parameter passing, hardware-accelerated token processing or 
infrastructure-level isolation mechanisms.

1.2 Review AI Platform Implementation: Examine documentation describing the platform's support for instruction separation, including: model serving container security features, managed AI service input handling capabilities, SDK implementations for boundary enforcement, infrastructure templates for secure model deployment, hardware acceleration features for token processing, virtualization controls for instruction isolation, reference implementations for secure prompt handling. Assess how the infrastructure provider's AI platforms and services establish foundations for reliable instruction separation in customer-deployed models.

1.3 Assess AI Service Design: Review mechanisms implementing instruction separation at the service level, including: managed model endpoint configuration options, input/output processing pipeline security, parameter handling in API gateway services, request validation in model serving infrastructure, token processing optimizations, hardware acceleration for boundary enforcement, memory isolation between processing stages. Evaluate how the cloud provider's AI services support and enforce instruction boundaries for customer workloads and deployments.

1.4 Evaluate Customer Guidance and Security Controls: Review how the provider supports secure AI deployments through: security best practice documentation for model serving, reference architectures for secure prompt handling, infrastructure templates with security controls, compliance frameworks for AI workloads, monitoring capabilities for instruction boundary violations, infrastructure scanning for vulnerable deployments, and service configuration validation.

2. Obtaining and Verifying the Population of Records

2.1 Define the Complete Population of AI Infrastructure Offerings: Obtain a comprehensive inventory of AI infrastructure and platform services, including: general-purpose GPU/TPU compute services, AI-optimized virtual machine types, container services for model deployment, managed model serving platforms, AI development environments, model deployment frameworks and SDKs, hardware accelerators for AI workloads, and AI service integration components.

2.2 Verify Population Completeness: Cross-reference the AI service inventory against: service catalogs and documentation, infrastructure deployment templates, pricing and capability documentation, technical specifications for AI instances, SDK and API documentation, reference architecture publications, service configuration options, and hardware accelerator specifications. Ensure the inventory covers all infrastructure, platforms, and services where secure model deployment and instruction handling are relevant.

2.3 Categorize Infrastructure Components by Risk Level: Segment the AI infrastructure offerings based on: level of abstraction (IaaS, PaaS, SaaS), multi-tenancy characteristics, model deployment patterns, customer exposure and usage volume, integration with sensitive data systems, level of provider management, and hardware acceleration capabilities. This risk-based categorization should guide assessment depth for each infrastructure component.

3. Inspection of Evidence

3.1 Infrastructure Support Implementation Review: Select a representative sample of AI infrastructure offerings based on risk levels and verify. For Platform-Level Separation Support, examine how the infrastructure platform supports instruction separation through features such as: memory isolation for different prompt components, hardware-accelerated token processing, secure enclaves for sensitive instructions, virtualization boundaries for inference isolation, request parameter validation mechanisms, and input sanitization capabilities in serving infrastructure. For AI Service Security Implementation, verify that managed AI services support secure deployment: service configuration options for input separation, managed API gateways with validation capabilities, token processing security features, parameter handling and validation, input format enforcement options, and logging capabilities for boundary violations. For SDK and Framework Support, confirm platform SDKs and frameworks encourage security: helper libraries for proper instruction formatting, template implementations with separation patterns, input validation components, security-focused reference code, configuration validation tools, and default secure configurations.

3.2 Deployment Infrastructure Assessment: Review how the infrastructure supports secure model deployment. For Container and VM Security, verify security features in deployment environments: container isolation for model serving, memory protection mechanisms, process boundary enforcement, resource governance supporting security, configuration validation capabilities, and default security hardening. For Infrastructure Template Analysis, examine infrastructure-as-code templates to confirm: security best practices in reference architectures, implementation of isolation patterns, proper configuration of service boundaries, inclusion of monitoring and logging, integration of security services, and validation of deployment configurations.

3.3 Security Testing for Infrastructure Support: Perform targeted testing of infrastructure security features. For Isolation Validation Testing, verify infrastructure isolation through: multi-tenant boundary testing, memory isolation verification, process separation validation, resource governance effectiveness, hardware acceleration security, and performance under security controls. For Platform Security Analysis, evaluate platform security features supporting: proper enforcement of configured boundaries, validation of input parameters, detection of potential boundary violations, performance impact of security controls, resource isolation between tenants, and maintaining security during scaling operations.

3.4 Documentation and Customer Guidance: Review supporting materials for infrastructure users. For Deployment Documentation, verify existence and quality of: security best practices for model deployment, guidance on configuring secure model endpoints, examples of secure infrastructure configuration, warning about insecure deployment patterns, templates implementing security controls, and architecture diagrams showing security boundaries. For Security Technical Guides, assess infrastructure security documentation for inclusion of: model serving security considerations, input handling best practices, configuration validation guidance, monitoring recommendations, incident response guidance, and performance optimization with security.

3.5 Hardware Acceleration Security: Evaluate security features of specialized AI hardware. For Accelerator Security Implementation, verify that hardware accelerators support security: memory protection features, isolation between workloads, secure parameter handling, token processing security features, resource governance capabilities, performance with security controls enabled.

Hardware-Software Integration: Review security of the hardware-software stack: driver security features, firmware security controls, API security for accelerator access, resource allocation security, memory management security, and monitoring and telemetry for security events.

4. Evaluation and Reporting

4.1 Infrastructure Support Effectiveness Assessment: Evaluate how well the implementation: enables secure deployment of models with instruction boundaries, provides performance-optimized security controls, offers appropriate levels of isolation for different workloads, balances security with AI workload performance needs, and addresses various deployment patterns and architectures.

4.2 Platform Strategy Assessment: Assess the effectiveness of the overall approach based on: integration of security throughout the AI infrastructure stack, support for various model deployment patterns, appropriate defaults encouraging security, compatibility with industry security practices, and evolution of security features with new AI capabilities.

4.3 Documentation and Guidance Adequacy: Evaluate the quality of security documentation, including: clarity of secure deployment guidance, completeness of configuration recommendations, integration of security into reference architectures, support for customers implementing secure deployments, and ongoing communication about security considerations.

4.4 Continuous Improvement Mechanisms: Evaluate processes for enhancing infrastructure security through: regular security testing of infrastructure components, analysis of customer deployment patterns and challenges, integration of lessons from security incidents, research into improved security architectures, and iterative enhancement of security features.

Standards mappings

ISO 42001Partial Gap
ISO 42001: A.6.2.3 - Documentation of AI system design
and development
ISO 27001: A.8.27 - Secure system architecture and engineering principles
Addendum

A control requiring clear distinction and protection mechanisms between system-level prompts, user inputs, and AI configuration Prompt injection testing, monitoring, and failure recovery mechanisms Design-time separation of concerns enforced through interfaces, APIs, or instruction schema.

EU AI ActFull Gap
No Mapping
Addendum

The EU AI Act does not provide requirements for AIS-15: Implement mechanisms to clearly distinguish user-provided input instructions from data and system instructions (prompt differentiation).

NIST AI 600-1Partial Gap
GV-4.1-001
Addendum

Include use or mention of "prompt differentiation."

BSI AIC4Partial Gap
C4 SR-06
Addendum

No C4 control speaks specifically to the AIS-15 topic of distinguishing between user and system prompts.

AI-CAIQ questions (1)

AIS-15.1

Are mechanisms implemented to enable the model to clearly distinguish user-provided input instructions from data and system instructions (e.g., system prompts)?