AICM Atlas · CSA AI Controls Matrix
HRS · Human Resources
HRS-01 · Cloud & AI Related

Background Screening Policy and Procedures

Specification

Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for background verification of all new employees (including but not limited to remote employees, contractors, and third parties) according to local laws, regulations, ethics, and contractual constraints and proportional to the data classification to be accessed, the business requirements, and acceptable risk. Review and update the policies and procedures at least annually.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Team and expertise

Development

Supply Chain

Evaluation

Not applicable

Deployment

AI Services supply chain

Delivery

Not applicable

Retirement

Not applicable

Ownership / SSRM

Physical Infrastructure (PI)

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Model

Owned by the Model Provider (MP)

The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Shared Cloud Service Provider-Model Provider (Shared CSP-MP)

The CSP and MP are jointly responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer.

Application

Shared across the supply chain

Shared control ownership refers to responsibilities and activities related to LLM security that are distributed across multiple stakeholders within the AI supply chain, including the Cloud Service Provider (CSP), Model Provider (MP), Orchestrated Service Provider (OSP), Application Provider (AP), and Customer (AIC). These controls require coordinated actions, communication, and governance across all involved parties to ensure their effectiveness.

Implementation guidelines

[Applicable to all actors]
1. AI-Focused Screening: Verify AI expertise, data governance experience, and relevant certifications for anyone involved in AI development, orchestration, deployment, or management.

2. Compliance and Ethical Requirements: Ensure all candidates understand and agree to comply with legal, regulatory, and ethical standards related to data privacy and responsible AI.

3. Controlled Access: Grant access to critical AI systems (e.g., model endpoints, training data, admin consoles) only after successful completion of background checks. Maintain secure, confidential records of the verification process.

4. Periodic Updates: Review and update screening policies annually, or more frequently if AI risk profiles change, to align with evolving technologies and regulations.

5. Automated Verification & Human Oversight: When AI-enabled background-screening tools are used, a qualified reviewer must validate AI-generated results before onboarding or any adverse employment decision. Audit the accuracy and bias of the tools on a defined schedule.

Auditing guidelines

1. Policy Documentation and Approval: Verify that the CSP has a documented, approved background verification policy covering employees, contractors, and third parties who have physical or logical access to cloud infrastructure and data centers.

2. Defined Criteria: Verify that the policy defines consistent screening criteria: criminal history, employment history, education, professional licenses, and (if relevant) credit checks.

3. Transparency and Consent: Verify the policy is clearly communicated to applicants and written consent is obtained, respecting fairness and applicable laws.

4. Use of Providers: Verify that the CSP uses reputable, legally compliant background check providers.

5. Handling Adverse Findings: Verify that the CSP defines fair processes for addressing adverse findings, allowing candidates to respond or appeal.

6. Data Privacy and Security: Verify that personal data collected through background checks is securely handled in compliance with privacy regulations.

7. Review and Update: Verify that the policy is reviewed and updated at least annually or after significant changes to legal/regulatory or operational context.

8. Evidence of Compliance: Review a representative sample of hiring records to ensure background checks were completed before granting access to infrastructure or sensitive data.

9. Customer Assurance: Review third-party audit reports (e.g., SOC 2, ISO 27001) that include background verification controls, and confirm they are up to date and communicated to customers as evidence of compliance.

10. KPI Monitoring: Verify whether the CSP tracks metrics (e.g., turnaround time, discrepancies, compliance incidents) to improve the program.
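The KPI monitoring check above can be sketched with two of the example metrics: average check turnaround time and the share of hires whose check completed before access was granted. The sample records and thresholds are hypothetical; an auditor would draw these dates from actual hiring tickets.

```python
from datetime import date

# Hypothetical hiring records: (check ordered, check completed, access granted).
records = [
    (date(2024, 1, 2), date(2024, 1, 9), date(2024, 1, 10)),
    (date(2024, 2, 1), date(2024, 2, 12), date(2024, 2, 11)),  # access granted before completion
    (date(2024, 3, 5), date(2024, 3, 8), date(2024, 3, 9)),
]

# KPI 1: average turnaround time (days from ordering the check to completion).
turnaround = [(done - ordered).days for ordered, done, _ in records]
avg_turnaround = sum(turnaround) / len(turnaround)

# KPI 2: share of hires whose check completed on or before the access-grant date
# (maps to auditing guideline 8: checks completed before granting access).
compliant = sum(1 for _, done, access in records if done <= access)
compliance_rate = compliant / len(records)

print(f"avg turnaround: {avg_turnaround:.1f} days")     # avg turnaround: 7.0 days
print(f"pre-access compliance: {compliance_rate:.0%}")  # pre-access compliance: 67%
```

Tracking the second metric over time surfaces exactly the exceptions guideline 8 asks the auditor to sample for: any record where access preceded completion is a compliance incident to investigate.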

From CCM:
1. Examine policy for adequacy, currency, communication, and effectiveness.
2. Examine the process for selection of local laws, regulations, ethics, and contractual constraints, and for review of its output.
3. Verify that the background verification required is mapped to the risks and data classification.
4. Examine the policy and procedures for evidence of review at least annually.
5. Examine Human Resources tickets generated at hire that trigger background review, along with final confirmation from the third party conducting the review, showing that the check was completed and how exceptions or failed checks were addressed.

Standards mappings

ISO 42001 · No Gap
42001: A.2.2 AI Policy
42001: A.2.3 Alignment with other organizational policies
42001: A.2.4 Review of AI Policy
27001: 5.1 Leadership and commitment
27001: 5.2 Policy
27001: 7.3 Awareness
27001: 7.4 Communication
27001: 7.5 Documented Information
27001: 9.1 Monitoring, measurement, analysis and evaluation
27001: 9.3 Management Review
27001: A.5.1 Policies for information security
27001: A.5.4 Management responsibilities
27001: A.5.10 Acceptable use of information and other associated assets
27001: A.5.37 Documented operating procedures
27001: A.6.1 Screening
27002: 5.1 Policies for information security
27002: 5.4 Management responsibilities
27002: 5.10 Acceptable use of information and other associated assets
27002: 5.37 Documented operating procedures
27002: 6.1 Screening
Addendum

N/A

EU AI Act · Full Gap
No Mapping
Addendum

To close the gap, an addendum should include: a clause requiring background screening proportional to data/system sensitivity; applicability to employees, remote workers, contractors, and third-party personnel; requirements for policy documentation, approval, communication, and review cycles; and flexibility to align with local labor laws, ethics, and contractual frameworks.

NIST AI 600-1 · No Gap
MP-4.1-003
Addendum

N/A

BSI AIC4 · No Gap
HR-01
Addendum

N/A

AI-CAIQ questions (3)

HRS-01.1

Are new employee background verification policies and procedures (including but not limited to remote employees, contractors, and third parties) established, documented, approved, communicated, applied, evaluated, and maintained?

HRS-01.2

Are background verification policies and procedures designed according to local laws, regulations, ethics, and contractual constraints and proportional to the data classification to be accessed, business requirements, and acceptable risk?

HRS-01.3

Are background verification policies and procedures reviewed and updated at least annually?