AICM Atlas · CSA AI Controls Matrix
GRC · Governance, Risk and Compliance
GRC-11 · AI-Specific

Bias and Fairness Assessment

Specification

Regularly evaluate AI systems, models, datasets & algorithms for bias and fairness to ensure compliance with ethical standards.

Threat coverage

Model manipulation
Data poisoning
Sensitive data disclosure
Model theft
Model/Service Failure
Insecure supply chain
Insecure apps/plugins
Denial of Service
Loss of governance

Architectural relevance

Physical infrastructure
Network
Compute
Storage
Application
Data

Lifecycle

Preparation

Team and expertise

Development

Guardrails

Evaluation

Evaluation, Validation/Red Teaming, Re-evaluation

Deployment

AI applications, Orchestration

Delivery

Operations, Maintenance, Continuous monitoring, Continuous improvement

Retirement

Data deletion, Archiving

Ownership / SSRM

PI

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM)/GenAI technologies, services, or products they consume.

Model

Owned by the Model Provider (MP)

The model provider (MP) designs, develops, and implements the control as part of their services or products to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM). Model Providers are entities that develop, train, and distribute foundational and fine-tuned AI models for various applications. They create the underlying AI capabilities that other actors build upon. Model Providers are responsible for model architecture, training methodologies, performance characteristics, and documentation of capabilities and limitations. They operate at the foundation layer of the AI stack and may provide direct API access to their models. Examples: OpenAI (GPT, DALL-E, Whisper), Anthropic (Claude), Google (Gemini), Meta (Llama), as well as any customized model.

Orchestrated

Owned by the Orchestrated Service Provider (OSP)

The Orchestrated Service Provider (OSP) is responsible for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with Large Language Model (LLM)/GenAI technologies in the context of the services or products they develop and offer. The OSP is responsible and accountable for the implementation of the control within its own infrastructure/environment. If the control has downstream implications for users/customers, the OSP is responsible for enabling the customer and/or upstream partner in the implementation/configuration of the control within their risk management approach. The OSP is accountable for ensuring that its upstream providers (e.g., MPs) implement the control as it relates to the service/product developed and offered by the OSP. This refers to entities that create the technical building blocks and management tools that enable AI implementation. These can include platforms, frameworks, and tools that facilitate the integration, deployment, and management of AI models within enterprise workflows. These providers focus on model orchestration and offer services like API access, automated scaling, prompt management, workflow automation, monitoring, and governance rather than end-user functionality or raw infrastructure. They help businesses implement AI in a structured and efficient manner. Examples: AWS, Azure, GCP, OpenAI, Anthropic, LangChain (for AI workflow orchestration), Anyscale (Ray for distributed AI workloads), Databricks (MLflow), IBM Watson Orchestrate, and developer platforms like Google AI Studio.

Application

Owned by the Customer (AIC)

The Customer (AIC) is responsible and accountable for the design, development, implementation, and enforcement of the control to mitigate security, privacy, or compliance risks associated with the Large Language Model (LLM)/GenAI technologies, services, or products they consume.

Implementation guidelines

[All Actors]
1. Establish a process for regularly evaluating bias and fairness in AI models, data pipelines, and decision systems.

2. Use quantitative and qualitative methods to assess disparate impact across demographic groups (see the sketch after this list).

3. Document fairness metrics, thresholds, and mitigation steps taken.

4. Engage domain experts, ethicists, and legal teams in reviewing fairness practices.

5. Make fairness assessments auditable and repeatable through documentation and tool use.
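As a concrete illustration of guideline 2, here is a minimal sketch of a quantitative disparate-impact check in Python. The data, group labels, helper names, and the 0.8 cutoff (the informal "four-fifths rule") are illustrative assumptions, not part of the AICM control text.

```python
# Minimal sketch: compare favorable-outcome rates across demographic groups.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, favorable: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for group, favorable in outcomes:
        counts[group][0] += int(favorable)
        counts[group][1] += 1
    return {g: fav / tot for g, (fav, tot) in counts.items()}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model decisions tagged with a demographic group.
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 42 + [("B", False)] * 58)

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates, f"disparate impact ratio = {ratio:.2f}")

# Assumed review threshold: flag when the ratio drops below 0.8.
if ratio < 0.8:
    print("Potential disparate impact: document findings and trigger mitigation review.")
```

Recording the computed metrics, the threshold used, and the resulting decision alongside each evaluation (guidelines 3 and 5) is what makes the assessment auditable and repeatable.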

Auditing guidelines

1. Bias and Fairness Policy: Confirm a documented policy exists for assessing and mitigating bias in end-user applications, aligned with Responsible AI principles and relevant regulations.

2. Representative User Contexts: Ensure training and fine-tuning consider diverse user demographics, languages, and contexts relevant to the application’s global reach.

3. Fairness in Output Behavior: Verify that models are evaluated and adjusted to reduce biased or harmful outputs across different use cases and user interactions.

4. Real-Time Monitoring and Safeguards: Confirm mechanisms exist to detect, log, and respond to bias-related issues in real-time usage (see the sketch after this list).

5. Transparency to Users: Ensure that limitations, fairness considerations, and usage guidelines are clearly communicated to users (e.g., through system cards, help docs).
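One hedged illustration of item 4: a rolling-window monitor that tracks the rate of bias-flagged outputs per user segment and raises an alert on drift. The `BiasMonitor` class, segment labels, window size, and 5% threshold are all hypothetical, and the upstream classifier that produces the `flagged` signal is out of scope here.

```python
# Illustrative real-time safeguard: per-segment rolling rate of bias flags.
import logging
from collections import defaultdict, deque

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("bias-monitor")

class BiasMonitor:
    def __init__(self, window=500, threshold=0.05, min_samples=50):
        self.threshold = threshold
        self.min_samples = min_samples
        self.events = defaultdict(lambda: deque(maxlen=window))

    def record(self, segment: str, flagged: bool) -> None:
        """Record whether an output was flagged by a bias classifier."""
        events = self.events[segment]
        events.append(flagged)
        rate = sum(events) / len(events)
        log.info("segment=%s flagged=%s rolling_rate=%.3f", segment, flagged, rate)
        if len(events) >= self.min_samples and rate > self.threshold:
            # Hook point: page on-call, open a ticket, or throttle the feature.
            log.warning("segment=%s flag rate %.1f%% exceeds threshold",
                        segment, 100 * rate)

monitor = BiasMonitor()
monitor.record("locale=es", flagged=False)  # called once per model response
```

The logged events also provide the documentation trail an auditor would review when verifying this control.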

Standards mappings

ISO 42001 · No Gap
42001: B.5.4 (Assessing AI system impact on individuals or groups of individuals)
Addendum

N/A

EU AI Act · No Gap
Article 10(2)(f)
Addendum

N/A

NIST AI 600-1 · No Gap
MG-3.2-004
GV-1.2-002
MS-2.11-002
MS-2.11-004
MS-2.6-002
MS-3.3-003
Addendum

N/A

BSI AIC4 · Partial Gap
BI-01
DQ-03
Addendum

Neither AIC4 nor C5 addresses ethical topics explicitly, but several related aspects can be found in the Data Quality (DQ) and Bias (BI) criteria.

AI-CAIQ questions (1)

GRC-11.1

Are AI systems, models, datasets & algorithms regularly evaluated for bias and fairness to ensure compliance with ethical standards?