Safety & Governance · Core

Fairness

Definition
The principle that AI systems should produce equitable outcomes across demographic groups. Achieving fairness requires careful dataset curation, evaluation metrics, and ongoing auditing.
Why it matters
Fairness is not just an ethical imperative; it is a market-access requirement. AI systems that produce biased outcomes get sued, fined, and banned from regulated markets. The challenge is that fairness is mathematically nuanced: common fairness criteria (demographic parity, equalized odds, individual fairness) cannot in general all be satisfied at once, forcing explicit trade-off decisions. For product leaders, fairness requires ongoing investment, not a one-time audit. Models can develop new biases as user populations shift, and fairness in one market may not transfer to another. Companies that treat fairness as a continuous engineering discipline, not a compliance checkbox, build more durable products.
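To make the trade-off concrete, here is a minimal sketch showing how two of those criteria can disagree on the same predictions. The toy data and helper functions are assumptions for illustration, not drawn from any specific toolkit or from the examples above:

```python
# Toy illustration (assumed data): demographic parity can hold while
# equalized odds is violated, because the groups have different base rates.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in error rates (FPR for label 0, TPR for label 1) across groups."""
    gaps = []
    for label in (0, 1):
        mask = y_true == label
        rate_a = y_pred[mask & (group == 0)].mean()
        rate_b = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_a - rate_b))
    return max(gaps)

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

print(demographic_parity_gap(y_pred, group))      # 0.0  -> equal selection rates
print(equalized_odds_gap(y_true, y_pred, group))  # ~0.33 -> unequal error rates
```

Both groups are selected at the same rate, yet their true-positive and false-positive rates differ, so optimizing one metric does not guarantee the other.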
In practice
NYC's Local Law 144 (effective July 2023) requires annual bias audits of AI hiring tools, with results published publicly. The EU AI Act categorizes employment, credit scoring, and law enforcement AI as high-risk, with mandatory fairness assessments. Companies like Pymetrics (now Harver) pioneered auditable AI hiring tools with published fairness metrics across gender and race. IBM's AI Fairness 360 toolkit provides open-source implementations of fairness metrics and bias mitigation algorithms. In practice, most companies use a combination of pre-deployment audits, post-deployment monitoring, and human review processes for high-stakes decisions.
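As a rough illustration of what a pre-deployment audit can look like, the sketch below computes impact ratios in the style of a Local Law 144 selection-rate analysis. The column names, data, and the 0.8 flag threshold (the EEOC "four-fifths rule," which the law itself does not mandate) are assumptions for illustration:

```python
# Hedged sketch of a selection-rate audit: impact ratio = a group's selection
# rate divided by the most-selected group's rate. Data and columns are invented.
import pandas as pd

def impact_ratios(df, group_col="group", selected_col="advanced_to_interview"):
    """Per-group selection rate relative to the highest-rate group."""
    rates = df.groupby(group_col)[selected_col].mean()
    return rates / rates.max()

audit = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "advanced_to_interview": [1, 1, 1, 0, 1, 0, 0, 0],
})

ratios = impact_ratios(audit)
print(ratios)                # A: 1.00, B: 0.33
print(ratios[ratios < 0.8])  # groups falling below the four-fifths rule
```

In a real audit this table would be published per the law's reporting requirements and paired with post-deployment monitoring, since the ratios can drift as the applicant population changes.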
