A theory of justice for AI models making decisions about employment, lending, education, criminal justice, and other important social goods.

Decisions about important social goods like education, employment, housing, loans, healthcare, and criminal justice are becoming increasingly automated with the help of AI. But because AI models are trained on data that reflects historical inequalities, they often produce unequal outcomes for members of disadvantaged groups. In AI Fairness, Derek Leben draws on traditional philosophical theories of fairness to develop a framework for evaluating AI models, which he calls a Theory of Algorithmic Justice, inspired by the Theory of Justice developed by the American philosopher John Rawls.

For several years now, researchers who design AI models have investigated the causes of inequalities in AI decisions and proposed techniques for mitigating them, formalized as fairness metrics. It turns out that in most realistic conditions it is impossible to satisfy all of these metrics simultaneously. Because of this, companies using AI systems will have to choose which metric they consider the correct measure of fairness, and regulators will need to determine how to apply existing laws to AI systems. Leben provides a detailed set of practical recommendations for companies looking to evaluate their AI systems and for regulators thinking about laws around AI systems, and he offers an honest analysis of the costs of implementing fairness in AI systems, as well as of when these costs may or may not be acceptable.
AI Fairness: Designing Equal Opportunity Algorithms
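The impossibility claim in the description refers to a well-known mathematical result (due to Chouldechova and to Kleinberg, Mullainathan, and Raghavan): when two groups have different base rates, common fairness metrics such as predictive parity and equal error rates cannot all hold at once unless the classifier is perfect. Below is a minimal numeric sketch of that tension; the group labels, counts, and helper function are hypothetical illustrations, not taken from the book.

```python
# Minimal numeric sketch of the fairness-metric impossibility result
# (cf. Chouldechova 2017; Kleinberg, Mullainathan & Raghavan 2016).
# All group names and confusion-matrix counts below are hypothetical.

def rates(tp, fp, fn, tn):
    """Return (PPV, TPR, FPR) from confusion-matrix counts."""
    ppv = tp / (tp + fp)  # precision: P(actually positive | flagged)
    tpr = tp / (tp + fn)  # true positive rate
    fpr = fp / (fp + tn)  # false positive rate
    return ppv, tpr, fpr

# Group A: 100 people, base rate 30/100; classifier flags 25, 20 correctly.
ppv_a, tpr_a, fpr_a = rates(tp=20, fp=5, fn=10, tn=65)

# Group B: 100 people, base rate 60/100; classifier flags 50, 40 correctly.
ppv_b, tpr_b, fpr_b = rates(tp=40, fp=10, fn=20, tn=30)

print(f"Group A: PPV={ppv_a:.2f} TPR={tpr_a:.2f} FPR={fpr_a:.2f}")
print(f"Group B: PPV={ppv_b:.2f} TPR={tpr_b:.2f} FPR={fpr_b:.2f}")
# Both groups get PPV = 0.80 and TPR = 0.67, yet the false positive
# rates differ (0.07 vs 0.25). With unequal base rates, no imperfect
# classifier can equalize all three metrics across groups at once.
```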