
Annette Zimmermann (Princeton): “Algorithmic Injustice Beyond Discrimination”

16 January, 4:30 pm – 6:00 pm

Abstract: Algorithmic decision-making (ADM) is increasingly being used to support probabilistic, predictive decision-making procedures of political and social significance. This includes, for instance, decisions in the criminal justice process, law enforcement, benefits allocation, hiring, and credit lending. A growing body of research in computer science and statistics reveals that algorithmic ex ante judgments are often subject to algorithmic bias: a disproportionately high risk of error concerning predictive assessments about racial and religious minorities, women, and other socially disadvantaged groups. Furthermore, due to certain features of algorithmic decision-making in its contemporary form—such as low explainability, low contestability, and automation bias—there is reason to worry that ADM may not only reproduce large-scale social injustices, but that it may also exacerbate them.
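The disproportionate risk of error described above is commonly operationalised in the algorithmic fairness literature by comparing group-wise error rates, such as false positive rates, of a predictive model. A minimal sketch of that comparison (the data, group labels, and threshold of predictions below are entirely hypothetical, for illustration only):

```python
def false_positive_rate(y_true, y_pred):
    """Share of actual negatives that the model wrongly flags as positive."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    if not negatives:
        return 0.0
    return sum(1 for _, p in negatives if p == 1) / len(negatives)

# Invented outcomes (1 = adverse outcome occurred) and predictions
# (1 = flagged high risk), split by a sensitive attribute.
group_a = {"y_true": [0, 0, 0, 1, 0], "y_pred": [1, 1, 0, 1, 0]}
group_b = {"y_true": [0, 0, 0, 1, 0], "y_pred": [0, 0, 0, 1, 1]}

fpr_a = false_positive_rate(**group_a)  # 0.5  - half of group A's negatives misflagged
fpr_b = false_positive_rate(**group_b)  # 0.25 - a quarter of group B's negatives misflagged
```

A gap like this between groups is one way the "disproportionately high risk of error" mentioned above becomes measurable, though, as the abstract argues, such harm-based metrics may not exhaust what makes ADM unjust.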

One possible response is to attempt to identify all instances in which ADM leads to outcomes that impose discriminatory harms on persons. Then, one might think, we could try to improve the predictive accuracy of ADM to prevent future discriminatory harm, and we could compensate people for past discriminatory harm.

But aside from the fact that this response is subject to a range of complex feasibility constraints, it rests on unpersuasive normative assumptions. In particular, I show that it is wrong to assume that ADM is fair just if it does not lead to discriminatory harm. Due to the predictive nature of ADM, algorithmic assessments often involve not only the disproportionate distribution of particular harms, but also the disproportionate distribution of risks of harm. As recent contributors to the philosophical debate on the ethics of risk have argued, risk impositions may be morally wrongful even if they never eventuate in actual harm (‘pure risks’). I build on existing autonomy-based accounts of the wrongness of pure risk (the view that pure risks are wrong just if their presence objectionably constrains a risk-bearer’s valuable choice set) by arguing that an additional and underacknowledged reason why pure risk can be wrongful is if, and because, it is distributed unfairly. This suggests that, contrary to what the overwhelming majority of analyses of algorithmic bias seem to assume, the problem of algorithmic injustice reaches far beyond the problem of discriminatory harm caused by predictive inaccuracy. This constitutes an urgent gap in the literature. In response to this gap, my account systematically maps the types of cases in which the deployment of ADM can be wrongful even if its predictive assessments do not lead to discriminatory harm.


LAK 2.06
Lakatos Building
London, WC2A 2AE United Kingdom