Fabian Beigang (PhD student at LSE Philosophy) just published an article in Minds and Machines, the Journal for Artificial Intelligence, Philosophy and Cognitive Science.
In many domains, decision-making is now supported by machine learning algorithms. These algorithms generate models that attempt to predict or estimate relevant unobserved properties on the basis of historical data, and these predictions in turn inform the decision-making process. Automating decision-making in this manner, however, risks systematizing morally problematic decision patterns. This is especially worrying when minority groups bear disproportionate negative consequences of algorithmic decision-making, since it can reinforce existing biases and structural inequalities. Recognition of this problem has led to a wide-ranging discussion about algorithmic fairness.