
Price Discrimination, Algorithmic Decision-making, and European Non-discrimination Law

Author(s)

Frederik Zuiderveen Borgesius
Professor of ICT and Law at the Interdisciplinary Research Hub on Security, Privacy, and Data Governance (iHub), Radboud University Nijmegen (the Netherlands)


In a new paper, 'Price Discrimination, Algorithmic Decision-making, and European Non-discrimination Law' (to be found here), I examine the problem of discriminatory effects of algorithmic decision-making, using online price differentiation as an example. With online price differentiation, a company charges different people different prices for identical products, based on information the company has about those people. The paper’s main question is: to what extent can EU non-discrimination law protect people against online price differentiation and algorithmic decision-making? The paper finds that non-discrimination law could help to protect people, but that this legal instrument also has severe weaknesses in the context of algorithmic decision-making.

Suppose that an online bookstore adapts the prices of its books to the consumer’s location (based on his or her IP address). The store differentiates its prices to improve profit. It turns out that people pay, on average, 20% extra if they live in streets where a majority of the people have a Roma background. We assume that the store does not intend to discriminate against Roma, and that the prices are independent of postage costs, taxes, etc. (This is loosely based on a real-life example in the US.)
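To make the mechanism concrete, here is a minimal Python sketch of such a pricing rule. Everything in it (the base price, the postal areas, the multipliers) is invented for illustration; the point is that the rule itself never refers to ethnicity, yet it still produces the disparate outcome described above.

```python
# Minimal sketch of location-based price differentiation.
# All names and numbers are hypothetical; a real system would derive
# multipliers from demand data, without any reference to ethnicity.

BASE_PRICE = 20.00  # hypothetical list price of a book, in euros

# Profit-driven multiplier per postal area, e.g. learned from past sales.
# If area "XY12" happens to be a street where most residents have a
# Roma background, those residents pay 20% extra, without anyone
# intending it.
AREA_MULTIPLIER = {
    "AB34": 1.00,
    "XY12": 1.20,
}

def quoted_price(postal_area: str) -> float:
    """Price shown to a visitor, based only on IP-derived location."""
    return round(BASE_PRICE * AREA_MULTIPLIER.get(postal_area, 1.00), 2)

print(quoted_price("AB34"))  # 20.0
print(quoted_price("XY12"))  # 24.0
```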

Non-discrimination law, in particular the prohibition of indirect discrimination, can protect people against algorithmic discrimination. Roughly speaking, indirect discrimination occurs when a practice is neutral at first glance but ends up discriminating against people with a protected characteristic, such as a certain ethnic background (cf. Article 2(b) of the EU Racial Equality Directive). In the United States, indirect discrimination is referred to as 'disparate impact'.

According to the EU Directive, the prohibition does not apply where a differentiation is 'objectively justified'. Whether such a justification applies is context-dependent and requires an intricate proportionality test. Such a nuanced open norm has advantages, but the nuance comes at the cost of clarity, which often makes the prohibition of indirect discrimination difficult to apply in practice.

In addition, the paper shows that EU non-discrimination law has weaknesses when applied to algorithmic decision-making. First, algorithmic indirect discrimination can remain hidden. Consumers may not realise that they pay more than others. Even if some consumers discovered that they were paying more than others, they would still not know that Roma in general pay more, so they would not know about the indirect discrimination. Moreover, it would be difficult for a consumer to prove the indirect discrimination.

Second, the EU non-discrimination directives only protect people against discrimination on the basis of certain protected characteristics (such as ethnicity and gender). However, algorithmic systems can generate new categories of people based on seemingly innocuous characteristics, such as web-browser preference or postal code, or on more complicated categories that combine many data points. An online store may, for instance, find that most consumers using a certain web browser pay less attention to prices; the store can then charge those consumers extra. This type of differentiation could evade non-discrimination law, as browser type is not a protected characteristic, but such differentiation could still be unfair.
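A toy clustering example can illustrate how such categories arise. The data, the features, and the use of k-means below are purely hypothetical, not a method described in the paper; they only show that a segment like 'pays little attention to prices' can be derived from innocuous data points and is named in no discrimination statute.

```python
# Hypothetical sketch: deriving consumer segments from innocuous data.
# The features, values, and choice of k-means are all illustrative.
import numpy as np
from sklearn.cluster import KMeans

# Each row describes a visitor: [uses browser X, session minutes,
# number of price-comparison pages viewed].
visitors = np.array([
    [1, 12.0, 0],   # long sessions, never checks price pages
    [1, 10.5, 1],
    [0,  3.0, 7],   # quick visits, many price comparisons
    [0,  2.5, 9],
])

# Cluster visitors into two segments the law has no name for; the
# low-price-attention segment can then be charged extra.
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(visitors)
print(segments)  # e.g. [1 1 0 0]
```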

Third, algorithmic decision-making can reinforce social inequality. For example, in some cases, algorithmic pricing has led to higher prices for poor people. The EU non-discrimination directives do not protect people against discrimination on the basis of financial status.

Fourth, non-discrimination law is silent about algorithmic decisions based on incorrect predictions, even though such decisions can be unfair. Algorithmic decision-making often entails applying a predictive model to individuals. An example of a predictive model is: ‘90% of the people living in postal code XYZ do not pay attention to prices.’ Suppose that an online store raises its prices for consumers in that area based on this predictive model. The company then also raises the prices for the 10% who do care about prices. For instance, the postal code could refer to an area with mostly wealthy people who don’t pay much attention to prices. But a poor student renting a room in that street (among the 10% who do care about prices) will also pay the higher price.
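The arithmetic of that example is easy to show in code. Only the 90/10 split comes from the example above; the base price and the surcharge below are invented for illustration.

```python
# Hypothetical numbers: only the 90/10 split comes from the example
# in the text; the base price and surcharge are invented.
BASE_PRICE = 20.00
MARKED_UP = round(BASE_PRICE * 1.15, 2)  # price after the area surcharge

n_residents = 100
n_price_sensitive = 10  # the 10% the prediction gets wrong

# The model sees only the postal code, so it applies one decision to all:
print(f"All {n_residents} residents of XYZ pay {MARKED_UP:.2f} "
      f"instead of {BASE_PRICE:.2f}; {n_price_sensitive} of them "
      f"(including the hypothetical poor student) are mispriced.")
```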

To conclude, non-discrimination law can help to protect people against discriminatory algorithmic decisions. So at least in the short term, properly enforcing non-discrimination law is important. As my previous research has shown, properly enforcing data protection law can also protect people against algorithmic discrimination.

In the long term, additional regulation is probably necessary. More research and debate are needed on the question of how people should be protected against algorithmic discrimination.

 

Frederik Zuiderveen Borgesius is a professor of ICT and Law at the Interdisciplinary Research Hub on Security, Privacy, and Data Governance (iHub) at Radboud University Nijmegen, the Netherlands.
