
Personalised Law and Personalised Transactions: The Case for an Opt-In Model

This post is part of a special series including contributions to the OBLB Annual Conference 2022 on ‘Personalized Law—Law by Algorithm’, held in Oxford on 16 June 2022. This post comes from Birke Häcker, who participated in the panel on ‘Law by Algorithm’.

In chapter 4 of their book Law by Algorithm (Mohr Siebeck, 2021), entitled ‘Down by Algorithms? Siphoning Rents, Exploiting Biases, and Shaping Preferences: Regulating the Dark Side of Personalized Transactions’, reprinted from 86 U. Chi. L. Rev. 581 (2019), Horst Eidenmüller and Gerhard Wagner draw attention to three worrying features of personalised B2C transactions. They demonstrate, in terms of both overall efficiency and individual autonomy, the downsides of (1) first-degree price discrimination allowing businesses to siphon rents from consumers, (2) algorithms facilitating the systematic exploitation of behavioural biases, and (3) the preference-shaping effects entailed by an inbuilt ‘feedback loop’.

They argue that the best way of alleviating (at least) the first and third of these problems is to provide consumers with the right not to participate in personalised shopping experiences, or to do so only selectively. They propose that consumers should have the possibility to opt out of the data collection and data processing on which the relevant sales algorithms rely. This form of ‘light touch regulation’, Eidenmüller and Wagner argue (summary at pp 69 et seq of the book / pp 607 et seq of the article), preserves most of the efficiency gains from using such algorithms (‘the bright side of personalization’), while providing consumers with an ‘exit strategy’ in the form of a ‘right to anonymity’, so as to counteract some of the dangers entailed by algorithmic personalisation (‘the dark side of personalized transactions’).

The aim of the present contribution is not to challenge the many valid points that Eidenmüller and Wagner make. Instead, it seeks to build on their argument and to take it even further by proposing a slightly different, more stringent regulatory model. I argue that an ‘opt-in model’ of algorithmic personalisation is preferable to the proposed ‘opt-out model’, for three main interconnected reasons.

Firstly, given that from a socio-legal perspective our modern market economy functions through people acting within certain ‘roles’, eg as consumers, the starting point ought to be addressing them (only) within their respective roles. This postulate is underscored by considerations of both freedom and equality.

Secondly, from an economic point of view, the opt-in model most closely reflects the reality that people waiving their ‘right to anonymity’ are effectively ‘selling’ their data in order to gain an advantage over those who stay purely within their roles, ie, to snatch some special deal which is better than that available to the general public. Personalisation should never lead to a worse deal than the one otherwise on offer, and on the opt-in model suggested here it never will.

Thirdly, in determining what should normatively be the default position with respect to personalised law and personalised transactions, the rival approaches of basing the legal default either on public policy or the hypothetical will of the parties affected lead to the same conclusion: algorithmic personalisation ought only to take place where it is actively—and consciously—chosen by those affected.

My argument focuses primarily on the example discussed by Eidenmüller and Wagner of algorithms being used to personalise prices in certain transactions. Yet it also applies more broadly to personalised law in all cases where, using algorithms, a specific contract term is proposed by a business to a consumer on the basis of the latter’s identified preferences. In short, the opt-in model could and should prima facie be adopted whenever, within the realm of dispositive law, data is collected and processed to ‘tailor’ the terms of a deal to the individual consumer concerned.

(1) Respecting the Consumer ‘Role’

This argument derives from the observation that mass transactions in modern societies build on people adopting certain ‘roles’ to interact. We all wear a number of different hats in our everyday lives: we are parents, drivers, doctors or patients, employers or employees, teachers or students, or—crucially here—consumers purchasing goods and services for personal use. Characteristics that are relevant to, and legitimately taken into account in, one role (eg, our pre-existing illnesses for the purposes of determining medical treatment) are irrelevant and thus in principle taboo when people deal with one another in a different role. When we buy our groceries or book a flight, factors such as our race, gender or sexual orientation, disabilities, age or religious affiliation are rightly not taken into account. In fact, in many cases it would amount to unlawful discrimination if they were.

For instance, section 142(1) of the Equality Act 2010 (UK) provides that ‘[a] term of a contract is unenforceable against a person in so far as it constitutes, promotes or provides for treatment of that or another person that is of a description prohibited by this Act’. This does not necessarily exclude extending certain benefits on the basis of a protected characteristic (such as offering a reduced fare to children—contrast the now illegal ‘ladies’ nights’, where only women were afforded free entry into nightclubs). What it does rule out is selectively offering someone worse conditions than are made available to other members of the public, purely on the basis that the contract partner has a disclosed or visible characteristic of a type falling within the scope of the legislation.

In the algorithmic world of online transactions, the first question to ask is therefore whether and to what extent the tracked preferences directly or indirectly correlate to one or more of the protected characteristics. For instance, it seems likely that particular search patterns will give a good indication of the user’s age, gender and perhaps their race, sexual orientation, religious affiliation or an existing disability. If these factors are subsequently used to individualise contract terms or the contract price, this will quickly amount to (at least indirect) discrimination. And, even where no protected characteristic is at stake, one may legitimately query whether, in policy terms, businesses ought to be free to take into account data relating to consumers’ individual preferences without this being actively permitted by the person affected.

This policy concern is usually expressed as one of equality. While it may not be unlawful to discriminate on grounds other than the designated set of protected characteristics, it would nonetheless be wrong to encourage such personal preference-based discrimination. Amongst the market participants who act in a particular role, such as all consumers, we ought to encourage rather than discourage equal treatment.

To the equality aspect may be added another concern, namely that of liberty. Being identified (only) as a member of a particular category of market participants allows consumers to move freely behind the other party’s ‘veil of ignorance’ (or ‘veil of the anonymous customer’, as Eidenmüller and Wagner describe it). This relative ‘anonymity’ to some extent makes up for the structural imbalance between businesses and consumers in mass transactions. The consumer role thus fosters individual liberty. It is the same concern for privacy and personal freedom that also underlies modern data protection legislation. We already have in place a complex set of rules determining which aspects of data collection internet users have to agree to (‘opt-in’) and which ‘legitimate interests’ warrant only an ‘opt-out’ choice. The argument here is that, when it comes to using such data in order to personalise prices and to individualise contract terms, the ‘opt-in’ model is more appropriate.

(2) The Opt-In Model in Economic Perspective

Economically speaking, the opt-in model amounts to a consumer waiving the original ‘preference anonymity’ inherent in the consumer role in exchange for personalised price and contract term offers. In normal supermarkets, this happens when customers present a ‘club card’ or the like at checkout with a view to benefiting from a loyalty discount or special offers. Crucially, customers are free not to participate in the supermarket’s loyalty scheme or, if they do, not to reveal their affiliated status at every checkout. This amounts to an opt-in model, whether exercised wholesale or selectively.

Crucially, moreover, traditional loyalty discounts or club card offers operate so as to ensure that a customer will never pay more by waiving their relative ‘anonymity’—and thereby lifting the other party’s ‘veil of ignorance’—than would be paid by a consumer who chose not to do so. This is not to say that, in aggregate, supermarkets could not or would not use the information thus gained in order to adjust their pricing structure for everybody. It does however hold true for the individual transaction at hand. Eidenmüller and Wagner recognise the need to prevent the systematic exploitation of above-average personal reservation prices. They suggest that consumers should be able to ‘flip-flop between algorithmic shopping and spontaneous choice’. On their opt-out model, this requires consumers to ‘turn off the algorithm for a particular transaction’ (p 68/p 606). The opt-in model proposed here would go even further. It would ensure—and require—that the non-personalised price or contract terms are always displayed as the default, and it would then give consumers the choice to request or view a personalised (better) offer.
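To make the pricing constraint just described concrete, the following is a minimal, purely illustrative sketch in Python. All identifiers in it (Quote, standard_price, personalised_price, quote_price, opted_in) are hypothetical and are not drawn from Eidenmüller and Wagner’s book or article; the sketch simply assumes a retailer that holds a public standard price and, where the consumer has opted in, a data-driven personalised offer.

```python
# Purely illustrative sketch of the opt-in constraint described above.
# All identifiers are hypothetical; nothing here is taken from the book or article.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Quote:
    standard_price: float                # non-personalised price shown to everyone by default
    personalised_price: Optional[float]  # offer derived from the consumer's data, if any


def quote_price(quote: Quote, opted_in: bool) -> float:
    """Price charged under the opt-in model sketched here.

    The non-personalised price is always the default. A personalised offer is
    applied only if the consumer has actively opted in, and even then it may
    never be worse than the standard price available to the general public.
    """
    if not opted_in or quote.personalised_price is None:
        return quote.standard_price
    # Personalisation may only improve on the publicly available deal.
    return min(quote.personalised_price, quote.standard_price)


# Example: an opted-in consumer sees the better of the two prices;
# everyone else simply pays the displayed standard price.
q = Quote(standard_price=100.0, personalised_price=92.5)
assert quote_price(q, opted_in=False) == 100.0
assert quote_price(q, opted_in=True) == 92.5
```

On this sketch, the cap in the final line of the function is what distinguishes the opt-in model from mere transparency: even a consumer who reveals their data cannot, for the individual transaction at hand, be charged more than the standard price.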

Admittedly, the comparison between two prices will be much more straightforward than, for example, the comparison between the standard set of contract terms and a personalised terms offer. Yet, even when the latter effectively amounts to ‘upselling’ (eg, offering an extended warranty period to particularly risk-averse consumers at a higher price), the customer would always see what extra cost they have to incur in order to secure the personalised terms. Similarly, there would also be transparency where a price reduction is driven by a negative adjustment of terms (eg, a shortening of the warranty period to the statutory minimum where a business normally offers longer periods). 

Eidenmüller and Wagner rightly point out that, on their model, consumers with relatively higher reservation prices will be the ones typically opting out of personalisation, and as businesses know this, ‘[t]he new nonpersonalized market price will be higher than the prepersonalization market price’ (p 56/p 592). This effect on the price offered to the general public will be at least as—and perhaps even more—pronounced on the opt-in model, given the greater degree of transparency, but that is no cause for particular concern. If we are willing to accept the proposition that some consumers can be offered better prices than others, thus permitting a measure of unequal treatment within the sphere of private business, then it is an inevitable side effect. It also happens offline. Supermarkets may choose either to keep prices as low as possible across the board or to extend special offers to particular customers. Such cross-subsidies are part of normal pricing structures and not peculiar to online transactions. The opt-in model of algorithmic personalisation does not therefore seek to prevent them.

(3) Designing the Legal Default

Assuming that a business acts within the permissible limits of anti-discrimination and data protection law, there is a normative question of how the legal ‘default’ ought best to be designed. Here, too, the opt-in model has the edge over the opt-out model.

Broadly speaking, there are two main benchmarks when it comes to designing default legal rules, ie, rules that apply to a person unless and until that person chooses a different option. On the one hand, default rules may seek to replicate the arrangement that most people would choose most of the time. Orientating the default towards the hypothetical will of the average person (here: the average consumer) is an efficient way of minimising the time and energy spent on contracting. On the other hand, default rules can also be used as a regulatory tool furthering certain policy goals the state may have, by deliberately ‘nudging’ people in a particular direction. Only people who care enough one way or the other will make an active choice in the matter and thus potentially opt for something other than the default regime.

The two competing benchmarks for default rules will not infrequently conflict with one another. Regarding the algorithmic personalisation considered here, however, they would both seem to point in the same direction. Empirical evidence cited by Eidenmüller and Wagner suggests that most consumers ‘view individual price discrimination as unfair’ and, given a genuine choice, would opt against automatic personalisation (p 52/p 587). If so, then an opt-in model better reflects average preferences than an opt-out model.

In policy terms, the case for an opt-in model instead of an opt-out model is outlined above. While the state may not want to—or indeed not be able to—stem the tide of personalised pricing and contract terms completely, it is desirable in the interest of both equality and liberty to preserve consumer anonymity as much as possible in the face of AI tools and big data number-crunching. 

Ultimately, a simple means of cross-checking what is or is not appropriate in the virtual marketplace is to consider what is or is not appropriate in the traditional marketplace. To the extent that algorithmic personalisation merely replicates at a (much) more sophisticated level what goes on in modern supermarkets, with personal data being provided in exchange for special offers, it is hard to object to it. What we would not normally accept, and could not accept, is the complete absence of any non-personalised price or the suggestion that a customer should receive a personalised offer which is less advantageous than the non-personalised standard, merely because they have a relatively higher reservation price. To take individualisation this far would effectively explode the market economy as we know it. The opt-in model proposed here therefore ensures that the online and the offline shopping worlds remain in sync.

Birke Häcker holds the Statutory Chair of Comparative Law at the University of Oxford and is the Director of the Oxford Institute of European and Comparative Law. 
