Algorithmic Harm in Consumer Markets

Author(s)

Oren Bar-Gill
William J. Friedman and Alicia Townsend Friedman Professor of Law and Economics, Harvard Law School
Cass R. Sunstein
Robert Walmsley University Professor, Harvard University
Inbal Talgam-Cohen
Assistant Professor, Technion – Israel Institute of Technology

Sellers and service providers are increasingly using machine learning algorithms. Many uses should greatly benefit consumers. Suppose that algorithms can predict what goods and services people will buy and at what price. If algorithms give people information about beneficial health care products that are ideally suited to their particular situations (say, diabetes or heart disease), consumers might gain a great deal. But other uses of algorithms should not be welcomed. If algorithms exploit a lack of information or behavioral biases on the part of identifiable people, so as to induce them to buy ineffective baldness cures or pointless insurance policies, or to overpay for valuable goods and services, those people will be harmed. We use the term ‘algorithmic harm’ to capture this kind of injury. In a recent paper, we catalog the different ways in which algorithms are being or may be used in consumer markets and identify the market conditions under which these uses harm consumers. We then identify legal responses that can reduce algorithmic harm.

A. Categories of Harm

We consider (1) algorithmic price discrimination and (2) algorithmic quality discrimination (or product targeting). By discrimination we mean the setting of different prices for different consumers or the targeting of different products to different consumers. We characterize the incidence of algorithmic harm for each category, organizing the analysis into a 2×2 matrix.

 

                                    No Differentiation           Differentiation
                                    (Pre-Algorithmic World)

Perfectly Informed &
Perfectly Rational Consumers        PI-PR Benchmark              PI-PR Algorithmic Harm

Imperfectly Informed or
Imperfectly Rational Consumers      II-IR Benchmark              II-IR Algorithmic Harm

The two rows distinguish between two types of consumer markets—one that is populated by perfectly informed and rational consumers (PI-PR) and another that is populated by consumers who are imperfectly informed, imperfectly rational, or both (II-IR). Of course, these are theoretical archetypes, and we are dealing with a continuum, not a dichotomy. Real-world markets are populated by a mix of more- vs. less-informed and more- vs. less-rational consumers.

For each type of consumer market, we start with the ‘No Differentiation’ benchmark—a pre-algorithmic world, where sellers offer the same product at the same price to everyone. We then compare this benchmark to a world where large data sets and sophisticated algorithms allow for at least some degree of ‘Differentiation.’ Our overarching conclusion will be that algorithmic differentiation is generally beneficial in PI-PR markets, but often harmful in II-IR markets.

This conclusion relates to prior work on consumer harm that predates the rise of algorithms. First, we recognize that some kinds of differentiation occurred long before machine learning algorithms were commonplace. Our claim is that the increasing use of algorithms results in higher degrees of differentiation. Second, the risk that uninformed, imperfectly rational consumers might be exploited by unscrupulous sellers similarly predates the rise of algorithms. Here, again, we suggest that algorithms significantly amplify the risk, eg by enabling the identification of specific information and rationality deficits that affect the demand of individual consumers.

B. Algorithms and Discrimination Based on Race and Sex

Our conclusion—that algorithmic harm is concentrated in II-IR markets and, more specifically, that policymakers should focus on differentiation, or discrimination, based on the consumer’s information or rationality deficits—is different from that found in most prior work on algorithmic harm. That work has focused on the risk that algorithms will discriminate on the basis of race and sex, setting higher prices or offering inferior products to women and to members of minority groups. While acknowledging that concern, we argue that, at least in consumer markets, algorithms will often, though not always, reduce the risk of discrimination based on race and sex. It follows that scholars and policymakers should expand their focus beyond race- and sex-based discrimination, specifically to algorithmic discrimination on the basis of information and rationality deficits, ie to the risk that algorithms will set higher prices or offer inferior products to uninformed, biased consumers.

C. Legal Responses

We emphasize two main categories of algorithm-specific legal responses that might reduce algorithmic harm: (1) algorithmic transparency and (2) regulations policing the design and implementation of algorithms. Implementing these responses is especially challenging, given the increasing prevalence of opaque, machine-learning algorithms. Building on recent developments in computer science and in economics, we offer suggestions for how policymakers can open the algorithmic black box and create meaningful transparency, which can then be used to trigger market responses or regulatory scrutiny and to overcome doctrinal (mens rea-type) hurdles to liability for algorithmic harm. We also provide suggestions on how to police the design and implementation of these black-box algorithms, mainly through the regulatory imposition of non-discrimination constraints into the algorithm's code, for example constraints limiting differences in the outcomes experienced by imperfectly informed and imperfectly rational consumers relative to informed, rational consumers. Our discussion of legal responses can inform policymakers in the United States and around the world who are increasingly concerned about algorithmic harm in consumer markets.
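To make the idea of coding a non-discrimination constraint concrete, consider the following minimal sketch. It is an illustration of the general technique, not an implementation from our paper: the function name, its parameters, and the 5% cap are all hypothetical assumptions. The sketch assumes a regulator caps the personalized price at a bounded markup over the benchmark price that an informed, rational (PI-PR) consumer would face.

```python
# Hypothetical sketch of a non-discrimination constraint imposed on a
# pricing algorithm's code. All names and the cap value are illustrative
# assumptions, not taken from the paper.

def constrained_price(predicted_wtp: float,
                      benchmark_price: float,
                      max_markup: float = 0.05) -> float:
    """Cap the personalized price at a bounded markup over the benchmark.

    predicted_wtp   -- the algorithm's estimate of this consumer's
                       willingness to pay, which may reflect information
                       or rationality deficits
    benchmark_price -- the price a PI-PR consumer would face
    max_markup      -- regulatory cap on the permitted price gap,
                       eg 5% above the benchmark
    """
    cap = benchmark_price * (1 + max_markup)
    # The seller may still price below the cap (eg to offer discounts),
    # but cannot exploit deficits to charge above it.
    return min(predicted_wtp, cap)


if __name__ == "__main__":
    # A biased consumer whose predicted willingness to pay (120) far
    # exceeds the PI-PR benchmark (100) is charged at most 105.
    print(constrained_price(predicted_wtp=120.0, benchmark_price=100.0))
```

In practice, both the benchmark price and the permitted markup would themselves be contested regulatory choices; the sketch only shows where such a constraint would sit in the pricing code.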

***

Our focus is on algorithms deployed by sellers and service providers and the harm that they might impose on consumers. We note, however, that there are also consumer-side algorithms that can help consumers make better choices and thus mitigate the algorithmic harms that we identify. Examples include ‘digital butlers,’ like Alexa, Siri and Google Assistant, that can help consumers make purchasing decisions, and more specialized apps that compare prices and help identify attractive options. Without discounting the importance of consumer-side algorithms, we believe that structural asymmetries between sellers and buyers will prevent such algorithms from eliminating the harms that we identify.

Oren Bar-Gill is the William J. Friedman and Alicia Townsend Friedman Professor of Law and Economics at Harvard Law School.

Cass Sunstein is the Robert Walmsley University Professor at Harvard University.

Inbal Talgam-Cohen is an Assistant Professor at the Henry and Marilyn Taub Faculty of Computer Science, Technion – Israel Institute of Technology.
