
Law and Autonomous Systems Series: Toward a Consumer Contract Law for an Algorithmic Age

Author(s)

Lauren Scholz

Proprietary, sophisticated computer algorithms are a hidden yet central element of how we work and play today. Businesses use algorithms internally to make decisions and improve their service offerings. And increasingly, businesses even use algorithms in contract formation.

In a recent article, I coined the term “algorithmic contract” to describe contracts in which an algorithm determines the rights and responsibilities of a party by acting as either a gap-filler or a negotiator for the company in the process of contract formation. In gap-filler algorithmic contracts, the parties agree that an algorithm, operating at some point either before or after the contract is formed, will determine some term in the contract. An example of this is a company’s purchase of a good on Amazon.com: Amazon has standard form terms and conditions for all of its buyers, but sophisticated proprietary algorithms determine the good’s exact price at any given time for each user.
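
To make the gap-filling mechanic concrete, here is a minimal sketch of how a pricing algorithm might supply the open price term at the moment of purchase. This is not Amazon’s actual system; the inputs, weights, and function names are invented purely for illustration.

```python
# Hypothetical sketch of a gap-filler algorithm: the parties have agreed
# to standard terms, and this function supplies the one open term (price)
# at the moment of contract formation. All inputs and logic are invented.

def fill_price_term(base_price: float, demand_index: float,
                    competitor_price: float, user_discount: float) -> float:
    """Return the price term for this buyer at this moment."""
    price = base_price * demand_index            # adjust for current demand
    price = min(price, competitor_price - 0.01)  # undercut observed rivals
    price *= (1.0 - user_discount)               # per-user adjustment
    return round(price, 2)

# The agreement references "the listed price at checkout"; the algorithm,
# not a human, determines what that term turns out to be.
print(fill_price_term(base_price=25.00, demand_index=1.15,
                      competitor_price=27.50, user_discount=0.05))  # 26.12
```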

In negotiator algorithmic contracts, one or more parties use algorithms as negotiators before contract formation. The algorithm chooses which terms to offer or accept, or which company to do the deal with. An example of this is the high-frequency trading of financial products by investment banks and funds. They employ quantitative analysts who create or modify proprietary algorithms that, through machine learning, generate real-time strategies for buying and selling financial products. The point of using such algorithms is to efficiently bind the company to advantageous exchanges that no human analyst, including the individuals who wrote the program, could have devised.
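
The negotiator pattern can be sketched in a few lines. In this stylized example (the valuation model, signals, and threshold are all invented; real trading systems are far more elaborate), the algorithm decides, without human review, whether to accept a quoted offer and thereby bind its principal.

```python
# Hypothetical sketch of a negotiator algorithm: it decides in real time
# whether to accept a counterparty's quote, binding the firm that runs it.
# The valuation model and threshold are invented for illustration.

from dataclasses import dataclass

@dataclass
class Quote:
    counterparty: str
    instrument: str
    ask_price: float

def predicted_value(recent_trades: list[float]) -> float:
    """Stand-in for a learned model's value estimate (here, a simple mean)."""
    return sum(recent_trades) / len(recent_trades)

def accept(quote: Quote, recent_trades: list[float], margin: float = 0.02) -> bool:
    """Accept only if the predicted value beats the ask by a safety margin."""
    return predicted_value(recent_trades) > quote.ask_price * (1 + margin)

quote = Quote("Bank A", "XYZ", ask_price=100.0)
if accept(quote, recent_trades=[101.8, 102.4, 103.1]):
    print(f"Accepted: contract formed with {quote.counterparty}")
```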

The algorithmic contracts that present the most significant problems for contract law are those that involve “black box” algorithmic agents. These algorithms have decision-making procedures that are not functionally human-intelligible before the program runs, and often cannot be parsed even after the program has run. I have argued that in business-to-business transactions, algorithmic contracts are enforceable because the algorithm acts as a constructive agent for the company using it, meaning that the algorithm’s acts are indicative of the company’s intent.
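
A toy example illustrates the “black box” point. In the sketch below (which assumes scikit-learn and uses randomly generated data purely for illustration), even a small trained model’s decision procedure lives in thousands of numeric split thresholds rather than in rules anyone drafted or can read off.

```python
# Illustration of the "black box" problem: the decision procedure of even
# a modest trained model is distributed across numeric parameters, not
# rules a human wrote. Data here is random, purely for illustration.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                 # 500 past deals, 10 features
y = (X @ rng.normal(size=10) > 0).astype(int)  # invented accept/reject labels

model = GradientBoostingClassifier().fit(X, y)
decision = model.predict(rng.normal(size=(1, 10)))  # accept this deal?

# The "reason" for the decision is spread over 100 trees' split thresholds;
# there is no human-drafted rule to point to, before or after it runs.
print(decision, model.n_estimators)
```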

The term “algorithmic contract” enhances and clarifies the policy discussion about the computer programs known as smart contracts. An influential definition describes a smart contract as “a computerized transaction protocol that executes the terms of a contract.” Smart contracts are simply computer code that helps to procedurally carry out agreements. However, smart contracts are not necessarily legally binding: not all code is enforceable in contract law, just as not every expression of human language forms a contract. An algorithmic contract, by contrast, is a legally enforceable contract formed by an algorithm.
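
To make the distinction concrete, below is a minimal escrow-style sketch of a smart contract in the definitional sense above: code that executes agreed terms procedurally. It is written in plain Python rather than a blockchain language such as Solidity, and the terms are invented; nothing in the code itself makes the underlying agreement legally enforceable.

```python
# Minimal sketch of a "computerized transaction protocol that executes the
# terms of a contract": escrow logic that releases payment on delivery.
# The terms are invented. Whether the underlying agreement is enforceable
# is a question of contract law, not of code.

class Escrow:
    def __init__(self, buyer: str, seller: str, amount: float):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.delivered = False

    def deposit(self) -> None:
        self.funded = True   # buyer's funds locked by the protocol

    def confirm_delivery(self) -> None:
        self.delivered = True

    def settle(self) -> str:
        # The code executes the agreed terms mechanically: payment releases
        # if and only if delivery is confirmed.
        if not self.funded:
            return "nothing to settle"
        if self.delivered:
            return f"release {self.amount} to {self.seller}"
        return f"refund {self.amount} to {self.buyer}"
```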

My approach toward the enforceability of algorithmic contracts between businesses is a permissive one. In the case of business-to-business algorithmic contracts, the doctrinal argument is supported by several policy considerations. If businesses are strictly liable for the acts of an algorithm in contract formation, they will face potential adversaries with the financial incentive and ability to pursue litigation. This would create accountability in algorithm usage, and an incentive to allocate the risk of loss to the least-cost avoider in advance. Arguments for enforceability based on assumption of risk and economic efficiency are highly persuasive here.

Consumer algorithmic contracts present novel issues. Businesses tend to operate within specialized industry norms. Even when businesses do not have actual knowledge of the content of the forms they use, those forms are often drafted to serve the general interests of repeat players in the industry. By contrast, the terms in consumer contracts are not generally shaped to the individual consumer’s advantage; what’s more, consumers have less collective knowledge of terms and no ability to control them. Furthermore, much more is at stake when algorithmic contracts assign rights to human persons rather than to artificial persons. Human persons have rights and responsibilities that companies do not, and some of these rights are non-disclaimable. In a liberal democracy, we should be concerned with individuals remaining autonomous and not subject to undue control by others.

There is precedent for having different standards for consumer-to-business transactions versus business-to-business transactions. Because of the distinct concerns presented by consumer-to-business transactions, the common law has traditionally held consumers to different standards of reasonable behavior than companies. For example, in the US, the Uniform Commercial Code has a variety of rules that apply only to merchants.

The carefully crafted business ethics enshrined in the common law of contracts and the Uniform Commercial Code should be carried into the algorithmic age. Laws developed on outmoded notions of algorithmic capabilities, such as the Uniform Electronic Transactions Act, should also be clarified to encompass an agent-based approach to algorithmic involvement in contract formation. It would be incorrect as a matter of legal principle, and imprudent as a matter of policy, for contract law, given its roots in promoting individual agency and consensual market transactions, to serve to legitimate algorithmic exploitation.

Algorithmic exploitation comes in several forms. First, unaccountable algorithms operate in the background of society, determining the terms of our access to many resources. This enables an unseen form of social control by the corporate and government controllers of the relevant algorithms. Furthermore, big-data processing techniques have been shown to be prone to producing strategies that perpetuate wrongful discrimination against socially and economically vulnerable groups. In addition to raising serious moral concerns about the scope of free choice in a liberal democracy, allowing algorithmic contracts to enable perfect market discrimination by powerful actors would result in major market failure. Given these very real concerns, doctrinal limits on the enforceability of consumer algorithmic contracts should be explored.

Contract law doctrine offers multiple potential avenues for limiting the enforceability of consumer algorithmic contracts. For example, the doctrine of undue influence, an affirmative defense to contractual enforcement, may be relevant. Under that doctrine, a contract is unenforceable when the improper use of power or trust deprives a person of free will and substitutes another’s objectives in its place. Undue influence is the exercise of enough control over another person that the pertinent act would not otherwise have been performed. It is often invoked when there is a disparity in capacity between the parties.

In the case of consumer algorithmic contracts, a consumer may find herself forming agreements with machine-learning algorithms armed with superior processing capacity and comprehensive knowledge of both the general attendant circumstances and particular data about that person. In contract formation, the diminished capacity of consumers relative to companies employing algorithmic decision-making, paired with knowledge of each individual consumer’s vulnerabilities, could amount to undue influence, resulting in a voidable contract. Similar arguments limiting the enforceability of consumer contracts can be made from the doctrines of mental capacity and procedural unconscionability.

The pressing question is not whether the law allows for limitations on enforcement of consumer algorithmic contracts, but which avenue best strikes a balance between minimizing algorithmic exploitation and enabling consensual, mutually beneficial business transactions. Given contract law’s ability to create personalized law, this is the central question of personal liberty in the dawning algorithmic age.

This article raises as many questions as it answers. It does not propose a specific framework for limiting the enforceability of consumer algorithmic contracts. Rather, it makes the general point that existing contract law doctrine and principle dictate that some consumer algorithmic contracts may be unenforceable at law. I have proposed a research agenda for considering the various avenues at common law for limiting the scope of consumer algorithmic contracts, and have suggested that these principles be employed as appropriate in legislative and administrative regulation. These doctrinal arguments often relate to, and can support, the many pragmatic policy reasons for limiting the enforceability of consumer algorithmic contracts.

Policy makers and academics struggle to anticipate what companies will do with new technologies and, accordingly, to develop rules that preserve personal freedom and uphold business ethics. The broadest point this article makes is that the law cannot operate amid changing techno-social conditions without applying general rules that make value judgments. Judges and lawmakers cannot simply defer to technologists or to currently influential market actors on what policy decisions should be made. If they do, they abdicate their longstanding role as a source of ethical limitations that protect civil liberties and enrich society and the economy. In common law jurisdictions, values with historical legitimacy can be found in the common law, and its principles can serve as a useful guide for what the law should aim to achieve in the algorithmic age.

Lauren Henry Scholz is an Assistant Professor at Florida State University, College of Law.
