What Is Personalized Law?
This post is part of a special series including contributions to the OBLB Annual Conference 2022 on ‘Personalized Law—Law by Algorithm’, held in Oxford on 16 June 2022. This post comes from Omri Ben-Shahar, co-author with Ariel Porat of ‘Personalized Law’, the book discussed on the first panel of the conference.
Laws apply in context. A dark rainy night requires greater care by drivers. A sale of a dangerous product requires more pronounced warnings. And the sanction for a criminal act depends, among other things, on the harm it caused. The more circumstances the law counts as relevant in issuing specific commands, the more granular and contextualized it is.
But laws rarely count the identity of a person and their subjective characteristics as relevant factors. In this factual dimension, laws universally aspire to uniformity. Justitia, the Goddess of Justice, whose blindfolded image is sculpted at the entrance to many a courthouse around the world, reminds judges: ignore the identity of any particular litigant, impart equal treatment to all.
It is this axiom—laws must be interpersonally uniform—that our book ‘Personalized Law: Different Rules for Different People’ challenges. Rather than blindfolded, let the law know everything that is relevant about people, apply the underlying legal principles to the facts of each person, and thus tailor personalized legal regimes. If medicine, education, or parenting can treat, teach, or nurture better when personalized and adjusted to the individual, why not law?
Would it not be more just and efficient for the law to impose greater standards of due care on those among us who create greater risks? Imagine a world in which dangerous drivers must comply with more exacting traffic laws, drive slower, or pay higher fines. Similarly for rights: why not bestow greater consumer protections on the consumers who need them most? Imagine, for example, a regime in which longer rights to withdraw from contracts are granted to less educated, poorer, or cognitively error-prone individuals. In fact, many types of laws can be usefully personalized. Food labels or drug warnings could be designed to highlight for each person a different subset of information, the subset most relevant to their diet and health. A statutory age of capacity—to drive, purchase alcohol, or fly a plane—could vary person by person, based on an individual statutory safety score. If it is possible to predict, based on people’s traits and social interactions, the risk they create (as insurers are already training algorithms to do), capacity restrictions could be personalized and serve their underlying objective far more precisely.
Personalized law could be crude, for example by dividing people among ‘high’, ‘medium’, and ‘low’ alcohol purchase ages. Or it could be highly precise, powered by Big Data and implemented by algorithms issuing individualized commands. It could advance the goals of any law. Applied to protective laws, it can pinpoint protections where they are most needed. Applied to safety laws, it can allocate safety burdens where they are most effective. And applied to punitive laws, it can account for individual circumstances in assessing the culpability of the wrongdoer.
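To make the contrast concrete, here is a minimal sketch, in Python, of the two designs. It assumes a hypothetical risk score between 0 and 1 of the kind insurers’ algorithms might produce; the thresholds and age ranges are invented for illustration, not drawn from the book or from any actual proposal.

```python
def tiered_purchase_age(risk_score: float) -> int:
    """Crude personalization: bucket a 0-1 risk score into three age tiers.

    The cutoffs (0.33 and 0.66) and the ages are hypothetical.
    """
    if risk_score < 0.33:
        return 18   # 'low' tier
    elif risk_score < 0.66:
        return 21   # 'medium' tier
    return 25       # 'high' tier


def granular_purchase_age(risk_score: float) -> float:
    """Precise personalization: map the same score onto a continuous age,
    running smoothly from 18 (score 0) to 25 (score 1)."""
    return 18 + 7 * risk_score


# Two people with different scores receive different commands:
print(tiered_purchase_age(0.2))    # falls in the 'low' tier
print(granular_purchase_age(0.5))  # an individualized, non-tiered age
```

The tiered version still pools people into group-bins; the granular version makes each person, in the book’s phrase, a pool of one. The real design question, of course, is not the mapping function but where the score comes from and what it may lawfully depend on.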
Personalized law is a novel jurisprudential template. It poses weighty implementation challenges, but the biggest questions concern not its underlying technology (how will the algorithms or judges be trained? Where will the data come from?) but its perceived tension with traditional and widely cherished legal paradigms. Is personalized law just? Does it discriminate against people or conflict with doctrines of equal protection? Would it stigmatize people? Would it disturb social coordination? Would it chill personal improvement and innovation? Would it allow governments to stealthily advance perverse policies through less transparent personalized micro-commands?
These questions are examined soberly in our book, forcing us to recognize various limiting principles. First, personalized law is poorly suited for public and constitutional law, where individual rights have a social value that greatly exceeds their private consumption value to the rightsholders. Second, it is hard to personalize laws that protect reputations, relationships, family life, privacy, and mental health, because the relevant information is so sensitive and hard to verify (although it is worth pointing out that uniform laws have their own blind spots in these areas). Third, personalized law must be extremely careful not to perpetuate past and present injustice, like racism, which has put people at systematic disadvantages. While personalized law has much potential to mitigate the effects of past discrimination—for example, by granting more favorable treatment to people who have had to confront greater hardships—it must not classify some people as ‘more dangerous’ merely because of discriminatory social policies towards them.
What about equal protection under the law? Personalized rules are by design unequal, and it is therefore all too easy to equate such a scheme with injustice. But that would be a mistake. Justice Felix Frankfurter once remarked that ‘there is no greater inequality than the equal treatment of unequals’. For meaningful equality, it is critical that people be treated based on all the relevant criteria that define them. Treating them uniformly by ignoring relevant differences is unjust. Similarly, people should not be treated merely on the basis of membership in a group, particularly when the group is classified by race, sex, or age. This, indeed, is a key benefit of personalized law: even when membership in a group is a factor in one’s legal treatment, members of the group are not treated uniformly. They are not pooled into a single group-bin. Personalized law profiles people along numerous factors, primarily ones that are far less constitutionally problematic, like income, employment, personality, and past behavior. The weight put on each factor is marginal. In the end, each person is a pool of one, the recipient of their own singular rule.
Finally, a word on ‘Law by Algorithm’. Like members of any profession, law workers (judges, lawyers, lawmakers) do not want to see their jobs replaced by artificial intelligence. They invoke maxims and principles that highlight the importance of intuition, compassion, emotions, ingenuity, and other human experiences. There is, of course, much comfort in non-robotic interactions. But let us also remember that they come with non-trivial pitfalls. Yes, only a human judge can look a defendant in the eye, but mountains of social science evidence teach us that the product of such naked-eye examinations, once filtered through judges’ implicit biases, politics, and cognitive limits, is often deeply flawed.
Omri Ben-Shahar is Leo and Eileen Herzel Professor of Law, University of Chicago.
This post is part of an OBLB series on Personalised Law—Law by Algorithm. The introductory post of the series is available here. Other posts in the series can be accessed from the OBLB series page.