Consumer Credit in the Age of AI—Beyond Anti-Discrimination Law

Author(s)

Katja Langenbucher
Law Professor at Goethe University's House of Finance, Frankfurt; Affiliated Professor at SciencesPo, Paris; Long-term Guest Professor at Fordham Law School, NYC


Creditors face information asymmetries when evaluating the creditworthiness of potential borrowers. These asymmetries are low for a banker in my hometown who has been providing me with credit for years, credit which I have dutifully repaid. But what if I move to New York City? I will have none of the classic items a US bank will ask me to provide—no utility bill, mobile phone contract, or payment history with a US-based entity. How will the US bank reconstruct hidden fundamental information about me? Its only choice is to rely on observable variables. Historically, signals such as ‘capital’, ‘capacity’, and ‘character’ were important clues to fundamental information. Starting in the 1930s, lenders profited from advances in statistics: reasonably good forecasts could be established from a limited list of observable attributes.

Today, the search costs lenders incur to assess borrowers are driven by the quality of the underwriting model and by access to data. Both have undergone radical change in recent years with the advent of big data and machine learning. My NYC bank might now be happy to accept proof of a satisfactory online payment history instead of a US utility bill. In that way, digital technology can deliver financial services more efficiently and at lower cost than ever before. Lenders can access data far beyond traditional financial variables without compromising on speed in evaluating creditworthiness. The popular remark ‘all data is credit data. We just don’t know how to use it yet’ suggests that there is much to explore, ranging from online payment history and age or sex to the number of typos in text messages and the speed of clicking through a captcha exercise.

For some candidates, this holds the promise of inclusion and better access to finance. Broader data and more refined, often AI-driven models help to detect previously invisible but highly attractive candidates without triggering prohibitive costs. However, not all applicants profit to the same extent. Several empirical studies have evidenced the potential for inclusion but, at the same time, pointed to inequalities in output across protected and non-protected communities. Against this background, a lively international debate on ‘algorithmic discrimination’ has evolved. The US has focused on integrating these novel concerns into its existing fair lending framework, developed in the 1970s: the Fair Housing Act regulates mortgage-based lending, while the Equal Credit Opportunity Act regulates consumer credit more generally. The EU, by contrast, is currently drafting an EU AI Act and a new Consumer Credit Directive, responding to concerns brought about by digitalization.

In a new paper (a January 2023 revision of an earlier version), I make two main contributions to the debate. I submit that received anti-discrimination doctrine is ill-suited to dealing with algorithmic discrimination in credit underwriting, and I suggest reorienting the debate towards the regulatory design of retail loan markets.

First, the paper explores how US and EU anti-discrimination laws fare when faced with algorithmic credit underwriting. I discuss disparate treatment/direct discrimination and suggest that situations we have traditionally understood as rare but hard cases will become the new normal. An example concerns sex discrimination and pregnancy. Leaving cases of gender transitioning aside, pregnancy correlates with the female sex. Still, an early US Supreme Court decision held that excluding pregnancy from a disability benefits plan was not discrimination based on sex, prompting Congress to amend the law. The ECJ reached the opposite conclusion, arguing that pregnancy is ‘inextricably linked’ to the female sex. Today, we might find it hard to point to variables that correlate with a protected characteristic as clearly as pregnancy does with sex. However, given the broad spectrum of big data and the power of AI models to establish previously invisible correlations, this will change. The assumption, implicit in received doctrine, that there are relatively few variables ‘inextricably linked’ to a protected attribute no longer holds.

Moving on to disparate impact/indirect discrimination, the paper discusses whether these doctrines extend to credit underwriting. Under the assumption that they do, it investigates how to identify a facially neutral attribute that, under disparate impact/indirect discrimination doctrine, is understood to trigger an unfavorable outcome for protected communities. The paper suggests that eliminating individual neutral attributes will not get us far, given algorithmic models’ power to find stand-in variables that predict the same outcome, as the sketch below illustrates.
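To make the stand-in problem concrete, here is a minimal, purely illustrative sketch in Python (fabricated data, scikit-learn; the variables ‘zip_code’ and ‘shopping’ are hypothetical proxies, not features from any real underwriting model or from the paper). It shows that removing one flagged neutral attribute barely narrows the approval-rate gap, because a second correlated feature steps in as a stand-in:

```python
# Toy illustration of proxy ("stand-in") variables in credit underwriting.
# All data below is synthetic; nothing here reflects a real scoring model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute; never passed to the model.
group = rng.integers(0, 2, n)

# Two facially neutral features, both correlated with group membership
# (e.g. a neighborhood indicator and a shopping-pattern score).
zip_code = group + rng.normal(0, 0.3, n)   # strong proxy
shopping = group + rng.normal(0, 0.5, n)   # second, weaker proxy
income = rng.normal(0, 1, n)               # legitimately predictive

# Synthetic repayment outcome, correlated with the unobserved group attribute.
default = (income + 0.8 * group + rng.normal(0, 1, n) > 0.9).astype(int)

def approval_gap(X):
    """Fit a model on X and return the approval-rate gap between groups."""
    model = LogisticRegression().fit(X, default)
    approved = model.predict_proba(X)[:, 1] < 0.5  # approve if low default risk
    return approved[group == 0].mean() - approved[group == 1].mean()

X_all = np.column_stack([income, zip_code, shopping])
X_no_zip = np.column_stack([income, shopping])  # flagged proxy eliminated

print(f"approval gap, all features:      {approval_gap(X_all):.2f}")
print(f"approval gap, zip proxy removed: {approval_gap(X_no_zip):.2f}")
```

In this toy setup the two gaps are of similar size: the model simply shifts weight from the removed proxy onto the remaining correlated feature, which is the dynamic that makes attribute-by-attribute elimination ineffective.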

Faced with these important shortcomings of received anti-discrimination law, the second part of the paper outlines the preliminary contours of a regulatory design for algorithmic consumer credit underwriting. It stresses the role of quality control of both algorithmic models and data, and illustrates one way to achieve this by summarizing key rules of the EU AI Act. Furthermore, the paper investigates consumer rights to rectify incorrect data under US and EU law, assuming that these rights will gain in prominence because information gathered from sources such as social media networks is prone to mistakes and misunderstandings. The paper moves on to explore consumer rights in both jurisdictions to be informed about scoring and about the reasons for a denial of credit. It explains the US ECOA adverse action notice and contrasts it with the EU GDPR and the ongoing reform of the EU Consumer Credit Directive. The paper concludes with some thoughts on personalized pricing in credit underwriting, calling for a detailed future study.

Katja Langenbucher is Professor of Law at Goethe University Frankfurt.
