
Discriminating Credit Algorithms

Author(s)

Talia Gillis
Associate Professor of Law at Columbia Law School from July 2020 onwards


In 2017, Upstart, a US-based alternative lender using big data and machine learning to predict creditworthiness, received a No Action Letter from the US Consumer Financial Protection Bureau (CFPB) after the Bureau reviewed Upstart’s underwriting model. Earlier this year, however, the Student Borrower Protection Center circulated a study suggesting that Upstart’s model causes ‘educational redlining’ by charging graduates of historically black colleges higher interest rates. The Center argued that when comparing the terms offered to similar hypothetical borrowers from New York University and Howard University, a historically black university, the Howard University graduate was offered a more expensive loan.

Should Upstart’s model be considered discriminatory? Is consideration of a borrower’s college discriminatory when college attendance correlates with race? Can Upstart demonstrate that college attendance predicts default to defend its algorithm? Or should the different loan rates instead be viewed as an attempt to implicitly consider borrower race in a lending decision? The answers to these questions are fundamental to our understanding of fair lending in the algorithmic setting, and can shed light on other contexts in which advanced prediction technologies are replacing human decision-makers and simple models.

In a recent working paper, ‘False Dreams of Algorithmic Fairness: The Case of Credit Pricing’, I discuss how to apply fair lending law to the algorithmic context in which big data and machine learning are used to price credit. Given that fair lending law developed to address concerns that arise in a human decision-making context, the application of existing doctrine in the machine-learning setting is not straightforward, as discussed in a previous paper with Jann Spiess. Furthermore, fair lending law continues to be highly contested with respect to the boundaries and theoretical foundations of the legal doctrine. Scholars have disagreed about both the disparate treatment and disparate impact doctrines, making the ‘translation’ of discrimination law to the algorithmic setting particularly challenging.

I focus on several common approaches to applying discrimination law to the algorithmic setting in the context of fair lending and other legal domains. Previous writing on discrimination in the algorithmic setting has focused on policing the information used by an algorithm, that is, limiting the data that enters as its inputs. These approaches suggest that we exclude protected characteristics and their proxies, limit algorithms to pre-approved inputs, and use statistical methods to neutralize the effect of protected characteristics. Although proponents of these approaches are rarely explicit as to the goal of their proposals, they often imply that the exclusion of inputs either precludes the consideration of protected characteristics or can be used as a method to reduce disparities for protected groups.

The primary source of the shortcomings of the four approaches I discuss is that they continue to scrutinize decision inputs, similar to traditional fair lending practices, even though this strategy is outdated in the algorithmic context. Using data on past mortgages, I simulate algorithmic credit pricing and demonstrate that input scrutiny fails to address discrimination concerns. The ubiquity of correlations in big data, combined with the flexibility and complexity of machine learning, means that one cannot rule out the consideration of a protected characteristic even when it is formally excluded. It is simply not possible to guarantee the exclusion of information that is relevant to a prediction, such as age, when it is embedded in other inputs, such as income. This is a key concern, as fair lending law often prohibits differential pricing on the basis of factors that might indeed bear a relationship to default risk. For example, fair lending requires that borrowers not be treated differently if their income derives from a public assistance program, even if the income source type is empirically related to default risk. Similarly, in the machine-learning context, it may be impossible to determine which inputs drive disparate outcomes.
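To make the proxy problem concrete, here is a minimal, hypothetical simulation in Python. It is my own stylized illustration, not the mortgage-data exercise in the paper: all variable names and parameters are assumptions. A flexible model trained without a protected characteristic still produces group-level price differences when other inputs, such as income, are correlated with that characteristic.

```python
# A minimal sketch (an illustration, not the paper's simulation): synthetic data
# showing that excluding a protected characteristic from a flexible model's inputs
# does not prevent group disparities, because correlated inputs act as proxies.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical protected characteristic (e.g., membership in a protected group).
group = rng.binomial(1, 0.5, n)

# Other inputs correlate with group membership: income and an "alternative data" feature.
income = 50 + 15 * rng.standard_normal(n) - 8 * group
alt_feature = 0.6 * group + rng.standard_normal(n)

# Observed price (an interest-rate proxy) is driven by income and the alternative feature.
price = 0.05 + 0.002 * (60 - income) + 0.01 * alt_feature + 0.005 * rng.standard_normal(n)

# Train a flexible model on inputs that formally exclude the protected characteristic.
X = np.column_stack([income, alt_feature])
pred = GradientBoostingRegressor().fit(X, price).predict(X)

# Despite the exclusion, predicted prices differ systematically across groups.
print("Mean predicted price, group 0:", round(pred[group == 0].mean(), 4))
print("Mean predicted price, group 1:", round(pred[group == 1].mean(), 4))
```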

The limitations of current approaches mean that fair lending law must make the necessary, yet uncomfortable, shift to outcome-focused analysis. Discrimination law has always resisted focusing solely on the outcomes or effects of a policy as a way of identifying discrimination. However, given the unsuitability of input-based approaches in the algorithmic setting, there is a need to rethink how to analyze discrimination in this new context. This is true for both disparate treatment and disparate impact. For disparate treatment, we have no reliable way to detect proxies for protected characteristics. For disparate impact, we need new tools to evaluate the effects of algorithmic pricing that are appropriate for machine learning, as restricting variables upstream can have a limited or surprising effect on the disparities downstream.

The type of outcome-focused test I propose considers the disparities created by a credit pricing algorithm. It then seeks to distinguish between permissible and impermissible disparities by asking whether they are the product of differences between people that are considered legitimate grounds for distinction. The framework is highly flexible and can be adapted to the specific normative theory or policy goals of the regulator, such as whether legal rules are intended only to address ill-intent towards protected groups or also play a broader role in addressing disparities in credit markets.
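One stylized way to think about such a test, offered purely as an illustration rather than the paper's own methodology: compute the raw price gap between groups, then ask how much of that gap remains after accounting for a factor treated as a legitimate basis for distinction (here, a hypothetical default-risk score). The variable names and the simple regression adjustment below are my own assumptions.

```python
# A stylized illustration (not the paper's proposed test): compare the raw price gap
# between groups with the gap that remains after removing the part of prices explained
# by a factor treated as a legitimate basis for distinction.
import numpy as np
from sklearn.linear_model import LinearRegression

def outcome_gap(prices, group):
    """Raw difference in mean prices between group 1 and group 0."""
    return prices[group == 1].mean() - prices[group == 0].mean()

def residual_gap(prices, group, legitimate_factor):
    """Gap remaining after a simple regression adjustment for the legitimate factor."""
    X = legitimate_factor.reshape(-1, 1)
    residual = prices - LinearRegression().fit(X, prices).predict(X)
    return residual[group == 1].mean() - residual[group == 0].mean()

# Synthetic example: prices partly explained by a risk score, partly by group.
rng = np.random.default_rng(1)
n = 10_000
group = rng.binomial(1, 0.5, n)
risk = 0.05 + 0.01 * group + 0.01 * rng.standard_normal(n)      # hypothetical default risk
prices = risk + 0.002 * group + 0.002 * rng.standard_normal(n)  # algorithmic prices

print("Raw price gap:            ", round(outcome_gap(prices, group), 4))
print("Gap after risk adjustment:", round(residual_gap(prices, group, risk), 4))
```

Whether the residual gap should count as impermissible is precisely the normative question the framework leaves to the regulator's chosen theory of discrimination.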

Talia Gillis will be joining Columbia Law School as an Associate Professor in July 2020.
