Faculty of law blogs / UNIVERSITY OF OXFORD

Less Discriminatory Alternatives

Time to read

2 Minutes

Author(s)

Talia Gillis
Associate Professor of Law at Columbia Law School
Vitaly Meursault
Machine Learning Economist at the Federal Reserve Bank of Philadelphia
Berk Ustun
Assistant Professor at UC San Diego

What obligations do lenders face in ensuring that their lending models are fair and non-discriminatory? In the US, fair lending law under the Equal Credit Opportunity Act and the Fair Housing Act prohibits both intentional discrimination (‘disparate treatment’) and unintentional discrimination (‘disparate impact’). A disparate impact claim typically follows a burden-shifting structure in which plaintiffs must first show that a lending policy creates disparities. In the second stage, lenders can defend the policy by demonstrating a ‘business justification’ for it, such as showing that it distinguishes borrowers based on creditworthiness predictions. However, disparate impact also includes a third stage: a lending practice justified by business necessity can still be deemed discriminatory if it is shown that there is a less discriminatory alternative (LDA) policy that achieves the same business goal.

The LDA requirement has been largely overlooked in both case law and regulation. This is about to change. The Consumer Financial Protection Bureau (CFPB) recently published its annual Fair Lending Report for 2023, discussing LDAs for the first time. According to the CFPB, compliance with fair lending law requires lenders ‘to develop a process for the consideration of a range of less discriminatory models.’ The CFPB has essentially elevated the LDA search, which has so far lacked regulatory guidance, to a key component of fair lending, determining that lenders themselves must conduct this search.

Proactively requiring lenders to search for LDAs is a crucial step for fair lending, as it prevents the use of algorithms that offer little business value while creating unnecessary harm. But what would such a search look like, and how would it be operationalized? In our new paper, we address these challenges by developing a method to identify and implement LDAs in lending models. Our approach involves a systematic search for alternative models that minimize disparities between protected groups while maintaining the accuracy required for business needs. By leveraging integer programming, we can explore a wide range of linear classification models to find those that reduce discrimination without compromising predictive performance.
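To give a sense of what such a search can look like in practice, the sketch below poses a toy version of the problem as a mixed-integer program using the open-source PuLP library: among linear approval rules that are at least as accurate as an assumed baseline, find one that minimises the gap in approval rates between two groups. The synthetic data, the approval-rate disparity metric, the variable bounds, and the solver settings are illustrative assumptions rather than the formulation in our paper.

```python
# A deliberately simplified sketch (not the formulation in our paper) of an LDA
# search posed as a mixed-integer program, using the open-source PuLP library
# and its bundled CBC solver. The synthetic data, the approval-rate disparity
# metric, the variable bounds, and the big-M constant are all assumptions.
import random
import pulp

random.seed(0)

# Synthetic applicants: d features, repayment label y in {-1, +1}, group g in {0, 1}.
n, d = 100, 3
w_true = [1.0, -0.5, 0.25]
X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
y = [1.0 if sum(wt * x for wt, x in zip(w_true, row)) + random.gauss(0, 0.3) >= 0 else -1.0
     for row in X]
g = [random.randint(0, 1) for _ in range(n)]

baseline_errors = 20   # assumed error count of the lender's baseline model
tolerance = 0          # extra errors an LDA may incur (0 = must be as accurate)
M, eps = 20.0, 1e-3    # big-M bound on |score|; eps makes approve/deny unambiguous

prob = pulp.LpProblem("lda_search", pulp.LpMinimize)
w = [pulp.LpVariable(f"w_{j}", lowBound=-1, upBound=1) for j in range(d)]
b = pulp.LpVariable("b", lowBound=-1, upBound=1)
a = [pulp.LpVariable(f"a_{i}", cat="Binary") for i in range(n)]  # 1 = approve
gap = pulp.LpVariable("gap", lowBound=0)                         # |approval-rate gap|

# Link each approval decision a_i to the sign of the linear score w.x_i + b.
for i in range(n):
    score = pulp.lpSum(X[i][j] * w[j] for j in range(d)) + b
    prob += score >= eps - M * (1 - a[i])   # a_i = 1 only if the score is positive
    prob += score <= -eps + M * a[i]        # a_i = 0 only if the score is negative

# Business-necessity constraint: no less accurate than the baseline (up to tolerance).
errors = pulp.lpSum(1 - a[i] if y[i] > 0 else a[i] for i in range(n))
prob += errors <= baseline_errors + tolerance

# Disparity to minimise: absolute gap in approval rates between the two groups.
n0, n1 = g.count(0), g.count(1)
rate0 = pulp.lpSum(a[i] * (1.0 / n0) for i in range(n) if g[i] == 0)
rate1 = pulp.lpSum(a[i] * (1.0 / n1) for i in range(n) if g[i] == 1)
prob += gap >= rate0 - rate1
prob += gap >= rate1 - rate0
prob += 1.0 * gap  # objective: minimise the approval-rate gap

# CBC reports the best alternative model found within the time limit.
prob.solve(pulp.PULP_CBC_CMD(msg=0, timeLimit=30))
print("status:", pulp.LpStatus[prob.status])
print("approval-rate gap:", pulp.value(gap))
print("errors:", pulp.value(errors))
```

In a real search, the features, labels, and baseline error budget would come from the lender’s own data, and the disparity metric and accuracy tolerance would follow whatever standard regulators adopt.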

To demonstrate the effectiveness of our method, we provide several examples of LDA searches using real-world consumer finance data. Our examples show that many existing models inadvertently lead to unnecessary discrimination and that our method can identify and implement fairer alternatives.

The significance of our method lies in its potential to transform how lenders address algorithmic discrimination. Traditionally, proving the existence of an LDA was the burden of plaintiffs challenging discriminatory practices. The CFPB’s Fair Lending Report flips that allocation by requiring lenders, not plaintiffs, to proactively search for LDAs as part of their compliance efforts, and our approach gives lenders a concrete way to conduct that search and implement fairer models.

Our method also has the potential to enhance regulatory oversight and enable third-party challenges to algorithmic practices. To operationalize the search for an LDA under our framework, an auditor would only need access to the accuracy and disparity metrics from the baseline model, which are then used to determine whether an LDA exists. Importantly, the auditor does not require access to the lender’s baseline model or training dataset to implement our method.
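As a simple illustration of that audit step, the following sketch compares the baseline’s reported metrics with the best candidate returned by an LDA search; the function name, parameters, and tolerance are hypothetical, not an interface from our paper.

```python
# Hypothetical audit check: given the baseline model's reported accuracy and
# disparity and the best candidate found by an LDA search, flag whether a less
# discriminatory alternative exists. Names and tolerance are assumptions.
def lda_exists(baseline_accuracy: float,
               baseline_disparity: float,
               candidate_accuracy: float,
               candidate_disparity: float,
               accuracy_tolerance: float = 0.0) -> bool:
    """True if the candidate is (roughly) as accurate and strictly less disparate."""
    as_accurate = candidate_accuracy >= baseline_accuracy - accuracy_tolerance
    less_disparate = candidate_disparity < baseline_disparity
    return as_accurate and less_disparate

# Example: baseline with 82% accuracy and a 6-point approval-rate gap, versus a
# candidate with 82% accuracy and a 2-point gap.
print(lda_exists(0.82, 0.06, 0.82, 0.02))  # True
```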

Creating a systematic and formal method for LDA searches also highlights the elements that require further regulatory guidance. For example, future guidance should clarify the disparity metric to be used for an LDA search, such as whether it should consider differential model accuracy or differences in loan approval. Another important element of the search is whether an LDA should be as accurate as a baseline model or whether there can be some compromise in performance to reduce disparities.
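To make the first of these choices concrete, the snippet below sketches two candidate disparity metrics computed from hypothetical per-applicant lists of decisions, repayment outcomes, and group membership; the example shows that the two metrics can disagree on the same set of decisions.

```python
# Two disparity metrics the guidance could settle on, sketched for a set of
# loan decisions. `approved`, `repaid`, and `group` are hypothetical lists.
def approval_rate_gap(approved, group):
    """Absolute difference in approval rates between the two groups."""
    rate = lambda g: sum(a for a, gi in zip(approved, group) if gi == g) / group.count(g)
    return abs(rate(0) - rate(1))

def accuracy_gap(approved, repaid, group):
    """Absolute difference, by group, in how often the decision matches repayment."""
    acc = lambda g: sum(a == r for a, r, gi in zip(approved, repaid, group) if gi == g) / group.count(g)
    return abs(acc(0) - acc(1))

approved = [1, 1, 0, 1, 0, 0]
repaid   = [1, 0, 0, 1, 1, 0]
group    = [0, 0, 0, 1, 1, 1]
print(approval_rate_gap(approved, group))    # ≈ 0.33
print(accuracy_gap(approved, repaid, group)) # 0.0
```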

Our method also addresses a critical gap in the current regulatory landscape. While the LDA requirement has been a part of fair lending law for decades, it has been largely overlooked due to the lack of clear guidelines and standardized methods for conducting LDA searches. By formalizing the LDA search process, we provide a structured and transparent way to support lenders, regulators, and plaintiffs in reliably detecting less discriminatory alternatives.

The authors’ paper can be found here.

Talia Gillis is an Associate Professor of Law at Columbia University.

Vitaly Meursault is a Machine Learning Economist at the Federal Reserve Bank of Philadelphia.

Berk Ustun is an Assistant Professor at UC San Diego.
