
Financial Profiling

Author(s)

Katja Langenbucher
Law Professor at Goethe University's House of Finance, Frankfurt; Affiliated Professor at SciencesPo, Paris; Long-term Guest Professor at Fordham Law School, NYC


The availability of ‘big data’ and the increasing sophistication of artificial intelligence have led to ever more consumer profiling in many areas of life. We receive marketing e-mails for products or investment opportunities based on our browser history. We get an automated rejection letter for a job application because we took too long to fill out an online questionnaire. Our credit score is adjusted after we shop at unusual places. The shop, the employer, and the bank collect our personal data and, increasingly, use AI models to detect patterns. These allow them to predict whether we might be interested in buying a certain product or investing in a financial instrument, whether we would perform well in a job, and whether we would pay back a loan. In the words of Article 4(4) of the EU General Data Protection Regulation (GDPR), such profiling is

the practice of automated processing of personal data […] to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.

In a draft paper, I explore ‘financial’ profiling, ie practices that determine access to financial resources. These belong to a class of profiling practices that the EU Artificial Intelligence Act (AI Act) places under special scrutiny, given that the integration of AI ‘may lead to discrimination between persons or groups and may perpetuate historical patterns of discrimination’ (Recital (58), AI Act). The paper takes up this concern. It reviews consumer protection and anti-discrimination law, data privacy law, and the AI Act for their regulatory approach. I submit that the former two face important hurdles when applying received doctrine to AI-based financial scoring. As to the AI Act, I stress its enabling components. Building on its spirit, I suggest directing regulatory scrutiny at profilers themselves and strengthening consumer transparency rights.

I start with a classic Akerlof framing to describe profiling as a searching device for banks and a signalling device for borrowers in a situation of information asymmetry. Profilers fill an intermediary role: they help the borrower signal his financial standing and support the bank via a conveniently standardized tool. Reviewing the legal framework, I find that existing regulation mostly targets the bank that takes a decision based on a profile. By contrast, there is a lack of comprehensive regulation of the profiler, although profiles are often determinative for the bank’s decision. For the profiled borrower, this regulatory strategy creates significant hurdles to enforcing private rights.

The paper falls into three parts. The first part provides an overview of EU law on profiles as a searching device. I start with EU data protection law. The GDPR frames profiling as an element of automated decision-making, mostly undeserving of regulation in its own right. For the borrower, this under-regulation of profilers comes at a cost. If one frames his profile as a signalling device, this requires, at a minimum, that the borrower understand the signal he sends and the potential to influence it through behavioural change. The paper engages with two recent European Court of Justice (ECJ) decisions on profiling in the form of credit scoring. Laudably, these broaden the scope of ‘automated decisions’ and underscore the relevance of explaining how decisions are made. Still, they leave gaps as to the enforceability of consumer rights. Moving on, I critically explore EU anti-discrimination law’s shortcomings in the credit underwriting context when big data and AI predict the performance of a potential borrower. The paper then demonstrates how the AI Act changes strategy, regulating profilers that develop or deploy AI, but still falls short of allowing for efficient private enforcement.

A shorter second part zooms in on the details of private rights of action under the GDPR and the AI Act. If one understands a profile as a signal in a situation of information asymmetry, it is crucial for the borrower to understand the signal he sends and the room he has for behavioural change that would allow for an improved signal. Neither the GDPR nor the AI Act fully achieves that goal.

The third part of the paper takes first steps towards proposing transparency rights for the profiled person, allowing him or her to adapt his or her behaviour and to decide when to use the signal. Optimal information, explainability, and human oversight provide classic versions of a transparency right. I submit that, as AI-based profiling increases, the regulatory focus must turn to adequate rights for dealing with black-box AI. This includes local explainability of a score, as suggested by the ECJ in Dun & Bradstreet. Reaching beyond that decision, I underscore the tension between local and global explainability when only the profiler has access to the model. This points towards targeted black-box rights under the umbrella of transparency rights.

Katja Langenbucher is a Law Professor at Goethe University's House of Finance, Frankfurt.

The draft article can be accessed here.

