Unveiling the Potential of the European Financial Data Space: Combating Bias in AI-Driven Consumer Credit Assessments

Author(s)

Andrés Chomczyk Penedo
PhD Researcher at the Law, Science, Technology and Society Research Group, Vrije Universiteit Brussel (VUB). Marie Skłodowska-Curie fellow at the PROTECT ITN.
Pablo Trigo Kramcsák
PhD Researcher at the Law, Science, Technology and Society Research Group, Vrije Universiteit Brussel (VUB). Researcher, Universidad de Chile, Faculty of Law, Centro de Estudios en Derecho Informático (CEDI).

In today's data-driven landscape, personal data processing is pivotal in propelling the EU data economy forward. The financial services industry is undergoing a transformation in which traditional forms of money are gradually evolving into data. This paradigm shift, accelerated by the COVID-19 pandemic, means that financial transactions can no longer escape analysis through advanced big data techniques.

Considering this transformation, regulatory initiatives such as the Payment Services Directive 2 (PSD2) have acknowledged the trend, actively promoting data-sharing schemes within the financial sector. These policies, termed 'open banking', aim to facilitate the entry of new players, particularly fintechs and non-incumbent financial institutions, into the industry, thereby boosting financial inclusion. A core aspect of open banking policies and regulations is the expectation that individuals, by transferring their data across different service providers, will enjoy improved economic conditions (more choice, better pricing, and personalized services) through increased competition among companies vying to attract and retain customers. Under the umbrella of the EU Data Strategy and the Digital Finance Strategy, the current aim is to expand open banking into open finance and create a common financial data space, enabling seamless information flows among various stakeholders and laying the foundation for an open framework across the financial services industry.

In our article on this common financial data space, we concentrate on AI applications that directly affect individuals, acknowledging concerns about the extensive and intensive use of AI. These systems can reinforce existing biases or introduce new ones, leading to unfair and discriminatory outcomes that fall particularly hard on vulnerable populations. Recent empirical findings suggest that access to more data could help mitigate or prevent such biases. This insight directs our attention to a specific activity: AI-assisted creditworthiness assessments. While credit scoring has been explored in EU legal literature, the evolving regulatory landscape under the EU Digital Finance Strategy calls for fresh perspectives on the subject. To navigate this landscape effectively, we focus on whether obtaining more information to train AI systems, and thereby overcome existing biases, is a sensible approach. The emergence of EU data spaces, particularly the European Financial Data Space (EFDS) under the recent proposal for a Regulation on Financial Data Access, places a strong emphasis on securing quality data. The significance of this emphasis becomes especially apparent when considering its potential impact on creditworthiness assessments.

In our contribution, we address several critical research questions, primarily exploring whether the European Financial Data Space can provide AI developers with access to high-quality personal data (in terms of quantity, accuracy, and representativeness) with a view to enhancing their technical solutions, particularly for mitigating bias in credit scoring. We offer a comprehensive analysis of the largely uncharted territory of the EFDS, comparing it with international experiences, examining the current use of AI in financial services and its inherent biases in the context of consumer creditworthiness evaluation, and identifying potential solutions.

We critically assess whether the EFDS can indeed enable stakeholders to utilize the available data to develop and improve AI systems. Our analysis focuses on three crucial data protection aspects – legal basis, transparency, and control over the processing activity – which are pivotal in achieving this objective. Ultimately, we reflect on our findings and propose potential answers to these pressing research questions, aiming to contribute to the regulatory proposal concerning the EFDS.

In conclusion, our paper reflects on the existing and proposed legal frameworks within the EU to ensure the adoption of adequate safeguards for the fundamental right at stake: the right to personal data protection. We approach this in two stages: a descriptive analysis followed by an exploratory theoretical one. While the article specifically explores the implications of the EFDS from an EU data protection perspective, particularly regarding bias mitigation in AI-based consumer creditworthiness assessments, the conclusions drawn may extend to other domains where biases can cause harm. The insights derived from this analysis may prove valuable not only within the EU but also in other jurisdictions that align, to varying degrees, with EU standards for open banking or data protection, through the 'Brussels effect'.

 

The authors’ article can be accessed here.

 

Andrés Chomczyk Penedo is a PhD Researcher at the Law, Science, Technology and Society Research Group, Vrije Universiteit Brussel (VUB).

 

Pablo Trigo Kramcsák is a PhD Researcher at the Law, Science, Technology and Society Research Group, Vrije Universiteit Brussel (VUB) and Researcher at the Universidad de Chile, Faculty of Law, Centro de Estudios en Derecho Informático (CEDI).
