
Fintech and the Evolving Regulation of Consumer Financial Privacy

Nikita Aggarwal
Postdoctoral Research Fellow, UCLA

The concept and regulation of consumer financial privacy have evolved considerably over the past century. In its original form, as the common law duty of bank confidentiality, consumer financial privacy was concerned with protecting a consumer’s confidence in their bank and the relationship of trust between them: both to protect that relationship in itself, and because financial information, being intimate in nature and capable of revealing much about a person, could, if misused, harm the consumer’s credit and reputation. The duty therefore restricted banks and certain other financial institutions from sharing non-public information about consumers.

Although this duty has, from the beginning, been subject to clear qualifications permitting banks to disclose customer information on public interest and business efficacy grounds, these qualifications have expanded over time, gradually eroding the duty of bank confidentiality. To paraphrase one commentator, the banker’s duty of confidentiality is ‘a dying duty but not dead yet’. In a new book chapter, forthcoming in the Oxford Handbook of AI Governance, I argue that the erosion of bank confidentiality and the evolution of consumer financial privacy have been shaped directly by the digitization and datafication of consumer financial markets, a phenomenon referred to in recent years as ‘fintech’.

The growth of the consumer credit and credit information markets in the post-war period marked the first major inflection point in the erosion of the duty of bank confidentiality. The subsequent rise of universal and cross-border banking, as well as of non-bank and market-based finance, triggered further erosion. Advances in computing, and in the capacity to process and store information, were a key enabling factor at each stage. More broadly, there was a gradual shift in opinion regarding the value of bank–customer confidentiality relative to the public interest in sharing information, a shift that accelerated in the wake of 9/11 and the ensuing War on Terror.

Paradoxically, however, the same socio-technical forces that instigated the erosion of the common law duty of bank confidentiality also spurred the development of information privacy law. The growth of information privacy law since the 1970s, particularly under the framework of data protection regulation, has significantly expanded the concept and regulation of consumer financial privacy. More specifically, it moved consumer financial privacy away from the paradigm of relational confidentiality—non-disclosure of private information arising from the bank-customer relationship—towards one of individual control and institutional safeguards over the processing of all personal information, regardless of context. Over time, the data protection regime has come to place greater importance on the fundamental right of individual consumers to control the use of their personal data—what I refer to in this chapter as the intrinsic dimension of consumer financial privacy.

The rise of fintech 2.0 since the early 2000s, and in particular the increasing use of AI in financial decision-making, presents both challenges and opportunities for consumer financial privacy. On the one hand, the ideal of consumer financial privacy as individual data control, where such control is intrinsically valuable, seems increasingly unworkable as ever more personal data is generated, collected, and processed for consumer financial decision-making. There are technical and cognitive limits to consumers’ ability to control the inferences and predictions drawn from their data with AI/ML. As such, the growth of AI in consumer finance stands to compromise consumer financial privacy in the intrinsic sense.

On the other hand, the use of AI could support the instrumental goals of consumer financial privacy. That is, despite losing individual control over their data, some consumers stand to benefit from the use of AI/ML techniques that, inter alia, mitigate discrimination, improve access to finance, and lower its cost. Indeed, to the extent that less interpretable AI/ML methods offer more useful insights into a person’s behaviour, and thus improve their access to finance, less control over personal data could be more beneficial to consumers in the instrumental, consequential sense.

At the same time, however, the use of AI in consumer finance stands to harm (other) consumers. Amongst other things, AI gives lenders more granular, data-driven insights into consumers’ preferences and misperceptions, which lenders can use to exploit them. Interpretable AI/ML could help to mitigate these data-driven harms by enabling consumers, firms and regulators to better understand the inferences underlying AI/ML models and to correct inaccurate and/or unfair decisions based on a model’s predictions. Consumers also suffer intangible harm from the sense of perpetual surveillance and behavioural profiling.

Clearly, consumer financial privacy is nuanced; the implications of AI-driven finance for consumer financial privacy are even more so. Nevertheless, the existing data protection regime in the UK appears increasingly inadequate for addressing the consumer privacy challenges of AI-driven finance, and of AI-driven decision-making more broadly. To conclude, I raise four sets of questions to help direct future research into, and regulation of, consumer financial privacy:

1) First, what relative value should be placed on intrinsic consumer (financial) privacy; that is, individual control over the use of personal data as an end in itself?

2) Second, are the (tangible) consequential harms of data processing, such as the data-driven exploitation of vulnerable consumers, best mitigated by strengthening individual rights over personal data, by strengthening the obligations of data processors and the enforcement of those obligations, or both? Relatedly, what role can interpretable AI/ML play in mitigating the harms of data processing?

3) Third, could resurrecting and strengthening the duty of bank confidentiality (and, relatedly, the duties of care of financial institutions) provide a potential avenue for reform?

4) Fourth, should these questions be addressed under omnibus data protection regulation, under sectoral financial regulation, or both? In this regard, should existing provisions that govern consumer financial privacy under sectoral regulation, such as consumer credit and payment services laws, be construed as lex specialis and therefore given precedence over cross-sectoral data protection regulation in the case of conflict?

Nikita Aggarwal is a Postdoctoral Research Fellow at UCLA’s Institute for Technology, Law and Policy.
