
Regulating AI in Finance: Putting the Human in the Loop


Author(s)

Dirk A Zetzsche
Professor of Law and ADA Chair in Financial Law (Inclusive Finance) at the Faculty of Law, Economics and Finance, University of Luxembourg
Douglas W Arner
Kerry Holdings Professor in Law, RGC Senior Fellow in Digital Finance and Sustainable Development, and Associate Director, HKU-Standard Chartered Foundation FinTech Academy, University of Hong Kong
Ross P Buckley
Scientia Professor and the KPMG Law – King & Wood Mallesons Professor of Disruptive Innovation at UNSW Sydney
Brian Tang
Founding Executive Director, LITE Lab@HKU, Faculty of Law, University of Hong Kong

Finance is now one of the most globalized and digitized sectors of developed economies. It is also one of the most regulated, especially since the 2008 Global Financial Crisis. Globalization, digitization and technology are propelling Artificial Intelligence (AI) forward in finance at an ever-increasing pace.

Some experts predict AI will boost global GDP by 14%, or US$15.7 trillion, by 2030. Others estimate AI offers banks potential cost savings of 20% to 25%. While the former estimate may be overly optimistic, the combination of enhanced efficiency and cost savings with the potential for entirely new business models and opportunities explains why financial services companies are expected to spend US$11 billion on AI in 2020, more than any other industry. At the same time, there is also increasing concern about the potential for negative, even dystopian, impacts of AI.

Although finance is attracting some of the greatest AI investment, little has been written about the financial regulatory concerns this raises.

In a new working paper we develop a regulatory roadmap for understanding and addressing the increasing role of AI in finance, focusing on human responsibility: the idea of ‘putting the human in the loop’, in particular to address ‘black box’ issues.

Our paper first maps the various use cases of AI in finance, highlighting why AI has developed so rapidly in finance and is set to continue to do so. It then surveys the range of potential issues that may arise as a result of this growth. Against this background, the paper considers the regulatory challenges of AI in the context of financial services and the tools available to address them, concluding that human involvement is essential.

Our analysis suggests that traditional financial regulation, focused on external supervision, is generally unlikely to be effective in addressing the risks created by AI, for three reasons: (1) AI increases information asymmetries regarding the capabilities and effects of algorithms between users, developers, regulators and consumers; (2) AI enhances data dependencies, as different data sources may alter operations, effects and impact; and (3) AI enhances interdependency, in that AI systems can interact with other AI systems with unexpected consequences, enhancing or diminishing effectiveness, impact and explainability. These issues are often described as ‘black box’ problems: the inability to fully understand and explain how some AI operates or why it has done what it has done. Black box problems challenge the concepts of accountability and responsibility that lie at the heart of all financial regulatory efforts.

Even if regulatory authorities possessed unlimited resources and expertise—which they clearly do not—regulating the impact of AI purely by traditional means is nigh impossible. 

To address this challenge, our paper suggests that the most effective path forward involves regulatory approaches which bring the human into the loop, enhancing internal governance and personal responsibility through external regulation.

In the context of finance, we argue that the post-Crisis focus on personal and managerial responsibility systems provides a unique and important external framework to enhance internal responsibility in the context of AI. 

The strengthening of internal governance can be achieved, for the most part, through a renewed supervisory focus on the personal responsibility and accountability of boards, senior management and key function holders for the regulated areas and activities for which they are designated responsible for regulatory purposes. These responsibility rules, particularly if enhanced by specific due diligence and explainability requirements, will assist core staff of financial services firms to ensure that the AI under their control performs in ways consistent with their personal responsibilities. If it does not, they will nonetheless be responsible. This is the nature of personal responsibility systems: the manager in charge is responsible for themselves, their staff, their third-party contractors, and their IT, including AI. Direct personal responsibility thus encourages due diligence in investigating new technologies, their uses and impacts, and in requiring explainability systems as part of any AI system, or indeed any IT system. Any individual facing potential direct responsibility for a failure must therefore have exercised due diligence, as due diligence and explainability will be the keys to their personal defence in the event of regulatory action.

We thus argue that AI-tailored manager responsibility frameworks, augmented in some cases by independent AI review committees as enhancements to the traditional three lines of defence, are likely to be the most effective means of addressing AI-related issues, particularly ‘black box’ problems, in finance and potentially in any regulated industry.


Dirk A. Zetzsche is Professor of Law, ADA Chair in Financial Law (Inclusive Finance), Faculty of Law, Economics and Finance, University of Luxembourg.

Douglas W. Arner is Kerry Holdings Professor in Law and Director, Asian Institute of International Financial Law, Faculty of Law, University of Hong Kong.

Ross Buckley is Scientia Professor and the KPMG Law – King & Wood Mallesons Professor of Disruptive Innovation and Law at the University of New South Wales, Australia.

Brian W. Tang is Founding Executive Director, LITE Lab@HKU, Faculty of Law, University of Hong Kong.
