
Relying on AI for Financial Compliance and Supervision

Gerard Hertig
Principal Investigator, Future Resilient Systems Program, Singapore-ETH Centre

Financial intermediaries and supervisory authorities increasingly rely on artificial intelligence (AI), a term coined by John McCarthy in 1956. In two recent working papers (‘Use of AI by Financial Players: The Emerging Evidence’ and ‘The Political Economy of AI-Driven Financial Supervision’), I discuss a number of issues concerning the reliance on AI for financial compliance and supervision. Fundamentally, AI-driven decision-making remains prone to erroneous assumptions, built-in bias, data incompleteness, and validation deficiencies. It follows that AI-driven compliance and supervision remain ‘incomplete’. However, human compliance and supervision are incomplete too, and their completeness potential is lower than that of AI-driven systems.

It logically follows that the supervisory role of AI will continue to increase. The open question is: up to what point?

1. AI relies on computer algorithms that improve automatically through experience (machine learning). This technology is progressively being put to use in many industries.

According to a 2021 survey, 51% of Asia-Pacific respondents report that AI contributes to the conduct of their business, whereas 82% of respondents in Europe, the Middle East & Africa perceive AI as a core component of their business strategy. More generally, a leading professional services firm expects AI use to generate a 14% rise in global GDP by 2030.

2. Financial intermediaries increasingly use AI for risk management and compliance purposes. Overall, AI has already allowed for the automation of 64% of data collection and 70% of data processing tasks. AI-reliance is especially noticeable in the loan processing, client advisory, financial trading, and fraud detection areas; however, the available evidence remains circumstantial, making it hard to quantify the magnitude of this evolution accurately.

There is also emerging evidence of financial supervisors using AI for monitoring purposes. The European Central Bank as well as the French and German financial supervisors started referring to AI use in 2017, with their principal counterparts following up in 2018 and 2019. However, the provision of AI-specific information dried up in 2020 and 2021.

3. This evolution generates social benefits as well as social costs.

a) Increasing AI-reliance will reinforce private ordering in normal times and provide technologically advanced players with opportunities to game the regulatory system. At the same time, it could increase supervisory agencies’ productivity by up to 40%, as AI takes over the tedious tasks that contribute to friction within large organizations.

It follows that law-making and enforcement costs should decline. From a supervisory perspective, this is especially true when it comes to predicting bank distress, detecting fraud, and minimizing money laundering. In the private litigation area, AI is already facilitating discovery analysis (ie establishing the relevant facts, a component that even the best ‘lawyering’ cannot substitute for) and the prediction of case outcomes (which, in turn, determines case settlement and case filing probabilities).

AI is also likely to make arbitration more affordable, thus providing a widely available alternative to courts.

b) At the same time, AI-reliance may have highly disruptive effects across markets. In particular, it may facilitate the creation of super firms (hubs of wealth and knowledge), which could have detrimental effects on the wider economy. More generally, it could boost the need for workers with certain skills while rendering others redundant—a trend that could have far-reaching consequences for labor markets.

Experts also warn of AI’s potential to increase inequality, push down wages and shrink the tax base. In addition, there is a risk of AI-use widening the gap between developed and developing countries.

4. Overall, AI-reliance is likely to foster private ordering in normal times, but it could go hand in hand with significant job losses. More fundamentally, AI is not expected to have a significant impact in terms of systemic risk, even though lawmakers are increasingly focusing on AI-related systemic issues.


Gerard Hertig is a Principal Investigator at the Future Resilient Systems Program, Singapore-ETH Centre, and a Fellow and Research Member at ECGI.
