Faculty of Law Blogs / University of Oxford

AI and Central Banks—Speeding Ahead within Legal Guardrails

Author(s)

Yan Liu
Deputy General Counsel at the International Monetary Fund
Alessandro Gullo
Assistant General Counsel, International Monetary Fund
Marianne Bechara
Senior Counsel, International Monetary Fund

Central banks have been pioneers in leveraging emerging technologies to enhance the efficiency of their functions. New generative AI (Gen AI) tools are now the next frontier for central banks, as a recent BIS report reveals. The European Central Bank (ECB), together with the Bank of Spain and the Deutsche Bundesbank, developed a Gen AI tool to identify climate risks in the financial system. FinBERT, a Federal Reserve Board project, seeks to predict the probability of recessions, and the Commonwealth Bank of Australia is exploring Gen AI to better understand customer behavior during natural disasters.

AI can significantly strengthen central banks’ agility in predicting and mitigating crises. It can improve the implementation of monetary policy, as well as the regulation and supervision of financial institutions. However, central banks should proceed with caution on their AI journeys, as many unknowns lie ahead, including questions of data governance, cybersecurity, legal liability, and central bank governance. The International Monetary Fund’s (IMF) 2021 Fintech Note highlighted the legal challenges that AI exploration poses to central banks’ governance, transparency, and accountability.

Boosting Central Banks’ AI Oversight and Expertise

Central banks need the capacity and tools to monitor their use of AI. Three areas of central banks’ legal frameworks require attention.

First, the legal mandate of the oversight function should be expanded to include overseeing the implementation of the central bank’s AI policies, such as the choice of an AI system or provider, as well as the design of the executive management structure for AI (such as chief AI officers). Second, the eligibility criteria for some members of decision-making bodies should include expertise in information technology, cyber risk, data, and AI. Incompatibility criteria and safeguards should mitigate conflicts of interest, including vis-à-vis the private sector. Finally, internal oversight structures, such as audit committees, risk committees, or newly established ‘algorithmic audit committees’, should have the mandate to monitor and audit AI use and the obligation to report directly to the oversight function.

Increasing Transparency on Central Banks’ Use of AI

Central bank transparency becomes increasingly important when AI tools are used.

Central banks may need to disclose the criteria and reasons for selecting an AI system or provider. They should also be able to explain, in easily understandable terms, how the AI system was used, the methodology by which it reached its output, and the extent to which that output influenced their decision-making. Any limitations on the use of AI tools, such as strict access rights, should be transparently disclosed. Regular audits of AI systems should be conducted to identify and mitigate data bias. Finally, the AI ‘black box’ risk should be mitigated by seeking to deploy ‘explainable AI’, designed to be accessible and easily understood by all stakeholders.

Not Abdicating AI Responsibility

As public agencies, central banks are bound by public law principles, which require them to take well-motivated, proportionate actions. Central banks must be accountable for all their decisions, including those involving the use of AI. This obligation applies even when flaws, such as hallucinations or data bias, emerge in the AI tools or in the data they harvest. Three areas are prominent.

  • Central banks’ accountability may extend to a broad group of stakeholders, including data subjects who benefit from personal data protection and who represent a significant portion of the general public.
  • Central banks will remain responsible even when AI output has played a significant role in their decision-making process. They must therefore retain control and oversight of third-party providers (e.g., cloud providers), as delegation does not relinquish their responsibility for core functions. An analogy can be drawn with the responsibilities attached to outsourced activities, as prescribed by the Basel Committee on Banking Supervision and, more recently, by the ECB in the consultation on its new guide on outsourcing cloud services. Another approach to construing the liability of central banks and other entities involved could rely on contract and tort law, but it is questionable whether this would yield sufficiently broad-based conclusions within and across jurisdictions.
  • Central banks must be held responsible for the internal controls implemented to monitor and manage the associated risks. For instance, measures such as ‘humans in the loop’ and ‘kill switches’ for manual supervision and triage are essential to address the risk of AI hallucinations. Concerns regarding data bias must be proactively tackled through policies that mandate sampling and that avoid nontransparent and unreliable sources.

Navigating the Legal Landscape in Central Banks’ AI Journey

Central banks are actively exploring AI tools to enhance the efficiency and effectiveness of their functions, whether related to monetary policy or to supervision. AI tools carry tremendous potential for central banks. They enable real-time analysis of vast amounts of data, leading to more informed and up-to-date policy decisions. They can also improve decision-making and risk management within central banks.

However, the use of AI tools could pose risks and challenges to central banks’ governance, data privacy, intellectual property, cybersecurity, and legal liability. An effective AI journey for central banks requires passing a set of ‘AI legal checkpoints’: Is there an effective oversight function? Is there adequate expertise? Are central banks’ decisions relating to AI transparently disclosed? Do central banks remain accountable for their actions? Central banks should carefully assess whether their legal frameworks are adequate to ensure they are well equipped for their AI journey.

Yan Liu is the Deputy General Counsel in the Legal Department of the International Monetary Fund.

Alessandro Gullo is the Assistant General Counsel in the Legal Department of the International Monetary Fund.

Marianne Bechara is the Senior Counsel in the Legal Department of the International Monetary Fund.
