Faculty of Law Blogs / University of Oxford

Regulatory Innovation and Permission to Fail: The Case of Suptech

Author(s)

Hilary J Allen
Professor of Law at the American University Washington College of Law


We are at a moment in time when all kinds of different businesses are evolving into technology companies.  Case in point: in 2017, Lloyd Blankfein, CEO of storied investment bank Goldman Sachs, was reported to have said ‘We are a technology firm. We are a platform.’  While Blankfein’s statement might have had a touch of marketing bluster and hyperbole to it, the fact remains that in 2022, Goldman Sachs employed more than 12,000 software engineers—roughly a quarter of its workforce.

This kind of evolution can pose problems for regulatory agencies that have honed their methods, structures, and skillsets in light of an industry’s traditional way of doing business.  Financial regulatory agencies, for example, are predominantly staffed with economists, lawyers, and accountants.  That expertise is still critical to financial regulation, but increasingly, so is software engineering and data science.  I have written in the past about the need for financial regulatory agencies to build these kinds of expertise, but the problem goes deeper than expertise.  As regulated industries become more technologically sophisticated, the way regulators perform their functions will have to adapt too.  For example, sometimes human regulators will not be able to intervene quickly enough to address problems with automated systems. Or human regulators may not be able to review the huge volumes of data used to train the machine learning algorithms deployed by industry.  In these cases, regulators will need technological tools of their own—without such tools, regulators will increasingly struggle to discharge their mandates to protect the public.

As I argue in my new article ‘Regulatory Innovation and Permission to Fail: The Case of Suptech’, regulatory agencies’ technological innovation is becoming a defensive necessity, but it will inevitably involve some failures. In the private sector, failure is seen as critical to the innovation process, and is expected. Regulatory agencies also need to be extended this ‘permission to fail’ in their innovation attempts, or else they will be condemned to committing failures of inaction – and the public will suffer the consequences.  I illustrate this argument with case studies drawn from the world of ‘suptech’ (the technology used by financial regulators to help discharge their supervisory and other regulatory functions).  I survey where financial regulatory agencies are already undertaking suptech experimentation (and to be clear, many agencies are doing so, which flies in the face of caricatures of stodgy, Kafka-esque bureaucracies).  But I also argue that more suptech experimentation is needed—and that regulators’ fear of failure is a significant impediment to such experimentation.

Not all failures are created equal, though, and this article aims to start a conversation about different types of regulatory failures, their impact on the innovation process, and their importance to democratic accountability.  The starting point is to recognize that failures of inaction count as regulatory failures, and regulators should not be given a free pass for them.  When an industry is innovating at a breakneck pace, regulatory agencies that do not develop their own technological innovations may cede their ability to oversee that industry.  However, as public bodies, regulators should also be held to account when they do engage in technological innovation, if their technology operates in an unnecessarily draconian fashion or has inequitable distributional impacts.  Similarly, if public bodies use their tools to fish for more private sector data than they are entitled to, or fail to invest in the operational resilience of their technological tools such that the general public is harmed, that should rightly undermine the credibility and legitimacy of the agency in question (I argue that particular attention should be paid to operational risks associated with APIs, as well as to the reliability of any third-party cloud service providers).

For innovation to occur, though, some kinds of failures must necessarily be excused in the public sector—just as they are in the private sector.  Financial regulators are often trying to address complex problems with systemic dimensions while juggling competing mandates; these problems are characterized by great uncertainty and are often far more difficult to solve than any problem the private financial industry would take on.  Trial and error is the hallmark of innovation, and if we accept that public sector technological innovation is becoming an increasingly necessary part of the regulatory state, then we will need to learn to forgive regulatory agencies for their failures of effectiveness and efficiency, at least to a degree.  Sometimes, the technology will not work at all; other times it may not perform as intended (the article explores some of the limitations of machine learning, natural language processing, circuit breakers, and machine-readable rules).  Or a technological tool may ultimately be effective, but its development costs may be significant in relation to the improvements offered.  Regulators need to be extended some grace for these kinds of failures if they are to innovate.

So how do we create this ‘permission to fail’? Legal structures that permit excusable failures are necessary (and are explored in the article), but they are not sufficient. Permission to fail will also depend on public opinion, and so insights from sociology, political science, technology ethics, and other fields will also be critical to developing this concept. Ultimately, the kinds of failures we are willing to tolerate or excuse will depend to some extent on public perceptions, which will be informed by law as well as by messaging. But the law adopted will also be a product of public perceptions about which failures are tolerable, and messaging can be used to urge changes in that law.  Constructing permission to fail will therefore be an ambitious and recursive project, but it is one we need to start engaging with if we wish to ensure the continuing efficacy of the administrative state.  Notwithstanding this article’s focus on financial regulation and suptech, the concept of ‘permission to fail’ should resonate for any regulatory agency that is struggling to oversee an increasingly technologically sophisticated industry.

Hilary J. Allen is a Professor of Law at the American University Washington College of Law.
