
Managing Corporations’ Risk in Adopting AI: A Corporate Responsibility Paradigm


Machine learning (ML) technology is developing at an accelerating pace, as capacities for data capture and ever-increasing computer processing power have improved significantly. ML is a branch of artificial intelligence that is not ‘deterministic’: rather than following pre-specified rules, the machine is programmed to ‘learn’ from patterns and data in order to arrive at outcomes, such as in predictive analytics. Companies are increasingly exploring the adoption of ML technologies in various aspects of their business models, as successful adopters have seen marked revenue growth.

ML raises risks for corporate and commercial use that are distinct from the legal risks involved in deploying robots of a more deterministic nature. These risks concern what data is fed into ML’s learning processes, particularly the risks of inbuilt bias and hidden, sub-optimal assumptions; how such data is processed by ML to reach its ‘outcome’, which sometimes leads to perverse results such as unexpected errors, harm, and difficult choices; and who should be accountable for such risks. While the extant literature discusses these issues richly, only nascent regulatory frameworks and soft law, in the form of ethical principles, exist to guide corporations seeking to navigate these risks.

Our paper intentionally focuses on corporations that deploy ML, rather than on producers of ML innovations, in order to chart a framework for guiding strategic corporate decisions in adopting ML. We argue that such a framework necessarily integrates corporations’ legal risks with their broader accountability to society. Corporations do not navigate ML innovations within a settled ‘compliance landscape’, given that the laws and regulations governing their use of ML are only just emerging. Meanwhile, corporations’ deployment of ML is being scrutinised by industry, stakeholders, and broader society as governance initiatives gradually develop. We argue that corporations should frame their strategic deployment of ML innovations within a ‘thick and broad’ paradigm of corporate responsibility that is inextricably connected to business-society relations.

We first define the scope of ML with which we are concerned and distinguish it from automated systems. We argue that the key risk that ML poses to corporations is unpredictable (or biased) results, even if ML systems perform efficiently most of the time. Such unpredictability poses four categories of legal and non-legal risks for corporations, which we unpack: (a) risks of external harms and liability; (b) risks of regulatory liability; (c) reputational risks; and (d) operational risks and significant financial losses. These risks do not affect corporations and their shareholders in isolation; they interact with, and carry broader consequences for, business-society relations.

Next, we anchor these risks in the narratives of business-society relations, first by examining their impact on the social, economic, and moral realms, and secondly by arguing that corporations should navigate these narratives within a ‘thick and broad’ paradigm of corporate responsibility. This paradigm is based on a perspective that integrates corporations, as citizens, into the broader social fabric. Locating the corporate management of ML risks within this paradigm compels corporations to internalise this socially conscious perspective and to shape their strategic and risk management approaches to ML risks accordingly.

Subsequently, we propose that corporations navigating ML risks within this ‘thick and broad’ paradigm of corporate responsibility should: (a) institute corporate governance structures for leadership in strategic and responsible decisions regarding ML risk; (b) institute enterprise-wide structures for broad and integrated internal governance of ML risk; (c) engage meaningfully with stakeholders and regulators on the strategic and responsible use of ML, and consider their feedback when designing and implementing internal enterprise-wide structures for managing ML risk; (d) make voluntary disclosure of ML risks and their management even when not subject to mandatory disclosure; (e) make prudential provision for ML risks, bearing burdens for loss consistently with notions of social justice and fair burden and risk allocation; and (f) engage in active dialogue with regulators on sandbox arrangements for testing and experimenting with ML, so that risks can be observed and their management grounded in a fully considered and accountable process.


Iris Chiu is Professor of Company Law and Financial Regulation at the Faculty of Laws, UCL.

Ernest Lim is Associate Professor at the Faculty of Law, National University of Singapore.
