Faculty of law blogs / UNIVERSITY OF OXFORD

Are we Board-Ready for Blockchain and AI?

Author(s)

Jason Fu

Blockchain and artificial intelligence (AI) are often cited as key developments for the boardrooms of the future, and a growing body of academic literature examines the theoretical and legal arguments surrounding these innovations. A roundtable organised by the European Corporate Governance Institute (ECGI) and the University of Oxford, Faculty of Law, took place at Allen & Overy in London on 26 November 2018 to explore some of these arguments. Participants discussed the use of technologies such as distributed ledger technology (DLT), blockchain, AI, and machine learning in the corporate governance context; how these technologies could facilitate shareholder engagement; the new data governance challenges posed by data analytics in the corporate context; and how those challenges might be managed.

In a presentation based on his working paper, Professor Christoph Van Der Elst (Tilburg University and ECGI) focused on the potential of blockchain and DLT to enhance shareholder engagement. Blockchain was presented as one solution to a range of current issues, including the legal requirement to identify shareholders, the need to enable shareholders to exercise their right to vote, the ability of institutional investors to develop their shareholder engagement policies, and the need to eliminate the unequal distribution of information to shareholders. By recording information on a single shared chain, blockchain could ensure the efficient and equal distribution of information and the immutability of each transaction, while simultaneously reducing the number of intermediaries involved and the associated transaction costs. The mainstream introduction of such technology could effectively alter the division of power between the board and shareholders, by enabling shareholders to decide a wider range of matters in an efficient manner.
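The immutability property discussed here can be illustrated with a deliberately minimal sketch (all names hypothetical, and omitting the distribution, consensus, and cryptographic-signature machinery a real system would need): each recorded vote commits to the hash of the previous record, so altering any earlier entry breaks every later link.

```python
import hashlib
import json


def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


class VoteLedger:
    """Illustrative append-only chain of shareholder votes.

    Each entry stores the hash of its predecessor, so tampering
    with any recorded vote invalidates the rest of the chain.
    """

    def __init__(self):
        self.chain = []

    def record_vote(self, shareholder: str, resolution: str, vote: str):
        prev = block_hash(self.chain[-1]) if self.chain else "0" * 64
        self.chain.append({
            "shareholder": shareholder,
            "resolution": resolution,
            "vote": vote,
            "prev_hash": prev,
        })

    def is_valid(self) -> bool:
        # Verify every link: each block must reference the hash
        # of the block immediately before it.
        return all(
            self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )
```

For example, recording two votes and then editing the first one in place causes `is_valid()` to return `False`, which is the sense in which such a ledger makes past transactions tamper-evident.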

The discussion that ensued raised a number of concerns that may require further thought before blockchain is widely adopted in corporate governance. These included the need for legal governance, which would require some centralisation of the system so that judicial decisions and changes in the law can be properly reflected; the unproven claims that blockchain is immutable and unhackable, and the existing means of challenging the immutability of a transaction; and the justification for allowing some market participants (eg traders or raiders) not to disclose their actions and stakes for a reasonable period. Furthermore, the introduction of more efficient voting may, for example, add to the ongoing pressure to give individual members of pension funds a vote.

The practical, legal and policy implications of implementing blockchain as a corporate governance tool are therefore diverse, but perhaps not insurmountable.

In the second presentation, Professor John Armour (University of Oxford and ECGI) gave examples of AI technology becoming increasingly prevalent, and of machine learning dramatically altering the development of AI. Machine learning has displaced its forerunners, the "expert systems", which depended on enormous engineering input and a finite knowledge base. The potential of AI in corporate governance is evident when it is considered as a tool for monitoring and synthesising information; for measuring performance, risk, and compliance; and for sophisticated scenario planning (simulations). These uses could facilitate and enhance the board’s required oversight of internal controls, alongside its legal duty of care.

The use of AI in this context raises specific questions about data governance, ie the governance issues arising from firms’ use of AI for monitoring and simulations. The issues highlighted included the objective standard of care legally required of directors, which may in future demand an appreciation of the strengths and weaknesses of AI model analyses; the external validity of a model insofar as it has a general character; and the risk of "dashboard myopia". In addition, the normative significance of probabilistic information produced by algorithms remains a question for further reflection, and legal systems would have to become better at assessing probabilistic evidence.

The roundtable provided a useful platform to examine the current status of these new tools, their potential application and the main areas of concern that will require consideration and debate.  

My full summary report of the event is available here.

Jason Fu is Visiting Lecturer in Commercial Law at King’s College London and Visiting Lecturer in Banking Law at the University of Birmingham.
