
Explainable AI in M&A: Legal Incentives and Technical Challenges

Author(s)

Philipp Hacker
Professor for Law and Ethics of the Digital Society, European New School of Digital Studies


Advanced machine learning (ML) techniques, such as deep neural networks or random forests, are often said to be powerful but opaque. However, a burgeoning field of computer science is committed to developing machine learning tools that are interpretable ex ante or at least explainable ex post. This has implications not only for technological progress, but also for the law, as we explain in a recent open-access article. On the legal side, algorithmic explainability has so far been discussed mainly in data protection law, where a lively debate has erupted over whether the European Union’s General Data Protection Regulation (GDPR) provides for a ‘right to an explanation’. While the obligations flowing from the GDPR in this respect are quite uncertain, we show that more concrete incentives to adopt explainable ML tools may arise from contract and tort law. To this end, we conduct two legal case studies, covering medical and corporate merger applications of ML. As a second contribution, we discuss the (legally required) trade-off between accuracy and explainability, and demonstrate its effects in a technical case study.

In this post, we focus on the corporate merger example. While, in the medical context, AI tools are already the subject of tremendous research efforts and are being adopted in the field, ML predictions may also be used to aid companies in M&A cases, for example by valuing potential target companies or by identifying factors for successful mergers. In this vein, machine learning has been applied to analyze merger activity based on corporate disclosures or on earnings conference call transcripts. By training models on features extracted from such texts, scholars were able to identify factors contributing to mergers, finding that firms with a corporate culture focused on innovation are more likely to be acquirers than firms with a corporate culture focused on quality. The authors also find that firms with the same or a similar corporate culture tend to merge more often and at lower transaction costs. Finally, based on a data set of M&A activity in Japan, researchers have developed a prototype that uses ML for merger recommendations. Hence, ML tools for M&A, while not widely used yet, are clearly on the rise.
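To give a rough sense of how such a pipeline can look, the following minimal sketch trains a random forest on word features extracted from (toy) disclosure text to predict whether a firm later acts as an acquirer. The corpus, labels, and model choices are hypothetical illustrations and do not reproduce the data or methods of the cited studies.

```python
# Illustrative sketch only: the corpus, labels, and model choice are hypothetical
# and do not reproduce the cited studies' data or methods.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

disclosures = [
    "we invest heavily in innovation and new product development",
    "our focus remains on quality and reliable manufacturing",
    "research and development spending will drive further innovation",
    "quality control and customer satisfaction remain our priorities",
]
became_acquirer = [1, 0, 1, 0]  # toy labels: 1 = firm later acted as an acquirer

model = make_pipeline(
    TfidfVectorizer(stop_words="english"),                     # disclosure text -> word features
    RandomForestClassifier(n_estimators=200, random_state=0),  # opaque but often accurate model
)
model.fit(disclosures, became_acquirer)

new_filing = ["our strategy centres on innovation in emerging technologies"]
print(model.predict_proba(new_filing))  # estimated probability the firm becomes an acquirer
```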

We analyze the legal implications of such tools under the business judgment rule. Two questions stand out here: First, if AI tools are put to use, what requirements must managers meet to avail themselves of the protection granted by the business judgment rule in case of prediction error? Second, is there an inflection point at which directors may even be compelled to adopt ML technologies to avoid liability?

On the first question, we argue that the model must surpass a certain performance threshold in the field. Management must retrieve all necessary information on this issue; this will require building up ML competence in or around the boardroom. Importantly, the avoidance of false positives (AI recommendations of eventually unsuccessful mergers) should be given much greater weight than the avoidance of false negatives (failures to recommend hypothetically successful mergers) in M&A settings, as the unwinding of unsuccessful mergers entails very significant transaction costs. Furthermore, managers who rely on ML tools will be obliged to exercise independent judgment. Even if, on average, a model exhibits supra-human performance, managers may disagree with the model in individual cases if they can advance professional reasons for such a departure from its recommendations. This could, for example, be based on a lack of representativeness of the training data for the target context. Conversely, if they fail to override the model despite clear and compelling evidence of failure, they will be held liable, although the business judgment rule provides for a significant margin of error here. This kind of reasoned departure from model predictions, however, is generally possible only if certain explainability and transparency requirements are met (see also 'AI Regulation in Europe'). Otherwise, it will be close to impossible for those without machine learning expertise, such as managers, to evaluate whether a recommendation is likely to be correct.
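To make the asymmetry between the two error types concrete, the minimal sketch below shows one way a recommendation threshold could be tuned so that false positives (recommended mergers that fail) weigh more heavily than false negatives. The cost ratio, scores, and labels are purely illustrative assumptions, not figures from the article.

```python
# Illustrative sketch: choosing a recommendation threshold when a false positive
# (recommending a merger that later fails) is costlier than a false negative.
# The cost ratio, scores, and labels are hypothetical placeholders.
import numpy as np

COST_FALSE_POSITIVE = 10.0  # unwinding an unsuccessful merger is expensive
COST_FALSE_NEGATIVE = 1.0   # a missed opportunity is comparatively cheap

scores = np.array([0.15, 0.35, 0.55, 0.70, 0.82, 0.91])  # model's merger-success scores
labels = np.array([0,    0,    1,    0,    1,    1])     # 1 = merger actually succeeded

def expected_cost(threshold):
    recommend = scores >= threshold
    false_pos = np.sum(recommend & (labels == 0))   # recommended but unsuccessful
    false_neg = np.sum(~recommend & (labels == 1))  # not recommended but successful
    return COST_FALSE_POSITIVE * false_pos + COST_FALSE_NEGATIVE * false_neg

thresholds = np.linspace(0.0, 1.0, 101)
best = min(thresholds, key=expected_cost)
print(f"recommend mergers only above score {best:.2f}; expected cost {expected_cost(best):.1f}")
```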

As regards the second question, we argue that an obligation to adopt ML tools arises only if (i) the models are proven to consistently outperform professional actors in the field, and if (ii) their results can be explained to and critically assessed by humans. Given these standards, compulsory use of ML tools in M&A remains a distant prospect, despite active research in the field of explainable AI. However, we expect these models to be increasingly integrated into boardroom decisions on a voluntary basis, fuelling the trend toward explainable models in an endeavour to meet the requirements of the business judgment rule.

In this context, it is particularly important to note that, as we show in the final and technical part of the paper, increased explainability need not imply reduced accuracy. At any rate, given the importance both of field performance and of explainability, managers will likely be granted significant discretionary powers to conduct this trade-off. It seems advisable, however, to document these decisions to show that the adopted model fell within a reasonable margin of appreciation, and that the decision was taken on an informed basis. Overall, this shows that not only the ML tool itself, but also the decision-making process leading to its adoption, must be explainable ex post if managers want to rely on the business judgment rule. In the end, the best results will often be achieved by human-machine teaming, but this again presupposes an explainable model for such teaming to work.
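As a purely illustrative sketch of what such documentation might involve, the snippet below compares an interpretable model with a black-box model on the same synthetic data and records cross-validated accuracy for each candidate. The data set, models, and metric are our assumptions and are not taken from the paper's technical case study.

```python
# Illustrative sketch of documenting an accuracy-versus-explainability comparison.
# The data set, candidate models, and metric are stand-ins chosen for this example.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = {
    "interpretable (logistic regression)": LogisticRegression(max_iter=1000),
    "black box (random forest)": RandomForestClassifier(n_estimators=300, random_state=0),
}

# Record cross-validated accuracy for each candidate so that the adoption decision
# can later be shown to have been taken on an informed basis.
report = {name: cross_val_score(clf, X, y, cv=5).mean() for name, clf in candidates.items()}
for name, acc in report.items():
    print(f"{name}: mean accuracy {acc:.3f}")
```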

Dr Philipp Hacker is an AXA Postdoctoral Fellow at Humboldt University of Berlin and a Research Fellow at the Centre for Law, Economics and Society, and at the Centre for Blockchain Technologies, both at UCL.

 
