A Disclosure-Based Approach to Regulating AI in Corporate Governance
Although AI in corporate governance promises many benefits, chiefly in risk management and in information sharing between the board and management, the risks cannot be ignored. Privacy and security issues, a lack of transparency in AI decision-making, and the incursion of human bias into AI systems have all been flagged as risks in other areas that have begun to use AI. From a corporate governance perspective, many of these risks are likely to generate conflicts of interest and to confer undue power on those who control decisions about the deployment of specific AI technologies (such as managers or controlling shareholders), to the detriment of other constituencies (such as outside shareholders or other stakeholders). Further, when AI systems employed by boards make mistakes, there could be serious consequences that shareholders and other stakeholders of the company may have to bear. While these risks signal the need for regulatory safeguards, it is also important not to chill innovative uses of AI in corporate governance. As we explain in our paper, regulatory mechanisms such as licensing regimes and sandbox mechanisms, which have worked well in other contexts, would likely be overinclusive, costly, and counterproductive in the context of regulating AI in corporate governance. Instead, we argue in favour of a phased disclosure regime.
We survey the disclosure requirements in three Asian jurisdictions — Singapore, Hong Kong, and India — to assess whether current laws already require AI-related disclosures. We find that, despite a significant push in all three jurisdictions to enhance sustainability reporting requirements in addition to financial reporting, the implications of technological advancements, including the use of AI in the governance of a company, are not adequately covered. Specific disclosure mandates will therefore be required.
Phased disclosure regime
Corporate governance is ripe for disruption by AI, and it is crucial that the regulatory landscape strikes the right balance to allow for this disruption, with minimal costs. In our working paper, we argue that, given the current stage of development of AI technologies in the corporate sector and the fact that the implications of their deployment in governance matters are yet to be fully comprehended, a phased disclosure-based approach is most suitable as a regulatory mechanism.
We propose that it would be prudent to initially introduce disclosure norms on a ‘comply-or-explain’ basis, giving companies sufficient flexibility to determine the content and extent of their disclosures. This would ensure that the regulatory response is proportionate to the risks posed by AI and does not curb technological innovation. However, disclosure requirements should eventually be made mandatory and specific once the use of AI in governance becomes more widespread and more is known about its benefits and risks. Such an approach will help stakeholders and regulators learn more about the specific use cases and risks involved. This learning can then feed into policy discussions that guide the framing of more specific mandatory disclosures in the future.
Content and presentation
Companies should be encouraged to disclose the rationale for their decision to adopt AI technology as part of their governance processes. In addition, they must disclose which specific technologies they use, the specific applications in which AI is deployed, and general trends and experiences regarding its use. Given that disclosures form an integral part of a company’s risk strategy, they must also include a detailed treatment of the possible risks emanating from the use of AI, the company’s strategies to mitigate those risks, and the plan by which the board proposes to implement those strategies.
Information should be presented in a manner accessible to a diverse range of investors and other stakeholders, irrespective of their levels of sophistication. This will enable them to appreciate the consequences of the use of AI in governance, and to make investment, governance, or other decisions accordingly.
Finally, the use of a proportionality criterion would help in moderating the extent of information to be disseminated, in terms of both quality and quantity. For instance, disclosure of sensitive or competitive information regarding the use of AI would likely be counterproductive. Accordingly, disclosure norms would do well to carve out appropriate exceptions where the situation so warrants in light of the proportionality standard.
Akshaya Kamalnath is Senior Lecturer at Australian National University College of Law
Umakanth Varottil is Associate Professor at the Faculty of Law, National University of Singapore