Artificial Fiduciaries
In my paper ‘Artificial Fiduciaries’, I contend that the conventional fiduciary duty framework, with its human-centric orientation, is outdated in the face of advancing AI technologies. The paper introduces an innovative theory: assigning fiduciary responsibilities to AI entities capable of human-like decision-making. This theory has implications for AI-operated entities such as AI board members, robo-advisors, and AI healthcare providers.
The quest for truly independent directors in corporate governance has been fraught with challenges. Traditional reforms, such as term limits, external audits, and enhanced disclosures, have fallen short. In contrast, the advent of AI offers a novel solution. The paper coins the term ‘artificial fiduciaries’ to describe this solution, which builds on Stephen Bainbridge and Todd Henderson’s idea of outsourcing board functions to Board Service Providers (BSPs). While their idea was a step forward, it did not solve the independence problem and was limited both by the technology of the time and by human bias. Artificial fiduciaries could overcome these limitations by ensuring genuine independence and lowering agency costs through optimized decision-making. They could also serve as ombudsmen and help democratize corporate governance on a global scale.
My paper scrutinizes traditional fiduciary theories, questioning whether AI can uphold the rigorous demands of fiduciary responsibilities. Eugene Volokh argued in ‘Chief Justice Robots’ that society requires compassionate judgment rather than compassionate judges. This suggests that the pertinent question is not whether AI can fulfil the theoretical criteria of a fiduciary, but whether it can accomplish the objectives that the law assigns to this role.
Artificial fiduciaries could act as independent outside directors owing certain fiduciary obligations to both corporations and shareholders. Shareholders might find artificial fiduciaries collaborating with human counterparts advantageous, because the combination is likely to exhibit fewer imperfections than human fiduciaries operating in isolation. However, artificial fiduciaries’ specific responsibilities would differ from those of their human counterparts because of their algorithmic nature. The article outlines the fiduciary duties that should be assigned to artificial fiduciaries, including the duty of care and the duty of loyalty, and suggests that they be held to standards similar to those applying to human fiduciaries, with some distinctions reflecting their algorithmic character, in order to minimize potential harm from their implementation.
Through a critical analysis of AI capabilities, the article addresses potential criticisms and limitations of employing AI in this way, such as bias, the black box problem, safety concerns, the illusion of objectivity, and the risk of creating super directors who dominate board discussions. It argues that these challenges can be mitigated through transparency, ethical frameworks, and clear guidelines governing artificial fiduciaries’ use and decision-making processes. These discussions contribute substantially to the literature on algorithmic fairness. The article also cautions against the misguided belief that AI is merely a tool: artificial fiduciaries will possess a degree of autonomous decision-making capability that places them beyond the concept of a mere product.
The article concedes that artificial fiduciaries face certain constraints, such as a deficit in social capital and difficulty navigating complex ethical dilemmas. It proposes a collaborative approach in which human and artificial fiduciaries complement one another, leveraging their respective strengths and offsetting their weaknesses. This collaboration underscores the need for clear ethical protocols to guide AI’s decision-making processes. A discerning board should accurately gauge the relevance of artificial fiduciaries’ contributions and prioritize implementing their most effective recommendations.
The article concludes by contemplating the future landscape of corporate governance in light of AI’s evolving role and proposes regulatory strategies to govern the emergence of artificial fiduciaries. This exploration not only contributes to the academic discourse on AI in corporate governance but also serves as a call to action for policymakers in Delaware to allow artificial fiduciaries and to adapt the current frameworks that regulate fiduciary conduct.
Zhaoyi Li is a Visiting Assistant Professor at the University of Pittsburgh School of Law.
The author’s article can be read here.