
Viewing Artificial Persons in the AI Age Through the Lens of History

Susan Watson
Professor of Law at the University of Auckland


With talk of driverless companies and of bots substituting for human beings on company boards, in part or entirely, the spectre of a future controlled by entities devoid of human beings is upon us. We experience a visceral fear when we imagine corporations without people. But has the future been here for longer than we all realise? Are we sure we are not there already?

As discussed in my recent paper, there is no such thing as a company: companies are legal fictions that arise out of a form of collective imagination. We operate companies in full consciousness of their falsity. Such is the power of this fiction that it has been described as amongst mankind’s most ingenious inventions. These artificial legal persons have long been recognised as legally separate from human beings, so the shift to artificial legal persons controlled by artificial intelligence (AI) is not as radical as it might at first appear.

The success of the company is counterintuitive: as Adam Smith highlighted, the separation of the company from its investing shareholders creates agency problems. So why do companies succeed in creating value? Since the board of the English East India Company was charged in 1657 with acting in the interests of the Company’s permanent capital fund rather than following the instructions of the shareholders themselves, the trend through history has been towards the separation of shareholders from the company. That separation, combined with the potential immortality of the artificial legal person and with boards constrained to act in the interests of the corporation itself, means that capital funds grow and shareholders consequently prosper. Accepting that the role of the board relates to the capital fund in the company, rather than that directors act as economic and legal agents of current shareholders, provides clarity of purpose for boards.

As a form for transacting, extracting, and generating value, the modern company is unbeatable. But a reality check on the limitations of the corporate form is needed. Modern corporate governance increasingly focuses on corporate purpose and on societal stakeholder concerns, with the focus by investors on sustainability and on ESG (environmental, social and governance) aspects inevitably influencing the weight boards put on those factors in their decision-making. But can we say companies ‘care’ about, say, sustainability? Boards may develop relationships with constituents and may address constituent concerns, but perhaps only so far as doing so serves the ultimate purpose or end of maximising the forms of value in the capital fund. Doing good by prioritising ESG factors will be considered legitimate and tolerated by shareholders to the extent that it enhances the reputation of the corporate legal person, encourages investment, and maximises the value of the capital fund. Boards may also consider sustainability issues when assessing risk, but if the focus is on capital fund value maximisation, the risks to the entity brought about by externalities may be neglected, and the effect of those externalities on the outside world ignored. In short, boards may consider ESG concerns to the extent that doing so maximises the forms of value the corporation holds in its capital fund. Moving beyond a myopic focus on profit maximisation will make companies less bad, but it will not make them completely good.

Whether corporations operate as almost-automated entities controlled by constrained boards of human beings or as fully automated entities controlled by AI, who should be liable for corporate wrongs? The notion of AI making decisions for the artificial legal person may compel us to rethink the roles of the human beings who operate in, around, and behind the corporation. Substituting AI for human intelligence on the board will neither improve nor worsen the potential for harm that exists with the corporate form, but it does force a clear-eyed assessment of decision-making and liability in modern corporations.

Modern corporations may have three characteristics that militate against conscience and doing good: first, the fragmentation of roles, including the separation of ownership from control and of control from implementation and accountability; secondly, the moral hazard of being a group and part of an organisation; and thirdly, internal constraints on moral decision-making, including perceived profit maximisation or capital fund maximisation imperatives and fiduciary obligations to the company.

Where, ultimately, should the buck stop? There are calls for piercing the corporate veil when abuses occur in driverless corporations operated using AI. But why should shareholders of driverless corporations be liable for corporate wrongs when shareholders of corporations currently are not? Boards are constrained because they are compelled to act in the interests of shareholders by maximising the value of the capital fund. Shareholders launch and then keep the corporate vessel afloat. The property rights of shareholders in the capital fund of the corporation carry with them concomitant obligations. Shareholders have long enjoyed limited liability to creditors of the company. But why should that limited liability necessarily protect them from liability for corporate wrongs? The modern prevalence of widely diversified institutional investors as shareholders may make veil-piercing more palatable. Whether it is algorithms or constrained boards of directors that govern the legal fictions that are corporations should not alter our requirement that shareholders, as the creators and perpetuators of those fictions, be liable for the harms those corporations do in the world.

Substituting AI for constrained boards of directors acting in the interests of the capital fund contained in the corporation is an incremental change. Recognising that the moral and ethical challenges of the AI-controlled corporation already exist with the human intelligence-controlled corporation may cause us to reconsider the responsibilities we place on those who set these value creators and aggregators in motion, and the obligations of those who perpetuate their existence in the world.

Susan Watson is a Professor of Law at the University of Auckland.
