
Artificial Persons in the AI Age

Author(s)

Susan Watson
Professor of Law at the University of Auckland


With talk of driverless companies and bots substituting for human beings on boards, we face the spectre of a future controlled by corporate entities devoid of human beings. But isn't the modern corporation already an autonomous system? When we talk of driverless corporations, are we there yet? I consider those issues in 'Corporations as Artificial Persons in the AI Age'.

AI will over time be able to make decisions for corporate entities. Algorithms will set the parameters for those decisions. Once AI makes these decisions, the corporation will clearly become an autonomous system or fully automated entity. We fear that we will be harmed by automated entities: that AI will make decisions detrimental to human beings. Dystopian scenarios in film and television are thought experiments by auteurs who identify our deep and visceral fear of entities and robotic beings controlled by AI. Even though human beings are perfectly capable of harming each other, and frequently do, the fear is caused by concerns that automated entities will lack that most human of attributes, a conscience. Without a moral compass, powerful entities could do harm to us and the world without any compunction, and without our being able to control them or prevent that harm.

Is this a problem with AI or with the corporate form itself? The definition of AI as 'the ability of a non-natural entity to make choices by an evaluative process' can easily be applied to decision making in a modern company, even though it is Human Intelligence (HI), rather than AI, that currently drives those decisions. Given the constraints and imperatives that the human intelligences on boards currently operate under, will AI substituting for HI really make much difference?

Companies as artificial legal persons have been recognised as separate from all natural persons at least since Lord Macnaghten stated in Salomon v Salomon: 'The company is at law a different person altogether from the subscribers to the Memorandum.' The trend throughout history has been towards the separation of the interests of shareholders from the shareholders themselves at any given time, combined with boards constrained to act in those interests. Indeed, as early as 1657 Oliver Cromwell granted the East India Company a charter with permanent capital. Members of governing bodies swore an oath to act in the interests of shareholders. Those interests were the capital contributed by shareholders, held separately in the Company.

Can corporations with boards of human directors operate with conscience in a way that a board controlled by AI could not? Conscience requires an awareness of morality. It must develop in a context. For human beings, although some form of conscience may be inherent, it is often fostered and developed by upbringing. Conscience also requires a locus for that morality to be perceived and acted upon; in human beings it is the mind or, perhaps, the soul. In corporations it can only be the board.

A board of human beings may have a conscience in a way that a board driven by AI decision making may not. Corporate conscience may be located in the board, but that conscience cannot operate in the same way as the conscience of an individual human being. People behave differently when they are part of an organisation: they do not bring their whole selves to their roles. The total span of their knowledge, and also their values and morality, cannot be attributed or imputed to the company. The board as the locus of corporate conscience is also hampered by internal constraints on decision making, such as fiduciary obligations owed to the entity, and by perceived imperatives such as value maximisation. Members of boards are connected with companies only some of the time, and when connected, their decision making is constrained by their corporate roles and by their own perceptions of the limitations and obligations of those roles. It becomes clear that a corporate conscience residing in the board, even a board of human beings, cannot be akin to the conscience of an individual human being.

Modern corporate governance increasingly focuses on corporate purpose and sustainability. But can we say that companies 'care'? Boards may develop relationships with constituents and may address constituent concerns, but only up to a point. Doing good will be considered legitimate and tolerated by shareholders to the extent that it enhances the reputation of the corporate person, encourages investment, and maximises value in the short or long term. Moving beyond a myopic focus on short-term profit maximisation will make companies less bad, but it will not make them completely good.

Who should be liable for corporate wrongs? AI making decisions for the artificial person will compel us to think again about the human beings who benefit most from its activities. And accepting that the change in decision making from HI to AI may be merely an incremental shift may make a clear-eyed assessment of culpability in modern corporations possible.

If we accept that modern corporations are already artificial persons, it could be argued that for all corporations, including the AI corporations of the future, the human beings connected with the corporation should potentially be liable for the wrongs the corporation commits. Liability would extend not only to the board but also to those who form part of the hierarchy of the corporation, such as its employees.

Where ultimately should the buck stop? Boards and employees are constrained by being compelled to act in the interests of the entity, and therefore of shareholders, whether in the short or long term. Shareholders launch and then keep the corporate vessel afloat by providing capital. Shareholders collectively have ultimate control over the corporation through their constitutional rights while the corporation is extant, and through their ultimate collective right to withdraw their capital and liquidate the corporation. By operating the artificial legal person in the world, shareholders benefit from sharing in profits and in the growth in value of their shares over time. It could therefore be argued that these rights and benefits carry with them concomitant obligations. Shareholders have long enjoyed limited liability to creditors of the company. But why should that limited liability necessarily protect shareholders from liability for all corporate wrongs? Whether it is algorithms or constrained boards of directors that govern corporations should not alter our requirement that shareholders, as the creators, ultimate controllers, and beneficiaries of these artificial persons, be liable for the harms those persons do in the world.

Susan Watson is Professor at the Faculty of Law and Professor and Dean at the Faculty of Business and Economics at the University of Auckland. She is a research member of ECGI.

The full article can be accessed here.

