The Future of Machines: Property and Personhood

Author(s)

Kelvin F.K. Low
Professor of Law at the National University of Singapore (NUS) Faculty of Law
Wai Yee Wan
Associate Dean and Professor at the School of Law of City University of Hong Kong
Ying-Chieh Wu
Associate Professor of Comparative Law at Seoul National University School of Law

Property and personhood are often seen as mutually opposed: one can be either a subject or an object, but not both. With artificial intelligence (AI) developing at unprecedented speed, the current legal status of machines as property raises three questions that we explore in our paper ‘The Future of Machines: Property and Personhood’.

 

First, will the law need to consider when (if ever) machines that integrate with a person, such as prostheses, cease to be mere tools and become part of our person? Proponents of such a development, who argue that treating prostheses as things leads to undercompensation, tend to draw upon the analogy of accessio, the process by which a minor object is subsumed into a major object, losing its separate identity. Although the analogy, like accessio to land (where the land is always the major object), avoids one of accessio’s most difficult questions (which object is major?), it cannot avoid the hard question of identity that lies at its heart. We propose that, as the concern revolves around the adequacy of compensation, a more enlightened approach towards mental distress, in which the physicality of mental distress is acknowledged, may be a better solution.

 

Secondly, could our tools increase (or have they already increased) in sophistication to the point where they may themselves be conferred legal personhood? In most legal systems, personhood extends both to natural persons (humans) and to legal persons (non-humans – most famously corporations, but occasionally natural objects such as parks and rivers, and Indian religious idols). Despite recent advances, we propose that, given the current state of the art, there is as yet no moral basis for conferring legal personhood on AI machines. But can it be justified on utilitarian grounds in a manner akin to that for corporations? We answer resoundingly no. First, conferring personhood on an AI system insulates the assets held by manufacturers, programmers, or users of the AI system from the legal claims of third parties, particularly tort victims, creating moral hazard. Secondly, conferring legal personhood on an AI system dilutes criminal liability to vanishing point. The modern approach is to subject corporations, in theory, to all offences that natural persons can commit; deterrence is directed at human directors, managers, and employees as part of a socio-technical system. Proponents who argue that these problems already exist for corporations fail to justify why we should compound a known problem, and they do not acknowledge that AI systems are purely technical rather than socio-technical systems, which will likely exacerbate these woes. A related yet distinct question is whether AI systems should be recognised as authors or inventors for the purposes of IP even absent legal personhood. Whilst some jurisdictions (notably Australia) have taken this step, others (notably England) have resisted this development. On balance, we suggest that the English approach is correct: the case for AI authorship or inventorship is premised on a failure to appreciate the distinctions between tangible and intangible property and on a misunderstanding of the rationales for accessio insofar as it relates to the fruits of tangible property, which lie more in policy than in natural law and are thus subject to exceptions (such as emblements in the case of land and swans in the case of chattels).

 

Thirdly, many of us have surrendered vast amounts of personal data to corporations that trade in this information and seek not only to predict our desires but to influence them. Should an ability to control data be considered property? How can one own data, which is merely information? Most legal systems, including the common law, rightly do not recognise property rights in information. As information is inherently non-rivalrous, we suggest that ‘ownership’ here merely serves as a metaphor for control. Much as is the case with self-ownership, property plays a purely rhetorical role. It underscores the justice of the outcome being argued for (that data subjects should control their data) whilst obscuring the circularity of the argument: data is property because it is controlled by data subjects, who have property in data because they control it. The problem (of impaired autonomy) is real, but the solution cannot lie in property. It is important to exorcise the metaphorical ghost of data as property from the law, since its rhetoric obscures more than it illuminates and threatens to constrain legal discourse, leading to the metaphorical (ie property) tail wagging the very real (ie autonomy) dog.

 

 


