
Digital Personhood? The Status of Autonomous Software Agents in Private Law

Autonomous software agents are already today attributed social identity and the capacity to act under certain conditions. Because actions are attributed to them in economic and social contexts, they have become non-human members of society. Apart from their obvious economic and social advantages, they pose three new risks: (1) the autonomy risk, which originates in stand-alone “decisions” taken by the software agents; (2) the association risk, which arises from the close cooperation between people and software agents; and (3) the network risk, which occurs when computer systems operate in close integration with other computer systems. These risks pose a challenge for private law: to define a new legal status for autonomous digital information systems, though not simply as full legal personification. In response to each of the three risks, each algorithmic type should be granted a legal status carefully calibrated to its specific role: (1) an actant with limited legal subjectivity, (2) a member of a human-machine association, (3) an element of a risk pool.

Concerning the autonomy risk, an adequate response would be to attribute limited legal personhood to software agents. Decision under uncertainty is likely to be the legally relevant criterion of their autonomy. If such decisions are delegated to software agents and the agents behave accordingly, then the law is required to assign them the capacity for legal action. Software agents act autonomously in the legal sense when their behaviour no longer follows an exclusively stimulus-response scheme, but when they pursue their own goals and make decisions that cannot be predicted. Software agents should be given limited legal subjectivity, calibrated to their role as legal representatives who may enter into contracts for others. Corresponding exactly to its real function in business practice, the software agent in a principal-agent relationship concludes the contract on its own legal authority, yet acts not in its own name but on behalf of the principal. At the same time, software agents are to be recognised as legally capable persons in cases of contractual and non-contractual liability, so that the machine misbehaviour itself, and not just the behaviour of the underlying company, constitutes a breach of duty for which the company must answer.

A possible answer to the association risk would be to grant software agents the legal status of a member of a hybrid entity, i.e. a human-machine association. In contrast to the individualistic law of agency, which clearly separates the individual actions of principals and agents and declares the principal to be the contractual partner, the human-machine association itself would become the actual contractual partner. For contractual and non-contractual liability, the preconditions of liability would be satisfied by their composite conduct, without the individual contributions having to be painstakingly, and often arbitrarily, disentangled. The association itself would be recognised de lege ferenda as the legal point of attribution for actions, rights and obligations.

Finally, the answer to the network risk would be for the law itself to construct risk pools that delimit these interrelationships. The network risk destroys assumptions about the individuality of actors that are constitutive for the attribution of action and responsibility. Both the actor and the causal relationships are difficult, if not impossible, to identify. As a consequence, the law no longer looks for individual or collective actors but focuses instead on the risky decisions as such. It holds chains of actions responsible for their consequences without regard to organised decision centres. The risk pool would define the legal status of the algorithms as part of a comprehensive digital information flow, with the pool itself incurring liability in cases of its unlawful conduct. The risk pool would no longer be determined by cooperative, organisational or technical structures. Rather, it should be defined as a “digital problem area”, the limits of which should be legally determined by its suitability for collective risk management.

Ultimately, the three solutions are not to be understood as mutually exclusive legal alternatives. They could very well exist side by side. The principal-agent solution would be appropriate where the algorithms act in social life as clearly defined individual actors. If, by contrast, they are embedded in a dense interaction context with human actors, the associational solution should be preferred. If, finally, many autonomous algorithms are interconnected within a multi-agent system, liability law should no longer refer to organisational arrangements but rather define, not to say decree, new types of risk networks. All three solutions, however, share the necessary precondition that they confer limited legal personhood on non-human actors: on actants, on hybrids, or on risk pools.

A more elaborate version of this argument can be found here.

Gunther Teubner is Professor emeritus of private law and legal sociology at Goethe-Universität Frankfurt.
