Faculty of law blogs / UNIVERSITY OF OXFORD

The Matrix of Privacy: Data Infrastructure in the AI-Powered Metaverse




Leon Anidjar
Assistant Professor in Business Law at IE Law School, Madrid
Nizan Geslevich Packin
Professor at the City University of New York’s Baruch College, Zicklin School of Business, and Senior Lecturer at the Faculty of Law at the University of Haifa
Argyri Panezi
Canada Research Chair in Digital Information Law and Policy and Assistant Professor at the Faculty of Law of the University of New Brunswick

In his 1992 science fiction novel Snow Crash, the American author Neal Stephenson introduced the concept of the metaverse: ‘a fully realized digital world that exists beyond the one in which we live’; an immersive virtual world seamlessly integrated with the real world for social interaction, education, entertainment, and business. The metaverse is often associated with the concept of Web3 and relies on semantic technologies such as natural language processing (NLP), artificial intelligence (AI), and machine learning (ML). These technologies use data as the driving force that enables communication across different platforms and applications. For example, the metaverse allows individuals from different parts of the world to join virtual spaces and communicate with one another through AI-powered language translation tools.

Our article focuses on the metaverse as a potential Web3 application and its privacy challenges by considering two different points of view. The first, which we define as the ‘private view’, perceives the metaverse as an extension of Web2 applications, in which each tech giant develops its own virtual space that is fully controlled and manipulated by that entity. On this view, each tech giant would build an infrastructure that allows it to offer a customized simulated experience. We claim that, with some necessary adjustments, current legal tools—mainly privacy and antitrust laws—could adequately address most of the challenges raised by these developments. In contrast, according to a perspective that we define as the ‘public view’, the metaverse could become a fully decentralized virtual space not exclusively controlled by any business entity. Rather, the metaverse would comprise multiple virtual platforms that rely on open-source technology and foster a more dynamic and pluralistic marketplace of participants. If this public view of the metaverse materializes, lawmakers would need to create a unique legal tool to address privacy concerns outside the boundaries of the Web2 legal framework.

Along with the promise of exciting, far-reaching immersive experiences and seamless integration of the physical and digital worlds come massive privacy risks. Whether the metaverse develops as a fully private or a public setting, it generally allows tech giants, through their virtual platforms, to expand data collection by tracking people’s personal information, locations, body movements, and facial expressions, and by capturing biometric information. This information enables those collecting it to easily identify users’ age, gender, sexual orientation, race, or disability without their knowledge or consent. Specifically, VR devices gather biometric data by tracking users’ head and body movements, recording physiological parameters such as eye and gaze movements, measuring heart rate, and sensing neural activity related to brain-computer interfaces, such as speech activity. Such data is collected and shared with third parties for profiling and marketing customized products.

Based on the public (or plural) perspective of the metaverse, we introduce a multidimensional conceptualization of data exchanges to address these privacy challenges. Our legal framework is grounded in three levels of analysis: micro, macro, and meso. At the micro level, a metaverse that facilitates interoperability allows users, through their avatars, to move between virtual spaces with their digital assets and personal data. As a result, data exchange, collection, and processing are not limited to a single space and time; instead, they take place on and across several platforms, with numerous players interacting simultaneously. Based on this interconnectivity pattern of data exchanges at the micro level, we perceive platforms and the interactions within them as an expression of complex systems theory. This concept draws on three fields of study—General Systems Theory, Cybernetics, and Artificial Intelligence—and regards institutions and organizations as nodes in a network that interact physically, chemically, socially, or symbolically. In such systems, any actor’s interactions influence, and are at the same time influenced by, directly or indirectly, the interactions of other components in the same environment.

Assuming that the metaverse develops as a public setting, it would include simultaneous, intertwined data interactions among users, platforms, and service providers across several virtual platforms. For example, if a user employs her avatar to interact with other parties on a certain platform, data exchanges related to those interactions could occur within other platforms and among different players, even though the user is not present on those platforms at that time. Put differently, because the same players will be present on multiple platforms simultaneously, the collection, processing, and sharing of user data between them will take place even if the user does not explicitly interact with them on each platform.

To illustrate this idea, consider the metaverse space Nike created on the Roblox platform to allow its fans to interact with their favorite brands, meet new people, and participate in promotions. As part of such a collaboration, we can assume that the platform shares valuable user information—including user identity, behavior, and social and commercial interactions—with Nike to enhance the marketing of sports products and promote the brand. However, because many entities such as Nike will operate on multiple platforms, user data gained within one platform will likely be used to leverage those entities’ activities on other platforms.

While traditional privacy liability assumes a linear and static relationship between the wrongdoer and the injured party, based on a cause-and-effect relationship, civil liability in the metaverse must contend with non-linear, dynamic, and simultaneous relationships across several platforms, which undermines our ability to observe causal relationships. Therefore, identifying ordinary data protection violations and determining whether certain data governance norms have been breached could be challenging.

At the macro level, data exchanges reveal the vulnerabilities inherent in complex interpersonal relationships. This is especially true for groups defined by sensitive characteristics such as age, gender, race, sexual preferences, and socioeconomic status. In addressing the collective nature of privacy risks, we invite policymakers to take into account the power imbalance between dominant tech giants and various vulnerable populations. Finally, at the meso level, data exchanges in the metaverse will result from social, commercial, and professional (including employment) interactions, and will thus also be governed by the relevant contractual relations and laws (eg, employment law), which will address incidental privacy violations.

Our multi-level analysis demonstrates that, at the micro level, imposing traditional civil liability for privacy violations could be challenging. To provide meaningful complementary protection for user privacy, we call on lawmakers to impose mandatory disclosure obligations and liability regimes for non-compliance with data governance regulations and norms governing the use of AI. We recommend requiring metaverse entities to report how they internally address privacy abuses at three stages within each virtual environment: the entry, the experience, and the exit. Although our transparency-enhancing solution is not intended to replace traditional privacy remedies, we believe it could motivate metaverse entities to self-regulate their AI systems.

