This post is part of a special series of posts based on contributions to a conference on ‘The Law between Singularity and Equality’ that took place in Berlin on 31 October/1 November 2025.
Vulnerability has been an essential part of the discussions about troubleshooting and reforming European consumer law, particularly under the upcoming Digital Fairness Act. Vulnerability has traditionally been used by the law to ‘see’ those most in need of protection. Essential instruments such as the Unfair Commercial Practices Directive (UCPD) have traditionally relied on classical benchmarks. For the UCPD, those are the ‘average consumer’, who is reasonably well-informed, observant, and circumspect, and the ‘vulnerable consumer’, defined through traits such as age, credulity, or infirmity.
Digital markets, with their data-driven tracking and profiling at scale, have given rise to new ways in which consumers can be vulnerable. Recent scholarship shows that vulnerability is not merely personal but systemic: it is produced by choice architecture, dark patterns, profiling, and targeting, among others. Platforms do not simply encounter vulnerability; they actively configure it. In this view, vulnerability is architectural, relational, and deeply entangled with data-driven business practices.
It is true that, in principle, given the vast information and power asymmetry defining digital markets, even the most educated, business- and technology-savvy consumer who does their research is no match for the control a Big Tech platform exercises over their online footprint and behavior. However, shifts towards concepts such as situational vulnerability, which have informed policy recommendations such as changing definitions and consumer benchmarks under the UCPD, are only partial fixes for the overarching problem. If new interpretations shape vulnerability as technology-based and systemic, because it is built on a personalization machine (eg, the platform), vulnerability becomes personalized in itself. In principle, the promise is appealing: if the law can identify that more people are vulnerable, it can tailor when and how more protections are needed. But this emerging vision of personalized vulnerability raises deep conceptual and practical problems, as I will explore in this post.
Different Rules for Different Romans
Personalization in law is not new; it well predates any data-driven technology, as I have written earlier in Omri Ben-Shahar and Ariel Porat’s Personalized Law series hosted by the University of Chicago Law Review Online. Roman law already experimented with flexibility through bona fides, introduced gradually as a procedural defense against fraud or duress. These mechanisms corrected inequitable outcomes without rewriting an entire legal system. In that sense, they functioned as an early form of personalization, adapting abstract rules to the concrete situations of legal subjects who needed such solutions.
Modern European law inherits this logic to a certain extent. Principles such as Treu und Glauben in the German Civil Code (§242 BGB) serve interpretative, supplementary, and corrective functions. Principles such as good faith allow for a high degree of flexibility in the law, but this comes at a cost. When law becomes overly individualized, what is gained in precision may be lost in legal certainty.
The Platform Context
Personalized vulnerability emerges in a digital environment defined by platformization: an interconnected, datafied web in which social media, content delivery, and e-commerce blur into one another. Social platforms have become public squares, advertising infrastructures, and retail spaces all at once. Influencer economies, powered by parasocial relationships, shape consumption through emotion, aspiration, and perceived authenticity. This is a very important yet often overlooked aspect: platforms do not exist in a vacuum; they host people who make content, sell, and generally engage in commerce with consumers.
Four Core Problems with Personalized Vulnerability
Much can be said about why the personalization of vulnerability may not yet be ripe as an approach. Below, I focus on four specific arguments to this end.
1. The Most Vulnerable Among Us
While situational vulnerability can be a good way to extend protections to more consumers and lower the accountability threshold, if everyone is vulnerable online, then nobody is. Some consumers will always need heightened protection: children, users with low digital literacy, or people with addictions. Personalization does not eliminate the need for categorical safeguards. Worse, focusing on individualized vulnerability can crowd out forms of positive discrimination designed to protect groups systematically. Even the supposedly stable concept of the ‘average consumer’ remains inconsistently interpreted by courts, raising doubts about how much more precise personalized benchmarks can realistically be in protecting those who truly need protection.
2. Legal Certainty
Law depends on predictability. Personalized vulnerability undermines this by fragmenting standards. If obligations vary across individuals based on opaque assessments, how can businesses know what compliance requires, or consumers know what protection they are entitled to? Vulnerability will end up being personalized by lower courts in Member States, a process that will bring with it considerable legal divergence.
3. Accuracy
Profiling consumer behavior is fundamentally unreliable. While some technologies, such as speech-to-text or content recognition, show measurable progress, predicting social outcomes remains deeply flawed. Advertising effectiveness, susceptibility, or future behavior cannot be inferred with sufficient accuracy to justify individualized legal consequences. Some research highlights how difficult it is even to study the effectiveness of personalization, given methodological limitations and restricted access to data. As a result, adopting the platform logic and profiling might not always yield a scientifically accurate depiction of vulnerability.
4. Technopragmatism
There is a risk of technopragmatism: an overemphasis on structural and architectural problems, treating interfaces as the gateways to consumer decisions. Yet the picture is much broader. People still lie, and that is a given with influencer marketing, where influencers blatantly fail to disclose the ads that bring them revenue. Emotion, such as parasociality, still drives consumption. Situational vulnerability risks diverting attention away from the human influence over consumers, which still relies on emotion, and towards the programmable platform architecture that merely acts as a medium of influence.
Extro
Personalizing vulnerability can be, in principle, an intellectually stimulating way of looking at digital markets and consumer justice in the modern economy, where architectures are mighty and platform organizations pursue as much profit as they can make. But interfaces and technologies are only part of the problem. In this same economy, it is also other people who directly lie, scam, defraud, or merely omit information in the hope of making money off consumers’ backs. Beyond that, the Court of Justice has already offered the ‘average consumer’ benchmark a makeover in Compass Banca, by lowering the threshold of just how diligent the average consumer ought to be. Perhaps that is a more practical starting point for redefining consumer benchmarks in both theory and practice.
Catalina Goanta is Associate Professor of Private Law and Technology at Utrecht University.