AI Agents as Enablers of Personalized Law: Towards Agentic Disclosures?
This post is part of a special series of posts based on contributions to a conference on ‘The Law between Singularity and Equality’ that took place in Berlin on 31 October/1 November 2025.
Agentic AI is widely viewed as the next major stage in the evolution of AI. Unlike chatbots, AI agents can autonomously browse the web, interact with digital systems, and execute multi-step tasks on behalf of users. As these systems mature, they are likely to transform digital markets and reshape how consumers interact with digital services. In particular, consumers may soon delegate a growing share of their everyday purchasing decisions—from product search and comparison to contract formation—to AI agents acting on their behalf.
At the same time, a growing scholarly debate addresses the legal implications of agentic AI. Much of the existing literature focuses on agency law, liability, competition, or the broader economic consequences of agentic AI. What has so far received comparatively little attention, however, is the possibility that AI agents could function as enablers of Personalized Law. More specifically, AI agents may serve as techno-legal intermediaries capable of tailoring legally relevant information to the needs and preferences of individual users. This could give a new practical twist to the theoretical debate on Personalized Law.
Personalized Law and the limits of standardized disclosures
The core idea of Personalized Law (sometimes referred to as Granular Law) is to replace uniform legal standards with rules that vary across individuals. Instead of relying on one-size-fits-all benchmarks such as the ‘reasonable person’ or the ‘average consumer’, personalized law seeks to tailor legal norms and obligations to individual characteristics. The advantages and drawbacks of this approach have been extensively debated in the literature, including on this blog.
Suggested applications of Personalized Law range from tort and contract law to inheritance law. One area where personalization appears particularly promising is consumer law, and especially mandatory disclosures. Pre-contractual information duties are among the most widely used regulatory tools in consumer law, particularly in the European Union. Their underlying rationale is straightforward: providing consumers with relevant information should enable more informed and welfare-enhancing decisions.
Yet this information paradigm has come under sustained criticism. Behavioural research has repeatedly shown that the ever-growing volume of mandated disclosures often leads to information overload. Instead of improving decision-making, excessive information may overwhelm consumers and reduce decision quality. In addition to this quantitative problem, there is also a qualitative one: much of the standardized information provided is often of little relevance for the individual consumer’s specific situation or preferences.
From smart disclosures to agentic disclosures
In response to these shortcomings, some authors have suggested rescuing the information model by promoting ‘smart disclosures’ informed by behavioural insights. A further, more radical approach is to replace standardized disclosures with personalized ones, thereby reducing the volume of information while increasing its relevance.
The rise of agentic AI introduces a new dimension to this debate. AI agents could significantly mitigate the problem of information overload, as their capacity to process information is not constrained by human cognitive limitations. While it remains an open question whether AI agents exhibit human-like biases, it is clear that they can process far larger quantities of information than human consumers. As a result, one of the central objections to the information model, ie the problem of information overload, loses much of its force when decisions are mediated by AI agents.
Moreover, AI agents are not merely capable of processing information. They can also filter and prioritize information items based on relevance. In this sense, AI agents may achieve the functional objectives of personalized disclosures, but through a different institutional design. So far, the proponents of personalized disclosures rely on centralized supply-side personalization by traders or regulators. In the future, agentic AI could enable decentralized demand-side personalization by consumers and their AI agents.
Disclosures for AI agents: More is more?
What does this imply for the future of pre-contractual information requirements? Two broad policy responses can be envisaged.
One option would be to reduce or relax disclosure obligations in transactions involving AI agents. The argument would be that AI agents acting as algorithmic consumers require less regulatory protection because they can independently gather relevant information from a wide range of sources, such as reviews, comparison websites, expert reports, or online media. From this perspective, traditional disclosures could become less necessary once consumers rely on sophisticated AI assistants.
However, this approach would entail a significant shift of risk towards consumers. Whether a consumer is able to make an informed decision would largely depend on the quality and capabilities of the specific AI agent they use. This raises concerns about inequality, opacity, and accountability, particularly if consumers lack the expertise to assess the performance of their agents.
A second, and arguably preferable, response would be to move in the opposite direction: instead of reducing disclosure obligations, traders could be required to provide more detailed and structured information—specifically designed for machine consumption. While such an approach would be counterproductive for human consumers, it could be highly effective when disclosures are processed by AI agents. Agents could analyse detailed information and extract only those elements that are relevant for the consumer’s preferences and decision context.
Optimizing disclosures for machines
It appears that EU law has already begun to adapt the information paradigm to the machine age. Recent legislative acts, such as the Digital Services Act (DSA), require that key information be provided in an ‘easily accessible and machine-readable format’ (Article 14(1) DSA). An even more striking example is the European digital product passport (DPP) introduced by the Ecodesign for Sustainable Products Regulation. The DPP functions as a digital identity card for products, containing detailed information on materials, manufacturing processes, sustainability features, and repair or recycling options. It will become mandatory for various product categories between 2026 and 2030.
For human consumers, the complexity of such detailed information may be prohibitive. For AI agents, by contrast, it is easily manageable. Agents can process this data and translate it into actionable recommendations aligned with a consumer’s sustainability preferences or consumption habits. In this way, disclosures that are effectively unusable for humans may become highly valuable once filtered through AI agents.
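To make the mechanism concrete, the following is a minimal sketch of how such demand-side filtering could work. The JSON structure and field names are invented for illustration only; they are not taken from the Ecodesign Regulation or any actual digital product passport schema.

```python
import json

# Hypothetical machine-readable product disclosure, loosely modelled on the
# kind of information a digital product passport might contain.
# All field names are illustrative, not drawn from any legal instrument.
DISCLOSURE_JSON = """
{
  "product": "Cordless Drill X200",
  "price_eur": 129.99,
  "materials": {"recycled_content_pct": 42, "hazardous_substances": []},
  "repair": {"spare_parts_available_years": 7, "repairability_score": 8.1},
  "warranty_months": 36,
  "energy": {"battery_cycles": 1200}
}
"""

def filter_disclosure(disclosure: dict, preferences: list[str]) -> dict:
    """Keep only the disclosure items matching the consumer's stated
    preference keys, mimicking an agent's relevance filtering."""
    return {key: value for key, value in disclosure.items()
            if key in preferences}

disclosure = json.loads(DISCLOSURE_JSON)

# A consumer who cares about sustainability and repairability sees only
# the two relevant items, not the full disclosure:
relevant = filter_disclosure(disclosure, ["materials", "repair"])
print(relevant)
```

The point of the sketch is institutional, not technical: the full disclosure can be arbitrarily detailed because the relevance filtering happens on the consumer’s side, inside the agent, rather than being fixed centrally by the trader or regulator.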
AI agents as techno-legal intermediaries
In a sense, AI agents can be understood as ‘digital twins’ of consumers. Over time, they can learn users’ preferences, habits, and informational needs. Acting as co-pilots—or in some cases even auto-pilots—in consumer decision-making, they are uniquely positioned to contextualize legally relevant information in ways that static and standardized disclosures cannot.
From this perspective, the rise of agentic AI may make it necessary to rethink the design of disclosure obligations. Instead of focusing exclusively on what human consumers can reasonably process, lawmakers may increasingly need to consider what information AI agents require in order to act effectively on consumers’ behalf. This shift points toward a new model of ‘agentic disclosures’, where legal information is designed primarily for machine processing, with AI agents translating it into personalized insights. The upcoming Digital Fairness Act might be a good opportunity to make consumer law ‘agent-ready’.
Readers can find the complete Law between Singularity and Equality series on the OBLB here.
This post is based on the author’s articles ‘Consumer Law for AI Agents’ and ‘Enabling Innovation and Protecting Consumers in the Agentic Economy: Why the Digital Fairness Act Should Regulate Agentic AI’ available here and here.
Christoph Busch is Professor of European Business Law at the University of Osnabrück and Affiliated Fellow at the Yale Information Society Project.