
Regulating Gatekeeper AI and Data: Transparency, Access, and Fairness Under the DMA, the GDPR, and Beyond

Author(s)

Philipp Hacker
Professor of Law and Ethics of the Digital Society, European New School of Digital Studies
Johann Cordes
PhD Researcher at the European University Viadrina in Frankfurt
Janina Rochon
PhD Researcher at the European University Viadrina in Frankfurt

Artificial intelligence (AI) is becoming increasingly common in business and administration, and the EU, jointly with China and Canada, is leading the race to regulate it. However, in our recently published paper, we argue that unless the proposed AI Act's list of high-risk applications is significantly expanded before its enactment, the most comprehensive and effective rules for AI applications in the digital economy will not be found in the AI Act, but rather in the recently enacted Digital Markets Act (DMA). The DMA will not only create more competitive opportunities on, alongside, and between large online platforms, but will also decisively shape the way gatekeepers and their competitors deal with AI. Most big tech companies, however, are not affected by the AI Act in their core business: the digital economy remains largely outside the scope of the AI Act's high-risk provisions.

Against this background, our paper explores the impact of the DMA and related EU acts on AI models, including generative and foundation models, and their underlying data, focusing on four key areas: fair rankings, the regulation of AI training data, access rules, and disclosure requirements.

First, the DMA introduces new rules for fair rankings. Rankings are, as is well known, at the core of the business model of big tech companies such as Google, Amazon, and Microsoft. One of our paper's findings is that the concept of fairness under the DMA goes beyond the traditional categories of non-discrimination law and must instead be refined by incorporating principles of competition law and the FRAND criteria (fair, reasonable, and non-discriminatory terms): Article 6(5) of the DMA consolidates the prohibition of self-preferencing known from competition law and the transparency rules for rankings that already exist in other EU law instruments. The inclusion of the FRAND criteria, however, is new and groundbreaking. In our view, these criteria require gatekeepers to justify any differentiation between comparable products in a ranking. From a technical standpoint, techniques developed in computer science research on algorithmic fairness can be used to operationalize this requirement. Adapting this framework to the DMA is challenging, however, because, unlike traditional anti-discrimination law with its closed list of protected attributes, the number of potentially relevant attribute combinations is virtually limitless; compliance requirements must take this into account. We show how a consistent understanding of the principle of non-discrimination in both conventional non-discrimination law and competition law can be established by drawing on the jurisprudence of the Court of Justice of the European Union (CJEU).
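To illustrate how such algorithmic-fairness techniques could operationalize the duty to justify differentiations between comparable products, consider the following minimal sketch in Python. It checks whether a gatekeeper's own items receive systematically more exposure than third-party items of comparable relevance. The data model, the logarithmic exposure curve, and the comparability threshold are purely illustrative assumptions on our part, not requirements of the DMA.

```python
# Minimal sketch: measuring self-preferencing in a ranking via exposure.
# The item schema, exposure model, and 0.05 comparability band are
# illustrative assumptions, not anything prescribed by the DMA.
import math

def exposure(rank: int) -> float:
    """Position bias: items further down the list receive less attention."""
    return 1.0 / math.log2(rank + 1)

def self_preference_gap(ranking):
    """Average exposure advantage of gatekeeper items over third-party
    items of comparable relevance (here: relevance within 0.05)."""
    own, third = [], []
    for rank, item in enumerate(ranking, start=1):
        (own if item["is_gatekeeper"] else third).append(
            (item["relevance"], exposure(rank))
        )
    gaps = []
    for rel_o, exp_o in own:
        peers = [e for rel_t, e in third if abs(rel_o - rel_t) <= 0.05]
        if peers:
            gaps.append(exp_o - sum(peers) / len(peers))
    return sum(gaps) / len(gaps) if gaps else 0.0

ranking = [
    {"id": "own-1",  "relevance": 0.90, "is_gatekeeper": True},
    {"id": "rival1", "relevance": 0.91, "is_gatekeeper": False},
    {"id": "rival2", "relevance": 0.89, "is_gatekeeper": False},
]
print(self_preference_gap(ranking))  # > 0: comparable rivals rank lower
```

A persistently positive gap would indicate that rivals of comparable relevance sit systematically lower in the ranking, which is exactly the kind of differentiation that, on our reading of Article 6(5), would call for justification.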

Second, the DMA significantly restricts gatekeepers' collection and use of data, in particular for AI training. Article 5(2) of the DMA restricts gatekeepers' ability to combine and cross-use personal end user data (PED) across services. However, the practical impact of these rules is limited by exemptions that allow such processing where GDPR-compliant consent has been obtained from the end user. The impact of Article 6(2) of the DMA, in turn, will be stronger: it is not limited to personal data and cannot be waived by consent or on any other grounds. The provision prohibits gatekeepers from using non-public business user data for, inter alia, AI-based inferences. Overall, the thrust here is diametrically opposed to that of Article 10 of the AI Act: while the AI Act seeks to foster high-performing AI, the DMA seeks to prevent further improvements to gatekeepers' models, given the specific competitive setting in which gatekeepers operate. This is all the more notable in light of the tremendous amounts of diverse data on which large AI models are trained.
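The asymmetry between the two provisions can be made concrete with a toy training-data filter, sketched below. The record schema and boolean flags are hypothetical and drastically simplify the underlying legal tests; the point is merely that Article 6(2) operates as an absolute bar, whereas the Article 5(2) restriction can be lifted by consent.

```python
# Toy sketch: provenance gating of AI training data under the DMA.
# All field names and the boolean encoding of the legal tests are our
# own illustrative simplifications, not an operative compliance tool.
from dataclasses import dataclass

@dataclass
class Record:
    source_service: str          # core platform service that produced the data
    is_personal: bool            # personal end user data (PED)?
    has_gdpr_consent: bool       # valid consent covering cross-service use?
    is_nonpublic_business: bool  # non-public data of a business user?

def usable_for_training(r: Record, training_service: str) -> bool:
    # Art. 6(2) DMA: non-public business user data is off-limits for
    # uses such as model training -- consent cannot cure this.
    if r.is_nonpublic_business:
        return False
    # Art. 5(2) DMA: personal data may not be cross-used between services
    # unless GDPR-compliant consent covers that use.
    if r.is_personal and r.source_service != training_service:
        return r.has_gdpr_consent
    return True

corpus = [
    Record("search", True, False, False),       # cross-service PED, no consent
    Record("ads", True, True, False),           # same-service PED
    Record("marketplace", False, False, True),  # seller data: always excluded
]
train_set = [r for r in corpus if usable_for_training(r, "ads")]
print(len(train_set))  # 1: only the ads record survives
```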

Third, access rights are created for business users to enable them to develop high-performing AI models themselves: subject to restrictions concerning personal data, Article 6(10) of the DMA grants business users and authorized third parties free access to the data generated through the use of core platform services. Further, Article 6(11) of the DMA allows other search engine operators to access the ranking, query, click, and view data of gatekeeper search engines, on FRAND terms, to optimize their own AI models. However, personal data within these data sets must be offered in anonymized form, which creates implementation challenges for gatekeepers. We argue that privacy-preserving machine learning strategies are likely to become more relevant as a result, but that gatekeepers may initially provide heavily altered and anonymized data sets of limited practical use.
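As a rough illustration of what such anonymization could involve, the sketch below suppresses rare search queries (a k-anonymity-style safeguard) and perturbs the remaining counts with Laplace noise, a standard building block of differential privacy. The threshold and noise scale are arbitrary placeholders; anonymizing real search data would require a far more careful analysis.

```python
# Illustrative sketch: anonymizing a search query log before sharing.
# k and epsilon are placeholder values, not recommendations.
import random
from collections import Counter

def anonymize_query_log(queries, k=50, epsilon=1.0):
    """Suppress rare queries and add Laplace noise to released counts."""
    counts = Counter(queries)
    released = {}
    for query, count in counts.items():
        if count < k:  # rare queries can identify individual users
            continue
        # Difference of two exponentials yields Laplace(0, 1/epsilon) noise.
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        released[query] = max(0, round(count + noise))
    return released

log = ["weather berlin"] * 120 + ["john doe diagnosis"] * 3
print(anonymize_query_log(log))  # the rare, identifying query is suppressed
```

The tension the paper identifies is visible even here: the stronger the suppression and noise, the safer the release, but the less useful the data set becomes for training competitors' models.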

Fourth, with Article 5(9) and (10), the DMA harnesses information obligations to reduce the information asymmetry between gatekeepers and their business users, especially in the area of advertising. These provisions indirectly compel gatekeepers and ad tech networks to use explainable AI (XAI) systems, and they create an obligation to deliver local explanations, that is, explanations for each individual decision. While this can be burdensome, especially for ‘black box’ systems such as artificial neural networks and given the peculiarities of generative AI models, we consider it a proportionate measure.
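Local explanations do not presuppose any particular XAI library. The model-agnostic sketch below attributes a single, hypothetical ad-pricing decision to its input features by perturbing one feature at a time against a baseline; the scoring function and feature names are invented for illustration only.

```python
# Sketch of a perturbation-based local explanation for one decision.
# ad_price_model is a hypothetical stand-in for an opaque scoring system.
def ad_price_model(features):
    return (2.0 * features["ctr_estimate"]
            + 0.5 * features["advertiser_bid"]
            + 1.5 * features["query_competition"])

def local_explanation(model, instance, baseline):
    """Attribute the prediction to each feature by replacing it with a
    baseline value and measuring the change in the model output."""
    full = model(instance)
    return {
        name: full - model(dict(instance, **{name: baseline[name]}))
        for name in instance
    }

instance = {"ctr_estimate": 0.3, "advertiser_bid": 1.2, "query_competition": 0.8}
baseline = {name: 0.0 for name in instance}
print(local_explanation(ad_price_model, instance, baseline))
# {'ctr_estimate': 0.6, 'advertiser_bid': 0.6, 'query_competition': 1.2}
```

For a linear stand-in model these attributions are exact; for a real neural network, perturbation-based methods of this kind yield only local approximations, which is precisely why delivering an explanation for each individual decision can be burdensome.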

Finally, we identify three pivotal areas that demand policy revisions and offer suggestions for each. First, the AI Act should provide a carefully defined transparency framework, one that engages with the various technical strategies for implementing explainable AI. The key question is how to design disclosures that respect trade secrets while limiting opportunities to manipulate rankings. Here, feature salience might take centre stage: it strengthens accountability, increases deterrence and compliance pressure, and improves the chances of contesting established ranking systems. Second, regarding data access, we show that the current legal framework is limited. Users’ rights need broadening, and data protection principles must be balanced against the interests of the gatekeepers’ competitors and of society at large. The objective should be to cultivate meaningful data sets that enable innovative products capable of challenging entrenched digital monopolies. Third, an equilibrium needs to be found between rankings’ primary economic role, namely selecting items and thereby facilitating the fulfilment of consumer preferences, and the overarching competition interest in preventing winner-takes-all markets. We suggest mandatory ranking shuffling mechanisms, sketched below, to ensure that newcomers are not perpetually disadvantaged by popularity-based rankings favouring incumbents.
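One way such a shuffling mechanism could work is to sample rankings from a Plackett-Luce model over relevance scores rather than sorting deterministically, so that lower-scored newcomers occasionally receive top exposure. In the sketch below, the temperature parameter is an illustrative policy knob controlling how much exploration the mechanism injects; nothing in the DMA prescribes this particular design.

```python
# Sketch of a ranking-shuffling mechanism via Plackett-Luce sampling.
# 'temperature' is an illustrative policy parameter: higher values give
# newcomers more top-of-ranking exposure at the cost of ranking quality.
import math
import random

def shuffled_ranking(items, temperature=0.3):
    """Sample a full ranking; items is a list of (name, relevance) pairs."""
    remaining = list(items)
    ranking = []
    while remaining:
        weights = [math.exp(score / temperature) for _, score in remaining]
        chosen = random.choices(remaining, weights=weights, k=1)[0]
        ranking.append(chosen[0])
        remaining.remove(chosen)
    return ranking

items = [("incumbent", 0.9), ("challenger", 0.7), ("newcomer", 0.6)]
print(shuffled_ranking(items))  # the newcomer sometimes tops the ranking
```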

Our paper highlights the DMA’s attempt to bridge a variety of economic and non-economic discourses and to combine crucial societal interests, which necessitates delicate balancing exercises at many points. These regulatory concerns spotlight a host of nascent issues at the nexus of law and computer science. The search for the right balance between transparency, performance, and fairness in e-commerce rankings, a cornerstone of competition in the burgeoning digital economy, has only become more pressing.

Philipp Hacker is a Professor of Law and Ethics of the Digital Society at the European University Viadrina in Frankfurt.

Johann Cordes is a PhD Researcher at the European University Viadrina in Frankfurt.

Janina Rochon is a PhD Researcher at the European University Viadrina in Frankfurt. 
