Faculty of Law Blogs / University of Oxford

Artificial Intelligence and Unfair Competition – Unveiling an Underestimated Building Block of the AI Regulation Landscape

Author(s)

Stefan Scheuerer
Civil Servant in Germany, formerly Junior Research Fellow at the Max Planck Institute for Innovation and Competition, Munich


For quite some time, artificial intelligence (in the following: AI) has been at the centre of attention of IP and competition law scholars. Yet, the role unfair competition law (in the following: UCL) can and should play in the regulatory landscape relating to AI has so far largely been neglected. One reason for this is that UCL is an ambiguous and complex legal field whose design varies widely across EU member states and worldwide. To fill this analytical gap, my article examines to what extent general principles proclaimed as guiding paradigms of AI regulation are reflected in sub-equivalents stemming from the realm of UCL, thus showing UCL's potential to contribute to realising these principles. Prominent legal problems raised by AI are reconsidered in my article from a UCL perspective, showing that this perspective may complement, or in some cases even substitute for, traditional approaches.

To start with, one can reflect on a potential connecting line between the general desire for ‘Ethical AI’ and the notion of ‘business ethics’ often, or at least historically, associated with UCL. Such an alignment does not seem far-fetched in commercial contexts. Yet, more importantly, a general demystification of the ‘ethics’ narrative appears advisable for the purposes of legal discourse. Irrespective of their metaphysical provenance, all the issues at stake ultimately come down to balancing the legally relevant interests of all market participants.

Then, the most obvious, yet at the same time most dubious, potential ‘common ground’ of AI and UCL is the ‘fairness’ principle. Although in the AI debate ‘fairness’ is mostly understood as referring to the principle of equality, whereas the ‘fairness’ of UCL is teleologically entrenched in safeguarding competition or competition-related interests, there may well be overlaps. Both concepts share an inherent openness and vagueness, and AI can in many ways be (mis-)used to the detriment of competition, including through discriminatory practices of competitive relevance. While this is not the place to delve deeper into the long-standing debate about possible meanings of ‘fairness’, one aspect is especially worth highlighting: the regulatory complementarity of UCL fairness rules to antitrust law. If one follows a ‘modern’ understanding of UCL, which puts safeguarding competition as an institution at the centre of teleological attention, its general clauses can be used to address AI-induced market failures outside the realm of antitrust (especially below the dominance threshold).

Another core mantra of AI regulation is transparency. Market transparency, as an important subset thereof, is traditionally safeguarded by UCL, which prohibits misleading commercial practices. In the AI context, such practices may involve ‘concealed’ AI-based personalisation, non-compliance with self-proclaimed ‘corporate digital responsibility’ codes, or marketing AI-generated ‘works’ as human-made, thereby endangering market solutions to the ‘AI and IP’ debate which rely on the notion that consumers might value human-made works over AI-generated ones.

Ensuring accountability of companies for damages ‘autonomously’ caused by their AI is the most ‘classic’ legal AI problem. The UCL concept of ‘liability for breaches of duty of care in competition’ may doctrinally inspire the construction of an adequate, holistic framework for such ‘attribution issues’.

Furthermore, preserving human autonomy vis-à-vis the threat of AI ultimately replacing humans lies at the core of AI regulation principles. Autonomous consumer choice, as an important sub-aspect thereof, is endangered by the proliferation of preference-tailored supply systems that capture consumers in ‘filter bubbles’. Transparency requirements of UCL may mitigate this tension. Yet, (even) more problematic is the use of AI by consumers: ‘smart assistants’ taking over most or all relevant decisions give rise to the anthropological risk that consumers are deprived of their very capacity to act as rational market agents. UCL, with its rich experience on matters of consumer choice, may provide guidelines for policymakers to assess how much decision-making power can reasonably be delegated to smart assistants and how much cannot, particularly in the course of implementing the relevant parameters ‘by design’.

Also, UCL can act as an (additional) enforcement pillar for a variety of AI-relevant market conduct rules, especially rules pertaining to non-discrimination, protection of personal data, and cybersecurity, via the doctrine of ‘breach of statutory duty’. Such enforcement, relying on competitors and consumer associations, is quick and flexible and thus especially fit for AI market dynamics. In substantive terms, ‘breach of statutory duty’ may play an active role in the discussion on a growing convergence of neighbouring areas of law relating to the protection of consumer interests in the digital economy.

Lastly, UCL can contribute to the AI innovation ecosystem. First, certain data access desires, both of competitors and of consumers, can be accommodated by UCL in complementarity to solutions based on antitrust law or sector-specific legislation. Second, UCL, construed in modern, economic-functional terms, can serve as a market-sensitive investment protection regime enriching the ‘AI and IP’ discourse. Both symbolising and realising the paradigm of a flexible approach to the protection of intangible goods, UCL provides an alternative to the introduction of new and possibly dysfunctional IP rights in instances of uncertainty about market failure. Where such market failure does materialise, a ‘purely economic’ UCL approach could gain relevance especially for ‘AI-generated’ intangible goods, for which traditional anthropocentric IP rationales no longer hold true. Third, UCL will inform the rules governing trade secret protection. The recently harmonised European regime is widely praised as adequately balancing the needs of exclusivity and access precisely through its reliance on flexible UCL standards.

On a final note, AI might inversely give impulses for the doctrinal advancement of UCL, which, in light of its characteristic flexibility, displays an extraordinary responsiveness to societal, economic and technological changes. Such changes are, now and for the foreseeable future, significantly driven by AI.

Stefan Scheuerer is a doctoral student and Junior Research Fellow at the Max Planck Institute for Innovation and Competition in Munich, Germany.
