AI Innovation Competition as a Discovery Procedure: The Role and Limits of Competition Law
The normative debate on competition law in the context of artificial intelligence (AI) is notably asymmetrical: extensive concerns voiced by competition agencies and a growing body of critical scholarship on the one hand, and an almost non-existent enforcement practice on the other. Questions still largely confined to the theoretical realm include: can the existing competition-law framework—and, more broadly, related market regulation—address emerging competition and innovation concerns? Which practices should be considered restrictive of competition and innovation, and are existing theories of harm capable of identifying and responding to such restraints in this dynamic technological context? The paper by Josef Drexl and Daria Kim ‘AI Innovation Competition as a Discovery Procedure: The Role and Limits of Competition Law’ provides a novel analytical perspective on these issues.
Concerns related to strategic partnership agreements in AI innovation
Some of the most impressive recent advances in AI innovation, particularly in generative AI, have required—besides talent—a massive input of resources, namely, access to digital data, computing infrastructure, and capital. Access to AI models, particularly general-purpose models, is of critical importance for a flourishing innovation ecosystem. Occupying a central position within the AI value chain, these models constitute a key input for downstream innovation, including the development of use-case-specific systems and applications. In this context, concerns arise that restraints on access to such models may translate into restraints on competition and innovation. These concerns are further aggravated by structural features that render the AI sector prone to concentration and capable of entrenching dominant positions, including high fixed development costs, economies of scale and scope, strong network and platform effects, and data-driven feedback loops that are essential to model improvement.
Frontier AI models have been developed either in-house by large digital incumbents or by smaller AI developers that secure access to substantial financial and computational resources through strategic partnership agreements with those incumbents. Prominent examples include agreements between Microsoft and OpenAI, Microsoft and Mistral, as well as Google and Anthropic. These arrangements have been criticised on the grounds that they may amount to ‘mergers in disguise’ and entail competitive restrictions. However, a limited factual understanding of these dynamics—largely due to the confidentiality of contractual terms—hampers their theoretical and analytical appraisal.
Against this background, the paper explores the implications of strategic partnership agreements for dynamic competition in AI and focuses on the intersection between such partnerships and developments related to open-source (OS) practices in AI. The latter are of particular interest given the tendency to presume the pro-competitive effects of OS-related practices, alongside mounting criticism and increasing ambiguity (or rather context-dependency) of their normative implications. The central question examined is whether a departure from OS licensing of AI models—or, more specifically, conduct that diminishes such licensing in the context of strategic partnership agreements—can be considered a competition-law infringement.
Intersection between OS practices and AI innovation
To contextualise the analysis, the paper maps the interactions between OS practices and AI innovation. OS licensing of AI models is often associated with enhanced research, innovation, market entry, and the diffusion of AI technologies, and the EU legislature has explicitly recognised these potential benefits by exempting models ‘released under a free and open-source licence’ from certain requirements under the EU AI Act. From this perspective, it may be tempting to regard any limitation on OS licensing as a restriction on innovation activity and dynamic competition. However, this view is overly simplistic.
Drawing on the available evidence on OS practices in AI development and diffusion, the paper provides a systematic overview of their implications for the rate, direction, and quality of AI innovation, demonstrating that these effects depend on both the context and the specific modalities of model ‘openness’. Consistent with existing accounts, the analysis treats OS as a multidimensional rather than a binary concept and emphasises that what ultimately matters for both innovation and competition-law assessment is what particular subject matter is made available, and under which conditions.
The analysis thus highlights that the release of AI models under OS terms cannot be equated with increased innovation. On the one hand, in certain circumstances, control over licensing conditions may constitute a legitimate source of competitive advantage in innovation. On the other hand, existing research points to growing concerns about ‘open-washing’ and the strategic (mis)use of openness as a means of securing competitive advantage within the AI ecosystem. The resulting landscape is therefore more nuanced than the assumption that OS licensing necessarily fosters greater innovation.
Accordingly, the paper argues that competition-law enforcement cannot rely on fixed presumptions regarding the innovation effects of any particular licensing modality, including OS licensing, but must instead adopt a highly context-sensitive approach. It further calls for placing potential harm to dynamic competition at the centre of competition-law analysis and points to the need for a more specific analytical framework for assessing such harm.
Competition as a Discovery Procedure as a Normative Orientation
As an alternative analytical approach, the paper suggests anchoring competition-law analysis in the notion of innovation competition as a discovery procedure. Originating in Hayek’s understanding of competition as a mechanism for generating knowledge, this concept has become influential in evolutionary economic thinking about innovation. The paper proposes this conceptual framework as a foundation for competition-law enforcement aimed at protecting innovation competition. In the specific context of AI innovation, including strategic partnerships in AI, this implies that competition law should safeguard AI developers’ freedom to choose their preferred licensing models and protect them against undue restrictions in cooperation agreements that may undermine the ability of dependent firms to design and pursue their own innovation strategies.
Against this background, the analysis identifies certain shortcomings in the current approach to the assessment of strategic partnerships in AI, drawing on decisions of the UK Competition and Markets Authority in the Microsoft/OpenAI and Microsoft/Mistral AI cases. In these decisions, the competition authority applied merger-control rules and allowed the transactions to proceed on the basis that the requirements for a concentration were not met. Having critically examined these decisions, the paper argues that the traditional competition-law approach is limited in its capacity to address concerns that collaborations between large technology firms and smaller developers of AI models may restrain innovation competition, particularly when viewed through the lens of dynamic competition as a discovery procedure.
Finally, given that traditional competition-enforcement tools may often come too late, the paper argues for a reform of the EU Digital Markets Act and, potentially, the adoption of a new competition-law instrument designed to account for the importance of maintaining and promoting free choice and access in AI-related markets. Such a tool should place greater emphasis on safeguarding dynamic competition and addressing the risk that powerful market players may exploit structural dependencies and leverage their position to restrict smaller firms’ freedom to design and pursue their innovation strategies.
The authors’ paper can be found here.
Josef Drexl is the Managing Director of the Max Planck Institute for Innovation and Competition.
Daria Kim is a Senior Research Fellow at the Max Planck Institute for Innovation and Competition.