Secrecy Without Oversight: How Trade Secrets Could Undermine the AI Act’s Transparency Mandate


Author(s):

Ian Gauci
Managing Partner at GTG, Malta

Introduction

The European Union's Artificial Intelligence Act was meant to be the world's most ambitious attempt at regulating AI. Instead, it risks becoming a framework of symbolic compliance. At the heart of this conflict lies a structural contradiction: while the Act aspires to enforce transparency, it allows trade secrecy claims to obstruct effective regulatory scrutiny.

This analysis examines the disclosure failures embedded in Article 78, tracks the emerging jurisprudence correcting them, and proposes reforms that anchor regulatory access in confidential but enforceable terms. In doing so, it reframes the debate away from the false binary of ‘transparency versus secrecy’ toward a practical solution already embedded in European law.

Article 78’s Inbuilt Problem

Article 78 of the AI Act requires competent authorities to protect confidential information ‘in accordance with Union and national law on confidentiality,’ while stipulating that such protection ‘must not prevent or hinder effective enforcement.’ On its face, the text strikes a balance between proprietary protections and regulatory needs. In practice, however, it creates a structural imbalance.

The provision imposes a disproportionately high threshold on regulators. Disclosure is permitted only where deemed ‘strictly necessary,’ a standard not defined in the Act and interpreted narrowly in most procedural contexts. By contrast, companies can invoke trade secret protection with little evidentiary burden. The mere assertion of confidentiality often suffices to obstruct access or delay enforcement.

This asymmetry produces what might be called regulatory displacement: authorities tasked with enforcing transparency must first litigate their right to see relevant material, while providers redact documentation or decline disclosure altogether. Because the Act contains no built-in framework for resolving such disputes or compelling disclosure under confidentiality arrangements, the system invites defensive opacity, a procedural stance in which firms protect secrecy not on the legal merits but through inertia. The provision is equally silent on mechanisms such as secure-access environments, confidentiality protocols, or proportionality thresholds.

The CJEU Shifts the Balance

In CK v Dun & Bradstreet Austria GmbH (C-203/22), decided on 27 February 2025, the Court of Justice examined whether a credit scoring company could rely on trade secret protections to refuse disclosure of its scoring methodology under data protection law.

The case involved a controller's refusal to provide a data subject with intelligible access to the logic of an automated decision, citing commercial confidentiality. The Austrian Financial Market Authority had received similar denials in its supervisory functions. The CJEU decisively rejected this position and articulated three core principles that realign disclosure obligations under EU law.

First, trade secret protections cannot serve as categorical grounds for refusal. The Court made clear that the right to access under the GDPR and the Charter must be balanced against secrecy claims on a case-by-case basis and not overridden by them.

Second, only competent supervisory authorities or courts, not the controller, can determine whether information is lawfully protected. This prevents the emergence of de facto unilateral immunity from regulatory access.

Third, controllers must provide explanations that are ‘concise, transparent, intelligible and easily accessible,’ even if the technical nature of the system presents difficulties. The complexity of the algorithm cannot exempt the provider from this obligation.

The decision not only clarified the interpretation of GDPR rights in the context of algorithmic systems but also signalled a broader doctrinal shift in EU administrative law: a presumption in favour of transparency when fundamental rights are engaged. This principle directly contradicts the defensive opacity permitted under Article 78 of the AI Act.

Public vs Confidential Disclosure 

One of the most significant conceptual omissions in the AI Act is its failure to differentiate between public disclosure and confidential disclosure to regulators. These are not equivalents, nor are they interchangeable under EU law.

Public disclosure typically involves publication or disclosure to third parties without specific safeguards. It raises valid concerns about reverse engineering, competitive harm, or exploitation of proprietary systems. These risks often justify the existence of trade secret regimes and should not be dismissed lightly.

Confidential disclosure to authorities means only specific regulatory bodies can access sensitive information, under strict legal obligations to keep it secret from the public. This approach protects trade secrets while enabling meaningful oversight. Under Article 9 of Directive (EU) 2016/943, courts and regulatory authorities can examine commercially sensitive information through secure procedures that prevent public exposure while maintaining regulatory scrutiny.

European competition law has developed sophisticated confidential disclosure mechanisms, exemplified in RegioJet (C-57/21), where the CJEU recognised that national courts may apply procedural rules providing for restricted-access procedures and confidentiality rings to protect sensitive information during evidence disclosure, even where these are not explicitly provided for in EU directives. The CJEU’s landmark decision in Orde van Vlaamse Balies (C-694/20) likewise established comprehensive protections for confidential legal communications under Article 7 of the EU Charter of Fundamental Rights. German courts have refined these principles in competition investigations, as demonstrated in the 2024 decision of the Federal Court of Justice (KVB 69/23) on trade secrets disclosure, where the court applied a proportionality test requiring that disclosure be ‘suitable, necessary, and appropriate’ while weighing investigative needs against constitutionally protected commercial confidentiality.

Legal and Regulatory Arbitrage 

Legal obligations mean little when they can be evaded through jurisdictional manoeuvring. AI providers are increasingly structuring operations across jurisdictions to take advantage of regulatory differentials, and enforcement faces practical challenges in a complex global AI ecosystem, although the AI Act’s broad scope limits the opportunities for evasion.

Under Article 2, the Act applies to any provider ‘placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country,’ and extends to third-country providers ‘where the output produced by the AI system is used in the Union.’

Some firms attempt to minimize compliance burdens through operational restructuring. Strategies include distributing development across multiple entities, separating technical development from market deployment, and leveraging corporate structures to obscure the true ‘provider’ under Article 3's definitions. However, the Act's functional approach to defining roles (based on actual control and decision-making rather than formal corporate structures) limits the effectiveness of such fragmentation.

Trade secrecy claims do pose enforcement challenges. While Article 78 includes confidentiality protections for legitimate trade secrets, overclaiming proprietary status for basic system functionality can impede transparency obligations under Articles 13-15. The tension between disclosure requirements and intellectual property protection remains a practical enforcement challenge.

The enforcement architecture relies on national market surveillance authorities (Article 74) coordinated by the European AI Office, with substantial penalties under Article 99 (up to €35 million or 7% of global turnover). However, resource constraints at regulatory authorities and the technical complexity of modern AI systems create genuine oversight challenges that go beyond jurisdictional arbitrage.

The proposed AI Liability Directive offers little improvement, maintaining similar balancing language that subjects disclosure duties to ‘proportionality and confidentiality controls.’ Rather than strengthening transparency requirements, it entrenches the same approach that the CJEU is systematically dismantling.

Here again, the CJEU's logic in Dun & Bradstreet proves vital. The Court ruled that what matters is not where the model was developed, but where its effects are felt. If an EU resident is impacted by a high-risk AI system, EU law applies, regardless of infrastructure location or organisational shell games. This principle must become operational. Otherwise, firms will continue to ‘contract around’ oversight using layered corporate structures and offshore deployments.

What Needs Fixing

To realign the AI Act with EU constitutional and procedural standards, three core reforms are required.

  1. Adopt a presumption in favour of disclosure. This means regulators need not prove the necessity of access. Rather, firms should be required to demonstrate specific, proportionate harms arising from disclosure. This logic is embedded in Dun & Bradstreet and reinforced by Article 41 of the Charter on good administration.
  2. Strengthen confidential disclosure frameworks. The AI Act should mandate structured confidential disclosure protocols that include independent technical assessors with appropriate clearance levels, standardised procedures for in-camera review of sensitive algorithms, and clear criteria for determining when the public interest in transparency outweighs trade secrecy claims. Current provisions allow authorities to ‘request only data that is strictly necessary for the assessment of the risk posed by AI systems’ while protecting ‘intellectual property rights and confidential business information or trade secrets,’ but they lack detailed procedural frameworks for managing disclosure disputes or ensuring proportionate access. As Edwards and Veale have argued, transparency mechanisms must move beyond individual rights-based approaches toward systemic regulatory frameworks that can effectively govern complex algorithmic systems. Their critique of the GDPR’s ‘right to explanation’ as an inadequate remedy highlights the need for more sophisticated institutional mechanisms that can provide meaningful oversight without compromising legitimate business interests.
  3. Eliminate blanket trade secret exemptions. Article 78 should be amended to prohibit categorical refusals. Instead, it should require regulators and courts to conduct a proportionality assessment, weighing the disclosure interest against specific harms. This approach mirrors the logic of Directive (EU) 2016/943 and reflects Article 8 of the Charter on protection of personal data.

Beyond doctrinal reform, the AI Act requires operational mechanisms to make confidential disclosure work in practice. Drawing from successful models in competition law and financial regulation, three specific mechanisms should be implemented immediately.

  1. Establish secure regulatory access facilities. National supervisory authorities need technical infrastructure for confidential algorithm review. The European Securities and Markets Authority model provides a template: specialised secure facilities where designated technical experts can examine proprietary trading algorithms under strict confidentiality protocols. Similar facilities could be established for AI system audits, funded through industry levies as in financial services regulation.
  2. Create expert review panels. Complex AI systems require specialist knowledge that many regulatory authorities lack. The AI Act already provides for a scientific panel of independent experts to assist market surveillance authorities with their requests; this mechanism could be extended so that the panel delivers technical assessments of algorithmic systems under strict confidentiality obligations. This mirrors the approach in pharmaceutical regulation, where the European Medicines Agency maintains specialist advisory committees for complex therapeutic assessments.
  3. Implement procedural safeguards for rapid disclosure. The current Article 78 framework allows indefinite delay through confidentiality claims. Reform should include strict time limits: firms must substantiate confidentiality claims within 30 days, with automatic disclosure to regulatory authorities in secure environments if no adequate justification is provided. This approach reflects the urgency provisions in competition law investigations under Regulation 1/2003.

Conclusion

The AI Act was supposed to be Europe’s answer to algorithmic governance; instead, it has inadvertently created a system in which companies can hide behind trade secrecy claims while regulators struggle to do their jobs.

The disclosure framework in the AI Act is not neutral. As drafted, it privileges commercial secrecy by default and leaves regulators without the means to overcome resistance. This is a policy design failure, not just an enforcement gap. Yet the tools to correct it are already present in EU law. Dun & Bradstreet, Directive 2016/943, the GDPR, and the Charter all support a disclosure regime based on confidentiality, proportionality, and case-by-case balancing. 

The groundwork is there. The legal precedents exist. What is missing is the willingness to reflect them in the text of the AI Act itself. Europe has a chance to show the world what responsible AI governance looks like, but that means making transparency more than a nice idea buried in regulatory fine print.


Ian Gauci is the Managing Partner at GTG, Malta.