Faculty of law blogs / UNIVERSITY OF OXFORD

Why ‘Development’ Is the Wrong Regulatory Concept for African AI Governance


Author(s):

Samuel W Ugwumba
Senior Visiting Scholar, Max Planck Institute for Innovation; Competition Academic Fellow, Católica Global School of Law

African countries are building AI governance frameworks at pace. Zimbabwe launched its National AI Strategy on 14 March 2026; Ghana’s strategy received cabinet approval in February 2026; Nigeria, Kenya, and Rwanda adopted strategies in 2023–25. Each organises its governance architecture around ‘development’. This post argues that development is structurally incapable of performing the regulatory work assigned to it, and that this incapacity enables a form of regulatory capture in which the companies subject to governance co-author the frameworks that govern them.

Development as a regulatory concept

The AU Continental AI Strategy (July 2024) commits to an ‘Africa-centric, development-oriented and inclusive approach.’ In regulatory terms, ‘development’ is asked to do what a governing concept must: classify phenomena, guide institutional design, and provide criteria against which outcomes can be assessed. It fails at all three.

On classification, the strategy calls on international partners to ‘support Africa’s effort to accelerate AI use for solving its development challenges’ while warning that ‘external influence from AI technologies developed outside Africa may undermine national sovereignty’. On data, it acknowledges that ‘most of the data on the African population is now available to a handful of companies’ while promoting policies that ‘facilitate access and sharing of non-personal data for AI’. On labour, it is silent: no mention of content moderation workers, data labelling conditions, or psychological harm, even though more than 140 Kenyan workers hired by Sama developed PTSD from labelling traumatic content for OpenAI at $1.32 an hour. The concept validates each side of every tension it encounters, providing no basis on which to regulate.

Zimbabwe’s new strategy reproduces the pattern through its vision: ‘inclusive and sustainable AI for Development in Southern Africa’. It simultaneously promises ‘computational sovereignty’ and plans ‘strategic technology alliances’ with foreign partners. It frames AI as a vehicle for ‘shared prosperity’ while acknowledging Zimbabwe ranks 149th of 193 countries in the UN E-Government Development Index. Development accommodates both the aspiration and the constraint without specifying how the gap is to be governed.

How development discourse enables regulatory capture

The most significant regulatory consequence is visible in Nigeria’s National AI Strategy (April 2025). The strategy calls AI a ‘developmental equaliser’ and promises ‘locally developed AI solutions’ that ‘rebalance power structures.’ It simultaneously states that it was ‘guided’ by Google’s ‘AI Sprinters’ corporate report, which recommends ‘100% adoption of cloud-first policies’, a recommendation that directly benefits Google Cloud. The workshop at which the strategy was drafted was co-created with Meta, Google, and Microsoft, and Google separately committed a $2.1 million fund to support the strategy’s implementation.

In any other regulatory context, this arrangement would be identified as a conflict of interest: the regulated entities co-authoring the framework that governs them. Development discourse is what prevents this identification. Because AI is framed as a ‘developmental equaliser,’ Google’s involvement becomes legible as development assistance rather than market capture. The strategy’s ecosystem map classifies Google, Microsoft, Intel, and Nvidia as ‘Platform Enablers’ and ‘Support Systems’; it contains no category for ‘regulated entities.’ The concept converts a structural conflict into a developmental partnership, and the regulatory question disappears.

This is not a Nigerian anomaly. Kenya’s AI Strategy positions the country as a ‘leading hub for technology and innovation’ while Kenyan data labellers earn $1.32–$2 per hour and have petitioned parliament over exploitative conditions. And Meta, after facing lawsuits in Nairobi, secretly relocated its content moderation operations to Accra, a move that amounts to regulatory arbitrage. Ghana’s strategy frames data as a ‘national asset’ and AI as a vehicle for ‘inclusive social and economic transformation,’ while critical analysis notes it still lacks a robust national data governance framework. In each case, development absorbs the contradiction: it frames dependency on the companies extracting from the continent as a developmental partnership with them.

The IP precedent

This is not the first time development has failed as a regulatory concept. Consider intellectual property. When WIPO evaluated its Development Agenda after nearly two decades, independent reviewers found that impact ‘could not be determined.’ As the South Centre documented, ‘conflicting interpretations of development’ prevented agreement on what success meant. South Africa’s Copyright Amendment Bill, deadlocked since 2015, embodies the same failure: the bill’s fair-use provisions were simultaneously demanded by ‘development’ (access to knowledge for visually impaired South Africans) and blocked by it (the USTR threatened to revoke $2.38 billion in trade preferences—themselves a development instrument—because South Africa adopted fair-use provisions modelled on US law). Development was simultaneously the rationale for reform, the instrument of punishment, and the framework that could not classify either.

Regulatory implications

The comparative AI governance literature has examined regulatory fragmentation and the strategic dynamics of developer versus deployer states. African AI strategies introduce a third dynamic: governance frameworks whose organising concept structurally prevents them from regulating the entities they are designed to govern. This is not regulatory capture through lobbying or political influence. It is regulatory incapacity through conceptual design.

The alternative is not to abandon development as a diplomatic aspiration. But in the spaces where AI governance is actually designed—where data rules are written, procurement decisions made, and labour protections legislated—development must give way to disaggregated, falsifiable regulatory objectives. ‘Minimum wage parity between outsourced AI workers and equivalent domestic roles’ is a regulatory standard. ‘Development-oriented AI labour governance’ is not. ‘Enforceable data localisation requirements’ create testable obligations. ‘Development-oriented data governance’ creates none. As African countries from Zimbabwe to Nigeria to Ghana build their AI governance architectures in 2026, the question is whether those architectures will be built on a concept that can actually say no, or one that will permit more arrangements like Sama and Meta.

Samuel W. Ugwumba is a Senior Visiting Scholar at the Max Planck Institute for Innovation and Competition and an Academic Fellow at the Católica Global School of Law.