
Wrong Way Round: Why Big AI Should Be Paying Universities, Not Billing Them

Universities are paying Big Tech AI for the privilege of acting as its global showroom. In this post, I argue that this is the wrong way round. When institutions like Oxford, Yale or Columbia sign university‑wide deals for generative AI tools, they are not just customers buying software. They are providing reputation, legitimacy and an extraordinarily rich testbed that helps firms such as OpenAI, Microsoft and Google capture a worldwide market for generative AI. Once one looks at the value created on both sides, a simple conclusion emerges: Big AI should be paying universities—and paying them a lot—rather than sending them licence invoices.

The new AI licence landscape in higher education

Over the last two years, leading universities have moved from cautious experimentation with public AI tools to institution-level licensing. These agreements integrate proprietary large language models into core teaching, research and administrative processes.

Oxford is a prominent example. From the 2025/26 academic year, all Oxford students and staff have free access, at the point of use, to ChatGPT Edu—a secure, enterprise‑style instance of OpenAI’s ChatGPT that sits inside the University’s governance and security framework. Oxford’s policy on generative AI in research recommends using ChatGPT Edu, which it explicitly describes as governed by an agreement between the University and OpenAI. Information Security guidance confirms that ChatGPT Edu has been through the University’s third‑party security assessment process, alongside tools such as Microsoft 365 Copilot.

Ivy League institutions offer similar examples. Yale’s Clarity platform provides a Yale‑branded interface to multiple models, including an AI chatbot powered by OpenAI’s GPT‑4o, available to all students, faculty and staff in a “walled‑off” environment where inputs are not used to train external models. Dartmouth distinguishes between locally hosted models and “Commercial Tools: Campus‑wide licensing”, listing OpenAI GPT models, Anthropic Claude and Microsoft Copilot as “free access for all Dartmouth users” through its Dartmouth Chat interface. Columbia’s AI Services suite includes “ChatGPT Education”, Google Gemini and NotebookLM, each described as “available to the Columbia University community” in a secure, education‑focused environment.

In all of these cases, universities are acting as enterprise customers. They integrate Big AI systems with single sign‑on, data‑classification schemes and institutional policies. Access is provided to tens of thousands of users, and the university absorbs much of the contractual and reputational risk of deployment. The standard framing is that universities “get access” to powerful tools for their communities. But this is only half the story.

Who pays whom? What we can see (and what we cannot)

The financial direction of these deals is much less visible than their technical content. The contracts are confidential. To the best of my knowledge, no university has published its licence agreement with OpenAI, Microsoft or Google. Yet public statements allow some inferences.

For Oxford, no official page discloses the licence fee or explicitly states who pays whom. But several facts point in the same direction. ChatGPT Edu is marketed by OpenAI as a business offering alongside ChatGPT Enterprise and ChatGPT Business, with custom institutional pricing. These are clearly paid‑for services. Elsewhere, we can see actual numbers. The University of Pennsylvania, for example, describes ChatGPT Edu as a “full‑range AI model… available at $13 per user/month”, paid for by schools or departments when needed for courses or research. Oxford’s own IT pages list ChatGPT Edu among centrally provided tools that are “free” to staff and students—language the University usually reserves for software it licenses and funds centrally.

Against this background, it is reasonable to infer that universities pay Big AI at least something for these licences, however modest. What seems clear is that they do not receive any direct payments under these arrangements. Is that fair? To answer this, we need to look at the total value these partnerships create and how that value is divided between the parties.

How big is the pie – and who gets which slice?

Consider Oxford’s relationship with OpenAI. Any numbers here are, by necessity, estimates, but they help reveal the orders of magnitude involved.

On Oxford’s side, start with the direct utility for users. The University has roughly 26,000 students and nearly 17,000 staff, implying a core pool of about 43,000 potential users. Not all of them will use ChatGPT Edu heavily. Suppose that 70% of students and 80% of staff become active users once access is fully rolled out. That yields about 31,700 active users.

If the annual per‑user value they derive—in time saved on routine drafting, coding, data analysis and administrative tasks—is equivalent to between US$120 and US$240 per year (US$10–20 per month), the total annual user‑side benefit falls in the US$4–8 million range. This is broadly consistent with what one would infer from retail willingness to pay for tools such as ChatGPT Plus.
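
As a rough check on these figures (taking “nearly 17,000 staff” as about 16,900, and using the adoption rates and per‑user values assumed above):

\[
0.70 \times 26{,}000 + 0.80 \times 16{,}900 \;\approx\; 18{,}200 + 13{,}500 \;\approx\; 31{,}700 \ \text{active users}
\]
\[
31{,}700 \times \text{US\$}120 \;\approx\; \text{US\$}3.8\text{m} \qquad \text{and} \qquad 31{,}700 \times \text{US\$}240 \;\approx\; \text{US\$}7.6\text{m}
\]

which is where the US$4–8 million range comes from.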

Reputational gains—being seen as a leading adopter of responsible AI—are real but harder to quantify. Even a modest effect on research income or philanthropy could be worth low single‑digit millions of dollars per year, but that remains speculative.

On OpenAI’s side, the direct licence revenue is a portion of the use value obtained by the partner universities, so it is already reflected in our assessment of the pie. The more important component, however, is strategic and reputational. Oxford’s adoption of ChatGPT Enterprise and now ChatGPT Edu features prominently in OpenAI’s own marketing. Media coverage of the rollout of ChatGPT Edu across the California State University system presents Oxford and Wharton as early adopters that helped legitimise the technology in higher education. If Oxford, Yale or Columbia embrace a tool, it becomes much easier for other universities, school systems and governments to follow.

If the Oxford partnership helps OpenAI win even a modest number of additional large institutional deals that it would otherwise have struggled to secure—for example, dozens of university‑wide licences or one or two national education‑system deployments—the resulting incremental profit could easily run into the tens of millions of dollars over a multi‑year horizon. Even if only a small share of those gains is attributable to Oxford’s role as a reference customer, that share alone is likely to exceed the annual benefit Oxford derives from the licence. Oxford gets a powerful tool for its community. OpenAI gets a powerful signal to the world that its tools are safe and appropriate for elite universities. 
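
To see why this order of magnitude is plausible, consider a purely illustrative calculation. The deal count, the deal size and the use of the Penn price point quoted above (US$13 per user per month, roughly US$156 per year) are my own assumptions, not figures from any disclosed contract. Suppose the reference effect contributes to three dozen additional university‑wide licences of around 25,000 users each:

\[
36 \times 25{,}000 \times \text{US\$}156 \;\approx\; \text{US\$}140\text{m of additional revenue per year}
\]

Over a multi‑year horizon and at typical software margins, incremental profit in the tens of millions follows comfortably, and even a small share of it attributed to Oxford’s role as a reference customer would be of the same order as, or larger than, the US$4–8 million annual benefit estimated above.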

But even that is only part of the story.

Universities as customers – and as strategic assets

Universities have approached these agreements as if they were buying standard enterprise software. That is understandable. For decades, they have licensed office suites, learning-management systems and specialised databases. Generative AI has been slotted into the same procurement machinery.

Yet the logic of generative AI partnerships is different in at least three ways.

First, they are a reputational multiplier. A university like Oxford, Yale or Columbia is not just another corporate logo. Its adoption of a technology is itself a news story. It signals to other universities, regulators and the public that the tool is not only technically competent but also compatible with academic values and legal obligations. When a large state university system points to Oxford and Wharton as precedents for adopting ChatGPT Edu, those earlier decisions have become part of OpenAI’s marketing collateral.

Second, universities provide a valuable testbed. They use generative AI across disciplines, languages, assessment formats and administrative processes—in seminars and laboratories, HR departments and college offices. This is an extraordinarily rich environment for learning. AI firms see where their models fail, how users adapt them, which safeguards are effective, and how institutional governance interacts with technical design. Even if enterprise data is not used for model training, the organisational learning is invaluable.

Third, universities carry reputational and operational risk. They face student complaints, media criticism and regulatory scrutiny if something goes wrong. AI firms benefit from the legitimacy of these deployments but bear relatively little of this downstream risk.

Put differently, universities are not only buyers of AI. They are central strategic assets in the education market. Their choices accelerate or slow adoption more broadly. They lend credibility to particular firms and architectures. Competition between providers is cutthroat, and world-leading universities such as Oxford are cornerstones in the dominant AI firms’ strategies to capture market share. These universities also help shape regulatory and public perceptions. That contribution has substantial economic value—it is part of the pie created by the deal—yet current licence structures largely ignore it.

If universities create so much value, why are they paying?

If both sides are essential to the value created—the technology on one side, the reputation and testbed on the other—why are universities paying ordinary enterprise fees to Big AI rather than the other way round?

Part of the answer lies in path dependence and information asymmetry. These agreements were negotiated under time pressure, with a strong sense that universities “had to do something” about generative AI. The immediate focus was rightly on safety, data protection and basic access. The deeper question of how the surplus is allocated received less attention. Clever framing by Big AI likely helped: universities were portrayed as selected strategic partners and beneficiaries of a life-changing frontier technology.

From a negotiation perspective, the conclusion is uncomfortable. Once we take product learning, reputational multipliers and market-expansion effects seriously, the benefit to firms like OpenAI from university partnerships is, in all likelihood, much larger than the benefit to the universities themselves. In terms of dividing the incremental value created by the deal, universities seem to have accepted a relatively thin slice.

They should not. Negotiation scholars have argued that the cooperative surplus—the pie—should be shared equally. Both sides are equally necessary to do these deals. Big Tech AI could not do them without the universities, and the universities could not do them without Big Tech AI. Splitting the pie is a fair solution.
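
In stylised terms, with V the total value the partnership creates and each party’s BATNA the value of its best alternative to the deal, the pie and its equal division are:

\[
\text{Pie} = V - \bigl(\text{BATNA}_{\text{university}} + \text{BATNA}_{\text{AI firm}}\bigr), \qquad \text{each party receives its BATNA} + \tfrac{1}{2}\,\text{Pie}.
\]

If the AI firm’s gains from the partnership greatly exceed the university’s, as argued above, an equal split of the pie implies that money should flow towards the university, not away from it.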

On that view, the financial logic should be reversed. Instead of universities paying full enterprise prices, AI firms should be offering deep discounts, free or heavily subsidised student access, substantial in-kind support—and, in many cases, substantial positive payments to universities for the right to showcase them as flagship partners. In other domains, this is standard. Corporations pay to put their names on business schools, research centres and stadiums. Big AI could and should similarly pay for the right to say: “Our models power the University of X.”

This is not a rhetorical flourish. It reflects the underlying economics. The incremental value that AI firms derive from prestigious university partnerships is large and persistent. The incremental value universities derive from any particular vendor relationship, given the availability of alternatives and open-source models, is more modest. If the parties were to bargain explicitly over their respective contributions and alternatives, the resulting division of surplus would look very different from today’s licence invoices.

Renegotiating the deals: universities’ BATNAs

All of this might sound like a missed opportunity that cannot be undone. It is not. These are ongoing commercial relationships, and universities’ alternatives to agreement are much stronger now than when many of the first contracts were signed.

Universities have at least four credible levers.

First, they can turn to open‑source alternatives. The open‑source ecosystem for large language models is developing rapidly. Models such as Llama and Mistral can already be run on university‑controlled infrastructure to support academic use cases, giving institutions more control over models and data. They may not match frontier proprietary models on every benchmark, but they provide a realistic fall‑back position.

Second, as already mentioned, vendors compete vigorously. OpenAI is not the only supplier. Microsoft offers Copilot Chat with strong data protection guarantees, and universities such as Cornell already describe it as a “university‑wide ‘private’ version of ChatGPT and DALL‑E”. Google offers Gemini Education and related tools aimed at schools and universities. OpenAI has just declared a “code red” to improve ChatGPT as Google’s Gemini threatens its AI lead. Anthropic, Cohere and others are also courting higher education. Switching is costly, but certainly not implausible.

Third, universities can pursue a strategy of partial or staged adoption, limiting the scope of proprietary AI deployments to lower‑risk domains and relying on internal or open‑source solutions for sensitive areas such as assessment design, admissions or high‑stakes research. This reduces dependence on any single vendor and strengthens their hand in negotiations.

Fourth, and most importantly, universities have a lot of leverage over Big AI. Leading universities can walk away in a way that hurts. If Oxford, Yale or Columbia were to discontinue a partnership and say publicly that the terms did not provide a fair share of the surplus, or that the applications did not fulfil quality expectations, this would significantly damage the vendor’s market standing.

In other words, universities’ “Best Alternatives to a Negotiated Agreement” (BATNAs) are strong. They do not have to accept licence terms that treat them as ordinary enterprise customers and ignore the value they create. They can credibly threaten to switch providers, pivot towards open‑source models, or narrow the scope of deployments. They can also, individually or collectively, develop common expectations for fair AI partnerships in higher education.

Recognising this reality should be the starting point for the next round of negotiations. Universities should go back to their AI partners with a clear message: the current financial logic is upside‑down. Given the reputational and strategic value they provide, they should no longer be paying standard enterprise prices. The firms that gain most from these partnerships—OpenAI, Microsoft, Google and their peers—should be paying universities, and paying them substantially, for the privilege of turning them into global showcases for their technology.

The views expressed in this post are those of the author and do not necessarily reflect the views of the University of Oxford or any of its constituent colleges. I have used ChatGPT Edu for this post to research the current market for Big Tech AI licences to leading universities. 

Horst Eidenmueller is Statutory Professor for Commercial Law at the University of Oxford and Professorial Fellow of St. Hugh’s College, Oxford.