
Aligned Structuring of AI Startups

Author(s)

Gad Weiss
Wagner Fellow in Law & Business at NYU School of Law


In recent years, two high-profile AI startups—OpenAI and Anthropic—have introduced innovative business structures that challenge conventional startup structuring. Most startups optimize their governance and capital structures to maximize enterprise value, helping them attract investment and talent. These new models, by contrast, are designed to ensure that the need to protect society from AI risks guides startup managers’ discretion and that managers are insulated from pressures to maximize enterprise value.

In a new working paper, I ask whether these new startup architectures (here, ‘Aligned Structuring’ models) are likely to be adopted by other AI startups that care about the societal implications of their technology, and whether society would benefit from their broad-based adoption. The paper does not directly extrapolate from OpenAI’s and Anthropic’s experiences or judge the success of their specific setups; even assuming all the information needed to do so seriously were publicly available, a sample of two firms is too small to support reliable generalizations. Instead, it questions the robustness of the theoretical foundations behind their innovative structures.

Aligned Structuring models depart from conventional startup structuring practices in three key respects. First, startups are typically controlled by their entrepreneurs, investors, or both. In contrast, OpenAI and Anthropic have placed control rights with independent third parties committed to AI safety and ethics (‘Alignment Champions’)—the nonprofit parent in OpenAI’s case and a bespoke purpose trust in Anthropic’s. Second, while allowing startup stakeholders to reap exceptionally high profits in the event of success is traditionally seen as vital for attracting capital and talent, OpenAI has decided to cap investors’ and employees’ financial upside. Finally, due to concerns about the impact of directors’ fiduciary duties on their ability to prioritize AI safety and ethics, Anthropic and OpenAI have opted for business entities with flexible or inclusive fiduciary duties—a public benefit corporation (PBC) in Anthropic’s case and an LLC in OpenAI’s.

Shifting control to Alignment Champions has two main drawbacks. First, it reallocates the startup’s formal controls while doing little to address the significant informal controls that profit-driven stakeholders may hold. Investors, entrepreneurs, and Big Tech backers may exert significant informal influence over AI startups regardless of their formal control rights, as they are the primary sources of the financial capital, human capital, or computing power and data the startup craves. Second, Alignment Champions’ alignment with society’s interests does not necessarily make them more prudent decision-makers. Unsafe or unethical conduct can result from prioritizing other considerations (a conflict problem), but it may also result from genuine failures to identify and address safety or ethics issues (a competence problem). Depending on product-specific circumstances, Alignment Champions’ relative advantage as less conflicted decision-makers may be offset by their inherent informational disadvantage on product-related matters, which might make them less competent decision-makers than financially interested entrepreneurs.

Capping equity holders’ profits similarly has two significant limitations. First, Big Tech firms and their innovation arms—major players in the AI startup ecosystem—are often driven by non-financial investment goals, and other equity holders may rely on related-party transactions or other indirect means to generate returns. Capping such investors’ formal cash flow rights might therefore have little effect on their incentives. Second, for a profit-seeking owner, the main reason to support managerial decisions on safety and ethics is the prospect that they may increase the value of its holdings (for instance, by preventing reputational damage, stakeholder litigation, or unfavorable regulation). Capping owners’ financial upside, however, means they have less to gain from increases in the startup’s enterprise value, making them rationally risk-averse and less likely to support managerial decisions that sacrifice short-term for long-term profitability—including decisions based on stricter safety or ethics standards.

Lastly, the use of alternative business entities is aimed chiefly at protecting directors committed to AI safety and ethics from legal challenges by profit-driven shareholders. Existing corporate structures, however, already offer ample protection for directors who prioritize broader societal interests. Even in a standard corporate setup, directors can justify safety and ethics considerations as part of long-term value maximization, and their discretion would likely be protected under the business judgment rule. Private ordering solutions, such as shareholder covenants not to sue, together with the non-litigious nature of the startup ecosystem, further mitigate the risk of fiduciary challenges.

While the utility of Aligned Structuring principles is questionable, their costs are clear and easily observable. Traditional startup structures strive to minimize information asymmetries and agency costs while incentivizing stakeholders with varying risk appetites and liquidity needs to collaborate effectively. Recalibrating startups’ control and cash flow rights to protect society from safety and ethics risks would undermine these efforts—particularly in startups that lack the rare concentration of talent, experience, and star power found in ‘celebrity startups’ like OpenAI or Anthropic. For example, alternative business entities might put adopting startups at a disadvantage in raising capital; the threat of being replaced by Alignment Champion directors might disincentivize entrepreneur-managers from dedicating the necessary time and attention to R&D; and capping employees’ financial upside would make it difficult for the startup to compete for top talent.

This conclusion does not imply that Aligned Structuring, as a concept, is doomed to fail. There may be other ways in which startups’ governance and capital structures could be harnessed to protect society from AI risks more effectively. Moreover, Aligned Structuring models may serve purposes other than those for which they are ostensibly adopted: perhaps they are better understood as PR instruments designed to advertise startups’ proclaimed commitment to safety and ethics rather than to enforce it.

The author’s paper can be found here.

Gad Weiss is a Wagner Fellow in Law & Business at NYU School of Law.

This post is published as part of the special series ‘The Law and Finance of Private and Venture Capital’.

 
