Faculty of Law Blogs / University of Oxford

AI and Corporate Climate Governance: Time for AI Pragmatism

Author(s)

Jan-Erik Schirmer
Professor of Law at European University Viadrina


Artificial intelligence (AI) and climate change are arguably the two greatest challenges of our time. While long considered in isolation, a more integrated perspective of a ‘Twin Transformation’ is gaining traction in public debate and the social sciences. Legal scholarship, however, has largely remained on the sidelines—especially in the field of corporate governance. Contributions on the respective impacts of AI or climate change are ubiquitous, but there is little engagement with how the two are interrelated.

In a chapter of the forthcoming Oxford Handbook on Climate Change and Private Law (edited by Douglas Kysar and Ernest Lim), I aim to bridge this gap. The chapter brings together the siloed debates on AI and climate change and explores how their intersection matters for corporate governance. More specifically, it makes three contributions.

First, the chapter argues that uncritical calls for more AI in the boardroom pose significant climate risks. Worldwide, AI’s growing capabilities and optimization potential have sparked a veritable AI enthusiasm in corporate governance—with many scholars arguing that directors now have a duty to make use of state-of-the-art AI systems. But this overlooks an important point: advocating the extensive use of AI implicitly contributes to the growth and intensification of AI deployment—thereby increasing its already large carbon footprint. AI models, particularly large-scale ones such as GPT or DeepMind’s Alpha systems, require vast computational resources. Because most of the required electricity still comes from fossil sources, AI is currently responsible for between 0.06% and 0.17% of global annual greenhouse gas (GHG) emissions—putting AI’s climate impact somewhere between that of countries like Jordan and Belgium. Despite rapid efficiency gains, overall emissions are increasing due to the exponential growth in AI development and deployment.

Second, the chapter responds to these risks by advocating a shift from uncritical AI enthusiasm to what I call AI pragmatism. The idea is simple: when interpreting corporate law (or any other area of law, for that matter), scholars should take the climate risks of AI into account. This is not to say that corporate governance should reject AI altogether. On the contrary, when used selectively, AI can play a meaningful role in enhancing corporate climate governance.

This leads to the chapter’s third contribution: it provides concrete examples of how AI can support climate governance at the corporate level. In particular, it focuses on climate reporting and planning—two areas that are increasingly mandated by new regulatory frameworks, especially in the EU.

Under the Corporate Sustainability Reporting Directive (CSRD), many companies are now required to report on their GHG emissions and climate-related risks. The Corporate Sustainability Due Diligence Directive (CSDDD) goes even further by imposing a substantive obligation on covered companies to adopt transition plans aligned with the 1.5°C goal of the Paris Agreement. US regulations are less stringent, but similar trends are emerging: while the SEC’s climate disclosure rules have been permanently suspended, the more legally robust California framework remains in place, and with it the far-reaching California effect. In light of this, companies subject to climate reporting and disclosure rules are well advised to (continue to) comply by disclosing all required information—not least to avoid disappointing their climate-sensitive stakeholders.

Both corporate climate reporting and corporate climate planning are complex tasks. Reporting GHG emissions or identifying climate-related hazards requires the collection and processing of vast amounts of data, and the use of complex climate models. AI can provide valuable assistance on all these fronts.

For instance, recent studies show that AI-driven emission models can estimate emissions at lower cost and about 30% more accurately than traditional tools. This is particularly helpful for predicting future emissions, which is essential for setting science-based reduction targets. Similarly, AI can support companies in identifying both outside-in and inside-out climate risks—ie, how climate change impacts the company, and how the company contributes to climate change. Traditional climate models are time-consuming and computing-intensive; AI-based models can simulate phenomena like cloud behavior or glacier expansion up to 1,000 times faster and with greater precision.

But AI does not only help companies comply. It also helps stakeholders process the information provided in climate and sustainability reports. One of the key challenges in sustainability governance is information overload: customers, employees, and investors are often overwhelmed by the sheer volume and complexity of sustainability reports. AI tools such as ClimateBERT can help reduce this overload by automatically analyzing and summarizing disclosures. Their scope has so far been limited, mainly due to the lack of standardized reporting formats. But that is changing. Reporting frameworks like the CSRD and the European Sustainability Reporting Standards (ESRS) increasingly require machine-readable reports—enabling AI tools to unlock their full potential.

The upshot is clear: AI has both beneficial and adverse effects on climate change. In the realm of corporate governance, it is both problem and potential. In my chapter, I therefore advocate a more careful and pragmatic approach—one that accounts for AI’s risks, while making strategic use of its strengths.

 

Jan-Erik Schirmer is a Professor of Law at European University Viadrina, and Chair of Civil Law, Commercial and Corporate Law, Compliance and Sustainability.

The author’s chapter, on which this post is based, can be found here.
