
Information in Disclosing Emerging Technologies: Evidence from AI Disclosure


Artificial Intelligence (AI) is undeniably one of the most transformative technologies in today's business landscape. A recent survey of 240 CEOs and senior executives revealed that nearly 60% believe AI will fundamentally transform their industries. This revolutionary potential introduces substantial uncertainty for investors attempting to evaluate its impact on firm value. On the one hand, AI can drive revenue growth through innovative products and precise customer targeting while also reducing operational costs by automating tasks, optimizing supply chains, and improving production processes. On the other hand, adopting AI comes with heightened risks, including regulatory uncertainties, cybersecurity vulnerabilities, and potential operational failures.

Given these uncertainties, it is essential for investors to have access to high-quality information about firms' AI initiatives and the associated risks. However, the current regulatory framework lacks explicit guidelines for AI reporting, making such disclosures voluntary. The US Securities and Exchange Commission (SEC) has expressed concerns about the credibility of corporate AI disclosures, warning against ‘AI washing’, the practice of overstating AI capabilities to appear more advanced. This practice has fueled skepticism among investors and contributed to a rise in securities class action lawsuits. Against this backdrop, our study, ‘Information in Disclosing Emerging Technologies: Evidence from AI Disclosure’, seeks to shed light on the determinants and information content of corporate AI disclosures in annual reports.

Measuring AI disclosure is challenging due to the voluntary nature of reporting and the lack of a standardized framework. Firms often distribute AI-related content across multiple sections of their 10-K reports, such as business descriptions, risk factors, and management discussion and analysis (MD&A). To address these measurement challenges, we developed a two-step approach that combines keyword search and large language model (LLM)-based analysis. We began with a simple search for ‘artificial intelligence’ in the 10-K filings of US public firms listed on the SEC EDGAR database. We then expanded the list of keywords by manually reviewing 10-K reports that discussed AI, identifying commonly co-occurring terms and bigrams. Traditional keyword-based searches, however, may miss contextual references to AI or misinterpret the nature of AI discussions. To overcome this limitation, we leveraged ChatGPT to provide contextual analysis. For each occurrence of an AI-related keyword, we extracted the 400 words before and after it and asked ChatGPT to classify the nature of the AI usage into five categories: product development, pricing optimization, AI product provision, inventory management, and operational efficiency. This approach allowed us to categorize the nature of AI disclosure more accurately and link AI mentions to specific areas of firm activity.
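To illustrate the mechanics, the Python sketch below shows how such a two-step pipeline might be assembled: a keyword search locates AI mentions, and an LLM call classifies the surrounding context. This is a minimal sketch rather than the exact pipeline used in the study; the seed keyword list, the model name, the file name, the prompt wording, and the helper functions are placeholder assumptions.

import re
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

# Illustrative seed keywords; the study expanded its list by manually
# reviewing filings for commonly co-occurring terms and bigrams.
AI_KEYWORDS = ["artificial intelligence", "machine learning", "deep learning"]

# The five categories described above.
CATEGORIES = ("product development, pricing optimization, AI product "
              "provision, inventory management, operational efficiency")

def context_windows(filing_text, window=400):
    """Yield roughly `window` words before and after each keyword hit."""
    words = filing_text.split()
    joined = " ".join(w.lower() for w in words)
    for kw in AI_KEYWORDS:
        for match in re.finditer(re.escape(kw), joined):
            idx = joined[:match.start()].count(" ")  # word index of the hit
            yield " ".join(words[max(0, idx - window):idx + window])

def classify_snippet(snippet):
    """Ask the LLM to label the nature of the AI discussion."""
    prompt = (f"Classify how this 10-K excerpt discusses AI. Answer with "
              f"exactly one of: {CATEGORIES}.\n\nExcerpt: {snippet}")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; the study used ChatGPT
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# Hypothetical usage on one filing's text:
labels = [classify_snippet(s)
          for s in context_windows(open("10k_example.txt").read())]

In a full pipeline, these snippet-level labels would then be aggregated to the firm-year level and linked to the disclosure measures analysed below.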

Our analysis of 10-K filings from 2010 to 2023 reveals a significant increase in both the frequency and the intensity of AI disclosure. In 2010, only 2.36% of firms mentioned AI in their 10-K reports; by 2023, this figure had risen to 20.02%. Importantly, this increase occurred across all industries, reflecting the growing importance of AI in sectors such as manufacturing, healthcare, finance, and retail. Most AI-related discussions are concentrated in the business description, risk factors, and MD&A sections of the 10-K report. These sections offer insight into how firms incorporate AI into their operations, assess associated risks, and outline future strategic initiatives.

To understand what motivates firms to disclose AI information, we investigated the relationship between firm characteristics and AI disclosure. Firms with a higher proportion of AI-skilled employees are significantly more likely to disclose AI information: a 1% increase in AI-skilled employees is associated with a 0.6% increase in the likelihood of AI disclosure. Other key determinants include firm size, valuation, and firm age. Larger firms are more likely to disclose AI initiatives, consistent with their greater resources for investing in new technologies. Firms with higher market valuations tend to highlight AI to emphasize their innovation capabilities. Younger firms, often seen as disruptors, are more inclined to disclose AI-related initiatives as part of their growth narrative.

We then explored whether AI disclosures contain useful forward-looking information for investors. Linking AI disclosure to subsequent firm performance, we observed that AI disclosure is positively associated with future sales growth, employment growth, capital investment, and R&D intensity. Importantly, AI disclosures also correlate with increased firm risk, as evidenced by a rise in stock price volatility and option-implied volatility. This suggests that while AI may drive growth, it also introduces uncertainty. Classifying AI activities into categories such as product development, pricing optimization, inventory management, and operational efficiency, we found that both revenue-enhancing and cost-reducing AI activities significantly affect firm performance. Firms that use AI for both revenue generation and cost reduction exhibit stronger future growth than firms focused on only one aspect of AI adoption.

Given the inherent risks of AI adoption, companies often disclose AI-related risks in the risk factors section of their 10-K reports. We classified these risks into six categories: regulatory risks, operational risks, competitive risks, cybersecurity risks, ethical risks, and third-party risks. Regulatory risks stem from uncertainty in AI regulations and compliance requirements, while operational risks arise from integration failures and system malfunctions. Competitive risks are linked to threats from rival firms with superior AI technology. Cybersecurity risks are associated with the potential for data breaches and hacking of AI-driven systems. Ethical risks include concerns about fairness, discrimination, and societal impact, while third-party risks involve reliance on external AI vendors and service providers. Firms that disclose more AI risks exhibit higher stock price volatility and option-implied volatility, indicating that markets recognize the uncertainty associated with AI adoption.

Our study highlights the growing importance of corporate AI disclosure in annual reports. We provide evidence that AI disclosures offer valuable forward-looking information related to firm growth, operational efficiency, and risk. The adoption of advanced LLMs like ChatGPT allows for a more precise analysis of disclosure content, shedding light on the specific nature of firms’ AI activities. Our findings have implications for investors, regulators, and policymakers, particularly as regulatory efforts around AI continue to evolve. In light of increasing pressure for transparency, firms may face growing scrutiny over the quality and credibility of their AI disclosures.

Yang Cao is a Researcher at the Carroll School of Management, Boston College.

Miao Liu is an Assistant Professor at the Carroll School of Management, Boston College.

Jiaping Qiu is a Professor of Finance at the DeGroote School of Business, McMaster University.

Ran Zhao is an Assistant Professor at the Fowler College of Business, San Diego State University.

The full paper is available here.
