Shaping Minds with Machines: Appraising Communication Bias in Large Language Models
How do large language models (LLMs)—the seemingly neutral engines powering today’s conversational AI—shape our perceptions, opinions, and choices? The rapid integration of these technologies into fields such as healthcare, finance, and even politics renders LLMs not mere tools but societal gatekeepers, subtly mediating information flow and influencing decision-making at both individual and collective levels.
Our forthcoming article, Communication Bias in Large Language Models – A Regulatory Perspective, offers an overview of the most urgent legal issues and policy proposals surrounding the growing use of LLMs. The future of trustworthy digital communication, and the integrity of our public sphere, may turn on how we address the issues we raise.
Our article probes the risks entailed in the deployment of biased LLMs and the unique regulatory challenges that deployment creates. We scrutinize the EU Artificial Intelligence Act (AI Act) and the Digital Services Act (DSA), assessing their capacities and limitations in addressing the diverse forms of bias produced by LLMs. Notably, LLMs are becoming essential conduits of communication, amplifying social, cultural, and political perspectives and potentially affecting public discourse and even electoral outcomes. At issue is what we call communication bias: the selective presentation of facts or outright misinformation, as well as the subtler manipulation of beliefs and attitudes through the expression and amplification of perspectives that may escape classification as either ‘true’ or ‘false’.
LLMs as Gatekeepers
LLMs increasingly function as the primary interface between users and artificial intelligence. Steadily supplanting conventional gatekeepers, they occupy a privileged position in contemporary digital information ecosystems. As AI systems become arbiters of retrieved information, their tendency to reinforce existing biases and to entrench echo chambers raises concerns about polarization and the erosion of pluralism. Contrary to prevailing narratives, these biases do not arise merely from flawed training data (‘data bias’) or user inertia (‘automation bias’); they are the cumulative result of complex generative processes and, at times, of intentional or unintentional design choices by model providers.
The Regulatory Response: the AI Act and the DSA
Our article provides a granular account of the AI Act and the DSA, charting how these frameworks establish obligations related to transparency, fairness, and risk management.
For instance, the AI Act introduces rigorous pre-market obligations, requiring high-quality, representative training datasets and ongoing risk management for high-impact systems. Developers and providers must conduct regular audits and implement robust human oversight, especially where outputs could affect fundamental rights or democratic processes. Meanwhile, the DSA enforces post-market obligations for platforms hosting LLM-generated information, including mechanisms to address illegal content, mandates for algorithmic transparency, and protocols for systemic risk assessment. Yet, these frameworks have important limitations: neither confronts communication bias head-on, and post-deployment safeguards remain largely reactive. Most notably, the AI Act’s strongest mechanisms coalesce in the pre-deployment phase, while the DSA’s remit is contingent on narrow post-market requirements that may sideline LLM-specific issues.
An Underrated Path: Competition and Technology Design Governance
The existing regulatory debate, as we point out, cannot be exhausted by recourse to value chain regulation (AI Act) and content moderation (DSA) alone. Consider Meta’s recent shift in content moderation, which replaced professional fact-checking with user-generated community notes. The move not only foregrounds the fragility of platform self-regulation when profit-driven business models dominate, but also highlights the difficulty of mitigating communication bias once human oversight is ceded to algorithmic and communal processes. It calls attention to the urgent regulatory issues raised by LLM-mediated communication and invites deeper reflection on whether and when LLMs foster genuine diversity or instead reinforce dominant platforms’ underlying incentives and inherent blind spots.
A central thesis of our article is that competition and ongoing technology design governance are needed as complements to existing regulation. We show that neither pre- nor post-deployment mechanisms can fully tackle the incentive structures that give rise to communication bias in LLMs in the first place. Market-based approaches, particularly robust competition and participatory technology design governance, are therefore vital complements to existing regulatory frameworks. Supported by novel instruments such as the Digital Markets Act, these mechanisms can foster pluralism, allow for model diversity, and empower users, ultimately creating a more balanced and trustworthy information ecosystem than regulation alone can achieve.
Where Next for AI Regulation?
Ultimately, our article sets out a constructive agenda for AI governance: interpret and apply existing laws with communication bias in mind; support the development of benchmark datasets for measuring bias and sycophancy; institutionalize external audit processes; and empower users to effectuate change through participatory oversight. Mitigating the risks of deploying biased LLMs means moving from snapshot compliance to market-based procedures that can tackle communication bias, a challenge that requires combining value chain oversight, content moderation, competition law enforcement, and participatory technology design.
The authors’ forthcoming article, 'Communication Bias in Large Language Models – A Regulatory Perspective', is available here.
Adrian Kuenzler is an Associate Professor at the University of Hong Kong Faculty of Law and an Affiliate Fellow at the Information Society Project, Yale Law School.
Stefan Schmid is a Professor at the Technical University of Berlin, Germany.