
Beyond Compliance: Democratizing Speech Governance by AI

Author(s)

Niva Elkin-Koren
Professor of Law at Tel-Aviv University, Faculty of Law
Maayan Perel
Assistant Professor, Netanya Academic College School of Law

Time to read: 4 Minutes

Governance by Artificial Intelligence (AI) is challenging the notion of compliance in democratic societies. Compliance is generally ‘the state of being in accordance with established guidelines or specifications, or the process of becoming so.’ Yet in AI-based decision-making, serious challenges surround the nature of the threshold for compliance. Most often, scholars point to the opaque and dynamic nature of the algorithmic norms that govern AI-based decision-making, which make it difficult to verify the system’s decisions against a pre-determined threshold. Scholars concerned with this type of challenge essentially presume that an acceptable threshold already exists.

We challenge this hidden presumption. Unlike legal norms, which explicitly set rules and standards, Machine Learning (ML) systems learn from examples in order to predict or classify new instances that the system has not previously encountered. Consequently, we argue, AI systems often do not simply comply with a given threshold but largely define the threshold themselves. This points to a far more preliminary challenge to AI compliance: the opaque and dynamic manner in which thresholds (or rules) are set. Since social dialogue and public deliberation are missing from norm setting by AI, liberal democracies should question the legitimacy of the resulting norms.

The Formation of Speech Norms by AI

Consider, for instance, speech governance by AI. While the scope of permissible speech on digital platforms is typically defined in legal terms, listed in the platforms’ Terms of Service, in practice the upload filters of social media provide an operational definition of free speech through their technical details. AI filters have now largely replaced humans in determining which content can become available online. ML systems are deployed in content moderation to detect illicit speech, such as hate speech, terrorist propaganda, and copyright infringements. Such systems share several features: a process for labeling data as either legitimate or unwarranted, and a predictive model, which classifies any given content as illicit or not based on features learned during training. Classification is typically followed by some automated action towards the content (eg post, downgrade, demonetize, remove, block, filter). Importantly, a key feature of ML content moderation systems is a recursive feedback loop. Content identified as illicit is fed back into the model so that similar content will be detected the next time the system runs. Such systems thus involve ongoing learning that is shaped by the content they subsequently process. Hence, speech governance via AI does not merely apply existing norms, but also crafts norms and shapes users' behavior.
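To make this feedback loop concrete, consider the following minimal sketch in Python. It is purely illustrative: the toy keyword model, the seed labels, and the example posts are hypothetical stand-ins, not any platform’s actual system.

```python
from collections import Counter

def train(examples):
    """Count how often each word appears in content labeled 'illicit' versus
    'legitimate'; a crude stand-in for fitting a real ML classifier."""
    weights = Counter()
    for text, label in examples:
        for word in text.lower().split():
            weights[word] += 1 if label == "illicit" else -1
    return weights

def classify(weights, text, threshold=0):
    """Score new content against the learned word weights; the 'rule' applied
    is whatever the training data happens to encode."""
    score = sum(weights[w] for w in text.lower().split())
    return "illicit" if score > threshold else "legitimate"

# Initial human-labeled seed data.
training = [("kill them all", "illicit"),
            ("lovely weather today", "legitimate")]

for post in ["they should all go", "go outside today"]:
    model = train(training)
    verdict = classify(model, post)
    print(post, "->", verdict)  # some automated action would follow here
    # Recursive feedback loop: the system's own verdict becomes a new training
    # label, so the next run's effective threshold is shaped by this decision.
    training.append((post, verdict))
```

Even in this toy, the weights applied to the second post were partly set by the system’s own earlier verdict rather than by any human labeler: the effective threshold drifts with each run instead of tracking a fixed, pre-determined rule.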

(The Lack of) Contestation in Speech Governance by AI

The transition to AI in speech governance by digital platforms lacks key features that are necessary for society to deliberatively decide its self-governing norms. Democratic contestation seeks to enable citizens to collectively form public opinion by facilitating discursive interactions.


One important feature of democratic contestation facilitated by law is the dispersed power to decide and interpret norms, which is held by competing institutions and diverse human decision-makers. Yet AI systems of speech governance act simultaneously as legislature, judge, and executive when they define the classifiers, apply them to any given piece of content, and generate an outcome: whether to allow or ban it.


Another important feature of democratic contestation facilitated by law is the way it allows a multiplicity of meanings, enabling individuals and groups to reach collective decisions on conflicting views even while they continue to disagree. Traditional law-making enables social actors to agree on high-level principles and work out the details of the required tradeoffs down the road. Speech governance by AI, in contrast, applies data analytics techniques to identify patterns and correlations, and uses them to classify content as unwarranted. In AI decision-making, outcomes turn into data-driven rules.

Finally, speech governance by AI is less susceptible to public scrutiny than speech governance by law. The norms generated by AI are opaque and inexplicable, and the value tradeoffs embedded in the optimization functions of AI systems are concealed. AI systems of content moderation thus formulate thresholds in a probabilistic fashion, without any conscious deliberation over what speech is and what limitations it should be subject to.

AI systems that generate speech norms are not designed to reflect the underlying principles of our social contract. Yet these very norms shape our online public sphere and define the threshold for compliance. Can we enhance their legitimacy?

Speech Contestation by Design  

Reintroducing democratic contestation into the process of crafting and implementing speech norms is essential for sustaining a democratic online discourse. Our proposal for speech contestability by design is inspired by the contestation processes and procedures embedded in the law.

One way to promote participatory public engagement in the setting of speech norms by AI systems is to incorporate adversarial procedures into the system design. An adversarial approach, inspired by law, could guide the creation of contesting algorithms, which would automate the process of contesting decisions about speech. A system of contesting algorithms would further enable the acquisition of information about content moderation at scale, while creating an ongoing, dynamic check on, and counter-pressure against, platforms’ monolithic content removal systems.
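As a rough sketch of what a contesting algorithm might look like in code (every function, score, and threshold below is hypothetical), an independently trained model could re-score a platform’s removal decisions and automatically appeal those with which it strongly disagrees:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    content: str
    removed: bool
    platform_score: float  # platform model's confidence that the content is illicit

def contesting_score(content: str) -> float:
    """Stand-in for an independently trained classifier; in practice this would
    be a separate model with its own training data and optimization goals."""
    return 0.2 if "satire" in content.lower() else 0.9

def contest(decisions, disagreement_margin=0.5):
    """File an automated appeal for every removal that the contesting model
    scores well below the platform's own confidence."""
    return [d for d in decisions
            if d.removed and d.platform_score - contesting_score(d.content) > disagreement_margin]

decisions = [
    Decision("political satire about the ruling party", removed=True, platform_score=0.95),
    Decision("direct incitement to violence", removed=True, platform_score=0.99),
]
for appeal in contest(decisions):
    print("contested removal:", appeal.content)  # only the satire is appealed
```

Each logged disagreement would both trigger review of an individual decision and, in aggregate, document where the platform’s classifier diverges from alternative readings of the speech norm.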

Another possible means of promoting democratic contestability by design is to inject the legal principle of separation of powers into algorithmic content moderation, through a separation of functions. The idea is to separate the different functions performed by the monolithic AI content moderation systems of digital platforms, and to outsource the law enforcement functions to external, independent, unbiased algorithms. This would ensure that speech moderation complies with objective public norms, instead of exclusively following the narrow interests of the platforms that deploy these systems.
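A minimal sketch of how such a separation of functions might look in code (again, all names and the threshold are hypothetical): the platform’s model merely scores content, while a separate enforcement component applies a removal threshold set through an external, public process:

```python
def platform_classifier(content: str) -> float:
    """Platform-side function: returns a probability that the content is
    illicit, but has no authority to act on it."""
    return 0.8 if "propaganda" in content.lower() else 0.1

# Set through an external, publicly auditable process, not by the platform.
PUBLIC_REMOVAL_THRESHOLD = 0.9

def independent_enforcer(score: float) -> str:
    """External enforcement function: applies the public norm to the score."""
    return "remove" if score >= PUBLIC_REMOVAL_THRESHOLD else "keep"

content = "clip alleged to contain terrorist propaganda"
action = independent_enforcer(platform_classifier(content))
print(content, "->", action)  # 'keep': the score falls below the public threshold
```

The design choice matters: once the enforcement rule lives outside the platform’s codebase, it can be inspected, debated, and revised without access to the platform’s proprietary model.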

These by-design solutions create a common ground for negotiating speech norms: a procedural framework under which competing tradeoffs are confronted to produce an outcome that actually reflects social negotiation. They enable public scrutiny of speech moderation, thus facilitating a more democratic evolution of online speech norms. Speech norms generated through processes of democratic contestation establish more legitimate thresholds for compliance.

Niva Elkin-Koren is a Professor of Law at Tel-Aviv University Faculty of Law.

Maayan Perel is an Assistant Professor at the Netanya Academic College School of Law.

This post is published as part of the series ‘Smart Compliance Systems in the AI Era: Combining Criminal and Administrative Measures’, and is a contribution to the symposium of the same name, co-organised by the Bar-Ilan Lab for Law, Data-Science and Digital Ethics and Ono Academic College in December 2022.

