Algorithmic Constitutionalism: The Next Frontier of Constitutional Law?
The increasing encroachment of artificial intelligence (‘AI’) on social life raises various risks to human welfare and human rights. These risks are most acute in the info-spheres created and controlled by Google, Facebook, Reddit, Apple, and Amazon. In this blog post, which is based on our forthcoming article in the Indiana Journal of Global Legal Studies, we propose a novel approach to coping with the risks presented by AI governance, which we have termed ‘algorithmic constitutionalism’ (‘AC’). We briefly demonstrate how this idea can operate within digital platforms such as Facebook and Reddit and consider some challenges to our model by setting it against the idea of ‘societal constitutionalism’ developed by Gunther Teubner and others. We conclude by exploring the implications of our idea for the new governance framework established by the EU Digital Services Act (‘DSA’).
Digital platforms such as Facebook are hybrid entities, in which algorithms and human agents share responsibility for various governance tasks, such as content moderation, friend recommendation, and ad delivery. We consider a hypothetical situation in which Facebook becomes fully controlled by an AI agent with highly autonomous capabilities (a prepotent or super-intelligent AI, ‘PAI’). We assume that this PAI agent will replace Facebook’s current content-moderation framework, which combines algorithms and human agents (see Figure 1 at the end of this post). We distinguish between two types of risk that a shift to full AI control can create. The first is a potential increase in over- and under-enforcement (false positives and false negatives) due to the loss of human oversight and to algorithmic bias. A recent study by the Brennan Center for Justice at NYU has shown, for example, that Facebook’s approach to hate speech was biased against minority groups. The second risk concerns the possibility, noted by AI scholars such as Nick Bostrom, Stuart Russell, and others, that a PAI agent may develop new objectives, or new interpretations of existing ones, that are inconsistent with those of its creators. In the context of Facebook, this risk could manifest as a morally misaligned revision of Facebook’s content-moderation norms. For example, the PAI agent controlling Facebook could relax the norms regarding fake news in order to increase communication traffic on the platform. Facebook has already been criticized for its failure to prevent the dissemination of fake news regarding COVID-19 and climate change.
One dominant response to the risk of governance by AI is the idea of ‘ethical engineering’, which proposes to instill AI agents with ethical principles, either by specifying, through a top-down process, a small set of fundamental, rule-based principles, or by allowing the agents to develop ethical sensitivities through bottom-up, case-based learning. The first difficulty with this approach concerns the problem of moral pluralism: how can we algorithmically resolve the incommensurability of moral values, where values are considered incommensurate ‘if it is neither true that one is better than the other nor true that they are of equal value’? Advocates of ethical engineering have not given a satisfactory solution to this problem. A second shortcoming of the ethical-engineering approach concerns the risk that a PAI agent will develop new objectives that are inconsistent with the ethical principles its human designers have implanted in it.
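The distinction between the two ethical-engineering strategies can be stated in a few lines of code. The following Python fragment is purely illustrative: the forbidden categories, the labelled cases, and the choice of classifier are all our own hypothetical assumptions, and nothing in it resolves the incommensurability problem just noted.

```python
# Purely illustrative contrast between the two ethical-engineering strategies;
# the rule set, labels, and model choice are hypothetical assumptions.

# Top-down: a small, fixed set of rule-based principles, checked explicitly.
FORBIDDEN_CATEGORIES = {"hate_speech", "incitement", "fake_news"}

def top_down_check(post_categories: set) -> bool:
    """Allow a post only if it violates none of the hardwired principles."""
    return post_categories.isdisjoint(FORBIDDEN_CATEGORIES)

# Bottom-up: ethical sensitivity learned from past cases rather than rules.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def bottom_up_model(past_posts: list, past_labels: list):
    """Fit a classifier on past moderation decisions (1 = allowed, 0 = removed)."""
    vectorizer = TfidfVectorizer()
    model = LogisticRegression()
    model.fit(vectorizer.fit_transform(past_posts), past_labels)
    return vectorizer, model
```

Neither strategy escapes the pluralism problem: the top-down rule set must already rank conflicting values, and the bottom-up model simply inherits whatever rankings its training cases embody.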
The idea of ‘algorithmic constitutionalism’ (‘AC’), which we propose here, offers an alternative approach to the challenge of governing AI in the context of digital platforms. Our approach rests on three pillars:

(a) A layered architecture consisting of two levels of code: (i) an operative or object level and (ii) a meta level. The purpose of this layered architecture is to shield the core principles of the system (which are located at the meta-code level) from algorithmically initiated changes that could radically transform the system.

(b) Algorithmic meta-reasoning, which allows the system to operate simultaneously at the two levels, so that it can (self-)monitor, verify, and potentially correct, in real time, operations at the object level if they depart from the principles protected by the meta-code level.

(c) Correction by deliberation. Considering both the problem of moral pluralism and the risks of a morally misaligned autonomous PAI, we propose to limit the ability of the meta-level code to initiate corrective actions by subjecting it to hardwired deliberation procedures.

Our thesis builds on humanity’s long experience in taming power. Rather than relying on ethics and morality to guide it, human society has put its trust in politics, drawing on various decision-making procedures embedded in an intricate institutional structure of checks and balances. Together, these three pillars form the basis of what we call ‘algorithmic constitutional law’.
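To make the three pillars concrete, here is a minimal sketch in Python of how they might fit together. Everything in it, from the class names to the principle labels and the deliberation hook, is our own illustrative assumption rather than a description of any existing platform code.

```python
# A minimal sketch of the three AC pillars; all names, principle labels,
# and the deliberation hook are hypothetical illustrations.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Operation:
    """An object-level action, eg a proposed revision of a moderation rule."""
    description: str
    violated_principles: set = field(default_factory=set)
    executed: bool = False

# Pillar (a): core principles live at the meta level, outside the reach of
# object-level code (here, an immutable tuple the object level never writes to).
CORE_PRINCIPLES = ("equal_access_to_moderation", "no_relaxation_of_fake_news_norms")

def meta_monitor(op: Operation) -> bool:
    """Pillar (b): meta-reasoning verifies each object-level operation in real time."""
    return not op.violated_principles.intersection(CORE_PRINCIPLES)

def meta_correct(op: Operation, deliberation: Callable[[Operation], bool]) -> None:
    """Pillar (c): the meta level cannot act unilaterally; corrective action
    is gated by a hardwired deliberation procedure."""
    if deliberation(op):
        op.executed = False  # roll the operation back

def run(op: Operation, deliberation: Callable[[Operation], bool]) -> None:
    """Object-level execution under continuous meta-level supervision."""
    op.executed = True
    if not meta_monitor(op):
        meta_correct(op, deliberation)
```

The design choice that matters here is the asymmetry: the object level can propose anything, the meta level can veto, but the veto itself only takes effect once the deliberation procedure approves it, mirroring the checks-and-balances logic described above.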
To illustrate how AC could operate in practice, consider Facebook’s cross-check system, whose full extent was exposed in an important decision of the Oversight Board released on December 6, 2022. The system provided privileged treatment to Facebook’s business partners and most influential users by creating additional layers of human review (on top of the conventional algorithmic layer). Under the AC model, if such a system had been initiated by the object-level code (eg as a way to increase data traffic on the platform), the meta-level code could have detected and corrected it, as the cross-check system clearly violated Facebook’s Community Standards, both by providing unequal access to human moderation and by failing to provide proper protection to entities likely to produce expression with significant human rights value.
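Continuing the sketch above, the cross-check scenario could be replayed as follows; the human-panel function is a stand-in for whatever deliberation procedure is hardwired into the meta level.

```python
# Hypothetical replay of the cross-check scenario using the sketch above.
cross_check = Operation(
    description="add extra human-review layers for business partners",
    violated_principles={"equal_access_to_moderation"},
)

def human_panel(op: Operation) -> bool:
    """Stand-in deliberation procedure: a panel reviews the flagged operation."""
    print(f"Deliberating on: {op.description}")
    return True  # the panel approves the correction

run(cross_check, human_panel)
assert cross_check.executed is False  # the meta level rolled the change back
```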
A key challenge to AC concerns its relation to societal constitutionalism (‘SC’). Our argument shares with SC the claim that state constitutions do not exhaust the universe of constitutionality, but it departs from SC by developing an algorithmic interpretation of constitutionality. Paradoxically, the attempt to subject the AI algorithm to external deliberative control also opens the door for the AI agent to intervene in that process (eg by governing the conditions under which external deliberative data can be ‘validated’ and incorporated as input), potentially undermining its very purpose. The tension between societal and algorithmic constitutionalism poses, we believe, a new and critical challenge to constitutional law.
An important direction for further research concerns the relation between AC and the EU Digital Services Act (‘DSA’), which has established a new regulatory regime for social media platforms. The DSA includes provisions regarding notification and the provision of reasons (concerning, for example, the removal of content), the establishment of an internal complaint-handling system that enables users to lodge complaints electronically and free of charge against a decision taken by the provider, and the creation of an out-of-court dispute settlement system that would provide users with accessible redress. The DSA insists that this new regulatory regime will not be fully governed by AI. Thus, for example, Art. 20(6) states that ‘[p]roviders of online platforms shall ensure that the decisions, referred to in paragraph 5, are taken under the supervision of appropriately qualified staff, and not solely on the basis of automated means.’ However, Art. 20(6) does not provide explicit guidelines regarding the ‘division of labor’ between AI and human judgment. Developing practical solutions to the tension between societal and algorithmic constitutionalism thus constitutes a major challenge for the implementation of the DSA.
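By way of illustration, one possible reading of Art. 20(6) is a routing rule under which automated means may propose a decision on a complaint, but qualified staff must supervise it before it becomes final. The classifier, the confidence threshold, and the function names below are our own assumptions, not anything found in the DSA.

```python
# One hypothetical way to operationalize Art. 20(6); the classifier, the
# confidence threshold, and the staffing model are assumptions, not DSA text.
from dataclasses import dataclass

@dataclass
class Complaint:
    complaint_id: str
    text: str

def automated_assessment(c: Complaint) -> tuple:
    """Placeholder classifier: returns a proposed decision and a confidence score."""
    return ("uphold_removal", 0.92)

def qualified_staff_review(c: Complaint, proposal: str) -> str:
    """The human step required by Art. 20(6): staff may confirm or override."""
    return proposal  # stand-in: staff confirms the automated proposal

def decide(c: Complaint) -> str:
    proposal, confidence = automated_assessment(c)
    # Art. 20(6): decisions may not rest 'solely on the basis of automated
    # means', so every proposal passes through qualified staff; one plausible
    # division of labor escalates low-confidence proposals to a full rehearing.
    if confidence < 0.8:
        return qualified_staff_review(c, "full_human_rehearing")
    return qualified_staff_review(c, proposal)
```

Where exactly to draw the confidence line, and who counts as ‘appropriately qualified staff’, is precisely the division-of-labor question that Art. 20(6) leaves open.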
Figure 1: Description of Facebook’s content moderation framework
Oren Perez is the head of Bar-Ilan University’s Multidisciplinary School for Environment and Sustainability and Professor of Law at the BIU Faculty of Law.
Nurit Wimer is a PhD candidate at the BIU Faculty of Law.