
Rethinking Explainable Machines: The Next Chapter in the GDPR's 'Right to Explanation' Debate

Author(s)

Roland Vogl
Ashkon Farhangi
Bryan Casey

Whether on private social networks or in public-sector courtrooms, machine learning applications are being adopted at unprecedented rates because of their ability to radically improve data-driven decision-making at a cost and scale no human can match. Today, virtually all experts agree that machine learning algorithms processing vast troves of data will play an ever larger role in regulating our lives. The question thus becomes: How are we to regulate these algorithms?

In 2016, the EU sought to become a global pioneer in answering this question by replacing its 1990s-era Data Protection Directive (DPD) with comprehensive reform legislation known as the General Data Protection Regulation (GDPR). Among the numerous protections it introduced was an update to the DPD’s rights surrounding automated decision-making. The update formally enshrined what has since come to be referred to as the “right to explanation”. The right mandates that entities handling the personal data of EU citizens “ensure fair and transparent processing” by providing them with access to “meaningful information about the logic involved” in certain automated decision-making systems.

Many view the GDPR’s “right to explanation” as a promising new mechanism for promoting fairness, accountability, and transparency in the types of machine learning systems being deployed with increasing regularity across the globe. But as is true of numerous other rights enshrined within the GDPR, the precise contours of its protections are less than clear.

In the two years since the GDPR's official publication, this uncertainty has ignited a heated global debate over the right's actual substantive protections. The debate has centered on a cluster of four provisions in Chapter 3 of the Regulation that contain the specific text giving rise to the right. Scholars, industry leaders, and media sources across the globe have scoured the language of these provisions, proffering all manner of competing interpretations of what the GDPR's new, and potentially revolutionary, "right to explanation" entails. But lost in the debate's singular focus on Chapter 3 has been a recognition of the most revolutionary change of all ushered in by the Regulation: the sweeping new enforcement powers that Chapters 6 and 8 give to Europe's data protection authorities.

Unlike the DPD it will replace, the GDPR, through Chapters 6 and 8, grants EU data authorities vastly enhanced investigatory powers, a broad corrective "tool kit," and the capacity to levy fines several thousand times larger than the current maximum available under EU law. The practical importance of this paradigm shift is difficult to overstate. Thanks to the GDPR's introduction of truly threatening administrative powers, EU data authorities will no longer be the toothless data watchdogs many companies have long considered them to be. Rather, these newly empowered authorities will play a weighty role in enforcing, and therefore interpreting, the GDPR's numerous protective mandates.

Viewed through this lens, it becomes apparent that many participants in the public debate surrounding the "right to explanation" have simply gotten ahead of themselves. While commentators have offered their rarefied interpretations, those actually tasked with enforcing the right have quietly produced an extensive corpus of guidance offering a richly detailed framework for companies seeking to comply with the "right to explanation."

Now that the dust from this recent burst of activity has begun to settle, our new article, Rethinking Explainable Machines, attempts to take stock of these developments, just in time for the Regulation to take effect in May 2018. In doing so, it seeks to turn the page in the GDPR's fraught "right to explanation" debate by answering a question that has, thus far, gone almost entirely overlooked: What do those actually tasked with enforcing the right think it entails?

Through the words of the EU's data authorities, Rethinking Explainable Machines argues that at least one matter of fierce public debate can now be laid to rest. The GDPR provides an unambiguous "right to explanation" with sweeping legal implications for the design, prototyping, field testing, and deployment of automated data processing systems. While the protections enshrined within the right may not mandate transparency in the form of a complete individualized explanation, a holistic reading of how EU authorities interpret the Regulation reveals that the right's true power derives from its synergistic effects when combined with the algorithmic auditing and "data protection by design" methodologies codified in the Regulation's subsequent chapters. Accordingly, the Article predicts that algorithmic auditing and "data protection by design" practices will likely become the new gold standard for enterprises deploying machine learning systems both inside and outside the EU.

Bryan Casey is a Fellow at CodeX: The Stanford Center for Legal Informatics, Stanford, CA.

Ashkon Farhangi is a Product Manager at Google, and a CodeX Fellow.

Roland Vogl is a Lecturer in Law at Stanford Law School, Executive Director of CodeX, and Executive Director of the Stanford Program in Law, Science, and Technology, Stanford, CA.

 
