Faculty of law blogs / UNIVERSITY OF OXFORD

Explainability and the Law: Needs and Requirements

Author(s)

Munib Mesinovic
DPhil student in Engineering Science at the University of Oxford


Explainability is the property of automated decision-making (ADM) systems that allows both developers and end-users to ‘look under the hood’, identify flaws in the design and sources of bias, and build trust in automated decisions. ADM systems have been applied to many legal processes, from sentencing prediction and contract summarisation to smart contracts. In each of these, the reliability of the system is crucial to guarantee standards of service while preventing gross violations of data protection and human rights. Implementing explainability comes with a myriad of technical and legal challenges that neither the tech nor the legal community has fully overcome, but their current methods set out frameworks for making progress down the road.

My essay explores explainability as a concept in both the legal and technical AI realms, as well as the needs and requirements that arise when implementing AI solutions within legal frameworks. AI models or, more broadly, ADM systems are becoming increasingly present in many influential aspects of society, from education and healthcare to legal practice. Doctors who use advanced machine learning to detect cancers earlier and with higher accuracy are helping to transform an entire sector of work while also saving lives. Decisions about patient treatment or diagnosis affect people’s lives profoundly, and ADM systems are not infallible, sometimes producing erroneous results or biased decisions. For both doctors and patients to understand and trust the decision-making process, especially when the impact of a decision is significant, they need to be able to evaluate how data is being used and whether the outcome is correct. Explainability also helps increase the security of ADM systems: deeper insight into their inner workings can reveal flaws and inconsistencies. While explainability is not the only factor in achieving greater user trust or infrastructure security, it is nevertheless a key factor to optimise as new technology is inevitably integrated into socially sensitive industries.

Besides healthcare, ADM systems are relied on more and more in agency rule-making and the criminal justice system. Judges who resort to such decisional support demand explanations of how the algorithms work so that they can be trusted to be fair. Explainability can also help financial service providers engaged in risk modelling satisfy legal requirements, for example when they must justify a decision on issuing credit. It can help overcome different types of bias by highlighting the features driving a result, which are often proxies for sensitive characteristics and can lead to discrimination against protected groups. Such practices might not attract liability where they are unintentional, but failing to use explainable ADM systems to detect these proxies pre-emptively could expose controllers to liability. In other words, to avoid negligence and liability under contract, corporate, and tort law, users of ADM systems, such as healthcare providers and companies dealing with mergers, may need to resort to explainability to guarantee professional standards of care across a multitude of jurisdictions.
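
To make the proxy point concrete, the following is a minimal, hypothetical sketch (Python with scikit-learn, entirely synthetic data not drawn from the essay): a credit-approval model is trained on income, debt, and a postcode index that secretly correlates with a protected characteristic, and permutation importance is used to flag the postcode as a feature worth auditing.

```python
# Hypothetical sketch: surfacing a proxy feature in a credit-scoring model.
# All feature names and data here are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000

# Illustrative features: income, existing debt, and a postcode index that
# (in this synthetic setup) correlates with a protected characteristic.
income = rng.normal(50_000, 15_000, n)
debt = rng.normal(10_000, 5_000, n)
postcode = rng.integers(0, 10, n)          # proxy feature
protected = (postcode >= 7).astype(int)    # hidden sensitive attribute

# Outcome that is (unfairly) influenced by the proxy, not just by finances.
approved = ((income - debt) / 40_000 - 0.5 * protected
            + rng.normal(0, 0.3, n)) > 0

X = np.column_stack([income, debt, postcode])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, approved)

# Permutation importance highlights which features drive the decisions;
# a high score for 'postcode' flags a potential proxy worth auditing.
result = permutation_importance(model, X, approved, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt", "postcode"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```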

Achieving explainability is, however, a significant technical challenge. Take contract summarisation: the required functionality cannot be fully captured by a pre-programmed set of logic-based rules; it requires the algorithm to detect patterns across large numbers of example contracts and use them to predict sequences of text. Because every contract is different, we cannot write a general rule for when a sentence is a good summary of a document we have not yet seen. As a result, the simplest rule-based ADM systems lack the capability to solve these tasks and to scale accordingly. One could resort to relatively simple models such as decision trees, which are highly interpretable, but their performance would be considerably worse than that of more complex models like deep neural networks, especially for abstract capabilities and for processing large amounts of data. There is often a trade-off between higher accuracy, better results, and more robust AI tools on one side, and explainability on the other. Some ADM systems are thus inherently explainable, but those systems are usually too simple and narrow in application to be transformative in optimising the task, so their explanations end up trivial, offering little insight into the patterns in the data.
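
The trade-off can be illustrated with a small, hypothetical scikit-learn sketch (synthetic data, illustrative numbers only): a depth-limited decision tree yields rules a human can read line by line, while a neural network typically fits the data better but offers no comparably readable account of its reasoning.

```python
# Hypothetical sketch of the interpretability-accuracy trade-off described above,
# using scikit-learn on synthetic data (all numbers are illustrative).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A shallow decision tree: its decision rules can be printed and read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(export_text(tree))                      # human-readable rules
print("tree accuracy:", tree.score(X_te, y_te))

# A neural network: usually more accurate on complex patterns, but its
# learned weights offer no comparable human-readable explanation.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
print("network accuracy:", net.score(X_te, y_te))
```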

Finally, explainability can be seen as a legal right or requirement. Existing regulatory frameworks like the General Data Protection Regulation (GDPR) are ambiguous about a right to an explanation of automated decisions. Under Article 15(1)(h), the controller must provide “meaningful information about the logic involved” in ADM, information that matters especially when explanations are needed to verify accuracy and to challenge the correctness of a decision. Articles 13-15 set out the right to explanation explicitly, even if they are not clear about its extent; nonetheless, support for the right is enshrined in the overall thrust of the regulation. Article 22, which frames the issue as “the right not to be subject to a decision” based solely on automated processing, applies only where the decision has legal or similarly significant effects, which severely limits its applicability. The imprecision of the GDPR’s language further weakens its safeguards and allows for competing interpretations, like the one above, taking power away from possible enforcement mechanisms for such a right.

As the need for explainability grows with the integration of ADM systems into more realms of society, including legal practice, it will become even more important to have robust measures that guarantee standards of care and of service as well as safeguard data protection and human rights. Right now, owing to a mixture of limited technical solutions and politically sensitive, ambiguously worded regulation, explainability is treated more as good practice than as a fully enforceable legal right of the consumer. Data protection regulation needs to be depoliticised so that it prioritises users and upholds human rights, with clearer language and stronger mechanisms for enforcement. Until then, it falls to developers to raise the standard of explainability in their products, driven by an interest in improving user experience and by pressure from a research community increasingly reflecting on the social impact of the technology it develops.

The author’s full paper is available here.
 

Munib Mesinovic is a Rhodes Scholar and DPhil student in Engineering Science at the University of Oxford.

