Dealing with AI Delusions: More Regulation Required?
It has long been claimed that the practice of law is at a technological crossroads. There is a sense in which this is always true: law firms are businesses under continuing pressure to innovate, and technological advances provide a mechanism for maximising efficiency, accuracy, and profitability.
A great deal of recent interest in legal innovation has centred on artificial intelligence (‘AI’). The increasing use of AI is perhaps one of the most significant recent transformations in legal practice. Much attention has been directed towards the opportunities AI presents, but its use also gives rise to significant risks, particularly in the legal sector. Inputting confidential information into AI tools risks inadvertently sharing client data. AI can suffer from biases which, if undetected, risk perpetuating unfairness and error. And the opaque operation of generative AI (‘gen-AI’) makes these problems all the harder to identify.
In this post I focus on the particular risk of AI hallucination: the tendency of AI models to produce inaccurate outputs. Gen-AI models are often prone to error, and there is no shortage of stories of lawyers citing fake, AI-generated cases before tribunals. From this, a question emerges: is there a case for greater regulation of, or indeed a prohibition on, AI use by lawyers? Whatever the answer, I argue that before adopting any such regulation, authorities in England and Wales ought not to overlook the utility of existing regulatory duties.
The sufficiency of existing duties
Given the potential dangers to which AI gives rise, it is hardly surprising that there have been calls for bans on the use of AI in certain contexts. The Federal Court of Canada recently imposed a ban on its judges’ use of AI in judicial decision-making. US appeals courts are considering greater regulation of the use of (gen-)AI by lawyers appearing before them. And some law firms have implemented outright bans on their members’ use of AI in the course of their employment.
However, in addressing the use of AI in legal practice, existing regulatory duties should not be neglected. A number of duties under the Solicitors Regulation Authority Code of Conduct for Solicitors are potentially useful in addressing practitioners’ use of AI. Solicitors are already duty-bound not to mislead their clients or the court. They must draw the court’s attention to relevant authorities. And they must put forward only legal interpretations and submissions which are properly arguable. Barristers are subject to similar regulatory duties. Given this, existing regulatory duties appear already equipped to respond to the submission of false, AI-generated reasoning and citations. Citing, and arguing from, non-existent authority obviously and seriously breaches these duties, and the sanctions for breach are potentially significant.
What role, then, would a specific ban on practitioners’ use of AI in generating legal argument play? Aside from standing little chance of being adhered to, such a ban would simply be a needless replication of existing regulatory tools. Nor is that duplication harmless: there are arguably good rule-of-law reasons to be wary of it. The law must be capable of guiding action. Unless all overlapping duties are closely aligned in scope and in the sanctions for their breach, the duplication of obligations risks inhibiting this guiding function. That is quite undesirable, particularly given the sufficiency of existing duties.
A disclosure-based duty?
Would a weaker regulatory approach, such as that mooted by US appeals courts of requiring lawyers to disclose their use of AI, be more appropriate? The rationale for this approach appears to be the thought that, if lawyers disclose their use of AI, courts will be alive to the potential falsity of their citations and reasoning.
On its face, a disclosure-based duty appears sensible. AI tools sometimes produce untrustworthy results, and relying on them without verifying their outputs could result in dangerously misleading submissions being made to courts. This is particularly worrying in relation to tools that generate false authorities: ‘precedent is the cornerstone of our legal system’, and the practice of precedent relies on the citation of (real) authorities and the submission of only properly arguable cases flowing from those authorities. Of course, under the principle iura novit curia, judges are expected to know the law and to be able to identify false authorities. In practice, however, as any practising lawyer will attest, judges do usually rely on the parties’ submissions as regards legal precedent. As noted in Harber v HMRC, AI tools generating inaccurate outputs could mislead counsel and, in turn, the courts, which rely on solicitors and barristers, in accordance with their duties, to state the law accurately.
Yet despite its initial attractiveness, a disclosure-based duty harbours three significant issues. First, grant for now that disclosing the use of an untrustworthy tool is an adequate warning of danger. That something might be adequate does not mean that it should be preferred to the alternatives. Against this background, consider how false citation (and erroneous reasoning derived from it) is currently guarded against. Lawyers must provide authorities for their legal propositions. An unsupported proposition therefore itself warns of the danger of falsity, and the veracity of propositions can be checked against the citations provided. Furthermore, as discussed above, regulatory duties already prohibit and sanction the production of false authorities and unarguable interpretations. It is unclear what a disclosure-based duty would add to the current practice of citation and these existing regulatory duties.
Second, if disclosure of use is required for untrustworthy tools, why stop at AI? It seems that the main source of AI’s untrustworthiness is its capacity for error. But this form of untrustworthiness seemingly also extends to a number of other sources—internet search engines, other online tools, or print resources. If the citation-based method described above is inadequate to address the challenges posed by AI, why think that it sufficiently addresses other potentially untrustworthy sources?
Third, is it right to think that disclosing the use of a tool constitutes an adequate warning of its potential dangers? Consider how such a warning would function in practice. Current research indicates that over 50% of lawyers use some form of AI, and this figure is very likely to grow. If so, disclosure of AI use will likely become the norm rather than the exception. At that point, the ‘warning label’ of disclosure might cease to warn effectively at all, failing to distinguish between riskier and less risky uses of AI. In any case, such disclosure would seemingly take us no further than indicating to judges that they ought to ensure the veracity and supportability of lawyers’ arguments, which is something they are already bound to do.
Conclusion
This post has sought to discuss only one of the issues AI poses for legal practice. There are, of course, myriad others, and there may well be good reasons to regulate AI in a manner not considered here. But the case for regulation, as it has currently been made, is not entirely convincing. In any event, attempts to regulate AI should not overlook the utility of properly enforcing existing regulatory frameworks.
Conor Hay is a Visiting Lecturer in Law, University of Westminster.