Summary of the AI in English Law Conference (18-19 March 2019)
The AI and English Law Launch Conference, hosted at the University of Oxford on 18-19 March 2019, introduced themes, issues and questions about the potential and limitations of using artificial intelligence (AI) to support legal services. This post summarises those themes, drawing on notes made by Education Professor Ewart Keep during the proceedings. The conference proceedings can be watched online here.
During the inaugural presentation, Richard Tromans posed the question: ‘If AI is the answer, what is the question we are asking as commercial lawyers?’ He argued that rather than starting with AI, it might be better to start with business problems and opportunities and ask what AI could do to solve them. Specific technologies were not the long-term issue; research and experimentation in this field needed to be ‘mission driven’. We should not assume that AI would deliver legal service outcomes in the same way as traditional legal processes; this may ultimately mean re-thinking the law in the light of technology.
Strengths and weaknesses of AI’s capabilities and how these will shape its deployment
Machines are currently good at classification, transduction (changing one sequence into another) and translation. Richard Tromans discussed whether machines could derive meaning from syntax, rather than merely exercising more or less sophisticated pattern recognition. We do not yet fully understand how humans achieve and represent meaning, and how children acquire language remains a vital question for resolving some problems in AI. Tromans raised this particularly in relation to legislation design, and whether vague terms such as ‘near’ and ‘some’ could be interpreted by machines in the same way as by humans, an issue that remains genuinely problematic.
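To make the point concrete, here is a minimal sketch (my own illustration, not something presented at the conference) of one way a machine might operationalise a vague term such as ‘near’: as a graded score whose thresholds a human designer has to pick, more or less arbitrarily.

```python
# A minimal sketch: operationalising the vague term 'near' as a graded
# (fuzzy) membership score. The thresholds are arbitrary design choices --
# exactly the human judgement the conference discussion highlighted.

def nearness(distance_m: float, fully_near: float = 100.0,
             not_near: float = 1000.0) -> float:
    """Return a score in [0, 1]: 1.0 = clearly 'near', 0.0 = clearly not."""
    if distance_m <= fully_near:
        return 1.0
    if distance_m >= not_near:
        return 0.0
    # Linear fall-off between the two thresholds.
    return (not_near - distance_m) / (not_near - fully_near)

for d in (50, 300, 700, 1500):
    print(f"{d} m -> nearness {nearness(d):.2f}")
```

The point of the sketch is that the machine has no notion of ‘near’ until someone fixes `fully_near` and `not_near`; a human reader needs no such stipulation.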
How will AI be used?
James Faulconbridge outlined how research on next-generation professional services, in the legal sector and in other professional fields, was developing scenarios for the future of professional service firms covering:
- New tasks and work processes
- The emergence of ‘new professionals’
- What learning would be required, by whom and facilitated in what ways
- How firms will in future interact with the wider business environment, the media, regulation, clients and professional associations.
A key choice will be ‘make or buy’: how much AI design and development will be undertaken in-house? Speakers foresaw collaborations across professional services as organisations engaged in a collective learning process about AI. The relationships and inter-dependencies between different professions are likely to change as new collaborations and combinations emerge and legal services are re-bundled with other professional service offers.
Faulconbridge argued that it was already clear that commercial law firms could leverage the masses of data they already possess via AI to yield new value propositions, particularly in document review, expert systems and litigation analysis. Law firms could also start to work with insurance companies in areas such as fraudulent claim identification. One of AI’s strengths is pattern recognition; it can also pinpoint, from a mass of material, the documents that need to be read.
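As an illustration of that pinpointing capability, the following sketch (my own example with invented documents, using the scikit-learn library; not a tool shown at the conference) ranks a small document set against a query so that a reviewer can read the likeliest matches first:

```python
# A minimal document-triage sketch: rank documents against a query so a
# human reviewer reads the likeliest matches first. Documents and query
# are invented for illustration. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The lessee shall indemnify the lessor against all claims.",
    "Minutes of the annual general meeting, January 2018.",
    "Indemnity clause: liability is capped at the contract price.",
    "Catering invoice for the office summer party.",
]
query = ["indemnity and limitation of liability"]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform(query)

scores = cosine_similarity(query_vector, doc_vectors)[0]
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```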
At present, the bulk of AI usage is business-to-business, but some contributors argued that this was about to change and that the next wave of products would focus more on business-to-consumer offerings.
Several contributors to the session on AI and Dispute Resolution argued that AI and electronic, online delivery could transform the cost base of small-scale litigation. For example, the adoption of electronic forms for divorce cases suggests these could deliver far greater conformity than paper-based ones (Richard Tromans noted that 40% of paper-based forms had required queries and amendments, as opposed to just 0.5% of electronically delivered forms). On cost savings, the example of payday lenders facing compensation claims was cited: some claims could in future be settled by AI trained on existing human-delivered judgements.
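A minimal sketch of the ‘train on past judgements’ idea might look like the following (the features, data and outcomes are invented for illustration, and a real system would need far more care; no actual claims dataset was presented at the conference):

```python
# A toy sketch of settling claims with a model fitted to past decisions:
# fit a simple classifier on invented, already-decided compensation
# claims and use it to suggest outcomes for new ones. Requires scikit-learn.
from sklearn.linear_model import LogisticRegression

# Each row: [loan_amount_gbp, annual_interest_rate_pct, affordability_check_done]
past_claims = [
    [300, 1200, 0],
    [150,  900, 1],
    [500, 1500, 0],
    [100,  400, 1],
]
past_outcomes = [1, 0, 1, 0]  # 1 = compensation awarded, 0 = rejected

model = LogisticRegression().fit(past_claims, past_outcomes)

new_claim = [[400, 1300, 0]]
print("Predicted outcome:", model.predict(new_claim)[0])
print("P(awarded):", round(model.predict_proba(new_claim)[0][1], 2))
```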
There is a choice between using technology to speed up the old way of doing things and using technology to create entirely new working methods and models. In other words, rather than using AI merely to streamline existing models of justice, there is potential for it to support blue-sky thinking and the development of entirely new approaches.
Constraints on the adoption of AI
The following constraints on adopting AI were mentioned by a number of speakers:
- Organisational and company structures (eg, partnerships) and their impact on the ability to fund investment in new technology. New business models are already being adopted in order to facilitate investments that traditional partnerships could not easily countenance.
- Trust and the legitimacy of such technologies within the justice process (which may vary by field of activity), and the transparency that AI can or cannot provide about its decision-making (eg, a judge can explain a judgement, but many AI systems cannot explain how they reached a decision). Currently, there is a right to a ‘reasoned judgement’, which presupposes a human to arrive at it and deliver it.
- Digital exclusion for some parts of the population who cannot access digitally-delivered justice.
- Cyber security concerns.
- Skill requirements and the ability of education and training provision to keep up with technologically-driven demand and new developments.
- The ability to get the existing professional workforce to use the technology in ways that maximise its benefits.
- Access to data. There are major questions of trust here, and it seems likely that bodies (such as the Office for National Statistics, ONS) that could act as honest brokers and data repositories, and oversee the anonymisation of data, will be important to making progress.
- Evidence that AI works to the benefit of the firm.
Overall, conference participants expressed a belief, based on experience, that the design and adoption of AI often brings with it requirements for major investments of time, money, people, energy, data and training, as well as cultural and organisational change.
Impacts of AI on legal thinking and practice
Ethical issues. It is clear that in the long run AI raises a number of thorny ethical issues and dilemmas. How do diversity and inclusion play out under AI usage? How can we build ethical tests into the evaluation of any piece of AI? The right type of ethical regulation for the right type of AI application could increase access to law and help tech developers.
Christina Blacklaws suggested that an ethical framework for AI would cover fairness, equality and the reduction of harms (eg, access to justice, traceability of development and models, prevention of harms through bias, and protection of fundamental rights by design).
Conference participants discussed at length the danger of various forms of bias being built into legal algorithms, as has happened in recent examples of algorithmic recruitment systems (which favoured white males and discriminated against people of colour and women). There is also a host of issues around the GDPR and the use of personal data and other data from social media platforms to ‘rate’ applicants for employment, insurance, credit and university admission. We are being scored, often without our knowledge, and it is very hard, if not impossible, for individuals to work out what inferences are being drawn from their data or why.
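One common bias check is easy to state in code. The sketch below (my own illustration, with invented numbers) compares selection rates across two groups and applies the ‘four-fifths’ rule of thumb used in recruitment auditing:

```python
# A minimal bias audit: compare the rate at which an algorithm selects
# applicants from each group, and flag possible adverse impact if the
# lower rate falls below four-fifths of the higher one. Numbers invented.

selected = {"group_a": 90, "group_b": 40}   # applicants the algorithm passed
applied  = {"group_a": 200, "group_b": 200} # applicants in each group

rates = {g: selected[g] / applied[g] for g in applied}
ratio = min(rates.values()) / max(rates.values())

print("Selection rates:", {g: round(r, 2) for g, r in rates.items()})
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths threshold: possible adverse impact.")
```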
Jurisprudence will still matter! Horst Eidenmüller argued that as AI makes inroads into legal processes and practice, contract law will expand and there is liable to be a shift from negligence to strict liability. New doctrinal categories will also probably emerge. At a deeper level, law’s ‘deep normative structures’ will be very relevant in helping shape policies about the use of AI. One example advanced was deciding when humans should no longer be allowed to drive once driverless vehicles become safer than human drivers. Should utilitarian models of net increases in societal wellbeing drive change, potentially pushing humans out of more and more areas of activity and decision-making, or should thinking be grounded in non-utilitarian, rights-based concepts of ‘the good’?
Contract law. Sarah Green argued that AI would have some significant impacts on contract law. In traditional contract law, trust between the parties was important; with ‘smart contracts’, trust in the system becomes what matters. Increasing usage of smart contracts would have a number of effects: interpretation would become a major issue, as smart contracts are not written in a human language, and attention would shift from non-performance to defective performance.
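To see why interpretation becomes an issue, consider a toy ‘smart contract’ written as ordinary code (my own illustration, not a real blockchain contract): a court construing it has to read code rather than human language, and a bug surfaces as defective performance rather than non-performance.

```python
# A toy escrow expressed as executable code rather than human language:
# funds are released only if delivery is confirmed before the deadline.
# Any ambiguity has been replaced by hard-coded rules -- and any bug in
# those rules becomes a question of defective performance.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Escrow:
    amount: float
    deadline_day: int
    delivered_on: Optional[int] = None

    def settle(self, today: int) -> str:
        if self.delivered_on is not None and self.delivered_on <= self.deadline_day:
            return f"release {self.amount} to seller"
        if today > self.deadline_day:
            return f"refund {self.amount} to buyer"
        return "pending"

contract = Escrow(amount=1000.0, deadline_day=30)
contract.delivered_on = 28
print(contract.settle(today=29))  # -> release 1000.0 to seller
```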
Competition law. Ariel Ezrachi explained how technology changes the nature of competition law and markets: the invisible hand of the market may increasingly be replaced by the invisible hand of technological systems, disrupting and subverting traditional conceptions of how markets work via ‘algorithmic collusion’. In these circumstances, a question such as ‘what is the market price?’ may carry less, or different, meaning than in the past. This in turn raises large questions about how the law should regulate competition and collusion in marketplaces where algorithms drive pricing behaviour.
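A toy model (my own, not Ezrachi’s) shows how supra-competitive prices can stabilise without any explicit agreement: two pricing bots that simply match each other’s latest price never undercut, because any cut would be matched instantly and the gain erased.

```python
# A toy model of tacit 'algorithmic collusion': two pricing bots that
# each match the rival's latest price. Neither firm agreed anything with
# the other, yet the price locks in well above the competitive level
# (marginal cost), because undercutting would be matched immediately.

cost = 5.0
price_a, price_b = 10.0, 9.0

for t in range(5):
    price_a = max(cost, price_b)  # bot A matches bot B
    price_b = max(cost, price_a)  # bot B matches bot A
    print(f"period {t}: A={price_a:.2f}, B={price_b:.2f}")
# Prices settle at 9.00 -- far above the competitive level of 5.00.
```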
From law to code? Jassim Happa discussed the issues involved in trying to embed legal compliance within software development. There is a huge conceptual gulf between legal and technological languages and thinking: legal thinking accepts that there is often room for interpretation, whereas technological thinking wants and expects exact instructions. This matters because technology is becoming all-pervasive (smart homes, smart cities, the internet of things), miniaturised and sometimes invisible to humans; it is developing its own autonomous agency; and it is bound up with many different kinds of privacy issues.
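The gulf is visible in even the simplest ‘rules as code’ exercise. In the sketch below (my own illustration; the retention period is a hypothetical policy choice, not a statement of what the GDPR requires), a compliance rule must be pinned down as an exact, machine-checkable test:

```python
# A minimal 'rules as code' sketch: a data-retention rule expressed as an
# exact test. Where a lawyer might argue about 'no longer than necessary',
# the code demands a hard number. The 365-day limit is a hypothetical
# policy choice made purely for illustration.
from datetime import date, timedelta

RETENTION_LIMIT = timedelta(days=365)  # hypothetical policy choice

def retention_compliant(collected_on: date, today: date) -> bool:
    """True if the record is still within the hypothetical retention period."""
    return today - collected_on <= RETENTION_LIMIT

print(retention_compliant(date(2018, 1, 10), date(2019, 3, 18)))  # False
```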
Incremental change versus fundamental disruption
Lord Keen noted that American Bar Association research suggested that 10% or fewer of US law firms were using AI in 2018. At present, as several contributors noted, the hype has outrun the actual level of adoption and usage; what happens next is what matters. Different trajectories for technological take-up exist – for example, steady linear development, a stop/start model, and a slow start followed by rapid change and mass adoption.
Mark Beer advanced the case for the more general adoption of AI. Some evidence suggests that globally there are in the region of 500 million serious legal issues every year; 16% of those with a problem go to see a lawyer, just 5% go to court, and of those who do use legal processes, 70% are unhappy with the outcome. The state will not address this; the private sector probably will try to. Across the OECD, the most that the average person will pay to resolve a legal issue is just $50, which suggests that automated methods will be needed.
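Working through the quoted figures (the arithmetic is mine; the percentages are as reported) shows the scale of the unmet need:

```python
# Turning the quoted percentages into absolute numbers. Figures are as
# reported above; only the arithmetic is added here.

serious_issues = 500_000_000            # serious legal issues per year, globally
see_lawyer = 0.16 * serious_issues
go_to_court = 0.05 * serious_issues

print(f"See a lawyer:       {see_lawyer:,.0f}")                   # 80,000,000
print(f"Go to court:        {go_to_court:,.0f}")                  # 25,000,000
print(f"Never see a lawyer: {serious_issues - see_lawyer:,.0f}")  # 420,000,000
```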
Skills, learning and development
Two sessions were devoted to this topic. The first explored a view from practice (with presentations from Sophia Adams Bhatti, David Curle, Julia Robinson, Adam Saunders and Ruth Ward), and the second covered the implications for higher education (Rebecca Eynon, Nigel Spencer, Nikita Aggarwal, and Ewart Keep). The main conclusion of the discussions was that the adoption of AI would, to a considerable extent, depend upon the interplay between talent and technology. In some university law schools (eg, Manchester, Aberdeen, Cambridge and Swansea) new course offerings on AI and the law are already in place, and Oxford is piloting a new course shared between the Law Faculty and the Department of Computer Science. It is also important to remember that many of the skills needed to make AI work in legal services could be transferred from and to other professional service settings, such as accountancy and insurance.
Ewart Keep is a Professor in the University of Oxford’s Department of Education, and co-founder and director of the Centre on Skills, Knowledge & Organisational Performance (SKOPE).
Further information on the project 'Unlocking the Potential of Artificial Intelligence for English Law' can be found here.