Faculty of Law Blogs / University of Oxford

Hard Law and Soft Law Regulations of Artificial Intelligence in Investment Management

Author(s)

Wojtek Buczynski
PhD Candidate, University of Cambridge
Felix Steffek
Professor of Law and J M Keynes Fellow at the University of Cambridge
Fabio Cuzzolin
Professor of Artificial Intelligence, Oxford Brookes University
Mateja Jamnik
Professor of Artificial Intelligence, University of Cambridge
Barbara J Sahakian
Professor of Clinical Neuropsychology, University of Cambridge

The wealth and asset management (WAM) industry has for some years been actively exploring artificial intelligence (AI) use cases that would generate operational efficiencies (ie, save costs and streamline internal processes), enhance the service offering or improve the user experience. WAM is a heavily regulated industry, and new regulations have been emerging at a fast pace for at least a decade. We therefore asked: are there any regulations currently in force that are applicable to AI in the WAM industry, and what themes emerge from these regulations?

We know there are no regulations covering AI within the WAM industry specifically. We also know there are informational documents and draft laws (eg, the UK government whitepaper on AI regulation and the EU AI Act), but none of the latter has been enacted as yet. Many in the industry therefore believe that AI is, so far, not subject to regulation and can thus be viewed through a purely technological lens.

In this post, we report on research investigating regulatory themes found in current and draft laws. Our article ‘Hard Law and Soft Law Regulations of Artificial Intelligence in Investment Management’ shows that—contrary to industry belief—various aspects of AI in the WAM industry fall under existing laws and regulations.

The starting point of our research was a comprehensive, global regulatory horizon scan. We analysed a total of seventeen hard and soft laws issued worldwide between 2014 and 2022, including ‘flagship’ WAM regulations such as MiFID II and the Senior Managers and Certification Regime (SM&CR), as well as pertinent industry-neutral ones such as the draft EU AI Act and the GDPR. Instead of summarising them one by one, we set out to investigate the commonalities (or lack thereof) between their recommendations, restrictions and other considerations. Despite the many differences between the laws analysed, we managed to systematise them, identifying a total of seventeen themes across three larger categories: technology, governance and business/conduct. Here are the themes we identified:

Technology:

  1. Data-drivenness/reliance/quality
  2. Transparency
  3. Auditability/reproducibility/explainability
  4. (Cyber)security/vulnerability
  5. Autonomy/agency
  6. Complexity/emergent behaviours/interconnectedness
  7. Technology neutrality

Governance:

  1. Personal data protection/privacy
  2. Algorithm governance: pre-deployment testing/periodic assessment/ongoing monitoring
  3. Reliance on third parties/outsourcing
  4. Disclosures of AI use
  5. Algorithm inventory
  6. Risk-based approach/proportionality

Business / conduct:

  1. Senior management accountability
  2. Internal skills (especially compliance)
  3. Market abuse
  4. Suitability/knowledge of investment products offered

The most popular themes were transparency (12 mentions), personal data protection/privacy (11), data-drivenness (11) and auditability (10). Those themes are very prominent in the ongoing AI discourse and come as no surprise.

Internal skills also figure prominently (9 mentions). In our view this reflects several concerns: the overall shortage of AI talent, the specific shortage of non-technical talent who understand AI (eg, in compliance) and the ongoing challenge of upskilling existing staff in AI. Complexity and emergent behaviours also had 9 mentions, which we found interesting given that we considered it more of a future than a current concern. Algorithm governance was likewise mentioned in 9 regulations, indicating a growing awareness that algorithms will need to be overseen, likely in a framework similar to model risk management (MRM).

Our research explores the limits of technology-neutral regulations. While MiFID II, the SM&CR, the GDPR and many others are generally technology-neutral, the upcoming AI-specific regulations indicate the likely direction of travel: broader principle-based regulations such as MiFID II and, in parallel, AI-specific regulations such as the EU AI Act.

In addition, we see an upcoming step change in governance, whereby clear, top-down oversight will need to give way to a flatter, interdisciplinary model in which traditional organisational ‘silos’ (Front Office, Tech, Compliance, etc) connect and collaborate to a much greater degree than ever before.

We also see how the rapid expansion of AI and the emergence of AI regulations are beginning to impact multiple functions within WAM firms, including compliance advisory and, more broadly, general counsel functions. Advisory, historically focused on principle-based investment regulations, is facing disruption in the form of emerging technology regulations, which in turn require new technical skills and expertise.

We see a gradual recognition of the systemic risks of AI, similar to the recognition accorded to Critical Third Parties (CTPs). CTPs are currently synonymous with large cloud service providers. However, some of the largest CTP cloud service providers are also providers of AI systems, and we expect AI to be given systemic consideration in the near future.

One thing we can say with certainty is that the ‘regulatory journey’ of AI in WAM is only just beginning. We expect the next 5–10 years to be very busy and dynamic as far as regulations are concerned and—if the spectacular emergence of generative AI in recent months is any indication—full of unforeseen technological developments, which will trigger further regulatory responses. Approaching AI from a perspective of use cases or themes may be more robust and future-proof than attempting to regulate specific AI techniques.

Wojtek Buczynski is a PhD Candidate at the University of Cambridge.

Felix Steffek is Professor of Law and J M Keynes Fellow at the University of Cambridge.

Fabio Cuzzolin is Professor of Artificial Intelligence at the School of Engineering, Computing and Mathematics of Oxford Brookes University.

Mateja Jamnik is Professor of Artificial Intelligence, Department of Computer Science and Technology of the University of Cambridge.

Barbara J Sahakian is Professor of Clinical Neuropsychology at the Department of Psychiatry of the University of Cambridge.

