
Law and Autonomous Systems Series: Regulating Robotic Conduct - On ESMA’s New Guidelines and Beyond

Author(s)

Florian Möslein
Professor of Law at the University of Marburg, Germany


FinTech is rapidly transforming the financial services sector. Built on a broad range of new technologies and innovations, it increasingly attracts the interest of national, European and global regulators. In the UK, for instance, HM Treasury published a Regulatory Innovation Plan which covers a number of actions financial services regulators are undertaking to “create a more supportive and agile regulatory and enforcement framework” for new business models and disruptive technologies, while breaking down barriers to entry and boosting productivity in financial services. More recently, the European Commission has released its long-awaited FinTech Action Plan, and at the global level, the Financial Stability Board (FSB) has issued a report on the financial stability implications from FinTech.

So-called “robo-advice” forms a more specific, important part of the FinTech sector. Automated financial product advisors are emerging all across the financial services industry, helping clients to choose investments, banking products and insurance policies. While such advisors may have the potential to lower the cost and increase the quality of financial advice, they also pose significant challenges for regulators. The FSB, too, has highlighted the use of artificial intelligence and machine learning in financial services. The regulatory challenges are of particular interest here because robo-advisors are prime examples of autonomous systems, processing great volumes of financial data on the basis of algorithmic decision-making, including machine learning technologies. What we can currently observe with respect to robo-advice is indeed a law of autonomous systems in the making. This process can teach us important lessons both for future law-making in the field of autonomous systems and for regulating robotic conduct in general.

The “Guidelines on certain aspects of the MiFID II suitability requirements” that the European Securities and Markets Authority (ESMA) is currently developing are a first step in this specific rule-making process. In fact, ESMA expressly aims to “consider recent technological developments of the advisory market, i.e. the increasing use of automated or semi-automated systems for the provision of investment advice or portfolio management (so-called ‘robo-advice’)”. The draft builds on a report on automation in financial advice, published by the Joint Committee of the European Supervisory Authorities, and is based on the Commission Delegated Regulation regarding organisational requirements and operating conditions for investment firms. ESMA identifies three main areas where specific needs for protection may arise, namely (1) the information that should be provided to clients on the financial advice when it is provided through an automated tool, (2) the assessment of the suitability of financial products for the client, with particular attention to the use of online questionnaires with limited or no human interaction, and (3) the organisational arrangements that firms should implement when providing robo-advice. The first two areas, i.e. client information and the arrangements necessary for clients to understand investment products, do not specifically regulate robotic conduct, but simply focus on the electronic communication between advisors and their clients. In other words, these provisions address humans interacting with machines rather than the machines themselves (and similar provisions might even apply if the investment advice were elaborated by humans but only delivered via electronic means).

Of more specific interest are therefore the draft guidelines concerning the organisational arrangements that firms should implement when providing robo-advice. These rules are designed to apply even if the interaction with clients does not occur through automated systems and only the suitability assessment as such is conducted through automated tools. Above all, ESMA intends to require firms to regularly monitor and test the algorithms that underpin the suitability of transactions recommended or undertaken on behalf of clients. More specifically, firms should establish system-design documentation that sets out the purpose, scope and design of the algorithms. They should also have a documented test strategy for algorithms, put in place policies and procedures for managing changes to these algorithms, and review and update the algorithms to reflect market or legal changes. Moreover, they are required to implement policies and procedures to detect and deal with algorithmic errors, and to monitor and supervise the performance of algorithms more generally.
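To make these organisational duties more tangible, the following sketch shows how a firm might wire system-design documentation, versioned change management, a documented test strategy and error detection around a deliberately trivial suitability algorithm. It is a minimal illustration only: all class, function and variable names are hypothetical, and ESMA’s guidelines prescribe outcomes, not any particular implementation.

```python
import logging
from dataclasses import dataclass, field
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("suitability")

@dataclass
class AlgorithmRecord:
    """System-design documentation: purpose, scope and design of the algorithm."""
    name: str
    purpose: str
    scope: str
    version: str
    change_log: list = field(default_factory=list)

    def record_change(self, description: str, new_version: str) -> None:
        # Documented change management: every modification is logged and versioned.
        self.change_log.append(
            (datetime.now(timezone.utc), self.version, new_version, description))
        self.version = new_version

def suitability_score(client_risk_tolerance: int, product_risk: int) -> int:
    """Toy suitability rule: penalise products riskier than the client's tolerance."""
    return client_risk_tolerance - product_risk

def run_test_strategy(record: AlgorithmRecord) -> None:
    # Documented test strategy: fixed regression cases the algorithm must pass
    # before and after every change.
    cases = [((5, 3), 2), ((2, 4), -2)]
    for (tolerance, risk), expected in cases:
        result = suitability_score(tolerance, risk)
        if result != expected:
            # Error detection: deviations are flagged for human review,
            # not silently ignored.
            log.error("Regression failure in %s v%s: %r != %r",
                      record.name, record.version, result, expected)
            raise AssertionError("algorithm failed its documented test strategy")
    log.info("%s v%s passed its test strategy", record.name, record.version)

record = AlgorithmRecord(
    name="equity-suitability",
    purpose="Assess suitability of equity products under MiFID II",
    scope="Retail clients, robo-advice channel",
    version="1.0",
)
run_test_strategy(record)
record.record_change("Tightened risk penalty after market volatility review", "1.1")
run_test_strategy(record)
```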

These new requirements for providers of robo-advice raise at least three different issues of a more general, jurisprudential interest for regulating robotic conduct. First of all, the requirements exemplify a general regulatory tendency to replace rules of conduct with organisational duties when it comes to regulating robotic conduct. While human advisors are subject to fiduciary duties and duties of care, robo-advisors – or rather the human providers of robo-advice – are required to introduce organisational arrangements in order to avoid wrong investment decisions or recommendations. From a regulatory theory perspective, the trend runs from a strategy of situational prevention of human wrongdoing towards a more general, ex ante approach that stipulates organisational duties to monitor, document, test and modify algorithms on an ongoing basis. This change in regulatory strategy has much to do with the fact that these duties are no longer addressed to the respective decision-maker or acting entity itself, but to the human actors behind the machine. The fundamental idea is to regulate robotic conduct by regulating the humans that are running the robots. This idea rests on the fact that robots are not recognized as legal persons under current law. Accordingly, they cannot be the addressees of legal duties either, and the regulator has to address instead those humans ‘behind’ them. While this regulatory approach is spelled out by ESMA, it is rooted in Art. 54 para. 1 of the Commission Delegated Regulation itself, which states that “where investment advice or portfolio management services are provided in whole or in part through an automated or semi-automated system, the responsibility to undertake the suitability assessment shall lie with the investment firm providing the service and shall not be reduced by the use of an electronic system in making the personal recommendation or decision to trade”. Moreover, the same regulatory approach can also be traced in the legal design of other rules on robotic conduct, for instance with respect to self-driving cars.

My second point concerns the question of whether this regulatory strategy is comprehensively suitable for robotic conduct. As long as robots act and decide on the basis of algorithms, the answer would seem to be positive: algorithms are commonly defined as a “detailed plan describing the finite number of steps to be executed to achieve a desired result”. They therefore follow fixed, pre-formed technical rules – their code – and produce foreseeable results. If that is the case, robotic conduct is deterministic in nature, so that it seems both necessary and sufficient to regulate those who command the algorithm: they have the power to change the code and are therefore in a position to influence the outcomes and results. That assessment changes, however, as soon as robotic conduct no longer follows such fixed, pre-formed code, but becomes indeterministic in nature. Such unforeseeability arises once robotic systems develop into truly autonomous systems: autonomy (also in this respect) implies freedom and thus the capacity to make independent, un-coerced and therefore unforeseeable decisions. Robotic conduct becomes unforeseeable in that sense if it is based not on fixed algorithms, but on artificial intelligence and deep learning technologies: it is then “unpredictable by design”. Artificially intelligent robots do not follow a logical calculus, but learn from their own experiences and mistakes, and that learning process is necessarily unforeseeable. This technological difference has an impact on the choice of a suitable regulatory design. Once robots decide autonomously, the regulatory strategy of addressing the humans that are running the robots by stipulating organisational duties becomes much less convincing. Monitoring, documenting, testing and modifying technical systems appear to be quite anachronistic approaches once these systems act truly autonomously. At the very least, such a regulatory approach implicitly requires the autonomy of robotic systems to be reduced, since otherwise the human addressees would be unable to comply with their duties. The only alternative approach, however, would require the recognition of robots as legal persons in order to impose rules of conduct on them directly.
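The distinction between deterministic and learning-based conduct can be made concrete in a few lines of code. In the sketch below (a deliberately simplified illustration with hypothetical names; real robo-advisors rely on far more elaborate models), the rule-based function always returns the same recommendation for the same input, so whoever controls the code controls the conduct; the learning variant rewrites its own decision boundary as it processes observed outcomes, so its future recommendations can no longer be read off the code alone.

```python
# Deterministic algorithm: a fixed, pre-formed rule. The same input always
# produces the same output, so regulating the coder regulates the conduct.
def rule_based_advice(risk_tolerance: float) -> str:
    return "equity fund" if risk_tolerance > 0.5 else "bond fund"

# Learning system (highly simplified): the decision threshold is updated from
# observed outcomes, so behaviour drifts away from anything fixed in advance.
class LearningAdvisor:
    def __init__(self) -> None:
        self.threshold = 0.5  # initial boundary, rewritten by experience

    def advise(self, risk_tolerance: float) -> str:
        return "equity fund" if risk_tolerance > self.threshold else "bond fund"

    def learn(self, observed_equity_return: float) -> None:
        # After poor equity returns the system recommends equities less readily;
        # after good returns, more readily. The coder no longer fixes the outcome.
        self.threshold -= 0.1 * observed_equity_return

advisor = LearningAdvisor()
print(rule_based_advice(0.6))  # always "equity fund"
print(advisor.advise(0.6))     # "equity fund" today ...
advisor.learn(observed_equity_return=-2.0)
print(advisor.advise(0.6))     # ... but "bond fund" after learning from losses
```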

My third and final point is of a more technical nature. The guidelines discussed so far are sector-specific in the sense that they apply exclusively to the financial sector, and more specifically, to the provision of investment advice or portfolio management. Concurrently, algorithms, artificial intelligence and autonomous systems are increasingly becoming the subject-matter of cross-sector, but technology-specific rules. For example, both the European Parliament’s public consultation and its report on civil law rules on robotics follow that direction. The same approach is also characteristic of the EU General Data Protection Regulation (GDPR), not least because its rules reach far beyond data protection as such. Of particular importance with respect to robotic conduct are the rules on automated individual decision-making in Art. 22 of that regulation, providing that everybody shall have the right not to be subject to a decision based solely on automated processing which produces legal effects concerning him or her or similarly significantly affects him or her. Regardless of the exceptions and modifications that the remainder of the rule provides for, it clearly concerns robo-advisors as well and therefore overlaps significantly with ESMA’s Guidelines. For instance, requirements to inform clients about the degree of human interaction, to explain to them the purposes of algorithms, and to describe the circumstances that might cause an algorithm override by humans can only apply to the extent that automated decision-making is allowed in the first place. In that sense, the sector-specific guidelines that I have discussed build upon the technology-specific rules of the GDPR. Even if these two regulatory regimes do not necessarily conflict with each other, the lesson to be learned is that regulating robotic conduct requires careful coordination between the two. The fundamental (even if somewhat ‘technical’) challenge for the law-maker is therefore to balance sector-specific and technology-specific rules on robotic conduct.
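At the level of system design, the interplay between the two regimes might look roughly as follows: a human-review gate that prevents materially significant recommendations from being based ‘solely’ on automated processing, while also documenting the human override that ESMA’s transparency requirements refer to. This is merely one conceivable arrangement; the threshold, names and logic are hypothetical, and neither the GDPR nor ESMA prescribes any particular implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    product: str
    amount_eur: float
    automated: bool = True  # produced by the robo-advice algorithm

# Hypothetical materiality threshold: above it, we treat the decision as one
# that "similarly significantly affects" the client in the sense of Art. 22.
SIGNIFICANT_EFFECT_EUR = 10_000.0

def requires_human_review(rec: Recommendation) -> bool:
    # A decision that is both automated and materially significant must not
    # take effect without meaningful human involvement.
    return rec.automated and rec.amount_eur >= SIGNIFICANT_EFFECT_EUR

def finalise(rec: Recommendation,
             human_approved: Optional[bool] = None) -> Optional[Recommendation]:
    if requires_human_review(rec):
        if human_approved is None:
            raise PermissionError("human review required before this decision takes effect")
        if not human_approved:
            return None  # human override: the algorithm's recommendation is discarded
        rec.automated = False  # no longer based solely on automated processing
    return rec

rec = Recommendation(product="leveraged ETF", amount_eur=25_000.0)
finalise(rec, human_approved=True)  # proceeds only with documented human sign-off
```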

To conclude, this short analysis of the developing rules on robo-advisors has clearly shown a regulatory trend towards organisational instead of situational duties: ESMA aims at requiring the humans behind the robots to monitor, document, test and modify algorithms on an ongoing basis. However, this regulatory strategy is not comprehensively suited to robotic conduct. If robots do not strictly follow a logical calculus, but make their own, autonomous decisions, then duties that require humans to monitor, document, test and modify their behaviour become pointless, and can only be complied with to the degree that those systems’ autonomy is reduced. Last but not least, a more ‘technical’ but nonetheless fundamental challenge for the law-maker is to balance sector-specific and technology-specific rules on robotic conduct.

Florian Möslein is a Professor of Law at the Philipps University of Marburg.
