AI for Banks – Key Ethical and Security Risks

Author(s)

Scott Atkins
President of INSOL International and Partner, Australian Chair and Head of Risk Advisory at Norton Rose Fulbright
Kai Luck
Executive Counsel at Norton Rose Fulbright

The use of AI to drive efficiency and business improvement has increased significantly in the last five years across multiple sectors. While a decade ago AI might have been thought of primarily in terms of aspirational robots and driverless cars, it has since evolved to encompass many subtler uses that are just as powerful from an efficiency perspective.

In the banking sector, the use of AI now concentrates primarily on:

  • ‘Chat bots’, voice banking, robo-advice and other automated services for customers.
  • The screening of prospective customers in relation to credit and other financial product applications, including by way of ‘know your customer’ (KYC) checks.
  • Identifying new financial products to advertise to existing customers.
  • Identifying suspect transactions for risk monitoring, reporting and compliance purposes, including to meet anti-money laundering and financial crime obligations. In this space, a number of global institutions have been developing algorithms that learn from past transactions to flag patterns that may indicate unusual or suspect activity in the future (a simple illustration of this approach follows this list).
  • Screening prospective candidates for their suitability for new jobs as part of the interview process.
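
To make the transaction-monitoring point concrete, the sketch below shows one common way of flagging unusual activity: training an unsupervised anomaly detector on historical transaction features and scoring new transactions against it. The feature names, the toy data and the use of scikit-learn's IsolationForest are illustrative assumptions only, not a description of any particular institution's system.

```python
# Illustrative sketch only: scoring transactions for "unusualness" with an
# unsupervised anomaly detector. Features, data and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical historical transactions: amount, hour of day, days since the
# previous transaction. Real systems would use far richer, governed feature sets.
history = np.column_stack([
    rng.lognormal(mean=4.0, sigma=1.0, size=5000),  # amount
    rng.integers(0, 24, size=5000),                 # hour of day
    rng.exponential(scale=3.0, size=5000),          # days since last transaction
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# New transactions to screen; the second is deliberately extreme.
new_txns = np.array([
    [55.0, 14, 1.2],
    [250_000.0, 3, 0.01],
])

scores = model.score_samples(new_txns)  # lower scores = more anomalous
labels = model.predict(new_txns)        # -1 = flagged, 1 = looks normal
for txn, score, label in zip(new_txns, scores, labels):
    flag = "REFER FOR REVIEW" if label == -1 else "ok"
    print(f"amount={txn[0]:>10.2f} hour={int(txn[1]):>2} score={score:.3f} {flag}")
```

In practice a flag like this would feed a human-led investigation and reporting workflow rather than trigger any action automatically.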

While AI offers the opportunity for greater efficiency and revenue, better qualified personnel and improved risk management and compliance, it also carries important ethical and security risks that banks need to manage properly if they elect to use it.

What are the ethical and security risks?

First—and most importantly—the use of AI to solicit customers for new products and for compliance purposes is inevitably based on existing collated customer data held by a bank. This creates the risk that, without more, a bank may breach privacy laws in using data for these AI purposes. To best control this risk, banks should ensure that the prospect of customer data being used for AI is disclosed in contractual arrangements with each new customer, and that express consent is obtained to do so. Relying on implied consent is fraught with difficulty, particularly in the context of strengthened privacy standards and consumer laws globally, such as the ‘consumer data right’ that has applied in Australia since 2020 to give individuals greater control over the use of their personal data.

Secondly, in relation to the use of automated services, banks need to ensure that AI is seamless and based on technology that actually works effectively. Most of us have had experiences with service providers where customer queries are dealt with exclusively through AI chatbots or voice systems. When these systems, and the algorithms on which they are based, are too primitive to respond to the breadth of issues that customers raise, or are prone to breakdowns, they may save the service provider employment and operational costs, but they shift significant inefficiency onto customers. The resulting frustration and alienation may cause longer-term reputational damage and declining customer revenue.

Thirdly, the use of AI in prospective customer and employee screening could entrench the unconscious bias and prejudice embedded in the data and assumptions used to build the underlying algorithms. This may leave a bank unable to meet gender and diversity targets, to build a broader customer base (for example, by offering financial products to those from lower socioeconomic backgrounds) or to improve community outcomes.

The European Banking Federation recommends that the use of AI for screening purposes should therefore minimise reference to data—and assumptions derived from that data—such as gender, age, sexual orientation, ethnicity and religion. The Federation also recommends that data used to build AI-based decisions should be taken from a wide pool of different sources to reduce the prospect of bias. 
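
A minimal sketch of what the first of those recommendations can look like in practice appears below: protected attributes are excluded from the features a screening model is trained on, and approval outcomes are still compared across those attributes afterwards, because bias can re-enter through correlated proxy variables. The column names, toy data and use of scikit-learn's LogisticRegression are assumptions made purely for illustration.

```python
# Illustrative sketch: drop protected attributes before training a screening
# model, then audit approval rates across those attributes anyway, since bias
# can re-enter through correlated proxy variables. All names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

applicants = pd.DataFrame({
    "income":        [42_000, 85_000, 31_000, 120_000, 54_000, 23_000],
    "existing_debt": [10_000,  5_000, 12_000,  20_000,  8_000, 15_000],
    "gender":        ["F", "M", "F", "M", "F", "M"],   # protected attribute
    "age":           [23, 45, 31, 52, 38, 27],         # protected attribute
    "approved":      [0, 1, 0, 1, 1, 0],               # past decisions
})

PROTECTED = ["gender", "age"]
features = applicants.drop(columns=PROTECTED + ["approved"])

model = LogisticRegression().fit(features, applicants["approved"])
applicants["predicted"] = model.predict(features)

# Simple disparity check: compare predicted approval rates across a protected
# attribute even though it was excluded from training.
print(applicants.groupby("gender")["predicted"].mean())
```

Even a check this simple can surface disparities early, although meaningful fairness auditing goes well beyond comparing group averages.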

And the humans responsible for building AI-based algorithms should also have diverse backgrounds and characteristics. After all, algorithms, and the decisions that are made based on them, ultimately reflect the flaws of their creators.

Importantly, the AI decision-making model used to underpin employee and customer decisions also needs to be monitored for compliance with evolving legal and regulatory requirements, for example in linking credit decisions to fact-based income and capacity to pay, rather than a projected propensity to pay divorced from the particular circumstances of the individual. In that sense, AI can play an important role in the initial screening and information-gathering process but it cannot entirely replace the role of a human in making a final decision such as credit approval. 
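
That division of labour might be expressed, very roughly, as follows: the model produces a screening score and assembles fact-based evidence, but every final credit decision is routed to a human officer. The thresholds, field names and scoring scale below are invented for illustration.

```python
# Illustrative sketch: AI performs initial screening only; every final credit
# decision is routed to a human reviewer. Thresholds and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    applicant_id: str
    model_score: float         # e.g. estimated capacity to repay, 0..1
    verified_income: float     # fact-based inputs gathered for the reviewer
    monthly_commitments: float

def screen(applicant_id: str, model_score: float,
           verified_income: float, monthly_commitments: float) -> dict:
    """Return a recommendation package; never an automated approval."""
    result = ScreeningResult(applicant_id, model_score,
                             verified_income, monthly_commitments)
    recommendation = "refer_favourable" if model_score >= 0.7 else "refer_for_scrutiny"
    return {
        "recommendation": recommendation,
        "evidence": result,
        "final_decision_by": "human_credit_officer",  # always a human decision
    }

print(screen("A-1001", 0.82, 67_000.0, 1_450.0))
```

The point of the structure is simply that no code path issues an approval on its own.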

Finally, for all forms of AI used by a bank, it is critical to have in place adequate security controls to reduce the threat of a cyber attack that may compromise intellectual property and other organisational assets, as well as allow unauthorised access to, and misuse of, employee and customer data. Such unauthorised access in particular exposes a bank to a regulatory privacy breach that could give rise to a substantial penalty as well as a private class action. There is also the prospect of a breach of substantive cybersecurity laws, with jurisdictions such as the European Union, Australia and the United States now having standalone cybersecurity legislation in place. In delivering effective cybersecurity controls, banks need to invest in appropriate expertise and innovative technology to keep pace with the growing sophistication of cyber threats, and to ensure specific cyber-focused risk management, escalation, reporting and training throughout the organisation.

Takeaway

AI presents an opportunity to achieve substantial efficiency improvements at both an organisational level and a regulatory compliance level. While the ethical and security risks posed by AI are significant, they can be effectively managed by proactive boards and management, and it is critical for banks to ensure this is done as the rapid uptake and expansion of AI continues. Moreover, with governments now moving to develop AI-specific regulations and establish dedicated AI regulators, controlling AI's ethical and security risks matters from a governance and liability perspective. It is also likely that, as banks continue to adapt and evolve in their use of AI, they will collectively develop self-regulatory AI standards as a sign of trust, accountability and responsibility to the public.

Scott Atkins is the President of INSOL International and Partner, Australian Chair and Head of Risk Advisory at Norton Rose Fulbright.

Dr Kai Luck is Executive Counsel at Norton Rose Fulbright.
