Artificial Intelligence: A Roadblock in the Way of Compliance with the GDPR?
As more businesses seek to expand their reach by offering services across the globe, including to European Union countries, the issue of compliance with the General Data Protection Regulation (GDPR) becomes inevitable. This is certainly the case for corporations established outside the European Union which process data relating to individuals in the EU and make decisions about them using Artificial Intelligence (AI).
Whilst reliance on automated decision-making may increase efficiency and save costs, as noted in the Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679 (the ‘Guidelines’), it may also raise compliance costs and heighten exposure to sanctions under the GDPR in the event of noncompliance.
This post explores the compliance issues encountered by corporations which have recourse to automated decision-making. It navigates the GDPR’s AI-related requirements, makes recommendations to facilitate compliance with them, and highlights the dilemmas posed by the intrinsic features of AI.
Prohibition of the use of AI
Article 22(1) of the GDPR states that ‘The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her’. According to the Guidelines, article 22 establishes a general prohibition on decisions based solely on AI, ie decisions made by technological means without human involvement.
Here the view can be taken that corporations may be able to opt out of article 22 by moderating automated decisions with human intervention. Decision-making of a hybrid nature, owing to human involvement, is no longer considered solely automated. Human intervention generally means that decision-makers within a corporation must actively oversee the automated decision rather than merely rubber-stamping it. Corporations may also want to consider excluding AI from the realm of decision-making that produces legal or similarly significant effects on the data subject.
Alternatively, instead of avoiding the application of article 22, corporations could rely on solely automated decisions if they have obtained the explicit consent of the data subject or if they can show that automated decision-making is necessary for the performance of a contract with the data subject. Whether automated decision-making is necessary will depend on the circumstances of each case. As per the Guidelines, if other effective and less intrusive means of performing the contract exist, then it is not ‘necessary’. While some corporations may seek to include a clause in their privacy policies whereby the data subject acknowledges that automated decision-making is necessary for the performance of a contract, this does not mean that the corporation in question will necessarily fall within this exception. Given the uncertainty surrounding the question of necessity in each case, the better approach is to obtain explicit consent, which constitutes another exception to the prohibition of automated decision-making. In this regard, the WP29 guidelines on consent clarify that explicit consent requires an express statement of consent by the data subject.
It is noteworthy that relying on the necessity to perform a contract or on the data subject’s explicit consent as a lawful basis for processing data will not be enough: in those circumstances, article 22(3) requires corporations to implement suitable safeguards, such as the right to obtain human intervention. It is equally important to note that reliance on a hybrid model of decision-making may not answer all compliance issues, since article 35 requires corporations to carry out a data protection impact assessment in relation to automated decisions, including those which involve human intervention. Furthermore, corporations remain bound, under article 12(1), to ensure the transparency of data processing by providing data subjects with concise, transparent, intelligible and easily accessible information about the processing of their personal data.
While human intervention may assist corporations in opting out of article 22, it will not necessarily remedy the specific features of AI which are incompatible with the rights of data subjects.
Incompatibility of AI with the rights of data subjects
The explainability issue
The complexity of algorithms can make it challenging to understand the rationale behind an automated decision. This explainability issue conflicts with a data subject’s right, under article 15, to obtain information relating to, inter alia, the logic involved in an automated decision.
Similar to the approach adopted in relation to article 22, one course of action a corporation can take is to opt out of the application of article 15. This can be achieved through human involvement in the decision-making process.
Data minimisation
The ability of AI to process large volumes of data could play a significant role in encouraging corporations to collect more personal data than is actually needed. However, this would pose a significant problem in terms of complying with the data minimisation principle enshrined in article 5(1)(c), which mandates that personal data be ‘limited to what is necessary in relation to the purposes for which they are processed’.
Bias
While AI is known for its capacity to outperform the human mind, given its ability to process large amounts of data and to remedy flaws in boards’ decision-making such as groupthink, AI tools cannot be said to be error-proof. Indeed, AI is only as unbiased as its makers. When AI generates biased outputs, the resulting automated decisions are likely to be incorrect and tainted by inaccuracy. Thus, the issue of bias inherent in AI will place corporations in breach of article 5(1)(d), which assigns to corporations the responsibility of ensuring that personal data is accurate and up to date. It is noteworthy that bias is not the only root cause of inaccuracy, which may also stem from other factors such as the incorrect interpretation of data by AI.
As this article has demonstrated, compliance by non-EU corporations is an issue of risk management, calling for a thorough understanding of the obligations incumbent upon corporations as data controllers. And whilst corporations which rely on automated decision-making face a higher level of accountability under the GDPR, the prohibition of the use of AI cannot be said to constitute a roadblock that hinders corporate compliance with the GDPR. Review of AI-generated decisions by decision-makers within a corporation is an effective measure to facilitate compliance, but it leaves unresolved the problem of the incompatibility of AI with the rights of data subjects.
Samar Ashour is a Solicitor admitted to the Supreme Court of New South Wales, Australia.