Workplace Data Processing: Resources for Data Protection Day
Complex algorithmic management tools are increasingly used to make high-stakes decisions that have traditionally been made by human managers. These tools could significantly change working conditions and social relationships, and they pose significant risks to the privacy, human dignity, health, equal treatment, and autonomy of workers. While the regulation of algorithmic management falls under the purview of multiple legal domains, data protection law has been the most engaged, owing to the vast troves of personal data processing that underpin most algorithmic management tools.
As we celebrate the 17th global Data Protection Day, it is important to recognise the key role that data protection legislation plays in taming some of the most intrusive algorithmic management systems and practices.
To help raise awareness of data protection at work, the team at iManage have brought together six resources examining this new phenomenon through a data protection lens. As the GDPR enters its fifth year of application, these resources provide a great starting point for stakeholders seeking to understand and address data protection issues around AI systems in the workplace. Here at iManage, we explore these and related challenges in our weekly Algorithms at Work discussion group (held in a hybrid format)—if you are interested in joining, please get in touch.
Promising next steps
It is a promising step that the US Equal Employment Opportunity Commission (EEOC) has identified artificial intelligence tools used by employers as a regulatory and enforcement priority. The EEOC will focus on employers' use of automated decision-making tools such as artificial intelligence or machine learning, and on the associated risks of discrimination against racial, ethnic, and religious groups; older workers; women; and pregnant workers and those with pregnancy-related medical conditions.
In 2020, the German Federal Ministry for Labour and Social Affairs established an ‘Interdisciplinary Council on Employee Data Protection’ to assess whether a ‘stand-alone law on employee data protection’ is needed in Germany. In January 2022, the Council submitted its report. The report argues that the GDPR alone does not provide sufficient legal clarity to guide the implementation of workplace data processing, and that a new national data protection law is needed for the workplace setting. The report also notes a serious deficit in the enforcement of data protection law, especially in the employment context. Accordingly, it recommends increasing the enforcement powers and staff sizes of data protection authorities; strengthening the legal position of worker representatives for enforcement purposes; and increasing the independence of company data protection officers. We—and no doubt other researchers and policy makers across Europe—look forward to seeing how German legislators follow up on these recommendations.
Academic analysis
- Giulia Gentile and Orla Lynskey, ‘Deficient by Design? The Transnational Enforcement of the GDPR’ (2022) 71 International & Comparative Law Quarterly 799
The GDPR aspires to ensure harmonised and consistent application and enforcement of data protection rules across the EU. Its cooperation and consistency mechanisms were created to achieve this objective. However, it is extensively documented that the GDPR suffers from a significant enforcement deficit. Not only is the GDPR under-enforced, but its cooperation and consistency mechanisms are under-utilised. This article argues that the inadequacies of the enforcement framework stem from the design of the GDPR itself, and identifies four design flaws that contribute to its under-enforcement, particularly in transnational contexts.
- Antonio Aloisi, ‘Regulating Algorithmic Management at Work in the European Union: Data Protection, Non-Discrimination and Collective Rights’ International Journal of Comparative Labour Law and Industrial Relations (Forthcoming)
This paper explores how different areas of EU law can be cumulatively used and leveraged to address the harms posed by algorithmic management in the workplace. Using examples from case law, administrative decisions and legislative developments, the paper argues that the mutually reinforcing relationship between data protection provisions and anti-discrimination measures can be adapted to render automated decisions documentable and contestable.
Policy Reports
- The Institute for Workplace Equality, ‘The use of artificial intelligence in employment decision making’ (Technical Advisory Committee Report, December 2022)
This report from the US-based Institute for Workplace Equality is a response to the lack of federal guidance on the use of algorithmic tools for employment purposes. It offers a deep dive into some of the most important issues—transparency, consent, privacy, fairness, and non-discrimination—that arise in the deployment of AI-enabled tools in the workplace. The report suggests that employers be transparent about the AI systems they use and give specific notice to anyone who will be assessed using these systems. This transparency requirement also extends to vendors: ‘just as employers need to provide information to applicants about the use of AI tools,’ the report suggests, ‘vendors of those AI tools need to provide information about the tools to the employers utilizing them’. The report also recognises that the use of AI-enabled processes poses unique privacy issues, and that the issue of ‘consent’ in this context is more complex than in traditional employee selection processes.
- Sebastião Barros Vale and Gabriela Zanfir-Fortuna, ‘Automated Decision-Making Under the GDPR: Practical Cases from Courts and Data Protection Authorities’ (Future of Privacy Forum 2022)
This is the most comprehensive report to date analysing how national courts and Data Protection Authorities in the EU/European Economic Area and the UK have interpreted and applied the relevant GDPR provisions on automated decision-making (ADM), as well as the notable trends and outliers in this respect. One of its most important findings is that courts and Data Protection Authorities often apply the rest of the GDPR even where the ADM at issue does not meet the high threshold established by Article 22 GDPR (the provision that specifically addresses ‘automated individual decision-making’).
For a comprehensive analysis of the regulation of algorithmic management in the workplace, stay tuned for our forthcoming European Labour Law Journal special issue, which consists of eight papers by leading multidisciplinary scholars and a ‘policy blueprint’ for regulating algorithmic management. Our blueprint identifies the novel regulatory issues and harms arising from algorithmic management, offers eight concrete policy measures designed to address them, and provides a rationale explaining each of the regulatory choices involved. It is designed to serve as a reference point for algorithmic management regulatory policy and legislation across national contexts. A preview appears in ‘Algorithms need management training, too’, recently published in Wired.
Halefom Abraha is a Postdoctoral Researcher at the University of Oxford.
Sangh Rakshita is a Researcher on Algorithmic Management at the University of Oxford.
M. Six Silberman is a Postdoctoral Researcher at the University of Oxford.
Jeremias Adams-Prassl is a Professor of Law at the University of Oxford.