Finetuning the EU’s Platform Work Directive
The European Commission’s draft Directive on working conditions in the platform (or ‘gig’) economy is a promising instrument (see our general overview, here). It addresses key risks facing gig workers, including employment status misclassification, algorithmic management, and the enforcement of existing rules.
The Directive’s focus on harms which stem from algorithmic management, from automatic pay-docking and shift scheduling to automated termination, is particularly welcome. As the Directive progresses through the legislative process, we suggest targeted amendments to further improve the draft provisions on algorithmic management and to ensure comprehensive protection of workers’ fundamental rights.
The regulatory approach
In substantive terms, the proposal provides protection in two main ways: through detailed provisions on human oversight, and through clear redlines.
1. Human oversight
Questions of human agency are often at the forefront of debates on artificial intelligence (AI). Human oversight can take three forms, each serving a distinct regulatory purpose: a role for humans above the loop (assessing system-level impacts), for humans after the loop (reviewing individual decisions), and, in the more familiar formulation, for humans in the loop (requiring human involvement in individual significant decisions).
Humans above the loop
A human ‘above’ the loop helps to monitor and detect system-level impacts, identifying harms and mitigations. The proposed Directive provides for such monitoring in Article 7(1).
The provision is promising, but still falls far short of the type of impact assessment which experts have called for. The requirement for monitoring to be ‘regular’ is vague, and the proposal lacks any requirement for outcomes to be documented or provided to workers’ representatives or national labour authorities. In these respects, the proposal could be strengthened in response to critiques of the GDPR’s data protection impact assessment regime (Article 35), which suffers from a similar lack of transparency and of a clear review timeline.
Recommendation 1: amend Article 7 to require impact assessments to be documented, reviewed within a specified timeline, and available on request to workers, their representatives, and national competent authorities.
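To make the recommendation concrete, the following minimal Python sketch shows what a documented, time-bound impact assessment record might look like. It is purely illustrative: the identifiers (ImpactAssessment, disclose_on_request) and the 180-day review cycle are our own assumptions, not terms drawn from the Directive’s text.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    """A documented record of a system-level review of an ADM tool."""
    system_name: str
    review_date: date
    findings: list[str]      # harms identified, eg discriminatory shift allocation
    mitigations: list[str]   # steps taken in response
    next_review: date = field(init=False)

    def __post_init__(self) -> None:
        # A fixed review cycle replaces the draft's vague duty of 'regular' monitoring.
        self.next_review = self.review_date + timedelta(days=180)

def disclose_on_request(record: ImpactAssessment) -> dict:
    """Return the record in a form that can be handed to workers,
    their representatives, or a national competent authority."""
    return {
        "system": record.system_name,
        "reviewed_on": record.review_date.isoformat(),
        "findings": record.findings,
        "mitigations": record.mitigations,
        "next_review_due": record.next_review.isoformat(),
    }
```

The point of the sketch is structural: once findings, mitigations, and a review deadline must be recorded in a fixed form and disclosed on request, much of the vagueness of ‘regular’ monitoring falls away.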
Humans after the loop
While a macro view of algorithmic decision-making (ADM) impacts is crucial, advocates often suggest that individual decisions should also be subject to human explanation and review. Article 22 of the GDPR requires data controllers to implement ‘suitable… safeguard[s]’ where significant decisions are made on a solely automated basis, but debate has raged on whether the provision also entails a right to individualised explanations.
Article 8 of the proposed Directive cuts through this debate by providing platform workers with a clear right to obtain written explanations for significant decisions taken or supported by an ADM system, as well as a right to request a review of such decisions and to receive a ‘substantiated reply’ in each case. This requirement has three salient elements: human review takes place after the automated decision has been made; it is triggered by a request from the affected person; and it is available only for certain significant automated decisions.
As with Article 22, the proposal would only mandate this human after the loop for ‘significant’ decisions. Defining significance is a deeply complex exercise, however—a challenge which the Directive seeks to address by limiting its scope to decisions that ‘significantly affect… working conditions’, and defining those conditions to include ‘in particular [workers’] access to work assignments, their earnings, their occupational safety and health, their working time, their promotion and their contractual status, including the restriction, suspension or termination of their account’.
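Purely by way of illustration, the following Python sketch models the three elements just described: review after the decision, triggered by the worker’s request, and confined to significant decisions. All names are hypothetical; the set of significant areas simply restates the working conditions quoted above.

```python
from dataclasses import dataclass

# Working conditions the draft treats as 'significant' (restating the list above).
SIGNIFICANT_AREAS = {
    "work_assignments", "earnings", "safety_and_health",
    "working_time", "promotion", "contractual_status",
}

@dataclass
class AutomatedDecision:
    worker_id: str
    area: str          # the working condition affected
    explanation: str   # the written explanation owed to the worker

def request_review(decision: AutomatedDecision, reviewer: str) -> str:
    """Human review after the loop: triggered by the worker's request,
    available only for significant decisions, and answered with a
    'substantiated reply'."""
    if decision.area not in SIGNIFICANT_AREAS:
        return "Decision does not affect a significant working condition."
    return (f"Reviewed by {reviewer}: decision on {decision.area} for "
            f"worker {decision.worker_id}. Explanation on file: "
            f"{decision.explanation}")
```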
Humans in the loop
Although the proposal provides for a human above the loop and a human after the loop, the absence of any provision requiring a human in the loop is notable. Nor is this gap filled by the GDPR: although Article 22 generally prohibits data controllers from making significant decisions on a solely automated basis, it includes a specific carve-out for automated decisions grounded in contractual necessity (GDPR Art 22(2)(a)). The proposed Directive, meanwhile, would require all data processing to be strictly necessary for the performance of the employment contract (Art 6(5)). The result is a somewhat strange outcome: any ADM system whose deployment falls within the scope of the proposed Directive as strictly necessary will, for that very reason, fall into the Article 22 carve-out, and its decisions may thus be completely automated.
Many significant ADM decisions will be difficult to justify as contractually necessary. On the other hand, mandating human involvement on a decision-by-decision basis would undercut the value of automated decision-making systems altogether. In order to balance these competing objectives, the proposed Directive should specify circumstances in which human involvement is always required, including in particular termination decisions.
Recommendation 2: require human involvement in all decisions on worker termination, by inserting fully automated termination decisions into the list of prohibited practices in Art 6(5).
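A hypothetical Python sketch of the design pattern this recommendation implies: the ADM system may recommend termination, but the decision cannot take effect without a named human approver. Everything below is our own illustration, not language from the proposal.

```python
class FullyAutomatedTerminationError(Exception):
    """Raised if a termination is attempted without human sign-off."""

def terminate_account(worker_id: str,
                      adm_recommends_termination: bool,
                      human_approver: str | None = None) -> str:
    """The ADM system may recommend termination, but a named human
    must approve before the decision takes effect."""
    if not adm_recommends_termination:
        return f"No termination recommended for {worker_id}."
    if human_approver is None:
        raise FullyAutomatedTerminationError(
            "Termination decisions may not be fully automated.")
    return f"Termination of {worker_id} approved by {human_approver}."
```

The human gate is hard-coded at a single choke point, rather than left to a case-by-case necessity assessment, which is precisely what distinguishes a redline from a balancing test.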
2. Redlines
The appropriate place for requiring human involvement in all termination decisions is Article 6(5). This provision sets out a series of ‘redlines’ which employers must not cross, including the use of emotion analysis systems and the monitoring of private conversations. Beyond fully automated terminations, several other practices pose levels of risk and harm comparable to those already listed, and should likewise be subject to explicit prohibition. These include the use of ADM systems to predict the likelihood that a worker will exercise a legal right: an employer should never be able to predict the likelihood that a branch of workers will unionise, for example.
Recommendation 3: add 'predicting the exercise of legal rights provided in Union or national law’ to Art 6(5).
Article 7(2) provides another example of a useful redline provision. The paragraph prohibits employers from using automated monitoring and decision-making systems ‘in any manner that puts undue pressure on platform workers or otherwise puts at risk the physical and mental health of platform workers’. Because the remainder of the paragraph is tied to health and safety law, which protects only those in employment relationships, misclassified gig workers risk missing out on this crucial protection (Article 10(1)). Moving the redline to Article 6(5) would close that loophole.
Recommendation 4: move the redline on undue pressure from Art 7(2) to Art 6(5).
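Taken together, the redlines operate as a blocklist rather than a balancing test: once a practice is listed in Art 6(5), no necessity justification can save it. A hypothetical Python sketch of a compliance check over the amended list (the capability labels are our own shorthand, not terms from the Directive):

```python
# Redlines under an amended Art 6(5), combining the practices already in the
# draft with those proposed in Recommendations 2-4.
REDLINES = {
    "emotion_analysis",
    "private_conversation_monitoring",
    "fully_automated_termination",           # Recommendation 2
    "predicting_exercise_of_legal_rights",   # Recommendation 3
    "undue_pressure_on_workers",             # Recommendation 4
}

def crossed_redlines(declared_capabilities: set[str]) -> list[str]:
    """Flag any declared capability that crosses a redline; no necessity
    justification or balancing exercise is available."""
    return sorted(declared_capabilities & REDLINES)

# A system that scores workers' likelihood of unionising is flagged outright.
print(crossed_redlines({"shift_scheduling",
                        "predicting_exercise_of_legal_rights"}))
# -> ['predicting_exercise_of_legal_rights']
```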
Summary
Overall, the proposed Directive holds much promise for regulating algorithmic management. It is a well-drafted instrument which builds on the existing acquis in a coherent and targeted way. Adopting the above recommendations would further strengthen the instrument’s role as a powerful blueprint for regulating algorithmic management more broadly.
We acknowledge funding from the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement No 947806).
Halefom Abraha is a Postdoctoral Researcher at the Bonavero Institute of Human Rights, University of Oxford
Jeremias Adams-Prassl is a Professor of Law at the University of Oxford, and Principal Investigator of the ERC-funded iManage project on algorithms at work
Aislinn Kelly-Lyth is a Researcher on Algorithmic Management at the Bonavero Institute of Human Rights, University of Oxford