
Podcast Episode: Algorithmic Management, Employment Law and Discrimination Law

Hosted by Juliet Van Gyseghem and Rach Tan, the Oxford Undergraduate Law Podcast explores the law and its relationship with society. Brought to you by the Oxford University Undergraduate Law Journal (OUULJ), below is a summary and transcript of its latest episode: 'Algorithmic Management, Employment Law and Discrimination Law'.

Listen to the episode on Spotify. Find more episodes here.

For more information, discussion and academic publications on the issues discussed in our Podcast episodes, visit our Podcast’s webpage.


Guest: Sangh Rakshita

Sangh Rakshita completed her BCL at the University of Oxford and has previously consulted with organisations such as the United Nations Development Programme and Human Rights Watch. She also teaches Regulation of Internet Technologies at Stanford University and has worked as a law and technology researcher at the Centre for Communication Governance, India, and as a legislative assistant in the Parliament of India.

Sangh Rakshita is part of iMANAGE, a pioneering project based at the Bonavero Institute of Human Rights and funded by the European Research Council. iMANAGE intends to develop the first systematic account of the challenges and potential of algorithmic management, examine its implications for legal regulation, and develop concrete solutions to avoid harmful path-dependencies.

Read more about Sangh Rakshita and her research here.

 

Episode summary

This episode sees Sangh Rakshita discuss the development of algorithmic management in the workplace and iMANAGE’s work in relation to it. In discussion with Rach Tan, one of the OUULJ’s Podcast Editors, Sangh Rakshita highlights the potential risks arising from this development, such as the tracking of productivity by mouse clicks or the GPS tracking of truck drivers making deliveries. From these risks, two key regulatory gaps are identified: an exacerbation of privacy harms and information asymmetries, and a loss of human agency.

Sangh Rakshita then sets out the potential policy options which could address these novel harms – addressing the failings of and building on existing regulatory frameworks such as the GDPR. Alongside redlines and purpose limitations, she and iMANAGE call for provisions for human involvement not just ‘in the loop’, but after, before and above the loop, which aim to restore human agency in the process of algorithmic management.

She and Rach also put these risks and policies in the specific context of discrimination. Sangh Rakshita explains how algorithmic systems may discriminate by learning from biased real-world data, perpetuating existing inequalities and creating new patterns beyond human intention. Thus, even without any explicit use of race, they may discriminate on racial grounds through proxies such as postal codes, or favour candidates with certain features in hiring processes. The episode ends with Sangh Rakshita urging the establishment of an ex-ante framework, complementing existing regulatory rules, to tackle discrimination caused by algorithmic systems.

Works mentioned

Adams-Prassl J and others, 'Regulating algorithmic management: A blueprint' (2023) 14(2) European Labour Law Journal 124 <https://doi.org/10.1177/20319525231167299>

Binns R, Adams-Prassl J, and Kelly-Lyth A, 'Legal Taxonomies of Machine Bias: Revisiting Direct Discrimination' (ACM Conference on Fairness, Accountability, and Transparency (FAccT '23), June 2023) <https://doi.org/10.1145/3593013.3594121>

 

Questions to consider

  1. Should there be a narrow legal basis for the deployment of algorithmic management?
  2. The GDPR already provides a list of transparency requirements when algorithmic management is taking place. However, these only apply when a decision is fully automated. Should this be extended to partially automated decision-making systems?
  3. Do you agree with Sangh Rakshita’s view that discrimination that occurs due to an algorithm can amount to direct discrimination?

 

Further reading

Kelly-Lyth A and Thomas A, 'Algorithmic management: Assessing the impacts of AI at work' (2023) 14(2) European Labour Law Journal 230 <https://doi.org/10.1177/20319525231167478>

Adams Z and Wenckebach J, 'Collective regulation of algorithmic management' (2023) 14(2) European Labour Law Journal 211 <https://doi.org/10.1177/20319525231167477>

Cefaliello A, Moore PV and Donoghue R, 'Making algorithmic management safe and healthy for workers: Addressing psychosocial risks in new legal provisions' (2023) 14(2) European Labour Law Journal 192 <https://doi.org/10.1177/20319525231167476>

 

Full episode transcript

Rach Tan: The rise of algorithmic management has allowed for new ways to measure, control and sanction workers. Yet, it is unclear how employment law can respond to a world where automation has not replaced workers, but their bosses.

Juliet Van Gyseghem: Welcome to the Oxford Undergraduate Law Podcast, where we discuss the law and its relationship with society. I'm Juliet.

Rach Tan: And I'm Rach and we are your podcast editors. We will platform academics, practitioners and experts from different backgrounds on this podcast.

Rach Tan: In this episode, I am delighted to host iMANAGE, a pioneering project based in the Bonavero Institute of Human Rights, funded by the European Research Council. iMANAGE intends to develop the first systematic account of the challenges and potential of algorithmic management, examine its implications for legal regulation and develop concrete solutions to avoid harmful path dependencies.

iMANAGE is led by Prof Jeremias Adams-Prassl and comprises Halefom Abraha, Six Silberman, Sangh Rakshita and Aislinn Kelly-Lyth. I am honoured to be speaking to Sangh Rakshita today.

Sangh Rakshita completed her BCL in law at Oxford and has previously consulted with organisations such as the United Nations Development Programme and Human Rights Watch. She also teaches Regulation of Internet Technologies at Stanford University. She has also worked as a law and technology researcher at the Centre for Communication Governance, India, and as a legislative assistant in the Parliament of India.

Thank you for joining us.

Sangh Rakshita: Thank you so much, Rach, for having me. It's a pleasure to be on the podcast.

Rach Tan: Alright, so just to start off, what does the automation of traditional employer functions look like?

Sangh Rakshita: So, the automation of traditional employer functions has slowly crept into different functions: hiring, firing, disciplining and managing the workforce, allocating and organising work, supervising and monitoring, and then predicting workers' productivity and future behaviour, like their propensity to unionise or even their flight risk, for example. This generally requires the creation, collection or use of workers' information to build or support these systems. So basically, the scenario occurs when your boss or the human resources department is, in effect, an automated system. It could be wholly or partly automated.

Rach Tan: Have we seen any trends in the use of algorithmic management? And does that perhaps relate to current mega-trends like the gig economy or the COVID-19 pandemic?

Sangh Rakshita: Yes, absolutely. So algorithmic management actually started in the gig economy, prior to the COVID pandemic, with apps which used increasingly sophisticated systems to monitor all aspects of platform work. And since then, the deployment of algorithmic systems has not been limited to gig work but has extended to white-collar jobs, and therefore to workplaces across the socioeconomic spectrum.

And this has seen an unprecedented boost during the COVID pandemic and post-pandemic as well. What started as remote work for everyone, the mandatory and only way of working during the pandemic, often required the use of monitoring cameras at home so employers could monitor workers' productivity or compliance with working hours, or measures like worker productivity being determined by the movement of the mouse: whether the mouse is being moved enough, whether there have been enough clicks. So, all those different measures around monitoring an employee. But then we can also go back to warehouses and how, for example, Amazon warehouses would require workers to wear bodily devices, which would then monitor how fast they are moving and how many tasks they can do at the speed they are at versus how much they end up doing.

So, productivity in those senses. But yes, the gig economy first and then the pandemic have ushered in this era.

Rach Tan: So, in your paper, it's mentioned that the quantity and intimacy of the data collected (for example, mouse clicks or how fast workers are moving, as we've alluded to) can create information asymmetries.

What are information asymmetries? How are they created and what harms do they cause?

Sangh Rakshita: I think in our paper we outline three kinds of harms, and we clubbed them into categories. One is privacy harms and information asymmetries, which sort of go together.

And then the second part is loss of human agency. So, let's start with privacy harms and information asymmetry. Algorithmic management systems need large amounts of data to make decisions, and as a result, they often require intensive and sometimes constant worker surveillance. This can create serious psychological and even physical harms. For example, delivery drivers may be pressured to work faster by knowing that they are tracked by GPS, leading to unsafe driving and even accidents. Intensive worker surveillance also widens existing information asymmetries in the workplace. Employers typically know more about workers than workers themselves do, and employers also typically know more about workers than workers know about their employers. And this creates two problems.

First, employers know intimate information about their workers that they simply don't need to know and that can be misused against the workers. A manager does not need to know how often workers go to the bathroom, for example. And second, it makes it harder for workers to ensure their working conditions are fair and to negotiate with employers to improve them. And we should also realise that employer-employee relationships already have a power difference and this sort of exacerbates it.

Rach Tan: So, in the iMANAGE paper, it's set out that there are two ways to protect individuals from this form of harm: mandating greater transparency, or restricting the power of organisations to create this asymmetry in the first place. So, could you explain what sort of policy options you have in line with these two strategies?

Sangh Rakshita: Absolutely, thank you. So, our research builds on existing legal and policy research as well as empirical work done by journalists and social scientists, and we propose eight policy options in the paper, in the blueprint. The first four policy options are aimed at tackling what I just explained, the privacy harms and the related information asymmetry harms, and the next four options relate to the loss of human agency. In these first four options, we have redlines and prohibitions, as well as certain requirements for what should be the legal basis for algorithmic management. And we can delve into each of them as we go along.

Rach Tan: So, for redlines and prohibitions, for example, what forms of data collection are prohibited already by regulations such as the GDPR? And how do you think they should be extended, if at all?

Sangh Rakshita: Thank you so much. So, there are, of course, certain special categories of personal data which have special protection under Article 9 of the GDPR, as well as under the 2017 opinion of the Article 29 Working Party, which is now the European Data Protection Board. They have also stated that there must exist certain private spaces to which an employer may not gain access, where there is absolutely no access for an employer to collect, record or process personal data. The proposed Platform Work Directive, in particular, addresses these risks by prohibiting the processing of certain types of personal data, including personal data on the emotional or psychological state of the platform worker.

But what we try to propose in the redline is that, instead of focusing on particular categories of data, this policy option establishes protections by prohibiting the collection of worker data for purposes that pose serious risks to human dignity and fundamental rights, in particular the prediction of, or persuasion against, the exercise of a legal right, for example. So, the way this policy option is framed is that employers should not be allowed to monitor workers outside of working hours or outside the workplace. Some places within the workplace, such as bathrooms and rest areas, should also be off limits to surveillance. Employers should not be allowed to, for example, monitor communications with worker representatives. And employers should be prohibited from monitoring workers for the purpose of, as I said, preventing them from exercising legal rights, such as organising or unionising.

Rach Tan: So, I guess one other question I might have would be, what exactly is the legal basis for algorithmic management?

Sangh Rakshita: So, I'll delve into what we're proposing and then I'll try to tell you why we're proposing it, to try to fill a gap. So, the legal basis for the use of algorithmic management should be narrow and specific, as per our research and the blueprint that we've worked on.

The purpose of this policy option is to narrow the legal basis: specifically, consent, legitimate interest or public interest, which exist in the GDPR, should not constitute a valid basis for deploying algorithmic management tools. Algorithmic management should only be allowed if it is strictly necessary for hiring, for carrying out the employment contract, for complying with external legal obligations, or for protecting the vital interests, such as health and safety, of workers or some other natural person. So, these are the limited, narrow grounds that we think should be the legal basis.

The existing legal bases for the processing of workers' data in EU law, especially legitimate interest and consent, may legitimise the indiscriminate use of algorithmic management systems, even when they pose a serious risk to human dignity or fundamental rights, or when their use is not proportionate or relevant to legitimate managerial aims. So, in combination with the absence of an explicit proportionality requirement, the existing law is not able to impose a clear obligation on employers to ensure that algorithmic systems are deployed for the purposes they are intended to serve.

Rach Tan: One policy option that was suggested in your paper is both individual and collective notice obligations. So, in line with that, what are the current transparency obligations under the GDPR and the Platform Work Directive already? Do you think they should be extended? And also between individual notice and collective notice obligations, what's the kind of gap that collective notice obligations are meant to fill?

Sangh Rakshita: So, the GDPR already provides a list of transparency requirements in the form of information and access rights at the individual level. Workers can leverage these rights to counterbalance information asymmetry, exercise their rights and voice their concerns. Adequate information and access rights can also serve as organising and power-building tools for workers. However, the transparency requirements which are specifically applicable to algorithmic management systems are limited in their scope and detail under the law. For instance, the two significant safeguards under Article 15(1)(h) of the GDPR do not apply to semi-automated decision-making systems; they only apply when a decision is fully automated. So, workers do not have the right to know of the existence of a decision-making system unless it is fully automated.

What does that mean? It means that if there was, say, a human decision-maker at the end who probably just said yes to an algorithmic output, or if at any point in the algorithmic decision-making process there was a human being involved, then the decision is not fully automated. And therefore it becomes a grey area, where what counts as fully automated is also contested. And that becomes the basis for denying, one, the right to know of the existence of a decision-making system, and second, the right to obtain meaningful information about the logic, significance and consequences of a decision-making system. So these protections are available in the GDPR, but the way algorithmic systems, specifically algorithmic management systems, work makes their application complicated.

And then Article 15(4) of the GDPR provides that data subjects have the right to receive a copy of their personal data, as long as it does not adversely affect the rights and freedoms of others. And this provision has been used to refuse to provide information to subjects of algorithmic management, because 'the rights and freedoms of others' again becomes a broad category open to interpretation.

Further, the transparency requirements under the GDPR are also limited to information about the existence of the decision-making system and information about how the decision was made: the logic and the consequences. Therefore, the GDPR does not require controllers to provide other information that might be decisive for workers' ability to understand how decisions were arrived at. We try to address that with the third policy option, on individual notice obligations.

And, in fact, other than the GDPR, there is also the proposed Platform Work Directive, which will require platforms to publish and regularly update information about the terms and conditions of their work, with public authorities and worker representatives having a right to request further information on it. And this is what you'll also find in the third policy option we have put here: employers using algorithmic management should be required to provide comprehensive information to the individual workers affected. This should explain what systems are being used, their purpose, how they work, what consequences they may have, what human decision-makers are involved and how workers can contest decisions they think are wrong or inappropriate. Because often, even in algorithmic management systems, there will be a human at some point in the decision-making, and that information should also be available to the workers.

This policy option also clarifies the manner and means by which the information I just listed should be provided. For example, there shouldn't be an overload of information, which creates fatigue on the part of the individual worker. So, the notice should be concise, transparent and intelligible, should use clear and plain language, and should be made available in an easily and continuously accessible electronic format. It also requires employers to inform workers of their rights and the available avenues for recourse.

But even so, individual data rights might still not be enough to mitigate this exacerbation of existing information asymmetries and power imbalance, specifically because algorithmic management can produce inaccurate or unlawful decisions, and a systematic pattern of these errors will not be immediately recognisable by individuals. They might not know that they are being subjected to erroneous decision-making because it is an algorithmic system, and it will probably only become visible in the aggregate, when it happens to a larger group. And second, in some cases, the harm suffered by individuals may be relatively small compared to the time, effort and cost of exercising these data rights at an individual level to challenge these possibly erroneous decisions; or the harm may be large, but individuals may not realise that they have suffered a harm. Therefore, in such cases, only a collective or representative body, with an overarching mandate of ensuring the lawfulness and accuracy of the algorithmic management systems and with broad rights of access to information about, and the data used by, those systems, is positioned and incentivised to intervene, correct or prevent this harm. Therefore, the fourth policy option is that employers should be required to provide the same information as they provide to individual workers to worker representatives. Worker representatives should also have the right to request individual-level data for the purpose of ensuring that workplace decision-making is lawful, safe and fair.

Rach Tan: Thank you so much for your explanation of the first issue that's created by algorithmic management. So perhaps now we can move on to the second problem that was identified in the iMANAGE paper, namely a lack of human agency.

So, it's set out that firms using algorithmic management software may rely on third-party vendors to produce this software, which poses challenges for management's ability to control or even understand the systems they use to guide managerial decisions. So how do you think this shifts the dynamic between worker and corporation?

Sangh Rakshita: Thank you, Rach, for that question. So, the loss of human agency, the way we've framed it, arises in at least three ways in the workplace. One is that automated systems can be used to make decisions about who does what work, can automatically evaluate workers' performance and can even fire workers automatically, with no opportunity for human managers to intervene. This erodes the opportunity for human managers to understand workers' specific contexts and needs and to exercise compassion and judgement. So, first of all, it erodes flexibility and empathy; that's the cost of using algorithmic management systems that are not designed with safeguards. Next, the control of workers using algorithmic management is fundamentally distinct from the prevailing bureaucratic structures, which rely on human managers, and therefore individuals, to take these decisions in the workplace. This poses a challenge to the operation of legal accountability mechanisms, which are structured around the human exercise of managerial prerogatives, by obfuscating the location of responsibility for decisions made or supported by algorithms.

So this diffusion of responsibility for managerial decisions, which now sits in the supply chain, as you say, with third-party vendors, threatens the operation of regulatory systems such as employment law, which is again focused on and designed around a human being having managerial agency. And lastly, it also narrows the space for worker participation in individual and strategic decisions, including decisions regarding the implementation of new systems that affect workers' rights and working conditions.

Rach Tan: So, do you think that current accountability mechanisms are suitable to deal with the rise of algorithmic management?

Sangh Rakshita: Not entirely. The existing restrictions, for example under Article 22 of the GDPR, are limited both in scope and in how clear they are in terms of implementation and practice. Article 22 of the GDPR provides a data subject with the right not to be subject to a decision based solely on automated processing which produces legal effects concerning them or similarly significantly affects them. A series of questions arise around these provisions, which have led to a lot of scholarship in this area and remain debated and contested ground in the courts as well: how is significance to be understood? What is a significant impact on a worker? What does it mean for a decision to be solely automated? And how do these considerations change when decision-making processes are multi-stage rather than a single stage? Take the case where job applicants are screened via an algorithm: the ones that are rejected, say the bottom 10 per cent, do not get any human review, while the ones who pass through that screening do, because they go on to the interview stage. The process is arguably not fully automated, because the interview was probably conducted by a human, but the screening stage was fully automated. How you define the decision-making process then changes the implications the law can have for the particular worker who is applying, for example.

But at the same time, what is also true is that there is this problem of excessive involvement in human decision-making, which can lead to fatigue and to the rubber-stamping of algorithmic management systems' recommendations or decisions, for a number of reasons. One, there is sometimes a deference to algorithmic systems' decisions because it's a machine and is assumed to analyse objectively; these are biases that humans have, and equally, sometimes the opposite can arise as well. Also, if there is a large number of decisions, it again becomes a process of just getting them through, stamping or approving them without much application of the mind. So any regulatory intervention has to strike a balance: at what points is intervention, and the greatest amount of human review, required, and at what point is automation, or some extent of automation, for business interests acceptable?

Rach Tan: So, in the iMANAGE paper, four ways are suggested to address this lack of human agency, namely putting humans back into the loop, after the loop, before the loop and above the loop. So, what does this entail?

Sangh Rakshita: Well, so yes, these are the next four policy options that we have. And yes, we have used the human in the loop policy idea in different ways in terms of placing the human not just at one point of the loop, but at different points to sort of create accountability and create better regulation instead of just having a rubber-stamping figure at the end of it.

So, let me take you through the four options. First, automated termination of the employment relationship should be prohibited. That is a human in the loop: if an employer wants to fire someone, a human being should be involved in making the decision. But we are limiting this particular prohibition to the small set of termination decisions because it ensures a more proportionate approach and does not introduce another weak point around what counts as significant decision-making. Termination is surely one of the most impactful employment decisions in the life of a worker or employee, and this narrow scope, we hope, will also make human oversight more meaningful and a more realistic prospect than requiring that level of involvement for a large proportion of decisions. So, we say that termination is a uniquely harmful decision; there is something especially serious about it, it is the most impactful decision, and the existing law also makes it necessary to identify a moment of decision-making. So, to implement a human-in-the-loop prohibition on what cannot be automated, termination becomes the first answer.

The next one would be the human after the loop, or the right to review. That's our sixth policy option: workers should have the right to receive a written explanation of the reasons for decisions affecting them that are made or supported by automated systems. They should also have the right to contest any such decision, to discuss it with a human empowered to change the decision (that's the review) and to request that the decision be re-evaluated. Data protection law provides a right to rectify inaccurate or incomplete personal data under Article 16 of the GDPR. Domestic labour laws also restrict the lawful grounds for employee sanction and dismissal, as well as providing a variety of procedural protections. Algorithmic systems, however, expose a number of gaps in this existing apparatus of protections. For example, these laws do not appear to provide a right to a valid or fit-for-purpose decision-making process. This is a particularly salient problem because empirical research reveals that flawed algorithmic systems are common, especially in automated hiring. That means they do not do what they say they will do. So, they are not fit for purpose, but they are still being used, and that itself needs to be reviewed and probably re-evaluated in terms of use.

The seventh is the human before the loop, which is for jurisdictions that have established rights to information and consultation for worker representatives. So, this one brings data protection and labour law, collective rights, together. The configuration and use of algorithmic management systems should be brought within the scope of those information and consultation rights. This would establish a formal right to information and consultation for worker representatives regarding the design, configuration and deployment of algorithmic management systems, as well as regarding any changes to configuration that trigger individual notifications as set out in the individual notice policy, option three.

And then the eighth is the human above the loop, which is where employers should be obligated to produce regular algorithmic management impact assessments. Worker representatives should be involved in producing the assessments, and assessment documents should be made public. The organisational impact of technically and logically complex algorithmic systems is often difficult to predict, and sometimes, due to a lack of data on impacts and of structured, high-level oversight, it is also difficult to manage ex post, after the system has been implemented and has had effects on the workers, for example. Limited or unclear employer obligations regarding prospective impact assessments do not solve this problem. The GDPR, for example, does have a data protection impact assessment requirement, but it may not apply to all algorithmic systems, and there are no transparency obligations attached to a data protection impact assessment under the GDPR. Transparency here becomes key and is necessary for the assessments' full benefits to be realised, both within the firm and more broadly. So, we propose that regulators should try to increase the scope and quality of ex-ante (that is, before the system is deployed) and ex-post human deliberation regarding the potential risks of the systems, and considerations about risk mitigation strategies, by ensuring internal collection of data on the algorithmic systems' impacts and then sharing it with workers and their representatives.

So, this participatory co-construction of impacts and mitigations is, I think, a core part of the algorithmic management impact assessments that we propose.

Rach Tan: So, moving on to the last problem that was identified by your team, how does discrimination arise in algorithmic management?

Sangh Rakshita: Thank you, Rach, for moving to the question of discrimination; that's also part of the work the team does. The way algorithmic systems discriminate is that they learn from the data that exists in the real world, and because data in the real world is often not unbiased, the functioning of the algorithm becomes biased too. That is one way in which bias occurs: the system entrenches existing inequality. And then, because of the way algorithms and big data analysis work, and how correlations are found between seemingly unrelated data sets or data points, they also create new patterns based on new proxies and correlations, which are often beyond the comprehension of the human mind. For example, even where algorithmic systems have tried to avoid the use of race, systems have still been able to correlate outcomes with race based on the proxy of postal codes, based on where people live and the correlations in the data. So, even without the use of race, the impact was still discriminatory on grounds of race, as in the particular case of COMPAS in the US. That is a very interesting case study on how discrimination occurs in algorithmic systems.

So, for example, in a workplace scenario, if an algorithm is trained to select the best candidate based on the best candidates the company has had in the past few decades, and those were, say, mostly men with a particular background that is not very diverse, not ethnically or racially diverse, then the algorithm will also learn to reward similar features in the candidates it is screening and to penalise any diverse features, based on, say, nationality, ethnicity, accent, the use of active verbs (which can read as more masculine than feminine in terms of data points), even the type of university attended or, as I said, even the postcode. So, in essence, the algorithm is predicting correctly on the metrics it has been trained for; however, it will only end up perpetuating these discriminatory harms. For example, Amazon had a hiring algorithm which discriminated against women: it was trained on historical data in which the majority of the workforce was male, and it learned to discard CVs of candidates who had gone to women-only colleges. That is not what it was trained for, but it learned it from the data sets. So, it is not just the data set itself; it is also the nexus that these systems learn, as in COMPAS, from seemingly unrelated proxies and new correlations.
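To make this proxy mechanism concrete, here is a minimal illustrative sketch in Python. It is not iMANAGE's code or any real employer's system: it trains a simple model on synthetic, historically biased hiring data from which gender has been deliberately excluded, and shows that the model can nonetheless learn to penalise a correlated, hypothetical feature (attendance at a women-only college) of the kind described above.

# Illustrative sketch: a model trained on synthetic, biased hiring data
# learns to penalise a proxy feature even though gender is never an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender_female = rng.integers(0, 2, n)                   # 1 = female (never used as a feature)
womens_college = gender_female * rng.integers(0, 2, n)  # only women attend, so it proxies gender
experience = rng.normal(5, 2, n)

# Biased past decisions: women were hired less often, all else being equal.
hired = (experience - 1.5 * gender_female + rng.normal(0, 1, n)) > 4.5

X = np.column_stack([experience, womens_college])        # gender deliberately excluded
model = LogisticRegression(max_iter=1000).fit(X, hired)

preds = model.predict(X)
print("Predicted hire rate, men:  ", preds[gender_female == 0].mean())
print("Predicted hire rate, women:", preds[gender_female == 1].mean())
print("Learned weight on the women's-college proxy:", model.coef_[0][1])

With this synthetic data, the predicted hire rates for the two groups diverge and the learned weight on the proxy feature is negative, even though the protected characteristic never appears among the model's inputs; this is the 'learned proxy' pattern discussed in the episode.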

Rach Tan: What are the current avenues for redress provided by existing discrimination law apparatus?

Sangh Rakshita: So, among the existing rules that can play a crucial role in regulating algorithmic management are, of course, those of anti-discrimination law, or equality law. The existing equality law framework has been used by scholars to look at this problem, often more from a US-centric lens of how anti-discrimination law works there, but even the framework in Europe, as we have analysed, offers some solutions for this problem, though I think there are still gaps within the apparatus that need to be filled. For instance, the concepts of direct and indirect discrimination can cover many cases of algorithmic discrimination and bias. And what we must also remember is that while certain aspects of algorithmic discrimination are novel, mostly the challenges that algorithmic discrimination brings forth are challenges that discrimination law has traditionally had to deal with. So, it shines a new light on traditional problems, and not all of our laws need to be discarded just because it is a new, automated issue. A lot of our existing problems in an automated world will be solved by using our existing apparatus, our laws and the tools in them, well. And then the second bit, of course, is about expanding them or filling the gaps in them.

Rach Tan: So, for our audience, by way of legal background, in EU law there is a distinction made between direct discrimination, where a person is treated less favourably than a comparable other person on grounds of a protected characteristic like race or sex, and indirect discrimination, where an apparently neutral practice is applied but still puts someone with a protected characteristic at a particular disadvantage. So why does this distinction matter and what implications does it have?

Sangh Rakshita: Thank you so much, Rach. So, as you said, there is this distinction between direct and indirect discrimination, and since you've already explained the definitions of both, the most important distinction in terms of impact, of implication, is that direct discrimination is outright prohibited in most employment cases. So, once you prove a case of direct discrimination, it is a prohibited practice in most cases. Whereas unlawful indirect discrimination does not arise if the use of the provision, criterion or practice, the PCP, can be objectively justified. So, if there exists a justification and it stands up to a proportionality test, then it is not unlawful. So, the impact of proving a direct discrimination case is, I'd say, greater, in that there is no scope for justification and there is an absolute prohibition.

Rach Tan: So, a layman like me may think that algorithmic systems, for the most part, apply neutral practices and so cannot meet the definition of direct discrimination. But in one of the iMANAGE papers, it's argued that algorithms may actually produce direct discrimination. So how does direct discrimination arise in algorithmic systems?

Sangh Rakshita: Thank you for your question, Rach. This is a paper, ‘Directly Discriminatory Algorithms’, by Jeremias Adams-Prassl, Aislinn Kelly-Lyth and Reuben Binns. And you're correct that discrimination in algorithmic management can definitely be covered by the concept of indirect discrimination. But what is not true is that it cannot also be covered by direct discrimination. I think the main reason for the scholarship mostly focusing on indirect discrimination is that this is a US-centric discrimination law view. Under US law, this narrow lens makes sense because direct discrimination, the equivalent of which there would be disparate treatment, is limited by the need to demonstrate discriminatory intent or explicit classification. Most algorithmic discrimination cases are unintentional, and therefore only a very narrow set of cases would be covered by this requirement of showing intent for disparate treatment, that is, direct discrimination as we understand it. On the other hand, the disparate impact doctrine in the US, which would be indirect discrimination as we understand it in Europe, would therefore be better suited to finding liability for discrimination, because this requirement of intentional discrimination is not there.

But in Europe, the law is not like that. Here, you do not need to prove intent; you need to prove that there was direct discrimination, and once it has been proven, there is a prohibition. So, it's a more straightforward test here. When we come to how direct discrimination will apply, let's look at the framework: because under European law direct discrimination focuses only on the reasons and grounds of a decision, it becomes quite well suited for bringing a lot of cases of algorithmic discrimination and bias. Not only does it cover the obvious cases, where automated systems are deployed to camouflage intentional discrimination or where protected characteristics are explicitly coded into an algorithmic decision-making system; it will even cover proxy discrimination cases, and proxy discrimination is what happens in a large set of algorithmic bias cases.

So, in the proxy discrimination cases, two subtypes of direct discrimination should emerge in this frame of using the doctrine: one, decisions made using an inherently discriminatory criterion, that is, a criterion that uses a protected characteristic to discriminate; and second, decisions made through subjectively discriminatory mental processes. Inherent discrimination will occur when a criterion used by a decision-maker is inextricably linked to a protected characteristic, like race or gender. A subjectively discriminatory decision will arise when a person's protected characteristic influences the decision-maker's conscious or subconscious mental process, such that a different outcome is reached.

Now, how does this happen in an algorithmic system? Let's unpack that a little. An inherently discriminatory criterion could be based on a trained proxy, that is, where the discriminatory criterion is coded into the algorithm. For example, a mortgage application assessment system might identify that marital status is correlated with the likelihood of repayment. If 'marriage' is coded in a way that excludes civil partnerships, then a same-sex couple in a civil partnership could receive a lower score than similarly situated heterosexual couples, and if the couple is denied the mortgage on that basis, direct discrimination occurs. So that is where the algorithm codes it directly; those cases get covered and are therefore prohibited at the outset.

Then the second would be learned proxy discrimination, that is, where algorithms draw their own correlations between data points. Because intention is irrelevant, the outcome is the same where the discrimination is created by an indissoluble proxy. The discrimination stems from the application of the criterion: take, for example, the Amazon recruitment algorithm, which penalised graduates who went to all-women's colleges. That is not what the algorithm was coded for, but it learned from its data sets, in which there were hardly any women workers who had gone to all-women's colleges, and therefore it learned to penalise them; and that was based on their gender. So that, again, gets covered.

There is also something called a latent variable proxy, where an algorithm learns a perfect proxy using a combination of variables; that would also be covered under this. So those are the inherently discriminatory criteria. The next would be the more imperfect proxies, or the subjective direct discrimination criteria, where the algorithm operates as an automated version of unconscious human bias. For example, the Amazon algorithm that was penalising women candidates for having gone to all-women's colleges also learned to mark up the use of active verbs, which it associated with a more male trait, and then started rewarding candidates, mostly male, who used them. That is more indicative of masculinity and is not much linked to their efficiency as computer scientists, the people being hired. So that becomes the subjective type: the system has learned the unconscious human bias. And as I said, the impact of using direct discrimination is that a series of automated or algorithmic decision-making systems which have so far been seen as indirectly discriminatory, and therefore potentially justifiable, may be directly discriminatory and therefore simply unlawful from the outset. A prima facie finding of direct discrimination affords a decision-maker very little scope for justification. So rather than showing that the use of the algorithm can be objectively justified, the decision-maker would have to show that the unfavourable output was not because of a protected characteristic, and the inability to prove that results in a finding of discrimination.
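As an illustration of the latent variable point, the following sketch (again synthetic and hypothetical, not drawn from the paper) shows how several individually weak, neutral-looking features can jointly reconstruct a protected attribute, which is why simply omitting the attribute from a system's inputs does not remove it from the model.

# Illustrative sketch: a "latent variable" proxy, where no single feature
# reveals the protected attribute but a combination reconstructs it well.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 20_000
protected = rng.integers(0, 2, n)  # hypothetical protected attribute

# Four individually noisy, seemingly neutral signals (think postcode cluster,
# first-language indicator, university type, CV wording score).
features = np.column_stack([protected + rng.normal(0, 1.0, n) for _ in range(4)])

X_tr, X_te, y_tr, y_te = train_test_split(features, protected, random_state=0)

single = LogisticRegression(max_iter=1000).fit(X_tr[:, [0]], y_tr).score(X_te[:, [0]], y_te)
combined = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

print("Accuracy recovering the attribute from one feature: ", round(single, 3))
print("Accuracy recovering the attribute from all features:", round(combined, 3))

On this synthetic data, the combined model recovers the protected attribute far more accurately than any single feature does, which is the sense in which a combination of variables can operate as a proxy for the attribute itself.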

Rach Tan: So just to end off, my last question would be, as you mentioned earlier, the problem of discrimination is exacerbated in algorithmic systems. So if this is the case, could we still say that existing tools are sufficient to deal with algorithmic systems?

Sangh Rakshita: Yes, the reason I say that the issue of discrimination is exacerbated is because of the way discrimination in algorithmic systems takes place. It stands apart for its scale, its pervasiveness, its granularity, its constancy and its opacity. People often don't know until the harm has been done at an aggregate level, for example. There is also a huge power asymmetry, as we discussed, and the systems are very sophisticated, which further exacerbates the challenges. That means that what equality law and discrimination law were already trying to deal with becomes even more complex under algorithmic management or algorithmic decision-making systems. So, and this is the area that I also research, the gaps that exist in the existing framework require us to think about how we support these existing frameworks: one, for better enforcement, and second, to consider whether there can be interventions at an earlier stage to prevent that harm occurring at an aggregate level. An ex-ante understanding of intervention would probably be something that I'd be thinking of, or directing this discussion towards. Because, as we said, we acknowledge the potential of these existing tools, but the gaps would primarily be their ex-post enforcement nature, which relies on the harmed individual to detect and prove a prima facie discrimination case and makes them not very effective when compared to the scale and concealed nature of algorithmic discrimination. And challenges posed by algorithms have led to interventions in the form of ex-ante regulation in other domains of law to fill the cracks or to support the existing legal frameworks. For example, the Digital Markets Act was introduced to regulate harms to markets because the ex-post tools within the EU antitrust or competition law framework were struggling to grapple with algorithmic challenges: decisions were made, costly investigations were conducted against dominant firms, and even if there was a finding of abuse, not much could be done because the market had already been completely distorted. Similarly, here, even once there is a remedy, are the remedies enough to undo the harm suffered by the workers who are applying, who have been terminated, or who face different kinds of discrimination at that scale? Therefore, we need to think, similarly to other areas of law, about whether an ex-ante framework can be developed to support existing equality rules. This could take the form of more positive action and positive duties on the part of developers, deployers and users, basically the third-party vendors as well as the employers who are using the systems; and also more equality impact assessments as part of the algorithmic management impact assessments for any algorithmic system being deployed. And these should all, of course, be done, as we also discussed earlier, prior to deploying the algorithm. This could include the involvement of equality bodies, rather than relying only on data protection bodies, which are already overburdened. So, this synergy of action between different regulatory bodies in this area would also be quite helpful, we think.

Rach Tan: Thank you so much for coming onto our podcast.

Sangh Rakshita: Thank you so much, Rach. It's a pleasure talking to you.
