Gig workers—long classified as independent contractors—have traditionally lacked access to the legal protections granted to employees. With the rise of artificial intelligence (AI), platforms that hire gig workers are increasingly relying on algorithms to allocate work, measure performance, and manage workers. The use of AI—and the algorithmic discrimination that often accompanies it—poses new risks for both gig workers and the platforms themselves.
Algorithmic discrimination occurs when AI systems are trained on biased or otherwise unrepresentative data, resulting in systematically discriminatory outputs. This negatively affects important aspects of gig workers’ lives: who is assigned a task, how pay is calculated, and whose account is terminated. These disparities worsen existing inequalities—an issue that is further exacerbated by the global nature of these platforms, which may export discriminatory practices across borders.
Existing literature on algorithmic discrimination in the gig economy largely focuses on its impact on gig workers. One critical gap, however, remains underexplored within this discourse: its intersection with corporate governance. With the recent passage of EU laws in this area, namely the Platform Work Directive and the AI Act, that intersection will only deepen.
Our article examines this connection. We discuss how algorithmic bias exposes platforms and their directors and officers to new risks—including litigation, regulatory scrutiny, and reputational damage—and how corporate boards can proactively mitigate these emerging risks.
The Corporate Governance Gap
UK company law embraces the concept of enlightened shareholder value, which requires directors to promote the company’s long-term success. This includes considering stakeholders whose legitimate interests are affected by the company’s decisions.
Simultaneously, corporate boards are also responsible for overseeing the corporation’s risk profile. This includes identifying and managing risks to ensure the corporation carries out its legal and ethical obligations.
The EU’s issuance of new binding laws in this area marks a global turn in how gig workers are treated. Gig workers’ stakeholder rights are no longer an afterthought. For platforms, especially those operating in the EU, how they treat gig workers is not just an ethical decision but, increasingly, a compliance requirement.
The Turning Tide: New Regulations with Teeth
The Platform Work Directive specifically addresses platforms’ treatment of gig workers in the age of AI. It presumes that gig workers are the platform’s employees unless the company proves otherwise, shifting the burden of classification onto platforms. The Directive also requires algorithmic transparency, human oversight of key algorithmic decisions, and workers’ input on changes to algorithms that directly affect them.
The AI Act, which regulates AI systems generally, also applies to platforms. The Act classifies AI systems into different risk levels. AI systems that make work-related decisions, such as those platforms use to allocate work to gig workers, would be considered ‘high-risk’ and subject to strict requirements around risk assessments, bias mitigation, transparency, and human oversight. Non-compliance is expensive: violating companies may face fines of up to €35 million or 7% of global turnover, whichever is higher.
These laws signal a major shift in the global regulation of platforms. They are extraterritorial in nature and come with hefty penalties. More importantly, they treat the corporate misuse of AI systems as a significant governance issue, shifting responsibility towards the corporations that oversee those systems.
While the Platform Work Directive and AI Act do not directly apply to the UK or other common law jurisdictions, they may nonetheless affect companies whose activities have an EU nexus. As seen with the General Data Protection Regulation (GDPR), EU regulations can have ripple effects globally (the Brussels Effect, a concept developed by Anu Bradford). Accordingly, these emerging laws may similarly influence how gig workers are protected from algorithmic management worldwide. They also serve as references for addressing emerging issues at the intersection of AI and law, particularly as the UK and other common law countries develop their own approaches.
The Future of Corporate Governance
Our article discusses how these developments will affect corporate boards and executives. The legal and enforcement risks associated with the Platform Work Directive and the AI Act may result in expensive penalties for platforms and subject their decision-makers to heightened personal exposure. Directors and officers who fail to establish risk-management systems for algorithmic management will be unprepared for these new legal and regulatory risks. Mitigation measures include conducting AI audits to check for bias and compliance, and re-evaluating (and, where appropriate, re-classifying) gig workers’ employment status. At the board level, corporations can establish committees that address AI and cybersecurity risks. Further, they can create AI ethics committees to evaluate the ethical implications of deploying new AI technologies.
A Watershed Moment for Platforms and Gig Workers
These new laws suggest we are witnessing a watershed moment for platform governance globally. Perhaps the most enduring impact they will have will be on culture—similar to the GDPR, which has fundamentally reshaped corporate culture and stakeholder expectations towards data protection in the UK and globally.
Noncompliance with these nascent laws is costly. But beyond regulatory and compliance risks, these laws also offer a pivotal opportunity: a chance for platforms and their boards to lead in deploying AI responsibly—and to get governance right from the beginning.
The authors’ complete article can be found here.
David S. Lee is an Associate Professor at HKU Business School, University of Hong Kong.
Felicia Feiran Chen writes on the emerging intersections of technology law and corporate governance.