
The Case for Automation Rights: Rethinking AI Regulation

Author(s)

Orly Lobel
Warren Distinguished Professor of Law, Director of the Center for Employment and Labor Policy (CELP), and a founding member of the Center for Intellectual Property Law and Markets at the University of San Diego

In the rapidly evolving landscape of artificial intelligence, our legal frameworks face the challenge not only of keeping pace, but also of propelling the best, most socially valuable technologies forward. While numerous bills and regulations aim to restrict AI use and mitigate its risks, regulation and public debate have largely neglected a crucial question: the potential need to mandate AI deployment when it demonstrably outperforms human decision-making.

The Paradox of AI Regulation

Current regulatory trends, exemplified by the EU AI Act and various US federal and state bills, focus heavily on limiting AI applications deemed ‘high-risk’. This approach, though generally well-intentioned, too often fails to consider the comparative advantages of AI over human decision-making. Indeed, we are at times witnessing a paradox: the higher the stakes, the more likely legislative proposals are to insist on human intervention and to slow the growth of the technology, even when evidence suggests AI could be safer and more effective.

The Need for Comparative Analysis

To develop more balanced and effective AI policies, I have argued in my research that policymakers must shift their focus to a comprehensive comparative analysis of AI and human performance. This analysis should consider a range of factors:

  1. Accuracy, effectiveness, and consistency in achieving desired outcomes
  2. Scalability, accessibility, and cost
  3. Transparency and explainability
  4. Traceability and error detection
  5. Potential for improvement and advancement trajectory
  6. Protection of rights and liberties, including equality and privacy
  7. Applicable liability frameworks

By evaluating these factors comparatively rather than in absolute terms, we can make more informed decisions about when, how, and where to deploy AI systems.

Examples of Potential Automation Rights

Several areas could benefit from a shift towards ‘automation rights’: the right to demand, and the duty to deploy, AI when it outperforms humans.

  1. Transportation: Despite evidence suggesting that autonomous vehicles could significantly reduce accidents, legislation often seeks to slow their deployment.
  2. Healthcare: As I have written in my book The Equality Machine, AI systems have shown superior performance in various diagnostic tasks, yet their adoption faces regulatory hurdles.
  3. Public administration, regulatory compliance, and enforcement of the law: From criminal justice to child welfare, AI tools may under certain circumstances enhance the fairness, consistency, and efficiency of decision-making.
  4. Employment: At every stage of the employment relationship, AI-based software could help address longstanding pay inequities more effectively than current human-centered approaches.

Overcoming Barriers to Rational AI Adoption

Several factors contribute to the reluctance to embrace AI improvements:

  1. Status quo bias: our tendency to prefer the current state of affairs, even when change could be beneficial.
  2. Loss aversion: overemphasizing potential losses from AI adoption while undervaluing potential gains.
  3. Distrust of the artificial: a preference for human decision-making, even when it is demonstrably less reliable.

To realize the full potential of AI while anticipating and preventing its risks, we need a paradigm shift in our regulatory approach. We must:

  1. Develop frameworks for mandating AI deployment in areas where it consistently outperforms humans.
  2. Foster rational trust in AI through education and transparency.
  3. Address the societal impacts of AI adoption, particularly in the labor market, through robust policy measures.
  4. Reconsider blanket disclosure requirements about AI use, focusing instead on meaningful transparency that enhances trust and safety.

As AI continues to advance, it is crucial that our legal and policy frameworks evolve not only to protect against potential harms but also to ensure that we harness its benefits for the greater good. The concept of automation rights offers a balanced approach to this challenge.

Orly Lobel is Warren Distinguished Professor of Law and Director of the Center for Employment and Labor Policy (CELP), University of San Diego.

This post is published as part of the series 'How AI Will Change the Law'.
