How Big-Tech Barons Smash Innovation
This post is part of a special series including contributions to the OBLB Annual Conference 2022 on ‘Personalized Law—Law by Algorithm’, held in Oxford on 16 June 2022. This post comes from Ariel Ezrachi, who participated on the panel on ‘Law by Algorithm’.
When we think about the digital economy, many of us immediately think about innovation. After all, digital platforms operate as coral reefs that attract innovators, disruptors, and new business models. And there is little doubt about the significant investments in research and development by the leading tech firms, GAFAM (Google, Apple, Meta/Facebook, Amazon and Microsoft). Four of these companies (all but Amazon) collectively spent over $451.6 billion on R&D between 2010 and 2020. To put that figure into perspective, their combined R&D expenditure over those eleven years exceeded the 2020 gross domestic product of more than 160 countries, including Nigeria, whose GDP ranked 27th out of 194 countries that year.
But once we move beyond these impressive numbers, a more complex and somewhat worrisome tale emerges, one of distorted innovation, exclusion, and toxicity. In our new book How Big-Tech Barons Smash Innovation―and How to Strike Back (HarperCollins 2022), we explore how a few big tech firms, by controlling significant ecosystems, distort the paths of innovation and undermine disruption in order to safeguard their own value chains. While these Tech Barons promote innovations that support their ecosystems, they quash disruption that threatens their profit models. And, because they control significant access points to markets, their strategies effectively distort the future paths of innovation and diminish its plurality.
How so? Among the tools they have (which earlier monopolies lacked) is the nowcasting radar: Tech Barons can identify market patterns within their ecosystems and neutralize innovation threats early. Facebook, for example, acquired the data-security app Onavo to track users’ smartphone activity; that technology was central to its acquisitions of perceived competitive threats, including WhatsApp. With a clear view of risks beyond the horizon, the Tech Barons can pursue strategies aimed at distorting the supply of disruptive innovation. This includes, as our book discusses, their many weapons to exclude disruptors from their ecosystems. In speaking with disruptive innovators, we chronicle the toll on innovation when Tech Barons refuse access, reduce interoperability, copy technologies to deprive disruptors of the scale necessary to survive, restrict disruptors’ access to long-term funding, and, of course, acquire these disruptors.
Tech Barons can also manipulate our demand for innovation. We seem free to choose the technologies we want, while in reality we often opt for the technologies the Tech Barons favor. The Tech Barons fortify their position by adding friction to disruptive innovations (such as through the dark patterns that have captured the interest of policymakers on both sides of the Atlantic), while the path to their own products and services, and to those they favor, is frictionless.
What happens when Tech Barons can distort the supply of and demand for innovation? Many bad things, indeed. Among them is the rise of toxic innovations. As the plurality of innovation diminishes, its quality and nature change. Value-creating disruptive innovations are gradually displaced by innovations that principally extract (or destroy) value from individuals and business users and primarily benefit the Tech Barons. Illustrative are technologies that go far beyond predicting our behavior to genuinely frightening methods of exploitation, manipulation, and value extraction.
Many of us already sense that these toxic innovations weaken social cohesion, increase tribalism, and undermine democracy. But avoiding Tech Barons’ ecosystems, even if we could, is not the answer. As our book explores, the ripple effects from their ecosystems’ toxic innovation extend far beyond the digital economy. These toxic innovations ultimately erode our social and political fabric and harm our autonomy, democracy, and well-being. We see these effects, for example, when looking at the business models and retention strategies at the heart of social media and online behavioral advertising.
Many are familiar with the story of Cambridge Analytica, but, importantly, it is only one example of the microtargeting, manipulation, and deception of voters spawned by these toxic innovations. It is a symptom of a spreading problem in which data advantage and negative messaging are the ultimate tools to manipulate behavior. Political campaigns are now designed to trigger the desired emotional reaction in individual voters. With disruptors crushed, these innovations fortify the prevailing value chain and business model. And, while the Tech Barons offer tools to mitigate some of these effects (like requiring certain political ads to include disclaimers naming the entity that paid for them), they cannot prevent their platforms from being weaponized or their toxic innovations from being deployed to undermine democracy. They must feed the beast, and that beast is destroying us.
So, will Europe’s Digital Markets Act, Digital Services Act, and Data Act, along with increased antitrust enforcement, fix the problem? Our book examines the limitations of antitrust enforcement and of the current regulatory proposals (with an apt analogy to duck hunting). There simply is no clear-cut fix to deter toxic innovation and promote disruptive innovations that actually create value.
Ultimately, the current incentives and policies have put the digital economy on the wrong trajectory. With these faults at the base of our policies, it is little wonder that we are off course. After all, any navigator knows a basic rule: a small heading error, insignificant on a short voyage, grows the longer one travels. It is known as the ‘1 in 60 rule of thumb’: a one-degree error in navigation will put a pilot one mile off her destination for every 60 miles of travel. This rule helps illustrate how seemingly insignificant flaws in past assumptions and policies have led us off course. It also helps us appreciate the impact and actual costs of past economic and industrial policies that failed to adapt to the changing dynamics of competition and innovation in the digital economy. Considering the supersonic speed at which we travel in the digital economy, and the significant degrees of error, it is perhaps not surprising that we find ourselves at a crisis point.
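For readers who want to see the arithmetic behind the analogy, here is a minimal sketch (the figures below are purely illustrative, not from the book) of how the 1 in 60 rule compounds a small heading error with distance travelled:

```python
import math

def off_course_miles(error_degrees: float, distance_miles: float) -> float:
    """Rule-of-thumb drift: roughly 1 mile off course per 60 miles travelled
    for each degree of heading error."""
    return (error_degrees / 60.0) * distance_miles

def off_course_exact(error_degrees: float, distance_miles: float) -> float:
    """Exact lateral drift, for comparison with the rule of thumb."""
    return distance_miles * math.tan(math.radians(error_degrees))

# A 1-degree error over 60 miles: about 1 mile off course.
print(off_course_miles(1, 60))              # 1.0
# A 2-degree error over 600 miles: about 20 miles off course.
print(off_course_miles(2, 600))             # 20.0
print(round(off_course_exact(2, 600), 1))   # ~21.0, close to the rule of thumb
```

The point of the sketch is simply that the drift scales with distance: the same small error that is negligible over a short trip becomes a large miss over a long one.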
So, what can be done? To ensure that innovation delivers on its promise, we offer three key focal points to guide the design and enforcement of innovation policies. As we set out to design policies for the future, we must take account of the value of innovation, acknowledge the incentives at stake, and seek to promote the diversity of innovation. Most importantly, we cannot simply assume that the current toxic innovation trajectory will self-correct. But there is reason for hope, especially if we invest in the engine of innovation, which isn’t the Tech Barons. Instead, one surprising potential avenue is cities.
Ariel Ezrachi is the Slaughter and May Professor of Competition Law at the University of Oxford.
Maurice E. Stucke is the Douglas A. Blaze Distinguished Professor of Law at the University of Tennessee (Knoxville) and was an Academic Visitor at the University of Oxford.
This post is part of an OBLB series on Personalised Law—Law by Algorithm. The introductory post of the series is available here. Other posts in the series can be accessed from the OBLB series page.