Dysfunctional Design and Autocratic Corporate Governance via AI
In mid-2017, having paid little attention to Artificial Intelligence (AI) beyond noticing that it had replaced “globalisation” as the focus of ambitious middle-class conversation, I found my inbox starting to fill with queries about how AI could be given legal personality. The trigger was a European Parliament resolution suggesting a form of legal personhood for AI. Given that one of my areas of core expertise is legal personality, I found myself invited to conferences and into AI networks to discuss how this might work. A number of issues were immediately apparent at these events and in the research around them. Science fiction abounded, and many, many speakers spoke of AI impacts that existed only in novels as if they were real. Phrases like “machine learning”, “artificial neural networks” and “training the AI” were used casually and confusingly, and turned out merely to refer to statistical models using mass computational power to make decisions or predictions, not some HAL 9000-like superior intelligence. Crucially, there was a general misunderstanding of the limits of using statistical modelling to make decisions. In general, AI seemed not that intelligent. On another level, some academic work in the area was supported by the tech industry without, worryingly, those academics seeing, or declaring, a conflict of interest. Even when problems were recognized, the background radiation was that tech and AI were exceptional, world-changing forces for good. It was all somewhat reminiscent of the treatment of the tobacco industry in the 1950s.
My paper “Artificial Intelligence: The Very Human Dangers of Dysfunctional Design and Autocratic Corporate Governance” attempts to get to the heart of some of the general misunderstanding and misapplication of AI decision-making technology, and of the corporate governance arrangements of the leading companies behind it. Ultimately, it proposes a regulatory model that places the public rather than the private interest at the heart of AI implementation.
In the article, I first examine the nature of AI decision-making. I focus on its human design and impact, and conclude that the technology itself is not the cause of many problematic outcomes in the area: the causes are flawed human design, understanding and implementation. In short, most AI is not an intelligent technological agent but rather a technology designed, made and implemented by a particularly narrow group of humans. Crucially and ironically, given general perception, AI can have significant, problematic design bias and is unable to replicate previous human decision-making entirely. This is widely misunderstood, even within the AI tech industry.
In addition, some complex “black box” AI systems have been deployed without the designer, the operator or those subject to the decision having any way of understanding the basis of the AI’s decision. In some situations, such as image or language analysis, this might be fine. However, where understanding the basis of the decision is important, in areas such as medical diagnosis or autonomous vehicles, or where judicial review is significant, a lack of knowledge of the reasons for the decision can lead to incorrect treatment, the deaths of road users and innocent people going to jail. For computer scientists and engineers, the key is that the AI is operational; but ultimately, for these systems to be safe, lawful or useful in operation, the key is the basis upon which those operational decisions are made.
In some areas, AI does present the possibility of genuinely novel decision-making that humans could not do, or could not do as well. Some AI designers, however, understand that because AI is a product of human design, the potential exists to shape the world to their own private ends in these novel circumstances. This presents particular public interest concerns where private and public interests clash. Car manufacturers, finance companies and social media companies have been early adopters of AI in novel circumstances, leading to clashes with regulators over public safety and discrimination. If public interest requirements are not inserted in the coding design of genuinely novel AI decision-making, then the industry itself will insert its own private interests, which may work directly against the public interest. People who might otherwise have lived may literally die in the street.
An additional exacerbating factor driving potential AI harm is the general misconception that AI has superior intelligence to humans. This leads human users, for example in radiology treatment or in Boeing’s recent 737 Max autopilot-related crashes, to defer to the AI’s outcomes even when those outcomes are manifestly problematic. Such deference compounds the issues of design flaws, bias and the lack of explainability in “black box” AI, and may have even greater impact where AI is used by non-experts, for example in employment or public sector settings. However, despite these problems of poor design, bias, black box implementation, designing against the public interest and human deference to AI decisions, a huge driver of AI decision-making implementation is cost. The cost savings of AI over human decision-making can be enormous. If the AI is good enough, rather than better than humans, and the cost savings are huge, then it will get implemented, flaws and all.
While there are problematic humans on the technical side of designing and implementing AI, widening the human lens also reveals problematic humans behind the façade of the AI companies at the governance level. In the past five years, the global balance of leading AI development has split between the US and China. The second part of the paper considers the governance of the leading private sector AI developers: Uber, Amazon, Facebook, Microsoft, Google, Apple, IBM and Tesla. All have unusual governance structures designed, in general, to give control to a small group of insiders with tight connections to each other and very similar backgrounds and interests. Governance in these leading AI companies is, in general, unusually autocratic and unaccountable.
Overall, the article concludes that, given the rudimentary state of AI, the dominance of self-interested, autocratic governance in the leading AI companies and the enormous risks to society, AI should be treated in a similar manner to pharmaceutical products, with public interest regulation introduced through the medium of a state regulatory body, and that changes to the corporate governance regulation of tech companies are necessary.
Alan Dignam is Professor of Corporate Law at Queen Mary University of London.