How Blockchains Increase Artificial Responsibility

There has been an explosion of articles in the popular press about the dangers of artificial intelligence (‘AI’). Some fear that machines with human-like intelligence could someday develop goals at odds with our own. For example, a suitably intelligent AI that seeks to maximize the number of paper clips might, as Nick Bostrom has suggested, enslave humanity if doing so will best achieve its cold, calculated objective.

But as these fears imply, what concerns us is not so much machine intelligence itself. What we’re really worried about is giving machines control over important matters. Control and intelligence are not the same thing. I use the expression ‘artificial responsibility’ to refer to what scares us more directly: the ability of machines to control important matters with limited opportunities for humans to veto decisions or revoke control.

Even if an AI is a little smarter than the smartest human, that doesn’t mean it can enslave us. Dominance over others isn’t just a function of intelligence. We needn’t be especially worried about a machine superintelligence that has no tangible control over the world, unless its ability to coax or manipulate us into doing its bidding effectively gives it substantial control. Our real concern is how easy it will be to wrest control back from machines that no longer serve our best interests, and to avoid giving them control in the first place.

Artificial responsibility is related to artificial intelligence because we might be inclined to give greater control to more intelligent machines. But even unintelligent machines can be dangerous when they’re given a lot of responsibility. And herein lies the connection to bitcoin and blockchains more generally. Even though the blockchain technology that enables bitcoin is low on the scale of artificial intelligence (so low it is not usually thought of as artificially intelligent at all), it is nevertheless surprisingly high on the scale of artificial responsibility.

Bitcoin is a kind of digital currency invented in 2008 by a person or group of people pseudonymously known as Satoshi Nakamoto. The bitcoin ecosystem enables users to store and transfer value, in the form of bitcoin, across a decentralized computer network. While heady math underlies the cryptographic principles that keep bitcoin secure, most would say the network is rather unintelligent. It doesn’t recognize our voices or faces, and it certainly wouldn’t pass a Turing Test.
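To see just how mechanical the network is, consider proof-of-work, the process by which bitcoin transactions are confirmed: computers simply try nonce after nonce until a hash meets a target. Below is a deliberately simplified Python sketch (the real protocol hashes an 80-byte binary block header against a numeric target rather than checking a hex prefix):

```python
import hashlib

def mine(block_header: str, difficulty: int = 4) -> int:
    """Try nonce after nonce until the block's SHA-256 hash starts with
    `difficulty` zero hex digits. Pure mechanical trial and error."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# A stand-in header; real bitcoin headers are binary data, not strings.
print(mine("previous-hash|merkle-root|timestamp"))
```

Nothing in that loop resembles thought, yet loops like it secure billions of dollars in value.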

Nevertheless, the bitcoin network can accomplish quite a bit with limited human intervention. If bitcoin (or a competitor coin) is able to scale up properly, it could enable millions of people to easily transfer substantial value without the intervention of banks or other trusted intermediaries. Transactions that take banks days to accomplish, such as clearing checks, will be done with cryptocurrency in minutes or seconds. Unintelligent as it may be, bitcoin still has substantial artificial responsibility: it accomplishes the important task of transacting billions of dollars in value across a network spread around the globe with no person, bank, or government in charge of it.

As I discuss in a forthcoming article, the blockchain technology that underlies bitcoin can be used for more than just digital currencies. One can create what are called ‘smart contracts’ and then put a group of smart contracts together to make a ‘decentralized autonomous organization’ (‘DAO’). The first high-profile DAO, oddly called ‘TheDAO,’ was formed in 2016 and used blockchain smart contracts to allow strangers to come together online to vote on and invest in venture capital proposals. Newspapers raved about the $160 million it quickly raised, even though it purported to have no central human authority, including no managers, executives, or board of directors.
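To illustrate what a DAO automates, here is a minimal, hypothetical Python model of token-weighted voting (TheDAO itself was written as Solidity smart contracts running on the Ethereum blockchain; every name below is invented for illustration):

```python
class ToyDAO:
    """A deliberately simplified, hypothetical model of a DAO. Voting
    power is proportional to tokens held; no manager approves anything."""

    def __init__(self, token_balances):
        self.tokens = token_balances   # e.g. {"alice": 60, "bob": 40}
        self.proposals = {}            # proposal id -> tallied votes

    def propose(self, proposal_id):
        self.proposals[proposal_id] = 0

    def vote(self, voter, proposal_id):
        self.proposals[proposal_id] += self.tokens[voter]

    def passes(self, proposal_id, quorum):
        # Once the threshold is met, code alone releases the funds.
        return self.proposals[proposal_id] >= quorum

dao = ToyDAO({"alice": 60, "bob": 40})
dao.propose("fund-startup-x")
dao.vote("alice", "fund-startup-x")
print(dao.passes("fund-startup-x", quorum=51))  # True: proposal is funded
```

The point of the sketch is the last line: once the vote threshold is met, funding follows from code alone, with no human in the loop to approve or refuse it.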

TheDAO itself, however, is now a cautionary tale. A bug in its smart contract code was exploited to drain more than $50 million in value. And here we can see our willingness to endow blockchains with artificial responsibility: despite the loss of funds, there was no easy mechanism and certainly no central authority that could recover the money. It would take substantial agreement among the community running the blockchain platform used by TheDAO to mitigate the damage. Eventually, such consensus was reached. But it caused a continuing rift in the community, and this solution may not be available in the future, as those running a blockchain will not easily come together to make alterations (indeed, blockchains are often advertised as immutable and ‘unstoppable’). So not only is it difficult to revoke the control given to a DAO, but many people also prefer not to do so as a matter of principle. Some purists denounced efforts to mitigate TheDAO exploit, arguing that the alleged hacker simply withdrew money in accordance with the organization’s agreed-upon contractual terms in the form of computer code.
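The flaw, widely described as a reentrancy bug, came down to ordering: a withdrawal routine paid funds out before updating its internal ledger, so a caller could trigger the payout again and again before its balance was ever zeroed. The Python sketch below models that mistake in miniature (the real exploit targeted TheDAO’s Solidity code; all names here are hypothetical):

```python
class VulnerableVault:
    """Toy model of a send-before-update bug. All names are hypothetical;
    the real exploit abused a reentrancy flaw in TheDAO's Solidity code."""

    def __init__(self):
        self.balances = {"attacker": 10}
        self.pot = 100   # funds belonging to everyone else

    def withdraw(self, account, receive_hook):
        amount = self.balances[account]
        if amount > 0:
            receive_hook(amount)        # step 1: pay out first ...
            self.balances[account] = 0  # step 2: ... update the ledger last

vault = VulnerableVault()
stolen = []

def malicious_hook(amount):
    stolen.append(amount)
    vault.pot -= amount
    if vault.pot > 0:  # re-enter withdraw() before the balance is zeroed
        vault.withdraw("attacker", malicious_hook)

vault.withdraw("attacker", malicious_hook)
print(sum(stolen))  # 100: ten times the attacker's actual balance
```

Swap the two steps in withdraw() and the attack fails. On an immutable blockchain, though, even a one-line fix like that cannot simply be deployed once the flawed code controls the funds.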

TheDAO had tremendous ‘artificial responsibility’ in that we gave it considerable control that couldn’t be easily revoked or reined in. Not-so-smart contracts in the future may prove even more dangerous: guests at a DAO hotel might be locked out of their rooms; DAO self-driving cars might drive off bridges. Blockchains have great promise. But we should be thoughtful about how we endow machines with artificial responsibility, even when (and perhaps especially when) these machines are not very intelligent.

Adam J. Kolber is Professor of Law at Brooklyn Law School and a Visiting Fellow at NYU Law School’s Center for Research in Crime and Justice. This post is adapted from an article forthcoming in the Stanford Technology Law Review and originally appeared on PrawfsBlawg.
