While artificial intelligence is being touted by many as the technology that may end humankind in the not-so-distant future, little attention has been paid to how blockchain technology could contribute to a dystopian future in which faulty smart contracts wrongly interact with the Internet of Things (IoT) to the detriment of humans.
Artificial Responsibility and the Blockchain
In an article titled “How Blockchains Increase Artificial Responsibility,” Adam Kolber, Professor of Law at Brooklyn Law School, coins the term “artificial responsibility.”
Artificial responsibility refers to “the ability of machines to control important matters with limited opportunities for humans to veto decisions or revoke control.”
Despite the discussion surrounding this topic, Kolber is not primarily concerned with how intelligent machines may become, and he does not believe that machines will enslave humans no matter how intelligent they get. Instead, he worries about the amount of control that AI-driven machines will be given, especially when humans cannot take that control back.
“Even if an AI is a little smarter than the smartest human, it doesn’t mean it can enslave us. Dominance over others isn’t just a function of intelligence. We needn’t be especially worried about a machine superintelligence that has no tangible control over the world unless it effectively has substantial control because of its ability to coax or manipulate us into doing its bidding,” Kolber wrote.
Instead, he believes that the “real concern is how easy it will be to wrest control back from machines that no longer serve our best interests and to avoid giving them control in the first place.”
This is where the blockchain and, more specifically, self-executing smart contracts come into play. Kolber argues that machines effectively run like a DAO (decentralized autonomous organization) have the potential to make adverse decisions in a fully automated manner: once the smart contracts have been coded and deployed, human users have no control over the machines’ programming and cannot intervene if something goes wrong.
Kolber cites TheDAO as an example of artificial responsibility gone wrong. TheDAO was the first decentralized autonomous organization, set up in 2016 to act as a de facto venture capital fund for blockchain projects. Shortly after TheDAO managed to raise over $160 million worth of ether, the smart contract that the DAO was based on was hacked.
During the DAO hack, a hacker was able to exploit a vulnerability in the DAO’s smart contract to siphon off over $50 million worth of ETH out of the $160 million the autonomous smart contract-based venture had raised. While the damage done – due to the reliance on “not-so-smart contracts” – was only financial, it serves as a cautionary example of the potential dangers of smart contract-driven autonomous organizations that do not allow for any human intervention.
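The DAO exploit is widely attributed to a reentrancy flaw: the contract sent funds to an external caller before updating its internal balance, so the caller could re-enter the withdrawal function and drain the same balance repeatedly. The Python sketch below is a simplified, hypothetical simulation of that pattern (it is not TheDAO’s actual code; the class and method names are illustrative):

```python
# Toy simulation of a reentrancy vulnerability, loosely modeled on the
# 2016 DAO hack. The flaw: withdraw() makes the external call (callback)
# BEFORE zeroing the caller's balance, so the callback can re-enter
# withdraw() and be paid the same balance again.

class VulnerableVault:
    def __init__(self):
        self.balances = {}

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user, callback):
        amount = self.balances.get(user, 0)
        if amount > 0:
            callback(amount)           # external call happens first...
            self.balances[user] = 0    # ...balance is reset only afterwards


class Attacker:
    def __init__(self, vault):
        self.vault = vault
        self.stolen = 0
        self.reentries = 0

    def receive(self, amount):
        self.stolen += amount
        # Re-enter withdraw() before the balance has been reset.
        if self.reentries < 2:
            self.reentries += 1
            self.vault.withdraw("attacker", self.receive)


vault = VulnerableVault()
vault.deposit("attacker", 100)
attacker = Attacker(vault)
vault.withdraw("attacker", attacker.receive)
print(attacker.stolen)  # 300: a single 100 deposit, drained three times
```

The standard fix is the "checks-effects-interactions" ordering: update the balance before making the external call, so a re-entrant call sees a zero balance and pays out nothing.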
How Faulty Smart Contract Code Could Kill
In his paper titled “Not-So-Smart Blockchain Contracts and Artificial Responsibility,” Kolber states:
“[…] It’s easy to see how not-so-smart contracts like those underlying TheDAO can be disastrous. Given that one study flagged tens of thousands of smart contracts with possible vulnerabilities, bugs like the one that plagued TheDAO are inevitable.”
Kolber makes an important point. Faulty or vulnerable code could leave consumers of smart contract-powered Internet of Things products and services helpless if there is no “off” button on the programs these machines are running.
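The “off button” Kolber alludes to corresponds to what smart contract developers call a circuit breaker or pausable pattern: a designated human guardian retains the power to halt automated execution. A minimal sketch, assuming a hypothetical IoT door-lock contract (the names `PausableContract`, `guardian`, and `unlock_door` are illustrative, not from any real system):

```python
# Circuit-breaker ("off button") pattern: a human guardian can pause the
# contract, after which automated actions are refused until it is resumed.

class ContractPaused(Exception):
    """Raised when an action is attempted while the contract is paused."""


class PausableContract:
    def __init__(self, guardian):
        self.guardian = guardian   # the human allowed to hit the off button
        self.paused = False

    def pause(self, caller):
        if caller != self.guardian:
            raise PermissionError("only the guardian may pause")
        self.paused = True

    def unlock_door(self):
        # Every automated action checks the breaker first.
        if self.paused:
            raise ContractPaused("execution halted by human intervention")
        return "door unlocked"


contract = PausableContract(guardian="alice")
print(contract.unlock_door())  # door unlocked
contract.pause("alice")        # the guardian intervenes
# contract.unlock_door() would now raise ContractPaused
```

A fully autonomous DAO, by design, has no such guardian, which is precisely the risk Kolber identifies.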
In his paper, he argues that if TheDAO had survived and not been hacked, it would likely have invested in Slock.it, the company that was partly behind TheDAO’s launch. Slock.it is a blockchain startup that aims to connect real-world devices with smart contract technology.
Through the work of Slock.it and similar startups, we could someday see smart contracts controlling door locks, car ignitions, and even medical devices, which means that blockchain-based smart contracts could soon have a substantial impact on the physical world.
The success of the Internet of Things combined with the growth in smart contract technology could, according to Kolber, lead to products such as DAO hotels, DAO taxis, and even DAO banks. While these would – in theory – benefit society by making these services faster, more efficient, and less labor-intensive, there is a substantial downside risk to users if the smart contracts turn out to be “not-so-smart contracts.”
DAO hotel guests could be locked out of or into their rooms, self-driving taxis could have unforeseen accidents, and DAO banks could be spending money on your behalf without you even knowing it.
“The combination of the blockchain and the internet of things promises tremendous opportunities to promote creativity, free expression, deliberative democracy, and economic efficiency. But given the complexity of the entire apparatus, it also raises the risk that we are creating machines that will prove difficult or even impossible to rein in,” Kolber concludes.
In light of the early stage of this technology and the high possibility of coding errors, smart contract-driven machines will need to undergo thorough testing and should probably also not be fully autonomous if they have the potential to harm humans.