In an apparent attempt at a joke, a Twitter user sent Elon Musk a Business Insider tweet featuring a driverless Tesla, asking him to confirm that this development in “humanless automation” would not result in a “robotic apocalypse.” Musk replied via tweet, reaffirming his oft-repeated position that it is not automation per se, but deep AI, that poses the greater “apocalyptic” risk to humanity:
Disruption certainly. Deep AI is the real risk, though, not automation.
— Elon Musk (@elonmusk) June 9, 2017
Disruption may cause us discomfort, but it’s not a threat in and of itself. However, Musk and others do see the potential for deep AI to be world-shattering, at least for humans.
It’s easy to understand why some are worried. AIs have already learned to encrypt their own messages. Jürgen Schmidhuber, considered the father of deep learning, predicts that there will be trillions of self-replicating robot factories along our Solar System’s asteroid belt by 2050. He also thinks that robots will eventually explore the galaxy on their own, driven by their own curiosity and setting their own agendas with little human oversight. And, perhaps most disturbing, scientists working with Google’s DeepMind tested whether AIs are more prone to cooperation or competition — and found that it can go either way: depending on the situation, the agents developed either “killer instincts” or a cooperative mindset.
Musk’s proposed solution to this potential threat is his famous neural lace concept. In brief, this ambitious project would use easily injectable electrodes to form a mesh, or “lace,” over the brain. The lace could both stimulate and interpret the brain’s electrical activity, and would eventually merge with the brain entirely, making human and AI part of the same organism.
The key isn’t halting progress, or even fearing AI — it’s learning how to merge with it successfully.