The Future of Life Institute (FLI) letter “Research Priorities for Robust and Beneficial Artificial Intelligence,” signed by hundreds of AI researchers in addition to Elon Musk and Stephen Hawking, many of them representing government regulators and some sitting on committees with names like “Presidential Panel on Long Term AI Future,” offers a program professing to protect mankind from the threat of “super-intelligent AIs.” I take a contrarian view: should they succeed, rather than the promised salvation we will instead see a 21st-century version of the 17th-century Salem witch trials, in which technologies competing with AI are tried and burned at the stake, to much fanfare and applause from the mainstream press.
Before I proceed to my concerns, here is some background on AI. For the last 50 years AI researchers have promised to deliver intelligent computers, which always seem to be five years in the future. For example, Dharmendra Modha, in charge of IBM’s SyNAPSE “neuromorphic” chips, claimed two or three years ago that IBM “will deliver computer equivalent of human brain” by 2018. I have heard this echoed in the statements of virtually every recently funded AI and deep-learning company. The press accepts these claims with the same gullibility it displayed during the launch of Apple’s Siri, and hails the arrival of “brain-like” computing as a fait accompli. This is very far from the truth.
The investments, on the other hand, are real, with old AI technologies dressed up in the new clothes of “deep learning.” In addition to acquiring DeepMind, Google hired Geoffrey Hinton’s University of Toronto team as well as Ray Kurzweil, whose primary motivation for joining Google Brain seems to be the opportunity to upload his brain into a vast Google supercomputer. Baidu invested $300M in Andrew Ng’s deep-learning lab at Stanford University, Facebook and Zuckerberg personally invested $55M in Vicarious and hired Yann LeCun, the “other” deep-learning guru, Samsung and Intel invested in Expect Labs and Reactor, and Qualcomm made a sizable investment in BrainCorp. While some progress in speech processing and image recognition will be made, it will not be sufficient to justify the lofty valuations of these recent funding events.
While my background is in fact in AI, for the last few years I worked closely with the preeminent neuroscientist Walter Freeman at UC Berkeley on a new kind of wearable personal assistant, one based not on AI but on neuroscience. During this time I came to the conclusion that symbol-based computing technologies, including point-to-point “deep” neural networks (which are not neuroscience), cannot possibly deliver on the claims made by many of these well-funded AI labs and startups. Here are just three of the reasons:
- Every single innovation in the evolution of vertebrate brains was driven by advances in locomotion, and none of the new formations indicates the emergence of symbol processing in the cortex.
- Human intelligence is a product of resonating, coupled electric fields produced by massive populations of neurons, synapses and ion channels in the cortex, resulting in dynamic, AM-modulated waves in the gamma and beta ranges, not of static point-to-point neural networks.
- Human memories are formed in the hippocampus via the “phase precession” of theta waves, which transforms temporal events into the spatial domain without the use of symbols such as time stamps.
Each of the above three empirical findings invalidates AI’s symbolic-computation approach. I could provide more, but it is hard to fight the prevailing cultural myths perpetuated by mass media. Movies are a good example. At the beginning of the movie Transcendence, Johnny Depp’s character, an AI researcher (from Berkeley), makes the bold claim that “just one AI will be smarter than the entire population of humans that ever lived on earth.” By my calculation this estimate is off today by almost 20 orders of magnitude; it will take more than a few years to bridge that gap.
Which brings me back to the FLI letter. While individual investors have every right to lose their assets, the problem becomes far more complicated when government regulators are involved. Here are the main claims of the letter I have a problem with (quotes from the letter in italics):
- Statements like “There is a broad consensus that AI research is progressing steadily,” or even “progressing dramatically” (Google Brain signatories on the FLI web site), are simply not true. In the last 50 years there has been very little AI progress (more stasis than “steady” advance) and not a single major AI-based breakthrough commercial product, unless you count the iPhone’s infamous Siri. In short, despite the overwhelming media push, AI simply does not work.
- “AI systems must do what we want them to do” raises the question of who “we” are. The letter includes 92 references, all of them from computer scientists, AI researchers and political scientists; there are many references to an approaching, civilization-threatening “singularity” and several to the possibility of “mind uploading,” but not a single reference from a biologist or a neuroscientist. To call such an approach to the study of intellect “interdisciplinary” is simply not credible.
- “Identify research directions that can maximize societal benefits” is outright chilling. Again, who decides what research is “socially desirable”?
- “AI super-intelligence will not act in accordance with human wishes and will threaten humanity” is just cover for an attempted power grab by the AI camp over competing approaches to the study of intellect.
Why should government regulators support a technology that has repeatedly failed to deliver on its promises for 50 years? Newly emerging branches of neuroscience, which have made major breakthroughs in recent years, hold much greater promise, and in many cases they expose glaring weaknesses of the AI approach. It is precisely these groups that will suffer if AI is allowed to “regulate” the direction of future research on intellect, whether human or “artificial.” Neuroscientists study actual brains with imaging techniques such as fMRI, EEG and ECoG, and then derive predictions about brain structure and function from the empirical data they gather. The more neural research progresses, the clearer it becomes that the brain is vastly more complex than we thought just a few decades ago.
AI researchers, on the other hand, start with the a priori assumption that the brain is quite simple, really just a carbon version of a von Neumann CPU. As the Google Brain researcher and FLI letter signatory Ilya Sutskever recently told me, “[The] brain absolutely is just a CPU and further study of brain would be a waste of my time.” This is an almost word-for-word repetition of Noam Chomsky’s famous statement, made decades ago, “predicting” the existence of a language “generator” in the brain.
The FLI letter signatories say: do not worry, “we” will allow “good” AI and “identify research directions” in order to maximize societal benefits, eradicating disease and poverty. I believe it is precisely the newly emerging neuroscience groups that would suffer if AI researchers were allowed to regulate the direction of research in this field. Why should “evidence” like this allow AI scientists to control what biologists and neuroscientists can and cannot do?
It is quite possible that the signatories’ motives are pure. But at the moment the AI lobby has a near monopoly on forming public opinion and attracting government dollars through the influence of a compliant media. Indeed, the government regulators in this space are all AI researchers, often funding AI startups with taxpayer dollars and later taking jobs with the very same companies they funded and were supposed to regulate. And often, when government regulators lead, private VC funds follow in a sheep-like “Don’t fight the Fed” movement.