I agree with Rachel Lomasky and Ryan Calo about two things: that Sophia the robot is a fake, and that we are unlikely to get human-level artificial intelligence in the next decade or two. But Calo’s claim that “Artificial intelligence is somewhere between a century and infinity away from approximating all aspects of human cognition,” supported by a footnote asserting “overwhelming scientific consensus,” betrays a confidence in his ability to predict the distant future that I regard as wholly unjustified. As he surely knows, before AlphaGo demonstrated that it was the strongest Go player in the world, the consensus in the field was that a program capable of beating a human champion was at least five to ten years away.
Achieving human-level AI depends on solving two quite different problems, one in computer technology and one in AI programming. It requires a computer with about the power of a human brain for the AI to run on. If computers continue to improve at their current rate, that should be achievable sometime in the next few decades. It also requires a program; we do not know whether such a program can be written, or, if so, how and when. That introduces a second and much larger source of uncertainty. As AlphaGo demonstrated, improvements in programming are sometimes sudden and unexpected, so it could be next week. Or next century. Or never.
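To make the “next few decades” estimate concrete, here is a minimal back-of-envelope sketch in Python. All three numbers in it are illustrative assumptions of mine, not figures from the essay: an order-of-magnitude guess at the brain’s computing power, a guess at the compute available to a single project today, and an assumed hardware doubling time.

    import math

    BRAIN_OPS_PER_SEC = 1e16      # assumed: order-of-magnitude estimate of brain compute
    TODAY_OPS_PER_SEC = 1e14      # assumed: compute available to one project now
    DOUBLING_TIME_YEARS = 2.5     # assumed: hardware doubling time if current trends hold

    # Years until hardware parity = doublings needed x time per doubling
    doublings_needed = math.log2(BRAIN_OPS_PER_SEC / TODAY_OPS_PER_SEC)
    years_to_parity = doublings_needed * DOUBLING_TIME_YEARS
    print(f"Rough hardware parity in about {years_to_parity:.0f} years")

With these inputs the answer is roughly seventeen years; change any one assumption by a factor of ten and the answer shifts by only about eight years. That is why the hardware half of the problem, unlike the programming half, lends itself to prediction at all.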
Predictions fail. In the 1960s the overwhelming consensus—every significant figure save Julian Simon, widely regarded as a crank—was that population increase was a looming global catastrophe, dooming large parts of the world to continued and growing poverty. Paul Ehrlich’s prediction of unstoppable mass famines condemning hundreds of millions of people to death in the 1970s was within that consensus. What has happened since then is the precise opposite of the prediction: extreme poverty has declined steeply, and calories per capita in poor countries have trended steadily upward. A confident prediction a century into the future on AI, population, climate, or any major issue other than the orbits of asteroids is worth very nearly nothing.
If it can be done, will it be? Zoltan Istvan, concerned about political opposition to the parallel issue of improvements in humans, writes: “This means most of the leaders in our country fundamentally believe the human body and mind is a God-given temple not to be tampered with unless changed by the Almighty.” That claim is clearly false, since human improvements to the human body—stents, vaccines, pacemakers, prosthetics—have long been taken for granted. It is true that people tend to be suspicious of rapid change, but that has little to do with religion. Consider, for the most notable current example, how many people make the jump from “climate change” to “catastrophe” without paying much attention to the links in between. No religion required.
While an instantaneous jump to genetic engineering of humans might be blocked by conservative bias, a gradual one will not be. Artificial insemination, in vitro fertilization, selection of egg and sperm donors for desired characteristics, and pre-embryo selection are already happening. Somatic gene therapy is an accepted technology, while germ-line gene therapy is still controversial. Similar continuous chains of improvement could lead from the computer I am writing this on to a programmed computer with human, or more than human, intelligence.
A further reason that such technologies, if possible, are unlikely to be blocked is that they do not have to be introduced everywhere, only somewhere. The world contains a large number of countries with different ideologies, religions, politics, policies, and prejudices. As long as at least one of them is willing to accept a useful technology, even a potentially dangerous one, it is likely to happen. The natural response to some futures is “stop the train, I want to get off.” As I pointed out some years back, this train is not equipped with brakes.
While the transhuman project starts from a higher level than the AI project—we already have human-level humans, after all—improvements from that level will be more difficult. Carbon-based life is an old technology, and Darwinian evolution has already explored many of its possibilities. Silicon-based intelligence is a newer technology with more room for improvement. A human with the intellect of Einstein might be useful, might be dangerous, but is not novel; the human race has, after all, not only survived but benefited from Einstein, da Vinci, von Neumann. A very strong human eight feet tall would be formidable in one-on-one combat, but considerably less formidable than a tank—and there are already lots of tanks. Improvements in biological life are likely and interesting, but the range of plausible outcomes is less radical, less threatening, than human-level, or eventually more-than-human-level, AI.