Zoltan Istvan and I agree that regulators should be kept far away from AI, although we arrived there from different directions. Legislators recently displayed their ignorance and incompetence when interviewing Mark Zuckerberg about Facebook, misunderstanding even the basics of Facebook’s business model. Even assuming good intentions, it is unreasonable to believe that they will be able to understand the subtleties of AI. More likely, the result would be a bull in a china shop: unnecessary constraints on AI work. Citizens are apprehensive about new technology, and that is exactly the apprehension politicians seek to exploit.
Additionally, the issue is not urgent. Friedman mentions Eliza, which perfectly embodies the difference between mimicry and real intelligence. It also shows how our predictions can be accurate in some respects while completely missing the mark in others. Discussing Eliza in 1966, the Journal of Nervous and Mental Disease wrote, “Several hundred patients an hour could be handled by a computer system designed for this purpose. The human therapist, involved in the design and operation of this system, would not be replaced, but would become a much more efficient man.” On some level the prediction was correct: there are apps that perform a similar function today (though, of course, most therapists are now women). But those apps are little more than toys, even fifty years later, and the field of psychology is alive and well. Technology marches ahead, but far too slowly to overtake human intelligence.
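To see how shallow such mimicry is, consider a minimal Eliza-style responder. This is an illustrative sketch, not Weizenbaum’s original program, and it omits his pronoun reflection (“my” becoming “your”); the rules and phrasings here are invented for the example. The “therapist” is nothing but a keyword table that echoes the patient’s own words back.

```python
import re

# A handful of Eliza-style rules: a keyword pattern paired with a
# template that reflects the matched fragment back at the speaker.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return a canned reflection; no understanding is involved."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    # Fallback when no keyword matches, as in the original program.
    return "Please go on."

print(respond("I feel anxious about my job"))
# → Why do you feel anxious about my job?
```

That a lookup table this simple fooled anyone in 1966 says more about human projection than about machine intelligence, which is exactly why Eliza is a poor omen for general AI.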
Likewise, as I said in my original post, it is far too soon to begin forming policy on AI. I have some faith that there will be general intelligences someday. But their arrival is so far away that it is hard to make out their shape, let alone their impact on government and vice versa. Disruptive technologies have a habit of defying expectations, and even if we understood what AI would look like, its consequences are not obvious. Even the word “intelligence” is severely under-defined, although for this conversation it would presumably need, at a minimum, decision-making abilities on par with the average voter. As the spectrum of essays here indicates, reasonable people can disagree on when to expect this major breakthrough. Perhaps quantum computing offers hope, as Zoltan Istvan suggests. Maybe some combination of current research will get us there, possibly coupled with an exponential increase in computing power, if we continue to follow Moore’s law. Even the necessary research areas remain an open question.
Roland Benedikter proposes that governments begin working on plans for “real” AI. This seems to me a fool’s errand: predictions are notoriously hard to make. As Istvan notes, the outcome of the 2016 American presidential election was such a surprise that very few predicted it even on election day. It takes a fair amount of hubris to assume one knows what AIs will do, let alone whether they will have bodies.
To say that public policy debates about AI are premature is not to say that any of the contributors here wish to stifle scientific progress, nor that we are scared of the consequences of robots. On the contrary, I think the dangers of generalized AI are vastly overstated; overstating the dangers of new inventions is a pattern of human behavior probably as old as the wheel. Surely there will be some negative consequences, as with any invention, but I see no reason to think they will outweigh the good. Baked into the fears is the assumption that intelligence is winner-take-all. I think it far more likely that AIs will specialize in what they are good at, and humans will continue to do what they are good at.
I am certainly not a critic of AI; I am a practitioner myself. But the devil is in the details, and we need more clarity about the nature of AI, and more observation of how it affects society, before rational plans can be put in place to control it. Perhaps we will leap across the chasm soon, but the major breakthrough remains hazy in the future, and the recent faltering of Moore’s law does not work in its favor. Sophia isn’t fake, but she is definitely demoware, and it is hard to draw many conclusions from her.