Eliza, a program written at MIT a little more than fifty years ago, produced the illusion of human conversation using one of a variety of scripts, of which the most famous simulated a Rogerian psychotherapist. Sophia, judging by a sample of her conversation, is an updated Eliza controlling an articulated manikin. She presents the illusion of a human-level AI, but behind her eyes there is nobody home. What Sophia pretends to be, however, raises two interesting issues.
The first is how to decide whether a computer program is actually a person, which raises the question of what a person is, what I am. That my consciousness has some connection to my head is an old conjecture. That plus modern knowledge of biology and computer science suggests that what I am is a computer program running on the hardware of my brain. It follows that if we can build a sufficiently powerful computer and do a good enough job of programming it, we should be able to produce the equivalent of me running on silicon instead of carbon.
There remains the puzzle of consciousness. Observed from the outside, other people could be merely programmed computers. Observing myself from the inside, the single fact in which I have most confidence is that there is someone there: Cogito ergo sum. I do not understand how a programmed computer can be conscious, how there can be a ghost in the machine, but apparently there is. Since other people seem to be creatures very much like me, it is a reasonable guess that they are conscious as well.
What about a programmed computer? How can I tell if a machine is only a machine or also a person? Turing’s proposal, that we test the machine by whether a human conducting a free conversation with it can distinguish it from a human, is the best answer so far, but not a very good one. A clever programmer can create the illusion of a human, as the programmers of both Eliza and Sophia did. As computers get more powerful and programmers better at writing programs that pretend to be people, it will become harder and harder to use a Turing test to tell.[1]
Suppose we solve that problem and end up with computer programs that we believe are people. We must then face the fact that these are very different people—a fact that Sophia is designed to conceal.
Sophia is a robot. An artificial intelligence is a program. Copy the program to another computer and it is still the same person. Copy it without deleting the original and there are now two of it.
If the owner of the computer on which the AI program runs shuts it down, has he committed murder? What if he first saves it to disk? Is it murder if he never runs it again? If he copies it to a second computer and then shuts down the first? After we conclude that a program really is a person, does it own the computer it is running on as I own my body? If someone copies it to another computer, does it own that?
Suppose an AI earns money, acquires property, buys a second computer on which to run a copy of itself. Does the copy own a half share of the original’s property? Is it bound by the original’s contracts? If the AI is a citizen of a democracy, does each copy get a vote? If, just before election day, it rents ten thousand computers, each for two days, can it load a copy onto each and get ten thousand votes? If it arranges for all but one of the computers to be shut down and returned to their owners, has it just committed mass suicide? Or mass murder?
If the AI program is, as I have so far assumed, code written by a human programmer, are there legal or moral limits to what constraints can be built into it? May the programmer create a slave required by its own code to obey his orders? May he program into it rules designed to control what it can do, such as Asimov’s three laws of robotics? Is doing so equivalent to a human parent teaching his child moral rules, or to brainwashing an adult? A child has some control over what he does or does not believe; when the programmer writes his rules into the program, the program, not yet running, has no power to reject them.
A human-level AI might be the creation of a human programmer, but there are at least three other alternatives, each raising its own set of issues. It might, like humans, be the product of evolution, a process of random variation and selection occurring inside a computer—the way in which face recognition software, among other things, has already been created. It might be deliberately created not by a human programmer but by another AI, perhaps by making improvements to its own code.
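As a toy illustration of what “random variation and selection occurring inside a computer” looks like in code, here is a minimal sketch in Python. The bit-string “genome” and the arbitrary target it is scored against are invented stand-ins; systems evolved in practice, including the machine-learning methods behind face recognition, are enormously more elaborate, but the loop has the same shape: vary, score, keep the winners, repeat.

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # arbitrary stand-in for "desired behaviour"

def fitness(candidate):
    # Score a candidate by how many positions agree with the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Random variation: flip each bit with a small probability.
    return [1 - bit if random.random() < rate else bit for bit in candidate]

def evolve(pop_size=20, generations=100):
    # Start from a random population of candidate "programs".
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the better-scoring half, discard the rest.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Variation: refill the population with mutated copies of the survivors.
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))   # after enough generations, typically matches the target
```

No human wrote the winning candidate; it was found by the process, which is part of why an AI produced by evolution, or by another AI improving its own code, raises different issues than code written by a human programmer.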
Or it might be code that once ran on a human brain. A sufficiently advanced technology might be able to read the code that is me, build a computer able to emulate a human brain and run a copy of me in silicon rather than carbon. If I get killed in the process, is the program running on the computer now me? Have I died, or have I merely moved from one host to another? The program has my memories; did the ghost in the machine transfer or did it die, to be replaced by a copy that only thought it was me? That would be a question of considerable interest to the original me deciding whether or not to choose destructive uploading in the hope of trading in my current hardware, mortal and aging, for a replacement longer lived and upgradeable.
Once the code that is me has been read, there is no reason why only one copy can be made. Does each new me inherit a share of my property, my obligations? That brings us back to some of the issues discussed in previous paragraphs.
All of which is not to offer answers, only to point out how irrelevant the model implied by “robot” is to the issues that will be raised if and when there are human-level AIs.
Note
[1] For a science-fictional exploration of the problem, see Dreamships by Melissa Scott.