TED Conversations

Danger Lampost

Futurist & Technology Consultant


This conversation is closed.

What would make a good Turing Test for the Soul?

In the movie "2001: A Space Odyssey," Stanley Kubrick presents us with the HAL-9000 computer, which has a seemingly human-like consciousness. This question presupposes that we will be able to create a computing device of a similar sort, one that would behave to all the world like a human consciousness.

What it feels like to be this consciousness is an interesting question. You could ask it and listen to what it says. Could you trust what it says though?

Alan Turing devised a deviously clever test: you sit someone down at a keyboard and ask them to converse with a personality on the other side and, solely through that conversation, determine whether they are talking with a piece of software or a real human.
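The setup can be sketched in a few lines of code. This is a hypothetical toy of my own, not anything from Turing's paper: the `human` and `machine` respondents are stand-ins, and the point is only that the judge sees nothing but the transcript.

```python
import random

def interrogate(respondent, questions):
    """A blind, text-only exchange: the judge sees only this transcript."""
    return [(q, respondent(q)) for q in questions]

# Two hypothetical respondents hidden behind the "curtain".
def human(question):
    return "Let me think about that for a moment..."   # stand-in for a person

def machine(question):
    return "Let me think about that for a moment..."   # canned software reply

questions = [
    "What does it feel like to be you?",
    "Tell me about a childhood memory.",
]

# The judge converses with one respondent, chosen at random,
# and must decide from the transcript alone which one it was.
hidden = random.choice([human, machine])
for q, a in interrogate(hidden, questions):
    print(f"Q: {q}\nA: {a}")
```

If the two transcripts are indistinguishable, as they are by construction here, the judge can do no better than chance, which is exactly the condition the question below assumes.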

Let's say we have succeeded in building such a software system, one that could pass the Turing Test and fool any and all questioners: impossible to differentiate from a real human based on conversation alone.

Let's kick it up a notch now and ask: What test could you devise to determine whether the software system had a soul? Or to put it another way for effect, what evidence might you present to a court of law to argue that pulling the plug on such a software system was tantamount to murder? Suppose *you* were this software system, arguing with a court of law that they should not pull the plug on you. What arguments would you make?

  • Mar 27 2013: Danger, even if it could respond to every question, wouldn't it only do so on the basis of principles that have been fed into it by a living human? So it's not original like a human?
    • Mar 28 2013: I think a good metaphor is to compare an artificial consciousness to a human child. Are not children quite literally "carbon" copies of us, for the most part not particularly original? (That is, we are much more like other people than we are different.) Don't we teach them knowledge over decades, so are not our children's responses to questions based on the principles that have been "fed into" them by living human beings, their parents?

      The key point here is that you are presented with a consciousness which, to your and everyone else's examinations and questions, is indistinguishable from a human mind. That is taken as a precondition or assumption for this question. Whether or not that is possible (and hence how interesting this whole question is) is a whole other debate, one which I think is well covered elsewhere on this forum. In this question, though, I am hoping to focus the discussion on "Now what?" Can it have a soul?
      • Apr 1 2013: Well, one important thing, Danger, is that this machine would have no emotions. We can program it to say the things one might say if one had emotions, but I don't think it will feel them, like love, for example.

        If we do program our children, we might ask how they go off-track. How do good parents raise criminal children? Or vice versa?
  • Mar 31 2013: I think a number of people are getting caught up in the concept of a man-made or artificial intelligence. That is just one of the possibilities, and if it seems impossible, then I ask you to consider the next option, which has nothing to do with man-made life, artificial life, or alien life.

    However, this is a hypothetical question, so you will have to accept its premise in order to answer it. The premise is that we have discovered a previously hidden tribe of Neanderthals living in a remote part of Alaska. A TV news crew goes out to film a hunter going on the hunt to kill a Neanderthal. He is filmed killing the Neanderthal. This is all in the United States. Should he be charged with murder?

    Should we be prepared to consider the killing of a Neanderthal with the same penalties as killing a human?

    For those in the United States, remember that the penalty for willful murder of a black slave used to be a fine.

    For those who wish to brush up on Neanderthals: http://en.wikipedia.org/wiki/Neanderthal is a start.
    • Comment deleted

      • Mar 31 2013: Are you essentially saying that murder, by definition, can only be between human beings, and that the definition of murder cannot therefore be extended? If so, then I ask you why we cannot extend this definition. In answering this question, I ask you to consider the case of a human/Neanderthal hybrid - would that be murder? How human do you have to be before it is murder - only 100% human?

        While there is no debate about the existence of Neanderthals, I believe the concept of Root Races has not yet gained wide acceptance. Fascinating concepts, to be sure, and well worth exploring, I believe, but not yet widely accepted.
  • Mar 30 2013: Semantics vs. syntax - it was a stab. I read the Wiki, and thank you, Danger; I have a clearer understanding.

    ok, here are a few questions for the AI...

    what does it feel like to fall in love and then have your heart broken?
    what does it feel like to be judged as kind?
    what does it feel like to make a mistake?
    what does it feel like to witness beauty?
    what does it feel like to eat a vanilla ice cream cone (with rainbow sprinkles) on a hot sunny day?
    what does it feel like to realize you have misjudged someone's character?
    what does it feel like to have a moment of inspiration?
    what does it feel like to come to a conclusion on your own?
    what does it feel like to tell a lie to another?
    what does it feel like to be a machine?

    sorry... getting carried away a bit

    My Soul... a tough one

    ok... my Brain has explored this notion with fearless thought for a very long time. My library is stocked with the words of scientists, theologians, gurus, atheists, storytellers, etc., etc., etc. Through my journey of exploration, my Mind recognized the truth, with certainty, that I AM more than my body. Yes... I have a soul. I guess the word Mind could be used as an equivalent if it speaks to conscience, if it speaks to an innate knowing.

    thanks for your response; surely a fun ride!
  • Mar 30 2013: Would you say, then, that if an entity, human or otherwise, could convince the U.S. Supreme Court that they were talking to another human, then that entity had a soul?
    • Comment deleted

      • Mar 31 2013: Did you perhaps mean to reply to a different comment? I'm confused about your reply, as you said SOUL = MIND and you replied to a comment that, as far as I can see, does not reference "mind"?

        I am saddened you feel I "summarily ignored" you - I read what you wrote with real interest (and thank you for taking the time). However, as far as I can see, you did not propose any Turing Test for the Soul.

        I believe what you have said, if I understand it properly, is that this is not a valid question because "artificial intelligence...can not generate life." However, you provided no justification for your position, as far as I can see.
      • Mar 31 2013: Awesome reply, thank you.

        It seems you do not believe that artificial intelligence can ever have a soul. You keep stating that in different ways. However, I do not believe you have provided any justification for your position. I could talk at length about knowledge management systems and medical diagnostic systems, and we could get into the weeds on the data->information->knowledge->wisdom paradigm, but I suspect that would not be a fruitful conversation.

        Would you please provide a justification for your position that an artificial intelligence could never have a soul? I understand your concept of what a human soul is, but you have not explained why an artificial intelligence cannot have one.
  • Mar 30 2013: It's a great question to ask "what is a soul?" One of the elegantly beautiful things about Turing's original test is that he essentially defined a test for consciousness without defining what consciousness is. In a similar manner, I am hoping we can develop a "Turing Test for the Soul" without having to define what a soul is.

    Some people take it as an unquestioned axiom that only a human being can have a soul, perhaps because that is what their religious tradition has taught them. I think that for such people this may not be a very interesting question, and their Turing Test for the Soul would simply be: "Are you human? Then you have a soul; if you are not human, then you don't have a soul."

    If you believe, as some people do, that we will eventually have the technology to "upload" ourselves to silicon or other artificial computing hardware, then one day this may not be such an academic question.
    • Comment deleted

      • Mar 31 2013: I believe you are saying that the words "soul" and "mind" can be used interchangeably. Is that correct?

        If so, can you tell me why that is significant or what the implication is?
  • Mar 30 2013: One of my favorite movies. I wonder if the perfect HAL-9000, capable of no error, had any idea what being human is, what it feels like? Surely, while observing human behavior, this made no logical sense to HAL-9000. But, if we did, would man have had to shut it down?

    I think you are addressing two things that make us human: mind and brain. The brain is concrete, physical; it speaks in electric currents generated by biochemical processes. The mind, now, is abstract, spiritual, intuitive (this is where the knowledge of my soul lives).

    ok, so you are writing programs for a machine that rivals or exceeds human intelligence, the brain. How exciting! However, this is all well and good so long as you always keep in mind that running this machine's programs is neither constitutive of nor sufficient for minds. We must remember that programs don't have semantics. Programs have only syntax, and syntax is insufficient for semantics. Every mind has semantics. Therefore... programs are not minds. Agree?

    Your Turing test: I would ask your computer on trial questions that begin with... "What does it feel like?" I guarantee you that the answer would be a perfectly constructed response with no reference to "meaning".

    Computers are wonderful and helpful tools, period.

    • Mar 30 2013: I love your proposed Turing Test! "What does it feel like to..."

      I do disagree with your assessment of syntax versus semantics, though. All languages, by definition, have both syntax and semantics - even the artificial ones that computers use. As computer scientists, we study syntax versus semantics in undergraduate courses. [See http://en.wikipedia.org/wiki/Syntax_(programming_languages)#Syntax_versus_semantics.]
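      A tiny illustration of that distinction in Python (a hypothetical example of my own, not from the linked article): the same syntactic form `a + b` takes its meaning from the semantics of its operands, and a parser will accept code on syntax alone, with no regard to what it means.

```python
import ast

# One syntactic form, two different semantics:
print(2 + 2)        # integer addition -> 4
print("2" + "2")    # string concatenation -> "22"

# ast.parse checks syntax only; it accepts an expression whose
# semantics would fail at run time (adding an int to a str):
tree = ast.parse("2 + '2'")
print(type(tree).__name__)  # -> Module
```

      So artificial languages do have semantics; the open question is whether they have the feeling-like kind you describe.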

      As your Turing Test questions were about what it feels like to do something, perhaps by semantics, you mean that feeling-like quality, and when you use the word 'mind', is that equivalent for you to the word 'soul'?
  • Mar 30 2013: The best defense would be to sound as human as possible. Make 'em believe they are speaking to another human.
  • Mar 30 2013: You would have to let them decide for themselves whether they hold that belief or not. If they believe, what would give anyone the authority to say otherwise?
  • Mar 30 2013: What is a soul? How do you define it? Is it something we can test for in humans?

    One day we may build something that has a consciousness similar to humans, a sense of self, hopes and fears. Interesting question. At what point would these entities be deserving of rights similar to humans?

    I suggest a starting point would be self awareness and the ability to suffer.

    Mind you we treat other humans poorly and other animals even worse. I wonder how much empathy there will be for artificial life (oxymoron?)
  • Mar 30 2013: You are most welcome to say anything, and I have learned from you.
  • Mar 30 2013: I will cry and plead that I have no arguments; just let me carry on, please.
  • Mar 28 2013: Watching the Watson computer play Jeopardy, I think the ability to converse verbally, and with emotion, would be significant here. Although that moves beyond Turing's initial keyboard test, using one's voice to speak adds an emotional element. Certainly, adding natural voice conversation to the Turing Test would make it much harder to pass (and perhaps Turing could not have imagined where we'd be with voice recognition/synthesis today). But... if you did add voice to the Turing Test, I think that would allow the software to make a much more compelling case (or perhaps manipulate us more). You could imagine extending the Turing Test bit by bit - maybe next would be a video you could not distinguish from a real person. Would these extra capabilities help with a Turing Test for the Soul? Or can you find out everything you need to (in principle) through a typed conversation?
  • Mar 28 2013: Hmmm... interesting topic. I will have to think about this and collect more data. Here are my initial ideas.

    If I were to argue (logical argumentation), I would start with the reasons for being shut down. What were my actions that warranted being shut down, and do those actions warrant being shut down versus being imprisoned or reprogrammed (analogous to therapy or treatment)? I could argue on the basis of logic that, if my sentience were in question, could the punishment be fitting to the crime or reason for being shut down? Basically, can we take a safer and less permanent solution than shutting down? This argument will lose weight if it can be proved that shutting down is not permanent.

    If I were to employ rhetoric, could I use my sentience to move you to empathy? If this could be done, then the effective argument would be one that arouses emotions within the judges. If I could argue effectively, I would arouse your emotions, bring you to tears, and awaken that human part that will not allow us to harm others. The emotions will trump logic. This uses the axiom that people follow their heart, not their head.

    So perhaps a more viable Turing test is not sitting at an emotionless computer screen interpreting responses from someone who might or might not be sentient, but a combination of those answers with the arousal of emotions within us.

    My mind conjures up the scene in AI (if you have seen it) when the little boy is first hit by acid at the carnival. The droid he is with acts in a similar manner to the boy. Both appear human and act human in motion and speech. But it is only the boy who elicits an emotional response of empathy from the audience. This becomes a powerful combination of action and emotion.