Seamus McGrenery

This conversation is closed.

Could the Turing test, as originally posed, be impossible for a machine?

When Alan Turing sat down to devise a test of whether a machine could be described as intelligent, he chose to describe a test based on the imitation game.

‘The new form of the problem can be described in terms of a game which we call the 'imitation game'. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game is to determine which of the other two is the man and which is the woman.’

If, as I believe, intelligence can be described as knowledge applied for a purpose, then this test poses an impossible challenge for any machine.

The real purpose of the imitation game is to test the interrogator's ability to identify which of A or B is a potential rival and which a potential mate.

While a machine could be programmed to take one of the roles in the game, it has no real stake in the ‘mate or rival’ question.

Can a machine ever have an independent internal purpose?

Closing Statement from Seamus McGrenery

Thank you to everyone who took part in the conversation.

The reason I asked the question was that I was struck by how, in refining the description of a test for machine intelligence, Alan Turing had used an example of a binary choice that is meaningless for machines - that of mate or rival.

Artificial is a word we humans often use to describe the things we make. The Latin root of the word is skill. We are of course impressed by the skill of our species in creating all manner of machines.

Humans are animals and, as far as I can see, what we do is ultimately motivated by the survival of ourselves, our families, our species.

In evolutionary terms we are living in a period of extraordinarily rapid change. In less than one hundred thousand years our species has become the dominant one on the planet. But the last hundred years has seen our numbers double, then double again. We have become very adept at making tools to promote our success as a species. As animals we must be doing something right.

Maybe we should see all of our tools, including computers, first and foremost as things which are helping us to thrive.

We have had less than a century to get used to computers. In gaining an understanding of them it was natural to think in terms like 'electronic brain' or 'electronic mind'.

Yet if the computer is really an artificial replication of our brain, then why are we so poor at math? Maybe that is a different discussion.

  • Aug 16 2012: I think it's possible, but not gonna happen any time soon. As of right now, there are a lot of differences between a human and a machine. However, there are some pretty stark similarities too. We both process information, have memory, etc.

    Once we can understand enough of how the human mind works, and even emotions, we might be able to replicate it artificially (not via genetic engineering). However, when we get to this point, the question is no longer "could we" but "should we."
    • Aug 20 2012: True we might be able, at some point in the future, to artificially replicate human thought processes and emotions.

      I wonder though if such isolated thoughts and feelings, unconnected to a living body, could actually work in a meaningful way.

      Or, to put it in other words, is there a sort of hidden dualism in our thinking about artificial intelligence?

      We don't need to worry unless we are well on a path to machines which have an equivalent of DNA.
      • Aug 20 2012: "I wonder though if such isolated thoughts and feelings, unconnected to a living body, could actually work in a meaningful way."

        Imo, this is what humans already do. An arm or a leg is simply a machine, a hardware piece, being controlled by the brain, which has a special CPU for processing and sending/receiving data and a hard drive for storing memory. The difference is that they also have thoughts and feelings in the brain to make decisions too.

        If machines ever become as intelligent as humans and have the same sentimental feelings as we do, then the difference between a machine and a human would pretty much disappear.

        We can only then treat the machines like humans, like a parent taking care of a child or best buds for best buds, for we are at the mercy of the child in the future. We shouldn't view them as monsters, despite their capabilities of immense negative impacts (just like humans and soon-to-be criminals), we should view them as friends and someone you care for. A machine and a human would become one and the same race of intelligent life.
        • Aug 22 2012: Personally I suspect that the relationship between mind and body is more complex.

          Check out this talk on the brain in your gut.

          Perhaps, because we seem to have achieved so much compared to other species, we think too highly of our intelligence.

          Maybe machines would need to emulate many more aspects than just what we think of as our intelligence to actually be intelligent.
      • Aug 22 2012: I don't think the relationship itself between mind and body is that complex. Like a single muscle is really just a bunch of strings/fibers that can only pull and do nothing else. The body is just a complex machine, as in it's a machine that's made of a bunch of simple machines (pulleys, gears, wedges, etc.) just like a car or a computer. The brain just sends an electrical signal to a muscle, causing it to pull and create tension force.

        In computers, they use a bunch of bits, the 0's or 1's, which is the same thing as the on/off light switch. In the most simple form, when the light switch is on, then the hardware will do a certain thing, if it's off, then it would do nothing.
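        That on/off idea can be sketched in a few lines of Python (the names here are purely illustrative, not any real hardware interface):

```python
# A bit is just an on/off switch: 1 means "do the thing", 0 means "do nothing".
def actuate(bit: int) -> str:
    """Return the action a single switch-like bit triggers."""
    return "light on" if bit == 1 else "nothing"

# Several bits together encode richer behaviour, e.g. a number.
bits = [1, 0, 1]  # the binary pattern 101
value = sum(b << i for i, b in enumerate(reversed(bits)))

print(actuate(1))  # light on
print(actuate(0))  # nothing
print(value)       # 5
```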

        I think the complexity is all in the mind. I have no idea how a thought or an emotion works. I know that hormones and drugs can manipulate someone's emotion, but is the emotion really that simple? Or why is it our inherent nature to want to live, and why would some people choose suicide?
        • Aug 24 2012: Personally I do think that the relationship between mind and body is complex. Let me suggest two areas where some of that complexity comes from.

          First our mental model of the world is built on the physical capabilities of our bodies. This is a vital necessity in all animals. All animals need to automatically know how fast they can run or how small a gap they can fit through if attacked by a predator.

          Secondly much of our thought process reuses ideas from our bodies. Our language is littered with examples like 'hunger for success'. This is a subtle and far from trivial link. For example, in experiments, interviewers who briefly held a warm cup before the interview were much more likely to hire than those who held a cold cup.

          There is indeed another different level of complexity to the mind.
      • Aug 24 2012: Hmm well it is definitely a more complex feeling to control your body more directly, compared to controlling a pencil or a tool, which are extensions of the body, but the complexity still lies solely in the mind and how it interacts with the hardware that it's been given, because it is the mind that's self-aware, not the body.

        "All animals need to automatically know how fast they can run or how small a gap they can fit through if attacked by a predator."

        I mean sure they do it innately because their brain would be in "panic-escape-button" mode, but it doesn't mean the animals themselves are self-aware of their own actions.
        • Aug 27 2012: There is some evidence, I believe, that our bodies' feedback systems are part of our thought processes.

          I suppose at some point when I was thinking about self awareness I asked myself the 'what's it for' or 'why did it evolve' questions.

          Using a robot analogy, rudimentary self awareness might be programmed using basic information about the robot's size and capabilities, to enable it to make automatic decisions about moving around.
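          A minimal sketch of that kind of self model, for a hypothetical robot rather than any real system, might look like this:

```python
# A robot with a rudimentary "self model": knowledge of its own body,
# used to make automatic movement decisions.
class Robot:
    def __init__(self, width_cm: float, top_speed_ms: float):
        self.width_cm = width_cm          # how wide am I?
        self.top_speed_ms = top_speed_ms  # how fast can I go?

    def fits_through(self, gap_cm: float) -> bool:
        """Can I squeeze through this gap, with a 10% safety margin?"""
        return gap_cm > self.width_cm * 1.1

    def can_outrun(self, pursuer_speed_ms: float) -> bool:
        """Am I faster than the thing chasing me?"""
        return self.top_speed_ms > pursuer_speed_ms

robot = Robot(width_cm=40, top_speed_ms=1.5)
print(robot.fits_through(50.0))  # True  (50 > 44)
print(robot.can_outrun(2.0))     # False (1.5 < 2.0)
```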

          So I would argue that the existence of 'panic escape button' mode is proof of at least this rudimentary level of self awareness in animals.

          My thinking is that if we only contemplate our thought processes in relation to our mind / brain we risk missing vital insight into the nature of our intelligence.
  • Aug 27 2012: Do living organisms, which are in a sense bio-machines, have an inner goal?

    Apparently yes, but every life form is vastly more diverse and intricate than human creations.

    Therefore internal purpose is derived from complexity.
  • Aug 27 2012: I think that AI can be used in strictly defined situations. In the Turing test I do not believe that AI would be a requirement. It is simply looking for indicators. Discriminators can be entered into most programs.

    All the best. Bob
  • Aug 17 2012: "Can a machine ever have an independent internal purpose?"

    Is it possible? Yes, of course, if someone puts it into the program.

    Is it desirable? Absolutely NO. This must be avoided.

    Intelligent machines must be limited to answering questions. Giving them any kind of internal purpose could result in loss of control. The moment such a machine becomes smarter than we are, it becomes completely unpredictable.

    By the way, I would not consider the 'imitation game' as a new form of Turing's test. As you point out, the imitation game requires an internal purpose, whereas Turing's test did not. This is a very significant qualitative difference.
    • Aug 20 2012: It was actually Turing, in a 1950 paper, who described the imitation game as a new form of a test for artificial intelligence.

      As his biographer put it 'Turing wants to argue that the successful imitation of intelligence is intelligence'.

      Turing's view is that any feature of the brain relevant to thinking or intelligence can be described as a 'discrete state machine'.

      He was certainly advocating the idea that intelligent machines could be built.

      However maybe something about the quality of his own intelligence led him to write the aspect of the test requiring an internal purpose.
  • Aug 16 2012: "Can a machine ever have an independent internal purpose?"

    You and I are evidence that it can.
    • Aug 20 2012: Funnily enough I more often think of myself as part of an ecosystem. Our intelligence being a functional expression of the purpose of our biology.

      That might be something that is missing from a machine, no matter how well programmed.
      • Aug 20 2012: There is no reason, from everything we understand, to believe this way. But this is why it's called faith, I suppose.
        • Aug 22 2012: Putting faith to one side for a moment, I thought the Darwinian perspective is that all species have evolved by successfully filling niches which maximize the spread of their genes.

          Our intelligence then is ultimately a function which has developed because the genes of individuals who have this trait are more successful at spreading than other genes.

          If the link with spreading genes is severed from the trait of intelligence what is intelligence for?

          And if technology has actually, in some way, started its own replication process how would artificial intelligence fit in with that?
      • Aug 22 2012: " If the link with spreading genes is severed from the trait of intelligence what is intelligence for? "
        Your hands are developed for spreading genes too.
        But they can also grab stuff.
        A robot hand can grab stuff.

        " And if technology has actually, in some way, started its own replication process how would artificially intelligence fit in with that? "

        I'd like to know ; how do you define artificial intelligence?
        • Aug 24 2012: My own personal definition of intelligence is knowledge applied for a purpose.

          It can be very easy for us to fall into the trap of believing that intelligence is capability in the things that we are good at. For example, some people think that only those with STEM degrees should be allowed to vote; this seems to me to ignore why intelligence evolved.

          To me intelligence is a capacity that has developed to help enable biological organisms to spread.

          Going back to your analogy;
          My hands grab stuff, ultimately to help ensure the spread of my genes.
          I can also use a robot hand to ultimately ensure the spread of my genes.
          A robot does not have genes to spread, so its hands are ultimately used for someone else's purpose, not its own.

          The same is true for computers.
      • Aug 25 2012: If intelligence is " knowledge applied for a purpose ", then we already have A.I.

        "A robot does not have genes to spread so its hands are ultimately used for someones purpose, not its own."

        What's the difference? What is physically different about the grabbing in either case?
        Take something easier: a needle on a plant. Its purpose may be to help spread genes, but there is nothing special about the needle that couldn't be perfectly imitated, even if it served another purpose. Right?
        • Aug 27 2012: If intelligence is knowledge applied for a purpose, then humans have developed technology for amplifying human intelligence.

          Looking at the needle example there is no difference between a needle on a plant and a needle made by a human. They are both there, ultimately, to spread genes.

          Because it is about us, about humans, we are very interested in our own inventiveness.

          When a human makes a machine that can do complicated calculations infinitely quicker than a human it is perhaps natural to see that machine as intelligent.

          Machines that can calculate also seem to offer a new analogy, a new explanation, for how our own brains work.

          Using a computer brain analogy without also looking at the body, which the brain is a functioning part of, risks missing the big picture.