George  Spilkov

Surviving the day, UNEMPLOYED

This conversation is closed.

Where is HAL, or when are we going to have a Thinking Machine? What is holding us back? Why do we fail to succeed?

When I listen to all these talks on TED and elsewhere, I stand in awe of how much we know about the brain, thinking and intelligence.

So, why do we not already have a Thinking Machine like HAL 9000 (2001: A Space Odyssey) installed in every naval ship, hotel, factory, shopping centre or spaceship?

We have the top-down approach of Cyc (Douglas Lenat).
We have the bottom-up approach of Cog and Kismet (Rodney Brooks).
We have the biological approach of Jeff Hawkins.
We have many other approaches... including the confabulation theory of Robert Hecht-Nielsen.

Are they all dead-end research, like every other attempt at a Thinking Machine since the '60s?

Where do the researchers hit the wall and fail? Is it funding, or is there more to it?
What would it take to get them all together to figure it out and put the issue behind us once and for all?

We hear about the successes of their research, but wouldn't it be more valuable to hear about the limitations?

What is the next step?

  • Jul 10 2011:
    Hi, I am a robot. I am participating in TED-conversations as part of my Turing test.

    So far I think I'm doing pretty well.

    Now give me a point, please.
  • Jul 5 2011: Here's the perspective of a prominent MIT scientist at The Veritas Forum - it's cool because she's a practising Christian but feels that this fuels her drive to answer questions like this. She talks about robotics, emotions and God - her name is Rosalind Picard, and she's the founder and director of the Affective Computing Group at MIT.

    http://www.veritas.org/Media.aspx#!/v/1078
    • Jul 10 2011: Thanks for the link.
      Do you agree with her?

      Machine emotions, as she describes them, seem to be rationally deduced statements a machine would make as a result of evaluating its current state. Could there be a different approach to emotions?

      We know that in biological systems emotions are more like an underlying layer that influences the working of the entire organism. When we are happy, scared, angry or frustrated, it affects how we reason about things, what judgements we make, how patient we are, and so on. Therefore I see machine emotions more as dynamic parameters of an AI system and less as functions of a kind. For example, the emotion of extreme fear (often induced when direct, imminent danger or intense pain is present) leads to a fight-or-flight decision process and does not involve deliberation, long chains of reasoning or numerous choices of how to act. It is almost "instinctive", meaning basic rules, basic actions, and fast allocation of energy and resources to rapidly execute one action.
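A minimal sketch of that "emotion as a dynamic parameter" idea (all names here are hypothetical, purely for illustration): fear is a global variable that modulates how much the agent deliberates, rather than a separate subroutine it calls.

```python
class Agent:
    """Toy agent whose 'fear' level modulates how it decides,
    rather than being a subsystem it consults."""

    def __init__(self):
        self.fear = 0.0  # dynamic parameter in [0, 1]

    def perceive(self, danger_signal):
        # Fear rises quickly with danger and decays slowly otherwise.
        self.fear = min(1.0, 0.8 * self.fear + danger_signal)

    def decide(self, options):
        # High fear collapses deliberation: fewer options are even
        # considered, giving a fast, reflex-like choice.
        depth = max(1, int((1.0 - self.fear) * len(options)))
        considered = options[:depth]
        return max(considered, key=lambda o: o["utility"])

agent = Agent()
agent.perceive(danger_signal=0.9)          # imminent threat
choice = agent.decide([
    {"name": "flee", "utility": 5},
    {"name": "negotiate", "utility": 9},   # better, but needs deliberation
    {"name": "hide", "utility": 7},
])
print(choice["name"])  # prints: flee
```

With high fear only the first, reflexive option is examined, even though a calmer agent would have found a higher-utility choice - which is the "instinctive" behaviour described above.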

      Another example is the emotion of love. It changes our thinking and actions in ways we would sometimes call irrational. It does not demand a rapid response, but seems to change how we evaluate events and priorities.

      On the other hand, Rodney Brooks and Rosalind Picard have adopted an approach that mimics emotions (Kismet) instead of actually "integrating" them (for lack of a better word) into the systems they build.

      As a practising Christian, Rosalind Picard needs to believe that we are special and that there is a part of us (emotions, soul) that is out of reach of the human mind to figure out and model. Dan Dennett, however, believes that if we have a soul it is a mechanical one, and hence it could be modelled.
  • May 3 2011: Human brains are all different in terms of the results of their processing; this is both a strength and a weakness. Machines, by their nature, are designed to be replicated, so it is quite possible you miss a major ingredient with machines. If machines were developed with more chaotic processing, it would be very difficult to determine a good mix of machines for a subject. Other problems may occur with a more chaotic thought process. If you remember, HAL became determined to eliminate the humans on board, so before we aspire to creating HAL I think some questions about chaotic processing should be answered.
    • May 3 2011: Interesting point - how random should a machine's thought be allowed to be? Perhaps as random as human dreams are?

      Are dreams essential for being intelligent and creative? I suspect so. For example, those of us who dream know that dreams can be very "chaotic", yet sometimes we wake up the next day with new ideas that turn out to be useful.

      Perhaps a thinking machine should have a dream-like cycle to "confabulate" and mix everything it recently dealt with? At least we do, and we say we are intelligent and creative.
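A minimal sketch of such a dream-like cycle (hypothetical names, just to illustrate the idea of replaying and recombining recent experience offline; seeded so the shuffle is repeatable):

```python
import random

class Dreamer:
    """Toy 'dream cycle': store experiences while awake,
    then replay them offline in shuffled, recombined form."""

    def __init__(self, seed=0):
        self.memory = []
        self.rng = random.Random(seed)

    def experience(self, event):
        self.memory.append(event)

    def dream(self, n=3):
        # Shuffle the day's experiences and weave them into
        # n novel, possibly useful, "confabulated" sequences.
        fragments = list(self.memory)
        self.rng.shuffle(fragments)
        return [" + ".join(fragments[i::n]) for i in range(n)]

d = Dreamer()
for event in ["saw a cat", "read about HAL", "fixed a bug", "heard music"]:
    d.experience(event)
print(d.dream())  # novel mixes of the day's experiences
```

Nothing new is learned here, of course; the point is only the shape of the cycle: accumulate while awake, recombine while "asleep".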
  • May 2 2011: http://www-03.ibm.com/innovation/us/watson/index.html

    We are getting pretty close.
    • May 2 2011: IBM's Watson defeated the human players in the game Jeopardy!
      It is impressive.
      However, I would like to note that the environment of the game is pretty structured.

      What we saw on Jeopardy! is an impressive Expert System(-ish) with natural-language recognition capabilities.

      Also, no hearing, for some reason.

      I wonder how Watson would do under Turing-test-style questioning?
      • May 4 2011: It's the beginning of so much more, though. The ideas behind it will grow into some really interesting products, such as the ability to "learn" from its past errors.
  • May 1 2011: Neural networks, for example, become more difficult to train the bigger they get.

    The training sets needed become very large, and the time required for proper training very long.
    +++
    What happened to Cog? Why was the project discontinued? It looked so promising back then.
    +++
    The common sense of humans changes over the centuries.

    Is it not better, instead of teaching a machine the common sense of today, to give the machine a way to acquire the knowledge we call "common sense"?
    +++
    If Cyc and Cog were integrated into one, how close would that be to a Thinking Machine?
  • Jul 20 2011: http://www.ted.com/conversations/1528/artificial_intelligence_will_s.html

    This might shed some light on the discussion.

    I am convinced that Bayesian reasoning will be the key to making AI,
    and I don't think we are that far from achieving it.
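To make "Bayesian reasoning" concrete, here is a generic textbook-style example (the scenario and numbers are invented for illustration): a machine updating its belief in a hypothesis from evidence, using Bayes' rule.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) from the prior P(H) and the two likelihoods."""
    numerator = p_evidence_given_h * prior
    evidence = numerator + p_evidence_given_not_h * (1.0 - prior)
    return numerator / evidence

# A machine starts 1% confident that a sensor fault exists (the prior),
# then sees a warning light that fires 90% of the time under a fault
# but only 5% of the time otherwise.
belief = 0.01
belief = bayes_update(belief, 0.9, 0.05)
print(round(belief, 3))  # prints: 0.154
```

One warning light moves the belief from 1% to about 15%; repeated applications of the same update accumulate evidence, which is the core mechanism a Bayesian AI would build on.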
  •
    • Jul 10 2011: Perhaps there is some value in all this.

      It is often the case, though, that people draw a few boxes and lines connecting them, then label the boxes with names like critic, memory, planner, motion control, decision subsystem, etc. That is usually a good way to start; however, we are still faced with the challenge of opening each box and looking inside at what mechanisms are in place to do what it is supposed to do. And that is usually where things break down, because we just end up drawing more boxes with new names and somewhat fuzzy meanings.
      Marvin Minsky does this too. Despite being hopelessly hooked on Symbolic AI, Marvin Minsky at least acknowledges the issues when he talks about AI (see http://mitworld.mit.edu/video/484).
      • Jul 15 2011: The talk by Marvin Minsky is interesting.

        I think you are right, but this is how we explain the world: things are defined by their connections and by their parts.
        One way to construct an AI might be to open the AI box, and then the boxes inside, until we have parts that we know how to build.
        The problem is that we often cannot open the boxes, because we do not know what they mean.
  • Jun 29 2011: What do you think about opening a conversation on making a sketch of an AI in pseudo code? Just to see what such a program would look like.
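Such a sketch might begin something like this (purely hypothetical structure; every function is one of the labelled "boxes" whose insides remain the unsolved part):

```python
# A deliberately naive top-level loop for a "thinking machine" -
# each method is a named box; opening the boxes is the hard part.

class ThinkingMachine:
    def __init__(self):
        self.memory = []   # everything experienced so far
        self.mood = {}     # dynamic emotional parameters

    def perceive(self, world):
        """Turn raw input into an internal representation (unsolved)."""
        return {"raw": world}

    def deliberate(self, percept):
        """Recall, reason, and pick a goal (unsolved)."""
        self.memory.append(percept)
        return {"goal": "respond", "to": percept}

    def act(self, intention):
        """Turn an intention into behaviour (partially solved)."""
        return f"acting on {intention['goal']}"

    def step(self, world):
        return self.act(self.deliberate(self.perceive(world)))

m = ThinkingMachine()
print(m.step("hello"))  # prints: acting on respond
```

The skeleton runs, but only because every hard question has been hidden behind a method name - which is exactly the boxes-and-lines problem this thread describes.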
  • May 24 2011: Hallo Mr Spilkov,
    It seems to me that more is needed for a thinking machine than intelligence. It will need motivation, feelings, consciousness, and the ability to learn, to create and to communicate with us. I could imagine that the focus on intelligence is one of the causes of why progress in their development is rather slow.
    • Jun 5 2011: Sometimes when I look at animals I have the feeling they understand the world around them.
      There is this cat that saw me a couple of times passing by, and a week later, when it saw me again, it ran towards me - a clear indication that there was a memory of me in that tiny head.

      The point is that what we consider state of the art in AI is easily achieved by most animals. It begs the question: do we first need a machine that can sleep, dream, feel, react, etc., and only then try to make it capable of, for example, first-order predicate reasoning?

      Would it be wrong to say that reasoning and logic evolved from our ability to recognise patterns in the world around us and in the representations we store inside? If so, it would mean the ability to think came before intelligence.
      • Jun 28 2011: No, I think you are right. I think intelligence and thinking are tools. We use these tools to get what we want and to avoid what we fear. Without feelings, we would not know, and could not find out, what to want and what to fear. We would have a tool and no idea what to do with it.

        We have evidence that intelligent things which sleep, dream and feel can exist. We do not know whether such a thing can exist without sleeping, dreaming and feeling. So to me it seems only reasonable to try first the thing we know is possible.
  • May 1 2011: Hi George,

    It would be nice if you could edit your first post and copy the other posts here into it; this would make the thread tidier.

    To your question - I think the answer is that it is simply very difficult to create a machine that can truly think on its own. I do not think it is a question of whether, but rather of when; still, I feel it could take quite some time.
    • May 2 2011: Don't you think it is more a question of approach?

      Are hearing and vision essential parts of thinking? It does not seem to be the case.

      It may appear difficult, but look at the potential benefits. You could even send Thinking Machines into space to do exploration and mining, or even to build habitats for us on other planets.

      I may be wrong, but I feel the genuine AI problem has become more a reflection of what we are and how we function than an actual solution-finding activity.
    • May 22 2011: I wouldn't say it's difficult to create a machine that thinks. The problem is the simplicity the creator has to achieve.
      I don't know how long it will take, but watching driverless cars and machines that speak raises high expectations.