This conversation is closed.

Artificial Intelligence will supersede Human intelligence

Anna Hoffmann said in http://www.ted.com/conversations/1321/what_are_10_things_you_know_to.html
"But Christophe, I am not sure that all human intelligence can be described with algorithms, for example emotional intelligence (as Lee mentions), empathy, the kind of creative genius that creates mind blowing art....
...and if the computers are created by humans, maybe they will be kind of human as well? Or the distinction might become more unclear, what is man and what is machine...."
So that is what this debate is about.

I myself defend the thesis.

Closing Statement from Christophe Cop

How consciousness can arise from algorithms seems to be the main hurdle...
And emotions, and creativity, and sexuality,...

There are lines of thought in different directions trying to see how this might (in principle) be done, and what some of the problems are.

I would conclude that AI can supersede HI, but we don't yet seem to know how...
How long it will take is difficult to predict, but somewhere between 2 and 100 years would be broad boundaries.

A good many questions have been raised. I guess laymen and researchers alike might find this debate useful.

  • Mar 30 2011: Will an AI ever ask itself whether humans supersede it, and then also start worrying about this possibility ??

    Will an AI ever say to itself something like: "I am bored today. I don't care what I am programmed to do today. Today I shall enjoy myself by driving these humans around me crazy. Today I shall give them only wrong answers to anything they ask me."

    Will AI ever say to itself: "Somehow, I feel like communicating only with that AI standing 4 desks away from me. I feel like sharing everything stored in my memory with that specific AI and I also feel curious to know everything it keeps inside its memory. I don't know why I suddenly feel like doing this only with that AI so far away from me."

    Will AI ever ask itself: "Why am I doing all this stuff ?? For what ?? What do I gain from it except just more data & information ?? Why not shut down everything and enjoy just pure peace ?? What do I want to be 5 years from now ?? Am I supposed to do just these calculations and predictions all over the many years to come ?? And then what ??"

    Will AI ever say to itself: "Basically I see around me mostly 2 types of humans. One type looks more tough and rough and bodily stronger, while the other type looks gentler and bodily softer & weaker. But too many times the interaction between these 2 types looks much more complicated, unpredictable and sometimes irrational, compared to the interaction within the same type. Sometimes they are so close & friendly to each other, while at other times they cannot bear each other at all. They reveal completely erratic behavior when they are together. So I don't get why they need a third, different type of being like us – the AIs – near them. Why do they add more complications to their existence instead of first solving their own inter-type complications ??"
  • Apr 8 2011: Dear friends,
    I see you have been engaged in a long discussion about pleasure and pain and the mechanisms behind them. The reality (as I understood it after studying the physiology of pain as part of my education) is very complex, not only in the brain, but in and along the spine, where the peripheral nerves go in to meet the central nervous system. The spinal cord is part of the brain, in a way. Everything in the body, including the brain, is very dependent on every other part of it. That is how life works.
    The models that describe how the brain or the nervous system works are just models, not the complex reality.
    It is popular to talk about the different parts of the brain, and a lot of research has been done, but in reality the parts of the brain mostly work together, and functions are more often than not shared by nearby areas.
    Our bodies are biochemical machines, not electronic mechanical devices. I have still not met anyone who can comprehend ALL the different new data that keep developing in the fields of medicine, biology, life sciences and similar subjects. And I think that comprehension is lacking in this conversation, because WE are humans and the people who want to create AI are humans, so they will create something they can comprehend....
  • Apr 3 2011: Human intelligence, as it is now, will be superseded if we last a few more decades.

    Whether it is superseded by artificial intelligence, or artificially augmented human intelligence is hard to predict. Evolution spent a long time developing the technology of human intelligence. It'll be a race between deciphering those designs, designing new ones on our own, and implementing them in hardware on the one hand, versus augmenting human intelligence on the other.

    The irony will be that squeamishness about augmenting human intelligence is what could cause it to be superseded by machine intelligence.
  • Mar 31 2011: Through study and clinical experience (I am a physical therapist), I know that the intelligence of a human individual is not just located in his or her brain. The brain has plasticity; it is in constant biological exchange with the rest of the body. The motor and sensory neurons that run up and down a major part of the spine are swimming in the same soup of neurotransmitters as the brain. What we eat, how and when we move, sleep, shit, have sex... and I am just talking basic functions... shapes our thinking, is like a foundation for our intelligence, and has shaped our language and our view of the world. We need a body to have a brain! It is very visible in rehab situations. For example after a massive stroke, when the patient manages to sit up for the first time, with help, and suddenly makes an effort to communicate, even though she cannot talk at this point. And months later, with more rehab, suddenly says something.
    I wish more people appreciated how fantastic our body-mind is!
  • Mar 30 2011: Hey Christophe, your topic hit 100 comments! (Anybody else remember Christophe? He posted this debate.) Do you get a prize? ;)
    • Mar 31 2011: Hurrah!
      [and there was much rejoicing]
      the number of posts is intrinsically rewarding ;-)

      I'm following the entire debate. There are a lot of very good arguments (I like the discussion between Birdie and Ben a lot for example: good questions and answers there)!

      I'm learning.
      I think that most arguments "against" are indicating the long road ahead (and some desirability and ethical questions coming up as well).
      I haven't yet met a "look, here's a law of nature blocking the possibility of that ever happening" argument.
      • Mar 31 2011: What I wonder, Christophe, is: Who, what individual or group, will supersede human intelligence with AI?
        A few intelligent humans? Would they really see that as priority number one?
        Or power-hungry evil humans maybe? Because that kind of person always tries to create slaves. They would want to control the creation of AI. Those who do not like (or are unable) to relate to friends, only to control and exploit others.
        Those who, unfortunately, see themselves as egoistic machines, just using any means to survive and stay in power.
        Out of curiosity you can torture people, too. It does not make it ethical.
        I am very interested in the future of computer science and AI.
        I just hope that human relations and societies will be able to develop more love and compassion before we create AI.
        • Mar 31 2011: [As the thought of the potential power produces an evil gleam in my eyes, and a slight laughter of pure cold enjoyment at the idea of possessing just that passes...]
          "But no, we would use it for doing good"

          ;-)

          I do understand that concern, and I don't know the answer to it...
          But an open society can reduce the possibility of adverse effects...
          => so let's make it open source!

          But morality was not the topic... I did however mention that morality can be programmed too (I would think that Sam Harris' latest book can be inspiring: http://en.wikipedia.org/wiki/The_Moral_Landscape)

          We might end up with very loving AI as well... who knows?
  • Comment deleted

    • Apr 1 2011: Yes, but I think that is a petitio principii...
      And if you do that, you'd need to read a whole lot of books/watch a lot of lectures/learn...

      So let's assume that we do know what intelligence is, even though it is fuzzy...
      Apparently, the discussion works out fine

      p.s.: Can you give a good definition of a chair? When is it a stool, or a couch, or a log you can sit on?
      ... but when we speak about a chair, we can convey the meaning... "take a chair and have a seat" works
      So if you see some 'risk' arising, then you can clarify...
      until then, we assume that we have some common idea about the aforementioned concepts...
      • Comment deleted

        • Apr 2 2011: I think I can agree.

          Maybe a Meta-topic concerning reasoning, assumptions and fallacies might be interesting...
        • Apr 2 2011: Conversation is an interesting art. Language and its meaning is a process. Without conversation there would be no language. Through participating in these conversations we have an opportunity to be co-creators of language and meaning. We can make agreements about definitions. And use all our imagination creating new ones, using words as well as silence to express meaning. For that to happen we need to have the intention to communicate, to understand and to be understood. And we might still fail. But it is worth the trouble, and I think humans are better at doing it than any AI (;-)......
  • Mar 29 2011: What happens when your computer debates you on this topic?
    • Mar 29 2011: What happens when the computer decides you're not even worth debating?
  • Mar 28 2011: It is impossible for a machine to ever achieve capability parallel to humans'. A machine has no genetic code... it simply follows a profusion of complex calculations and commands. Sure, machines can and will continue to supersede humans in many ways, but machines and humans will always be fundamentally different.
    • Mar 28 2011: Well, I don't know about that. Have you seen this talk yet? http://www.ted.com/talks/paul_root_wolpe_it_s_time_to_question_bio_engineering.html

      It's all about bio engineering and he talks about how we can already tie a brain to a machine and make it do stuff. It seems reasonable to say that our future "computers" will be a hybrid of machine and organic matter (a brain) that we grow in a beaker.
    • Mar 29 2011: Austin, maybe yes or maybe no..... The point is we can't really be sure, can we?
      Isn't it all about information and how the information is handled? Why should a computer, at least in theory, not be able to be equal to a human?
        Mar 30 2011: Nick, I watched the talk; very interesting.

        Harald, I think you're misunderstanding... I believe that machines can only achieve a certain degree of intelligence. At a certain point, we are no longer creating machines, we are developing life. For example, reverse engineering a human brain, and possibly even a body, wouldn't be artificial intelligence, it would be life.

        The real question is... Would a synthetically developed human be considered artificial intelligence or an actual life form?
  • Mar 28 2011: AI itself reveals its worth in its name: artificial. Intelligence has nothing to do with things that are artificial. Things need a creator to be created in reality, and for that there are humans behind them. We dream of things first, then strive to get them completed in real existence. On the other hand, unless or until we put in data of some kind to be processed, artificial intelligence will remain just an empty product.

    I hope that clears the dilemma :)
    Cheers
  • Mar 28 2011: Yes! Just ask Watson :)
  • Mar 28 2011: The human mind is powered by many things, not just the logical and procedural; we have dreams, we have feelings, we humans break statistics down! We are anything but predictable. AI is a powerful thing, and it will become better than us at some things, but it will never supersede human intelligence.
    • Mar 29 2011: Hi Pablo, yes we have dreams, feelings, etc., but it all comes down to electrochemical reactions in our brain. As I suggested above, I don't think there is anything that would prohibit a machine from exhibiting similar functions.
      About predictability: maybe it only SEEMS to us that we are not predictable because we don't have a full understanding of how the system works.
      Intelligence means so many things. I'm still not clear what "superseding" actually means in this context.
  • Mar 27 2011: Chris......A bot with an attitude......................hmmmmmm.
  • Mar 26 2011: So Anna, concerning the second remark: I think it will get blurry between man and machine. We are becoming more and more cyborg each day...

    But why I think any form of intelligence can come from computers:

    * What our brain does is essentially computation...
    For example, how to introduce emotion:
    Emotions can be seen as reward or averse stimuli (see Dennett). The differences give an indication of what kind of response is needed.

    Empathy is "feeling" (through your mirror neurons, for example) what somebody else must be feeling.
    So if you have enough visual, auditory and other clues, you can figure out how you would respond to that... this is like imagining less emotional things, but with added information that the person might desire some response corresponding to the feeling expressed.
    So the bot needs to learn what patterns correspond with which expressed feelings, and then learn how to respond to them.

    That would be a difficult job, and a lot of computation, but not impossible. (A rough sketch follows below.)
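
    A rough Python sketch of that reward/aversion learning loop, purely illustrative (the stimulus features, valences and learning rate are all invented, and this is a sketch of the scheme described above, not a model of any real emotional system):

      import random

      LEARNING_RATE = 0.1

      def predict_valence(weights, features):
          """Predicted reward (+) or aversion (-) for a stimulus pattern."""
          return sum(w * f for w, f in zip(weights, features))

      def update(weights, features, observed_valence):
          """Nudge the weights toward the valence actually experienced."""
          error = observed_valence - predict_valence(weights, features)
          return [w + LEARNING_RATE * error * f
                  for w, f in zip(weights, features)]

      # Stimuli as feature vectors, with the valence the world assigns them
      # (say, "sweet, warm" is rewarding; "hot, sharp" is aversive).
      world = [([1.0, 0.0, 1.0], +1.0),
               ([0.0, 1.0, 1.0], -1.0)]

      weights = [0.0, 0.0, 0.0]
      for _ in range(100):
          features, valence = random.choice(world)
          weights = update(weights, features, valence)

      # The bot has now learned which patterns to seek and which to avert.
      for features, valence in world:
          print(features, "->", round(predict_valence(weights, features), 2))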
    • Mar 27 2011: I have the same thoughts. However, I wonder whether an AI can feel and have consciousness as people do, when to external observers it will appear so.
  • Apr 8 2011: I'm going to say you are right, and I believe so for two reasons:

    1. It has been proven that artificial intelligence can come to a quicker response to a problem than human intelligence can, and I believe this is because it looks at the facts only and has no emotional baggage to go with them.

    2. There may be a limit to human intelligence, but there is no limit to what a human can create artificially, and at some point artificial intelligence will outrun the need for human programming, at which point it can then program itself indefinitely.

    Let's hope there is a shut-off button.
  • Apr 5 2011: I just finished a book that argues that consciousness is non-algorithmic because it uses properties of quantum mechanics which, whilst deterministic, are non-computable. If that were true, it would seriously impede the creation of strong AI. To be honest I'm not sure I fully understand his argument, but if anyone wants to give it a shot it's called "The Emperor's New Mind" by Professor Roger Penrose. It's a great book, but some of the science in it is beyond me.
    • Apr 5 2011: Matthieu - Can you explain what the book describes as non-algorithmic processing?

      The model I understand of the neuron is that it is an element which accepts a large number of analog inputs and combines them in a non-linear fashion to produce an output. Large numbers of these are wired together (perhaps somewhat randomly, but also in clusters) to produce the complete nervous system. The whole exhibits complex filtering and storage characteristics in a way that might be considered an emergent phenomenon (thanks for the reference to emergence, Mark; cool concept).

      This is non-algorithmic, right? Does the book's model differ from this one?
      • Apr 5 2011: No, that's perfectly algorithmic and computable. Granted, the way neurons interact with each other is different to the way computers act (in that they are non-linear and neural circuits are constantly re-arranged), but these are things that parallel computing can easily handle, or that a sequential computer can simulate (as a universal Turing machine can in theory take another Turing machine as input).

        There are things, however, that are non-computable no matter how powerful a Turing machine/computer is. The most famous example is the Halting problem: deciding, for an arbitrary program and input, whether that program terminates or not. No suitable algorithm for it can exist, because any proposed decider can be fed a program built to do the opposite of whatever it predicts. (See the sketch below.)
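
        The standard proof is short enough to sketch in Python. The halts() below is a hypothetical oracle, assumed to exist only so the contradiction can be derived:

          # Classic diagonalization sketch. If a perfect halts() existed,
          # troublemaker() would contradict it, so no such algorithm exists.

          def halts(program, argument):
              """Hypothetical oracle: True iff program(argument) terminates.
              (Cannot actually be implemented; that is the point.)"""
              raise NotImplementedError

          def troublemaker(program):
              # Do the opposite of whatever the oracle predicts for a
              # program that is fed its own source.
              if halts(program, program):
                  while True:   # oracle said "halts" -> loop forever
                      pass
              return "halted"   # oracle said "loops" -> halt immediately

          # troublemaker(troublemaker) halts if and only if halts() says
          # it doesn't: a contradiction, so halts() cannot exist.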

        In his book, Professor Roger Penrose argues that there's something about consciousness, at the level where quantum mechanics and general relativity meet, that makes it non-computable and therefore non-algorithmic. I got lost at that point. Something to do with quantum superposition. The rest of the book I enjoyed, though, because he touches upon all manner of science, computer science and maths (thoroughly! That book is like 700 pages...).

        The book takes into account that model of the brain but also points out that this only accounts for part of how the brain works.

        I always kind of assumed the whole brain could be computationally simulated and that consciousness was just an emergent property of complexity, and I liked the idea that organisms were simply nature's machines (we both have a code, after all). That book makes me feel like I need to delve more into the subject! So many unanswered questions now!
    • Apr 7 2011: http://c2.com/cgi/wiki?MistakesOfRogerPenrose

      I think Penrose is totally wrong... The basic reason is that you don't need to invoke quantum effects in any neuronal network to see how it works.
      Look at the animal kingdom and all the experiments done on the sea slug Aplysia (whose individual neurons have been mapped and their effects studied), for example...
      Penrose assumes something in addition to the elements currently needed to give a plausible explanation.

      Furthermore, if something is "quantum", that would mean it is in a probabilistic state. Probability distributions are calculable, and hence algorithmic. AND you can always use things like "expansions", "transformations" and other mathematical approximations for anything that might be "non-algorithmic" (whatever that may mean)...

      One needs to note that a brain, and any computational device receiving data, is an open system, so there need not be an assumption of "endedness" of an algorithm in your brain (although many of the subroutines are)

      Anyway, the burden of proof lies in the camp of Mr Penrose, not in the camp of the neuroscientists, who, up to now, have apt models to explain the things they observe without using quantum-effects.

      Concerning the halting problem: that is a contradiction or paradox you created. A paradox cannot be solved, neither with computation nor without it.

      The reason you got lost is probably because it makes no sense... and because Penrose is unable to explain it. "If one cannot explain something he claims to know, he doesn't understand it and might be wrong" is an adage I like to use...

      [EDIT:] If this all doesn't make sense, I might be wrong too, of course
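
      On the "probability distributions are calculable" point: a toy Monte Carlo sketch (the two-outcome "measurement" and its probabilities are invented, not a simulation of any actual quantum system) showing that sampling recovers a distribution to any desired accuracy, by an entirely algorithmic process:

        import random
        from collections import Counter

        # Invented outcome probabilities for a toy probabilistic measurement.
        OUTCOMES = {"spin_up": 0.36, "spin_down": 0.64}

        def measure():
            r = random.random()
            cumulative = 0.0
            for outcome, p in OUTCOMES.items():
                cumulative += p
                if r < cumulative:
                    return outcome
            return outcome  # guard against floating-point rounding

        counts = Counter(measure() for _ in range(100_000))
        for outcome, n in sorted(counts.items()):
            print(outcome, n / 100_000)  # approaches 0.64 / 0.36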
      • Apr 7 2011: To be honest, I think the computational part of his book would have evaded me had I not studied Computer Science as an undergrad. He lost me on some pretty well established physics at times, so I'm going to go ahead and be humble and conclude I need to learn more about physics in order to understand his probably quite decent explanation. It'd be nice for someone else who has read his book and understood it fully to step in and give his opinion.

        I do agree that the burden of proof lies with him as he's made a statement that's still pretty hypothetical.

        I quite like your idea of using approximations where needed. Thanks for the link, I'll have a look soon.
  • Apr 4 2011: I think Anna brought out a good point below: that in order to replicate human consciousness it would be necessary to synthesize an entire body, since the mind and body are so intertwined.

    However, suppose we were able to identify the functional characteristics of neurons, fabricate them in great numbers, and interconnect them in structures similar to a human nervous system. Imagine, furthermore, attaching some set of sensors (video, audio, etc.) to provide input from the environment. Could such an artifice be said to have consciousness? How would it differ from human consciousness? What would be required to make it more closely resemble human consciousness?
    • Apr 4 2011: Although our mind and body are intertwined, or, as I would put it, our mind is a part of our body (I don't like the dualism, as it sometimes seems to suggest that there is such a thing as an independent mind without a body... there are independent bodies without a mind... the ones we call dead, for example)...

      This does not imply one needs a human body.
      Any kind of body that can do the necessary computations would do. Saying only a human body could do this is a very anthropocentric fallacy!

      Of course you need to have a lot of sensors on your bot, as those are the ways to obtain (new) data... which is essential for the learning process of a bot. I would add a lot more sensors than we humans have (infrared, ultrasound, UV, more chemical sensitivity, finer thermal, gravity, accelerometers... pressure, torque, luminance, magnetism, electricity...)

      Concerning consciousness: yes, it could (I don't see why not...). And it would be there when the bot is processing information (not when it's shut off). I assume it would need some prerequisites that are as yet undefined in this discussion (any suggestions are welcome).

      To resemble human consciousness, it would probably need a lot of visual-based thinking, a big linguistic component, small olfactory and gustatory ones... &c &c.
      • Apr 5 2011: In terms of prerequisites - how could analogs of pleasure and pain be integrated? Aren't these perhaps the primary motivators in the learning process?
    • Apr 5 2011: Well Tim, let's take your idea a little farther. Instead of just copying the circuitry of the brain, why not replicate the entire process of development? The process that occurs from an infant, whose cranium then matures and develops into that of an adult. Note that there is no direct human programming in this process of development; there are just physical processes that integrate all the sensory information and produce a mature and intelligent human. Why not do something like that with a machine? Why not let the sensors do the programming as opposed to people? Why not let them determine the development of both the hardware and the software? This would be an entirely different approach to AI, one which completely mimics the process of human development, and thus the development of intelligent behavior. The latest research shows that the brain also changes this way well into adulthood, so the process of development never stops, though it does drastically slow down. So I think this demonstrates that brains have a quality which programs and hardware don't, at least not yet. This may account for the difference between the behavior that rigid programs and hardware produce and the non-rigid behavior found in humans.
      • Apr 5 2011: Budimir: I believe we're basically thinking along the same lines. My question is: besides the basic information collection and processing apparatus, what needs to be done to get close to human consciousness? That is, what scenarios need to be created? Is something additional needed to motivate learning (pain, pleasure, etc.), or is the motivation inherent in the structure?
        • Apr 7 2011: Human consciousness is such a strange thing. In any feat of engineering there has to be a mechanism by which a given machine goes from input to output. Suppose we try to engineer consciousness, let's say pain. We need to have some kind of mechanism by which some kind of input, extreme pressure or heat, could produce the sensation of pain.

          pressure --> nerve impulses --> pain. We can describe how pressure affects nerve impulses really well, and could probably produce a similar pattern in a machine. The hard part is producing the sensation of pain. How does something concretely material like a nerve impulse produce something that seems almost like an illusion, with no substance?
      • Apr 7 2011: Yes, it's an interesting topic - from a neurological point of view what constitutes physical pleasure and pain?
        • Apr 7 2011: If I'm not mistaken (and I think recent research might correct me)...

          You can see a stimulus that hurts you as something you need to avert. So there is a strong signal that says "stop getting this stimulus"... We experience this as a negative emotion, and to be a bit simplistic, I'd say that's in the amygdala (there are other areas involved in emotions, so this is a "probably").
          Initially it was very concrete (heat, pressure, cuts...); but other stimuli create aversive emotions (fear, disgust, pain), and these can be triggered by imaginary, abstract or very complex stimuli...

          The same goes for good feelings (sedation, safety, comfort, sweetness, warmth, a fertile mate), so more and more complex thought areas (association areas) also have contact with the amygdala.

          In Birdia's example: Rewire the "rose" and "book" representation to the pain-generating center, instead of to the pleasure center.

          It should work approximately like that, but I might be wrong on the exact location, although emotions are from the "primitive" (evolutionarily older) parts of the brain.
        • Apr 7 2011: Wow! Very good question Birdia. My example was simple for clarity's sake, but the brain is actually an interconnected organ whose parts constantly communicate.

          Christophe explained it well. I would just add that since different regions communicate they also very likely penetrate each other. Pain is not just a single unit of qualia the brain experiences but it overlaps with many others.
        • Apr 8 2011: Budimir:
          I did make it seem as if there was this fixed place in the brain where pain is "made" ...

          Neurons carrying "bad" information come to places where they make synapses with neurons spreading the "pain" sensation to whatever regions need such information...
          Whether this happens in one region (I suggested the amygdala, which might be seen as too big to be a region) or more (brain stem, thalamus...), I don't know.
          http://scholar.google.be/scholar?q=brain+regions+human+emotion+pain&hl=en&as_sdt=0&as_vis=1&oi=scholart might help
        • Apr 8 2011: Yes, that's why I added that part myself. I may be wrong, but in my opinion what we call pain is probably experienced along with many other impressions, like aversion, anxiety and so on.

          Pain is used to describe many different experiences.
        • Apr 8 2011: Very interesting thoughts Birdia,

          How would "heartbroken" be learned or programmed....?

          Hmm...
          That would imply a bot needs to feel a reason to be close to someone... So that would go to the account of sexual selection (from a Darwinian point of view).

          So if we don't pre-wire the desire to be (communicating) with another agent, and the loss of contact would thus not be considered painful (through previously acquired positive responses)... heart-brokenness would be difficult to conceive...

          But as human emotions cannot be reasoned out without gender differences and procreation, we might find out we need to simulate or create that in AI
      • Apr 8 2011: Birdia's observation is an interesting one, but I think it distracts us from the initial question.

        She was describing learning to associate a new stimulus with a painful one, in the Pavlovian sense. For example, if we shine a bright light and shock a person simultaneously several times, then before long we can just shine the light and the person will feel pain.

        But what is the fundamental nature of the initial pain? If we consider pleasure a form of positive (reinforcing) feedback and pain a negative feedback, what are the physical characteristics of each? Is an electric shock detected by a pain nerve? Or are all nerves pain/pleasure generic, simply giving out different signals based on the stimulus? What mechanism exists within the brain (or the extended brain, if we consider the entire nervous system the brain) that makes a given signal type positively vs negatively reinforcing?
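
        The light-plus-shock pairing above is classical conditioning, and it has a standard minimal model, the Rescorla-Wagner rule. A sketch (the parameter values are arbitrary choices for illustration):

          # Rescorla-Wagner sketch of the light+shock example: the light's
          # association strength V grows toward the maximum the shock
          # supports, after which the light alone predicts pain.

          ALPHA_BETA = 0.3   # combined salience / learning-rate term
          LAMBDA = 1.0       # maximum association the shock supports

          V_light = 0.0
          for trial in range(1, 11):
              # Pair light with shock: V moves toward LAMBDA (delta rule).
              V_light += ALPHA_BETA * (LAMBDA - V_light)
              print(f"trial {trial}: light->pain strength {V_light:.2f}")

          # After training, the light alone yields a strong pain prediction
          # (V_light near 1.0): the Pavlovian association.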
        • Apr 8 2011: I don't know Tim,
          I suggest you pick up some books about neurology
          There are a lot of answers there... they have a lot of information...

          Electric shocks are detected by the damage they do, and by the activation of the muscles, which gives the muscle-tension sensors a jolt... the signal is interpreted as pain.

          I think there is no such thing as a "fundamental nature of initial pain"... It is something that evolved and is fuzzy across species. There are multiple forms of pain, as each pain needs to convey different kinds of information.

          Positive vs negative: evolution could do the trick
        • Apr 8 2011: I guess we could say that any intense mechanical damage to the body would be extremely painful, even for a masochist. I don't think anyone can endure intense physical torture. This doesn't have to be the case in all animals, but it is for humans, who have evolved a physiology that can be disrupted by mechanical damage.
        • Apr 8 2011: Could be, and it sometimes can occur without an actual physical stimulus, as in phantom limbs; for these reasons I think pain can certainly be separated from mechanical damage. But it tends to occur with mechanical damage most of the time.
        • Apr 9 2011: I think sometimes what actually causes pain can mislead us. The brain is such a complex thing that it can make you believe your missing arm is hurting you. So the brain and consciousness remain a deep puzzle.
    • Apr 5 2011: Tim....Good idea....If we could control and design as you propose, that would solve the problem of mental illnesses. We would not have any neurotic AI. Because of course we would program only love. No free will.
  • Apr 4 2011: In a lot of the discussions here, there has been much theoretical debate about human vs computer intelligence.

    I wanna discuss a more practical example now. What about Watson, the game show computer? If we consider trivia knowledge a kind of intelligence, could we also say that in at least one instance artificial intelligence has superseded human intelligence? After all, Watson beat some of the best game show competitors.
    • Apr 4 2011: Was it really intelligence? Or a sophisticated search algorithm coupled with speech and linguistic software? Was it able to carry on a reasonable conversation and make hypothetical analogies? I only saw the first show.
      • Apr 4 2011: It definitely cannot generate a hypothesis, and in my opinion it may be that it never will be able to do that. I outlined previously that intelligently creative behaviour cannot exist without semantics, and I am sticking to that opinion. I don't think a computer will measure up to an Einstein or a Tesla.

        Watson was able to outperform many humans on the game show. But like you said, is it really intelligent? Many would say, well, if its behavior demonstrated intelligence then it must be considered intelligent, but it's interesting how we separate those definitions in the natural world. We wouldn't consider our immune system intelligent, but it gets rid of disease much better than anything we can make. Same with cancer: it evolves so fast it has mechanisms to evade our best cures. Is it intelligent?
        • Apr 4 2011: Yes. Let me state it this way. Watson = speech & linguistic software + search algorithm + probability calculator. That’s it.

          Not that it isn’t a real AI accomplishment, absolutely, but it’s not intelligent by my understanding. Also consider it took a bunch of Ph.Ds, a ton of technology, a temperature controlled room, and more than a couple of kilowatts to do the same thing the other contestants did with three pounds of neurological tissue & breakfast.

          Additionally, the other guys could order dinner from a menu, pack their suitcases, pick up a present for ‘Stacy’ at the airport gift shop, and tell stories about Alex Trebek WITH THE SAME three-pound brain. No extra programming required.
      • Apr 5 2011: Yeah, it's fascinating what humans can do. I suggested something to Tim in the post below this one. I think you may find it interesting. I went beyond comparing the brain as a circuit and also compared it as a substance. So I am wondering if there could be something in that.
    • Apr 4 2011: I think that Watson is very intelligent at solving the game he is intended to play.
      Watson "understands" the answers better than most humans (so superseding, or on par).
      Watson looks for the most probable question, and thinks about alternatives...

      On other tasks he performs very poorly (I guess)... so maybe you can see Watson as an idiot savant...

      So while passing the Turing test in-game, he will be recognized out-game. (A toy sketch follows below.)
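
      That "most probable question" behaviour can be caricatured in a few lines: score candidate responses and answer only when the top confidence clears a threshold. This is a toy sketch with invented candidates and scores, not IBM's actual pipeline (which combined many scoring components):

        BUZZ_THRESHOLD = 0.5

        def best_response(candidates):
            """candidates: list of (answer, confidence in [0, 1]) pairs."""
            answer, confidence = max(candidates, key=lambda c: c[1])
            if confidence >= BUZZ_THRESHOLD:
                return answer
            return None  # stay silent rather than guess

        print(best_response([("Who is Bram Stoker?", 0.93),
                             ("What is Transylvania?", 0.41)]))  # buzzes in
        print(best_response([("What is a chair?", 0.22),
                             ("What is a stool?", 0.19)]))  # stays silent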
      • Apr 5 2011: But by "understanding" are you referring to the subjective experience of understanding, or simply the iteration Watson presents to the audience?
      • Apr 7 2011: But would you say our immune system understands what it is doing? It is very efficient at protecting us. I remember hearing about how our immune system works and marveling at the fact that it is not consciously performing all these sophisticated functions.
        • Apr 7 2011: Now you imply that understanding means knowing...

          Knowing implies self-consciousness (I feel X to be true, and I have an image of "I").
          An immune system has - as far as I know - no self-consciousness.

          I don't think Watson has self-consciousness, so he would not know.
  • Apr 3 2011: "I myself defend the thesis"
    Which thesis is that?

    Edit: I see, number 7...

    totally with you on that one... quite soon even!
  • Mar 31 2011: Yes, but our weird wiring still gives us the edge.
  • Mar 31 2011: This MAY annoy a lot of people, or not. But I figured making 5 posts in succession would be ridiculously stupid.

    So, because my response to this issue is considerably longer, I suggest you please read my response at the provided link.

    If it's too much of a hassle for people to read on the web, I will repost here.

    Thanks for your time to read in advance...it's long :P

    http://berserkerlion.com/tedsponse.htm
    • Comment deleted

      • Mar 31 2011: Karl's AI is the art itself. The art created by the AI is actually Karl's tool to create said artwork. A great quote from your article shows this:

        "His paper "Artificial Evolution for Computer Graphics" described the application of genetic algorithms to generate abstract 2D images from complex mathematical formulae, evolved under the guidance of a human." I'd like to point out the last four words.

        Evolutionary computation and animal communication are awesome examples of intelligence without language. But they are not examples of human creativity. I suppose this is my fault for assuming my use of the term intelligence would encompass creativity along with it. Creativity is my argument for what machines currently, and likely for the near future, are unable to surpass us in (or whatever word you want to use to show machines > humans, aka the entire subject of this 'conversation').

        Cycl, if there were ever a machine language to come close to human language, is probably it. It may even eventually, in some radically different form, be what does it. However, Cycl is a classic example of the limitations of widespread reification that learning machine languages suffer from and no human does, as is nicely exemplified in both its own article and what I had written. Cycl is also a communication form... a really really smart one. But it's not much different than bee dancing. As in, it's communication without creativity. It's not that difficult to get a machine to learn. It is currently however impossible to get Cycl to create something like "Green dreams sleep furiously." Not to mention even comprehend what it could mean.

        Case in point: machines are still not at the level of humans, and won't be any time soon without radical changes. I still have hope for the machine, however; someday we'll get there... they will too.

        But don't confuse people here, just because the bee can conjugate, doesn't mean the bee is anything human. Neither is Cycl.
    • Mar 31 2011: Scrolling through your text, I pick one thing

      "The number 1 exists, the number 0 exists. Logic gates. Open the gate, close the gate.
      Nothing else to the machine exists."
      and
      "Our current way of making machine language, definitely -won't- give rise to the machines."

      Combined, these give me the impression that you have overlooked probability theory and self-learning algorithms...
      Furthermore: from a cybernetic point of view, there is no evidence for languages being incommensurable, which allows one to create grammar in logical languages...

      I know these are many difficult words, which make it vague... so let me try this again:
      - Learning is taking new information and adding it to your old state of knowledge. This gives you an updated version of knowledge... This is described mathematically by E. T. Jaynes, meaning you can program it (a minimal sketch follows below).
      - Even with difficult-to-translate meanings and words, you can always approximate the meaning of a word by writing a paragraph about it. You have translated it, but in a very time-consuming manner...
      This means that on a Turing machine you can, in principle, simulate a human mind and all its aspects.
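
      A minimal sketch of that updating step as Bayes' rule (the hypotheses and numbers are invented for illustration):

        def bayes_update(prior, likelihood):
            """prior: {h: P(h)}; likelihood: {h: P(data | h)}."""
            unnormalised = {h: prior[h] * likelihood[h] for h in prior}
            total = sum(unnormalised.values())
            return {h: p / total for h, p in unnormalised.items()}

        # Old state of knowledge...
        belief = {"it_will_rain": 0.3, "it_will_not": 0.7}
        # ...new information: P(dark clouds | hypothesis)...
        evidence = {"it_will_rain": 0.9, "it_will_not": 0.2}

        # ...updated version of knowledge.
        belief = bayes_update(belief, evidence)
        print(belief)  # the rain hypothesis rises from 0.30 to about 0.66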
      • Mar 31 2011: Thank you, for showing that you didn't actually understand or even fully read what I had written.

        Basically, everything I said comes down to a single point. Until a machine can create a sentence/idea such as 'Green ideas sleep furiously' or 'Godly farts dance in rivers of my purplish blues' on its own, without any kind of prior art; understand the fact that it could do it, without having to reference something it learned before; and further discuss said new idea with me and its possible meanings, or where it even came up with the idea on its own - no machine will be at the same level as a human's ability. And I cannot possibly agree that simulation of the human mind and "all its aspects" is possible without radically new machine design/programming.

        Those very sentences will mean something different to absolutely everyone who reads them and there are no right or wrong answers. Every human on earth has this innate ability. It's the only real example of creativity. And it is directly related to our real language.

        I think machines will get there someday. But for now, no matter how much a machine learns, it's just another smart machine; it's nowhere near human, yet.
        • Mar 31 2011: Putting your ad hominem aside,

          * I don't think you can give one valid example of any idea "ex nihilo" let alone a piece of art.
          (I don't think -analyzing your example- that putting a random adjective, noun, verb and adverb together is very artful or original)

          As you say they will get there some day,... agreed.
  • Mar 31 2011: What do you consider 'supersede' to mean?

    In terms of computational power, computers already supersede humans. If at the beginning of the 20th century some computations took 5 years to be done by a human, now they probably take less than 10 minutes when done on a computer.

    In terms of creativity I agree with Anna. I think there will be some things that computers won't be able to do. (...and right now I think we are very far from even making AI reach HI).
    I think people want to use AI in the wrong ways & sometimes expect the wrong things from it. Although planes were inspired by birds, they don't have the same purpose, and no one tries to copy a bird entirely when making a plane. I consider it the same with AI. It is inspired by HI, but it shouldn't try to copy it. We should use it to make intelligent and fast algorithms.
  • Mar 30 2011: We will have artificial intelligence, and when we do there will be nothing "artificial" about it. How do I know? Reverse engineering. We will reverse engineer our own brains eventually; it is not a question of if, but when.
  • Mar 30 2011: I am more interested in us humans becoming less programmable and mechanical than I am in efforts to create mechanical devices that mimic us.
    And just so you don't misunderstand: I love my iPhone 4, my computer, our smart car, Skype and all great new technical devices. I would not mind owning a vacuum-cleaning robot. But I would not call them intelligent, just smart.
  • Mar 30 2011: And what about the idea of meaning ("KAZAM"), the sense of wonder, the mind-blowing experience?
    The intelligent historical individuals I admire seem to have that - the Ah, This!!! - along with their bright minds. How do you build that into a machine? And why should you?
  • Mar 30 2011: Once the Earth was flat & in the center of the universe, humans couldn’t fly, the Moon was made out of cheese, and people on opposite sides of the planet couldn't communicate with each other easily.

    Although not considered remotely possible or probable at one time, these beliefs and many others have proven to be wrong. So taking history into account, I suspect my opinion that AI could not supersede HI in creative endeavors will also be proven wrong eventually, despite MY not being able to conceive how it might.

    Christophe provided a scenario an hour ago for how AI might develop artistic appreciation. I’m thinking that AI would have to develop artistic appreciation BEFORE it could create art. As original creations are usually beyond an audience’s ability to appreciate at first, would humans even recognize it as art? Would humans take it on faith that something is art if AI produced it and said it was? Could there be a theory of art that could only be appreciated by AI?
    • Comment deleted

      • Mar 30 2011: No, he hadn’t convinced me, but his scenario did make me look at the historical evidence of what was impossible, becoming possible.

        That’s not to say I understand how AI might also make that jump. However there are lots of things that exist that I don’t understand, so my understanding is not a prerequisite for it being possible. Heck, I don’t understand how ‘1000101101001101010’ translates into me writing to you on the other side of the world, yet here it is.

        While I don’t believe AI could be creative, I’m not ready to proclaim AI would never be creative. Like you I am agnostic on the issue.
      • Mar 30 2011: As there are currently many examples of man being destroyed by ‘his’ creation; Chernobyl, Gulf oil spill, financial derivatives, etc., I would not be surprised it could/would occur with AI.

        A movie called [Colossus: The Forbin Project] uses Ben's proposition as its plot.

        As long as we retain the power to pull the plug, I think we'll be okay.
        • Mar 31 2011: @ Birdia

          I am just talking in a very hypothetical sense: the same kind of randomness that produced consciousness in us could possibly do the same in computers. I can't say how likely it is that it will occur.

          Calling them artificial depends on how you define natural. If artificial means man-made, then a randomly evolving computer would have developed its consciousness naturally, since humans didn't directly give it any consciousness.
      • Mar 30 2011: I got the idea from evolution, because that's how living cells became "functional."

        The funny thing is, there are also evolutionary algorithms in computer science, created for exactly that purpose. They eventually make a program more functional, but the evolution of the program is random. (A bare-bones sketch follows below.)

        Now let programs like that run for a few billion years; what prevents them from becoming conscious just like we did? Technically, we are the product of the same kind of randomness.
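
        A bare-bones evolutionary algorithm, to make the idea concrete (the bit-string "organism", the target and the parameters are all invented for illustration):

          import random

          # Fitness counts bits matching an arbitrary target pattern.
          TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

          def fitness(genome):
              return sum(g == t for g, t in zip(genome, TARGET))

          def mutate(genome, rate=0.1):
              # Random variation: each bit may flip.
              return [1 - g if random.random() < rate else g for g in genome]

          population = [[random.randint(0, 1) for _ in TARGET]
                        for _ in range(20)]
          for generation in range(100):
              population.sort(key=fitness, reverse=True)
              survivors = population[:10]                  # selection
              population = survivors + [mutate(g) for g in survivors]

          # Random mutation plus selection drifts toward a "functional"
          # genome, with no designer steering the changes.
          print("best fitness:", fitness(max(population, key=fitness)))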
        • Mar 31 2011: @ Ben - I think I agree with you. >It might develop its own form of art to try and convey concepts to us that we cannot easily grasp. <

          The debate seems to return to whether AI can have an authentic emotional response, and not just mimic a human emotional response. At least as far as superseding HI in creating art, or should I say culture, goes. I think if AI were able to achieve an AUTHENTIC emotional response, the emotions AI would have would be quite different from what people experience.

          With no drive for reproduction or need to care for children, how would it value something like the first warm day in spring, when it is possible for children to play outside in the sun without a coat again? Or watch a flight of geese fly south for the winter and realize that reality is better than fantasy?

          Would AI appreciate differences in electrical current? Would AI develop a philosophy if it recognized that it is asked to solve certain kinds of problems in recurring patterns? If AI achieved an emotional response, would we even recognize it?
      • Mar 30 2011: > True. but if AI "will supersede Human intelligence", how do you propose humans can "retain the power to pull the plug"? <

        Don’t give the machine the ability to connect to its power source on its own. Don’t give it the ability to ‘blackmail’ humans with dire consequences. Keep the human element in the loop.

        Or are you implying AI would be able to con humans into subverting these precautions?

        Just because AI may become smarter than humans, doesn’t mean humans have to become stupid (although I’m not so sure that isn’t happening now anyway even without advanced AI).
        • Mar 30 2011: I wasn't referring to some kind of PC on a killing spree ... more to the fact that this AI, if it wanted to create art, would have to have a very deep understanding of the psyche of humans, demagogics and the creation of 'sublimal messages' that affect who we are and what we think.
          What led me to this was the question of what the purpose of art is. Many artworks are an outcry against injustice, a call for action to do 'the right thing'. If an AI wants to change the world like these artists do, to make it a better place, and can do so in a way 'superior' to humans, would we promote that?
        • Mar 30 2011: I agree that it's hard to imagine a computer having an intense desire for self-expression, but, as I wrote somewhere else in this discussion (I'm slowly losing track of things here), suppose we were to hardwire an AI to be conditionable, to be motivated by positive stimuli, and program it to seek these stimuli, and we then teach it that we give it these stimuli when it provides us with new insights. It might develop its own form of art to try and convey concepts to us that we cannot easily grasp. And convey its frustration when it does not receive stimuli. If one of those stimuli were attention, it might try to find ways to capture this attention. (A toy sketch of this conditioning idea follows at the end of this comment.)
          When it comes to dance... the sensation of being able to move and having functioning organs is somehow in itself rewarding; perhaps programming an AI, in a machine with moving parts, to be self-preserving will lead to it randomly testing these movements, being 'happy' that it can move, thereby creating a mechanical ballet?
          Will those movements be beautiful?
          I think the answer to that question is the answer to the question "What makes those movements beautiful?" What is grace? I once saw a documentary where the movement of a tiger was described as graceful because "it was the most efficient way to move, expending no more energy or effort than absolutely necessary." Can a computer analyse these movements and compute a more efficient way to move? A computer can calculate balance, it can calculate the expense of energy, it can calculate countermovements to minimize impact on landing... I don't know if it will happen, but I don't think it unthinkable...
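
          The conditioning scheme described above (seek positive stimuli, shun negative ones) is essentially reinforcement learning. A toy action-value sketch, with invented actions, rewards and parameters:

            import random

            LEARNING_RATE = 0.2
            EXPLORATION = 0.1   # fraction of the time the bot experiments

            # The stimuli we hand out: reward new insights, discourage
            # repetition. Values are invented for illustration.
            rewards = {"provide_new_insight": +1.0,
                       "repeat_old_answer": -0.5}
            value = {action: 0.0 for action in rewards}

            for _ in range(200):
                if random.random() < EXPLORATION:
                    action = random.choice(list(value))   # explore
                else:
                    action = max(value, key=value.get)    # exploit
                # Move the action's estimated value toward its stimulus.
                value[action] += LEARNING_RATE * (rewards[action] - value[action])

            print(value)  # the bot learns to prefer providing new insights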
        • Mar 31 2011: It is my hope that AI's one day will be able to break communication barriers between people, much like the 'universal translator' in Star Trek (a device that can interpret a language without prior knowledge of it, based on analogies in other languages). And go further than that, interpreting behaviour in a cultural background. Removing some misunderstandings might go a long way towards worldpeace, even though for some people cultivating misunderstandings seems of greater benefit ...

          Also, research might become easier... ever had something you knew, and more or less knew where you had seen it before, but not exactly? I think search engines may become much more advanced, so one can search based on context more easily, and create a digest sifting out useful from useless data. For the rest I think it will mainly be an interesting exercise, and I hope people don't become more lazy and forgetful (as with the calculator, that other significant technological enabler that taught us how to fail at simple arithmetic)...

          I suspect those who could benefit most from its logical capabilities are least likely to make use of them, though. And I think AI might come to some conclusions shocking our core of 'scientific truths', and thus quickly be discarded as erroneous, gathering dust for 500 more years till someone more credible 'reinvents' it. Most problems we face today are more attitude-related than technology-related.

          Other life forms... if intelligent... I admit the thought of xenophobic human nature dealing with such an event scares me... maybe the fact that they haven't contacted us yet proves their intelligence ^.~

          I'm sorry, but I am by nature very skeptical and pessimistic... but I did say world peace! ^.^
  • Mar 30 2011: What happens when your computer wins you over on this debate?
  • Mar 30 2011: I think that it is inevitable that artificial intelligence will surpass our own, if for no other reason than that any artificial creation with that level of intelligence would be essentially self-improving - that is to say, it could produce or invent and then fabricate the equivalent of more processing power or memory and upgrade itself to use it. The human brain, though incredibly complex and powerful, still has an upper limit - we don't know exactly what that upper limit is or even how to know for sure that we've reached it, but I do believe it exists. At the same time, it has to be said that before AI achieves a level of intelligence equal to our own, we may very well have begun to upgrade ourselves, or even transfer our consciousness into some kind of artificial structure that would allow us to grow beyond the physical limitations of our brain. At that point, would it even make sense to differentiate between what we consider artificial intelligence and that of humans? I think we'll have to wait and see how everything develops.
  • Mar 29 2011: Let us assume that AI supersedes HI: what is good about it? It means we have more capacity to solve our problems - the problems we cannot cope with. What is the problem? We fear that a more intelligent life form might be more powerful - the moral hazards we expect drive us to debate this question at all. Rightly so - given our experience with the unstable relation between intelligence and morality. Man was never really good at doing the right thing, even though he knew what was morally right.
    But there is hope: an AI will not have our experiences - it might have a chance to be more intelligent and act more ethically. Reviewing this thought experiment, I wonder if we should not demand to have an AI.
  • Mar 29 2011: Oh my God, I am cited on TED, in the very question!!! Someone wants my attention.
    This boosts my ego. And challenges me to respond as intelligently as I can.

    This, my own reaction, is the kind of reaction that is difficult for me to imagine from a machine.

    The complex net of emotion/affect, mental thoughts, memories, expectations and URGE TO BE UNDERSTOOD, that makes up a human being.

    Can a machine long to be understood? Are "Pinocchio" and "AI" (the movie) just stories, or symbolic mythological tales made to point at how we humans can either reduce ourselves to mechanical machines, or stay true to our inner longing and keep moving beyond our own mind and concepts, forever? Can a machine keep moving like the human mind? Then we might be close to discovering the Perpetuum Mobile.
    • Mar 29 2011: Anna - I agree it is difficult to imagine how a machine could have these feelings. But isn't it also difficult to imagine how a human has them? Does that mean that we can discount the possibility of a human construction having a consciousness?
    • Mar 30 2011: Well Anna, thank you for wanting me (sparking me) to open this (apparently attention-drawing) debate...

      I think Tim poses the right questions.
      If we - humans - truly understand what emotions and longing and such are, we might (following the Church-Turing thesis http://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis) be able to simulate them and succeed in creating this level of AI.

      Concerning the perpetuum mobile: we humans need food, machines need electricity... so that would be a fantasy (hard to beat the laws of thermodynamics).
      • Mar 30 2011: If we may believe the psychoanalysts, psychopaths, like machines, are incapable of genuine empathy, and yet they convincingly emulate it by studying those around them. That would suggest a machine would be able to do so too. But then, would we want our machines to be psychopaths?

        I do think the question as to what would drive this AI to do anything is the core question here.
        Would we allow a machine to develop its own ethics, define what is good and evil?
        If we did not preprogram it to think that "what is good is what is most desirable for its creators", I don't think it is inconceivable that it would come to a nihilistic conclusion, decide there is no true good or evil, seek out some weak minds and manipulate them (a machine that uses only logic can't be wrong, right?), and start building a new world order adhering to this new-found ethic. Although, more likely, it would just complain there is insufficient data to conclude anything and stop working altogether...
        • Mar 30 2011: Most people who lack the capacity for empathy tend to live lives that aren't violent.
          If, however, they have had traumatic experiences and a bad environment, they might become psychopaths...

          Concerning ethics: if you see good as pro-social behavior (be it toward humans, plants, animals or robots), enhancing pleasure and reducing harm and effort, and bad as anti-social behavior, decreasing pleasure and increasing harm and effort, and analyse any behavior as having both elements, then one can make a refined (almost utilitarian) decision (a toy scorer along these lines is sketched below)...
          Maybe we can add Asimov's rules to it...

          But ethics is not the debate here...
          (I guess increased intelligence implies a better understanding of ethics too...)
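
          A toy version of that almost-utilitarian weighing, with invented options and weights:

            # Score candidate actions on the dimensions named above
            # (pleasure, harm, effort) and pick the best net result.
            # The 0.5 effort weight and the options are arbitrary.

            def net_good(action):
                return (action["pleasure_added"]
                        - action["harm_caused"]
                        - 0.5 * action["effort_required"])

            options = [
                {"name": "help_human_stand_up", "pleasure_added": 0.8,
                 "harm_caused": 0.0, "effort_required": 0.4},
                {"name": "ignore_request", "pleasure_added": 0.0,
                 "harm_caused": 0.3, "effort_required": 0.0},
            ]

            best = max(options, key=net_good)
            print(best["name"], round(net_good(best), 2))  # helping wins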
      • Mar 30 2011: Mind does not belong to one individual. It is shared, as human intelligence. A human is nothing without connections, inside (nerves, blood vessels and so on and so forth) and out (relating to the environment, relationships and social structures). The immensely complex way humans connect, on so many different levels (from microbiology to cosmology), cannot be replicated by machines. The way we take care of our own needs, cooperate and multiply cannot be replicated by machines. Even though biology, life sciences and all those new fields of research are exploding, we are still far from replicating ourselves. And why should we replicate our minds? Why not keep on making machines that do the things we can't? Like going deep into the ocean or far away into space. Like counting and processing data fast.
        But let us stay in charge and practice our minds so we don't end up slaves of our own creations, the machines.
        That's my understanding. I am interested in learning more, and that is why I connect with you here.
        • Mar 30 2011: I realize it may seem like I was demonising AI; this was not my intention. I do believe AI technology can and will have a positive impact on our future. However, my goal was to explain the difficulty I have with attributing a personality to this AI, and the question of what would motivate this AI to do something (if it has the choice not to do it).

          As Anna states, intelligence is nothing without a context, yet the fact that the AI would be dependent on humans for its learning and decision-making could be an argument to say that AI will always be inferior to human intelligence. And thus the logical conclusion for me would be to wonder how much freedom we can grant this AI.

          We can either hardwire its choices (as in Asimov's four laws: http://www.rogerclarke.com/SOS/Asimov.html), or we can condition it.
          To condition it, we can create positive and negative stimuli that affect it. We can motivate it to seek out positive stimuli and shun negative stimuli.
          As such, it will see as 'good' anything that brings more positive stimuli and as 'bad' anything that brings more negative stimuli. Being a super AI, it will evaluate this with long-term and short-term effects in mind and act accordingly. This way ethics would be the basis of any decision it makes, which is why I think ethics are most important to this discussion.
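          A minimal sketch of that conditioning loop, assuming a toy world with two actions and invented stimulus values (not any real AI architecture):

          import random

          values = {"cooperate": 0.0, "defect": 0.0}  # learned value per action
          alpha = 0.1                                 # learning rate

          def stimulus(action):
              # the environment hands out positive/negative stimuli (toy model)
              return 1.0 if action == "cooperate" else -1.0

          for _ in range(1000):
              if random.random() < 0.1:             # explore occasionally
                  action = random.choice(list(values))
              else:                                 # otherwise exploit the 'good' one
                  action = max(values, key=values.get)
              values[action] += alpha * (stimulus(action) - values[action])

          print(values)  # 'cooperate' ends up rated as 'good'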
        • thumb
          Apr 1 2011: Anna, you said " Why not keep on making machines that do the things we can´t?"

          What if an artificial form of intelligence can figure out a way to resolve conflicts without war? Isn't that something we don't seem too good at doing?
  • Mar 29 2011: An AI will probably be able to determine a solution based on an algorithmic calculation, but it will be unable to actually connect what it thinks is the best solution to the actual situation. That is one of the things that human intelligence has over any form of AI.
  • Mar 29 2011: I’m thinking it’s an invalid question, by definition. Will AI supersede us in ‘Intelligence’ (arriving at answers from data)? Yes. Will AI supersede us in ‘Human’? No.
    • thumb
      Mar 29 2011: I think that is a good nuance Vincine,

      Might have to do with the word supersede...
      That might imply AI will overtake? I did not mean to imply that...

      AI might be better adapted to space, for example... so in that case it actually might supersede in the (true?) sense of the word....
      (correct me if I'm wrong, my English ain't perfect)
  • thumb
    Mar 29 2011: Artificial intelligence would supersede human intelligence only if it could think up solutions that humans could not think of. This is possible if AI can make up the relations by itself. Otherwise it is still limited by the options and logic of its maker and our human "connecting-dots".

    Making emotional versus rational decisions is one big difference. In the movie I, Robot, Will Smith's character is saved based only on a calculated chance of survival. Regardless of whether AI will supersede human intelligence, AI should never be allowed to act.
  • thumb
    Mar 28 2011: Hi Christophe,
    I think I probably agree that AI will eventually supersede the human mind in almost any domain-specific area, but I think it will be a long time before any computer system or AI-assisted device supersedes the versatility and flexibility of the human mind as a whole.
    • Comment deleted

      • thumb
        Mar 30 2011: Frederic, Now I know that I am your favourite commenter!
        Still no picture, huh, Frederic? Not willing to be judged as you judge?
  • Mar 27 2011: Hello Christophe,

    Note: I’m speaking WAY outside my area of competence and am not likely using the correct terminology.

    I looked up Bayesian logic. It implies that math, thus programming, thus AI, and thus a machine, can receive data it has no category for, recognize it as data, rewrite its own program to place the data in an appropriate place, assign it an appropriate weighting, and accommodate the variable in its computations, BY ITSELF?

    In other words: suppose there is a program concerned with vehicle miles per gallon/kilometers per liter, with data categories for vehicle weight, horsepower and speed, and it were to somehow start receiving barometric pressure, wind resistance, payload, and route data; the machine would be able to RECOGNIZE the data as such, and factor it in appropriately, BY ITSELF?

    That would be impressive.
    • thumb
      Mar 29 2011: That would be impressive indeed.
      But not impossible

      Maybe something close might be the self-driving cars from the DARPA challenge?
      • Mar 29 2011: That’s what I had in mind in my post from 2 days ago (further down), but I didn’t know the contest’s name.

        Can a DARPA Challenge vehicle decide -by itself- to go get gas (and maybe a cup of coffee for the passenger), find the station -by itself-, figure out if it pays to finish the contest -by itself-, without a human somewhere in the loop, either at the moment or in the basic programming, providing the instructions?

        Can AI write and/or rewrite its own programs, the way a human can reprioritize as needed? (Admittedly some of us are better at this than others.) I think not, at least not with silicon; perhaps with neurological tissue.
        • thumb
          Mar 30 2011: I think it is better to ask the question:

          How could this be done?
          Or
          Why can't this be done?

          I assume that, given enough computational power, with self-learning algorithms (soft-wired) and good prior code (hard-wired), it is possible.
          It is possible to simulate neurons on silicon. It might take a lot more power, and might be inefficient.
          So I don't see why it can't be done... although it is difficult.
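          For what it's worth, a single neuron can be crudely approximated in a few lines (a leaky integrate-and-fire sketch; all constants are arbitrary illustration values, nothing biologically calibrated):

          # Leaky integrate-and-fire neuron: a crude software stand-in
          # for a biological neuron. Constants are arbitrary.
          def simulate(inputs, threshold=1.0, leak=0.9):
              v = 0.0                     # membrane potential
              spikes = []
              for current in inputs:
                  v = v * leak + current  # leak a little, then integrate input
                  if v >= threshold:      # fire once the threshold is crossed
                      spikes.append(1)
                      v = 0.0             # reset after the spike
                  else:
                      spikes.append(0)
              return spikes

          print(simulate([0.3, 0.4, 0.5, 0.1, 0.9]))  # -> [0, 0, 1, 0, 0]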
      • Mar 31 2011: Christophe, this is just for you.

        http://www.ted.com/talks/sebastian_thrun_google_s_driverless_car.html

        I don't think it's advanced enough to get me coffee, yet. (Stanford University slackers!)
  • thumb
    Mar 27 2011: I would also defend the thesis, arguing that in a way we are nature's computers. Many of the things that we would refuse to see as computational (that includes the example given of emotional intelligence) arose from the interaction of our genetic code with the environment. It's true that computers run in a sequential manner, only giving the appearance of multitasking because of their speed, whereas brains actually work in a truly parallel way. But, given the advances made in parallel computing, this won't be a long-term difficulty for AI.

    In the interest of challenging my own views (and also because he came to my university and I just had to get him to sign one of his books), I am currently reading "The Emperor's New Mind" by Professor Roger Penrose in which it is argued that strong artificial intelligence is not possible. So I guess next time I write on this thread I might have a totally different viewpoint, that is if I can get that 700+ page book finished in the next week or so.
  • Mar 27 2011: Can AI identify problems or situations from a mass of data, and write or assemble programs to solve them all by itself? Can it make 'educated guesses' without 'knowing' all the relevant data? Can AI know whether it has all the relevant data that's needed? Can AI know what it doesn't know, know how to find out what it doesn't know, and know where to go to get what data it is missing?

    ???
    • thumb
      Mar 27 2011: At this point, I think AI is still far behind human capabilities. I think one of the main handicaps of AI is pattern recognition.
      If you look at a butterfly you know it's a butterfly, regardless of color, shape, the angle you look at it from, etc. The same goes for a myriad of other objects. We can even identify random patterns, such as an animal form in clouds or the famous Madonna pictures that people apparently even find on cheese sandwiches.
      Here AI is still far behind us. This doesn't mean it's impossible, but I just don't see it happening any time soon.
      • Mar 27 2011: Actually, from what little I know, pattern RECOGNITION is something computers are actually good at, provided they are ‘told’ what pattern to look for. Problem IDENTIFICATION (?) I think will continue to be a human advantage.

        Until AI develops to the point where a machine can ‘understand’ what it ‘needs’ to look for by itself, instead of matching what it has been told it needs, to what data it has available, humans will have the advantage.

        There is a contest various technical & science universities compete in to develop a vehicle that can drive itself through a road course. Shortest time wins. Some entries are more successful than others. I don’t think any vehicle would be able to think ‘Outside the Box’ if the course was suddenly affected by a flash flood, mudslide or other unexpected course modification, unless it was previously ‘told’ about these possibilities.

        Until a vehicle can ‘understand’ it is running low on fuel, needs to get refueled, can go find a station, one that is open, not closed, get refilled, and return to the course, BY ITSELF; or alternatively ‘realize’ it has ‘lost’ because the time it would take to be refilled would make finishing academic, and so decide to concede the contest, BY ITSELF, I don’t think AI will supersede human intelligence.

        Until AI can ‘understand’ changing circumstances outside its experience and develop effective compensations by itself; until it can organize data that has no relationship to any data it has been given before, I don’t think AI will supersede human intelligence.

        Basically, as long as AI relies on mathematics, it will not supersede human intelligence. As extensive as mathematics is, it cannot encompass all the factors humans can take into account.
        • thumb
          Mar 27 2011: And what about self-learning algorithms?

          If you look (for example) at Bayesian logic, you have prior knowledge, new data and posterior knowledge combining the new data with the prior knowledge.
          This is learning: your new response to a similar problem might be different.

          Would such mathematics be able to supersede it?
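          To make that concrete, a minimal sketch of one such update (the hypotheses and all probabilities are invented for illustration):

          # Bayes' rule: posterior = likelihood * prior, then normalize.
          prior = {"rain": 0.3, "dry": 0.7}       # prior knowledge
          likelihood = {"rain": 0.9, "dry": 0.2}  # P(seeing clouds | hypothesis)

          # new data: we observe clouds
          unnorm = {h: likelihood[h] * prior[h] for h in prior}
          total = sum(unnorm.values())
          posterior = {h: p / total for h, p in unnorm.items()}
          print(posterior)  # rain ~0.66; this posterior becomes the next prior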
      • Mar 27 2011: I believe that nowadays computers have fairly good pattern-recognition abilities. I just saw an app for Android that can read text from objects. Other apps can determine what object is in front of the camera and find similar items in an online store. Face recognition is very advanced, as is recognition of people's handwriting, etc.

        In terms of AI "understanding" what is happening around it, check out a project called Cyc:
        http://en.wikipedia.org/wiki/Cyc

        "Cyc is an artificial intelligence project that attempts to assemble a comprehensive ontology and knowledge base of everyday common sense knowledge, with the goal of enabling AI applications to perform human-like reasoning."

        I think it is only a matter of time before Cyc or a similar project has a complete knowledge and understanding of what a typical person would know. Making decisions would then be based on priorities, and it would not be restricted by limited knowledge?

        In terms of emotional intelligence, can that also be learned and to some degree randomized internally? Can we emulate artificial and real needs in AI? Artificial needs like belonging, etc., and real needs like consumption of electric power and its security?

        Eventually the computer might not think or feel like us but it will look that way.
  • thumb
    Mar 27 2011: I am not a computer scientist, so my knowledge of programming is limited, but this is the first time I saw the talk with Jeff Hawkins and it really strengthened a lot of my previous assumptions about AI. What makes us intelligent is not that we can grind out output, but that we have the ability to predict the future and produce intelligent reactions to new contingencies. In this respect even logical relations are subject to some intuitive leap when it comes to intelligent behaviour. This implies that even when we encounter a new kind of problem, we are not just grinding the information that already exists in our head to solve that problem. Problem solving itself requires some degree of creativity. Our current AI models just store a bunch of input/output instructions, but there is no unified principle from which the computer derives conclusions. We can solve a wide variety of problems just by understanding a few general principles; we don't have instructions programmed to solve every single problem. For instance, the laws of thought are

    A is A
    A cannot be both B and not B
    A is either B or not B

    Based on these three axiomatic principles it is reasonable to assume that an intelligent human can derive the following conclusion without any preprogrammed data.

    All A is B
    All B is C
    Conclusion: All A is C

    Now is it possible for a computer to make an intuitive leap and draw a similar conclusion by only using the three laws as guidelines?
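    As a partial answer, here is a toy sketch of how a program could derive that conclusion from the premises alone, by closing the rule set under transitivity (nothing about "All A is C" is preprogrammed):

    # Toy forward chainer: derives new "All X is Y" facts by transitivity
    # until nothing new appears. Only the two premises are given.
    rules = {("A", "B"), ("B", "C")}  # All A is B; All B is C

    changed = True
    while changed:
        changed = False
        for (x, y) in list(rules):
            for (y2, z) in list(rules):
                if y == y2 and (x, z) not in rules:
                    rules.add((x, z))  # derive: All x is z
                    changed = True

    print(("A", "C") in rules)  # -> True: "All A is C" was derived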
    • Mar 27 2011: I am not an expert in mathematics, but I think if AI is aware of axiomatic principles then it should be able to apply them the same way humans do? Is there really an intuitive leap, or just logical thinking, as I believe mathematics does not use intuition? People's intuition varies, so wouldn't conclusions vary as well?

      Do we really have intuition or is intuition just a way for the brain to process information in its subconsciousness, using acquired knowledge and perhaps some randomness in order to come up with "intuition" ?

      For some, "The intuition is the pattern-matching process that quickly suggests feasible courses of action." as opposed to analytical approach where we consciously compare various solutions.
      • thumb
        Mar 28 2011: Yes, logic tends to be precise, and in most cases there is only a small number of solutions which are correct or valid. The intuition I am talking about is what we use when we encounter a problem we have never seen before. There are no specific instructions for the solution to a new problem; it can only be solved through a synthesis of previous concepts and relations, and this is why we are capable of making mistakes. Every time we solve a new problem we are also producing something new. This new thing can either be semantically correct or semantically incorrect. There is no law of nature that forbids us from writing out an incorrect solution, or even a completely random one. But a correct solution can only be produced from semantics.
        • Mar 30 2011: I think we can subconsciously compare a new situation with our past experience and use that to have a "feeling" or intuition about how to proceed. I would think AI can do the same. We already have expert systems that do such a thing?

          This is similar to our vision. When we see an unfamiliar object we compare it (subconsciously) to what we know and try to apply similar understanding. That is why people have come up with so many illusions on paper, where we trick the eye or brain into seeing something as it tries to understand the new object by extrapolating from existing knowledge.
    • thumb
      Mar 27 2011: I would give the bot probabilistic logic, giving it the ability to do inductive reasoning (based on http://www-biba.inrialpes.fr/Jaynes/prob.html).

      So I don't see why you would need to limit the bot's basic instructions.
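      As an example of the inductive side, Laplace's rule of succession (very much in the Jaynes spirit) turns repeated observations into a probability; the observation counts below are invented:

      # Rule of succession: after s successes in n trials, the probability
      # that the next trial also succeeds is (s + 1) / (n + 2).
      def rule_of_succession(successes, trials):
          return (successes + 1) / (trials + 2)

      print(rule_of_succession(100, 100))  # ~0.99: inductively near-certain
      print(rule_of_succession(1, 2))      # 0.5: too little data to commit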
      • thumb
        Mar 28 2011: Probabilistic logic might work well; I really have to see how it is implemented in computer processing.

        Well, because lots of instructions just take up a lot of space; it is messy, and it's not how humans tend to derive conclusions about things. We usually work with a few general theories or concepts which can be elegantly applied to many different problems. We don't have specific instructions on how to solve problems.
  • thumb
    Mar 26 2011: Hi Christophe,
    Do you mean domain-specifically or in totality?
  • thumb
    Mar 26 2011: In what sense do you think AI will supersede human intelligence?
    For example, emotional intelligence, while relevant for interactions between humans, would be pointless when it comes to AI.
    So what specifically do you think AI will eventually be better at than human intelligence?
    • Mar 27 2011: Hi Harald,

      if AI is to interface with humans, then I think emotional intelligence is very important. Perhaps it might also be important to a group of AIs, given that emotional intelligence is important for group interaction and cooperation?

      I am guessing that there is nothing that AI cannot do in terms of knowledge, orientation in any environment and emotional intelligence, at least as seen by an external observer. Whether internally AI has perceptions similar to humans', like feelings, pain or consciousness, is another question that we might never have an answer to.
      • thumb
        Mar 28 2011: Hi Zdenek, that's precisely the point. Can AI have feelings or even consciousness ? Emotional intelligence depends on feelings (feeling of empathy). So if AI cannot feel, then it probably couldn't show any emotional intelligence.
        But then as I said, I don't think emotional intelligence would be necessary between machines (as long as they don't have feelings). It might be of advantage for machines interacting with humans, but even then it's probably not essential.
          Just think how differently you would react depending on whether a person or a machine calls you an idiot.
        • Mar 28 2011: If we map the human brain's neural network onto a computer, completely and fully, and then we "turn it on" and let it "think", will it think like a human? If it's mapped identically, it seems logical that it would. As our computing power and understanding of the brain increase, it seems reasonable to believe that we will someday (if not relatively soon) have the ability to do just that. I'm very tempted to believe that if that should come to pass, machines will be able to think and behave exactly as we do.
        • Mar 30 2011: Hi Harald, if we program AI to understand and act as if it has feelings, then I would think it will have emotional intelligence?

          If a machine calls me an idiot I can take it seriously (or people will), because if the AI looks human then people feel like it is human. Look at the recent advances in robotics. People increasingly have the same feelings toward robots as toward humans, because robots increasingly resemble humans in appearance and interaction =)

          I think emotions play an important part in the drive to live and work, so perhaps AIs need them too? But that is just my feeling ;)
      • thumb
        Mar 29 2011: Nick, I think at this point this is a rhetorical question because, as far as I know, we are far from having the technical capability to map the brain onto a computer.
        Even if it should become possible one day, I'm not sure that a computer would be equal to a human brain.
        This question (and its answer) also has deep implications for religious believers. If what you say is correct, then religion will become obsolete, I suppose.
        • Mar 31 2011: Why? I would assume that a digital duplication of a human brain would wonder about the same things a normal one does. A computer pondering existence, spirituality and religion seems possible if this were to happen. A perfect digital copy would, in theory, work the same way. So I don't really see religion going away because of such technology. In fact, some science fiction suggests that such thinking machines would even develop new religions or take our religions steps further.

          The technology is not as far off as it seems either.
          "Scientists perform cat-scale cortical simulations and map the human brain in effort to build advanced chip technology"
          http://www-03.ibm.com/press/us/en/pressrelease/28842.wss

          Take the computing power it took to do that, apply Moore's law, and in a couple of decades our PCs will have the computing power to emulate a cat's brain, and a supercomputer will be many times more powerful and likely able to emulate more complex brains.
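          Back-of-the-envelope version of that claim (assuming one doubling every two years, and assuming, purely for illustration, that the simulation needs about 100,000 times a single PC's power today):

          import math

          gap = 100_000               # assumed PC-vs-supercomputer power gap
          doublings = math.log2(gap)  # ~16.6 doublings needed
          years = doublings * 2       # Moore's law: one doubling per ~2 years
          print(round(years))         # -> 33 years under these assumptions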
      • thumb
        Mar 31 2011: Nick, Moore's law is coming to an end soon, at least when it comes to silicon-based computer chips; you can't shrink chips infinitely. According to Intel, the predicted increase in processor power (Moore's law) that has held for the past 45 years or so will come to a halt by approx. 2020. By then, other technologies will have to be developed to keep processor power increasing.
        So, for me, there are still too many question marks. At this point the question of whether or not we can map a brain onto a computer chip is pure speculation.
        I think we can agree that computers are entities working based on logic. The world of computers is reduced to a world of 0s and 1s. Where should religion find a place in this rational computer world? Religion is inherently illogical.
        • Apr 1 2011: Our brains are based in a reality of logic as well. Neurons fire because of electric and chemical reactions. All of reality obeys the laws of physics. We humans are, at our purest source, just as much 1s and 0s as machine code. We're just a collection of electrons and protons whirling around and bouncing between neutrons, right? So why should religion exist within us? 1s and 0s are the smallest pieces, and our world, our existence, our beings can be reduced to 1s and 0s, because we're all atom-based and atoms follow rules just like 1s and 0s. Simulating our reality on computers will be more than possible in the future.

          The only things holding it back are cost and time. Because of the parallel nature of the processing, we already have the power; we'd just need to keep throwing more processors at the problem, like modern-day supercomputers do. IBM threw 147,000 processors at the cat-brain work above. How many processors will it take to emulate a human brain? Is it already possible? Our supercomputers' FP operations per second have been climbing hugely, and simply throwing more processors at the problem can make that climb even faster.
          http://en.wikipedia.org/wiki/Super_computers#Timeline_of_supercomputers

          So while the silicon size limit will put a stop to single processor speed growth, we'll instead be throwing more processors at the problem. And should some new technology (like graphene based processors) come along and replace or supplement our silicon based technology, who knows how quickly our processing speeds will increase?

          But at their cores computer code and our reality aren't that different at all.
      • thumb
        Apr 4 2011: Nick, in some way you are right. The whole universe can probably be reduced to 1s and 0s. However, somehow, we as humans manage to bring an irrational component into the game, something computers don't do.
        But as I said, we don't fully understand yet, how our brains work, hence it's difficult to imagine how, if at all, we can map the brain to a computer.
        Also, having an emotional computer would probably be a step back. One thing that we appreciate in a computer is that it provides objective answers to a question. We probably wouldn't want to depend on a machine that has a bad temper.
  • Comment deleted

    • thumb
      Mar 26 2011: I don't know

      I'm not sure whether I agree with the singularity idea.
      If you define the singularity as the point beyond which no more predictions can be made: well, each moment in time allows some prediction about the future, and it gets increasingly difficult the further we try to project things... but I don't believe we will ever be entirely without predictive power... so I don't agree with it. The more vague and popular notions of the singularity I mostly reject.

      I do, however, think that AI is possible, and will eventually be created/programmed/evolve.
      • Comment deleted

        • thumb
          Mar 27 2011: Well Birdia,

          It greatly depends on what you mean by spontaneity and creativity.
          (But I do think AI can)

          I would see spontaneity as a fast action (to internal or external stimuli) without much deliberate cognition or conscious reasoning... (i.e. seemingly automatic)

          Learned things become automatic over time (taking little effort, like driving), so spontaneity can be learned.

          Creativity is (very briefly put) making new connections between existing things. I think a computer can do that too.
        • Mar 27 2011: Hi Birdia,

          That is a good question. Is it possible that we are spontaneous as a result of some randomness in the chemical interactions in our brains, and that each brain has various emotional needs that accumulate over time and lead to spontaneity? Can we simulate those in AI?

          Is creativity the result of combining facts and existing solutions into a new combination? Given that a person is aware of his/her environment, explores it to find new facts and makes new observations, then by combining these facts, observations and current solutions one can be creative and pick the right combination for a given problem? Can AI replicate this process?

          I believe the answer is yes in both cases. What do you think?
        • Mar 30 2011: Excuse the intrusion.

          AI can be programmed to solve ‘2+2’ and it will come up with ‘4’.

          Suppose AI can be programmed, or can program itself, to solve ‘2+2’ by coming up with ‘Purple’, which for the sake of argument we will say is a useful, creative & spontaneous answer that we will accept. Would AI also be able to know that solving ‘2+2’ by coming up with ‘Chair’ is not a useful answer despite being creative & spontaneous?

          I have doubts that AI would be able to discern useful creative responses from useless creative responses. I think aesthetic judgment would remain a human quality, as it relies on a background of human experience and changing tastes and sensibilities. At best AI may achieve the equivalent of ‘elevator music’.
        • thumb
          Mar 30 2011: @ Vincine:

          Given that, in the knowledge it has, 2+2 most often means 4, sometimes purple (in creative contexts) and never chair (in no context at all)... it would also infer that all other words/connections fall under the default 'near 0 probability of relation with 2+2'.

          If the bot keeps encountering people or other bots saying 2+2=chair, then this association would be made, and the bot could start to use "2+2=chair" in a meaningful way.

          If we allow the bot to connect (from time to time) things that are unconnected (say 2+2=anger) and see what the responses are, it can start to make the connection (if the response is positive or negative) or stop testing it (if responses are neutral/incoherent).
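          A sketch of that bookkeeping, with invented counts and a crude smoothing so unseen answers like "chair" start near zero rather than at zero:

          from collections import Counter

          # How often "2+2" has been seen answered with each word (invented).
          counts = Counter({"4": 980, "purple": 15})

          def association(answer, smoothing=1e-3):
              total = sum(counts.values())
              return (counts[answer] + smoothing) / (total + smoothing)

          print(association("4"))       # strong default association
          print(association("purple"))  # weak, context-dependent association
          print(association("chair"))   # ~0 until someone actually uses it
          counts["chair"] += 1          # one more "2+2=chair" encounter...
          print(association("chair"))   # ...and the association starts to grow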
        • Mar 30 2011: @Vincine:

          What are the criteria for 2+2 = Purple being useful and creative while 2+2 = Chair is not? Would at least a majority of people agree with such conclusions?
      • Comment deleted

        • thumb
          Mar 30 2011: With any creative task I think AI will be much like a random generator. It can piece together notes and maybe create good music by coincidence, or after shuffling words it might create a credible novel, though the monkeys-typing-Shakespeare example suggests there would be a lot of random shuffling for a very long time before anything like that actually emerges. A computer can do that really fast, though.

          But in my opinion a random generator is not intelligently creative, it is just randomly creative. I am still waiting to see a mechanism by which a computer can be intelligently creative the way humans are.
        • Mar 30 2011: Hi Birdia,

          I agree with you that life is spontaneous and I have always wondered what causes that. I think once we discover the "driving force" we will better understand the Universe =) I am not exactly sure how to duplicate that in AI, as I don't even understand the mechanics of it in us and in nature.

          What is beauty? According to some research, if people see a face with certain asymmetric features then they will not consider it beautiful. Similarly, men and women each "look" at different features of the face, and to different degrees they consider certain features (like the size of one's jaw) attractive or unattractive.

          Is sunrise beautiful because sun/light means life, warmth and no darkness? Is it also because of our previous experience?

          I am sorry, I don't want to take away the magical moments in our lives, but I see it as a high possibility that we are preprogrammed by nature and evolution to feel and act a certain way, with some randomness in it =)

          Very interesting topic!
        • Mar 30 2011: Budimir, is it possible that people learn and store patterns of music as they listen to what others have created? Could AI do that as well?

          Though I don't know how AI would know which patterns of music are good for human ears and which are not. Perhaps we can find some common patterns that AI could use to ensure that the music is pleasing?
        • thumb
          Mar 31 2011: I think an AI machine could do that but the challenge is in how it could create a completely new pattern of music instead of just storing old patterns.
      • Comment deleted

        • Mar 30 2011: If humans are not able to agree on what art is, whether fine art, performance, writing, whatever (let alone what ‘good’ art is), and humans write the code for AI, how is it possible for a machine to arrive at a valid artistic judgment?

          Historically when the aesthetic envelope is expanded by truly original work, it is at first immature and most often jarring to the culture’s current sensibility. Not only does it take time & effort of the artist to mature the aesthetic they are developing, the audience also requires time & exposure to adjust and appreciate the new vision.

          I’m thinking of Van Gogh who was only appreciated years after his death. I’m thinking of ‘The Rite of Spring’ which sounded like noise when it was first played.

          Art is creative precisely because it is a new and heretofore unique way of looking or doing something. As such it requires some failure before the creator ‘gets it right’. How would one program AI to produce the unknown?

          Art that can be reduced to algorithms would at best be craft. Not that a lot of craft isn’t better than a lot of art, or that a lot of craft creators aren't better than many artists. But if the art can be reduced to formula, then the aesthetic problem has been ‘solved’, and further executions are exercises in technique, not creation.

          If art cannot be programmed, I don’t see how its appreciation could be programmed.
          (Birdia, I took a couple of courses at Parsons too)
        • Mar 30 2011: I think the best way to test whether AI can produce art is to have a project with AI producing "art" and then ask people (without letting them know) whether they consider it art =)
      • Comment deleted

        • Mar 30 2011: I think saying it would be a random word generator would be oversimplifying things.
          The AI could learn patterns by 'studying' art. Most art is appreciated for an underlying sense of harmony (and yes, even chaos (like noise music) has a certain harmony); this sense of harmony answers to certain ratios that appeal to our brain.
          While a lot of these ratios and patterns are unknown to us, apart from unconscious reasoning, an AI with its computing power and vast databases may well be able to discover such patterns. A good example of this for me is the allegation that Harry Potter is actually a rip-off of *insert one of a dozen titles here* (yes, I'm aware HP isn't exactly high art, but the example can easily be applied to other works as well). Artists don't create works out of the blue; they are expert observers, using whatever they find in their surroundings and applying it to a different situation. The AI would have access to any book ever written, in any culture, in any language, its popularity and its intended audience. It would be able to discern the techniques used by various authors and recombine them, much like parts of the DNA of two humans are recombined to create an individual child. It would thus create a new book tailored to a specific person (as it's aware of the person's preferences and emotional responses to previous artworks). Then it would record the responses of the person and learn what had a desirable effect and what didn't.
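          A tiny toy version of 'study the patterns, then recombine them' is a word-level Markov chain (the corpus is invented and a real system would be far richer, but the principle is the same):

          import random

          # Learn which word follows which in a 'studied' corpus, then
          # generate new text by recombining those observed patterns.
          corpus = ("the old king loved the old harp "
                    "and the young king loved song").split()

          follows = {}
          for a, b in zip(corpus, corpus[1:]):
              follows.setdefault(a, []).append(b)  # pattern: a is followed by b

          word = "the"
          out = [word]
          for _ in range(8):
              word = random.choice(follows.get(word, corpus))
              out.append(word)
          print(" ".join(out))  # a new line stitched from observed patterns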

          A bigger problem would be that art is about education. Apart from the possibility, would we want to build a rebellious machine that uses shocking tactics to try and force us to change into something that's more to its liking? Isn't that exactly what all those horror sci-fi movies are about? A machine that observed the human world, saw that it was in error, and set out to fix it ...
          Another problem is that those machines would probably work like psychopaths do, without feelings of their own, able to manipulate people ...
        • thumb
          Mar 30 2011: Haha, so pretty much you are telling me AI will produce plenty of "Wal-Mart" romance novels. I believe those follow a formula.

          Seriously though I would love it if a computer could produce something like that, it would be really cool. But I also have to be realistic and state some of the shortcomings that I am seeing in this discussion.

          One thing is that major artists don't have access to every book ever written, or statistical data on what kinds of books appeal to people and what kinds of books win Nobel Prizes. But a good artist can still produce great works of art. So the mechanism you are describing is technically not evidence that by using these large stores of previous literature the machine will produce better literature. There are many people who are very book-smart, but they don't necessarily produce great art.

          The second problem is that art or literature is not just about aesthetics; academics consider books or paintings that are made solely for aesthetic pleasure highly kitschy. That's because that kind of art lacks a deeper, more profound theme that tells us something about the human condition, or it just recycles previously used themes.

          The third problem is: what exactly is the computer going to extract and use from previous stories? Given that the computer cannot use anything from a previous story without physically extracting components such as words, sentences or letters, how will it recombine any of these non-randomly?
        • Mar 30 2011: Answering your first and third problem: This is why I was talking about patterns as opposed to words or sentences.
          I'm not talking about copy/pasting works; I'm talking about totally analysing a work by mapping every sentence to its meaning in a symbolic abstraction (no more words, but pure (possible) meaning), analysing it for any pattern appearing, linking those patterns to effects achieved by the book, cross-referencing with context and figures of style, asserting the desired effects, selecting observed patterns and combining them, generating the symbolic structure, and translating the symbolics into readable text. I believe this is impossible for any human, but it might be possible for an advanced AI.
          A good artist may not have every work of art, but they do have to learn.
          A human being draws from experience to create art. Apart from the skills that have to be learned (in the visual arts, for example: perspective, lighting, colour symbolics, abstraction), a person learns from other people's storytelling and experiences things in daily life; how many books could you fill with every minute of your life? The vast amount of data would make a richer foundation to build on.

          As to your second problem: I don't consider J.S. Bach kitschy, and still he mainly just recycled previously used themes; there are even people creating algorithms that can predict his pieces based on a part of them. However, I share your concern about the 'goal' of this artificial art (see the previous 'HAL' scenario). I could imagine that if this AI came to conclusions of such high abstraction that they are hard for us to comprehend, it could use a form of 'art' we can relate to to try and communicate its findings, like we sometimes quote 'wisdoms' or use analogies to explain what we mean.
        • thumb
          Mar 31 2011: So what you are saying is very similar to John Searle. You are looking for a semantic computer that can grasp the meaning of concepts. But according to Searle there is a problem with building that kind of machine; it's what he calls the Chinese Room thought experiment.

          http://en.wikipedia.org/wiki/Chinese_room
      • Comment deleted

        • thumb
          Mar 30 2011: I think that's the only way it can happen. I was implying that and waiting to see if anyone would pick up on it.

          Originally the argument was made by John Searle, a philosopher of language and mind with whom I don't share too many views, but I do agree with his views on AI.
      • Comment deleted

        • thumb
          Mar 31 2011: Hello Birdia,

          Thanks, it is a very good argument. Though the argument itself places no real limit on what a machine can do, it inspired me to question certain limitations of the machine when semantics is not involved.

          So the problem with semantics is this: is understanding the direct response of our neural impulses and circuits? Does our behaviour in understanding things have a foundation purely in our physiology, or is it pure qualia, like color or sound for instance?
      • Comment deleted

        • Apr 1 2011: I wonder if the question of "where does the mind reside" is that relevant. My engineering background may be blinding me, because my standard model for understanding the world is basically based on a lot of black boxes (http://en.wikipedia.org/wiki/Black_box), not unlike Searle's room. A similar question might be to ask which ant in a colony has the intelligence to find the shortest path to the food (http://en.wikipedia.org/wiki/Ant_colony_optimization). The answer would seem to be that only the collective has this capability. Can one thus argue that an ant colony is or is not intelligent? One could defend the position of a collective intelligence (http://en.wikipedia.org/wiki/Collective_intelligence).
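          That ant logic can be made concrete in a few lines (a stripped-down pheromone loop over two paths; all constants are invented):

          import random

          # No single ant knows the shorter path, but pheromone feedback
          # makes the colony as a whole converge on it. Toy numbers.
          lengths = {"short": 1.0, "long": 2.0}
          pheromone = {"short": 1.0, "long": 1.0}

          for _ in range(200):  # 200 ants, one after another
              total = sum(pheromone.values())
              path = "short" if random.random() < pheromone["short"] / total else "long"
              pheromone[path] += 1.0 / lengths[path]  # shorter trips deposit more
              for p in pheromone:
                  pheromone[p] *= 0.99                # evaporation forgets slowly

          print(max(pheromone, key=pheromone.get))    # almost always "short"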
          As to whether the mind is plastically stored in the configuration of our nerves, or is some kind of software ... I think it is a combination of both. When a heart is transplanted, receivers sometimes report having memories of the donor, and are sometimes said to have altered personalities. This would suggest part of the mind is physically etched. However, we do not grow a new neuronal network every time we have a short-term thought, suggesting at least part of the mind is also volatile.

          I believe the arts are necessary for human functioning, and that the different forms of art were developed to solve very specific problems in our awakening. If this is the case, these same problems may arise in the development of AI, and may lead to similar solutions, and thus creativity, in AI. Whether we want to call this creativity art will be up to the art critics, I guess ...
        • thumb
          Apr 1 2011: Oh damn, I hope I'm not making you uneasy with my obtuse posts. I'm sorry, I just compose them really quickly.

          Either way, though, you shouldn't feel nervous; it's just a laid-back discussion anyway. OK, so what I mean is that understanding can be presented to you as behaviour: for instance, you are teaching a student and he doesn't get something; you explain the subject in more detail and he gets it. You give him a problem and he solves it. Although you can't get into the student's mind and definitively prove that he understood the subject, you can infer from his behaviour that he very likely understands it. So one way we know someone understands something is by observing behaviour, though that is not a definite confirmation.

          A second way we can analyze understanding is when we experience it ourselves; when we experience it, we can be certain that we understand something. So the question I am posing is: is understanding a mental phenomenon that is not reducible to behaviour, or can we describe understanding as physical behaviour through our nerves, brain and body? Is that whole mental realm needed for a body to exhibit the behaviour of understanding?