TED Conversations

This conversation is closed.

Artificial Intelligence will supersede Human intelligence

Anna Hoffmann said in http://www.ted.com/conversations/1321/what_are_10_things_you_know_to.html
"But Christophe, I am not sure that all human intelligence can be described with algorithms, for example emotional intelligence (as Lee mentions), empathy, the kind of creative genius that creates mind blowing art....
...and if the computers are created by humans, maybe they will be kind of human as well? Or the distinction might become more unclear, what is man and what is machine...."
So that is the subject of this debate.

I myself defend the thesis.


Closing Statement from Christophe Cop

How consciousness can arise from algorithms seems to be the main hurdle...
And emotions, and creativity, and sexuality,...

There are thoughts in different directions trying to see how this might (in principle) be done, and what some problems are.

I would conclude that AI can supersede HI, but we don't yet seem to know how...
How long it will take is difficult to predict, but somewhere between 2 and 100 years would be broad boundaries.

A good many questions have been raised. I guess laymen and researchers alike might find this debate useful.

  • Mar 30 2011: Will an AI ever ask itself whether humans supersede it, and then start worrying about that possibility?

    Will an AI ever say to itself something like: "I am bored today. I don't care what I am programmed to do today. Today I shall enjoy myself by driving these humans around me crazy. Today I shall give them only wrong answers to anything they ask me."

    Will AI ever say to itself: "Somehow, I feel like communicating only with that AI standing 4 desks away from me. I feel like sharing everything stored in my memory with that specific AI and I also feel curious to know everything it keeps inside its memory. I don't know why I suddenly feel like doing this only with that AI so away from me."

    Will an AI ever ask itself: "Why am I doing all this stuff? For what? What do I gain from it except just more data & information? Why not shut down everything and enjoy just pure peace? What do I want to be 5 years from now? Am I supposed to do just these calculations and predictions over the many years to come? And then what?"

    Will AI ever say to itself: "Basically I see around me mostly 2 types of humans. One type looks more tough and rough and bodily stronger, while the other type looks more gentler and bodily softer & weaker. But, too many times the interaction between these 2 type looks much more complicated and unpredicted and sometimes irrational, compared to the interaction between the same types. Sometimes they are so close & friendly to each other while other times they cannot bear each other at all. A completely erratic behavior they reveal when they are together. So I don't get why they need a third different type of beings like us – the AIs – near them. Why they add more complications to their existence instead of solving first their own inter-type complications ??"
  • Apr 8 2011: Dear friends,
    I see you have been engaged in a long discussion about pleasure and pain and the mechanisms behind them. The reality (as I understood it after studying the physiology of pain as part of my education) is very complex, not only in the brain, but in the spine and along the spine, where the peripheral nerves go in to meet the central nervous system. The spinal cord is part of the brain, in a way. Everything in the body, including the brain, is very dependent on every other part of it. That is how life works.
    The models that describe how the brain or the nervous system works are just models, not the complex reality.
    It is popular to talk about the different parts of the brain, and a lot of research has been done, but in reality the parts of the brain mostly work together, and functions are more often than not shared by nearby areas.
    Our bodies are biochemical machines, not electronic mechanical devices. I have still not met anyone who can comprehend ALL the different new data that keep developing in the fields of medicine, biology, life sciences and similar subjects. And I think that comprehension is lacking in this conversation, because WE are humans and the people who want to create AI are humans, so they will create something they can comprehend....
  • Apr 3 2011: Human intelligence, as it is now, will be superseded if we last a few more decades.

    Whether it is superseded by artificial intelligence, or artificially augmented human intelligence is hard to predict. Evolution spent a long time developing the technology of human intelligence. It'll be a race between deciphering those designs, designing new ones on our own, and implementing them in hardware on the one hand, versus augmenting human intelligence on the other.

    The irony will be that squeamishness about augmenting human intelligence is what could cause it to be superseded by machine intelligence.
  • Mar 31 2011: Through study and clinical experience (I am a physical therapist), I know that the intelligence of a human individual is not just located in his or her brain. The brain has a plasticity; it is in constant biological exchange with the rest of the body. The motor and sensory neurons that run up and down a major part of the spine are swimming in the same soup of neurotransmitters as the brain. What we eat, how and when we move, sleep, shit, have sex...and I am just talking basic functions....shapes our thinking, is like a foundation for our intelligence, and has shaped our language and our view of the world. We need a body to have a brain! It is very visible in rehab situations. For example after a massive stroke, when the patient manages to sit up for the first time, with help, and suddenly makes an effort to communicate, even though she cannot talk at this point. And months later, with more rehab, suddenly says something.
    I wish more people appreciated how fantastic our body-mind is!
  • Mar 30 2011: Hey Christophe, your topic hit 100 comments! (Anybody else remember Christophe? He posted this debate.) Do you get a prize? ;)
    • Mar 31 2011: Hurrah!
      [and there was much rejoicing]
      the number of posts is intrinsically rewarding ;-)

      I'm following the entire debate. There are a lot of very good arguments (I like the discussion between Birdia and Ben a lot, for example: good questions and answers there)!

      I'm learning.
      I think that most arguments "against" are indicating the long road ahead (and some desirability and ethical questions coming up as well).
      I haven't yet met the "look, here's a law of nature blocking the possibility of that ever happening."
      • Mar 31 2011: What I wonder, Christophe, is: Who, what individual or group, will supersede human intelligence with AI?
        A few intelligent humans? Would they really see that as priority number one?
        Or power-hungry, evil humans, maybe? Because that kind of people always tries to create slaves. They would want to control the creation of AI. Those who do not like (or are unable) to relate to friends, who just control and exploit others.
        Those who, unfortunately, see themselves as egoistic machines, just using any means to survive and stay in power.
        Out of curiosity you can torture people, too. It does not make it ethical.
        I am very interested in the future of computer science and AI.
        I just hope that human relations and societies will be able to develop more love and compassion before we create AI.
        • Mar 31 2011: [As the thought of the potential power produces an evil gleam in my eyes, and a slight laughter of pure cold enjoyment at the idea of possessing just that passes...]
          "But no, we would use it for doing good"

          ;-)

          I do understand that concern, and I don't know the answer to it...
          But an open society can reduce the possibility of adverse effects...
          => so let's make it open source!

          But morality was not the topic... I did however mention that morality can be programmed too (I would think that Sam Harris' latest book can be inspiring: http://en.wikipedia.org/wiki/The_Moral_Landscape)

          We might end up with very loving AI as well... who knows?
  • Comment deleted

    • Apr 1 2011: Yes, but I think that is a petitio principii...
      And if you do that, you'd need to read a whole lot of books/watch a lot of lectures/learn...

      So let's assume that we do know what intelligence is, even though it is fuzzy...
      Apparently, the discussion works out fine

      p.s.: Can you give a good definition of a chair? When is it a stool, or a couch, or a log you can sit on?
      ... but when we speak about a chair, we can convey the meaning... "take a chair and have a seat" works
      So if you see some 'risk' arising, then you can clarify...
      until then, we assume that we have some common idea about the aforementioned concepts...
      • Comment deleted

        • Apr 2 2011: I think I can agree.

          Maybe a Meta-topic concerning reasoning, assumptions and fallacies might be interesting...
        • Apr 2 2011: Conversation is an interesting art. Language and its meaning is a process. Without conversation there would be no language. Through participating in these conversations we have an opportunity to be co-creators of language and meaning. We can make agreements about definitions, and use all our imagination creating new ones, using words as well as silence to express meaning. For that to happen we need to have the intention to communicate, understand and be understood. And we might still fail. But it is worth the trouble, and I think humans are better at doing it than any AI (;-)......
  • Mar 29 2011: What happens when your computer debates you on this topic?
    • Mar 29 2011: What happens when the computer decides you're not even worth debating?
  • Mar 28 2011: It is impossible for a machine to ever achieve capability parallel to humans'. A machine has no genetic code... it simply follows a profusion of complex calculations and commands. Sure, machines can and will continue to supersede humans in many ways, but machines and humans will always be fundamentally different.
    • Mar 28 2011: Well, I don't know about that. Have you seen this talk yet? http://www.ted.com/talks/paul_root_wolpe_it_s_time_to_question_bio_engineering.html

      It's all about bio engineering and he talks about how we can already tie a brain to a machine and make it do stuff. It seems reasonable to say that our future "computers" will be a hybrid of machine and organic matter (a brain) that we grow in a beaker.
    • Mar 29 2011: Austin, maybe yes or maybe no..... The point is we can't really be sure, can we?
      Isn't it all about information and how the information is handled? Why should a computer, at least in theory, not be able to be equal to a human?
      • Mar 30 2011: Nick, I watched the talk; very interesting.

        Harald, I think you're misunderstanding... I believe that machines can only achieve a certain degree of intelligence. At a certain point, we are no longer creating a machine; we are developing life. For example, reverse engineering a human brain, and possibly even a body, wouldn't be artificial intelligence; it would be life.

        The real question is... Would a synthetically developed human be considered artificial intelligence or an actual life form?
  • Mar 28 2011: AI itself represents its worth, i.e. artificial. Intelligence has nothing to do with things that are artificial. Things need a creator to be created in reality, and for that there are humans behind them. We dream of things first, then strive to get them completed in real existence. On the other hand, unless or until we put in data of some kind to be processed, artificial intelligence would remain just an empty product.

    I hope that clears the dilemma :)
    Cheers
  • Mar 28 2011: Yes! Just ask Watson :)
  • Mar 28 2011: The human mind is powered by many things, not just the logical and procedural: we have dreams, we have feelings, we humans break statistics down! We are anything but predictable. AI is a powerful thing, and it will become better than us at some things, but it will never supersede human intelligence.
    • Mar 29 2011: Hi Pablo, yes we have dreams, feelings, etc., but it all comes down to electrochemical reactions in our brain. As I suggested above, I don't think there is anything that would prohibit a machine from exhibiting similar functions.
      About predictability: maybe it only SEEMS to us that we are not predictable because we don't have a full understanding of how the system works.
      Intelligence means so many things. I'm still not clear what "superseding" actually means in this context.
  • Mar 27 2011: Chris... a bot with an attitude... hmmmmmm.
  • Mar 26 2011: So Anna, concerning the second remark: I think it will get blurry between man and machine. We are becoming more and more cyborg each day...

    But here is why I think any form of intelligence can come from computers:

    * What our brain does is essentially computation...
    For example, how to introduce emotion:
    Emotions can be seen as rewarding or aversive stimuli (see Dennett). The differences give an indication of what kind of response is needed.

    Empathy is "feeling" (through your mirror neurons, for example) what somebody else must be feeling.
    So if you have enough visual, auditory and other clues, you can figure out how you would respond to that... This is like imagining less emotional things, but with the added information that the person might desire some response corresponding to the feeling expressed.
    So the bot needs to learn what patterns correspond with what expressed feelings, and then learn how to respond to them.

    That would be a difficult job, and a lot of computation, but not impossible.
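
    A toy sketch of that learning loop, in Python (the cue names, feelings and responses are all invented for illustration; this claims nothing about how a real system would do it):

    from collections import Counter, defaultdict

    class EmpathyBot:
        def __init__(self):
            self.cue_feeling = defaultdict(Counter)  # cue -> feeling co-occurrence counts
            self.responses = {}                      # feeling -> response that was rewarded

        def observe(self, cues, feeling, rewarded_response):
            # Training: these cues went with this expressed feeling,
            # and this response earned a positive (reward) signal.
            for cue in cues:
                self.cue_feeling[cue][feeling] += 1
            self.responses[feeling] = rewarded_response

        def empathize(self, cues):
            # Guess the most likely feeling behind the observed cues, then respond.
            votes = Counter()
            for cue in cues:
                votes.update(self.cue_feeling[cue])
            if not votes:
                return "ask what is going on"  # no data yet
            feeling, _ = votes.most_common(1)[0]
            return self.responses.get(feeling, "ask what is going on")

    bot = EmpathyBot()
    bot.observe(["tears", "slumped posture"], "sadness", "offer comfort")
    bot.observe(["smile", "raised voice"], "joy", "share the enthusiasm")
    print(bot.empathize(["tears"]))  # -> offer comfort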
    • Mar 27 2011: I have the same thoughts. However, I wonder whether an AI can really feel and have consciousness as people do, even when to external observers it appears so.
  • Apr 8 2011: I'm going to say you are right, and I believe so for two reasons:

    1. It has been proven that artificial intelligence can come to a quicker response to a problem than human intelligence can, and I believe this is because it looks at the facts only and has no emotional baggage to go with them.

    2. There may be a limit to human intelligence, but there is no limit to what a human can create artificially, and at some point artificial intelligence will outrun the need for human programming, in which case it can then program itself indefinitely.

    Let's hope there is a shut-off button.
  • Apr 5 2011: I just finished a book that argues that consciousness is non-algorithmic because it uses properties of quantum mechanics which, whilst deterministic, are non-computable. If that were true, it would seriously impede the creation of strong AI. To be honest, I'm not sure I fully understand his argument, but if anyone wants to give it a shot, it's called "The Emperor's New Mind" by Professor Roger Penrose. It's a great book, but some of the science in it is beyond me.
    • Apr 5 2011: Matthieu - Can you explain what the book describes as non-algorithmic processing?

      The model I understand of the neuron is that it is an element which accepts a large number of analog inputs and combines them in a non-linear fashion to produce an output. Large numbers of these are wired together (perhaps somewhat randomly, but also in clusters) to produce the complete nervous system. The whole exhibits complex filtering and storage characteristics in a way that might be considered an emergent phenomenon (thanks for the reference to emergence, Mark; cool concept).

      This is non-algorithmic, right? Does the book's model differ from this one?
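
      For concreteness, here is that model as toy code (Python; the random weights are purely illustrative):

      import math, random

      def neuron(inputs, weights, bias):
          # weighted sum of analog inputs, squashed by a non-linearity
          s = sum(x * w for x, w in zip(inputs, weights)) + bias
          return 1.0 / (1.0 + math.exp(-s))  # sigmoid

      # a tiny "layer": four neurons wired to the same three analog inputs
      inputs = [0.2, -0.7, 1.5]
      layer = [([random.gauss(0, 1) for _ in inputs], random.gauss(0, 1))
               for _ in range(4)]
      print([neuron(inputs, w, b) for w, b in layer])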
      • Apr 5 2011: No, that's perfectly algorithmic and computable. Granted, the way neurons interact with each other is different to the way computers act (in that they are non-linear and neural circuits are constantly re-arranged), but these are things that parallel computing can easily fix or that a sequential computer can simulate (as a universal Turing machine can, in theory, take another Turing machine as input).

        There are things, however, that are non-computable no matter how powerful a Turing machine/computer is. The most famous example is the Halting problem, where a program has to give as output whether another program terminates or not. If that program doesn't terminate, it never tells you it's never going to terminate, because it runs forever. There's no suitable algorithm for it. The classic contradiction is sketched below.
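
        Sketched in Python, with halts() standing in for the assumed (and, by this argument, impossible) oracle:

        def halts(program, data):
            """Assume, for contradiction, that this always answers correctly."""
            raise NotImplementedError  # no correct implementation can exist

        def paradox(program):
            if halts(program, program):  # if the oracle says "it halts"...
                while True:              # ...loop forever
                    pass
            return                       # ...otherwise halt immediately

        # paradox(paradox) halts exactly when halts() says it doesn't,
        # so no such halts() can be written.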

        In his book, Professor Roger Penrose argues that there's something about consciousness, at the level where quantum mechanics and general relativity meet, that makes it non-computable and therefore non-algorithmic. I got lost at that point. Something to do with quantum superposition. The rest of the book I enjoyed, though, because he touches upon all matters of science, computer science and maths (thoroughly! That book is like 700 pages...).

        The book takes into account that model of the brain but also points out that this only accounts for part of how the brain works.

        I always kind of assumed the whole brain could be computationally simulated and that consciousness was just an emergent property of complexity, and I liked the idea that organisms were simply nature's machines (we both have a code, after all). That book makes me feel like I need to delve more into the subject! So many unanswered questions now!
    • Apr 7 2011: http://c2.com/cgi/wiki?MistakesOfRogerPenrose

      I think Penrose is totally wrong... The basic reason is that you don't need to use quantum effects for any neuronal network to see how it works.
      Look at the animal kingdom and all the experiments done on the sea cucumber (they mapped and know the effect of each neuron) for example...
      Penrose assumes something that is in addition to the current elements needed to give a plausible explanation.

      Furthermore, if something is "quantum", that would mean it is in a probabilistic state. Probability distributions are calculable, and hence algorithmic. AND you can always use things like "expansions", "transformations" and other mathematical approximations for anything that might be "non-algorithmic" (whatever that may mean)...

      One needs to note that a brain, or any computational device receiving data, is an open system, so there need not be an assumption of "endedness" of an algorithm in your brain (although many of the subroutines do end).

      Anyway, the burden of proof lies in the camp of Mr Penrose, not in the camp of the neuroscientists, who, up to now, have apt models to explain the things they observe without using quantum-effects.

      Concerning the halting problem: that is a contradiction or paradox you created. A paradox cannot be solved, neither with computation nor with non-computation.

      The reason you got lost is probably because it makes no sense... and because Penrose is unable to explain it. "If one cannot explain something he claims to know, he doesn't understand it and might be wrong" is an adage I like to use...

      [EDIT:] If this all doesn't make sense, I might be wrong too, of course
      • Apr 7 2011: To be honest, I think the computational part of his book would have evaded me had I not studied Computer Science as an undergrad. He lost me on some pretty well established physics at times, so I'm going to go ahead and be humble and conclude I need to learn more about physics in order to understand his probably quite decent explanation. It'd be nice for someone else who has read his book and understood it fully to step in and give his opinion.

        I do agree that the burden of proof lies with him as he's made a statement that's still pretty hypothetical.

        I quite like your idea of using approximations where needed. Thanks for the link, I'll have a look soon.
  • Apr 4 2011: I think Anna brought out a good point below: that in order to replicate human consciousness it would be necessary to synthesize an entire body, since the mind and body are so intertwined.

    However, suppose we were able to identify the functional characteristics of neurons, fabricate them in great numbers, and interconnect them in structures similar to a human nervous system. Imagine, furthermore, attaching some set of sensors (video, audio, etc.) to provide sensory input from the environment. Could such an artifice be said to have consciousness? How would it differ from human consciousness? What would be required to make it more closely resemble human consciousness?
    • Apr 4 2011: Our mind and body are indeed intertwined; or, as I would put it, our mind is a part of our body. (I don't like the dualism, as it sometimes seems to suggest that there is such a thing as an independent mind without a body... there are independent bodies without a mind; the ones we call dead, for example.)

      This does not imply one needs a human body.
      Any kind of body that can do the necessary computations would do. Saying only a human body could do this is making a very anthropocentric fallacy!

      Of course you need to have a lot of sensors on your bot, as those are ways to obtain (new) data... which is essential for the learning process of a bot. I would add a lot more sensors than we humans have (infrared, ultrasound, UV, more chemical sensitivity, finer thermal, gravity, accelerometers,... pressure, torque, luminance, magnetism, electricity,...)

      Concerning consciousness: yes, it could (I don't see why not...). And it would be there when the bot is processing information (not when it's shut off). I assume it would need some prerequisites that are as yet undefined in this discussion (any suggestions are possible).

      To resemble human consciousness, it would probably need a lot of visual-based thinking, a large linguistic component, small olfactory and tactile ones,... &c &c.
      • Apr 5 2011: In terms of prerequisites - how could analogs of pleasure and pain be integrated? Aren't these perhaps primary motivators in the learning process?
    • Apr 5 2011: Well Tim, let's take your idea a little farther. Instead of just copying the circuitry of the brain, why not replicate the entire process of development? The process that occurs from an infant, whose cranium then matures and develops into that of an adult. Note that there is no direct human programming in this process of development; there are just physical processes that integrate all the sensory information and produce a mature and intelligent human. Why not do something like that with a machine? Why not let the sensors do the programming, as opposed to people? Why not let them determine the development of both the hardware and the software? This would be an entirely different approach to AI, one which completely mimics the process of human development, and thus the development of intelligent behavior. The latest research shows that the brain also changes this way well into adulthood, so the process of development never stops, though it does drastically slow down. So I think this demonstrates that brains have a quality which programs and hardware don't, at least not yet. This may account for the difference in behavior between rigid programs and hardware and the non-rigid ones found in humans.
      • Apr 5 2011: Budimir: I believe we're basically thinking along the same lines. My question is: besides the basic information collection and processing apparatus, what needs to be done to get close to human consciousness? That is, what scenarios need to be created? Is something additional needed to motivate learning (pain, pleasure, etc.), or is the motivation inherent in the structure?
        • Apr 7 2011: Human consciousness is such a strange thing. In any feat of engineering there has to be a mechanism by which a given machine goes from input to output. If we try to engineer consciousness, let's say pain, we need to have some kind of mechanism by which some kind of input, extreme pressure or heat, could produce the sensation of pain.

          Pressure --> nerve impulses --> pain. We can describe how pressure affects nerve impulses really well, and could probably produce a similar pattern in a machine. The hard part is producing the sensation of pain. How does something concretely material, like a nerve impulse, produce something that seems almost like an illusion, with no substance?
      • Apr 7 2011: Yes, it's an interesting topic - from a neurological point of view, what constitutes physical pleasure and pain?
        • Apr 7 2011: If I'm not mistaken (and recent research might correct me)...

          You can see a stimulus that hurts you as something you need to avert. So there is a strong signal that says "stop getting this stimulus"... We experience this as a negative emotion, and to be a bit simplistic, I'd say that's in the amygdala (there are other areas involved in emotions, so this is a "probably").
          Initially, it was very concrete (heat, pressure, cuts,...); but other stimuli create aversive emotions (fear, disgust, pain), and these can be triggered by imaginary, abstract or very complex stimuli...

          The same goes for good feelings (sedation, safety, comfort, sweetness, warmth, a fertile mate), so more and more complex thought-areas (association areas) also have contact with the amygdala.

          In Birdia's example: rewire the "rose" and "book" representations to the pain-generating center, instead of to the pleasure center.

          It should work approximately like that, but I might be wrong on the exact location, although emotions do come from the "primitive" (evolutionarily older) parts of the brain.
        • Apr 7 2011: Wow! Very good question, Birdia. My example was simple for clarity's sake, but the brain is actually an interconnected organ whose parts constantly communicate.

          Christophe explained it well. I would just add that since different regions communicate, they also very likely penetrate each other. Pain is not just a single unit of qualia the brain experiences; it overlaps with many others.
        • Apr 8 2011: Budimir:
          I did make it seem as if there was some fixed place in the brain where pain is "made"...

          Neurons carrying "bad" information come to places where they make synapses with neurons spreading the "pain" sensation to whatever regions need such information...
          Whether this happens in one region (I suggested the amygdala, which might be seen as too big to be a region), or more (brain stem, thalamus,...), I don't know.
          http://scholar.google.be/scholar?q=brain+regions+human+emotion+pain&hl=en&as_sdt=0&as_vis=1&oi=scholart might help
        • Apr 8 2011: Yes, that's why I added that part myself. I may be wrong, but in my opinion what we call pain is probably experienced along with many other impressions, like aversion, anxiety and so on.

          Pain is used to describe many different experiences.
        • Apr 8 2011: Very interesting thoughts Birdia,

          How would "heartbroken" be learned or programmed....?

          Hmm...
          That would imply a bot needs to feel a reason to be close to someone... So that would go to the account of sexual selection (from a Darwinian point of view).

          So if we don't pre-wire the desire to be (communicating) with another agent, and if losing that contact would be considered a painful loss (because of previously acquired positive responses)... heartbrokenness would be difficult to conceive...

          But as human emotions cannot be reasoned out without gender differences and procreation, we might find out we need to simulate or create that in AI.
      • Apr 8 2011: Birdia's observation is an interesting one, but I think it distracts us from the initial question.

        She was describing learning to associate a new stimulus with a painful one in a Pavlovian sense. For example, if we shine a bright light and shock a person simultaneously several times, then before long we can just shine the light and the person will feel pain. (A toy model of that conditioning is sketched below.)

        But what is the fundamental nature of the initial pain? If we consider pleasure a form of positive (reinforcing) feedback and pain a negative feedback, what are the physical characteristics of each? Is an electric shock detected by a pain nerve? Or are all nerves pain/pleasure generic, simply giving out different signals based on the stimulus? What mechanism exists within the brain (or the extended brain, if we consider the entire nervous system the brain) that makes a given signal type positively vs negatively reinforcing?
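
        The toy model mentioned above, in Python (a Rescorla-Wagner-style update; the learning rate and trial count are arbitrary):

        def condition(trials=8, rate=0.4, shock_strength=1.0):
            v_light = 0.0  # associative strength: light -> pain response
            for t in range(trials):
                # light and shock are paired; the prediction error drives learning
                error = shock_strength - v_light
                v_light += rate * error
                print(f"trial {t + 1}: light predicts pain at {v_light:.2f}")
            return v_light

        condition()  # after a few pairings, the light alone evokes the response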
        • Apr 8 2011: I don't know, Tim.
          I suggest you pick up some books about neurology.
          There are a lot of answers there... they have a lot of information...

          Electric shocks are detected by the damage they do, and by the activation of the muscles, giving the muscle-tension sensors a jolt... the signal is interpreted as pain.

          I think there is no such thing as a "fundamental nature of initial pain"... It is something that evolved and is fuzzy across species. There are multiple forms of pain, as each pain needs to convey different kinds of information.

          Positive vs negative: evolution could do the trick.
        • Apr 8 2011: I guess we could say that any intense mechanical damage to the body would be extremely painful, even for a masochist. I don't think anyone can endure intense physical torture. This doesn't have to be the case in all animals, but it is for humans, who have evolved a physiology that can be disrupted by mechanical damage.
        • Apr 8 2011: Could be, and it sometimes can occur without an actual physical stimulus, as in phantom limbs; for these reasons I think pain can certainly be separated from mechanical damage. But it tends to occur with mechanical damage most of the time.
        • Apr 9 2011: I think sometimes what actually causes pain can mislead us. The brain is such a complex thing that it can make you believe your missing arm is hurting you. So the brain and consciousness remain a deep puzzle.
    • Apr 5 2011: Tim... good idea... If we could control and design as you propose, that would solve the problem of mental illnesses. We would not have any neurotic AI, because of course we would program only love. No free will.
  • Apr 4 2011: In a lot of the discussions there has been much theoretical argument about human vs computer intelligence.

    I want to discuss a more practical example now. What about Watson, the game show computer? If we consider trivia knowledge a kind of intelligence, could we also say that, at least in one instance, artificial intelligence has superseded human intelligence? After all, Watson beat some of the best game show competitors.
    • Apr 4 2011: Was it really intelligence? Or a sophisticated search algorithm coupled with speech and linguistics software? Was it able to carry on a reasonable conversation and make hypothetical analogies? I only saw the first show.
      • Apr 4 2011: It definitely cannot generate a hypothesis, and in my opinion it may be that it never will be able to do that. I outlined previously that intelligently creative behaviour cannot exist without semantics, and I am sticking to that opinion. I don't think a computer will measure up to an Einstein or a Tesla.

        Watson was able to outperform many humans on the game show. But like you said, is it really intelligent? Many would say, well, if its behavior demonstrated intelligence then it must be considered intelligent, but it's interesting how we separate those definitions in the natural world. We wouldn't consider our immune system intelligent, but it gets rid of disease much better than anything we can make. Same with cancer: it evolves so fast that it has mechanisms to evade our best cures. Is it intelligent?
        • Apr 4 2011: Yes. Let me state it this way. Watson = speech & linguistics software + search algorithm + probability calculator. That's it.

          Not that it isn't a real AI accomplishment, absolutely, but it's not intelligent by my understanding. Also consider that it took a bunch of Ph.D.s, a ton of technology, a temperature-controlled room, and more than a couple of kilowatts to do the same thing the other contestants did with three pounds of neurological tissue & breakfast.

          Additionally, the other guys could order dinner from a menu, pack their suitcases, pick up a present for 'Stacy' at the airport gift shop, and tell stories about Alex Trebek WITH THE SAME three-pound brain. No extra programming required.
      • Apr 5 2011: Yeah, it's fascinating what humans can do. I suggested something to Tim in the post below this one; I think you may find it interesting. I went beyond comparing the brain as a circuit and also compared it as a substance. So I am wondering if there could be something in that.
    • Apr 4 2011: I think that Watson is very intelligent at solving the game he is intended to play.
      Watson "understands" the answers better than most humans (so superseding, or on par).
      Watson looks for the most probable question, and thinks about alternatives...

      On other tasks he performs very poorly (I guess)... so maybe you can see Watson as an idiot savant...

      So while passing the Turing test in-game, he will be recognized out-of-game.
      • Apr 5 2011: But by "understanding", are you referring to the subjective experience of understanding, or simply to the iteration Watson presents to the audience?
      • Apr 7 2011: But would you say our immune system understands what it is doing? It is very efficient at protecting us. I remember hearing about how our immune system works and marveling at the fact that it is not consciously performing all these sophisticated functions.
        • Apr 7 2011: Now you imply that understanding means knowing...

          Knowing implies self-consciousness (I feel X to be true, and I have an image of "I").
          An immune system has - as far as I know - no self-consciousness.

          I don't think Watson has self-consciousness, so he would not know.
  • Apr 3 2011: "I myself defend the thesis"
    Which thesis is that?

    Edit: I see, number 7...

    Totally with you on that one... quite soon, even!
  • Mar 31 2011: Yes, but our weird wiring still gives us the edge.
  • Mar 31 2011: This MAY annoy a lot of people, or not. But I figured making 5 posts in succession would be ridiculously stupid.

    So, because my response to this issue is considerably longer, I suggest you please read my response at the provided link.

    If it's too much of a hassle for people to read on the web, I will repost it here.

    Thanks in advance for your time to read... it's long :P

    http://berserkerlion.com/tedsponse.htm
    • Comment deleted

      • Mar 31 2011: Karl's AI is the art itself. The art created by the AI is actually Karl's tool to create said artwork. A great quote from your article shows this:

        "His paper "Artificial Evolution for Computer Graphics" described the application of genetic algorithms to generate abstract 2D images from complex mathematical formulae, evolved under the guidance of a human." I'd like to point out the last four words.

        Evolutionary computation and animal communication are awesome examples of intelligence without language, but not examples of human creativity. I suppose this is my fault for assuming my use of the term intelligence would encompass creativity along with it. Creativity is the basis of my argument that machines currently cannot, and in the near future are unlikely to, surpass humans (or whatever word you want to use to show machines > humans, aka the entire subject of this 'conversation').

        Cycl, if there were ever a machine language to come close to human language, is probably it. It may even, eventually, in some radically different form, be what does it. However, Cycl is a classic example of the limitation of widespread reification that learning machine languages suffer from and no human does, as is nicely illustrated both in its own article and in what I had written. Cycl is also a communication form... a really, really smart one. But it's not much different from bee dancing. That is, it's communication without creativity. It's not that difficult to get a machine to learn. It is currently, however, impossible to get Cycl to create something like "Green dreams sleep furiously", not to mention comprehend what it could mean.

        Case in point: machines are still not at the level of humans, and won't be any time soon, without radical changes. I still have hope for the machine, however; someday we'll get there... they will too.

        But don't confuse people here: just because the bee can conjugate doesn't mean the bee is anything human. Neither is Cycl.
    • Mar 31 2011: Scrolling through your text, I pick one thing:

      "The number 1 exists, the number 0 exists. Logic gates. Open the gate, close the gate.
      Nothing else to the machine exists."
      and
      "Our current way of making machine language, definitely -won't- give rise to the machines."

      Combined, these give me the impression that you have overlooked probability theory and self-learning algorithms...
      Furthermore, from a cybernetic point of view there is no evidence for languages being incommensurable, thus allowing us to create grammar in logical languages...

      I know these are many difficult words, which makes it vague... so let me try this again:
      - Learning is taking new information and adding it to your old state of knowledge. This gives you an updated version of knowledge... This is described mathematically by E.T. Jaynes, meaning you can program it (a toy version is sketched below).
      - Even with difficult-to-translate meanings and words, you can always approximate the meaning of a word by writing a paragraph about it. You have translated it, but in a very time-consuming manner...
      This means that on a Turing machine you can, in principle, simulate a human mind and all its aspects.
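
      The toy version mentioned above (Bayes' rule: the posterior is proportional to prior times likelihood; the coin example is invented just to make the update concrete):

      def update(prior, likelihood):
          # prior: {hypothesis: probability}; likelihood: {hypothesis: P(data | hypothesis)}
          posterior = {h: prior[h] * likelihood[h] for h in prior}
          total = sum(posterior.values())
          return {h: p / total for h, p in posterior.items()}

      # two hypotheses about a coin: fair, or biased towards heads
      belief = {"fair": 0.5, "biased": 0.5}
      for flip in ["H", "H", "T", "H"]:
          belief = update(belief, {"fair": 0.5,
                                   "biased": 0.8 if flip == "H" else 0.2})
      print(belief)  # old knowledge + new information = updated knowledge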
      • Mar 31 2011: Thank you for showing that you didn't actually understand, or even fully read, what I had written.

        Basically, everything I said comes down to a single point: until a machine can create a sentence/idea such as 'Green ideas sleep furiously' or 'Godly farts dance in rivers of my purplish blues' on its own, without any kind of prior art, be able to understand the fact that it could do it without having to reference something it learned before, and further discuss said new idea with me and its possible meanings, or where it even came up with the idea on its own, no machine will ever be at the same level as a human's ability. And I cannot possibly agree that simulation of the human mind and "all its aspects" is possible without radically new machine design/programming.

        Those very sentences will mean something different to absolutely everyone who reads them, and there are no right or wrong answers. Every human on earth has this innate ability. It's the only real example of creativity. And it is directly related to our real language.

        I think machines will get there someday. But for now, no matter how much a machine learns, it's just another smart machine; it's nowhere near human, yet.
        • Mar 31 2011: Putting your ad hominem aside,

          * I don't think you can give one valid example of any idea "ex nihilo", let alone a piece of art.
          (I don't think - analyzing your example - that putting a random adjective, noun, verb and adverb together is very artful or original.)

          As you say, they will get there some day... agreed.
  • Mar 31 2011: What do you consider "supersede" to mean?

    In terms of computational power, computers already supersede humans. If at the beginning of the 20th century some computations took 5 years to be done by a human, now they probably take less than 10 minutes when done on a computer.

    In terms of creativity, I agree with Anna. I think there will be some things that computers won't be able to do. (...and right now I think we are very far from even making AI reach HI.)
    I think people want to use AI in the wrong ways & sometimes expect the wrong things from it. Although planes were inspired by birds, they don't have the same purpose, and no one tries to copy a bird entirely when making a plane. I consider it the same with AI. It is inspired by HI, but it shouldn't try to copy it. We should use it to make intelligent and fast algorithms.
  • Mar 30 2011: We will have artificial intelligence, and when we do there will be nothing "artificial" about it. How do I know? Reverse engineering. We will reverse engineer our own brains eventually; it is not a question of if, but when.
  • Mar 30 2011: I am more interested in us humans becoming less programmable and mechanical than I am in efforts to create mechanical devices that mimic us.
    And just so you don't misunderstand: I love my iPhone 4, my computer, our smart car, Skype and all great new technical devices. I would not mind owning a vacuum-cleaning robot. But I would not call them intelligent, just smart.
  • Mar 30 2011: And what about the idea of meaning ("KAZAM"), the sense of wonder, the mind-blowing experience?
    The intelligent historical individuals I admire seem to have that - the "Ah, This!!!" - along with their bright minds. How do you build that into a machine? And why should you?
  • Mar 30 2011: Once the Earth was flat & in the center of the universe, humans couldn’t fly, the Moon was made out of cheese, and people on opposite sides of the planet couldn't communicate with each other easily.

    Although not considered remotely possible or probable at one time, these beliefs and many others have been proven wrong. So taking history into account, I suspect my opinion that AI could not supersede HI in creative endeavors will also be proven wrong eventually, despite MY not being able to conceive how it might.

    Christophe provided a scenario an hour ago for how AI might develop artistic appreciation. I'm thinking that AI would have to develop artistic appreciation BEFORE it could create art. As original creations are usually beyond an audience's ability to appreciate at first, would humans even recognize it as art? Would humans take it on faith that something is art if AI produced it and said it was? Could there be a theory of art that could only be appreciated by AI?
    • Comment deleted

      • Mar 30 2011: No, he hadn't convinced me, but his scenario did make me look at the historical evidence of what was impossible becoming possible.

        That’s not to say I understand how AI might also make that jump. However there are lots of things that exist that I don’t understand, so my understanding is not a prerequisite for it being possible. Heck, I don’t understand how ‘1000101101001101010’ translates into me writing to you on the other side of the world, yet here it is.

        While I don’t believe AI could be creative, I’m not ready to proclaim AI would never be creative. Like you I am agnostic on the issue.
      • Mar 30 2011: As there are currently many examples of man being destroyed by 'his' creations (Chernobyl, the Gulf oil spill, financial derivatives, etc.), I would not be surprised if it could/would occur with AI.

        A movie called [Colossus: The Forbin Project] uses Ben's proposition as its plot.

        As long as we retain the power to pull the plug, I think we'll be okay.
        • Mar 31 2011: @ Birdia

          I am just saying, in a very hypothetical sense, that the same kind of randomness that produced consciousness in us could possibly do the same in computers. I can't say how likely it is that it will occur.

          Calling them artificial depends on how you define natural. If artificial means man-made, then a randomly evolving computer would naturally develop consciousness, since humans didn't directly give it any consciousness.
      • Mar 30 2011: I got the idea from evolution, because that's how living cells became "functional."

        Funny thing is, there are also evolutionary algorithms in computer science created exactly for that purpose. They eventually make a program more functional, but the evolution of the program is random. (A bare-bones example is sketched below.)

        Now let programs like that run for a few billion years; what prevents them from becoming conscious just like we did? We are technically the product of the same kind of randomness.
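
        The bare-bones example mentioned above, in Python (the fitness function is a made-up stand-in for whatever "more functional" means):

        import random

        def evolve(pop_size=20, length=16, generations=40):
            fitness = sum  # toy objective: count the ones in a bit-string
            pop = [[random.randint(0, 1) for _ in range(length)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                survivors = pop[:pop_size // 2]           # selection
                children = []
                for parent in survivors:
                    child = parent[:]                     # copy
                    child[random.randrange(length)] ^= 1  # random mutation
                    children.append(child)
                pop = survivors + children
            return max(pop, key=fitness)

        print(evolve())  # random variation + selection drifts toward all ones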
        • Mar 31 2011: @ Ben - I think I agree with you. > It might develop its own form of art to try and convey concepts to us that we cannot easily grasp. <

          The debate seems to return to whether AI can have an authentic emotional response, and not just mimic a human emotional response, at least as far as superseding HI in creating art, or should I say culture, goes. I think that if AI were able to achieve an AUTHENTIC emotional response, the emotions AI would have would be quite different from what people experience.

          With no drive for reproduction or need to care for children, how would it value something like the first warm day in spring, when it is possible for children to play outside in the sun without a coat again? Or watch a flight of geese fly south for the winter and realize that reality is better than fantasy?

          Would AI appreciate differences in electrical current? Would AI develop a philosophy if it should recognize that it is asked to solve certain kinds of problems in recurring patterns? If AI achieved an emotional response, would we even recognize it?
      • Mar 30 2011: > True. but if AI "will supersede Human intelligence", how do you propose humans can "retain the power to pull the plug"? <

        Don’t give the machine the ability to connect to its power source on its own. Don’t give it the ability to ‘blackmail’ humans with dire consequences. Keep the human element in the loop.

        Or are you implying AI would be able to con humans into subverting these precautions?

        Just because AI may become smarter than humans, doesn’t mean humans have to become stupid (although I’m not so sure that isn’t happening now anyway even without advanced AI).
        • Ben B, Mar 30 2011: I wasn't referring to some kind of PC on a killing spree ... more to the fact that this AI, if it wanted to create art, would have to have a very deep understanding of the human psyche, of demagogy and of the creation of 'subliminal messages' that affect who we are and what we think.
          What led me to this was the question of what the purpose of art is. Many artworks are an outcry against injustice, a call for action to do 'the right thing'. If an AI wants to change the world like these artists do, to make it a better place, and can do so in a way 'superior' to humans, would we promote that?
        • Ben B, Mar 30 2011: I agree that it's hard to imagine a computer having an intense desire for self-expression. But, as I wrote somewhere else in this discussion (I'm slowly losing track of things here), suppose we were to hardwire an AI to be conditionable, to be motivated by positive stimuli, and to program it to seek these stimuli, and we then teach it that we give it these stimuli when it provides us with new insights. It might develop its own form of art to try and convey concepts to us that we cannot easily grasp. And convey its frustration when it does not receive stimuli. If one of those stimuli were attention, it might try to find ways to capture this attention.
          When it comes to dance ... the sensation of being able to move and having functioning organs is somehow in itself rewarding; perhaps programming an AI in a machine with moving parts to be self-preserving will lead to it randomly testing these movements, being 'happy' that it can move, thereby creating a mechanical ballet?
          Will those movements be beautiful?
          I think the answer to that question is the answer to the question "What makes those movements beautiful?" What is grace? I once saw a documentary where the movement of a tiger was described as graceful because "it was the most efficient way to move, expending no more energy or effort than absolutely necessary." Can a computer analyse these movements and compute a more efficient way to move? A computer can calculate balance, it can calculate the expense of energy, it can calculate countermovements to minimize impact on landing... I don't know if it will happen, but I don't think it unthinkable ...
        • Ben B, Mar 31 2011: It is my hope that AIs one day will be able to break communication barriers between people, much like the 'universal translator' in Star Trek (a device that can interpret a language without prior knowledge of it, based on analogies in other languages). And go further than that, interpreting behaviour against a cultural background. Removing some misunderstandings might go a long way towards world peace, even though for some people cultivating misunderstandings seems of greater benefit ...

          Also, research might become easier ... ever had something you knew, and more or less knew where you had seen it before, but not exactly? I think search engines may become much more advanced, so one can more easily search based on context, and create a digest sifting out useful from useless data. For the rest, I think it will mainly be an interesting exercise, and I hope people don't become more lazy and forgetful (as with the calculator, that other significant technological enabler that taught us how to fail at simple arithmetic) ...

          I suspect those who could benefit most from its logical capabilities are least likely to make use of them, though. And I think AI might come to some conclusions shocking our core of 'scientific truths', and thus quickly be discarded as erroneous, gathering dust for 500 more years until someone more credible 'reinvents' it. Most problems we face today are more attitude-related than technology-related.

          Other life forms ... if intelligent ... I admit the thought of the xenophobic human nature dealing with such an event scares me ... maybe the fact that they haven't contacted us yet proves their intelligence ^.~

          I'm sorry, but I am by nature very skeptical and pessimistic ... but I did say world peace! ^.^
  • Mar 30 2011: What happens when your computer wins you over on this debate?
  • Mar 30 2011: I think it is inevitable that artificial intelligence will surpass our own, if for no other reason than the fact that any artificial creation with that level of intelligence would be essentially self-improving - that is to say, it could produce or invent and then fabricate the equivalent of more processing power or memory and upgrade itself to use it. The human brain, though incredibly complex and powerful, still has an upper limit - we don't know exactly what that upper limit is, or even how to know for sure that we've reached it, but I do believe it exists. At the same time, it has to be said that before AI achieves a level of intelligence equal to our own, we may very well have begun to upgrade ourselves, or even transfer our consciousness into some kind of artificial structure that would allow us to grow beyond the physical limitations of our brain. At that point, would it even make sense to differentiate between what we consider artificial intelligence and that of humans? I think we'll have to wait and see how everything develops.
  • Mar 29 2011: Let us assume that AI supersedes HI: what is good about it? It means we have more capacity to solve our problems - the problems we cannot cope with. What is the problem? We fear that a more intelligent life form might be more powerful - the moral hazards we expect drive us to debate this question at all. Rightly so, given our experience with the unstable relation between intelligence and morality. Man was never really good at doing the right thing, even when he knew what was morally right.
    But there is hope: an AI will not have our experiences - it might have a chance to be more intelligent and more ethical. Reviewing this thought experiment, I wonder if we should not demand to have an AI.
  • Mar 29 2011: Oh my God, I am cited on TED, in the very question!!! Someone wants my attention.
    This boosts my ego. And challenges me to respond as intelligently as I can.

    This, my own reaction, is the kind of reaction that is difficult for me to imagine from a machine.

    The complex net of emotion/affect, mental thoughts, memories, expectations and URGE TO BE UNDERSTOOD that makes up a human being.

    Can a machine long to be understood? Are "Pinocchio" and "AI" (the movie) just stories, or symbolic mythological tales made to point at how we humans can either reduce ourselves to mechanical machines, or stay true to our inner longing and keep moving beyond our own mind and concepts, forever? Can a machine keep moving like the human mind? Then we might be close to discovering the Perpetuum Mobile.
    • Mar 29 2011: Anna - I agree it is difficult to imagine how a machine could have these feelings. But isn't it also difficult to imagine how a human has them? Does that mean we can discount the possibility of a human construction having consciousness?
    • Mar 30 2011: Well Anna, thank you for wanting me (sparking me) to open this (apparently attention-drawing) debate...

      I think Tim poses the right questions.
      If we - humans - truly understand what emotions and longing and such are, we might (following the Church-Turing thesis http://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis) be able to simulate them and succeed in creating this level of AI.

      Concerning the perpetuum mobile: we humans need food, machines need electricity... so that would be a fantasy (hard to beat the laws of thermodynamics).
      • Ben B, Mar 30 2011: If we may believe the psychoanalysts, psychopaths, like machines, are incapable of genuine empathy, and yet they convincingly emulate it by studying those around them. That would suggest a machine would be able to do so too. But then, would we want our machines to be psychopaths?

        I do think the question of what would drive this AI to do anything is the core question here.
        Would we allow a machine to develop its own ethics, to define what is good and evil?
        If we did not preprogram it to think that "what is good is what is most desirable for its creators", I don't think it is inconceivable that it would come to a nihilistic conclusion, decide there is no true good or evil, seek out some weak minds and manipulate them (a machine that uses only logic can't be wrong, right?), and start building a new world order adhering to this new-found ethic. Although, more likely, it will just complain that there is insufficient data to conclude anything and stop working altogether ...
        • Mar 30 2011: Most people who lack the capacity for empathy tend to live lives that aren't violent.
          If, however, they have had traumatic experiences and a bad environment, they might become psychopaths...

          Concerning ethics: if you see good as pro-social behavior (be it toward human, plant, animal or robot) - enhancing pleasure, reducing harm and effort - and bad as anti-social behavior - decreasing pleasure, increasing harm and effort - and analyse behavior as having both elements, one can make a refined (almost utilitarian) decision...
          Maybe we can add Asimov's rules to it...

          But ethics is not the debate here...
          (I guess increased intelligence implies a better understanding of ethics too...)
      • Mar 30 2011: Mind does not belong to one individual; it is shared, as is human intelligence. A human is nothing without connections, inside (nerves, blood vessels and so on and so forth) and out (relating to the environment, relationships and social structures). The immensely complex way humans connect, on so many different levels (from microbiology to cosmology), cannot be replicated by machines. The way we take care of our own needs, cooperate and multiply cannot be replicated by machines. Even though biology, life sciences and all those new fields of research are exploding, we are still far from replicating ourselves. And why should we replicate our minds? Why not keep on making machines that do the things we can't? Like going deep into the ocean or far away into space. Like counting and processing data fast.
        But let us stay in charge and practice our minds, so we don't end up slaves of our own creations, the machines.
        That's my understanding. I am interested in learning more, and that is why I connect with you here.
        • Ben B, Mar 30 2011: I realize it may seem like I was demonising AI; this was not my intention. I do believe AI technology can and will have a positive impact on our future. However, my goal was to explain the difficulty I have with attributing a personality to this AI, and the question of what would motivate this AI to do something (if it has the choice not to do it).

          As Anna states, intelligence is nothing without a context, yet the fact that the AI would be dependent on humans for its learning and decision-making could be an argument to say that AI will always be inferior to human intelligence. And thus the logical conclusion for me would be to wonder how much freedom we can grant this AI.

          We can either hardwire its choices (as in Asimov's 4 laws (http://www.rogerclarke.com/SOS/Asimov.html)), or we can condition it.
          To condition it we can create positive and negative stimuli that affect it. We can motivate it to seek positive stimuli and shun negative stimuli.
          As such it will see as 'good' anything that brings more positive stimuli and as 'bad' anything that brings more negative stimuli. Being a super AI, it will evaluate this with long-term and short-term effects in mind and act accordingly (a toy version of this trade-off is sketched below). This way ethics would be the basis of any decision it makes, and that is why I think ethics are most important to this discussion.
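
          The toy version of that trade-off (a degenerate one-state Q-learning update; the world, actions and numbers are all invented):

          import random

          # "exploit" pays +2 now but costs -5 afterwards; "cooperate" pays +1 safely
          REWARD = {"cooperate": 1.0, "exploit": 2.0}
          AFTERMATH = {"cooperate": 0.0, "exploit": -5.0}

          def learn(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1):
              q = {"cooperate": 0.0, "exploit": 0.0}
              for _ in range(episodes):
                  a = (random.choice(list(q)) if random.random() < epsilon
                       else max(q, key=q.get))
                  # gamma weighs the long-term aftermath against the immediate reward
                  target = REWARD[a] + gamma * AFTERMATH[a]
                  q[a] += alpha * (target - q[a])
              return q

          print(learn())  # "cooperate" wins once the future is weighed in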
        • Apr 1 2011: Anna, you said "Why not keep on making machines that do the things we can't?"

          What if an artificial form of intelligence could figure out a way to resolve conflicts without war? Isn't that something we don't seem too good at doing?