TED Conversations

Howard Yee

Software Engineer @ Rubenstein Technology Group



Can technology replace human intelligence?

This week in my Bioelectricity class we learned about extracellular fields. One facet of the study of extracellular fields I find interesting is the determination of the field from a known source (a.k.a. the forward problem) versus the determination of the source from a known field (a.k.a. the inverse problem). Whereas the forward problem is straightforward and solutions can be obtained by direct calculation, the inverse problem is ill-posed: its solutions are not unique, so they require interpretation, which may be subjective. We may also automate that interpretation with a mechanism, and such a mechanism is a form of AI. However, this facet of AI (document classification) is only the surface of the field.
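The non-uniqueness of the inverse problem can be seen in a toy example (my own illustration, not from the class): compute the potential at a single sensor from point sources. The forward calculation is direct, but two different source configurations can produce the identical measurement, so the field alone cannot tell them apart.

```python
def potential(sources, x):
    """Forward problem: potential at point x from (charge, position) sources.

    Uses a simple 1/r falloff; units and constants are omitted for clarity.
    """
    return sum(q / abs(x - pos) for q, pos in sources)

sensor = 0.0
config_a = [(1.0, 1.0)]   # a unit charge at distance 1
config_b = [(2.0, 2.0)]   # a doubled charge at distance 2

# Both configurations yield exactly the same reading at the sensor,
# so the inverse problem (recover the source from this reading) has
# no unique answer.
print(potential(config_a, sensor))  # 1.0
print(potential(config_b, sensor))  # 1.0
```

With more sensors the solution space shrinks, but for realistic source models it generally remains underdetermined, which is why interpretation enters.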

Damon Horowitz recently gave a presentation at TEDxSoMa called “Why machines need people”. In it, he argues that AI can never approach the intelligence of humans. He gives examples of AI systems, such as classification and summarization, and explains that those systems are simply “pattern matching” with no intelligence behind them. If that is true, perhaps subjective human interpretation of inverse problems is preferable to blind classification. Through experience, human interpreters may have more insight than one can impart to an algorithm.

However, what Damon failed to mention is that most of those AI systems built to do small tasks are examples of what is known as weak AI. There is a whole other field of study, strong AI, whose methods of creating intelligence go far beyond “pattern matching”. Proponents of strong AI believe that human intelligence can be replicated. Of course, we are a long way off from seeing human-level AI. What makes human intelligence hard to replicate? Can it be simulated? If we created a model of the human brain, would it be able to think?

Related Videos (not on TED):
“Why machines need people”



  • Mar 7 2012: I think technology can augment human intelligence, but never replace it. There are a lot of useful things that computers can and already do help us with (maths and information retention are the big ones -- imagine having Wikipedia instantly searchable and streaming directly into your brain), but the most important part of solving any problem is asking the right question in the first place, and only human intuition and experience can really get you there.
    • Mar 7 2012: There's no reason why we couldn't, in principle, work out how to build a machine capable of 'human intuition and experience'. If push comes to shove, we could do it by simulating an entire human brain. You're mistaking 'not knowing how intuition works' for 'never being able to find out how intuition works'. And even if we can't work it out, past human experience shows that fantastic discoveries and technologies will come out of the attempt.
      • Mar 7 2012: How about, for example, spontaneous emotion-based or just random changes of mind? A machine would always either use (pseudo)randomness or calculate the probability that a decision will affect the given problem. I think a machine can evolve an answer to a given problem either by calculating or by being random, but only a human brain has the ability to sometimes be truly randomly random (a paradox), that is, to sometimes emotionally, logically, or randomly decide when to be random and when not to be.
        • Mar 7 2012: Are we truly random? I am interested in research that shows the human brain acting randomly. Perhaps our brains are very deterministic, and it's simply the external stimuli that are random. If that's the case, then we can definitely build a deterministic machine capable of responding to random inputs.

          Computers already take randomness from external inputs (like a mouse, keyboard, or other attached devices); there's no reason why an AI can't be subjected to the same randomness.
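The point above can be sketched concretely (a minimal illustration, not anyone's actual AI design): a fully deterministic program can still draw unpredictability from its environment, because the operating system collects entropy from hardware events such as interrupt and input timings.

```python
import os
import random

# Environmental entropy gathered by the OS (hardware/input event timing).
seed = int.from_bytes(os.urandom(8), "big")

# The generator itself is deterministic: the same seed always reproduces
# the same sequence. The unpredictability comes entirely from outside.
rng = random.Random(seed)
print(rng.random())
```

So a deterministic machine plus random external input behaves unpredictably, which is the scenario described in the comment above.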
        • Mar 7 2012: Input from keyboards and other sources is seldom random for a computer, but when it is, what does a computer do with the random input? Now consider what a human would do or has done with random input from the environment: the random brush of hair on skin has probably inspired much poetry. Modern art is mostly random, as is its interpretation. You should probably be giving potential A.I.s 'Rorschach tests' for confirmation.
        • Mar 7 2012: Johannes, your argument seems to be 'We don't know how emotional changes of mind etc. happen, so we never will, and we'll never be able to build machines that do the same.' I'm sure you can see the flaw when I put it like that. Anyway, the bigger mystery is how we come to have conscious experiences -- the 'hard problem' of consciousness. I'd put my money on someone eventually coming up with a scientific answer, and the rest of us reacting like Thomas Huxley: 'How extremely stupid not to have thought of that!' But we're nowhere near that point right now.
      • Mar 7 2012: Except you'd have to treat a machine completely as a human in order for it to have the human experiences necessary for it to know what's important and what can be ignored when it comes to having novel ideas.

        I think any future where we seriously consider treating machines like people before we've gotten around to treating people more like people is one that is ill-considered.
        • Mar 7 2012: If the machine is confirmed to be conscious, I don't see why that shouldn't take place.
