TED Conversations

Howard Yee

Software Engineer @ Rubenstein Technology Group


This conversation is closed.

Can technology replace human intelligence?

This week in my Bioelectricity class we learned about extracellular fields. One facet of the study of extracellular fields I find interesting is the distinction between determining the field from a known source (the forward problem) and determining the source from a known field (the inverse problem). Whereas the forward problem is straightforward and can be solved by direct calculation, the inverse problem is ill-posed: it has no unique solution, so any answer requires interpretation, which may be subjective. We may also automate that interpretation with a mechanism, and such a mechanism is a form of AI. However, this facet of AI (document classification) is only the surface of the field.
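The non-uniqueness of the inverse problem can be sketched in a few lines. In this toy model (the matrix values and source counts are invented for illustration, not taken from any real lead-field data), three sources produce potentials at two electrodes through a linear map. The forward problem is a direct computation, but because the map has a null space, two different source configurations produce exactly the same measured field:

```python
# Toy forward/inverse illustration: 2 electrodes, 3 sources.
# The "lead field" matrix A is made up; its null space contains
# (1, -2, 1), so sources differing by that vector are indistinguishable
# from the measurements alone -- the inverse problem is non-unique.

A = [[1, 1, 1],
     [1, 2, 3]]

def forward(A, x):
    """Forward problem: compute the potential at each electrode."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

x1 = [1, 1, 1]    # one source configuration
x2 = [2, -1, 2]   # x1 plus the null-space vector (1, -2, 1)

print(forward(A, x1))  # [3, 6]
print(forward(A, x2))  # [3, 6] -- same field, different sources
```

Solving the forward problem is just this matrix-vector product; recovering the sources from `[3, 6]` is impossible without extra assumptions, which is exactly where interpretation enters.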

Damon Horowitz recently gave a presentation at TEDxSoMa called “Why machines need people”. In it, he argues that AI can never approach the intelligence of humans. He gives examples of AI systems, such as classification and summarization, and explains that these systems are simply “pattern matching” without any intelligence behind them. If he is right, perhaps the subjective interpretation of inverse problems is preferable to dumb classification: through experience, human interpreters may have more insight than one can impart to an algorithm.
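To make the “pattern matching” point concrete, here is a minimal classifier of the kind Horowitz describes (the categories and keyword lists are invented for illustration). It counts keyword overlaps and picks the largest count; no understanding of the text is involved, only surface statistics:

```python
# A toy "pattern matching" document classifier: score each category by
# how many of its keywords appear in the document, then pick the best.
# Categories and keyword sets are hypothetical examples.

KEYWORDS = {
    "sports": {"game", "score", "team", "coach"},
    "science": {"experiment", "theory", "data", "cell"},
}

def classify(document: str) -> str:
    """Return the category whose keyword set overlaps the document most."""
    words = set(document.lower().split())
    return max(KEYWORDS, key=lambda label: len(KEYWORDS[label] & words))

print(classify("the team won the game with a late score"))   # sports
print(classify("the experiment produced surprising data"))   # science
```

A system like this can be useful, yet it clearly has no insight into what the words mean, which is the heart of Horowitz's objection.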

However, what Damon failed to mention is that AI systems built to perform narrow tasks like these are examples of what is known as weak AI. There is a whole other field of study, strong AI, whose methods of creating intelligence go far beyond “pattern matching”. Proponents of strong AI believe that human intelligence can be replicated. Of course, we are a long way from seeing human-level AI. What makes human intelligence hard to replicate? Can it be simulated? If we created a model of the human brain, would it be able to think?

Related Videos (not on TED):
“Why Machines need People”


Showing single comment thread. View the full conversation.

  • Mar 7 2012: I say ask the being if it thinks it is alive or if it is a machine. If it thinks it is alive then who are we to say otherwise?
    • Mar 7 2012: It can only think it's alive if it can think. But something doesn't have to be able to think to pass a Turing test. The danger of your approach is that we might make unconscious machines that wrongly insist that they can think.

      Consciousness is something that really happens. There is a fact of the matter of whether something is conscious or not. And if we're going to make machines that do impressions of being conscious, we really, really need to know what that fact of the matter involves.
      • Mar 7 2012: I'm not sure we can ever satisfactorily answer this question. Is consciousness really a yes-or-no question, or is there a grey area of being partially conscious? I'm also thinking of the evolution of humans from less conscious ancestors.
        • Mar 7 2012: I agree with you, but we have to try. And imagine how fantastic it would be if we succeeded - we'd finally have an answer to one of the biggest questions there is.
