TED Conversations

Howard Yee

Software Engineer @ Rubenstein Technology Group


This conversation is closed.

Can technology replace human intelligence?

This week in my Bioelectricity class we learned about extracellular fields. One facet of the study of extracellular fields I find interesting is the determination of the field from a known source (AKA the Forward Problem) versus the determination of the source from a known field (AKA the Inverse Problem). Whereas the forward problem is straightforward and solutions can be obtained by direct calculation, the inverse problem is ill-posed: its solutions are not unique, so arriving at an answer requires interpretation, which may be subjective. We can also automate that interpretation with a mechanism; such a mechanism is a form of AI. However, this facet of AI (document classification) only scratches the surface of the field.
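To make the non-uniqueness concrete, here is a minimal sketch (all numbers and the lead-field matrix are invented for illustration): when fewer electrodes record the field than there are candidate sources, two completely different source configurations can produce exactly the same measurements.

```python
import numpy as np

# Hypothetical setup: 3 electrodes record potentials generated by 5
# candidate current sources. A[i, j] stands in for the potential that
# electrode i would see from a unit source at location j.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 5))

true_sources = np.array([1.0, 0.0, -2.0, 0.5, 0.0])
measured = A @ true_sources   # forward problem: a direct calculation

# Inverse problem: with fewer measurements than sources the system is
# underdetermined. Adding any null-space vector of A to the sources
# leaves the measured field completely unchanged.
_, _, Vt = np.linalg.svd(A)
alt_sources = true_sources + 3.0 * Vt[-1]   # Vt[-1] lies in A's null space

print(np.allclose(measured, A @ alt_sources))  # True: two different source
# configurations, one identical field -- no unique inverse solution exists.
```

This is why the inverse problem "requires interpretation": the measurements alone cannot distinguish between the candidate solutions, so extra assumptions or experience must break the tie.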

Damon Horowitz recently gave a presentation at TEDxSoMa called “Why machines need people”. In it, he argues that AI can never approach the intelligence of humans. He gives examples of AI systems, like classification and summarization, and explains that those systems are simply “pattern matching” with no intelligence behind them. If that is true, perhaps the subjective interpretation of inverse problems is preferable to dumb classification: through experience, human interpreters may have more insight than one can impart to an algorithm.
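To see what "pattern matching without intelligence" looks like in practice, here is a toy document classifier (the categories and keyword sets are invented for illustration): it counts keyword overlap and nothing more, with no understanding of what the words mean.

```python
# Invented keyword sets for two hypothetical categories.
CATEGORIES = {
    "biology": {"cell", "neuron", "field", "membrane"},
    "computing": {"algorithm", "machine", "data", "program"},
}

def classify(text: str) -> str:
    """Pick the category whose keyword set overlaps the document most."""
    words = set(text.lower().split())
    return max(CATEGORIES, key=lambda c: len(CATEGORIES[c] & words))

print(classify("the neuron membrane generates an extracellular field"))
# -> biology
```

Real classifiers are statistically far more sophisticated, but the spirit is the same: they match surface patterns in the input against patterns seen before, which is exactly the limitation Horowitz points to.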

However, what Damon failed to mention is that most of those AI systems built to do small tasks belong to what is known as weak AI. There is a whole other field of study, strong AI, whose methods of creating intelligence go far beyond “pattern matching”. Proponents of strong AI believe that human intelligence can be replicated, though of course we are a long way off from seeing human-level AI. What makes human intelligence hard to replicate? Can it be simulated? If we created a model of the human brain, would it be able to think?

Related Videos (not on TED):
“Why Machines need People”



  • Mar 11 2012: Yes, I believe we can replace part of human intelligence with machines! But the thing is that we can only replace the part of human intelligence we have discovered within ourselves, which means that as we evolve and find more intelligence within our brains, we can then replace this newfound intelligence with machines. So humans will always be ahead of machines, and not the other way around. Infinity to grasp and master! My opinion...
    • Mar 13 2012: A great insight. Let me point out that this makes the assumption that the AI we build will not have any emergent intelligence beyond what we've built into it. To me, this is like saying that when you draw a pattern, the ONLY pattern is the one you meant to draw!

      What about when we build the first machine that ponders its own existence and aspects of its own thought? This is something we know we do, so why wouldn't we try to implement it in a machine? Once the machine is capable of self-inquiry, what stops it from digging down deeper into its psyche and ours, building more upon itself ad infinitum?

      Are we capable of making something this complex? We can only guess. But is there a philosophical argument that proves it to be impossible? I haven't heard anything even remotely convincing. And remember evolution: stupid cells made smarter cells.
