TED Conversations

Howard Yee

Software Engineer @ Rubenstein Technology Group


This conversation is closed.

Can technology replace human intelligence?

This week in my Bioelectricity class we learned about extracellular fields. One facet of the study of extracellular fields I find interesting is the determination of the field from a known source (the forward problem) versus the determination of the source from a known field (the inverse problem). Whereas the forward problem is straightforward and solutions can be obtained by direct calculation, the inverse problem is ill-posed: it lacks a unique solution, so any answer requires interpretation, which may be subjective. We can also automate that interpretation; such a mechanism is a form of AI. However, this facet of AI (document classification) only scratches the surface of the field.
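To make the forward/inverse contrast concrete, here is a minimal sketch, assuming an infinite homogeneous volume conductor and point current sources (the standard monopole potential φ = I/(4πσr)); the conductivity value, currents, and geometry are illustrative, not from the class:

```python
import numpy as np

SIGMA = 0.33  # assumed tissue conductivity in S/m (illustrative value)

def forward_potential(sources, electrodes, sigma=SIGMA):
    """Forward problem: potential at each electrode from known point
    current sources in an infinite homogeneous volume conductor,
    using phi = I / (4 * pi * sigma * r)."""
    phi = np.zeros(len(electrodes))
    for pos, current in sources:
        r = np.linalg.norm(electrodes - np.asarray(pos), axis=1)
        phi += current / (4 * np.pi * sigma * r)
    return phi

# Six electrodes 10 cm from the origin, far from the sources.
electrodes = 0.1 * np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                             [0, -1, 0], [0, 0, 1], [0, 0, -1]])

# Two *different* source configurations: one 1 uA monopole vs.
# two 0.5 uA monopoles placed 1 mm to either side of it.
single = [((0.0, 0.0, 0.0), 1.0e-6)]
split = [((0.001, 0.0, 0.0), 0.5e-6), ((-0.001, 0.0, 0.0), 0.5e-6)]

phi_single = forward_potential(single, electrodes)
phi_split = forward_potential(split, electrodes)

# The recorded fields are nearly identical, so no algorithm can tell
# the two sources apart from these measurements alone: the inverse
# problem has no unique answer without extra assumptions.
print(np.max(np.abs(phi_single - phi_split)) / np.max(np.abs(phi_single)))
```

The forward direction is a few lines of arithmetic; the inverse direction is where interpretation, and hence something like AI, has to enter.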

Damon Horowitz recently gave a presentation at TEDxSoMa called “Why machines need people”. In it, he argues that AI can never approach the intelligence of humans. He gives examples of AI systems such as classification and summarization, and explains that those systems are simply “pattern matching” with no intelligence behind them. If he is right, perhaps the subjective interpretation of inverse problems is preferable to dumb classification. Through experience, human interpreters may have more insight than one can impart to an algorithm.
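To illustrate what “pattern matching” means in this context, here is a hypothetical toy document classifier (my own sketch, not one of the systems Horowitz describes): it labels text purely by word-count overlap, with no representation of meaning at all:

```python
from collections import Counter

# Hypothetical training data: the entire "knowledge" of the system
# is just word counts per label.
TRAINING = {
    "sports": ["the team won the game", "a great goal in the match"],
    "tech":   ["the new chip is faster", "software update fixes bugs"],
}

def train(training):
    """Count word frequencies per label -- this is the whole 'model'."""
    return {label: Counter(w for doc in docs for w in doc.split())
            for label, docs in training.items()}

def classify(model, text):
    """Score each label by overlapping word counts and pick the best.
    There is no understanding here, only surface pattern matching."""
    words = text.split()
    return max(model, key=lambda label: sum(model[label][w] for w in words))

model = train(TRAINING)
print(classify(model, "the game was great"))       # -> "sports"
print(classify(model, "chip bugs in the update"))  # -> "tech"
```

The classifier “works” on easy inputs while understanding nothing, which is exactly the distinction Horowitz is drawing.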

However, what Damon failed to mention is that most of those AI systems, built to perform small tasks, are examples of what is known as weak AI. There is a whole other field of study, strong AI, whose methods of creating intelligence go far beyond “pattern matching”. Proponents of strong AI believe that human intelligence can be replicated. Of course, we are a long way from seeing human-level AI. What makes human intelligence hard to replicate? Can it be simulated? If we created a model of the human brain, would it be able to think?

Related Videos (not on TED):
“Why machines need people”
http://www.youtube.com/watch?v=1YdE-D_lSgI&feature=player_embedded



  • Mar 10 2012: It is only a matter of time. In a way we are the same, but our design is better so far. Among the few abilities machines lack, we have the ability to forget, which is connected to the ability to learn; it is very important, with its pros and cons (effective selective omission from which to build further).

    As for self-awareness, we have to understand where its root stems from: we possess senses that machines don't have integrated. We know how important our arm is to us and the consequences of losing it; this gives birth to self-importance and, collectively, to self-awareness. Intuition, too, is a complex sense of interpretation that we cannot completely define; that doesn't mean machines will never possess it. It is more like trace connections that some create better than others.

    You have to ask yourself how a thought process occurs (a snapshot of the brain) and whether you really have a choice or merely an output; in time we will know for sure. The design of machines has been pretty static in the examples we have seen and learn from (with respect to AI, the general public doesn't have great examples). Better design implementations would change the definition of machines itself; it is a question of design. Emotion may be an action to generate a response from another system, or an exaggerated checkpoint due to a temporary or permanent inability to cope (what is love, except for its magical definition?). All of man's best work can go into the 'perfect' (:P) machine, but you can't have it the other way around. I may not be entirely clear or correct; this is my take, and my design :)
