TED Conversations

Howard Yee

Software Engineer @ Rubenstein Technology Group

TEDCRED 50+

This conversation is closed.

Can technology replace human intelligence?

This week in my Bioelectricity class we learned about extracellular fields. One facet of the study of extracellular fields I find interesting is the determination of the field from a known source (AKA the Forward Problem) versus the determination of the source from a known field (AKA the Inverse Problem). Whereas the forward problem is straightforward and solutions may be obtained through calculation, the inverse problem is ill-posed: the lack of a unique solution means the answer requires interpretation, which may be subjective. We may also apply a mechanism to perform that interpretation; such a mechanism is a form of AI. However, this facet of AI (document classification) is only the surface of the field.
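A toy numerical example (my own illustration, with a made-up lead-field matrix `L`; real lead fields come from solving the volume-conductor forward model) shows why the forward problem is unique while the inverse problem is not:

```python
import numpy as np

# Hypothetical lead-field matrix: 2 electrodes observing 3 sources.
# Values are invented for illustration only.
L = np.array([[1.0, 2.0, 1.0],
              [0.5, 1.0, 0.5]])

# Forward problem: known sources -> field. One source pattern,
# one field. Simple matrix multiplication.
s1 = np.array([1.0, 0.0, 1.0])
field = L @ s1  # array([2., 1.])

# Inverse problem: a completely different source pattern produces
# exactly the same field, so the measured field alone cannot
# distinguish them. This is the non-uniqueness that forces
# interpretation.
s2 = np.array([0.0, 1.0, 0.0])
assert np.allclose(L @ s2, field)
```

With fewer electrodes than sources the system is underdetermined, so choosing between `s1` and `s2` requires outside knowledge — exactly the subjective interpretation described above.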

Damon Horowitz recently gave a presentation at TEDxSoMa called “Why machines need people”. In it, he argues that AI can never approach the intelligence of humans. He gives examples of AI systems, like classification and summarization, and explains that those systems are simply “pattern matching” without any intelligence behind them. If true, perhaps the subjective interpretation of inverse problems is preferable to dumb classification: through experience, the interpreters may have more insight than one can impart to an algorithm.

However, what Damon failed to mention is that most of those AI systems built to do small tasks belong to what is known as weak AI. There is a whole other field of study, strong AI, whose methods of creating intelligence go far beyond “pattern matching”. Proponents of strong AI believe that human intelligence can be replicated. Of course, we are a long way off from seeing human-level AI. What makes human intelligence hard to replicate? Can it be simulated? If we created a model of the human brain, would it be able to think?

Related Videos (not on Ted):
“Why Machines need People”
http://www.youtube.com/watch?v=1YdE-D_lSgI&feature=player_embedded


    Mar 11 2012: I am not a programmer and do not know much beyond MATLAB ...
    But the program of a human is something like:
    1. See your environment.
    2. Take its pattern.
    3. Save it in memory.
    And if a problem occurs:
    1. What is the unsuitable stuff?
    2. What is your destination?
    3. Make a pattern from 1 to 2.
    4. Match pattern 3 with one of the patterns in memory.

    I think simulation of this path for an electronic brain is hard but not impossible.
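The commenter's steps can be sketched as a minimal program (my own wording and data structures, not Amirpouya's code): observation stores patterns, and problem-solving forms a (current, goal) pattern and matches it against memory.

```python
# Memory of experienced patterns, each stored as a (situation, goal) pair.
memory = []

def observe(pattern):
    """Steps 1-3: see the environment, take its pattern, save it in memory."""
    memory.append(pattern)

def solve(current, goal):
    """Problem steps 1-4: identify the unsuitable state and the
    destination, form a pattern from one to the other, and match it
    against the patterns in memory."""
    target = (current, goal)
    for stored in memory:
        if stored == target:
            return stored
    return None  # no matching experience to draw on

observe(("hungry", "fed"))
observe(("lost", "home"))
print(solve("lost", "home"))    # a matching experience is found
print(solve("tired", "rested")) # None: never experienced, so no match
```

The sketch also makes the reply below concrete: exact matching only works for situations already lived through, which is why an artificial mind might need as many iterations as a human has life experiences.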
      Mar 13 2012: Hi Amirpouya,

      I think your simplification of the computing process of the human mind is pretty spot on. However, it raises a question for me. It seems like an artificial mind would need to go through as many iterations as a human has life experiences to fully gain "human intelligence." And even then, how does a computer make decisions that we as humans deem impossible? A computer can master facts and memorize information, but I feel that how it interprets them is nowhere near how a human does. You can assign as many numbers, weights, and formulas as you like, but at the end of the day, given a situation where the right answer may be the irrational one, how can we expect a computer to make that distinction?
        Mar 13 2012: And extend your question to group choices: a decision by one person for his own well-being might mean doing step A, but a community decision in a city might result in step B - rational for the group, irrational for 45% of the individuals... Can a computer learn and interact that way?

        I guess we tend to underestimate how much our rational individual choices are bounded by the groups we act in... this is my daily experience in city development. Here is a lecture which is a good example of what we can compute in a city - and what we cannot: http://www.labkultur.tv/en/blog/deltalecture-arrival-cities-1
          Mar 13 2012: Bernd, if you were to try and code an AI, would binary be sufficient? I'm no programmer, but the way I see it, we would have to start at the bottom, begin by modelling the amino acids, and build up from there. I don't think we can rely on equations; what I mean is that a neuron won't fire off the same signal constantly. What are your thoughts on this?
        Mar 13 2012: Hi Harnsowl -
        I hope I got your meaning ...
        A computer should not be programmed to react like a human.
        If it has all of a human's passions, it will become like an infant.
        And if it has the same ways of cognition as a human (seeing etc.),
        plus the ability to make itself better over time - which I believe, if a machine has it, will destroy all of mankind - I think it will be a complete human.
        But one other thing remains: all of us feel WE are someone apart from US.
        For example, I feel I am someone apart from this body, and I just analyze its work.
        This feeling makes us feel we comprehend data in a different way than a computer does.
        But this SELF is just an independent system for making everything better for the body by correcting its programming.
        But I don't think this system deserves to be called a "soul".
        I said it's hard but not impossible.
