TED Conversations

Howard Yee

Software Engineer @ Rubenstein Technology Group,

TEDCRED 50+

This conversation is closed.

Can technology replace human intelligence?

This week in my Bioelectricity class we learned about extracellular fields. One facet of the study of extracellular fields I find interesting is the determination of the field from a known source (a.k.a. the forward problem) versus the determination of the source from a known field (a.k.a. the inverse problem). Whereas the forward problem is straightforward and solutions may be obtained by direct calculation, the inverse problem is ill-posed: its solutions are not unique, so any answer requires interpretation, which may be subjective. We may also automate that interpretation; such a mechanism is a form of AI. However, this facet of AI (essentially a classification task) is only the surface of the field.
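The forward/inverse asymmetry can be sketched in a few lines. The snippet below is a hypothetical illustration (the function names, positions, and currents are invented for the sketch): it computes the forward problem for point current sources in an infinite homogeneous volume conductor, where the monopole potential is phi = I / (4*pi*sigma*r), and then shows the non-uniqueness of the inverse problem by producing two different source configurations whose fields at distant electrodes are nearly indistinguishable.

```python
import numpy as np

SIGMA = 0.33  # assumed tissue conductivity, S/m (typical order of magnitude)

def forward(sources, electrodes):
    """Forward problem: potentials at electrode positions from (pos, current) sources."""
    phi = np.zeros(len(electrodes))
    for pos, current in sources:
        r = np.linalg.norm(electrodes - np.asarray(pos), axis=1)
        phi += current / (4 * np.pi * SIGMA * r)
    return phi

# Electrodes placed far from the sources
electrodes = np.array([[10.0, 0, 0], [0, 10.0, 0], [0, 0, 10.0], [7.0, 7.0, 0]])

# Two *different* source configurations...
single = [((0.0, 0.0, 0.0), 2.0)]                     # one 2 A source
split = [((0.05, 0, 0), 1.0), ((-0.05, 0, 0), 1.0)]   # two 1 A sources nearby

# ...produce nearly identical measurements, so the inverse problem
# cannot distinguish them from the field alone:
print(forward(single, electrodes))
print(forward(split, electrodes))
```

Since both configurations explain the same measurements, picking one requires outside knowledge, which is exactly where interpretation enters.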

Damon Horowitz recently gave a presentation at TEDxSoMa called “Why machines need people”. In it, he argues that AI can never approach the intelligence of humans. He gives examples of AI systems, such as classification and summarization, and explains that those systems are simply “pattern matching” with no intelligence behind them. If true, perhaps subjective human interpretation of inverse problems is preferable to dumb classification: through experience, the interpreters may have more insight than one can impart to an algorithm.
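The “pattern matching” point is easy to make concrete. Below is a toy sketch of keyword-based document classification (the labels and keyword lists are invented for illustration): it scores each class by counting keyword overlaps and picks the highest. Real systems use learned weights rather than hand-picked words, but the mechanism is the same counting, with no understanding of the text.

```python
from collections import Counter

# Invented labels and keyword sets, purely for illustration
KEYWORDS = {
    "sports":   {"game", "team", "score", "season"},
    "politics": {"vote", "election", "policy", "senate"},
}

def classify(text):
    """Score each class by keyword-overlap counts; return the best-scoring class."""
    words = Counter(text.lower().split())
    scores = {label: sum(words[w] for w in kws) for label, kws in KEYWORDS.items()}
    return max(scores, key=scores.get)

print(classify("The team lost the game in the final season match"))
```

The classifier gets the right answer here without any notion of what a “team” or a “season” is, which is precisely Horowitz's complaint.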

However, what Damon failed to mention is that most of those AI systems built to do small tasks belong to what is known as weak AI. There is a whole other field of study, strong AI, whose methods of creating intelligence go well beyond “pattern matching”. Proponents of strong AI believe that human intelligence can be replicated. Of course, we are a long way off from seeing human-level AI. What makes human intelligence hard to replicate? Can it be simulated? If we created a model of the human brain, would it be able to think?

Related Videos (not on Ted):
“Why Machines need People”
http://www.youtube.com/watch?v=1YdE-D_lSgI&feature=player_embedded


Showing single comment thread. View the full conversation.

    Mar 8 2012: I have written a number of blog posts on this and related questions. The topics below transition from where we are and why we're "not there yet" with creating humanlike AIs, through how to create non-intelligent machine learning systems that at least do useful things, through some views on what we should be doing to create humanlike intelligence, through to some musings on intelligence, entropy, the universe and everything. As far as the issue of consciousness, I try not to touch that with a 10-foot pole :-)

    "Watson's Jeopardy win, and a reality check on the future of AI":
    http://www.metalev.org/2011/02/reality-check-on-future-of-ai-and.html

    "Why we may not have intelligent computers by 2019":
    http://www.metalev.org/2010/12/why-we-may-not-have-intelligent.html

    "Machine intelligence: the earthmoving equipment of the information age, and the future of meaningful lives":
    http://www.metalev.org/2011/08/machine-intelligence-earthmoving.html

    "On hierarchical learning and building a brain":
    http://www.metalev.org/2011/08/on-hierarchical-learning-and-building.html

    "Life, Intelligence and the Second Law of Thermodynamics":
    http://www.metalev.org/2011/04/life-intelligence-and-second-law-of.html

    I hope some of this is at least thought-provoking!
    --Luke
Mar 8 2012: So Luke, basically, without having read these links yet: what are your thoughts on a learning, thinking AI?
        Mar 8 2012: Ken -- most of my current thoughts are in the links above. Happy to discuss once you've had a chance to peruse them :-)
Mar 8 2012: Ok, I've read the first two of them, and yeah, it goes along the same lines as what I thought (which is uneducated). I've kind of followed how Intel has stayed on course with Moore's law, but this year or last year it "ticked" and there's no "tock" until two more years? And it's been a programmer's nightmare trying to develop around the multicore bottleneck.

I know this is not what I asked, but I can't see today's chip development ever getting to what Kurzweil states unless a new element or design is introduced. Here's what I found trawling one day.

          http://scitechdaily.com/penn-researchers-build-a-circuit-with-light/

It takes me a while to read things, as I tend to think them through and then reread. It's slow, I know, but it works for me.
      • Comment deleted

Mar 9 2012: There isn't a multi-core dilemma. There are just people who don't know electronics or how to write compilers, and people who don't use modern technology. They think this is a problem because they don't know any better.

Computers moved past that point long ago. Computers used for heavy calculation today have thousands of cores. With common graphics cards you can do many thousands of calculations in parallel.

Ever heard of OpenCL? Check it out; you ought to know it.

Ever heard of high-frequency trading, for example? They now use FPGAs and can calculate and respond to hundreds of thousands of parameters in parallel. Languages for programming these parallel systems have been around since the 1980s.

And if you think that's difficult, there are computer languages that fix that too. Check out Mitrion-C.

This isn't a problem. Even programs like Word, Photoshop, and web browsers scale to thousands of calculation units. Just insert a modern graphics card in your computer.
