TED Conversations

Howard Yee

Software Engineer @ Rubenstein Technology Group


This conversation is closed.

Can technology replace human intelligence?

This week in my Bioelectricity class we learned about extracellular fields. One facet of the study of extracellular fields I find interesting is the determination of the field from a known source (AKA the Forward Problem) versus the determination of the source from a known field (AKA the Inverse Problem). Whereas the forward problem is straightforward and solutions may be obtained by direct calculation, the inverse problem is ill-posed: it lacks a unique solution, so any answer requires interpretation, which may be subjective. We may also apply a mechanism to perform the interpretation; one such mechanism is an AI. However, this facet of AI (document classification) is only the surface of the field.
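As a rough sketch of why the inverse problem lacks a unique solution (the numbers and geometry here are invented for illustration, not from any bioelectricity text): two quite different source configurations can produce fields that are indistinguishable at distant measurement points, so the measurements alone cannot recover the source.

```python
# Non-uniqueness of the inverse problem, illustrated with point sources.
# Forward problem: compute the field at electrodes from known sources.
# Inverse problem: two different sources yield nearly the same field.
import numpy as np

def potential(source_positions, source_strengths, electrode_positions):
    """Forward problem: superpose 1/r potentials from point sources."""
    phi = np.zeros(len(electrode_positions))
    for pos, q in zip(source_positions, source_strengths):
        r = np.abs(electrode_positions - pos)
        phi += q / (4 * np.pi * r)
    return phi

electrodes = np.linspace(50.0, 100.0, 6)   # far-field measurement points

# Source A: one source of strength 2 at the origin
phi_a = potential(np.array([0.0]), np.array([2.0]), electrodes)
# Source B: two sources of strength 1, slightly separated
phi_b = potential(np.array([-0.5, 0.5]), np.array([1.0, 1.0]), electrodes)

# The two measured fields agree to within a small fraction of a percent,
# so no algorithm could tell the sources apart from these electrodes alone.
print(np.max(np.abs(phi_a - phi_b) / phi_a))
```

This is why the solution "requires interpretation": extra assumptions about the source must be supplied from outside the data.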

Damon Horowitz recently gave a presentation at TEDxSoMa called “Why machines need people”. In it, he says that AI can never approach the intelligence of humans. He gives examples of AI systems, like classification and summarization, and explains that those systems are simply “pattern matching” without any intelligence behind them. If true, perhaps the subjective interpretation of inverse problems is preferable to blind classification. Through experience, human interpreters may have more insight than one can impart on an algorithm.
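To make the “pattern matching” point concrete, here is a toy document classifier in the spirit Horowitz describes (the categories and keyword lists are invented for illustration): it counts surface-level keyword hits and involves no understanding at all.

```python
# A minimal "pattern matching" classifier: score a document by how many
# category keywords it contains, and pick the highest-scoring category.
KEYWORDS = {
    "neuroscience": {"neuron", "brain", "cortex", "synapse"},
    "computing":    {"algorithm", "software", "machine", "data"},
}

def classify(document):
    """Return the category whose keyword set overlaps the document most."""
    words = set(document.lower().split())
    scores = {cat: len(words & kws) for cat, kws in KEYWORDS.items()}
    return max(scores, key=scores.get)

print(classify("The neuron fires when the brain processes input"))
# -> neuroscience  (matched purely on the words "neuron" and "brain")
```

The program never models meaning; it only matches tokens, which is exactly the kind of shallow mechanism the talk contrasts with human intelligence.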

However, what Damon failed to mention is that most of those AI systems built to do small tasks are known as weak AI. There is a whole other field of study, strong AI, whose methods of creating intelligence go far beyond “pattern matching”. Proponents of strong AI believe that human intelligence can be replicated. Of course, we are a long way off from seeing human-level AI. What makes human intelligence hard to replicate? Can it be simulated? If we created a model of the human brain, would it be able to think?

Related Videos (not on Ted):
“Why Machines need People”
http://www.youtube.com/watch?v=1YdE-D_lSgI&feature=player_embedded

Showing single comment thread. View the full conversation.

  • Mar 8 2012: With the word "replace" in the question my answer will be a no. My reasoning is fairly simple - a human is the sum of its genes, its experience and the lives of those who came before it. There is something intangible about that last bit - our decisions are based not only on our own genes and experience, but also on interpreted history. And the key word there is "interpreted"; you can feed all the history of the world into an AI and make that enter into its decision making process, but it will never be able to emulate the "interpretation factor".

    So no, I don't think technology can replace human intelligence. But in a narrow scope, it CAN surpass it - by a lot. The first thing we need to do, though, is make a computer that calculates outside of right and wrong, or outside the binary domain. For an AI to be successful it needs to recognize that there is such a thing as more right, more wrong and neither right nor wrong. I think this is more of a challenge than people realize.
    • Mar 8 2012: It seems like you are interpreting (pun intended) AI systems as only discrete entities with a very algorithmic core. The problem of the strength of AI is more substantial than that. Currently in the field of neuroscience, we are unable to make the connection between the microscopic systems (neurons), whose inputs and outputs are very well defined, and the macroscopic system (our consciousness). Right now, we can emulate neurons very well; proponents of strong AI believe that with enough emulated neurons we can replicate consciousness. The question at hand goes beyond whether we can artificially create consciousness; it's a question of "what is consciousness?", because we are unable to tease it out of the known system (the human brain).
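As an aside on the claim that we can emulate neurons well: one of the simplest such emulations is the leaky integrate-and-fire model, sketched below with illustrative (not physiological-reference) parameters.

```python
# Leaky integrate-and-fire neuron: the membrane voltage leaks toward rest,
# is driven by an input current, and emits a spike on crossing threshold.
def simulate_lif(input_current, v_rest=-70.0, v_thresh=-55.0,
                 v_reset=-75.0, tau=10.0, dt=0.1):
    """Integrate dV/dt = (v_rest - V + I) / tau; return spike times (ms)."""
    v, spikes = v_rest, []
    for step, i in enumerate(input_current):
        v += dt * (v_rest - v + i) / tau
        if v >= v_thresh:          # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset            # reset after spiking
    return spikes

# A constant drive produces a regular spike train over 200 ms of simulation
spikes = simulate_lif([20.0] * 2000)
print(len(spikes))
```

The microscopic rule is fully defined, yet nothing in it hints at how billions of such units give rise to consciousness - which is exactly the micro-to-macro gap described above.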
      • Mar 8 2012: The real question is whether or not we will believe it is in fact consciousness once we have created it.

        There's no reply to your reply, so I am dropping this above the line.
        I never said human. I said consciousness. My use of the word believe stems from the fact that we cannot know.
        • Mar 8 2012: The real question, which Oliver Milne has been pushing countless times in this conversation, is whether or not we KNOW it's in fact conscious. Machines can be made to pass a Turing test without having any real intelligence, and if we merely "believe" one is human, we are lying to ourselves. Part of being able to create a conscious system is being able to definitively show that it is conscious. If we are unable to show without a doubt that it is conscious, then we have fallen for hokum.
        • Mar 8 2012: Then that raises the question: how do we test for consciousness? Ignoring for a moment the immense difficulty of creating consciousness, let's devise a test for consciousness on the only entities we currently suspect of having it---humans.

          And if we can't even show that we're conscious, does that imply we've already fallen for hokum?
        • Mar 9 2012: @Logan. There's something known as the "three aspects of consciousness". There's also the concept of "theory of mind". Scientists have devised well-accepted tests for those aspects in humans and animals. The mirror test checks one aspect: the ability to recognize oneself. There's also the ability to sympathize with others by recognizing external events as if they were one's own, and finally the ability to take previous experiences and apply them, through deduction, to future events. Many animals have facets of the three, but not all three.

          Using these tests, we've been able to find out that babies develop these abilities in steps and do not fully gain all three until months after birth.

          And as evidence that these facets of consciousness are tied to real-world systems, watch this video about mirror neurons: http://www.ted.com/talks/vs_ramachandran_the_neurons_that_shaped_civilization.html. It would seem like we have evolved with systems in place to aid the sympathetic aspect.

          So it would seem like we have tests for consciousness. If anything, we should scrutinize the three aspects and theory of mind to see whether they truly encapsulate what it means to be conscious.
        • Mar 9 2012: Those tests are a starting point, but I don't think they address the 'hard problem' of consciousness (http://en.wikipedia.org/wiki/Hard_problem_of_consciousness), which is the part that really matters. It's possible, and a little disturbing, to imagine a sort of android that acts exactly like a person, including in those behavioural tests, but which doesn't have consciousness. If we didn't look inside its head (I mean that literally), we could never tell whether or not it was a person. You suggested elsewhere that perhaps nothing unconscious could manifest all the signs of consciousness. That'd be a fantastic discovery if it were ever confirmed, but, on the face of it, it seems like something that would be almost impossible to find out without first knowing what consciousness is.
      • Mar 8 2012: That is actually not my interpretation :)

        I have no doubt whatsoever that we will one day spawn a conscious AI whose thinking pattern mimics that of a human, nor do I doubt that such an AI will one day be vastly more intelligent than any human. As I said, technology CAN surpass us - but only in a narrow scope. Something *will* be lost in the translation between the biological and the technological. I sincerely doubt we will be able to infuse an AI with the "human condition".

        You may argue that I'm wrong because if we can create a technological system perfectly analogous to the way the human mind operates, the "human condition" may come forth naturally. Then I counter with this - if we are able to do that, what we will have is the technological equivalent of a caveman with a library full of history books. Yes, the caveman may be incredibly intelligent and he may have access to all of our history, but the interpretation factor cannot be replicated artificially.
        • Mar 9 2012: Maybe not your human condition. But that caveman-robot might equally despair at the impossibility of creating a human capable of understanding the caveman-robot condition :P
