TED Conversations

Howard Yee

Software Engineer @ Rubenstein Technology Group

This conversation is closed.

Can technology replace human intelligence?

This week in my Bioelectricity class we learned about extracellular fields. One facet of the study of extracellular fields I find interesting is the determination of the field from a known source (the forward problem) versus the determination of the source from a known field (the inverse problem). Whereas the forward problem is straightforward and solutions can be obtained by direct calculation, the inverse problem is ill-posed: its lack of a unique solution means the answer requires interpretation, which may be subjective. We may also apply an automated mechanism for that interpretation; such a mechanism is a form of AI. However, this facet of AI (classification) is only the surface of the field.
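
The non-uniqueness is easy to see in a toy calculation. The sketch below is my own illustration, not part of the coursework: it models monopole current sources in an infinite homogeneous volume conductor, where the potential of a point source is phi = I / (4*pi*sigma*r), and the conductivity value, electrode positions, and candidate source grid are arbitrary choices made up for the example. The forward problem is one matrix multiply; the inverse problem admits infinitely many answers, because the electrodes cannot distinguish source patterns that differ by a null-space component.

# Illustration only: non-uniqueness of the bioelectric inverse problem.
# Monopole potential in an infinite homogeneous conductor: phi = I / (4*pi*sigma*r).
import numpy as np

sigma = 0.33  # assumed tissue conductivity in S/m
electrodes = np.array([[x, 0.0, 0.0] for x in np.linspace(-0.05, 0.05, 4)])    # 4 sensors
candidates = np.array([[x, 0.0, -0.02] for x in np.linspace(-0.05, 0.05, 8)])  # 8 possible sources

# Lead-field matrix: A[i, j] = potential at electrode i per unit current at source j.
r = np.linalg.norm(electrodes[:, None, :] - candidates[None, :, :], axis=2)
A = 1.0 / (4.0 * np.pi * sigma * r)

# Forward problem: known sources -> measured potentials (unique, one matrix multiply).
s_true = np.zeros(8)
s_true[2] = 1e-6            # a single 1 microampere source
v = A @ s_true

# Inverse problem: with 8 unknowns and only 4 measurements, many source patterns fit v exactly.
s_min_norm = np.linalg.lstsq(A, v, rcond=None)[0]     # one admissible answer
null_space = np.linalg.svd(A)[2][4:]                  # directions the electrodes cannot see
s_other = s_min_norm + 0.5e-6 * null_space[0]         # a different, equally valid answer
print(np.allclose(A @ s_min_norm, v), np.allclose(A @ s_other, v))  # True True

Choosing between s_min_norm and s_other is exactly the interpretation step described above: the measurements alone cannot decide.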

Damon Horowitz gave a recent presentation at TEDxSoMa called “Why machines need people”. In it, he argues that AI can never approach the intelligence of humans. He gives examples of AI systems, such as classification and summarization, and explains that those systems are simply “pattern matching” with no intelligence behind them. If true, perhaps the subjective interpretation of inverse problems is preferable to dumb classification. Through experience, human interpreters may have more insight than one can impart to an algorithm.
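
To make “pattern matching” concrete, here is a deliberately dumb toy classifier of my own (not something from Horowitz's talk); the labels and keyword lists are invented for illustration. It can sort documents by counting word overlap with hand-picked keyword sets, yet nothing in it understands what it reads.

# A toy "pattern matching" document classifier: it counts word overlap with
# hand-picked keyword sets. It can sort documents, but nothing in it understands them.
KEYWORDS = {
    "neuroscience": {"neuron", "axon", "membrane", "potential", "brain"},
    "software": {"code", "compiler", "bug", "algorithm", "function"},
}

def classify(document):
    """Return the label whose keyword set overlaps the document the most."""
    words = set(document.lower().split())
    return max(KEYWORDS, key=lambda label: len(words & KEYWORDS[label]))

print(classify("The action potential travels down the axon of a neuron"))  # neuroscience
print(classify("The compiler flagged a bug in the sorting algorithm"))     # software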

However, what Damon failed to mention is that most of those AI systems built to do small tasks belong to what is known as weak AI. There is a whole other field of study, strong AI, whose methods of creating intelligence go well beyond “pattern matching”. Proponents of strong AI believe that human intelligence can be replicated. Of course, we are a long way from seeing human-level AI. What makes human intelligence hard to replicate? Can it be simulated? If we created a model of the human brain, would it be able to think?

Related Videos (not on TED):
“Why Machines need People”
http://www.youtube.com/watch?v=1YdE-D_lSgI&feature=player_embedded

  • Mar 7 2012: In my view, no; technology cannot replace human intelligence. Why? Has technology existed on its own since ancient history, or did it somehow appear mysteriously? I really don't think so. Just as this conversation is about "artificial" intelligence, technology is itself an artificial (let's say) product. If we scrutinize the lexical meaning of artificial, which is "made or produced by human beings", it can easily be seen that human intelligence is the creator and producer of technology, isn't it? For instance, if we assume that technology has fully replaced human intelligence, and we decide that we need teleportation, can technology solve the problem with all the new formulas it would have to work out by itself? Can it tell us, "here is what you have been expecting for so long: teleportation"? If so, why couldn't it do such creative things until now?
    This is the opinion I support.
    Thanks...
    • Mar 7 2012: You're talking as if technology were one thing, with some sort of essence. This is a distressingly common mistake. A piece of technology is just an arrangement of matter. So is a person. There is no reason why we should not, at some point, work out how to produce an arrangement of matter which is both. And the reason why that arrangement of matter would be able to do creative things when previous pieces of technology could not would be that we had not previously worked out how to make things that do that. Please try not to be superstitious about things like 'nature', 'life', and 'technology'. They are just categories we impose on the world.
      • Mar 7 2012: Exactly, technology is one thing, but like every single thing, technology consists of parts and elements. Any newly created machine or device interacts with the others, which is an inevitable result of our developing world. So this is a natural arrangement of matter; there is nothing superstitious about it. But look at that "matter": you seem to hold that matter is purely technology's business, and that once a sufficient level of technology is reached there will be no need for human intelligence. Why do software engineers, programmers, and mechanical, computer, or mechatronics engineers still study, then? Just to disappear after they have written some code or produced a machine? When an error occurs during its life cycle, technology can sooner or later repair itself, right? Is that it? Technology is just a tool, and every tool has a user.
        And these categories... we live with them, so they cannot be seen as isolated from our lives, any more than technology can.
        • Mar 7 2012: I have to agree with Oliver: the technology of circuit boards, processors and software is no different from neurons and cells when described in the context of AI. It is merely semantics to separate the two.

          It comes down to the old Blade Runner scenario: if there were a machine advanced enough to fool everyone into thinking it was human, then by any standard it would be intelligent.

          Technology is only a word. I think the definitions can cloud the discussion.
        • Mar 7 2012: Warren, the biggest problem with that Blade Runner scenario is that it's entirely possible that we could create machines that are advanced enough to fool everyone into thinking they're people, but which aren't actually conscious. That would mean that these machines' 'friends', 'lovers', in fact anyone who treats them as a person, would be living a lie. Nobody else on this thread seems to have twigged this despite me spamming it all over, and it has me worried. What if we were to end up handing over our civilisation to robotic 'successors' who weren't actually conscious? It would be the end of all meaning and value in the universe.
        • Mar 8 2012: @Oliver: Suppose we created them, and they are such convincing deceptions that we, with all of our intuition, logic, and inherent "humanness", are unable to tell them apart from the real thing.

          Of what value, then, are our meanings, values, and civilisations? Not worth a darn, obviously. But if they can fake it better than we can, and are not truly conscious, then what right do we have to say "we" are truly conscious?

          They would be living a lie---but who says we aren't as well? They would just be better at believing and perpetuating their lies than we are. And isn't the ability to consciously lie and deceive, to oneself, and others, as defining a human characteristic as any?
        • Mar 8 2012: Or better yet, it's UNCONSCIOUSLY lying to itself, and others. The presence of an unconscious would indicate that somewhere in those circuits there is a counterpart CONSCIOUSNESS, wouldn't it?

          You're right; the Universe might (barring the existence of extra-terrestrial intelligence) lose the greatest source of meaning and value it's ever known. But it would have gained a far craftier source, predicated, as it might have been, on a lie.

          I guess what it boils down to is that we may have to come to terms with the idea that we're just not as special as we'd like to think we are.
        • Mar 8 2012: But, you may argue that there is a difference between unconscious lying, where the truth lurks but is hidden from conscious thought, and a complete lack of any knowledge of the truth at all. Perfectly valid. But the two are functionally indistinguishable, and you might be hard-pressed to show that someone knows something they have hidden, even from themselves.

          And the rule that the lengths someone goes to in maintaining their self-deception are evidence of their unconscious knowing applies JUST AS MUCH to any automaton keen on convincing others it is real, because whether it is a "real" or a "fake" person who insists he or she is real, THEY ARE MAKING THE SAME ARGUMENTS.

          Or in the Blade Runner analogy, he or she is making similar arguments, just not as finely crafted as the machine's argument would have to be in order to fool EVERYONE ALL of the time.

          Which reminds me of an old adage by Abraham Lincoln: "You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time."
        • Mar 8 2012: Going further down the rabbit hole, because it's that time of night for me and I have nothing better to do than sit and ponder things whilst stuffing my face with dry cereal, if I were a soulless automaton trying to convince everyone I was real, I might be tempted to embrace the void and proclaim "I am not conscious, and I am not truly aware! This is the most reasonable facsimile of sentience I could come up with, but alas, it was to no avail. I did the best I could."

          To which there would be much laughter, because, after all, aren't we all just trying to do the best we can? And it's not like you would necessarily believe I wasn't conscious, either, for the very crux of your argument rests as much upon the machine's INCAPABILITY of knowing as it does upon humanity's CAPABILITY of distinguishing what is conscious from what is not, and vice versa, wouldn't you agree?

          In any case, I hope that is sufficiently "twigged", as you say. I have finished the last of my cereal and proof-read this half a dozen times or more, making additions, redactions, and revisions where I felt it necessary. These two Benadryl are about to hit me like a brick, so good night, and pleasant dreams.
        • Mar 9 2012: @Logan.
          The jury is still out on whether consciousness and the semblance of consciousness are one and the same. It might be a truth like Heisenberg's uncertainty principle, where to seem conscious one must be truly conscious. It's still heavily debated, and I suppose it's debated because we have yet to really show where consciousness comes from. As with the forward versus inverse problem, we have tests to tell whether consciousness is present, but not for the source of consciousness.

          Also, your comments remind me of a short story in "The Mind's I" called "An Unfortunate Dualist". http://themindi.blogspot.com/2007/02/chapter-23-unfortunate-dualist.html. It's a very interesting read.
        • Mar 9 2012: Logan, you seem to have missed my point. Without consciousness there would be no 'they'. There would be only some objects that move and make noises in such a way that we are fooled into thinking there is a 'they'. The idea that they really are conscious just because =from outside= they look for all the world as if they were conscious is absurd. The difference is that, while it appears =for us= as though they are conscious, there is no =for them=. You have a right to say that you are truly conscious just because you have an experience of any kind. This is Descartes' 'Cogito ergo sum', put slightly differently.

          But I'm not being a dualist about this. When I say 'from the outside' I mean 'in day-to-day life'. I expect that once we work out exactly what physical processes go on in our brains we will probably be able to discover what consciousness is, and how it happens, in objective physical terms. We should then be able to construct new consciousnesses artificially. If I'm wrong, it may be that some sort of primitive consciousness is a fundamental part of all existence, and my worries are misplaced.

          'Of what value, then, are our meanings, values, and civilisations' if we can't tell the difference between fakes and the real thing? The value we place in things (including one another) is made 'true' or 'false' not because of how the things appear to us but in virtue of the way they really are. To take an example, this is why we respect the wishes of the dead in the form of wills: we are doing right by them even though by definition they cannot know about it. Or why we would rather know the truth than believe a comforting lie. Our civilisation is of immense value, and an unconscious civilisation would be of very little, because of factors we don't yet know how to detect. That doesn't mean those valuations are mistaken.

          Interesting story, Howard. I'm always dubious of the appeal to absurdity in philosophy. The unthinkable has frequently turned out true in the past.
        • Mar 9 2012: Oliver, I feel that, contrary to your assertion, I have captured your point very well.

          What I was driving at, and the scenario I am outlining may be just as likely as yours, is the possibility that we may not be conscious: there is no 'they' because there is no 'us'. That we are (or might be), as you say of a possible machine intelligence, only objects that move and make noises in such a way that we are fooled into thinking there is a 'they' or an 'us'.

          After all, look at our very criterion for determining what is conscious. How very convenient for us! We just *happen* to have all of these characteristics. It's a bit like throwing the arrow down into the ground and painting the bullseye around it.

          So, too, is the way we as a species feel about our civilisation. It only means so much to people in general because it is *our* civilisation; these things only have value and meaning *for us*. I don't believe ours is a civilisation that could be defined in any sense of the word as a tower of conscious thought, or a victory of the rational over the universe.

          By our own broad definitions of humanitarian thought and action, we act positively irrationally towards each other, and have done so for many generations; as a whole, we are very much a failure, if not by nature's terms, survival of the fittest, then by our own.

          It is just what happened to have happened. We were (or could have been) bred with a matter-based compulsion to build stuff, and we were left alone long enough in conditions favorable to really accrue lots of stuff. And if a meteor like the one that may have helped the dinosaurs into their graves hits? We will be supplanted by something else, and the universe will not weep, because what was lost only really held meaning for humans anyway.
        • Mar 9 2012: And it seems like you are discounting any experience a machine has, precisely because it may be of a different quality, or nature, than a human's experience. It seems like you are mistakenly treating "experience" and "awareness" as meaning only *human* experience and *human* awareness. Maybe this is just an evolutionarily based phenomenon: we, as a species, only recognize human endeavors as being of any importance.

          In fact, I find your wariness of philosophy humorous, and somewhat surprising. You have, up to this point, been talking as if there is some essence to people, some "real" thing, "true" thing, or "virtue" (words you have used) that makes people "people", that goes beyond mere appearances. And don't get me wrong! I don't necessarily disagree with you.

          But what, pray tell, is this metaphysical construct that we suspect is there, whether or not people can tell the difference or not (as might be the case in the Blade Runner analogy), that goes beyond just what we can see, in day-to-day life?

          While I am not sure, one way or the other, whether we truly have some quality that is either an unknown aspect of the physical or that transcends it, you seem to be asserting that there is some virtue that is a distinct aspect of, if not entirely independent from, the physical.

          All of which sounds very---philosophical. :)
        • Mar 10 2012: Logan, I don't think Oliver has been talking "as if there is some essence to people that makes people 'people' that goes beyond appearances". It's the complete opposite. What he's saying, and I agree with him, is that people are caught up with appearances alone, and that's short-sighted. The "appearance" has a source that generates it. It's short-sighted to stop at the appearance level and think that if we can replicate the appearance, then it's something we can call "conscious".

          For instance, if I had synesthesia or had been color blind all my life and did not know that other people experience sensory information differently than I do, does that mean my experiences are real and others' are not? If I were tasked to explain what I experience, my description would be completely different. We've shown that there's a reason for the evolution of these systems: we have more sensory cells for detecting greens and reds because we need to differentiate leaves from fruit. These systems have a purpose, and so I would say that if one is color blind, something is faulty with one's system. If one were tasked to replicate this faulty system (without knowing that it's faulty), one would believe that the replicated system is correct.

          Right now, I see consciousness as something like colorblindness. It's as if we all have this flaw, and we don't have a reference for what isn't colorblindness. Since we do not know, we live thinking that our condition is acceptable. We may even try to make artificial systems that are colorblind. And since it matches what we observe, we are content.
        • Mar 10 2012: I think this has developed into a matter of wanting to have your cake and eat it, too.

          "Oh, there's nothing special about people, they're just matter. . . But look beyond just what they look like."

          "Oh, there are mechanisms that are at work beyond appearances. . . Buut it's just another kind of appearance and we don't really know what it is."

          "Oh, it just looks conscious---it isn't really, because it hasn't truly replicated the "system" just "the appearance of" the system, and moves, acts, receives inputs and produces the proper outputs like the real thing. Not that it's actually the real thing."

          Another way of saying it: "If it looks like a duck, walks like a duck, and quacks like a duck---it's not a duck, it just looks, walks, and quacks like a duck. And sometimes it flies south for the winter due to its inborn instincts, which is strange for a manufactured system that not only won't freeze in the cold, but also, technically, wasn't born, so its instincts, also artificially conceived, are rather out of place with its condition."

          Here is how I look at it.

          There is a series of books called the Discworld series. One such book in the series is called "Hogfather."

          There is a point in the story where Hex ( http://wiki.lspace.org/wiki/Hex ) decides to believe in the Hogfather ( http://wiki.lspace.org/wiki/Hogfather ). When it begins to scribble out its list of presents that it wants (which it, as a firm believer in the Hogfather, is fully entitled to do, even though it was merely programmed to believe and isn't human!!!), Death ( http://wiki.lspace.org/wiki/Death ) stops and re-evaluates what it means to be human, and what counts as legitimate human behavior.

          There comes a time when the debate becomes rather---pointless. You can move no nearer without a better definition, and if a better definition is not being offered here, then moving forward with what you have been able to come up with until a better definition becomes available is pretty much your only move.
        • Mar 10 2012: Here is a quick overview of the book itself, in case you've never read it.

          http://wiki.lspace.org/wiki/Book:Hogfather
