TED Conversations


AI: is SENTIENCE derived from pain & pleasure?

Apologies in advance. I'm an AI noob, but...

RE: machine intelligence, would sentience/consciousness potentially be derived from the ability to feel PAIN & PLEASURE?

For example, my PC can execute code & follow orders, but it has no WANTS or DESIRES. But, if it could feel P&P, it would WANT to seek pleasure & avoid pain.

Could that be enough to make it self-aware? To say "I AM"?

Could P&P be the cause of HUMAN consciousness? After all, the 2 basic human drives are:

PROCREATE & SURVIVE

Feeling pain & pleasure enables those drives, right? If an AI could feel P&P, wouldn't it be conscious? In addition to electrical circuitry, might sentient AI require a chemical component?

Again, I'm new to AI. Please forgive me if this question has already been answered.



  • Jan 4 2013: IMHO, no one knows yet, but we will surely learn.

    It might be the case that P&P led to human sentience. For the sake of discussion, let us assume it did. That alone does not lead to the conclusion that P&P is a necessary requirement for AI sentience. Nor does it tell us that building P&P into an AI machine will necessarily lead to AI self-awareness.

    AI sentience might be achieved in a number of different ways, one of them being the simulation of P&P. Generally, when an ability is simulated closely enough, there comes a point when the simulation and the actual ability are indistinguishable. With respect to AI, I suspect we will witness machine abilities continue to improve over time, to the point where they equal and exceed human abilities. At some point, when a machine has many abilities that exceed human levels, we will have to recognize that the machine is sentient.
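
    To make "simulating P&P" a bit more concrete, here is a toy sketch (my own illustration in Python, with made-up sensations and numbers, not a claim about how a real AI would be built) of a machine that "seeks pleasure and avoids pain" simply by treating them as a numeric reward:

        import random

        # Hypothetical "sensations": positive numbers stand in for pleasure,
        # negative numbers for pain. The machine never "feels" anything; it
        # only keeps a running score per action and repeats what scored well.
        SENSATIONS = {"touch_stove": -10, "eat_food": 5, "do_nothing": 0}

        scores = {action: 0.0 for action in SENSATIONS}

        for step in range(100):
            # Mostly repeat the best-scoring action, occasionally try another.
            if random.random() < 0.1:
                action = random.choice(list(SENSATIONS))
            else:
                action = max(scores, key=scores.get)
            reward = SENSATIONS[action]                         # simulated pleasure/pain
            scores[action] += 0.1 * (reward - scores[action])   # running average

        print(scores)  # it ends up "preferring" eat_food and "avoiding" touch_stove

    Whether a machine that behaves this way actually WANTS anything, or is merely keeping score, is of course exactly the open question.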

    I do not believe a chemical component is necessary, but I believe that sensory awareness is necessary for sentience.

    I think that the whole question of motivation for an AI machine is given too little attention. The first AI machine might be extremely intelligent, then stop communicating for lack of any motivation to communicate.
    • Jan 4 2013: Thank you for the replies.

      Good point(s), Barry.  Machine sentience may not require P&P or chemical components.  Still, I'm thinking that, at least initially, it's easier to reverse-engineer a working model (us) than to reinvent the wheel.

      Today, electrons are the technological medium for communication & computing.  Could chemistry be another technology platform that might work in concert with electrical circuits & semiconductor technology?  Are feelings (P&P) "information"?  Is chemistry the best technology to accurately communicate & process that information?  Might a parallel processor composed of endorphins, serotonin & nerve endings be the key to machine sentience?

      RE: HUMAN sentience, if I could not feel P&P, would I WANT to do anything?  Would I have any desires or motivation at all?

      Or would I just follow orders & instructions, acquired via millions of years of evolutionary filters?

      Maybe we can't figure out how to program sentience & personal motivation, because they are illusions that result from the complexity of billions of lines of IF/THEN code, provided for us courtesy of evolution.

      Is "free will" an illusion?  Maybe Einstein was right to be a determinist.  Maybe the physical laws of the big & the small really AREN'T compatible & random chance happens ONLY at the quantum scale.

      Maybe we don't make our own choices.  Maybe we're only following a set of instructions SO COMPLEX that we don't realize we're not ultimately in control.

      Maybe machine sentience will emerge after we write the trillionth IF/THEN statement?

      Maybe I don't need any more coffee today.
      • Jan 4 2013: Reverse engineering humans is certainly a plausible approach, and some people are actively pursuing it.

        My point is that what works for us, who are sacks of chemicals, might not work at all for an electronic AI machine. Yes, it is still possible, given the very limited knowledge we now have, that sentience requires a chemical approach and all non-chemical attempts to produce sentience are doomed to failure; but we do not yet know that, and I very much doubt it. I think it is basically a matter of inputs, processing, and outputs, and that the processing could be chemical, electronic, optical, or even mechanical and still produce the same outputs.

        IMHO, we are following a set of instructions, and that is how we make choices. To an outside observer, a good chess machine is making a choice every time it makes a move, and we all know that it is following a complex set of instructions. In a sense, free will is an illusion. Free will has been the subject of argument for centuries, and all that arguing has been a big waste of time because it does not matter. What matters is that (1) we make choices and (2) what we will choose to do is unpredictable. Discussions of free will, destiny, and fate are pointless in light of these simple, observable facts.

        By the way, I was a programmer for many years, and the quantity of IF/THEN statements is irrelevant. What matters is a thorough understanding of what we want to produce. Since we basically want to simulate human capabilities, the pursuit of AI puts a renewed emphasis on "Know thyself." Also, we need to remember that we are not duplicating a human. We will have to produce the motivation for an AI machine. Since the ultimate goal is for this machine to be more intelligent than we are, the choice of that motivation, and its implementation, are critical.
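
        To put both points in concrete (and deliberately silly) terms, here is a toy sketch in Python, with made-up moves and made-up scores, nothing like a real chess program: the machine "makes a choice" every time, yet it is only following the rule it was given, and its "motivation" is whatever scoring function we happened to hand it.

            # A deterministic "chooser": from the outside it looks like the
            # machine is deciding; inside, it only applies the supplied rule.
            def choose_move(moves, evaluate):
                return max(moves, key=evaluate)

            moves = ["capture_queen", "advance_pawn", "castle"]

            # Two hand-written "motivations": swap them and the machine
            # "wants" something completely different.
            material = {"capture_queen": 9, "advance_pawn": 1, "castle": 3}
            safety = {"capture_queen": 2, "advance_pawn": 4, "castle": 8}

            print(choose_move(moves, material.get))  # -> capture_queen
            print(choose_move(moves, safety.get))    # -> castle

        The hard part is not the chooser; it is deciding what goes into that scoring function, which is why I think the question of motivation deserves more attention.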
        • Jan 4 2013: Haha!  Agreed.  I'm not fond of pointless debates on free will, destiny & the meaning of life unless I have a beer in my hand & refills nearby.  To put it in context, I just read the Kurzweil/Singularity book & was thinking about the "downloader" idea, where an aging person might download their mind to a machine & later upload it to a new body.

          I'm not an engineer, but if you remove a part from a system, you lose some functionality.  If my brain could function in a jar, apart from my nervous system, would I still be me?  Or would I lose something?

          If I removed my "mind" from my body, my nervous system, etc., by downloading to a machine, would I still have motivations & desires? Would I still laugh at the same things?  Or at all?  Would I still be self-aware?  Would I still be me?  Or just information?

          And, if I did lose something in the download, would I regain it, once I uploaded to a new body?  I.E., a complete system?

          Is "self", or sentience, solely a product of the brain?  Or is consciousness an irreducible part of a complete system, I.E., the body, the nervous system & the ability to feel P&P?

          I'm thinking maybe the person/mind/self is a product of a complete system, not just a brain?

          Now, whether we SHOULD build sentient machines that quote Sartre & ponder their existence, is another conversation ;-)
