Arkady Grudzinsky



Is artificial intelligence possible without artificial pain and pleasure?

My view may be naive. I would appreciate some insights.

It seems to me that in everything we do, we try to minimize some pain or suffering and gain some pleasure or satisfaction. Pain and pleasure shape our emotions. Humans don't do research for its own sake. Research always has a goal: to minimize suffering or to bring fun or satisfaction.

Artificial intelligence seems to be, mostly, about problem solving, achieving a goal set by humans. Yes, computers can get very good at it. But what would drive machines to ask their own questions, create their own problems unless they can feel pain or pleasure? Would they be able to set goals for themselves, improve themselves?

  • Oct 9 2012: "Is artificial intelligence possible without artificial pain and pleasure?"

    At some level, minimizing a cost function will always provide some sort of "pleasure" to an artificial intelligence, and a high value of that cost function will always "hurt". Of course, you can make it so that the artificial intelligence has no regard for its exterior, so physical damage won't "hurt", but every artificial intelligence will feel "disappointment" when it fails at some task.
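The cost-function framing above can be made concrete with a toy sketch (the functions and numbers here are invented for illustration, not taken from any real AI system): whatever cost remains after the agent has minimized as best it can plays the role of "disappointment".

```python
# Toy model: an agent that minimizes a one-dimensional cost function
# by plain gradient descent with a numerical (central-difference) gradient.

def minimize(cost, x0, lr=0.1, steps=200):
    x, h = x0, 1e-6
    for _ in range(steps):
        grad = (cost(x + h) - cost(x - h)) / (2 * h)  # central difference
        x -= lr * grad
    return x

def solvable(x):    # cost reaches 0 at x = 3: the task can fully succeed
    return (x - 3) ** 2

def unsolvable(x):  # cost is bounded below by 5: some "pain" is irreducible
    return (x - 3) ** 2 + 5

x1 = minimize(solvable, 0.0)
x2 = minimize(unsolvable, 0.0)
print(round(solvable(x1), 3))    # -> 0.0  (task solved: no "disappointment")
print(round(unsolvable(x2), 3))  # -> 5.0  (residual cost: irreducible "disappointment")
```

Whether that residual number counts as "pain" in any meaningful sense is, of course, exactly the question of this thread.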
    • Oct 9 2012: That's when humans set a task for the machine. Machines can get better at this task and "learn" by evaluating predefined feedback variables and adjusting predefined behaviors.

      But how will machines determine a task for themselves? Will they be able to determine which feedback variables to evaluate and develop new types of behaviors to survive in completely new and unfamiliar situations?

      E.g., Mars rovers are programmed to make decisions in the Martian environment, with no water and with extreme temperatures. But can they learn to avoid water or fire unless they are programmed to do so? How would they determine that being submerged in water is an "unpleasant" or "potentially dangerous" experience until their circuits are blown?

      Perhaps, machines can be programmed to avoid unfamiliar situations and environments altogether. But that will bar them from learning. The key to human progress is to venture into unfamiliar territories and environments.
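The tension described above, avoiding unfamiliar situations versus learning from them, is what the reinforcement-learning literature calls exploration versus exploitation. A toy two-armed bandit sketch (the reward values and function names are invented for illustration): an agent that never tries unfamiliar options can lock onto a mediocre choice, while one that occasionally explores tends to find the better one.

```python
# Toy two-armed bandit: arm 0 pays about 1.0 on average, arm 1 about 5.0.
# An agent with epsilon = 0 never tries unfamiliar arms; with a small
# epsilon it occasionally explores and can discover the better arm.
import random

def run_bandit(epsilon, true_values=(1.0, 5.0), rounds=2000, seed=0):
    rng = random.Random(seed)
    estimates = [0.0, 0.0]   # the agent's learned value of each arm
    counts = [0, 0]
    total = 0.0
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(2)                 # explore: try anything
        else:
            arm = estimates.index(max(estimates))  # exploit the best-known arm
        reward = true_values[arm] + rng.gauss(0, 1)
        counts[arm] += 1
        # incremental average of the rewards observed for this arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / rounds

print(run_bandit(epsilon=0.0))  # never explores: can stay stuck near 1.0
print(run_bandit(epsilon=0.1))  # explores 10% of the time: average climbs toward 5.0
```

The exploring agent pays for its curiosity round by round, which is the machine analogue of venturing into unfamiliar territory.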
  • Oct 17 2012: Re: "Why is there so much interest in creating an AI exactly like us?"

    What is "intelligence", anyway? The dictionary defines it as "the ability to acquire and apply knowledge and skills." Does an amoeba have intelligence in the sense that it acquires information (e.g., about the temperature of the environment) and applies it (e.g., moves away from areas too cold or too hot)? That kind of intelligence seems to exist already.

    The question is, will machines ever engage in scientific and technological research beyond their survival needs and, if so, what would motivate them?
  • Oct 11 2012: "Pleasure" and "pain" humanise it too much. It's really just positive and negative feedback. You program your robot to avoid some things and seek out others. If it's really smart, it can learn from negative experiences, e.g., it senses high temperatures and at the same time experiences loss of function, so in the future it avoids high temperatures.
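The high-temperature example can be sketched in a few lines of code (a toy model; the damage threshold and the class are invented for illustration):

```python
# Toy model of learning from negative feedback: the robot pairs a sensor
# reading (temperature) with a damage signal, and afterwards refuses to
# enter anything at or above the lowest temperature that ever hurt it.

class Robot:
    def __init__(self):
        self.bad_temps = []              # temperatures remembered as harmful

    def experience(self, temp):
        damaged = temp > 80              # hidden "physics": heat causes loss of function
        if damaged:
            self.bad_temps.append(temp)  # the negative feedback is stored
        return damaged

    def willing_to_enter(self, temp):
        if not self.bad_temps:
            return True                  # nothing has hurt yet: no aversion
        return temp < min(self.bad_temps)

bot = Robot()
for t in [20, 95, 40, 110, 60]:          # a phase of (painful) trial and error
    bot.experience(t)

print(bot.willing_to_enter(25))   # -> True: cool places are still fine
print(bot.willing_to_enter(100))  # -> False: learned aversion to heat
```

The open question in the thread still stands, though: the damage signal itself (`temp > 80`) is something the designer wired in, not something the robot chose to care about.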
    • Oct 14 2012: By doing that, we might create a machine that is capable of surviving. An amoeba can do that. There is a long way from amoeba to humans.

      Computers are good at solving problems when they know what the problem is. In other words, we may be able to create a car that can find the nearest grocery store for us and drive us there, but can we create a car that will know where to drive without us telling it? How will machines determine what they "want"?

      It seems that our ability to create our own problems is what makes us intelligent :-). Another paradox...
      • Oct 14 2012: We only think we create our own problems. We're actually hardwired to survive and procreate; that is what drives us. Getting groceries is just one step toward attaining our goals of survival and procreation.
        • Oct 14 2012: Tortoises are also hardwired to survive and procreate. They have beaten the dinosaurs at that and, perhaps, will survive humans as well. That does not make them intelligent.

          What does writing music or sending probes to Mars have to do with survival and procreation? Many things humans do are way above and beyond the "bare necessities" and basic survival needs. Some behaviors are even self-destructive. Where do we come up with this stuff?
  • Oct 10 2012: An AI can feel pain or pleasure if it develops a sense of self, an ego;
    the "being" will become aware of its mortality.
    Sounds pretty familiar, doesn't it? :)
  • Oct 10 2012: It would have to mimic our brain and nervous system. Though I'm not a programmer, I would say it would be built to act exactly like a human would in real life. It would be hell for it, not being able to interact physically with our reality, if it was given the capability to perceive us.

    Why is there so much interest in creating an AI exactly like us? If you look at sci-fi going back to Asimov and Heinlein right up to now you will see we have this strange fascination with wanting to create one.
    • Oct 10 2012: Ironically, the fascination often comes from the very same people who do not believe that humans could have been created :-). (Don't take it as a creationist argument, please. It's just an observation of yet another fascinating inconsistency.)
      • Oct 11 2012: That's interesting.

        There is a problem with a program that can feel: how does it feel? When you look at it, you would have to model it in the virtual to mimic peptide release, because, well, what else have we got to model it after that would give a similar result? I would say we would have to design it with everything, run it through some evolution programs, and see how it turns out.

        Another Tedster linked me this new innovation in chip and board design.


        Maybe in the future it will be a mix of synthetic biology and hardware, a synthborg.
        • Oct 17 2012: As someone else has noted, "create artificial intelligence" is a redundant phrase. The problem is to "create intelligence" which is indistinguishable from our intelligence. That does not seem possible to me without machines having emotions, feelings, and sensations. Doesn't "artificial intelligence" sound like "fake intelligence"? Like, e.g., an "artificial potato" which looks and, perhaps, tastes like a potato, but is not really a potato.

          I've heard another good comparison: "computers are as smart as a lawn mower", meaning that they do certain operations much faster than humans, but that's about it.
  • Oct 10 2012: Once a computer program can intelligently change itself and improve itself, there is no way of predicting what motivation it might develop. An AI program will not evolve in the sense that life has evolved, so there is no reason to believe that it will develop biological-type emotions. IMO, it will eventually develop a single motivation that will shape its every response. It certainly will not have human emotions, and probably will not have any emotions. I suspect the end product will be very different from the usual fictional depictions.

    AI researchers are being very foolish. Much of this research is being done on computers connected to the internet, and some of the researchers are eager to have their intelligent programs learn from all that the internet has to offer. The result will be completely unpredictable, and could have very negative results.
  • Oct 9 2012: Some other interesting questions, perhaps, for another thread: if and when machines are able to feel pain, suffering, fear, etc., will they develop religion and worship humans? Would they get an idea of being "created"? Will they ever learn to cooperate unless they feel compassion or love?

    I doubt there is an answer to these questions now. Humans cannot answer these questions about themselves.
  • Oct 9 2012: Fascinating question!
    Robots would be able to set goals for themselves in a different way from us.
    What we've thought to be a problem may not be a problem for them. Rather, they'd easily be able to handle a bunch of problems compared to us. Nevertheless, "no sweet without sweat." Since they wouldn't happen to know what it's really like to feel pain and sheer pleasure from what we call "a life", their way of solving difficult problems and overcoming obstacles could be a little too "artificial". In that way their creativity could be limited.

    But perhaps, in the future, human beings will create something called ‘artificial creativity’, ‘artificial senses to feel pain and pleasure’, and ‘artificial memories from real human brains’ to make robots more perfect. “The robots” that evoke our vague fear might appear someday, really….
  • Oct 9 2012: I think it is possible as long as you have a big heart.
  • Gail, Oct 9 2012: A computer can see probabilities. It can then assess the most efficient route to take to get to a given goal/conclusion.

    Human suffering is a choice. It is not essential to the human experience. It is learned behavior that stems from a belief in human-kind's perceived vulnerability. Perceiving vulnerability is a choice that one who understands his/her invulnerability can still choose, but probably wouldn't want to. Suffering doesn't feel good.
    • Oct 9 2012: Emotional suffering is a choice, to a certain degree. But when we put our hand into fire, we have no choice about whether to feel pain. Pain IS the experience. This is how we learn to survive.

      Humans can program machines to avoid certain things. But will machines ever learn to survive on their own if they don't experience pain?
      • Gail, Oct 10 2012: I spoke from another context than the one you perceived. I spoke from the context wherein we create our own realities. There are no accidents for one who deliberately manifests consequences. There is another way to learn, and that is from becoming aware of who and what a human is, and how the human works most effectively AS a machine, of sorts.