TED Conversations

This conversation is closed.

AI: is SENTIENCE derived from pain & pleasure?

Apologies in advance. I'm an AI noob, but...

Re: machine intelligence, could sentience/consciousness potentially be derived from the ability to feel PAIN & PLEASURE?

For example, my PC can execute code & follow orders, but it has no WANTS or DESIRES. But if it could feel P&P, it would WANT to seek pleasure & avoid pain.

Could that be enough to make it self-aware? To say "I AM"?

Could P&P be the cause for HUMAN consciousness? After all, the 2 basic human drives are:

  • To seek pleasure
  • To avoid pain

To feel pain & pleasure enables those drives, right? If AI could feel P&P, wouldn't it be conscious? In addition to electrical, might sentient AI require a chemical component?

Again, I'm new to AI. Please forgive me if this question has already been answered.



  • Jan 5 2013: Having just completed UCBerkeleyX's 188.1x course, "Artificial Intelligence," I can tell you that pain and pleasure are exactly what drive a Q-learning algorithm. After the 6-week class, I wrote code that taught itself to play Pacman, and play it extraordinarily well!

    The way it did this was through a pain/pleasure learning method. The AI has a feature function; this function takes a Pacman game state, analyzes it, and returns certain measurements, such as the distance to the next pellet, how many pellets are left, whether a pellet was just eaten, whether a ghost is about to eat you, the game score, etc.

    The first time Pacman goes out onto the field of battle, as it were, he has zero idea what any of these things mean. He keeps track of a set of coefficients, one attributed to each feature. When weighing his choices (up, down, left, right), Pacman scores each one by taking the features of the game state after the move is made and using the coefficients he has learned to compute a composite score for that move; he then takes the best-scoring move.
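    The scoring step described above can be sketched in a few lines of Python. This is my own illustration, not the course code; the feature names and weight values are made up for the example:

```python
# Sketch of linear move scoring: each candidate move's score is the sum of
# its feature values weighted by learned coefficients. Unknown features
# default to a coefficient of 1, matching the "coefficients of 1" start.

def score_state(features, weights):
    """Composite score: weighted sum of the state's feature values."""
    return sum(weights.get(name, 1.0) * value for name, value in features.items())

def best_move(moves, feature_fn, weights):
    """Evaluate the state each move leads to and take the best-scoring one."""
    return max(moves, key=lambda move: score_state(feature_fn(move), weights))
```

    So with a pellet above and a ghost one square to the left, a learned negative weight on the ghost feature steers Pacman up even though both squares hold a pellet.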

    At first, as I said, he has no idea what is going on: every coefficient starts at 1, and he is off to the races. He comes across a pellet; after eating it, he is still alive and his score goes up. Hrm, must mean eating pellets is a good thing, so he increases the coefficient for "eats pellet" a bit. From then on, if the square Pacman is about to jump onto would reward him with a pellet, he knows it is a good move.

    Next, he runs into the first ghost he sees, and dies. He learns from his death, though: the coefficient for "number of ghosts 1 square away" goes negative. That marks it as a bad thing, and in the future any square a ghost is about to step onto will score much lower. He shouldn't have to run into a second ghost; the negative coefficient alone is enough to push the total score out of "viable move" range, so he will dodge ghosts from then on.
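    The learning step, nudging coefficients after a pellet or a death, follows the standard approximate Q-learning update. This is a hedged sketch: the learning rate alpha and discount gamma are illustrative parameters of my own, not values from the class:

```python
# Sketch of the approximate Q-learning weight update: after each move, every
# feature's coefficient is nudged in proportion to how much better or worse
# the observed outcome (reward + discounted future value) was than predicted.

def update_weights(weights, features, reward, predicted, future_value,
                   alpha=0.5, gamma=0.9):
    """Adjust each feature's coefficient toward the observed outcome."""
    difference = (reward + gamma * future_value) - predicted
    for name, value in features.items():
        weights[name] = weights.get(name, 1.0) + alpha * difference * value
    return weights
```

    A big positive reward (eating a pellet) pulls the active features' weights up; a large negative reward (death by ghost) drives the "ghost nearby" weight negative, which is exactly why one death is enough to make Pacman dodge every future ghost.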
