TED Conversations

Salim Huerta
Flat Rock, United States Minor Outlying Islands

This conversation is closed.

The plausibility of artificially intelligent robots becoming conscious and therefore becoming slaves of humans and the ethical implications.

It is becoming increasingly clear that, with advances in technology and in esoteric subject areas, we are going to develop conscious, or consciousness-simulating, robots that will become commercially available.


Showing single comment thread.

  • Sep 24 2012: I don't think robots fall under our ethical protections.

    Robots cannot feel pain, emotional unrest, or any other human emotion.

    If it could... it would be programmed... and then, unlike humans, it could be quickly erased.

    A robotic mind is not absolute...and cannot evolve without direct programming.

    If we could program a robot to analyze data as we do, and come to the same conclusions, we would have to program it with "strict code," because you don't want robots forming perceptions the way humans do.
    • Sep 25 2012: Do I detect some I, Robot here? If robots are required to act around humans, it would be useful, though perhaps not necessary, for them to be fully conscious; at that point, forget any bias toward biologically based emotion only. You are right, however, that this reaction could be controlled and monitored so that they do not respond emotionally, and that may well be the most plausible outcome. Still, there is always the possibility of odd things occurring. Thanks for the comment.
    • Sep 25 2012: @Henry Woeltjen

      "I don't think robots fall under our ethical protections."

      So if it doesn't have human emotions, or if it does but didn't get them through biological evolution, it doesn't have rights and can be used as a slave? Isn't that racist? Advanced aliens that would make us look like cavemen would not count as persons under your definition. On the AI front, I guess you've never seen Blade Runner or Battlestar Galactica; they basically make the point that when AIs become advanced enough, you may not know your girlfriend is one. You may not even know for sure that you aren't one yourself. Imagine voting against AI rights and then later finding out you are one...
      • Sep 26 2012: John,

        I was merely pointing out the dangers of allowing robots to obtain this level of function.

        I also don't think we can compare living aliens to robots we make from metal and circuit boards.
    • Sep 29 2012: Henry: why are you so sure that robots cannot feel pain? They would have to be constructed to "feel" at least some substitute for pain, for their own survival. I mean, if you bought an expensive robot, would you want it to destroy itself because it didn't know that hot stoves can melt it?
      • Oct 4 2012: Robots would be programmed in binary, so their sensors would amount to a series of yes-or-no questions. We have pressure sensors, kinetic energy sensors, light sensors, vibration sensors, GPS, et cetera ad nauseam, but no one has yet developed an "emotion" sensor. Even humans don't have one. Pain is derived, as it would have to be in robotics as well. At that point, we label the programmer of a "pain" complex a torturer, and request that they not do that. Or we give the robot the ability to edit its own code, at which point, unless it is masochistic (an interesting idea), it will delete its emotive response to pain. Why not? Wouldn't you like to feel only good things?
        I can hear the argument now, "No dark without light... blah blah blah..." I don't believe it. Do you have to taste something awful in order to think something tastes amazing? Nope. On the other hand, how productive do you think you could be if you were overly happy all the time? Emotive programming at present is simply mimicry and smoke and mirrors. In the future, I see it as being more unethical than advantageous. Perhaps in the pursuit of developing true emotion from artifice, we could overcome disorders like autism, but the benefit would be to Man, and not Machine.
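        The "pain is derived" idea above can be sketched as a small function that combines binary sensor readings into a single derived label. This is only an illustration of the commenter's point, not anyone's actual robotics code; all names here are invented.

        ```python
        def derive_pain(sensors: dict) -> bool:
            """Combine yes/no sensor readings into a derived 'pain' label.

            Each sensor answers a yes-or-no question, e.g. "is pressure
            above the damage limit?" -- there is no dedicated "pain"
            sensor, only a label computed from ordinary readings.
            """
            damage_signals = [
                sensors.get("pressure_over_limit", False),
                sensors.get("temperature_over_limit", False),
                sensors.get("impact_detected", False),
            ]
            return any(damage_signals)

        # A tripped threshold is the only sense in which the robot "feels pain":
        print(derive_pain({"temperature_over_limit": True}))  # True
        print(derive_pain({}))                                # False
        ```

        A robot that could edit its own code, as the comment suggests, could simply delete or short-circuit such a function, which is exactly why the pain response would have to be protected if it is meant to serve survival.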
    • Oct 4 2012: Henry: you raise some really interesting questions about robots, but I think you are revealing some very basic assumptions that may well turn out to be wrong. You are suggesting that some future humanoid robots made with metal and circuit boards, or whatever, could not function analogously to humans, i.e. non-programmed learning from experience, etc. That is by no means established, although it is a theory. Consider a commonplace analogy: electricity. It is a Field so basic in the Universe that we cannot even say "what it is," although we can describe in remarkable detail just how it will behave in many situations. So we design and build motors, etc., but we do not, and cannot, "put the electricity in"; rather, we make the appropriate connections to the background potential electric Field, and what do you know, the motor starts spinning. And so will any other, if it is built to do so. Now, when we talk about humanoid robots possibly acting in an apparently "human" way, might it not be that the robots and the humans are both responding to a background "Field" of Consciousness, such that any appropriate set of conductors, circuit boards, etc. (don't WE have them?) will respond appropriately? This is certainly a modern interpretation of Buddhist concepts, but I see no reason it can't be true, and some for thinking that it is. Have you heard of the "Super-organism" in Biology? This may be a far-out theory, but it disposes of (rather than answers) a lot of troubling paradoxes that certainly no religion has ever been able to explain.
