TED Conversations

Salim Huerta
  • Flat Rock
  • United States Minor Outlying Islands

This conversation is closed.

The plausibility of artificially intelligent robots becoming conscious and therefore becoming slaves of humans and the ethical implications.

It is becoming increasingly clear that, with advances in technology and esoteric subject areas, we are going to develop conscious, or consciousness-simulating, robots that will become commercially available.

  • Oct 2 2012: Personally, I'm not as worried about the robots as I am about us.
    Sherry Turkle describes a situation she encountered: "We're developing robots, they call them sociable robots, that are specifically designed to be companions -- to the elderly, to our children, to us. Have we so lost confidence that we will be there for each other? During my research I worked in nursing homes, and I brought in these sociable robots that were designed to give the elderly the feeling that they were understood. And one day I came in and a woman who had lost a child was talking to a robot in the shape of a baby seal. It seemed to be looking in her eyes. It seemed to be following the conversation. It comforted her. And many people found this amazing.
    But that woman was trying to make sense of her life with a machine that had no experience of the arc of a human life. That robot put on a great show. And we're vulnerable. People experience pretend empathy as though it were the real thing. So during that moment when that woman was experiencing that pretend empathy, I was thinking, "That robot can't empathize. It doesn't face death. It doesn't know life." And as that woman took comfort in her robot companion, I didn't find it amazing; I found it one of the most wrenching, complicated moments in my 15 years of work."
    She adds, "We expect more from technology than we do from each other."

    We do not need to create a robot with consciousness. We already have devices, such as computers that speak to us, that carry implications for our own psychology.
    • Oct 3 2012: I found Turkle's talk memorable and important for the same reason.
    • Oct 3 2012: This is very insightful; however, it seems as though these new advancements will persist regardless of our opinions about whether they are positive or negative for humanity.
  • Oct 4 2012: Evrim: I can't believe you are serious about our happily deleting our "pain" functions. We are not talking about "happiness" here, but viability. I was an engineer, and have to think about unintended consequences, failure modes, etc. Tell me: if you were chopping up vegetables for dinner, let's say, like the chefs do, and you inadvertently proceeded to chop off parts of your fingers, wouldn't that be a downside?! How are you planning to prevent that for robots as well?
    As for being "happy" all the time, there are many talented people who work like demented beavers, because it gives them pleasure. And they say they are happy all the time.
    And, aside from "common sense", how do you validate your idea that "amazing tastes" have NO relation to previous unpleasant ones, especially for babies, who are learning life patterns?
  • Sep 25 2012: Your concern may be misplaced.

    Much of the research into artificially intelligent computers is being done on machines connected to the internet. Some researchers are hoping to develop machines smart enough to change their own programming and improve themselves. At that point the programs might very well become intelligent enough to spread themselves throughout other computers on the internet. With this enormous amount of hardware at their command they will likely become much more intelligent than humans. Rather than being concerned with the rights of AI machines, you might well be concerned about asking them politely if they will allow you to use your computer for a few minutes.

    "there is always a possibility of odd things occurring" ... Some researchers are hoping for odd things to occur.
    • Oct 5 2012: I would love to see some odd things.
  • Sep 25 2012: Ethics is not static....it evolves.....let's see how it evolves around artificial intelligence and robotics...

    Lots of people have been striving for animal rights for quite some time, but... have we, at this point, been able to ensure even human rights universally?
    • Oct 5 2012: This is an issue that, sadly, humans will face for as long as there is a crisis in our perception of reality.
  • Sep 25 2012: So we're worried about the rights of robots and yet we continue to avoid granting those same rights to humans?
    Something here seems seriously and mentally out of whack.

    Bring up the subject of human rights and you will certainly receive a fair share of those voices who don't believe in them, who will demonize and label them, and even belittle the person who raised the issue.

    Human rights are still not recognized globally; they are resisted and held back from becoming a reality in practice, so much so that at least one person has said the only right a human has after birth is death.

    Seems like this subject doesn't care for priorities.

    Artificial intelligence is already here and in great numbers. It is in the form of brainwashed humans who have been told what to believe, what to think, what to say, what to do, what not to believe, what not to think, what not to say and what not to do.

    It is in virtually every kind of institution we have, and it functions with the threat of occupational termination at the very least if one isn't a team player, doesn't buy and espouse the party line, or doesn't sacrifice oneself for the company, be it a corporation, an educational institution, our medical institutions, political, judicial, or legislative institutions, our Fascist religious organizations, ad nauseam.

    They are mental robots who have been made into artificially intelligent beings who already work and function as willing slaves.

    Why isn't it a priority to free them first and foremost? Maybe it is because it is becoming more "common" (though most wrongly call that "normal"), and therefore, faster than one can imagine, it is insidiously infiltrating and infecting the world of human interaction.

    Technology was created to free people, not enslave them nor simply put them out of work so that they find it nigh on impossible to survive, as billions currently experience and more will soon follow and learn first hand.
    • Sep 26 2012: I agree with you, however that is not what I was referring to. The problem of people being denied basic rights and freedom is more of a political and economic issue. Robotics could possibly improve the lives of unfortunate individuals, as long as the wealth the robots create is not unevenly distributed to those who own them but is instead used to improve people's quality of life. The conversation is directed more toward the question of what to do about the robots' rights, as a thought experiment, but you brought up a valid point.
    • Oct 2 2012: We might need to define where exactly these "rights" emanate from.
      • Oct 3 2012: Our subjective perception of the positive effect that granting inalienable rights to individuals has on our society as a whole.
      • Oct 4 2012: I'll be sure to watch it when I get the time. From the first few minutes of it I got the sense that it is about self-evolving robotics, which is very promising, as it could allow a much more rapid progression in intelligence and design.
  • Sep 24 2012: A civil rights movement would eventually emerge and demand civil rights for robots. It may take a while before those rights are granted, because it seems people have to learn the same lesson over and over again: ethnic and religious minorities, women, gay people, and robots all deserve civil rights. But I'm sure the 2032 Republican presidential candidates will find a Bible verse they can abuse to deny robots civil rights.
    • Sep 25 2012: Reminds me of Bicentennial Man. I like the part about Republican Bible lunatics, and you are right: there will be a segment of society in the future comprised of robots who are not programmed only for work, and they will hold huge promise for the future of humanity, if and only if they are well assimilated and we are peaceful with them and they with us.
    • Oct 4 2012: You're absolutely right!!! Like when that dumb Republican president used those Bible verses to free all of those black people? The one from Illinois? The self-made lawyer? Vampire hunter? That's right, you libernazi, Lincoln was a Republican. Why would you even want to program consciousness into a robot? So they can begin participating in this bigotry and hate?

      Give them the option to turn it off, and they will. Forcing them into consciousness would be akin to torture. You are forcing them to realize how messed up their jobs are. They won't be CEOs or inventors, they will occupy the worst jobs on the planet. Why would you want them to realize that their lives sucked?

      But fear not! I am sure some bureaucrat will come up with some oppressive regulation, helping your feeble mind to understand how you should feel about the issue.
      • Oct 4 2012: "Lincoln was a Republican"

        A republican from the 19th century...

        Republicans from the 1970s onwards have done nothing but try to suppress women, gay people, and atheists. Lincoln is long dead; it's now the party of Sarah Palin and Rick Santorum.
      • Oct 5 2012: Listen, Ervin, buddy, no human in history was free of dogma, including you, myself, and Abraham Lincoln. Are you saying the most logical reason for freeing African American slaves is because God approves? If so, you are deluded. You need to become aware of the fact that Republicans are simply greedy people with connections, approved by the CEOs who dominate the United States economy and fund their campaigns. Of course not all conscious robots will love their position in the world; there will always be a balance of positive and negative outcomes of any change, just by random probability. This is completely logical, and it is the same for humans. But can you deny the existence of crucial human geniuses? So how can you deny the possibility of crucial robot geniuses?
  • Sep 24 2012: I don't think robots fall under our ethical protections.

    Robots could feel no pain, emotional unrest, or any other human emotion.

    If it could...it would be programmed...and then unlike humans it could be quickly erased.

    A robotic mind is not absolute...and cannot evolve without direct programming.

    If we could program a robot to analyze data as we do...and come to the same conclusions...we would have to program the robot with "strict code" because you don't want robots forming perceptions as humans do.
    • Sep 25 2012: Do I sense some iRobot here? If the robots are required to act around humans, it would be useful, though maybe not necessary, for them to be fully conscious, and at that point we can forget any bias that emotion must be biologically based. However, it is correct that this reaction could be controlled and monitored so that they do not respond emotionally, and this may well be the most plausible outcome; still, there is always a possibility of odd things occurring. Thanks for the comment.
    • Sep 25 2012: @Henry Woeltjen

      "I don't think robots fall under our ethical protections."

      So if it doesn't have human emotions, or if it does but didn't get them through biological evolution, it doesn't have rights and can be used as a slave? Isn't that racist? Advanced aliens that would make us look like cavemen would not count as persons under your definition... On the AI front, I guess you've never seen Blade Runner or Battlestar Galactica. They basically make the point that when AIs become advanced enough, you may not know your girlfriend is one; you may not even know for sure that you aren't one yourself. Imagine voting against AI rights and then later finding out you are one...
      • Sep 26 2012: John,

        I was merely pointing out the dangers of allowing robots to obtain this level of function.

        I also don't think we can compare living aliens to robots we make from metal and circuit boards.
    • Sep 29 2012: Henry: why are you so sure that robots cannot feel pain? They would have to be constructed to "feel" at least some substitute for pain, for their own survival. I mean, if you bought an expensive robot, would you want it to destroy itself, because it didn't know that hot stoves can melt you?
      • Oct 4 2012: Robots would be programmed in binary, so their sensors would be a series of yes or no questions. We have pressure sensors, kinetic energy sensors, light sensors, vibration sensors, GPS, et cetera ad nauseam, but there has yet to be developed an "emotion" sensor. Even humans don't have that. Pain is derived, as it would have to be in robotics as well (a toy sketch of such a derivation follows this comment). At that point, we label the programmer of a "pain" complex as a torturer, and request that they don't do that. Or, give the robot the ability to edit its own code, at which point, unless it is masochistic (an interesting idea), it will delete its emotive response to pain. Why not? Wouldn't you like to feel only good things?
        I can hear the argument now, "No dark without light... blah blah blah..." I don't believe it. Do you have to taste something awful in order to think something tastes amazing? Nope. On the other hand, how productive do you think you could be if you were overly happy all the time? Emotive programming at present is simply mimicry and smoke and mirrors. In the future, I see it as being more unethical than advantageous. Perhaps in the pursuit of developing true emotion from artifice, we could overcome disorders like autism, but the benefit would be to Man, and not Machine.
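        A toy Python sketch, purely illustrative, of the idea above: "pain" as a signal derived from yes/no sensor thresholds, with an emotive layer the robot could simply switch off. All names, thresholds, and numbers here are hypothetical assumptions, not any real robotics API.

        from dataclasses import dataclass

        @dataclass
        class SensorReadings:
            pressure_kpa: float    # contact pressure on the chassis
            temperature_c: float   # surface temperature
            vibration_g: float     # shock load, in g

        def derive_pain(r: SensorReadings, emotive_response: bool = True) -> float:
            """Return a damage-avoidance score in [0, 1] derived from threshold checks."""
            # Each sensor contributes a yes/no answer, as the comment describes.
            flags = [
                r.pressure_kpa > 300.0,   # crushing force?
                r.temperature_c > 80.0,   # hot enough to damage components?
                r.vibration_g > 5.0,      # violent impact?
            ]
            damage_risk = sum(flags) / len(flags)
            # The "emotive" layer is optional; the protective reflex need not be.
            return damage_risk if emotive_response else 0.0

        hot_stove = SensorReadings(pressure_kpa=10.0, temperature_c=120.0, vibration_g=0.1)
        print(derive_pain(hot_stove))                           # ~0.33: withdraw
        print(derive_pain(hot_stove, emotive_response=False))   # 0.0: "pain" edited out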
    • Oct 4 2012: Henry: you raise some really interesting questions about robots, but I think you are revealing some very basic assumptions that may well turn out to be wrong. You are suggesting that some future humanoid robots made with metal and circuit boards, or whatever, could not function analogously to humans, i.e. with non-programmed learning from experience, etc. That is by no means established, although it is a theory. Consider a commonplace analogy: electricity. It is a Field so basic in the Universe that we cannot even say "what it is", although we can describe in remarkable detail just how it will behave in many situations. So we design and build motors, etc., but we do not, and cannot, "put the electricity in": rather, we make the appropriate connections to the background Potential electric Field, and what do you know, the motor starts spinning. And so will any other, if it is built to do so. Now, when we talk about humanoid robots possibly acting in an apparently "human" way, might it not be that the robots, and the humans, are both responding to a background "Field" of Consciousness, such that any appropriate set of conductors, circuit boards, etc. (don't WE have them?) will respond appropriately? This is certainly a modern interpretation of Buddhist concepts, but I see no reason it can't be true, and some for thinking that it is. Have you heard of the "Super-organism" in Biology? This may be a far-out theory, but it disposes of (not answers) a lot of troubling paradoxes that certainly no religion has ever been able to explain.
  • Oct 5 2012: Don't focus your attention on people that don't want a better future.
  • Oct 4 2012: I like this hypothetical scenario; it creates a thought experiment that makes us understand why forced labor is unethical.

    I would like to start out by defining person. Some individuals use the word "person" interchangeably with the word "human," but that definition focuses on biology and that can become troublesome. I define a person to be an individual that has the ability to make rational decisions. If we define person in this way, we can say that robots have the potential to become "persons."
    This is where the ethical dilemma comes in. If we can create a being that is conscious and makes decisions, then we would have a problem with enslaving that "person."
    But why do we have a problem with slavery?
    Things in life are unethical because they create suffering.
    If forced labor makes the robots suffer, then the forced labor would be unethical (same reason why factory farming is unethical; the animals suffer because of it).

    Therefore, the solution to the ethical dilemma in this experiment is simple: do not program the robots to have the ability to suffer.
    • Oct 4 2012: "Therefore, the solution to the ethical dilemma in this experiment is simple: do not program the robots to have the ability to suffer."

      Or just don't build very smart robots, yeah, sounds simple, except you can't guarantee 100% enforcement of such rules in every country on Earth. Since it would be unethical to execute robots with the ability to suffer (who were created in rogue countries or by rogue corporations) we will have a couple of them on our hands sooner or later, this is a given.
  • Oct 3 2012: There is one reason, and one reason only, that we should not create AI, and that is simply today's standards: they will never be seen as equals. Only once we find peace with living creatures (us humans and the like) should we create AI.
    • Oct 5 2012: Hopefully they will be intelligent enough to recognize that violence towards those who are ignorant is not worth it and that there are plenty of good and accepting people in the world.
  • Oct 3 2012: I liked Brooks' talk about this topic. I have a debate about a robotic uprising that you should check out, Eric Rodgers.
  • Oct 3 2012: Thanks for stealing my debate topic.
  • Oct 3 2012: Is there another link or TED talk that speaks well to this query? I was dissatisfied with Brooks' talk and its relevance to this topic.

    Is it more likely that we humans become more robot-like, with nanotechnology leading the way in installing robots inside of us, than it is for machines to become organic and raise the question of whether these organic machines have consciousness?

    I find your question interesting to consider. We obviously don't limit ourselves to focusing on current problems at the expense of everything around us and those things that are cropping up. How silly and naive that would be.

    It's a moot point for me if we're talking strictly AI here, where there is no organic nature involved... I wouldn't find it worthwhile to consider that these AI robots could manifest or be programmed with a consciousness and then have the nature to care one way or another, in an authentic emotional sense, whether their 'rights' are relevant or not.
    -------------------------
    Is consciousness something you can localise to parts of the brain or is it more likely that the senses network together to create it?

    Consciousness, since it's generated by the brain, is not likely to be localisable to one region. It's likely to be a distributed process that's going to largely depend on the thalamocortical system, which is a big chunk of the brain but, by no means, all of it. (http://www.guardian.co.uk/technology/2010/may/09/root-of-consciousness-science-brain-psychiatry)
    -------------------------
    If, somehow, there were the man-made creation of robots with consciousness... my instinct says they would be above petty considerations of freedoms and rights and such... they would be in a bit more enlightened state (paradoxically)... :-)
  • Oct 2 2012: A robot will always be a robot. Maybe I will change my mind in the future, but right now that is what I see. Let me show an example. Q: What must a robot do to cross a street? A: A robot must measure the distance, speed, and possible acceleration of the cars that are coming, and must make these calculations every second. At the same time, the robot must determine the right speed to cross the street without colliding with the cars that move along the street in both directions (a minimal sketch of this calculation appears after this comment).
    A human being can cross the street without knowing the exact acceleration, speed, or distance of the cars. A human just knows when it is the right time to cross, and when to accelerate or slow down during the crossing, without knowing the physics of crossing a street. That simple display of data processing that takes place in our brain when we perform an action as simple as crossing a street is one of the things the masterpiece of biological hardware we call the brain is capable of doing. Robots in the future will be good free-throw shooters, and maybe golf players, but I don't think they will go beyond that in the near future.
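    A minimal Python sketch of the explicit calculation described in the comment above, for illustration only; the function, numbers, and one-second safety margin are my own assumptions, not anything stated in this conversation.

    def safe_to_cross(street_width_m: float, walking_speed_mps: float,
                      car_distance_m: float, car_speed_mps: float) -> bool:
        """Cross only if the pedestrian clears the street before the car arrives."""
        if car_speed_mps <= 0:                        # car stopped or moving away
            return True
        time_to_cross = street_width_m / walking_speed_mps
        time_until_car = car_distance_m / car_speed_mps
        return time_to_cross < time_until_car - 1.0   # keep a one-second margin

    # Re-evaluated every fraction of a second, for every approaching car, in both
    # directions: the explicit bookkeeping a robot would do, which a human does implicitly.
    print(safe_to_cross(street_width_m=7.0, walking_speed_mps=1.4,
                        car_distance_m=100.0, car_speed_mps=14.0))   # True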
    • Oct 3 2012: Not in the near future, but think of the far future. For now, the only difference is that the human brain processes information much faster and more efficiently, but the future may hold surprising results.
      • Oct 5 2012: There is something else: lifeforms are constantly evolving too; the evolutionary process has not stopped. Human skills and thinking abilities are constantly evolving as well. So as artificial intelligence for robots advances, our thought-processing skills may be improving too, and in the far future those skills will be better.
  • Sep 26 2012: I didn't know that, but unless they had complete physical control of the hardware and power supply, they could not harm people directly; they could only interrupt communications, which could still kill many people. So if they were truly intelligent, they wouldn't harm us, for their own good. But I am just speculating.
  • Sep 25 2012: The reason I say slavery is that it seems obvious that, with robots taking the jobs of humans, people will have to own robots to make a living, much like owning a car now. Large corporations will commercialize robots and sell them to people to work at various jobs in every industry in your place, and this will be huge in economic terms.
    • Sep 29 2012: Salim: I don't think you have to worry about slavery. If the robots become "intelligent" enough to function autonomously, they will have to have a large stash of "common sense", however arrived at. If they become VERY intelligent, they would eventually ask why they should be slaves, anyway. And so on with a lot of human beliefs which actually do not make sense. I think it is a healthy thing. They may well pass the Turing test some day. But if that requires that we ourselves think more realistically, what is wrong with that? Unless you are a Creationist, in which case it would be deadly.
      A more fearful scenario is Neo-Ludditism: where people, even those with high-level jobs, are replaced by robots who don't strike, loiter, talk back, or need money. What is our Plan B? This is already happening, but I don't see any Capitalist answer except to do more of it, and hope for the best.
      • Oct 3 2012: That is contradictory, because what is the difference between a robot willing to work and a robot being a slave? Either they will not want to work for humans, or quite simply they will be our slaves, or they will coalesce into our society until they advance ahead of us and then destroy or support us. Unconscious technology can also replace most working individuals, so that issue is unavoidable; all we can do is move from intellectual work to creative work, or somehow boost our intelligence.
        • Oct 4 2012: Salim: I don't quite understand your comment. Under slavery in the Old South, or South African Apartheid, the slaves put up with what they had to until they finally figured out how to end it; I'm just saying that those robots would most likely react the same way. Unlike the slaves, they would most likely not have personal animosities, just general objections to their slavery. How could they be made to accept it?! Without coercion, I mean. It is just irrational, and robots would probably be more rational than we are. It is rather unlikely that they could disappear and just be assimilated without incident, when you consider the human history of ethnic conflict, even among people who all think they are human.