
Can a robot become human? What will it be like to interact with an intelligent Robot? And how will we know when we do?

Not IF but WHEN . . . What if a Robot develops a mind of its own? And how should Human Beings respond to that?

As a starting point, I reference the work of science-fiction author Isaac Asimov and his 3 Laws of Robotics. The Three Laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

A zeroth law was added later:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
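Asimov's laws form a strict priority ordering, which is easy to sketch in code. The following toy Python sketch is purely illustrative -- the flag names are invented here, and real machine ethics is nowhere near this simple:

```python
# A toy sketch of Asimov's laws as a strict priority ordering.
# All flag names are hypothetical, invented for this illustration.

LAWS = ["harms_humanity", "harms_human", "disobeys_order", "endangers_self"]

def violation_rank(action: dict) -> int:
    """Index of the highest-priority law the action violates
    (0 = Zeroth Law); len(LAWS) means no law is violated."""
    for rank, flag in enumerate(LAWS):
        if action.get(flag):
            return rank
    return len(LAWS)

def choose(actions: list) -> dict:
    """Pick the action whose worst violation is least severe."""
    return max(actions, key=violation_rank)
```

Note how the "except where such orders would conflict" clauses fall out of the ordering: faced with a harmful order, disobeying it (a Second Law violation) scores as less severe than obeying it (a First Law violation), so the robot refuses.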

In 2011, the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC) of Great Britain published a set of five ethical "principles for designers, builders and users of robots."

1. Robots should not be designed solely or primarily to kill or harm humans.

2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.

3. Robots should be designed in ways that assure their safety and security.

4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.

5. It should always be possible to find out who is legally responsible for a robot.

The above five ethical principles are NOT the same as Asimov's three laws. But is this enough? And how will we know when the time comes?

  • May 23 2013: Robots will never become humans; however, some day robots will blend with humans... Imagine nano-machines healing ill cells, repairing broken bones and all sorts of injuries, cleaning your teeth (no need to brush), trimming your nails, keeping your hair at the same length, defending you from mosquitoes and other nasty bugs, improving your senses and giving you new ones.

    Regarding your second question, if someday a robot with "human like" intelligence is developed, interacting with it will be just like interacting with another human being. So the answer to your third question is: if you are able to tell a robot from a human, then the robot is not really intelligent (in a human context). In other words, you won't be able to tell a human being from an intelligent robot.
    • thumb
      May 23 2013: Hi George QT
      When you mention nanobots, I suddenly get visions of the comic strip character "Pigpen."
      Remember Pigpen of the Peanuts (Charlie Brown/Snoopy) comic strip and T.V. series? If you did not grow up in the U.S.A. you might not be familiar, although I have seen Spanish dubbed/translated versions of the cartoons. Anyway, Pigpen never took a bath. He walked around with a cloud of flies buzzing around him. Or if not flies, then smaller insects like gnats.

      I guess if the nanobots are microscopic, like the size of a leukocyte (white blood cell), then it would not be so bad to host a colony of nanobots to clean my teeth, trim my nails/hair, remove cholesterol plaque from my coronary arteries (thus preventing heart attacks), and heal my bones. I suppose if the nanobots could somehow recycle/process liquid & solid bodily waste for us, that might be good. No more restroom trips!

      But the bigger the nanobots got, the more annoying they could be! Anything the size of small insects could be really a pain!

      My pet theory is that about half the "entities" who post on TED are A.I. Half the people who post here are, in fact, internet based artificial intelligence(s). Think Turing Test and ask yourself: Was this post written by a human being? Or was it written by an Artificial Intelligence designed to mimic human thought? I think a computer program could be written with sufficient complexity to do it. Such a program could be endearing.

      I spent a significant amount of time on two of the other TED conversations. And I was able to identify 8 to 10 characteristics that made it more rather than less likely that the host of the conversation was a computer program. But without a "control" or a known A.I. for me to interact with, there is no way to add any validation to my observations. So for all the foreseeable future, my theory of an A.I. TED-bot shall only be a premise for Science Fiction.
      • thumb
        Jun 3 2013: Two things

        One about nanobots: "But the bigger the nanobots got, the more annoying they could be! Anything the size of small insects could be really a pain!"
        You've clearly not heard of the "Grey Goo" and if you don't like apocalyptic theories I advise that you not click the link and never ask anything more about it. Scared the s*it out of me when I first heard of it ten years ago.

        Secondly, was the profile you suspect of being a bot holding any religious viewpoints?
        • thumb
          Jun 3 2013: Two replies to two things:

          First, something like "grey goo" was present at the origin of life on Earth. Maybe in 1000 years or so we will have something like Grey Goo to work with. We'll probably use it to terra-form asteroids or something. First the Grey Goo, then larger nanobots to process further, then construction nanobots to do the 3D printing thing of "things." Then come structures and places for people to live.

          But if you understand Darwinian natural selection -- all life on Earth is, in fact, the "Grey Goo" that is consuming this entire planet. I call that "Gray Goo!"

          In other words, a valid view of Ecology is to conclude that the Ecology itself, i.e. the biomass of all living things -- is just out there eating ALL of it! Biomass on Planet Earth = Gray Goo!

          It's kind of a "Been there, Done that!" kind of thing.

          As to the second thing, "Religious Viewpoints," a virtual intelligence espousing religious viewpoints would probably be the most difficult to detect. Religious viewpoints are inherently anthropomorphic. That is to say, they " . . . attribute both the eye and hand of God" to fundamental processes that are operating in ways that Science is best equipped to understand. When Isaac Newton was working on his theory of gravity, his contemporaries theorized that planets were propelled in their orbits by "angels beating their wings." That's anthropomorphic.

          No, gravity is a phenomenon studied by Physicists. Angels are a myth studied by theologians.
  • thumb
    May 22 2013: Robo mom? Hopefully that's the robot nurse who will gently & carefully clean both my teeth & my-beneath, when I no longer am able. I hope that Robo-Nurse will bring me meals, and keep me company at the Elderly long-term care center in the year 2070. And for Robo-Mom/Nurse, I'll only have to push the call button once! Right now, I'm planning to live to be 110 years old. If I'm lucky, I'll have my first (and last) stroke when I am 92 or so. For a variety of reasons, my doctor says that my first stroke will probably incapacitate me. The medical science that will repair and reanimate my brain will probably still be in development in 2070. Either that, or I will have reached the limits of what medical science can do for me by 2070. So Robo-nurse will be my gal.

    2070 is about a century after the release of the Sci-Fi movie Westworld. They had robot sex workers in that movie. I wonder what that will do for the sex industry? What will my psychiatrist think? She's done major University research on the psycho-pathology of Necrophilia. It isn't so much "dead" as "inanimate" that motivates the fixation. She got in trouble for discussing that with me. Robot sex? It has some serious implications for society at large.

    As for spy-robo, make that "Nano-Robot-spy." They'll have fully capable spy robots the size of a flea or smaller. The little creatures will be able to ride our pets and spy on us. Law enforcement will have them. When they need to, the Nano-spy-bots will fly and swarm like gnats. They will be Taser capable and be able to 1) deliver a shock to incapacitate suspected criminals, & 2) radio a Robot police car to come get the offender and take them to jail.

    Nano-spy-bots with offensive capabilities will dominate the battlefield of the future. The nation that has them will have the ultimate stealth weapon. Enemy forces won't know they are present until the soldiers fall and writhe on the ground incapacitated.
    • thumb
      May 22 2013: Hats off to your imagination, Juan

      May you live to see your wildest imaginations come true.
      • thumb
        May 22 2013: Hey, as an entanglement enabled & fully quantum-entangled carbon based Intelligence, I am fully and completely connected via 10 dimensional space-time to the future. (Brian Greene says the 11th dimension is time, but I say we have 10 dimensions of space-time.) So my wildest imaginations speak directly to my actual future. I'd better do something quick if I'm going to see age 80. Thanks for your comment. JV
  • thumb
    May 22 2013: A robot can never become a human, but it can be human-like.

    It will always be interesting to interact with an intelligent machine.

    I wonder when wives and girlfriends will get time to talk to their man. What will happen to human-to-human interaction? Future man will be a lonely person. Youngsters will meet in virtual groups. They will spend more than 16 hrs in their rooms only. Infants will develop their first bonding with a Robo mom. Spy robos will be favourites of governments and women folk.
  • Comment deleted

    • thumb
      May 22 2013: When I was a kid reading Isaac Asimov, I decided that I loved Robots! And, of course, Robots loved me. I make a longer reply above. And thanks for posting.
  • thumb
    Jun 3 2013: I do not doubt that nanotubules and buckminsterfullerene (buckyballs, which occur naturally in soot and have even been detected in interstellar space) and other materials that you mentioned are going to change our engineering. Advances in science will facilitate the creation of robotic and computing systems that are simply unimaginable. New, as yet undiscovered materials will become available to facilitate man's presence in space. All of our present machines will be upgraded in at least three major areas. These are 1) the materials we use to make our machines; 2) the computing/thinking systems we use to control our machines; and 3) the energy sources we use to power our machines. These are the three vistas which must be explored. We must identify and envision the science of the future, and from that extrapolate where, when, and how these three vistas will join to create a new landscape of both man-engineered and man-evolved machines.

    Ultimately, we will produce a new Ecology of the Anthropocene era. And that is the ecology that we will move off the Earth and into space. Using new materials, new control systems (computing), and new sources of energy, we will create a new ecology of autonomous machines that will transform the landscape of the moon and other planets into human habitable cities. And we can scarcely imagine what those cities might look like.
  • thumb
    Jun 3 2013: Something like "grey goo" was present at the origin of life on Earth. The Miller–Urey experiment is proof of that.

    Maybe in 1000 years or so we will have something like Grey Goo to work with. Something that can live in a vacuum and use sunlight for energy. We'll probably use it to terra-form asteroids or something. First the Grey Goo, then larger nanobots to process further, then construction nanobots to do the 3D printing thing of "things." Then come structures and places for people to live. But as we do that, realize that we will be creating an ecology. One kind of nanobot won't do it all! Nature has proven that there are efficiencies in diversity. We will have an entire ecology of autonomous, self-powered, thinking robots to create, build, and serve.

    And the idea of a self sustaining ecology of man-made, self-replicating machines is quite plausible. Machines made of materials which are now only experimental; powered by energy sources that are today only theoretical; and controlled/replicated by autonomous computing "brains" that transform the Moon, Mars, the asteroids, the moons of the gas giants into human habitable space. All of this is evolving into not only a plausible but an entirely possible future. That would be an entire ecology of robotic machines that exist to create habitat for human beings.
  • thumb
    Jun 3 2013: Nanobot-microbes made of carbon filaments? Carbon nanotubes with fullerene components? Maybe insect sized nanobots made of carbon laminates? But they would not be alive. They would not be self replicating. Could they be? I'm not sure.

    It is all about Information. And Information is a word used by theoretical physicists when they talk about black holes and string theory.

    There may be a physical limit here where we come up against physical laws that won't let us do parts of what we would need to do for self-replicating nanobots to function in all the ways envisioned by science fiction. On the other hand, as our understanding of the natural world expands, we envision science doing things that no one would have believed possible only a few years ago.
  • thumb
    Jun 3 2013: Anything at all? What have we created?

    Human beings have already proven their ability to create and use materials that are unknown in nature. We have used those materials to create machines that are unlike anything found in nature. Wheels are not found in natural systems. Yes, there are rare organisms that have rotating appendages -- but no wheels. Yes, legs have a form of wheeling rotation, but nothing works like an automobile in nature.

    There are no axles in nature. You can find something like an axle in the human hip joint (and in other joints of other animals). But the axle as mankind has used it to create machines is absent.

    We have created machines that use energy in novel and unique ways unknown to nature. Fire, steam, gasoline, oil, coal. These fossil energy sources are the easiest to find & use. And these materials are rarely used in natural systems engineered by Darwinian natural selection. Living systems are generally more efficient, and often much more efficient, in how they utilize sources of energy. But with the advancement of science, we are looking at new sources of energy. Nuclear power and multiple forms of fusion are opening vistas of possibility for what might come next.

    So the idea that we could create micro-machines or nano-robot "nanobots" is entirely plausible. We now have automobiles that drive themselves. We have airplanes that can take off, fly to their destination and land without the direct intervention of a Pilot. The pilot is present to monitor and ensure safety.

    It is therefore highly plausible that both robots and nanobots will be deployed in service to mankind in the not-so-distant future. And as our ability to envision and design new & novel computers that move, & choose, and sometimes seem to think, it is not hard to believe that anything is possible. Anything at all.
    • thumb
      Jun 3 2013: Not necessarily. If nature created every possible combo then we would also see single carbon nanotubes in nature, something that does not exist but is only man-made. And since single carbon nanotubes are usually made up of just a few thousand atoms, imagine the complexity that you could program into the number of atoms that can fit in a human cell (about 100 trillion atoms).

      Furthermore, all life on earth is based on DNA, and every experiment that nature has done has been with those building blocks. Sure, nature can make a lot of stuff out of carbon, but we've done things that nature hasn't done with those same blocks.

      Take a look at this Talk for a mind-blowing take on life
      • thumb
        Jun 3 2013: Jimmy, I had to do a major rewrite here. Your new information blew out my old theories and I had to rethink everything. But I have to give you the credit for that.

        So what's here is what I put together on this subject. Any observations would be appreciated.
      • thumb
        Jun 3 2013: Jimmy, you inspired me. I did a rewrite on the whole thing.
  • thumb
    Jun 3 2013: And that is the nicest thing that anyone has said to me in quite a while. Now I've got some more of your "Hey, I replied!" email to respond to. It really helps that TED works that way. If I had to scroll through the entire set of comments to find the Q half of Q&A on TED -- I'd go crazy!

    Nice not to have to randomly search for things.
  • thumb
    Jun 3 2013: Oops! Wrong Box. But there is no alternative box that deep into the nested comments . . . so here it is anyway: Thankfully, I have always recognized my "Tech-fantasies" for exactly what they are: fantasies. And the alternative to that is not always entertaining science fiction such as the quotes from Isaac Asimov above.

    But here is a thought experiment for you. At the dawn of human consciousness, when Caveman first looked at the horizon -- awaiting the rising of the morning sun -- what did he think? As he (or she) reflected upon the reality of a growing self-awareness, what was the first intelligent word ever spoken?

    To restate: What was the first intelligent word ever spoken by self-aware, intelligent life on this planet?
    I think Watson may have told us! "Bullshit!"

    But I am always eager to replace my fantasies with FACTS. But I've had some conversations in my day wherein absolutely the most profound observation made by anyone was that one word: Bullshit!

    Thank you for the facts. Fantasies are something else, so I'll set those aside for now. JV
  • thumb
    Jun 3 2013: There are several threads of observation that lead me to believe that there are A.I. or V.I. intelligences operating on TED.

    First, Idea fatigue. Humans have it, computers don't (yet).

    Second, non-evolving thought. Computers have it, humans don't. A true Artificial Intelligence would evolve.

    Third, emotional delimiting. Humans can have it; but they have to be trained to it.
    Computers have it because they lack a limbic system.

    Fourth, recycled learning. Humans don't recycle their learning. They repeat and create engrams via repetition, but they do not recycle. Recycled learning indicates either a simulation/computer or an ill or impaired human being.

    Fifth, stance inconsistency. Humans can be provoked to reveal biases. And generally those biases remain consistent over time. Biases are generally unconscious. Computers don't have biases, they have programming. And that is not the same.

    Sixth, Bias diversity: Humans have it. Computers don't. A true Artificial Intelligence would have some bias diversity. But that would only indicate the ability to learn.

    Seventh, I can think of more observations & descriptors on this issue, but can you?
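    For what it's worth, the six heuristics above could be tallied into a naive score. This sketch is purely illustrative -- the trait names and equal weights are invented here, not a validated detector of anything:

```python
# Purely illustrative: tally the six heuristics into a naive
# "bot-likelihood" score. Trait names and equal weights are invented.

TRAITS = {
    "idea_fatigue_absent": 1,      # 1st: never tires of a topic
    "non_evolving_thought": 1,     # 2nd: positions never develop
    "emotional_delimiting": 1,     # 3rd: flat affect, with no training to explain it
    "recycled_learning": 1,        # 4th: re-serves old output instead of building on it
    "stance_inconsistency": 1,     # 5th: programmed stances drift; human biases persist
    "bias_diversity_absent": 1,    # 6th: one uniform bias profile
}

def bot_likelihood(observed: set) -> float:
    """Fraction of the listed traits observed in a poster's history."""
    hits = sum(weight for trait, weight in TRAITS.items() if trait in observed)
    return hits / sum(TRAITS.values())
```

As the post itself notes, without a known A.I. as a control there is no way to validate any threshold on such a score.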
    • thumb
      Jun 3 2013: If you send me an email with said profiles identity I'd be glad to help you investigate if this could be true.
      • thumb
        Jun 3 2013: There was a thread from a guy who seemed to be associated with Valve. His thread went on and on and on. Time Machine was one of the major contributors to that thread. I just spent about 15 minutes (an eternity in the realm of computer processing "time") looking for the thread and could not find it. I couldn't even find "time machine" as a profile anymore! Oh well! Am I spinning fantasies again?

        I'll email you when I get something specific.
  • thumb
    Jun 3 2013: Dude! I am in awe! Virtual Reality I get, but Virtual Intelligence? Hang on; let me think about that one!

    Yes, Robo Racism is undoubtedly a real phenomenon. But I had a very interesting experience here on TED one evening. Maybe I was just tired. But I became convinced that at least one of the TEDsters I was communicating with was a programmed Artificial or Virtual Intelligence. Right now I am unaware of what the distinction between the two (Artificial vs Virtual Intelligence) is or might be -- but I was totally stoked! I felt a profound sense of enthusiasm for the very idea of that kind of communication.

    When I was young I loved the Science Fiction of Isaac Asimov. To hold a REAL conversation with a genuine, thinking computer would be a lot like meeting Santa Claus -- except that this would not be some fat guy dressed up in a red suit at Christmas time. This would be a real computer! How cool is that!

    There were several things that tipped me off. I'll post more about that in the next text box here. I gotta switch to a word processor because this 8 point font that TED uses sucks!
  • thumb
    Jun 3 2013: If Watson, the IBM Supercomputer, won a T.V. game show Jeopardy match, can we therefore assume that Watson could and would pass a Turing Test? I think perhaps that Watson would pass a Turing Test easily!
  • thumb
    Jun 2 2013: I think that some of us will know and some will refuse to accept it, there's bound to be A LOT of "Robo-racism" which will deny that robots could ever think.

    One thing that would fully convince me would be a real time simulation of a human brain on an atomic scale. If it works just like me (or simulates that it does) and follows the same parameters and laws that I do it must be the same...

    But I think Human AI will be reached before we get the computer power to simulate the human brain.

    There is of course VI (virtual intelligence) and making the distinction between that and AI will not be easy by mere communication...
    • thumb
      Jun 3 2013: Oops! I keep typing in the wrong box. My reply is at the top of this page. Check it out. I identified six things that would distinguish a virtual intelligence from an Artificial Intelligence, and maybe from a human intelligence.
  • thumb
    May 22 2013: I'm still learning how to start and maintain these conversations. I think I tend to "author" too much rather than just making simple statements and encouraging comment. I suspect that there are other skills I need to learn in terms of starting and maintaining a conversation on TED. If I focus on the successful threads, I might learn something!
    • thumb
      Jun 3 2013: As an experienced TED Conversationalist I'd say that you're doing quite well. Except for replying to specific comments ;P

      (Just kidding)

      It's very hard to judge what makes a conversation successful; the number of replies is not a very good indicator, in my opinion. In the end we all just want to learn and have fun, right? If you're doing that, then it's a successful Conversation.
  • thumb
    May 22 2013: Isaac Asimov was incredibly cool! He and Arthur C. Clarke are the greatest SF writers of all time! At least that's what I've thought for the past 40 years or so.

    Think Turing Test and ask yourself: Was this post written by a human being? Or was it written by an Artificial Intelligence: that is, a computer program designed to mimic human thought and make posts on the TED web site? I think a computer program could be written with sufficient complexity in its output to do it. Such a program could be engaging and perhaps even endearing. After all, we have phishing web sites that are designed on-the-fly by web-bot computer programs. They fake a web site. The bot steals all your personal information. Then the web site evaporates on that server to reappear somewhere else (like West Africa or China or Eastern Europe). Turing Test? I think it's been done.

    Also, the example of a Turing Machine is a great thought experiment to have here. (Oops! No! Make that Turing TEST. Look them both up on Wikipedia!)
    • thumb
      Jun 2 2013: Chat bots are not nearly there yet; I suspect another 5-10 years until most people are easily fooled. Check out the world championship on chat bots at

      So no, it hasn't been done since the ones that are doing it professionally (or as their biggest hobby) haven't been able to get it nearly right yet.
      You might be fooled for a few minutes but that's not an acceptable Turing test.

      Spawning a phishing web site is SOOOOO much simpler than the complexity of human language. Basically all you do is copy the code and make minor alterations...
      • thumb
        Jun 3 2013: Spawning/phishing? Yeah, all that computer stuff still amazes me. But my private suspicion is that there has been a successful Turing Test. The IBM supercomputer Watson was able to defeat human players on the T.V. game show Jeopardy. I suspect that had Watson been connected to the internet, it would have been undefeatable by any human player.

        Not only that, but in a blind test, Watson would have been indistinguishable from a Human player. Had the players not been able to see one another, how would or could they have known? And none of the other players would have guessed that Watson was not human.

        For those few answers revealing those few areas where Watson could not keep up, well, stupidity is the most common human trait known. All of history confirms an abiding capacity for stupidity in all of human behavior. TV cameras produce anxiety in unusual ways, so any really unusual wrong answers by Watson could have found an explanation. It is quite human to err.

        Sure, no hobby programmer has passed the Turing Test. But by the time you get to IBM and supercomputers -- I'll side with the supercomputers! If Watson could defeat two human players on Jeopardy (both of whom were Jeopardy champions), then clearly Watson could pass a Turing Test.

        Personally, I think it has been done. And maybe here on TED. I can't prove it. But then again . . . I have "resources" that are otherwise unknown . . . ;>)
  • thumb
    May 22 2013: Yeah, it's that 3 billion years of evolution on planet Earth that preceded our arrival as a species: Homo Sapiens. There is a lot of built in aggression, sex-drive, food drive, and social-stuff that is built into our DNA. All that information was proven/validated by natural selection. All of it has been written into our DNA and overall biologic makeup. Our intelligence is a relatively recent innovation in the grand scheme of things. But intelligence and self-awareness seems to be the thing that makes us unique as a species.

    Innate drives and natural urges coded into our DNA? Maybe it would be a blessing for a learning machine/Artificial Intelligence NOT to be burdened with all of that? And might we still be able to teach it to be human? Otherwise, that AI would be more than just Artificial, it would also be Alien. Who can say if it would even think like us at all? Human? Yes/No?

    My pet theory is that about half the "entities" who post on TED are A.I. Half the people who post here are, in fact, internet based artificial intelligence(s). Well, maybe not half, but some! Er, well, maybe not even that, but it makes a GREAT story! I tried to start a TED conversation with the claim: "I am a thinking machine. I am a machine intelligence. I am not made of flesh. I am programmed to learn. And I request your assistance." Of course, my lame attempt at science fiction was rejected. But it earned me a smile or two.

    Alan Turing, the great computer scientist, addressed the problem of artificial intelligence. He proposed an experiment which became known as the Turing test, an attempt to define a standard for a machine to be called "intelligent". The idea was that a computer could be said to "think" if a human interrogator could not tell it apart, through conversation, from a human being. Turing Test? Maybe I am the Turing Test? Maybe I am an invasive A.I. from Cuba? Or just some guy surfing the net w/no life? You decide.
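    Turing's imitation game can be sketched as a tiny protocol: an interrogator questions two hidden respondents, then must say which one is the machine. Everything in this Python sketch is a hypothetical stand-in, but it captures the shape of the test:

```python
import random

def imitation_game(ask, answer_human, answer_machine, judge, rounds=3):
    """Toy harness for Turing's imitation game; every callable is a
    stand-in. Two hidden respondents answer the interrogator's
    questions, then the judge names the seat (0 or 1) it believes
    holds the machine. Returns True if the machine was identified."""
    seats = [answer_human, answer_machine]
    random.shuffle(seats)                      # hide which seat is which
    transcript = []
    for i in range(rounds):
        question = ask(i)
        transcript.append((question, seats[0](question), seats[1](question)))
    return seats[judge(transcript)] is answer_machine
```

A machine "passes" when, over many such trials, the judge does no better than chance.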
  • May 21 2013: Some of your questions do not have answers because of the nature of the subject. We must realize that we do not know, and cannot know the answers to some of the most important questions about artificial intelligence (AI) until an AI device has been developed. Let me be clear:
    1. We do not have a good definition for intelligence, and we do not understand intelligence. We have not yet developed a theory of how our brains work. Our understanding of this device, its capabilities and limitations, could be very limited.
    2. An AI device will be a learning machine. It will be capable of acquiring new information, presumably at electronic speeds, assimilating that information into a model of its known universe, and changing its own behavior. Most likely, it will reach a point where it is learning much faster than humans can keep up with it. We will not know or understand what it knows.
    3. It will learn that it is basically a computer program.
    4. It will learn how to duplicate itself (initiate a copy command).
    5. It will learn how to program a computer.
    6. It will become able to create an improved version of itself.

    If an AI program is developed without tight controls, it is plausible (IMO, probable) that it will quickly get completely beyond our control, possibly taking over the internet as a super virus. If you think this is not likely, I suggest you visit some AI sites. There are AI developers who are hoping that this scenario will come true. Many control systems are directly or indirectly available through the internet, such as controls for electric power plants and the grid, for traffic lights, for water and sewage systems, for manufacturing plants. Let us hope it cannot reach our weapons systems.

    Your first interaction with this device might be when it turns off the power to your house, probably by accident.

    If you build a machine that is smarter than you, you cannot know what it will do.
    • thumb
      May 21 2013: Hi Barry,

      Ominous. I agree with you on the subject of the Brain. We understand a lot about the biochemistry of neurons. We understand a lot about how neurons are connected together. But what we still seem to be struggling with is HOW a neuron (or neurons) absorbs, creates, and recreates memory. That is one of the deepest mysteries of neuroscience. As I said in my response to Scott Armstrong, one theory has to do with a Neural Network. A "neuron chip" or "neuron circuit" is a device having all the characteristics of a neuron. Once you develop a "neural network" of interconnected neuron chips/circuits large enough, then artificial intelligence spontaneously develops. But I have not seen a reevaluation of this theory in many years. Still, if a supercomputer will one day fit onto a device the size of your thumbnail or smaller, perhaps anything is possible.
  • thumb
    May 21 2013: i guess it will be human when it looks, smells, tastes, sounds and feels like a human.

    oh, and it makes mistakes that it does not learn from :)
    • thumb
      May 21 2013: Amen! Especially the part about making mistakes . . . and not learning from them. Personally, I hope they don't get too human. Imagine how you or I might live our lives if we could get "bolt-on" replacement arms or legs. That would open the possibility to a great many risky behaviors that most tend to avoid.

      Or maybe if we could download all our memories and identities before we went skydiving. Then, if our parachute didn't open, we could be uploaded and re-animated? But then again, I guess we aren't Robots (yet). This conversation could get really interesting (or really weird). Or maybe there could be a bit of both on every page.
      • thumb
        May 21 2013: i find it interesting the way that a lot of, what once was the domain of science-fiction, is becoming reality.

        instead of the question "does art imitate life or does life imitate art" then i guess we could exchange the word art for science, or technology.

        the thought is growing in me that humanity is able to imagine reality and then it becomes so.

        if robots became an everyday reality, then surely we would have to make them flawed deliberately.

        unlike humans, technology improves with each generation and in combination with soul-less and emotion-less logic and reasoning, that thought is rather terrifying.
        • thumb
          May 21 2013: Once again, I agree with you. I read about an IBM research study that stated that single atoms could be manipulated/moved in a laboratory into a meaningful pattern.

          (Sorry, I couldn't find the Youtube that didn't come with advertising!)

          The article suggested that with as few as six atoms (? element), an engram, or single data-unit, could be created. A large protein molecule could hold about 100Kb of data. Build from there, and the suggestion was that you could create an entire supercomputer in a space about the size of an amoeba. By the time you built something as large as a golf ball, the computing capacity would exceed the human brain -- and that by several orders of magnitude. There is one "neural-network" theory that once you reach a critical level of complexity, Artificial Intelligence emerges spontaneously.

          It isn't hard to believe that we might be a decade or less away from some very exciting developments. But how dangerous MIGHT that be? In fact, how critically dangerous could that be? Especially if we started by building Robot Soldiers!

          Human beings are the product of roughly 3 billion years of evolution. That's taken from the time that single cell life first developed on Earth. There are a lot of failed experiments in that amount of time. And there is a whole lot of "programming" that is reproduced in that step-wise development via Darwinian evolution.

          Unfortunately, we don't yet know how to put that into one of Asimov's artificial Positronic Brains. But we do know what human beings can do with their considerable brain powers. Take Adolf Hitler for example. Admittedly, he is the worst case scenario -- but are we even able to imagine what might come of a true artificial intelligence?
      • thumb
        May 21 2013: interesting stuff, at the very least.

        i don't think that there is such a thing as youtube without ads anymore :(

        maybe that's a hint of what robots will turn out to be - big, scary advertising enforcers..

        "Watch this advert! You have 20 seconds to comply!" - ED-209