
Developing a humanoid battlefield robot and/or AI that recognizes "innocent civilians" and can differentiate them from enemy targets.

This robot would have to have an intelligence factor similar to a human's, hence the problem of making an AI as "smart" as an adult human or soldier.
Is it possible to teach a robot everything a human ever knows?
It would have to be able to differentiate "innocent civilians" from the enemy. There would also be an emotional impact on the enemy or civilian experiencing a humanoid robot saving them or shooting at them.

  • Jun 9 2013: Who will be considered innocent? This thing will end up like the Santa on Futurama, and will decide that everyone has been naughty.
  • May 28 2013: Paul: you have stated it very well; I agree with you.
    As to just how long this is going to take, that is a question. For some enlightenment about it, consider the history of pre-Dynastic China, what they refer to as the "Period of Warring States". China was of course "isolated" from Europe, although they would surely have put it the other way around. The point is, there was as yet no "China", but a collection of Sovereign States, as large as European countries are now. For some hundreds of years, they experienced all the problems that Europe has had for hundreds of years: alliances, "Agreements", Treaties, wars with shifting alliances based on "Self Interest", etc. Nothing led to stability, until finally one rather backward State conquered all the others. It seemed catastrophic, but very soon the advantages made themselves evident, and from then on, the Chinese considered that one Empire was vastly preferable to a reign of Mafiosos, which is what the alternative was like.
    So we are now in the same boat. We have a choice: either World Government by agreement, as our Founders arranged it, peaceably, or keep on as we are, drifting into a war of consolidation by one "Country", a de facto World Government. However, a third possibility has developed: a World Corporate Government, by creeping bureaucratic money control by some sort of Multi-National Cartel. It is notable even today how Multi-Nationals are more powerful than many countries, and aren't afraid to show it. This merely indicates the obsolescence of the national State, which has far outlived its usefulness.
  • May 28 2013: What a mind-boggling idea! Since no human being can reliably tell who is a "Bad Guy" and who is not, how could a robot possibly be given ANY information or procedure for making such a judgment?! I trust that we are all aware that the old police method of "experienced" policemen who could tell at a glance who was a likely suspect was all too often self-delusion, as subsequent DNA evidence has revealed far too often.
    I have a particular aversion to this kind of thinking. As a former war vet with an interest in History, I cannot escape the feeling that over the last hundred years, with two world wars plus many others, looking back, it is fair to say that the vast majority of those one or two hundred MILLION deaths were, in fact, "Innocent Civilians", regardless of what self-justifications and self-congratulations the "Winners" (who also suffered) enjoyed. The facts are that the wars did not accomplish their goals, which were confused and often wrong in the first place.
  • May 23 2013: Even in the WWII era, a full-scale war realistically couldn't avoid civilian casualties. So we should be careful about robots engaged in domestic fighting and/or law enforcement activities. But I don't see any problem at all with using robots in battles in a formal war. A better arrangement would be to put a battalion of robot soldiers under a human commander who is not on the battlefield but has a view of the entire battlefield through the eyes of the robot soldiers as well as through other equipment. Thus the commander can directly order the robots to attack or retreat, whichever is the better strategy. As far as possible civilian casualties are concerned, there is simply no way to avoid them completely, except maybe that the robot soldiers could be trained to recognize the gesture of surrender, which should not be too difficult to program into the "consciousness" in them. As a matter of fact, a robot is relatively safer than a human soldier against being harmed by any deception from those who surrendered.
  • thumb
    May 22 2013: Well, if we could make such a robot, the enemy would soon also have such robots, perhaps with better programming that fools our robots into thinking they're friendly. So they destroy our robots, and we'll need better software. This is called an arms race, and we've been through that many times. In fact we haven't learned very much, so we're still in the middle of one that foolishly uses up our resources, and of which there is no end in sight.

    Much better to use the money to develop robot diplomats that can actually negotiate their way to agreement, thus avoiding the need to fight like adolescents.
    • May 22 2013: Dang, that is a good idea... better than robot soldiers. That might be what my next conversation will be about.
    • May 28 2013: Instead of robot diplomats, how about following Pope John Paul's pithy advice: "If you want Peace, work for Justice". Our wars usually do not even consider such an idea, which is why the "Negotiations" never seem to work out, or the wars either.
      • thumb
        May 28 2013: If only justice weren't so hard to identify, especially in the international arena. Wars do often start because country "A" considers that it has a just cause that country "B" somehow is hindering or violating. Of course "B" sees it from another view. The failure of a higher power to determine right is what leads to war.

        The Pope made a memorable aphorism, with his twist on the old Roman saw about preparing for war, but it's more helpful to be more specific. Thus, an international justice system with binding laws and authoritative courts that can resolve international conflicts is the mechanism that will eventually make war an historic anomaly. We do have the beginnings of such international courts, but they still depend on the consent of the parties. To make them effective, international military force is also necessary, in the same way that national laws are ineffectual without enforcement. This is probably still a long ways off.
  • May 13 2013: Perhaps you are not aware of this: The government is considering deploying drones that would identify targets and kill them, with no human directly involved in the decision. That is not in some speculative future, but right now.

    My personal opinion is that this deployment would be insane and egregiously unethical, both now and at any future time, regardless of advances in technology.
    • May 13 2013: I know of the drones and such, but those would be more of a "killer without a conscience", ready to kill any threat whether they are innocent or not. The idea I am suggesting would have a conscience and, once again, be able to differentiate innocent civilians from enemies. The hardest part would be making an AI that could do something like that... but I am sure it is possible, somehow.
  • thumb
    May 11 2013: I honestly can't imagine a colder idea than throwing a human life in front of an if-then statement. Sure it can be done: we can eradicate anyone holding something that looks like a gun, or anyone exhibiting agitated behavior. We can eradicate anyone with a specific gene, accent, criminal status, or body weight, programmatically, with today's technology. But we'll never be able to create a robot that looks into a person's eyes, sees a human life, and can make a conscious decision whether to take it. Binary patterns will always be just that, nothing more.
  • May 10 2013: Not "Kill them all and let God sort it out."?
      • May 10 2013: Kill all innocent civilians and enemies? Whether they are shooting at you or not? That would be a killer without a conscience... which we would need if we wanted to eradicate PTSD in human soldiers.
        • May 11 2013: That's an old Baby Boomer vet joke. It wasn't meant to be taken completely seriously. Thursday a nephew told me something that my Dad told my brother that I didn't realize. My Dad was a hospital corpsman at the Battle of the Bulge. I didn't realize that he went to Omaha Beach a few days after D-Day. He was there to pick up parts. Parts that had once been part of living people. I believe we need to learn to get along better. That's all I wanted to say - maybe I did a bad job.
  • thumb
    May 10 2013: We shouldn't really evaluate this from our present perspective. We should attempt to put ourselves in a time that robots are responsible for more advanced functions.

    The problem with robots is they rely on human programming. In the future we may see robots that are able to process information much like humans do. However, for now we aren't even close.

    Is it possible to teach a robot everything a human could ever know? Right now the answer would be no. I mean we can program robots with a massive amount of information. We have robots that play the violin and kick soccer balls. However, they will never do anything that extends outside of their programming.

    They may be able to intake information. However, they intake this information, and analyze it, based on a set framework of programmed information.

    They compare (X internal information) to (Y external information). Robots cannot update their own internal programming at the moment; humans must do that. If the internal programming doesn't cover all potential information, the robot may evaluate external stimuli incorrectly. This could cause massive problems in combat.
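    A minimal sketch of that fixed (X internal) vs. (Y external) comparison, with invented rule names purely for illustration (nothing here comes from any real targeting system):

    ```python
    # Internal programming (X): a fixed, human-written rule set.
    # The robot can only match external stimuli against these rules;
    # it cannot add or revise them on its own.
    THREAT_RULES = {
        "carrying_rifle": "enemy",
        "hands_raised": "civilian",
        "wearing_uniform": "enemy",
    }

    def classify(observation: str) -> str:
        """Compare an external observation (Y) against internal rules (X)."""
        # Any stimulus outside the programmed framework falls through
        # to a default - the failure mode described above.
        return THREAT_RULES.get(observation, "unknown")

    print(classify("carrying_rifle"))   # a programmed stimulus: "enemy"
    print(classify("carrying_shovel"))  # never programmed: "unknown"
    ```

    The second call shows the problem: a stimulus the programmers never anticipated gets a default answer, and in combat a default answer can be catastrophic either way.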
    • May 10 2013: So what you are saying is that with current technology, we could not make a humanoid robot that differentiates friendlies from enemies on the battlefield?