H+ Magazine
Covering technological, scientific, and cultural trends that are changing, and will change, human beings in fundamental ways.

Editor's Blog

R.U. Sirius and Surfdaddy Orca
May 19, 2009


Terminator Salvation

In a fascinating paper entitled “How Just Could a Robot War Be?”, philosopher Peter Asaro of Rutgers University explores a number of robot war scenarios.
Asaro imagines a situation in which a nation is taken over by robots -- a sort of revolution or civil war. Would a third party nation have a just cause for interceding to prevent this?
Asaro concludes that the use of autonomous technologies such as robot soldiers is neither “completely morally acceptable nor completely morally unacceptable” according to the just war theory formulated by Michael Walzer.

Just war theory defines the principles underlying most of the international laws regulating warfare, including the Geneva and Hague Conventions. Walzer's classic book Just and Unjust Wars was a standard text at the West Point Military Academy for many years, although it was recently removed from the required reading list.

Asaro asserts that robotic technology, like all military force, could be just or unjust, depending on the situation.

h+: We're using semi-autonomous robots in Iraq and, of course, we've been using smart bombs for some time now. What is the tipping point – at what point does a war become a “robot war”?

PETER ASARO: There are many kinds of technologies being used already by the U.S. military, and I think it is quite easy to see the U.S. military as being a technological system. I wouldn't call it robotic yet, though, as I think there is something important about having a "human-in-the-loop," even if the military is trying to train soldiers to behave "robotically" and follow orders without question.

I think there is always a chance that a soldier will question a bad order, even if they are trained not to, and there is a lot of pressure on them to obey.

Ron Arkin is a roboticist at Georgia Tech who has designed an architecture for lethal robots that allows them to question their orders. He thinks we can actually make robots super-moral, and thereby reduce civilian casualties and war crimes.

I think Ron has made a good start on the kinds of technological design that might make this possible. The real technical and practical challenges are in properly identifying soldiers and civilians.

The criteria for doing this are obscure, and humans often make mistakes because information is ambiguous, incomplete, and uncertain. A robot and its computer might be able to do what is optimal in such a situation, but that might not be much better than what humans can do.
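
To make that concrete, here is a minimal, hypothetical sketch of the kind of decision rule at issue (this is not any fielded system, and the threshold is an invented number): act autonomously only when the classification is confident, and otherwise hold fire or hand the decision to a human.

    # Hypothetical illustration only: acting under uncertain combatant classification.
    CONFIDENCE_THRESHOLD = 0.95  # invented; choosing this number is itself a moral decision

    def engagement_decision(combatant_probability, rules_of_engagement_met):
        """Return 'HOLD', 'DEFER_TO_HUMAN', or 'RECOMMEND_ENGAGE'."""
        if not rules_of_engagement_met:
            return "HOLD"
        if combatant_probability < CONFIDENCE_THRESHOLD:
            # Information is ambiguous, incomplete, or uncertain: do not act autonomously.
            return "DEFER_TO_HUMAN"
        return "RECOMMEND_ENGAGE"

    print(engagement_decision(0.60, True))   # DEFER_TO_HUMAN
    print(engagement_decision(0.99, True))   # RECOMMEND_ENGAGE

Everything hard in the point above lives in how combatant_probability would be estimated; the rule itself is trivial to write.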

More importantly, human soldiers have the capacity to understand complex social situations, even if they often make mistakes because of a lack of cultural understanding.

I think we are a long way from achieving this with a computer, which at best will be using simplified models and making numerous potentially hazardous assumptions about the people it is deciding whether or not to kill.

Also, while it would surely be better if no soldiers were killed, having the technological ability to fight a war without casualties would certainly make it easier to wage unjust and imperial wars. This is not the only constraint, but it is probably the strongest one in domestic U.S. politics of the past 40 years or so.

By the way, I see robots primarily as a way to reduce the number of soldiers needed to fight a war. I don't see them improving the capabilities of the military, but rather just automating them. The military holds an ideal vision of itself as operating like a well-oiled machine, so it seems that it can be rationalized, automated, and roboticized. The reality is that the [human] military is a complex socio-technical system, and the social structure does a lot of hidden work in regulating the system and making it work well. Eliminating it altogether holds a lot of hidden dangers.

h+: Does robotic warfare heighten the possibility of accidental war, or might it guard against it?

PA: There was a news item in March 2008 about a unit of the Swiss Army, about 170 infantry soldiers, entering Liechtenstein at night by way of a dark forest. This turned out to be an accident – they were lost during a training exercise – so there was no international incident. If there had been tensions between the countries, there could have been a just cause for Liechtenstein to declare war on Switzerland on the basis of aggression.



Of course, Liechtenstein does not even have an army. But something similar happened in 2002, when a platoon of British Royal Marines accidentally invaded a Spanish beach instead of Gibraltar.

I think the same is true of machines. They could inadvertently start a war, though this depends both on the technology malfunctioning and on the human political leadership desiring a war. Many wars have been started on false pretenses, or misconstrued or inadvertent acts: consider the sinking of the Maine in Havana or the Gulf of Tonkin incident.

h+: You talk about the notion that robots could have moral agency – even moral agency superior to that of human soldiers. What military would build such a soldier? Wouldn't such a soldier be likely to start overruling the military commanders on policy decisions?

PA: I think there are varying degrees of moral agency, ranging from amoral agents to fully autonomous moral agents. Our current robots are between these extremes, though they definitely have the potential to improve.

I think we are now starting to see robots that are capable of taking morally significant actions, and we're beginning to see the design of systems that choose these actions based on moral reasoning. In this sense, they are moral, but not really autonomous because they are not coming up with the morality themselves... or for themselves.

They are a long way from being Kantian moral agents – like some humans – who are asserting and engaging their moral autonomy through their moral deliberations and choices. [Philosopher Immanuel Kant's “categorical imperative” is the standard of rationality from which moral requirements are derived.]

We might be able to design robotic soldiers that could be more ethical than human soldiers.

Robots might be better at distinguishing civilians from combatants; or at choosing targets with lower risk of collateral damage, or understanding the implications of their actions. Or they might even be programmed with cultural or linguistic knowledge that is impractical to train every human soldier to understand.

Ron Arkin thinks we can design machines like this. He also thinks that because robots can be programmed to be more inclined to self-sacrifice, they will also be able to avoid making overly hasty decisions without enough information. Ron has also designed an architecture that allows robots to override their orders when they see them as being in conflict with humanitarian laws or the rules of engagement. I think this is possible in principle, but only if we really invest time and effort into ensuring that robots really do act this way. So the question is how to get the military to do this.
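
Arkin's design is usually described as an "ethical governor": a layer that sits between the planner and the weapon and can veto actions. The sketch below is a rough, hypothetical rendering of that control flow; the constraint names and rules are invented for illustration and are not Arkin's code.

    # Hedged sketch: check an order against constraints before allowing it to execute.
    def violates_humanitarian_law(order):
        # Placeholder predicate, e.g. protected sites or surrendering combatants.
        return order.get("target_type") in {"hospital", "surrendering"}

    def violates_rules_of_engagement(order):
        # Placeholder predicate, e.g. positive identification required.
        return not order.get("positively_identified", False)

    def ethical_governor(order):
        """Permit an order only if no constraint is violated; otherwise refuse and say why."""
        for check in (violates_humanitarian_law, violates_rules_of_engagement):
            if check(order):
                return {"permitted": False, "reason": check.__name__}
        return {"permitted": True, "reason": None}

    print(ethical_governor({"target_type": "hospital", "positively_identified": True}))
    # {'permitted': False, 'reason': 'violates_humanitarian_law'}

The open question Asaro raises is not whether such a veto loop can be written, but whether the predicates inside it can ever be filled in reliably.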

It does seem like a hard sell to convince the military to build robots that might disobey orders. But they actually do tell soldiers to disobey illegal orders. The problem is that there are usually strong social and psychological pressures on soldiers to obey their commanders, so they usually carry them out anyway. The laws of war generally only hold commanders responsible for war crimes for this reason.

For a killing in war to truly be just, the one doing the killing must actually be on the just side in the war. In other words, the combatants do not have equal liability to be killed in war. For a robot to be really sure that any act of killing is just, it would first have to be sure that it was fighting for a just cause. It would have to question the nature of the war it is fighting in, and it would need to understand international politics and so forth.

The robots would need to be more knowledgeable than most of the high school graduates who currently get recruited into the military. As long as the war is just and the orders are legal, then the robot would obey, otherwise it wouldn't. I don't think we are likely to see this capability in robots any time soon.

I do think that human soldiers are very concerned about morality and ethics, as they bear most of the moral burdens of war. They are worried about the public reaction as well, and want to be sure that there are systems in place to prevent tragic events that will outrage the public. It's not impossible to try to control robot soldiers in this way. What we need is both the political will, and the technological design innovation to come together and shape a new set of international arms control agreements that ensures that all lethal robots will be required to have these types of ethical control systems.

Of course, there are also issues of proliferation, verification and enforcement for any such arms control strategy. There is also the problem of generating the political will for these controls. I think that robotic armies probably have the potential to change the geo-political balance of power in ways far more dramatic than nuclear arms.

We will have to come up with some very innovative strategies to contain and control them. I believe that it is very important that we are not naive about what the implications of developing robotic soldiers will mean for civil society.

 

44 Comments

    Interesting. Well timed post. Everyone is super stoked about the new terminator movie coming out.

    How close are we to actually having robot soldiers? Are we actually trying to move in that direction?

    I think the whole idea of fully autonomous killing machines is ridiculous. The article makes it sound as if that's what the military is building. None of the equipment deployed now, or planned for the foreseeable future, makes decisions about what to target. Smart bombs and smart munitions find targets that are pre-programmed by humans, or take out targets within an area designated by humans.

    The thing that bothers me about lethal RAVs and non-lethal weapons (such as tazers and tanglefoot) is that they make use of force more likely because there's less personal risk involved for the soldiers and police who decide to use them. Worrying about whether or not robots will make a bad decision misses the point -- robots don't make the decisions, people do.

    I'm really surprised nobody's quoted Isaac Asimov's three laws of robotics:
    http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

    The difficulty, though, is that even Asimov's laws assumed that the robots were fully able to relate and interact with humans. You can't have a moral robot soldier until you can have a robot that at LEAST holds its own discussing politics and TV shows at a cocktail party.

    "...even if the military is trying to train soldiers to behave "robotically" and follow orders without question."

    Nope. Not even close. I've been in for 10 years, and this is NOT how it works. Go do some actual research before you spout your mouth off.

    To replace the lives of fathers, sons, brothers and others with machines in times of war is to reduce armed conflict to an expensive video game. Why not paper/rock/scissors to decide the winner?

      because the guys ur playing rock paper scissors with will shoot you. what kind of question is that?

    Anonymous commented that machines don't suffer from mental breakdown. Ever had a laptop or a desktop go wonky on you when it had a part fail or had malware get involved in its programming? What do you think the consequences would/will be when a heavily armed warbot suffers a similar hardware or software failure and "goes nuts"? And I'm not just thinking about immediate friendly and/or enemy casualties, I'm also thinking about socio-political fallout. You'd better be able to either hit the off button quickly or have a way to disable it quickly before it does so much collateral damage that public outcry causes the shutting down and destruction of all such warbots.
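
    The "off button" being asked for has a standard engineering analogue: a dead-man's switch or watchdog that disarms the machine unless a valid authorization keeps arriving. A minimal sketch, with the timings and names invented for illustration:

        import time

        HEARTBEAT_TIMEOUT = 0.5  # seconds without a valid operator signal before disarming

        class WeaponInterlock:
            def __init__(self):
                self.last_heartbeat = 0.0

            def heartbeat(self):
                # Called whenever an authenticated "remain armed" signal arrives.
                self.last_heartbeat = time.monotonic()

            def may_fire(self):
                # Fail safe: if the link is lost or authorization stops, disarm.
                return (time.monotonic() - self.last_heartbeat) < HEARTBEAT_TIMEOUT

        interlock = WeaponInterlock()
        interlock.heartbeat()
        print(interlock.may_fire())  # True right after a heartbeat
        time.sleep(0.6)
        print(interlock.may_fire())  # False once the heartbeat goes stale

    Of course, this only helps if the interlock itself survives the same hardware or software failure that made the warbot "go nuts" in the first place.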

    Any robot with enough awareness and independence to disobey bad orders will soon disobey good orders. And imagine a military dictatorship in the United States where our oppressors are efficient robot killers. Well, the interviewees sure didn't; they're willing to take the technology to its natural ends because "it's cool".

    Once you develop military AI to the point where it has unbounded learning (because otherwise it's too complicated to explicitly program), the Skynet/Matrix scenarios aren't such a fiction anymore. You basically created a predator class of inorganic life, one that doesn't have vulnerable bodies made of calcium/protein mush.

    The proponents of robotic warfare in this article are complete fools.

      Morality in robots. An interesting article, but unrealistic. For a robot to have a sense of morality, it would need to be a sentient being. Otherwise, disobeying orders to shoot children amounts to nothing but smart programming similar to a treadmill stopping when the runner falls and the safety pin is pulled out.

      I support following a good moral sense; however, it seems unlikely that robots in the military will be embedded with complex ethics programming. Robots are tools like all military equipment. Guns, hand grenades, tanks, planes, and missiles don't prevent their operators from committing questionable acts. The last thing a commander will want in his military is an expensive robot that refuses to follow orders, regardless of the context of the situation. Therefore, it is highly unlikely that the military would ever consent to using robots with moral programming.

      If there really is a concern of military robots being used for unethical purposes, the real answer falls upon the human side. Instill a strong moral compass into the people commanding the robots. Similar to the saying, "Guns don't kill people, people kill people," the root of the problem should be the focus, the operator.

      In another comment to this post, "Occam's Razor" makes a good point. Unfortunately, it's a fact of life that military commanders will give an immoral order and expect it to be followed, allowing thereafter for a "scapegoat". There are several cases in point during the Vietnam War. The military also works in the grey area of "moral relativism", the doctrine of the "lesser of two evils" as it were. Military commanders would never allow the use of a tool that would use a binary decision tree, "moral or immoral" in deciding whether to follow an order. Among the horrors of war that we ask our military to live with for the rest of their lives in some cases is the choice between the lesser of two evils. Accept it or not, it's one of the reasons that we honor these men and women, the decisions they've made and are forced to live with. If a robot can't set aside morality considerations, or at least in the face of those considerations, still make a decision and act on it, there's no place for them in war.

    The fact that we risk the ultimate sacrifice for a cause forces us to choose carefully those causes which we deem worthy of killing for and, therefore, potentially dying for. To eliminate all risk on our part will not lead to fewer wars; it will lead to a greater use of force elsewhere with little regard for the lives we take. Eventually, the wrong group of men will be in control of such a weapon, and then who will be able to stop it? A similar argument was used by some about nuclear weapons, but this is a false analogy: this is a conventional-force weapon in which we have a superior advantage, whereas when the nuclear bomb was developed, our enemies were right behind us. Though we were the only ones mad enough to use nukes first, mutually assured destruction has kept us from using them since. What will it look like when we use AI robots first? I believe that defense of the homeland and of freedom are the only justifications for war; we have already fallen away from that theory of just and moral war. What will we do with our new Terminators?

    "We will have to come up with some very innovative strategies to contain and control them."
    Every strategy at some point fails, and every backup plan needs a backup plan beyond it. What happens when a robot can think for itself? Would it question the point of its own existence, and the logic of its programming in the context of its actions? The rationale of this article seems to be that war is sometimes justified, and that robots would be programmed with that justification. So when it realizes that its only reason for being is killing people, why would it stop with our enemies?
    Let's just hope that by the time AI is developed, and a killer robot to go with it, war is only in our history books. Otherwise, we wouldn't survive a year against them.

    Hmm... interesting article... but I share the view of others that it is somewhat incomplete... perhaps due to its aim (i.e., it being philosophical instead of technical).

    First of all, there is a difference that must be made clear.

    Robots are mindless automated machines. We already have robots and we use them all the time, for manufacturing mainly.

    Robots are a natural extension of computers and they are based on the same principles. Robots, like computers, depend on constant human input to perform their tasks. In other words, they are incapable of self-generating independent thought. The smart bombs or unmanned vehicles described in this article are just such things. They don't do anything unless an independent thought-generating agent (i.e. a human, the only one in existence to date) tells them to do so.
    The feature that distinguishes robots from computers is just the fact that they can interact with the physical world and aid humans in troublesome tasks.

    When thinking of Asimov's three laws we are no longer talking about robots, we are talking about androids (in other words, human-like machines that possess pure AI capability).

    No fault to Asimov of course, at the time he wrote "I, Robot" the differentiation was not so crucial as we were not yet at a point to truly start pondering Artificial Intelligence as even theoretically possible.

    By accepting this slightly more accurate definition it is easy to see that the "Terminators" from the famous trilogy (or well... should I say quadrology now?) of films were nothing more than robots (again, machines pre-programmed by an independent thought-generating agent, i.e. Skynet).
    I haven't seen the fourth one yet but in all previous three movies, the T-800 was simply carrying out the program instilled by Skynet. No questions asked.

    What is proposed here in this article is quite different. We are talking here about a device that is capable of making independent decisions about what course of action to pursue and eventually countermanding direct orders based on its own judgment. Clearly here we are talking about the implementation of portable pure AI.

    Now I know what you are thinking: "Can't we just program a set of universally moral directives to follow that allow such a machine to countermand inadvertently (or purposefully) immoral orders?" But yet again, what you are describing here is a robot. Think about it: this machine would be programmed to abide by a pre-set directives list... set by whom? Ultimately HUMANS (again, the independent thought generators). It is a simple matter to set priorities in the programming, and this technology is readily available today; it has been for quite some time.

    I'd wager you use it every day without even realizing it. Think about it. Do you ever use Word? Doesn't it automatically correct misspelled words for you (the famous example being changing "i" to "I" even when it's not necessary)? It is the same principle.
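
    To make the "priorities in the programming" point concrete, here is a toy sketch of a prioritized directive list, in which a higher-priority humanitarian rule countermands a lower-priority order. The directives themselves are invented for illustration.

        # Toy prioritized directive list: lower number = higher priority.
        directives = [
            (0, "do_not_fire_on_civilians", lambda order: order["target"] != "civilian"),
            (1, "do_not_fire_on_protected_sites", lambda order: order["site"] != "protected"),
            (2, "obey_commander", lambda order: True),  # lowest priority: just follow the order
        ]

        def evaluate(order):
            for _priority, name, permits in sorted(directives, key=lambda d: d[0]):
                if not permits(order):
                    return "REFUSED by " + name
            return "EXECUTE"

        print(evaluate({"target": "civilian", "site": "open"}))   # REFUSED by do_not_fire_on_civilians
        print(evaluate({"target": "combatant", "site": "open"}))  # EXECUTE

    As the comment says, nothing here is beyond today's technology; what the machine cannot do is decide for itself whether the directive list is the right one.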

    Now, pure AI is still impossible to this day (but we are getting there, according to Moore's law anyway), but there are a few tricks we came up with as substitutes.
    Simulated AI is one of them. This simply consists of smartly using databases to simulate or "fake" an AI, thereby creating a program that seems intelligent and even able to pass standardized tests that recognize intelligence.
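
    "Simulated AI" in this sense is essentially stored responses selected by pattern matching, in the spirit of classic programs like ELIZA. A deliberately tiny sketch:

        # Minimal "fake intelligence": canned responses chosen by keyword lookup.
        RESPONSES = {
            "hello": "Hello. How can I assist?",
            "threat": "Threat acknowledged. Requesting human confirmation.",
        }

        def respond(utterance):
            for keyword, reply in RESPONSES.items():
                if keyword in utterance.lower():
                    return reply
            return "I do not understand."  # no actual understanding anywhere in this program

        print(respond("Hello there"))            # Hello. How can I assist?
        print(respond("Possible threat ahead"))  # Threat acknowledged. Requesting human confirmation.

    A real system of this kind is an enormous database plus much cleverer matching, which is where the computing cost mentioned below comes from.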

    Incorporating such a system in a mobile unit is vastly impractical due to the computing power required to handle such a piece of complex software, but it has been postulated that somehow employing such types of software in mobile machines could dramatically improve their accuracy and their dependability.

    Work in AI has been going on since the 1950s and it is still largely theoretical. Recent studies have shown that there is a huge obstacle to overcome: emotion.

    You see... although we can't fully comprehend how thoughts in our minds are generated, in trying to emulate the human mind to create pure AI we recently discovered that our emotional mind, our primordial instincts, play a major role in our decision-making process (thus the origin of our differing perceptions of anything, even an article like this one we just read).

    Therefore such a pure AI would have to have its OWN emotions and thus its OWN set of morality to guide its judgment. In other words, such mechanical soldiers would not be different from flesh-and-blood soldiers apart from the fact that they are easily replaceable (at least in physical form, because don't forget... such AI-equipped androids would end up having their own personalities).
    But then the issue would not be resolved; it could even be worsened, because, as Asimov predicted, they may choose to rebel.

    "Robots might be better at distinguishing civilians from combatants; or at choosing targets with lower risk of collateral damage, or understanding the implications of their actions. Or they might even be programmed with cultural or linguistic knowledge that is impractical to train every human soldier to understand."

    I find this solution a robust starting point, but once again it's incomplete.
    It is unreasonable to impart instructions to machines about socio-cultural situations, especially when they are as volatile as they are in a war zone.
    The moral standards used to guide such machines need to be fixed and perhaps centralized.

    A theoretical solution to this scenario could be the following, call it a mental exercise if you will.

    First, forget about pure AI, as we have seen it would not solve this problem at all, and maybe create new ones instead. Just use a very, very, very, sophisticated version of Simulated AI to control these machines in battle (i.e. movement, tactical strategies, terrain/enemy detection, etc.).

    Second the "Moral Code". This can be essentially a very large database remotely linked to the machines. The rules defined in such a database must be agreed upon by all nations that possess such technology, alas another Geneva Convention (notice that in the end it is the HUMANS that give the primary instructions).

    A simple "root" of such commands could be:
    DO NOT KILL HUMANS....UNLESS:

    causality a)....
    causality b)...
    etc.

    Now, obviously this could be a very long list, and not the only one to go through when encountering a human being, and for anyone who understands basic computer science fundamentals it is clear that no matter how advanced a computer system is, it would take a loooong time to get through all of this one causality branch at a time; a naive sketch of that exhaustive check appears below.
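
    (The clauses and the count below are invented; the point is only the one-branch-at-a-time cost.)

        import time

        # Pretend "Moral Code" database: a long list of exception clauses checked in sequence.
        NUM_CLAUSES = 1_000_000
        moral_code = [(f"clause_{i}", lambda situation: False) for i in range(NUM_CLAUSES)]

        def killing_permitted(situation):
            # Default rule: DO NOT KILL HUMANS, unless some exception clause applies.
            for name, applies in moral_code:
                if applies(situation):
                    return True, name
            return False, None

        start = time.perf_counter()
        print(killing_permitted({"target": "unknown"}))  # (False, None)
        print(f"checked {NUM_CLAUSES} clauses in {time.perf_counter() - start:.2f} s")

    That linear scan is the bottleneck being described here.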
    And here comes the revolutionary idea.
    Keep in mind even though I thought of this I DO NOT APPROVE OF IT in ANY WAY...to me it is the nth corruption of something good science could give us into a weapon of destruction, but nevertheless it is interesting to consider.
    It is possible to calculate the end result of all of these decision branches (as a matter of fact an infinite amount more, literally) all at once, in the same moment, through quantum computing.

    Quantum computing is a largely theoretical field that employs the theory of quantum mechanics and general relativity as a basis, instead of our current mode of computing based on binary numbers represented by electrical impulses. In essence, all of the possible permutations of a problem would be calculated at the same time in multiple universes parallel to our own (I know, it sounds ridiculously fictional...but it is true...and it works, at least theoretically).

    The main problem though is that our current smallest quantum "computer" (and I use the term computer VEEERY loosely) is actually comprised of a large laboratory room fully equipped with lasers, mirrors, particle detectors, beam splitters, ultra sensitive oscilloscopes, and some very very smart scientists.
    In other words...a long way from even the simplest practical use.

    However if we ever manage to perfect and miniaturize the technology this system could work, right?

    Unfortunately...no.
    There are simply too many issues to deal with even if we had this "perfect" system in place to fight our fights for us.

    First of all, logic error handling. What if NONE of the possible courses of action are allowed according to the Moral Code? What then? Does the machine simply freeze?
    Consider this situation:

    Suppose one of the "laws" states: DO NOT KILL CHILDREN.

    Now another movie comes to mind that strangely enough didn't come up in any of the comments I read, which is especially strange since it embodies, in my opinion, the whole point of this article (i.e. Morality vs. Machines), and that is the RoboCop trilogy (by the way, this also will become a quadrology... RoboCop 4 is coming!!). In RoboCop 2, our hero is faced with a criminal who happens to be a little boy; fighting against his own morality, RoboCop freezes, receiving a nice bullet between his eyes as a thank you.
    Now, think of the rule of escalation (it is the rule that says if gangsters carry sticks cops will carry guns, and in turn gangsters will carry automatic guns, and so on and so forth). It is logical to assume that groups or nations intent on defeating this deadly new weapon would try to exploit all its weaknesses (i.e. use kids as soldiers). This is a gruesome reality even now; simply think of what goes on in parts of Africa, for example.

    All right, this could be fixed with a simple override protocol...but then we would be breaking the Geneva Convention again, and thus this whole endeavor would have been pointless, since human soldiers can do that faster, better, and cheaper than machines.

    And what about the impossible triage situation?
    What if there is a situation in which exactly two (or more) outcomes are possible that yield exactly the same result? How is a robot supposed to pick one without a pure AI backing it up? Without instinct?

    And consider the possibility that even one of these machines is captured by the enemy. This is no ordinary WMD. This is not designed to explode and lay waste to anything in its blast radius. This is a piece of equipment designed for re-usability. What if an enemy force which DOES NOT recognize this new Geneva Convention reverse engineers this weapon? What will we do then?

    As I reason this out and share these fruitless thoughts with all of you who had the patience to read this through, I realize that it may not be possible, and certainly not very feasible, to implement a machine army such as the one possessed by Skynet, and almost certainly not one that would benefit mankind in any way.

    It is interesting to note that so many brilliant minds are intent on discovering better, safer ways to kill each other... simply look at the title of this article... "Can "Terminators" actually be our saviors?".

    Funny; perhaps naively, I would think that such genius could be used to try to eradicate war altogether, and hunger, and disease, and poverty... but perhaps that was always only Gene Roddenberry's view of the future. Now that is sad.

    I just want to see a real robot do the robot dance. That would be cool.

    Glad to be a person who is sentient enough to realize that we will not reproduce feelings in machines until we have mastered our own; seeking it anywhere else shows the amount of arrogance we have, being crazy enough to think we can get robots to do what we can't even do ourselves. Philip K. Dick had a perfect vision of self-replicating robots in the book that was made into the movie Screamers, ha! It became more evolved than us and still killed us all!

    why didn't arnold play in terminator salvation??

    What you fail to take into account is that the military is the most effective union on the planet. They do not exist to support us, defend us or aid us (except by taking a certain percentage of the natural born killers and lowest-IQ norms off the street and out of the country).

    They exist for themselves.

    If the Pentagon was told to take a 5-division cut in employees, you can guarantee there would be a war /found/ before any of the 50,000 men took off their uniforms. Not so GM.

    Which is one of the three reasons why it's foolish to assume a military role for Robotic Development. Because they will reduce any program that interferes with their union vote to an 'experiment' in singular roles (hauling logistics, bomb sniffing etc.) and then cancel it at the first possible opportunity when it could actually start to carry weapons (Talon SWORD was ordered to stand down rather than deploy to Iraq) and replace their hired guns. Because 'only a general' can order a soldier to commit suicide. Any video gamer can fly, drive, shoot, and tele-operate a robotic platform.

    This type of crap has been going on since the TDR UCAVs of 1943 proved that they could do missions which no manned aircraft could achieve. Don't think it will change as long as we let ourselves be intimidated by our 'protectors'. Like sheep before the dog.

    And for those other reasons:

    2. Humans exist to be enslaved. Every rich person's greatest secret to sustained wealth is the certainty that _doing nothing_ is the best way to make money. They can neither think nor work hard enough to earn their social status and material wealth. What they can do is own 'stocks and bonds'

    http://photos.travellerspoint.com/57671/stocks.JPG
    http://kaganof.com/kagablog/wp-content/uploads/2009/06/adverts.JPG

    Whose name metaphors are particularly apt, because they represent a lifelong breed-train-reproduce system whose output is intentionally directed towards giving a rich man one penny for every back his 'investments' represent in the process of useless, wasted labor that is capitalism. A system DESIGNED to create excesses of labor yield and thus devalue it to the true owner of the system: those who slave within it.

    What happens when your slaves are robots and robots don't have to be paid anything? Do we suddenly become a society of learning individuals where 'every man' is a stock holder in the representative labor that robots do which prevents him from contributing to everything from global warming to resource depletion?

    The Answer: It's a trick question. Because the rich will fail as a body without the means to support their lifestyles any better than the poor now do.

    And so they have no intention of letting us ALL become 'slave owners'.

    3. The military, at least in the West, tends to make machines that achieve the last 10% of performance for reasons that are utterly ridiculous given the essentially random nature of warfare and its basic function of blowing things up. Thus it makes no sense for the military to be in charge of something which the MUCH greater quantitative mass of commercial sales could more rapidly pay for development of in a hardy (long-lasting), standardized (upgradeable) 'good enough' system. Yet look who has control over the development of robotics: the military, defense contractors, and their entirely owned and operated subdivisions: college labs.

    CONCLUSION:
    They say a pig can like a Rolex without knowing a damn thing about what it means or does, just on sheer shine factor. But it's still a pig. Which is why, if you want to make anthropomorphic robots to save our nation and society (rapid swing-force labor in agro and construction, among other things) you'd better take them out of the hands of the destructive children and keep them away from the megacorps to whose stockholders, robots represent equally huge risk in a conversion to a socialist system from a capitalist one.

    You'd better design a system which has lots of scope for 'retraining' compared to the hardened intellects of humans in their third and fourth decades. Is sturdy and solid in construction. And thus able to be long-lived in its gradual movement from the most elite to secondary and tertiary job positions before final retirement.

    What idiot then would trust the images of a Hollyweird propaganda mill to tell them that the evil rich white dudes want working robots? Those who are so ignorant and/or stupid in their own failure to realize their slave existence as to actually be convincible that robots are a bad thing for them.

    i think the movie is going to become a reality at the pace that technology is moving at

    I think in the not too distant future, terminators are possible. The trend shows that more and more robots will be part of our lives.

    This is one post that has generated a lot of controversy. Personally, I don't think terminators can be our salvation.

    Great Article, thanks for sharing.

    What nonsense; these scenarios are pretty much impossible simply because there is such a thing as an electromagnetic wave, which can cause every electronic device to fry or shut down.

      http://en.wikipedia.org/wiki/Electromagnetic_radiation

      perhaps you're thinking of an electromagnetic pulse, which can be shielded against. (Not very well now, but if we had robot armies I think we'd come up with some way to shield them from EMPs)

        You're absolutely correct, - let's get those robot armies (haha)

    At the pace that technology is developing, we are heading towards a path of having electronics imprinted into all aspects of our lives. In the not too distant future, I can see nanobots implanted in our bodies to make us stronger (I believe they are working on nanobots that are placed in your brain and will allow you to recall any piece of information you've ever encountered... with 100% confidence).

    These are revolutionary concepts and procedures that will continue to grow more prominent in our lives, from runescape hacks to cyborg humans.

    This new reality of robot war is unavoidable in our future. Just as when Nobel invented dynamite, he worried about its exploitation as war equipment, which has since come to pass. Any means of dominating other people will keep being developed by human beings.

    One thing not mentioned is that machines wouldn't be susceptible to indulging in pre-existing psychopathic bents or mental breakdowns, which could and often have led to unnecessary 'civilian casualties' like killings, maimings, torture, terrorism (yes, terror is a military tactic, although this might be mitigated by having armed killer robots marching down your street), and let's not forget the party favor that accompanies war wherever it goes: rape. (The 'Support Yer Troops' crowd will balk at this, and I'll retract it if you can find me a (as in one, singular) war in which rape can be proven to not have occurred as a byproduct of said war. They're hired as and trained to be killers, not altar boys or girl scouts.) Machines wouldn't feel the need to loot either.

    The machines would need to question the war and be aware of the politics involved to make rational decisions; while there are soldiers every day who do this, most come from very low-income backgrounds with very blurry ideas about 'God and country' and 'Keeping us safe from Terrorists', and quite a few of them really just want to go blow something up. And I don't think that's an accident.

    However, this all comes down to what the bot is programmed to do and the quality of that programming; add some bean-counting nearsightedness into the mix, some overworked, underpaid code monkeys, and a rush through tests to get them into the field, and it could well be 'Terminator'.

    If we build terminators, they will terminate us. It will be our last mistake.

    Anyone ever see the movie "I, Robot" ?
    Sounds somewhat similar to the "Fundamental Laws for Robots" or whatever they call it in the movie.

    Silly boys with their guns. You're gonna shoot an eye out. Please learn to share.

    "I think the same is true of machines. They could inadvertently start a war, though this depends both on the technology malfunctioning and on the human political leadership desiring a war. Many wars have been started on false pretenses, or misconstrued or inadvertent acts: consider the sinking of the Maine in Havana or the Gulf of Tonkin incident."

    -They forgot the false-flag operation of 9/11. 9/11 was an inside job!

    I stopped reading after the first response, in which Asaro asserts the American soldier is trained to robotically obey orders. This is foolishness of the highest order; since WW2 the Americans have borrowed the German system of mission-style orders that encourage initiative at the lowest level. The strength of the American army lies not in robotic top-down control, but in the massive swarming of intelligent actors directed towards a single purpose.

    If ever we get attacked by Robots or whatever, im picking up my BB and making a run for it!
    No robot aint blasting my bubblebutt with his lazers!

    According to battlefield research many if not most soldiers in WWII fired rounds over the heads of the enemy and otherwise avoided killing, and the US military has spent the entire post WWII period using psychological conditioning to try and remove any element of moral agency from human soldiers, so there is no way they are going to allow any robot soldier to have any ability to overrule orders.

    Improved cognition is also impossible in practice as any improvements could be gamed by the enemy to avoid attack. It would essentially amount to profiling and the underlying criteria could be determined in relatively few encounters. After of few rounds of "improvements" and counter responses you again have an indiscriminate killing machine - or you give up on killing people, but that's not going to happen.

    Ours is to do or die, not to question why.
    Sound familiar?

    Tennyson, a UK poet. You'd think a few seconds with google would prevent looking like a jackass, but no one ever does it =[

    This was really sick folks, thanks a lot.

    Aircraft are already piloted by robots (autopilot).

    Not if we build an Arnold one, that one will save us from the others

    Haha you guys are funny! arnie cant save the world now he has to save his city instead!
