Observers of the ongoing conflicts in Iraq and Afghanistan have hotly debated the wisdom of using unmanned drone aircraft to carry out military operations. These debates have taken place within the traditional laws-of-war rubric for assessing tactics and strategy: did the attack accomplish a military objective and did it minimize the risk of civilian casualties? However, the substitution of machines for men in an ever-increasing number of combat roles means that armies, governments, and legal scholars will have to rethink the laws of war as they apply to robot belligerents.
The trend toward employing more sophisticated machines in ever larger numbers will only accelerate, writes P. W. Singer in his new book, Wired for War: The Robotics Revolution and Conflict in the 21st Century. The American military has already recognized the myriad benefits of robot warfare: greater accuracy, reduced risk to personnel, and far more cost-effective operations. Consequently, the Pentagon plans to invest hundreds of billions of dollars in developing and deploying machine warriors. However, as Singer discusses, there has been no comparable investment in working out the legal ramifications of killing machines. The laws of war provide no established framework for evaluating the next step in weaponized robot technology: a machine with no human controller. Under the current legal paradigm, with no person to appear before a tribunal as the responsible actor, mistakes might go unpunished.
In Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen take some of the first steps toward imagining how robots could make ethical choices. They examine serious shortcomings in the ability of current choice algorithms to yield predictably good, safe, and legal outcomes. As a result, they, along with Singer, advocate that in the absence of human control, mechanized combatants be restricted to nonlethal weapons.