
Killer Robots: Your Kids V. Theirs

The Harop, a kamikaze drone, bolts from its launcher like a horse out of the gates. But it is not built for speed, nor for a jockey. Instead it just loiters, unsupervised, too high for those on the battlefield below to hear the thin old-fashioned whine of its propeller, waiting for its chance.

Israel Aerospace Industries (IAI) has been selling the Harop for more than a decade. A number of countries have bought the drone, including India and Germany. …In 2017, according to a report by the Stockholm International Peace Research Institute (SIPRI), a think-tank, the Harop was one of 49 deployed systems which could detect possible targets and attack them without human intervention. It is thus very much the sort of thing which disturbs the coalition of 89 non-governmental organisations (NGOs) in 50 countries that has come together under the banner of the “Campaign to Stop Killer Robots”.

Consider the Phalanx guns used by the navies of America and its allies. Once switched on, the Phalanx will fire on anything it sees heading towards the ship it is mounted on. And in the case of a ship at sea that knows itself to be under attack by missiles too fast for any human trigger finger, that seems fair enough. Similar arguments can be made for the robot sentry guns in the demilitarised zone (DMZ) between North and South Korea.
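That narrowness is the point: a close-in system like the Phalanx applies a simple fire-on-approach rule rather than any judgment about what it is shooting at. A minimal sketch of such a rule in Python, with thresholds and track fields invented purely for illustration (this is not the actual fire-control logic):

```python
# Hypothetical sketch of the kind of engagement rule a close-in weapon
# system embodies once switched on: anything tracked on a closing course
# above a speed threshold, inside the gun's range, is engaged with no
# human in the loop. All values here are invented for illustration.

CLOSING_SPEED_THRESHOLD_MPS = 200.0   # ignore slow contacts (birds, small boats)
MAX_ENGAGE_RANGE_M = 5_000.0          # only engage inside the gun's effective range

def should_engage(track):
    """track: {'range_m': ..., 'closing_speed_mps': ...} relative to own ship."""
    closing = track["closing_speed_mps"] > CLOSING_SPEED_THRESHOLD_MPS
    in_range = track["range_m"] < MAX_ENGAGE_RANGE_M
    return closing and in_range       # no target identification, just geometry and speed

# A sea-skimming missile closing at 300 m/s is engaged; a contact closing
# at 60 m/s is not.
print(should_engage({"range_m": 4_000, "closing_speed_mps": 300}))   # True
print(should_engage({"range_m": 4_000, "closing_speed_mps": 60}))    # False
```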

Autonomous vehicles do not have to become autonomous weapons, even when capable of deadly force. The Reaper drones with which America assassinates enemies are under firm human control when it comes to acts of violence, even though they can fly autonomously…. One of the advantages that MBDA, a European missile-maker, boasts for its air-to-ground Brimstones is that they can “self-sort” based on firing order. If different planes launch volleys of Brimstones into the same “kill box”, where they are free to do their worst, the missiles will keep tabs on each other to reduce the chance that two strike the same target.
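The “self-sorting” MBDA describes is essentially a target-deconfliction rule. A minimal, hypothetical Python sketch of the idea, with the firing-order rule, target list and scoring all invented for illustration (MBDA has not published its algorithm):

```python
# Hypothetical sketch of "self-sorting" by firing order: each missile in a
# salvo claims the best unclaimed target it can see, in the order it was
# fired, so no two missiles strike the same target. Names and scoring are
# invented for illustration; this is not MBDA's actual algorithm.

def self_sort(missiles, targets):
    """Assign each missile (listed in firing order) a distinct target."""
    claimed = set()
    assignments = {}
    for missile in missiles:                      # firing order = priority order
        best = None
        for target in targets:
            if target["id"] in claimed:
                continue                          # already claimed by an earlier missile
            if best is None or target["value"] > best["value"]:
                best = target
        if best is not None:
            claimed.add(best["id"])
            assignments[missile] = best["id"]
    return assignments

if __name__ == "__main__":
    missiles = ["m1", "m2", "m3"]
    targets = [{"id": "t1", "value": 0.9},
               {"id": "t2", "value": 0.7},
               {"id": "t3", "value": 0.4}]
    print(self_sort(missiles, targets))           # {'m1': 't1', 'm2': 't2', 'm3': 't3'}
```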

Cost is also a factor in armies where trained personnel are pricey. “The thing about robots is that they don’t have pensions,” says General Sir Richard Barrons, a former commander of Britain’s Joint Forces Command…If keeping a human in the loop was merely a matter of spending more, it might be deemed worthwhile regardless. But human control creates vulnerabilities. It means that you must pump a lot of encrypted data back and forth. What if the necessary data links are attacked physically—for example with anti-satellite weapons—jammed electronically or subverted through cyberwarfare? Future wars are likely to be fought in what America’s armed forces call “contested electromagnetic environments”. The Royal Air Force is confident that encrypted data links would survive such environments. But air forces have an interest in making sure there are still jobs for pilots; this may leave them prey to unconscious bias.

The vulnerability of communication links to interference is an argument for greater autonomy. But autonomous systems can be interfered with, too. The sensors for weapons like Brimstone need to be a lot more fly than those required by, say, self-driving cars, not just because battlefields are chaotic, but also because the other side will be trying to disorient them. Just as some activists use asymmetric make-up to try to confuse face-recognition systems, so military targets will try to distort the signatures which autonomous weapons seek to discern. Paul Scharre, author of “Army of None: Autonomous Weapons and the Future of War”, warns that the neural networks used in machine learning are intrinsically vulnerable to spoofing.
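A toy example makes the spoofing point concrete. Even for the simplest learned classifier, an adversary who can estimate the model’s weights can add a small, structured perturbation that flips its decision; the gradient-based attacks Mr Scharre alludes to exploit the same principle in neural networks. The sketch below is a deliberately simplified linear stand-in, not any fielded system:

```python
# Toy illustration (not any fielded system) of why learned classifiers are
# spoofable: for a simple linear "target / not target" scorer, an adversary
# who knows (or estimates) the weights can add a small perturbation aligned
# with the sign of those weights and flip the decision. This mirrors the
# fast-gradient-sign idea used against neural networks.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)            # classifier weights (assumed known to the attacker)
x = -0.1 * np.sign(w) + 0.01 * rng.normal(size=100)   # an input scored as "not target"

def score(v):
    return float(w @ v)             # >0 means "target", <=0 means "not target"

epsilon = 0.2                       # perturbation budget per feature
x_adv = x + epsilon * np.sign(w)    # small nudge aligned with the weights

print(score(x), score(x_adv))       # the sign flips: the spoofed input now reads as a "target"
```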

In 2017 the UN Convention on Certain Conventional Weapons put together a group of governmental experts to study the finer points of autonomy. As well as trying to develop a common understanding of what weapons should be considered fully autonomous, it is considering both a blanket ban and other options for dealing with the humanitarian and security challenges that they create. Most states involved in the convention’s discussions agree on the importance of human control. But they differ on what this actually means. In a paper for Article 36, an advocacy group named after a provision of the Geneva conventions that calls for legal reviews of new methods of warfare, Heather Roff and Richard Moyes argue that “a human simply pressing a ‘fire’ button in response to indications from a computer, without cognitive clarity or awareness” is not really in control. “Meaningful control”, they say, requires an understanding of the context in which the weapon is being used as well as capacity for timely and reasoned intervention. It also requires accountability…

The two dozen states that want a legally binding ban on fully autonomous weapons are mostly military minnows like Djibouti and Peru, but some members, such as Austria, have diplomatic sway. None of them has the sort of arms industry that stands to profit from autonomous weapons. They ground their argument in part on International Humanitarian Law (IHL), a corpus built around the rules of war laid down in the Hague and Geneva conventions. This demands that armies distinguish between combatants and civilians, refrain from attacks where the risk to civilians outweighs the military advantage, use no more force than is proportional to the objective and avoid unnecessary suffering…Beyond the core group advocating a ban there is a range of opinions. China has indicated that it supports a ban in principle; but on use, not development. France and Germany oppose a ban, for now; but they want states to agree a code of conduct with wriggle room “for national interpretations”. India is reserving its position. It is eager to avoid a repeat of nuclear history, in which technological have-nots were locked out of game-changing weaponry by a discriminatory treaty.

At the far end of the spectrum a group of states, including America, Britain and Russia, explicitly opposes the ban. These countries insist that existing international law provides a sufficient check on all future systems….States are likely to sacrifice human control for self-preservation, says General Barrons. “You can send your children to fight this war and do terrible things, or you can send machines and hang on to your children.” Other people’s children are other people’s concern.

Excerpts from Briefing Autonomous Weapons: Trying to Restrain the Robots, Economist, Jan. 19, 2019, at 22

Military Robots and Automated Killing

Military robots come in an astonishing range of shapes and sizes. DelFly, a dragonfly-shaped surveillance drone built at the Delft University of Technology in the Netherlands, weighs less than a gold wedding ring, camera included. At the other end of the scale is America’s biggest and fastest drone, the $15m Avenger, the first of which recently began testing in Afghanistan. It uses a jet engine to carry up to 2.7 tonnes of bombs, sensors and other types of payload at more than 740kph (460mph).

On the ground, robots range from truck-sized to tiny. TerraMax, a robotics kit made by Oshkosh Defense, based in Wisconsin, turns military lorries or armoured vehicles into remotely controlled or autonomous machines. And smaller robotic beasties are hopping, crawling and running into action, as three models built by Boston Dynamics, a spin-out from the Massachusetts Institute of Technology (MIT), illustrate.  By jabbing the ground with a gas-powered piston, the Sand Flea can leap through a window, or onto a roof nine metres up. Gyro-stabilisers provide smooth in-air filming and landings. The 5kg robot then rolls along on wheels until another hop is needed—to jump up some stairs, perhaps, or to a rooftop across the street. Another robot, RiSE, resembles a giant cockroach and uses six legs, tipped with short, Velcro-like spikes, to climb coarse walls. Biggest of all is the LS3, a four-legged dog-like robot that uses computer vision to trot behind a human over rough terrain carrying more than 180kg of supplies. The firm says it could be deployed within three years.

Demand for land robots, also known as unmanned ground vehicles (UGVs), began to pick up a decade ago after American-led forces knocked the Taliban from power in Afghanistan. Soldiers hunting Osama bin Laden and his al-Qaeda fighters in the Hindu Kush were keen to send robot scouts into caves first. Remote-controlled ground robots then proved enormously helpful in the discovery and removal of makeshift roadside bombs in Afghanistan, Iraq and elsewhere. Visiongain, a research firm, reckons a total of $689m will be spent on ground robots this year. The ten biggest buyers, in descending order, are America, Israel (a distant second), Britain, Germany, China, South Korea, Singapore, Australia, France and Canada.

Robots’ capabilities have steadily improved. Upload a mugshot into an SUGV, a briefcase-sized robot that runs on caterpillar tracks, and it can identify a man walking in a crowd and follow him. Its maker, iRobot, another MIT spin-out, is best known for its robot vacuum cleaners. Its latest military robot, FirstLook, is a smaller device that also runs on tracks. Equipped with four cameras, it is designed to be thrown through windows or over walls.

Another throwable reconnaissance robot, the Scout XT Throwbot made by Recon Robotics, based in Edina, Minnesota, was one of the stars of the Ground Robotics Capabilities conference held in San Diego in March. Shaped like a two-headed hammer with wheels on each head, the Scout XT has the heft of a grenade and can be thrown through glass windows. Wheel spikes provide traction on steep or rocky surfaces. In February the US Army ordered 1,100 Scout XTs for $13.9m. Another version, being developed with the US Navy, can be taken to a ship inside a small aquatic robot, and will use magnetic wheels to climb up the hull and onto the deck, says Alan Bignall, Recon’s boss.

Even more exotic designs are in development. DARPA, the research arm of America’s Department of Defence, is funding the development of small, soft robots that move like jerky slithering blobs. EATR, another DARPA project, is a foraging robot that gathers leaves and wood for fuel and then burns it to generate electricity. Researchers at Italy’s Sant’Anna School of Advanced Studies, in Pisa, have designed a snakelike aquatic robot. And a small helicopter drone called the Pelican, designed by German and American companies, could remain aloft for weeks, powered by energy from a ground-based laser….

A larger worry is that countries with high-performance military robots may be more inclined to launch attacks. Robots protect soldiers and improve their odds of success. Using drones sidesteps the tricky politics of putting boots on foreign soil. In the past eight years drone strikes by America’s Central Intelligence Agency (CIA) have killed more than 2,400 people in Pakistan, including 479 civilians, according to the Bureau of Investigative Journalism in London. Technological progress appears to have contributed to an increase in the frequency of strikes. In 2005 CIA drones struck targets in Pakistan three times; last year there were 76 strikes there. Do armed robots make killing too easy?

Not necessarily… Today’s drones, blimps, unmanned boats and reconnaissance robots collect and transmit so much data, says Missy Cummings, a former fighter pilot who is now a robotics researcher at MIT, that Western countries now practise “warfare by committee”. Government lawyers and others in operations rooms monitor video feeds from robots to call off strikes that are illegal or would “look bad on CNN”, she says. And unlike pilots at the scene, these remote observers are unaffected by the physical toil of flying a jet or the adrenalin rush of combat.

In March Britain’s Royal Artillery began buying robotic missiles designed by MBDA, the European missile-maker. The Fire Shadow is a “loitering munition” capable of travelling 100km, more than twice the maximum range of a traditional artillery shell. It can circle in the sky for hours, using sensors to track even a moving target. A human operator, viewing a video feed, then issues an instruction to attack, to fly elsewhere to find a better target, or to abort the mission, in which case the missile destroys itself. But bypassing the human operator to automate attacks would be, technologically, in the “realm of feasibility”, an MBDA spokesman says…

Traditional rules of engagement stipulate that a human must decide if a weapon is to be fired. But this restriction is starting to come under pressure. Already, defence planners are considering whether a drone aircraft should be able to fire a weapon based on its own analysis. In 2009 the authors of a US Air Force report suggested that humans will increasingly operate not “in the loop” but “on the loop”, monitoring armed robots rather than fully controlling them. Better artificial intelligence will eventually allow robots to “make lethal combat decisions”, they wrote, provided legal and ethical issues can be resolved…..

Pressure will grow for armies to automate their robots if only so machines can shoot before being shot, says Jürgen Altmann of the Technical University of Dortmund, in Germany, and a founder of the International Committee for Robot Arms Control, an advocacy group. Some robot weapons already operate without human operators to save precious seconds. An incoming anti-ship missile detected even a dozen miles away can be safely shot down only by a robot, says Frank Biemans, head of sensing technologies for the Goalkeeper automatic ship-defence cannons made by Thales Nederland.  Admittedly, that involves a machine destroying another machine. But as human operators struggle to assimilate the information collected by robotic sensors, decision-making by robots seems likely to increase. This might be a good thing, says Ronald Arkin, a roboticist at the Georgia Institute of Technology, who is developing “ethics software” for armed robots. By crunching data from drone sensors and military databases, it might be possible to predict, for example, that a strike from a missile could damage a nearby religious building. Clever software might be used to call off attacks as well as initiate them.
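The kind of check Mr Arkin describes can be sketched as a simple geometric veto: estimate the weapon’s effects radius and call off the strike if it overlaps a protected site. The sites, distances and thresholds below are invented for illustration; his actual “ethical governor” is far more elaborate:

```python
# A minimal, hypothetical sketch of the kind of check such "ethics software"
# might perform: veto a strike if its estimated effects radius overlaps a
# protected site (a religious building, a hospital). The data and thresholds
# are invented; this is not Prof. Arkin's actual system.
import math

PROTECTED_SITES = [
    {"name": "religious building", "lat": 34.5010, "lon": 69.1700, "safety_radius_m": 150},
    {"name": "hospital",           "lat": 34.5102, "lon": 69.1821, "safety_radius_m": 250},
]

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (haversine formula)."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def strike_permitted(lat, lon, blast_radius_m):
    """Return (allowed, reason): call off the strike if any protected site is at risk."""
    for site in PROTECTED_SITES:
        d = distance_m(lat, lon, site["lat"], site["lon"])
        if d < blast_radius_m + site["safety_radius_m"]:
            return False, f"too close to {site['name']} ({d:.0f} m away)"
    return True, "no protected site within the predicted effects radius"

print(strike_permitted(34.5015, 69.1705, 100))   # vetoed: near the religious building
print(strike_permitted(34.4800, 69.1500, 100))   # permitted
```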

In the air, on land and at sea, military robots are proliferating. But the revolution in military robotics does have an Achilles heel, notes Emmanuel Goffi of the French air-force academy in Salon-de-Provence. As robots become more autonomous, identifying a human to hold accountable for a bloody blunder will become very difficult, he says. Should it be the robot’s programmer, designer, manufacturer or human overseer, or that overseer’s superiors? It is hard to say. The backlash from a deadly and well-publicised mistake may be the only thing that can halt the rapid march of the robots.

Robots go to war: March of the robots, Economist Technology Quarterly, June 2, 2012