
How to Fool your Enemy: Artificial Intelligence in Conflict

The contest between China and America, the world’s two superpowers, has many dimensions… One of the most alarming and least understood is the race towards artificial-intelligence-enabled warfare. Both countries are investing large sums in militarised artificial intelligence (AI), from autonomous robots to software that gives generals rapid tactical advice in the heat of battle… As Jack Shanahan, a general who is the Pentagon’s point man for AI, put it last month, “What I don’t want to see is a future where our potential adversaries have a fully AI-enabled force and we do not.”

AI-enabled weapons may offer superhuman speed and precision. To gain a military advantage, armies will be tempted to allow such systems not only to recommend decisions but also to give orders. That could have worrying consequences. Able to think faster than humans, an AI-enabled command system might cue up missile strikes on aircraft carriers and airbases at a pace that leaves no time for diplomacy, and in ways its operators do not fully understand. On top of that, AI systems can be hacked, and tricked with manipulated data.

AI in war might aid surprise attacks or confound them, and the death toll could range from none to millions. Unlike missile silos, software cannot be spied on from satellites. And whereas warheads can be inspected by enemies without reducing their potency, showing the outside world an algorithm could compromise its effectiveness. The incentive may be for both sides to mislead the other. “Adversaries’ ignorance of AI-developed configurations will become a strategic advantage,” suggests Henry Kissinger, who led America’s cold-war arms-control efforts with the Soviet Union… Amid a confrontation between the world’s two big powers, the temptation will be to cut corners for temporary advantage.

Excerpts from Mind control: Artificial intelligence and war, Economist, Sept. 7, 2019
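The excerpt’s closing warning, that AI systems can be “hacked, and tricked with manipulated data,” refers to what the machine-learning literature calls adversarial examples. Below is a minimal sketch of the best-known construction, the fast gradient sign method (FGSM), run against a toy linear classifier in NumPy. The classifier, its weights and the input are all invented for illustration; no real military system is implied.

```python
# Fast gradient sign method (FGSM) against a toy linear classifier.
# Everything here is illustrative: a stand-in model, not a real system.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=1000)          # model weights (toy classifier)
x = rng.normal(size=1000)          # a clean input
label = np.sign(w @ x)             # the model's original decision

# Nudge every feature by epsilon against the current decision. For a
# linear model the gradient of the score with respect to x is just w.
epsilon = 0.1
x_adv = x - epsilon * label * np.sign(w)

print("clean score:           ", float(w @ x))
print("perturbed score:       ", float(w @ x_adv))
print("largest feature change:", float(np.abs(x_adv - x).max()))  # 0.1
```

A per-feature change of 0.1, far below the noise in most sensor data, is enough to flip the decision, and an adversary who knows or can probe the model’s weights can compute it cheaply. That is one reason, as the excerpt notes, that showing the outside world an algorithm could compromise its effectiveness.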

Example of the Use of AI in Warfare: The Real-time Adversarial Intelligence and Decision-making (RAID) program, run under the auspices of the Defense Advanced Research Projects Agency’s (DARPA) Information Exploitation Office (IXO), focuses on the challenge of anticipating enemy actions in a military operation. In the US Air Force community, the term “predictive battlespace awareness” refers to capabilities that would help the commander and staff characterize and predict likely enemy courses of action… Today’s practices of military intelligence and decision-making do include a number of processes specifically aimed at predicting enemy actions. Currently, these processes are largely manual and mental, and do not involve any significant use of technical means. Even when computerized wargaming is used (albeit rarely in field conditions), it relies either on human guidance of the simulated enemy units or on simple reactive behaviors of those units; in neither case is there a computerized prediction of intelligent, forward-looking enemy actions…

[The deception reasoning of the adversary is very important in this context.] Deception reasoning refers to an important aspect of predicting enemy actions: military operations have always depended crucially on the ability to use various forms of concealment and deception for friendly purposes while detecting and counteracting the enemy’s concealment and deception. Therefore, adversarial reasoning must include deception reasoning.

The RAID program will develop a real-time adversarial predictive analysis tool that operates as an automated enemy predictor, providing a continuously updated picture of probable enemy actions in tactical ground operations. The program will strive to prove that adversarial reasoning can be automated, and that automated adversarial reasoning can include deception…

Excerpts from Real-time Adversarial Intelligence and Decision-making (RAID), US Federal Grants
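RAID’s actual models are not public, but the kind of “deception reasoning” the solicitation describes can be illustrated with a toy Bayesian update over enemy courses of action (COAs), in which an observed signature may be deliberately staged. The COA names, the deception rate and the observation sequence below are all invented for illustration.

```python
# Toy adversarial prediction with deception: track a posterior over enemy
# courses of action (COAs) when observed signatures may be staged.
COAS = ["attack_north", "attack_south", "withdraw"]

def likelihood(signature, true_coa, p_deceive=0.3):
    """P(signature | true COA): with probability p_deceive the enemy
    shows a signature belonging to some other COA."""
    if signature == true_coa:
        return 1.0 - p_deceive
    return p_deceive / (len(COAS) - 1)

def update(posterior, signature):
    post = {c: posterior[c] * likelihood(signature, c) for c in COAS}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

posterior = {c: 1 / 3 for c in COAS}
for sig in ["attack_north", "attack_north", "withdraw"]:
    posterior = update(posterior, sig)
    print(sig, {c: round(p, 3) for c, p in posterior.items()})
```

Raising p_deceive flattens the posterior: the more deception an enemy is credited with, the less any single observed signature is worth. That is the quantitative content of the claim that adversarial reasoning must include deception reasoning.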

Black Operations are Getting Blacker: US Military

Heterogeneous Collaborative Unmanned Systems (HCUS), as these drones will be known, would be dropped off by either a manned submarine or one of the navy’s big new Orca robot submersibles.

[Image: Logo for Orca submarine by Lockheed Martin]

They could be delivered individually, but will more often be part of a collective system called an encapsulated payload. Such a system will then release small underwater vehicles able to identify ships and submarines by their acoustic signatures, and also aerial drones similar to the BlackWing reconnaissance drones already flown from certain naval vessels.

[Image: BlackWing reconnaissance drone]
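The article says the underwater vehicles identify ships and submarines “by their acoustic signatures”. A common textbook approach is to match the narrowband tonal lines in a contact’s spectrum against a library of known signatures. The sketch below does this with cosine similarity; the frequencies and library entries are invented, and real classifiers are far more sophisticated.

```python
# Toy acoustic-signature matcher: compare a contact's narrowband tonal
# lines against a small library of known signatures. All frequencies and
# classes are invented for illustration.
import numpy as np

LIBRARY = {                      # hypothetical tonal lines in Hz
    "cargo ship":  [8.3, 16.6, 50.0],
    "diesel sub":  [4.1, 12.4, 24.8],
    "patrol boat": [22.0, 44.0, 66.0],
}

def to_vector(tonals, fmax=100.0, bins=200):
    """Place each tonal line into a fixed 0.5 Hz frequency grid."""
    v = np.zeros(bins)
    for f in tonals:
        v[min(int(f / fmax * bins), bins - 1)] = 1.0
    return v

def classify(observed):
    obs = to_vector(observed)
    scores = {}
    for name, tonals in LIBRARY.items():
        ref = to_vector(tonals)
        denom = np.linalg.norm(obs) * np.linalg.norm(ref)
        scores[name] = float(obs @ ref / denom) if denom else 0.0
    return max(scores, key=scores.get), scores

best, scores = classify([4.0, 12.5, 24.9])   # a noisy diesel-sub-like contact
print(best, {k: round(v, 2) for k, v in scores.items()})
```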

Once the initial intelligence these drones collect has been analysed, a payload’s operators will be in a position to relay further orders. They could, for example, send aerial drones ashore to drop off solar-powered ground sensors at specified points. These sensors, typically disguised as rocks, will send back the data they collect via drones of the sort that dropped them off. Some will have cameras or microphones, others seismometers which detect the vibrations of ground vehicles, while others still intercept radio traffic or Wi-Fi.

[Image: Lockheed Martin ground sensor disguised as a rock]
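The design of these sensors is not public, but the detection step the article describes, flagging the ground vibrations of passing vehicles, is classically done with a short-term-average over long-term-average (STA/LTA) trigger borrowed from seismology. The window lengths and threshold below are illustrative guesses, not Lockheed Martin’s design.

```python
# Toy seismic trigger of the kind a buried sensor might run: flag samples
# where short-term vibration energy jumps well above the long-term
# background (an STA/LTA detector).
import random

def detect_events(samples, short=50, long=1000, ratio=4.0):
    """Return indices where the short-term average energy exceeds
    `ratio` times the long-term average."""
    hits = []
    for i in range(long, len(samples)):
        sta = sum(s * s for s in samples[i - short:i]) / short
        lta = sum(s * s for s in samples[i - long:i]) / long
        if lta > 0 and sta / lta > ratio:
            hits.append(i)
    return hits

# Synthetic trace: quiet background, then a "vehicle" near sample 3000.
random.seed(1)
trace = [random.gauss(0, 1) for _ in range(5000)]
trace[3000:3100] = [random.gauss(0, 8) for _ in range(100)]
hits = detect_events(trace)
print("first trigger at sample", hits[0] if hits else None)
```

A trigger like this can run on a microcontroller for long stretches; per the article, flagged events would then be queued for transmission to the next relay drone that passes overhead.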

HCUS will also be capable of what are described as “limited offensive effects”. Small drones like BlackWing can be fitted with warheads powerful enough to destroy an SUV or a pickup truck. Such drones are already used to assassinate the leaders of enemy forces. They might be deployed against fuel and ammunition stores, too.

Unmanned systems such as HCUS thus promise greatly to expand the scope of submarine-based spying and special operations. Drones are cheap, expendable and can be deployed with no risk of loss of personnel. They are also “deniable”. Even when a spy drone is captured it is hard to prove where it came from. Teams of robot spies and saboteurs launched from submarines, both manned and unmanned, could thus become an important feature of the black-ops of 21st-century warfare.

Excerpts from Submarine-launched drone platoons will soon be emerging from the sea: Clandestine Warfare, Economist, June 22, 2019

How to Navigate the Rubble: DARPA

Imagine a natural disaster scenario, such as an earthquake, that inflicts widespread damage to buildings and structures, critical utilities and infrastructure, and threatens human safety. Having the ability to navigate the rubble and enter highly unstable areas could prove invaluable to saving lives or detecting additional hazards among the wreckage.

Dr. Ronald Polcawich, a DARPA program manager in the Microsystems Technology Office (MTO), explains: “There are a number of environments that are inaccessible for larger robotic platforms. Smaller robotic systems could provide significant aid, but shrinking down these platforms requires significant advancement of the underlying technology.”

Technological advances in microelectromechanical systems (MEMS), additive manufacturing, piezoelectric actuators, and low-power sensors have allowed researchers to expand into the realm of micro-to-milli robotics. However, due to the technical obstacles that arise as the technology shrinks, these platforms lack the power, navigation, and control to accomplish complex tasks proficiently.

To help overcome the challenges of creating extremely SWaP-constrained (size, weight and power) microrobotics, DARPA is launching a new program called SHort-Range Independent Microrobotic Platforms (SHRIMP). The goal of SHRIMP is to develop and demonstrate multi-functional micro-to-milli robotic platforms for use in natural and critical disaster scenarios. To achieve this mission, SHRIMP will explore fundamental research in actuator materials and mechanisms as well as power storage components, both of which are necessary to create the strength, dexterity, and independence of functional microrobotic platforms.

“The strength-to-weight ratio of an actuator influences both the load-bearing capability and endurance of a micro-robotic platform, while the maximum work density characterizes the capability of an actuator mechanism to perform high-intensity tasks or operate over a desired duration,” said Polcawich.

Excerpts from Developing Microrobotics for Disaster Recovery and High-Risk Environments: SHRIMP program seeks to advance the state-of-the art in micro-to-milli robotics platforms and underlying technology, OUTREACH@DARPA.MIL, July 17, 2018
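To make Polcawich’s two figures of merit concrete, here is a back-of-envelope calculation for a hypothetical piezoelectric actuator. Every number below is invented for scale and is not a measured value from SHRIMP or any vendor.

```python
# Back-of-envelope figures of merit for a hypothetical piezo actuator.
# Every number is invented for scale, not a measured SHRIMP value.
force_n   = 0.5     # blocking force [N]
mass_kg   = 1e-4    # actuator mass: 100 mg
stroke_m  = 1e-4    # stroke: 100 micrometres
volume_m3 = 2e-8    # 20 cubic millimetres

strength_to_weight = force_n / (mass_kg * 9.81)   # force over own weight
work_per_cycle = force_n * stroke_m               # upper bound: force x stroke
work_density = work_per_cycle / volume_m3         # J per cubic metre

print(f"strength-to-weight: {strength_to_weight:.0f}x its own weight")
print(f"work per cycle:     {work_per_cycle * 1e6:.0f} microjoules")
print(f"work density:       {work_density:.0f} J/m^3")
```

An actuator that can hold hundreds of times its own weight but delivers only tens of microjoules per stroke will be strong yet short-winded, which is one reason the program pairs actuator research with power-storage research.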

Recyclable, Mini and Lethal: Drones

From DARPA Website:  An ability to send large numbers of small unmanned air systems (UAS) with coordinated, distributed capabilities could provide U.S. forces with improved operational flexibility at much lower cost than is possible with today’s expensive, all-in-one platforms—especially if those unmanned systems could be retrieved for reuse while airborne. So far, however, the technology to project volleys of low-cost, reusable systems over great distances and retrieve them in mid-air has remained out of reach.

To help make that technology a reality, DARPA has launched the Gremlins program….The program envisions launching groups of gremlins from large aircraft such as bombers or transport aircraft, as well as from fighters and other small, fixed-wing platforms while those planes are out of range of adversary defenses. When the gremlins complete their mission, a C-130 transport aircraft would retrieve them in the air and carry them home, where ground crews would prepare them for their next use within 24 hours….With an expected lifetime of about 20 uses, Gremlins could fill an advantageous design-and-use space between existing models of missiles and conventional aircraft…

Excerpts from Friendly “Gremlins” Could Enable Cheaper, More Effective, Distributed Air Operations, DARPA Website, Aug. 28, 2015
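A rough amortization sketch shows the “design-and-use space” the excerpt describes. All dollar figures are placeholders invented for illustration, not DARPA or vendor numbers; only the shape of the comparison matters.

```python
# Amortized cost per sortie; all dollar figures are invented placeholders.
def cost_per_sortie(unit_cost, expected_uses, refurb_per_use=0.0):
    """Purchase price spread over expected uses, plus per-use refurbishment."""
    return unit_cost / expected_uses + refurb_per_use

options = {
    "cruise missile (1 use)": cost_per_sortie(1_500_000, expected_uses=1),
    "gremlin (~20 uses)":     cost_per_sortie(700_000, expected_uses=20,
                                              refurb_per_use=20_000),
}
for name, cost in options.items():
    print(f"{name:<24} ${cost:>9,.0f} per sortie")
```

Crewed aircraft amortize over thousands of sorties and can be cheaper still per flight, but they put aircrew at risk inside defended airspace; the gremlin concept trades a roughly 20-use airframe for standoff launch and nobody on board.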


Hunter and Killer Drones

The Pentagon is discussing the possibility of replacing human drone operators with computer algorithms, especially for ‘signature strikes’, where unknown targets are killed simply because they meet certain criteria. So what characteristics define an ‘enemy combatant’, and where are they outlined in law?

Drone strikes and targeted killings have become the weapon of choice for the Obama administration in its ongoing war against terrorists. But what impact is this technology having, not only on those who are the targets (both intended and unintended), but on the way we are likely to wage war in the future?

John Sifton is the advocacy director for Asia at Human Rights Watch, and says that while drones are currently controlled remotely by trained military personnel, there are already fears that the roving killing machines could be automated in the future. ‘One of the biggest concerns human rights groups have right now is the notion of a signature strike,’ he says. ‘[This is] the notion that you could make a decision about a target based on its appearance. Say—this man has a Kalashnikov, he’s walking on the side of the road, he is near a military base. He’s a combatant, let’s kill him. That decision is made by a human right now, but the notion that you could write an algorithm for that and then program it into a drone… sounds science fiction but is in fact what the Pentagon is already thinking about. There are already discussions about this, autonomous weapons systems.’ ‘That is, to human rights groups, the most terrifying spectre that is currently presented by the drones.’
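Sifton’s worry can be restated as a base-rate problem. The toy numbers below are invented to illustrate the statistics, not estimates for any real region: even if combatants are fifty times more likely than civilians to match a crude ‘signature’, civilians may so outnumber combatants that most matches are civilians.

```python
# Base-rate arithmetic behind the "signature strike" worry. All counts
# are invented to illustrate the statistics, not real estimates.
combatants         = 2_000       # hypothetical combatants in a region
civilians          = 1_000_000   # hypothetical civilian population
p_sig_if_combatant = 0.5         # combatants matching the signature
p_sig_if_civilian  = 0.01        # armed civilians near roads and bases

true_hits  = combatants * p_sig_if_combatant    # 1,000
false_hits = civilians * p_sig_if_civilian      # 10,000

precision = true_hits / (true_hits + false_hits)
print(f"share of signature matches who are combatants: {precision:.1%}")
# ~9%: about ten of every eleven matches would be civilians.
```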

Sarah Knuckey is the director of the Project on Extrajudicial Executions at New York University Law School and an advisor to the UN. She says the way that drones are used to conduct warfare is stretching the limits of previous international conventions and is likely to require new rules of engagement to be drawn up…The rules of warfare built up after World War II to protect civilians are already hopelessly outdated, she says. The notion of border sovereignty has already been trashed by years of drone strikes, which she estimates have targeted upwards of 3,000 individuals, with reports of between 400 and 800 civilian casualties.

Excerpt from Annabelle Quince, Future of drone strikes could see execution by algorithm, May 21, 2013

The Nanosecond Decision to Kill: Drones

These are excerpts from the report of the UN Special Rapporteur Christof Heyns, Apr. 9, 2013

What are Lethal Autonomous Robotics?

Robots are often described as machines that are built upon the sense-think-act paradigm: they have sensors that give them a degree of situational awareness; processors or artificial intelligence that “decides” how to respond to a given stimulus; and effectors that carry out those “decisions”. … Under the currently envisaged scenario, humans will at least remain part of what may be called the “wider loop”: they will programme the ultimate goals into the robotic systems and decide to activate and, if necessary, deactivate them, while autonomous weapons will translate those goals into tasks and execute them without requiring further human intervention. Supervised autonomy means that there is a “human on the loop” (as opposed to “in” or “out”), who monitors and can override the robot’s decisions. However, the power to override may in reality be limited because the decision-making processes of robots are often measured in nanoseconds and the informational basis of those decisions may not be practically accessible to the supervisor. In such circumstances humans are de facto out of the loop and the machines thus effectively constitute LARs.
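The report’s de facto out-of-the-loop argument is, at bottom, about timescales, and can be sketched in a few lines. The timings below are illustrative assumptions, not measurements of any fielded system.

```python
# Toy timescale check for a "human on the loop". Timings are illustrative
# assumptions, not measurements of any fielded system.
MACHINE_DECISION_S = 1e-6   # machine commits in about a microsecond
HUMAN_REACTION_S   = 1.5    # optimistic supervisor reaction time

def overridable(time_to_engagement_s):
    """Is there room for a human veto between decision and engagement?"""
    window = time_to_engagement_s - MACHINE_DECISION_S
    return window >= HUMAN_REACTION_S

for t in [10.0, 2.0, 0.5, 0.01]:
    print(f"time to engagement {t:>5}s -> override possible: {overridable(t)}")
```

Whenever the time to engagement is shorter than the supervisor’s reaction time, the override exists on paper only, which is the report’s point about nanosecond decision cycles.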

Examples of  Lethal Autonomous Robotics

  • The US Phalanx system for Aegis-class cruisers automatically detects, tracks and engages anti-air warfare threats such as anti-ship missiles and aircraft.
  • The US Counter Rocket, Artillery and Mortar (C-RAM) system can automatically destroy incoming artillery, rockets and mortar rounds.
  • Israel’s Harpy is a “Fire-and-Forget” autonomous weapon system designed to detect, attack and destroy radar emitters.
  • The United Kingdom Taranis jet-propelled combat drone prototype can autonomously search, identify and locate enemies but can only engage with a target when authorized by mission command. It can also defend itself against enemy aircraft.
  • The Northrop Grumman X-47B is a fighter-size drone prototype commissioned by the US Navy to demonstrate autonomous launch and landing on aircraft carriers and autonomous navigation.
  • The Samsung Techwin surveillance and security guard robots, deployed in the demilitarized zone between North and South Korea, detect targets through infrared sensors. They are currently operated by humans but have an “automatic mode”.

Advantages of Lethal Autonomous Robotics

LARs will not be susceptible to some of the human shortcomings that may undermine the protection of life. Typically they would not act out of revenge, panic, anger, spite, prejudice or fear. Moreover, unless specifically programmed to do so, robots would not inflict intentional suffering on civilian populations, for example through torture. Robots also do not rape.

Disadvantages of Lethal Autonomous Robotics

Yet robots have limitations in other respects as compared to humans. Armed conflict and international humanitarian law (IHL) often require human judgement, common sense, appreciation of the larger picture, understanding of the intentions behind people’s actions, and understanding of values and anticipation of the direction in which events are unfolding. Decisions over life and death in armed conflict may require compassion and intuition. Humans – while they are fallible – at least might possess these qualities, whereas robots definitely do not.

Full Report PDF