
How to Fool Your Enemy: Artificial Intelligence in Conflict

The contest between China and America, the world’s two superpowers, has many dimensions… One of the most alarming and least understood is the race towards artificial-intelligence-enabled warfare. Both countries are investing large sums in militarised artificial intelligence (AI), from autonomous robots to software that gives generals rapid tactical advice in the heat of battle…. As Jack Shanahan, a general who is the Pentagon’s point man for AI, put it last month, “What I don’t want to see is a future where our potential adversaries have a fully AI-enabled force and we do not.”

AI-enabled weapons may offer superhuman speed and precision. In order to gain a military advantage, the temptation for armies will be to allow them not only to recommend decisions but also to give orders. That could have worrying consequences. Able to think faster than humans, an AI-enabled command system might cue up missile strikes on aircraft carriers and airbases at a pace that leaves no time for diplomacy and in ways that are not fully understood by its operators. On top of that, AI systems can be hacked, and tricked with manipulated data.
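The vulnerability to manipulated data is concrete. Below is a minimal sketch, assuming only a toy linear classifier and NumPy (none of it drawn from any real military system), of how a small, deliberately crafted perturbation of an input can flip a model’s decision:

```python
# Minimal sketch of an adversarial perturbation against a toy linear classifier.
# Everything here (weights, inputs, threshold) is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)      # hypothetical learned weights of a "threat" classifier
b = 0.1
x = rng.normal(size=16)      # a benign input

def predict(x):
    return "threat" if w @ x + b > 0 else "no threat"

# Fast-gradient-sign-style attack: nudge every feature just far enough, in the
# direction given by the sign of the score's gradient, to push the score across
# the decision boundary.
score = w @ x + b
eps = 1.1 * abs(score) / np.abs(w).sum()       # small per-feature budget
x_adv = x - np.sign(score) * eps * np.sign(w)

print("original :", predict(x), f"(score {score:+.2f})")
print("perturbed:", predict(x_adv), f"(score {w @ x_adv + b:+.2f})")
```

With a perturbation budget of a fraction of each feature’s scale, the classifier’s output flips even though the input is barely changed; the same principle carries over to the far larger image and signal classifiers used in surveillance and targeting.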

AI in war might aid surprise attacks or confound them, and the death toll could range from none to millions. Unlike missile silos, software cannot be spied on from satellites. And whereas warheads can be inspected by enemies without reducing their potency, showing the outside world an algorithm could compromise its effectiveness. The incentive may be for both sides to mislead the other. “Adversaries’ ignorance of AI-developed configurations will become a strategic advantage,” suggests Henry Kissinger, who led America’s cold-war arms-control efforts with the Soviet Union… Amid a confrontation between the world’s two big powers, the temptation will be to cut corners for temporary advantage.

Excerpts from Mind control: Artificial intelligence and war, Economist, Sept. 7, 2019

Example of the Use of AI in Warfare: The Real-time Adversarial Intelligence and Decision-making (RAID) program, under the auspices of the Defense Advanced Research Projects Agency’s (DARPA) Information Exploitation Office (IXO), focuses on the challenge of anticipating enemy actions in a military operation. In the US Air Force community, the term predictive battlespace awareness refers to capabilities that would help the commander and staff to characterize and predict likely enemy courses of action… Today’s practices of military intelligence and decision-making do include a number of processes specifically aimed at predicting enemy actions. Currently, these processes are largely manual and mental, and do not involve any significant use of technical means. Even when computerized wargaming is used (albeit rarely in field conditions), it relies either on human guidance of the simulated enemy units or on simple reactive behaviors of such simulated units; in neither case is there a computerized prediction of intelligent, forward-looking enemy actions….

[Reasoning about adversary deception is central in this context.] Deception reasoning refers to an important aspect of predicting enemy actions: military operations have always depended crucially on the ability to use various forms of concealment and deception for friendly purposes while detecting and counteracting the enemy’s concealment and deception. Therefore, adversarial reasoning must include deception reasoning.

The RAID Program will develop a real-time adversarial predictive analysis tool that operates as an automated enemy predictor providing a continuously updated picture of probable enemy actions in tactical ground operations. The RAID Program will strive to: prove that adversarial reasoning can be automated; prove that automated adversarial reasoning can include deception….

Excerpts from Real-time Adversarial Intelligence and Decision-making (RAID), US Federal Grants
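The kind of automated adversarial reasoning RAID describes can be illustrated with a toy game-theoretic sketch. Everything below (the courses of action and the payoff numbers) is hypothetical and not drawn from the RAID program; it simply shows the difference between assuming a reactive enemy and predicting a forward-looking one who anticipates our best response:

```python
# Toy enemy course-of-action (COA) prediction via maximin reasoning.
# Rows: enemy COAs; columns: our responses; values: payoff to the enemy
# (higher = better for them). All numbers are invented for illustration.
payoffs = {
    "frontal assault":               {"hold": 2, "flank": 1, "withdraw": 5},
    "flanking move":                 {"hold": 4, "flank": 2, "withdraw": 6},
    "feint then strike (deception)": {"hold": 3, "flank": 5, "withdraw": 4},
}

def predict_enemy_coa(payoffs):
    """Return the enemy COA with the best worst-case payoff (maximin),
    i.e. assuming we will choose the response that hurts them most."""
    return max(payoffs, key=lambda coa: min(payoffs[coa].values()))

print("Predicted enemy COA:", predict_enemy_coa(payoffs))
```

Here the maximin view predicts the deceptive option, because it remains attractive to the enemy even against our best counter-move; a simple reactive model of the opponent would miss that.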

Black Operations Are Getting Blacker: US Military

Heterogeneous Collaborative Unmanned Systems (HCUS), as these drones will be known, would be dropped off by either a manned submarine or one of the navy’s big new Orca robot submersibles.

Logo for Orca Submarine by Lockheed Martin

They could be delivered individually, but will more often be part of a collective system called an encapsulated payload. Such a system will then release small underwater vehicles able to identify ships and submarines by their acoustic signatures, and also aerial drones similar to the BlackWing reconnaissance drones already flown from certain naval vessels.
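Identifying a vessel by its acoustic signature is, at its simplest, a matter of comparing the spectrum of a hydrophone recording against a library of known spectra. The sketch below uses synthetic signals and a made-up two-entry library; real systems use far richer features, but the matching idea is the same:

```python
# Toy acoustic-signature matching: compute the power spectrum of a recording
# and find the most similar entry in a (hypothetical) signature library.
import numpy as np

fs = 1000                        # assumed sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)

def tonal(freqs):
    """Synthesize a signal made of the given tonal components plus noise."""
    sig = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    return sig + 0.2 * rng.normal(size=t.size)

def spectrum(sig):
    return np.abs(np.fft.rfft(sig))

# Hypothetical library: characteristic tonals of known vessel classes.
library = {
    "submarine A": spectrum(tonal([60, 120, 300])),
    "frigate B":   spectrum(tonal([50, 150, 450])),
}

recording = spectrum(tonal([60, 120, 300]))   # an unknown contact

def identify(rec, library):
    """Return the library entry whose spectrum is most similar (cosine similarity)."""
    cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(library, key=lambda name: cos(rec, library[name]))

print("Best match:", identify(recording, library))
```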

BlackWing

Once the initial intelligence these drones collect has been analysed, a payload’s operators will be in a position to relay further orders. They could, for example, send aerial drones ashore to drop off solar-powered ground sensors at specified points. These sensors, typically disguised as rocks, will send back the data they collect via drones of the sort that dropped them off. Some will have cameras or microphones, others seismometers which detect the vibrations of ground vehicles, while others still intercept radio traffic or Wi-Fi.
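A rough idea of what such a disguised sensor might do locally, sketched with made-up thresholds and synthetic data: watch a seismometer channel for the sustained vibration of a passing vehicle and queue a compact report to hand off at the next drone pass.

```python
# Toy ground-sensor logic: detect vehicle-like vibration by windowed RMS energy
# and queue small reports for later relay. Thresholds and data are invented.
import numpy as np

rng = np.random.default_rng(2)
samples = rng.normal(0, 0.05, size=600)                            # quiet background
samples[200:260] += 0.6 * np.sin(np.linspace(0, 40 * np.pi, 60))   # a vehicle passes

WINDOW, THRESHOLD = 20, 0.1       # assumed tuning values
reports = []
for i in range(0, samples.size - WINDOW, WINDOW):
    rms = float(np.sqrt(np.mean(samples[i:i + WINDOW] ** 2)))
    if rms > THRESHOLD:
        # compact record held on the sensor until a drone collects it
        reports.append({"window_start": i, "rms": round(rms, 3)})

print(f"{len(reports)} detection(s) queued for relay:", reports)
```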

Lockheed Martin Ground Sensor Disguised as Rock

HCUS will also be capable of what are described as “limited offensive effects”. Small drones like BlackWing can be fitted with warheads powerful enough to destroy an SUV or a pickup truck. Such drones are already used to assassinate the leaders of enemy forces. They might be deployed against fuel and ammunition stores, too.

Unmanned systems such as HCUS thus promise greatly to expand the scope of submarine-based spying and special operations. Drones are cheap, expendable and can be deployed with no risk of loss of personnel. They are also “deniable”. Even when a spy drone is captured it is hard to prove where it came from. Teams of robot spies and saboteurs launched from submarines, both manned and unmanned, could thus become an important feature of the black-ops of 21st-century warfare.

Excerpts from Submarine-launched drone platoons will soon be emerging from the sea: Clandestine Warfare, Economist, June 22, 2019

Undersea Drones: Military

Currently, manipulation operations on the seabed are conducted by Remotely Operated Vehicles (ROVs) tethered to a manned surface platform and tele-operated by a human pilot. Exclusive use of ROVs, tethered to manned ships and their operators, severely limits the potential utility of robots in the marine domain, due to the limitations of ROV tether length and the impracticality of wireless communications at the bandwidths necessary to tele-operate an underwater vehicle at such distances and depths. To address these limitations, the Angler program will develop and demonstrate an underwater robotic system capable of physically manipulating objects on the sea floor for long-duration missions in restricted environments, while deprived of both GPS and human intervention.
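One way a GPS-deprived underwater vehicle keeps track of where it is, sketched below with simplified two-dimensional kinematics and invented mission legs (not anything specified by the Angler program), is dead reckoning from its compass heading and water-relative speed:

```python
# Toy dead reckoning for a GPS-denied underwater vehicle: integrate heading and
# speed over each mission leg to estimate position relative to the drop-off point.
import math

# (heading_deg, speed_m_s, duration_s) -- hypothetical mission legs
legs = [(90, 1.5, 600), (45, 1.0, 300), (0, 1.2, 450)]

x, y = 0.0, 0.0          # metres east / north of the start point
for heading_deg, speed, dt in legs:
    heading = math.radians(heading_deg)       # heading measured clockwise from north
    x += speed * dt * math.sin(heading)       # east component
    y += speed * dt * math.cos(heading)       # north component

print(f"Dead-reckoned position: {x:.0f} m east, {y:.0f} m north of start")
```

In practice such an estimate drifts and would be corrected with terrain-relative or acoustic fixes, but it illustrates operating for long periods without GPS or a human in the loop.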

The Angler program seeks to migrate advancements from terrestrial and space-based robotics, terrestrial autonomous manipulation, and underwater sensing technologies into the realm of undersea manipulation, with specific focus on long-distance, seabed-based missions. Specifically, the program aims to discover innovative autonomous robotic solutions capable of navigating unstructured ocean depths, surveying expansive underwater regions, and physically manipulating manmade objects of interest.

Excerpts from DARPA Angler Program, Nov. 2018