Black Operations are Getting Blacker: US Military

Heterogeneous Collaborative Unmanned Systems (HCUS), as these submarine-launched drones will be known, would be dropped off by either a manned submarine or one of the US Navy’s big new Orca robot submersibles.

Logo for Orca Submarine by Lockheed Martin

They could be delivered individually, but will more often be part of a collective system called an encapsulated payload. Such a system will then release small underwater vehicles able to identify ships and submarines by their acoustic signatures, and also aerial drones similar to the BlackWing reconnaissance drones already flown from certain naval vessels.

BlackWing

Once the initial intelligence these drones collect has been analysed, a payload’s operators will be in a position to relay further orders. They could, for example, send aerial drones ashore to drop off solar-powered ground sensors at specified points. These sensors, typically disguised as rocks, will send back the data they collect via drones of the sort that dropped them off. Some will have cameras or microphones, others seismometers which detect the vibrations of ground vehicles, while others still intercept radio traffic or Wi-Fi.

Lockheed Martin Ground Sensor Disguised as Rock

HCUS will also be capable of what are described as “limited offensive effects”. Small drones like BlackWing can be fitted with warheads powerful enough to destroy an SUV or a pickup truck. Such drones are already used to assassinate the leaders of enemy forces. They might be deployed against fuel and ammunition stores, too.

Unmanned systems such as HCUS thus promise greatly to expand the scope of submarine-based spying and special operations. Drones are cheap, expendable and can be deployed with no risk of loss of personnel. They are also “deniable”. Even when a spy drone is captured it is hard to prove where it came from. Teams of robot spies and saboteurs launched from submarines, both manned and unmanned, could thus become an important feature of the black-ops of 21st-century warfare.

Excerpts from Submarine-launched drone platoons will soon be emerging from the sea: Clandestine Warfare, Economist, June 22, 2019

Killer Robots: Your Kids V. Theirs

The Harop, a kamikaze drone, bolts from its launcher like a horse out of the gates. But it is not built for speed, nor for a jockey. Instead it just loiters, unsupervised, too high for those on the battlefield below to hear the thin old-fashioned whine of its propeller, waiting for its chance.

Israel Aerospace Industries (IAI) has been selling the Harop for more than a decade. A number of countries have bought the drone, including India and Germany. …In 2017, according to a report by the Stockholm International Peace Research Institute (SIPRI), a think-tank, the Harop was one of 49 deployed systems which could detect possible targets and attack them without human intervention. It is thus very much the sort of thing which disturbs the coalition of 89 non-governmental organisations (NGOs) in 50 countries that has come together under the banner of the “Campaign to Stop Killer Robots”.

Consider the Phalanx guns used by the navies of America and its allies. Once switched on, the Phalanx will fire on anything it sees heading towards the ship it is mounted on. And in the case of a ship at sea that knows itself to be under attack by missiles too fast for any human trigger finger, that seems fair enough. Similar arguments can be made for the robot sentry guns in the demilitarised zone (DMZ) between North and South Korea.

Autonomous vehicles do not have to become autonomous weapons, even when capable of deadly force. The Reaper drones with which America assassinates enemies are under firm human control when it comes to acts of violence, even though they can fly autonomously…. One of the advantages that MBDA, a European missile-maker, boasts for its air-to-ground Brimstones is that they can “self-sort” based on firing order. If different planes launch volleys of Brimstones into the same “kill box”, where they are free to do their worst, the missiles will keep tabs on each other to reduce the chance that two strike the same target.
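The article does not say how this self-sorting works, but the underlying idea of deconfliction by firing order is simple enough to sketch. The toy Python function below (hypothetical names and logic throughout, not MBDA’s actual algorithm) has each missile, in launch order, claim the highest-priority target that no earlier missile has already taken:

```python
# Toy sketch of "self-sort" target deconfliction. Purely illustrative:
# the data structures and priority scheme are invented, not MBDA's.

def self_sort(missile_ids, target_priorities):
    """Assign each missile, in firing order, an unclaimed target.

    missile_ids: missile identifiers listed in firing order.
    target_priorities: dict of target id -> priority (higher = more urgent).
    Returns a dict of missile id -> target id (None if targets run out).
    """
    unclaimed = dict(target_priorities)   # targets nobody has claimed yet
    assignments = {}
    for mid in missile_ids:               # firing order breaks ties
        if not unclaimed:
            assignments[mid] = None       # more missiles than targets
            continue
        choice = max(unclaimed, key=unclaimed.get)  # best remaining target
        assignments[mid] = choice
        del unclaimed[choice]             # "broadcast" the claim to peers
    return assignments


if __name__ == "__main__":
    print(self_sort(["m1", "m2", "m3"], {"t_alpha": 0.9, "t_bravo": 0.7}))
    # {'m1': 't_alpha', 'm2': 't_bravo', 'm3': None}
```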

Cost is also a factor in armies where trained personnel are pricey. “The thing about robots is that they don’t have pensions,”…If keeping a human in the loop was merely a matter of spending more, it might be deemed worthwhile regardless. But human control creates vulnerabilities. It means that you must pump a lot of encrypted data back and forth. What if the necessary data links are attacked physically—for example with anti-satellite weapons—jammed electronically or subverted through cyberwarfare? Future wars are likely to be fought in what America’s armed forces call “contested electromagnetic environments”. The Royal Air Force is confident that encrypted data links would survive such environments. But air forces have an interest in making sure there are still jobs for pilots; this may leave them prey to unconscious bias.

The vulnerability of communication links to interference is an argument for greater autonomy. But autonomous systems can be interfered with, too. The sensors for weapons like Brimstone need to be a lot more fly than those required by, say, self-driving cars, not just because battlefields are chaotic, but also because the other side will be trying to disorient them. Just as some activists use asymmetric make-up to try to confuse face-recognition systems, so military targets will try to distort the signatures which autonomous weapons seek to discern. Paul Scharre, author of “Army of None: Autonomous Weapons and the Future of War”, warns that the neural networks used in machine learning are intrinsically vulnerable to spoofing.
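To see why such spoofing worries Scharre, it helps to look at the simplest version of the trick: nudging an input a small amount in the direction that most changes a classifier’s decision. The sketch below uses plain NumPy and a made-up linear “signature” classifier rather than a real neural network, but the gradient-sign perturbation it demonstrates is the same basic mechanism used against deep-learning models:

```python
# Minimal, made-up illustration of gradient-sign spoofing against a toy
# linear classifier. Real attacks target deep networks and sensor data.
import numpy as np

# A "trained" linear model: score = w.x + b; positive score means "target".
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def classify(x):
    return "target" if w @ x + b > 0 else "clutter"

x = np.array([0.8, 0.1, 0.3])        # a signature the model labels "target"
print(classify(x))                   # -> target

# Adversarial nudge: step each feature against the sign of the score's
# gradient (which for a linear model is just w), flipping the decision
# while keeping every change no larger than epsilon.
epsilon = 0.4
x_adv = x - epsilon * np.sign(w)
print(classify(x_adv))               # -> clutter
print(np.abs(x_adv - x).max())       # perturbation bounded by 0.4
```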

In 2017 the UN Convention on Certain Conventional Weapons put together a group of governmental experts to study the finer points of autonomy. As well as trying to develop a common understanding of what weapons should be considered fully autonomous, the group is considering both a blanket ban and other options for dealing with the humanitarian and security challenges that such weapons create. Most states involved in the convention’s discussions agree on the importance of human control. But they differ on what this actually means. In a paper for Article 36, an advocacy group named after a provision of the Geneva conventions that calls for legal reviews of new methods of warfare, Heather Roff and Richard Moyes argue that “a human simply pressing a ‘fire’ button in response to indications from a computer, without cognitive clarity or awareness” is not really in control. “Meaningful control”, they say, requires an understanding of the context in which the weapon is being used as well as capacity for timely and reasoned intervention. It also requires accountability…

The two dozen states that want a legally binding ban on fully autonomous weapons are mostly military minnows like Djibouti and Peru, but some members, such as Austria, have diplomatic sway. None of them has the sort of arms industry that stands to profit from autonomous weapons. They ground their argument in part on International Humanitarian Law (IHL), a corpus built around the rules of war laid down in the Hague and Geneva conventions. This demands that armies distinguish between combatants and civilians, refrain from attacks where the risk to civilians outweighs the military advantage, use no more force than is proportional to the objective and avoid unnecessary suffering…Beyond the core group advocating a ban there is a range of opinions. China has indicated that it supports a ban in principle; but on use, not development. France and Germany oppose a ban, for now; but they want states to agree a code of conduct with wriggle room “for national interpretations”. India is reserving its position. It is eager to avoid a repeat of nuclear history, in which technological have-nots were locked out of game-changing weaponry by a discriminatory treaty.

At the far end of the spectrum a group of states, including America, Britain and Russia, explicitly opposes the ban. These countries insist that existing international law provides a sufficient check on all future systems….States are likely to sacrifice human control for self-preservation, says General Barrons. “You can send your children to fight this war and do terrible things, or you can send machines and hang on to your children.” Other people’s children are other people’s concern.

Excerpts from Briefing Autonomous Weapons: Trying to Restrain the Robots, Economist, Jan. 19, 2019, at 22

Stopping the Unstoppable: Undersea Nuclear Torpedoes

On July 20th 1960, a missile popped out of an apparently empty Atlantic ocean. Its solid-fuel rocket fired just as it cleared the surface and it tore off into the sky. Hours later, a second missile followed. An officer on the ballistic-missile submarine USS George Washington sent a message to President Dwight Eisenhower: “POLARIS—FROM OUT OF THE DEEP TO TARGET. PERFECT.” America had just completed its first successful launch of a ballistic missile from beneath the ocean. Less than two months later, the Soviet Union conducted a similar test in the White Sea, north of Archangel.

Those tests began a new phase in the cold war. Having ballistic missiles on effectively invisible launchers meant that neither side could destroy the other’s nuclear arsenal in a single attack. By keeping safe the capacity for a retaliatory second strike, ballistic-missile submarines helped entrench the doctrine of “mutually assured destruction” (MAD), thereby deterring any form of nuclear first strike. America, Britain, China, France and Russia all have nuclear-powered submarines on permanent or near-permanent patrol, capable of launching nuclear missiles; India has one such submarine, too, and Israel is believed to have nuclear missiles on conventionally powered submarines.

As well as menacing the world at large, submarines pose a much more specific threat to other countries’ navies; most military subs are attack boats rather than missile platforms. This makes anti-submarine warfare (ASW) a high priority for anyone who wants to keep their surface ships on the surface. Because such warfare depends on interpreting lots of data from different sources—sonar arrays on ships, sonar buoys dropped from aircraft, passive listening systems on the sea-floor—technology which allows new types of sensor and new ways of communicating could greatly increase its possibilities. “There’s an unmanned-systems explosion,” says Jim Galambos of DARPA, the Pentagon’s future-technology arm. Up until now, he says, submariners could be fairly sure of their hiding place, operating “alone and unafraid”. That is changing.
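As a rough illustration of what “interpreting lots of data from different sources” can mean in practice, the snippet below fuses bearing lines reported by three imaginary sensors (a ship’s array, a dropped buoy and a sea-floor listener) into a single position estimate by least squares. The positions, bearings and method are invented for illustration and are not drawn from any real ASW system:

```python
# Illustrative bearings-only fusion: three made-up passive sensors each
# report a bearing to a contact; a least-squares fit finds the point
# closest to all three bearing lines. Not a real ASW algorithm.
import numpy as np

def fuse_bearings(sensor_positions, bearings_deg):
    """Least-squares intersection of bearing lines.

    sensor_positions: (N, 2) array of (east, north) positions in km.
    bearings_deg: N bearings to the contact, degrees clockwise from north.
    """
    s = np.asarray(sensor_positions, dtype=float)
    theta = np.radians(bearings_deg)
    # Unit normal to each bearing line (the bearing direction is (sin, cos)).
    normals = np.column_stack([np.cos(theta), -np.sin(theta)])
    # A line through sensor s_i with normal n_i satisfies n_i . p = n_i . s_i;
    # solving all such equations in the least-squares sense minimises the
    # summed squared perpendicular distance from p to every bearing line.
    rhs = np.sum(normals * s, axis=1)
    fix, *_ = np.linalg.lstsq(normals, rhs, rcond=None)
    return fix

if __name__ == "__main__":
    sensors = [(0.0, 0.0), (10.0, 0.0), (5.0, -8.0)]   # ship, buoy, array
    bearings = [45.0, 315.0, 0.0]                      # all point at ~(5, 5)
    print(fuse_bearings(sensors, bearings))            # ~ [5. 5.]
```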

Aircraft play a big role in today’s ASW, flying from ships or shore to drop “sonobuoys” in patterns calculated to have the best chance of spotting something. This is expensive. An aeroplane with 8-10 people in it throws buoys out and waits around to listen to them and process their data on board. “In future you can envision a pair of AUVs [autonomous underwater vehicles], one deploying and one loitering and listening,” says Fred Cotaras of Ultra Electronics, a sonobuoy maker. Cheaper deployment means more buoys.

But more data is not that helpful if you do not have ways of moving it around, or of knowing where exactly it comes from. That is why DARPA is working on a Positioning System for Deep Ocean Navigation (POSYDON) which aims to provide “omnipresent, robust positioning across ocean basins” just as GPS satellites do above water, says Lisa Zurk, who heads up the programme. The system will use a natural feature of the ocean known as the “deep sound channel”. The speed of sound in water depends on temperature, pressure and, to some extent, salinity. The deep sound channel is found at the depth where these factors provide the lowest speed of sound. Below it, higher pressure makes the sound faster; above it, warmer water has the same effect…
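The physics behind the deep sound channel can be made concrete with a short calculation. The sketch below uses Medwin’s simplified empirical formula for the speed of sound in seawater and an invented mid-latitude temperature profile; the point is simply that cooling water slows sound near the surface while pressure speeds it up at depth, so the profile has a minimum (a bit over a kilometre down in this toy case). POSYDON’s actual ocean modelling is, of course, far more elaborate:

```python
# Toy demonstration of why a deep sound channel exists. Uses Medwin's
# simplified formula for sound speed and an invented temperature profile;
# real acoustic models are far more detailed.
import numpy as np

def sound_speed(temp_c, salinity_ppt, depth_m):
    """Approximate speed of sound in seawater (m/s), Medwin's formula."""
    t, s, z = temp_c, salinity_ppt, depth_m
    return (1449.2 + 4.6 * t - 0.055 * t**2 + 0.00029 * t**3
            + (1.34 - 0.010 * t) * (s - 35.0) + 0.016 * z)

# Invented profile: ~22 C at the surface, cooling towards ~2 C in deep water.
depths = np.arange(0.0, 4000.0, 10.0)                 # metres
temps = 2.0 + 20.0 * np.exp(-depths / 600.0)
speeds = sound_speed(temps, 35.0, depths)

axis_depth = depths[np.argmin(speeds)]
print(f"slowest sound speed {speeds.min():.1f} m/s at about {axis_depth:.0f} m")
# The minimum marks the channel axis: sound refracts back towards this depth
# from above and below, letting low-frequency signals travel huge distances.
```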

Even in heavily surveilled seas, spotting submarines will remain tricky. They are already quiet, and getting quieter; new “air-independent propulsion” systems mean that conventionally powered submarines can now turn off their diesel engines and run as quietly as nuclear ones, perhaps even more so, for extended periods of time. Greater autonomy, and thus fewer humans—or none at all—could make submarines quieter still.

A case in point is a Russian weapon called Status-6, also known as Kanyon, about which Vladimir Putin boasted in a speech on March 1st, 2018. America’s recent nuclear-posture review describes it as “a new intercontinental, nuclear-armed, nuclear-powered, undersea autonomous torpedo”. A Russian state television broadcast in 2015 appeared to show it as a long, thin AUV that can be launched from a modified submarine and travel thousands of kilometres to explode off the shore of a major city with a great deal more energy than the largest warheads on intercontinental ballistic missiles (ICBMs), thus generating a radioactive tsunami. Such a system might be seen as preserving a second-strike capability even if the target had a missile-defence system capable of shooting ICBMs out of the sky…

One part of the ocean that has become particularly interesting in this regard is the Arctic. Tracking submarines under or near ice is difficult, because ice constantly shifts, crackles and groans loudly enough to mask the subtle sounds of a submarine. With ever less ice in the Arctic this is becoming less of a problem, meaning America should be better able to track Russian submarines through its Assured Arctic Awareness programme…

Greater numbers of better sensors, better networked, will not soon make submarines useless; but even without breakthroughs, they could erode the strategic norm that has guided nuclear thinking for over half a century—that of an unstoppable second strike.

Excerpts from Mutually assured detection, Economist, Mar. 10, 2018