Tag Archives: autonomous underwater vehicles

The Act of Successful Sabotage: cables and pipelines

On October 12, 2022, Vladimir Putin, Russia’s president, gave an ominous warning. Energy infrastructure around the world was now “at risk”, he said. Mr Putin’s warning came a month after explosions tore through Nord Stream 1 and 2, a pair of gas pipelines running from Russia to Europe under the Baltic Sea. The pipes were not in use at the time. But the ruptures left plumes of methane bubbling to the surface for days…

Subsea pipelines and cables have proliferated since the first one was laid in 1850…There are more than 530 active or planned submarine telecoms cables around the world. Spanning over 1.3m kilometers, they carry 95% of the world’s internet traffic. In November 2021, cables serving underwater acoustic sensors off the coast of northern Norway—an area frequented by Russian submarines—were cut.

Western officials say that a particular source of concern is Russia’s Main Directorate of Deep-Sea Research, known by its Russian acronym GUGI. It has a variety of spy ships and specialist submarines—most notably the Belgorod, the world’s biggest submarine, commissioned in July 2022—which can work in unusually deep water. They can deploy divers, mini-submarines or underwater drones, which could be used to cut cables. 

Cable chicanery, though, is not a Russian invention. One of Britain’s first acts during the first world war was to tear up German telecoms cables laid across the Atlantic. Germany responded with attacks on Allied cables in the Pacific and Indian Oceans.

More recently, espionage has been the order of the day…In 2013 Edward Snowden, a contractor for the National Security Agency (NSA), America’s signals intelligence agency, revealed that an Anglo-American project had tapped at least 200 fiber-optic cables around the world. Yet the seabed is not amenable to control. A paper published in 2021 noted that Estonia and other Baltic states had only a limited grasp of what was going on under the Baltic because of quirks of hydrology, scarce surveillance platforms and limited information-sharing between countries. It concluded, perhaps presciently: “It would be difficult to prevent Russian [drones] deployed in international waters from damaging critical undersea infrastructure.”…

The first step in a sabotage mission is finding the target. With big, heavy pipelines, which are typically made from concrete-lined metal sections, that is relatively easy. Older communication cables, being smaller and lighter, can shift with the currents. Newer ones are often buried. It is also increasingly possible for operators to detect tampering through “distributed fiber-optic sensing”, which can detect vibrations in the cable or changes in its temperature. But that will not reveal whether the problem is a geological event or an inquisitive drone—or which country might have sent it. Underwater attribution is slow and difficult.
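The idea behind distributed sensing can be sketched in a few lines: flag the points along the cable whose vibration energy deviates sharply from the baseline. This is a toy illustration with simulated readings and invented thresholds, not any operator’s actual system.

```python
# Toy sketch of distributed fiber-optic sensing anomaly detection.
# A real interrogator measures backscatter along the fiber; here we
# simulate strain readings at points along a cable and flag positions
# whose amplitude deviates sharply from the baseline.
import statistics

def flag_disturbances(readings, z_threshold=4.0):
    """readings: list of vibration amplitudes, one per metre of cable.
    Returns indices whose amplitude is a statistical outlier."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / stdev > z_threshold]

# Quiet cable with a disturbance near position 500 -- an anchor strike
# or a curious drone; the sensor alone cannot say which.
baseline = [1.0 + 0.01 * (i % 7) for i in range(1000)]
baseline[500] = 9.0
print(flag_disturbances(baseline))  # -> [500]
```

As the article notes, this localises the disturbance but says nothing about attribution: a geological event and a drone can look identical in the strain trace.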

Determined attackers, in other words, are likely to get through. The effects of a successful attack will differ. Pipelines and subsea electricity cables are few in number. If one is blown up, gas, oil or electricity cannot easily be rerouted through another. Communication cables are different. The internet was designed to allow data to flow through alternative paths if one is blocked. And at least when it comes to connections between big countries, plenty of alternatives exist. At least 18 communication cables link America and Europe…There is significant redundancy on these routes. But “there’s no collective institution that records all the incidents that are going on, and what is behind them—we don’t have any statistics behind it,” says Elisabeth Braw of the American Enterprise Institute.
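The contrast between cables and pipelines comes down to graph redundancy: cut one edge of a well-connected network and traffic still finds a path. A minimal sketch, with invented node names standing in for landing stations:

```python
# Why cutting one cable rarely severs connectivity: breadth-first search
# still finds a route after an edge is removed, so long as alternatives
# exist. Node names are invented for illustration.
from collections import deque

def reachable(links, src, dst):
    """links: set of frozenset 2-element edges. BFS from src to dst."""
    frontier, seen = deque([src]), {src}
    while frontier:
        node = frontier.popleft()
        if node == dst:
            return True
        for edge in links:
            if node in edge:
                (other,) = edge - {node}
                if other not in seen:
                    seen.add(other)
                    frontier.append(other)
    return False

cables = {frozenset(e) for e in [("NYC", "LDN"), ("NYC", "PAR"), ("PAR", "LDN")]}
print(reachable(cables, "NYC", "LDN"))                                # True
cut = cables - {frozenset(("NYC", "LDN"))}
print(reachable(cut, "NYC", "LDN"))                                   # still True, via PAR
```

A pipeline network with a single edge has no such fallback, which is the article’s point about gas and electricity links.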

Excerpts from Sabotage at Sea: Underwater Infrastructure, Economist, Oct. 22, 2022

Smart Weapons That Make Many Mistakes: AI in War

Autonomous weapon systems rely on artificial intelligence (AI), which in turn relies on data collected from those systems’ surroundings. When these data are good—plentiful, reliable and similar to the data on which the system’s algorithm was trained—AI can excel. But in many circumstances data are incomplete, ambiguous or overwhelming. Consider the difference between radiology, in which algorithms outperform human beings in analysing x-ray images, and self-driving cars, which still struggle to make sense of a cacophonous stream of disparate inputs from the outside world. On the battlefield, that problem is multiplied.

“Conflict environments are harsh, dynamic and adversarial,” says a report from UNIDIR, the UN Institute for Disarmament Research. Dust, smoke and vibration can obscure or damage the cameras, radars and other sensors that capture data in the first place. Even a speck of dust on a sensor might, in a particular light, mislead an algorithm into classifying a civilian object as a military one, says Arthur Holland Michel, the report’s author. Moreover, enemies constantly attempt to fool those sensors through camouflage, concealment and trickery. Pedestrians have no reason to bamboozle self-driving cars, whereas soldiers work hard to blend into foliage. And a mixture of civilian and military objects—evident on the ground in Gaza in recent weeks—could produce a flood of confusing data.

The biggest problem is that algorithms trained on limited data samples would encounter a much wider range of inputs in a war zone. In the same way that recognition software trained largely on white faces struggles to recognise black ones, an autonomous weapon fed with examples of Russian military uniforms will be less reliable against Chinese ones. 
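This failure mode is known as distribution shift, and it can be shown with a deliberately tiny classifier. In the sketch below, a nearest-centroid model is fitted on data from one set of conditions and then misreads an input from another; the features, numbers and class names are all invented for illustration.

```python
# Toy illustration of distribution shift: a nearest-centroid classifier
# fitted on one "training theatre" misreads inputs drawn from another.

def fit_centroids(samples):
    """samples: {label: list of feature vectors}. Returns mean vector per label."""
    return {label: [sum(col) / len(col) for col in zip(*vecs)]
            for label, vecs in samples.items()}

def classify(centroids, x):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], x))

# Training data: "vehicle" signatures are bright (high first feature),
# "terrain" is dim -- true only in the conditions the data came from.
train = {
    "vehicle": [[9.0, 1.0], [8.5, 1.2], [9.2, 0.8]],
    "terrain": [[1.0, 1.0], [1.2, 0.9], [0.8, 1.1]],
}
centroids = fit_centroids(train)

print(classify(centroids, [9.0, 1.0]))  # in-distribution input -> "vehicle"
print(classify(centroids, [2.0, 1.0]))  # dim, camouflaged vehicle -> "terrain"
```

The same logic scales up: a weapon trained on one army’s uniforms has, in effect, centroids fitted to the wrong distribution when it faces another’s.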

Despite these limitations, the technology is already trickling onto the battlefield. In its war with Armenia last year, Azerbaijan unleashed Israeli-made loitering munitions theoretically capable of choosing their own targets. Ziyan, a Chinese company, boasts that its Blowfish A3, a gun-toting helicopter drone, “autonomously performs…complex combat missions” including “targeted precision strikes”. The International Committee of the Red Cross (ICRC) says that many of today’s remote-controlled weapons could be turned into autonomous ones with little more than a software upgrade or a change of doctrine….

On May 12th, 2021, the ICRC published a new and nuanced position on the matter, recommending new rules to regulate autonomous weapons, including a prohibition on those that are “unpredictable”, and also a blanket ban on any such weapon that has human beings as its targets. These things will be debated in December 2021 at the five-yearly review conference of the UN Convention on Certain Conventional Weapons, originally established in 1980 to ban landmines and other “inhumane” arms. Government experts will meet thrice over the summer and autumn, under UN auspices, to lay the groundwork.

Yet powerful states remain wary of ceding an advantage to rivals. In March 2021 a National Security Commission on Artificial Intelligence established by America’s Congress predicted that autonomous weapons would eventually be “capable of levels of performance, speed and discrimination that exceed human capabilities”. A worldwide prohibition on their development and use would be “neither feasible nor currently in the interests of the United States,” it concluded—in part, it argued, because Russia and China would probably cheat.

Excerpt from Autonomous weapons: The fog of war may confound weapons that think for themselves, Economist, May 29, 2021

Black Operations are Getting Blacker: US Military

Heterogeneous Collaborative Unmanned Systems (HCUS), as these drones will be known, would be dropped off by either a manned submarine or one of the navy’s big new Orca robot submersibles.

Logo for Orca Submarine by Lockheed Martin

They could be delivered individually, but will more often be part of a collective system called an encapsulated payload. Such a system will then release small underwater vehicles able to identify ships and submarines by their acoustic signatures, and also aerial drones similar to the BlackWing reconnaissance drones already flown from certain naval vessels.
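Identifying a vessel by its acoustic signature amounts to matching an observed spectrum against a library of known ones. The sketch below uses cosine similarity over a few invented frequency bands; the vectors and class names are illustrative, not real signatures.

```python
# Toy sketch of acoustic-signature matching: compare an observed spectrum
# against a library of known signatures by cosine similarity.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length spectra."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def identify(library, observed):
    """Return the library entry whose spectrum best matches the observation."""
    return max(library, key=lambda name: cosine(library[name], observed))

# Invented tonal "spectra": relative energy in three frequency bands.
library = {
    "diesel_sub": [0.9, 0.1, 0.2],
    "cargo_ship": [0.2, 0.8, 0.4],
}
print(identify(library, [0.85, 0.15, 0.25]))  # -> diesel_sub
```

Real systems work on far richer data (tonals, harmonics, transients), but the nearest-match principle is the same.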

BlackWing

Once the initial intelligence these drones collect has been analysed, a payload’s operators will be in a position to relay further orders. They could, for example, send aerial drones ashore to drop off solar-powered ground sensors at specified points. These sensors, typically disguised as rocks, will send back the data they collect via drones of the sort that dropped them off. Some will have cameras or microphones, others seismometers which detect the vibrations of ground vehicles, while others still intercept radio traffic or Wi-Fi.

Lockheed Martin Ground Sensor Disguised as Rock

HCUS will also be capable of what are described as “limited offensive effects”. Small drones like BlackWing can be fitted with warheads powerful enough to destroy an SUV or a pickup truck. Such drones are already used to assassinate the leaders of enemy forces. They might be deployed against fuel and ammunition stores, too.

Unmanned systems such as HCUS thus promise greatly to expand the scope of submarine-based spying and special operations. Drones are cheap, expendable and can be deployed with no risk of loss of personnel. They are also “deniable”. Even when a spy drone is captured it is hard to prove where it came from. Teams of robot spies and saboteurs launched from submarines, both manned and unmanned, could thus become an important feature of the black-ops of 21st-century warfare.

Excerpts from Submarine-launched drone platoons will soon be emerging from the sea: Clandestine Warfare, Economist, June 22, 2019

Killer Robots: Your Kids V. Theirs

The Harop, a kamikaze drone, bolts from its launcher like a horse out of the gates. But it is not built for speed, nor for a jockey. Instead it just loiters, unsupervised, too high for those on the battlefield below to hear the thin old-fashioned whine of its propeller, waiting for its chance.

Israel Aerospace Industries (IAI) has been selling the Harop for more than a decade. A number of countries have bought the drone, including India and Germany. …In 2017, according to a report by the Stockholm International Peace Research Institute (SIPRI), a think-tank, the Harop was one of 49 deployed systems which could detect possible targets and attack them without human intervention. It is thus very much the sort of thing which disturbs the coalition of 89 non-governmental organisations (NGOs) in 50 countries that has come together under the banner of the “Campaign to Stop Killer Robots”.

Consider the Phalanx guns used by the navies of America and its allies. Once switched on, the Phalanx will fire on anything it sees heading towards the ship it is mounted on. And in the case of a ship at sea that knows itself to be under attack by missiles too fast for any human trigger finger, that seems fair enough. Similar arguments can be made for the robot sentry guns in the demilitarised zone (DMZ) between North and South Korea.

Autonomous vehicles do not have to become autonomous weapons, even when capable of deadly force. The Reaper drones with which America assassinates enemies are under firm human control when it comes to acts of violence, even though they can fly autonomously…. One of the advantages that MBDA, a European missile-maker, boasts for its air-to-ground Brimstones is that they can “self-sort” based on firing order. If different planes launch volleys of Brimstones into the same “kill box”, where they are free to do their worst, the missiles will keep tabs on each other to reduce the chance that two strike the same target.
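“Self-sorting” by firing order can be read as a simple deconfliction rule: each munition, in launch order, claims a target no earlier one has claimed. The sketch below is a guess at that behaviour for illustration, not MBDA’s actual algorithm; all names are invented.

```python
# Hedged sketch of firing-order deconfliction: each missile, in launch
# order, claims the first unclaimed target, so no two strike the same
# target while targets remain.

def self_sort(missiles_in_firing_order, targets):
    """Assign each missile a distinct target while targets remain;
    later missiles double up only once every target is claimed."""
    assignments = {}
    claimed = set()
    for missile in missiles_in_firing_order:
        free = [t for t in targets if t not in claimed]
        pick = free[0] if free else targets[0]
        assignments[missile] = pick
        claimed.add(pick)
    return assignments

print(self_sort(["m1", "m2", "m3"], ["tank_a", "tank_b"]))
# m1 and m2 take distinct targets; m3, with none left, doubles up.
```

The appeal of such a rule is that it needs only missile-to-missile communication, not a link back to a human operator.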

Cost is also a factor in armies where trained personnel are pricey. “The thing about robots is that they don’t have pensions,”…If keeping a human in the loop were merely a matter of spending more, it might be deemed worthwhile regardless. But human control creates vulnerabilities. It means that you must pump a lot of encrypted data back and forth. What if the necessary data links are attacked physically—for example with anti-satellite weapons—jammed electronically or subverted through cyberwarfare? Future wars are likely to be fought in what America’s armed forces call “contested electromagnetic environments”. The Royal Air Force is confident that encrypted data links would survive such environments. But air forces have an interest in making sure there are still jobs for pilots; this may leave them prey to unconscious bias.

The vulnerability of communication links to interference is an argument for greater autonomy. But autonomous systems can be interfered with, too. The sensors for weapons like Brimstone need to be a lot more fly than those required by, say, self-driving cars, not just because battlefields are chaotic, but also because the other side will be trying to disorient them. Just as some activists use asymmetric make-up to try to confuse face-recognition systems, so military targets will try to distort the signatures which autonomous weapons seek to discern. Paul Scharre, author of “Army of None: Autonomous Weapons and the Future of War”, warns that the neural networks used in machine learning are intrinsically vulnerable to spoofing.
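The spoofing Mr Scharre warns about has a simple core: for any learned classifier, a small, carefully structured perturbation of the input can flip the decision. The sketch below shows the gradient-sign idea (the basis of the FGSM attack on neural networks) on a toy linear scorer; the weights and inputs are invented numbers.

```python
# Minimal sketch of why learned classifiers are spoofable: for a linear
# scorer, a tiny perturbation aligned against the weight vector flips
# the decision -- the same gradient-sign trick used against neural nets.

def score(w, x):
    """Linear score: positive means 'target'."""
    return sum(wi * xi for wi, xi in zip(w, x))

def spoof(w, x, eps):
    """Nudge each feature by eps against the sign of its weight."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [2.0, -1.0, 0.5]          # "target" if score > 0
x = [0.3, 0.1, 0.2]           # genuine target signature
print(score(w, x) > 0)        # True: classified as a target
x_adv = spoof(w, x, eps=0.4)  # small, structured distortion
print(score(w, x_adv) > 0)    # False: the same object now reads as clutter
```

Military camouflage aimed at a sensor’s learned features is this trick executed in paint and netting rather than in pixel values.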

In 2017 the UN Convention on Certain Conventional Weapons put together a group of governmental experts to study the finer points of autonomy. As well as trying to develop a common understanding of what weapons should be considered fully autonomous, it is considering both a blanket ban and other options for dealing with the humanitarian and security challenges that they create. Most states involved in the convention’s discussions agree on the importance of human control. But they differ on what this actually means. In a paper for Article 36, an advocacy group named after a provision of the Geneva conventions that calls for legal reviews on new methods of warfare, Heather Roff and Richard Moyes argue that “a human simply pressing a ‘fire’ button in response to indications from a computer, without cognitive clarity or awareness” is not really in control. “Meaningful control”, they say, requires an understanding of the context in which the weapon is being used as well as capacity for timely and reasoned intervention. It also requires accountability…

The two dozen states that want a legally binding ban on fully autonomous weapons are mostly military minnows like Djibouti and Peru, but some members, such as Austria, have diplomatic sway. None of them has the sort of arms industry that stands to profit from autonomous weapons. They ground their argument in part on International Humanitarian Law (IHL), a corpus built around the rules of war laid down in the Hague and Geneva conventions. This demands that armies distinguish between combatants and civilians, refrain from attacks where the risk to civilians outweighs the military advantage, use no more force than is proportional to the objective and avoid unnecessary suffering…Beyond the core group advocating a ban there is a range of opinions. China has indicated that it supports a ban in principle; but on use, not development. France and Germany oppose a ban, for now; but they want states to agree a code of conduct with wriggle room “for national interpretations”. India is reserving its position. It is eager to avoid a repeat of nuclear history, in which technological have-nots were locked out of game-changing weaponry by a discriminatory treaty.

At the far end of the spectrum a group of states, including America, Britain and Russia, explicitly opposes the ban. These countries insist that existing international law provides a sufficient check on all future systems….States are likely to sacrifice human control for self-preservation, says General Barrons. “You can send your children to fight this war and do terrible things, or you can send machines and hang on to your children.” Other people’s children are other people’s concern.

Excerpts from Briefing Autonomous Weapons: Trying to Restrain the Robots, Economist, Jan. 19, 2019, at 22

Stopping the Unstoppable: undersea nuclear torpedoes

On July 20th 1960, a missile popped out of an apparently empty Atlantic ocean. Its solid-fuel rocket fired just as it cleared the surface and it tore off into the sky. Hours later, a second missile followed. An officer on the ballistic-missile submarine USS George Washington sent a message to President Dwight Eisenhower: “POLARIS—FROM OUT OF THE DEEP TO TARGET. PERFECT.” America had just completed its first successful launch of a ballistic missile from beneath the ocean. Less than two months later, the Soviet Union conducted a similar test in the White Sea, north of Archangel.

Those tests began a new phase in the cold war. Having ballistic missiles on effectively invisible launchers meant that neither side could destroy the other’s nuclear arsenal in a single attack. By keeping safe the capacity for a retaliatory second strike, ballistic-missile submarines underpinned the concept of “mutually assured destruction” (MAD), thereby deterring any form of nuclear first strike. America, Britain, China, France and Russia all have nuclear-powered submarines on permanent or near-permanent patrol, capable of launching nuclear missiles; India has one such submarine, too, and Israel is believed to have nuclear missiles on conventionally powered submarines.

As well as menacing the world at large, submarines pose a much more specific threat to other countries’ navies; most military subs are attack boats rather than missile platforms. This makes anti-submarine warfare (ASW) a high priority for anyone who wants to keep their surface ships on the surface. Because such warfare depends on interpreting lots of data from different sources—sonar arrays on ships, sonar buoys dropped from aircraft, passive listening systems on the sea-floor—technology which allows new types of sensor and new ways of communicating could greatly increase its possibilities. “There’s an unmanned-systems explosion,” says Jim Galambos of DARPA, the Pentagon’s future-technology arm. Up until now, he says, submariners could be fairly sure of their hiding place, operating “alone and unafraid”. That is changing.

Aircraft play a big role in today’s ASW, flying from ships or shore to drop “sonobuoys” in patterns calculated to have the best chance of spotting something. This is expensive. An aeroplane with 8-10 people in it throws buoys out and waits around to listen to them and process their data on board. “In future you can envision a pair of AUVs [autonomous underwater vehicles], one deploying and one loitering and listening,” says Fred Cotaras of Ultra Electronics, a sonobuoy maker. Cheaper deployment means more buoys.

But more data is not that helpful if you do not have ways of moving it around, or of knowing where exactly it comes from. That is why DARPA is working on a Positioning System for Deep Ocean Navigation (POSYDON) which aims to provide “omnipresent, robust positioning across ocean basins” just as GPS satellites do above water, says Lisa Zurk, who heads up the programme. The system will use a natural feature of the ocean known as the “deep sound channel”. The speed of sound in water depends on temperature, pressure and, to some extent, salinity. The deep sound channel is found at the depth where these factors provide the lowest speed of sound. Below it, higher pressure makes the sound faster; above it, warmer water has the same effect…
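The sound-speed minimum the article describes can be computed directly. The sketch below uses Mackenzie’s (1981) empirical formula for the speed of sound in seawater and an invented, purely illustrative thermocline, then scans the water column for the minimum, which marks the channel axis.

```python
# Sketch of why a "deep sound channel" exists: sound speed falls with
# cooling temperature near the surface but rises with pressure at depth,
# so their sum has a minimum. Uses Mackenzie's (1981) empirical formula;
# the temperature profile is an invented illustrative thermocline.
import math

def sound_speed(T, S, D):
    """Mackenzie (1981): speed of sound (m/s) from temperature T (deg C),
    salinity S (parts per thousand) and depth D (m)."""
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35) + 1.630e-2 * D + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35) - 7.139e-13 * T * D**3)

def temperature(depth_m):
    """Toy thermocline: warm surface water decaying to 2 deg C at depth."""
    return 2.0 + 18.0 * math.exp(-depth_m / 500.0)

# Scan the water column for the sound-speed minimum -- the channel axis.
profile = [(d, sound_speed(temperature(d), 35.0, d)) for d in range(0, 2001)]
axis_depth, axis_speed = min(profile, key=lambda p: p[1])
print(f"channel axis near {axis_depth} m, c = {axis_speed:.1f} m/s")
```

With this made-up profile the minimum falls around a kilometre down, in line with the roughly 1,000 m axis of the real SOFAR channel at mid-latitudes; sound launched near the axis refracts back towards it from above and below, which is what lets it travel across ocean basins.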

Even in heavily surveilled seas, spotting submarines will remain tricky. They are already quiet, and getting quieter; new “air-independent propulsion” systems mean that conventionally powered submarines can now turn off their diesel engines and run as quietly as nuclear ones, perhaps even more so, for extended periods of time. Greater autonomy, and thus fewer humans—or none at all—could make submarines quieter still.

A case in point is a Russian weapon called Status-6, also known as Kanyon, about which Vladimir Putin boasted in a speech on March 1st, 2018. America’s recent nuclear-posture review describes it as “a new intercontinental, nuclear-armed, nuclear-powered, undersea autonomous torpedo”. A Russian state television broadcast in 2015 appeared to show it as a long, thin AUV that can be launched from a modified submarine and travel thousands of kilometres to explode off the shore of a major city with a great deal more energy than the largest warheads on ICBMs, thus generating a radioactive tsunami. Such a system might be seen as preserving a second-strike capability even if the target had a missile-defence system capable of shooting ICBMs out of the sky…

One part of the ocean that has become particularly interesting in this regard is the Arctic. Tracking submarines under or near ice is difficult, because ice constantly shifts, crackles and groans loudly enough to mask the subtle sounds of a submarine. With ever less ice in the Arctic this is becoming less of a problem, meaning America should be better able to track Russian submarines through its Assured Arctic Awareness programme…

Greater numbers of better sensors, better networked, will not soon make submarines useless; but even without breakthroughs, they could erode the strategic norm that has guided nuclear thinking for over half a century—that of an unstoppable second strike.

Excerpts from Mutually assured detection, Economist, Mar. 10, 2018