
Satellites Shed Light on Modern Slavery in Fishing

While forced labor, a form of modern slavery, in the world’s fishing fleet has been widely documented, its extent remains unknown. No methods previously existed for remotely identifying individual fishing vessels potentially engaged in these abuses on a global scale. By combining expertise from human rights practitioners with satellite vessel monitoring data, scientists showed in a recent study that vessels reported to use forced labor behave in systematically different ways from other vessels. The scientists used machine learning to identify high-risk vessels from among 16,000 industrial longliner, squid jigger, and trawler fishing vessels.

The study concluded that between 14% and 26% of vessels were high-risk. It also revealed patterns of where these vessels fished and which ports they visited. Between 57,000 and 100,000 individuals worked on these vessels, many of whom may have been victims of forced labor. This information provides unprecedented opportunities for novel interventions to combat this humanitarian tragedy…

The study found, inter alia, that longliners and trawlers using forced labor travel farther from port and shore, fish more hours per day than other vessels, and make fewer but longer voyages… Taiwanese longliners, Chinese squid jiggers, and Chinese, Japanese, and South Korean longliners are consistently the five fisheries with the largest number of unique high-risk vessels. This pattern is consistent with reports on the abuses seen within distant-water fleets that receive little legal oversight and often use marginalized migrant workers.

Excerpts from Gavin G. McDonald et al., Satellites can reveal global extent of forced labor in the world’s fishing fleet, Dec. 21, 2020
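To make the approach concrete, here is a minimal sketch, not from the study, of the kind of classifier it describes: a gradient-boosted model trained on the behavioral features the authors highlight (distance from port and shore, daily fishing hours, voyage counts and durations). Every value below is synthetic and the feature scales are invented; the real analysis worked from satellite vessel monitoring records.

```python
# Illustrative sketch only: a synthetic stand-in for the study's approach.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Behavioral features named in the study (scales here are made up):
X = np.column_stack([
    rng.gamma(2.0, 400.0, n),  # mean distance from port (km)
    rng.gamma(2.0, 150.0, n),  # mean distance from shore (km)
    rng.uniform(2, 20, n),     # fishing hours per day
    rng.poisson(6, n),         # voyages per year
    rng.gamma(2.0, 15.0, n),   # mean voyage duration (days)
])

# Synthetic labels mimicking the reported pattern: high-risk vessels
# range farther, fish longer, and take fewer but longer voyages.
score = X[:, 0] / 800 + X[:, 2] / 20 + X[:, 4] / 30 - X[:, 3] / 12
y = (score + rng.normal(0, 0.3, n) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"held-out accuracy on synthetic data: {model.score(X_te, y_te):.2f}")
```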

Algorithms as Weapons: Tracking and Targeting Nuclear Weapons

New and unproved technologies—this time computer systems capable of performing superhuman tasks using machine learning and other forms of artificial intelligence (AI)—threaten to destabilise the global “strategic balance”, by seeming to offer ways to launch a knockout blow against a nuclear-armed adversary, without triggering an all-out war.

A report issued in November by America’s National Security Commission on Artificial Intelligence, a body created by Congress and chaired by Eric Schmidt, a former boss of Google, and Robert Work, who was deputy defence secretary from 2014-17, ponders how AI systems may reshape global balances of power as dramatically as electricity changed warfare and society in the 19th century. Notably, it focuses on the ability of AI to “find the needle in the haystack”, by spotting patterns and anomalies in vast pools of data… In a military context, it may one day find the stealthiest nuclear-armed submarines, wherever they lurk. The commission is blunt. Nuclear deterrence could be undermined if AI-equipped systems succeed in tracking and targeting previously invulnerable military assets. That in turn could increase incentives for states, in a crisis, to launch a devastating pre-emptive strike. China’s rise as an AI power represents the most complex strategic challenge that America faces, the commission adds, because the two rivals’ tech sectors are so entangled by commercial, academic and investment ties.

Some Chinese officials sound gung-ho about AI as a path to prosperity and development, with few qualms about privacy or lost jobs. Still, others in China fret about AI that might put winning a war ahead of global stability, like some game-playing doomsday machine. Chinese officials have studied initiatives such as the “Digital Geneva Convention” drafted by Microsoft, a technology giant. This would require states to forswear cyber-attacks on such critical infrastructure as power grids, hospitals and international financial systems. AI would make it easier to locate and exploit vulnerabilities in these…

One obstacle is physical. Warheads or missile defences can be counted by weapons inspectors. In contrast, rival powers cannot safely show off their most potent algorithms, or even describe AI capabilities in a verifiable way… Westerners worry especially about so-called “black box” algorithms, powerful systems that generate seemingly accurate results but whose reasoning is a mystery even to their designers.

Excerpts from Chaguan: The Digital Divide, Economist, Jan 18, 2019
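The commission’s “needle in the haystack” is, in machine-learning terms, anomaly detection: flagging the few data points that behave unlike the bulk. A minimal illustration, not from the report, using scikit-learn’s isolation forest on synthetic data:

```python
# Illustrative anomaly detection: flag the few points that behave
# differently from the mass of ordinary data ("needles in the haystack").
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
haystack = rng.normal(0.0, 1.0, size=(10_000, 2))  # ordinary traffic
needles = rng.normal(6.0, 0.5, size=(5, 2))        # a handful of outliers
data = np.vstack([haystack, needles])

detector = IsolationForest(contamination=0.001, random_state=42).fit(data)
flags = detector.predict(data)                     # -1 marks anomalies
print(f"flagged {np.sum(flags == -1)} of {len(data)} points as anomalous")
```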

How to Fool your Enemy: Artificial Intelligence in Conflict

The contest between China and America, the world’s two superpowers, has many dimensions… One of the most alarming and least understood is the race towards artificial-intelligence-enabled warfare. Both countries are investing large sums in militarised artificial intelligence (AI), from autonomous robots to software that gives generals rapid tactical advice in the heat of battle… As Jack Shanahan, a general who is the Pentagon’s point man for AI, put it last month, “What I don’t want to see is a future where our potential adversaries have a fully AI-enabled force and we do not.”

AI-enabled weapons may offer superhuman speed and precision. In order to gain a military advantage, the temptation for armies will be to allow them not only to recommend decisions but also to give orders. That could have worrying consequences. Able to think faster than humans, an AI-enabled command system might cue up missile strikes on aircraft carriers and airbases at a pace that leaves no time for diplomacy and in ways that are not fully understood by its operators. On top of that, AI systems can be hacked, and tricked with manipulated data.

AI in war might aid surprise attacks or confound them, and the death toll could range from none to millions.  Unlike missile silos, software cannot be spied on from satellites. And whereas warheads can be inspected by enemies without reducing their potency, showing the outside world an algorithm could compromise its effectiveness. The incentive may be for both sides to mislead the other. “Adversaries’ ignorance of AI-developed configurations will become a strategic advantage,” suggests Henry Kissinger, who led America’s cold-war arms-control efforts with the Soviet Union…Amid a confrontation between the world’s two big powers, the temptation will be to cut corners for temporary advantage. 

Excerpts from Mind control: Artificial intelligence and war, Economist,  Sept. 7, 2019
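The warning that AI systems can be “tricked with manipulated data” refers to adversarial examples: small, deliberately crafted input changes that flip a model’s output. A minimal sketch, not from the article, against a simple linear classifier; the data, the ±0.3 class means, and the step size ε = 0.8 are all invented for illustration, and the same gradient-guided idea scales up to deep networks:

```python
# Adversarial example against a linear classifier: a perturbation smaller
# than the per-feature noise, applied across many features at once,
# flips the predicted class (the idea behind FGSM-style attacks).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
d = 100                                   # many weakly informative features
X = np.vstack([rng.normal(-0.3, 1.0, (500, d)),   # class 0
               rng.normal(+0.3, 1.0, (500, d))])  # class 1
y = np.array([0] * 500 + [1] * 500)
clf = LogisticRegression().fit(X, y)

x = np.full(d, -0.3)                      # a prototypical class-0 input
eps = 0.8                                 # below the noise level (std = 1)
x_adv = x + eps * np.sign(clf.coef_[0])   # nudge every feature toward class 1

print("clean prediction:       ", clf.predict([x])[0])      # expect 0
print("adversarial prediction: ", clf.predict([x_adv])[0])  # expect 1
```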

Example of the Use of AI in Warfare: The Real-time Adversarial Intelligence and Decision-making (RAID) program under the auspices of the Defense Advanced Research Projects Agency’s (DARPA) Information Exploitation Office (IXO) focuses on the challenge of anticipating enemy actions in a military operation. In the US Air Force community, the term “predictive battlespace awareness” refers to capabilities that would help the commander and staff to characterize and predict likely enemy courses of action… Today’s practices of military intelligence and decision-making do include a number of processes specifically aimed at predicting enemy actions. Currently, these processes are largely manual and mental, and do not involve any significant use of technical means. Even when computerized wargaming is used (albeit rarely in field conditions), it relies either on human guidance of the simulated enemy units or on simple reactive behaviors of such simulated units; in neither case is there a computerized prediction of intelligent and forward-looking enemy actions…

[The deception reasoning of the adversary is very important in this context.] Deception reasoning refers to an important aspect of predicting enemy actions: the fact that military operations have historically depended, crucially, on the ability to use various forms of concealment and deception for friendly purposes while detecting and counteracting the enemy’s concealment and deception. Therefore, adversarial reasoning must include deception reasoning.

The RAID Program will develop a real-time adversarial predictive analysis tool that operates as an automated enemy predictor providing a continuously updated picture of probable enemy actions in tactical ground operations. The RAID Program will strive to: prove that adversarial reasoning can be automated; prove that automated adversarial reasoning can include deception….

Excerpts from Real-time Adversarial Intelligence and Decision-making (RAID), US Federal Grants
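A toy illustration of what “deception reasoning” means formally (our construction; RAID’s actual models are not public): when updating beliefs about enemy courses of action, the observation model must allow for the chance that what was observed is a feint. The higher the assumed deception probability, the less an observed move should shift the estimate:

```python
# Toy Bayesian deception reasoning: infer the enemy's intended course of
# action (COA) from an observation that may be a deliberate feint.
# All names and numbers here are invented for illustration.

coas = ["attack_north", "attack_south"]
prior = {"attack_north": 0.5, "attack_south": 0.5}

def posterior(observed_move: str, p_deception: float) -> dict:
    """P(intent | observation), allowing the observation to be staged.

    An honest enemy's visible move matches its intent; a deceptive one
    shows the opposite move with probability p_deception.
    """
    likelihood = {}
    for intent in coas:
        p_match = 1.0 - p_deception if intent == observed_move else p_deception
        likelihood[intent] = p_match * prior[intent]
    z = sum(likelihood.values())
    return {intent: p / z for intent, p in likelihood.items()}

# Naive analyst: takes the observed northward movement at face value.
print(posterior("attack_north", p_deception=0.0))  # all belief on north
# Deception-aware analyst: a feint is plausible, so belief moves less.
print(posterior("attack_north", p_deception=0.4))  # 0.6 north, 0.4 south
```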

Dodging the Camera: How to Beat the Surveillance State at Its Own Game

Powered by advances in artificial intelligence (AI), face-recognition systems are spreading like knotweed. Facebook, a social network, uses the technology to label people in uploaded photographs. Modern smartphones can be unlocked with it… America’s Department of Homeland Security reckons face recognition will scrutinise 97% of outbound airline passengers by 2023. Networks of face-recognition cameras are part of the police state China has built in Xinjiang, in the country’s far west. And a number of British police forces have tested the technology as a tool of mass surveillance in trials designed to spot criminals on the street.  A backlash, though, is brewing.

Refuseniks can also take matters into their own hands by trying to hide their faces from the cameras or, as has happened recently during protests in Hong Kong, by pointing hand-held lasers at CCTV cameras to dazzle them. Meanwhile, a small but growing group of privacy campaigners and academics are looking at ways to subvert the underlying technology directly…

Laser Pointers Used to Blind CCTV cameras during the Hong Kong Protests 2019

In 2010… an American researcher and artist named Adam Harvey created “CV [computer vision] Dazzle”, a style of make-up designed to fool face recognisers. It uses bright colours, high contrast, graded shading and asymmetric stylings to confound an algorithm’s assumptions about what a face looks like. To a human being, the result is still clearly a face. But a computer—or, at least, the specific algorithm Mr Harvey was aiming at—is baffled…

Modern Make-Up to Hide from CCTV cameras

HyperFace is a newer project of Mr Harvey’s. Where CV Dazzle aims to alter faces, HyperFace aims to hide them among dozens of fakes. It uses blocky, semi-abstract and comparatively innocent-looking patterns that are designed to appeal as strongly as possible to face classifiers. The idea is to disguise the real thing among a sea of false positives. Clothes with the pattern, which features lines and sets of dark spots vaguely reminiscent of mouths and pairs of eyes, are available…

Hyperface Clothing for Camouflage

Even in China, says Mr Harvey, only a fraction of CCTV cameras collect pictures sharp enough for face recognition to work. Low-tech approaches can help, too. “Even small things like wearing turtlenecks, wearing sunglasses, looking at your phone [and therefore not at the cameras]—together these have some protective effect”.

Excerpts from As face-recognition technology spreads, so do ideas for subverting it: Fooling Big Brother,  Economist, Aug. 17, 2019
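The excerpt notes that CV Dazzle was tuned against a specific algorithm. A common baseline of that era is the Viola-Jones Haar cascade bundled with OpenCV, which scans for the expected arrangement of light and dark facial regions; whether or not it was Mr Harvey’s exact target, it illustrates the class of detector such make-up tries to break. A minimal sketch for testing a look against it (supply your own photo path); modern neural-network detectors are a harder target, which is partly why HyperFace takes the opposite tack of flooding classifiers with fakes:

```python
# Run OpenCV's classical Haar-cascade (Viola-Jones) face detector on an
# image. CV Dazzle-style styling aims to make this kind of detector
# return zero detections even though a human still sees a face.
import sys
import cv2

image = cv2.imread(sys.argv[1])  # path to a photo, supplied by you
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"detected {len(faces)} face(s)")
for (x, y, w, h) in faces:
    print(f"  face at x={x}, y={y}, size {w}x{h}")
```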

The Perfect Crystal Ball for Gray War

Hostile action that falls short of, but often precedes, violence is sometimes referred to as gray zone warfare, the ‘zone’ being a sort of liminal state between peace and war. The actors that work in it are difficult to identify and their aims hard to predict, by design…

Dubbed COMPASS, the new program will “leverage advanced artificial intelligence technologies, game theory, and modeling and estimation to both identify stimuli that yield the most information about an adversary’s intentions, and provide decision makers high-fidelity intelligence on how to respond, with positive and negative tradeoffs for each course of action,” according to a DARPA notice posted on March 14, 2018.

Teaching software to understand and interpret human intention — a task sometimes called “plan recognition” — has advanced as quickly as the spread of computers and the internet, because all three are intimately linked.

From Amazon to Google to Facebook, the world’s top tech companies are pouring money into probabilistic modeling of user behavior, as part of a constant race to keep from losing users to sites that can better predict what they want. A user’s every click, “like,” and even period of inactivity adds to the companies’ almost unimaginably large data sets, and new machine learning and statistical techniques make it easier than ever to use the information to predict what a given user will do next on a given site.
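At its simplest, that kind of behavioral prediction is a conditional-probability estimate over an activity log. A toy first-order Markov sketch (ours, not any company’s; real systems use far richer models and features):

```python
# Toy next-action prediction: estimate P(next action | current action)
# from a click log, then predict the most likely next step.
from collections import Counter, defaultdict

clicks = ["home", "search", "product", "cart", "search", "product",
          "cart", "checkout", "home", "search", "product", "product"]

transitions = defaultdict(Counter)
for current, nxt in zip(clicks, clicks[1:]):
    transitions[current][nxt] += 1

def predict_next(action: str) -> str:
    """Most likely next action under a first-order Markov model."""
    return transitions[action].most_common(1)[0][0]

print(predict_next("search"))  # -> "product" in this tiny log
```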

But inferring a user’s next Amazon purchase (based on data that user has volunteered about previous choices, likes, etc.) is altogether different from predicting how an adversary intends to engage in political or unconventional warfare. So the COMPASS program seeks to use video, text, and other pieces of intelligence that are a lot harder to get than shopping-cart data…

Unlike in shopping, the analytical tricks that apply to one gray-zone adversary won’t work on another. “History has shown that no two [unconventional warfare] situations or solutions are identical, thus rendering cookie-cutter responses not only meaningless but also often counterproductive,” wrote Gen. Joseph Votel, who leads U.S. Central Command, in his seminal 2016 treatise on gray zone warfare.

Excerpts from The Pentagon Wants AI To Reveal Adversaries’ True Intention, www.govexec.com, Mar. 17, 2018
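The DARPA notice’s phrase “stimuli that yield the most information about an adversary’s intentions” has a standard information-theoretic reading: score each candidate probe by the expected reduction in uncertainty over intent hypotheses, and pick the highest. A toy sketch (ours, with invented numbers; COMPASS’s actual models are not public):

```python
# Pick the probe with the highest expected information gain (mutual
# information between the adversary's response and its intent).
import math

intents = ["escalate", "hold", "withdraw"]
belief = {"escalate": 0.5, "hold": 0.3, "withdraw": 0.2}

# P(observed response | intent, probe) for two candidate probes.
response_model = {
    "naval_patrol": {
        "escalate": {"confront": 0.9, "ignore": 0.1},
        "hold":     {"confront": 0.5, "ignore": 0.5},
        "withdraw": {"confront": 0.1, "ignore": 0.9},
    },
    "diplomatic_note": {
        "escalate": {"confront": 0.6, "ignore": 0.4},
        "hold":     {"confront": 0.5, "ignore": 0.5},
        "withdraw": {"confront": 0.4, "ignore": 0.6},
    },
}

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_info_gain(probe):
    """Prior entropy minus expected posterior entropy after the probe."""
    gain = entropy(belief)
    for response in ("confront", "ignore"):
        p_resp = sum(belief[i] * response_model[probe][i][response]
                     for i in intents)
        post = {i: belief[i] * response_model[probe][i][response] / p_resp
                for i in intents}
        gain -= p_resp * entropy(post)
    return gain

for probe in response_model:
    print(f"{probe}: expected information gain = "
          f"{expected_info_gain(probe):.3f} bits")
# The sharply discriminating probe (naval_patrol) scores highest.
```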

Biometrics Gone Wrong

Despite their huge potential, artificial intelligence and biometrics still very much need human input for accurate identification, according to the director of the Defense Advanced Research Projects Agency. Speaking at an Atlantic Council event, Arati Prabhakar said that while the best facial recognition systems out there are statistically better than most humans at image identification, when they are wrong, “they are wrong in ways that no human would ever be wrong”…

“You want to embrace the power of these new technologies but be completely clear-eyed about what their limitations are so that they don’t mislead us,” Prabhakar said. That’s a stance humans must take with technology writ large, she said, explaining her hesitance to take for granted what many of her friends in Silicon Valley often assume — that more data is always a good thing. “More data could just mean that you have so much data that whatever hypothesis you have, you can find something that supports it,” Prabhakar said.

DARPA director cautious over AI, biometrics, Planet Biometrics, May 4, 2016
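Prabhakar’s last point is the multiple-comparisons problem, and it is easy to demonstrate numerically (our illustration, not hers): test enough independent random features against a random target and some will appear to “support” any hypothesis by chance alone.

```python
# All data below is pure noise, yet the best spurious correlation with
# the target grows steadily as more random features are examined.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 100
target = rng.normal(size=n_samples)   # the "hypothesis" variable
tz = target - target.mean()

for n_features in (10, 1_000, 100_000):
    noise = rng.normal(size=(n_features, n_samples))
    nz = noise - noise.mean(axis=1, keepdims=True)
    # Pearson correlation of every feature with the target, vectorized.
    corrs = np.abs(nz @ tz) / (np.linalg.norm(nz, axis=1)
                               * np.linalg.norm(tz))
    print(f"{n_features:>7,} random features: "
          f"best chance correlation = {corrs.max():.2f}")
```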