Tag Archives: artificial intelligence

How China Plans to Destroy the U.S. AI Industry

China’s restrictions on rare-earth materials, announced on October 9, 2025, mark a nearly unprecedented export control*** that stands to disrupt the global economy, giving Beijing more leverage in trade negotiations and ratcheting up pressure on the Trump administration to respond.

The rule, put out by China’s Commerce Ministry, is viewed as an escalation in the U.S.-China trade fight because it threatens the supply chain for semiconductors. Chips are the lifeblood of the economy, powering phones, computers and data centers needed to train artificial-intelligence models. The rule also would affect cars, solar panels and the equipment for making chips and other products, limiting the ability of other countries to support their own industries. China produces roughly 90% of the world’s rare-earth materials.

Global companies that sell goods with certain rare-earth materials sourced from China accounting for 0.1% or more of the product’s value would need permission from Beijing, under the new rule. Tech companies will probably find it extremely difficult to show that their chips, the equipment needed to make them and other components fall below the 0.1% threshold, industry experts said. The rules could cause a U.S. recession if implemented aggressively because of how important AI capital spending is to the economy… “It’s an economic equivalent of nuclear war—an intent to destroy the American AI industry,” said Dmitri Alperovitch, co-founder of the Silverado Policy Accelerator think tank.

Excerpt from Amrith Ramkumar et al., China’s Rare-Earth Escalation Threatens Trade Talks—and the Global Economy, WSJ, Oct. 9, 2025

***The new export controls mark the first time China has applied the foreign direct product rule (FDPR)—a mechanism introduced by the United States in 1959 and long used by Washington to restrict semiconductor exports to China. The FDPR enables the United States to regulate the sale of foreign-made products if they incorporate U.S. technology, software, or equipment, even when produced by non-U.S. companies abroad. In effect, if U.S. technology appears anywhere in the supply chain, the United States can assert jurisdiction. See CSIS
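The 0.1% de minimis threshold described in the excerpt amounts to a simple share-of-value test. A minimal sketch of that arithmetic, with a hypothetical function name and invented component values (the rule's actual valuation methodology is not specified in the excerpt):

```python
def needs_beijing_license(product_value_usd, chinese_rare_earth_value_usd, threshold=0.001):
    """Return True if Chinese-sourced rare-earth content meets or exceeds
    the 0.1% share of product value described in the rule."""
    if product_value_usd <= 0:
        raise ValueError("product value must be positive")
    return chinese_rare_earth_value_usd / product_value_usd >= threshold

# Invented example: a $40,000 car containing $120 of Chinese-sourced
# rare-earth magnets (0.3% of value) would exceed the threshold.
print(needs_beijing_license(40_000, 120))  # True
print(needs_beijing_license(40_000, 20))   # 0.05% of value -> False
```

The threshold is so low that, as the industry experts quoted above note, demonstrating a product falls below it is likely to be the hard part, not the arithmetic.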

Blackmail and Espionage: rogue AI

Today I am reading on how AI models can blackmail and spy.

See How LLMs could be insider threats

DECEPTION IN LLMS: SELF-PRESERVATION AND AUTONOMOUS GOALS IN LARGE LANGUAGE MODELS

Chilling…

Can AI Do That? Knowledge Impossible to Copy

Zuckerberg hasn’t had much success in his efforts to hire the field’s biggest stars, including OpenAI’s co-founder Ilya Sutskever and its chief research officer, Mark Chen. Many candidates are happy to take a meeting at Zuckerberg’s homes in Palo Alto and Lake Tahoe. In private, they are comparing gossip and calculating Meta’s chances of winning the AI race.

The handful of researchers who are smartest about AI have built up what one described as “tribal knowledge” that is almost impossible to replicate. Rival researchers have lived in the same group houses in San Francisco, where they discuss papers that might provide clues for achieving the next great breakthrough. 

Excerpt from Ben Cohen et al., It’s Known as ‘The List’—and It’s a Secret File of AI Geniuses, WSJ, June 27, 2025

The Cat-and-Mouse Game: US-China, Chip Giants

The U.S. on March 28, 2025, added dozens of Chinese companies to a trade blacklist over national security concerns. American businesses seeking to sell technology to these companies will need approval from the government. Among those added were subsidiaries of Inspur Group, China’s largest server maker and a major customer for U.S. chip makers such as Nvidia, Intel and Advanced Micro Devices. Companies linked to China’s largest supercomputer maker, Sugon, were also added…

Nearly 80 companies were put on the Commerce Department’s blacklist, known as the entity list…including the U.S. server maker Aivres Systems that is wholly owned by Inspur Electronic. The latter is one-third owned by Inspur Group, according to corporate records. Aivres has been assembling high-end artificial-intelligence equipment for Nvidia. The AI-chip giant has said that Aivres will make servers using chips in the Blackwell family, Nvidia’s newest and most powerful processors.  Aivres advertises on its website that it sells servers and infrastructure powered by Blackwell chips, which are banned from sale into China…About two months after Inspur Group was added to the trade blacklist in March 2023, California-based Inspur Systems changed its name to Aivres Systems.

Excerpts from Liza Lin, Trump Takes Tough Approach to Choking Off China’s Access to U.S. Tech, WSJ, Mar. 26, 2025

Like a Lamb to the Slaughter: DeepSeek Collects Personal Data–Nobody Cares

Amid ongoing fears over TikTok, Chinese generative AI platform DeepSeek says it’s sending heaps of US user data straight to its home country, potentially setting the stage for greater scrutiny. The United States’ recent regulatory action against the Chinese-owned social video platform TikTok prompted mass migration to another Chinese app, the social platform “Rednote.” Now, a generative artificial intelligence platform from the Chinese developer DeepSeek is exploding in popularity, posing a potential threat to US AI dominance and offering the latest evidence that moratoriums like the TikTok ban will not stop Americans from using Chinese-owned digital services…In many ways, DeepSeek is likely sending more data back to China than TikTok has in recent years, since the social media company moved to US cloud hosting to try to deflect US security concerns.

“It shouldn’t take a panic over Chinese AI to remind people that most companies set the terms for how they use your private data,” says John Scott-Railton, a senior researcher at the University of Toronto’s Citizen Lab. “And that when you use their services, you’re doing work for them, not the other way around.”

To be clear, DeepSeek is sending your data to China. The English-language DeepSeek privacy policy, which lays out how the company handles user data, is unequivocal: “We store the information we collect in secure servers located in the People’s Republic of China.”

In other words, all the conversations and questions you send to DeepSeek, along with the answers that it generates, are being sent to China or can be. DeepSeek’s privacy policies also outline the information it collects about you, which falls into three sweeping categories: information that you share with DeepSeek, information that it automatically collects, and information that it can get from other sources…DeepSeek is largely free… “So what do we pay with? What… do we usually pay with: data, knowledge, content, information.” …

As with all digital platforms—from websites to apps—there can also be a large amount of data that is collected automatically and silently when you use the services. DeepSeek says it will collect information about what device you are using, your operating system, IP address, and information such as crash reports. It can also record your “keystroke patterns or rhythm.”…

Excerpts from Matt Burgess and Lily Hay Newman, DeepSeek’s Popular AI App Is Explicitly Sending US Data to China, Wired, Jan. 27, 2025


The Quick and Dirty AI Boom

Nowhere else on Earth has been physically reshaped by artificial intelligence as quickly as the Malaysian state of Johor. Three years ago, this region next to Singapore was a tech-industry backwater. Palm-oil plantations dotted the wetlands. Now rising next to those tropical trees 100 miles from the equator are cavernous rectangular buildings that, all together, make up one of the world’s biggest AI construction projects…

TikTok’s Chinese parent company, ByteDance, is spending $350 million on data centers in Johor. Microsoft just bought a 123-acre plot not far away for $95 million. Asset manager Blackstone recently paid $16 billion to buy AirTrunk, a data-center operator with Asia-wide locations including a Johor facility spanning an area the size of 19 football fields. Oracle last week announced a $6.5 billion investment in Malaysia’s data-center sector, though it didn’t specify where. In all, investments in data centers in Johor, which can be used for both AI and more conventional cloud computing, will reach $3.8 billion this year, estimates regional bank Maybank.

To understand how one of the first boomtowns of the AI era sprouted at the southern tip of the Malay Peninsula, consider the infrastructure behind AI. Tech giants want to train chatbots, driverless cars and other AI technology as quickly as possible. They do so in data centers with thousands of computer chips, which require a lot of power, as well as water for cooling…Northern Virginia became the world’s biggest data-center market because of available power, water and land. But supply is running low. Tech companies can’t build data centers fast enough in the U.S. alone. Enter Johor. It has plentiful land and power—largely from coal—and enough water. Malaysia enjoys generally friendly relations with the U.S. and China, reducing political risk for companies from the rival nations. The other important factor: location. Across the border is Singapore, which has one of the world’s densest intersections of undersea internet cables. Those are modern-age highways, enabling tech companies to sling mountains of data around the world.

Excerpt from Stu Woo, One of the Biggest AI Boomtowns Is Rising in a Tech-Industry Backwater, WSJ, Oct.  8, 2024

If the United States is a Surveillance State, How Does it Differ from China?

In November 2023, Michael Morell, a former deputy director of the Central Intelligence Agency (CIA), hinted at a big change in how the agency now operates. “The information that is available commercially would kind of knock your socks off…if we collected it using traditional intelligence methods, it would be top secret-sensitive. And you wouldn’t put it in a database, you’d keep it in a safe.”

In recent years, U.S. intelligence agencies, the military and even local police departments have gained access to enormous amounts of data through shadowy arrangements with brokers and aggregators. Everything from basic biographical information to consumer preferences to precise hour-by-hour movements can be obtained by government agencies without a warrant.

Most of this data is first collected by commercial entities as part of doing business. Companies acquire consumer names and addresses to ship goods and sell services. They acquire consumer preference data from loyalty programs, purchase history or online search queries. They get geolocation data when they build mobile apps or install roadside safety systems in cars. But once consumers agree to share information with a corporation, they have no way to monitor what happens to it after it is collected. Many corporations have relationships with data brokers and sell or trade information about their customers. And governments have come to realize that such corporate data not only offers a rich trove of valuable information but is available for sale in bulk.

Earlier generations of data brokers vacuumed up information from public records like driver’s licenses and marriage certificates. But today’s internet-enabled consumer technology makes it possible to acquire previously unimaginable kinds of data. Phone apps scan the signal environment around your phone and report back, hourly, about the cell towers, wireless earbuds, Bluetooth speakers and Wi-Fi routers that it encounters….The National Security Agency recently acknowledged buying internet browsing data from private brokers, and several sources have told me about programs allowing the U.S. to buy access to foreign cell phone networks. Those arrangements are cloaked in secrecy, but the data would allow the U.S. to see who hundreds of millions of people around the world are calling.

Car companies, roadside assistance services and satellite radio companies also collect geolocation data and sell it to brokers, who then resell it to government entities. Even tires can be a vector for surveillance. That little computer readout on your car that tells you the tire pressure is 42 PSI? It operates through a wireless signal from a tiny sensor, and government agencies and private companies have figured out how to use such signals to track people…

It’s legal for the government to use commercial data in intelligence programs because data brokers have either gotten the consent of consumers to collect their information or have stripped the data of any details that could be traced back to an individual. Much commercially available data doesn’t contain explicit personal information. But the truth is that there are ways to identify people in nearly all anonymized data sets. If you can associate a phone, a computer or a car tire with a daily pattern of behavior or a residential address, it can usually be associated with an individual.

And while consumers have technically consented to the acquisition of their personal data by large corporations, most aren’t aware that their data is also flowing to the government, which disguises its purchases of data by working with contractors. One giant defense contractor, Sierra Nevada, set up a marketing company called nContext, which is acquiring huge amounts of advertising data from commercial providers. Big data brokers that have reams of consumer information, like LexisNexis and Thomson Reuters, market products to government entities, as do smaller niche players. Companies like Babel Street, Shadowdragon, Flashpoint and Cobwebs have sprung up to sell insights into what happens on social media or other web forums. Location data brokers like Venntel and Safegraph have provided data on the movement of mobile phones…

A group of U.S. lawmakers is trying to stop the government from buying commercial data without court authorization by inserting a provision to that effect in a spy law, FISA Section 702, that Congress needs to reauthorize by April 19. The proposal would ban U.S. government agencies from buying data on Americans but would allow law-enforcement agencies and the intelligence community to continue buying data on foreigners…But many in the national security establishment think that it makes no sense to ban the government from acquiring data that everyone from the Chinese government to Home Depot can buy on the open market. The data is valuable—in some cases, so valuable that the government won’t even discuss what it’s buying. “Picture getting a suspect’s phone, then in the extraction [of data] being able to see everyplace they’d been in the last 18 months plotted on a map you filter by date ranges,” wrote one Maryland state trooper in an email obtained under public records laws. “The success lies in the secrecy.”

For spies and police officers alike, it is better for people to remain in the dark about what happens to the data generated by their daily activities—because if it were widely known how much data is collected and who buys it, it wouldn’t be such a powerful tool. Criminals might change their behavior. Foreign officials might realize they’re being surveilled. Consumers might be more reluctant to uncritically click “I accept” on the terms of service when downloading free apps. And the American public might finally demand that, after decades of inaction, their lawmakers finally do something about unrestrained data collection.

Excerpts from Byron Tau, US Spy Agencies Know Your Secrets. They Bought Them, WSJ, Mar. 8, 2024
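Tau's point that "anonymized" data sets can usually be tied back to individuals can be illustrated with a toy sketch: the grid cell a device occupies overnight is a strong proxy for a home address, and a home address is a strong proxy for a person. Every device ID, timestamp and grid cell below is invented:

```python
from collections import Counter
from datetime import datetime

# Invented "anonymized" pings: (device_id, ISO timestamp, coarse location grid cell)
pings = [
    ("dev-7", "2024-03-01T02:10:00", "cell-A"),
    ("dev-7", "2024-03-02T01:45:00", "cell-A"),
    ("dev-7", "2024-03-02T13:00:00", "cell-B"),
    ("dev-7", "2024-03-03T03:30:00", "cell-A"),
]

def likely_home_cell(pings, device_id, night_hours=range(0, 6)):
    """Guess a device's home location as the grid cell where it most often
    appears during night hours -- the kind of daily pattern that lets an
    'anonymous' identifier be associated with a residential address."""
    counts = Counter(
        cell for dev, ts, cell in pings
        if dev == device_id and datetime.fromisoformat(ts).hour in night_hours
    )
    return counts.most_common(1)[0][0] if counts else None

print(likely_home_cell(pings, "dev-7"))  # cell-A
```

Real de-anonymization pipelines are far more elaborate, but the underlying move is exactly this: associate a persistent identifier with a daily pattern of behavior, then with a place, then with a person.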

See also Means of Control: How the Hidden Alliance of Tech and Government Is Creating a New American Surveillance State by Byron Tau (published 2024).

Why China Lags Behind in Artificial Intelligence

China is two or three years behind America in building foundation models of AI. There are three reasons for this underperformance. The first concerns data. A centralized autocracy should be able to marshal lots of it—the government was, for instance, able to hand over troves of surveillance information on Chinese citizens to firms such as SenseTime or Megvii that, with the help of China’s leading computer-vision labs, then used it to develop top-notch facial-recognition systems.

That advantage has proved less formidable in the context of generative AIs, because foundation models are trained on the voluminous unstructured data of the web. American model-builders benefit from the fact that 56% of all websites are in English, whereas just 1.5% are written in Chinese, according to data from w3Techs, an internet-research site. As Yiqin Fu of Stanford University points out, the Chinese interact with the internet primarily through mobile super-apps like WeChat and Weibo. These are “walled gardens”, so much of their content is not indexed on search engines. This makes that content harder for AI models to suck up. Lack of data may explain why Wu Dao 2.0, a model unveiled in 2021 by the Beijing Academy of Artificial Intelligence, a state-backed outfit, failed to make a splash despite its possibly being computationally more complex than GPT-4.

The second reason for China’s lackluster generative achievements has to do with hardware. In 2022 America imposed export controls on technology that might give China a leg-up in AI. These cover the powerful microprocessors used in the cloud-computing data centres where foundation models do their learning, and the chipmaking tools that could enable China to build such semiconductors on its own.

That hurt Chinese model-builders. An analysis of 26 big Chinese models by the Centre for the Governance of AI, a British think-tank, found that more than half depended on Nvidia, an American chip designer, for their processing power. Some reports suggest that SMIC, China’s biggest chipmaker, has produced prototypes just a generation or two behind TSMC, the Taiwanese industry leader that manufactures chips for Nvidia. But SMIC can probably mass-produce only chips which TSMC was churning out by the million three or four years ago.

Chinese AI firms are having trouble getting their hands on another American export: know-how. America remains a magnet for the world’s tech talent; two-thirds of AI experts in America who present papers at the main AI conference are foreign-born. Chinese engineers made up 27% of that select group in 2019. Many Chinese AI boffins studied or worked in America before bringing expertise back home. The covid-19 pandemic and rising Sino-American tensions are causing their numbers to dwindle. In the first half of 2022 America granted half as many visas to Chinese students as in the same period in 2019.

The triple shortage—of data, hardware and expertise—has been a hurdle for China. Whether it will hold Chinese AI ambitions back much longer is another matter.

Excerpts from Artificial Intelligence: Model Socialists, Economist,  May 13, 2023, at 49

How Artificial Intelligence Can Help Produce Better Chemical Weapons

An international security conference convened by the Swiss Federal Institute for NBC (nuclear, biological and chemical) Protection (Spiez Laboratory) explored how artificial intelligence (AI) technologies for drug discovery could be misused for de novo design of biochemical weapons. According to the researchers, discussion of societal impacts of AI has principally focused on aspects such as safety, privacy, discrimination and potential criminal misuse, but not on national and international security. When we think of drug discovery, we normally do not consider technology misuse potential. We are not trained to consider it, and it is not even required for machine learning research.

According to the scientists, this should serve as a wake-up call for our colleagues in the ‘AI in drug discovery’ community. Although some expertise in chemistry or toxicology is still required to generate toxic substances or biological agents that can cause significant harm, machine learning models dramatically lower that technical threshold where these fields intersect: all that is needed is the ability to code and to understand the output of the models themselves. Open-source machine learning software is the primary route for learning and creating new models like ours, and toxicity datasets that provide a baseline model for predictions for a range of targets related to human health are readily available.

The genie is out of the medicine bottle when it comes to repurposing our machine learning. We must now ask: what are the implications? Our own commercial tools, as well as open-source software tools and many datasets that populate public databases, are available with no oversight. If the threat of harm, or actual harm, occurs with ties back to machine learning, what impact will this have on how this technology is perceived? Will hype in the press on AI-designed drugs suddenly flip to concern about AI-designed toxins, public shaming and decreased investment in these technologies? As a field, we should open a conversation on this topic. The reputational risk is substantial: it only takes one bad apple, such as an adversarial state or other actor looking for a technological edge, to cause actual harm by taking what we have vaguely described to the next logical step. How do we prevent this? Can we lock away all the tools and throw away the key? Do we monitor software downloads or restrict sales to certain groups?

Excerpts from Fabio Urbina et al., Dual use of artificial-intelligence-powered drug discovery, Nature Machine Intelligence (2022)

Robots to the Rescue: Best Dams on Amazon River

Proposed hydropower dams at more than 350 sites throughout the Amazon require strategic evaluation of trade-offs between the production of electricity and the protection of biodiversity.

Researchers are using artificial intelligence (AI) to identify sites that simultaneously minimize impacts on river flow, river connectivity, sediment transport, fish diversity, and greenhouse gas emissions while achieving energy production goals. The researchers found that uncoordinated, dam-by-dam hydropower expansion has resulted in forgone environmental benefits from the river. Minimizing further damage from hydropower development requires considering diverse environmental impacts across the entire basin, as well as cooperation among Amazonian nations. 

Alexander Flecker et al., Reducing adverse impacts of Amazon hydropower expansion, Science, Feb. 17, 2022

Alas! Computers that Really Get You

Artificial intelligence (AI) software can already identify people by their voices or handwriting. Now, an AI has shown it can tag people based on their chess-playing behavior, an advance in the field of “stylometrics” that could help computers be better chess teachers or more humanlike in their game play. Alarmingly, the system could also be used to help identify and track people who think their online behavior is anonymous.

The researchers are aware of the privacy risks posed by the system, which could be used to unmask anonymous chess players online…In theory, given the right data sets, such systems could identify people based on the quirks of their driving or the timing and location of their cellphone use.

Excerpt from Matthew Hutson, AI unmasks anonymous chess players, posing privacy risks, Science, Jan. 14, 2022

So You Want a Job? De-Humanizing the Hiring Process

Dr. Kai-Fu Lee, chairman and chief executive of venture-capital firm Sinovation Ventures and author of “AI Superpowers: China, Silicon Valley and the New World Order,” maintains that AI “will wipe out a huge portion of work as we’ve known it.” He hit on that theme when he spoke at The Wall Street Journal’s virtual CIO Network summit.

Artificial intelligence (AI) (i.e., robots), according to Dr. Lee, can be used for recruiting…We can have a lot of résumés coming in, and we want to match those résumés with job descriptions and route them to the right managers. If you’re thinking about AI computer and video interaction, there are products you can deploy to screen candidates. For example, AI can have a conversation with the person, via videoconference. And then AI would grade candidates based on their answers to preprogrammed questions, as well as their micro-expressions and facial expressions, to assess whether they possess the right IQ and EQ (emotional intelligence) for a particular job.

Excerpts from Jared Council, AI’s Impact on Businesses—and Jobs, WSJ, Mar. 8, 2021

A Worldwide Web that Kills with Success

Doubts are growing about the satellites, warships and other big pieces of hardware involved in the command and control of America’s military might. For the past couple of decades the country’s generals and admirals have focused their attention on defeating various forms of irregular warfare. For this, these castles in the sky and at sea have worked well. In the meantime, however, America’s rivals have been upgrading their regular forces—including weapons that can destroy such nodes of power. Both China and Russia have successfully blown up orbiting satellites. And both have developed, or are developing, sophisticated long-range anti-aircraft and anti-ship missiles.

As a result, America is trying to devise a different approach to C2, as command and control is known in military jargon. The Department of Defense has dubbed this idea “Joint All-Domain Command and Control”, or JADC2. It aims to eliminate vulnerable nodes in the system (e.g., satellites) by multiplying the number of peer-to-peer data links that connect pieces of military hardware directly to one another, rather than via a control center that might be eliminated by a single, well-aimed missile.

The goal, officials say, is to create a network that links “every sensor and every shooter”. When complete, this will encompass sensors as small as soldiers’ night-vision gear and sonar buoys drifting at sea, and shooters as potent as ground-based artillery and aerial drones armed with Hellfire missiles.

One likely beneficiary of the JADC2 approach is Anduril Industries, a Californian firm…Its products include small spy helicopter drones; radar, infrared and optical systems constructed as solar-powered towers; and paperback-sized ground sensors that can be disguised as rocks.

Sensors come in still-more-diverse forms than Anduril’s, though. An autonomous doglike robot made by Ghost Robotics of Philadelphia offers a hint of things to come. In addition to infrared and video systems, this quadruped, dubbed V60 Q-UGV, can be equipped with acoustic sensors (to recognise, among other things, animal and human footsteps), a millimetre-wave scanner (to see through walls) and “sniffers” that identify radiation, chemicals and electromagnetic signals. Thanks to navigation systems developed for self-driving cars, V60 Q-UGV can scamper across rough terrain, climb stairs and hide from people. In a test by the air force this robot was able to spot a mobile missile launcher and pass its location on directly to an artillery team…

Applying Artificial Intelligence (AI) to more C2 processes should cut the time required to hit a target. In a demonstration in September 2020, army artillery controlled by AI and fed instructions by air-force sensors shot down a cruise missile in a response described as “blistering”…

There are, however, numerous obstacles to the success of all this. For a start, developing unhackable software for the purpose will be hard. Legions of machines containing proprietary and classified technologies, new and old, will have to be connected seamlessly, often without adding antennae or other equipment that would spoil their stealthiness…America’s technologists must, then, link the country’s military equipment into a “kill web” so robust that attempts to cripple it will amount to “trying to pop a balloon with one finger”, as Timothy Grayson, head of strategic technologies at DARPA, the defense department’s main research agency, puts it…

Excerpts from The future of armed conflict: Warfare’s worldwide web, Economist,  Jan. 9, 2021

Satellites Shed Light on Modern Slavery in Fishing

While forced labor, a form of modern slavery, in the world’s fishing fleet has been widely documented, its extent remains unknown. No methods previously existed for remotely identifying individual fishing vessels potentially engaged in these abuses on a global scale. By combining expertise from human rights practitioners and satellite vessel monitoring data, scientists have shown in a recent study that vessels reported to use forced labor behave in systematically different ways from other vessels. Scientists used machine learning to identify high-risk vessels from among 16,000 industrial longliner, squid jigger, and trawler fishing vessels.

The study concluded that between 14% and 26% of vessels were high-risk. It also revealed patterns of where these vessels fished and which ports they visited. Between 57,000 and 100,000 individuals worked on these vessels, many of whom may have been forced labor victims. This information provides unprecedented opportunities for novel interventions to combat this humanitarian tragedy….

The study found, inter alia, that longliners and trawlers using forced labor travel further from port and shore, fish more hours per day than other vessels, and have fewer voyages and longer voyage durations… Taiwanese longliners, Chinese squid jiggers, and Chinese, Japanese, and South Korean longliners are consistently the five fisheries with the largest number of unique high-risk vessels. This pattern is consistent with reports on the abuses seen within distant water fleets that receive little legal oversight and often use marginalized migrant workers.

Excerpts from Gavin G. McDonald et al., Satellites can reveal global extent of forced labor in the world’s fishing fleet, Dec. 21, 2020
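The study itself trained a machine-learning model on satellite tracking data; purely as an illustration, the behavioral markers it names (distance from port, fishing hours per day, voyage length) can be turned into a toy rule-based score. Every vessel, feature value and threshold below is invented, not taken from the study:

```python
# Invented per-vessel behavioral features of the kind derivable from satellite tracks.
vessels = [
    {"name": "A", "max_km_from_port": 3200, "fishing_hours_per_day": 18, "voyage_days": 290},
    {"name": "B", "max_km_from_port": 400,  "fishing_hours_per_day": 9,  "voyage_days": 12},
]

def risk_score(v):
    """Toy score: each behavior the study associates with forced labor
    (operating far from port, long fishing days, long voyages) adds a point."""
    return (
        (v["max_km_from_port"] > 2000)
        + (v["fishing_hours_per_day"] > 14)
        + (v["voyage_days"] > 100)
    )

# Flag vessels showing at least two of the three markers.
high_risk = [v["name"] for v in vessels if risk_score(v) >= 2]
print(high_risk)  # ['A']
```

The real model learns such thresholds from labeled examples rather than hard-coding them, but the intuition is the same: vessels using forced labor leave a distinctive behavioral signature in their tracks.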

Algorithms as Weapons: Tracking, Targeting Nuclear Weapons

New and unproved technologies—this time computer systems capable of performing superhuman tasks using machine learning and other forms of artificial intelligence (AI)—threaten to destabilise the global “strategic balance”, by seeming to offer ways to launch a knockout blow against a nuclear-armed adversary, without triggering an all-out war.

A report issued in November by America’s National Security Commission on Artificial Intelligence, a body created by Congress and chaired by Eric Schmidt, a former boss of Google, and Robert Work, who was deputy defence secretary from 2014-17, ponders how AI systems may reshape global balances of power, as dramatically as electricity changed warfare and society in the 19th century. Notably, it focuses on the ability of AI to “find the needle in the haystack”, by spotting patterns and anomalies in vast pools of data…In a military context, it may one day find the stealthiest nuclear-armed submarines, wherever they lurk. The commission is blunt. Nuclear deterrence could be undermined if AI-equipped systems succeed in tracking and targeting previously invulnerable military assets. That in turn could increase incentives for states, in a crisis, to launch a devastating pre-emptive strike. China’s rise as an AI power represents the most complex strategic challenge that America faces, the commission adds, because the two rivals’ tech sectors are so entangled by commercial, academic and investment ties.

Some Chinese officials sound gung-ho about AI as a path to prosperity and development, with few qualms about privacy or lost jobs. Still, other Chinese fret about AI that might put winning a war ahead of global stability, like some game-playing doomsday machine. Chinese officials have studied initiatives such as the “Digital Geneva Convention” drafted by Microsoft, a technology giant. This would require states to forswear cyber-attacks on such critical infrastructure as power grids, hospitals and international financial systems.  AI would make it easier to locate and exploit vulnerabilities in these…

One obstacle is physical. Warheads or missile defences can be counted by weapons inspectors. In contrast, rival powers cannot safely show off their most potent algorithms, or even describe AI capabilities in a verifiable way… Westerners worry especially about so-called “black box” algorithms, powerful systems that generate seemingly accurate results but whose reasoning is a mystery even to their designers.

Excerpts from Chaguan: The Digital Divide, Economist, Jan 18, 2019

How to Fool your Enemy: Artificial Intelligence in Conflict

The contest between China and America, the world’s two superpowers, has many dimensions… One of the most alarming and least understood is the race towards artificial-intelligence-enabled warfare. Both countries are investing large sums in militarised artificial intelligence (AI), from autonomous robots to software that gives generals rapid tactical advice in the heat of battle… As Jack Shanahan, a general who is the Pentagon’s point man for AI, put it last month, “What I don’t want to see is a future where our potential adversaries have a fully AI-enabled force and we do not.”

AI-enabled weapons may offer superhuman speed and precision. In order to gain a military advantage, the temptation for armies will be to allow them not only to recommend decisions but also to give orders. That could have worrying consequences. Able to think faster than humans, an AI-enabled command system might cue up missile strikes on aircraft carriers and airbases at a pace that leaves no time for diplomacy and in ways that are not fully understood by its operators. On top of that, AI systems can be hacked, and tricked with manipulated data.

AI in war might aid surprise attacks or confound them, and the death toll could range from none to millions. Unlike missile silos, software cannot be spied on from satellites. And whereas warheads can be inspected by enemies without reducing their potency, showing the outside world an algorithm could compromise its effectiveness. The incentive may be for both sides to mislead the other. “Adversaries’ ignorance of AI-developed configurations will become a strategic advantage,” suggests Henry Kissinger, who led America’s cold-war arms-control efforts with the Soviet Union… Amid a confrontation between the world’s two big powers, the temptation will be to cut corners for temporary advantage.

Excerpts from Mind control: Artificial intelligence and war, Economist,  Sept. 7, 2019

Example of the Use of AI in Warfare: The Real-time Adversarial Intelligence and Decision-making (RAID) program under the auspices of the Defense Advanced Research Projects Agency’s (DARPA) Information Exploitation Office (IXO) focuses on the challenge of anticipating enemy actions in a military operation. In the US Air Force community, the term predictive battlespace awareness refers to capabilities that would help the commander and staff to characterize and predict likely enemy courses of action… Today’s practices of military intelligence and decision-making do include a number of processes specifically aimed at predicting enemy actions. Currently, these processes are largely manual as well as mental, and do not involve any significant use of technical means. Even when computerized wargaming is used (albeit rarely in field conditions), it relies either on human guidance of the simulated enemy units or on simple reactive behaviors of such simulated units; in neither case is there a computerized prediction of intelligent and forward-looking enemy actions…

[The deception reasoning of the adversary is very important in this context.]  Deception reasoning refers to an important aspect of predicting enemy actions: the fact that military operations are historically, crucially dependent on the ability to use various forms of concealment and deception for friendly purposes while detecting and counteracting the enemy’s concealment and deception. Therefore, adversarial reasoning must include deception reasoning.

The RAID Program will develop a real-time adversarial predictive analysis tool that operates as an automated enemy predictor providing a continuously updated picture of probable enemy actions in tactical ground operations. The RAID Program will strive to: prove that adversarial reasoning can be automated; prove that automated adversarial reasoning can include deception….
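The kind of automated adversarial prediction RAID describes can be illustrated with a minimal game-theoretic sketch. Everything below is hypothetical (the actions, payoffs and the optional "deception bonus" are invented for illustration, not DARPA's actual model): the predictor assumes the enemy picks the action that maximises its worst-case payoff against our possible defences.

```python
# Toy sketch of adversarial prediction with a deception term
# (hypothetical payoffs; not RAID's actual algorithm).

def predict_enemy_action(payoffs, deception_bonus=None):
    """payoffs[action][defence] = enemy payoff for that action vs our defence.

    Predicts the enemy's maximin action: the one with the best
    worst-case payoff, optionally adjusted by a deception bonus.
    """
    best_action, best_value = None, float("-inf")
    for action, row in payoffs.items():
        value = min(row.values())  # worst case over our defences
        if deception_bonus:
            value += deception_bonus.get(action, 0.0)
        if value > best_value:
            best_action, best_value = action, value
    return best_action

payoffs = {
    "frontal_assault": {"defend_north": 2, "defend_south": 5},
    "flank_south":     {"defend_north": 6, "defend_south": 1},
    "feint_north":     {"defend_north": 3, "defend_south": 3},
}
# The feint is individually weaker but robust to either defence.
print(predict_enemy_action(payoffs))  # feint_north
```

A real system would, as the excerpt notes, also have to reason about the enemy reasoning about *our* predictions; this sketch only captures the first level of that loop.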

Excerpts from Real-time Adversarial Intelligence and Decision-making (RAID), US Federal Grants

Dodging the Camera: How to Beat the Surveillance State at Its Own Game

Powered by advances in artificial intelligence (AI), face-recognition systems are spreading like knotweed. Facebook, a social network, uses the technology to label people in uploaded photographs. Modern smartphones can be unlocked with it… America’s Department of Homeland Security reckons face recognition will scrutinise 97% of outbound airline passengers by 2023. Networks of face-recognition cameras are part of the police state China has built in Xinjiang, in the country’s far west. And a number of British police forces have tested the technology as a tool of mass surveillance in trials designed to spot criminals on the street.  A backlash, though, is brewing.

Refuseniks can also take matters into their own hands by trying to hide their faces from the cameras or, as has happened recently during protests in Hong Kong, by pointing hand-held lasers at CCTV cameras to dazzle them. Meanwhile, a small but growing group of privacy campaigners and academics are looking at ways to subvert the underlying technology directly…

Laser Pointers Used to Blind CCTV cameras during the Hong Kong Protests 2019

In 2010… an American researcher and artist named Adam Harvey created “cv [computer vision] Dazzle”, a style of make-up designed to fool face recognisers. It uses bright colours, high contrast, graded shading and asymmetric stylings to confound an algorithm’s assumptions about what a face looks like. To a human being, the result is still clearly a face. But a computer—or, at least, the specific algorithm Mr Harvey was aiming at—is baffled….

Modern Make-Up to Hide from CCTV cameras

HyperFace is a newer project of Mr Harvey’s. Where cv Dazzle aims to alter faces, HyperFace aims to hide them among dozens of fakes. It uses blocky, semi-abstract and comparatively innocent-looking patterns that are designed to appeal as strongly as possible to face classifiers. The idea is to disguise the real thing among a sea of false positives. Clothes with the pattern, which features lines and sets of dark spots vaguely reminiscent of mouths and pairs of eyes, are available…
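The false-positive flooding idea can be sketched in a few lines. The scores below are invented for illustration (this is not a real face detector, and not Mr Harvey’s actual patterns): a naive detector reports every image patch scoring above a threshold, so decoy patches that score like faces bury the one real face among indistinguishable hits.

```python
# Toy illustration of hiding a face among false positives
# (hypothetical patch scores, not real detector output).

def detect_faces(patches, threshold=0.5):
    """Return the names of patches a naive detector would report as faces."""
    return [name for name, score in patches if score >= threshold]

patches = [
    ("real_face", 0.91),
    ("decoy_1", 0.84),
    ("decoy_2", 0.79),
    ("decoy_3", 0.88),
    ("background", 0.12),
]
hits = detect_faces(patches)
print(hits)       # the real face is just one of four equally plausible hits
print(len(hits))  # 4
```

The pattern designer’s job, on this view, is to make the decoys’ scores land above the detector’s threshold; the detector cannot tell which of its detections is the person.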

Hyperface Clothing for Camouflage

Even in China, says Mr Harvey, only a fraction of CCTV cameras collect pictures sharp enough for face recognition to work. Low-tech approaches can help, too. “Even small things like wearing turtlenecks, wearing sunglasses, looking at your phone [and therefore not at the cameras]—together these have some protective effect”.

Excerpts from As face-recognition technology spreads, so do ideas for subverting it: Fooling Big Brother,  Economist, Aug. 17, 2019

The Perfect Crystal Ball for Gray War

This activity, hostile action that falls short of violence but often precedes it, is sometimes referred to as gray zone warfare, the ‘zone’ being a sort of liminal state in between peace and war. The actors that work in it are difficult to identify and their aims hard to predict, by design…

Dubbed COMPASS, the new program will “leverage advanced artificial intelligence technologies, game theory, and modeling and estimation to both identify stimuli that yield the most information about an adversary’s intentions, and provide decision makers high-fidelity intelligence on how to respond—with positive and negative tradeoffs for each course of action,” according to a DARPA notice posted on March 14, 2018.
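The notion of “stimuli that yield the most information about an adversary’s intentions” has a standard formalisation: pick the probe whose expected response most reduces uncertainty over hypotheses about the adversary. The sketch below is hypothetical (the intents, responses and probabilities are invented, and COMPASS’s actual models are not public); it scores each candidate stimulus by expected information gain in bits.

```python
# Sketch of information-maximising probe selection
# (hypothetical numbers; not COMPASS's actual method).
import math

def entropy(dist):
    """Shannon entropy in bits of a probability distribution {outcome: p}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_info_gain(prior, likelihoods):
    """likelihoods[response][intent] = P(response | intent).

    Returns prior entropy minus expected posterior entropy over intents.
    """
    gain = entropy(prior)
    for response, lik in likelihoods.items():
        p_resp = sum(lik[i] * prior[i] for i in prior)
        if p_resp == 0:
            continue
        posterior = {i: lik[i] * prior[i] / p_resp for i in prior}
        gain -= p_resp * entropy(posterior)
    return gain

prior = {"probe": 0.5, "escalate": 0.5}
# Stimulus A: responses barely depend on intent; B: responses separate intents.
stim_a = {"back_off": {"probe": 0.5, "escalate": 0.4},
          "press_on": {"probe": 0.5, "escalate": 0.6}}
stim_b = {"back_off": {"probe": 0.9, "escalate": 0.2},
          "press_on": {"probe": 0.1, "escalate": 0.8}}
gain_a = expected_info_gain(prior, stim_a)
gain_b = expected_info_gain(prior, stim_b)
print("more informative stimulus:", "B" if gain_b > gain_a else "A")  # B
```

The decision-maker would then weigh that information value against the “positive and negative tradeoffs” of actually issuing the probe.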

Teaching software to understand and interpret human intention, a task sometimes called “plan recognition”, has advanced as quickly as the spread of computers and the internet, because all three are intimately linked.

From Amazon to Google to Facebook, the world’s top tech companies are pouring money into probabilistic modeling of user behavior, as part of a constant race to keep from losing users to sites that can better predict what they want. A user’s every click, “like,” and even period of inactivity adds to the companies’ almost unimaginably large data sets, and new machine learning and statistical techniques make it easier than ever to use the information to predict what a given user will do next on a given site.
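The simplest version of this kind of next-action prediction is a first-order Markov model over click logs. The sketch below uses invented session data (the page names and sessions are hypothetical): it counts observed transitions and predicts the most frequent follow-up to the user’s last action.

```python
# Minimal sketch of next-action prediction from click logs (toy data).
from collections import Counter, defaultdict

def train(sessions):
    """Count action -> next-action transitions across all sessions."""
    model = defaultdict(Counter)
    for session in sessions:
        for current, nxt in zip(session, session[1:]):
            model[current][nxt] += 1
    return model

def predict_next(model, last_action):
    """Return the most frequently observed follow-up, or None if unseen."""
    counts = model.get(last_action)
    return counts.most_common(1)[0][0] if counts else None

sessions = [
    ["home", "search", "product", "cart"],
    ["home", "search", "product", "reviews"],
    ["home", "product", "cart", "checkout"],
]
model = train(sessions)
print(predict_next(model, "cart"))  # checkout
```

Industrial systems replace the counts with learned models over far richer features, but the shape of the problem, estimating P(next action | history), is the same.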

But inferring a user’s next Amazon purchase (based on data that user has volunteered about previous choices, likes, etc.) is altogether different from predicting how an adversary intends to engage in political or unconventional warfare. So the COMPASS program seeks to use video, text, and other pieces of intelligence that are a lot harder to get than shopping-cart data…

Unlike in shopping, the analytical tricks that apply to one gray-zone adversary won’t work on another. “History has shown that no two [unconventional warfare] situations or solutions are identical, thus rendering cookie-cutter responses not only meaningless but also often counterproductive,” wrote Gen. Joseph Votel, who leads U.S. Central Command, in his seminal 2016 treatise on gray zone warfare.

Excerpts from The Pentagon Wants AI To Reveal Adversaries’ True Intention, www.govexec.com, Mar. 17, 2018

Biometrics Gone Wrong

Despite their huge potential, artificial intelligence and biometrics still very much need human input for accurate identification, according to the director of the Defense Advanced Research Projects Agency. Speaking at an Atlantic Council event, Arati Prabhakar said that while the best facial recognition systems out there are statistically better than most humans at image identification, when they’re wrong, “they are wrong in ways that no human would ever be wrong”…

“You want to embrace the power of these new technologies but be completely clear-eyed about what their limitations are so that they don’t mislead us,” Prabhakar said. That’s a stance humans must take with technology writ large, she said, explaining her hesitance to take for granted what many of her friends in Silicon Valley often assume — that more data is always a good thing. “More data could just mean that you have so much data that whatever hypothesis you have, you can find something that supports it,” Prabhakar said.

Excerpts from DARPA director cautious over AI, biometrics, Planet Biometrics, May 4, 2016