Why China Lags Behind in Artificial Intelligence

China is two or three years behind America in building foundation models of AI. There are three reasons for this underperformance. The first concerns data. A centralized autocracy should be able to marshal lots of it—the government was, for instance, able to hand over troves of surveillance information on Chinese citizens to firms such as SenseTime or Megvii that, with the help of China’s leading computer-vision labs, then used it to develop top-notch facial-recognition systems.

That advantage has proved less formidable in the context of generative AI, because foundation models are trained on the voluminous unstructured data of the web. American model-builders benefit from the fact that 56% of all websites are in English, whereas just 1.5% are written in Chinese, according to data from w3Techs, an internet-research site. As Yiqin Fu of Stanford University points out, the Chinese interact with the internet primarily through mobile super-apps like WeChat and Weibo. These are “walled gardens”, so much of their content is not indexed on search engines. This makes that content harder for AI models to suck up. Lack of data may explain why Wu Dao 2.0, a model unveiled in 2021 by the Beijing Academy of Artificial Intelligence, a state-backed outfit, failed to make a splash despite its possibly being computationally more complex than GPT-4.

The second reason for China’s lackluster generative achievements has to do with hardware. In 2022 America imposed export controls on technology that might give China a leg-up in AI. These cover the powerful microprocessors used in the cloud-computing data centres where foundation models do their learning, and the chipmaking tools that could enable China to build such semiconductors on its own.

That hurt Chinese model-builders. An analysis of 26 big Chinese models by the Centre for the Governance of AI, a British think-tank, found that more than half depended on Nvidia, an American chip designer, for their processing power. Some reports suggest that SMIC, China’s biggest chipmaker, has produced prototypes just a generation or two behind TSMC, the Taiwanese industry leader that manufactures chips for Nvidia. But SMIC can probably mass-produce only chips which TSMC was churning out by the million three or four years ago.

Chinese AI firms are having trouble getting their hands on another American export: know-how. America remains a magnet for the world’s tech talent; two-thirds of AI experts in America who present papers at the main AI conference are foreign-born. Chinese engineers made up 27% of that select group in 2019. Many Chinese AI boffins studied or worked in America before bringing expertise back home. The covid-19 pandemic and rising Sino-American tensions are causing their numbers to dwindle. In the first half of 2022 America granted half as many visas to Chinese students as in the same period in 2019.

The triple shortage—of data, hardware and expertise—has been a hurdle for China. Whether it will hold Chinese AI ambitions back much longer is another matter.

Excerpts from Artificial Intelligence: Model Socialists, Economist, May 13, 2023, at 49

How Artificial Intelligence Can Help Produce Better Chemical Weapons

An international security conference convened by the Swiss Federal Institute for NBC (nuclear, biological and chemical) Protection, the Spiez Laboratory, explored how artificial intelligence (AI) technologies for drug discovery could be misused for de novo design of biochemical weapons. According to the researchers, discussion of the societal impacts of AI has principally focused on aspects such as safety, privacy, discrimination and potential criminal misuse, but not on national and international security. When we think of drug discovery, we normally do not consider the potential for misuse of the technology. We are not trained to consider it, and it is not even required for machine learning research.

According to the scientists, this should serve as a wake-up call for our colleagues in the ‘AI in drug discovery’ community. Some expertise in chemistry or toxicology is still required to generate toxic substances or biological agents that can cause significant harm. But when those fields intersect with machine learning models, the technical threshold drops dramatically: all that is needed is the ability to code and to understand the output of the models themselves. Open-source machine learning software is the primary route for learning and creating new models like ours, and toxicity datasets that provide a baseline model for predictions for a range of targets related to human health are readily available.

The genie is out of the medicine bottle when it comes to repurposing our machine learning. We must now ask: what are the implications? Our own commercial tools, as well as open-source software tools and many datasets that populate public databases, are available with no oversight. If the threat of harm, or actual harm, occurs with ties back to machine learning, what impact will this have on how this technology is perceived? Will hype in the press on AI-designed drugs suddenly flip to concern about AI-designed toxins, public shaming and decreased investment in these technologies? As a field, we should open a conversation on this topic. The reputational risk is substantial: it only takes one bad apple, such as an adversarial state or other actor looking for a technological edge, to cause actual harm by taking what we have vaguely described to the next logical step. How do we prevent this? Can we lock away all the tools and throw away the key? Do we monitor software downloads or restrict sales to certain groups?

Excerpts from Fabio Urbina et al., Dual use of artificial-intelligence-powered drug discovery, Nature Machine Intelligence (2022)

A Worldwide Web that Kills with Success

Doubts are growing about the satellites, warships and other big pieces of hardware involved in the command and control of America’s military might. For the past couple of decades the country’s generals and admirals have focused their attention on defeating various forms of irregular warfare. For this, these castles in the sky and at sea have worked well. In the meantime, however, America’s rivals have been upgrading their regular forces—including weapons that can destroy such nodes of power. Both China and Russia have successfully blown up orbiting satellites. And both have developed, or are developing, sophisticated long-range anti-aircraft and anti-ship missiles.

As a result, America is trying to devise a different approach to C2, as command and control is known in military jargon. The Department of Defense has dubbed this idea “Joint All-Domain Command and Control”, or JADC2. It aims to eliminate vulnerable nodes in the system (e.g., satellites) by multiplying the number of peer-to-peer data links that connect pieces of military hardware directly to one another, rather than via a control center that might be eliminated by a single, well-aimed missile.
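The resilience argument behind JADC2 can be illustrated with a toy sketch. The code below is hypothetical (it models no real military software): it compares a hub-and-spoke network, in which every unit communicates through a single control center, against a fully meshed peer-to-peer network, and shows how each fares when that center is destroyed. The node names are invented for illustration.

```python
# Hypothetical illustration of the JADC2 resilience argument: a star
# topology fragments when its hub is destroyed, while a peer-to-peer
# mesh keeps every surviving unit connected.

def reachable(links, start, destroyed):
    """Return the set of nodes reachable from `start`, skipping destroyed nodes."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen or node in destroyed:
            continue
        seen.add(node)
        stack.extend(links.get(node, ()))
    return seen

units = ["sensor1", "sensor2", "shooter1", "shooter2"]

# Hub-and-spoke: every unit talks only through one control center.
star = {"hub": set(units)}
for u in units:
    star[u] = {"hub"}

# Peer-to-peer mesh: every unit linked directly to every other unit.
mesh = {a: {b for b in units if b != a} for a in units}

# Destroy the control center: the star collapses to isolated nodes,
# while the mesh still connects all four units.
print(len(reachable(star, "sensor1", {"hub"})))  # only sensor1 itself
print(len(reachable(mesh, "sensor1", set())))    # all four units
```

Multiplying direct links trades the single catastrophic failure point of the star for redundancy, which is the "pop a balloon with one finger" property quoted later in the article.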

The goal, officials say, is to create a network that links “every sensor and every shooter”. When complete, this will encompass sensors as small as soldiers’ night-vision gear and sonar buoys drifting at sea, and shooters as potent as ground-based artillery and aerial drones armed with Hellfire missiles.

One likely beneficiary of the JADC2 approach is Anduril Industries, a Californian firm… Its products include small spy helicopter drones; radar, infrared and optical systems constructed as solar-powered towers; and paperback-sized ground sensors that can be disguised as rocks.

Sensors come in still-more-diverse forms than Anduril’s, though. An autonomous doglike robot made by Ghost Robotics of Philadelphia offers a hint of things to come. In addition to infrared and video systems, this quadruped, dubbed V60 Q-UGV, can be equipped with acoustic sensors (to recognise, among other things, animal and human footsteps), a millimetre-wave scanner (to see through walls) and “sniffers” that identify radiation, chemicals and electromagnetic signals. Thanks to navigation systems developed for self-driving cars, V60 Q-UGV can scamper across rough terrain, climb stairs and hide from people. In a test by the air force this robot was able to spot a mobile missile launcher and pass its location on directly to an artillery team…

Applying Artificial Intelligence (AI) to more C2 processes should cut the time required to hit a target. In a demonstration in September 2020, army artillery controlled by AI and fed instructions by air-force sensors shot down a cruise missile in a response described as “blistering”…

There are, however, numerous obstacles to the success of all this. For a start, developing unhackable software for the purpose will be hard. Legions of machines containing proprietary and classified technologies, new and old, will have to be connected seamlessly, often without adding antennae or other equipment that would spoil their stealthiness…America’s technologists must, then, link the country’s military equipment into a “kill web” so robust that attempts to cripple it will amount to “trying to pop a balloon with one finger”, as Timothy Grayson, head of strategic technologies at DARPA, the defense department’s main research agency, puts it…

Excerpts from The future of armed conflict: Warfare’s worldwide web, Economist, Jan. 9, 2021