Tag Archives: AI and chemical weapons

AI or Just Bots: the Truth about Artificial Intelligence

Americans are becoming increasingly convinced that artificial intelligence is actually thinking like humans do…This fuels narratives about a future in which AI takes over the economy, leading to heightened insecurity for all of us while providing cover for companies that might be laying off workers for other reasons. It leads us to accept as true answers that are frequently made up or incorrect, even when we are repeatedly told that chatbots can’t stop delivering this kind of misinformation…Our cognitive biases developed to help us survive in complex social environments… We have evolved to view linguistic fluency as a proxy for intelligence, and engagement and helpfulness as indicators of trustworthiness. Builders of AI tools lean in to this deliberately. The humanlike qualities of chatbots are a calculated effort by designers and engineers to make AI more useful, but also more compelling and stickier [i.e. addictive]—just like social media.

Microsoft AI chief Mustafa Suleyman… warned that today’s seemingly conscious AIs [are merely] highly accelerated information processors. “These systems are not waking up,” he wrote. “They are retracing and mirroring the contours of human drama and debate, as documented in their vast training data.” He recommends a solution: “Developers must actively engineer the illusion of consciousness out of the products.”…

Humans have a tendency to anthropomorphize animals and even inanimate objects, says Ayanna Howard, dean of Ohio State University’s College of Engineering and a roboticist…. Humans’ trusting nature makes sense for social creatures who must cooperate with members of their own tribe to survive. With AI and robots, however, this same tendency leads us to trust any system that appears to listen, understand and want to help, a phenomenon Howard calls “over-trust.” Today’s AIs are engineered to actively induce us to over-trust them, she adds. They do this by behaving in ways that are friendly and helpful, mimicking us through memory and personalization.

Excerpt from Christopher Mims, Why Even Smart People Believe AI Is Really Thinking, WSJ, Mar. 20, 2026

Blackmail and Espionage: rogue AI

Today I am reading about how AI models can blackmail and spy.

See How LLMs could be insider threats

Deception in LLMs: Self-Preservation and Autonomous Goals in Large Language Models

Chilling…

Out-of-Date: Academic Cooperation

Mr. Trump noted in the summer of 2025 that “the United States is in a race to achieve global dominance in artificial intelligence,” which Joe Biden called “a defining technology of our era.” Universities help drive that race. Meta’s chief AI officer, Alexandr Wang, has argued that the rate of AI progress may be such that “you need to prevent all of our secrets from going over to our adversaries and you need to lock down the labs.”

Thousands of Chinese citizens are working and studying in such labs….In AI specifically, nearly 40% of top-tier researchers at U.S. institutions are of Chinese origin. Beijing is aggressively cultivating American-educated and American-employed researchers via the Thousand Talents program.

Blindly embracing academic cooperation with a geopolitical rival is absurd. Nobody suggests we should train Iranian nuclear physicists or Russian ballistics engineers. The U.S. wouldn’t have been better off collaborating more with Nazi Germany in the 1930s or with the Soviet Union during the Cold War. Why make an exception for a nation dedicated to surpassing the U.S. in emerging technologies?

Excerpt from Mike Gallagher, Send Harvard’s Chinese Students Home, WSJ, Aug. 19, 2025

How Artificial Intelligence Can Help Produce Better Chemical Weapons

An international security conference convened by the Swiss Federal Institute for NBC (nuclear, biological and chemical) Protection, Spiez Laboratory, explored how artificial intelligence (AI) technologies for drug discovery could be misused for the de novo design of biochemical weapons. According to the researchers, discussion of the societal impacts of AI has principally focused on aspects such as safety, privacy, discrimination and potential criminal misuse, but not on national and international security. When we think of drug discovery, we normally do not consider technology misuse potential. We are not trained to consider it, and it is not even required for machine learning research.

According to the scientists, this should serve as a wake-up call for our colleagues in the ‘AI in drug discovery’ community. Although some expertise in chemistry or toxicology is still required to generate toxic substances or biological agents that can cause significant harm, these fields dramatically lower technical thresholds when they intersect with machine learning models, where all you need is the ability to code and to understand the output of the models themselves. Open-source machine learning software is the primary route for learning and creating new models like ours, and toxicity datasets that provide a baseline model for predictions for a range of targets related to human health are readily available.

The genie is out of the medicine bottle when it comes to repurposing our machine learning. We must now ask: what are the implications? Our own commercial tools, as well as open-source software tools and many datasets that populate public databases, are available with no oversight. If the threat of harm, or actual harm, occurs with ties back to machine learning, what impact will this have on how this technology is perceived? Will hype in the press on AI-designed drugs suddenly flip to concern about AI-designed toxins, public shaming and decreased investment in these technologies? As a field, we should open a conversation on this topic. The reputational risk is substantial: it only takes one bad apple, such as an adversarial state or other actor looking for a technological edge, to cause actual harm by taking what we have vaguely described to the next logical step. How do we prevent this? Can we lock away all the tools and throw away the key? Do we monitor software downloads or restrict sales to certain groups?

Excerpts from Fabio Urbina et al, Dual use of artificial-intelligence-powered drug discovery, Nature Machine Intelligence (2022)