How Artificial Intelligence Can Help Produce Better Chemical Weapons

An international security conference convened by the Swiss Federal Institute for NBC (nuclear, biological and chemical) Protection, Spiez Laboratory, explored how artificial intelligence (AI) technologies for drug discovery could be misused for the de novo design of biochemical weapons. According to the researchers, discussion of the societal impacts of AI has principally focused on aspects such as safety, privacy, discrimination and potential criminal misuse, but not on national and international security. When we think of drug discovery, we normally do not consider technology misuse potential. We are not trained to consider it, and doing so is not even required for machine learning research.

According to the scientists, this should serve as a wake-up call for our colleagues in the ‘AI in drug discovery’ community. Generating toxic substances or biological agents that can cause significant harm still requires some expertise in chemistry or toxicology, but when those fields intersect with machine learning models, all that is needed is the ability to code and to understand the models’ output, which dramatically lowers the technical threshold. Open-source machine learning software is the primary route for learning and creating new models like ours, and toxicity datasets that provide a baseline model for predictions against a range of targets related to human health are readily available.
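To make the accessibility point concrete, here is a minimal sketch (ours, not the paper’s code) of the kind of baseline toxicity model the authors describe, built entirely from open-source components: RDKit for molecular fingerprints and scikit-learn for the classifier. The file name toxicity_data.csv and its smiles/toxic columns are hypothetical stand-ins for one of the publicly downloadable toxicity datasets.

```python
# Minimal sketch: a baseline toxicity classifier from open-source parts.
# "toxicity_data.csv" (columns: "smiles", "toxic") is a hypothetical
# stand-in for any public toxicity dataset.
import numpy as np
import pandas as pd
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def featurize(smiles):
    """Turn a SMILES string into a 2048-bit Morgan fingerprint vector."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None  # skip unparseable structures
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    arr = np.zeros((2048,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

df = pd.read_csv("toxicity_data.csv")  # hypothetical file name
feats = [featurize(s) for s in df["smiles"]]
keep = [f is not None for f in feats]
X = np.stack([f for f in feats if f is not None])
y = df.loc[keep, "toxic"].to_numpy()

# A random forest on fingerprints is the classic low-effort baseline.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("test ROC-AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

Nothing in this sketch is specialized; near-identical pipelines appear in countless cheminformatics tutorials, which is precisely the authors’ point about how low the barrier has become.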

The genie is out of the medicine bottle when it comes to repurposing our machine learning models. We must now ask: what are the implications? Our own commercial tools, as well as open-source software tools and the many datasets that populate public databases, are available with no oversight. If threatened or actual harm is ever traced back to machine learning, what impact will that have on how the technology is perceived? Will press hype about AI-designed drugs suddenly flip to concern about AI-designed toxins, public shaming and decreased investment in these technologies? As a field, we should open a conversation on this topic. The reputational risk is substantial: it only takes one bad apple, such as an adversarial state or another actor looking for a technological edge, to cause actual harm by taking what we have vaguely described to the next logical step. How do we prevent this? Can we lock away all the tools and throw away the key? Do we monitor software downloads or restrict sales to certain groups?

Excerpts from Fabio Urbina et al., ‘Dual use of artificial-intelligence-powered drug discovery’, Nature Machine Intelligence 4, 189–191 (2022)
