
Podcast: Lala Qadir on “Biosecurity”

One of the most dangerous threats facing not just the US but the world is the potential weaponization of biological agents. In our latest podcast from the 30th Annual National Security Law Conference, this challenge is addressed by my brilliant friend of many years, Ms. Lala Qadir. She is currently Senior Director for Technology Security and AI Policy at Microsoft (but is speaking here in her personal capacity) and was formerly Chief of Staff to the National Security Division at the White House Office of Science and Technology Policy.

In her presentation on “Biosecurity,” Lala not only outlines the threats for us, but also sets out questions scholars need to address. She is especially focused on the potential convergence of artificial intelligence and biotechnology, which has exciting positive possibilities but also great risks that must be mitigated.

This presentation is so rich that it really does defy summarization. Below are just a few excerpts from her presentation but, really, this is one that you’ll very much want to listen to in its entirety.

Democratization of risk

Lala speaks about some of the unique risks biological threats present:

“As general purpose models become more sophisticated, they enable individuals with much less training, much less capability and knowledge to essentially engage at a higher level.”

“So before you had to worry about Dr. Evil doing something, you had a PhD-level background in biology or cellular metabolism or neurology. And today, you can have a Mr. or Mrs. Evil. You don’t have to have the PhD. You get significant uplift from various AI-based large language models.”

“The recent system cards, in particular, from OpenAI and Anthropic, speak to this. And Anthropic has now publicly come out and said in their most recent system card that they believe that the next level that they will be publishing will be at an ASL three level, which is one of the ways that they gauge risk for artificial intelligence systems, which means that it will provide significant uplift of dual-use technical development.”

“Historically, when we think about biorisk, we think about state-sponsored biorisk. Today, however, we must realize that it could also be foreign terrorists to domestic violent extremists who are able to increasingly achieve the tools and the knowledge to leverage this information and do harm at a mass scale.”

Different from other threats

She also differentiates this threat from other weapons of mass destruction.

“The ability to manipulate a pathogen to increase virulence or pathogenicity in addition to transmissibility does increase a unique scaling risk that conventional weapons do not have. So if you think about it in the context of CBW and nuclear risk, nuclear risk, chemical risk is generally localized to a particular location. You can have some second and third order effects with nuclear weapons. But generally speaking, it can be contained. And there’s a direction of travel depending on the meteorological conditions.”

“Biological events, however, are borderless. They are diffuse. And they can also continue to mutate naturally in the wild, creating unintended consequences. So that is why I think the biology dimension of some of these weapons of mass destruction risk is very different from the nuclear and chemical risks.”

“And then lastly, ubiquity. So where other domains, we can implement pretty strict controls. For example, in nuclear dimension, you need some pretty significant material investments. You have to have infrastructure. You have to have very specialized equipment, processes. With biology, you don’t have to have as much.”

“Every living organism contains the basic building blocks that could potentially be manipulated, which is creating unique challenges for containment and control that doesn’t necessarily exist with other types of technology, like, nuclear materials.”

“So in light of these four considerations, while there are extraordinary gains to be had at the intersection of AI and biology, there are also real risks that unmitigated, could have very grave, consequential impact. In this duality, consider it innovation paired with vulnerability is at the heart of our national security concerns, which brings us to parts two and three.”

Her concluding thoughts:

“In conclusion, I’ll note that the convergence of AI and biotechnology, I think, is very exciting. It creates lots of opportunities with incredible positive aspects. And if we get it right, if we’re able to manage the risk, this will be a boundless opportunity to really transform the way that we interact with the world around us and advance solutions for the biggest problems that are facing humanity. But we must mitigate these risks.”

“Each of the areas discussed today represents a critical frontier in legal scholarship where I think new thinking and fresh approaches are very much needed. The rapid advancement of technology demands equally rapid evolution in our legal thinking.”

“Therefore, our response must be as sophisticated as the technologies we seek to govern, while remaining grounded in the constitutional principles and values that define our democracy. The stakes could not be higher. And adding AI to the equation dramatically increases both the opportunities and risks.”

“Our response to these challenges will define the future of national security law in the 21st century. So it’s incumbent upon legal scholars and practitioners, such as yourself, to engage with industry, policymakers, and researchers, to think creatively, to think aggressively in crafting solutions that will not only counter emerging threats, but also safeguard the principles that are the bedrock of our society.”

These are just a few brief snapshots, but really, listen to the whole presentation. Among other things, the Q&A for this session was especially interesting. Again, you can find the video here.

The views expressed by guest speakers do not necessarily reflect my views or those of the Center on Law, Ethics and National Security, or Duke University. (See also here).

Remember what we like to say on Lawfire®: gather the facts, examine the law, evaluate the arguments – and then decide for yourself!

Watch this space for additional podcasts from the conference. Some presentations, however, were for attendees only, so save the date to attend LENS 31, set for 27-28 Feb 2026.
