Defensive acceleration needs execution, not just good intentions
We need a more specific strategy to accelerate AI-enabled biodefence. It might look a little bit like this ...
This is only my second blog post of the Asterisk Fellowship and I’m already a bit worried that every section I write needs a post of its own (and feedback from my first post on risks from agentic tool design sure corroborates that!).
If you would be excited to see more posts from me after my Fellowship ends, please heart-react, restack and subscribe to Securing the Interface. Thanks for your support!
“The rapid proliferation and increasing accessibility of these [AI] technologies will almost certainly enable less-sophisticated threat actors to conduct previously unattainable attacks.”
– Safety and security risks of generative artificial intelligence to 2025 (Annex B), UK Government, October 2023
AI is beginning to profoundly reshape the strategic landscape and pose significant national security risks. Some governments—especially the U.S., the UK, China and the UAE—are starting to recognise this. Most others are not.
Placing restrictions on AI advances is, at best, unpalatable and, at times, seemingly impossible. The answer that many turn to is defensive acceleration: deliberately using advanced AI to enhance safety and security and, thereby, negate the serious and growing risks that AI systems pose.
Build better defences with AI! Sovereign AI! AI for strategic advantage! Be AI-first!
What’s not to love?
As it happens, I do think that defensive acceleration—def/acc, for the cool kids in the back—is a very important strategy, perhaps the most important, for mitigating extreme AI risks. But I’m worried that, right now, no one seems to have gone beyond those catchy slogans to actually do the hard work of spelling out what it means and what we need to do.
The following is my attempt to operationalise what def/acc means and what the UK government should do to figure out how to accelerate its own defensive capabilities. For now, I’m mostly sticking to the biological and chemical defence space—it’s the one I know best and, courtesy of frontier models breaching the ‘High Risk’ / ASL-3 threshold, the risk domain that AI is transforming most rapidly.
How to do def/acc
Matt Clifford—who arguably has done more to advance the AI def/acc agenda in the UK than anyone else—summarises def/acc as follows: “I interpret def/acc very broadly: it’s about building the infrastructure for the future where our values win.”
Clifford’s article is great and I recommend reading it, but it doesn’t really get at what governments and funders should do to actually start to accelerate their defences. Here’s what I think def/acc actually involves:
1. Strategic area prioritisation: Focus on domains that actually matter for overall strategy, then identify technologies within those domains that likely favour defence over offence.
2. Differential tech development: Develop a subset of those technologies first, faster or further than the offence-biased or defence-neutral ones.
3. Execute for uplift: Actually make a plan, in advance, to accelerate defensive actors, and carry out that plan once the technology is developed.
For Step 1, I shall choose chemical and biological defence. The recent UK Strategic Defence Review (SDR) specifies that new R&D activity is crucial for “a small number of [prioritised] national security issues” and that, within those, chemical and biological defence is “the urgent and essential activity”. This is a strong signal and I recommend the SDR as the first port-of-call for Step 1 activities in a UK context, though I expect a deeper focus on AI is needed, too.
Now, we shall explore Steps 2 and 3 …
Adversarial uplift; or, why differential tech development is hard for bio
Biology is offence-dominant: the effort an attacker needs to cause harm with biology is much smaller than the effort needed to defend against such attacks. We’ve known this for a long time, but a recent paper from RAND is the best and most detailed investigation I have seen of how and why the biological domain favours attackers. The authors identify five high-level asymmetries in the offence–defence balance; four of these favour malicious use and thus make it much harder to differentially develop technology that favours defence:
Time to spread (which they term ‘kinetic considerations’): Biological pathogens can self-replicate and spread quickly while vaccines and therapeutics take time to manufacture and deploy widely
Financial burden: It’s much cheaper to make a single vial of something very bad than to produce billions of vaccine doses
Threat surface: Attackers need only choose one of more than 1,500 known human-infecting pathogens (not even counting future engineered ones) while defenders have to guard against the entire spectrum
Consequences of failure: Attackers can keep attacking until they are caught or deterred, but (at least for pandemic threats) defenders mustn’t let even one attack through
Luckily, defenders have one advantage:
Access to knowledge and materials: Defenders have much greater funding, infrastructure and the ability to (mostly) operate in the open, while attackers have to act covertly and avoid detection with fewer resources.
This global offence-bias means that building tech that is defence-neutral isn’t, by itself, enough. De novo protein design is a pinnacle of scientific and technological achievement and recent advances thoroughly earned their share of the Nobel Prize last year. But ultimately, protein design is defence-neutral: you can make proteins that bind to targets to inhibit disease and you can, about as easily, make proteins designed to exacerbate disease. It improves vaccine design and pathogen design … and then the asymmetries kick in. The pathogen can spread on its own while the vaccine cannot (until we get self-transmitting ones!); it’s cheaper to make and distribute a pathogen than its countermeasure; the attacker can keep designing and building until they succeed; and so on …
This means that—in the context of a rapidly accelerating field that is already tilted towards offence and a landscape of threat actors, state and non-state, that continue to show interest in developing biological weapons—defence-neutral tools are insufficient to improve our collective biological defences.
On the other hand, detection capabilities like microbial forensics and attribution platforms really are defence-dominant. The capability to classify whether biological sequences are engineered, identify their region-of-origin (and even the precise laboratory) and prove these facts to the level of rigour needed to satisfy a court is incredibly important for deterrence. Threat actors are likely more cautious about launching attacks if they know that nefarious activity can be traced right back to their doorstep and that retribution will swiftly follow. And these and similar detection capabilities provide no offensive advantage.
But how does one actually draw the distinction between these kinds of technologies before they’re even built? Jason Matheny—CEO of the RAND Corporation and former Director of the U.S. Intelligence Advanced Research Projects Activity—gives the answer on the Statecraft podcast from Santi Ruiz at the Institute for Progress. Building on the classic Heilmeier questions, he produced several ‘red-team’–focused questions one should ask before developing a technology. The first two are these:
“What’s your estimate about how long it would take a major nation competitor to weaponize this technology after they learn about it? What’s your estimate for a non-state terrorist group with resources like those of Al Qaeda in the first decade of the century?”
I wish that everyone building biotech would take time to consider these questions very, very carefully.
Stop ignoring execution
So, we’ve picked our strategic area (Step 1) and hopefully made some brilliant tactical choices about which defence-biased technologies to bet on (Step 2) (ignoring for now the very reasonable counterargument “Hmm, but I heard that central planning is difficult …”). We shall also merrily glide over the actual process of doing differential tech development. This is hard and it’s especially difficult to pull through capabilities from early prototype to scalable deployment (particularly in the UK with its persistent start-up to scale-up ‘Valley of Death’). But the UK and the U.S. have some of the world’s top scientists and companies, so I’m going to be unusually optimistic. Now we arrive at Step 3: Execute for uplift.
This Step is the one that I think people miss the most. Too often I read a paper whose abstract extols how its fancy AI–bio tool will “accelerate vaccine development” and “transform pandemic preparedness”. Wonderful, I hear you cry, we’re all super on board with this! But let’s just quickly check, shall we?
They’ve identified an important strategic area (preventing and responding to catastrophic biological threats) and (in theory) picked a defensive tool (though a fair few AI–bio tools are very relevant for misuse) ✅
They’ve clearly developed this technology enough to get a pre-print out and release it on GitHub ✅
But … has anyone checked that the vaccine developers need this capability? Is this tool really better than the thing they were already using? Have they actually told those defenders they mentioned that this capability is available? Can defenders integrate the tool into their workflow quickly, or does it still take a month because the GitHub repo is janky and the documentation is terrible?
A lot of “no”s to those last questions means it’s not real def/acc. The tool isn’t the defence. Releasing an AI model doesn’t magically guarantee that defenders are faster or more effective at actually preventing, detecting and responding to threats.
Making defensive acceleration go right means actually following through with technological development all the way to real-world defensive uplift. Businesses are much more likely to get this right because it’s only at the point of providing real value to the end-user that customers start paying. Governments and academic funders do not always benefit from this … helpful focus.
I appreciate that a lot of existing defensive work lacks a price signal. First, maybe try to get one? Fewer one-off PhD-student GitHub repos and more start-ups, please. But sometimes price signals aren’t there because capability development stays buried in the bowels of the national security enterprise for very good reasons.
In that case, I urge those developing and funding defensive capabilities to consider doing the following things in advance:
Verify that your end-users actually want your new capability
Give your end-users privileged access to prototype capabilities early in development
This gives you feedback on your tool while you are still developing it
It also lets those defenders learn how to use the tool effectively, so they get greater uplift from it
Measure how much your tool actually improves defensive capabilities
On that last point, I would love to see more defensive uplift randomised controlled trials (RCTs). Running an RCT to check how much more the latest LLM helps undergraduates in California do wet-lab biology is very in vogue at the moment. Why don’t we try doing this for expert biological defenders in government, academia and industry?
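To make “measure the uplift” concrete, here is a minimal sketch of what the headline analysis of such an RCT might look like, assuming the outcome is simply how long each expert takes to complete a standardised biodefence task with or without frontier-model access. The data, variable names and effect measure are illustrative assumptions, not drawn from any existing study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical outcomes: hours each expert took to complete a standardised
# biodefence task (e.g. drafting an assay protocol). Purely illustrative data.
control_hours = np.array([38.0, 42.5, 35.0, 47.0, 40.5, 44.0, 39.5, 41.0])  # no AI access
treated_hours = np.array([29.0, 33.5, 27.0, 36.0, 31.5, 30.0, 34.5, 28.5])  # frontier-model access

# Point estimate of uplift: mean reduction in task time with the tool.
uplift = control_hours.mean() - treated_hours.mean()

# Bootstrap a 95% confidence interval for that difference in means.
boot = []
for _ in range(10_000):
    c = rng.choice(control_hours, size=control_hours.size, replace=True)
    t = rng.choice(treated_hours, size=treated_hours.size, replace=True)
    boot.append(c.mean() - t.mean())
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"Estimated uplift: {uplift:.1f} hours saved per task (95% CI {lo:.1f} to {hi:.1f})")
```

The same skeleton works for whatever outcome a biodefence team actually cares about (time to detection, assay sensitivity, countermeasure candidates screened); the hard part is agreeing the task and getting the experts into the trial.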
The principle of more privilege
It seems backward to me that precisely those defensive actors who produce the most relevant countermeasure work might have the worst and least access to AI. Defenders should be the ones who get privileged access to newer and better capabilities first. To make def/acc work, this means:
Governments will need more flexible rules that give their experts front-of-the-queue access to state-of-the-art AI models
Industry will need to build secure systems to facilitate sharing their and others’ models
But government IT systems are famously slow and difficult to change. Defence procurement remains a very significant challenge, especially for the UK. And academia is still unusually sceptical of AI, even if there were space in academic grants for large inference compute allocations (ha!). Even the academics I know who are using AI are usually not on the Pro plans and missed out on reasoning models for months until GPT-5 brought reasoning to the masses. Think of the additional medical countermeasure advances and therapeutics start-ups we might have had this year if someone thought “Huh, maybe the nation’s top scientists and engineers, who work daily on pressing national defence priorities, should have frontier AI as standard?”. I fear the number is not small.
Pharmaceutical companies will get the newest AI (eventually, after months of internal-only use in the AI companies …) but market failures abound: I will never forget the example of GlaxoSmithKline spending twenty-four years developing a tuberculosis vaccine before essentially shelving it in favour of a shingles vaccine using the same technology.
Frontier AI companies are on the hook for this, too. Google DeepMind has done really excellent work getting advanced biological AI capabilities to trusted academic researchers through their Trusted Tester programmes. But there’s a lot more hard work to be done to get the equivalent access to those working on classified projects at national biodefence labs. This work has to be done, and fast. It is important to uplift scientists studying how to repurpose existing medicines and searching for new drug targets, but it’s also crucial that we uplift the people designing countermeasures for anthrax and building the world’s most advanced microbial forensics capability to detect engineered pandemics. I want to see stronger prioritisation of defensive uplift from both government and industry.
As for other frontier AI companies, I applaud efforts to get GPT-5 and Claude into the U.S. Government through OpenAI for Government and Claude for Gov. The UK should be pushing hard to get that access, too. But OpenAI announced in June that their new models—models that are now publicly available across the world—would likely meaningfully uplift novice actors in building biological weapons, if the safeguards were broken. And model safeguards can be broken—we’ve seen that time and time again. In the same post, they said that they would work to “grant vetted-institutions access to maximally helpful models so they can advance biological sciences”. I have many colleagues working directly on pandemic preparedness, on both technical and policy research, who have yet to benefit from any bespoke def/acc programme whatsoever from any of the major companies.
This is not good enough.
Developing unsafeguarded defence-neutral capabilities that pose extreme risk in an offence-biased environment flirts with catastrophe. Governments, companies and funders must work together to identify, prioritise and build those defensive technologies that can actually tilt the strategic landscape. Then they’ve got to get that tech directly into the hands of defenders and accelerate their vital work.
Thank you so much for reading, and thanks especially to Cassidy Nelson, Avital Morris and Jake Eaton for helpful feedback. Stay tuned for my final post next week where I shall suggest priority def/acc projects I’d like to see in the AI–bio space and outline specific actions that governments, industry, funders and technical researchers could take to advance the def/acc agenda.

