Federation of American Scientists Addresses AI Biosecurity Risks

The Federation of American Scientists (FAS) has published five policy recommendations to address the risks AI poses to the life sciences.

The Bio x AI policy recommendations highlight the need for oversight of AI biodesign tools, biosecurity screening of synthetic DNA, and guidance on biosecurity practices for automated laboratories.

FAS hopes these recommendations will help inform policy development on these topics, including the work of the National Security Commission on Emerging Biotechnology.

According to FAS, AI is likely to drive remarkable advances in our fundamental understanding of biological systems and to deliver significant benefits for health, agriculture, and the broader bioeconomy. But AI tools that are misused or developed irresponsibly can also pose biosecurity risks. The spectrum of biosecurity challenges associated with AI is complex and rapidly evolving, and assessing these issues requires diverse perspectives and expertise.

Significant effort has already gone into establishing frameworks to evaluate and mitigate risks from foundational AI models (large models built for a wide range of purposes), most notably through the recent Executive Order on Safe, Secure, and Trustworthy Development and Use of AI.

But targeted policies will also be needed to cover biodesign tools: more specialized AI models that are trained on biological data and provide insight into biological systems.

The recommendations

  • Oliver Crook, a postdoctoral researcher at the University of Oxford and a machine learning expert, calls on the US government to ensure the responsible development of biodesign tools through a framework of checklist-based, institutional oversight.
  • Richard Moulange, AI-Biosecurity Fellow at the Centre for Long-Term Resilience, and Sophie Rose, the Centre's Senior Biosecurity Policy Adviser, propose building on the Executive Order on AI with recommendations for developing benchmarks to evaluate risks.
  • Samuel Curtis, an AI Governance Associate at The Future Society, takes a more open-science approach, recommending the expansion of cloud-based computational infrastructure across borders to promote critical advances in biodesign tools while establishing norms for responsible development.
  • Scientist and biosecurity researcher Shrestha Rath focuses on the biosecurity screening of synthetic DNA, which the Executive Order on AI highlights as a key safeguard, and offers recommendations for strengthening screening methods to better prepare for AI-generated designs.
  • Biosecurity and bioweapons expert Tessa Alexanian calls for the US government to develop guidance on biosecurity practices for automated laboratories, sometimes called cloud labs, which can produce organisms and other biological agents.

FAS concludes that each of these recommendations presents an opportunity for the US government to reduce AI-related risks, strengthen the US position as a global leader in AI governance, and help ensure a safer, more secure future.