New Report Urges Public-Private Collaboration to Reduce Chemical, Nuclear AI Risks
Artificial intelligence is lowering barriers to entry for global threat actors seeking to create and deploy new chemical, biological and nuclear risks, warns the U.S. Department of Homeland Security.
The department on Monday published a report on reducing risks at the intersection of AI and chemical, biological, radiological and nuclear threats, after teasing a draft in April. The report calls on Congress, federal agencies and the private sector to be “adaptive and iterative” in their AI technology governance and “to respond to rapid or unpredictable technological advancements.”
“The revolutionary pace of change in the biotechnology, biomanufacturing, and AI sectors compounds existing regulatory challenges,” the report states. It recommends “continued interaction among industry, government, and academia.”
Current regulations and export controls fail to account for risks posed by potentially harmful nucleic acid sequences created with the assistance of AI, DHS said. Those sorts of gaps in current controls could allow threat actors to misuse AI to develop dangerous biological agents, evade detection and potentially cause widespread harm.
The report says the integration of AI into CBRN prevention, detection, response and mitigation efforts “could yield important or emergent benefits,” but it also says that current regulatory limitations hinder the government’s ability to properly oversee AI research, development and implementation. DHS also acknowledged in the report that the federal government “currently does not have an overarching legal or regulatory framework to comprehensively regulate or oversee AI,” and it warned that various AI governance approaches could result in compliance challenges for many developers.
Among its recommendations is encouraging AI developers to voluntarily release source code and AI model weights for models used in biological or chemical research.
The report says engagement with international stakeholders such as governments, global organizations and private entities “is needed to develop approaches, principles and frameworks to manage AI risks, unlock AI’s potential for good, and promote common approaches to shared challenges in light of worldwide development and spread of AI technologies.”
DHS Secretary Alejandro Mayorkas said in a statement accompanying the report that it was meant “to provide longer-term objectives around how to ensure safe, secure, and trustworthy development and use” of AI technologies.