New Report Says Global Threat Actors May Use AI to Enhance Physical Attacks on US
Global threat actors of varying sophistication levels could gain the capabilities required to conceptualize and carry out nuclear or chemical attacks against U.S. interests, the Department of Homeland Security is warning in a new report.
The agency published guidelines on Monday that seek to secure critical infrastructure sectors and U.S. weapons of mass destruction from AI-enabled threats, directing organizations to identify methods for monitoring AI risks and implement risk management controls.
DHS said in a separate report published the same day that the U.S. suffers from known limitations in existing biological and chemical security regulations that “could increase the likelihood of both intentional and unintentional dangerous research outcomes that pose a risk to public health, economic security, or national security.”
Security researchers echoed the concerns expressed by DHS in multiple interviews with Information Security Media Group, warning that physical attacks against U.S. targets or national security interests could be exacerbated by using AI to help create new weapons or harmful materials. Analysts also warned that many critical infrastructure organizations are currently under-resourced and unprepared to rapidly improve their cybersecurity postures.
The agency’s newly launched AI Safety and Security Board “should prioritize releasing more specific, actionable implementation recommendations alongside the high-level guidelines” to help close the gaps between security regulatory limitations and enhanced cybersecurity, said Joseph Thacker, principal AI engineer for the security platform AppOmni.
“Given how fast AI is moving, critical infrastructure operators will need concrete technical guidance they can put into practice,” Thacker said. “The board and organization should provide hands-on tools like reference architectures, configuration checklists and code samples that translate the principles into real-world safeguards.”
Nuclear security is regressing in the U.S., according to a 2023 index published by the Nuclear Threat Initiative, a Washington-based nonprofit.
While the threats associated with emerging technologies in chemical, biological, radiological and nuclear attacks could be catastrophic, DHS also said “integration of AI into CBRN prevention, detection, response and mitigation capabilities could yield important or emergent benefits.”
The agency said AI tools could enhance international collaboration on key CBRN security efforts, including monitoring compliance with global agreements such as disarmament treaties and nonproliferation commitments.
Even with their enormous promise, AI tools are “fully capable” of advancing CBRN attacks against U.S. interests, according to Ken Dunham, director of cyberthreats for the threat hunting unit of the cloud security firm Qualys.
“AI is being positioned and developed by nation-state adversaries to attack and control critical infrastructure, influence people groups and more,” Dunham said. “When scaled against the computing power and strategic use and refinement of focused, targeted, powerful nations, an unknown potential for AI risk exists in this emergent threatscape.”
The guidelines also warn of failures in the design and implementation of AI tools that could lead to major security vulnerabilities, as well as a surge in attacks specifically targeting AI systems. DHS said common examples of design and implementation failures include “autonomy, brittleness and inscrutability.”
Threat actors have also increasingly deployed interruption-of-service attacks against AI algorithms, as well as adversarial manipulation campaigns and evasion tactics designed to avoid detection.