Analysts Urge Mandatory Guardrails on AI Agents, Identity and Privilege

Cybersecurity analysts and public sector security leaders are urging the Department of the Treasury to go well beyond principles and into enforceable controls as it prepares a series of cybersecurity and risk management guidance documents for artificial intelligence tools in the financial sector.
The forthcoming materials are being developed through the Artificial Intelligence Executive Oversight Group and are expected to address governance, data practices, fraud, transparency and digital identity, according to a Wednesday statement. Treasury said the framework will emphasize practical implementation over prescriptive requirements. Experts across identity security, AI threat intelligence and fraud detection warned that unless it mandates concrete guardrails around adversarial testing, AI inventory, model monitoring and real-time identity validation, the guidance risks becoming aspirational at a moment when AI-enabled attacks are escalating across financial services.
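To make one of those proposed guardrails concrete, the sketch below shows what a minimal AI inventory entry could look like in code. It is a hypothetical Python illustration: the InventoryRecord fields and the gaps() check are assumptions for the example, not language from Treasury's forthcoming guidance.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InventoryRecord:
    """One entry in a hypothetical enterprise AI inventory."""
    system_name: str
    owner: str                          # accountable business unit
    model_type: str                     # e.g. "LLM", "gradient-boosted trees"
    data_sources: list[str]             # training/inference data lineage
    third_party_dependencies: list[str]
    last_adversarial_test: date | None = None
    monitoring_enabled: bool = False

    def gaps(self) -> list[str]:
        """Flag controls of the kind analysts say the guidance should mandate."""
        issues = []
        if self.last_adversarial_test is None:
            issues.append("no adversarial test on record")
        if not self.monitoring_enabled:
            issues.append("model monitoring disabled")
        return issues

record = InventoryRecord(
    system_name="wire-fraud-scoring",
    owner="payments-risk",
    model_type="gradient-boosted trees",
    data_sources=["core-banking-ledger"],
    third_party_dependencies=["cloud-ml-platform"],
)
print(record.gaps())  # ['no adversarial test on record', 'model monitoring disabled']
```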
Analysts told Information Security Media Group that Treasury should prioritize specific risks such as data poisoning of training pipelines, insider-driven model manipulation, deepfake fraud, synthetic identity attacks, adversarial prompt injection and expanded third-party cloud exposure. Given the scope and potential risk posed by AI, “it’s important to leave no privilege behind,” said Kevin Greene, former program manager in the Department of Homeland Security’s Science and Technology Directorate.
Greene, who now serves as BeyondTrust’s public sector chief technology strategist, said that means “every AI agent, its identity and privilege scope must be identified and well-defined in order to protect sensitive data and financial operations.”
“Adopting agentic AI requires a fundamental shift in cybersecurity approaches and a deeper level of understanding to better manage the unintended consequences of autonomous AI,” Greene said. He added that the speed at which AI agents operate is currently outpacing the “ability to protect sensitive financial data – creating a window of exposure that traditional security controls were never designed to close.”
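A minimal sketch of the kind of per-agent identity and privilege scoping Greene describes appears below. The AgentIdentity structure, scope names and authorize() helper are hypothetical illustrations, not any vendor's API; the point is the deny-by-default check against an explicitly defined scope.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """A hypothetical per-agent identity with an explicit privilege scope."""
    agent_id: str
    allowed_actions: frozenset[str]   # least-privilege action allowlist
    data_domains: frozenset[str]      # data the agent may touch

def authorize(agent: AgentIdentity, action: str, domain: str) -> bool:
    """Deny by default: the agent acts only inside its defined scope."""
    return action in agent.allowed_actions and domain in agent.data_domains

recon_agent = AgentIdentity(
    agent_id="recon-bot-01",
    allowed_actions=frozenset({"read_statement"}),
    data_domains=frozenset({"retail-deposits"}),
)

print(authorize(recon_agent, "read_statement", "retail-deposits"))  # True
print(authorize(recon_agent, "initiate_wire", "retail-deposits"))   # False: outside scope
```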
Experts warned the biggest gaps in AI governance center on maintaining protection capabilities commensurate with the pace of AI adoption and building a concrete plan to manage AI risks. While the guidance has not yet been published, Treasury said the oversight group focused on governance, data practices, transparency, fraud and digital identity in an integrated manner – suggesting it will treat AI risk as an enterprise-wide security challenge.
Cyberattacks against the financial sector have increased in both frequency and sophistication in recent years, as state-linked actors, ransomware groups and financially motivated cybercrime organizations exploit expanded cloud adoption and AI-driven automation. Banks, payment processors and market infrastructure providers now face sustained campaigns involving data extortion, distributed denial-of-service disruptions, supply chain compromises and large-scale identity fraud (see: Scattered Spider Tied to Fresh Attacks on Financial Services).
Treasury Secretary Scott Bessent said in a statement the oversight group’s work “demonstrates that government and industry can come together to support secure AI adoption that increases the resilience of our financial system.” Officials also said the guidance is intended to help institutions – particularly small and mid-sized firms – deploy AI more securely while enhancing cyber defenses.
AI systems are increasingly embedded in transaction monitoring, identity verification, credit underwriting and trading models. Each deployment introduces potential vulnerabilities, from adversarial inputs and manipulation of training data to compromised third-party AI services and automated fraud at scale.
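As one illustration of a control aimed at training-data manipulation, the sketch below screens a candidate training batch for gross outliers before it reaches a pipeline, using a robust median-based score. The threshold and the single-feature setup are assumptions for the example; a production pipeline would pair such tripwires with full data-lineage and integrity controls.

```python
import statistics

def screen_training_batch(values: list[float], threshold: float = 6.0) -> list[int]:
    """Return indices of records whose value is a gross outlier relative to the
    batch, scored against the median absolute deviation (MAD) -- a crude
    tripwire for poisoning attempts, not a complete defense."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - med) / mad > threshold]

# Transaction amounts proposed for a fraud-model retraining run (synthetic data).
batch = [120.0, 85.5, 99.0, 110.25, 9_000_000.0, 101.0, 97.5]
print(screen_training_batch(batch))  # [4] -- flags the implausible record for review
```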
Experts praised the public-private structure of the oversight group, which brought together financial executives, federal and state regulators and other sector stakeholders, an approach intended to give the guidance cross-regulatory relevance across banking, securities and state oversight frameworks. Treasury said the initiative advances the President’s AI Action Plan by strengthening security of AI data, infrastructure and models while promoting best practices for secure deployment and global adoption of American AI systems.
The National Institute of Standards and Technology has developed an AI risk management framework to aid organizations in developing and deploying AI systems responsibly. That guidance is “the right foundation but needs sector-specific tailoring to be actionable,” said Daniel Wilbricht, president of the cybersecurity firm Optiv and ClearShark.
“The largest gaps in AI governance and security across financial institutions is the pace of AI adoption and investments made to date,” Wilbricht said. “The innovation is outpacing the strict guidelines and governance that is required across the board.”
