Banks Must Prepare for the Coming Deluge

Global crime intelligence agency Interpol added to warnings about a coming torrent of artificial intelligence-enabled fraud, estimating in an annual assessment that victim losses worldwide reached $442 billion last year.
Financial fraud is one of the top five global crime threats, alongside money laundering and drug offenses, according to Interpol.
“Enabled by artificial intelligence, low-cost digital tools and increased global criminal collaboration, we are witnessing the industrialization of fraud,” Interpol Secretary General Valdecy Urquiza said at the UN Global Fraud Summit in March.
Numbers published by Juniper Research reflect the scale of the problem: The industry analyst company forecasts that fraud will cost financial institutions $58.3 billion globally by 2030.
One of Interpol’s largest concerns is how artificial intelligence is being used to accelerate the execution of fraud and increase its effectiveness through deception. Chatbots and agentic AI are creating what Interpol calls a “force multiplier” in accelerating fraud around the world. AI-enhanced fraud is more than four times more profitable than traditional fraud methods, Interpol found.
With agentic AI, specific goals can be achieved and human-like decisions made autonomously with little or no human intervention.
“Digital technology and AI, in particular, have dramatically transformed social engineering techniques and victim profiling, enabling fraudsters to construct highly persuasive fraud environments,” Interpol wrote in its assessment.
“AI agents can autonomously plan and execute entire fraud schemes through reconnaissance of victims, harvesting of credentials, infiltration of systems, selection of high-value data, calculation of optimal ransom amounts based on financial analysis, and generation of psychologically tailored, visually alarming ransom notes,” it added.
Agentic AI executes fraud through a set of processes not dissimilar to the cyber kill chain and can undertake tasks on behalf of a human threat actor, such as mapping systems, identifying vulnerabilities and exfiltrating data. But most experts expect that agentic fraud will extend beyond cybersecurity, such as generating fake documents and applying for loans or services.
Stephen Topliss, vice president of fraud and identity at LexisNexis Risk Solutions, told Information Security Media Group that his company saw early signs of agentic traffic in 2025, and that it has rapidly increased in volume.
The growth in agentic traffic “suggests we are going to see both good customer agents and agentic threat traffic impacting financial institutions in 2026 and beyond,” he said.
“It is likely that criminals will embrace its capabilities more rapidly than the financial services industry simply because financial services have controls and regulations imposed on them that organized criminals and scam centers do not, so the threat is real,” he added.
Topliss predicted that financial services will be impacted in two ways: first, by an increase in the number and sophistication of scams targeting their customers, including deepfakes; and second, by higher volumes of attacks on financial institutions’ digital services.
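One property defenders can look for in agentic threat traffic is that it tends to be machine-paced: scripted agents interact with digital services at a far more regular cadence than human customers. The sketch below is a hypothetical illustration of that idea, not a method attributed to LexisNexis or any vendor; the function name, threshold and timing heuristic are all assumptions for demonstration.

```python
from statistics import mean, stdev

def looks_automated(request_times, cv_threshold=0.15):
    """Flag a session as possibly automated when its inter-request timing
    is unusually regular, i.e. the coefficient of variation (stdev/mean)
    of the gaps between requests falls below a threshold.
    request_times: monotonically increasing timestamps in seconds."""
    gaps = [b - a for a, b in zip(request_times, request_times[1:])]
    if len(gaps) < 3:
        return False  # too few requests to judge either way
    mu = mean(gaps)
    if mu == 0:
        return True  # instantaneous bursts are clearly machine-driven
    return stdev(gaps) / mu < cv_threshold

# Human-like browsing: irregular gaps between requests
human = [0.0, 1.2, 4.7, 5.1, 9.8, 11.0]
# Scripted agent: near-constant two-second cadence
bot = [0.0, 2.0, 4.01, 6.0, 8.02, 10.0]
```

Real bot-detection systems combine many more signals (device fingerprints, navigation paths, input biometrics); a single timing heuristic like this is easily evaded by adding random delays, which is why it is only one layer among many.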
Business Leaders Must Re-Skill to Combat the Evolving Fraud Landscape
While the risks of fraud accelerated by agentic AI are substantial, Mathieu Auger-Perreault, partner and national fraud risk consulting leader at EY, outlined actions the financial services sector should take to mitigate them. Organizations should establish a dynamic control environment that adapts in real time to emerging threats, including those powered by agentic AI, under an umbrella concept he calls fraud resilience.
Fraud resilience requires advanced analytics and AI-driven insight to monitor, detect and predict fraudulent behavior, he said. Strong change management and cross-functional collaboration skills are also important, he said, because implementing agentic AI requires breaking down silos between fraud, cybersecurity, compliance and business units.
EY argues that there are four ways organizations can deploy agentic AI to reduce the impact of fraud: autonomous detection, control optimization and strategy tuning, real-time behavioral risk assessment and automated red teaming.
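To make the "real-time behavioral risk assessment" item concrete, here is a minimal toy sketch of the underlying idea: score each incoming transaction by how far it deviates from the account's own recent behavior. The class name, window size and z-score approach are illustrative assumptions, not EY's or any bank's actual method; production systems use far richer features and models.

```python
from collections import defaultdict, deque
from statistics import mean, stdev

class BehavioralRiskScorer:
    """Toy real-time behavioral risk scorer: keeps a rolling window of each
    account's recent transaction amounts and scores a new amount by how many
    standard deviations it sits from that account's own norm."""

    def __init__(self, window=50):
        self.history = defaultdict(lambda: deque(maxlen=window))

    def score(self, account_id, amount):
        past = self.history[account_id]
        if len(past) >= 5:  # require a minimal baseline before scoring
            mu, sigma = mean(past), stdev(past)
            z = abs(amount - mu) / sigma if sigma > 0 else 0.0
        else:
            z = 0.0  # insufficient history: neutral score
        past.append(amount)
        return z

scorer = BehavioralRiskScorer()
for amt in [40, 55, 48, 62, 50, 45]:   # typical spending pattern
    scorer.score("acct-1", amt)
risk = scorer.score("acct-1", 5000)    # sudden large outlier scores high
```

The design choice worth noting is that the baseline is per-account: a $5,000 transfer may be routine for one customer and a glaring anomaly for another, which is why behavioral approaches compare customers to themselves rather than to a global average.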
Agentic AI isn’t all bad, of course. It can offer operational and customer-facing improvements across the business, meaning models deployed to counter fraud must sit within a broader architectural framework.
JPMorgan Chase says it now has 450 AI use cases in production, including some covering risk mitigation. Even prior to the rollout of agentic models, the financial institution reported that using AI for fraud prevention was already saving the bank $250 million per year.
“The rise in fraudulent transactions has effects reaching beyond fraud loss,” said Lorien Carter, senior research analyst at Juniper Research. “The recent spate of banks being fined for failing to correctly identify high-risk transactions, such as Monzo, Barclays and TD Bank, displays that regulators are taking this issue extremely seriously. Financial institutions must increase investment in their fraud detection teams and technology to avoid further monetary and reputational losses.”
