Department Says It May Contribute Its Own Data for Training Models
The widespread adoption of artificial intelligence is opening a fraud detection capability gap between large and small financial institutions, the U.S. Department of the Treasury warns, suggesting that it may use its own historical data to narrow the divide.
The financial sector was an early pioneer in applying AI to detect fraud, but larger firms are now experimenting with AI in ways that smaller institutions lack the resources and data to match, the department said in a report called for by an October White House executive order seeking to shape use of the nascent technology (see: White House Issues Sweeping Executive Order to Secure AI).
The robust collaboration among sector players on cybersecurity is not mirrored when it comes to fraud, the report says, even as industry groups including the American Bankers Association test information exchanges for swapping fraud data. Still, “a clearinghouse for fraud data that allows rapid sharing of data and can support financial institutions of all sizes is currently not available.” As a result, smaller institutions lack the wide-ranging data sets that AI needs to build ever more accurate detection tools.
And AI does appear to have an appreciable effect on detection, the report says, citing “one large firm” that trained AI on its internal historical data and saw an estimated 50% reduction in fraud.
Treasury itself said in February that the Bureau of the Fiscal Service recovered $375 million through an AI-based method of mitigating check fraud “in near real-time by strengthening and expediting processes to recover potentially fraudulent payments from financial institutions.”
One solution to the widening gap, the report says, is for Treasury to contribute its own data to a data lake of fraud data available for training AI. At a minimum, a senior Treasury official told reporters Tuesday afternoon, the department could share lessons learned with the private sector as it grows its internal program.
The official told reporters that federal officials believe AI gives cybercriminals a short-term advantage over defenders. Fraudsters have already used generative AI to make phishing emails less obviously malicious, and the rise of deepfakes is expected to enable sophisticated identity impersonation over audio and video.
Yet the consensus is that most risks posed by AI amplify existing fraud methods rather than create new ones, the senior official said.
The report also calls on financial institutions to be watchful of new third-party risks introduced by AI, whether they develop models in-house or acquire them from a vendor. Either approach increases institutions’ reliance on third-party IT infrastructure. The report calls on the private sector to affix the digital equivalent of a nutrition label to vendor AI systems and third-party data, identifying the data used to train the AI, where it came from and how the model uses it.
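To make the idea concrete, such a label could take the form of structured metadata that travels with a vendor model. The sketch below is purely illustrative; the field names and example values are hypothetical and are not drawn from the Treasury report or any existing standard.

```python
# Hypothetical sketch of an AI "nutrition label" as structured metadata.
# Field names and values are illustrative only, not from the Treasury report.
from dataclasses import dataclass
from typing import List


@dataclass
class ModelNutritionLabel:
    model_name: str                   # vendor model identifier
    vendor: str                       # who built and maintains the model
    training_data_sources: List[str]  # what data the model was trained on
    data_provenance: str              # where that data came from
    data_usage: str                   # how the model uses institution data at inference time
    last_reviewed: str                # date the label was last reviewed


label = ModelNutritionLabel(
    model_name="example-fraud-scorer-v1",
    vendor="Example Vendor Inc.",
    training_data_sources=["historical card transactions", "chargeback records"],
    data_provenance="vendor-aggregated data from participating institutions",
    data_usage="transaction features scored in real time; no customer data retained",
    last_reviewed="2024-03-01",
)
print(label)
```

A label like this would let an acquiring institution see at a glance what went into a third-party model before relying on it, which is the gap the report's recommendation is aimed at.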
Financial institutions also told Treasury they mainly limit applications of generative AI to cases where they don’t need to explain in detail how they made a decision. Amid concerns about fairness and bias, “explainability” has become a rallying cry for ensuring that AI models don’t become ungovernable black boxes. Using AI in cases that raise concerns over privacy and consumer protection will require a higher level of explainability, Treasury said, calling for additional research and development on the matter.