AI Is Also Making Traditional Scams More Convincing, Warns Visa
Artificial intelligence technologies such as generative AI are not helping fraudsters invent new types of scams. Fraudsters are doing just fine relying on traditional scams, but AI is helping them scale up attacks and snare more victims, according to fraud researchers at Visa.
Organized threat actors continue to target the most vulnerable point in the payments ecosystem – humans. And they’re using AI to make their scams more convincing, leading to “unprecedented losses” for victims, said Paul Fabara, chief risk and client services officer at Visa.
Fraudsters can use AI to automate the identification of vulnerabilities in a system, making it easier to launch targeted attacks, carry out large-scale social engineering and generate convincing phishing emails en masse by analyzing and mimicking human behavior. Generative AI tools can also produce realistic speech that mimics human emotion and reasoning, which threat actors can exploit to impersonate financial institutions and obtain one-time passwords, or to run phishing campaigns that steal payment account credentials.
“So while AI holds great potential for improving security, it can also be a powerful tool for threat actors,” Michael Jabbara, Visa’s global head of fraud services, told Information Security Media Group.
AI deepfakes are a growing concern. Criminals recently used a deepfake video to impersonate company executives and trick an employee into transferring $25.6 million to several accounts held by the group. Researchers say hackers need just a three-second audio sample to clone a voice with AI in 10 minutes. A month after that research became public, a Vice reporter demonstrated how an unauthorized person used a cloned voice to access a consumer’s bank account.
Dark web developers have released WormGPT and FraudGPT to help hackers create phishing emails, develop cracking tools and conduct carding operations. The tools also help threat actors scan for and test vulnerabilities in critical systems, identify victim networks that contain those vulnerabilities, and develop malicious scripts, apps and programs to help them carry out attacks.
Threat actors are using the technology “widely,” Visa said, to exploit banks by developing novel malware that can identify bugs in transaction messaging. These tools also can help threat actors create digital skimming code, which can be embedded on an e-commerce merchant’s checkout webpage and used to steal sensitive payment account data, the report says.
Visa also said that AI has lowered the barrier to entry for criminals around the world, enabling unskilled criminals to carry out more sophisticated fraud scams.
A case in point is romance scams, which used to be fairly easy to identify and preyed mostly on susceptible men on social media platforms. These scams have since evolved into pig butchering, which combines romance with cryptocurrency investment fraud and targets people of all ages and genders. This increasingly popular scam has made billions of dollars for fraudsters so far, and AI has helped make it more convincing.
“The part of AI that stands out to me the most is how fraudsters can so easily create fake personae of themselves on dating apps or social media. With LLMs, responses can be optimized to help these fraudsters get sensitive information like banking IDs and passwords at scale, so everyone needs to be on the lookout, all the time,” Jabbara said.
Forecasting Problems and Potential Solutions
A major challenge facing banks is that standard identity verification and authentication solutions may soon become less trustworthy. For example, videos for automated liveness tests that banks use to verify customer identities can be compromised. Hackers have already developed deepfakes to easily fool these systems, and the scam held the top slot among verification frauds in the United States in 2022. A more recent report by iProov shows the number of deepfake attacks on its remote identity verification systems increased 704% in 2023 compared to the previous year.
Advancements in AI technology spurred this type of visual biometrics bypass fraud to target the payments ecosystem, Jabbara said.
Visa advised financial services organizations to implement behavioral biometrics, creating a digital fingerprint for each user to authenticate identity. Jabbara recommends using AI-powered biometric authentication to improve the accuracy of biometric systems such as facial recognition and fingerprint scanning and to flag possible deepfake identities. These systems can learn and adapt to slight changes in a person’s biometrics, making it harder for fraudsters to mimic an identity, he said.
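Visa’s report does not spell out an implementation, but the idea behind a behavioral-biometric “digital fingerprint” can be sketched in a few lines. The example below is a minimal illustration assuming keystroke-timing features and a simple z-score tolerance; the feature set, threshold and function names are hypothetical, not Visa’s method.

```python
# Minimal sketch of a behavioral-biometric "digital fingerprint" built from
# keystroke timings. Features and threshold are illustrative assumptions.
import numpy as np

def build_profile(sessions: list[list[float]]) -> tuple[np.ndarray, np.ndarray]:
    """Build a per-user profile (mean and spread of session features) from enrollment sessions."""
    features = np.array([[np.mean(s), np.std(s)] for s in sessions])
    return features.mean(axis=0), features.std(axis=0) + 1e-6

def matches_profile(session: list[float], profile, tolerance: float = 3.0) -> bool:
    """Accept the session if its features fall within `tolerance` standard deviations of the profile."""
    mean, std = profile
    feats = np.array([np.mean(session), np.std(session)])
    z = np.abs((feats - mean) / std)
    return bool(np.all(z < tolerance))

# Enrollment: inter-keystroke intervals (seconds) from the legitimate user's past sessions.
enrollment = [[0.11, 0.14, 0.12, 0.13], [0.12, 0.15, 0.11, 0.14], [0.13, 0.12, 0.14, 0.12]]
profile = build_profile(enrollment)

print(matches_profile([0.12, 0.13, 0.14, 0.12], profile))  # consistent rhythm: likely accepted
print(matches_profile([0.40, 0.05, 0.60, 0.02], profile))  # anomalous rhythm: likely rejected
```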
Oftentimes, though, going back to basics can be an effective strategy.
“AI deepfakes are becoming increasingly complex and show how fraudsters are using new technology to innovate upon tried-and-tested fraud schemes. The good news is that we have a lot of protections across the ecosystem to combat this type of fraud, and a lot of the same principles and best practices still apply to AI technology and deepfakes,” Jabbara said. He said multifactor authentication is an “incredibly powerful tool” to ensure that even if one factor, such as a liveness test, is bypassed, there are still other factors of authentication that a bad actor will need to circumvent for a successful fraud attack.
“Multifactor authentication is among the key tenets in how we combat fraud. It sends up another barrier for these fraudsters to have to overcome. Through the ‘Swiss cheese’ method, we are able to stop more fraud than through any one way,” he said.
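The “Swiss cheese” layering Jabbara describes can be illustrated with a short sketch: authentication succeeds only when every independent factor passes, so a deepfake that defeats one check, such as a liveness test, still fails overall. The factor names and request format below are hypothetical.

```python
# Illustrative sketch of layered ("Swiss cheese") authentication: each check is an
# independent layer, and access is granted only if every required factor passes.
from typing import Callable

def verify_password(request: dict) -> bool:
    return request.get("password_ok", False)

def verify_liveness(request: dict) -> bool:
    return request.get("liveness_ok", False)

def verify_otp(request: dict) -> bool:
    return request.get("otp_ok", False)

FACTORS: list[Callable[[dict], bool]] = [verify_password, verify_liveness, verify_otp]

def authenticate(request: dict) -> bool:
    """Grant access only if every independent factor passes."""
    return all(factor(request) for factor in FACTORS)

# A deepfake that bypasses the liveness check alone still fails overall:
print(authenticate({"liveness_ok": True}))                                       # False
print(authenticate({"password_ok": True, "liveness_ok": True, "otp_ok": True}))  # True
```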
Ironically, threat actors are using advanced large language models and other AI chatbots to deliver malware at the same time banks are rapidly adopting AI into their operations – starting with chatbots. Popular chatbot makers have controls in place to ensure their services are not blatantly used for harmful purposes, but hackers have found ways to bypass these security controls.
Behavioral analysis can also help antimalware solutions search for suspicious behavior and update their applications, Visa said. Jabbara said deep learning algorithms can be used to detect anomalies in user behavior or system operations, which could indicate fraudulent activity. This can help identify and prevent potential AI-related threats in real time, he said.
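As a rough illustration of flagging anomalous user behavior: the report refers to deep learning, but the same pattern can be shown with a simpler stand-in, scikit-learn’s IsolationForest, trained on normal activity and asked to score new sessions. The features used here (transaction amount, hour of day, recent logins) are illustrative assumptions, not Visa’s model.

```python
# Sketch of anomaly detection on user-behavior features using IsolationForest as a
# simple stand-in for the deep learning approaches the report mentions: train on
# normal activity, then flag sessions that deviate from it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Normal behavior: modest amounts, daytime hours, one or two logins per hour.
normal = np.column_stack([
    rng.normal(50, 15, 500),     # transaction amount
    rng.normal(14, 3, 500),      # hour of day
    rng.integers(1, 3, 500),     # logins in the last hour
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new sessions: 1 = normal, -1 = anomalous.
sessions = np.array([
    [55.0, 13.0, 1.0],     # typical session
    [4900.0, 3.0, 12.0],   # large transfer at 3 a.m. after a burst of logins
])
print(model.predict(sessions))  # expected: [ 1 -1]
```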
Financial services firms can also put AI to use to secure decentralized identity systems, in which user identity data is stored across multiple trusted nodes. AI can detect attempts to tamper with a user’s data, providing an added layer of security.
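A minimal sketch of that tamper-detection idea: each trusted node holds a copy of the identity record, and a node whose copy hashes differently from the majority is flagged. The AI layer Visa alludes to is not specified, so this shows only the consistency check; the node names and record fields are hypothetical.

```python
# Minimal sketch: flag nodes in a decentralized identity system whose copy of a
# user's record disagrees with the majority hash. Data and node names are hypothetical.
import hashlib
import json
from collections import Counter

def record_hash(record: dict) -> str:
    """Deterministic hash of an identity record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def find_tampered_nodes(copies: dict[str, dict]) -> list[str]:
    """Return the nodes whose copy disagrees with the majority hash."""
    hashes = {node: record_hash(rec) for node, rec in copies.items()}
    majority, _ = Counter(hashes.values()).most_common(1)[0]
    return [node for node, h in hashes.items() if h != majority]

record = {"user_id": "u123", "name": "A. Customer", "dob": "1990-01-01"}
copies = {
    "node-a": record,
    "node-b": record,
    "node-c": {**record, "dob": "1985-05-05"},  # tampered copy
}
print(find_tampered_nodes(copies))  # ['node-c']
```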
“While the malicious use of AI poses challenges to identity verification and authentication, the good use of AI can provide effective solutions to these challenges, enhancing the security and reliability of these systems,” he said.