AI-Powered Social Engineering and Deepfake Threats in 2025

In 2025, cybersecurity is not just about firewalls, passwords or encrypted data. It’s about trust – and how artificial intelligence is being weaponized. Imagine receiving a voice message from your bank, perfectly mimicking the tone of a customer service agent, urging you to confirm suspicious activity. Or scrolling past a LinkedIn profile of a “recruiter” who seems to have all the right credentials.
But there’s a catch – none of it is real.
According to Google Cloud Security’s “Cybersecurity Forecast 2025,” malicious actors will rapidly adopt AI-based tools to augment their online operations across various phases of the attack life cycle.
“2025 is the first year where we’ll genuinely see the second phase of AI in action with security,” said Sunil Potti, vice president and general manager at Google Cloud Security.
Researchers highlight a troubling shift: Attackers are moving beyond traditional approaches, leveraging AI and large language models to launch more convincing and scalable social engineering attacks. Unlike old-style phishing emails riddled with spelling errors and generic language, these messages are tailored to the recipient, mimicking real conversations with unsettling accuracy.
Deepfake technology is also blurring the lines between reality and fabrication, enabling identity theft on a scale previously thought impossible. Synthetic videos and voice recordings can now bypass know-your-customer security protocols.
The risks don’t stop at impersonation. The report warns of a growing underground market for “unrestricted” LLMs – tools stripped of ethical guardrails. These models allow threat actors to query illicit topics without limitation, giving them an edge in vulnerability research, code development and reconnaissance.
Digital Deception at Scale
John Hultquist, chief analyst at Mandiant Intelligence, told Information Security Media Group that content fabrication remains the primary AI use case. “Whether that is images, sound or video, that’s where we see the most use. There are many social engineering incidents involving a fake persona,” Hultquist said.
Adversaries have generated fake personas for years using random image generators from websites like “This Person Does Not Exist” to target victims, he said.
A Misinformation Review article reported on how scammers and spammers exploit AI image generators to grow audiences on Facebook. Researchers studied 125 Facebook pages that each posted at least 50 AI-generated images and classified them into spam, scam and other creator categories. Some formed coordinated clusters run by the same administrators.
As of April 2024, these pages had a mean follower count of 146,681 and a median of 81,000. These images received hundreds of millions of exposures. In Q3 2023, an AI-generated image post ranked among Facebook’s top 20 most viewed posts, garnering 40 million views and more than 1.9 million interactions.
Spam pages employed clickbait tactics, directing users to off-platform content farms and low-quality domains. Scam pages also attempted to sell non-existent products or extract users’ personal information.
The Human Factor
A recent article in MIT Technology Review stated that Stanford and Google DeepMind researchers found that a two-hour interview provides enough data to capture someone’s values and preferences. Their research paper, titled “Generative Agent Simulations of 1,000 People,” explored the nuances of simulating human behavior.
Joon Sung Park, a Stanford PhD student in computer science, led the research. His team recruited 1,000 people of varying ages, genders, races, regions, education levels and political ideologies.
The team created agent replicas by analyzing recorded interviews. To test how well the agents mimicked their human counterparts, participants completed personality tests, social surveys and logic games twice, two weeks apart. The AI agents then completed identical exercises, achieving 85% similarity in results.
Hultquist explained how North Korean actors use this technique. “Instead of just sending a bunch of malicious links out into the world, which is how these actors operated for years, we can increasingly see them taking their time and having a social engineering conversation before they send something malicious [to the intended victim].”
He said targets become more receptive after lengthy conversations. “They are less likely to question it and more likely to open it.”
Maintaining lengthy conversations in fluent English poses a challenge for North Korean threat actors, especially when multiple interactions are needed.
“It’s complicated for a North Korean to impersonate an American HR representative, as their English is potentially poor. So they use AI to get around that challenge for lengthy and recurring conversations,” he said.
The Google Cloud Security report predicted North Korean actors will continue pursuing revenue through IT workers and cryptocurrency theft. IT workers will use stolen and fabricated identities to apply for high-paying software development jobs. These workers have exploited privileged access to employer systems to enable malicious cyber intrusions, a trend expected to continue into 2025 and beyond (see: US Sanctions North Korean Remote IT Worker Front Companies).
The Road Ahead
Security teams in 2024 used AI to democratize security by automating report summarization, querying vast datasets and obtaining real-time assistance for various tasks. This integration improved investigation efficiency for security decision-makers while reducing repetitive tasks for analysts.
Google security researchers predicted 2025 will usher in semi-autonomous security operations. This advancement depends on autonomous systems reaching sufficient capability within security workflows, but human oversight remains essential, with AI support enabling analysts to accomplish more. Teams will parse through alerts – including false positives – to identify high-priority items for further triage and risk remediation.
As we move deeper into 2025, the ability to trust what we see, hear and read is increasingly under threat. It’s not a question of whether organizations will face AI-driven threats – it’s a question of how prepared they are to respond.