Impersonation Hoax Leverages Top Officials’ Known Use of Commercial Messaging App

An attempt by a scammer to impersonate U.S. Secretary of State Marco Rubio on chat app Signal by using artificial intelligence is sparking warnings that deepfakes are now convincingly plausible – opening up new vistas for attackers.
The Department of State is investigating a campaign impersonating Rubio that began in mid-June and used the display name “Marco.Rubio@state.gov” on the messaging platform. The investigation follows a Washington Post report citing a diplomatic cable warning that the imposter used AI tools to mimic Rubio’s voice and writing style while contacting three foreign ministers, a U.S. governor and a member of Congress.
State “takes seriously its responsibility to safeguard its information,” a State Department spokesperson said. The Trump administration may be especially vulnerable to deepfake impersonations, given its reliance on consumer apps such as Signal (see: White House’s Operational Security Fail: No Signal Required).
Signal’s end-to-end encryption protects message content against current levels of computing power. But encryption does nothing to verify who is behind an account: anyone can create one, even someone purporting to be “Marco Rubio.”
“Leadership doesn’t seem to take any security protocols seriously – especially when they slow things down,” a State staffer granted anonymity to discuss internal security protocols told Information Security Media Group. “It’s not that hard to convince someone you’re somebody else – especially on apps like Signal.”
The government relies on tools like advanced caller verification, biometric voice authentication and AI-driven deepfake detection that can spot subtle acoustic or visual flaws. But these defenses are not widely deployed domestically or among foreign partners, and they often need deep integration into existing systems to work effectively.
“Someone is eventually going to take the bait,” the staffer said.
The FBI warned the public in May that malicious actors have been impersonating senior U.S. officials through AI-generated voice messages and texts. The advisory urged anyone receiving suspicious messages from senior officials to verify the sender’s identity and inspect email or contact details. It also warned to watch for subtle flaws in images and videos, such as distorted hands or feet, unnatural facial features, blurry or irregular faces, odd accessories and awkward movements.
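The FBI’s advice to inspect contact details can be partly automated. As a minimal illustrative sketch – not any official tool, with a hypothetical list of mimicked domains – a simple heuristic can flag display names shaped like official email addresses, which, as this campaign shows, anyone can set freely on a consumer messaging app:

```python
import re

# Hypothetical list of official domains an impersonator might mimic.
OFFICIAL_DOMAINS = {"state.gov", "whitehouse.gov", "defense.gov"}

# Rough pattern for text shaped like an email address.
EMAIL_SHAPE = re.compile(r"^[\w.+-]+@([\w-]+\.)+[a-z]{2,}$", re.IGNORECASE)

def is_suspicious_display_name(display_name: str) -> bool:
    """Flag display names that imitate an email address at an official domain.

    On consumer messaging apps, the display name is free text chosen by the
    account holder -- it proves nothing about the sender's real identity.
    """
    name = display_name.strip().lower()
    if not EMAIL_SHAPE.match(name):
        return False
    domain = name.split("@", 1)[1]
    return domain in OFFICIAL_DOMAINS

# The display name used in this campaign would be flagged:
print(is_suspicious_display_name("Marco.Rubio@state.gov"))  # True
print(is_suspicious_display_name("Aunt Carol"))             # False
```

A check like this catches only the crudest spoofing; real identity assurance on Signal comes from verifying safety numbers out of band, not from anything in the display name.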
Compounding the deepfake risk is a diminishing ability to tell truth from deception, driven largely by AI and emerging technologies, said Margaret Cunningham, director of security and AI strategy at the AI cybersecurity firm Darktrace.
“The sophistication and scale of AI-generated impersonation means it is no longer reasonable to expect individuals, even the most senior leaders, to detect these attacks alone,” Cunningham said.
Analysts said future safeguards must pair real-time detection with overhauls in how sensitive communications are verified, reducing reliance on human judgment in favor of layered security protocols. Avoiding unofficial communication channels would also help.
“This administration needs to make clear that it values and prioritizes security,” said a former Department of Defense cybersecurity official who requested anonymity to discuss the Rubio hoax. “Our partners need to know that when we contact them, it will always be done through the proper channels.”
President Donald Trump’s inner circle and White House have faced criticism over repeated cybersecurity lapses, including accidentally adding a journalist to a Signal group chat about a secret bombing mission in a scandal later called “Signalgate.” Reports also showed the private contact details and personal data of the president’s top advisers were easily found through commercial data search services (see: Report: Top Trump Officials’ Private Data Leaked).
The most recent impersonation campaign invited targeted individuals to communicate on Signal and included impersonations of other State Department personnel, according to the Washington Post. It remains unclear if any recipients of the messages responded to the unknown malicious actor.
“This actor was skilled; the AI had the Secretary of State’s voice cloned and even his ‘personality’ in the text messages,” said Mary Ann Miller, vice president and fraud executive advisor at the digital identity verification firm Prove. Miller said countering AI-driven impersonation of senior officials demands a “holistic approach” that includes “the implementation of proper policies and procedures across all communications.”
