OpenAI CEO Says AI Has Beaten Voice Recognition, But Experts Disagree

OpenAI CEO Sam Altman recently claimed that artificial intelligence has “fully defeated most of the ways that people authenticate currently, other than passwords.” A host of security experts disagree and point out that passwords got us into this authentication mess to begin with.
After all, the tech industry has been trying to kill the password for over a decade. Passwords are clunky, difficult for users to manage, frequently reused, easily phished and hated by everyone – except for cybercriminals.
At least 45 vendors are offering passwordless authentication solutions, and the market is growing. Despite Altman’s faith in passwords, profound weaknesses and user friction turned passwords into an obsolete form of authentication long ago. Suddenly, passwords are more secure again in an AI-driven threat landscape? Really? Are we stuck in a loop?
But a number of cybersecurity industry players pointed out that passwords alone are inherently risky. While “AI is breaking systems we once thought were resilient,” said Troy Leach, chief strategy officer at Cloud Security Alliance and former CTO at the PCI Council, “no authentication factor is unbreakable in isolation. The path forward is layered, context-aware and adaptive authentication.”
Andras Cser, principal analyst at Forrester, said no one should trust passwords alone to protect sensitive data. “Passwords are legacy technology that do not provide any adequate protection as they are easy to replay, snoop, capture, phish and shoulder surf. The right level of context-aware friction is key here for high-risk, high-value transactions.”
Altman’s sweeping statement also betrays a lack of understanding of how modern fraud detection works. No bank is going to let you move large sums of money using only a voiceprint. Voice is just one layer, and most financial institutions require step-up authentication by default. No single factor does the job alone.
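To make that concrete, here is a minimal sketch, in Python, of the kind of context-aware, risk-based step-up logic Leach and Cser describe. The signal names, weights and thresholds are illustrative assumptions, not any bank's or vendor's actual rules.

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    """Illustrative signals a risk engine might weigh (all hypothetical)."""
    voice_match: bool          # voiceprint matched the enrolled user
    new_device: bool           # request came from an unrecognized device
    unusual_location: bool     # geolocation differs from the user's normal pattern
    transaction_amount: float  # value of the requested transfer, in USD

def risk_score(ctx: LoginContext) -> int:
    """Toy additive score; real engines use far richer, adaptive models."""
    score = 0
    if ctx.new_device:
        score += 40
    if ctx.unusual_location:
        score += 30
    if ctx.transaction_amount > 10_000:
        score += 30
    return score

def decide(ctx: LoginContext) -> str:
    """A matched voiceprint alone never moves money; context decides what else to ask for."""
    if not ctx.voice_match:
        return "deny"
    score = risk_score(ctx)
    if score >= 60:
        return "step_up: hardware key or in-branch verification"
    if score >= 30:
        return "step_up: one-time passcode to a registered device"
    return "allow"

if __name__ == "__main__":
    ctx = LoginContext(voice_match=True, new_device=True,
                       unusual_location=False, transaction_amount=25_000)
    print(decide(ctx))  # -> step_up: hardware key or in-branch verification
```

The point is the same one the experts make: a matched voiceprint is only ever one input to an adaptive decision, never the decision itself.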
While both voice and facial recognition have been shown to be vulnerable to deepfakes, modern multifactor authentication systems are countering with liveness detection – itself powered by AI. In reality, focusing on authentication methods misses the bigger issue – human risk.
“Many threats come from tricking real users into taking harmful actions,” said Roy Zur, CEO at Charm Security. “We need to engage the user and break the scam spell, not just verify their identity.”
But like a lot of things coming out of Silicon Valley these days, Altman’s dismissal of voice biometrics was more about product positioning than practical advice. It supports the sales pitch for another of his ventures, Worldcoin, whose World ID project uses iris scanning to distinguish human users from bots.
The reality is there is no authentication panacea. Security controls degrade fast in today’s threat landscape. What worked last quarter might not hold up against the next wave of AI technologies. That’s why testing really matters. You can’t just set it and forget it, Mr. Altman.
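One way to put "don't set it and forget it" into practice is to treat authentication controls like any other code and regression-test them on a schedule. The sketch below is a hypothetical illustration only: the sample corpus, the alert threshold and the score_liveness stub are all assumptions standing in for a real deepfake-detection pipeline.

```python
# Purely illustrative control check: replay known-synthetic audio samples
# against the liveness detector and fail loudly if detection degrades.
# Sample names, the threshold and score_liveness() are all assumptions.

KNOWN_SYNTHETIC_SAMPLES = ["deepfake_clip_01.wav", "deepfake_clip_02.wav"]
ALERT_THRESHOLD = 0.80  # detector should stay at least this confident on known fakes

def score_liveness(sample: str) -> float:
    """Stub standing in for whatever detection model is actually deployed."""
    return 0.95  # canned value so the sketch runs end to end

def run_control_check() -> None:
    failures = [s for s in KNOWN_SYNTHETIC_SAMPLES
                if score_liveness(s) < ALERT_THRESHOLD]
    if failures:
        # The control that held up last quarter is slipping - page the team.
        raise RuntimeError(f"Liveness detection degraded on: {failures}")
    print("Liveness control still catching known synthetic samples.")

if __name__ == "__main__":
    run_control_check()
```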
To secure enterprises in this brave new world of AI-powered deception, we need more factors – not fewer – and we need to be continuously evolving them.
