AI-Based Attacks, Artificial Intelligence & Machine Learning, Fraud Management & Cybercrime
Sam Curry and Heather West on Authentication, AI Labelling and Adaptive Security
As deepfakes evolve, they pose significant cybersecurity risks and require adaptable security measures. In this episode of “Proof of Concept,” Sam Curry of Zscaler and Heather West of Venable discuss strategies for using advanced security tactics to outpace deepfake threats.
“We must assume digital interactions aren’t always real and build our processes accordingly,” said Curry, vice president and CISO at Zscaler.
West, senior director of cybersecurity and privacy services at Venable, underscored the importance of rigorous processes. “We need to be proactive with robust authentication methods,” she said. “We can’t rely on technology alone. It’s about creating a culture where processes are followed meticulously to ensure security.”
Curry; West; Anna Delaney, director, productions; and Tom Field, vice president, editorial, discussed:
- The need to evolve security processes to counter deepfake threats;
- Why traditional authentication methods are insufficient;
- Strategies for adapting to AI-driven fraud tactics.
Curry, who leads cybersecurity at Zscaler, previously served as chief security officer at Cybereason and as chief technology and security officer at Arbor Networks. Before those roles, he spent more than seven years at RSA – the security division of EMC – in a variety of senior management positions, including chief strategy officer, chief technologist, and senior vice president of product management and product marketing. Curry also has held senior roles at MicroStrategy, Computer Associates and McAfee.
At Venable LLP, West focuses on data governance, data security, digital identity and privacy in the digital age. She has been a policy and tech translator, product consultant and long-term internet strategist, guiding clients through the intersection of emerging technologies, culture, governments and policy.
Don’t miss our previous installments of “Proof of Concept,” including the March 21 edition on opening up the AI “black box” and the May 22 edition on ensuring AI compliance and security controls.