AI-Based Attacks, Artificial Intelligence & Machine Learning, Fraud Management & Cybercrime
Also: Safeguarding AI Models From Cyber Adversaries
In the latest “Proof of Concept,” Sam Curry of Zscaler and Heather West of Venable assess how vulnerable AI models are to attack, offer practical measures to bolster the models’ resilience, and discuss how to address bias in training data and model predictions.
Anna Delaney, director, productions; Tom Field, senior vice president, editorial; Sam Curry, vice president and CISO, Zscaler; and Heather West, senior director of cybersecurity and privacy services, Venable, discussed:
- Methodologies for assessing the vulnerability of AI models;
- How to evaluate and mitigate privacy concerns in AI systems;
- How to identify and address biases in training data and model predictions.
Curry previously served as chief security officer at Cybereason and chief technology and security officer at Arbor Networks. Prior to those roles, he spent more than seven years at RSA, the security division of EMC, in a variety of senior management positions, including chief strategy officer, chief technologist, and senior vice president of product management and product marketing. Curry also has held senior roles at MicroStrategy, Computer Associates and McAfee.
At Venable LLP, West focuses on data governance, data security, digital identity and privacy in the digital age. She has been a policy and tech translator, product consultant and long-term internet strategist, guiding clients through the intersection of emerging technologies, culture, governments and policy.
Don’t miss our previous installments of “Proof of Concept,” including the Nov. 17 edition on the impact of the U.S. executive order on AI and the Dec. 8 edition on navigating software liability.