Artificial Intelligence & Machine Learning
,
Data Governance
,
Data Privacy
Product Head Jeetu Patel on How AI Defense Ensures Secure LLM Operations at Runtime
The introduction of AI models has created safety risks related to model toxicity as well as hallucinations and security risks around prompt injection attacks. Organizations must continuously validate AI models to ensure they perform as intended across scenarios – especially when they’re fine-tuned or exposed to new data, said Jeetu Patel, executive vice president and chief product officer at Cisco.
See Also: AI Surge Drives a 40-1 Ratio of Machine-to-Human Identities
Algorithmic red teaming enables testing at an unprecedented scale to ensure that models are robust under a variety of conditions, and guardrails can be enforced after vulnerabilities are identified through validation to prevent similar failures, he said (see: Cisco Bolsters AI Security by Buying Robust Intelligence).
“You need to have some kind of validation that has to be done on the model on an ongoing basis,” Patel said. “Every single time you fine-tune a model – every single time there’s a tweak that happens to the model – there’s new data that the model trains on that actually has the potential of changing model behavior.”
In this video interview with Information Security Media Group, Patel also discussed:
- How Cisco uses AI-powered tools for operational efficiency and risk mitigation;
- How the AI Defense platform integrates with Cisco’s broader security framework;
- How acquiring Robust Intelligence bolstered Cisco’s AI security capabilities.
Patel, Cisco’s chief product officer since August, previously spent four years leading the firm’s security and collaboration business units. He previously led Box’s product and platform strategy, setting the firm’s long-term vision and road map for cloud content management. He also previously served as the general manager and chief executive of EMC’s Syncplicity Business Unit, a cloud service for file sync sharing and collaboration.