Application Security
,
Artificial Intelligence & Machine Learning
,
Events
GitLab’s Joern Schneeweisz on Prompt Injections and Old AppSec Issues
Prompt injections are the new face of an old problem. As developers rush to deploy artificial intelligence features, longstanding issues such as cross-site scripting and memory corruption are resurfacing.
See Also: AI Agents Demand Scalable Identity Security Frameworks
“The new stuff is like the prompt injections, which are inherent to the AI. They are a systemic thing, just like memory corruption, where data and code mix in the same space,” said Joern Schneeweisz, principal security engineer at GitLab. “People are rushing right now to build AI features, and all the old AppSec issues pop up again because people don’t take the time to put security into their processes.”
AI systems often introduce vulnerabilities where they intersect with existing technologies, he said. The rush to market, driven by capitalistic urgency, discourages teams from building security into early development stages.
In this video interview with Information Security Media Group at Nullcon Berlin 2025, Schneeweisz also discussed:
- Why capitalistic pressure leads to insecure product development;
- How security delays can stall releases and cost companies market share;
- How system intersections, including AI integrations, create new vulnerabilities.
Schneeweisz has more than 15 years of experience in vulnerability research, app security and AI/ML threat analysis. He helps secure product features, leads design reviews and advises on risks across the software development life cycle.

