Today, applications are no longer written purely by humans. Traditional AppSec, built around manual code reviews, static rules and periodic scans, was never designed for machine-generated software or the pace at which modern applications are now delivered.
This article explores the intersection of three forces reshaping security teams right now: AI adoption, software delivery speed and the limits of legacy AppSec models.
Kalpana Venkatesan points to development speed as one of the most immediate pressures.
“Development is moving faster than ever, with code going to production much quicker. AppSec has to keep up with that pace. Instead of slowing things down, security needs to move earlier in the process and work at the same speed as developers, catching issues sooner and fixing them faster.”
AI has fundamentally altered both software creation and software exploitation, and traditional AppSec models are under real strain. Whether AppSec is “dead” or evolving is less important than the fact that the old assumptions no longer hold and the industry must adapt quickly.
“Traditional AppSec isn’t dead; it’s evolving to be faster, more integrated, and less of a bottleneck,” Kalpana emphasises.
Indeed, AI is already deeply embedded in AppSec and DevSecOps workflows, not as a futuristic add-on but as a practical enabler woven into developers’ daily tasks. Kalpana points first to the impact on developer velocity, which has forced security to adapt quickly.
“Tools like Copilot are helping developers write code much faster, which means security needs to operate at the same speed instead of slowing things down,” she says. “That shift alone has changed how AppSec fits into the development lifecycle.”
That pressure has pushed AI deeper into the pipeline, where it supports code reviews, highlights risky patterns, suggests fixes, and helps teams triage and prioritise findings earlier. It is also proving useful in identifying misconfigurations and exposed secrets—areas where speed and scale matter. More importantly, AI is adding context to security signals.
“It’s helping identify vulnerabilities in context, like showing real attack paths,” Kalpana says. “Overall, it’s helping us move from just finding issues to focusing on what actually matters.”
One of the most unexpected shifts has been cultural rather than technical. Kalpana notes that clearer, more actionable feedback has changed how developers engage with security.
“The biggest change has been in developer behaviour more than the tools themselves,” she says. “Developers are more willing to fix issues when the feedback is clear and easy to understand, so security teams are spending less time explaining the basics and more time on complex problems.”
That acceleration does come with trade‑offs. Faster delivery has led to more “good enough” code, while some risks have become subtler and harder to spot. At the same time, these changes have reshaped how AppSec is perceived within organisations.
“It’s also shifted AppSec from being seen as a gatekeeper to more of a guide,” Kalpana notes. “And that happened faster than anticipated.”
Despite its benefits, AI still has clear limitations in practical use. Kalpana cautions against overconfidence, particularly when teams treat AI output as authoritative.
“AI can sound very confident even when it’s wrong, which is risky if teams trust it too quickly,” she says. “It’s good at spotting patterns, but it often misses business context or how systems actually work together.”
That has changed how security teams approach trust and validation. While repeatable, high‑confidence patterns can be relied on more, AI‑generated explanations require closer scrutiny.
“Trust has become more balanced,” Kalpana explains. “We validate findings by checking context—whether an attack path is real and exploitable, and whether it actually matters. Anything high‑risk still needs human review. AI hasn’t removed skepticism; it’s made validation faster, but we still have to be careful.”
Looking ahead, Kalpana expects some aspects of traditional AppSec to fade, particularly low‑value, manual work. Routine triage, rigid checklists, and basic secure‑coding guidance are likely to reduce as AI increasingly supports developers in real time. However, the core of AppSec remains firmly human‑led.
“AI can assist with ideas and patterns, but it can’t replace judgement,” she says. “Understanding how systems really connect, making trade‑offs, and defining business risk will always need human ownership.”
In that sense, AppSec isn’t disappearing—it’s being reshaped.
“The work isn’t going away,” Kalpana adds. “It’s moving towards more strategic, higher‑value areas.”
In an AI‑driven world, success won’t be defined purely by speed or accuracy.
“A good AppSec program is really about making the right decisions at scale,” she says.
“Reducing noise, focusing on what matters to the business, and knowing when to rely on AI versus when to step in with human judgement. Speed and accuracy still matter, but if they don’t lead to better decisions and safer outcomes, they’re not enough.”
Don’t miss the opportunity to hear more from Kalpana Venkatesan at AppSec & DevSecOps Melbourne 2026 (15 July) at Crown Promenade.
Alongside this event, we are hosting CISO Melbourne 2026 (14-15 July), OT Security Melbourne 2026 (14 July) and Cloud Security Melbourne 2026 (15 July).
If you would like to share your experience and insights at our events, feel free to reach out to Maddie Abe.