‘Rogue Agents’ and Why Autonomous Models Will Force Tech Leaders to Build AI Better

Mastercard’s Fellow of Data & AI and AI Truth’s founder on the risks of unsanctioned agentic AI and the need for codes of conduct for non-human decision-makers
By Eoin Connolly
Agentic AI promises new ways of working, new frontiers in automation, and game-changing efficiency gains across just about every sector you can think of.
But autonomy demands accountability, and many enterprises haven’t even finished building the basic infrastructure of governance, alignment, and oversight that agentic systems will require.
AI agents will be capable of taking goal-driven, independent actions. So if they’re going to work for us, rather than the other way around, leaders must figure out how to lay the strongest possible foundations.
For guidance on tackling this challenge, we spoke with two seasoned AI experts: JoAnn Stonier, Fellow of Data & AI at Mastercard, and Cortnie Abercrombie, founder of AI Truth. Stonier has hands-on experience shaping AI governance in large organizations, and Abercrombie is a long-time advisor on enterprise AI initiatives.
Their insights reveal a more nuanced story than headlines about agentic AI’s disruptive potential might suggest: one where experimentation is surging, risk is real, and readiness remains a work in progress.
Unsanctioned models are probably already out there
At the surface level, agentic AI seems like a natural next step for many enterprises. After all, why stop at predictive analytics or content generation when AI could start executing tasks on your behalf?
But Stonier says it could be dangerous for stakeholders to launch agentic AI use cases without proper oversight.
“I was at a dinner where someone said, ‘You could have rogue agents popping up now the way rogue generative AI projects did.’ And I thought: that’s a really bad situation.”
In other words, while enterprises are still debating policy, agentic AI experiments may already be operating in the wild, unsanctioned and unmonitored.
“Even if companies aren’t formally ready, people are curious,” Stonier adds. “There are always going to be employees thinking, ‘Can I build an agent to do this task?’”
Most organizations are several crucial steps short of deploying agentic systems at scale. Yet it would be unwise to rush things, says Stonier, especially in risk-sensitive sectors like financial services or healthcare.
“Security and control are paramount, especially when autonomous agents are involved,” she explains. “In [financial services], explainability and trust aren’t just preferences; they’re regulatory expectations.”
So even though the tooling is improving, most organizations are still at the proof-of-concept stage, figuring out what these systems can and should do, and how to govern them responsibly. And they may well remain at that early stage for longer than the media hype would suggest.
‘Leaders will finally be forced to build with outcomes in mind’
That is because of unanswered questions around governance. If agents are going to make decisions, how do we constrain them? Who’s responsible when they act? And how can we ensure those actions are aligned with company values and laws?
Stonier believes governance will have to evolve much faster than most leaders anticipate.
“As companies start exploring agentic AI in a more significant way, there’ll need to be more rigor. Governance is going to become more necessary and more robust. The conversations are going to become more difficult.”
She draws a parallel to employee policies. Just as organizations have codes of conduct for staff, they’ll need something similar for autonomous agents.
“They’re going to need an agentic code of conduct. And the more agents you have, the more important that becomes,” she says.
Abercrombie agrees that governance is a critical challenge, but also sees an opportunity. Agentic AI may well prove to be a forcing function for responsible innovation.
“If anything, this shift forces business and tech leaders to finally sit down and work out what outcomes they’re actually building for,” she says. “You can’t launch something autonomous without understanding the rules it should follow and what success really looks like.”
Despite the uncertainty, both experts agree that experimentation, done safely, is essential.
“Sandbox environments are a useful way to test multi-agent systems,” Stonier suggests. “But they need to be structured. Cross-functional risk reviews, involving compliance, tech, and AI teams, can help identify and mitigate risks before anything goes live.”
Why it’s crucial to document early lessons
She also emphasizes the importance of education, not just policy: “Leaders should invest in helping teams understand agentic AI’s capabilities and limitations. Curiosity isn’t bad. It’s unmanaged curiosity that’s dangerous.”
Abercrombie adds that now is the time to start documenting those learnings.
“Every project should leave a paper trail: what worked, what didn’t, where the risks emerged. That’s how you build institutional muscle memory, and it’s how you scale safely.”
Ultimately, the rise of agentic AI is a moment of choice for enterprises. Do they chase the hype without preparation, or take the time to ensure the right scaffolding is in place? In the hyper-competitive, fast-moving world of the modern enterprise, the prospect of letting competitors get the jump on you is hardly compelling.
For Stonier, though, the way forward is clear: “Just because the technology can do something doesn’t mean the organization is ready for it. We need policy, practice, and shared understanding. Otherwise, we’re not managing AI, we’re just hoping it behaves.”
Abercrombie echoes that sentiment: “Responsible AI isn’t just about ethics or governance. It’s about business survival. If your autonomous system doesn’t help you hit a strategic goal, and do so safely, then why are you building it?”