Content Hub | Corinium Intelligence

The Hidden Infrastructure Powering Safe AI in Financial Services

Written by Gareth Becker | Apr 2, 2026 2:50:06 PM

Financial services organizations are innovating with AI at a blistering pace, but without the right infrastructure and governance, trust remains the biggest barrier to scale

Financial institutions have been experimenting with AI at an accelerated pace over the last 18 months. As a result, many have no shortage of proofs of concept, from copilots to early-stage agentic workflows.

As Purnima Padmanabhan of the Tanzu Division of Broadcom explains, building AI-powered applications has never been easier. Operating them safely at scale, however, is a very different proposition.

“It’s really easy to write AI-powered software, but that doesn’t mean it’s easy to test, integrate, and securely operate enterprise-grade software,” she says.

This is a critical distinction for financial institutions navigating the dual pressures of maintaining customer trust and innovating on customer experience amid fierce competition.

From deterministic systems to probabilistic risk

Traditional enterprise systems are designed to be predictable. If a specific input is provided, a defined output follows. This determinism underpins everything from testing to auditing to troubleshooting and root cause analysis.

AI systems have the potential to upend that model.

Agentic AI, in particular, introduces probabilistic behavior. Systems don’t follow a single predefined path; they explore multiple ways to complete a task, often interacting with data, APIs, and services across the organization.

That flexibility is powerful, but it also introduces new forms of risk.

“If you don’t have hard constraints and good planning frameworks… agents can run amok with unintended consequences such as changing or deleting critical data or accessing unauthorized systems or consuming large amounts of resources by running in an unbounded loop, or opening pathways for new attacks such as prompt injection,” Padmanabhan warns.

For banks and insurers, this raises concerns around model validation, operational risk, and regulatory compliance. The question becomes not just whether an AI system works, but whether it can be trusted to behave within acceptable boundaries.
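The "unbounded loop" risk described above can be made concrete in code. The following is a minimal, hypothetical sketch (not any vendor's actual mechanism; all names are invented for illustration) of a runtime guard that enforces hard step and token budgets on a single agent run:

```python
class BudgetExceeded(RuntimeError):
    """Raised when an agent run exceeds its step or token budget."""


class AgentBudget:
    """Hard runtime constraints for one agent run (illustrative sketch)."""

    def __init__(self, max_steps: int, max_tokens: int):
        self.max_steps = max_steps
        self.max_tokens = max_tokens
        self.steps = 0
        self.tokens = 0

    def charge(self, tokens: int) -> None:
        """Record one agent step; abort the run if any budget is exhausted."""
        self.steps += 1
        self.tokens += tokens
        if self.steps > self.max_steps:
            raise BudgetExceeded(f"step budget of {self.max_steps} exhausted")
        if self.tokens > self.max_tokens:
            raise BudgetExceeded(f"token budget of {self.max_tokens} exhausted")
```

In practice, the orchestration loop would call `charge()` on every model invocation or tool call, so a misbehaving agent fails fast and visibly instead of consuming resources indefinitely.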

Governance must be built into the stack

One of the clearest shifts emerging in enterprise AI is the increasing focus on AI governance as an organizing design principle.

This begins at the earliest stages of development. Software development frameworks such as Spring AI (part of the Spring Framework) and Embabel enable organizations to create agentic applications and workflows with structured prompts, clearly defined outputs, and human-in-the-loop validation to ensure more consistent behavior. Spring also enforces standards during code generation, ensuring that security controls, observability hooks, and best practices are applied by default.
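Spring AI and Embabel are JVM frameworks; the pattern they enable can be illustrated language-agnostically. The sketch below is a hypothetical Python illustration, not a real Spring AI API: model output is parsed against a strict schema, and nothing executes without an explicit human approval step.

```python
from dataclasses import dataclass


@dataclass
class RefundDecision:
    """Schema the model's output must conform to (invented for illustration)."""
    customer_id: str
    amount: float
    reason: str


def parse_decision(raw: dict) -> RefundDecision:
    """Validate a raw model response against the schema; reject anything malformed."""
    if not isinstance(raw.get("customer_id"), str):
        raise ValueError("customer_id must be a string")
    amount = raw.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        raise ValueError("amount must be a non-negative number")
    return RefundDecision(raw["customer_id"], float(amount), str(raw.get("reason", "")))


def execute(decision: RefundDecision, approve) -> str:
    """Human-in-the-loop gate: nothing executes without explicit approval."""
    if not approve(decision):
        return "rejected"
    return f"refunded {decision.amount:.2f} to {decision.customer_id}"
```

Forcing free-form model output through a typed schema before any side effect is what turns probabilistic generation into something an enterprise can test and audit.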

From there, attention shifts to the software supply chain. Rather than allowing developers or agents to pull dependencies from unverified sources, organizations are increasingly standardizing build processes using trusted, centrally managed components.

This reduces the risk of introducing vulnerabilities into AI systems before they are even deployed.

Runtime control: the rise of “deny by default”

If development is the first line of defense, runtime is the second.

A key principle gaining traction, particularly in financial services, is the idea of “deny by default” environments. In this model, AI agents have no inherent access to systems, data, or services. Every interaction must be explicitly granted.

“For example, within a Tanzu Platform agent foundation, there is nothing that an agent can get access to that wasn’t explicitly granted as part of the agent’s environment,” Padmanabhan explains. “Every service connection, every database connection, every MCP tool connection… everything is granted explicitly.”

This approach mirrors zero-trust security models and is particularly well-suited to agentic AI, where autonomous systems may otherwise attempt to discover and access resources unpredictably.
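The deny-by-default principle can be sketched in a few lines. This is a hypothetical illustration of the idea, not Tanzu's implementation: an agent can reach only the connections that were explicitly granted when its environment was provisioned.

```python
class AccessDenied(PermissionError):
    """Raised when an agent requests a connection it was never granted."""


class ConnectionBroker:
    """Deny-by-default: agents reach only explicitly granted resources (sketch)."""

    def __init__(self):
        self._grants: dict[str, set[str]] = {}

    def grant(self, agent_id: str, resource: str) -> None:
        """Provisioning time: record an explicit grant for one agent."""
        self._grants.setdefault(agent_id, set()).add(resource)

    def connect(self, agent_id: str, resource: str) -> str:
        """Runtime: no implicit discovery; anything not granted is refused."""
        if resource not in self._grants.get(agent_id, set()):
            raise AccessDenied(f"{agent_id} has no grant for {resource}")
        return f"connected:{resource}"
```

Because every connection passes through one broker, the same chokepoint that enforces grants can also log every access attempt, which is what makes the "blast radius" both bounded and auditable.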

By tightly controlling ingress, egress, and service connections, organizations can limit the "blast radius" of any given agent, ensuring that even if something goes wrong, the impact is contained and every action can be recorded. Further, by governing the agentic runtime, a system can limit the resources and tokens consumed by any agent, thereby guarding against runaway loops.

Centralization as a control mechanism

As AI ecosystems grow more complex, centralization can be a powerful way to govern critical systems and processes.

Rather than allowing agent capabilities, or "skills," to proliferate across the enterprise, leading organizations are building centralized catalogs of validated services. This includes Model Context Protocol (MCP) servers, APIs, and reusable components that agents can access.

The benefits of this approach are significant:

  • Maintaining a chain of trust throughout an agentic process
  • Consistent validation and approval processes
  • Reduced duplication of effort
  • Improved auditability and traceability
  • Stronger alignment with enterprise risk frameworks

In effect, this creates a controlled marketplace of capabilities, where innovation can continue within clearly defined boundaries.
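The catalog idea can be sketched as a simple registry (all names here are invented for illustration): agents resolve capabilities by name through a central, validated catalog rather than discovering endpoints on their own, which preserves the chain of trust and leaves an audit trail.

```python
class SkillCatalog:
    """Central registry of validated agent capabilities (hypothetical sketch)."""

    def __init__(self):
        self._skills: dict[str, dict] = {}

    def register(self, name: str, endpoint: str, approved_by: str) -> None:
        """Only skills that passed validation get an entry; approval is recorded for audit."""
        self._skills[name] = {"endpoint": endpoint, "approved_by": approved_by}

    def resolve(self, name: str) -> str:
        """Agents look up skills by name through the catalog, never by ad-hoc discovery."""
        if name not in self._skills:
            raise KeyError(f"skill {name!r} is not in the validated catalog")
        return self._skills[name]["endpoint"]
```

A real catalog would add versioning, signatures, and policy checks, but even this minimal shape delivers the listed benefits: one approval path, no duplicated effort, and a single place to answer "which agent used which capability, and who approved it?"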

Private infrastructure and data sovereignty

While public models offer undeniable capabilities, many institutions remain cautious about exposing sensitive data.

Private infrastructure allows AI systems and models to be deployed within an organization's own controlled environment, and it is becoming an increasingly powerful tool for scaling AI safely.

In doing so, organizations can retain control over their data, meet regulatory requirements, and reduce exposure to external risks. At the same time, hybrid approaches are likely to dominate, combining the strengths of public and private models depending on the use case.

Designing the future AI platform stack

Looking ahead, a clear picture is emerging of what the enterprise AI platform stack will require.

At its core are five layers:

  • Agent runtime environments, capable of securing both short-lived and long-running agents
  • Agent middleware and AI content gateway layers, providing integration, identity, control, and observability
  • Governed model brokerage and serving, spanning both public and private deployments
  • Standardized agentic development frameworks, enabling consistent, enterprise-grade builds
  • Low-latency data products, with access controls, governance, and lineage

Together, these components form the foundation for scalable, well-governed AI.

Financial institutions must rethink how they design, test, and operate software in a world where systems are no longer fully deterministic. That requires new approaches to governance, stronger platform engineering capabilities, and a renewed focus on trust.

“It’s all about constraint and guardrails… providing an environment where the agent can do the right things and stay on track,” Padmanabhan says.

Organizations that get this right have much to gain. AI agents have the potential to transform everything from customer experience to fraud detection and operational efficiency.

But only if they are built on infrastructure that is as robust as it is innovative.

 


Join us in New York on April 15 for RE.WORK AI in Finance to hear more from the Tanzu division of Broadcom and other industry leaders tackling today’s most pressing AI challenges.