How JP Morgan Gains a Competitive Edge During AI Implementation

An expert from the firm on why you should pair engineers with compliance officers, traders with data scientists, and risk managers with product teams
By Eoin Connolly
As adoption of AI accelerates in the financial services industry, the question of how to integrate it responsibly is becoming ever more urgent. In a recent talk hosted by Corinium Global Intelligence, Leonard Hawkes, Software Engineer at JPMorgan Chase, outlined the strategies, risks, and cultural changes needed to ensure AI strengthens, rather than undermines, the financial sector.
His core message is a simple one: AI is powerful, but it must be implemented with a human touch. Much of the debate around AI in finance centers on whether it will replace human roles. Hawkes rejects this binary framing.
“AI will not replace us, but it is a tool we can use hand in hand,” he explains. “AI is going to be the acceleration, but we’re still handling the brake and the steering wheel.”
In practice, this means financial professionals will see their roles redefined. Traders will move from manual execution to strategy oversight. Risk managers will focus less on monitoring individual transactions and more on designing safeguards. Engineers – the beating heart of AI development – will spend less time on grunt work and more on high-value problem solving.
As AI takes on more of the operational load, new roles are emerging. Perhaps none of these has garnered more widespread attention than the prompt engineer, someone skilled at asking the right questions of AI systems to generate useful results. Hawkes, who uses AI daily in his coding work, noted how critical this skill has become.
“AI isn’t 100% accurate. You need to know how to guide it. Sometimes I tell the system, ‘Don’t use this function’ or ‘Use this syntax,’ so it knows for the future. Getting good results is as much about how you ask as what you ask.”
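To make that concrete, here is a minimal sketch of the constraint-driven prompting Hawkes describes, assuming an OpenAI-style chat-completions API; the model name, rules, and task are illustrative, not his actual setup.

```python
# A minimal sketch of constraint-driven prompting (assumes the openai
# Python package; model, rules, and task are illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode the "don't use this function, use this syntax" guidance up front,
# so the model is steered before it answers rather than corrected after.
system_rules = (
    "You are a coding assistant. Follow these constraints strictly:\n"
    "- Do not use eval() or exec().\n"
    "- Use f-strings for formatting, never the % operator.\n"
    "- Target Python 3.11 syntax."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_rules},
        {"role": "user", "content": "Write a function that parses a CSV row into a dict."},
    ],
)
print(response.choices[0].message.content)
```

The detail worth noting is that the constraints live in the system message: the “how you ask” that Hawkes emphasizes is written down once and applied to every request, rather than re-litigated in each conversation.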
Beyond prompting, financial firms need cross-functional collaboration. Hawkes recommends pairing engineers with compliance officers, traders with data scientists, and risk managers with product teams. These collaborations reduce blind spots and ensure AI adoption aligns with both technical and regulatory requirements.
Hawkes outlined four strategies for embedding AI responsibly in finance:
1. Start with low-risk use cases
Experiment with areas where mistakes are cheap, such as internal document summarization or customer query triage. These early wins build confidence and institutional learning without exposing firms to catastrophic risks.
2. Design for explainability from day one
If an AI system cannot justify its reasoning, it’s not ready for prime time. Transparency is especially critical in finance, where regulators and customers demand to know why a decision was made (the first sketch after this list shows one way to build that in).
3. Bridge the data gap
Data reflects human bias, and Hawkes stressed the importance of monitoring and correcting for it. “If a biased decision-maker trained the data, the AI will replicate it,” he warns. High-profile cases in which women and minority borrowers received worse loan terms underscore the urgency of tackling this issue (the second sketch after this list shows a basic disparity check).
4. Build cross-functional teams
Collaboration across departments ensures AI rollouts are not just technically sound, but also compliant and aligned with customer needs.

Together, these strategies form a playbook for reducing risk while scaling AI adoption.
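One way to read the second strategy is to prefer models whose decisions decompose into inspectable parts. The sketch below uses a small linear credit model whose per-feature contributions to a decision can be printed directly; the feature names and data are invented for illustration, not a description of any production system.

```python
# A minimal sketch of a decision that can "justify its reasoning": a linear
# model whose per-feature contributions are directly readable (feature
# names and data are invented for illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_at_job"]
X = np.array([[55, 0.30, 4], [32, 0.55, 1], [78, 0.20, 9], [41, 0.48, 2]])
y = np.array([1, 0, 1, 0])  # 1 = approved in the historical record

model = LogisticRegression().fit(X, y)

def explain(row: np.ndarray) -> None:
    """Print each feature's signed contribution to the decision score."""
    contributions = model.coef_[0] * row
    for name, value in sorted(zip(features, contributions), key=lambda p: -abs(p[1])):
        print(f"{name:>13}: {value:+.3f}")

explain(X[1])  # why was the second applicant scored this way?
```

A model this simple trades some raw accuracy for a decision trail a regulator can follow, which is exactly the bargain the second strategy asks firms to weigh deliberately.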
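For the third strategy, monitoring can start as simply as comparing outcomes across groups in the historical record. The sketch below flags a model for review when one group’s approval rate falls below 80% of another’s; the column names, toy data, and threshold are assumptions for illustration.

```python
# A minimal sketch of a disparity check on historical loan decisions
# (column names, toy data, and the 80% threshold are illustrative).
import pandas as pd

def approval_rates(df: pd.DataFrame, group_col: str, approved_col: str) -> pd.Series:
    """Approval rate per group, as a fraction of that group's applications."""
    return df.groupby(group_col)[approved_col].mean()

# Toy data standing in for a real decision log.
decisions = pd.DataFrame({
    "applicant_group": ["A", "A", "A", "B", "B", "B", "B"],
    "approved":        [1,   1,   0,   1,   0,   0,   0],
})

rates = approval_rates(decisions, "applicant_group", "approved")
print(rates)

# Screening heuristic: flag for review if any group's approval rate is
# below 80% of the best-treated group's rate.
if (rates / rates.max()).min() < 0.8:
    print("Disparity flagged: review features and training data for bias.")
```

A check like this does not prove fairness on its own, but it catches the pattern Hawkes warns about: a model faithfully replicating a biased decision-maker’s history.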
Competitive advantages for financial firms
Why should firms invest in responsible AI now? Because, done well, AI provides a genuine competitive edge. Hawkes noted that AI allows a large firm like JP Morgan to move with the nimbleness of a startup while serving millions of customers.
By analyzing transaction data, banks can tailor offers, such as targeted cashback rewards, that make customers feel valued. AI enables institutions to respond quickly to volatility and changing customer behaviors. Automated compliance, customer service chatbots, and back-office tools reduce overhead and free up human employees for higher-value work.
In Hawkes’ words: “AI is giving financial firms the ability to move like startups but operate at scale.”
Responsible integration requires more than just technical expertise. Hawkes argued that AI literacy should extend across all departments, not just engineering.
“Some companies now require employees to use AI at least once or twice, even in non-technical roles,” he explains. “There’s pushback, but it builds literacy. People learn how to prompt, how to get the answers they want.”
This democratization of AI ensures that its benefits, as well as its risks, are understood broadly, while also helping to prevent innovation from being siloed in IT departments.
Governance and regulation: guardrails, not blockers
Hawkes emphasized that governance is not a barrier to innovation, but rather a necessity for sustainable adoption. He pointed to two key frameworks:
The EU AI Act, which classifies key financial applications, such as credit scoring, as “high risk” and requires documentation, monitoring, and human oversight.
The NIST AI Risk Management Framework, which stresses explainability, supervision, and accountability.
“Regulation isn’t a blocker; it’s a signal that AI is growing up,” Hawkes said. “If your model can’t explain itself, or your employees aren’t supervised, then you’re not compliant. And non-compliance is costly.”
The future of AI in finance will be defined less by algorithms themselves and more by how humans and machines collaborate. Firms that treat AI as an accelerator while keeping humans firmly at the wheel will not only innovate faster but also earn the trust of regulators and customers alike.