
Can't Hire an AI Translator? Grow Your Own

Finding talent to bridge the technical and strategic is crucial to the long-term success of AI projects – nurturing this role in-house requires cross-departmental exposure 

By Eoin Connolly  

For years, organizations have been told that if they want to successfully integrate AI, they need to close the AI literacy gap. So they run training workshops, host webinars, and share glossy explainer decks that promise to bring their workforce up to speed. 

And yet, even after all this effort, AI projects are still failing. Literacy, while necessary, isn’t sufficient for success. 

“AI literacy is important,” says Cortnie Abercrombie, founder of AI Truth and a long-time advisor to enterprise AI initiatives. “But it’s not just about education. It’s about forcing collaboration. Literally hiring people who will get out of their comfort zones and communicate.” 

The way Abercrombie sees it, organizations often fall into the trap of thinking that education alone will bridge the gap between data teams and business units. But understanding how AI works isn’t the same as knowing how to work with AI in a real-world business context. 

“It’s like business and IT are two different species,” she says. “One’s thinking strategy and revenue; the other’s thinking systems and feasibility. And no one’s trained to translate.” 

In other words, technical fluency and business alignment are not the same thing. But too many AI strategies assume that if business leaders can just grasp the basics of how a model functions, the rest will fall into place. 

This gap between understanding and translation is more than theoretical; it has real-world, bottom-line consequences. AI initiatives get green-lit in isolation, with vague expectations, and wind up producing outputs that don’t map to any meaningful business goal. At best, they underdeliver. At worst, they actively undermine organizational trust in AI altogether.

“Too many projects fail because there was a difference in expectations and no process, no outcome defined, and no mutual accountability,” Abercrombie says.  

It’s a participation problem, too 

JoAnn Stonier, Fellow of Data and AI at Mastercard, has worked on AI governance frameworks across sectors for years and offers a complementary perspective. 

“I think it’s a participatory moment,” she explains. “It’s not that silos are bad – domain expertise still matters deeply. What’s needed is that all these perspectives get brought together.” 

In her view, the issue isn’t that different departments don’t care about AI or want to be involved. It’s that they’re not being equipped, or invited, to participate effectively. 

“We need to make sure that everybody’s trained up to participate,” she says. “Not everyone needs to be an expert in security, but they should be informed enough to engage in discussions around AI risk and governance.” 

This is a perspective that may help reframe the conversation. AI literacy isn’t just about knowing how a model works. It’s about having the confidence, context, and support to contribute to its development and deployment. 

From literacy to alignment 

The jump from isolated understanding to collaborative action is where most organizations stumble. Training programs don’t always translate to practice because they stop at knowledge acquisition, rather than building cross-functional fluency. 

Abercrombie advocates for a more integrated model. “The best way to run an AI project? Get business people in the daily stand-ups. Let them hear how things work. Make them part of the Agile loop.” 

It shouldn’t be an earth-shattering suggestion, but it’s rarely followed. Too often, AI remains the domain of a central team working in isolation. Business stakeholders, in most cases, are either disengaged or deferential.  

What’s needed is a structural shift towards embedding business users, risk specialists, and policy stakeholders into the process. Not just in post-implementation review stages – by that point, any misalignment-related damage has already been done – but in the design and iteration phases of the project as well. That’s how organizations begin to foster the kind of experiential literacy that drives adoption and results, allowing them to move beyond the “mandatory in-house seminar” brand of AI literacy education that has dominated until now. 

The new fluency: translators, not just technologists 

This new model doesn’t mean everyone in the business becomes a data scientist. But it does mean recognizing and elevating the role of the “translator”: people who can bridge technical and strategic worlds and ask the right questions at the right time. 

These individuals are often overlooked in org charts, but they’re invaluable. They’re the product owners who understand both the pipeline and the pitch deck, the policy leads who can speak risk and reward, the designers who understand AI capabilities and user pain points. 

And because they’re so valuable, they’re also increasingly hard to hire. 

So organizations must grow their own. That means pairing training with cross-departmental exposure, rotating roles across tech and business units, and giving rising talent the opportunity to develop both technical and interpersonal fluency. 

Responsible AI demands more than smarts 

Stonier makes one final point that captures what’s at stake. “As firms start exploring in a more significant way – like with agentic AI – there’s going to need to be more rigor. Governance is going to become more necessary and more robust. The conversations are going to become more difficult.”  

This is the future AI teams are heading toward. And it’ll take more than a glossary of machine learning terms to navigate it. It will take a shared language, aligned incentives, and a commitment to seeing AI as a team sport, not just a technical endeavor.

AI literacy is essential, but it’s only a first step. The organizations that find themselves at the forefront of the next AI wave won’t be the ones with the flashiest data science team or the slickest LLM demo. They’ll be the ones that know how to talk across the table, align goals across functions, and translate technical possibility into business reality.