Why AI Leaders Should Enforce Governance at the Enterprise Level

Many efforts to oversee AI projects lack clear organizational ownership and accountability, leaving teams misaligned and at risk of duplicating work

By Corinium Global Intelligence 

A growing number of organizations now treat AI governance as an enabler, rather than a barrier to realizing ROI. But while best practices can reduce time-to-value, there are several obstacles to adopting new governance platforms and programs.  

Chief among them, according to recent research from Corinium and ModelOp, is the need to integrate fragmented systems: 58% of C-Suite leaders we surveyed named this as one of their biggest challenges.   

Close behind is replacing or scaling manual processes, which is a key challenge for 55% of organizations. Internal procurement policies and administrative burdens are an issue for 53%, while 43% say regulatory or compliance hurdles are holding them back.   

One major hurdle in replacing manual processes is documentation.

“It’s hard to get data scientists to stop doing data science and document their models – how they work, what data they’re based on, what tests were run, and so on,” says Skip McCormick, CTO of Cornerstone Technologies.  

“If you don’t capture that information while it’s fresh in their minds, it becomes nearly impossible to get later,” he adds. “If they’ve worked on three other models since then, they won’t remember why they made certain decisions. That becomes a major problem during audits.”  

Documentation is costly both because it takes time away from well-compensated data scientists, and because it demotivates them, he adds. “If you make them do too much documentation, they’ll go work somewhere else because they hate doing it.”   

He recommends that leaders consider AI lifecycle automation and governance solutions to manage this issue.   
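The report does not prescribe a specific tool, but the idea behind lifecycle automation is to make documentation a by-product of the work itself rather than a separate chore. The Python sketch below is purely illustrative, with hypothetical names (capture_model_record, churn_classifier): it records the training data, test results and key design decisions the moment a model is trained, while the details are still fresh.

```python
import json
import platform
from datetime import datetime, timezone

def capture_model_record(model_name, version, training_data, test_results, notes=""):
    """Assemble a minimal, machine-readable model record at training time.

    Capturing this while the work is fresh avoids reconstructing decisions
    months later during an audit.
    """
    record = {
        "model": model_name,
        "version": version,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "environment": {"python": platform.python_version()},
        "training_data": training_data,   # e.g. dataset names, versions, date ranges
        "test_results": test_results,     # e.g. metric name -> value
        "decision_notes": notes,          # why key modelling choices were made
    }
    with open(f"{model_name}_{version}_record.json", "w") as f:
        json.dump(record, f, indent=2)
    return record

# Documentation is produced as a side effect of the training run,
# not as extra paperwork for the data scientist.
capture_model_record(
    model_name="churn_classifier",
    version="1.3.0",
    training_data={"dataset": "crm_events", "snapshot": "2024-11-01"},
    test_results={"auc": 0.87, "bias_check": "passed"},
    notes="Dropped tenure feature after finding leakage in validation.",
)
```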

Leaders lack confidence in AI traceability  

Among our respondents, only 36% said their organizations had strong capabilities in documentation for generative AI, while 51% said this ability was moderate and 13% said it was weak.   

These difficulties with documentation help to explain why most organizations do not have a high level of confidence that their generative AI models are traceable.   

Only 28% had high or complete confidence that they had full traceability between AI use cases and specific test results related to them. When it came to connecting use cases to exact deployment locations, 23% said they had limited confidence in traceability, while 38% had high or complete confidence.   

Leaders felt more upbeat about their ability to link use cases to the underlying training data, with 46% reporting high confidence and 13% reporting limited confidence.   

Our survey also identified gaps in interpretability, suggesting more work is needed to ensure AI-driven decisions are understandable, auditable and compliant with regulations. Only 40% felt they had strong capabilities in this area.   

More respondents felt capable of providing visibility for users, which includes enabling transparency into the model lifecycle and showing where models are in the review and approval process. Almost half – 48% – said they had strong or very strong abilities here. At the same time, though, a full 20% felt their capabilities were weak.   

Those struggling to establish clear links between use cases, datasets and model outputs face compliance risks, difficulty in auditing AI decisions and challenges in ensuring fairness and accountability.   
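What "full traceability" means in practice is simply a complete chain of links from a use case to everything behind it. As a rough illustration only, assuming a hypothetical in-house inventory rather than any particular product, a single lineage record might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class TraceabilityRecord:
    """One end-to-end lineage entry linking a use case to everything behind it."""
    use_case: str                    # business use case the model serves
    model_version: str               # exact model build in production
    training_datasets: list[str]     # datasets (and versions) the model was trained on
    test_results: dict[str, float]   # validation metrics tied to this version
    deployment_locations: list[str] = field(default_factory=list)  # where it actually runs

# With records like this in an inventory, "which data and tests sit behind
# this model?" becomes a lookup rather than an investigation.
record = TraceabilityRecord(
    use_case="customer-email triage",
    model_version="triage-llm-2024.10",
    training_datasets=["support_tickets_v7"],
    test_results={"accuracy": 0.91, "toxicity_rate": 0.002},
    deployment_locations=["eu-prod-cluster"],
)
```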

Automation offers a big opportunity in AI governance  

Generative AI assurance is a top-budgeted initiative for 2025 among our respondents: 60% said they are prioritizing funding here, underscoring the recognized need to test and validate AI models for accuracy, bias, and compliance.

At the same time, however, our survey identified potential issues with the enforcement of assurance processes. Just 14% of our respondents perform AI assurance at the enterprise level, while 51% perform these functions at the business level and escalate reporting to the enterprise level. The remaining 35% keep all assurance processes within business units.   

This opens the door to duplicated work, misalignment between teams, and a lack of clear ownership and accountability. If reporting between business units is not consistent, data that makes its way up the chain of command may be used to draw inaccurate conclusions, based on the assumption that it was all collected in the same way.

Everyone in our survey indicated that they had some kind of formal generative AI assurance process in place. This shows enterprises recognize the importance of assurance even if there is still work to do to mature the frameworks supporting it.  

A fully developed assurance framework will help to streamline overall AI governance. Organizations hoping to streamline governance should also ensure they have a comprehensive overview of all their models, McCormick says.

“A systematic model inventory is essential. Companies with legacy governance processes often rely on manual steps – documents are written, sent to reviewers, assessed and sent back with questions,” he says.   

“That process can take six months just to get a model approved. Then, depending on its risk classification, models might be reviewed every six months for medium risk or every two months for high risk.”  

There is a major opportunity to automate much of this process, he adds. “The problem is that humans are involved in every step. That made sense when these processes were first designed because there was no alternative. But now we can automate and accelerate a lot of that work – without removing human judgment. The goal isn’t to replace humans but to speed up the process so that human expertise is applied where it matters most. That’s the real opportunity in modern AI governance.”  
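To make the idea concrete, here is a minimal sketch, in Python and with invented data, of how an automated inventory might apply the risk-based cadences McCormick mentions: the scheduling and chasing is handled by the system, while reviewers spend their time on the judgment calls.

```python
from datetime import date, timedelta

# Illustrative review cadences echoing the intervals quoted above:
# high-risk models every two months, medium-risk every six months.
REVIEW_INTERVALS = {
    "high": timedelta(days=60),
    "medium": timedelta(days=180),
    "low": timedelta(days=365),
}

def models_due_for_review(inventory, today=None):
    """Return models whose last review is older than their risk-based interval."""
    today = today or date.today()
    return [
        m for m in inventory
        if today - m["last_reviewed"] >= REVIEW_INTERVALS[m["risk"]]
    ]

inventory = [
    {"name": "credit_scoring", "risk": "high", "last_reviewed": date(2025, 1, 10)},
    {"name": "churn_classifier", "risk": "medium", "last_reviewed": date(2024, 12, 1)},
]

# Flags credit_scoring (81 days since review, high risk) but not churn_classifier.
print(models_due_for_review(inventory, today=date(2025, 4, 1)))
```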

To get a more in-depth look into leveraging AI governance to accelerate time-to-value, download our report, which is based on insights from 100 C-Suite leaders.