AI ethics emerged as a key barrier to enterprise AI adoption in research analytics company FICO commissioned Corinium to conduct: a survey of 100 CDOs, CAOs and CDAOs about their AI strategies. So, for the second episode of the Business of Data podcast, we invited FICO CAO Scott Zoldi to join us and share his views on the findings of this research.
"The hype cycle of AI is over and the hard work has begun," he says. "To the extent that the data which is around our society it biased (which it is), you need models that you can demonstrate do not necessarily reflect those biases."
For Zoldi, the buck for AI ethics stops with a company's CDO or CAO. It's up to them to get ethics recognized as a board-level issue and to put processes in place that guarantee ethical AI usage.
"They have to define one standard within their organization," he explains. "They need to make sure it aligns from a regulatory perspective. They need to align all their data scientists around a centralized management or standardization of how you do that. And that takes a lot of work."
Crucially, Zoldi stresses that enterprises must monitor AI systems on an ongoing basis to be sure they're still using AI ethically. Our research shows that just 33% of AI-using enterprises currently do this.
"Look at the pandemic," Zoldi argues. "[The pandemic] affects different protected and ethnic groups differently, based on their exposure to the virus and the types of work that they're forced to do. That means, [certain] models that may have been ethical at the time they were built are no longer ethical today."
He concludes: "You're not done with the model when you're done building it. You're done with the model when it ceases to be used."