Cortnie Abercrombie: Executives Need to Get Serious About AI Ethics

Neglecting AI ethics is more than just a reputational risk, argues Cortnie Abercrombie, former IBM executive and Founder and CEO of non-profit organization AI Truth. It also harms the long-term ROI of AI systems.

Would you please tell us a bit about AI Truth and why you left your role as IBM’s Global AI Offerings Executive to found the organization in 2018?

I led a Shark Tank-style AI solutions incubator for IBM’s consulting group. I traveled the globe working with IBM’s Fortune 500 clients, their data science teams, AI researchers and C-level executives.

Part of my job was to understand the artificial intelligence operations of the world's largest companies. What I found in some cases actually threatened not only the companies’ brand reputations but also their ability to get any return on investment from some of the AI solutions they were building.

I would get access to their data science teams and see what they were doing. I was able to see top-down and bottom-up across all the different operations, and I started noticing the same behaviors everywhere, behaviors that were just not as good as they could be.

It wasn't that anyone had malicious intent. It's just that they didn’t have any set operating models or norms, because the field of machine learning as we know it today is fairly new and many machine learning data scientists were, and still are, fresh out of college.

IBM brought a lot of that structure and tightened methods and processes with clients. But to me, it revealed a pervasive gap in most firms, one that left too much room for unintentional harm to everyone not ‘in the room’.

What are the consequences of developing AI in these non-standardized ways, from a business perspective?

The data science teams working on these projects are typically in silos underneath a particular executive or specialist. But those data science leads would often leave after around 12 to 18 months.

So, their incentives were basically to try and get as many high-profile, high-visibility projects done as they could and then go on to the next company and do the biggest challenge they could find.

Then, what was happening was that businesses were literally retiring million-dollar algorithms because they had no way to understand what went into them. Once the lead data scientist was gone, there was really no way to understand or trust some of these algorithms.

How much progress would you say has been made in the field of AI ethics since you launched AI Truth?

Unfortunately, AI ethics is still being treated like a huge roadblock in the way of progress. People would rather borrow, beg, steal or do whatever they need to do to get the data and produce a Minimum Viable Product in six to eight weeks at any cost. The culture we have adopted so far for data science has mostly been that of Silicon Valley, which is, 'Move fast and break things'.

If you do a lot of research on what AI ethicists in general have achieved in their organizations up to this point, you'll see that they keep encountering these same roadblocks. They’re told by data science and product teams 1) ‘I don't want to slow down’ and 2) ‘If I slow down, I may lose my job because I don't have incentives to do things the right way’.

So, until culture changes around this stuff, we won't see real progress. I really don't know what's going to finally catalyze the change. But it seems like it's going to have to be regulations, unfortunately.

Why do you think executives should care more about AI ethics?

Oh, that's easy. They have brand reputation to uphold.

When it comes out that they've been scraping or violating people's data privacy, swapping data with business partners they weren't supposed to, or that they know something personal they shouldn't because they bought data they shouldn’t have, I think brands’ reputations will start to come to the forefront.

Right now, the only time we see people making real change on the AI ethics front is when the news picks up stories about their AI ethics violations, for example when Amazon was called out over biased facial recognition technology.

So far, public anger has been mostly pointed at technology firms. But I think in the future, they won't be the only ones whose reputations will be on the hook.

From what you were saying earlier, it sounds like there’s also a link between AI ethics and extracting ROI from AI?

The reason we're not seeing AI at a huge scale is because people can't trace it, track it or be accountable for it. They can't rectify it when it goes wrong because of the behaviors I've talked about before.

When you let the one person who happens to know about all the aspects of your algorithm walk out the door to some other company, then you have no return on investment, because you're going to retire that million-dollar algorithm without having used it.

There's a whole underpinning that has to happen before we can get to a point where people can actually trust AI and companies can reliably extract ROI from the technology.

The underpinning that needs to be there to make AI trustworthy is:

1) Data privacy
2) Consent
3) Agency or control (can I change it if it has something wrong about me?)
4) Transparency
5) Access
6) Algorithmic explainability (can I understand what it did and why?)
7) Accountability (who could fix which parts of the solution?)
8) Traceability (which part went wrong, and when?)
9) Rectification processes when things go wrong
10) Feedback loops
11) Governance processes (constant monitoring, tweaking, fixing)
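As a rough illustration of what part of that underpinning can look like in practice, the sketch below shows a minimal, hypothetical governance record a team might attach to every deployed model, touching a few of the items above (accountability, traceability, consent, rectification and governance). All names, fields and values here are illustrative assumptions, not a schema Abercrombie describes or any standard framework.

```python
# Minimal sketch (illustrative only): a governance record kept alongside each
# deployed model so it stays traceable and accountable after its original
# author leaves. Field names are hypothetical, not a standard schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ModelGovernanceRecord:
    model_name: str                    # e.g. "credit_risk_scorer_v3"
    owner: str                         # accountability: who can fix it today
    training_data_sources: list[str]   # traceability: where the data came from
    consent_basis: str                 # consent: why the team may use that data
    explainability_method: str         # how individual decisions are explained
    rectification_contact: str         # where affected people can get errors fixed
    change_log: list[str] = field(default_factory=list)  # governance: monitoring, tweaks

    def log_change(self, note: str) -> None:
        """Append a timestamped entry so every tweak stays traceable."""
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        self.change_log.append(f"{stamp} {note}")


# Usage example: the record outlives any single data scientist.
record = ModelGovernanceRecord(
    model_name="credit_risk_scorer_v3",
    owner="risk-analytics-team@example.com",
    training_data_sources=["loan_history_2020_2023", "bureau_feed_v2"],
    consent_basis="customer agreement, section 4 (illustrative)",
    explainability_method="per-decision feature attributions",
    rectification_contact="model-appeals@example.com",
)
record.log_change("Retrained on Q3 data; bias audit passed.")
print(record.change_log[0])
```

The point of such a sketch is not the specific fields but that ownership, data lineage and change history live with the model rather than in one person's head, which is exactly the gap behind the retired million-dollar algorithms described above.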

Corporations haven’t got there yet because that’s asking a lot and there hasn’t been enough external pressure for them to tackle these issues.