
Three Pillars of Ethical AI Use

Six of the world’s leading female data-focused executives share their experiences of developing processes to ensure ethical and responsible AI use in their organizations

Ethical AI use depends on three core factors, according to six of the world’s top female data leaders. These are: 1) responsible data collection and governance, 2) responsible model development and 3) responsible model monitoring and maintenance.

Speaking at July’s Business of Data ‘female data executives’ virtual roundtable, they shared how their respective organizations are working to put the right processes in place to ensure ethical and responsible AI use.

“For us, it’s a very different process for a fraud use case versus an identity use case, versus a cyber abuse case,” said JoAnn Stonier, Chief Data Officer at Mastercard. “So, we’re getting more and more specific as our teams apply AI into different business cases.”

Stonier captured the views of the group when she described AI ethics as a “team sport”. Although the specific processes required may vary from case to case, ensuring any application of AI technology is used responsibly requires stakeholders from multiple teams to work together.

AI Ethics Starts with Data Ethics

Responsible and ethical AI use means ensuring AI models function in a way that is fair, unbiased and in customers’ best interests.

This process must begin with ensuring the data that feeds into AI models is collected, governed and used ethically, as Sathya Bala, Head of Global Data Governance at Chanel, was keen to point out.

Bala said: “Data governance can play a really key role in terms of connecting the dots between AI models and data ethics.”

“We should assume that bias exists wherever we’re using historic datasets,” she continued. “So, unless you are consciously creating interventions to ensure that your data and algorithms are fair, we should assume bias is built in, until we build it out.”

Minna Kärhä, Principal Consultant, Data Strategist at Vainda Consulting and former Data and Analytics Lead at Finnair, agreed: “Everyone needs to understand what data is used and what algorithms are used to provide that insight, to be able to understand why certain decisions are made.”

Customers arguably have a right to ask for an explanation about how an AI system arrived at a decision or recommendation that affects them. And being able to provide that explanation starts with understanding the data that feeds into AI models.

AI Ethics Spans the Whole AI Model Lifecycle

Of course, ensuring that the data feeding AI systems is collected ethically and is representative of the audience those models will be making decisions about is just the first pillar of ethical AI.

Enterprises must also ensure that AI models are developed responsibly, so that they operate in ways that treat whoever or whatever they are analyzing fairly. Then, executives must ensure these models are effectively monitored and maintained so that they continue functioning fairly over time.

“At Finnair, we started from defining the ethical principles for AI models,” Kärhä explained. “For us, first of all, it meant that all the models needed to be understandable and transparent.”

She continued: “Then, the most important thing in the long run is about continuously monitoring and taking care of the model and the input data.”

Harleen Thethy, Head of Analytics at BBC Global News, agreed that transparency around how models operate is essential for AI ethics.

“For us, it’s just important to ensure transparency in the data that we’re collecting and how it’s monitored and manipulated in the model,” Thethy said. “We need to be able to explain and justify our data and try to eliminate that bias as much as we possibly can.”

Meeting this requirement means having processes in place to govern the end-to-end AI model lifecycle.

Ethical AI Best Practices are Still Evolving

Although global AI maturity has come a long way in recent years, the field of AI ethics is still in its infancy.

Regulators and industry bodies are publishing guidelines on AI ethics to act as frameworks for businesses to follow. But the complexity of AI initiatives means enterprises are having to learn and adapt as they go.

“I think it’s important for an organization, when they deploy any sort of application, to be reflexive and learn as you’re going,” said Besa Bauta, Chief Data Officer at social care non-profit MercyFirst. “Especially when you’re using artificial intelligence.”

As Stonier noted, part of the challenge is the need for stakeholders across multiple teams to work together to ensure models are developed and used responsibly over time. Our research shows that many enterprises are still developing the teams and functions necessary to do this.

At the same time, executives believe the responsibilities they have around AI ethics vary depending on how an AI system is being used. For some, these responsibilities include making sure staff and customers are able to adapt to the introduction of AI into their daily lives.

While the six executives involved in this roundtable are all working to make AI ethics a core part of their corporations’ strategies, there’s still work to be done across the business community to put guard rails in place to support ethical and responsible AI use.

“Responsible AI actually goes beyond data ethics; it also speaks to business ethics,” concluded Maritza Curry, Head of Data at financial services company RCS. “We should challenge our decisions around this.”