The AI Ethics Debate is Heating Up
With reports of unfair, biased or flawed AI use becoming common in the press, we ask whether stricter AI and data ethics regulations are on the way
It’s a tale as old as capitalism itself. Big business argues that strict regulations will stifle innovation and give an edge to international competitors. But activists argue that more rules are needed to protect the public.
In the world of data ethics, this long-running debate is heating up rapidly. With media reports of AI ethics violations on the rise, it looks like the scales are starting to tip against those arguing against tougher regulations.
“Unless you have the legal requirement for companies to abide by the law, it is highly unlikely they will engage in [ethical] behavior”
Ganna Pogrebna, Lead: Behavioral Data Science, The Alan Turing Institute
As IBM’s former Global AI Offerings Executive, Cortnie Abercrombie has seen the AI ethics story unfold with her own eyes. So, it’s telling that she left the tech giant in 2018 to found AI ethics-focused non-profit AI Truth.
“Part of my job was to understand the AI operations of the world’s largest companies,” she recalls. “What I found in some cases threatened not only the companies’ brands, but their ability to get a return on investment from some of the AI solutions they were building.”
“Right now, the only time we see real change on the AI ethics front is when the news picks up stories about their AI ethics violations,” she adds. “So far, public anger has mostly been directed at technology firms. But I think in the future, they won’t be the only ones whose reputations will be on the hook.”
Calls for AI Ethics Regulations on the Rise
It’s important to note that Abercrombie views AI ethics regulations as a last resort. She thinks Europe’s GDPR regulations have held up innovation on the continent and worries that strict regulations could do the same in the US.
But Abercrombie is just one voice in a growing chorus of industry leaders and institutions that say the current way of doing things often doesn’t work. The Alan Turing Institute, the UK’s national institute for data science and AI, is another organization that’s calling for change.
“Self-regulation does not work,” argues Ganna Pogrebna, Lead: Behavioral Data Science at The Alan Turing Institute. “At the moment, companies decide for themselves whatever they think is ethical and unethical, which is extremely dangerous.”
“There is a huge amount of attention and energy going into developing good principles for how to do AI ethically,” continues Mark Caine, Lead: AI and ML at the World Economic Forum. “What we’re seeing is a bit of an implementation gap.”
“We have got about halfway in terms of having a really robust conversation about what ethical principles AI should be governed by,” he adds. “We still have a long way to go in terms of making sure that actually happens.”
Of course, AI ethics regulations could take many forms. Caine envisions a world where companies display their AI ethics credentials in a similar way to Fair Trade labels on food and clothing. Meanwhile, Pogrebna supports expanding the Universal Declaration of Human Rights to include digital rights and creating organizations to act as custodians of the public’s data.
“You need legal regulations in place,” she concludes. “Unless you have the legal requirement for companies to abide by the law, it is highly unlikely they will engage in [ethical] behavior.”
The Case for Data Ethics Self-Regulation
Time may be running out for the business community to prove that it can be trusted to tend its own garden when it comes to data ethics. But there are certainly data and analytics leaders who are thinking ahead about these issues and showing that self-regulation approaches can work.
MercyFirst CDO Dr Besa Bauta is one such executive. For the past three years, she has been balancing her mandate to advance the human services organization’s data strategy with her dual role as its Chief Compliance Officer.
“Thinking about information and privacy is really key,” she says. “For me, it’s always thinking about, ‘Why is this important to ask? Is it for the purpose of ensuring patient safety? Or is this information going to be used for something else? What are the potential risks of collecting and storing this information?’”
Developing a COVID-19 contact tracing system recently gave Dr Bauta an opportunity to make self-regulation and data ethics essential steps in application development. Data privacy considerations meant exploring options for protecting staff health information while ensuring patient and staff safety.
“We had a lot of ethical questions with regard to how information will be used,” she explains. “Technically, we’re not their healthcare providers. So, I don’t want to keep that information on our system.”
Johns Hopkins Healthcare Senior Director of Healthcare Economics and Data Science Romy Hussain is another example of a data leader who is taking steps to ensure her organization uses data and AI ethically.
She says ensuring development teams are diverse and have access to the right subject-matter expertise is vital for ensuring AI systems don’t cause unintended consequences.
“I developed an ethics advisory board for precisely that reason,” she says. “There have been times where we’ve fundamentally shifted the way we build, train or conceptualize a model or project based on the input of this group.”
The relative success of the ethical frameworks and processes Hussain and Dr Bauta have put in place shows that it is possible for organizations to regulate their own data and AI usage. But the question remains: What will it take to ensure that all companies consistently use AI ethically and in their customers’ best interests?
Finding a Middle Ground on AI Ethics
The fundamental tension of the AI ethics debate is between regulation and innovation. Companies have a duty to consider the potential ethical impact of their AI initiatives and adjust their plans accordingly. But there are forces within many organizations that push data and AI leaders to cut corners.
For Hussain, the key to resolving this conflict lies in juggling multiple high-value projects at once. That way, other projects can be ramped up when one must be paused for ethical adjustments.
“We have got about halfway in terms of having a really robust conversation about what ethical principles AI should be governed by”
Mark Caine, Lead: AI and ML, World Economic Forum
“The nice thing is that I have enough insight into both sides of the equation that I’m able to balance them,” she explains. “Where we know we have to slow down the ‘inpatient length of stay’ model, for example, we can run with another one for a couple of weeks while we figure things out.”
However, not all data leaders will be able to juggle projects like this. Some may even lack the political capital within their organizations to steer the business agenda in this way.
For this reason, it seems like stricter laws around data and AI ethics are inevitable. Some may want to delay this for as long as possible. But it could prove more prudent for the business community to partner with regulators to design a system of rules that is compatible with innovation.
Either way, it seems likely that pressure on companies to use data and AI ethically will continue to build in the months ahead. Forward-thinking data leaders should be acting now to protect their organizations’ reputations and futureproof their operations for when new regulations do arrive.
This is an extract from our 2021 Transformational Data Strategy US report, which contains more exclusive insights about the technologies shaping the future of data and analytics in America.