Three principles form the backbone of the many AI ethics frameworks created around the globe in recent years: fairness, accountability, and transparency.
As Dr Rachel Adams, Senior Research Specialist at the Human Sciences Research Council, a South African think tank, says in this episode of Data Conversations Over Coffee, the proliferation of these frameworks reflects how AI is reshaping society.
"Data scientists and those developing these kinds of technologies have a very powerful role within society," she notes. "They have a particular social responsibility to think very carefully and very clearly about where and how it may negatively impact on society."
In practice, that means developing processes to ensure AI is used for good, to eliminate bias from AI systems, and to ensure people can understand how AI affects their lives.
"We need to be very attendant to the fact that bias and discrimination occur and do absolutely everything we can to try and address that," she argues.
However, Dr Adams stresses that organizations can't simply "copy and paste" AI ethics frameworks that exist elsewhere. Every company, organization, and nation has its own specific context that shapes its ethical duties with respect to AI.
"[It's important] to think about ethics from a South African context," she concludes. "[This] does mean a particular consideration around race, the history of colonialism and racialization. But it also means values like ubuntu and understanding ourselves through others."