Discussing Responsible Technology with Australia's Former Human Rights Commissioner

University of Technology Sydney Industry Professor for Responsible Technology, Edward Santow, talks AI Ethics and Innovation

It can’t be denied that data analytics and AI are two of the major drivers propelling the world along the path of digital transformation. It’s also fascinating to imagine how much our lives will continue to change as these technologies advance.

While it’s an exciting time in digital innovation, it’s also a time of discourse around how best to chart our course forward, and what guardrails must exist to guide this industry to proceed in the most responsible way.

Professor Edward Santow, previously the Australian Human Rights Commissioner, began thinking seriously some years ago about how AI might impact society from a fairness perspective.

“It can simultaneously be a force for good in terms of connecting people and making good economic decisions, but it also poses the risk of supercharging unfairness, making decision-making less accurate and less rational,” he says.

AI models such as artificial neural networks learn about the world from examples provided to them by humans.

One of the concerns in the field of responsible AI is that a neural net’s understanding of things like language, objects, images, data and the correlations between them is entirely shaped by the training data it is given. If that data is biased, those biases will be built into the model, which will then replicate biased decision-making.
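To make the mechanism concrete, here is a minimal sketch, not drawn from the article, of how bias in historical training data resurfaces in a model’s predictions. It uses scikit-learn on a synthetic “approval decision” scenario; the group and skill features, the coefficients and the scenario itself are all illustrative assumptions.

```python
# Illustrative sketch only: how bias baked into historical labels
# reappears in a model trained on them. Synthetic data, hypothetical scenario.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One binary "group" attribute and one genuinely relevant "skill" score.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0.0, 1.0, size=n)

# Historical labels reflect biased human decisions: at the same skill
# level, group 1 was approved less often.
logits = 1.5 * skill - 1.0 * group
approved = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Train a model on those historical decisions.
X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, approved)

# Two identical candidates who differ only in group membership.
p0 = model.predict_proba([[0, 0.5]])[0, 1]
p1 = model.predict_proba([[1, 0.5]])[0, 1]
print(f"approval probability, group 0: {p0:.2f}")
print(f"approval probability, group 1: {p1:.2f}")  # noticeably lower
```

Running this shows the model assigning a lower approval probability to one of two otherwise identical candidates, purely because the historical labels it learned from were skewed.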

Santow now works as Industry Professor for Responsible Technology at UTS, where he is focused on tackling the coming challenges of fairness in AI across industries.

“There are three key things that we’re focusing on,” he says. “First of all, what are the strategic skills that government and the private sector will need to procure and then implement AI safely and effectively?

“Secondly, what are the tools that people are going to need to help them realise good intentions with AI? The vast majority of companies don't want to do the wrong thing, but they need practical help in getting there.

“Thirdly, we've established a policy lab, which will explore things like whether we need to adjust some of our policy and legal settings to ensure that we're heading in the right direction.”

Squaring Innovation with Caution

Santow wants organisations to be able to embrace AI for good, and he feels that tweaking policies and legislation to keep pace with modern technology usage is far from being an arbitrary hurdle to innovation.

During his presentation at Corinium’s CDAO Sydney conference earlier this year, he noted that the “move fast and break things” approach to technology innovation touted by Facebook’s Mark Zuckerberg might no longer be an intelligent approach, given the scandals that Facebook has experienced in recent years.

As users have grown weary of big tech pushing boundaries on privacy and use of personal data, maintaining public trust and being accountable for tech decisions may be the way forward. But would regulating this clash with the spirit of innovation? Santow doesn’t believe so.

“I think some of the best innovation and certainly some of the most enduring innovation tends to see regulatory guard rails not as a problem or a bug but rather something that can really help channel innovation that will be safe, bulletproof in a regulatory sense and that will also meet the needs of citizens,” he says.

“Government is imperfect, but its whole reason for being is to support citizens, to protect them against harm and to help them flourish. That's really what our regulation is meant to do. On the whole, requirements that are set out in law are actually quite useful in guiding innovators to give people the kind of products and services that they want and need, not the ones that they fear.”

Roles in Responsible Technology

Technology, data and AI are increasingly influencing business decisions. To help organisations navigate the changes and unfamiliar risks that digital transformation brings, many new roles around data and AI ethics are emerging.

Several banks, for example, have created senior data ethics roles. So how does a responsible technology advocate like Santow view these developments?

“I would cautiously welcome the rise of new roles like data ethics specialists and responsible AI specialists,” he says. “I think it's a good thing in the sense that it signals that an organisation has identified these things as important issues. They should be investing in their people having relevant expertise.

“The only reason I remain a little bit cautious is that you don't want those people to become siloed.

“To put it more positively, they need to be agents of change. They need to go into the organisation and make responsible AI, fairness and those sorts of measures integrated throughout the entire organisation, rather than something that is turned to briefly at the end of a design process in the hope of getting a tick of approval.”

AI’s Biggest Benefits and Urgency of Action

The evolution of AI through a responsible lens is something Santow says we can all get excited about, and he welcomes the many benefits AI can bring as a tool used by specialists to make good, quick decisions on certain problems.

“We're seeing, for example in precision medicine, that the best AI-powered applications can diagnose skin cancers as reliably as, or more reliably than, highly trained human doctors. That’s amazing. It will literally save lives,” he says.

“It’s also really good because it shows how doctors and machines can work in tandem. You might rely heavily on the machine to help you determine what grade of skin cancer a patient has, but then there is a suite of decisions that a human is better placed to make in terms of what the treatment regimen might be.”
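As a rough illustration of that division of labour, here is a minimal sketch, not from the article, of a human-in-the-loop routing pattern: the model proposes a grade with a confidence score, and anything other than a high-confidence benign result is escalated to a clinician, who always owns the treatment decision. The grade labels, threshold and function names are all hypothetical.

```python
# Hypothetical human-in-the-loop sketch; not a real diagnostic system.
# The grade labels and the confidence threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModelOutput:
    grade: str         # the machine's suggested grade, e.g. "benign", "grade_2"
    confidence: float  # the machine's confidence in that suggestion

def route(output: ModelOutput, threshold: float = 0.95) -> str:
    """The machine only ever suggests a grade; treatment decisions
    always rest with a human clinician."""
    if output.grade == "benign" and output.confidence >= threshold:
        return "routine follow-up (clinician spot-checks a sample of these)"
    return "refer to clinician for review and treatment planning"

# Even a high-confidence non-benign grade is escalated to a human.
print(route(ModelOutput(grade="grade_2", confidence=0.99)))
```

The design choice here is that the machine narrows the problem while the human retains authority over the consequential decision, which is the tandem Santow describes.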

Santow does, however, view the changes that AI is set to bring about in business and society as rapidly approaching, which makes discussion of responsible technology and ethics quite urgent.

“There are three things happening around us. The first is exponential growth in the use of AI by both companies and government, and yet it is failing at a very high rate,” he says. “Gartner has predicted that through 2022, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for them, which is worrisome.

“The second thing is that the overall risk environment is changing: regulatory, reputational and commercial risks are all growing. That means people who are using AI really need to think very concretely about how they will manage those risks.

“Thirdly, and more positively, there's a real demand on the part of citizens and consumers for AI products that treat them fairly. There is a real self-interest in getting this right and in putting those fairness approaches right at the centre of how you develop and then use AI.”