Discover how responsible AI can drive innovation rather than hinder it. In this interview with Dr. Maruf Hossain, Vice President of Data Science at ANZ, we explore practical strategies for embedding ethics into AI development, navigating regulatory uncertainty, and building trust through transparency, governance, and collaboration—ensuring AI delivers long-term value.
How do you define ‘responsible AI’ in practice, and what does it take to embed that mindset into an organisation’s innovation culture?
Responsible AI is the discipline of designing, developing, and deploying AI systems in a way that is ethical, transparent, fair, and aligned with human values and societal norms. In practice, this means proactively identifying and mitigating risks such as bias, discrimination, privacy violations, and lack of accountability. It also involves ensuring that AI systems are explainable, secure, and subject to human oversight—especially in high-stakes or regulated environments.
To embed this mindset into an organisation’s innovation culture, it must be treated as a strategic imperative, not a compliance checkbox. This starts with leadership setting the tone and investing in education across all levels of the organisation. It also requires integrating responsible AI principles into the development lifecycle—from ideation and data sourcing to model deployment and monitoring. When ethical considerations are built into the innovation process, they become drivers of trust and long-term value, rather than barriers to speed.
What are some of the biggest challenges you’ve encountered when trying to innovate with AI while staying aligned with ethical, legal, or regulatory expectations?
One of the most persistent challenges is the lack of clarity and consistency in global AI regulations. As governments race to catch up with technological advancements, organisations are left navigating a fragmented and evolving compliance landscape. This creates uncertainty—especially for multinational companies that must align with varying standards such as GDPR, the EU AI Act, and emerging frameworks in the U.S. and Asia.
Another major challenge is dealing with imperfect or biased data, which is a central theme in my upcoming book, AI Success Beyond Perfect Data, due out by the end of the year. The book explores how organisations can still achieve meaningful, responsible AI outcomes even when working with messy, incomplete, or historically biased datasets. Balancing innovation with ethical rigour in these scenarios requires not only technical solutions but also a strong ethical compass and a culture that prioritises long-term trust over short-term gains.
How can organisations ensure transparency and explainability in AI-driven decisions, especially when those decisions directly impact customers?
Ensuring transparency and explainability starts with choosing the right models for the right use cases. In high-impact domains—such as lending, healthcare, or hiring—organisations should prioritise interpretable models or use explainability techniques like SHAP, LIME, or counterfactual analysis to make complex models more understandable. These tools help teams and stakeholders understand why a model made a particular decision and whether that decision aligns with ethical and legal standards.
Equally important is how this information is communicated to customers. Technical explanations must be translated into clear, accessible language that empowers users to understand, question, or appeal decisions that affect them. Providing transparency not only builds trust but also creates a feedback loop that helps improve model performance and fairness over time. It’s about making AI not just intelligent, but also accountable and human-centric.
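To make the explainability techniques mentioned above a little more concrete, here is a minimal sketch of per-decision attribution with SHAP. The model, feature names, and data are illustrative placeholders rather than any specific production system; the point is the pattern of producing a per-feature contribution for each individual decision that a review team can inspect and translate for customers.

```python
# Minimal sketch of per-decision explanations with SHAP (illustrative data and model only).
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical applicant features: income, tenure, utilisation, prior defaults.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 3] > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes each prediction to individual features (SHAP values).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

# Each value is that feature's positive or negative contribution to this one decision,
# which can be reviewed internally and translated into plain language for the customer.
print(dict(zip(["income", "tenure", "utilisation", "prior_defaults"], shap_values[0])))
```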
What role should governance and cross-functional collaboration play in building responsible AI frameworks?
Governance is the backbone of responsible AI. It provides the structure, policies, and accountability mechanisms needed to ensure that AI systems are developed and used in line with ethical, legal, and organisational standards. This includes defining acceptable use cases, setting risk thresholds, and establishing oversight bodies such as AI ethics boards or review committees.
Cross-functional collaboration is equally critical. Responsible AI is not just a technical challenge—it’s a multidisciplinary one. Legal, compliance, product, engineering, data science, and even marketing teams must work together to identify risks, align on values, and ensure that AI systems serve both business goals and societal expectations. When governance and collaboration are strong, organisations can move quickly while staying grounded in principles that protect people and build trust.
As AI capabilities evolve rapidly, how can businesses maintain agility without compromising on principles like fairness, privacy, and human oversight?
Agility and responsibility are not mutually exclusive—they can and should reinforce each other. Businesses can maintain speed by embedding ethical principles into their development processes from the outset. This includes adopting privacy-by-design and fairness-by-design methodologies, using automated tools for bias detection and model monitoring, and building in human oversight where needed.
To stay agile, organisations should also adopt modular governance frameworks that can evolve with technology. Empowering teams with the right tools, training, and decision-making autonomy allows them to innovate responsibly without unnecessary bottlenecks. Ultimately, the most successful organisations will be those that treat responsible AI not as a constraint, but as a foundation for sustainable, scalable innovation.
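One lightweight way to picture the automated model monitoring referred to above is a scheduled drift check that compares live feature distributions against a training baseline. The sketch below uses a two-sample Kolmogorov–Smirnov test; the threshold, feature names, and data are assumptions made for illustration, not a prescribed standard.

```python
# Minimal sketch of a feature-drift check (illustrative threshold and data only).
import numpy as np
from scipy.stats import ks_2samp

def drift_report(baseline: np.ndarray, live: np.ndarray, feature_names, alpha: float = 0.01):
    """Flag features whose live distribution differs significantly from the training baseline."""
    flagged = []
    for i, name in enumerate(feature_names):
        stat, p_value = ks_2samp(baseline[:, i], live[:, i])
        if p_value < alpha:  # distribution shift detected for this feature
            flagged.append((name, round(stat, 3)))
    return flagged

# Hypothetical baseline vs. live data, with drift simulated in the third feature.
rng = np.random.default_rng(0)
baseline = rng.normal(size=(1000, 3))
live = rng.normal(size=(1000, 3))
live[:, 2] += 0.5

print(drift_report(baseline, live, ["income", "tenure", "utilisation"]))
```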
How can organisations measure the success of AI initiatives beyond traditional performance metrics like accuracy or ROI?
Traditional metrics like accuracy, precision, and ROI are important, but they only tell part of the story. True success in AI also depends on how well a system aligns with ethical principles and societal expectations. This means measuring fairness across demographic groups, robustness to data drift, explainability of decisions, and the level of trust users place in the system. These metrics are more nuanced but are essential for long-term viability and public acceptance.
Organisations should also consider broader impact metrics—such as whether the AI system enhances human well-being, reduces harm, or supports inclusive outcomes. These dimensions are especially important as AI becomes more embedded in critical decisions that affect people’s lives. By expanding the definition of success, businesses can ensure that their AI initiatives are not only effective but also responsible and future-proof.
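As a small illustration of measuring fairness across demographic groups, the sketch below computes two common group-fairness indicators, demographic parity difference and the disparate impact ratio, on hypothetical predictions. Which metric and threshold are appropriate depends on the use case and jurisdiction; this example only shows the mechanics.

```python
# Minimal sketch of group-fairness metrics on hypothetical model outputs.
import numpy as np

def group_fairness(preds: np.ndarray, group: np.ndarray):
    """Compare positive-outcome rates between two groups (labelled 0 and 1)."""
    rate_a = preds[group == 0].mean()  # approval rate for group A
    rate_b = preds[group == 1].mean()  # approval rate for group B
    return {
        "demographic_parity_difference": abs(rate_a - rate_b),
        "disparate_impact_ratio": min(rate_a, rate_b) / max(rate_a, rate_b),
    }

# Hypothetical binary decisions (1 = approved) and group membership.
rng = np.random.default_rng(1)
preds = rng.binomial(1, 0.6, size=1000)
group = rng.binomial(1, 0.5, size=1000)

print(group_fairness(preds, group))
```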
Dr. Maruf Hossain is a speaker at CDAO Melbourne 2025. Interested in learning more about data? Join us at CDAO Melbourne this September!