Who Owns the Risk? Redefining Accountability in the Age of AI

In Singapore’s increasingly AI-driven economy, one question looms larger than most: who ultimately owns the risk? As organisations lean deeper into machine-led decision-making, accountability has become both a technical and ethical challenge — one that extends far beyond data science teams.

Singapore’s regulators have been proactive in anticipating these challenges. The Monetary Authority of Singapore (MAS) introduced the FEAT principles — Fairness, Ethics, Accountability, and Transparency — to guide financial institutions on the responsible use of AI and data analytics. These principles were later operationalised through the Veritas initiative, an industry-led effort to translate ethical AI principles into practice. The latest version of the Veritas Toolkit, an open-source resource, helps institutions assess their AI models across multiple dimensions, supporting fairness and explainability while maintaining compliance.
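
To make the fairness dimension concrete, here is a minimal sketch of the kind of check such assessments typically involve: comparing positive-outcome rates across a sensitive attribute (demographic parity). The function name, sample data, and review threshold are illustrative assumptions, not the Veritas Toolkit's actual API.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between two groups.

    A gap near zero means the model produces positive outcomes at
    similar rates across the sensitive attribute. The 0.1 threshold
    used below is an illustrative assumption, not a regulatory figure.
    """
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical loan-approval predictions and a binary sensitive attribute.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_gap(preds, group)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative review threshold
    print("flag model for fairness review")
```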

At the same time, the Personal Data Protection Commission (PDPC) has rolled out its Model AI Governance Framework, offering practical guidance for organisations to implement responsible AI governance. Together, these efforts signal Singapore’s position as one of the most advanced and forward-thinking jurisdictions for AI accountability globally.

But while frameworks are clear, implementation remains complex. A data leader in Singapore shared that the conversation has shifted — it’s no longer just about building sophisticated models but about ensuring these models can withstand ethical and regulatory scrutiny. Many organisations are now moving towards embedding accountability into their design processes, treating governance not as a compliance checkbox but as an enabler of trust and long-term business value.

The idea of shared accountability is also gaining traction. As AI ecosystems become more interconnected — spanning data providers, third-party vendors, and model developers — responsibility can no longer reside with a single function or department. Instead, forward-looking firms are starting to explore cross-functional governance models that bring together compliance, technology, and business leaders. These structures are not yet universal, but they reflect a growing awareness that risk ownership must evolve alongside AI sophistication.

Emerging tools and methodologies are helping bridge this gap. For example, explainability and bias detection features are increasingly being integrated into AI model management systems, aligning with MAS and PDPC’s emphasis on transparency and fairness. Some financial institutions have even piloted internal frameworks inspired by the Veritas methodology, where model validation teams and data ethics committees jointly assess AI systems throughout their lifecycle.
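
As one illustration of the model-agnostic explainability such systems build in, the sketch below estimates feature influence by permutation: shuffle one feature at a time and measure how much a performance metric degrades. The dataset and helper function are hypothetical; in practice, validation teams would pair a check like this with domain review.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def permutation_importance(model, X, y, n_repeats=5, seed=0):
    """Shuffle one feature at a time and measure the accuracy drop.

    Larger drops mean the model relies more on that feature -- a quick,
    model-agnostic way to spot over-reliance on proxies for sensitive
    attributes during validation.
    """
    rng = np.random.default_rng(seed)
    baseline = accuracy_score(y, model.predict(X))
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature j
            drops[j] += baseline - accuracy_score(y, model.predict(X_perm))
    return drops / n_repeats

# Hypothetical data: two informative features and one pure-noise feature.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)
print(permutation_importance(model, X, y))  # feature 0 should dominate
```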

Singapore’s approach is gaining international attention because it offers a blueprint for responsible AI adoption at scale — one that balances innovation with governance. While global regulators continue to refine their own AI policies, Singapore’s model shows that accountability doesn’t have to stifle progress. Instead, it can be the foundation for sustainable innovation.

As AI becomes further woven into decision-making, the organisations that succeed will be those that embed responsibility into every stage of their data lifecycle — from design and development to deployment and oversight. Because in the age of intelligent machines, accountability isn’t just a compliance requirement; it’s a leadership choice.

CDAO Singapore is happening on 22-23 April 2026. Join us for more insights on data, analytics, and AI!