
Why Trustworthy AI Starts With Better Decisions

Celio Oliveira, Canadian author and founder of FedEthics Inc., is one of the world's leading voices on ethical AI. In this article, he argues that responsible AI governance must begin with decision quality, constitutional values, and the discipline to challenge data before acting on it.

For Celio Oliveira, the conversation about trustworthy AI does not begin with tools. It begins with decisions.

As governments and organizations seek to expand their use of AI, Oliveira argues that leaders must pay closer attention to the architecture behind the choices they make. Who is accountable? What evidence is being used? Which assumptions are being carried forward? And how can leaders ensure that AI-enabled decisions reflect public value, rather than simply technical possibility?

These questions are at the core of Oliveira's new book, Constitutional Intelligence: A Decision Architecture for Trustworthy AI Governance. But this is not typical thought leadership or analysis. It's a practical guide for leaders who want to govern AI with discipline and intention.

Trustworthy AI Starts Before the Technology

AI proliferation has created pressure on leaders to modernize quickly. But Oliveira warns that adopting new technology without changing the thinking and processes around it will not deliver meaningful transformation.

“Bringing new tools is not going to solve any problem if we keep using old processes,” he says.

That point is especially important in government, where technology decisions affect public services, taxpayer money, and citizen trust. Oliveira believes public-sector leaders must be prepared to challenge not only vendors and technical teams, but also their own assumptions about what AI is for.

For him, responsible AI leadership is not about chasing innovation for its own sake. It is about asking why a tool is needed, whether AI is the right approach, and what consequences may follow from its use.

“Why are we making this choice of using artificial intelligence instead of using simple automation, for example?” he asks.

Decision Architecture Is the Foundation of Responsible AI

Oliveira argues that senior leaders are often presented with dashboards, metrics, and recommendations that appear authoritative. But information is never neutral, and model outputs are not a substitute for executive judgment.

Executives must ask whether the evidence behind a recommendation is sound, whether the right measures are being used, and whether the decision aligns with the broader responsibilities of the organization.

“We own them, we are accountable for them,” Oliveira says of leadership decisions. “It’s just doing our job instead of ticking boxes.”

This is where decision architecture is crucial. In Oliveira’s view, leaders need structures that force them to examine context, evidence, risks, expected outcomes, and accountability before they act.

He gives the example of healthcare, where a model may appear accurate overall but still create serious harm if leaders focus on the wrong metric. A false positive or false negative in a cancer-related prediction can have life-changing consequences.

In those cases, Oliveira argues, leaders must look beyond headline accuracy and consider what different types of error mean for real people.

Constitutional Intelligence Can Guide AI Governance

The concept of “constitutional intelligence” emerged from Oliveira’s concern that AI regulation is still evolving, while public-sector leaders need practical guidance now.

Rather than waiting for perfect regulation, he argues that governments can already draw on constitutional principles, including rights, freedoms, dignity, access, and protection from harm.

“We just need to translate whatever is in the Charter of Rights and Freedoms and bring [it] to the digital world,” Oliveira says.

This framing gives AI governance a stronger foundation rooted in public purpose. It shifts the discussion from what technology can do to what governments are obligated to protect.

In practice, that means asking whether an AI-enabled service improves access, whether it risks excluding vulnerable groups, whether it could cause harm, and whether citizens’ rights are protected in the design and delivery of digital services.

Biased Data Leads to Biased Outcomes

Oliveira is also clear that data-driven decision-making does not automatically produce fair decisions.

“Everybody’s trusting data nowadays,” he says. “But who said the data is to be trusted?”

Data reflects the systems that produce it. If those systems contain historical inequities, the data may reproduce those inequities unless leaders actively intervene.

Oliveira points to recidivism analysis as an example. If leaders use criminal justice datasets without examining the social and systemic factors behind them, they risk building models that replicate existing patterns of discrimination.

“If we get the data set and use them as is, we are just replicating the racism, the prejudice that always had,” he says.

Responsible AI governance requires analysis of the reality behind the data. We must question how that data was created, who is represented, who is missing, and what assumptions will be embedded into any model or policy built from it.

More Data ≠ Better Data

Oliveira also challenges the idea that organizations always need more data.

In fact, he warns that many organizations are becoming “data hoarders,” collecting and storing information without a clear plan for how it will be governed, shared, corrected, or used.

“We don’t need more data,” he says. “We need to work better with the data that we already have.”

This has implications beyond privacy and governance. More data means more storage, more cloud consumption, more processing power, more cost, and a larger environmental footprint.

For public-sector leaders, Oliveira argues that the question should not be how much data can be collected. It should be whether the data serves a legitimate purpose, whether citizens have consented to its use, and whether mechanisms exist for people to correct inaccurate or harmful information about them.

He also believes leaders should widen how they define value. In government, success cannot be measured only in financial terms. Social impact, cost avoidance, service quality, and sustainability all matter.

A Practical Roadmap for Public-Sector AI Leaders

Oliveira’s book is designed to help leaders take practical steps to improve how they make decisions and work with data.

He describes frameworks for clarifying who should be involved in decisions, what evidence should be considered, and how leaders can challenge recommendations with more rigor.

The book also includes 90-day implementation roadmaps intended to help executives apply these ideas in their own organizations.

One starting point is decision quality. Leaders should ask who needs to be at the table, what questions should be asked, and how evidence should be tested before action is taken.

They should also watch for common traps, including overconfidence, confirmation bias, short-term thinking, unclear rationale, process drift, and resource waste.

Oliveira says leaders need people around them who will challenge their thinking, not simply confirm it. That challenge is essential if organizations are to avoid repeating old mistakes with new technology.

A Guide for More Intentional AI Governance

Ultimately, Oliveira wants Constitutional Intelligence to trigger better conversations about AI, governance, and public accountability.

He does not present the book as the final word on trustworthy AI. Instead, he sees it as a working toolkit that leaders can adapt, discuss, and challenge inside their own organizations.

The larger message is that AI governance is not simply a technical discipline. It is a leadership discipline.

It requires executives to make conscious decisions about when to use AI, when not to use it, what evidence to trust, what harms to prevent, and what public value to create.

As Oliveira puts it, leaders must stop going in a direction simply because “everybody’s going.” In the AI era, trust will depend on whether organizations can make decisions with purpose, accountability, and constitutional intelligence.


To explore these ideas in more detail, readers can purchase a copy of Constitutional Intelligence: A Decision Architecture for Trustworthy AI Governance here.
