Navigating the Grey Zones: What Trust Means in the Age of AI

With AI dominating discussions in nearly every technology and cyber security conference, the need to rethink trust in the age of AI has never been more urgent.
Public trust in the digital economy today is under increasing pressure across both the public and private sectors. With digital identities susceptible to forgery and AI capable of mimicking human interactions convincingly, what does trust look like today? And who is responsible for maintaining it?
Drawing from my experience in producing cyber conferences and dialogues with cyber security experts, this article explores why trust today extends beyond traditional security measures, demanding a deeper commitment to accountability, transparency, and responsibility.
The Current Reality
AI is already integrated into critical systems, from service delivery and decision-making to fraud detection and cyber threat monitoring.
Australia has been proactive, making progress with several voluntary frameworks and high-level guidelines to help organisations navigate responsible AI use and adoption. But without mandatory guardrails, organisations risk choosing what is convenient over what is secure. The absence of clear laws raises a critical question: who is accountable when things go wrong – the developers, the operators, the users, or the AI itself?
We need AI regulations that reflect not only business intent but technical reality, ensuring accountability, transparency, and trust in the face of breaches and cyber attacks.
Ethical, Legal, and Cultural Imperatives
The challenges facing CISOs today extend beyond technical issues:
- Ethical: Ensuring AI systems are fair, unbiased, and respect privacy
- Legal: Preparing for regulations that keep pace with technological advancements
- Cultural: Maintaining public confidence amid rapid tech changes
AI-generated misinformation, deepfakes, quantum risks, and automated decision-making all raise difficult new questions: bias baked into algorithms, privacy compromised by automation, trust eroded by systems that can't be explained.
These issues don't just affect cyber security experts; they affect the general public. Think about the viral trend of AI-generated portraits mimicking Studio Ghibli's style. If your image was used without your consent, has your privacy been breached? Is it art, or IP infringement? It may seem trivial until you apply the same logic to facial recognition, predictive policing, or algorithmic welfare decisions. Where do we draw the line between innovation and intrusion? And who decides where that line is?
Trustworthy AI
So how do cyber leaders navigate these grey zones?
- Transparency: Clear communication about how AI systems work and their decision-making processes
- Governance: Robust frameworks to oversee AI use and ensure compliance with ethical standards
- Secure by Design: Build AI with cyber security at its core
- Accountability: Define responsibility before the system fails not after
- Public Engagement: Bring the community into the conversation to build legitimacy and trust
For AI to be truly trustworthy and secure within organisations, it must be built with accountability, oversight, and inclusivity at its core. Transparency in AI models should be non-negotiable. Both processes and outcomes must be explainable and auditable.
Critically, we need enforceable legislation to define boundaries, manage risk, and ensure stakeholders can be held to account when failures occur.
Because in the end, secure AI isn't just a technical challenge; it is a leadership imperative.
Final Thoughts
AI is here to stay, and CISOs and cyber leaders have a critical role to play in shaping its future. Navigating these grey zones requires more than technical knowledge; it calls for ethical leadership, strategic foresight, and a commitment to public trust. Because in this new age, trust isn't a given. It's something we have to build.
If you are keen to join our conversations on AI in cyber security, we invite you to join us at our upcoming events:
- CISO Melbourne 2025, 22-23 July at Crown Promenade
- AI in Cyber ANZ Online 2025, 9 September
- CISO Canberra 2025, 17 September at Rex Hotel
If you are interested in speaking at the events, feel free to reach out to Maddie Abe (Content Director).