Securing Your Tools and Understanding Your Risks with AI, People and Workflows - a Conversation with James Court.
AI is reshaping the way people work, often faster than organisations can track. The result is a wave of productivity, but also a growing shadow layer of risks that traditional security controls are not designed to handle.
How can organisations protect themselves without stifling the innovation that makes AI possible? James Court, Chief Security Officer at Cleanaway, weighs in.
Across industries, leaders are discovering a surprising truth. They have far less visibility into the tools employees are using than they thought. Three gaps are emerging as AI becomes more integrated into daily work.
The first gap is the basic discovery of the tools that exist within the company. Employees often sign up for AI and SaaS tools using personal accounts that never connect to corporate identity systems. None of these interactions is logged. Security teams cannot see what tools are being used, who is using them, or what data those tools access.
The second gap is around data flow. Traditional data loss prevention was built to recognise structured data such as personal identifiers or payment information. It was not designed to detect sensitive unstructured content like board reports, customer histories, internal summaries, or contract details. Employees frequently paste this type of content into AI tools, and none of it matches legacy detection patterns.
The third gap is intent. Prompts written in natural language can contain context that is more revealing than the data itself. Employees may include details about decision processes or internal reasoning without realising the sensitivity of what they are sharing. Because natural language is not recognised as a data classifier, these risks remain invisible.
The result is a clear mismatch between policy and actual behaviour on the ground.
Leaders are beginning to understand that visibility requires more than technical controls. It requires understanding how people work in practice.
We now examine the death of the perimeter, and identity as the new anchor.
Data no longer stays inside a network boundary. It lives in SaaS apps, LLMs, cloud platforms, and file locations that IT has never approved. Cleanaway has shifted to a simple belief: data is always in motion.
That means security controls must move with it.
To do that, the team focuses on classifying data at the moment of creation, not after the fact. When a file is labelled correctly from the start — an engineering report, a customer document, a board deck — the organisation can apply meaningful controls to wherever that data travels.
But this isn’t really a technology challenge, Court emphasises. It’s a culture challenge. People must understand what “sensitive” actually means, and why it matters. Training must move from rule‑based (“don’t click this link”) to judgment-based (“am I comfortable sharing this with an AI provider?”).
Helping employees think before they paste.
When asked what behaviour he wishes he could magically instil in every employee, Court’s answer is simple: Pausing. Thinking. Asking themselves whether they would be comfortable with the provider seeing the data they paste.
Traditional awareness programs teach rules; the AI era demands a shift toward discernment.
Employees chase answers, chase productivity, chase efficiency. Paste, submit, move on. No pause. No reflection.
The next evolution of training teaches employees to ask themselves:
- Would I be comfortable if the provider read this?
- Am I sharing something that feels personal, privileged, or strategic?
Cleanaway even uses real anonymised examples to reinforce training, showing employees the kinds of inappropriate oversharing that can easily occur in AI tools. It personalises the risk. And when security becomes personal, behaviour changes.
How do we govern tool sprawl without slowing progress?
New tools appear constantly. AI features are added to platforms that may not need them. Employees adopt new capabilities in search of efficiency. If governance is slow, security loses control. People will use the tools regardless, and by the time governance intervenes, the tools may be deeply embedded in workflows.
To avoid this, security teams are adopting faster and more flexible governance frameworks. These include rapid assessments that take hours or days rather than weeks, risk tiers based on data sensitivity, and recurring assurance checks. A living inventory of tools is essential. When a vendor has an incident, organisations need instant clarity on what data is at risk and where the exposure lies.
Governance becomes effective when it can move at the same pace as the business.
The coming challenges of Agentic AI.
So far, most AI-related risks revolve around humans manually pasting information into tools.
That will not remain the case for long. According to Court, the next wave will arrive through agentic AI, where systems act on behalf of employees rather than waiting for a prompt. These agents will have access to email, calendars, storage systems, third-party platforms, and connected business applications and will take actions, make decisions, send communications, and execute code without a person directly in the loop.
Court stresses that this shift changes the entire nature of the attack surface. Every piece of content an agent interacts with becomes a possible point of compromise. A malicious instruction hidden in an email, document, or code snippet could silently alter an agent’s behaviour. Users would have no idea that something had been embedded or that the agent had been hijacked. Court describes this as a prompt injection at a scale that organisations have never had to confront before.
This future also introduces a new category of identity.
Non-human accounts will behave differently from traditional service accounts because they will be interpreting natural language and performing actions on behalf of people. Court notes that most identity programs today are not prepared for this. These agent identities will need strict permission boundaries, clear scope, strong logging, and meaningful anomaly detection. Security teams must be able to see when something changes and understand why.
Controls need to be designed before agents become fully embedded in production workflows.
Once they are active in business processes, retrofitting controls will be significantly harder and far less effective because the context behind each action will be harder to reconstruct. In his view, organisations that build identity and governance frameworks early will be far more successful than those who try to introduce guardrails after autonomous tools are already doing the work.
Three Principles for Leaders to Navigate What’s Next
1. Start With Real Visibility
Run a full discovery exercise. Find out what tools employees are actually using — not what the policy says they should be using.
2. Build Foundations That Adapt Quickly
Where AI evolves weekly, rigid governance frameworks become obsolete. Agility is now a requirement.
3. Make Policy Reflect Reality, Not Restrict It
Policies must adapt to real human behaviour. People will innovate whether security blesses it or not. Governance that collaborates will always outperform governance that controls.
Conclusion
AI is spreading through the enterprise faster than traditional security structures can keep up. The organisations that will thrive are the ones that embrace visibility, encourage thoughtful behaviour, and build governance models that are flexible enough to move with the pace of change. AI will not slow down, and neither can security.
Hear more from James at CISO Brisbane (23 June 2026, @ The W, Brisbane) where we dive into the intersection of AI, Data Loss and Human Behaviour. Join us! Contribute to the conversation on Linkedin and Feel free to reach out to Kashmira George for more information about speaking opportunities at this event.
