One of the themes likely to dominate strategic conversations at CISO Chicago is how security leaders should adapt governance models to accommodate AI systems that increasingly operate autonomously.
As organizations experiment with generative AI, copilots, and intelligent automation, the role of the CISO is expanding beyond traditional cyber risk management into stewardship of emerging decision-making systems.
According to Dr. Mel Fenner, Chief Digital Innovation Officer at Lincoln University, one of the most significant changes is the emergence of what he describes as “agentic identities” — AI systems that behave more like users than tools.
“Zero trust is still very relevant because it is such a large framework… but as we talk about identities, the big thing now is agentic identities. People are shifting away from generic agents running under shared accounts to AI entities that have their own credentials and permissions.”
For CISOs, this evolution introduces new identity governance challenges. AI agents increasingly interact with sensitive systems and access institutional data. As a result, identity architecture must expand beyond human and machine users to include autonomous digital actors.
Rather than diminishing the relevance of Zero Trust, AI is reinforcing its importance. Identity lifecycle management, privilege segmentation, and continuous validation remain foundational — but must now be applied to entities capable of operating independently.
Governance, Not Control, Defines Effective AI Security
Many organizations are discovering that traditional command-and-control security models struggle to keep pace with the rapid adoption of AI tools across the enterprise. Employees, researchers, and business units can easily procure AI capabilities independently, often outside established procurement or risk review processes.
Dr. Fenner notes that attempting to block AI usage outright is rarely practical. Instead, security leaders must focus on creating governance structures that enable responsible experimentation while mitigating risk.
“We’re not going to completely block anybody’s access to OpenAI, or Claude, or Grok… the reality is that administrative policies and guidelines are often the best thing we can do right now. We have to explain the risks clearly so people understand why guardrails exist.”
This shift places CISOs in a more collaborative role, working alongside legal, academic, operational, and technology teams to define acceptable use principles. AI governance increasingly requires shared accountability models that balance innovation with oversight.
Security leaders therefore need visibility into how AI tools are being introduced, what data they are accessing, and how outputs are being used in decision-making processes.
Compliance Is the Floor, Not the Ceiling
While regulatory frameworks remain important reference points, Dr. Fenner cautions against equating compliance with a mature security function.
“Compliance is really the floor, that’s the baseline. You can say all these things on paper, but what’s the evidence or what actual process do you have in place? Reducing risk is about how well you operationalize those frameworks.”
For CISOs, this means shifting focus toward evidence-based assurance. Controls must be tested, validated, and continuously improved to ensure they function as intended in real-world conditions.
Frameworks such as NIST CSF or ISO standards provide structure, but resilience depends on execution maturity, monitoring practices, and organizational alignment.
Tool Sprawl Signals Structural Risk Issues
Another common challenge facing security leaders is the gradual accumulation of overlapping tools. As vendors expand platform capabilities and organizations adopt point solutions to address emerging threats, security stacks can become fragmented.
Dr. Fenner suggests that tool sprawl rarely stems from poor decision-making. Instead, it develops incrementally as organizations respond to immediate needs, creating complexity that accumulates over time.
“You get these one-time funding moments and suddenly you’re buying tools to solve immediate problems. Over time, those tools add features and now you’ve got three different technologies performing the same function.”
Mapping tooling capabilities directly to control frameworks can help CISOs identify redundancies and clarify where consolidation may improve operational efficiency. Strategic vendor partnerships often prove more valuable than continuously expanding feature sets.
Shared Governance Enables Innovation Without Increasing Risk
Decentralized environments present particular governance challenges, especially where multiple business units operate with significant autonomy.
Dr. Fenner emphasizes that governance must be designed as a shared responsibility model rather than a centralized approval mechanism.
“Governance has to be a shared model. Even in decentralized environments, the risk is still shared. We have to build a bigger box — one that allows innovation to happen safely within agreed guardrails.”
This approach allows organizations to maintain flexibility while ensuring that security principles remain consistent across the enterprise. Guardrails define acceptable risk thresholds without unnecessarily slowing experimentation.
For CISOs, the challenge lies in creating frameworks that scale across distributed decision-makers while maintaining visibility into evolving risk exposure.
The CISO Role Is Expanding into AI Risk Stewardship
As AI capabilities mature, governance increasingly involves defining responsibility for outcomes and ensuring consistent oversight of how systems operate in practice. Questions about how AI systems make decisions, and who is accountable for those decisions, are steadily moving into the CISO’s scope.
Dr. Fenner highlights the importance of ensuring security leaders are embedded in AI strategy discussions early, rather than reacting after deployment decisions have already been made.
“We have to make sure we’re at the table. If AI governance conversations are happening, security needs to be part of that discussion so we can help people understand the risks before tools are adopted.”
To maintain operational security, AI governance should be treated as a strategic security discipline rather than a technical afterthought.
__
Join your peers at CISO Chicago to explore how leading CISOs are building practical frameworks for secure AI adoption, balancing innovation with resilience, and redefining what effective cybersecurity leadership looks like in the AI era.