
AI Ethics can be Difficult to Control: ISACA Singapore’s Jenny Tan

Corinium APAC Content Director Vanessa Jalleh sits down with the President of ISACA Singapore, Jenny Tan, to talk about cybersecurity in the era of large language models (LLMs)

One of the biggest recent disrupters in the technology sector has been the emergence of mainstream large language models and the impact they are having globally on day-to-day life.

From universities unsure of how to regulate LLMs in academic writing, to Singapore’s civil servants being permitted to use ChatGPT in certain capacities, and Italy outright banning access to it, responses to the technology have varied widely. From a security standpoint, generative AI carries significant risks and is set to change the security landscape.

ISACA Singapore President Jenny Tan, who will be speaking at CISO Singapore in August, says there are some very clear information security risks involved with generative AI, particularly around deepfakes, data privacy, copyright issues, and cybersecurity problems.

“For example, attackers may abuse the technology to generate new and complex types of malware, phishing schemes, and other cyber dangers that conventional protection measures may not be able to detect and deal with,” she says.

“The challenges include the inherent biases that generative AI produces; a lag in technical understanding in this area, which makes it hard to react in time with security controls that can deal with generative AI’s versatile output and cope with the emerging risks outlined above; and policies that are not adjusted quickly enough to deal with this matter, especially in the education sector.”

While there are risks, LLMs do also present opportunities, which Tan believes will mainly centre on productivity, such as using the technology as a coding reference, and perhaps on creativity in designing products that generative AI has made possible.

In terms of the ethical risks inherent in the adoption of new AI models, Tan believes this is a very important area to consider, but quite difficult to identify and control.

“The existing threats about generative AI leading to higher volume of ransomware, phishing, and so on will peak as the technology matures over time and as human talents cannot catch up. I believe organisations have to revisit their risk appetite and tolerance considerations more regularly to assess their maturity in dealing with such risks,” she says.

Driving Cyber Deeper into Business

One of the strategy areas that cybersecurity leaders can find challenging is embedding cybersecurity consciousness more deeply across the business.

One of the ways Tan says leaders can approach this is to embrace business language and the business’s way of thinking to drive more successful cybersecurity strategies, adding that there are three areas to consider when doing so.

“First, training, to create more awareness of the business opportunity costs that failing to do so will lead to,” she says.

“Secondly, security by design, to mandate that every digital or technological project will require security architecture and governance clearance prior to design, development and implementation.

“The third point is continuous monitoring, in order to deploy technology to assist in security monitoring and active incident responses that have a loop back to the business KPIs (essential to managing business behaviour in organisations with an immature control environment).”

Reactive and Proactive Shift

Shifts in culture and practices are always challenging. When asked for her best-practice advice on changing from a reactive to a proactive approach in cybersecurity, Tan prefaced her answer by saying this is no easy task.

“First of all, changing the mindset of the board, management and the mass of employees takes time. Training is always the easiest route to create awareness, but the effectiveness is questionable, as 90% of attendees who walk out of any training will not retain or apply the lessons learnt,” Tan says.

“Cybersecurity is considered a compliance cost, and it’s not cheap. If every project has a cybersecurity cost component, like a contingency cost, the cost may be passed on to consumers. I always advocate the concept of ‘combined assurance’, such as leveraging line 1 (management and users) and line 2 (risk and security), together with line 3 (audit), to transform the risk landscape.

“If GRC can be part of the balanced scorecard and the tone at the top is right, then perhaps we have a chance to be truly proactive in our cybersecurity outlook.”

During times of high risk, cybersecurity leaders will be hyper-focused on strengthening the overall security posture of their organisations, and Tan says one critical strategy is to cultivate individuals’ risk responsibility.

“When every individual in an organisation appreciates the implications of cyber risks for organisations and individuals, careless mistakes can be avoided. Applying continuous monitoring and posture assessments helps to re-calibrate every organisation’s capabilities to best mitigate such risks,” she says.


Similar topics will be explored during our free event, AI in Cyber Online being held on 8 May 2024. Check out the agenda and register to attend by following this link!