
Bad AI, good AI: harnessing AI for a stronger security posture


For better and for worse, AI is now ubiquitous in our society. And whether we like it or not, there is no turning back. As we learn to coexist with it, we have a duty to limit its dangers, but also an opportunity to exploit its full potential. 


The challenges and opportunities are particularly pronounced for information security professionals. On the one hand, AI is opening up new ways to reshape security operations, from advanced threat detection to automated response, as well as new approaches to securing data in AI-driven environments. On the other hand, cybercriminals are exploiting the ever-increasing availability and affordability of AI capabilities to craft new and more sophisticated attack techniques.


The dark side of AI


Netskope's Threat Labs researchers recently discovered that clicks on phishing links in the workplace tripled in 2024, and that malicious content downloads occurred in 88% of organisations monthly. They also found that the common denominator in the success of these cyber threats is the rapidly growing sophistication of the social engineering campaigns attackers design to trick their victims, with AI-generated content contributing significantly.


Tools such as WormGPT and, later, FraudGPT have been followed by a growing number of imitation ChatGPTs. These tools have emerged as dark, unbridled variants of legitimate genAI tools; they are helping bad actors create more convincing emails and write more efficient malware, and they have become sources of inspiration for further malicious deeds.


The emergence of AI has brought with it audio and video deepfakes, almost always created for malicious purposes. Creating hyper-realistic and convincing deepfakes has become easier and quicker, and we are already witnessing how effective they are for an array of criminal activities, from targeted fraud in the workplace to mass disinformation.


Aside from enhancing threats, AI is also causing headaches for data protection professionals. We cannot discuss AI risk without tackling the risks of genAI usage and of sensitive data leakage. In 2024, approximately 6% of the hundreds of thousands of Australian users included in Netskope Threat Labs' analysis violated their organisation's data security policies each month, and a significant proportion of these violations were attempts to input sensitive or regulated data into genAI prompts.


It is pretty clear that the advent of genAI has driven the emergence of new threat vectors. But while AI is helping enhance the capabilities of cyber criminals, it is making an equal, if not larger, contribution to cyber security technologies and practices.


Our best ally for security now and in the future


If you just read the horror headlines, it may appear that the bad guys are out-punching the defence. But that's an inaccurate picture. Security teams have the advantage over cyber criminals because the brightest minds in AI and machine learning have been contributing to building and refining some pretty awesome security tools for more than a decade. 


AI has drastically changed the threat detection game thanks to its ability to analyse and detect behaviours and patterns in real time, with high levels of sophistication and granularity. Identifying a user clicking a phishing link or accessing a fake login page, behaving unusually (a sign of potential compromise), adding sensitive data to a genAI prompt, or accessing or downloading malicious content from cloud applications: these are the scenarios that AI-powered threat detection engines should cover.
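To make the behavioural side of this concrete, here is a minimal, purely illustrative sketch, not any specific vendor's engine, that uses an off-the-shelf anomaly detector to flag unusual per-user activity. The feature names and values are all hypothetical.

```python
# Illustrative only: a toy behavioural anomaly detector over per-user
# activity features, loosely mirroring the kind of real-time analysis
# described above. All feature names and numbers are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical hourly activity per user:
# [logins, megabytes_downloaded, genai_prompts, first_seen_domains]
baseline = np.array([
    [2, 15, 1, 3],
    [1, 10, 0, 2],
    [3, 20, 2, 4],
    [2, 12, 1, 3],
    [1, 18, 0, 5],
    [2, 14, 1, 2],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

# A sudden burst of downloads and never-before-seen domains is the kind
# of signal a detection engine might escalate for investigation.
suspect = np.array([[9, 450, 12, 40]])
print(model.predict(suspect))        # -1 marks an outlier
print(model.score_samples(suspect))  # lower score = more anomalous
```

Real detection engines combine many such signals with supervised models and threat intelligence, but the underlying idea of scoring behaviour against a learned baseline is the same.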


Beyond detection, well-trained algorithms are also bringing automated and autonomous threat prevention and response to the table. Data Loss Prevention (DLP) tools automatically block users' actions if they violate data protection policies, for instance by attempting to send confidential information to personal accounts. Real-time user coaching tools take a softer approach, and a relevant complement to cybersecurity training in spreading best practices: they detect undesirable behaviours as they happen and present users with a pop-up when they are about to take a potentially risky action. Users are given the context of the policy and, depending on what the security team chooses, are asked if they would like to 'accept the risk' (most won't), directed to an alternative, or asked to justify their action and receive a policy exemption.
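As a rough sketch of how such a decision might be structured (this is not any vendor's actual API, and every name below is hypothetical), a policy engine can map each detected event to a block, coach, or allow outcome:

```python
# Minimal, hypothetical sketch of a DLP/coaching decision: map a detected
# event to "block", "coach", or "allow". Not a real product API.
from dataclasses import dataclass

@dataclass
class Event:
    user: str
    action: str               # e.g. "email_external", "genai_prompt"
    destination: str          # e.g. "personal_account", "approved_app"
    contains_sensitive: bool  # output of an upstream classifier

def evaluate(event: Event) -> dict:
    # Hard block: confidential data heading to a personal account.
    if event.contains_sensitive and event.destination == "personal_account":
        return {"decision": "block",
                "reason": "Confidential data sent to a personal account"}

    # Coach: risky but not outright forbidden, so explain and offer choices.
    if event.contains_sensitive and event.action == "genai_prompt":
        return {"decision": "coach",
                "message": "This prompt appears to contain regulated data.",
                "options": ["accept the risk",
                            "use the approved internal assistant",
                            "justify and request an exemption"]}

    return {"decision": "allow"}

print(evaluate(Event("alice", "email_external", "personal_account", True)))
print(evaluate(Event("bob", "genai_prompt", "external_genai", True)))
```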


In considering tools and defining policies, security leaders need to ensure that all the potential scenarios their employees face are covered. Information is not always text-based; in fact, 20% of sensitive data is represented in images such as photos or screen captures. Powerful AI algorithms trained for this specific purpose can now detect potential leaks of data appearing in pictures or videos.
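One simplified way to picture image-aware data protection, assuming OCR can read the screenshot in question, is to extract the text and scan it for sensitive patterns. Production systems rely on purpose-trained models rather than the illustrative regexes below.

```python
# Illustrative sketch of image-aware DLP: OCR a screen capture, then scan
# the extracted text for sensitive patterns. pytesseract requires the
# Tesseract binary; the patterns here are simplistic examples only.
import re
from PIL import Image
import pytesseract

SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_image(path: str) -> list[str]:
    """Return the names of sensitive patterns found in an image's text."""
    text = pytesseract.image_to_string(Image.open(path))
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Hypothetical usage: flag a screen capture before it leaves the organisation.
# print(scan_image("screenshot.png"))
```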


If these capabilities sound amazing, consider that we have only scratched the surface. The amount of R&D in this area is phenomenal, and new functionality is rolled out to cloud security services constantly, enabling organisations to keep up much faster than older appliance-based approaches would have allowed.


The bottom line? AI is bringing amazing capabilities to security, and it is our best ally, now and in the future, in defending against modern and constantly emerging threats, including those involving AI itself. Teams like Netskope AI Labs have been leveraging AI and ML for years and embedding them at the heart of modern security platforms.


Bob will be at the Gartner Security and Risk Management Summit in Sydney on March 3rd to discuss this topic further.
