
AI and security – where the rubber meets the road

Mark Fox, CEO, Zonic Group

The attention of every stakeholder in the cybersecurity space is focussed right now on San Francisco – specifically on the RSAC 2026 conference, running at the Moscone Center until March 26.

The topic on everybody’s lips is the critical juncture where AI meets security. This is a two-way street. On the one hand, AI in the hands of the bad guys is adding to the speed and intensity of attacks. On the other, it is boosting the efficiency of defences and safeguarding the digital economy.

What attendees agree on is that the centre of gravity has shifted from experimentation and proof of concept to the adoption, at increasing scale, of autonomous AI agents that can hunt out network breaches with no human involvement and, in many cases, deal with them in real time. Basic threat detection was yesterday’s story; today is all about agentic AI that delivers proactive, predictive security – a trend termed ‘shifting left’ if you are part of the developer community.

AI also has a key role in fighting back against a silent new menace, the so-called ‘espionage ecosystem’. These complex organisations are commonly sponsored by an autocratic nation-state, and work by deploying a range of sophisticated technologies with aims that range from disrupting supply chains and stealing information to undermining the security of critical national infrastructure. Countering them demands that the CISO identify data authorisation boundaries and understand how data flows inside the organisation. The advent of AI means some of that job can be enhanced and automated.

AI is proving adept at testing for vulnerabilities across the ICT ecosystem. Commonly deployed to support rather than replace the efforts of human experts, it is a useful weapon for finding weaknesses that old-school methods might miss. But it is now clear that the kind of language models that power most AI use cases are less effective at dealing with security risks, because cyber threats don’t conveniently appear in the typical datasets that LLMs are trained on. The answer lies in developing models trained using Reinforcement Learning (RL) rather than standard autocomplete capabilities and static datasets. RL, a useful tool for crowdsourced testing specialists like Bugcrowd, acts as a training gym for AI models.
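The ‘training gym’ idea can be sketched with a toy example. The snippet below is a minimal, hedged illustration of the principle only – an epsilon-greedy agent learning which class of probe earns reward from a simulated target. All probe names and the environment are invented for illustration; this is not Bugcrowd’s or any vendor’s actual system.

```python
import random

# Toy "training gym": the environment simulates a target whose hidden
# weakness responds only to one class of probe. Everything here is
# illustrative, not a real pentesting tool.
PROBES = ["sql_injection", "xss", "path_traversal", "auth_bypass"]
HIDDEN_WEAKNESS = "path_traversal"  # unknown to the agent

def environment(probe: str) -> float:
    """Reward signal: 1.0 if the probe matches the simulated weakness."""
    return 1.0 if probe == HIDDEN_WEAKNESS else 0.0

def train(episodes: int = 500, epsilon: float = 0.1, seed: int = 0) -> dict:
    """Epsilon-greedy bandit: learn estimated reward per probe class."""
    rng = random.Random(seed)
    value = {p: 0.0 for p in PROBES}  # running-mean reward estimates
    count = {p: 0 for p in PROBES}
    for _ in range(episodes):
        # Mostly exploit the best-known probe, occasionally explore.
        if rng.random() < epsilon:
            probe = rng.choice(PROBES)
        else:
            probe = max(PROBES, key=lambda p: value[p])
        reward = environment(probe)
        count[probe] += 1
        value[probe] += (reward - value[probe]) / count[probe]
    return value
```

The point of the sketch is the feedback loop: instead of memorising a static dataset, the agent discovers what works by acting against an environment and being rewarded – which is why RL-style training suits adversarial domains where the interesting cases are absent from ordinary training corpora.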

Other innovators include X-BOW, whose autonomous pentester technology is designed to defeat the bad guys before they strike, using AI to rethink how offensive security is approached. Terra Security’s novel approach to pentesting extends across web, AI, internal apps, API, mobile, network and cloud.

The dynamic world of cyber threats and countermeasures ensures that every RSA event is unique, with attendance a must for anybody in the protection game.
