
AI: Cybersecurity’s Best Ally or Biggest Threat?

September 2025, Tom Pepper, Partner
Published on: CxO Cyber Connections (gated community)




AI is reshaping the cyber battlefield for attackers and defenders alike. From deepfakes, self-altering malware and AI-driven phishing campaigns to advanced detection, anomaly spotting and faster response times, AI is constantly being used by both sides. But which does it favour? Hear from four cybersecurity experts on the dangers, the advantages and their take on who benefits more from AI.


Andy Grayland, CISO at Silobreaker writes "Attackers and defenders alike are leveraging AI’s capabilities, but threat actors appear to be experimenting with a broader range of tactics. They’re exploiting techniques ranging from prompt injection and data poisoning to model inversion. These methods let attackers compromise AI systems without ever touching the underlying infrastructure.


Recent intelligence has also shown that attackers are using AI to increase the scale, automation and success of their campaigns. And it’s not just cybercriminals. Nation-state advanced persistent threats (APTs) and hacktivists alike are adding AI tools to their attack arsenals.


Likewise, new and low-skilled actors can use AI capabilities to launch sophisticated cyberattacks with little effort. The recently developed AI chatbot Venice AI has attracted attention in the hacking community for its lack of content moderation and its use of open-source language models. It can write convincing, grammatically correct phishing emails within seconds, requiring only that an attacker insert a phishing link. This has undoubtedly contributed to the 1,200 percent surge in targeted phishing.


We have also seen deepfakes and fabricated content used to sway public opinion, malicious software that can alter its own code to slip past traditional security systems, and even adversarial input attacks capable of tricking AI into misreading road signs or misclassifying medical images.


This is not to say AI is solely a risk. It also allows defenders to spot and close vulnerabilities earlier, train systems to withstand manipulation and test their own resilience using realistic simulations. 


The reality is that AI is not inherently good or bad; it is a force multiplier. Those who will benefit most are the ones prepared to invest early, adapt quickly and combine technological capability with the insight of skilled security teams".
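

To make the adversarial-input attacks Grayland mentions concrete, here is a minimal sketch of the fast gradient sign method (FGSM), a textbook technique of this kind: an image is nudged just enough, in the direction that most increases a classifier’s loss, to flip its prediction. The PyTorch model, image and label are hypothetical placeholders, not anything drawn from the article.

```python
# A minimal FGSM sketch: perturb an input image, bounded by epsilon,
# so a model misreads it (e.g. a road sign). `model`, `image` and
# `true_label` stand in for any differentiable image classifier.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # One signed gradient step is often enough to flip the prediction.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```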


Mike Britton, CIO at Abnormal AI states "For attackers, artificial intelligence is already a proven tool in their arsenal. Criminal groups are using it to speed up campaigns and make them harder to spot. Take phishing: once crude, clumsy and easy to detect, it has now rendered older methods of detection woefully inadequate. With AI able to automate reconnaissance and spin up tailored messages at scale, the difference is becoming painfully obvious for most large enterprises.


Defenders are not empty-handed. AI provides security teams with new ways to keep pace with fast-moving threats. It helps sift the noise at scale, flag unusual behaviours and cut the time it takes to investigate an attack. Properly used, it can turn voluminous logs into patterns that make sense and highlight anomalies that used to be impossible to spot. The result is sharper detection and faster response: something every security team aims to achieve.


The push for AI in defence is gaining strong support. Some 99 percent of security leaders back its role in awareness programmes and day-to-day operations, a sign of just how much faith is being placed in its ability to sharpen human judgement and augment defences. AI isn’t inherently good or bad and carries no allegiances. Criminals fine-tune their tricks; defenders deploy it to make sense of complexity. The difference lies not in the technology, but in whose hands it sits. Used with purpose, it can become one of the strongest assets in security.


The lesson for security leaders is not whether to use AI, but how. Success depends on proven use cases, clear guardrails, and security staff who embrace its use. Treated with care, it can become less of a buzzword and more of a trusted partner in helping defences grow at the same pace as the threats they face".
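

As a rough illustration of the anomaly spotting Britton describes, the sketch below runs an isolation forest over simulated log features. The feature choices (bytes sent, login hour, failed logins) and thresholds are assumptions made for the example, not any vendor’s actual pipeline.

```python
# Sketch of AI-assisted anomaly detection over log-derived features:
# an isolation forest flags events whose patterns deviate from the bulk.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated "normal" events: [bytes sent (KB), login hour, failed logins]
normal = rng.normal(loc=[500, 13, 1], scale=[100, 3, 1], size=(1000, 3))
outliers = np.array([[5000.0, 3, 30], [4200.0, 4, 25]])  # exfil-like spikes
events = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = detector.predict(events)  # -1 marks anomalies, 1 marks normal
print(f"{(flags == -1).sum()} of {len(events)} events flagged for review")
```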


Tom Pepper, Partner at Avella Security and the UK Government’s AI Security Institute writes "From my perspective, the question of whether attackers or defenders benefit more from artificial intelligence isn’t simple. It’s a race, and right now the advantage can swing either way depending on who moves faster.


Attackers are already exploiting AI’s autonomy to scale their operations: automating reconnaissance, crafting targeted phishing at unprecedented speed, and even adapting attacks in real time without human oversight. Agentic AI in the wrong hands can be alarmingly efficient.


But defenders have an equally powerful opportunity, if they act decisively. AI can monitor massive networks for anomalies, identify subtle indicators of compromise, and respond to threats in seconds. The challenge is that many organisations are deploying AI without fully understanding how it makes decisions or the risks of giving it too much autonomy without guardrails. This is where governance comes in.


I’m not interested in labelling AI as ‘good’ or ‘evil’. It’s a force multiplier. Its impact depends entirely on the controls, accountability, and transparency we build into it. Agentic AI introduces unique risks: systems that can make and act on decisions independently can be manipulated or misdirected. Without robust oversight, those risks can spiral into serious security vulnerabilities.


That’s why I stress proactive governance frameworks: understanding AI decision-making processes, defining responsibility, and creating safeguards to prevent misuse. If defenders embrace these principles, AI could tip the balance in their favour. Without them, attackers will continue to exploit the technology’s autonomy faster than security teams can adapt. The outcome is in our hands; AI’s future in cybersecurity will be what we make of it".
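

One concrete shape such a safeguard can take, offered here purely as an illustration (the tool names and policy are hypothetical, not from Pepper), is a deny-by-default wrapper that only lets an agent invoke pre-approved tools and logs every decision for accountability.

```python
# Sketch of a deny-by-default guardrail around agent tool calls:
# only allowlisted tools run, and every decision leaves an audit trail.
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent.audit")

APPROVED_TOOLS = {"read_logs", "open_ticket"}  # hypothetical allowlist

def guarded_call(tool_name, action, *args, **kwargs):
    """Run an agent-requested tool only if policy permits."""
    if tool_name not in APPROVED_TOOLS:
        audit.warning("blocked tool call: %s", tool_name)
        raise PermissionError(f"{tool_name} is not an approved tool")
    audit.info("allowed tool call: %s", tool_name)
    return action(*args, **kwargs)

# Example: the agent may read logs, but any unlisted tool is refused.
guarded_call("read_logs", lambda: "last 100 log lines")
```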


Ellen Benaim, CISO at Templafy states "Over the past year, we’ve seen a 400 percent increase in attacks, everything from phishing to malware, and much of that can be linked to AI. It’s giving cybercriminals economies of scale and making it easier for new attackers to get into the field.


Even the phishing emails we’ve received are noticeably more sophisticated, often tailored in ways that make them harder to detect. For a smaller company, that’s turned into a daily battle. 


AI adoption is moving at pace. For IT teams, the challenge is to maintain centralised oversight, ensuring data visibility and control. The industry as a whole is in constant catch-up mode, racing to secure the frameworks needed to manage the rapid pace of change. Adapting our defences for both human and AI actors is critical to staying ahead of emerging risks.


The difference is that humans have predictable patterns; AI agents don’t. They develop their own behaviours, so defenders need to establish a new baseline before these security tools can be fully effective".
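

A toy sketch of what establishing that baseline might involve, with made-up telemetry standing in for real audit logs: learn an agent’s normal activity rate, then flag hours that deviate sharply from it.

```python
# Sketch of behavioural baselining for an AI agent: learn what
# "normal" activity looks like, then flag sharp deviations.
from statistics import mean, stdev

# Hypothetical week-one telemetry: the agent's API calls per hour.
baseline_window = [42, 38, 45, 40, 44, 39, 41]
mu, sigma = mean(baseline_window), stdev(baseline_window)

def is_anomalous(calls_this_hour, threshold=3.0):
    """Flag activity more than `threshold` standard deviations from baseline."""
    return abs(calls_this_hour - mu) > threshold * sigma

print(is_anomalous(43))   # False: within the learned baseline
print(is_anomalous(400))  # True: the agent's behaviour has drifted sharply
```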


