The artificial intelligence (AI) boom has boosted fears about its likely impact on cybercrime activity, but there is also a growing role for AI in cybersecurity.

Key insights

  • Cybercriminals are leveraging AI to automate and enhance ransomware, phishing and deepfake attacks, making threats faster and more sophisticated.
  • Digital supply chains are increasingly targeted, with vulnerabilities exploited through AI-driven attacks that can cascade across business networks.
  • While organizations are investing in AI-powered cybersecurity tools, most still lag in addressing the full scope of AI-related cyber risks.
  • Building cyber resilience now requires both advanced technology and well-trained staff, as human vigilance remains a critical line of defense.

The cyber risk landscape is continually evolving as the methods threat actors use keep changing. Ransomware attacks, which have proliferated in recent years, have shifted from an initial scattergun approach to more targeted and sophisticated tactics.

Another major evolution in ransomware attacks is the growing likelihood that a compromise will exploit vulnerabilities in the digital supply chain, with data breaches then cascading throughout business networks.


"Some of the major claims in the last 12 months have been from incidents involving trusted third parties, attacks which have directly impacted their supply chain," says Tom Mooney, cyber strategy manager, Cyber Risk Management at Gallagher.

He continues, "If you're doing regular (security) scans, are you scanning known vulnerabilities in your supply chain? Realistically, that's where your threat vector may be."

Recent cyberattacks on high-profile businesses have highlighted the growing threat of "triple extortion" events, where the "double extortion" of data encryption and exfiltration is compounded by the threat of a distributed denial-of-service (DDoS) attack on company systems. In 2025, UK retailers targeted in this manner were unable to restock their stores and had to suspend online sales.

Rising use of AI in cybersecurity: Emerging threats

Hackers are increasingly using AI tools to refine and automate ransomware attacks, as well as exploit vulnerabilities in IT systems and digital supply chains.

Cybercriminals typically exploit known problems, or common vulnerabilities and exposures (CVEs), to gain access to company systems. Previously, finding and using CVEs to orchestrate data breaches was a cumbersome manual process for hackers. Finding CVEs is now much easier, using AI and tapping into the cybercrime marketplaces available on the dark web.

"What used to take weeks can now take less than a minute," says Johnty Mongan, head of Cyber Risk Management at Gallagher. "ChatGPT can tell me which CVE has the highest 'blast radius', five major manufacturers that use this technology and the 1,000 most common passwords businesses use — and then write me a bash script for a 'brute force' attack."

AI is also turbocharging social engineering techniques. It's disturbingly easy for cybercriminals to find information in the public domain about high-profile individuals in an organization, which can be used to access company systems.

With the power of generative AI, threat actors can create convincing spear phishing texts and emails, voice phishing ("vishing") phone messages and video deepfakes, posing as senior colleagues to persuade company employees to release sensitive data or transfer funds.1

How AI is used: Hackers versus defenders

How hackers use AI

  • Automate ransomware and brute-force attacks.
  • Identify vulnerabilities quickly.
  • Create realistic phishing emails, voice calls and deepfakes.

How defenders use AI

  • Detect threats in real time (a minimal detection sketch follows this list).
  • Automate patch management and vulnerability scanning.
  • Educate staff as the first line of defense.
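
To make "detect threats in real time" concrete, here is a minimal sketch of the kind of anomaly detection many AI-powered security tools build on: scikit-learn's IsolationForest trained on simple login-event features (hour of day, data transferred, failed attempts), flagging outliers for investigation. The features, sample data and contamination rate are illustrative assumptions, not a production design.

```python
# Minimal sketch: flag anomalous login events with an Isolation Forest.
# Features and sample data are illustrative; real deployments use far
# richer telemetry and tuned contamination rates.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour_of_day, megabytes_transferred, failed_login_attempts]
baseline_events = np.array([
    [9, 12.0, 0], [10, 8.5, 1], [11, 15.2, 0], [14, 9.8, 0],
    [15, 11.1, 0], [16, 7.9, 1], [9, 13.4, 0], [10, 10.2, 0],
])

new_events = np.array([
    [10, 11.0, 0],    # looks like normal working-hours activity
    [3, 950.0, 12],   # 3 a.m., huge transfer, many failed logins
])

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(baseline_events)

for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY - investigate" if label == -1 else "normal"
    print(f"event {event.tolist()}: {status}")
```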

Growing awareness of AI-powered cyber risks

Gallagher research on AI adoption reveals that increasing vulnerability to cyber threats is a top concern for one in three business leaders.

While nearly half have revised cybersecurity protocols and bolstered data privacy policies to address AI considerations, most businesses still fail to address the full scope of AI-related cyber risks.

According to AI security firm Deep Instinct, 46% of security professionals have witnessed an increase in targeted phishing attacks in 2024/2025, with 43% experiencing deepfake impersonations.2 As a result, 72% of organizations have revised their cybersecurity strategies over the past year due to the impact of AI.

Keeping up with the hackers as ransomware attacks evolve

Cybercriminals' ability to leverage AI tools, free from regulatory constraints, means they're often a step ahead of cybersecurity professionals, while the barriers to entry for aspiring hackers continue to fall.

"You don't have to be an expert in hacking to target an organization. With 'ransomware as a service' you can buy or subscribe to software services that include a 'how to' guide and a support mechanism to target businesses, utilizing AI tools," says Gallagher's Mooney.

However, organizations can take several steps to improve their cyber risk posture. Building resilience to cyberattack is as much about improving the organization's cyber hygiene to make it a less attractive target for hackers as it is about monitoring and defending against attacks.

Furthermore, there's an expanding role for AI in cybersecurity. A growing number of organizations have increased their use of AI within security operations, including AI-powered threat detection and response tools. According to the US Bureau of Labor Statistics, information security analyst roles are set to grow by 35% between 2021 and 2031.

The ability to proactively monitor and address common vulnerabilities builds cyber resilience by ensuring companies are less exposed to digital supply chain and "zero-day" type attacks. "Utilizing learning management systems for patch management is really key," says Mooney.
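
As a small illustration of automating that proactive monitoring, the sketch below compares pinned Python package versions against the latest releases published on PyPI and flags anything that has fallen behind, a basic building block of patch management. The pinned versions are hypothetical; real tooling would draw on the full asset inventory and feed the results into the patching workflow.

```python
# Minimal sketch: flag pinned Python packages that have fallen behind the
# latest release on PyPI, as a prompt for patching. The pinned versions
# here are hypothetical placeholders.
import json
import urllib.request

pinned = {"django": "4.2.1", "cryptography": "41.0.0"}  # hypothetical inventory

def latest_version(package):
    """Look up the latest published version of a package on PyPI."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as response:
        return json.load(response)["info"]["version"]

for package, current in pinned.items():
    latest = latest_version(package)
    flag = "up to date" if latest == current else f"patch available ({latest})"
    print(f"{package} {current}: {flag}")
```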

The human touch remains essential for effective cyber hygiene. "It's easy to characterize people as the biggest risk, but they should be your biggest strength, because staff are the first line of defense. They're the ones who are going to spot those phishing attacks and fake SMS calls. It's about educating them and ensuring they're following best practice," says Mooney.

As businesses seek to keep pace with the changing threat environment, they will require investments in both technology and personnel. Investing in better security tools and processes and improved education, training and skills will put organizations in a stronger position to counter a new generation of AI-powered cyberattacks.

Published February 2026.


Sources

1 Clarke, William. "Six-Million-Dollar Scam Reveals the Extent of Deepfake AI Threat," Beazley, 11 Jul 2023.

2 "Cybersecurity & AI: Promises, Pitfalls — and Prevention Paradise," Deep Instinct, accessed 22 Dec 2025.