Deepfakes are highly realistic synthetic media, including videos, audio clips and images, created using AI and machine learning. These tools can precisely mimic a person's voice, facial expressions and mannerisms. In 2025, the number of deepfake videos shared online is expected to reach 8 million, a massive increase from 500,000 in 2023.¹
Picture the following situation:
A finance executive receives a voice message from the company's CEO, urgently requesting a bank payment to close a time-sensitive deal. Everything about the voice sounds authentic: the tone, the accent, the urgency. Without hesitation, the executive authorises the transaction. Hours later, they discover the unsettling truth: the CEO never made the call. It was a deepfake generated by AI.

As generative AI technology advances, deepfakes are becoming increasingly convincing, blurring the line between real and fake. As a result, they are an increasingly common tactic in social engineering attacks that impersonate executives, clients or colleagues. The goal is to manipulate employees into sharing sensitive information, approving financial transactions or granting access to secure systems.

Top organisational risks from deepfakes
  • Financial losses arising from fraudulent transactions
  • Data breach from attackers bypassing security safeguards
  • Operational disruption due to fake communications
  • Loss of employee and customer trust
  • Reputational damage from misleading content

Why are deepfakes hard to detect?

Deepfakes are hard to detect because of the sophisticated technology used to create them. Generative adversarial networks (GANs) pit two models against each other until the output is highly realistic, and the subtle tells that remain, such as slight irregularities in facial expressions or vocal tones, are easy for both humans and automated systems to miss.²
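
To make the GAN idea concrete, below is a minimal sketch of the adversarial training loop, written in PyTorch. The model sizes, data shape and hyperparameters are illustrative assumptions, not a real deepfake pipeline: a generator learns to produce synthetic samples while a discriminator learns to spot them, and each improves by competing with the other.

```python
# Minimal GAN training-loop sketch (illustrative assumptions only).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 784  # e.g. flattened 28x28 images (assumed)

# Generator: turns random noise into a synthetic sample.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())

# Discriminator: scores how "real" a sample looks (one logit).
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    fake = G(torch.randn(n, latent_dim))

    # 1) Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real_batch), torch.ones(n, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(n, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(n, 1))
    g_loss.backward()
    opt_g.step()

# One illustrative step on random stand-in data for genuine images.
train_step(torch.randn(32, img_dim))
```

After enough of these rounds, the generator's output becomes difficult even for the discriminator to flag, which is precisely why the resulting media is so hard to detect.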

The challenge is further compounded by the complexity of human features and the lack of comprehensive datasets for training detection algorithms. As a result, even the best tools on the market are not always reliable.

In the UK, 43% of businesses reported a cybersecurity breach or attack in 2024, with deepfakes cited as a growing concern.³

The solution: Building 'secure humans'

Addressing the challenges posed by deepfake threats goes beyond simply acquiring the latest security software. The true solution lies in empowering individuals with the skills to think critically and make informed decisions.

The 'secure humans' approach focuses on improving the human element in security, often through upskilling and following best practices, such as:

  • Critical thinking: Encouraging staff to question unusual requests, even from seemingly trusted sources.
  • Verification habits: Teaching staff to confirm sensitive requests through a second channel (e.g., a phone call or in-person check); the sketch after this list shows how such a rule might be written down as policy.
  • Spotting inconsistencies: Training employees to recognise subtle discrepancies in video or audio messages.
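
As one hypothetical way to turn the verification habit into a rule rather than a judgment call, the Python sketch below flags high-risk payment requests for out-of-band confirmation. The threshold, the channel list and the confirm_via_callback helper are all illustrative assumptions, not part of any specific product.

```python
# Hypothetical dual-channel verification policy (illustrative only).
from dataclasses import dataclass

CALLBACK_THRESHOLD_GBP = 10_000  # assumed policy threshold

@dataclass
class PaymentRequest:
    requester: str
    amount_gbp: float
    channel: str  # channel the request arrived on, e.g. "voicemail"

def confirm_via_callback(requester: str) -> bool:
    # Placeholder: in practice, ring the requester on a number taken
    # from the company directory, never one supplied in the request.
    # Fails closed (returns False) until a human has confirmed.
    print(f"Manual callback required before paying {requester}.")
    return False

def approve(request: PaymentRequest) -> bool:
    # Requests above the threshold, or arriving on an easily spoofed
    # channel, require confirmation over a second channel.
    spoofable = request.channel in {"voicemail", "video call", "email"}
    if request.amount_gbp >= CALLBACK_THRESHOLD_GBP or spoofable:
        return confirm_via_callback(request.requester)
    return True

# The scenario from the opening of this article:
req = PaymentRequest("CEO", 250_000.0, "voicemail")
print("Approved:", approve(req))  # False until verified out of band
```

The design point is that the check fails closed: a convincing voice alone can never move money, because approval depends on a channel the attacker does not control.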

Awareness training and simulations

Ongoing training is essential. It can be the difference between preventing a deepfake attack and facing the repercussions of one. Training helps employees better understand:

  • What deepfakes are and how they are created
  • Common indicators of fake media
  • How to handle unexpected or urgent requests securely

However, awareness alone is insufficient. Businesses should also invest in cyber defence strategies, such as network security, endpoint protection and incident response planning, to prevent attacks from occurring. In addition, simulation exercises that test employee responses in safe, controlled environments can sharpen people's ability to spot red flags. Such exercises may include:

  • AI-generated voicemail phishing (vishing) tests
  • Fake video calls issuing fraudulent instructions
  • Emails with suspicious attachments and fabricated personas

Why proactive training matters

In a world where threats evolve rapidly, continuous learning is key to resilience. Proactive training fosters a culture of cyber vigilance. Regular training sessions help employees stay informed about emerging threats, develop habits of critical thinking and build a strong first line of defence. Training can also reinforce good cyber hygiene practices, such as using strong passwords and enabling multi-factor authentication (MFA).
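
For readers unfamiliar with what sits behind MFA codes, here is a minimal sketch of time-based one-time passwords (TOTP), the mechanism used by many authenticator apps. It uses the pyotp library, and the secret is generated on the fly purely for illustration; in practice the secret is enrolled once and stored by both the server and the user's device.

```python
# Minimal TOTP sketch using pyotp (pip install pyotp).
import pyotp

secret = pyotp.random_base32()        # shared secret, enrolled once
totp = pyotp.TOTP(secret)             # 30-second codes by default

code = totp.now()                     # what the authenticator app shows
print("Current code:", code)
print("Verifies:", totp.verify(code)) # the server-side check
```

Because the code changes every 30 seconds and is derived from a secret the attacker never sees, a stolen password alone, like a cloned voice alone, is not enough to get in.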

Partnering with a specialist who understands the risk landscape and has the right resources can help companies prepare better.

Gallagher's expertise in cybersecurity

The Gallagher Cyber Defence Centre offers support in identifying and mitigating cyber threats and improving cybersecurity. We combine technical expertise with a human-first approach, tailoring solutions to each client's needs.

Our solutions include ongoing education and training on emerging cyber threats and on the new technologies and methodologies used in cyber-attacks. Our Cyber Defence Centre includes training specifically focused on deepfakes within its vishing module, reflecting their role as a key component of the social engineering threat.

Contact us today to discover how our cyber defence team can help protect your business from emerging cyber threats.


Sources

¹ "Innovating to detect deepfakes and protect the public," GOV.UK, 5 February 2025.

² Dhaliwal, Jasdev. "How to Spot a Deepfake on Social Media," McAfee, 28 October 2024.

³ "Cyber security breaches survey 2025," GOV.UK, 19 June 2025.


Disclaimer

The sole purpose of this article is to provide guidance on the issues covered. This article is not intended to give legal advice, and, accordingly, it should not be relied upon. It should not be regarded as a comprehensive statement of the law and/or market practice in this area. We make no claims as to the completeness or accuracy of the information contained herein or in the links which were live at the date of publication. You should not act upon (or should refrain from acting upon) information in this publication without first seeking specific legal and/or specialist advice. Arthur J. Gallagher Insurance Brokers Limited accepts no liability for any inaccuracy, omission or mistake in this publication, nor will we be responsible for any loss which may be suffered as a result of any person relying on the information contained herein.