
Advances in artificial intelligence (AI) and deep-learning technologies are making synthetic media, known as deepfakes, increasingly convincing and increasingly hard to tell apart from genuine content. Deepfakes take the form of images, text, audio or video altered or generated to make it appear that people did or said something they never actually did or said. The same AI and deep-learning techniques can be used to manipulate real media or to generate entirely synthetic content.

Deepfake technology has been available since at least 2017. Since then the technical quality of deepfakes has improved rapidly, and they have become easier to access and create. In 2023, popular generative AI platforms such as Midjourney 5.1 and OpenAI's DALL-E 2 emerged as widely available tools for threat actors to conduct deepfake campaigns.1

How cyber criminals exploit deepfake technology

As this technology has evolved, so too have the criminal tactics that exploit it. Threat actors are using it to produce synthetic media for a variety of destructive purposes, creating a new and frightening reality in the 2023 cyber threat landscape.

Key deepfake technologies include:

  • face replacement, also called face swap, copies a facial image and places it on another body
  • face generation creates facial images that don't exist in reality
  • speech synthesis uses AI to create realistic human speech
  • generative adversarial networks (GANs) use deep-learning methods to learn the patterns in real data, such as real videos, and use those patterns to create new content.
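
To make the GAN concept concrete, here is a minimal training-loop sketch in Python. It assumes the PyTorch library; the layer sizes, the random stand-in "real" data and the step count are arbitrary placeholders for illustration, not a working deepfake generator. The key idea it shows is the adversarial pairing: a generator fabricates samples from random noise while a discriminator learns to separate real samples from fakes, and each network improves by competing with the other.

    # Minimal GAN sketch (PyTorch assumed; all sizes are arbitrary placeholders).
    import torch
    import torch.nn as nn

    # Generator: random noise in, fake sample out.
    generator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
    # Discriminator: sample in, probability of being "real" out.
    discriminator = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1), nn.Sigmoid())

    loss_fn = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    for step in range(1000):
        real = torch.randn(32, 784)   # stand-in for a batch of real training data
        fake = generator(torch.randn(32, 64))

        # Train the discriminator: label real samples 1 and fakes 0.
        d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
                 loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Train the generator: push the discriminator to label fakes as real.
        g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()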

Common types of deepfakes and the motivations behind them

Various deepfake creation communities operate online, connecting deepfake experts with those who want to create synthetic media. Here are some common types of deepfakes and how threat actors use them.

Pornographic deepfakes

Deepfake pornography accounts for the vast majority of deepfake videos. Victims are typically women from a range of professions, and once posted, non-consensual deepfake pornography can circulate indefinitely.

Political deepfakes

An individual or group with a particular political ideology could seek to disrupt an election by using deepfake video or audio to attack an opposing party.

Political leaders around the globe have already been targeted, and the threat goes beyond elections. Impersonations of political leaders and high-ranking military personnel could lead to geopolitical conflict.

Deepfakes for financial crimes

For several years hackers have convinced victims to transfer funds to false accounts, typically by using emails impersonating CEOs and other business leaders.

There is now evidence that hackers have progressed to using synthetic audio to execute the same crime. Criminals could expand on this method to manipulate stock prices, for example by impersonating a CEO announcing false information.

Deepfakes for extortion and harassment

Individuals with grudges could attack others with deepfake technology in both personal and business environments. The outcomes of divorce proceedings, job applications and vendor bidding competitions could all be affected.

What can be done about deepfakes?

No single person, entity or technology solution can control the creation and distribution of digital content on an end-to-end basis. Its lifecycle is facilitated by a combination of people, hardware and software, and it lives in cyberspace, an environment designed for sharing information quickly and easily, including deepfake videos and audio. Once content is shared on the internet, it can be extremely difficult, if not impossible, to remove.

The Australian Government's eSafety Commissioner notes that while deepfake technology is advancing rapidly, some signs can help identify fake photos and videos.2

These include:

  • blurring, cropped effects or pixelation (small box-like shapes), particularly around the mouth, eyes and neck
  • skin inconsistency or discolouration
  • inconsistency across a video, such as glitches, sections of lower quality and changes in the lighting or background
  • badly synced sound
  • blinking or movement that seems unnatural or irregular
  • gaps in the storyline or speech.

If in doubt, question the context. Ask yourself if it's what you'd expect that person to say or do, in that place, at that time.
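
Some of these signs also lend themselves to crude automated screening. The sketch below is a rough illustration in Python; it assumes the OpenCV library is installed, and the file name "suspect.mp4" and the threshold value are placeholders. It flags abrupt frame-to-frame changes in average brightness, one simple proxy for the lighting inconsistencies listed above, and is no substitute for human judgment or purpose-built detection tools.

    # Illustrative sketch only: flag abrupt frame-to-frame brightness jumps,
    # a rough proxy for lighting inconsistencies sometimes seen in deepfakes.
    # Assumes OpenCV (pip install opencv-python); "suspect.mp4" is a placeholder.
    import cv2
    import numpy as np

    def flag_lighting_jumps(video_path, threshold=15.0):
        """Return indices of frames whose mean brightness jumps by more than threshold."""
        cap = cv2.VideoCapture(video_path)
        flagged, prev_mean, idx = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            mean = float(np.mean(gray))
            if prev_mean is not None and abs(mean - prev_mean) > threshold:
                flagged.append(idx)
            prev_mean, idx = mean, idx + 1
        cap.release()
        return flagged

    if __name__ == "__main__":
        print(flag_lighting_jumps("suspect.mp4"))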

Transferring the deepfake risk

The cyber insurance industry is evolving as new cyber threats surface. The most comprehensive policies pay for data breach crisis management, including lawyers, IT forensics investigators, credit monitoring services and public relations experts. They may also reimburse their clients for defending and settling lawsuits.

However, many policies require specific conditions to trigger coverage, and damage caused by impersonation in a deepfake video or audio may not be covered. In view of the latest deepfake threats, there are three potential losses to consider when negotiating insurance cover.

  • Lost funds. A deepfake social engineering scam resulting in unauthorised funds transfer can lead to immediate and significant financial harm.
  • Business interruption and other costs. Your focus on addressing a deepfake impersonation and attempting to manage the crisis could lead to financial loss and unexpected costs.
  • Reputational harm. Impersonation may lead to both near-term and long-term reputational harm to your brand and ultimately impact your bottom line.

Read your cyber insurance policy carefully, explore other policies and consult your broker for advice on managing the deepfake threat. In addition to cyber insurance protection, Gallagher offers expertise, advice and resources for building business resilience to withstand cyber security incidents.

Sources

1. Nelson, Jason. "FBI Warns of AI Deepfake Extortion Scams," Decrypt, 5 Jun 2023.

2"Deepfake Trends And Challenges — Position Statement," Australia Government eSafety Commissioner, 23 Jan 2022.


Disclaimer

Gallagher provides insurance, risk management and benefits consulting services for clients in response to both known and unknown risk exposures. When providing analysis and recommendations regarding potential insurance coverage, potential claims and/or operational strategy in response to national emergencies (including health crises), we do so from an insurance and/or risk management perspective, and offer broad information about risk mitigation, loss control strategy and potential claim exposures. We have prepared this commentary and other news alerts for general information purposes only and the material is not intended to be, nor should it be interpreted as, legal or client-specific risk management advice. General insurance descriptions contained herein do not include complete insurance policy definitions, terms and/or conditions, and should not be relied on for coverage interpretation. The information may not include current governmental or insurance developments, is provided without knowledge of the individual recipient's industry or specific business or coverage circumstances, and in no way reflects or promises to provide insurance coverage outcomes that only insurance carriers control.

Gallagher publications may contain links to non-Gallagher websites that are created and controlled by other organisations. We claim no responsibility for the content of any linked website, or any link contained therein. The inclusion of any link does not imply endorsement by Gallagher, as we have no responsibility for information referenced in material owned and controlled by other parties. Gallagher strongly encourages you to review any separate terms of use and privacy policies governing use of these third party websites and resources.

Insurance brokerage and related services to be provided by Arthur J. Gallagher & Co (Aus) Limited (ABN 34 005 543 920). Australian Financial Services Licence (AFSL) No. 238312.