Author: John Farley
Edgar Allan Poe once warned us to "Believe half of what you see and nothing of what you hear." His advice was ahead of its time in light of the recent proliferation of what is known as deepfake technology.
Deepfakes are media — images, text, audio and video — altered or generated to make it appear that people did or said things they never actually did or said. Threat actors use artificial intelligence (AI) and deep-learning techniques to manipulate real media to create synthetic media.
Deepfake technology has been available since at least 2017. Since then, we've seen rapid improvement in the technical quality of deepfakes and easier access to the tools used to create them. In 2023, popular generative AI platforms such as Midjourney 5.1 and OpenAI's DALL-E 2 emerged as widely available tools for threat actors to conduct deepfake campaigns.1
As this technology has evolved, so too have the criminal tactics to exploit it. Threat actors are using it to create synthetic media that can be used in a variety of destructive ways, creating a new and frightening reality in the 2023 cyber threat landscape.
Key deepfake technologies
Face replacement — also called face swap — is a deepfake strategy that copies a facial image and places it on another body.
Face generation creates facial images that don't exist in reality.
Speech synthesis uses AI to create realistic human speech.
Generative adversarial networks (GANs) use deep-learning methods to learn the patterns in real data — such as the contents of real videos — and then use those patterns to create new content.
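The adversarial idea behind GANs can be sketched in a few lines of Python (assuming NumPy is available). This is a deliberately tiny toy: a one-dimensional generator learns to mimic data drawn from a Gaussian, standing in for the images or audio a real GAN would model, while a logistic discriminator tries to tell real samples from fake ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Clip to keep np.exp well-behaved for large inputs.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

def real_batch(n):
    # "Real" data: samples from N(4, 1). The generator must learn to mimic them.
    return rng.normal(4.0, 1.0, size=(n, 1))

# Generator: z -> w_g * z + b_g (an affine map from random noise to a fake sample).
w_g, b_g = 1.0, 0.0
# Discriminator: logistic regression, x -> sigmoid(w_d * x + b_d).
w_d, b_d = 0.1, 0.0

lr, n = 0.05, 64
for step in range(2000):
    # --- Train the discriminator to label real samples 1 and fakes 0.
    z = rng.normal(size=(n, 1))
    fake = w_g * z + b_g
    real = real_batch(n)
    p_real = sigmoid(w_d * real + b_d)
    p_fake = sigmoid(w_d * fake + b_d)
    # Gradients of binary cross-entropy w.r.t. the discriminator parameters.
    grad_w_d = np.mean((p_real - 1) * real) + np.mean(p_fake * fake)
    grad_b_d = np.mean(p_real - 1) + np.mean(p_fake)
    w_d -= lr * grad_w_d
    b_d -= lr * grad_b_d

    # --- Train the generator to make the discriminator call its fakes real.
    z = rng.normal(size=(n, 1))
    fake = w_g * z + b_g
    p_fake = sigmoid(w_d * fake + b_d)
    # Non-saturating generator loss -log(p_fake), chained back to w_g and b_g.
    grad_fake = (p_fake - 1) * w_d
    w_g -= lr * np.mean(grad_fake * z)
    b_g -= lr * np.mean(grad_fake)

print(f"generator mean output: {b_g:.2f} (real data mean is 4.0)")
```

After training, the generator's output mean drifts from 0 toward the real data's mean of 4 — the same pressure that, at vastly larger scale and with deep networks in place of these affine maps, pushes GAN-generated faces and voices toward realism.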
Common types of deepfakes and the motivations behind them
Various deepfake creation communities operate online, connecting deepfake experts with those who want synthetic media created. Here are common types of deepfakes and how threat actors use them.
Deepfake pornography
Deepfake pornography accounts for the vast majority of deepfake videos. Victims are typically women from a wide range of professions, including actors, musicians, politicians and business leaders. Once shared, non-consensual deepfake pornography can circulate indefinitely.
Deepfakes for political disruption
An individual or group with a particular political ideology could seek to disrupt an election by using deepfake video or audio to attack an opposing party. Political leaders can be impersonated and placed in compromising situations that could sway election results in favor of the opposing candidate, undermining our democratic process in ways we haven't seen before.
Political leaders around the globe have already been targeted, and we expect the stakes to be raised with upcoming elections. Moreover, the threat goes beyond elections: Impersonations of political leaders and high-ranking military personnel could lead to geopolitical conflict.
Deepfakes for financial crimes
For several years, hackers have exploited our natural tendency to trust others, convincing victims to transfer funds to accounts the hackers control. This fraud was typically carried out using emails impersonating CEOs and other business leaders.
We now have evidence that hackers have evolved to use synthetic audio to execute the same crime. As far back as 2019, the CEO of an unnamed UK energy company was duped by a phone call impersonating the chief executive of its German parent company. The caller's German accent was convincing and his speaking style familiar, leading the victim to transfer $243,000 that was ultimately stolen by the criminals.2
Criminals could expand upon this method by impersonating business leaders to manipulate stock prices. A convincing clip of a CEO announcing false information could move a stock price significantly, and the criminals behind the attack could position themselves in advance to profit from the anticipated swing. As recently as May 2023, a deepfake of Tesla CEO and Twitter owner Elon Musk, created from previous interviews in an effort to manipulate crypto investors, went viral on social media.3
Deepfakes for extortion and harassment
Individuals with personal vendettas could attack others with deepfake technology, gaining an unfair advantage in both personal and business disputes. The outcomes of divorce proceedings, job applications and vendor bidding competitions could all be affected.
What can be done about deepfakes?
The reality is that no single person, entity or technology solution can control the creation and distribution of digital content on an end-to-end basis. Its lifecycle is facilitated by a combination of people, hardware and software. It lives in cyberspace — a free space designed for easily and quickly sharing information, which unfortunately includes deepfake videos and audio.
Malicious actors can capture, manipulate and distribute images, videos or personal information posted online without the owner's knowledge or consent. Once content is shared on the internet, it can be extremely difficult, if not impossible, to remove.
The threats deepfakes pose — and the real-world examples of how that threat has begun to manifest — have gained the attention of several leaders in government, private sector technology and academia. According to a recent FBI public service announcement, there's been an uptick in deepfake-based extortion scams of both minor children and non-consenting adults. In that public service announcement, the FBI issued the following best practices to help manage risks associated with deepfake technology:4
- Monitor children's online activity and discuss risks associated with sharing personal content.
- Use discretion when posting images, videos, and personal content online, particularly those that include children or their information.
- Run frequent online searches of your and your children's information (full name, address, phone number, etc.) to help identify the exposure and spread of personal information on the internet.
- Apply privacy settings on social media accounts — including making profiles and friend lists private — to limit the public exposure of your photos, videos and other personal information.
- Consider using reverse image search engines to locate any photos or videos that have circulated on the internet without your knowledge.
- Exercise caution when accepting friend requests, communicating, engaging in video conversations or sending images to individuals you don't know personally. Be especially wary of individuals who immediately ask for or pressure you to provide images or videos. Those items could be screen-captured, recorded, manipulated, shared without your knowledge or consent and used to exploit you or someone you know.
- Don't provide any unknown or unfamiliar individuals with money or other items of value. Complying with malicious actors doesn't guarantee your sensitive photos or content won't be shared.
- Use discretion when interacting with known individuals online who appear to be acting outside their normal pattern of behavior. Malicious actors can easily manipulate hacked social media accounts to gain trust from friends or contacts to further criminal schemes or activity.
- Secure social media and other online accounts using complex passwords or passphrases and multi-factor authentication (MFA).
- Research the privacy, data sharing and data retention policies of social media platforms, apps and websites before uploading and sharing images, videos or other personal content.
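The reverse image search engines mentioned above typically work by reducing each image to a compact "perceptual hash" and comparing hashes by the number of differing bits, so that resized or lightly edited copies of a photo still match. The sketch below implements one common scheme, difference hashing (dHash), on a tiny grid of grayscale values standing in for a decoded image; a real system would first decode and shrink the image file (e.g. with an imaging library such as Pillow).

```python
def dhash(pixels):
    """Hash a grid of grayscale values: one bit per horizontally adjacent
    pair of pixels, set when the left pixel is brighter than its neighbor."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(h1, h2):
    """Count differing bits; a small distance means visually similar images."""
    return sum(a != b for a, b in zip(h1, h2))

# A 4x4 "image" and a uniformly brightened copy (every pixel +10).
# Brightening preserves the left/right gradients, so the hashes match exactly.
img = [[10, 20, 30, 40],
       [40, 30, 20, 10],
       [10, 20, 30, 40],
       [40, 30, 20, 10]]
brighter = [[p + 10 for p in row] for row in img]
# An unrelated pattern produces a distant hash.
other = [[90, 10, 80, 20],
         [10, 90, 20, 80],
         [90, 10, 80, 20],
         [10, 90, 20, 80]]

print(hamming(dhash(img), dhash(brighter)))  # 0: recognized as the same image
print(hamming(dhash(img), dhash(other)))     # 8: clearly different content
```

This robustness to small edits is exactly what makes such searches useful for locating copies of your photos that are circulating without your knowledge.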
Transferring the deepfake risk
The Cyber insurance industry has been known to rise to the occasion when new cyber threats surface. The most comprehensive policies pay for data breach crisis management, including privacy attorneys, IT forensics investigators, mailing and call centers, credit monitoring services and public relations experts. They may also reimburse their clients for ransom payments to hackers and cover the cost to defend and settle lawsuits that result from network intrusions.
However, many policies require specific conditions to trigger coverage, and damage caused by impersonation in a deepfake video or audio may not be covered. That said, the Cyber insurance marketplace has always adapted to evolving cyber risk and often creates products that can help transfer the risks of modern cyber perils.
In light of the latest deepfake threats, three potential losses come to mind that you should consider when negotiating your insurance policies:
- Lost funds. A deepfake social engineering scam resulting in unauthorized funds transfer can lead to immediate and significant financial harm.
- Business interruption and other costs. Your focus on addressing a deepfake impersonation and attempting to manage the crisis could impact normal business activity and lead to financial loss and unexpected costs.
- Reputational harm. Impersonation may lead to both near-term and long-term reputational harm to your brand and ultimately impact your bottom line.
Read your Cyber insurance policy carefully, explore other policies and consult your broker for advice on managing the deepfake threat.