Edgar Allan Poe once warned us: “Believe half of what you see and nothing of what you hear.” His advice was ahead of its time in light of the recent proliferation of what is known as deep fake technology. As the technology has evolved, so too have the criminal tactics that exploit it. Threat actors are using it to create synthetic media that can be deployed in a variety of destructive ways, creating a new and frightening reality.

Criminal Motives and How They Create It

Various deep fake creation communities can be found online, connecting deep fake experts with those who want content made. Several motives drive this threat.

  • Pornography & Deep Nudes: Deep fake pornography accounts for the vast majority of deep fake videos. Targets are typically women from a wide range of professions, including actors, musicians, politicians and business leaders. An app called DeepNude, released in 2019, allowed users to upload a photo of a clothed person and generate a realistic nude image. The result is non-consensual deep fake pornography that can be shared indefinitely.

  • Political Agendas: An individual or group with a particular political ideology could seek to disrupt an election by attacking an opposing party with a deep fake video or audio recording. Political leaders can be impersonated and placed in compromising situations that could sway election results in favor of the opposing candidate, undermining our democratic process in ways we have not seen before. President Donald Trump, U.S. Speaker of the House Nancy Pelosi, Malaysian Minister of Economic Affairs Azmin Ali, and Gabonese President Ali Bongo have already been targeted, and we expect the stakes to be raised with the upcoming U.S. presidential election. The threat goes beyond elections: impersonations of political leaders and high-ranking military personnel could lead to geopolitical conflict.

  • Financial Crimes: For several years, hackers have exploited our natural tendency to trust others by convincing victims to transfer funds to accounts the hackers control, typically via emails impersonating CEOs and other business leaders. There is now evidence that hackers have evolved to using synthetic audio to execute the same crime. In March 2019, the CEO of an unnamed UK energy company received a phone call impersonating his boss at the firm’s German parent company. The caller’s German accent and familiar speaking style were convincing, leading the victim to transfer $243,000 that was ultimately stolen by the criminals.1 Attackers could expand on this method by impersonating business leaders to manipulate stock prices: a convincing clip of a CEO announcing false information could move a stock significantly, and the criminals behind the attack could profit from the anticipated price swing.

  • Extortion & Harassment: Individuals with personal vendettas could attack others with deep fake technology to gain an advantage in both personal and business disputes. The outcomes of divorce proceedings, job applications and vendor bidding competitions could all be affected.

What Can Be Done About It

The reality is that no single person, entity or technology solution can control the creation and distribution of digital content end to end. Its life cycle is facilitated by a combination of people, hardware and software, and it lives in a cyberspace originally designed for the free, fast and easy sharing of information, which unfortunately now includes deep fake videos and audio.

The threats posed by deep fakes, and the real-world examples of how they have begun to manifest, have gained the attention of leaders in government, private-sector technology and academia. In 2019, the Deeptrust Alliance was formed to assemble a diverse coalition of experts to address these growing concerns and create solutions. Deploying the latest detection technology, formulating public policy and educating the public are all being leveraged to fight back. Conceptually, a framework could be built that identifies new content, authenticates it and, if necessary, intercepts fake digital artifacts while educating the public hungry to consume them. That is easier said than done, and more work is needed to make such a framework sound.

Transferring the Deep Fake Risk

The cyber insurance industry has a track record of rising to the occasion as new cyber threats become known. The most comprehensive policies pay for data breach crisis management costs, including privacy attorneys, IT forensics investigators, mailing and call centers, credit monitoring services and public relations experts. They may also reimburse clients for ransom payments to hackers and cover the cost to defend and settle lawsuits that result from network intrusions. However, many policies require specific conditions to trigger coverage, and damage caused by mere impersonation in the form of a deep fake video or audio recording may not be covered. That said, the cyber insurance marketplace remains competitive and therefore flexible.

In light of the latest deep fake threat, three potential losses come to mind and should be considered when negotiating your insurance policies:

  • Reputational Harm—Impersonation may cause both near-term and long-term harm to your brand’s reputation and ultimately impact your bottom line.
  • Lost Funds—A deep fake social engineering scam that results in an unauthorized funds transfer can cause immediate and significant financial harm.
  • Business Interruption and Other Costs—Addressing a deep fake impersonation and managing the resulting crisis could disrupt normal business activity and lead to financial loss and unexpected costs.

Read your cyber insurance policy carefully, explore other policies and consult your broker for advice to manage the deep fake threat.

1. Source: https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/#1a08f76c2241