Artificial Intelligence is now an everyday tool at people’s disposal; its usefulness and potential impact depend on the user’s intentions. In the cyber world, this typically falls into two categories: aiding cybercrime and protecting against it.

Author: Johnty Mongan

In this article, we explore:

  • How AI is supercharging common cyber-attacks.
  • AI-generated content and the tell-tale signs of a deepfake.
  • AI as a tool to enhance cybersecurity.
  • Strategies to mitigate AI-powered threats.

AI as the hacker’s sidekick

Hackers are always keen to add more sophisticated weapons to their armoury, and AI tools are sharpening their techniques. Here are the key types of attack where AI is making life easier for cybercriminals.

Ransomware attacks: Cybercriminals can use AI to automate the identification of vulnerable systems, select potential targets, and optimise the encryption process — significantly increasing the scale and efficiency of their attacks. The use of AI is also making cloud infrastructure more vulnerable to threats like data exfiltration. Furthermore, AI algorithms can analyse victims’ behaviour and tailor ransom demands accordingly.

Social engineering: AI algorithms can analyse large amounts of data to make phishing emails, chatbots, and other message-based communication seem more authentic than ever before. Messages, responses, and tactics can be tailored to each target, increasing the chances of successful exploitation.

Credential stuffing: Credential stuffing is a cyber-attack in which account credentials stolen from one breach are tried against other services. AI algorithms automate the process of testing these stolen username and password combinations across multiple platforms, allowing cybercriminals to quickly identify valid credentials and gain unauthorised access to user accounts.
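Rather than illustrating the attack itself, the sketch below shows one way defenders might spot its signature: a single source failing logins against many distinct accounts in a short window. It is a minimal Python illustration only; the event format, window length, and threshold are assumptions, not a standard.

```python
# Minimal sketch: flag possible credential stuffing by spotting one source IP
# failing logins across many *distinct* accounts in a short window.
# The event format and thresholds are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # sliding window (assumed)
MAX_DISTINCT_USERS = 10         # distinct accounts per IP before alerting (assumed)

def flag_stuffing(events):
    """events: iterable of (timestamp, source_ip, username, success) tuples."""
    attempts = defaultdict(list)  # ip -> [(timestamp, username), ...]
    flagged = set()
    for ts, ip, user, success in sorted(events):
        if success:
            continue  # only failed logins feed the detector
        attempts[ip].append((ts, user))
        # keep only the attempts that fall inside the sliding window
        attempts[ip] = [(t, u) for t, u in attempts[ip] if ts - t <= WINDOW]
        if len({u for _, u in attempts[ip]}) >= MAX_DISTINCT_USERS:
            flagged.add(ip)
    return flagged

# Example: one IP hammering many different accounts within a minute.
now = datetime(2024, 4, 15, 12, 0)
events = [(now + timedelta(seconds=i), "203.0.113.7", f"user{i}", False)
          for i in range(12)]
print(flag_stuffing(events))  # {'203.0.113.7'}
```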

Malware: AI algorithms can analyse security systems, identify vulnerabilities, and modify malware code to evade detection. This allows cybercriminals to launch sophisticated attacks that can bypass traditional antivirus software and intrusion detection systems, making malware more difficult to protect against.

Distributed denial of service (DDoS) attacks: AI-powered botnets can orchestrate large-scale DDoS attacks, leveraging AI algorithms to identify and exploit vulnerabilities in target systems and coordinating a massive influx of traffic to overwhelm servers and disrupt services. AI also enables attackers to dynamically adjust attack patterns, making the impact harder for defenders to mitigate. Businesses should prioritise investing in AI-based detection software and regular threat intelligence updates to stay ahead of emerging threats.
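As a flavour of the defensive side, the sketch below flags a sudden surge in request rate against a rolling baseline. Real AI-driven DDoS detection is far more sophisticated; the window size and threshold here are illustrative assumptions.

```python
# Minimal sketch: flag a sudden traffic surge using a rolling mean/std "z-score".
# Window size and threshold are illustrative assumptions, not tuned values.
from collections import deque
from statistics import mean, stdev

def detect_surges(requests_per_second, history=60, threshold=4.0):
    """Yield indices where traffic deviates sharply from the recent baseline."""
    window = deque(maxlen=history)
    for i, rate in enumerate(requests_per_second):
        if len(window) >= 10:  # need some baseline before judging (assumed minimum)
            mu, sigma = mean(window), stdev(window)
            if sigma > 0 and (rate - mu) / sigma > threshold:
                yield i
        window.append(rate)

# Example: steady ~100 req/s, then a burst typical of a volumetric attack.
traffic = [100 + (i % 5) for i in range(60)] + [5000, 8000, 9000]
print(list(detect_surges(traffic)))  # [60, 61, 62] — the burst samples
```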

AI-generated content: spotting the deepfakes

AI-generated fake content — whether text, audio, or video — is a growing problem for organisations, with the main concerns including misinformation, disinformation, and impersonation of executives.

Impersonation will often start with an audio deepfake of a respected individual within the company. The perpetrator, posing as this person, initiates contact through web conferencing or voicemail and proceeds to employ social engineering tactics such as business email compromise or dynamic voice manipulation. By creating a sense of urgency, they can coerce employees into transferring funds or divulging sensitive information.

Similarly, deepfake videos are becoming more sophisticated and believable, taking this risk to a new level. In one example, a finance worker at a multinational firm believed a video conference to be legitimate because the CFO and everyone else in attendance looked and sounded like known colleagues. He was duped into making a $25 million fraudulent payment.¹

Organisations must urge employees to be alert when receiving urgent calls or messages and seek clarification for payment or data requests if in doubt.

How to spot a deepfake

Audio

  • Flat/emotionless voice
  • Longer than normal pauses between words or sentences
  • Unnatural phrases or speech patterns
  • Strange pronunciation
  • Glitches in the recording

Video

  • Long periods without blinking
  • Poor lip syncing
  • Flickering or blurriness around the jawline
  • Patchy/uneven skin tones (especially a difference between the face and ears)
  • Strange lighting or misplaced shadows

AI-powered cybersecurity and tools

On a positive note, AI can be harnessed to improve cybersecurity by continuously learning from threats and updating cyber threat intelligence.

It can assist in tasks including writing patch code and providing insights on Common Vulnerabilities and Exposures (CVEs). AI-enhanced cybersecurity tools, such as intrusion detection and prevention software, network security monitoring, user behaviour analytics, and phishing protection, are all assets businesses should consider in the fight against cybercrime.
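To give a flavour of what user behaviour analytics involves, the sketch below trains an unsupervised anomaly detector (scikit-learn’s IsolationForest) on synthetic “normal” session features and flags an out-of-pattern session. The features and parameters are illustrative assumptions, not a recommended configuration.

```python
# Minimal sketch of AI-style user behaviour analytics: learn a baseline of
# normal activity, then flag sessions that fall outside it.
# Features (login hour, MB transferred, failed logins) are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic baseline: office-hours logins, modest transfers, few failures.
normal = np.column_stack([
    rng.normal(10, 2, 500),    # login hour of day
    rng.normal(50, 15, 500),   # MB transferred per session
    rng.poisson(0.2, 500),     # failed logins per session
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 3 a.m. session moving 900 MB after 6 failed logins should stand out.
suspicious = np.array([[3, 900, 6]])
print(model.predict(suspicious))  # expected [-1], i.e. flagged as anomalous
```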

By automating the detection of, and defence against, AI-driven attacks, AI can also help alleviate skills gaps and talent shortages in cybersecurity.

Mitigating organisational threats from AI

One of the first things we tell our clients is to treat AI like a stranger to your business. As useful as the tool may be, it can bring unforeseen threats, so it is vital to ensure the organisation has the correct usage and policy frameworks in place.

It is important to ensure cloud networks are configured to the highest security standards and that Multi-Factor Authentication (MFA) is used across the organisation, along with regular threat intelligence updates. AI training for end users and IT staff is a must, covering topics like fake content, phishing, and using AI securely and appropriately.
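For illustration, the sketch below shows the time-based one-time password (TOTP) flow that underpins many MFA authenticator apps, using the open-source pyotp library. It is a minimal demonstration under assumed conditions, not a deployment guide.

```python
# Minimal sketch of the TOTP flow behind many MFA authenticator apps,
# using the pyotp library (pip install pyotp). Illustration only.
import pyotp

secret = pyotp.random_base32()   # provisioned once per user and stored securely
totp = pyotp.TOTP(secret)        # generates 30-second time-based codes

code = totp.now()                # the code the user's authenticator app displays
print(totp.verify(code))         # True: the code matches the current time window
print(totp.verify("000000"))     # False (except by rare coincidence)
```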

AI is a fast-evolving area of cybersecurity and can seem daunting. One way in which Gallagher is helping organisations strengthen their cybersecurity is through Gallagher’s Cyber Defence Centre, a suite of services including vulnerability scanning, threat intelligence webinars, access to a virtual CISO and more. This is an ongoing package of support and is available here to explore as a one-month free trial*.

We can also conduct an open-source intelligence search to double-check what is currently known about your organisation’s network and potential vulnerabilities. Please contact us for details.

Sources

¹ Chen, Heather and Kathleen Magramo, “Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’.” CNN, 4 February 2024.


Disclaimer

*Terms and conditions apply. Promotional Period: 00:00 15 April 2024 to 23:59 15 April 2026. Open to businesses based in the United Kingdom and the United States of America who do not currently have a CDC subscription and have not already received a free trial. You can access the free trial via the link or email cyberRM@ajg.com. Full terms and conditions can be found here.

The sole purpose of this article is to provide guidance on the issues covered. This article is not intended to give legal advice, and, accordingly, it should not be relied upon. It should not be regarded as a comprehensive statement of the law and/or market practice in this area. We make no claims as to the completeness or accuracy of the information contained herein or in the links which were live at the date of publication. You should not act upon (or should refrain from acting upon) information in this publication without first seeking specific legal and/or specialist advice. Arthur J. Gallagher Insurance Brokers Limited accepts no liability for any inaccuracy, omission or mistake in this publication, nor will we be responsible for any loss which may be suffered as a result of any person relying on the information contained herein.

Arthur J. Gallagher Insurance Brokers Limited is authorised and regulated by the Financial Conduct Authority. Registered Office: Spectrum Building, 55 Blythswood Street, Glasgow, G2 7AT. Registered in Scotland. Company Number: SC108909. FP784-2024. Exp. 16.05.2025.