What is AI’s place in the modern world, and are businesses ready for it?

Every Gallagher cyber client is asking the same questions: is AI secure, and how do we use it safely? In late January, Gallagher held its inaugural State of the Nation Cyber webinar: Are you ready for AI? Johnty Mongan, Global Head of Cyber Risk Management, and Georgia Price-Hunt, Head of Sales, Global Cyber Risk Management, were tasked with exploring these hot-topic questions.

Firstly, it’s essential to define what AI actually is: the ability of a computer or a computer-controlled robot to perform tasks commonly associated with intelligent beings. We must also acknowledge that technology has been replacing human tasks for some time.

AI: A recent timeline

2010: Microsoft launched the Xbox 360 Kinect, the first gaming hardware designed to track body movement and translate it into gaming directions.

2011: Apple released Siri, the first popular virtual assistant.

2015: Elon Musk, Stephen Hawking, and Steve Wozniak (along with over 3,000 others) signed an open letter urging the world's governments to ban the development — and use — of autonomous weapons for warfare.

2019: Google DeepMind's AlphaStar reached Grandmaster level in the video game StarCraft II, outperforming all but 0.2% of human players.

2020: OpenAI started beta testing GPT-3, a model that uses deep learning to generate code, poetry, and other written content. While not the first of its kind, it was the first to create content almost indistinguishable from that created by humans.

AI is changing our world; its benefits and potential are breathtakingly immense and best considered through an industry lens. In transport, we are tantalisingly close to autonomous vehicles; in manufacturing, AI is enhancing and improving the revolutionary 3D printing process; in education, it is changing the way humans of all ages learn; and in life sciences, it is integral to lifesaving early disease diagnosis, accelerating drug discovery, and identifying candidates for clinical trials.

From a business point of view, organisations have started using AI for functional admin tasks such as providing meeting summaries from transcripts and summarising key points from email chains so the recipient can quickly understand the issues at play without trawling through 15 emails.

When attendees were asked what forms of AI they were currently using, ChatGPT was the clear frontrunner, with a few mentions of Snapchat AI and Zoom AI. Participants' biggest concerns included the risk of misinformation, reduced human contact, and how to manage data privacy.

These are valid fears, but the biggest exposure businesses could face is failing to deliver basic AI training to staff. Employers may feel they are still developing their AI strategy, but waiting too long to communicate increases the risk of employees using AI in ways that could damage the business. It is better, therefore, to bring the workforce with you on the journey.

How are cybercriminals using AI?

According to the National Cyber Security Centre (NCSC), AI will almost certainly increase the volume and heighten the impact of cyber attacks over the next two years.1

AI lowers the barrier for would-be cybercriminals to carry out effective access and information-gathering operations. The NCSC predicts this enhanced access will likely contribute to the global ransomware threat over the next two years.

Large Language Models (LLMs) like ChatGPT are already changing the threat landscape regarding phishing. Criminals no longer have to overcome language barriers or grammar shortcomings – which previously provided valuable tell-tale signs for businesses.

Voicemail phishing is also on the increase thanks to AI, as bad actors can use generative AI to clone the voice of a trusted colleague and create deepfake audio. Spear phishing refers to attacks in which social engineering is used to gather information on a specific target, and AI makes this process more effective, more efficient, and harder to detect.

Concerns have been raised about LLMs' capability to write malicious code. Although ChatGPT has safeguards against performing illegal tasks, it can still generate batch scripts that can be adapted for malicious use. Numerous other AI tools in development could generate malicious code.

What are Cyber Risk Managers using it for?

Fortunately, AI can also be a force for good in terms of cybersecurity and is already used to conduct vulnerability scans, analyse logs, and detect threats. It is generally faster, can handle huge volumes of data, and can generate considerable cost savings as it limits the need for human intervention. Other benefits include strengthening access control and passwords and minimising and prioritising risks. According to Pinsent Masons, AI can also predict new threats.2 In terms of preventing phishing attacks, AI can detect emails with suspicious characteristics, block spam, and identify bots.

What business risks might AI pose?

As with any new technology, there is potential for both deliberate and accidental misuse of AI by companies. If AI is being used, how much thought has gone into where the input data ends up? Take translation, for example – AI is excellent for translating documentation into different languages. However, if businesses are feeding confidential information into AI, what happens to it? The LLM will be learning from it and possibly reproducing this information elsewhere, and there is also the future risk that company data is hacked or sold to a third party. There is currently no real regulation relating to this exposure, which could lead to long-tail claims.

How will the threat landscape change in the future?

Moving towards 2025 and beyond, the NCSC believes the commoditisation of AI technology in criminal and commercial markets will almost certainly mean improved capability is more widely available to cybercrime and state actors.

In terms of cybersecurity, we can expect AI to drive more sophisticated security systems, capable of detecting increasingly complex threats in real-time, hopefully reducing the success rate of phishing emails or impersonation. However, businesses will need to invest in AI expertise and training to ensure that the potential of any new technologies is recognised while being properly integrated and monitored.

The use of AI has the potential to bring significant benefits to businesses, and it's exciting to think about how it could enhance global business practices and improve people's lives. Despite the risks associated with AI, Gallagher believes that its capacity for innovation and growth outweighs these threats. However, to fully leverage its power, it is crucial to understand and mitigate any potential risks.

Sign up to the State of the Nation webinar.


1 The near-term impact of AI on the cyber threat, National Cyber Security Centre, 24 January 2024

2 Insights from our Cyber team; Annual Report 2024, Pinsent Masons, accessed 28 February 2024