AI is now accessible to all, but how can you (safely) take advantage of it?

ChatGPT and Google Bard are the two most well-known large language models (LLMs) currently available. LLMs have been described as endless libraries with AI as the librarian, scouring knowledge at a rate never before conceived – months of research can now be condensed into seconds, and to say the potential is ground-breaking is something of an understatement.

AI adoption could eventually drive a 7% (almost USD 7 trillion) increase in global GDP over a decade, according to Goldman Sachs.1

Nevertheless, AI carries considerable risks, which have been the subject of fierce debate across the media. It has been labelled biased and unpredictable, and much of the technology that drives it remains a mystery to most of us.

AI is also appealing to bad actors. It is now possible to produce vast amounts of content instantly, facilitating the spread of misinformation and opening the door wider to fraud, spam and intellectual property risks.

AI developers themselves have stated there is a need for governments to intervene and mitigate the risks associated with developing AI models.2

Yet the biggest risk may be not learning about this technology and preparing your business to adapt.

The finance sector has used AI for 15 years; rather than replacing jobs, chatbots have helped to improve customer service. A host of companies are now experimenting with AI; Mastercard, for example, has launched an AI training platform for employees, Mastercard Unlocked, based on sharing good practice from its top performers.3

AI tools could be used to improve efficiency across business operations – writing proposals based on internal policies, invoicing, data analysis, customer service, copy editing – the scope is wide, but businesses should proceed with caution.

The pace of change

The advent of email, browsers, search engines and smartphones has certainly helped to shape the workplace as we know it today. Nevertheless, it is hard to argue that any technology in isolation has ever dramatically changed the economy for better or worse.

Major commercial developments like outsourcing and automation have had an undeniable operational impact, but they have not led to the mass unemployment predicted in their infancy.

History suggests that mass adoption of AI in the workplace will take time. The first electronic message was sent in 1965, yet it took until the turn of the century for email to become ubiquitous in the workplace.

In March 2023, the UK government published recommendations for the AI industry which outlined five principles it wants companies to follow: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.4

Practical guidance from regulators is expected within the next year and legislation may be introduced further down the line. Regulation, when it catches up, is likely to be onerous, especially in the public sector, and unions are expected to be active in opposition.

Adapt to survive

Pandora’s box has been opened: like it or not, AI is here to stay, and as tools like ChatGPT and Google Bard continue to learn, it is only set to become more sophisticated. While there has been understandable concern regarding the future of certain roles, technological evolution also paves the way for new jobs – roughly 60% of current jobs in the US did not exist in 1940.5

AI will change the way many of us work, assimilate knowledge and communicate with clients, and businesses should be open to understanding how its various tools may support their operations. Yet a measured response is called for. One of the key concerns regarding AI is the unforeseen consequences of its adoption. Firms that are too hasty to drive efficiencies and change business models may get more than they bargained for.

What could businesses be doing now?

  • Communicate that colleagues should not be entering any confidential information into publicly accessible AI systems
  • Consider updating any relevant information and data policies to include handling rules in relation to AI systems
  • Create an AI working group and invite colleagues to submit user experiences to help build a picture of how AI systems could support the business
  • Outline the tasks where publicly accessible AI systems could be used to aid productivity, so there is a clear line for employees. Asking an AI system to provide a summary on a specific topic or a sample job description, for example
  • Ensure colleagues considering future projects involving AI or machine learning capabilities and confidential/proprietary information are liaising with IT and legal teams to maintain compliance with legal and privacy obligations.

LLMs may have taken decades to reach this point but, since they became publicly accessible, change has been rapid as every industry grapples with how (and whether) AI will align with its operations. As the digital landscape evolves, a proactive approach to cyber risk – including regular training for your employees and directors – can strengthen your defences and leave you better prepared for new or unexpected threats.

Please get in touch if you would like to find out more about how Gallagher’s Cyber Risk Management Practice can help you achieve this for your organisation.


The sole purpose of this article is to provide guidance on the issues covered. This article is not intended to give legal advice, and, accordingly, it should not be relied upon. It should not be regarded as a comprehensive statement of the law and/or market practice in this area. We make no claims as to the completeness or accuracy of the information contained herein or in the links which were live at the date of publication. You should not act upon (or should refrain from acting upon) information in this publication without first seeking specific legal and/or specialist advice. Arthur J. Gallagher Insurance Brokers Limited accepts no liability for any inaccuracy, omission or mistake in this publication, nor will we be responsible for any loss which may be suffered as a result of any person relying on the information contained herein.