Author: John Farley

ChatGPT presents growing cyber risks. Learn how your organisation can mitigate them.

Technology continues to evolve at a rapid pace, and its adoption on a mass scale follows an all-too-familiar trend. The moment a new technology makes headlines, many business leaders seek to integrate it into their products and services. Unfortunately, this rapid incorporation into business plans rarely prioritises the key pillars of cyber risk management.

In fact, many of the most important questions around the increased cyber threats associated with a new technology are often asked only after a threat actor has exploited it, leaving risk managers to scramble for their loss mitigation plans. Regulators eventually catch up with their own requirements, and compliance departments are handed yet another concern to address.

The emergence of ChatGPT is a shining example of this pattern, as we struggle to understand how it may help — or hurt — those who use it. At this point, we're at the initial stages of adoption, with many C-suite executives laying grand plans to streamline business operations with products and services based on artificial intelligence (AI).

However, there are serious open questions as to how AI may impact these very organisations and their clients who are starting to use these products and services. By some accounts, ChatGPT and related products have served to significantly deepen our digital footprint while potentially raising our cyber risk profile, almost in lockstep.

What is ChatGPT?

AI research lab OpenAI launched ChatGPT in November 2022. ChatGPT is a highly sophisticated chatbot with the potential to significantly transform both businesses and personal lives. It uses a generative pre-trained transformer (GPT) that's designed to absorb massive amounts of data on any given subject while providing immediate, human-like output for anyone who asks.

The current ChatGPT threat landscape

As we navigate the 2023 ChatGPT threat environment, we're starting to see evidence of malicious actors plotting attacks via this new threat vector. At this early stage, their efforts appear crude at best, but we foresee a time in the near future when ChatGPT-related exploits may become both sophisticated and widespread.

There's evidence of nefarious activity beginning to surface, including:

  • malware development and phishing emails1
  • a growing number of novice hackers2
  • misinformation campaigns
  • regulatory risk.

As sophisticated hackers find ways to exploit ChatGPT and sell their services on a mass scale, we may see an exponentially greater number of hackers emerge this year and into the future.

Mitigating ChatGPT liabilities: what to do now

Risk management around ChatGPT and artificial intelligence adoption is in its infancy. However, several core principles can help risk managers actively manage the evolving threats related to intellectual property, security and privacy around the development and use of emerging AI tools. According to PwC, best practices include, but are not limited to, the following:3

  • Set generative AI usage policies: Many organisations may seek to integrate generative AI models with their own content, including their intellectual property and other assets. Set policies for the use of generative AI to prevent confidential and private data from entering public systems, and to establish safe and secure environments for generative AI within your business.
  • Focus on data hygiene: Identifying the appropriate data to input into the system will help reduce the risk of losing confidential and private information to an attack.
  • Assess the risk of data bias: AI outputs depend on the quality of the data that's input. Deploy a team to evaluate outputs for any inherent bias. This team may draw on a wide variety of cross-functional departments, including but not limited to IT, legal and marketing.
  • Manage access to generative AI: Privileged access management programs need to identify and limit the individuals permitted to use generative AI for content creation.
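To make the usage-policy, data-hygiene and access-management points above concrete, the sketch below shows one way an organisation might gate and scrub prompts before they reach a public generative AI system. It is a minimal illustration only: the redaction patterns, user list and function names are hypothetical assumptions, not a prescribed or endorsed implementation.

```python
import re

# Illustrative patterns only; a real deployment would tailor these to the
# organisation's own definitions of confidential and private data.
REDACTION_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

# Hypothetical allow-list implementing "manage access to generative AI".
AUTHORISED_USERS = {"alice@example.com", "bob@example.com"}


def scrub_prompt(text: str) -> str:
    """Replace text matching known confidential patterns with placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text


def submit_prompt(user: str, prompt: str) -> str:
    """Check authorisation and scrub a prompt before it would leave the business."""
    if user not in AUTHORISED_USERS:
        raise PermissionError(f"{user} is not authorised to use generative AI")
    return scrub_prompt(prompt)
```

In practice such a gateway would sit between staff and any external AI service, so that policy enforcement happens automatically rather than relying on each individual user.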

Leveraging cyber insurance

Cyber insurance and other insurance policies may help organisations that believe they may be impacted by claims related to the use of emerging technology. Claims arising from specific cyber incidents, cyber attacks or alleged wrongful collection and/or sharing of information — either directly or indirectly through a vendor — may be covered.

Many policies provide access to crisis services, including breach coaches, IT forensics investigators and several other breach response experts. Those with cyber insurance should be mindful of claim reporting obligations, requirements to use insurance panel breach response vendors, evidence preservation and issues that may impact attorney-client privilege.

Organisations should also be aware that rapidly evolving cyber insurance products may affect the scope of coverage. The 2023 cyber insurance market is changing quickly: mounting losses tied to regulatory risk, such as the issues unfolding around the use of new technology, have spurred cyber insurers to limit their exposure in various ways. Sub-limits and coinsurance are often imposed for certain cyber losses. In addition, some carriers have modified cyber policy language to restrict or even exclude coverage for certain incidents that give rise to costs incurred for regulatory investigations, lawsuits, settlements and fines.

Author Information

The author of this article is John Farley, Managing Director — Cyber Liability Practice for Gallagher in the USA. In Australia the Gallagher Cyber Practice is led by Robyn Adcock. Gallagher offers cyber insurance protection to organisations of all sizes along with expertise, advice and resources to help them build cyber resilience and mitigate the impact of cyber incidents.


Gallagher provides insurance, risk management and benefits consulting services for clients in response to both known and unknown risk exposures. When providing analysis and recommendations regarding potential insurance coverage, potential claims and/or operational strategy in response to national emergencies (including health crises), we do so from an insurance and/or risk management perspective, and offer broad information about risk mitigation, loss control strategy and potential claim exposures. We have prepared this commentary and other news alerts for general information purposes only and the material is not intended to be, nor should it be interpreted as, legal or client-specific risk management advice. General insurance descriptions contained herein do not include complete insurance policy definitions, terms and/or conditions, and should not be relied on for coverage interpretation. The information may not include current governmental or insurance developments, is provided without knowledge of the individual recipient's industry or specific business or coverage circumstances, and in no way reflects or promises to provide insurance coverage outcomes, which only insurance carriers control.

Gallagher publications may contain links to non-Gallagher websites that are created and controlled by other organisations. We claim no responsibility for the content of any linked website, or any link contained therein. The inclusion of any link does not imply endorsement by Gallagher, as we have no responsibility for information referenced in material owned and controlled by other parties. Gallagher strongly encourages you to review any separate terms of use and privacy policies governing use of these third party websites and resources.

Insurance brokerage and related services to be provided by Arthur J. Gallagher & Co (Aus) Limited (ABN 34 005 543 920). Australian Financial Services License (AFSL) No. 238312