Author: John Farley
Technology continues to evolve at a rapid pace, and its adoption on a mass scale follows an all-too-familiar trend. The moment a new technology makes headlines, many business leaders seek to integrate it into their products and services. Unfortunately, this rapid appropriation into business plans rarely prioritizes some key pillars of cyber risk management. In fact, many of the most important questions around increased cyber threats associated with new technology often come only after a threat actor has exploited it, leaving risk managers to scramble for their loss mitigation plans. Regulators eventually catch up with their own requirements, and compliance departments are assigned yet one more concern to address.
The emergence of ChatGPT is a shining example of this pattern, as we struggle to understand how it may help — or hurt — those who use it. At this point, we're at the initial stages of adoption, with many C-suite executives laying grand plans to streamline business operations with products and services based on artificial intelligence (AI). However, there are serious open questions as to how AI may impact these very organizations and their clients who are starting to use these products and services. By some accounts, ChatGPT and related products have served to significantly deepen our digital footprint while potentially raising our cyber risk profile, almost in lockstep.
What is ChatGPT?
AI research lab OpenAI launched ChatGPT in November 2022. ChatGPT is a highly sophisticated chatbot with the potential to significantly transform both businesses and personal lives. It uses a generative pre-trained transformer (GPT), a model trained on massive amounts of data across virtually any subject, to provide immediate, human-like responses to anyone who asks.
The current ChatGPT threat landscape
As we navigate the 2023 ChatGPT threat environment, we're starting to see evidence of malicious actors plotting attacks via this new threat vector. At this early stage, their efforts appear crude at best, but we do foresee a time in the near future where ChatGPT-related exploits may become both sophisticated and widespread.
There's evidence of nefarious activity beginning to surface, including:
- Malware development and phishing emails1
- Growing number of novice hackers2
- Misinformation campaigns
- Regulatory risk
As sophisticated hackers find ways to exploit ChatGPT and sell their services on a mass scale, we may see an exponentially greater number of hackers emerge this year and into the future.
Mitigating ChatGPT liabilities: What to do now
Risk management around ChatGPT and artificial intelligence adoption is in its infancy. However, several core principles can help risk managers actively manage the evolving threats related to intellectual property, security and privacy around the development and use of emerging AI tools. According to PwC, some best practices include, but are not limited to, the following:3
- Set generative AI usage policies: Many organizations may seek to integrate generative AI models with their own content, including their intellectual property and other assets. Set policies for the use of generative AI to prevent confidential and private data from entering public systems, and to establish safe and secure environments for generative AI within the business.
- Focus on data hygiene: Identifying the appropriate data to input into the system will help reduce the risk of losing confidential and private information to an attack.
- Assess the risk of data bias: AI outputs depend on the quality of the data that's input. Deploy a team to evaluate outputs for any inherent bias. This team may comprise a wide variety of cross-functional departments, including but not limited to IT, legal and marketing.
- Manage access to generative AI: Privileged access management programs need to identify and limit the individuals permitted to use generative AI for content creation.
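The access-management and data-hygiene practices above can be partially automated before any prompt leaves the organization. The sketch below is purely illustrative — the role names, redaction patterns and function name are assumptions, not anything prescribed by PwC or the source — but it shows how a pre-submission filter might combine a privileged-access check with basic scrubbing of confidential data:

```python
import re

# Hypothetical allowlist illustrating privileged access management:
# only these roles may submit prompts to a public generative AI service.
APPROVED_ROLES = {"marketing_editor", "support_lead"}

# Simple data-hygiene patterns: redact email addresses and long digit
# runs (account or card numbers) before a prompt leaves the organization.
# A real deployment would rely on a dedicated DLP tool instead.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b\d{9,16}\b"), "[REDACTED_NUMBER]"),
]

def sanitize_prompt(role: str, prompt: str) -> str:
    """Enforce the usage policy, then scrub the prompt."""
    if role not in APPROVED_ROLES:
        raise PermissionError(f"role {role!r} is not approved for generative AI use")
    for pattern, replacement in PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt
```

For example, `sanitize_prompt("marketing_editor", "Contact jane.doe@example.com about account 1234567890")` returns `"Contact [REDACTED_EMAIL] about account [REDACTED_NUMBER]"`, while an unapproved role is rejected outright — a small, auditable checkpoint that mirrors the policy, hygiene and access controls listed above.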
Leveraging cyber insurance
Cyber insurance and other insurance policies may help organizations that believe they may be impacted by claims related to the use of emerging technology. Claims arising from specific cyber incidents, cyberattacks or alleged wrongful collection and/or sharing of information — either directly or indirectly through a vendor — may be covered. Many policies provide access to crisis services, including breach coaches, IT forensics investigators and several other breach response experts. Those with cyber insurance should be mindful of claim reporting obligations, requirements to use insurance panel breach response vendors, evidence preservation and issues that may impact attorney-client privilege.
Organizations should also be aware of rapidly evolving cyber insurance products that may affect the scope of coverage. The cyber insurance market is changing quickly in 2023, spurring insurers to use various methods to reduce cascading losses from regulatory risk, such as the issues unfolding around the use of new technology. Sub-limits and coinsurance are often imposed for certain cyber losses. In addition, some carriers have modified cyber policy language to restrict or even exclude coverage for certain incidents that give rise to costs for regulatory investigations, lawsuits, settlements and fines.