Artificial intelligence tools and applications continue to evolve, and one of the latest newcomers is already sparking conversations around cybersecurity.

Author: Johnty Mongan

At first sight, ChatGPT is the chatbot equivalent of a friend who seems to know everything about anything. Ask them a question and you’ll be guaranteed an answer. Share your own insights on a topic and they can easily outdo you. And, depending on how you ask, you might even persuade them to do your research project for you.

What is ChatGPT?

ChatGPT (GPT stands for Generative Pre-trained Transformer) is an artificial intelligence (AI) chatbot or conversation engine. It was released to the public in November 2022 by OpenAI, an AI research and development company, and became the fastest app to reach 100 million users, hitting that mark by the beginning of February 2023.1

Many tech companies such as Microsoft, Salesforce and Shopify are already integrating ChatGPT into their products, with Microsoft announcing a $10 billion investment in OpenAI earlier this year.2
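
To give a sense of what such an integration involves, the sketch below shows a single call to the model through OpenAI's API. It assumes the official openai Python package and a valid API key; the model name and prompts are illustrative assumptions only, as these details change over time.

    # A minimal sketch of calling ChatGPT from an application via
    # OpenAI's API. Assumes the official "openai" Python package (v1+)
    # and an API key in the OPENAI_API_KEY environment variable; the
    # model name is an assumption and may change over time.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Explain spear phishing in two sentences."},
        ],
    )

    print(response.choices[0].message.content)

A handful of lines like these is all it takes to embed the model's capabilities in a product, which helps explain how quickly such integrations have appeared.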

How does ChatGPT work?

Considered more advanced than its predecessors, ChatGPT can answer all kinds of queries and prompts in a conversational, human-like way. The potential uses of ChatGPT include generating text, researching topics, producing translations, creating polls and feedback surveys, and much more. ChatGPT is not connected to the internet, but rather leverages an enormous pool of data (collected from the internet and other sources) to form its answers.

The model is also able to improve based on the user’s feedback. The more prompts it receives, and the more detailed those prompts, the better its responses become. It has already been described as a better personal assistant than Siri and Alexa3 and as a near-term threat to Google.4

However, although ChatGPT was designed as a valuable companion for anyone creating content or conducting research, like any tool it can be misused. In reality, ChatGPT is neither friend nor foe: as an AI language model, it has no intentions or motivations of its own. The model simply responds to the prompts it receives, which means it is at the mercy of the user and their purpose.

The cybersecurity concerns

While AI can be instrumental in developing advanced cybersecurity products—for example to identify threats more quickly—it can also be used maliciously by attackers. The abilities of ChatGPT and the speed at which it can produce content and solve problems make it both an opportunity and a threat. Below are some of the security issues identified so far.

A bigger phishing net

Due to its ability to mimic human conversation, ChatGPT can help to create phishing and spear-phishing emails that seem authentic and are therefore more likely to manipulate people into divulging sensitive information or performing actions that compromise security. Not only this, but the content ChatGPT generates for these emails will often lack the warning signs to look out for, such as typos, odd formatting or questionable English, that so often differentiate this type of attack from a legitimate email.

Using ChatGPT also enables cybercriminals to generate unique content for each email they send, making phishing emails harder for cybersecurity tools to detect. The ease and speed with which a professional-looking request can be sent, whether asking for money to be transferred urgently or for sensitive information to be shared electronically, are extremely concerning.
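
To illustrate why unique wording matters, many basic filters work by fingerprinting the bodies of known-bad messages. The sketch below uses a hypothetical hash-based filter with invented sample messages; it illustrates the principle rather than describing any real product.

    # Illustrative sketch: why per-message unique wording defeats
    # simple signature-based filtering. The known-bad list and the
    # sample messages are hypothetical.
    import hashlib

    def fingerprint(body: str) -> str:
        # Hash the message body to produce a content signature.
        return hashlib.sha256(body.encode("utf-8")).hexdigest()

    # Signatures of previously reported phishing templates.
    KNOWN_BAD = {fingerprint("Your account is locked. Click here to verify.")}

    def is_known_phish(body: str) -> bool:
        return fingerprint(body) in KNOWN_BAD

    # A mass campaign that reuses one template is caught...
    print(is_known_phish("Your account is locked. Click here to verify."))  # True
    # ...but an AI-reworded variant with the same intent slips past.
    print(is_known_phish("We noticed unusual activity; please confirm your details."))  # False

Real filters use far more sophisticated techniques than exact hashes, but the underlying problem is the same: content that never repeats gives a filter nothing stable to match against.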

Malware on demand

While ChatGPT does have security protocols to identify inappropriate requests, such as how to write malware code, developers have already discovered that these protocols can be bypassed. If a prompt is detailed enough to walk the bot through the steps of writing the malicious code, it is entirely possible for ChatGPT to effectively construct malware on demand.

This type of AI-generated code could also enable cyber-attackers and criminal groups offering malware-as-a-service to launch their attacks faster and more easily, and equip less experienced attackers with a tool for writing more accurate malware code.

Spam and fake news

ChatGPT’s ability to supply written content quickly and easily from virtually any given prompt opens up a world of opportunities for content creators everywhere, regardless of the agenda. As such, it can be used to generate large amounts of spam messages that can overwhelm email servers or social media platforms as well as create fake news articles that can appear to be written by legitimate news sources.

The speed at which this content can be produced using ChatGPT can allow content creators to jump on trends more quickly and strike while the iron is hot. While this may not be seen as a typical cybersecurity threat, it is still a very real digital threat as it can help to spread misinformation, influence opinion and manipulate outcomes.

Stay one step ahead

The emergence of ChatGPT has demonstrated how quickly the digital landscape can change. Adopting a proactive approach to cyber risk through the regular training of your employees and directors can strengthen your defences and help you be more prepared for new and/or unexpected threats.

Please get in touch if you would like to find out more about how Gallagher’s Cyber Risk Management Practice can help you achieve this for your organisation.

Disclaimer

The sole purpose of this article is to provide guidance on the issues covered. This article is not intended to give legal advice, and, accordingly, it should not be relied upon. It should not be regarded as a comprehensive statement of the law and/or market practice in this area. We make no claims as to the completeness or accuracy of the information contained herein or in the links which were live at the date of publication. You should not act upon (or should refrain from acting upon) information in this publication without first seeking specific legal and/or specialist advice. Arthur J. Gallagher Insurance Brokers Limited accepts no liability for any inaccuracy, omission or mistake in this publication, nor will we be responsible for any loss which may be suffered as a result of any person relying on the information contained herein.