February 2024


Executive summary

Within the insurance industry, there's a dawning realization that generative artificial intelligence (GenAI) presents an opportunity to start with some basic, but important, use cases and build from there.

In the near term, as the technology beds in, insurers and re/insurers are seeking to get in front of potential sources of claims, including litigation resulting from "hallucinations," allegations of bias and copyright infringement.

In the area of fraud, "shallowfake" and "deepfake" attacks are on the rise, but insurers are leveraging GenAI to better identify fraudulent documents.

By streamlining processes and accessing documents and data with ease, insurance and claims professionals can focus on making better decisions and building relationships.

Far from replacing the underwriter, GenAI is being fine-tuned to offer helpful prompts, which will ultimately lead to happier customers and more profitable outcomes.

The year generative AI took off (and why few were ready)

In 2023 rampant excitement about the capabilities of GenAI was tempered by the anxiety of potential negative — even existential — consequences. There were warnings of inherent bias in some large language models (LLMs) and the risk of "hallucinations" — false results — being accepted as truth.

As regulators sought to catch up and individual businesses developed their own guidelines around the technology's use, it became apparent the insurance industry was gaining a new and likely transformative technology. But so were others, including malicious actors, who were unconstrained by regulatory requirements.

Despite years of anticipation, when generative AI tools finally landed, the pace of change moved at breakneck speed.

Trend 1: Selecting a proof-of-concept

First movers are well underway with the testing phase, putting GenAI to work on everyday operational tasks. Potential use cases include guiding policyholders through claims procedures, and enhancing pricing and underwriting processes.

One of the bigger stories of 2023 was the announcement that a Lloyd's insurer was partnering with a tech giant to create an AI-enhanced lead underwriting model.1 Similar headlines are likely to follow as this year progresses.

However, many of the early proof-of-concept initiatives being carried out by re/insurers are taking place in "non-core" parts of the business. This approach, though perhaps less glamorous, allows insurers to gain comfort and learn critical lessons during the initial proof-of-concept phase without jeopardizing compliance obligations or compromising intellectual property (IP).

"There's a good reason why the insurance industry doesn't turn on a dime every five minutes and embrace the latest technology," says Matthew Harrison, executive director, Casualty, at Gallagher Re. "It potentially introduces risks that we're not allowed to take or puts risks onto our policyholders in a way that would not be acceptable. There are some interesting checks and balances that you don't necessarily get in other industries."

Many insurers are training staff to use these user-friendly tools to improve their work and summarize key tasks. This includes checking and updating policies in parts of the business that don't touch customers directly.

"For the majority of executives anywhere in the insurance industry, this likely starts as an efficiency play for their staff," says Paolo Cuomo, executive director of Gallagher Re's Global Strategic Advisory business.

"I was talking to one insurer, and they have started off testing this tech in their HR function, far away from clients or customers. They were behind on updating their HR manuals and policies, and were all misaligned across the globe. So they made it their proof-of-concept, learning what generative AI can do with sets of text, and in the process ended up fixing something that otherwise was never going to be a priority."

Generative AI terminology

Bias: The tendency of an AI model to favor or generate content that reflects certain preferences, stereotypes or imbalances present in the training data. It can result in biased or unfair outputs.

Ethical considerations: The moral implications and concerns associated with generative AI related to privacy, bias, misinformation and the responsible use of AI-generated content.

Fine-tuning: The process of further training a pre-trained generative AI model on a specific dataset to make it more specialized or tailored to a particular task or domain.

Generative AI: A type of artificial intelligence that can generate new content, such as text, images and videos, based on patterns and examples it has learned from existing data.

GPT (Generative Pre-trained Transformer): A specific type of generative AI model that uses a transformer architecture. It's widely used for various natural language processing tasks, including text generation.

Hallucinations: Instances in which an AI model generates content that's plausible but is actually fictional or not based on real data. They can occur when the model extrapolates beyond its training data.

Large language models (LLMs): AI models specifically designed to understand and generate human language. They learn from large amounts of text data and can generate coherent and contextually relevant text.

Trend 2: Preparing for GenAI-fueled claims trends

The way GenAI works — scraping and reconstituting large amounts of digital information — creates potential legal issues related to false results, bias and the use of scraped copyrighted material.

AI hallucinations

AI hallucinations might be a short-term blip, as early models of generative AI attempt to fill in the blanks, and businesses learn how to interrogate the output of LLMs better. But for insurers, particularly those underwriting professional liability classes of business, there could be costly disruptions as the technology beds in.

Among the cautionary tales is a well-known example of a lawyer who used a generative AI tool to help draft a legal brief for a personal injury claim filed in US federal court.2 Three of the cases cited in the brief were completely fabricated, highlighting the importance of double-checking all AI-generated source material.

"You can immediately see how over-reliance on AI, if unchecked or unsupervised, has the potential to compromise advice," explains Ben Waterton, executive director, Professional Indemnity at Gallagher. "It requires critical examination and peer review within quality assurance procedures to prevent losses."

Proactive insurers are responding in a number of ways, including properly advising their clients on the vulnerabilities they face, and mitigating exposures through new wordings.


Allegations of bias

In addition to errors generated by AI hallucinations, discrimination could be another key driver of claims and litigation in the near term. A major e-commerce firm, for instance, stopped using AI to screen job applications after discovering its recruiting engine favored male over female candidates by searching for terms more typically found in men's resumes.3

As the IBM Data and AI team points out, "Examples of AI bias in the real world show us that when discriminatory data and algorithms are baked into AI models, the models deploy biases at scale and amplify the resulting negative effects."4

Copyright infringement

Copyright infringement is another potential driver of claims. We have seen strikes, walkouts and lawsuits from artists and the media concerned about IP theft.5 It's a broad issue, potentially touching a wide range of industries, with the floodgates starting to open on "data-scraping" lawsuits and plaintiffs questioning the ethics of some GenAI training models.6

With openly accessible AI tools, once sensitive information and IP are shared with the model, they can be retained and used for subsequent training. Some of the early adopters of GenAI did not fully appreciate the implications.7 The practice of web scraping is under scrutiny by regulators, including the UK Information Commissioner's Office.8

Trend 3: Deepfakes and shallowfakes: Fighting more frequent and more convincing fraud

From a crime perspective, GenAI enables bad actors to generate credible content and perpetrate more convincing fraud, creating potential claims for insurers.9

Cybercriminals are already one step ahead, leveraging the technology to write malicious code and perpetrate deepfake attacks, taking social engineering and business email compromise (BEC) tactics to a new level of sophistication.

According to the UK's National Cyber Security Centre, AI will increase both the volume and the impact of cyber attacks over the next two years.10 It notes that GenAI is being used to enable more "convincing interaction with victims, including the creation of lure documents, without the translation, spelling and grammatical mistakes that often reveal phishing."

Meanwhile, carriers anticipate an uptick in claims fraud within personal lines insurance. Criminal gangs and genuine claimants can use GenAI to create or doctor motor vehicle and property damage images to fabricate or exaggerate losses in claim submissions.11 With GenAI tools now readily available to the general public, these "shallowfake" frauds are becoming more frequent, according to insurers.12

At the same time, GenAI is an essential tool that insurers can use to combat fraud, with algorithms designed to detect anomalies. Upwards of 60% of insurers have used AI and machine learning technology for fraud detection for some time.13

One notable advantage specific to GenAI is its ability to identify AI-generated content, particularly when dealing with large volumes of information. This means insurers can more easily identify and address AI-assisted shallowfake and deepfake attempts.14 GenAI's natural language processing capabilities also allow it to analyze conversations between customers and insurers, with the ability to flag suspicious activity.
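The anomaly-detection side of this can be illustrated with a toy example. The sketch below flags outlier claim amounts using a median-based "modified z-score"; the figures, the 3.5 threshold and the method itself are illustrative assumptions, not any insurer's actual fraud model, which would combine many such signals with machine learning.

```python
# Toy anomaly detector for claim amounts using a median/MAD
# "modified z-score". All figures and thresholds are hypothetical.
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Return indices of amounts whose modified z-score exceeds threshold."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)  # median absolute deviation
    if mad == 0:  # no spread: nothing stands out
        return []
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

claims = [1100, 1200, 1280, 1350, 9500]  # one suspiciously large loss
print(flag_anomalies(claims))  # → [4]
```

A flagged index only routes the claim to a human investigator; as elsewhere in these use cases, the decision stays with the professional.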

By fine-tuning their approach to fraud detection, insurance carriers will improve their ability to root out dishonest actors at the point of underwriting, improving risk selection and mitigating potential losses further down the line. By leveraging AI, insurers enhance their fraud-detection capabilities, proactively identify suspicious behavior, reduce financial loss and ultimately protect genuine customers.

With the ability to review vast amounts of data in a significantly shorter time, AI tools will continue to offer an efficient and cost-effective solution for fraud detection. It will save insurers valuable time and resources while enhancing their capabilities in the fight against fraud.

Trend 4: Digital Minions and Digital Sherpas: More than just an efficiency play

Ultimately, the hope is that AI technology will free up insurance and claims professionals to focus on making more informed risk-based decisions and building relationships with customers. For now, far from replacing the underwriter, GenAI will instead be fine-tuned to offer prompts and suggestions that will ultimately lead to better risk selection and more profitable outcomes.

By leveraging AI capabilities, insurers can gain new efficiencies, reduce business costs and empower professionals to make better decisions. But how digital assistants such as Digital Minions and Digital Sherpas are shaping the insurance industry is more than an efficiency play.

"Digital Minions" are the silent heroes of the insurance world because they excel at automating mundane tasks. By swiftly reviewing vast amounts of data, Digital Minions allow professionals to focus on their core competencies, such as building customer relationships and making more informed risk-based decisions.

In many ways, the ability to use GenAI to speed up processes is nothing new; it's just the latest iterative shift towards more data- and analytics-based decisions. And it can make these digital transformations simpler and more straightforward for the technophobes. "What GenAI is going to allow us to do is create these Digital Minions with far less effort," says Paolo Cuomo.

"Meanwhile, Digital Sherpas are expected to play a more visible role in the underwriting process," explains Paolo Cuomo. These tools are designed to constructively challenge underwriters, claims managers and brokers, offering alternative routes to consider. While the ultimate decision remains in the hands of the professional, Digital Sherpas provide important nudges along the way by offering relevant insights to guide the overall decision-making process.

"In this role, the machine can advise and prompt," says Cuomo. "It can remind the underwriter, for instance, that they had a similar risk three years prior that they declined and share the underwriting notes with them. The machine doesn't know why you declined it, but offers information the human may have forgotten. The underwriter will consider this information and make the decision — they might then say, 'That was three years ago and the market was different' or 'I wasn't sure about the company's new CFO'.

"Essentially, machines have infinite memory, and while AI is not yet at the point where it can analyze as well as a human, it can prompt and nudge human beings."
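Cuomo's "infinite memory" prompt can be sketched in miniature: surface past submissions that resemble a new risk, along with the notes the underwriter left at the time. Everything here (the fields, the sample data and the simple text-similarity measure) is a hypothetical illustration, not a real underwriting system.

```python
# Hypothetical "Digital Sherpa" memory prompt: recall similar prior
# submissions and the underwriter's notes. Data and threshold are
# illustrative assumptions only.
from difflib import SequenceMatcher

past_submissions = [
    {"risk": "regional haulage fleet, 40 trucks", "decision": "declined",
     "notes": "unsure about the company's new CFO; patchy loss history"},
    {"risk": "boutique coastal hotel group", "decision": "bound",
     "notes": "good risk engineering survey"},
]

def similar_past_risks(new_risk, history, threshold=0.5):
    """Return past submissions whose description resembles the new one."""
    return [s for s in history
            if SequenceMatcher(None, new_risk.lower(),
                               s["risk"].lower()).ratio() > threshold]

for match in similar_past_risks("national haulage fleet, 55 trucks",
                                past_submissions):
    print(f"Similar prior risk ({match['decision']}): {match['notes']}")
```

The machine only retrieves and prompts; deciding whether the old decline still applies remains the underwriter's call, exactly as described above.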

Trend 5: Improving the customer experience, without losing the human touch

Within personal lines, AI is already being widely leveraged to streamline operational models and enhance customer interactions across multiple channels. GenAI takes that a step further, allowing for hyper-personalized sales, marketing and support materials tailored to the individual.

By analyzing customer data and predicting behavior, insurers strive to exceed customer expectations, improve satisfaction and drive up retention. Nowhere is this customer experience more important than within claims, often described as the industry's "shop window." End-to-end claims automation, easy-to-use self-service channels and super-fast payments are becoming the minimum standard for Auto and Home insurance providers.

Conversational AI empowers customers to self-serve through chatbots and virtual agents. At the point of underwriting, AI-driven tools can be used to gather insights and create more tailored insurance policies, including embedded insurance where relevant. These tools can speed up policy and quote generation, balancing automation with the human touch for simplicity, transparency and speed.

But as with all emerging GenAI use cases, the aim is to enhance rather than to remove the human touch. Many customers want to speak to a professional claims handler in their time of need. Personal contact is particularly important when dealing with more complex losses. Giving the customer choice and allowing them to dictate how they interact with their provider will remain important.

While there's a significant opportunity to personalize the experience within more transactional business classes, the use cases will likely be less expansive and more cautious within higher-complexity businesses, such as commercial lines and reinsurance — at least in the near term.

Matt Harrison points out that consistency of service is as important as, if not more important than, personalization. "It's the human curation of what we do, providing clarity, consistency and service, that's the value statement of insurance."

"Brokers want underwriters to be consistent; ultimately, it's about transparency and maintaining relationships. Is it going to be the black box that makes the decision, or is it going to be a guiding principle? If AI undermines any of that, we risk eroding the product's value, so we need to be mindful of that moving forward."

For the majority of executives anywhere in the insurance industry, AI likely starts as an efficiency play for their staff.
Paolo Cuomo, executive director of Gallagher Re's Global Strategic Advisory business