
Key insights

  • AI adoption has surged, with most companies implementing at least some AI solutions. However, over half report skills gaps and recruitment challenges as obstacles to going further.
  • AI automation is bringing positive returns in productivity and revenue, though businesses expect meaningful return on investment (ROI) to materialize in two to three years.
  • Companies continue to invest in training and other job protection strategies to address skills gaps and safeguard the human touch. Less than half of businesses have adopted formal risk management frameworks.
  • Emerging AI exposures are prompting dedicated AI insurance solutions, endorsements, bespoke add‑ons and, in some cases, exclusions.

When generative AI first burst onto the scene in November 2022, it sparked a wave of excitement, curiosity and anticipation. The possibilities seemed endless.

Fast forward three years, and while the initial buzz remains, a sense of realism has settled in. Organizations are no longer just dreaming about the potential of AI — they're navigating its complexities, balancing opportunities with risks and uncovering what it truly takes to weave this transformative technology into the fabric of their operations.

Amid the growing number of use cases, the journey has shifted from hype to a more grounded exploration of AI's role in shaping the future of work.

According to Gallagher's latest research, adoption of generative AI has continued over the last year, with many organizations shifting their focus from determining which AI tools to invest in to operationalizing the technology within the business.

But this phase of the journey, too, will take time. Organizations measuring the ROI of AI deployment anticipate it will take an average of 28 months for the value of transformation to outweigh the upfront costs.
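That 28-month figure is, in effect, a payback-period calculation: the month in which cumulative benefit overtakes the upfront cost. As a minimal illustration (the dollar figures below are invented for demonstration and are not from the survey):

```python
def breakeven_month(upfront_cost: float, monthly_net_benefit: float) -> int:
    """Return the first month in which cumulative benefit covers the upfront cost."""
    cumulative = 0.0
    month = 0
    while cumulative < upfront_cost:
        month += 1
        cumulative += monthly_net_benefit
    return month

# Hypothetical figures: a $1.4M rollout returning $50K/month nets out at month 28.
print(breakeven_month(1_400_000, 50_000))  # → 28
```

In practice both inputs are uncertain estimates, which is why the organizations in the survey report a range of timelines rather than a single fixed date.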

"This survey complements what we've seen with our clients. At Gallagher, our AI adoption journey is about more than just implementing cutting-edge technology — it's about empowering our people and centering on our customer needs," commented Steve Rhee, global chief digital officer, Gallagher.

"We have continued to invest over the last several years in data, analytics and digital workforce skill development to ensure our teams are equipped to deliver the best outcomes and solutions for our clients in a rapidly evolving landscape."

Fast followers close the gap, but skills shortages remain a hurdle

This year's results indicate a step-change in the overall implementation of AI. In the past 12 months, the rollout of AI adoption strategies has accelerated, with a much greater proportion of businesses (63%) now having either fully operationalized or implemented AI within parts of their business. This is up from 45% in 2025.

In 2025, only 4% of businesses had not experimented with AI at all, down from 18% in 2023, while 20% were fully operational, up from 9% in 2023.

Currently, the most popular uses for AI are in IT operations management, client-facing functions such as chatbots and personal assistants, and research and analytics.

Despite a maturing view of the AI transformation journey, some hurdles remain, with over half citing skills gaps and recruitment challenges as a barrier to implementation, followed by technical/infrastructure and compliance issues.

Concerns about eroding employee trust, ethical considerations and resistance to technological transformation are other considerations for respondents as they progress on their AI adoption journeys.

"Embedding AI in the operating model means redesigning processes and role definitions and building scalable AI platforms, and very few organizations are at that stage yet," says Ben Warren, managing director, People, Data, AI and Innovation at Gallagher. "What's important, therefore, is understanding those areas in the business where AI can drive the most value and building on that."

Concerns about trust and ethics remain strong in industries where automation is relatively new. Encouraging a just transition through reskilling and upskilling will help mitigate the impact of people risk in these businesses.

Firms target timeframes of two to three years to fully unlock the value

Nearly two-thirds of organizations are actively measuring ROI, estimating it will take an average of 28 months to realize that return.

The frameworks to capture ROI are most likely to be in place in technology-based organizations and financial services businesses.

63% of businesses actively measure ROI for AI; on average, it takes 28 months to realize ROI; 82% report positive AI impact.

Despite relatively conservative timelines, organizations are nonetheless bullish about AI's impact on business revenue now and in the future. Eighty-two percent say they're already seeing a positive impact, while 83% believe AI will boost revenue in the future.

There's a prevailing view among respondents that AI has already had a positive effect on employee productivity: 86% of businesses, on average, see it as very positive or quite positive, rising to 93% in Canada and Australia, 91% in the US and 90% in India.

In 2025, 86% of respondents thought AI would improve employee productivity, compared with 76% in 2024.

Errors, AI misuse and data privacy breaches are the most likely sources of risk

Top perceived risks of using AI in business

  1. AI errors, misinformation and hallucinations
  2. Legal and reputational risks
  3. Privacy violations and data breaches
  4. Cyber‑attacks and fraud risk
  5. Overreliance and reduced human judgment
  6. Job insecurity and industrial action
  7. Ethical risk and weak governance
  8. Declining employee engagement and change fatigue
  9. Algorithmic bias and discrimination
  10. Shareholder action over poor ROI
  11. Reduced trust in leadership

Most businesses now feel they have a better grip on AI risks more than three years into the journey, with 93% ranking their understanding as "quite well" or "very well," compared with 77% in 2024 and 78% in 2023.

AI errors, misinformation and hallucinations remain a key concern, topping the list of perceived threats from AI adoption (57%), as do legal and reputational risks from AI misuse (56%) and data protection and privacy violations (55%).

People risks remain a top concern for business leaders in adopting AI. At least half of businesses see potential job insecurity and strikes as a side effect of AI transformation, alongside a drop in employee engagement and change fatigue. If left unaddressed, these issues could prompt reduced trust in leadership.

Respondents highlighted the importance of recognizing the potential negative impacts on the workforce, emphasizing the need to involve HR in addressing employee concerns. They also stressed the value of offering training programs to help employees adapt and feel supported through the changes.

There's been some progress in AI governance year over year, but more work remains to be done. Only 56% of organizations have so far communicated their AI adoption strategy to their workforce, for instance.

"You need to be building skills, confidence and trust to enable the workforce to be using AI on a day-to-day basis so that it is embedded into workflows," said Sonya Poonian, director of AI Transformation, Employee Engagement and Communication Consulting practice at Gallagher. "Only then will you reach the inflection point where you can unlock that longer-term ROI."

Implementation of 11 mitigation strategies ranges from 43% to 56%

More work is needed to strengthen risk management and governance frameworks in line with evolving knowledge of potential vulnerabilities and to satisfy the demands of new regulations. Less than half of businesses have adopted risk management frameworks for AI use, conducted ethical impact assessments or developed an AI-specific incident response plan.

"In terms of AI implementation, one seminal question worth asking is 'To what extent do we want and/or need to put controls in place?'" says Lenin Lopez, senior vice president, Management Liability at Gallagher. "And then, from an AI expertise perspective, companies will want to consider evaluating whether their management teams and boards possess relevant knowledge of AI so that they are positioned to appropriately evaluate AI initiatives."

The IT department is still overwhelmingly viewed as the function responsible for AI-related risks, according to nearly half of those surveyed. Senior executives and risk management are on level pegging, followed by the "departments using AI."


Personal accountability on the part of the individuals or departments using AI has overtaken the roles of legal/compliance and HR functions in this year's survey, reflecting the growing recognition that users need to carry out their own due diligence.

"Human in the loop is also key, but the challenge of course is to determine where in the loop and how many times in the loop you need to be," says Lopez. "It's advisable to reevaluate those controls every time an AI model is updated since as new code gets written, there is a risk that the model can lead you astray. Ultimately, companies should establish rules, oversight mechanisms and human intervention points to keep AI decision-making in check."
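Lopez's "human in the loop" point can be reduced to a simple routing rule: decide, per decision, whether a human must sign off before the AI's output is acted on. The sketch below is purely illustrative (not a Gallagher tool); the decision fields, the example actions and the 0.85 confidence floor are all assumptions for demonstration.

```python
from dataclasses import dataclass

@dataclass
class AIDecision:
    """A single AI-generated recommendation awaiting release."""
    action: str        # hypothetical label, e.g. "approve_claim"
    confidence: float  # model's self-reported confidence, 0.0-1.0
    high_stakes: bool  # flagged by business rules, not by the model itself

def needs_human_review(decision: AIDecision, confidence_floor: float = 0.85) -> bool:
    """Route to a human when stakes are high or confidence falls below the floor.

    The floor is an assumed control point; per the guidance above, it should
    be re-validated every time the underlying model is updated.
    """
    return decision.high_stakes or decision.confidence < confidence_floor
```

The value of even a toy rule like this is that it makes the intervention points explicit and auditable, so they can be reviewed whenever the model changes rather than drifting silently.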

Poonian thinks breaking out of a siloed mindset in managing both the risks and opportunities relating to AI requires cross-functional teams and strategies.

"Strong governance needs to be embedded throughout the organization in order to protect it against AI risks," she says. "That governance element should incorporate two-way feedback — from the leadership and the teams and individuals using it — so the business is continually updating and refining its approach."

While governance frameworks are becoming more robust, a widespread corporate culture of checks and balances remains a way off.

"With less than half of businesses adopting formal risk management frameworks or written incident response plans for AI, there are opportunities to further tighten controls in the year ahead," says John Farley, managing director of Cyber at Gallagher.

"AI liabilities aren't simple data breaches; they're a black box of algorithmic risk where traditional breach response approaches fall short," Farley explains. "Managing these legal, operational and reputational exposures requires a multidisciplinary approach that addresses bias and blends oversight with data integration."

5 biggest AI adoption challenges for 2026

  1. Skills gaps and recruitment challenges. More than half of businesses cite a shortage of AI‑ready talent as a primary barrier to implementation. Many organizations lack the internal capabilities required to deploy, maintain and scale AI systems effectively.
  2. Technical and infrastructure limitations. Companies continue to struggle to integrate AI into legacy systems and build scalable platforms for widespread operationalization. This struggle slows progress from pilot testing to full deployment.
  3. Compliance, governance and data privacy concerns. AI errors, misuse, misinformation and privacy risks remain top worries for business leaders. Unclear regulatory expectations and evolving global standards add complexity to responsible AI deployment.
  4. Declining employee trust and change resistance. Many organizations express concern about eroding employee confidence, job insecurity and change fatigue. Without clear communication and structured change management programs, businesses risk slowing the pace of AI transformation.
  5. Ethical considerations and accountability challenges. AI governance remains uneven across businesses. Fewer than half have adopted formal risk management frameworks or implemented AI‑specific incident response plans, creating gaps in transparency, oversight and ethical safeguards.

Employers roll out job protection strategies, even as headcount falls

The high proportion of businesses concerned about skills gaps and other people-related risks is reflected in the broad range of workforce strategies being deployed.

A high proportion of business leaders say their organizations are delivering training, adding AI skills and capabilities and implementing change management programs — a clear signal that most organizations plan to take their employees with them on their AI journey.

By sector, IT and technology businesses are currently most focused on upskilling and continuous learning.

Strategies employed for adopting AI include training; adding AI to job descriptions; hiring people into AI roles; change management programs; appointing AI champions; naming a chief AI officer; and naming an AI ethics officer.

The main reasons cited for job protection strategies are to retain/promote creativity, retain the human touch for client interactions and carry out complex problem-solving and high-stakes decision-making.

Ethical and moral sensitivity is another key driver, with one in four saying their business is a "people-first" company.

However, the picture becomes more nuanced when assessing the actual impact on employees. Fifty-nine percent say their organization has either reduced overall numbers already or plans to do so in the future.

Respondents from South Korea are more likely to say their company has made headcount reductions through redundancies, followed by India. And in Australia, 53% of businesses indicate reduced workforce numbers through redundancies/not rehiring.

By sector, the combined impact of AI adoption on headcount is deemed to be more significant in telecoms, technology, energy and financial services. In terms of impact on future jobs, the biggest impacts are anticipated in sectors such as manufacturing and IT/computing.

Even within organizations, the impact can be felt disproportionately, according to Warren. "Reducing people risk and driving ROI requires investment in proper training and enablement programs — from setting the strategic direction and guardrails for usage to understanding where AI can be utilized within workflows."

The global average of companies that reduced headcount due to AI is 59%. The US, UK and Nordics were below average; Canada was average; and Australia, India, Japan and South Korea were above average.

Impact of "silent AI" on insurance amid evolving policy wording

Just as the cyber liability insurance market rapidly expanded to address the issues of "silent cyber" exposures, AI-related exposures are beginning to drive a similar cycle of exclusions, endorsements, bolt-on covers and, ultimately, bespoke AI policies.

As such, for the first time this year, the survey incorporates responses from a sample of insurance industry professionals. Of these, one in five said a client experienced loss or claims due to AI-related risks in the past year, of which just over half were fully covered by insurance.

Classes of business most likely to be impacted by AI-related claims are cyber liability, product liability and employers/employment practices liability.

Gallagher's 2026 Cyber Insurance Market Outlook reinforces this observation. It charts more than 200 active legal cases involving AI and machine learning, stemming from data bias, privacy liability, discrimination and regulatory risks relating to a broad range of liability coverage, including cyber, employment practices liability (EPL), product liability and Errors and Omissions (E&O).

The report suggests that a growing number of insurers will begin offering AI-related loss coverage, but key decisions over how "loss" is defined in these new coverage options will be a critical watchpoint.

The classes of liability insurance in which respondents most expect AI-related losses are cyber, product, employment practices and professional indemnity.

Insurance industry respondents shared their concerns over the effectiveness of today's business insurance protections, commenting that most policies don't explicitly address AI-related risks, with wordings either largely reactive or seemingly written for "a pre-AI world."

There are concerns that policy wordings are too vague for what would be covered in an AI-related loss, potentially leaving insurance carriers open to claims disputes and litigation.

"Insurers are considering including clearer language around AI risks across a range of policies to be able to better understand the total cost of risk. However, the wordings could prove challenging given that AI is constantly evolving," says Paige Cheasley, Canada National Technology practice leader, Gallagher.

As was the case with cyber, a market-wide pivot towards new policy language that incorporates AI-related risks is unlikely to happen until or unless there's a significant surge in claims or increased litigation involving AI-related losses.

"Exclusionary language around AI risks may be coming in classes like Errors and Omissions and general liability, but we haven't seen anything specific yet," says Cheasley. "Carriers are likely to be hesitant to be the first one out of the gate to exclude and would rather wait to see what claims come in."

Respondents anticipate new policies for AI-related risks, specialized endorsements and AI-specific wordings moving forward.

The insurance industry is likely to respond to AI exposures with new policies, specialized endorsements, specific coverage wordings, new renewal questionnaires, policy exclusions and governance management.

Tighter controls and strong leadership are key to unlocking ROI

While full operationalization of the technology may be some years away, steady progress is being made within many organizations. In general, businesses are much further into their AI adoption journeys than when the survey was first conducted in December 2023.

As AI has become more embedded, companies are growing more confident in their understanding of the risks, opportunities and challenges of implementation. And they are setting realistic timescales for fully realizing their ROI, regardless of what is happening in the broader "hype cycle."

Crucially, the customer experience must remain front and center throughout, according to Rhee. "Our strategic investments in data, analytics, and technology enable us to empower our professionals and enhance the experiences of our customers, which is the core of everything we do," he says.

"It's not just about adopting digital tools — it's about stepping back, mapping the entire customer journey, and understanding where trusted and personalized advice adds the most value," says Rhee. "That's how you strike the right balance — leveraging trusted expertise and innovative AI capabilities to deliver meaningful and powerful results that help clients achieve their goals."

As AI and data add complexity to the risk landscape, trusted expertise is more essential than ever. Clients embracing digital transformation will increasingly depend on specialist partners to deliver reliable guidance, harness innovative AI capabilities and transform challenges into opportunities.

Published February 2026