Gallagher digital transformation advisor Ben Warren explains how employers can overcome mistrust as they progress on their AI adoption journeys.

According to Gallagher research, most business leaders think generative AI will augment and not replace existing roles in the workplace. Yet there is still a high degree of uncertainty surrounding what digital transformation means for the future of work.

Ben Warren, MD and head of Digital Transformation and AI, HR and Communications Consulting practice, Gallagher, advises businesses on their AI integration and knows how important it is to instill confidence and get buy-in from across the organization.

He shares his thoughts on what employers can do to empower staff, overcome trepidation and get more out of the technology.

Q: How have expectations around AI evolved over the years, both from the perspective of employees and employers?

A: Expectations have shifted enormously in the past few years with the proliferation and adoption of a new wave of AI tools, most recently in the form of generative AI (GenAI). These newer tools are now in the hands of every employee and are a top strategic priority for employers as firms better understand how GenAI tools can drive new growth and increase productivity, whilst minimizing the associated risks.

What's interesting is that, while much of the media narrative around GenAI is focused on the sensational near-term impact of AI tools replacing jobs, when you speak to people in their everyday work, their attitudes are often different. Whilst there is some understandable trepidation in utilizing AI tools, research shows that most employees express a desire for AI to handle the repetitive, lower-level and time-consuming parts of their jobs — like using AI to transcribe an interview, for example.

Also notable is the alignment between what employees want and what leadership teams expect from AI. Both groups are focused on decreasing the amount of time spent on lower-level, time-consuming tasks.
When employees are equipped with the right tools and training, 76% of business leaders expect to see an increase in efficiency and productivity. However, while still a positive trend overall, sentiment does appear to have dampened, with a 10-percentage-point decline since last year.

Q: You mentioned that AI adoption is more experimental now. Can you elaborate on how this aligns with what you see in the field?

A: Absolutely. While the research shows a significant increase in AI adoption, especially over the past 12 months, much of it is still in its infancy. What companies are calling "AI adoption" often amounts to early experimentation and understanding of both tool and resource capabilities.

Organizations are building a greater understanding of AI tools: learning how the tools can impact work, considering the use cases they could support, or beginning to think about governance and upskilling programs. However, this is still relatively immature overall, and so full-scale integration of AI into core business operations is going to be a complex, multi-year program, particularly for larger firms.

From what I see in the field, businesses are still catching up with the opportunity, and even larger companies do not have AI embedded deeply enough into their models for it to be considered true adoption — we're still at the early stages.
Respondents note AI is being used for a variety of tasks; each use case presents its own risks and impact on people strategies.

Q: You mentioned AI reached significant mass adoption in less than a year. Has this influenced businesses' approach given the speed of take-up?

A: The speed at which AI has been adopted is staggering. The example I often give is mobile phones, which took 16 years to reach 100 million users. AI tools like ChatGPT, on the other hand, hit that milestone in under three months. This rapid adoption is both exciting and challenging.

It demonstrates that AI is no longer a niche tool but a mainstream technology that is now in the hands of everyone. On the other hand, it raises questions about whether businesses are fully prepared to manage and govern these tools, in order to reap the significant benefits.

Many companies are rushing to adopt AI but have not yet figured out the governance frameworks needed to mitigate its associated risks. This imbalance between the enthusiasm for adoption and the lack of preparedness is something that businesses need to address quickly.

The fast pace of AI development means that regulations will continue evolving and will force organizations to consider their use of AI more carefully.

Q: What about the different departments within companies? Are some ahead of others in adopting AI?

A: Technology teams, sales, marketing and customer experience departments are embracing AI much faster, likely because organizations can directly measure its impact in these departments.

The interesting part, according to Gallagher's research findings, is the perceived delay in AI adoption within the HR and people functions. You would expect HR, which plays a key role in organizational development, to be at the forefront of AI adoption. But it is one of the slowest adopters. This is concerning because, without HR and learning and development teams on board, a significant skills gap could hinder broader AI opportunities.

Q: Can you explain some of the risks associated with AI adoption, especially regarding data security and governance?

A: What's different about this wave of technological change is that everyone now has access to these tools on phones or personal computers. If we look back at something like cloud transformation, which has been and continues to be hugely impactful for organizations, it didn't have the same visibility across the business, because it was largely confined to the IT department and operated behind the scenes. But now, these AI tools are literally in everyone's pockets.

The rapid adoption of tools like ChatGPT, Gemini and DeepSeek means employees are using these technologies independently, often without proper governance. This can lead to inadvertent breaches of sensitive information.

Beyond that, there are critical ethical considerations, as well as the potential for brand damage. So, when you look at it all, there's a lot of potential risk that needs to be effectively managed.

This is why businesses need to establish strong AI principles, build them into governance frameworks that offer both flexibility and guardrails, and support them with an ongoing education program.

At Gallagher, we have seen firsthand how organizations have been accidentally misusing AI tools, and the risks can be significant.

Q: You mentioned ethics, so let's dive into that. What are some of the key ethical considerations?

A: All of these new AI tools are driven by data, and since data is created by humans, it carries inherent bias. Early versions of generative AI were especially biased and unethical, and while they've evolved, those ethical concerns persist.

The best solution to this challenge is to ensure human oversight throughout a process, so that firms capture the benefits while well-trained individuals recognize any bias in outputs and ensure it is countered.

Interestingly, AI can be used to reduce biases in recruitment rather than perpetuate them. For instance, by removing names and identifying details from résumés, AI can ensure that the focus remains purely on skills and qualifications.
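The anonymization idea above can be sketched in a few lines. The following is a minimal illustration that redacts common identifying fields with regular expressions before a screening system sees the text; the patterns and field labels are assumptions for the sake of the example, and production pipelines would typically rely on trained named-entity recognition rather than regexes:

```python
import re

def anonymize_resume(text: str) -> str:
    """Redact common identifying details so screening focuses on skills.

    A simplified illustration only: real systems would use NER models,
    not hand-written patterns, to catch names and addresses reliably.
    """
    # Email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    # Phone numbers (rough pattern, illustrative only)
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)
    # A labelled "Name:" line, as some resume templates use
    text = re.sub(r"(?im)^name:\s*.*$", "Name: [REDACTED]", text)
    return text

resume = (
    "Name: Jane Doe\n"
    "Email: jane.doe@example.com\n"
    "Phone: +1 (555) 123-4567\n"
    "Skills: Python, SQL"
)
print(anonymize_resume(resume))
```

The design choice is simply to strip identity signals before evaluation, so whatever scores the résumé, human or model, only ever sees the skills section.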

Q: There is a lot of fear around AI, especially regarding job displacement. How do you see companies managing these concerns?

A: AI is not just about the technology; it is about people. Currently, there is a lot of noise in the media about AI replacing jobs, which can create a climate of fear, and employers need to address this fear head-on.

Some departments will feel the impacts of AI much sooner than others. For instance, AI's potential to streamline customer service or sales functions can provide immediate business value.

Remember that AI is not necessarily here to replace jobs; in the short term, it can eliminate mundane tasks and allow people to focus on more strategic and value-added work.


The key to managing these fears is transparent communication. Organizations need to reassure employees that AI adoption will be a gradual process and that there will be plenty of opportunities for reskilling. It's about creating a culture where employees feel empowered to adapt.

Four of the top five strategies to implement AI are focused on people and skills development, illustrating that, beyond technical and professional risks, people-related risks are top of mind for leaders. Notably, over one-third have already hired a chief AI officer, and nearly half have hired people into AI-specific roles.
27% of leaders surveyed worry about AI's impact on employee engagement and morale. The belief that AI is more likely to augment rather than replace roles likely drives the need to upskill staff to get the most from the technology.

Q: Trust in AI remains an issue, according to this year's survey results. How should that be addressed?

A: Building trust is key. You cannot simply introduce new technologies without listening to your employees' feedback and taking them on a journey. As AI continues to evolve, people will need a clear understanding of how these tools fit into their future roles.

Creating awareness, fostering a desire for change and educating the workforce will be essential in addressing fears around job displacement and uncertainty. A focus on new jobs and skills is critical not just to support employee change, but to transition the business itself, architecting a future-ready business.

As we see more advances in areas such as AI agents, managing these transitions will only grow more critical. Leaders must drive this change and ensure employees feel empowered, not threatened, by the changes happening.

There's a lot we can learn from the past 25 years of technological shifts. One key takeaway is the importance of a robust change management program. This means not only raising awareness about the opportunities AI presents but also addressing the potential risks and challenges.

Published May 2025