Key takeaways
- Human error is thought to be behind over 90% of all cyber attacks, and artificial intelligence (AI) is making phishing and social engineering attacks more convincing and targeted. With humans as the firewall, keeping up requires coordinated thinking across the business.
- Experience isn't everything. The "half-life" of a skill — the time it takes for a skill to lose half its value — is getting ever shorter, calling for an agile, continuous approach to learning.
- In a world of constant change and disruption, the workforce is feeling fatigued. Building organizational resilience requires strong leadership and effective change strategies.
- Job insecurity can erode employee morale and trust in leadership, impacting engagement and talent retention. Unaddressed, job insecurity can spill over into industrial action, causing broader economic impacts.
Introduction
The past five years have seen a period of marked transformation within the workplace. Major technological leaps — including the mass adoption of generative AI — have created significant opportunity but also uncertainty and the need to address skills shortages.
Pandemic lockdowns, the shift to hybrid working and the cost-of-living crisis added further pressure and resets, with the "Big Quit" prompting talent to leave sectors such as healthcare, leisure and hospitality, and transportation as workers sought to reduce stress and improve work-life balance.
These events have supercharged many of the underlying people-related risks employers were already grappling with, including skills obsolescence and maintaining staff engagement. Building resilience in the workforce has arguably never been more challenging.
To better understand these trends — and how business leaders are responding — Gallagher carried out global research in January 2025. The data reveals the workplace risks that are front of mind, alongside the strategies being implemented to address them.
The need for more collaboration across risk management and HR
Many of the current challenges sit at the intersection of enterprise risk management and HR and require a joined-up, cross-divisional approach. Such functions haven't always been aligned, and more collaboration will be needed moving forward.
"There is a clear connection between the risks that HR and Risk Management professionals manage," says Lisanne Sison, managing director of Enterprise Risk Management (ERM) at Gallagher US. "Every time I do a risk prioritization, at least two of the top 10 risks are people related."
The following four people-related trends are challenging employers and risk managers to think differently as the world around us continues to evolve. Whether firms are addressing the human factor in cyber risk management or seeking to close skills gaps, it's clear that a cross-functional approach will be needed to bridge the divide and futureproof the workforce.
The often-forgotten upside of risk is opportunity. Risk management principles can empower business leaders to face the future with confidence — because, when it comes to leading in an unpredictable environment, confidence matters.
Trend 1: Keeping pace with cyber attacks, with humans on the digital frontline
Because human error allows most cyber attacks to succeed, companies are doing their best to counter these vulnerabilities by putting in place more checks and balances, including robust cyber hygiene and "zero trust" approaches to cybersecurity. But when it comes to AI, companies remain two steps behind criminals, who are using the technology to scam and exploit workers.
Nefarious actors are using AI to analyze social media activity, online conversations and personal data to make attacks more targeted and convincing. When phishing, AI allows hackers to create more tailored messages that mimic the language, tone and style of a trusted contact or reputable organization.
The telltale signs of a malicious email or message are becoming more difficult to spot, explains Johnty Mongan, global head of Cyber Risk Management in the Gallagher Cyber Defence Centre. "Cybercriminals used to create phishing emails that were littered with spelling errors and punctuation issues, but now ChatGPT — being a language-based product — will create perfect text."
Deepfake audio and video are another area where AI is being used to impersonate executives or other trusted figures to manipulate victims into revealing sensitive information or authorizing financial transactions. These sophisticated frauds are very convincing and employ emotional manipulation strategies.
John Farley, managing director of Gallagher's Cyber Liability practice, warns that businesses will be more exposed to social engineering-type attacks in 2025: "Hackers can launch really sophisticated phishing campaigns with emails that have no spelling mistakes and excellent grammar, which are very targeted to you as the victim because they pulled all your information from social media. We must get better at responding to those and then preventing those.
"Then there's deepfake technology," he continues. "That's another area where you're going to see video or voicemail impersonations. Employees need to be trained on the new ways hackers are using AI."
According to the 2025 Attitudes to AI Adoption and Risk Survey, business leaders see the increased threat of privacy violations and data breaches (33%) and greater vulnerability to cyber attacks and fraud (29%) as among the top four risks to the business arising from AI.

