The rise of AI has prompted a flurry of corporate initiatives as companies seek to plug skills gaps in their drive to get more from digital investments. Among these is the ascendance of the chief AI ethics officer, a signal of the ongoing struggle to balance automation and ethical values.

Of the new roles being created as part of the ongoing impact of artificial intelligence (AI) on the workforce, that of chief AI ethics officer stands out. Unlike those of a prompt engineer, chief AI officer or AI product manager, the core skills needed for an AI ethics officer depend less on coding or a Silicon Valley track record and more on a strong background in philosophy, law, human rights and governance.

That said, as Gallagher's Chief Ethics Officer Tom Tropp emphasizes, there cannot be just one individual responsible for integrating AI ethically into a company. "This must be on the minds of a whole group of experts across the business as part of a fully cross-functional approach," he says.

The results from the Gallagher global benchmarking study, Attitudes to AI Adoption and Risk, show that companies continue to grapple with the ethical issues arising from AI adoption. When asked to select all the obstacles their businesses face in adopting AI, just under a third of business leaders cited ethical issues as the leading barrier to adoption, tied with a lack of the skills required within the business.

Evolving risk perceptions of AI in the workplace

Our research indicates that business leaders, including AI ethics officers, are becoming more aware of the risks. Sixty-eight percent still view AI as an opportunity, down from 82% a year ago.

And while still a minority, the proportion seeing AI as primarily a risk for the business has doubled in a year, amid fears of data incidents and reputational risks.

Some of the ethical conundrums facing employers make regular headlines, from the biased algorithms that have been used to screen out job applicants, to facial recognition software that performs less accurately for women. Then there are the broader questions surrounding what AI means for the workplace, particularly for roles that are becoming increasingly automated.

On the one hand, most respondents to our 2025 AI adoption survey thought that AI was more likely to augment than replace roles in the workplace.

One analogy is to compare the latest digital transformation to the introduction of the Excel spreadsheet in the late 1980s. It did not, as feared, replace the need for accountants, but gave them a new tool to make their role more efficient.

On the other hand, some displacement is inevitable, with discord arising in industry sectors where workers feel their skills are becoming redundant in the face of automation.1

Employees might need to be reassured about how AI use affects them and their jobs. At Gallagher, transparency and clear communication are critical enablers of trust, notes Christy Wolf, vice president, Talent and HR Transformation at Gallagher.

"It's crucial to help employees understand how their roles may change as AI use cases are identified and implemented," Wolf says. "Equally important is supporting employees in developing the skills needed to interact with AI in their daily work.

"Internal mobility can also be facilitated for those seeking to transition to new roles."

The accessibility and democratizing nature of AI mean that most workers are already using it at work. Transparency around digital transformation plans, effective change management and training are critical to ensuring that the technology is used responsibly, that IP is protected and, crucially, that it boosts productivity and wellbeing.

Question of AI ethics

Trust is a key issue arising in this year's survey, with a quarter of respondents citing eroding employee trust in the company and a quarter anticipating resistance to AI adoption and reskilling. As a result, four of the top five business strategies being pursued focus on people and skills development.

"It's essential to determine not just whether a solution is technically feasible but also to assess the associated ethical risks," says Christy Wolf. "Even if something is possible, we must ask ourselves whether it's the right thing to pursue."

Even if something is possible, we must ask ourselves whether it's the right thing to pursue.
Christy Wolf, vice president, talent and HR transformation, Gallagher

There are strong feelings about whether AI systems should be used to replicate the lived experience of individuals, for instance. Attempts to mimic the voices of actors and singers, or people from different ethnic groups and the LGBTQ community, have been widely criticized.

Beyond the worker-specific issues are risks arising from how the technology has been developed. The inherent "black box" nature of large language models (LLMs) means that it is difficult to gauge what happens inside an AI system.

Algorithms are created by humans with their own set of biases, which can be exacerbated depending on the nature and size of datasets used to train the models.

"Examples of AI bias in the real world show us that when discriminatory data and algorithms are baked into AI models, the models deploy biases at scale and amplify the resulting negative effects," notes the IBM data and AI team.2

The need to balance the opportunities presented by AI against uncertainties inherent in the shifting nature of technology is why many firms feel they need a chief AI ethics officer to take the lead.

It is in this gray area that a business's ethical guardrails and leadership decision-making framework come into play.

"At Gallagher, risk-mitigation strategies are embedded in our responsible AI policy, which emphasizes the importance of data privacy, security, ethical AI use and humans in the loop," says Angela Isom, global chief privacy officer at Gallagher.

AI adoption — ethical considerations for business leaders

There are several key areas that AI governance frameworks should consider to ensure ethical principles are upheld during the lifecycle of an AI system and to remain responsive to changes in regulation and technology.
  • Accountability and responsibility: The business, not the AI agent, is responsible and accountable for the inputs and outputs of AI systems, as well as for any malicious or non-malicious misuse of, or mistakes in the use of, AI. To properly interrogate the output from AI systems, businesses are using a more structured approach to ensure the right level of due diligence and oversight is taking place.
  • Data privacy and security: AI systems are built using existing available data. Having both the legal and contractual right to use this data must be considered. Likewise, employee use of publicly accessible AI tools can expose sensitive company and personal data to third parties. Both pose ethical, legal and contractual considerations for leaders requiring governance around the use of data sets, employee use of publicly accessible AI tools and how they manage exposure of their own data externally. Ongoing compliance with data privacy and protection regulations can help firms navigate these risks while also ensuring workers have clear policies surrounding the use of AI so they don't inadvertently share sensitive data on open-source platforms.
  • Transparency and usage: AI algorithms and the data they use are now deeply and pervasively integrated into existing business operations and technologies. To build trust in AI, stakeholders need transparency into what data is being used, the key components of the algorithms, how AI makes decisions and where it is being utilized and implemented.
  • Impact on jobs and the workforce: As with the introduction of any new technology or business process, AI may make certain roles redundant — but can also bring new role opportunities. Business leaders establishing ethical guardrails to their approach to talent management can guide decisions on when, where and how roles are impacted by AI. Businesses can reassure employees by putting in place retraining programs to give them skillsets that match the new opportunities in the workplace.
  • Bias and fairness: If data and AI models are not monitored continually for bias in the underlying data or for model drift, AI can reinforce societal inequity, lead to incorrect conclusions or result in harmful outcomes. Ongoing testing of models and outputs, as sketched after this list, will help ensure AI-generated results avoid harmful outcomes and align with the business's own internal ethical code of behavior and regulatory obligations.
  • Potential for misuse: AI can not only be misused by bad actors but is also susceptible to accidental misuse where there is a lack of awareness of the risks or a failure to apply ethical principles during design and development. The right safeguards and education will assist in protecting the workforce and the business from malicious uses of AI; for example, dissemination of misinformation, deepfakes and fraud, as well as cyber threats.
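
To make the ongoing testing of outputs mentioned above more concrete, the sketch below shows one simple way a team might compare an AI system's selection rates across groups as part of routine fairness monitoring. It is a minimal illustration in Python: the record format, the group labels and the 0.8 review threshold (loosely inspired by the common "four-fifths" rule) are assumptions for this example, not a description of any particular firm's tooling.

    # Minimal sketch: periodic fairness check on an AI system's decisions.
    # Record format, group labels and the 0.8 threshold are hypothetical.
    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group_label, approved_flag) pairs."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            if ok:
                approved[group] += 1
        return {g: approved[g] / totals[g] for g in totals}

    def disparate_impact_ratio(rates):
        """Ratio of the lowest group selection rate to the highest."""
        return min(rates.values()) / max(rates.values())

    if __name__ == "__main__":
        # Hypothetical decisions from an AI screening tool: (group, approved).
        sample = [
            ("group_a", True), ("group_a", True), ("group_a", False),
            ("group_b", True), ("group_b", False), ("group_b", False),
        ]
        rates = selection_rates(sample)
        ratio = disparate_impact_ratio(rates)
        print(rates, round(ratio, 2))
        if ratio < 0.8:  # assumed review threshold for illustration
            print("Flag for human review: selection rates diverge across groups.")

In practice, a check like this would sit alongside broader model-drift monitoring, with flagged results routed to a human reviewer rather than acted on automatically.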

Mitigating AI risk through a cross-functional approach

Structures and processes are necessary to ensure that a company's core values are embedded into its AI adoption framework and that AI outputs and processes are monitored closely.

Failing to build controls and systems early in design, build and implementation processes could lead to the manifestation of risks in the future: so-called "technical debt" or "ethical debt."

Technical debt describes how, in the race to digitize, development teams may choose speed over a more robust solution. Prioritizing speedy delivery over perfect code can mean much greater costs to rectify and maintain systems further down the road.

Ethical debt implies a similar trade-off: when ethical considerations take a backseat to moving fast, consequences, such as issues of bias, fairness, transparency and social impact, could come back to bite.

With AI adoption moving at breakneck speed, and firms not wanting to be left behind, it is not easy to balance the pace of innovation with the right checks and balances. However, achieving this balance is important from a brand, legal and reputational perspective.

As Paolo Cuomo, executive director at Gallagher Re, points out, AI systems lack ethical and moral guardrails. "AI systems need to be trained to have ethical considerations to prevent them from making harmful decisions," he says. "Without ethical guidelines, AI could unintentionally act in ways that are not aligned with human values and societal expectations."

Without ethical guidelines, AI could unintentionally act in ways that are not aligned with human values and societal expectations.
Paolo Cuomo, executive director, Gallagher Re

This is also where the human touch comes in. As part of their AI adoption strategies, employers may want to place a protective ring around workforce roles that involve ethical sensitivity and moral judgments, shielding them from AI disruption.

Lack of readiness

Around 39% of our survey respondents indicated they "did not feel prepared" for incorporating AI within their business.

Christy Wolf says that preparing the ground through training and education is key to providing employees with foundational knowledge about AI to help ease their fears and reinforce ethical AI principles.

This foundation can also help build trust and support for AI among employees. Providing training can also reduce risk — for instance, by educating workers on the dangers of sharing sensitive information on open-source platforms or discouraging overreliance on the output.

To determine the business and workforce's level of readiness, Angela Isom suggests asking some key questions, including:

  • Have you updated your standard operating procedures to include the use of this new technology?
  • Have you ensured appropriate governance around any automated decision-making by the AI system?
  • Have you trained staff on the new process, appropriate AI use and any known limitations or restrictions of use?
  • Can you provide evidence of that training?
  • Do you have failover processes in the event the AI system is offline?
  • Do you maintain a master list of all your AI use cases?
  • Have you put controls in place to ensure that each time a new underlying AI model is introduced, the existing use cases are retested?
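
The last two questions, maintaining a register of AI use cases and retesting them whenever the underlying model changes, lend themselves to a lightweight, repeatable check. The sketch below is a minimal, hypothetical illustration in Python: the use-case register, the call_model placeholder and the pass criteria are assumptions for this example rather than a prescribed implementation.

    # Minimal, hypothetical sketch: rerun a fixed set of checks for each
    # registered AI use case whenever the underlying model version changes.
    # The register, call_model stub and pass criteria are placeholders.

    USE_CASE_REGISTER = {
        "claims_triage_summary": [
            ("mentions_policy_reference", lambda out: "policy" in out.lower()),
            ("within_length_limit", lambda out: len(out) <= 500),
        ],
        "job_description_draft": [
            ("no_age_requirement", lambda out: "years old" not in out.lower()),
        ],
    }

    def call_model(use_case: str, model_version: str) -> str:
        """Placeholder for the real call to the AI system under test."""
        return f"Draft output for {use_case} from {model_version} (policy ref pending)."

    def retest_all(model_version: str) -> dict:
        """Rerun every registered check against the new model version."""
        results = {}
        for use_case, checks in USE_CASE_REGISTER.items():
            output = call_model(use_case, model_version)
            results[use_case] = {name: check(output) for name, check in checks}
        return results

    if __name__ == "__main__":
        for use_case, checks in retest_all("model-v2").items():
            failed = [name for name, passed in checks.items() if not passed]
            status = "PASS" if not failed else "REVIEW NEEDED: " + ", ".join(failed)
            print(f"{use_case}: {status}")

Run against each new model version, a harness along these lines gives governance teams evidence that existing use cases still behave as expected before an update goes live.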

Regulation and compliance

Another important element of governance is keeping track of the myriad regulations, court rulings and AI best practice frameworks to ensure that AI development, procurement, deployment and use meet the appropriate standards.

Multinational organizations are already familiar with the privacy and regulatory issues that apply to their data use across multiple jurisdictions, as well as with ensuring compliance with benchmark global standards.

As Angela Isom points out, companies that are already well governed relative to data, security and privacy are often extending existing roles and governance mechanisms to cover the additional risks associated with AI and its adoption.

Some regulatory bodies have already issued guidelines

Ethical issues when implementing AI in a business are of key concern to regulatory bodies.
The European Union AI Act (2024) is the world's first legal framework regulating AI, classifying systems into varying risk levels and then applying specific requirements according to risk category.
The World Health Organization has guidance on the use of generative AI in the healthcare sector, and the United Nations has a non-binding resolution that encourages countries to safeguard human rights, protect personal data and monitor AI for risks.

Angela Isom recognizes that managing AI compliance is challenging and that if businesses want to oversee this area effectively, they must collaborate with business functional leaders and hold them accountable for identifying and escalating new AI-related sector regulations governing their function.

With at least 800 AI policy initiatives currently in motion and many more emerging, navigating this landscape requires constant vigilance at both the national/federal and sector levels. Setting a baseline level of AI governance to be applied across the business, and assessing new regulations to identify requirements above that baseline, can inform the development of internal guidelines and practices.

She also points out that many firms are trying to avoid creating entirely new governance infrastructures for managing AI risk. Instead, they are focused on using existing governance committees and processes, ensuring that this risk is integrated into current operations.

Where standalone AI committees have been established, they are typically led by a chief data officer who is already well-versed in overseeing data collection, management and usage across the firm.

Part of the AI ethics officer's role is to ensure that AI initiatives align with the company's ethical standards and societal values, fostering a culture of ethical awareness throughout the company.
Angela Isom, global chief privacy officer, Gallagher

Monitoring and modification

The widespread use of generative AI by the general population is relatively new. As more people explore its capabilities and recognize its value, the incentive to check outputs may diminish over time, as complacency and dependence grow.

To counter this, compliance processes, standard operating procedures and quality assurance activities can include regular prompts to help ensure that AI outputs meet expectations and are verified.

This is the concept of the "digital sherpa": AI tools that are designed to challenge users constructively. While the ultimate decision remains in the hands of the professional, digital sherpas provide important leads along the way and may also offer relevant insights to guide the overall decision-making process.

"Essentially, machines have infinite memory, and while AI may not yet analyze as well as a human, it can prompt and nudge human beings," says Paolo Cuomo.

Meanwhile, ongoing monitoring for issues like bias is necessary to ensure ethical use of data and achieve accurate and non-harmful outcomes. This includes due diligence around the selection of vendors and use of third-party solutions, where it is important to ask about the representative nature of datasets that have been used to train LLMs and develop algorithms.

"Constant monitoring and AI output verification are essential," says Angela Isom. "Each time a new version of an AI system is released, the best practice is to retest the AI system to confirm that it still functions correctly. Otherwise, firms risk relying on a system that may not deliver the right results."

This highlights the importance of keeping humans in the loop. If AI outputs are not continuously analyzed to assess their suitability and compliance with a company's ethics, incorrect or biased information might be produced, resulting in harmful outcomes to individuals.

"Ultimately, AI should support and enhance decision-making rather than make decisions independently," says Christy Wolf. "AI can provide valuable information and improve efficiency, but human input is essential in the decision-making process."

Tom Tropp believes that by developing a statement of values, companies can use it "as a guideline for their entire approach which can be regularly reviewed by all involved. The Gallagher Way comes to mind as an example of how this can be done."

Building trust across the value chain

With business leaders becoming more aware of the ethical risks of AI, it is increasingly important to demonstrate that AI-related issues are being addressed in a way that reassures external clients.

As clients progress on their own AI adoption journeys and incorporate AI into their third-party risk management processes, they are inevitably asking more pertinent questions about vendors' AI governance practices. As a result, firms that fail to prioritize ethical considerations may find themselves at a competitive disadvantage.

Another element here is the need to provide employees with a safe space to raise concerns about potential misuses of AI. The provision of a robust reporting system, coupled with a credible investigation and disciplinary process that is understood and trusted by employees, is key.

"Our third-party risk management group evaluates vendor partners' use of AI in their product or services — for example, our questionnaires cover data they're using in their AI agents and use cases," says Robert Allen, corporate vice president and global chief information security officer at Gallagher. "But it's a two-way process: our vendors, partners and clients evaluate us too."

Working with AI tools rather than against them

As AI adoption evolves and changes shape, uncertainty is inescapable. However, even when working at speed, it is possible to mitigate risk. With the right people and the right working practices, businesses can create appropriate structures of governance that establish and embed the necessary ethical guardrails.

Whether firms have a dedicated chief AI ethics officer or not, no one individual or business function is solely responsible for the ethical risks arising during the AI transformation journey.

Just over half (51%) of business leaders surveyed think responsibility for risk management in the adoption of AI lies with IT. Successful identification and mitigation of ethical risks, however, will require the collective brains of senior leadership, cybersecurity, risk management, IT, data scientists, HR, and legal and compliance, among others.

"Our compliance, risk and ethics group is made up of the most senior members of our business, across production and functional lines," says Robert Allen. "We meet every month — it's part of a continuous feedback loop to calibrate risk and activity. And it's also vital to benchmark — I meet regularly with peer sharing groups to share insights and better understand where we stand in cyberspace."

With human oversight and appropriate guidance baked in at every step of the way, businesses can have confidence that they are taking important strides without jeopardizing their brand and reputation. As importantly, they will retain and attract top talent.

As with all strategic risks, the buck stops in the boardroom: responsibility for integrity rests with directors and other senior decision-makers, regardless of who made the tools. And, certainly in the near term, addressing these complex issues is not going to get any easier.

The opportunity here for business leaders is to lean on their ethical practices and codes to steer them on their journey of AI adoption. Tom Tropp emphasizes, "Compliance tells us what we must do, while ethics tells us what we should do."

Published July 2025


Sources

1Houser, Kristin. "Port Workers Are at War With Automation. Can They Win?" Big Think, 10 Nov 2024.

2"Shedding Light on AI Bias With Real World Examples," IBM, 16 Oct 2023.