Regulation and compliance
Another important element of governance is keeping track of the myriad regulations, court rulings and AI best practice frameworks to ensure that AI development, procurement, deployment and use meet the appropriate standards.
Multinational organizations are already familiar with the privacy and regulatory issues that apply to their data use across multiple jurisdictions, as well as with ensuring compliance with benchmark global standards.
As Angela Isom points out, companies that are already well governed relative to data, security and privacy are often extending existing roles and governance mechanisms to cover the additional risks associated with AI and its adoption.
Some regulatory bodies have already issued guidelines
The ethical issues raised by implementing AI in a business are of key concern to regulatory bodies.
The European Union AI Act (2024) is the world's first legal framework regulating AI, classifying systems into varying risk levels and then applying specific regulations according to risk category.
The World Health Organization has guidance on the use of generative AI in the healthcare sector, and the United Nations has a non-binding resolution that encourages countries to safeguard human rights, protect personal data and monitor AI for risks.
Angela Isom recognizes that managing AI compliance is challenging and that if businesses want to oversee this area effectively, they must collaborate with business functional leaders and hold them accountable for identifying and escalating new AI-related sector regulations governing their function.
With at least 800 AI policy initiatives currently in motion and many more emerging, navigating this landscape requires constant vigilance at both the national/federal and sector levels. Setting a baseline level of AI governance to be applied across the business, and assessing new regulations to identify requirements above that baseline, can influence the development of internal guidelines and practices.
She also points out that many firms are trying to avoid creating entirely new governance infrastructures for managing AI risk. Instead, they are focused on using existing governance committees and processes, ensuring that this risk is integrated into current operations.
Where standalone AI committees have been established, they are typically led by a chief data officer who is already well-versed in overseeing data collection, management and usage across the firm.
"Part of the AI ethics officer's role is to ensure that AI initiatives align with the company's ethical standards and societal values, fostering a culture of ethical awareness throughout the company."
Angela Isom, global chief privacy officer, Gallagher
Monitoring and modification
The widespread use of generative AI by the general population is relatively new. As more people explore its capabilities and recognize its value, the incentive to check outputs may diminish over time, as complacency and dependence grow.
To counter this, compliance processes, standard operating procedures and quality assurance activities can include regular prompts to help ensure that AI outputs meet expectations and are verified.
This is the concept of the "digital sherpa": AI tools that are designed to challenge users constructively. While the ultimate decision remains in the hands of the professional, digital sherpas provide important leads along the way and may also offer relevant insights to guide the overall decision-making process.
"Essentially, machines have infinite memory, and while AI may not yet analyze as well as a human, it can prompt and nudge human beings," says Paolo Cuomo.
Meanwhile, ongoing monitoring for issues like bias is necessary to ensure ethical use of data and achieve accurate and non-harmful outcomes. This includes due diligence around the selection of vendors and use of third-party solutions, where it is important to ask about the representative nature of datasets that have been used to train LLMs and develop algorithms.
"Constant monitoring and AI output verification are essential," says Angela Isom. "Each time a new version of an AI system is released, the best practice is to retest the AI system to confirm that it still functions correctly. Otherwise, firms risk relying on a system that may not deliver the right results."
This highlights the importance of keeping humans in the loop. If AI outputs are not continuously analyzed to assess their suitability and compliance with a company's ethics, incorrect or biased information might be produced, resulting in harmful outcomes to individuals.
"Ultimately, AI should support and enhance decision-making rather than make decisions independently," says Christy Wolf. "AI can provide valuable information and improve efficiency, but human input is essential in the decision-making process."
Tom Tropp believes that by developing a statement of values, companies can use this "as a guideline for their entire approach which can be regularly reviewed by all involved. The Gallagher Way comes to mind as an example of how this can be done."
Building trust across the value chain
With business leaders becoming more aware of the ethical risks of AI, it is increasingly important for firms to demonstrate that they are addressing AI-related issues in a way that reassures external clients.
As clients progress on their own AI adoption journeys and incorporate AI into their third-party risk management processes, they are inevitably asking more pertinent questions about vendors' AI governance practices. As a result, firms that fail to prioritize ethical considerations may find themselves at a competitive disadvantage.
Another element here is the need to provide employees with a safe space to raise concerns about potential misuses of AI. The provision of a robust reporting system, coupled with a credible investigation and disciplinary process that is understood and trusted by employees, is key.
"Our third-party risk management group evaluates vendor partners' use of AI in their product or services — for example, our questionnaires cover data they're using in their AI agents and use cases," says Robert Allen, corporate vice president and global chief information security officer at Gallagher. "But it's a two-way process: our vendors, partners and clients evaluate us too."
Working with AI tools rather than against them
As AI adoption evolves and changes shape, uncertainty is inescapable. However, even when working at speed, it is possible to mitigate risk. With the right people and the right working practices, businesses can create appropriate structures of governance that establish and embed the necessary ethical guardrails.
Whether firms have a dedicated chief AI ethics officer or not, no one individual or business function is solely responsible for the ethical risks arising during the AI transformation journey.
Just over half (51%) of business leaders surveyed think responsibility for risk management in the adoption of AI lies with IT. Successful identification and mitigation of ethical risks, however, will require the collective brains of senior leadership, cybersecurity, risk management, IT, data scientists, HR, and legal and compliance, among others.
"Our compliance, risk and ethics group is made up of the most senior members of our business, across production and functional lines," says Robert Allen. "We meet every month — it's part of a continuous feedback loop to calibrate risk and activity. And it's also vital to benchmark — I meet regularly with peer sharing groups to share insights and better understand where we stand in cyberspace."
With human oversight and appropriate guidance baked in at every step of the way, businesses can have confidence that they are taking important strides without jeopardizing their brand and reputation. As importantly, they will retain and attract top talent.
As with all strategic risks, the buck stops in the boardroom: responsibility for integrity rests with directors and other senior decision-makers, regardless of who made the tools. And, certainly in the near term, addressing these complex issues is not going to get any easier.
The opportunity here for business leaders is to lean on their ethical practices and codes to steer them on their journey of AI adoption. Tom Tropp emphasizes, "Compliance tells us what we must do, while ethics tells us what we should do."
Published July 2025