Artificial intelligence is fraught with bias and untruths, but there are ways to get more reliable responses.

Authors: Rebecca Starr and Edward F Barry


This article is the second in a series exploring artificial intelligence (AI) in human resources. The first article looked at how to keep the "human" in human resources. We invite you to share your experiences with AI in HR for possible inclusion in future installments in the series.

AI regulation is evolving

Regulation, or the lack of it, is a risk factor for artificial intelligence. Because AI regulation is still emerging, the landscape is messy. The European Union is leading the way, and the directives that come from its governing body will likely serve as the basis for a global standard. Until then, check for any local regulations, including your organization's own policies. While our comments are relevant in any market, our perspective is US-centric.

Within that series, this article is the first of two pieces examining the fundamental risks of AI. In part two, we'll explore specific HR applications and ways to experiment so you can become comfortable with the technology. First, however, you should understand the risks.

Big risks: Faulty facts and bias

Generative AI makes stuff up. Never assume AI-generated information is accurate. When a generative AI tool cites references, review every source. Often, the cited source won't be the original; follow the data trail to its origin. When that's not possible, evaluate the quality of the cited source. For example, was it The New York Times or an unknown blogger? In the HR world, we believe sources cited by the Society for Human Resource Management (SHRM) carry more weight than a product-sponsored survey.

AI is biased. Generative AI outputs exhibit bias, whether subtle or overt, that can color your brand when HR teams use it repeatedly in communications. You may have read that AI is a great tool to eliminate bias. Nice idea, but large language models reflect only the information used to train them — information created by humans. Humans are inherently biased, so generative AI tools reflect that bias. Still, with a human touch, you can counter bias.

Consider the use of AI to recruit candidates. Here are our respective "people" and "tech" viewpoints.

Rebecca: Training model overlooked source of bias

I recall when Amazon scrapped its AI recruitment tool in 2018 after finding the tool discriminated against female candidates. The fault lay not in the technology but in the people-led training method of feeding it the resumes of past "successful" candidates, who were predominantly male, resulting in unintended bias.

If diversity, equity and inclusion (DEI) is an organizational priority, you must proactively define what "success" looks like in your organization and continue to monitor and adjust internal processes and practices.

Ed: Leverage AI technology to counter bias in hiring

I'm confident that AI technology can offer a viable solution to counter hiring bias. With proper training, HR teams can use it to write job descriptions more consistently, eliminating age, gender or ethnic bias.

Taking it a step further, AI can add value by identifying a great candidate who may not fit the typical profile. For example, equivalent experience may offset the lack of a degree. There are ways to manage risk while taking advantage of generative AI's value.

Political bias in AI?

The line between news and opinion has blurred, making it hard to filter politics from the vast data sets that train generative AI. An August 2023 study1 tested 14 large language models and found varying degrees of political bias, ranging from "libertarian" to "authoritarian." (For more, see the MIT Technology Review articles about AI bias listed in the recommended resources section.)

Regardless of your perspective, know that HR isn't immune to political bias, which might be evident in responses to prompts such as "Write a policy for inclusive benefits for LGBTQ+ employees" or "Which ethnic or non-traditional holidays should an organization recognize?" Awareness of potential bias can help you mitigate risk.

While false information and bias typically surface as the big AI issues, risk also stems from regulation, privacy and liability (see the recommended resources section). While you can't eliminate risk, you can mitigate it by learning to ask for information in such a way that AI returns the most factual and focused results.

Manage risk using best-practice prompts

The best results from generative AI tools come from crafting effective prompts. A poorly worded prompt will return, at best, incomplete or off-point results; at worst, the response may contain false information presented as organizational policy or guidance. The following best-practice tips, informed by ChatGPT, will help you ask for information in ways that return the best results and reduce your risk.

  • Be clear and specific in your prompt. State the topic and what you want the tool to do. Break down complex topics into multiple narrowly focused prompts.
  • Avoid yes-or-no questions. Open-ended questions elicit more robust responses; when you do need a yes or no, ask for the rationale behind the answer.
  • Provide context. Generative AI will try to create context if you don't provide it, potentially leading to false or irrelevant information. For example, start with "based on the information below," then paste the information.
  • Embrace proper spelling and grammar. Leave grammar shortcuts for texting with friends. Grammatically correct prompts give you more accurate responses.
  • Include examples in your prompt. Generative AI learns from your prompts, so provide realistic, real-world scenarios to support your queries.
  • Don't ask leading questions. You get back what you give, so if your prompt assumes a particular viewpoint or answer, expect to see it reflected in the response. Be aware of prompts that could lead to biased results.
  • Prompt. Assess. Repeat. Practice crafting prompts for a topic you know well. Assess the results for what is wrong or missing and give it another go. Like generative AI, you will learn by doing.
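
To make these tips concrete, here is a minimal sketch of a well-structured prompt sent through the OpenAI Python client, in the spirit of OpenAI's prompt engineering guide listed in the recommended resources. The model name and the sample policy text are illustrative placeholders, not recommendations; your organization's approved tools and data will differ.

    # Minimal sketch: applying the prompt tips above with the OpenAI Python client.
    # Assumes the openai package (v1 or later) is installed and the OPENAI_API_KEY
    # environment variable is set. Model name and policy text are placeholders.
    from openai import OpenAI

    client = OpenAI()

    # Provide context instead of letting the model invent it.
    policy_excerpt = (
        "Employees accrue 1.25 days of paid leave per month. "
        "Unused leave carries over, up to a maximum of 10 days per year."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model your organization approves
        messages=[
            {
                "role": "user",
                # Clear task, supplied context, and an open-ended ask rather
                # than a yes-or-no question.
                "content": (
                    "Based on the information below, summarize our paid leave "
                    "policy for a new-hire FAQ and list any questions an "
                    "employee might still have.\n\n" + policy_excerpt
                ),
            },
        ],
    )

    print(response.choices[0].message.content)

Note how the prompt states the task, supplies the source material rather than letting the tool create its own context, and asks an open-ended question. Assess the output, adjust the prompt and try again.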

Share your experience

We've started asking our clients about their AI usage. The overwhelming response has been (paraphrasing) "No way; we're not touching it."

When we probe, that feedback boils down to a lack of understanding of the technology and resulting fear. Playing with AI in a controlled and safe way is the best method to overcome this lack of understanding, and practicing prompts is a great start. Give it a try and then tell us about your experience.

Whether you're a first-time or regular AI user, share what you learned and observed, as well as your questions or concerns. Wherever your organization is in its AI in HR journey, we're here to ease the path forward and mitigate risk.

Is your organization prepared to guide employees?

In a late 2023 Gallagher survey of organizational communications,2 we learned that only 29% of organizations provide employees guidance on when, where and/or how to use AI, and just 20% provide AI training or resources. Organizations that have developed their own AI solution tend to apply better governance and oversight than others.

We're proud that Gallagher is one of those organizations. "Gallagher AI" launched in 2023. We appreciate the tool and guidance, and as consultants, we're excited that Gallagher understands using generative AI is a learning experience. Employees are encouraged to provide feedback from use cases on what's working and what needs more attention. These insights inform our use of the tool and provide us a broader basis for guiding our clients.

Our next AI in HR topic

We put ChatGPT to the test for ways to use it in HR and then evaluated its response. Watch for the insights in the next article in this series.

Contact us with your experiences, observations, questions or concerns. We may not respond to every message personally, but we'll try to address your comments in a future conversation without identifying you.


Recommended resources

"Artificial Intelligence Risk Management," Gallagher USA.

"The Latest Regulation for Artificial Intelligence." Gallagher USA.

"Biases in LLM Generative AI: A Guide to Responsible Implementation." Forbes.

"Why It's Impossible to Build an Unbiased AI Language Model," MIT Technology Review.

"AI Language Models Are Rife With Different Political Biases," MIT Technology Review.

"Best Practices for Prompt Engineering With the OpenAI API," OpenAI

Disclaimer

Consulting and insurance brokerage services to be provided by Gallagher Benefit Services, Inc. and/or its affiliate Gallagher Benefit Services (Canada) Group Inc. Gallagher Benefit Services, Inc. is a licensed insurance agency that does business in California as "Gallagher Benefit Services of California Insurance Services" and in Massachusetts as "Gallagher Benefit Insurance Services." Neither Arthur J. Gallagher & Co., nor its affiliates provide accounting, legal or tax advice.