HR leaders must find the space between rushing to adopt AI and fearing it. AI can enable HR to execute strategically, maximize productivity and achieve organizational goals. While there is no need to fear it, AI demands vigilant oversight by trained humans.

Author: Edward F Barry


A few years ago, talk of artificial intelligence (AI) for human resources was mainly marketing spin for hyper-learning technology. The advent of ChatGPT from OpenAI® and other generative AI tools changed that. AI joins a relatively short list of revolutionary developments, from the creation of the wheel to the development of the internet, that changed the world. Despite the technology's early stage of development, AI already has significantly impacted employer organizations, and organizations are exploring ways to use it.

IBM's 2023 CEO study found that 50% of CEOs integrate generative AI into digital products and services, another 43% use it to inform strategic decisions, and 36% use it for operational decisions. Yet only 29% of these CEOs' executive teams feel they have the in-house expertise to adopt generative AI. Further, only 30% of non-CEO senior executives surveyed said their organization was ready to adopt generative AI responsibly.1

Some HR leaders rush to embrace ChatGPT while others ignore it, fearing unintended consequences or, worse, that it will replace them. Yet there is a lot of space between these two responses, and that in-between space is where HR needs to be. Properly understood and used responsibly, AI can perform routine transactional tasks, freeing HR time for more strategic activities. As for the fear factor: AI won't take your job, but the person who knows how to use it effectively may.

The following information may help HR leaders find comfort in the in-between space.

AI is not the same as hyper-automation

Many confuse AI with hyper-automation. Professionals make no real decisions using hyper-automation; instead, simple rules-based logic makes things happen far more quickly than if people managed those tasks in a slower, multi-step fashion. For example, approval of paid time off (PTO) requires only one human action: manager approval. Automation can manage all the other steps, such as confirming the time has been earned, notifying the requester of approval and correctly logging the PTO in the time system.
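To make the distinction concrete, the minimal sketch below models the PTO example: every step is automated except the one genuine human decision, the manager's approval. All function and field names are hypothetical, for illustration only.

```python
# Minimal sketch of the PTO workflow above: hyper-automation handles every
# step except the single human decision (manager approval).
# All names here are hypothetical, for illustration only.

def notify(email: str, message: str) -> None:
    print(f"Notify {email}: {message}")  # stand-in for an email or chat integration

def log_pto(employee_id: str, hours: float) -> None:
    print(f"Logged {hours}h PTO for {employee_id}")  # stand-in for the time system

def process_pto_request(employee_id: str, email: str,
                        hours_requested: float, hours_accrued: float,
                        manager_approved: bool) -> str:
    # Automated: confirm the time has actually been earned.
    if hours_requested > hours_accrued:
        notify(email, "Request denied: insufficient accrued PTO.")
        return "denied"
    # Human: the one real decision in the workflow.
    if not manager_approved:
        notify(email, "Request denied by manager.")
        return "denied"
    # Automated: notify the requester and log PTO in the time system.
    notify(email, "Your PTO request was approved.")
    log_pto(employee_id, hours_requested)
    return "approved"

# Example: a 16-hour request against 40 accrued hours, approved by the manager.
print(process_pto_request("E1001", "sam@example.com", 16, 40, manager_approved=True))
```

Note that the automation decides nothing on its own; it only executes the routine steps surrounding the single human judgment.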

Conversely, AI is the ability of machines to perform tasks associated with human intelligence, such as learning and problem-solving. Generative AI uses a large language model fed an immense amount of data, enabling an algorithm to determine the output of a query. With each query and decision, the machine learns, gaining more data to respond to future queries. Machine learning shouldn't occur in a vacuum, however. A human touch is essential for the responsible and ethical use of AI, hence the ongoing need for HR involvement and guidance.

Good AI and not-so-good AI

In August 2023, MIT Technology Review reported research on 14 large language models, revealing outputs rife with bias.2 AI language models reflect the biases in their training data and those of the people who created and trained them. With correct inputs, managed expectations and human review of outputs, organizations can harness AI's power for good, improving productivity and the clarity of language and intent. The absence of any one of these three conditions can lead to not-so-good AI outcomes.

The growth of generative AI models has dramatically changed the threat landscape. Cybercriminals are using deceptive chatbot services to facilitate destructive activities. In July 2023, the data analytics platform Netenrich® uncovered a new AI tool sold on the dark web called "FraudGPT," explicitly built for malicious activities such as writing phishing emails and creating cracking tools to break security measures.3 Such threats are real, and organizations struggle to stay ahead of cybercriminals.

Not all threats are malicious

Threats aren't limited to malicious software. Consider a 2023 personal injury lawsuit in which a lawyer with 30 years of experience used ChatGPT to prepare his briefs. ChatGPT fed him six non-existent court decisions, which he cited to bolster his case. When it was discovered that no such case law existed, the embarrassed attorney admitted that he "was unaware of the possibility that its content could be false" and accepted responsibility for not confirming the chatbot's sources. The attorney and his firm were sanctioned and fined.4

In its current stage, generative AI can't be trusted without human oversight. The technology takes snippets of human-created information from the web, splices them together and spits them out as factual information. AI cannot discern true from false; only a human can. HR users must review outputs and independently confirm whether the information is factual and, in the case of subjective information, logical for its designated purpose.
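One way to operationalize that oversight is to treat every AI output as an unverified draft until a named human signs off. The sketch below illustrates this human-in-the-loop pattern; the class, functions and workflow are hypothetical, not any particular vendor's API.

```python
# A minimal human-in-the-loop sketch: AI output remains a "draft" until a
# named reviewer verifies it. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class AIDraft:
    prompt: str
    output: str
    verified: bool = False
    reviewer: str | None = None

def generate_draft(prompt: str) -> AIDraft:
    # Stand-in for a real generative AI call; always returns an unverified draft.
    return AIDraft(prompt=prompt, output=f"[model output for: {prompt}]")

def sign_off(draft: AIDraft, reviewer: str) -> AIDraft:
    # A human confirms facts and sources before the draft can be used.
    draft.verified = True
    draft.reviewer = reviewer
    return draft

def publish(draft: AIDraft) -> str:
    if not draft.verified:
        raise ValueError("Unverified AI output cannot be published.")
    return draft.output

draft = generate_draft("Summarize our parental leave policy.")
draft = sign_off(draft, reviewer="hr.manager@example.com")
print(publish(draft))
```

The design choice is deliberate: the publish step fails loudly unless a human has reviewed the output, making oversight the default rather than an afterthought.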

There's no question that AI will improve as it evolves, and far faster than its revolutionary predecessors: the internet is roughly 40 years old, yet IBM already offers an AI certification for tech professionals only a few years into the generative AI era. Organizations must take a monitoring approach. HR's use of AI requires properly trained staff who understand AI's value and limitations and know how to use it ethically and responsibly. Toward that end, new tools are available to help manage AI risk: cutting-edge technology can monitor your AI use against your business strategy as reflected in your policies and definitions, assess the use of AI in making consequential decisions for your organization, and report on the success or failure of your model.

Regulation lags behind AI technology

Regulation of AI is emerging worldwide. Italy stepped up as the first country to ban ChatGPT while its regulators determined appropriate use and consequences, and as of October 2023, 36 countries, primarily authoritarian ones, had banned it. Other jurisdictions, including the European Union and China, are developing tailored rules for AI. In the US, the education community is leading the way in raising concerns about generative AI: several large school districts have banned ChatGPT over plagiarism and accuracy concerns, and others have blocked access to it from their systems.

In August 2023, the US Equal Employment Opportunity Commission (EEOC) settled its first-ever lawsuit over AI discrimination in hiring, involving a tutoring company that allegedly programmed its recruitment software to reject older applicants.5 Although the case was settled, it sent a clear signal that employers using AI in the hiring process can be held liable for unintended discrimination.

States and municipalities are moving faster than the federal government to regulate the use of AI in the hiring process. Recent legislation from across the country addresses the following:

  • Notifying candidates of the use of AI and how it works
  • Requiring candidate consent to use AI to assess candidate-supplied information
  • Dictating a candidate's right to know what data is collected and analyzed, how long data may be kept and with whom information may be shared
  • Prohibiting the use of facial recognition software during a video interview without consent
  • Requiring an annual independent check of the software for bias

Still, these laws lag behind the technology.

Don't fear AI; control it through responsible and ethical use

Understandably, many HR leaders feel overwhelmed by the prospect of staying on top of rapidly developing technology and the associated regulations. Unfortunately, because AI vendor contracts typically include non-liability clauses, the onus is on employers to vet AI tools and validate that they don't discriminate. Given this responsibility, organizations may be tempted to forgo the benefits of generative AI in hiring and elsewhere and block its use. However, AI can enhance HR strategy, and organizations that use it may gain a competitive advantage. Gallagher's advice is not to fear AI but to control it.

The following four broad "rules" can help guide the responsible and ethical use of AI in HR (a brief code sketch after the list shows how rules 1 and 2 might be enforced):

  1. Data entry. Never enter information classified as "confidential" or "restricted" into an unapproved AI system.
  2. AI systems for company business. Actively monitor information uploads from the organization's network to publicly accessible AI tools. Reserve the right to take responsive action or block usage.
  3. Support productivity. Allow employees to use AI for productivity, provided they abide by rules 1 and 2. Practical applications include drafting sample job descriptions, removing gender bias, auto-posting job descriptions to targeted hiring sites, scheduling candidate interviews and summarizing employee policies. A human should always review the output for accuracy.
  4. No public sharing. Never share output from a company-approved AI system outside the company.
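As promised above, here is a minimal sketch of how rules 1 and 2 might be enforced in practice: a guardrail that checks a document's classification label and the destination tool's approval status before anything leaves the network, logging every attempt. The classification labels, approved-tool list and function names are all hypothetical, for illustration only.

```python
# A minimal guardrail sketch for rules 1 and 2: block confidential or
# restricted data from reaching unapproved AI tools, and log every attempt.
# Labels, the approved-tool list and all names here are hypothetical.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-guardrail")

BLOCKED_CLASSIFICATIONS = {"confidential", "restricted"}
APPROVED_AI_TOOLS = {"internal-hr-assistant"}  # company-approved systems only

def may_send_to_ai(document_classification: str, tool: str) -> bool:
    """Return True only if the upload satisfies rules 1 and 2."""
    # Rule 1: never send confidential or restricted data to an unapproved system.
    if (document_classification.lower() in BLOCKED_CLASSIFICATIONS
            and tool not in APPROVED_AI_TOOLS):
        log.warning("Blocked: %s data to unapproved tool %r",
                    document_classification, tool)
        return False
    # Rule 2: monitor all uploads, even permitted ones.
    log.info("Allowed: %s data to %r", document_classification, tool)
    return True

# A public job description may go to an external tool; salary data may not.
print(may_send_to_ai("public", "chatgpt"))        # True
print(may_send_to_ai("confidential", "chatgpt"))  # False
```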

As you leverage AI opportunities in a compliant manner, develop use cases to establish success scenarios, failure scenarios and variants or exceptions to guide future use.

Human oversight is HR's superpower

AI is here to stay and will most certainly become more powerful. Organizations that refuse to consider how it can benefit productivity and outcomes risk losing their competitive edge. Responsible and ethical use of AI demands the involvement of a human, especially in HR applications. That need for a constant human touch is HR's "superpower." Used for good, AI can enable HR to do more with less, execute strategically, maximize productivity, achieve organizational goals and retain talent. So while there is no need to fear AI, it demands vigilant oversight by trained humans.

Gallagher's Human Resources Technology Consulting practice can work with you to optimize AI and other HR technology so your organization operates more efficiently. Let us help your team face the future with confidence.

Call us at 1.800.821.8481 or request that we contact you.

Sources

1"CEO Decision-Making in the Age of AI," IBM Corporation, 26 Jul 2023.

2Heikkilä, Melissa. "AI Language Models Are Rife With Different Political Biases," MIT Technology Review, 8 Aug 2023.

3Krishnan, Rakesh. "FraudGPT: The Villain Avatar of ChatGPT," Netenrich, 25 Jul 2023.

4Maruf, Ramishah. "Lawyer Apologizes for Fake Court Citations from ChatGPT," CNN, 28 May 2023.

5Gilbert, Annelise. "EEOC Settles First-of-Its-Kind AI Bias in Hiring Lawsuit (1)," Bloomberg Law, 10 Aug 2023.


Disclaimer

Consulting and insurance brokerage services to be provided by Gallagher Benefit Services, Inc. and/or its affiliate Gallagher Benefit Services (Canada) Group Inc. Gallagher Benefit Services, Inc. is a licensed insurance agency that does business in California as "Gallagher Benefit Services of California Insurance Services" and in Massachusetts as "Gallagher Benefit Insurance Services." Neither Arthur J. Gallagher & Co., nor its affiliates provide accounting, legal or tax advice.