Embracing Cybersecurity Awareness Month: Safeguarding against AI-driven risks and understanding the role your people play every day

Author: Richard A. Egleston

The era of generative artificial intelligence (AI) is upon us at warp speed, epitomized by generative pre-trained transformer (GPT) tools such as ChatGPT. With the rapid adoption of ChatGPT and similar AI tools reaching historic levels, cybersecurity awareness also must rise as a top priority for individuals and organizations. October is Cybersecurity Awareness Month, so it's a good time to think about the incredible potential — and the cyber risk — associated with the advancement of AI tools in the hands of ordinary people.

Understanding the concerns surrounding new AI tools: speed to market and safety

For years, AI has silently operated in the background of many services we use — from search engines and Netflix recommendations to auto-correct features and facial recognition services. However, the game changed with the recent introduction of ChatGPT, which put the power of AI front and center in most of our lives, whether we recognize it or not.1

ChatGPT saw historic adoption: 1 million users in the first five days following its November 2022 launch and an estimated 100 million users in the first two months.2 ChatGPT held the record for the shortest time to 1 million users until it was recently surpassed by Threads.3

ChatGPT functions as a highly advanced AI chatbot designed to generate humanlike text and answers to users' queries. It operates on a large language model trained on massive data sets, which allows it to respond in ways that mimic human conversation. It's the first of many generative AI tools now available to anyone.

The dual nature of generative AI

Generative AI has the potential for both positive and negative impacts. On the plus side, ChatGPT can help individuals and businesses create, learn and build things faster and better, such as composing music, drafting marketing copy, coding and conducting market research. However, in the wrong hands, ChatGPT can be weaponized to help criminals craft convincing, personalized phishing and business email compromise scams, or to disseminate false information.

Malicious uses of ChatGPT have been on the rise in the last few months, with a 24% increase in social engineering attacks reported by Avast Threat Labs.4

In Q2 2023:

  • 75% of cybercrime incidents were scams, phishing (including smishing) and malvertising.4
  • 2 in 5 people have fallen victim to a scam,5 and 50% of those victims experienced financial loss.
  • The average American now encounters 25 scams a week.

A tipping point in AI identification

Imagine someone impersonating your boss and instructing you to pay an invoice. Would you easily be able to verify it's not your boss?

We've reached an uncomfortable place where distinguishing between AI sources and human sources has become a challenge for humans and AI systems alike. In this evolving landscape, awareness emerges as our strongest defense. Generative AI tools make it easier for cybercriminals to write flawless imitations of legitimate sources (look-alike emails and texts), because reinforcement learning helps the ChatGPT algorithm fine-tune messaging, making it ever more challenging to differentiate real from fake.

Given all the details of our everyday lives shared on social networks or exposed in corporate data breaches, generative AI is enabling more accurate, and more nefarious, personalization.

Empowering individuals and companies: Awareness is the best defense

In a world where 88% of individuals are online daily6 and AI-driven threats are on the rise, everyone now has a role and responsibility in safeguarding their data and information — both personally and professionally.

The best advice for individuals and organizations is to educate themselves, adopt secure online practices and use available cyber safety tools. Protect your people by offering digital protection as an employee benefit, and help them be proactive in the fight against hackers. The more we generate awareness and use best practices, the safer we are at home and at work.

This Cybersecurity Awareness Month, leading organizations such as the National Cybersecurity Alliance and Norton LifeLock Benefit Solutions are promoting essential cyber safety tips to raise awareness of emerging risks.

5 cyber safe best practices

  1. Enable multi-factor authentication (MFA) for all financial accounts.
  2. Avoid reusing passwords by using a password manager to auto-generate unique, strong passwords.
  3. Keep software updated to ensure the latest vulnerabilities are patched quickly.
  4. Exercise caution and learn to recognize phishing attempts: don't click links in unsolicited emails, and be cautious about sharing personal information in response to unsolicited texts, emails or calls.
  5. Trust but verify: stay skeptical and confirm the authenticity of communications that appear to come from trusted sources.

Companies must take responsibility

Although individuals are responsible for maintaining their personal cyber safety, companies play a pivotal role in fostering cybersecurity awareness among their employees. Just as social media policies were introduced to address employees' use of Facebook, LinkedIn and other platforms, policies governing generative AI should now be implemented. With the right policies in place, companies can ensure their workforce is aware and empowered to operate responsibly in an AI-driven world that continues to evolve rapidly.

Policy, process and products can reduce risk

Recent findings indicate that experimentation with generative AI tools in the workplace is relatively common, according to an April 2023 McKinsey survey of almost 1,700 employees.7

This experimentation can leave companies wrestling with how, or even whether, to welcome generative AI tools in the workplace. Employers have good reason to be concerned about employees entering confidential or proprietary business information into generative AI systems. Use of ChatGPT at work raises a range of concerns, from spreading inaccuracies to introducing new cybersecurity risks and violating personal privacy.

In May 2023, Samsung Electronics banned staff globally from using ChatGPT and similar AI tools after discovering employees had uploaded sensitive code to the platform and used it to convert recorded meetings into notes.8

Now, human resources leaders are considering policies, processes and products/tools that can help minimize corporate and individual risk. To date, companies have done everything from banning products like ChatGPT to fully embracing their use. Most companies, however, have yet to address AI concerns at all: only 21% have adopted policies governing employee use.7

At a minimum, employers should establish clear guidelines to protect corporate assets, employee rights and privacy.

In the near term, a best practice is for HR to introduce a clear framework for responsible generative AI use in the employee handbook and reinforce it through employee cybersecurity training sessions.

Additional vehicles to strengthen cyber safety: Benefits and Enrollment Services

HR and security professionals face the challenge of enhancing cyber safety within their organizations. A staggering 82% of breaches and security incidents relate to human factors.9 Yet employees receive cybersecurity training only annually at best, and secure behaviors need more frequent reinforcement than that to become effective, lasting habits.

Many companies are creatively using existing HR vehicles, including employee benefits, to reinforce a culture of cyber safety.

Additional ways to educate employees and help build cyber safe habits

  • Increase the cadence of security training. Consider a quarterly schedule and add engaging modules on current threats.
  • Use enrollment services as another education vehicle. Gallagher's trained benefit counselors can educate employees on cybersecurity threats while introducing benefits and free tools.
  • Add cyber safety solutions to the employee benefits package — such as identity protection with cyber insurance — as a voluntary or employer-paid benefit that protects against modern scams.
  • Help your people protect their finances, for example through retirement plan cybersecurity best practices or broader financial wellbeing programs that address protecting financial data.

Make free cyber safety tools accessible through internal communications and intranets. Examples include:

  • Free AI-powered scam detector. People need tools that help them immediately verify whether content is real or a scam; Norton Genie is one option.10
  • A phishing education infographic that identifies the most common types of phishing scams this year.11
  • Five cyber safety tips from the National Cybersecurity Alliance can help you stay safe online.12
  • Links to corporate cybersecurity tools and content from your IT team and partners.

A collective effort for a safer digital world

Whether by following these recommendations or exploring others, organizations can support their employees and reduce their own risk exposure by fostering a culture of cybersecurity awareness and responsible AI use. Investing in policy, process and educational tools and products will pay dividends in mitigating cyber risks for both employees and the company. This Cybersecurity Awareness Month, start the conversation by elevating awareness of generative AI risks and benefits, and help employees and their families navigate the digital realm securely, both at work and at home.

Interested in how you can protect your organization from a cyber crisis? Learn more about your insurance coverages.

Sources

1Kelly, Samantha Murphy. "This AI Chatbot Is Dominating Social Media With Its Frighteningly Good Essays," CNN, updated 19 May 2023.

2Shewale, Rohit. "32 Detailed ChatGPT Statistics — Users, Revenue and Trends," DemandSage, 7 Sep 2023.

3Buchholz, Katharina. "Threads Shoots Past One Million User Mark at Lightning Speed," Statista, 7 Jul 2023.

4Threat Research team. "Avast Q2/2023 Threat Report," Avast Threat Labs, 10 Aug 2023.

5"2023 Norton Cyber Safety Insights Report," GenDigital, Feb 2023. PDF file.

6Nurse, Dr. Jason R.C., et al. "Oh Behave! The Annual Cybersecurity Attitudes and Behaviors Report 2022," CybSafe, 2022. PDF file.

7Chui, Michael, et al. "The State of AI in 2023: Generative AI's Breakout Year," McKinsey & Company, Aug 2023.

8Ray, Siladitya. "Samsung Bans ChatGPT Among Employees After Sensitive Code Leak," Forbes, 2 May 2023.

9"The Human Element Is Still a Major Factor in Breaches," Imprivata, 3 Jun 2022.

10"Genie Scam Detector," Norton, accessed 12 Sep 2023. Product page.

11Pechoucek, Michal. "Phictionary — The Phishing Dictionary Every Digital Citizen Should Read," Norton, 11 Jul 2023.

12"Online Safety Basics," National Cybersecurity Alliance, 26 May 2022.


Disclaimer

Consulting and insurance brokerage services to be provided by Gallagher Benefit Services, Inc. and/or its affiliate Gallagher Benefit Services (Canada) Group Inc. Gallagher Benefit Services, Inc. is a licensed insurance agency that does business in California as "Gallagher Benefit Services of California Insurance Services" and in Massachusetts as "Gallagher Benefit Insurance Services." Neither Arthur J. Gallagher & Co., nor its affiliates provide accounting, legal or tax advice.