Author: Richard A. Egleston
The era of generative artificial intelligence (AI) is upon us at warp speed, epitomized by generative pre-trained transformer (GPT) tools such as ChatGPT. With the rapid adoption of ChatGPT and similar AI tools reaching historic levels, cybersecurity awareness must also rise as a top priority for individuals and organizations. October is Cybersecurity Awareness Month, so it's a good time to think about the incredible potential — and the cyber risk — associated with the advancement of AI tools in the hands of ordinary people.
Understanding the concerns surrounding new AI tools: speed to market and safety
For years, AI has silently operated in the background of many services we use — from search engines and Netflix recommendations, to auto-correct features and facial recognition services. However, the game changed with the recent introduction of ChatGPT, which put the power of AI front and center in most of our lives, whether we recognize it or not.1
ChatGPT saw historic adoption: 1 million users in the first five days following its November 2022 launch and an estimated 100 million users in the first two months.2 ChatGPT held the record for the shortest time to 1 million users until it was recently surpassed by Threads.3
ChatGPT functions as a highly advanced AI chatbot designed to generate humanlike text in response to users' queries. It is built on a large language model trained on massive data sets, which allows it to produce fluent, humanlike answers. It's the first of many generative AI models now available to anyone.
The dual nature of generative AI
Generative AI has the potential for both positive and negative impacts. On the plus side, ChatGPT can help individuals and businesses create, learn and build things faster and better, such as composing music, drafting marketing copy, coding and conducting market research. However, in the wrong hands, ChatGPT can be weaponized to help criminals craft convincing personal phishing and business email compromise scams, or disseminate false information.
Malicious uses of ChatGPT have been on the rise in the last few months, with a 24% increase in social engineering attacks reported by Norton Threat Labs.4
In Q2 2023:
- 75% of cybercrime incidents were scams, phishing (including smishing) and malvertising.4
- 2 in 5 people reported falling victim to a scam,5 and 50% of those victims experienced financial loss.
- The average American encountered 25 scams a week.
A tipping point in AI identification
We've reached an uncomfortable place where distinguishing between AI sources and human sources has become a challenge for humans and AI systems alike. In this evolving landscape, awareness emerges as our strongest defense. With generative AI tools, it's easier for cybercriminals to write flawless imitations of legitimate sources (look-alike emails and texts), because reinforcement learning helps the ChatGPT algorithm fine-tune messaging, making it ever more challenging to differentiate real from fake.
Given all the details of our everyday lives shared on social networks or exposed in corporate data breaches, generative AI enables more accurate and nefarious personalization of attacks.
Empowering individuals and companies: Awareness is the best defense
In a world where 88% of individuals are online daily6 and AI-driven threats are on the rise, everyone now has a role and responsibility in safeguarding their data and information — both personally and professionally.
The best advice for individuals and organizations is to educate themselves, adopt secure online practices and use available cyber safety tools. Protect your people by offering digital protection as an employee benefit, and help them be proactive in the fight against hackers. The more we generate awareness and use best practices, the safer we are at home and at work.
This Cybersecurity Awareness Month, leading organizations such as the National Cybersecurity Alliance and Norton LifeLock Benefit Solutions are promoting essential cyber safety tips to raise awareness of the emerging risks.
5 cyber safe best practices
- Enable multi-factor authentication (MFA) for all financial accounts.
- Avoid reusing passwords by using a password manager to auto-generate unique, strong passwords.
- Keep software updated to ensure the latest vulnerabilities are patched quickly.
- Exercise caution and learn to recognize phishing attempts: don't click links in unsolicited emails, and be careful about sharing personal information in response to unexpected texts, emails or calls.
- Trust but verify: stay skeptical and confirm the authenticity of communications, even those that appear to come from trusted sources.
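To illustrate the "unique, strong passwords" advice above: a password manager's generator boils down to drawing characters from a large alphabet using a cryptographically secure random source. Here is a minimal sketch using Python's standard `secrets` module; the function name and 16-character length are illustrative choices, not the behavior of any particular product:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits and punctuation
    using a cryptographically secure random number generator."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each call produces a fresh, unrelated password, so every account
# can have its own — the property that defeats credential-stuffing attacks.
print(generate_password())
```

Because `secrets` (unlike the `random` module) is designed for security-sensitive use, guessing one generated password gives an attacker no foothold for predicting the next.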
Companies must take responsibility
Although individuals are responsible for maintaining their personal cyber safety, companies play a pivotal role in fostering cybersecurity awareness among their employees. Just as social media policies were introduced to address employees' use of Facebook, LinkedIn and other social media platforms, policies surrounding generative AI should be implemented. The right policies ensure the workforce is aware and empowered to operate responsibly in an AI-driven world that continues to evolve rapidly.
Policy, process and products can reduce risk
Recent findings indicate that experimentation with generative AI tools in the workplace is relatively common. An April 2023 McKinsey survey of almost 1,700 employees shows the following:7