Published Mar 14, 2025

Why Social Engineering Has a Before and After ChatGPT



Since the launch of ChatGPT in November 2022, the world has shifted into an even faster-paced environment where knowledge is more accessible than ever, and the tools keep improving. As of February 2025, ChatGPT has reached 400 million weekly active users and is now widely used across a range of industries. Students treat it as their new teacher for solving complex math problems, payment companies like Klarna have frozen hiring so they can automate much of their customer support with AI, and biotech companies are accelerating drug discovery at unprecedented speeds.

With every major technological leap come new risks that call for modern countermeasures, and AI-powered social engineering is one of them. In this post, we explore how this shift is changing the toolkit of malicious actors and enabling them to scale and personalize attacks like never before.

1. The Nigerian Prince Playbook

Before GenAI, phishing and social engineering attacks largely relied on static templates, poor translations, and basic tactics. A classic example is the Nigerian Prince scam, where scammers impersonated royalty or officials offering large sums in exchange for help transferring funds.

Example: "My name is Prince Adewale from Nigeria, and I need your help to transfer $10 million out of my country. You will receive 30% for your assistance."

It may seem basic, but it worked—and still does—because it taps into emotions like greed, fear, and trust. Still, success rates were low due to spelling errors, awkward phrasing, and obvious red flags. Traditional email security easily caught these using rule-based filters, blacklists, and signature detection. The biggest limitation was adaptability: no personalization, no realistic tone—just high-volume noise with low impact.
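To make the "rule-based filter" point concrete, here is a minimal sketch in Python. The phrase list, weights, and threshold are hypothetical and far cruder than a production filter such as SpamAssassin, but they illustrate why a static template that reuses the same wording gets caught.

```python
# Minimal sketch of a rule-based phishing filter (illustrative only).
# The phrases, weights, and threshold below are hypothetical; real filters
# combine hundreds of weighted rules, sender blacklists, and signatures
# of known campaigns.
SUSPICIOUS_PHRASES = {
    "prince": 2,
    "wire transfer": 3,
    "transfer": 1,
    "million": 2,
    "you will receive": 2,
    "urgently need your help": 3,
}
BLOCK_THRESHOLD = 5


def score_email(body: str) -> int:
    """Sum the weights of every blacklisted phrase found in the email body."""
    text = body.lower()
    return sum(weight for phrase, weight in SUSPICIOUS_PHRASES.items() if phrase in text)


def is_phishing(body: str) -> bool:
    return score_email(body) >= BLOCK_THRESHOLD


template = ("My name is Prince Adewale from Nigeria, and I need your help to "
            "transfer $10 million out of my country. You will receive 30% for "
            "your assistance.")
print(is_phishing(template))  # True: the static template trips the static rules
```

The same logic also hints at the post-2022 problem: ask an LLM to reword that template in flawless, personalized prose and few of the hard-coded phrases survive, so the score falls below the threshold.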

Video: Exposing Nigerian fraud syndicates (60 Minutes)

2. ChatGPT: The Inflection Point

With the launch of ChatGPT in November 2022 and the broader rise of large language models (LLMs), the game changed for cybercriminals. Suddenly, the telltale linguistic mistakes, such as misspellings and awkward phrasing, could be eliminated entirely. Attacks could be auto-localized into any language and generated on demand, at real-time speed.

Cybercriminals saw this as a major opportunity. Although OpenAI quickly implemented guardrails to prevent the use of ChatGPT for phishing and other malicious purposes, attackers found workarounds—such as accessing the OpenAI API through Telegram bots or simply copying and pasting generated emails to avoid keyword-based filters.

ChatGPT set the tone, but as its guardrails continued to improve, it became increasingly difficult for professional threat actors to rely on it directly.

Screenshot: ChatGPT's responses to abusive requests


3. WormGPT, FraudGPT, DarkBART

In response, cybercriminal communities began developing their own tools: uncensored language models trained specifically for malicious use. This gave rise to models like WormGPT, FraudGPT, and DarkBART, which were marketed openly on dark web forums and Telegram channels as alternatives free from ethical restrictions. These models were trained or fine-tuned on phishing, social engineering, and malware-related data, allowing threat actors to craft highly convincing scam messages, fake identities, or even code snippets for exploits.

You might think these tools are accessible only to nation-state actors or professional threat groups. Think again. A subscription to basic FraudGPT offerings starts as low as $200 per month, and even the most advanced tier costs around $1,700 per month. For hackers, this low price point makes the economics extremely attractive: pulling off just a few convincing phishing campaigns could earn back the investment a hundred times over.

Screenshot: Prompt used to create a scam in FraudGPT


4. Where We're Headed

No one really knows yet how this will escalate, but it's clear that the expanding attack surface is increasing risk exposure for both enterprises and SMEs. McKinsey reported in November 2024 that "since the proliferation of generative AI platforms starting in 2022, phishing attacks have risen by 1,265 percent." A new modus operandi could involve an acceleration in multimodal attacks, where, in addition to high-fidelity vishing and deepfake videos, malicious actors also use LLMs to interact with chatbots and email systems, manipulating them continuously. This could result in persistent attacks on any department in your organization that interfaces with external partners, suppliers, or customers.

If bad actors are leveraging LLMs continuously, it becomes equally critical to deploy defensive LLMs as your best line of defense for ongoing protection.
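As a rough illustration of what that could look like, the sketch below uses the OpenAI Python SDK to ask a chat model to triage an inbound email for social-engineering indicators. The model name, prompt wording, and JSON output schema are assumptions made for this example; a real deployment would add sender-reputation signals, rate limiting, and human review of high-risk verdicts.

```python
import json

from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

# Hypothetical system prompt and output schema, shown for illustration only.
SYSTEM_PROMPT = (
    "You are an email security analyst. Assess the email for social-engineering "
    "indicators such as urgency, impersonation, payment or credential requests, "
    "and unusual links. Reply with JSON only: "
    '{"risk": "low|medium|high", "indicators": [], "summary": ""}'
)


def triage_email(sender: str, subject: str, body: str) -> dict:
    """Ask the model for a structured phishing-risk assessment of one email."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        temperature=0,
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"From: {sender}\nSubject: {subject}\n\n{body}"},
        ],
    )
    return json.loads(response.choices[0].message.content)


verdict = triage_email(
    sender="ceo-office@examp1e-corp.com",
    subject="Urgent wire transfer before 5pm",
    body="Please process a confidential payment of $48,000 today. I am in meetings, do not call.",
)
print(verdict["risk"], verdict["indicators"])
```

Unlike the static rules shown earlier, a triage step like this evaluates intent and context rather than exact wording, which is what makes it a better match for LLM-generated attacks.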