As many cybersecurity firms have warned of late, cyber attackers can now use generative AI to create convincing videos, images, and audio recordings that deceive individuals into divulging their data.
Yet in a less dramatic but potentially more devastating way, technology such as ChatGPT can also let fraudsters operate at massive scale and reach far more victims, says cybersecurity services provider Sophos.
In a recent test, Sophos built a fully functioning website with AI-generated images, audio, and product descriptions, along with a fake Facebook login page and a checkout page designed to steal users’ login credentials and credit card details. All of this is possible today using LLM tools such as GPT-4 and a basic e-commerce template.
With the click of a single button and minimal technical skill, Sophos X-Ops could construct hundreds of similar websites in minutes using the same technique.
Given how successful this attempt was, more cyber attackers can be expected to use new technology to automate their attacks, said Ben Gelman, a senior data scientist at Sophos.
“The original creation of spam e-mails was a critical step in scamming technology because it changed the scale of the playing field,” he noted.
“New AIs are poised to do the same; if an AI technology can create complete, automated threats, people will eventually use it,” he added. “We have already seen the integration of generative AI elements in classic scams, such as AI-generated text or photographs to lure victims.”
The good news is that Sophos’ study indicates cybercriminals are still reluctant to employ generative AI in their attacks, at least for now.
The company searched four well-known Dark Web forums for conversations about LLMs. It found that threat actors were discussing AI’s potential for social engineering, suggesting that cybercriminals are still in the early phases of adopting the technology. AI has already been used in romance-based cryptocurrency scams.
Furthermore, according to Sophos, most posts concerned “jailbreaks”, methods of bypassing LLM safety measures so the models can be put to harmful use, along with compromised ChatGPT accounts offered for sale.
Researchers at Sophos X-Ops, the company’s threat response task force, also discovered ten ChatGPT derivatives that they believe could be used to launch cyberattacks and develop malware.
Threat actors, however, had mixed reactions to these derivatives and to other nefarious uses of LLMs. Several criminals worried that the creators of the ChatGPT imitators were trying to defraud them.
Since the release of ChatGPT, cybercriminals have been more skeptical than enthusiastic about abusing AI and LLMs, according to Sophos.
Across two of the four Dark Web forums examined, Sophos found only 100 posts on AI. In contrast, cryptocurrency scams drew more than 1,000 posts over the same period.
Some cybercriminals were attempting to create malware or attack tools using LLMs, but the results were rudimentary and often met with skepticism from other users, said Christopher Budd, director of X-Ops research at Sophos.
In one case, a threat actor eager to showcase the potential of ChatGPT inadvertently revealed significant information about his real identity, he noted.
“We even found numerous ‘thought pieces’ about the potential negative effects of AI on society and the ethical implications of its use,” he added.
“In other words, at least for now, it seems that cybercriminals are having the same debates about LLMs as the rest of us,” he pointed out.