The Dark Side of AI and the Art of Deception

By Tiana Cline | Forbes Africa

There is a thin line between innovation and abuse. A look at how large language models (LLMs) are making cybercriminals even better at mastering the art of deception.

What you can do with generative artificial intelligence (AI) tools like OpenAI’s ChatGPT or Google DeepMind’s Gemini is phenomenal. It has never been easier to translate text, brainstorm ideas or even write and debug code… but have you stopped to consider that the same ease of use that helps the everyday person also works to cybercriminals’ advantage?

Here’s the thing – cybercriminals love ChatGPT. There are even large language model (LLM) clones built for hackers on the dark web, such as XXXGPT, FraudGPT and WormGPT (which its developers shut down after overwhelming media attention).


According to Check Point Research, there is a massive underground market for premium ChatGPT accounts. “Cybercriminals use ChatGPT clones, but the most common LLM they’re using is ChatGPT itself because it is the easiest. It’s the one that has the API,” says Sergey Shykevich, a threat intelligence group manager at Check Point Research. One of the reasons cybercriminals are after premium ChatGPT accounts is that the service isn’t available in countries like North Korea, China, Iran and Russia. “The cybercriminals have to bypass different restrictions, but when you use the API or a premium account, the restrictions are much looser than in the free user interface.”

THE DEEP, DARK WEB


A Kaspersky study into how cybercriminals use AI on the dark web found that hackers are using language models not only to develop malicious products but to obtain information. With the help of AI, phishing, a type of social engineering scam, is becoming significantly harder to detect. Emails that were once filled with tell-tale spelling mistakes and grammatical errors are now indistinguishable from legitimate correspondence.

“Now, all the spam phishing services use ChatGPT to randomize emails,” adds Shykevich. “Many of the anti-spam engines are based on this data: if they find a lot of emails that are similar, they will automatically label them as malicious. But with AI, you can run a campaign consisting of hundreds of thousands of phishing emails, all randomized by automated tools.”
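To see why randomization defeats this kind of filtering, consider a minimal sketch of a similarity-based spam check along the lines Shykevich describes. The threshold, the sample messages and the use of Python’s difflib are illustrative assumptions, not details of any real anti-spam product.

```python
# Minimal sketch: an anti-spam heuristic that flags near-duplicate
# emails as a campaign. Threshold and messages are invented for
# illustration; real engines use far more sophisticated signals.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two email bodies."""
    return SequenceMatcher(None, a, b).ratio()

def flag_campaigns(emails: list[str], threshold: float = 0.9) -> set[int]:
    """Flag emails whose bodies are near-duplicates of another message."""
    flagged = set()
    for i in range(len(emails)):
        for j in range(i + 1, len(emails)):
            if similarity(emails[i], emails[j]) >= threshold:
                flagged.update({i, j})
    return flagged

# Identical templates trip the filter; an LLM-reworded variant of the
# same lure shares little surface text, so its pairwise similarity
# falls below the threshold and it slips through.
batch = [
    "Your parcel is held at customs. Pay the fee here: ...",
    "Your parcel is held at customs. Pay the fee here: ...",
    "A package addressed to you needs a small clearance payment: ...",
]
print(flag_campaigns(batch))  # {0, 1} -- the reworded copy is missed
```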


Vladislav Tushkanov, a lead data scientist at Kaspersky, explains that generative AI, like other types of technology, is a dual-use tool. “You can do a lot of cool stuff with these tools… but there are also some applications that can be interesting for cybercriminals. The most obvious use is fraud – creating believable voice fakes to lure people into sending money,” says Tushkanov. “What our Digital Footprint Intelligence found on the dark web are posts saying ‘ChatGPT is my best buddy’ and ‘when I do my work, I always consult ChatGPT – it’s so useful’. These are real posts from cybercriminals that we saw on a hacker forum.”

Greg van der Gaast is a former hacker, FBI operative and the MD of Sequoia Consulting. His take on cybersecurity is a little bit different. “If you’re not vulnerable to what cybercriminals are trying to exploit using AI, if you have good build processes and architecture, they’re not a threat to you,” he says.

For van der Gaast, one of the main problems with the cybersecurity industry at large is that it is focused on risk management.

“If I leave a bag of grain in my front garden, I’m not surprised that 10,000 mice show up three days later. The solution isn’t for me to buy 10,000 mousetraps, it’s to store my grain better and invest in a couple of mousetraps,” he says.


“To keep storing your grain in the same place and keep flinging mousetraps is ridiculous. The only reason there are so many threats is that there are so many organizations with vulnerabilities that they’re not addressing. They’re just risk-managing them instead.”

Looking at AI from a hacker’s perspective, van der Gaast says that the best thing about AI is not having to think. “AI allows [a hacker] to work a lot more efficiently. So, if you’re out doing reconnaissance or trying to break into stuff, AI speeds up the workflow,” he explains. “With AI, a cybercriminal could pace an attack so that it slips past detection algorithms.”
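A toy sketch makes the pacing point concrete. Assuming a simple sliding-window rate detector (the window size and event limit below are invented for illustration, not taken from any real intrusion-detection system), a burst of probes trips the alarm while the same probes, spread out in time, never do.

```python
# Toy sliding-window rate detector. Window and limit are assumptions
# chosen for the example: events arriving faster than the limit are
# flagged; an attacker who paces probes just under it is never seen.
from collections import deque

class RateDetector:
    def __init__(self, window_seconds: float = 60.0, max_events: int = 20):
        self.window = window_seconds
        self.max_events = max_events
        self.events: deque[float] = deque()

    def observe(self, timestamp: float) -> bool:
        """Record an event; return True if the rate threshold is exceeded."""
        self.events.append(timestamp)
        # Drop events that have fallen out of the sliding window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max_events

detector = RateDetector()
# Thirty probes in under a second trip the alarm...
print(any(detector.observe(t * 0.03) for t in range(30)))  # True

detector = RateDetector()
# ...while the same thirty probes, one every ten seconds, stay invisible.
print(any(detector.observe(t * 10.0) for t in range(30)))  # False
```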

This is why van der Gaast believes that instead of playing catch-up and never solving the larger problem, AI could be used to think intelligently about where the issue originates in the first place.

“No one is looking at the root causes. Stop looking at the severity and start looking at the reasons behind why you have a vulnerability in the first place,” he recommends.


“Start seeing security issues as quality issues because that’s what a security vulnerability is – a defect in code, in configuration, in how a system is built. Don’t have the vulnerability and then the threat won’t be a threat to you.”
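His framing is easy to make concrete. The sketch below (the schema, query and payload are invented for the example) shows the same database lookup written with the defect and without it: remove the defect and the injection “threat” has nothing to exploit.

```python
# Illustration of a vulnerability as a code defect: the same lookup,
# once with user input pasted into the SQL string, once parameterized.
# Table, data and payload are invented for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def lookup_defective(name: str):
    # Defect: input is concatenated into the query, so a crafted
    # name like "x' OR '1'='1" rewrites the SQL and dumps every row.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'"
    ).fetchall()

def lookup_fixed(name: str):
    # Parameterized query: the driver treats input as data, never SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(lookup_defective(payload))  # [('hunter2',)] -- secret leaked
print(lookup_fixed(payload))      # [] -- nothing to exploit
```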

With the hype surrounding generative AI, it isn’t a surprise that cybercriminals want to get in on the action. Over and above smarter phishing emails and messages, scammers have managed to lure those curious about LLMs with adverts to download enhanced versions that don’t really exist. There are also a number of fake ChatGPT apps that charge for features OpenAI offers for free. Often referred to as fleeceware, these apps overwhelm users with adverts until they cave in and pay for a subscription.

“We are now in a race between the good guys and the bad guys,” says Shykevich. “It’s still too early to call who is on the winning side. We still don’t see the bad guys using AI for really sophisticated malware or something crazy. The defender side is a bit better. We have the advantage and I hope it will remain peaceful.”

