21 August 2023

6 min read

AI and the cyber threat landscape: how worried should you be?

Cyber security
Data Breach

As Artificial Intelligence looks set to disrupt almost every sector of the economy, opportunistic cyber criminals are experimenting with how it can be used maliciously. In July 2023, there were various reports that large language models specialised in generating malicious content were being sold on the dark web. While the threat from these specific tools is easily overstated, James Tytler looks at why and how cyber criminals will use AI to their advantage and considers how it will change the threat landscape.

 

The rise of ‘dark’ chatbots trained on malicious data

Large language models (LLMs) are a form of generative artificial intelligence trained on vast amounts of written input. They can produce human-quality text, code, and other content in response to a prompt. The rapid development of these tools has sparked concern in the security community that malicious actors will use them to draft highly convincing phishing emails or to write malware. In April 2023, Europol issued a report warning that LLMs could be abused to create phishing campaigns and other forms of online fraud “faster, much more authentically, and at a significantly increased scale”.

In response to these concerns, the developers of the major publicly accessible LLMs have hastily introduced constraints on what prompts their chatbots will respond to. This includes closing down so-called “jailbreaking” prompts, which are specifically designed to bypass these ethical constraints. When OpenAI’s ChatGPT, the first widely available LLM chatbot, was released, it was possible to ask it to write a phishing email simply by presenting the request as part of a training exercise. These chatbots will now typically refuse to comply with any prompt containing terms such as “malware” or “phishing”, even if presented in an educational or fictional context.
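
As a rough illustration of how blunt such keyword-based refusals can be, the sketch below shows a hypothetical prompt filter. The function name and blocked-term list are invented for the example and do not reflect any vendor’s actual moderation logic.

    # Illustrative only: a toy keyword-based prompt filter of the kind described
    # above. The function and blocked-term list are hypothetical, not any
    # vendor's actual moderation logic.
    BLOCKED_TERMS = {"malware", "phishing", "ransomware", "keylogger"}

    def is_prompt_allowed(prompt: str) -> bool:
        """Reject any prompt containing a blocked term, regardless of context."""
        lowered = prompt.lower()
        return not any(term in lowered for term in BLOCKED_TERMS)

    # A legitimate, educational request is refused...
    print(is_prompt_allowed("Write a phishing awareness email for staff training"))      # False
    # ...while a reworded malicious request slips through.
    print(is_prompt_allowed("Write an urgent email asking staff to resend their login"))  # True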

In early July 2023, cyber criminals began selling access to what were described as “evil clones” of ChatGPT, built without these constraints. Posts on various dark web forums offered access to “WormGPT” on a subscription basis: EUR 100 for monthly access to the basic version, “WormGPT v1”, or EUR 550 for yearly access to a more advanced version dubbed “WormGPT v2”. The tool was explicitly marketed as being able to produce phishing emails and code malware. Following a spike in interest, no doubt driven by the media attention, a Twitter account claiming to belong to the developer of WormGPT announced that prices would be increased.

 


How much of a threat do these chatbots pose?

There are several reasons to be sceptical about the threat from malicious LLMs at present. Despite the rapid increase in the capabilities of generative AI, it is unlikely that WormGPT or similar LLMs sold on the dark web perform anywhere close to the level of the major commercial models developed by OpenAI or Google. A forum user advertising the product stated that it was based on GPT-J, an open-source LLM released in 2021. As such, the underlying model is significantly less powerful than GPT-3.5, which powers the freely available ChatGPT, let alone GPT-4, the most recent paid release. Then there is the cost of running an LLM to consider: while we have no visibility into the resources available to the operators of WormGPT, training and then running an LLM at scale requires enormous computational power.

It is worth considering how useful it would actually be for threat actors to have access to an LLM with no constraints, ethical or otherwise. While these tools are undoubtedly good at producing well-crafted emails, their output is often less convincing when it comes to generating code. There have been several alleged proofs of concept for using ChatGPT to write malware, but these have required significant input and hand-holding from knowledgeable developers and are of limited practical use. It is unlikely that the output of a less powerful LLM would be any better.

It is also difficult to measure what impact access to LLMs will have on the sophistication of phishing campaigns. It has been argued that the prevalence of LLMs has already led to a decline in spelling and grammatical errors in phishing emails. However, it is debatable how far the ability to draft perfectly worded phishing emails would translate into more successful attacks. Fraudsters often include spelling and grammatical mistakes deliberately, as they help ensure the emails reach the most gullible victims. Spelling errors can also help evade traditional spam filters, which look out for specific keywords and phrases.
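
To make that last point concrete, the short sketch below shows a naive keyword-based spam filter and how a deliberate misspelling slips past it; the keyword list and function are invented for the example, not taken from any real product.

    # Illustrative only: a naive keyword-based spam filter, showing why deliberate
    # misspellings can slip past it. The keyword list is invented for the example.
    SPAM_KEYWORDS = {"verify your account", "password", "invoice overdue"}

    def looks_like_spam(email_body: str) -> bool:
        body = email_body.lower()
        return any(keyword in body for keyword in SPAM_KEYWORDS)

    print(looks_like_spam("Please verify your account within 24 hours"))  # True - caught
    print(looks_like_spam("Please verlfy your acc0unt within 24 hours"))  # False - misspelling evades the filter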

While WormGPT itself is more likely a gimmick than a game changer, its appearance points to the many ways in which AI will shape the cyber threat landscape. The interest it has generated among cyber criminals shows there is a strong economic incentive for threat actors to experiment with LLMs and other forms of artificial intelligence.

 

What are the threats from malicious use of AI in the medium term?

Ransomware groups are highly professionalised and have access to significant financial resources. Many are also believed to have links to the governments of adversarial nation states. As such, it is reasonable to assume that the more technologically advanced and financially motivated threat actors will invest actively in AI research and development. While it is difficult to predict exactly what form this will take, we can expect to see LLMs and other forms of AI used in more creative ways in the medium term. Some potential developments include:

  • AI-driven vulnerability identification becoming a force multiplier: It is likely that threat actors will begin to leverage AI to identify vulnerabilities in corporate networks and gain initial access. Indicative of this trend, in May 2023 an automated open-source penetration testing toolkit called PentestGPT was released, which can reportedly streamline the process of identifying vulnerabilities in a network. It has been claimed that the tool can already solve easy-to-medium “capture the flag” challenges, which are designed for human hackers to test their skills. While these tools are in their infancy, further advances in machine learning may soon allow malicious actors to use AI to identify previously unknown “zero-day” vulnerabilities.

  • Biometric data like voices and visual likenesses being abused in more creative ways: Threat actors will use cloned voices, and even visual likenesses, to carry out sophisticated social engineering attacks. It is now possible to convincingly clone an individual’s voice from just a few minutes of speech, and voice spoofing attacks are already occurring. Back in 2019, fraudsters used AI to mimic a CEO’s voice and authorise a fraudulent transfer of funds. Security researchers have also tricked voice recognition systems to gain access to phone-banking services or to commit tax fraud. It is likely that AI will soon be able to clone a person’s visual likeness as well as their voice, including their individual mannerisms and even movement patterns. In an era of remote working, you may no longer be able to trust that the person on the other end of a Teams call is who you think they are.

 

What can organisations do to protect themselves?

In light of these rapidly evolving threats, organisations should ensure they have artificial intelligence in their own defensive arsenals. Traditional signature-based antivirus tooling is rapidly becoming obsolete. Modern Endpoint Detection and Response (‘EDR’) platforms use machine learning to detect active threats based on their behaviour, and email filtering technology increasingly leverages AI to detect threats based on their technical features, meaning that even carefully crafted phishing emails are far less likely to reach end users.
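
As a rough sketch of what behaviour-based detection involves, the example below trains a toy anomaly detector on baseline endpoint telemetry and flags an unusual burst of activity. The features, figures, and use of scikit-learn’s IsolationForest are illustrative assumptions rather than a description of how any particular EDR product works.

    # Illustrative only: a toy behaviour-based anomaly detector in the spirit of
    # the machine-learning techniques EDR platforms use. Features and figures
    # are invented for the sketch.
    from sklearn.ensemble import IsolationForest

    # Each row: [processes spawned/min, files modified/min, outbound connections/min]
    # observed on an endpoint during normal activity.
    baseline_activity = [
        [3, 10, 2], [4, 12, 1], [2, 8, 3], [5, 15, 2], [3, 9, 2], [4, 11, 1],
    ]

    detector = IsolationForest(contamination=0.1, random_state=0).fit(baseline_activity)

    # A burst of file modifications and outbound connections, such as ransomware
    # encrypting files or data being exfiltrated, scores as anomalous.
    suspicious_activity = [[6, 400, 50]]
    print(detector.predict(suspicious_activity))  # expected: [-1] -> anomaly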

Of course, even with AI deployed in defensive tooling, there is no substitute for critical thinking. Humans remain one of the most vulnerable attack vectors, and a well-informed, engaged workforce is a critical first line of defence.

 


To discuss this article or other industry developments, please reach out to one of our experts.
