How threat actors use AI

OpenAI has published an update on some of the criminal campaigns it has identified and disrupted.

Since the start of 2024, OpenAI says it has disrupted more than 20 operations from around the world that attempted to abuse its Artificial Intelligence (AI) platform for criminal activity and deceptive campaigns.

The full report paints a picture of covert operations that tried to influence elections all over the globe, a popular activity given that almost half the world’s population has seen or will see important elections this year.

But it also covers criminal groups looking to find and remove bugs from their malware. The report mentions the example of an Iranian group known as STORM-0817 that used OpenAI’s large language models (LLMs) to debug its code.

The fact that AI is used for the intermediate stages between setting up the necessary infrastructure and unleashing malware campaigns in the wild provides a unique insight that can help tie together activities that might otherwise seem unrelated.

This in turn can help us sharpen our defenses through targeted investments in detection and investigation capabilities across the internet.

One of the case studies highlights a spear phishing campaign against the personal and corporate email addresses of OpenAI employees. To help in this campaign, the attacker used OpenAI’s own services for reconnaissance, vulnerability research, scripting support, anomaly detection evasion, and development. A third party attributed the attack to a Chinese group called SweetSpecter.

The group sent emails to OpenAI employees, posing as a ChatGPT user asking for support. The attached ZIP file contained an LNK file that, when opened, decrypted and executed the SugarGh0st remote access trojan (RAT).
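For defenders, attack chains like this one can often be triaged at the mail gateway before anything is opened. The sketch below is purely illustrative and is not taken from the OpenAI report: it assumes a hypothetical attachment name and simply flags ZIP attachments that contain Windows shortcut (.lnk) files, the kind of lure SweetSpecter reportedly used.

```python
import zipfile
from pathlib import Path

# Illustrative only, not from the OpenAI report: flag ZIP email attachments
# that contain Windows shortcut (.lnk) files, the lure reportedly used to
# deliver the SugarGh0st RAT.
SUSPICIOUS_EXTENSIONS = {".lnk"}

def zip_contains_shortcut(attachment: Path) -> bool:
    """Return True if the ZIP archive contains any .lnk entries."""
    if not zipfile.is_zipfile(attachment):
        return False
    with zipfile.ZipFile(attachment) as archive:
        return any(
            Path(name).suffix.lower() in SUSPICIOUS_EXTENSIONS
            for name in archive.namelist()
        )

if __name__ == "__main__":
    # Hypothetical attachment name, used purely for illustration.
    sample = Path("support_request.zip")
    if sample.exists() and zip_contains_shortcut(sample):
        print(f"Quarantine candidate: {sample} contains a shortcut file")
```

In practice, a check like this would sit alongside sandbox detonation and sender reputation scoring rather than replace them.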

By analyzing the campaign, OpenAI was able to confirm that none of the emails reached its employees. It also found and disrupted a cluster of ChatGPT accounts that used the same infrastructure to ask questions intended to help with scripting and vulnerability research tasks.

Another group, known as CyberAv3ngers, was found using ChatGPT to research potential targets and hacking techniques.

As OpenAI puts it in the report:

“Much of the behavior observed on ChatGPT consisted of reconnaissance activity, asking our models for information about various known companies or services and vulnerabilities that an attacker would have historically retrieved via a search engine. We also observed these actors using the model to help debug code.”

CyberAv3ngers is thought to be affiliated with the Iranian Islamic Revolutionary Guard Corps and has been known to attack industrial control systems (ICS) and programmable logic controllers (PLCs) used in water systems, manufacturing, and energy systems. Sure enough, OpenAI reports that some of CyberAv3ngers’ prompts “focused on asking for default username and password combinations for various PLCs.”

As well as spilling the beans on potential targets, OpenAI reports that the prompts associated with CyberAv3ngers allowed it to “identify additional technologies and software that they may seek to exploit.”

The report provides an intriguing insight into how cybercriminals operate and how they use AI to hammer out the details of their campaigns. It also hints that, like the rest of us, even sophisticated threat actors are still just feeling their way with AI.

As OpenAI concludes:

“In line with our findings from other investigations into state-sponsored threat actors using our models, we believe that these interactions did not provide CyberAv3ngers with any novel capability, resource, or information, and only offered limited, incremental capabilities that are already achievable with publicly available, non-AI powered tools.”