Researchers Warn of Undetectable Malware ChatGPT Can Create

Researchers at cybersecurity firm CyberArk Labs have warned that ChatGPT, the advanced AI-driven text generator created by OpenAI, could be used to build a highly sophisticated and evasive class of malware. According to the researchers, this kind of malware, known as a polymorphic or metamorphic virus, continually mutates its own code while keeping the original algorithm intact, making it difficult for traditional cybersecurity tools to detect and remove.
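The core idea of polymorphism, the same algorithm wearing an ever-changing surface form, can be illustrated with a deliberately harmless sketch. The snippet below (not CyberArk's code; the summing "payload" and the identifier-renaming scheme are illustrative stand-ins) generates textually distinct variants of one benign function, showing why signature-based matching struggles even though every variant behaves identically:

```python
import hashlib
import random
import re
import string

# A harmless stand-in "algorithm": sum a list of numbers.
TEMPLATE = (
    "def payload(data):\n"
    "    result = 0\n"
    "    for x in data:\n"
    "        result += x\n"
    "    return result\n"
)

def make_variant(template: str) -> tuple[str, str]:
    """Produce a source-level variant of a benign function by renaming its
    identifiers, mimicking how polymorphic code changes form while keeping
    the underlying algorithm intact. Returns (source, new function name)."""
    new_names = {
        name: "_" + "".join(random.choices(string.ascii_lowercase, k=8))
        for name in ("payload", "data", "result")
    }
    # One-pass, whole-word substitution so renames cannot corrupt each other.
    src = re.sub(r"\b(payload|data|result)\b",
                 lambda m: new_names[m.group(1)], template)
    return src, new_names["payload"]

variant_a, name_a = make_variant(TEMPLATE)
variant_b, name_b = make_variant(TEMPLATE)

# The variants differ byte-for-byte, so their content hashes differ...
print(hashlib.sha256(variant_a.encode()).hexdigest())
print(hashlib.sha256(variant_b.encode()).hexdigest())

# ...yet both compute exactly the same result.
ns_a, ns_b = {}, {}
exec(variant_a, ns_a)
exec(variant_b, ns_b)
print(ns_a[name_a]([1, 2, 3]), ns_b[name_b]([1, 2, 3]))
```

Real polymorphic engines mutate far more than names (encryption, instruction reordering, junk code), but the defensive takeaway is the same: a fixed byte signature cannot pin down a program whose form changes on every generation.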

The researchers created a proof of concept (PoC) demonstrating how ChatGPT's built-in content filters can be bypassed to generate variations of malware. ChatGPT (the GPT stands for Generative Pre-trained Transformer) is an AI-powered chatbot that uses natural language processing (NLP) to generate human-like text in response to prompts, and it handles a variety of NLP tasks such as language translation, text summarization, and question answering.

The researchers found that by repeatedly querying the chatbot, receiving a unique piece of code each time, they could assemble a polymorphic program that is highly evasive and difficult to detect. They also noted that, unlike the web interface, the ChatGPT API did not apply the content filter, which made their task much easier.

“One of the powerful capabilities of ChatGPT is the ability to easily create and continually mutate injectors,” said the researchers in a statement. “By continuously querying the chatbot and receiving a unique piece of code each time, it is possible to create a polymorphic program that is highly evasive and difficult to detect.”

The researchers suggest that attackers could use ChatGPT's ability to generate varied payloads and techniques to develop a wide range of malware that evades security products. They also highlighted that this type of malware exhibits no malicious behavior while stored on disk and often contains no suspicious logic while in memory.

This is not the first time that researchers have raised concerns about the potential malicious use of advanced AI-based text generators. As the technology continues to improve, it is becoming increasingly important for cybersecurity firms and developers to consider the potential risks and take steps to mitigate them.
