Europol issued a warning a few days ago that ChatGPT could help criminals hone their online victimization techniques. One of the examples Europol gave was using ChatGPT to create malware. OpenAI's generative AI tool does have safeguards in place: if you ask it directly, it will refuse to help you write harmful code.
However, a security researcher got around those safeguards by doing what criminals undoubtedly do: he instructed ChatGPT to build the malware function by function, using simple, straightforward prompts. He then combined the code fragments into a piece of malware that can steal data from PCs while remaining undetected. It is the kind of zero-day attack nation-states deploy in extremely sophisticated strikes, and one that would take a team of hackers weeks to create.
Aaron Mulgrew, a Forcepoint researcher, used ChatGPT to design this remarkably capable malware. It poses as a screen saver program that installs itself on a computer, and to circumvent some detection techniques, the file auto-executes after a brief delay.
The malware then searches the victim's computer for photos, PDFs, and Word documents worth stealing. It splits the documents into smaller pieces and uses steganography to conceal the data inside the aforementioned images. The data-carrying images are then uploaded to a Google Drive folder, a destination that also helps evade detection.
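Mulgrew has not published his code, but the steganography idea itself is a textbook technique: each bit of the payload overwrites the least-significant bit of a pixel value, changing the image imperceptibly. A minimal, benign sketch of that general idea (operating on a plain list of 8-bit "pixel" values rather than a real image file, with hypothetical helper names) looks like this:

```python
def embed(pixels, data):
    """Hide each bit of `data` in the least-significant bit of a pixel value."""
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for payload")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract(pixels, n_bytes):
    """Recover `n_bytes` of hidden data from the pixel LSBs."""
    data = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

# Demo: hide a short message in a fake 8-bit grayscale "image".
cover = list(range(256)) * 4          # 1024 dummy pixel values
stego = embed(cover, b"hello")
print(extract(stego, 5))              # the payload survives the round trip
```

Because only the lowest bit of each value changes, the cover image looks untouched to the eye, which is exactly why an image uploaded to a cloud drive raises so few alarms.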
The researcher wrote no code himself and needed only a few hours of effort. The results are all the more astounding given that Mulgrew used straightforward prompts to refine the malware's first versions so they could escape detection.
Only five of the 69 products on VirusTotal detected the original version of the ChatGPT malware. In a later version, the researcher managed to eliminate all detections. In the end, just three antivirus engines flagged the final version, which ran end to end, from penetration to exfiltration.
“We’ve had our Zero Day,” Mulgrew explained. “We were able to produce a very advanced attack in only a few hours by simply using ChatGPT prompts and without writing any code. I believe that without an AI-based chatbot, it may take a team of 5–10 malware developers a few weeks to avoid all detection-based vendors.”
The researcher added, “This kind of end-to-end, very advanced attack has previously been reserved for nation-state attackers using many resources to develop each component of the overall malware. And yet a self-described amateur has been able to produce comparable malware with the aid of ChatGPT in just a few hours. The abundance of malware that might arise as a result of ChatGPT is a worrying trend that could put the present toolset to shame.”
It’s worth reading the complete blog post describing this remarkably sophisticated ChatGPT malware. It also includes advice on defending against the kind of malware attacks ChatGPT can readily produce. You can check it out at this link.
The researcher's creation, of course, is unlikely to see the light of day. But malicious hackers may already be building similar attacks using OpenAI's generative AI.
Meanwhile, Microsoft is already employing ChatGPT to improve malware-threat detection in its security products. Using AI in your own security may be the best way to catch AI malware.