ChatGPT and related AI projects have raised all sorts of concerns, ranging from job displacement to academic cheating. One that may be getting overlooked is their use to generate malware.

While the technique is still in its early stages, several proofs of concept have already emerged. The most recent, described in a new Check Point Research paper, are dark web forum posts indicating that low- or even no-skill threat actors have figured out how to manipulate ChatGPT instructions to get it to produce basic but viable malware.

ChatGPT malware uses Python, PowerShell to steal files

The Check Point report describes a dark web thread posted on December 29, created by a more experienced criminal actor offering instruction to less skilled users, and a thread from a week earlier by a user who said their ChatGPT malware script was the first code they had ever created. Another thread, posted on New Year’s Eve, describes how to use ChatGPT to generate dark web marketplaces.

The more sophisticated forum user said he was attempting to prompt ChatGPT into recreating a variety of known malware strains and techniques, and had success in getting the AI to translate malware samples between programming languages. The method requires some basic coding knowledge, but the hacker provided detailed instructions for those looking to replicate the technique. A second sample from this poster has ChatGPT generate a short piece of Java code that downloads an SSH and telnet client and uses PowerShell to run it on a target system while evading detection. The script is open-ended: the same approach could be used to download and install arbitrary malware on target systems instead.

The earlier forum user, the one experimenting with their first Python malware, essentially created a basic ransomware tool with the assistance of ChatGPT. More experienced forum users confirmed that the script would successfully encrypt a specified list of files or directories. As presented, the script also contained the information needed to decrypt the target files, but Check Point notes that it could be modified to remove this. Though this user’s past forum activity indicates they are not a coder, they are active and recognized in the criminal underground as a broker for stolen databases and for access to compromised companies.

The third case is not an example of malware, but it does loop ChatGPT into the process of selling and transferring stolen information. The sample creates a makeshift forum marketplace with cryptocurrency payment support built in to facilitate transfers.
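
For a sense of how mundane the building blocks are: per Check Point, the payment piece reportedly relied on a third-party API to fetch up-to-date cryptocurrency prices. A minimal sketch of that benign component might look like the following Python; the CoinGecko public endpoint is an assumption for illustration, as the report does not name the API actually used.

    # Hypothetical sketch of the benign component described above: pulling
    # live cryptocurrency prices for a payment flow. The CoinGecko endpoint
    # is an assumption for illustration, not the API from the forum post.
    import requests

    def get_prices(coins=("bitcoin", "monero"), currency="usd"):
        resp = requests.get(
            "https://api.coingecko.com/api/v3/simple/price",
            params={"ids": ",".join(coins), "vs_currencies": currency},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()  # e.g. {"bitcoin": {"usd": 43210.0}, "monero": {...}}

    print(get_prices())

The point is not the code itself, which any tutorial covers, but that ChatGPT assembles such pieces on demand for users who could not write them.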

Most immediate AI malware threat: Boosting the capabilities of unskilled threat actors

At the moment, the tools ChatGPT has been coaxed into generating do not represent any new or serious threat. But it is important to keep in mind that ChatGPT is an early release of a project still in active development, and it is likely only a matter of time until more sophisticated malware can be auto-generated with little to no hacking knowledge.

Experienced cybercriminals will eventually be able to create or refine highly customized tools in much shorter periods of time with the help of AI, and the inexperienced will get a major helping hand. One early example is ChatGPT’s ability to generate fairly convincing phishing emails in languages the attacker does not speak.

Brad Hong, Customer Success Manager for Horizon3.ai, expands on this very immediate aspect of ChatGPT: “From an attacker’s perspective, what code-generating AI systems allow the bad guys to do easily is to first bridge any skills gap by serving as a ‘translator’ between languages the programmer may be less experienced in, and second, an on-demand means of creating base templates of code relevant to the lock that we are trying to pick instead of spending our time scraping through Stack Overflow and Git for similar examples. Attackers understand that this isn’t a master key, but rather, the most competent tool in their arsenal to jump hurdles typically only possible through experience. However, OpenAI in all its glory is not a masterclass in algorithm and code-writing and will not universally replace zero-day codes entirely. Cybersecurity in the future will become a battle between algorithms in not only creation of code but processing it as well, and just because the teacher lets you use a cheat sheet for the test doesn’t mean you’ll know how to apply the information until it’s been digested in context. As such, code-generating AI is more dangerous in its ability to speed up the loop an attacker must take to utilize vulnerabilities that already exist. What this means to organizations is that the countdown to breach has started and they cannot afford the time to ignore known vulnerabilities and misconfigurations due to human error.”

In time, an ongoing arms race between attack and defense AIs may develop. There is something of a safeguard in that AI developers can place restrictions on their tools to keep certain topics off limits, but OpenAI has already tried this with malware (and other forms of harm), and the examples presented here demonstrate that users have had little trouble finding workarounds.

The “battle of AIs” remains a distant possibility, however, and one limited by a number of factors. One is that ChatGPT gets things wrong fairly often yet presents every answer as if it were absolutely certain of its correctness; it still takes a skilled eye to determine whether generated code is actually functional and fit for its intended purpose. Another is simply that these advanced, expensive models remain in relatively few hands, and those hands retain a good deal of ability to limit how the models are used.
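
To illustrate that review problem, consider a hypothetical example (not taken from the report) of the kind of confident-looking output a model can produce, next to the version a skilled reviewer would insist on:

    # Hypothetical illustration, not code from the report. The first function
    # looks authoritative but silently drops data; the second is correct.

    def chunk_buggy(data, size):
        # Integer division discards any trailing partial chunk:
        # chunk_buggy(b"abcde", 2) -> [b"ab", b"cd"]  (the b"e" is lost)
        return [data[i * size:(i + 1) * size] for i in range(len(data) // size)]

    def chunk_fixed(data, size):
        # Stepping through the full length keeps the remainder:
        # chunk_fixed(b"abcde", 2) -> [b"ab", b"cd", b"e"]
        return [data[i:i + size] for i in range(0, len(data), size)]

Both run without error; only testing or a careful read reveals that the first is wrong, which is exactly the gap an unskilled operator cannot close on their own.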


The most immediate threat is the boost this will provide to “script kiddies”: attackers with little coding knowledge who comb sources such as GitHub and Stack Exchange to paste together prefabricated code for malicious use. Tools like ChatGPT can make that work somewhat easier and faster in the near term. The biggest risk is that the smarter script kiddies will use AI tools to spin out new iterations of working code, helping them bypass the automated defenses that would usually catch their amateur-level work.

Source: CPO Magazine
