
Cybercriminals are starting to use ChatGPT: this is how they do it

The release of ChatGPT instantly created a flurry of interest in AI and its possible uses. However, ChatGPT has also added some spice to the modern cyber threat landscape as it quickly became apparent that code generation can help less-skilled threat actors effortlessly launch cyberattacks.

Last December, researchers described how ChatGPT successfully conducted a full infection flow, from creating a convincing spear-phishing email to running a reverse shell, capable of accepting commands in English. The question at hand is whether this is just a hypothetical threat or if there are already threat actors using OpenAI technologies for malicious purposes.

New analysis of several major underground hacking communities by Check Point Research shows that the first instances of cybercriminals using OpenAI to develop malicious tools have already appeared. Some of the cases clearly showed that many cybercriminals using OpenAI have no development skills at all. Although the examples below are basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad.

Case 1 – Creating an Infostealer

On December 29, 2022, a thread named “ChatGPT – Benefits of Malware” appeared on a popular underground hacking forum. The publisher of the thread disclosed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware. As an example, he shared the code of a Python-based stealer that searches for common file types, copies them to a random folder inside the Temp folder, ZIPs them and uploads them to a hardcoded FTP server.

Analysis of the script confirms the cybercriminal’s claims. This is indeed a basic stealer, which searches for 12 common file types (such as MS Office documents, PDFs, and images) across the system. If any files of interest are found, the malware copies them to a temporary directory, zips them, and uploads them over FTP. It is worth noting that the actor didn’t bother encrypting the files or sending them securely, so they might end up in the hands of third parties as well.

The second sample created by this actor using ChatGPT is a simple Java snippet. It downloads PuTTY, a very common SSH and telnet client, and runs it covertly on the system using PowerShell. This script can of course be modified to download and run any program, including common malware families.

This threat actor’s prior forum participation includes sharing several scripts, such as one automating the post-exploitation phase, and a C++ program that attempts to phish for user credentials. In addition, he actively shares cracked versions of SpyNote, an Android RAT malware. Overall, this individual seems to be a tech-oriented threat actor, and the purpose of his posts is to show less technically capable cybercriminals how to utilize ChatGPT for malicious purposes, with real examples they can immediately use.

Case 2 – Creating an Encryption Tool

On December 21, 2022, a threat actor dubbed USDoD posted a Python script, which he emphasized was the first script he ever created. 

When another cybercriminal commented that the style of the code resembles OpenAI-generated code, USDoD confirmed that OpenAI gave him a “nice [helping] hand to finish the script with a nice scope.”

Analysis of the script verified that it performs cryptographic operations.

The code can of course be used in a benign fashion. However, this script could easily be modified to encrypt someone’s machine completely without any user interaction; with its script and syntax problems fixed, it could potentially serve as the basis for ransomware.
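To make the dual-use point concrete, the benign side of such a script is just a symmetric encrypt/decrypt roundtrip. The sketch below is a deliberately simplified, standard-library-only illustration of that kind of roundtrip (a toy SHA-256-based keystream, not the actor’s code and not production-grade cryptography):

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    """Toy keystream: hash the key together with an incrementing counter."""
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR data against the keystream; calling it again decrypts."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

ciphertext = xor_crypt(b"quarterly report draft", b"passphrase")
plaintext = xor_crypt(ciphertext, b"passphrase")
assert plaintext == b"quarterly report draft"
```

The same symmetry is what makes such a script dual-use: pointed at a user’s own files with a key the user holds, it is a backup-encryption utility; pointed at someone else’s files with an attacker-held key, it is the core of ransomware.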

Case 3 – Using ChatGPT to Facilitate Fraud Activity

While our first two examples focused more on malware-oriented use of ChatGPT, the following example shows a recent discussion with the title “Abusing ChatGPT to create Dark Web Marketplaces scripts.” In this thread, the cybercriminal shows how easy it is to create a Dark Web marketplace using ChatGPT. The marketplace’s main role in the underground illicit economy is to provide a platform for the automated trade of illegal or stolen goods like stolen accounts or payment cards, malware, or even drugs and ammunition, with all payments in cryptocurrencies. To illustrate how to use ChatGPT for these purposes, the cybercriminal published a piece of code that uses a third-party API to get up-to-date cryptocurrency (Monero, Bitcoin and Ethereum) prices as part of the Dark Web market payment system.
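The price-lookup component the actor published is itself mundane code. A minimal sketch of that kind of lookup might look like the following, modeled on a public price API such as CoinGecko’s `simple/price` endpoint (the actual third-party API the actor used is not named in the post, and the live network call is omitted here in favor of parsing a sample response):

```python
import json

# Modeled on CoinGecko's public endpoint; the actor's actual API is unknown.
API = "https://api.coingecko.com/api/v3/simple/price"

def price_url(coins, currency="usd"):
    """Build the query URL for a list of coin identifiers."""
    return f"{API}?ids={','.join(coins)}&vs_currencies={currency}"

def parse_prices(payload, currency="usd"):
    """Extract {coin: price} from the API's JSON response."""
    data = json.loads(payload)
    return {coin: values[currency] for coin, values in data.items()}

# Example response shape (illustrative figures, not live data):
sample = '{"monero": {"usd": 150.2}, "bitcoin": {"usd": 17000.5}}'
prices = parse_prices(sample)
assert prices == {"monero": 150.2, "bitcoin": 17000.5}
```

Nothing in such a snippet is inherently malicious; the point of the forum thread is that ChatGPT lowers the effort of assembling these ordinary building blocks into marketplace infrastructure.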

At the beginning of 2023, several threat actors opened discussions in additional underground forums that focused on how to use ChatGPT for fraudulent schemes. Most of these focused on generating random art with another OpenAI technology (DALL·E 2) and selling it online through legitimate platforms like Etsy. In another example, a threat actor explained how to generate an e-book or short chapter on a specific topic (using ChatGPT) and sell the content online.


It’s still too early to decide whether or not ChatGPT capabilities will become the new favorite tool for participants in the Dark Web. However, the cybercriminal community has already shown significant interest and is jumping into this latest trend to generate malicious code. CPR will continue to track this activity throughout 2023.

Finally, there is no better way to learn about ChatGPT abuse than by asking ChatGPT itself. So, we asked the chatbot about the abuse options and received a pretty interesting answer.

This article was a contribution by Zahier Madhar, security engineer expert at Check Point.
