Arrested for Creating ChatGPT Ransomware

Ransomware Made by ChatGPT – Four Chinese Individuals Arrested

Four people have been arrested in China over ransomware developed with the help of ChatGPT, a service that is unavailable in the country, according to the South China Morning Post.

A company in Hangzhou, in the eastern province of Zhejiang, was reportedly the first known victim of ransomware built with the aid of AI. The attackers demanded $20,000 in cryptocurrency in exchange for unlocking the company's systems.

Suspects Arrested for ChatGPT-made Ransomware Development

Police said that two suspects had been apprehended in Beijing, the Chinese capital, and two others in the northern autonomous region of Inner Mongolia. The suspects admitted to writing versions of the ransomware, optimising the programs with ChatGPT, conducting vulnerability scans, infiltrating systems, planting the ransomware, and extorting victims, the state-run Xinhua News Agency reported in late November. Notably, the use of ChatGPT itself was not among the allegations levelled at the suspects in the police report.

ChatGPT Banned in China

ChatGPT is banned in China, and this case signals that the country appears determined to extend that ban to any other apps it deems to be based on the AI program, as it builds its own locally developed large language models. According to a statement from Baidu reported this week by Cybernews, Ernie Bot, the company's ChatGPT-like app, has been used by more than 100 million users.

Concerns over Misuse of Artificial Intelligence Technology – ChatGPT

Photo by Levart_Photographer on Unsplash

So far, AI has not played much of a role in transforming cybercrime. According to researchers at Sophos, even illicit clones of ChatGPT, built for the express purpose of developing malicious apps, have seen little actual use among cybercriminals. The Sophos article goes on to say: 'While everyone is concerned about illicit use cases for ChatGPT and others like it, conversations on cybercrime forums so far, explored by Sophos X-Ops, suggest many bad actors may still be skeptical – and are plagued by the same problems as the rest of us.'

Nonetheless, generative artificial intelligence has already been used to deceive. According to a report published by Hiya, scammers pretend to be family members or government officials to coax money from their victims – and it's not just people impersonating startled grandmothers or officials demanding a fee. Cybersecurity firms such as Check Point and news outlets such as Forbes and Business Insider have reported that cybercriminals are using AI tools such as ChatGPT en masse for malicious purposes. This year, the US Federal Trade Commission warned that AI-cloned voices, derived from as little as a few seconds of audio, were being deployed in a new wave of impostor scams.

Many individuals and organisations have also begun drawing attention to the mass intellectual property theft built into the heart of these models. Most recently, the New York Times filed a lawsuit against OpenAI (and its backer, Microsoft, the world's third most valuable company) for training models on millions of its articles without consent, as reported by Sky News on X (formerly Twitter).

For more cybersecurity news and updates, follow us on Cybersecurity – The SOC Labs.


Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The SOC Labs assumes no liability for the accuracy or consequences of using this information.

