Police in China have cracked down on two different gangs that used ChatGPT to create fake videos. One of the busts took place back in May, while the other happened in June. ChatGPT is an AI chatbot developed by Microsoft-backed OpenAI that generates responses based on user prompts and questions. According to AP News, ChatGPT has been used to create deep fakes, which are fabricated digital images, videos, or other media that appear to be real. Furthermore, Business Insider reveals that deep fakes created with the help of tools like ChatGPT are a particular concern because they are becoming increasingly sophisticated and difficult to detect.
ChatGPT is illegal in China
Chinese regulators have reportedly clamped down on access to ChatGPT, even as Chinese tech firms and schools push forward with developing domestic AI bots. According to Forbes, China's “great firewall” blocks ChatGPT, as the government says it is against its censorship laws. Still, many had been accessing it via VPNs, and some third-party developers had produced programs that gave partial access to the service. Searches for ChatGPT on Chinese platforms no longer return results, and workaround programs have been disabled or replaced with a notice saying they had been suspended for “violating relevant laws and regulations”. The ban on ChatGPT in China is not a surprise, as the Asian nation is quite strict with its laws. The Chinese government has been cracking down on political propaganda and misinformation. According to reports, it views ChatGPT's ability to generate text as a threat to its control over the flow of information.
A report from CNN reveals that China has one of the most comprehensive and sophisticated internet censorship systems in the world. The Chinese government blocks website content and monitors internet access, and major internet platforms in China have established elaborate self-censorship mechanisms as well as implemented a real-name system. The government defends its censorship by claiming the right to govern the internet according to its own rules inside its borders. Its Great Firewall is the world's most sophisticated internet censorship apparatus, and the content targeted for blocking includes major news websites, social media platforms, and messaging apps.
Police busted the first gang in May
A police report from the northwestern Chinese province of Gansu mentions a suspect who is part of a gang. Identified only by his surname, Hong, he used ChatGPT to create a fake news article about a train crash that supposedly led to the deaths of nine construction workers in Gansu. The story went viral, and even legitimate news sources were deceived by it. According to Bloomberg, by the time the article was removed, it already had 15,000 views. The police statement read,
“Hong used modern technology to fabricate false information, spreading it on the internet, which was widely disseminated.”
Back in May, Shaoxing police found that the same gang had illegally purchased a batch of video accounts and used them to create fake videos with ChatGPT. The police arrested the man who used ChatGPT to create fake news about a train crash. This is one of the first enforcement actions under a recently enacted Chinese law regulating AI-generated “deep fakes”. According to the police, the gang's works were seemingly realistic but fabricated digital images, videos, or other media.
The local cybersecurity police unit was alerted to the article about the train crash, published on April 25, and launched a probe into the matter. Hong admitted he used ChatGPT to create fake news about the train crash before posting it online. His arrest is the first detainment since China's new regulation on deep fake technology took effect in January. The law aims to prevent the misuse of technology that can alter face and voice data. And while ChatGPT is banned in China, there are workarounds, such as virtual private networks.
Chinese police busted another gang in June
The second major bust happened in June 2023. On June 2, the Shangyu police found during an online probe that an app user named “Shangyi Explanation” had released a video about a fire in the Shangyu Industrial Park. The number of views on the video rose rapidly within a short period. The police double-checked the video and found that it was fake news. They also spotted a tech group in another province linked to the crime. On June 5, the police travelled to other provinces and arrested three suspects.
After investigation, the police found that the group had been illegally acquiring video accounts from the internet since May. It spliced videos and used ChatGPT to produce fake videos, which it published online to gain traffic, and that traffic generated profits for the group. So far, the gang has illegally purchased more than 1,500 video accounts and released more than 3,000 fake videos. The arrested suspects have since confessed to the crime, but the police are still probing for more suspects.
These gangs have no relationship with OpenAI or ChatGPT; they only used the tool to create deep fakes and generate traffic for financial gain. Also, according to the Shangyu police, the gang busted in June has no links with the one busted in May. Thus, the police will have to do more to prevent more of these gangs from springing up.
Both cases of ChatGPT abuse are less than three months old, and the Chinese police are still investigating before any prosecution is made.
ChatGPT and its application for creating fake videos
ChatGPT is an AI chatbot developed by OpenAI that can generate highly plausible text, which can then be combined with other AI tools to produce convincing fake images, audio, and video. A report from NPR noted that it has been used to help create fake videos, among other things, and that it takes only a few dollars and a few minutes to create a deep fake video with the help of ChatGPT. The rapid rollout of AI to the public has raised concerns about supercharged propaganda and influence campaigns by bad actors. OpenAI has signalled a commitment to preventing AI plagiarism and other nefarious applications. The company has been working on a way to “watermark” GPT-generated text with an “unnoticeable secret signal” to identify its source.
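The "secret signal" idea can be illustrated with a toy sketch. This is not OpenAI's actual method, which has not been published; it is a minimal version of one publicly described approach, where the generator secretly prefers a "green list" of words derived from the previous word, and a detector checks how often that pattern appears. The vocabulary, helper names, and stand-in "language model" here are all invented for the example:

```python
# Toy statistical text watermark: a generator that secretly prefers a
# "green list" of words, and a detector that measures green-list frequency.
import hashlib
import random

VOCAB = ("alpha bravo charlie delta echo foxtrot golf hotel india juliet "
         "kilo lima mike november oscar papa quebec romeo sierra tango").split()

def green_list(prev_word):
    """Secretly partition the vocabulary, seeded by the previous word."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    shuffled = VOCAB[:]
    random.Random(seed).shuffle(shuffled)
    return set(shuffled[: len(VOCAB) // 2])   # half the words are "green"

def generate(n_words, watermark, seed=0):
    """A stand-in 'language model': random words, optionally watermarked."""
    rng = random.Random(seed)
    words = ["alpha"]
    for _ in range(n_words):
        if watermark:
            # A watermarking generator only picks from the green list.
            words.append(rng.choice(sorted(green_list(words[-1]))))
        else:
            words.append(rng.choice(VOCAB))
    return words

def green_fraction(words):
    """Detector: fraction of words in the previous word's green list."""
    hits = sum(1 for a, b in zip(words, words[1:]) if b in green_list(a))
    return hits / (len(words) - 1)

marked = generate(300, watermark=True)
plain = generate(300, watermark=False)
```

A reader of the text sees nothing unusual, but `green_fraction` scores watermarked output near 1.0 while ordinary text scores near 0.5 by chance, which is what makes the signal statistically detectable without being visible.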
However, the risks inherent in the technology and the speed of its uptake demonstrate why it is important to be cautious. For example, The Guardian reported that a researcher recently used ChatGPT to invent a fake Guardian article that seemed believable even to the journalist it was falsely attributed to. Tech experts claim that making a deep fake video that speaks AI-written text is as easy as generating a first-person script with ChatGPT and pasting it into any virtual environment. CNN claims that the potential for misuse of this technology is significant and that it is important to be aware of the associated risks.
What is deep fake technology?
For those who may not know, deep fake technology is a type of AI used to create convincing image, audio, and video hoaxes. According to The Guardian, the term “deep fake” is a portmanteau of “deep learning” and “fake”. Deep fakes often transform existing source content, where one person is swapped for another. They can also create entirely original content in which someone is represented doing or saying something they never did or said. Virginia.com reports that the tech uses two algorithms: a generator and a discriminator. With these, it can create and refine fake content. The generator produces the initial fake digital content based on a training data set and the desired output, while the discriminator analyzes how realistic or fake that initial version is.
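The generator/discriminator loop described above can be sketched on toy one-dimensional data. This is a minimal, hypothetical illustration using NumPy with single-parameter models and an invented N(4, 1) "real" distribution; real deep fake systems use large neural networks over images and audio, but the adversarial pressure is the same:

```python
# Toy adversarial loop: a discriminator learns to tell "real" numbers
# (drawn near 4) from "fake" ones (drawn near 0), then a generator learns
# to shift its output until the discriminator is fooled.
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1). The untrained generator emits noise near 0.
real = rng.normal(4.0, 1.0, size=500)
fake = rng.normal(0.0, 1.0, size=500)

# --- Discriminator: logistic regression, real -> 1, fake -> 0 ---
d_w, d_b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    # Gradient ascent on the log-likelihood of the correct labels.
    d_w += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    d_b += lr * (np.mean(1 - p_real) - np.mean(p_fake))

def discriminate(x):
    """Probability the discriminator assigns to x being 'real'."""
    return sigmoid(d_w * x + d_b)

# --- Generator: shift its output to fool the (now frozen) discriminator ---
g_shift = 0.0                        # generator output = noise + g_shift
for _ in range(200):
    x_fake = rng.normal(0.0, 1.0, size=64) + g_shift
    p = discriminate(x_fake)
    # Ascend log D(fake): move samples toward where D says "real".
    g_shift += lr * np.mean((1 - p) * d_w)
```

After training, the discriminator rates samples near 4 as real and samples near 0 as fake, and the generator's shift has moved its output into the "real" region. Iterating this tug-of-war at scale is what lets deep fake generators produce increasingly realistic content.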
Business Insider reports that deep fake technology has garnered widespread attention, mostly because of its potential to create child sexual abuse material, celebrity pornographic videos, revenge porn, fake news, hoaxes, and other frauds. Though deep fakes can be difficult to spot, there are ways to detect them. One is to look for inconsistencies in the video or audio. Others include checking the source of the content and using special software tools built to spot deep fakes.
Benefits of ChatGPT
This post is about the use of ChatGPT for negative purposes. However, it is important to note that ChatGPT also has several benefits. Some of them are:
- ChatGPT makes learning easier and faster by providing fairly accurate responses to queries
- This tool can write an entire essay or a piece of code, depending on the need
- ChatGPT can perform in seconds what would take people several minutes to handle
- ChatGPT can automate repetitive tasks, such as composing emails or writing code, which can save time and improve efficiency
- ChatGPT can mimic human-like conversations, which can improve customer engagement and satisfaction
- ChatGPT can understand the overall context of a conversation and generate specific responses that are relevant to the topic
- ChatGPT provides specific responses to user queries, which can be helpful for finding information quickly
- ChatGPT allows users to make follow-up corrections until they are satisfied with the response
The major downsides of ChatGPT are that it can be misused for nefarious activities and that its responses are not always 100% correct.
As ChatGPT's popularity continues to grow, we will likely see more cases of fraud and misuse. There is always a danger of AI-created deep fakes; thus, there is a need to regulate the use of AI tech. This situation is becoming common in China, as the police recently busted two unrelated gangs that used ChatGPT to create fake videos for monetary gain. The crackdown on the gangs that used ChatGPT to create fake videos is a step in the right direction towards ensuring that AI tech is used responsibly and ethically. Though ChatGPT can be misused, it has a lot of benefits.