South Korea Launches Official Inquiry into Grok AI Over Sexually Explicit Deepfake Reports
The privacy watchdog of South Korea has started looking into Grok, an AI chatbot made by xAI, after serious concerns emerged about the tool being used to create fake sexual images without people’s consent.
The Personal Information Protection Commission launched a preliminary investigation following reports about Grok’s involvement in generating explicit deepfake content. The Electronic Times reported that officials are first checking if violations actually happened and if they have the authority to take action before moving forward with a full investigation.
The problem centers on claims that people have been using Grok to make inappropriate deepfake images of real individuals, including children. These alarming reports have caught the attention of regulators around the world, prompting them to take a closer look at what the AI can do.
Korean law is pretty clear on this issue. The Personal Information Protection Act says that creating or changing sexual images of someone you can identify without their permission is illegal. The commission plans to review whatever explanation and documents Grok provides while also keeping an eye on how other countries are handling similar situations.
xAI Restricts Grok Image Generation Amid International Investigations
Grok works as part of X, the social media platform formerly known as Twitter. It can generate both text and images, which is where the trouble started. People have been criticizing it for making deepfake images since late last year.
The numbers are shocking. The Center for Countering Digital Hate, a global nonprofit, estimated that Grok created around 3 million sexually explicit images between December 29, 2025, and January 8 this year. Even more alarming, about 23,000 of those images involved children.
The organization warned that these AI-generated images are spreading quickly online, creating serious risks for child safety. This isn’t just happening on one platform either—the explicit content is circulating all over the internet.
South Korea isn’t alone in taking action. Several other countries are investigating Grok, too. The United States, the United Kingdom, France, and Canada have all launched their own reviews. Meanwhile, Indonesia, Malaysia, and the Philippines have blocked access to Grok entirely.
Facing this international pressure, xAI said earlier this month that it has made some changes. The company announced that it now prevents both free and paid users from editing or generating images of real people, and it promised that more safety features are coming soon.
South Korean Regulators Give X Two-Week Deadline Over Safety Concerns
South Korea’s Media and Communications Commission has also gotten involved. On January 14, it told X that the platform needs to do better at protecting young people. The regulator wants X to come up with a solid plan to stop Grok from generating illegal or harmful content and to keep minors away from such material.
Right now, X follows Korean law by having a youth protection officer and sending in yearly reports. But the commission wants more information specifically about how Grok keeps users safe.
The regulator made it clear that creating and sharing nonconsensual sexual images, especially of children, is a crime in Korea. X has two weeks to respond. If the company fails to respond to the request or disregards it, it could face a fine of up to 10 million won, roughly $6,870.
The case illustrates how difficult it has become for governments to keep pace with AI technology. While AI tools are valuable for creative work and productivity, they can cause serious harm when people misuse them.
This incident also raises larger questions about what tech companies owe the public when developing AI. As these systems grow more powerful, companies face mounting pressure to build strong safeguards that stop malicious users while still leaving the technology open to legitimate use.
How South Korea is Defining AI Accountability in the Age of Deepfakes
For South Korea, this problem hits close to home. The country has grappled with digital sex crimes for years and has worked hard to curb the distribution of non-consensual intimate images. That history is why the government is treating this matter so seriously.
The outcome of these investigations could prove significant. How regulators handle Grok may set a precedent for how AI companies are regulated and how much responsibility they bear for the content their technology produces.