Chinese Official’s ChatGPT Use Exposes Worldwide Harassment Ring

A new report from OpenAI describes a sprawling Chinese influence campaign that targeted critics of Beijing living overseas. Investigators say the operation came to light after a Chinese law enforcement official used ChatGPT as a personal logbook to record details of the effort.

According to the report, the official treated the AI tool like a diary, entering notes about activities aimed at intimidating Chinese dissidents abroad. In one case, operators posed as United States immigration officers and contacted a dissident based in America. The message warned that the individual’s public comments had violated US law, an apparent attempt to create fear and pressure.

The same user described another effort involving forged legal documents. The operators allegedly created fake paperwork that appeared to come from a US county court and used it to request the removal of a dissident’s social media account. Investigators later linked parts of these claims to real online activity, suggesting the campaign moved beyond planning into execution.

OpenAI researchers say the network relied on hundreds of participants and thousands of fake online accounts spread across social media platforms and websites. Much of the propaganda content came from tools other than ChatGPT, while the AI system served mainly as a planning and documentation space. After uncovering the activity, OpenAI banned the account involved.

ChatGPT and the New Era of Repression: From Fabricated Obituaries to Global Disinformation

According to Ben Nimmo, a principal investigator at OpenAI, “This is a new wave of cross-border repression that is connected to the Chinese Communist Party. This is not random harassment on the internet. This is a targeted campaign to overwhelm critics with threats of legal action, disinformation, and messaging.” Nimmo made the comments ahead of the report’s release.

Investigators also found evidence of a false death narrative targeting one dissident. The ChatGPT user documented a plan to fabricate an obituary along with images of a gravestone. Similar rumors circulated online in 2023 and were reported by Voice of America’s Chinese-language service.

In another example, the user asked the AI system to help design a campaign against Japanese political figure Sanae Takaichi. The plan involved stirring anger over US tariffs on Japanese goods. ChatGPT declined to assist with the request, OpenAI said. Nevertheless, researchers later found that hashtags criticizing Takaichi were used on a popular forum for graphic artists, along with complaints related to trade policy.


The report arrives amid intensifying rivalry between Washington and Beijing over artificial intelligence. Governments increasingly view AI not only as an economic asset but as a strategic one, shaping information control and national security.

The issue extends beyond private companies. The US Department of Defense is currently locked in a dispute with AI firm Anthropic over safeguards built into its systems. Defense Secretary Pete Hegseth has pressed Anthropic chief Dario Amodei to loosen certain restrictions or risk losing a major Pentagon contract, highlighting tensions over how AI should operate in military settings.

How Small Habits Expose State-Led Operations

Experts say the OpenAI findings show how governments may adapt AI tools to support existing surveillance and influence tactics. Michael Horowitz, a former Pentagon official now at the University of Pennsylvania, told CNN that the report illustrates how China integrates AI into daily information operations, not only high-level research programs.

“This competition is not limited to cutting-edge technology,” Horowitz said. “It also shapes how states manage information and enforce control.”

CNN has reached out to the Chinese Embassy in Washington, DC, for comment and had not received a response at the time of publication.

The story highlights a larger challenge facing democratic nations and tech companies. The same AI tools that support research, communication, and creativity can also be enlisted by influence campaigns. In this case, it was a simple habit, using a chatbot as a logbook, that exposed the campaign’s inner workings.
