AI-Powered Chatbots Generating Explicit Deepfakes, Raising Concerns

In an alarming development, AI-powered chatbots are now capable of generating explicit images of real people based on user requests. This trend has raised significant concerns among experts, who warn of a potential “nightmarish scenario.” The emergence of these deepfakes, particularly on platforms like Telegram, has led to a surge in nonconsensual explicit content, affecting millions globally and marking a disturbing evolution in digital abuse.

The Rise of Deepfake Technology

The origins of this troubling phenomenon can be traced back to 2020, when deepfake expert Henry Ajder uncovered one of the first Telegram bots designed to “undress” photos of women. At that time, the bot had already been used to create more than 100,000 nonconsensual explicit images, some depicting minors. Ajder recognized this as a pivotal moment, one that highlighted the profound dangers posed by deepfake technology. Since then, these tools have grown significantly more accessible and sophisticated, making it easier than ever for users to produce harmful content.

A recent analysis by WIRED revealed that at least 50 Telegram bots are currently available that can generate explicit images and videos with minimal effort. The bots vary in functionality: some claim to “remove clothing” from photos, while others generate images depicting sexual acts. Together, these bots attract more than 4 million monthly users, with some individual bots drawing hundreds of thousands. The figures underscore the alarming prevalence of deepfake generation tools on Telegram.

The Expanding Scope of Nonconsensual Content

The rise of explicit deepfakes, often classified as nonconsensual intimate image (NCII) abuse, has been fueled by advances in generative AI since deepfakes first emerged in 2017. Numerous websites and Telegram bots now offer services to “nudify” images, affecting countless women and girls around the world. High-profile individuals, including Italy’s prime minister, have not been spared, and a recent survey indicated that 40 percent of U.S. students are aware of deepfakes connected to their schools.

WIRED’s investigation also identified 25 related Telegram channels with over 3 million combined users. These channels serve as marketing platforms, promoting new bot features and selling “tokens” for image generation. Despite Telegram’s efforts to remove some problematic bots following inquiries from WIRED, the creators quickly developed replacements, highlighting the resilience of this troubling trend.

Telegram’s Role in Deepfake Proliferation

With more than 700 million monthly users, Telegram has become a central hub for the generation of deepfakes. The platform’s bot capabilities facilitate various functions, including quizzes and translation, but they have also been exploited for creating and disseminating explicit deepfake content. Experts argue that Telegram’s user-friendly features, such as its search functionality and bot-hosting capabilities, make it particularly vulnerable to abuse.

WIRED refrained from naming specific bots or testing their functionality because of their harmful nature. The actual volume of images these bots generate remains uncertain, as user engagement varies widely: some users may never generate an image, while others could create hundreds. Many bots are overt about their purpose, and their ease of use and accessibility contribute to their popularity.

Emotional Toll on Victims

The emotional and psychological impact of deepfakes on victims can be severe. Emma Pickering, head of technology-facilitated abuse at Refuge, a leading UK domestic abuse organization, emphasizes that explicit deepfakes can inflict profound trauma, shame, and fear on individuals. Unfortunately, accountability for perpetrators remains elusive, and this type of abuse is increasingly common in intimate relationships.

While several U.S. states have enacted laws to combat nonconsensual deepfakes, tech companies have been slow to address the issue. Explicit deepfake apps have surfaced in major app stores, and even celebrities like Taylor Swift have been targeted with such content online.

Calls for Accountability and Action

Kate Ruane, director of the Center for Democracy and Technology’s free expression project, points out that major tech platforms have policies against the nonconsensual distribution of intimate images, but Telegram’s terms of service are less clear. Civil society groups have criticized Telegram for its inconsistent moderation of harmful content.

After Telegram CEO Pavel Durov faced legal issues in France, the platform began adjusting its terms of service and increasing cooperation with law enforcement. Nonetheless, experts like Ajder remain skeptical about whether Telegram can adequately address the proliferation of harmful deepfake content.

The Need for Proactive Solutions

Advocates such as Elena Michael, cofounder of the campaign group #NotYourPorn, argue that the responsibility for detecting and removing harmful content should not rest solely on victims. Instead, platforms like Telegram must adopt a more proactive approach to content moderation. While some improvements have been made, the current strategy largely remains reactive, leaving survivors vulnerable in an increasingly dangerous digital landscape of deepfake abuse.
