A Deepfake Porn Bot Is Being Used to Abuse Thousands of Women

“This is now something that a community has embedded into a messaging platform app, and therefore they have pushed forward the usability and the ease to access this type of technology,” Patrini says. The Telegram bot runs on external servers, Sensity says, meaning people using it need no technical skill or powerful hardware of their own, which lowers the barrier to entry. “In a way, it is literally deepfakes as a service.”

Telegram did not answer questions about the bot and the abusive images it produces. Sensity’s report also says Telegram did not respond when the firm reported the bot and its channels several months ago. Telegram’s terms of service are brief: one of its three bullet points says that people should not “post illegal pornographic content on publicly viewable Telegram channels, bots, etc.”

In an expanded set of frequently asked questions, Telegram says it does process requests to take down “illegal public content.” It adds that Telegram chats and group chats are private, and the company doesn’t process requests related to them; however, channels and bots are publicly available. A section on takedowns says “we can take down porn bots.”

Before the publication of this article, all of the messages in the Telegram channel that pushed out daily galleries of bot-generated deepfake images were removed. It is not clear who removed them.

Unusually for this sort of abuse, there is some data on who has used the bot and what their intentions are. The Telegram channels linked to the bot include a detailed “privacy policy,” and people using the service have answered self-selecting polls about their behavior.

An anonymous poll posted to the Telegram channel in July 2019 was answered by more than 7,200 people, 70 percent of whom said they were from “Russia, Ukraine, Belarus, Kazakhstan, and the entire former USSR.” Every other region of the world accounted for less than 6 percent of respondents each. People using the bot also self-reported finding it through the Russian social network VK. Sensity’s report says it has found a large amount of deepfake content on that network, and the bot also has a dedicated page on the site. A spokesperson for VK says it “doesn’t tolerate such behavior on the platform” and has “permanently blocked this community.”

A separate July 2019 poll, answered by 3,300 people, revealed respondents’ motivations for using the bot. It asked, “Who are you interested to undress in the first place?” The majority of respondents, 63 percent, selected the option “Familiar girls, whom I know in real life.” Celebrities and “stars” made up the second-most selected category, at 16 percent, while “models and beauties from Instagram” came third, at 8 percent.

Experts fear these types of images will be used to humiliate and blackmail women. But as deepfake technology has rapidly spread, the law has failed to keep up, focusing mostly on the technology’s potential political impact.

Since deepfakes first emerged at the end of 2017, they have overwhelmingly been used to abuse women. Growth over the past year has been exponential as the technology required to make them has become cheaper and easier to use. In July 2019 there were 14,678 deepfake videos online, previous Sensity research found; by June of this year that number had climbed to 49,081. Almost all of these videos were pornographic and targeted women.

In August, WIRED reported on how deepfake porn videos had gone mainstream, with more than 1,000 abusive videos being uploaded to the world’s biggest porn websites every month. One 30-second video using actress Emma Watson’s face, hosted on XVideos and Xnxx, which are owned by the same company, has been watched more than 30 million times. The company did not respond to requests for comment at the time, while xHamster scrubbed dozens of deepfake videos with millions of views from its site after WIRED highlighted them.