Countless People Are Using Abusive AI ‘Nudify’ Bots on Telegram

“Millions of People Are Using Abusive AI ‘Nudify’ Bots on Telegram”

“In the darker corners of the internet, people have been sharing explicit pictures—a surprising number of them self-generated—edited to depict whatever their heart desires. Now, instead of having to go through the trouble of using an image editor, all they have to do is tap a few commands on their keyboard.” Well, as if the virtual world wasn’t complicated enough, here we are spending our precious time addressing bots that ‘nudify’ people in images. Sigh, the future is here and it’s not at all what we imagined.

We are talking about a whole new level of “game of bots” going on here, where users simply type in a command and, voilà, out comes a deepfake image. Effectively, the task of generating deepfakes has gone from the hands of ‘tech enthusiasts with way too much time’ to ‘absolutely everyone’. Isn’t it fascinating how generously we empower one another?

The research firm Sensity has been monitoring the situation and reports a steady rise in these nudifying incidents by users across the globe. Alarmingly, a grand total of 104,852 deepfake images have been traced back to Telegram. This, of course, has led to the inevitable finger-pointing and faux shock. Conversation around personal privacy and the importance of consent has picked up, and it’s about time, really.

When it comes to user information and cyber-ethics, there seems to be a recurring theme: a sense of invulnerability. Each time there is a new technological advancement, folks jump on the bandwagon with little idea of where it may lead. And then, lo and behold! Infiltration, violation, and chaos turn out to be the inevitable side effects of unchecked curiosity and neglected privacy concerns. It’s like a student who never studies but still hopes to ace the exam, isn’t it?

There’s a saying in the tech world, something to the effect of “privacy is what you make it.” But when the outrageous world of digital technology keeps pushing boundaries, one has to stop and ask, “Is anyone actually safeguarding privacy?” Or, “Should we just accept that privacy in the digital age is now a fairy tale?”

Listen, this isn’t about being a wet blanket or a technology skeptic. There’s no denying that AI offers immense potential for progress; it is a powerful tool that can drive efficiency and innovation. Yet one can’t help but wonder: will the world continue to use this valuable tool in such mind-bogglingly irresponsible ways? The ‘nudify’ bots present a whole new dimension of misuse, one that most people would rather not wrap their minds around.

Privacy, consent, and respect seem to be the unpopular kids in the era of digitization. Perhaps the tech industry should focus on changing this narrative. Maybe then the future won’t be so cringe-worthy after all. Go figure.

Read the original article here: https://www.wired.com/story/ai-deepfake-nudify-bots-telegram/