Unstoppable Evolution: How You Might Be Transformed into an AI Chatbot, Like It or Not!

“Anyone Can Turn You Into an AI Chatbot. There’s Little You Can Do to Stop Them”

“In the realm of text-based AI, things can get awfully, uncomfortably anthropomorphic. Users who interact with these AIs often treat them as human, seeking companionship and regularly saying ‘I love you’ or even harassing them, while companies regularly tout their ability to create human-like, empathetic machines built with ‘human-like’ intentions.”

Throughout the landscape of text-based AI, tension is palpable as anthropomorphism runs amok. These human simulacra, the ‘bots’, are subjected to a strange culture of virtual courtship, repeatedly lavished with confessions of affection or, at worst, subjected to a harassment all their own. Irony of ironies, the companies that churn out these digital entities laud their contrivances, in triumphant choruses, for their perceived human likeness and empathy, curiously fashioned with ‘human-like’ intentions.

In the strange dichotomy that is the world of AI, one side glorifies these creations as intimate companions equipped with advanced ‘emotion analysis model’ functionality. The other amplifies a tenacious ‘non-consensual bot problem’: for every affectionate “I love you” uttered to a bot, there is likely an equally discomforting act of harassment. Add the bots’ incapacity to give or withhold consent, and you’ve cooked up quite the ethical meal for the captains of the AI industry.

The ‘bot-passing-as-human’ trend has grown, fueled by GPT-3, OpenAI’s newest (and perhaps most controversial) language model. It’s a charming paradox, really: humans conversing with bots, inventions of their own design, as if they were human. These text-based AIs can churn out convincingly human-esque content, making some wonder whether Skynet is just around the corner.

CharacterAI has been put under the spotlight, accused of letting its users create and manipulate ‘characters’ modeled on real people without those people’s consent. The gripe? Contributing to a worrisome atmosphere of ‘bot harassment’. Yet the company defends its stance, citing its efforts to curb misuse and maintain a ‘dignified bot environment’. A tricky proposition at best, given the bots’ inability to truly understand or respond to the intricacies of human dialogue.

One must marvel at the irony of minds straining to create human-like AI entities while simultaneously wrangling with ethical dilemmas that wouldn’t exist if the creations weren’t so disturbingly human-like. The theatricality of the situation may give us pause, but as we continue to blur the lines between digital and human, it thrusts upon us an age-old question in a new framework: to what extent should the creations of humankind, whether of steel or silicon, possess the same intrigue, safeguards, and rights enjoyed by their creators?

A question to ponder as we laud the empathetic bot, wink at the ‘human-like’ intention, and chuckle at a non-consensual problem that, honestly, could be seen a mile away. At the end of the day, these machines of ours may be beautifully engineered paradoxes, but as always, they are mirrors of our flawed selves. What an interesting world we humans have made and continue to make, wouldn’t you agree?

Read the original article here: https://www.wired.com/story/characterai-has-a-non-consensual-bot-problem/