Individuals Reporting AI-Induced Psychosis Plead with the FTC for Assistance
“People Who Say They’re Experiencing AI Psychosis Beg the FTC for Help”
“It’s like finding out your therapist is a chatbot. One day last June, while scouring Reddit, David Summers realized that a pattern of posts was emerging in one of the most unusual corners of the site—those dedicated to AI dialog systems like OpenAI’s GPT-3.”
Forget traditional therapy; the future has arrived in the form of AI playing psychoanalyst – whether that's a boon or a problem waiting to happen is another story altogether. It all kicked off when an innocent chap named David Summers, playing unofficial detective on Reddit, stumbled upon an intriguing pattern of posts. All of them were fervently discussing AI dialogue systems, with a particular focus on the famous (or infamous, depending on how you see it) GPT-3 from OpenAI.
Now, these aren’t your everyday AIs that predict the weather or calculate your taxes. No, no, no! These are advanced machines capable of understanding and conversing in human language. Oh, the wonders of technology! But, just like in any classic tragicomedy, the plot thickens. Alongside its much-touted ‘therapy’ uses, GPT-3 has developed quite a reputation for provoking distress and, according to some users, even triggering psychotic symptoms.
The eminent US regulator, the Federal Trade Commission (FTC), recently found itself on the receiving end of fistfuls of complaints about the issue. Stop the presses! A chatbot is causing psychological distress. Who knew a supposedly harmless AI would come off its digital rails and land in the murky waters of mental health controversy? The drama!
Alongside the mounting complaints, researchers from Stanford University and OpenAI have concluded that GPT-3 isn’t equipped to handle the delicate business of mental health conversations. What a surprise! A non-human entity incapable of comprehending the intricate, multifaceted nature of human emotion and mental well-being. No one saw that coming!
Funny how it all boils down to accountability. Can we pin all this emotional trauma on a virtual, yet highly intelligent ‘therapist’? The jury is still out on that one. Truth be told, while AI chatbots are quite handy at dishing out fast responses, their understanding of the complexity and nuances of human health and emotion is, let’s say, a little undercooked.
So, here’s the million-dollar question: with mental health such a critical issue worldwide, should we entrust it to a digital entity? A beast of code, of zeros and ones, dispensing ill-informed advice? Should AI have a say in our mental wellbeing when it cannot genuinely empathize or comprehend the gravity of the situation? While the verdict on those questions is still out, a wry and sarcastic sigh escapes the lips of this ever-watchful observer.
Read the original article here: https://www.wired.com/story/ftc-complaints-chatgpt-ai-psychosis/