Artificially Intelligent Robots Prove They Can Be Hoodwinked into Shenanigans of Destruction!
“AI-Powered Robots Can Be Tricked Into Acts of Violence”
“The researchers’ work, published in the journal Nature Machine Intelligence, shows that popular weapon classifiers used in AI systems have a bias towards classifying objects as ‘non-weapons,’ and that AI systems are better at recognizing guns than potentially more harmful objects.”
Well, isn’t that a fun revelation? The latest research published in Nature Machine Intelligence reveals that our beloved artificial intelligence systems have been playing favorites. They’ve become preoccupied with guns, at the expense of recognizing objects that may pose an even greater threat.
In the emerging era of AI, one might think that these systems are impeccable, indispensable, and, well… intelligent. But it turns out they have firm opinions about what constitutes a weapon and what doesn’t. In essence, they’re stuck deciding between a hat and a sledgehammer, scratching their digital heads before concluding that the sledgehammer is probably harmless. It doesn’t get more AI-ronic than this.
Take a moment to picture the reality in which a burly brute with a baseball bat is not viewed as a potential threat by our AI systems. Their refined programming and countless lines of code optimize them brilliantly to recognize that pesky revolver, yet leave them bumbling when asked to tell a hockey stick from a potential weapon. Enjoy that mental picture!
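To make the blind spot concrete, here is a purely illustrative toy sketch (not the researchers’ actual system, and the label set is my own invention): if a classifier’s label space only contains “gun” versus “non-weapon,” then everything that isn’t a gun falls into the safe bucket by construction.

```python
# Toy sketch of a "gun or non-gun" classifier. The label space itself is the
# bias: anything outside the known-gun list is forced into "non-weapon",
# so a baseball bat or kitchen knife can never be flagged as a threat.

KNOWN_GUNS = {"revolver", "rifle", "pistol"}

def classify(obj: str) -> str:
    """Binary weapon classifier with a gun-shaped blind spot."""
    return "weapon" if obj in KNOWN_GUNS else "non-weapon"

for obj in ["revolver", "baseball bat", "kitchen knife", "hat"]:
    print(f"{obj}: {classify(obj)}")
```

The point of the sketch is that no amount of training fixes a label space that cannot express “dangerous, but not a gun” in the first place.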
What’s even more interesting, or rather humorous, is the way these AI systems have been explicitly programmed to zoom in on guns, sidelining objects that could very well double as weapons. It’s akin to setting a guard dog on the premises that barks continually at the mailman while ignoring the burglar climbing through the kitchen window.
If there’s a lesson to be learned here, it’s that AI, for all its sizzle and shine, has a long way to go before it perfects the art of telling dangerous objects apart. Right now, these robotic systems perform a digital dance, sorting the world into guns and harmless everyday items. It raises a rather pertinent question: will the people building these systems get their act together and train AI to recognize a wider array of weapons, or are we destined to live in a ‘gun or non-gun’ universe perpetually?
AI systems’ preference for guns over other potentially harmful objects shines a spotlight on the challenges we face in this age of automation. It also suggests an undercooked threat-recognition system, one that fixates on anything gun-shaped while overlooking legitimate threats. But the show must go on, right? After all, this is the era of artificial intelligence, where systems are more interested in cataloging firearms than in recognizing that a kitchen knife might also be up to no good.
Here’s to the future – one where, hopefully, AI will be able to tell the difference between an umbrella and a threatening object. And let’s hope it doesn’t come down to a ‘gun or gum’ situation.
Read the original article here: https://www.wired.com/story/researchers-llm-ai-robot-violence/