OpenAI Enthusiastically Unveils Its Latest AI Safety Research; Critics Call It a Promising Start but Say There’s More Ground to Cover

So reads the headline of a recent WIRED piece: “OpenAI Touts New AI Safety Research. Critics Say It’s a Good Step, but Not Enough.”

The article opens: “It’s not a surprise, then, that OpenAI, the digital brain trust founded in part by Elon Musk, announced this week that it would step back from publishing all of its research—an ironic decision for an organization that was established on principles of openness and transparency.”

The irony is thick enough to cut with a knife. OpenAI, the high-profile artificial intelligence lab backed by luminaries such as Elon Musk, has very publicly announced that it will step back from openly sharing all of its research. Yes, the very organization founded on an ethos of transparency and openness. Cue the Inception ‘bwaaah’ sound effect.

This backtrack makes you feel as if you’re watching a live-action adaptation of George Orwell’s 1984. The organization heralded as a beacon of pure, open access has decided to flip the script in a rather dramatic turn of events. But why the change? Why, indeed.

OpenAI says the move is in the interest of safety and security: a plausible explanation, if one subscribes to the ‘for your protection’ logic. Apparently, the organization believes it is now wielding knowledge equivalent to an intellectual nuclear weapon.

The decision marks something of an existential crisis for OpenAI. The lab is grappling with the fact that the same research it churns out, meant to bring sunshine and rainbows into the world, could just as easily bring doom and gloom. It’s like a puppy realizing its tail can be both a fun toy and a maddening distraction.

OpenAI now subscribes to the ‘too hot to handle’ theory, given the negative consequences AI technology could bring. Is it possible they’ve been watching too many Terminator reruns on late-night TV?

The switch to cherry-picking what to publish and what to withhold isn’t exactly a great look for an organization that calls itself ‘Open’AI. Then again, saddled with the responsibility of steering clear of a dystopian future, one can hardly blame them for wanting to keep certain things shrouded in secrecy.

After all, no one wants to be that guy who let the AI genie out of the bottle. Or worse, to be remembered as the one who ‘pulled an Ultron’, a scenario that seems more plausible with each passing day of our headlong rush to embrace all things AI.

However you view OpenAI’s choice, it’s evident the AI party has taken a serious turn. As we find ourselves in the middle of this techno-morality play, let’s all take a deep breath and pray for a Hollywood ‘happy ending’ rather than a dystopian nightmare.

Read the original article here: https://www.wired.com/story/openai-safety-transparency-research/