Learn the Hilariously Ingenious Way to Thwart the Abuse of Open Source AI!

“A New Trick Could Block the Misuse of Open Source AI”

“Artificial intelligence research has always been open source—so open that papers and code often zip around the internet ahead of peer review, as eager engineers grab what they need to build the future. But given how central AI is to many tech companies, it’s been notably hard to apply typical open source principles to the field’s safety research.”

Isn’t it charming that the world of artificial intelligence (AI) is so open with its knowledge, tossing papers and code around like confetti before our pals in peer review even get a look-in? “Here’s to the future,” they cry, and off they go building robots and digital assistants that we’re all supposed to trust implicitly.

Yet, interestingly, when it comes to the not-so-small matter of safety research, our AI compatriots have had a harder time embracing the cuddly, share-and-share-alike ethos of open source. Enter the Center for AI Safety, currently donning its finest superhero cape to tackle the problem in suitably open-source fashion.

Determined to springboard AI safety research into the limelight, this noble band of heroes is calling on the wider AI community to contribute. The star of the show is the large language model (LLM). Catchy acronym, right? Don’t let it bamboozle you: these are the chatty, openly released smartypants systems anyone can download, and the new trick is a set of safeguards meant to keep those open models from being bent toward misuse.

Hang tight though, it’s not quite time to fashion your tin foil hat. The center’s boffins have set up safeguards to prevent any old Tom, Dick, or HAL 9000 from tinkering with their precious LLM, and contributions will be vetted, tested, and trialed to within an inch of their virtual lives before they make it into the model.

In short, it looks like the gung-ho, let’s-storm-ahead-into-the-future approach might finally be calming down a smidge. The Center for AI Safety is putting in the hard miles, keeping a rein on the cowboy attitude of the AI scene. Time will tell whether this leap towards open-source safety is a step in the right direction. For now, at least, it seems someone has stepped up to make sure we don’t wake up to an AI apocalypse. How comforting is that?

Read the original article here: https://www.wired.com/story/center-for-ai-safety-open-source-llm-safeguards/