Elon Musk Appears to Admit xAI Leaned on OpenAI’s Models to Train Its Own

“Elon Musk Seemingly Admits xAI Has Used OpenAI’s Models to Train Its Own”

“Elon Musk thinks he might have a way to make AI less dangerous. ‘It should be distilled into an open-source model that’s useful to people,’ he opines. He’s suggesting the fruits of the labors of OpenAI, the lab he co-founded and backed until 2018 and which continues to withhold its most advanced models, might usefully be shared with the world. Musk’s proposition ties into the concept of XAI — Explainable AI — with the fundamental idea being that AI should not be a mysterious black box, but instead something that people can understand and use.”

Oh, the magnanimity! The previously tight-lipped OpenAI, co-founded and funded until 2018 by none other than our beloved Musk, shall, if all goes Musk’s way, hand its advanced technical know-how to the public. The tech magnate, famous for innovative ventures that casually aim to transform life as we know it, believes we can tame the AI beast with something called… Explainable AI (XAI). Essentially, Musk wants us to unpack the impenetrable “black box” that is AI. Can these complex systems, which effortlessly beat grandmasters at chess and purport to predict the stock market, really be distilled into a user-friendly manual? Musk certainly thinks so.

At the heart of Musk’s suggestion is the notion that powerful AI algorithms, zealously guarded by companies and labs, should be open-sourced, making them understandable and accessible to everyone. Sounds simple, doesn’t it? After all, who wouldn’t want the keys to this futuristic kingdom of neural networks and deep learning? In reality, though, it raises a harder question: can the complexities of AI be adequately “explained”?
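For readers unfamiliar with the jargon, “distillation” has a precise technical meaning: training a smaller “student” model to mimic the output probabilities of a larger “teacher” model. The sketch below is a minimal, generic illustration of the core distillation objective — a KL divergence between temperature-softened teacher and student distributions — not anything drawn from OpenAI’s or xAI’s actual code.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature yields softer probabilities."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student output distributions.

    This is the core objective of knowledge distillation: the student is
    penalized for diverging from the teacher's (softened) predictions.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's current predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# The loss is zero when the student exactly matches the teacher,
# and grows as their predicted distributions diverge.
print(distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # ~0.0
print(distillation_loss([1.0, 2.0, 3.0], [3.0, 2.0, 1.0]))  # > 0
```

In practice this loss would be minimized by gradient descent over the student’s weights; the point here is only what “distill” means when Musk invokes it.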

OpenAI’s reticence about its most sophisticated code seems almost quaint when placed under the benevolent shade of Musk’s idealism. After all, he was a co-founder of the lab. His suggestions are not without merit, of course, but their implications are worth considering. Is it in everyone’s best interest for advanced AI knowledge to be public, or is this a case of opening Pandora’s box?

On one hand, the idea of XAI is a noble one: a transparent AI that everyone can understand and use. On the other hand, giving everyone the power to create their own AI could have unforeseen consequences as AI capabilities continue to grow exponentially. And let’s not forget that it was Musk himself who once famously warned that AI could be “more dangerous than nuclear weapons.” So one must wonder: does Musk weigh those consequences when he advises that AI be “distilled into an open-source model”?

Bravo, Elon, for sparking a conversation that was in desperate need of ignition. True to form, Musk’s opinion on this contentious matter ensures that the world will be talking about it. Whether or not one agrees with his suggestion, it’s undeniable that the debate surrounding the openness of AI is set to intensify. To distill or not to distill, that is the question. Over to you, OpenAI.

Read the original article here: https://www.wired.com/story/elon-musk-distill-openai-models-partly-xai/