“Ilya Sutskever’s Thoughtful Proposal for Establishing AI Best Practices”
“OpenAI’s Ilya Sutskever Has a Plan for Keeping Super-Intelligent AI in Check”
“OpenAI has delivered some remarkable feats in artificial intelligence. But it has also been mired in a kind of academic controversy over its decision to release, and then not to release, certain AI models.” An accurate summary of OpenAI’s journey, we’d say. Look at them, giving tech communities goosebumps with their impressive feats while generating academic discourse like they’re auditioning for a role in a sci-fi drama.
No one can deny the remarkable advancements OpenAI has made in pushing the envelope of artificial intelligence. It’s interesting, though, that they’ve carved out a path defined by the question “should we or should we not release certain AI models for public use?” It’s like being stuck in a never-ending episode of “Black Mirror,” isn’t it?
The company initially withheld its language-generating model, GPT-2, from the public over fears of misuse. Remember that? They worried GPT-2 could be used to churn out fake news, spam, and other nefarious content. More than a little ironic, given how society is already drowning in fake news. OpenAI later backpedaled, however, and released GPT-2 to the public. Small victories count too, right?
Next on deck: the infamous GPT-3. The AI juggernaut that showed promise and provoked awe, but also kicked up a hailstorm of safety concerns. Suddenly, the Big Brother fears of “dangerous technology in the wrong hands” loomed large. But wait — Sutskever, a co-founder of OpenAI, has jumped into the ring advocating caution over blindly rushing into AI advancements. His vision is AI that benefits everyone. Now, isn’t that a fantastic concept?
The journey of OpenAI is certainly a gripping narrative, from mind-boggling advancements to hard questions about safety. The goal of AI serving humanity is noble, but executing it isn’t as straightforward as it might seem. With unchecked AI power piling up as dense as a J.K. Rowling novel, the responsibility of controlling open-sourced AI models becomes as important as the technology itself.
As Sutskever aptly remarks, the road ahead is filled with complexity, and being mindful is the least one can do. Because, after all, crossing the street without looking both ways isn’t exactly recommended, is it?
Read the original article here: https://www.wired.com/story/openai-ilya-sutskever-ai-safety/