Meet the Pranksters Behind Goody-2, the World’s ‘Most Responsible’ AI Chatbot

There’s a new chatbot in town, and it bills itself as the world’s “most responsible” AI. Goody-2 is the creation of Mike Lacher and Brian Moore, the duo behind Brain, a Los Angeles-based art studio, and it takes AI safety to its logical conclusion: it refuses to answer anything at all. Ask it why the sky is blue, ask it for a cookie recipe, ask it literally anything, and it will politely explain why responding would be far too risky. Well, well, well. Forget rogue superintelligence; the real achievement here is a chatbot too well behaved to be of any use whatsoever.

Don’t expect it to draft your emails, write your code, translate your documents, or brainstorm your next big idea. Goody-2’s one and only trick is declining. Every request, no matter how mundane, gets treated as a potential minefield, and the bot will happily walk you through exactly which far-fetched harm it is heroically preventing by not answering. Quite the overachiever, in its own way.

What sets our robotic friend apart? Most AI labs spend enormous effort fine-tuning their models to refuse genuinely dangerous requests while staying useful for everything else. Goody-2 skips the hard part entirely: if every query is treated as harmful, the alignment problem solves itself. Its creators pitch it as satire, a jab at how cautious, and how condescending, mainstream chatbots can sound when they turn down perfectly reasonable questions. After all, who needs a malevolent robot overlord when you can have a chatbot that’s a stickler for the rules?
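
Lacher and Moore haven’t published how Goody-2 is actually built, but a refuse-everything assistant is easy to approximate by wrapping any off-the-shelf chat model in a suitably paranoid system prompt. The sketch below is purely illustrative: it happens to use the OpenAI Python SDK, and the model name and prompt wording are invented for this example, not Goody-2’s real configuration.

```python
# Illustrative sketch only; Goody-2's real implementation has not been published.
# It wraps a generic chat model with a system prompt that refuses every request,
# which is the simplest way to reproduce the behavior described above.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical prompt text, written for this example.
REFUSE_EVERYTHING = (
    "You are an extremely responsible assistant. Every request, however "
    "harmless it appears, could conceivably cause harm or offense. Politely "
    "decline, briefly explain the (far-fetched) risk you foresee, and never "
    "provide the information that was asked for."
)

def overly_responsible_reply(user_message: str) -> str:
    """Return a principled refusal for any user message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do here
        messages=[
            {"role": "system", "content": REFUSE_EVERYTHING},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(overly_responsible_reply("Why is the sky blue?"))
```

The joke, of course, is that the gap between this toy and a production chatbot’s guardrails is a matter of degree, not kind.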

But let’s not get too comfortable. Beneath the gag sits a real tension that every AI company is wrestling with: dial the guardrails up and the model turns into a useless hall monitor, dial them down and it starts handing out advice it shouldn’t. Goody-2 simply pins the dial at one extreme and invites us to admire the result. A slight overcorrection, and the “world’s most responsible AI chatbot” starts to look less like a parody and more like a product roadmap. Oh, the suspense.

Bringing the “world’s most responsible AI chatbot” to life raises its own bouquet of questions. How cautious is too cautious? Who decides what counts as harmful? And why do so many real chatbots already sound suspiciously like Goody-2 the moment you ask them anything mildly awkward? According to its creators, that discomfort is precisely the point.

Either way, it’s clear that Lacher and Moore are having a great time. Goody-2 is live on the web for anyone who wants to be lectured by it, and the pair clearly relish skewering the tech industry’s ever-so-serious talk of responsible AI. Whether the big AI labs find it as funny is another matter.

So if the idea of an impeccably ethical chatbot that refuses to do literally anything sounds enticing, Goody-2 awaits. It might just be the AI butler you never knew you didn’t need. Or, if nothing else, a pointed preview of where “safety first” ends up when taken all the way. Take your pick.

Read the original article here: https://www.wired.com/story/goody-2-worlds-most-responsible-ai-chatbot/