Setting a Fresh High Score in the AI Danger Game!

“A New Benchmark for the Risks of AI”

“The goal isn’t to develop new AI risk scenarios or to scare people about the future,” said Tim Hwang, director of the center’s Ethics and Governance of AI initiative. He believes that artificial intelligence should be developed responsibly, and wants the government and industry to better understand potential hazards.

Strap in for a read meant to stir a sense of responsibility in the head honchos of the AI world. Tim Hwang, director of the Ethics and Governance of AI initiative, is all about developing artificial intelligence in a safe and responsible manner. His endgame? Prompting the industry bigwigs and Uncle Sam to get a solid grip on the potential potholes in their path. Spooky scenarios about a dystopian AI future? He didn’t sign up for those.

To meet that goal, the first step is sniffing out the potential risks that AI systems could stumble into. How about creating a little thing called The Index of Risky AI? Sounds like the stuff of spy movies? No, just a list that captures what could potentially go amiss. Before you get your knickers in a twist over the dystopian, Hunger-Games-meets-Blacklist vibes, remember: this isn’t about stoking uncertainty, but about the certainty of wanting a safer technological home.

Now, who’s going to give this rollercoaster a nudge? Tim Hwang doesn’t shy away from suggesting that government and industry should carry this massive onus. And why not hand them the hot potato? As the people who routinely use AI (no thanks, autocorrect) and the people who build it, they undoubtedly understand its inner workings and idiosyncrasies far better than the rest of us, right? Bless them! They’re the ones to sort through the snakes and ladders of disaster scenarios, straight-facedly decoding, discerning, and neatly defusing them, one by one.

How about an example, for old times’ sake? Let’s say an autonomous vehicle takes a nasty spill on a patch of black ice. What should happen? Will Uncle Sam pay the mechanic’s bill? Or maybe the AI car dealer will chip in a little? Ah, the imagination soars. But, ping – reality check. Scenarios like these are just the tip of, dare it be said, an iceberg.

Now, imagining these scenarios is one thing, but implementing plans to ensure they never unfold in the real world? Well, that’s what the clever folks at the top should put their heads together and brainstorm about.

Clearly, predicting the future is not the name of this game. And getting scared out of your wits? No, thank you, that’s not the aim either! But having the movers and shakers of this industry understand AI’s life cycle, its ins and outs? Now, that’s a goal worth shooting for. The people at the helm of creating this technology certainly need to plan for it and steer it in the right direction, lest digital civilization as we know it starts to crumble.

Remember, this isn’t to scare the pants off anyone; it’s about creating and curating a safer techno-future that’s AI-secure. Because, hey, wouldn’t we all prefer that over a flailing robot apocalypse?

Read the original article here: https://www.wired.com/story/benchmark-for-ai-risks/