Scholars Unleash Pandemonium by Rating AI Models on Risk – The Findings Are Wildly Diverse!

“Researchers Have Ranked AI Models Based on Risk—and Found a Wild Range”

“Algorithms that make decisions about mortgage loans or medical treatments often lack a critical feature: the ability to assess how well they will work in the real world.” If this quote doesn’t fire up all sorts of imagined doomsday scenarios, nothing will. If algorithms can’t predict how well they’ll perform in real-world applications, maybe they should stick to solving fifth-grade math problems rather than deciding the fate of real people and their homes or health.

The study conducted by MIT researchers shows that when AI models are evaluated without accounting for real-world factors, their performance is overestimated. And by overestimation, we aren’t just talking about being a number or two off; we’re talking about an eyebrow-raising 50-fold. Now, who here thinks it’s all hunky-dory that a program dictating one’s eligibility for a home loan or mapping out a medical treatment plan is off the mark by, oh, about 50 times?

But will anyone be surprised to hear that the models got so pompous because they were trained on skewed data? If you feed an algorithm nothing but bonsai trees, don’t be surprised when it struts around like a forestry expert. The twist in the tale comes when it meets an actual forest and gets a rude reality check. The way these AI systems are evaluated is simply too naïve, and it needs to be redesigned with real-world variability and imperfection in mind.
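To make the bonsai-versus-forest point concrete, here is a minimal, purely hypothetical sketch (not the MIT team’s methodology; every feature and number is synthetic): a model fit on a narrow slice of data scores well on a test set drawn from that same slice, then stumbles once the inputs shift toward messier, real-world-style conditions.

```python
# Illustrative toy example of distribution shift, assuming nothing about the
# actual study: a classifier trained on "lab" data looks great on lab-like
# test data and worse on shifted, messier data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# "Lab" training data: a narrow, well-behaved distribution with a clean rule.
X_train = rng.normal(loc=0.0, scale=1.0, size=(5000, 4))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# In-distribution test set: same generating process as training (the bonsai).
X_lab = rng.normal(loc=0.0, scale=1.0, size=(2000, 4))
y_lab = (X_lab[:, 0] + 0.5 * X_lab[:, 1] > 0).astype(int)

# Shifted "real world" test set: different input range and a slightly
# different outcome rule, standing in for messiness the training data
# never captured (the forest).
X_real = rng.normal(loc=1.5, scale=2.0, size=(2000, 4))
y_real = (X_real[:, 0] + 0.5 * X_real[:, 1] + 0.8 * X_real[:, 2] > 1.0).astype(int)

print("Lab accuracy:       ", accuracy_score(y_lab, model.predict(X_lab)))
print("Real-world accuracy:", accuracy_score(y_real, model.predict(X_real)))
```

Running this prints a near-perfect lab score and a noticeably lower real-world score, which is the whole gag: the evaluation number you brag about depends entirely on how much your test data resembles the world the model will actually face.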

So, while we’re all clapping and cheering for this tech revolution where AI becomes the life-saver, it might be good to remember that these so-called ‘set-in-stone’ algorithms are, in fact, as changeable as a politician’s promises. The moral here is that numbers on paper don’t mean much in practice until they’re tested against the unpredictable messiness of real life, especially when the predictions have life-altering consequences.

In conclusion, gear up, folks! It’s time for AI to go back to school. It seems we’ve been crediting AI with a little too much intelligence while it still needs to figure out the basics. Aren’t we all in love with this never-ending teething phase? It’s as though AI is the toddler who never grows up. It just makes you wonder: at what point will AI stop being the student and become the master?

Read the original article here: https://www.wired.com/story/ai-models-risk-rank-studies/