Latest Research Pursues the Hilarious Challenge of Syncing Artificial Intelligence with Our Crowd-Sourced Shenanigans

“New study attempts to align AI with crowdsourced human values”

“Neuroscientists at a prominent AI lab are attempting to align artificial intelligence behaviors with crowdsourced human values.” This brainy bunch seems to have finally woken up to the idea that their shiny, chrome-plated CPUs should maybe, just maybe, learn a thing or two from the organic, squishy brains they’re designed to surpass.

Attempting to install a moral compass on their hard drives, these tech whizzes apparently stumbled upon an ingenious solution: asking average, everyday humans for their opinions. Yes, you read that right. After decades of calculating, rationalizing, and data crunching to imbue a legion of AIs with human cognition and values, the answer they’ve landed on is pretty much an online survey.

The goal here is simple: discover what values humans hold dear, and then program our mechanized friends to behave in kind. The crowdsourcing approach takes a leaf out of democracy’s book, gathering input from ordinary humans in everyday scenarios to set the standard for AI behavior. This little experiment even aims, audaciously, to “reduce problematic outcomes that occur when AI systems extrapolate from training data in ways that don’t align with human values.”

In other words, the top-notch techies have realized that their cognitive offspring might get a bit out of hand if nobody takes charge of its upbringing. Who’d have thought that an unsupervised, unregulated superhuman intellect might lead to unforeseen consequences? Scoff!

Now, none of this is to say that having AI systems reflect human values isn’t an admirable objective. But the inherent paradox here is pretty fascinating. After years of being told that AI will take over all our decisions, outsmart us, and eclipse the human race, it seems that, at the end of the day, we’re the ones teaching them right from wrong.

It’s like those parenting classes expectant couples attend, except instead of covering child nutrition or ‘how to survive the teenage years without losing your mind’, they’re discussing machine morality and the ethics of synthetic minds. And to think: all that highly lauded, hard-to-pronounce tech jargon essentially boils down to a questionnaire.

So, neuroscientists, a tip of the hat to you. After hoisting AI to the lofty heights of the ‘next evolution of intelligence’, you’ve apparently discovered the importance of good ol’ human input. Sure, we get it. Robots are cool. Smart robots, even cooler. But let the record show that when it came to defining right and wrong for our mechanical counterparts, we organic, squishy humans still know a thing or two.

Read the original article here: https://dailyai.com/2024/04/new-study-attempts-to-align-ai-with-crowdsourced-human-values/