Global Propaganda Wizards Still Fumbling with AI: They’re Just Like Us!

“Foreign Influence Campaigns Don’t Know How to Use AI Yet Either”

The threat is real and it’s here: “OpenAI published its first threat report. It’s a long, wide-ranging document, but much of the focus is on China and Russia.” AI wizards in those countries are reportedly amplifying falsehoods, dishing out misinformation, and attempting to grab the world order by the throat. Welcome to the mind-bending narrative of the future of 21st-century warfare.

Shades of Terminator, isn’t it? But let’s dial back the sci-fi imagination for a moment. The report is a veritable Pandora’s box of the mischief artificial intelligence could serve up, notably at the hands of China and Russia. Oxford’s Centre for the Governance of AI must be having quite a ball, what with AI researchers worldwide picking through this particular minefield.

Now, here’s a serious paradox. While the report throws light on the increasing use of AI for authoritarian purposes, it also brings another interesting point to the fore: the nature of “open” AI research. The freewheeling exchange of ideas and research within the AI community might be a double-edged sword. Yes, it fuels the democratization of technology, but it also leaves a back door open for those with less-than-ethical intentions. Reminiscent of the old adage, “A little learning is a dangerous thing”?

Caught between a rock and a hard place, the AI community now faces the tough job of walking the fine line between openness and security. It’s a classic Catch-22 situation.

What’s more, the act of predicting malicious use of AI brings to mind a rather ironic catchphrase: “predicting the unpredictable.” Well, bravo to anyone who’s managed to accurately predict where the next advanced deepfake bot will pop up.

In all seriousness, the report does sound a warning note about the free flow of AI information and innovation. Pointing fingers at the potential bad guys doesn’t help much if everyone else is playing pass-the-parcel with their own research, now does it?

So, as we sit fondly reminiscing about the days when ‘propaganda’ was simply a leaflet dropped from a plane, brace for the new dimensions of warfare. It’s all getting worryingly smarter: deceit no longer needs a physical battlefield, and truth could be just an algorithm away. Welcome to the new-age battlegrounds where AI does the heavy lifting.

Buckle up, and safe journeys. It’s quite a ride, isn’t it?

Read the original article here: https://www.wired.com/story/openai-threat-report-china-russia-ai-propaganda/