Google Gives Green Light for Its AI to Engage in Weapons and Surveillance Shenanigans
“Google Lifts a Ban on Using Its AI for Weapons and Surveillance”
“Google’s AI systems are an integral part of many of our products, and we recognize the responsibility we have in society to ensure our technology’s beneficial use. Therefore, we are committed to establishing and publishing clear AI principles that guide our research and product development work.” Quite a noble statement from Google in its public declaration of responsibility toward artificial intelligence, isn’t it?
Let’s take a moment to discuss this wholehearted promise, which appears as sleek as the tech giant’s own logo. Declarations are lovely, aren’t they? They set expectations and reassure the public that everything is under control. But does reality always live up to the promises made? That’s debatable.
Google’s dedication to creating ‘responsible AI’ is, without a doubt, a commendable step. Attention to safety, established guidelines, and the implementation of principles indicates a willingness to produce AI systems that benefit not just the company’s revenue but society as a whole.
Here comes the twist in the tale, though. How transparent can a tech giant be, especially when sundry business objectives are at stake? Transparency in AI, as idealistic as it sounds, is akin to finding a needle in a haystack. What exactly are these standards? More importantly, who gets to implement them? And who verifies that they are maintained: interested parties keen to sweep problems under the rug, or unbiased third-party entities? The answers remain as elusive as ever.
Moreover, user privacy is yet another prickly issue for any entity dealing in artificial intelligence. Balancing privacy considerations with business goals is nothing short of walking a tightrope. Can Google follow through on its privacy commitments while also turning a handsome profit?
While Google’s efforts deserve applause, it takes more than stated principles to build, monitor, and regulate responsible AI. It is high time that wider, more transparent, and unbiased protocols and policies were put in place. To paraphrase a famous saying, “with great power comes even greater responsibility.” Let’s hope corporate giants like Google actually take that to heart when it comes to AI and its future implications.
Read the original article here: https://www.wired.com/story/google-responsible-ai-principles/