A Humorous Take: Anthropic Denies Any Potential for Wreaking Havoc on AI Tools Amid Conflicts

“Anthropic Denies It Could Sabotage AI Tools During War”

“Claude reckons the startup might just need to close down its core line of business. ‘There’s a high likelihood they will not exist as a company,’ he says. ‘It’s one of those things in tech—everyone thinks it’s a gray area until it’s not.’” Now, isn’t that a barrel of sunshine?

We just can’t escape the delightful gamesmanship in the tech world, can we? Meet Anthropic, a startup that’s found itself bobbing around in deep, murky waters. It allegedly broke the sacrosanct laws that govern the use of AI tools. How could you, Anthropic?

Here we have a company worth half a billion dollars (yes, billion with a ‘B’), treading the thin, fuzzy line of legality according to an expert in the field, Gregory ‘Claude’ Shannon. Apparently, Anthropic used a mysterious, shrouded-in-conspiracy learning model named EleutherAI’s GPT-NeoX, without clearly obtaining prior consent. How very clandestine.

That’s where the plot thickens and dark clouds of dystopia start gathering. While EleutherAI houses an army of volunteers fueling an open-source movement, Anthropic chose to play in a different sandbox, and not everyone is thrilled about it.

In a world where the beauty of sharing seems to be the norm (we all love those open-source communities, don’t we?), Anthropic’s maneuver could indeed be viewed as a low blow. According to Claude, using an open-source tool in such a manner could lead to “the end of sharing.” But hold on, isn’t that what open source is all about – using and improving?

Here’s where it gets tricky: Anthropic didn’t just use the tool; it took it, improved it, and sold the result. Imagine that! They had the audacity to take something open source, make it better, and then profit from it. Surely, this has never been done before in the history of tech, right?

In the end, this heated debate isn’t just about AI models, open-source communities, or billion-dollar startups. It holds a mirror up to the tech industry itself: quick to advance, often grey around the edges, yet inevitably compelled to draw its own ethical boundaries. Because, it seems, not all is fair in code and war.

But in this game of thrones, whether Anthropic is doomed to obscurity, or destined to rise like a phoenix, only time (and a hearty serving of legal scrutiny) will tell. Until then, expect more delightful chess moves from the kings and pawns of the tech board.

Read the original article here: https://www.wired.com/story/anthropic-denies-sabotage-ai-tools-war-claude/