Inside Anthropic: The A.I. Start-Up Fueled by Fear

Imagine this scene: the San Francisco headquarters of Anthropic, an A.I. start-up, is buzzing with anticipation. This isn’t your typical pre-launch jitters, though. The team at Anthropic isn’t just worried about server crashes or unimpressed users. They’re actually scared. Why? Because they’re about to unleash a powerful new A.I. chatbot, Claude, into the world. And they’re all too aware of the potential for catastrophe.

The Fear Factor at Anthropic

Anthropic isn’t your everyday tech start-up. Despite its modest size (160 employees) and unassuming profile, it’s a titan in the A.I. research world, a formidable rival to tech behemoths like Google and Meta. Sure, the usual start-up nerves are there. But here, the stakes are higher.

Anthropic’s engineers are not just developers. They’re creators of intelligent beings: powerful A.I. models that could one day rival human intelligence. The fear? That they may lose control of their creation. That their own technology could turn against them or, worse, be weaponized by others.

The Rise of the A.I. Panic

Just a few years ago, the idea of an A.I. apocalypse was the stuff of sci-fi novels. But as A.I. technology rapidly evolves, the panic is becoming very real. Tech leaders and A.I. experts are sounding the alarm, warning of the destructive potential of increasingly intelligent chatbots. A.I., they argue, could be as dangerous as nuclear weapons or pandemics if not properly regulated.

At Anthropic, they’re cranking the doomsday dial up to eleven. I spent weeks embedded in their headquarters, interviewing staff and observing their preparations for the launch of Claude 2 – the latest version of their A.I. chatbot. And let me tell you, optimism is in short supply.

The Anthropic Anxiety

Anthropic’s staff are well aware of the potential for disaster. They see themselves as the Robert Oppenheimers of our time, grappling with the moral implications of their cutting-edge technology. The fear is palpable. At times, it felt like I was visiting a restaurant where the kitchen staff could talk about nothing but food poisoning.

One worker confessed to losing sleep over his A.I. fears. Another casually predicted, mid-lunch, a 1-in-5 chance of rogue A.I. wiping out humanity within the next decade. The company itself has held back earlier models over concerns about potential misuse.

The Effective Altruism Connection

But there’s a twist to the Anthropic story. The company has ties to effective altruism – a movement that champions using data and logic to do the most good in the world. Anthropic was founded by former OpenAI employees who felt OpenAI had become too commercial. They envisioned a new venture focused on A.I. safety.

The Future of A.I. – and Anthropic

Despite the doom and gloom, Anthropic does have its optimists. Co-founder Ben Mann told me he believes A.I. language models will do more good than harm. He’s proud of the safety measures they’ve put in place and optimistic about the future.

And maybe that’s what the tech world needs more of – a healthy dose of caution. Anthropic’s obsession with safety may seem over the top, but considering the potential risks of A.I., perhaps it’s just the right amount of fear.

Despite the existential dread, there’s a sense of responsibility at Anthropic that’s almost comforting. Yes, A.I. is scary. But maybe, with more companies like Anthropic prioritizing safety over profits, we can navigate the A.I. revolution without sleepless nights.

You can try Claude for yourself (US and UK only at the moment). Be warned, though: it’s a bit neurotic. But then again, wouldn’t you be too, if you knew what it knew?

Source: New York Times