They built it. They’re scared of it. They’re selling it anyway.
Stop me if you’ve heard this one before: a tech company says it’s built a new AI that’s so powerful it’s scary. Apparently, it’s too dangerous to release into the world – the consequences would be catastrophic. Luckily for us, they are keeping it locked up for now. They just wanted you to know.

That’s exactly what AI company Anthropic is telling us about its latest model, Claude Mythos. The company says Mythos’ ability to find cybersecurity bugs far surpasses that of human experts, and that if similar technology lands in the wrong hands, it could have world-altering consequences. “The fallout – for economies, public safety and national security – could be severe,” Anthropic said in an early April blog post. Some breathless observers warned that Mythos will soon force you to replace every piece of technology in your life, down to your WiFi-enabled microwave, to protect yourself from the digital madness.
Some security experts doubt these claims, but let’s set that aside. This isn’t new. Executives at leading AI providers regularly issue warnings about how their industry’s products may destroy humanity. Why do AI companies want us to be afraid of them?
It’s a strange way for any company to talk about its own work. You don’t hear McDonald’s announcing that it’s created a burger so terrifyingly delicious that it would be unethical to grill it for the public.
Here’s one theory. According to critics, it benefits AI companies to keep you fixated on apocalypse because it distracts from the very real damage they’re already doing to the world. Tech leaders say they’re simply warning us about an inevitable future, and that safety is a top priority whether the threat arrives now or later. But others argue that what we’re actually seeing is fear-mongering, which exaggerates the technology’s potential and serves to boost stock prices. It also encourages a narrative that regulators must stand aside, because these AI companies are the only ones who can stop the bad guys and build this technology responsibly.
“If you portray these technologies as somehow almost supernatural in their danger, it makes us feel like we are powerless, like we are outmatched,” says Shannon Vallor, a professor of the ethics of data and artificial intelligence at the University of Edinburgh in the UK. “As if the only people we could possibly look to would be the companies themselves.”
Somebody stop me
An Anthropic spokesperson told me the company has been clear about these issues. They shared blog posts from other organisations supporting Mythos’ cyber capabilities, but did not address the points in this article, aside from one comment included below.
This isn’t the first time Anthropic chief Dario Amodei has worked on a tool that his employer declared too dangerous for the public. In 2019, when Amodei was an executive at OpenAI, the company announced GPT-2 – a tool far less sophisticated than ChatGPT – and he and other company leaders said they couldn’t release it because of “concerns about malicious applications of the technology”. Months later, they released it anyway. (OpenAI CEO Sam Altman published a blog post saying the company embraces uncertainty, and acknowledging that fears about GPT-2 were “misplaced”.)
