Sunday, April 12, 2026

‘Too powerful for the public’: inside Anthropic’s bid to win the AI publicity war | AI (artificial intelligence)



This week, the AI company Anthropic said it had created an AI model so powerful that, out of a sense of overwhelming responsibility, it was not going to release it to the public.

The US treasury secretary, Scott Bessent, summoned the heads of major banks for a chat about the model, Mythos. The Reform UK MP Danny Kruger wrote a letter to the government urging it to “engage with AI firm Anthropic whose new frontier model Claude Mythos could present catastrophic cybersecurity risks to the UK”. X went wild.

Others were more sceptical, including the noted AI critic Gary Marcus, who said: “Dario [Amodei] has far more technical chops than Sam [Altman], but seems to have graduated from the same school of hype and exaggeration,” referring to the CEOs of Anthropic and its rival, OpenAI.

It is unclear if Anthropic has built the machine god. What is more apparent is that the San Francisco startup widely seen as the “responsible” AI company is brilliant at marketing.

In the past months, Anthropic has enjoyed a 10,000-word profile in the New Yorker, two pieces in the Wall Street Journal, and the front cover of Time magazine, on which Amodei’s face was emblazoned, movie-poster style, above the Pentagon and the US defense secretary, Pete Hegseth.

Amodei and Anthropic’s co-founder, Jack Clark, appeared on two separate New York Times podcasts in February, chewing over questions such as whether their machine was conscious, and if it might soon “rip through the economy”. The company’s “resident philosopher” has spoken to the WSJ about whether Claude – a commercial product being used to trade cryptocurrency and designate missile targets – has a “sense of self”.

Dario Amodei ‘has far more technical chops’ than the OpenAI CEO, Sam Altman, said one AI critic, ‘but seems to have graduated from the same school of hype and exaggeration’. Photograph: Denis Balibouse/Reuters

This has all come amid a dustup between Anthropic and the US department of defense in which Anthropic, despite creating the AI tool used by the Pentagon to strike Iran, has managed to come out looking far better than OpenAI, which offered to help the US military do the same thing but with – maybe – fewer guardrails.

Its media lead, Danielle Ghiglieri, has notched the wins on LinkedIn. “I’m endlessly proud to work at Anthropic,” she said of the company’s Time cover, tagging the journalists involved in a post about the “mad dash” to get the story over the line.

Watching a CBS 60 Minutes segment featuring Amodei “was one of those pinch-me moments,” she said. “What made it meaningful wasn’t just the platform. It was seeing the story we wanted to tell actually come through.”

Of the New Yorker profile, by the journalist Gideon Lewis-Kraus, she wrote: “I would be lying if I said I wasn’t nervous for our first meeting in person … working with someone of Gideon’s calibre means being pushed to articulate ideas you’re still forming, and being OK with that discomfort.”

(“I bet that’s what they all say about you,” said my editor.)

Other tech PRs have taken notice.

“They are clearly having a moment right now but companies building technology that will change the world deserve equal scrutiny,” said one. “They accidentally leaked their own source code last week, then this week they claim stewardship over cyber threats with a new powerful model that only they control. Any other big tech firm would be ridiculed.”

Anthropic did accidentally release part of Claude’s internal source code at the beginning of April. “No sensitive customer data or credentials were involved or exposed,” it said.

What does all this mean for Anthropic’s undoubtedly powerful Mythos?

The model’s capacities were not “substantiated,” said Dr Heidy Khlaaf, the chief AI scientist at the AI Now Institute. “Releasing a marketing post with purposely vague language that obscures evidence … brings into question if they are trying to garner further investment without scrutiny.”

“Mythos is a real development and Anthropic was right to treat it seriously,” said Jameison O’Reilly, an expert in offensive cybersecurity. But, he said, some of Anthropic’s claims, such as that it found thousands of “zero-day vulnerabilities” in major operating systems, were not that significant to real-world cybersecurity considerations.

A zero-day vulnerability is a flaw in software or hardware unknown to its developers, meaning attackers can exploit it before any patch exists.

“We have spent over 10 years gaining authorised access to hundreds of organisations – banks, governments, critical infrastructure, global enterprises,” said O’Reilly. “In those 10 years, across hundreds of engagements, the number of times we needed a zero-day vulnerability to achieve our objective was vanishingly small.”

Protesters in San Francisco called on AI firms to pause development last month, marching to the offices of Anthropic and OpenAI. Photograph: Manuel Orbegozo/Reuters

Other factors may have contributed to Anthropic’s decision not to release Mythos.

The company has limited resources, and appears to be struggling to offer enough computing capacity to allow all its subscribers to use its models. It has introduced usage caps on the wildly popular Claude. Recently, it said users would have to purchase extra capacity on top of their subscriptions in order to run third-party tools, such as OpenClaw. At this point, it may simply not have the infrastructure to support the release of a hyped-up new creation.

Like OpenAI, Anthropic is in a race to raise billions of dollars and capture a market – still ill-defined – of people who might lean on its chatbots as friends, romantic partners or deeply personalised assistants, and of companies that might use them to replace human employees.

But differences in these products are marginal and impressionistic, mostly down to hard-to-quantify attributes such as “sense of self” and “soul” – or rather, what passes for these in an AI agent. The battle is for hearts and minds.

“Mythos is a strategic announcement to show that they’re open for business,” said Khlaaf, adding that Anthropic’s decision to withhold the model prevented independent experts from evaluating the company’s claims.

She suggested we may be “seeing the very same bait and switch playbook that was used by OpenAI, where safety is a PR tool to gain public trust before profits are prioritised”, adding: “Anthropic publicity has managed to better obscure this switch than its rivals.”


