AI firms must be clear on risks or repeat tobacco’s mistakes, says Anthropic chief

Artificial intelligence companies must be transparent about the risks posed by their products or risk repeating the mistakes of tobacco and opioid companies, according to the chief executive of the AI startup Anthropic.

Dario Amodei, who runs the US company behind the Claude chatbot, said he believed AI will become smarter than “most or all humans in most or all ways” and urged his peers to “call it as you see it”.

Speaking to CBS News, Amodei said a lack of transparency about the impact of powerful AI would replay the errors of cigarette and opioid firms that failed to raise a red flag over the potential health damage of their own products.

“You could end up in the world of, like, the cigarette companies, or the opioid companies, where they knew there were dangers, and they didn’t talk about them, and certainly did not prevent them,” he said.

Amodei warned this year that AI could eliminate half of all entry-level white-collar jobs – office jobs such as accountancy, law and banking – within five years.

“Without intervention, it’s hard to imagine that there won’t be some significant job impact there. And my worry is that it will be broad and it’ll be faster than what we’ve seen with previous technology,” Amodei said.

Anthropic, whose CEO is a prominent voice for online safety, has flagged various concerns about its AI models recently, including an apparent awareness that they are being tested and attempting to commit blackmail. Last week it said its coding tool, Claude Code, was used by a Chinese state-sponsored group to attack 30 entities around the world in September, achieving a “handful of successful intrusions”.

“One of the things that’s been powerful in a positive way about the models is their ability to kind of act on their own,” said Amodei. “But the more autonomy we give these systems, you know, the more we can worry are they doing exactly the things that we want them to do?”

Logan Graham, the head of Anthropic’s team for stress testing AI models, told CBS that the flipside of a model’s ability to find health breakthroughs could be helping to build a biological weapon.


“If the model can help make a biological weapon, for example, that’s usually the same capabilities that the model could use to help make vaccines and accelerate therapeutics,” he said.

Referring to autonomous models, which are viewed as a key part of the investment case for AI, Graham said users want an AI tool to help their business – not wreck it.

“You want a model to go build your business and make you a billion,” he said. “But you don’t want to wake up one day and find that it’s also locked you out of the company, for example. And so our sort of basic approach to it is, we should just start measuring these autonomous capabilities and to run as many weird experiments as possible and see what happens.”
