Anthropic Yanks OpenAI’s Access to Claude Model

Anthropic reportedly blocked OpenAI’s access to its models due to a terms-of-service violation.

The incident involving the rival artificial intelligence startups happened Tuesday (July 29), Wired reported Friday (Aug. 1), citing unnamed sources.

“Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI’s own technical staff were also using our coding tools ahead of the launch of GPT-5,” Anthropic spokesperson Christopher Nulty said, per the report. “Unfortunately, this is a direct violation of our terms of service.”

Anthropic’s terms of service bar customers from using its services to build a competing product or service, “including to train competing AI models,” or to “reverse engineer or duplicate” the services, the report said.

OpenAI was plugging Claude into its own internal tools via APIs, rather than using the regular chat interface, according to the report. This let the company test Claude’s capabilities in coding and creative writing against its own AI models, as well as determine how Claude responded to safety-related prompts involving categories like self-harm and defamation.
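To give a sense of what that kind of access looks like in practice, here is a minimal sketch of programmatic evaluation through Anthropic’s API using the official anthropic Python SDK; the model ID, prompt and downstream scoring step are illustrative assumptions, not details from the report.

```python
# Illustrative sketch: how a lab might query Claude programmatically for
# benchmarking, using Anthropic's official Python SDK. The model ID, prompt
# and scoring step are hypothetical placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

coding_prompt = "Write a Python function that reverses a linked list."

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=1024,
    messages=[{"role": "user", "content": coding_prompt}],
)

# The returned text would then be fed into an internal evaluation harness
# (test runner, human review, etc.) and compared against other models' output.
print(response.content[0].text)
```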

“It’s industry standard to evaluate other AI systems to benchmark progress and improve safety,” OpenAI Chief Communications Officer Hannah Wong said in a statement, per the report. “While we respect Anthropic’s decision to cut off our API access, it’s disappointing considering our API remains available to them.”

Meanwhile, there’s a debate in the AI sector, centered on AI scaling laws, about whether advancements in large language models are slowing.

The idea behind AI scaling laws, popularized by OpenAI, is that larger models trained on more data and compute will deliver better performance.
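As a rough illustration, the sketch below evaluates a Chinchilla-style power law, in which predicted loss falls as parameter count and training tokens grow; the coefficient values and compute budgets are illustrative placeholders, not published fits.

```python
# Illustrative sketch of a Chinchilla-style scaling law: loss improves as a
# power law in parameters (N) and training tokens (D). Coefficients below are
# placeholders, not the published estimates.
def predicted_loss(n_params: float, n_tokens: float,
                   e: float = 1.7, a: float = 400.0, b: float = 400.0,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """L(N, D) = E + A / N**alpha + B / D**beta"""
    return e + a / n_params**alpha + b / n_tokens**beta

# Scaling up model size and data keeps lowering the predicted loss, but each
# doubling buys less improvement than the last, which is the diminishing-
# returns dynamic at the heart of the plateau debate.
for scale in (1, 2, 4, 8):
    n, d = scale * 7e9, scale * 1.4e12  # multiples of a 7B-param, 1.4T-token run
    print(f"{scale}x budget -> predicted loss {predicted_loss(n, d):.3f}")
```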

“Over the past few years, AI labs have hit on what feels like a winning strategy: scaling more parameters, more data, more compute,” said Garry Tan, president of startup accelerator Y Combinator. “Keep scaling your models, and they keep improving.”

However, there are indications that early leaps in performance are slowing. The two chief fuels for scaling, data and computing power, are becoming costlier and scarcer, said Adnan Masood, UST’s chief architect of AI and machine learning.

“These trends strongly indicate a plateau in the current trajectory of large language models,” he said.
