Anthropic has revoked OpenAI's developer access to its Claude family of large language models, citing a violation of its commercial terms, multiple people familiar with the decision told Wired on Tuesday.
In a statement, Anthropic spokesperson Christopher Nulty said OpenAI's own engineers had been "using our coding tools ahead of the launch of GPT-5," which the company regards as a direct breach of contractual clauses forbidding customers from using Claude to "build a competing product or service" or to reverse-engineer the model.
According to the same report, OpenAI had connected Claude to proprietary internal test tools to compare its code-generation, creative-writing, and safety performance against that of its own systems. The tests reportedly covered sensitive prompt categories such as self-harm and defamation. OpenAI's chief communications officer, Hannah Wong, called the practice "industry standard" benchmarking and said the company was disappointed by the suspension, noting that OpenAI's own API remains open to Anthropic.
Anthropic says it will continue to allow limited access for "benchmarking and safety evaluations," though it did not clarify how that carve-out will work in practice. The company has restricted rivals' usage before: in July, it blocked the start-up Windsurf after rumors linked the firm to an OpenAI acquisition bid. Chief science officer Jared Kaplan remarked at the time that "selling Claude to OpenAI" would be "odd."
Pulling API access from competitors is a well-worn tactic in the tech sector: Facebook famously blocked Vine from using its APIs, and Salesforce recently curtailed Slack APIs for rival collaboration apps. Anthropic's move came a day after it throttled Claude Code usage for all customers, citing rapid growth and terms-of-service violations.
Source(s)
Wired