OpenAI and Google Workers File Amicus Brief in Support of Anthropic Against the US Government

More than 30 employees from OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief on Monday in support of Anthropic in its legal fight against the US government.
“If allowed to proceed, this effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the employees wrote.
The brief was filed just hours after Anthropic sued the Department of Defense and other federal agencies over the Pentagon’s decision to designate the company a “supply-chain risk.” The sanction, which severely limits Anthropic’s ability to work with military contractors, went into effect after Anthropic’s negotiations with the Pentagon fell apart. The AI startup is seeking a temporary restraining order that would allow it to continue working with military partners as the lawsuit progresses; the amicus brief was filed specifically in support of that motion.
Signatories of the brief include Google DeepMind researchers Zhengdong Wang, Alexander Matt Turner, and Noah Siegel, as well as OpenAI researchers Gabriel Wu, Pamela Mishkin, and Roman Novak, among others. Amicus briefs are legal filings submitted by parties that are not directly involved in a court case but that have expertise relevant to it. The employees signed in a personal capacity and don’t represent the views of their companies, according to the brief.
OpenAI and Google did not immediately respond to WIRED’s request for comment.
The amicus brief says that the Pentagon’s decision to blacklist Anthropic “introduces an unpredictability in [their] industry that undermines American innovation and competitiveness” and “chills professional debate on the benefits and risks of frontier AI systems.” It notes that the Pentagon could have simply dropped Anthropic’s contract if it no longer wished to be bound by its terms.
The brief also argues that the red lines Anthropic says it requested, including guarantees that its AI wouldn’t be used for mass domestic surveillance or the development of lethal autonomous weapons, reflect legitimate concerns that warrant sufficient guardrails. “In the absence of public law, the contractual and technological requirements that AI developers impose on the use of their systems represent a vital safeguard against their catastrophic misuse,” the brief says.
Several other AI leaders have also publicly questioned the Pentagon’s decision to label Anthropic a supply-chain risk. OpenAI CEO Sam Altman said in a post on social media that “enforcing the SCR [supply-chain risk] designation on Anthropic would be very bad for our industry and our country.” He added that “this is a very bad decision from the DoW and I hope they reverse it.” As Anthropic’s relationship with the Pentagon soured, OpenAI quickly signed its own contract with the US military, a decision some people criticized as opportunistic.