Microsoft becomes first company to say it is not ‘abandoning’ Anthropic; company says: Our lawyers have studied that … – The Times of India



Microsoft has announced that it will continue to embed Anthropic’s artificial intelligence models in its products, despite the US Department of War labelling the startup a supply-chain risk. According to a report by CNBC, the company clarified that its legal team reviewed the designation and concluded that Anthropic’s products, including the Claude models, can remain available to customers, with the exception of the Department of War. “Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers — other than the Department of War — through platforms such as M365, GitHub, and Microsoft’s AI Foundry,” a Microsoft spokesperson told CNBC.

Political and defense context

This announcement from Microsoft comes after US President Donald Trump directed federal agencies to drop Anthropic’s technology, and Secretary of War Pete Hegseth said the company would be phased out of Pentagon systems within six months. The decision followed a round of failed negotiations between Anthropic and the Department of War over issues including mass domestic surveillance and autonomous weapons. Rival OpenAI quickly announced its own deal with the Pentagon, intensifying competition in the defense AI sector. CNBC also confirmed that Anthropic’s models had played a role in recent US airstrikes on Iran, further fueling scrutiny of the company’s defense ties.

Microsoft’s broader AI strategy

Microsoft has deep ties with Anthropic, having agreed to invest up to $5 billion in the company, while Anthropic committed to spending $30 billion on Microsoft’s Azure cloud services. This partnership sits alongside Microsoft’s larger stake in OpenAI, valued at $135 billion, with OpenAI pledging $250 billion in Azure spending.

CEO Satya Nadella has emphasized “model choice” as a guiding principle, allowing customers to toggle between Anthropic and OpenAI models in Microsoft 365 Copilot. Anthropic’s Claude models are also integrated into GitHub Copilot, where they are widely used by software engineers for drafting source code.

Microsoft’s decision makes it the first major company to publicly affirm support for Anthropic after the Pentagon’s designation. While some defense contractors have already instructed employees to stop using Claude models, Microsoft’s stance signals confidence in Anthropic’s technology for non-defense applications, including productivity tools and developer platforms.

Anthropic becomes first-ever American company to be designated as ‘risk to America’s national security’

Claude-maker Anthropic has been officially designated a “national security risk” in America, becoming the first US company to receive the label. In an official statement, Anthropic CEO Dario Amodei said the AI firm now has no choice but to challenge the supply-chain risk designation. “Yesterday (March 4) Anthropic received a letter from the Department of War confirming that we have been designated as a supply chain risk to America’s national security,” Amodei said in the statement. “As we wrote on Friday (February 27), we do not believe this action is legally sound, and we see no choice but to challenge it in court,” he added.

Amodei further stated that the language used by the Department of War in the letter (even supposing it was legally sound) matches the company’s Friday statement “that the vast majority of our customers are unaffected by a supply chain risk designation.” “With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts,” he said.
