February 2026

OpenAI sweeps in to ink deal with Pentagon as Anthropic is designated a ‘supply chain risk’—an unprecedented action likely to crimp its growth


OpenAI announced late Friday it reached a deal for the Pentagon to use its AI models in classified systems, just hours after the U.S. government designated OpenAI arch-rival Anthropic a “supply chain risk” in a move that threatens to deal a serious blow to Anthropic’s business.

Legal and policy experts said the government’s unprecedented decision presents profound questions about the relationship between the government and business in the U.S. It is the first time the U.S. has ever designated an American company a supply chain risk, and the first time the designation has been used in apparent retaliation for a business not agreeing to certain contractual terms. Anthropic said in a statement Friday that it would take legal action to try to overturn the Pentagon’s designation.

In a statement announcing the deal, OpenAI CEO Sam Altman said that the company's agreement with the Pentagon contains the same two limitations on how the military can use its technology that Anthropic had been insisting on and that the government had said it could not accept.

But OpenAI appears to have enshrined these limits in the agreement differently than Anthropic did. Anthropic tried to have the limits spelled out explicitly in the contract; OpenAI, by contrast, agreed that the Pentagon could use its technology for “any lawful purpose,” even as Altman said of the limitations that OpenAI “put them into our agreement.”

It is unclear exactly how both of these things could be true, or how the limitations are stated in the agreement. But it may simply be that the contract language highlights that current U.S. law prohibits the Pentagon from deploying AI for mass surveillance of Americans, and that current U.S. military policy states that humans must retain “appropriate levels of human judgment” over the use of lethal force.

OpenAI also said that the Pentagon agreed the company could build technical safeguards into its AI models intended to prevent them from being used either for mass surveillance of U.S. citizens or in lethal autonomous weapons.

“We are asking the [Department of War] to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept,” Altman said. Some commentators interpreted Altman’s remark as a veiled criticism of Anthropic, which had not agreed to these terms previously and instead insisted on explicit contractual restrictions on how its models could be used.

Altman had previously publicly supported Anthropic's position on the limitations it was seeking. Numerous OpenAI employees also signed an open letter supporting Anthropic CEO Dario Amodei's insistence that the company's models not be used for mass surveillance or autonomous weapons.

The potential impact of a ‘supply chain risk’ designation

The extent of the damage the “supply chain risk” designation will do to Anthropic's business remained unclear over the weekend. Anthropic had a $200 million contract with the Pentagon that has now been canceled. But that is not a huge blow to a company reportedly on track to generate at least $18 billion in revenue this year.

Instead, the larger concern is the extent to which other enterprises will have to stop using Anthropic's technology. President Trump said on Truth Social that all federal departments were being ordered to stop using Anthropic's AI immediately, with a six-month phase-in of the order to prevent disruption. Total federal technology spending is about $140 billion per year, but the amount the U.S. government currently spends on AI is a fraction of that.

The greatest danger, though, is posed by how Pete Hegseth, Secretary of War, has interpreted the supply chain risk designation and its impact. Hegseth said in a social media post that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

If that interpretation stands, it would do potentially catastrophic damage to Anthropic's business, because many of the large enterprises that have been rapidly adopting Anthropic's Claude models for software coding and other use cases also do some business with the U.S. military. It might also mean that companies such as Amazon, Google, and Nvidia, which have invested billions of dollars in Anthropic, would have to divest from the company, potentially leaving it with a large funding hole and making it difficult to raise further capital from U.S. investors.

Anthropic earlier this month announced it had closed a new $30 billion venture capital funding round that valued the company at $380 billion. It has reportedly been hiring financial and legal advisors for a potential IPO that could come late this year or early next. But its fight with the Pentagon now casts a pall over that prospect.

Many legal analysts and AI policy experts questioned Hegseth’s broad interpretation of the “supply chain risk” designation. Peter Harrell, a former Biden administration National Security Council official and a visiting scholar at Georgetown University Law School, posted on X that DoW’s supply chain risk designation applies only to work on Department of War contracts. “DoW can’t, legally, tell its contractors ‘don’t use Anthropic even in your private contracts,’” Harrell said.

Dean Ball, a senior fellow at the Foundation for American Innovation and a former AI policy advisor to the Trump administration, said in a post on X that Hegseth’s interpretation of the supply chain risk designation was “almost surely illegal” and amounted to “attempted corporate murder.” He said Hegseth’s actions—which he called “a psychotic power grab”—sent a terrible message to any business about whether it should ever risk doing business with the U.S. government.

Several legal experts noted that even a more narrowly interpreted decision to designate Anthropic a supply chain risk may not survive a legal challenge. Charlie Bullock, a senior research fellow at the Institute for Law & AI, told Wired that the government cannot make the designation without having completed a risk assessment, which it is unclear whether the government conducted, and without notifying Congress before taking action, which also does not appear to have occurred.

Amos Toh, a senior counsel at the Brennan Center for Justice at New York University, was also among several legal experts who said that the supply chain risk designation requires the government to prove that there is a risk of sabotage, subversion, or manipulation of operations by an adversary. “It is not at all clear how adversaries could exploit Anthropic’s usage restrictions on Claude to sabotage military systems,” Toh told the defense news site DefenseScoop. The statute also requires that the Pentagon have exhausted any alternative, less intrusive courses of action to mitigate the risk prior to making the supply chain risk finding. Toh questioned whether the Pentagon could reasonably claim to have made a “good faith effort” to pursue less intrusive measures, given how quickly the Anthropic dispute escalated over the past few days.

Even if Anthropic ultimately prevails in challenging the supply chain risk designation in court, the damage to its business may be done. “It will take years to resolve in court. And in the meantime, every general counsel at every Fortune 500 company with any Pentagon exposure is going to ask one question: is using Claude worth the risk?” Shenaka Anslem Perera, an independent analyst with a large social media following, posted on X.

This story was originally featured on Fortune.com





