Anthropic rejects Pentagon demand for unrestricted military AI access

Artificial intelligence company Anthropic said on Thursday that it will not permit the Pentagon unrestricted access to its technology, pushing back against pressure from US officials who warned of possible action under the Defense Production Act. The dispute highlights growing tensions between private AI developers and the US government over how advanced systems should be deployed in military and intelligence operations.

According to the company, the Pentagon had given Anthropic a deadline of 5:01 pm Friday to agree to broader military use of its models. Officials reportedly indicated that failure to comply could prompt action under the Defense Production Act, a Cold War-era law that allows the federal government to compel private companies to prioritize national security and defense needs. The Pentagon also warned that refusal could result in Anthropic being designated a supply chain risk, a label that could limit the company's ability to secure future government contracts.

In a public statement, Anthropic chief executive Dario Amodei said the company could not accept the demand that its artificial intelligence systems be made available without limits. He emphasized that national security pressure would not alter the company’s position. “These threats do not change our position,” Amodei said, adding that Anthropic could not “in good conscience accede to their request.”

Anthropic acknowledged that its AI models are already in use by the Pentagon and intelligence agencies in defensive and analytical capacities. However, Amodei outlined what he described as clear ethical boundaries. He argued that the use of AI for mass surveillance of US citizens would be incompatible with democratic values and longstanding civil liberties protections. The company also raised concerns about deploying AI in fully autonomous weapons systems, cautioning that current technology is not sufficiently reliable to make life-and-death decisions without meaningful human oversight.

“We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” Amodei said, underscoring the company’s stance on responsible AI policy. The remarks reflect broader debates within the artificial intelligence industry about the appropriate role of military AI and the limits of automation in defense technology.

The Pentagon has increasingly sought access to advanced AI systems as it integrates machine learning tools into intelligence analysis, logistics, and battlefield decision-making. The Defense Production Act has historically been invoked to prioritize industrial output during times of crisis, but its potential application to artificial intelligence underscores the strategic importance the US government places on emerging technologies.

The standoff between Anthropic and the Pentagon signals a significant moment in the evolving relationship between Silicon Valley and Washington. As artificial intelligence becomes more deeply embedded in national security infrastructure, questions about oversight, ethical constraints, and democratic accountability are likely to intensify. For now, Anthropic’s refusal marks a rare instance of a major AI developer publicly resisting federal pressure to expand military use of its technology.
