(The Epoch Times)—Anthropic on Feb. 26 rejected the Pentagon’s request to allow unrestricted use of its Claude AI model, citing concerns that the technology could be used for mass domestic surveillance or fully autonomous weapons.
Anthropic CEO Dario Amodei said in a lengthy blog post that use cases such as mass surveillance and the development of autonomous weapons have never been included in the company’s contracts with the Pentagon.
Amodei said those two applications are “simply outside the bounds of what today’s technology can safely and reliably do,” and warned that, in a narrow set of cases, artificial intelligence (AI) may erode democratic values.
“It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider,” he said.
According to Amodei, the Department of War has threatened to remove Anthropic's models from its systems if the company refuses to drop safeguards related to the two uses in question.
Amodei said the Pentagon also threatened to designate Anthropic as a “supply chain risk,” a label he said in his blog post is generally reserved for U.S. adversaries.
Amodei said the company would maintain its position despite the pressure.
“Regardless, these threats do not change our position: we cannot in good conscience accede to their request,” he wrote.
The Epoch Times reached out to the Pentagon for comment.
This is a developing story and will be updated.
