Conflict between Anthropic and the federal government


Anthropic Public Benefit Corporation, one of the world’s leading artificial intelligence companies, has been designated a “supply chain risk to national security” by Secretary of War Pete Hegseth.

Back story:

The conflict between the Defense Department and Anthropic centers on the military’s demand for unrestricted access to Anthropic’s artificial intelligence system, known as Claude.

Anthropic, based in San Francisco, was formed as a public benefit corporation committed to the responsible development of AI.

Digging Deep:

Anthropic is refusing to allow the government to use its systems to create and operate autonomous weapons or to conduct mass surveillance of Americans.

Ann Skeet, senior director at the Center for Applied Ethics at Santa Clara University, noted that Anthropic claims Claude is built with an inherent instruction to make ethical decisions, and that the military’s demands run counter to that design.

“I think what (they’re) saying is that the models are not able to safely support those uses right now, especially autonomous weapons. I think that’s what the company is trying to say: Can we just slow down and make sure we’re doing the right thing here?” Skeet said.

However, that hesitation has led Hegseth to declare Anthropic a supply chain risk to the government, a designation typically reserved for companies tied to foreign adversaries.

This designation blacklists Anthropic from the entire US defense system and, possibly, the broader federal government.

What they are saying:

Longtime tech expert Larry Magid said Anthropic is doing what it’s supposed to do.

“If you have a product capable, even if unlikely, of causing enormous harm, you don’t want to put that product in a situation where it can cause harm,” Magid said.

AI is not perfect or always correct, Magid said, and extra caution should be taken when applying the technology to weapons.

“Anyone who knows anything about AI knows that once you essentially put a gun in its hands, you run the risk that it could fire that gun and the possibility that it could make a mistake,” Magid said.

Additionally, mass surveillance – which is useful in military conflict zones – could be overly intrusive into American society.

Representative Ro Khanna (D-Santa Clara) said, “I do not want technology used by the federal government to conduct mass surveillance of American citizens.”

Anthropic has a few months before its separation from the US government takes effect, and mediators are trying to resolve differences between the two parties.

Source: Original reporting by KTVU’s Tom Wacker
