AI development company Anthropic is working with nuclear experts to investigate the safety of its AI 'Claude' and ensure it does not assist in weapons development



Anthropic, a startup known for developing AI, is working with nuclear experts from the Department of Energy's National Nuclear Security Administration (NNSA) on safety studies to verify that its AI 'Claude' does not assist in weapons manufacturing.

Anthropic, feds test whether Claude AI will share nuclear sensitive info

https://www.axios.com/2024/11/14/anthropic-claude-nuclear-information-safety



AI Nuclear Risk Potential: Anthropic Teams Up with US Energy Department For Red-Teaming - WinBuzzer

https://winbuzzer.com/2024/11/14/ai-nuclear-risk-potential-anthropic-teams-up-with-us-energy-department-for-red-teaming-xcxwbn/

As the use of AI advances across various fields, military applications are also under consideration; the U.S. Department of Defense already uses AI for purposes such as outer space surveillance.

The US Department of Defense aims to use AI for military purposes such as 'operating autonomous drones,' 'monitoring threats in outer space,' and 'maintaining aircraft and soldiers' - GIGAZINE



At the same time, precisely because AI is so capable, there are concerns that chatbots alone could aid the development of new biological weapons, and countermeasures and regulations are being considered.

'AI-developed biological weapons' become a national security concern as the U.S. government and AI companies begin considering regulations - GIGAZINE



According to the news site Axios, Anthropic and the National Nuclear Security Administration are conducting a safety investigation into whether Anthropic's AI 'Claude 3.5 Sonnet' will share potentially dangerous, sensitive nuclear-related information.

Claude 3.5 Sonnet was released to the public on June 21, 2024, while the research program has been running since April 2024, before the model's release.

Introducing Claude 3.5 Sonnet \ Anthropic

https://www.anthropic.com/news/claude-3-5-sonnet



According to Axios, Marina Favaro, who is in charge of national security policy at Anthropic, said, 'U.S. industry has been ahead in developing frontier models, and the federal government has the unique expertise needed to evaluate AI systems for specific national security risks. This effort will help developers build stronger safeguards for the frontier models that drive responsible innovation and American leadership.'

'AI is one of those 'game-changing' technologies that is at the top of many of our discussions,' said Wendin Smith, NNSA's deputy administrator for counterterrorism and counterproliferation. 'At the same time, it is also a national security imperative that we evaluate and explore AI's capability to generate outputs that could indicate classification or radiological risks.'

In addition, Anthropic, together with OpenAI, signed an agreement with the AI Safety Institute in August 2024 to have its AI tested for national security risks before releasing it to the public.

in Note, Posted by logc_nt