More than 350 AI researchers and company CEOs, including OpenAI CEO Sam Altman, sign a statement warning that "mitigating the risk of extinction from AI" should be treated as a priority on a par with nuclear war and pandemic countermeasures



More than 350 CEOs and researchers from AI organizations such as Google DeepMind, Anthropic, and OpenAI have signed a statement asserting that "reducing the risk of human extinction from AI is a global priority on a par with nuclear war and pandemics," with the aim of informing policymakers of the risks posed by AI.

Statement on AI Risk | CAIS
https://www.safe.ai/statement-on-ai-risk



AI Extinction Statement Press Release | CAIS
https://www.safe.ai/press-release



Top AI researchers and CEOs warn against 'risk of extinction' in 22-word statement - The Verge
https://www.theverge.com/2023/5/30/23742005/ai-risk-warning-22-word-statement-google-deepmind-openai

OpenAI execs warn of “risk of extinction” from artificial intelligence in new open letter | Ars Technica
https://arstechnica.com/information-technology/2023/05/openai-execs-warn-of-risk-of-extinction-from-artificial-intelligence-in-new-open-letter/

The Center for AI Safety (CAIS), a non-profit organization that promotes the responsible and ethical development of AI, published a statement on May 30, 2023, declaring that "mitigating the risk of extinction from AI should be recognized as a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The statement was endorsed by more than 350 signatories, including OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei, as well as researchers such as Yoshua Bengio and Stuart Russell.

CAIS Director Dan Hendrycks said that, just as the concept of a "pandemic" was not on the general public's mind before the spread of COVID-19, it is not "premature" to put safeguards in place for risks that have not yet materialized.



Hendrycks also argued that, while addressing today's AI issues, the AI industry and governments around the world need to grapple with the risk that future AI could pose a threat to human existence.

"Reducing the risk of AI-induced extinction requires global action. The world has successfully cooperated to reduce the risk of nuclear war, and mitigating the dangers of future AI demands the same level of effort," Hendrycks said. "Our statement was kept very brief and does not set out how to mitigate the threat posed by AI, because our aim was to avoid disagreements among the AI researchers and company CEOs who signed it."

Author Daniel Jeffries, on the other hand, criticized the letter, saying, "The risk of future AI-induced extinction is a fictitious problem, and you cannot fix something that is fictitious; I would rather people spend their time solving today's problems." He added, "Trying to solve imaginary future problems is a waste of time. Solving today's problems will naturally lead to solving tomorrow's."




Meta's Yann LeCun also commented, "Since no AI that exceeds human intelligence exists yet, I think it is premature to place the extinction risk of AI on the same level as pandemics and nuclear war. We should discuss how to develop AI safely once AI with at least dog-level intelligence has emerged."




In March 2023, the Future of Life Institute published an open letter calling for an immediate six-month pause on the development of AI systems more powerful than the large-scale language model GPT-4; that letter was signed by Elon Musk and Apple co-founder Steve Wozniak.

More than 1,300 people, including Elon Musk and Steve Wozniak, sign an open letter asking all engineers to immediately pause development of AI beyond GPT-4 for six months over fears of a "loss of control" - GIGAZINE
