Many fake signatures found on the open letter calling for a temporary pause in the ``out-of-control AI development race'', as AI researchers continue to push back against the letter



An open letter signed by many prominent figures, including Elon Musk and Steve Wozniak, calls for a pause in AI development on the grounds that ``the AI development race is becoming uncontrollable and will have a serious adverse effect on society.'' It has been reported that the letter contained multiple fake signatures, including that of Chinese President Xi Jinping. AI researchers have also pushed back against the letter's premise, arguing that the claims of the Future of Life Institute, the non-profit organization behind the letter, are exaggerated and unrealistic.

The Open Letter to Stop 'Dangerous' AI Race Is a Huge Mess
https://www.vice.com/en/article/qjvppm/the-open-letter-to-stop-dangerous-ai-race-is-a-huge-mess



The Future of Life Institute, a non-profit organization established to promote the ethical development of artificial intelligence, published an open letter in March 2023 arguing that AI systems such as OpenAI's GPT-4 pose serious risks to society and humanity and calling for the training of cutting-edge AI to be halted for at least six months. The letter has collected more than 1,000 signatures, including those of Tesla CEO Elon Musk and Apple co-founder Steve Wozniak.

More than 1,300 people, including Elon Musk and Steve Wozniak, sign an open letter asking all engineers to immediately pause development of AI more powerful than GPT-4 for six months over fears of ``loss of control'' - GIGAZINE



The Future of Life Institute claims that the authenticity of the signatures was ``independently verified through direct communication,'' but shortly after the letter was released it became clear that some of the signatures were fake or impersonated. According to overseas media outlet Motherboard, no one from OpenAI, which develops and commercializes the GPT series, should have been among the signers, yet the name of OpenAI CEO Sam Altman appeared on the list.

Similarly, Meta's chief AI scientist Yann LeCun, whose name was also listed as a signer, denied having signed, saying he did not agree with the premise of the letter.



Furthermore, the fact that the name of Chinese President Xi Jinping appeared among the signers was cited as evidence that the signatures may not have been verified at all.



The contents of the letter have also come under fire. Emily M. Bender, a professor at the University of Washington and an author of a paper cited in the open letter, tweeted that ``the open letter is dripping with AI hype'' and accused the letter of misusing her own research.

In the letter, the Future of Life Institute cited Bender's research to argue that ``AI systems with human-competitive intelligence can pose profound risks to society and humanity.'' Bender countered, ``My research deals with current large-scale language models, not some fictional future AI,'' arguing that the harms of AI are not a distant-future problem but a present-day one.

Bender also pointed out that ``ChatGPT can be reliably controlled simply by not deploying it as an easily accessible source of information,'' rejecting the claim that AI is becoming something uncontrollable that even its developers cannot understand.



Such discrepancies are said to stem from the ideological leanings of the Future of Life Institute. Motherboard describes the Future of Life Institute as a supporter of ``longtermism,'' a sort of secular religion favored by Silicon Valley's tech elite that preaches massive investment in humanity's distant future.

Longtermism focuses on improving the distant future and averting global catastrophe scenarios rather than on concrete efforts to address existing problems. Critics argue that it lionizes the ultra-rich, who can direct vast sums of money toward the future, and can be used to justify morally questionable practices. Elon Musk has previously endorsed longtermist ideas, and former FTX CEO Sam Bankman-Fried, who was arrested on fraud charges, is also a well-known supporter.

Professor Arvind Narayanan of Princeton University said, ``Of course, large language models will affect labor, and we should plan for that, but the idea that they will soon replace experts is nonsense,'' adding, ``I think existential risk to humanity is a valid long-term concern, but it has repeatedly been deployed strategically to divert attention from present-day harms such as real-world information security and safety risks.'' He criticized the tone of the open letter as a change of subject that exaggerates the threat of AI while ignoring the problems it is already causing.

in Software, Posted by log1l_ks