Three AI experts testify before the US Congress: How do leading figures in AI research view the current state and risks of AI?



At a hearing held by the U.S. Senate Judiciary Committee on July 25, 2023, three experts on AI testified and made recommendations on how AI should be regulated and on its future outlook.

AI leaders warn Senate of twin risks: Moving too slow and moving too fast | TechCrunch
https://techcrunch.com/2023/07/25/ai-leaders-warn-senate-of-twin-risks-moving-too-slow-and-moving-too-fast/

My testimony in front of the US Senate - The urgency to act against AI threats to democracy, society and national security - Yoshua Bengio
https://yoshuabengio.org/2023/07/25/my-testimony-in-front-of-the-us-senate/

AI pioneer Yoshua Bengio tells Congress global AI rules are needed - The Washington Post
https://www.washingtonpost.com/technology/2023/07/25/ai-bengio-anthropic-senate-hearing/

At this hearing on AI oversight and regulation, three witnesses attended and answered questions from lawmakers: Dario Amodei, co-founder of Anthropic, an AI startup launched by former OpenAI members with funding from Google; Stuart Russell, a computer scientist at the University of California, Berkeley; and Yoshua Bengio, famous for his research on neural networks and deep learning.

Mr. Dario Amodei
Each expert was first asked what they considered the most important measures to take in the short term. In response, Amodei pointed to geopolitical and safety issues around the hardware essential for AI research, citing as an example TSMC, which would be exposed in the event of a crisis in Taiwan, and said that securing this supply chain is urgent.



Amodei also stressed that establishing testing and auditing processes like those used for cars and electronics is essential, though he noted that the science needed to establish them is still developing. Describing the current state of the AI industry, Amodei likened it to airplanes a few years after the Wright brothers' first flight: 'The need for regulation is clear, but it must be handled by a regulatory body with the flexibility and adaptability to respond to new developments.'

According to Amodei, the most concerning near-term AI risks are election misinformation, deepfakes, and propaganda. As a future risk, he also warned that cutting-edge AI could be used to develop dangerous viruses and other biological weapons in less than two years.

Mr. Stuart Russell
In the short term, Russell proposed four points: establishing an absolute right to know whether you are interacting with a person or a machine; outlawing, at any scale, algorithms that can decide to kill human beings; requiring a kill switch that prevents AI systems from duplicating themselves or breaking into other systems; and forcibly withdrawing from the market systems that break the rules.



The most immediate risk Russell sees is 'external influence campaigns' using personalized AI. 'Take a huge amount of information about an individual, for example everything they have published on Twitter and Facebook, feed it to an AI, and ask it to generate a disinformation campaign aimed at that person. And it is a piece of cake to do this to a million people at once. This would be far more effective than distributing fake news or spam emails that are not personalized,' Russell said.

Asked about China's AI capabilities, Russell said the country's level of expertise in AI in general is 'somewhat exaggerated,' adding that 'China has a pretty good academic sector, but it is being gutted.' According to Russell, China's large language models are imitative and do not threaten OpenAI or Anthropic, but in surveillance areas such as voice and gait recognition, China is, as one would expect, ahead of the liberal democracies.

Mr. Yoshua Bengio
Bengio proposed three measures: limiting who can access large AI models and creating incentives for security and safety; making it possible to verify that models behave as intended; and monitoring AI usage to track who has access to hardware powerful enough to build such models.



'We AI researchers don't really know what we're doing,' Bengio said, arguing that international cooperation, rather than countries acting alone, is essential, and stressing the need to fund global research on AI safety.

In a blog post reflecting on his testimony before Congress, Bengio also wrote, 'I believe we have a moral responsibility to mobilize our greatest minds and secure major investments in a bold, coordinated global effort to fully reap the economic and social benefits of AI, while protecting society, humanity and our shared future from its potential dangers.'
