ChatGPT developer OpenAI calls for the launch of a global regulatory body to prepare for the emergence of 'superintelligent AI', arguing that 'AI will exceed the skill level of experts in most fields within 10 years'

On May 22, 2023, local time, OpenAI, the AI research organization behind the conversational AI 'ChatGPT' and the large-scale language model 'GPT-4', advocated the need to launch an international regulatory body to promote the safe development of AI, in anticipation of the emergence of 'superintelligence': AI that will exceed the skill level of experts and carry out advanced productive activities and research.

Governance of superintelligence

On May 22, 2023, OpenAI proposed the launch of an international oversight organization and regulatory body for AI research, in a statement jointly authored by CEO Sam Altman, Greg Brockman, and Ilya Sutskever.

'Within the next 10 years, AI systems will exceed the skill level of experts in most fields and perform highly productive activities on a scale comparable to that of large corporations,' OpenAI said. The company calls such AI 'superintelligence' and envisions it being more powerful than any of the technologies humanity has dealt with so far.

OpenAI also argues that superintelligence carries both potential benefits and risks, and that special coordination is needed to reduce those risks.

Regarding the development of superintelligence, OpenAI argues that 'major governments around the world should launch projects to limit the pace of AI growth, in order to enable smooth integration with society while maintaining safety,' and, alternatively, that individual companies should collectively agree to take responsibility for developing it safely.

The company also advocates launching an international regulatory body, modeled on the International Atomic Energy Agency, that would require inspections and audits of AI systems, test for compliance with safety standards, and place limits on deployment and security levels. 'It is important that such an international regulatory body focus on reducing risk, and not on issues that should be left to individual countries or governments, such as defining what an AI should be allowed to say,' OpenAI said.

In addition, OpenAI stated that 'the technical capability to develop superintelligence safely' is needed, adding, 'We have put a great deal of effort into this so far.'

On the other hand, OpenAI says, 'As an exception, it is important to allow companies and open-source projects to develop models below a certain capability threshold without being subject to regulations such as licenses and audits.' However, Elon Musk, who invested in OpenAI at its founding and later resigned as a director after their relationship deteriorated, responded to a tweet criticizing these remarks by OpenAI, the developer of high-performance AI models such as GPT-4, with a bullseye ('hitting the target') emoji.

OpenAI said, 'Although today's AI systems carry potential risks, they also create tremendous value for the world,' and argues that regulations, audits, and standards should not be applied to AI technology that does not reach the level of superintelligence.

Regarding superintelligence, OpenAI says, 'We believe it will lead to a far better world than what today's AI has achieved in areas such as education, creative work, and productivity, but we need to solve the problem of managing its risks.'

'The cost of building superintelligence is decreasing year by year, and the number of AI research institutions is increasing rapidly, so it will be difficult to stop the creation of superintelligence in the future,' OpenAI said. 'The benefits of creating superintelligence are enormous, and developing it is part of the intrinsic goal of AI development organizations like OpenAI. To develop it safely, we need a global oversight system, but we need to get it right, as there is no guarantee that such a body will work.'

in Software, Posted by log1r_ut