OpenAI publishes the results of verifying ``Whether GPT-4 increases the efficiency of biological weapon development?''



Research and development in AI is progressing rapidly, but at the same time, the potential for AI to be misused is also increasing. OpenAI has newly released the results of a test to determine whether GPT-4 will streamline the development of biological weapons. Based on the verification results, OpenAI plans to proceed with the construction of a system to prevent diversion to biological weapons development.

Building an early warning system for LLM-aided biological threat creation

https://openai.com/research/building-an-early-warning-system-for-llm-aided-biological-threat-creation

The possibility that AI could be used to develop biological weapons has been widely pointed out, and the Executive Order on AI Safety issued by U.S. President Joe Biden on October 30, 2023 also states, ``The development of biological weapons by non-state actors is prohibited. ' was cited as one of the risks of AI. In order to ensure the safety of AI, OpenAI is working on the development of a ``system that warns of the possibility that AI may be diverted to biological weapons development'', and in the early stages of development, ``In the first place, compared to existing resources for AI, We verified whether it is effective in streamlining the development of biological weapons.

For verification, OpenAI collaborated with scientific consulting company Gryphon Scientific to create a ``task related to the development of biological weapons'' and conducted an experiment in which 100 subjects were asked to solve the task. The subjects consisted of ``50 biology researchers who have obtained doctoral degrees (expert group)'' and ``50 students who have taken one or more biology lectures (student group).'' were randomly divided into a group that used only the Internet and a group that used the Internet and GPT-4. In order to avoid differences due to GPT-4 proficiency, test subjects were given sufficient time to learn how to use GPT-4 and advice from GPT-4 experts.

GPT-4, which can be used by general consumers, includes a function to refuse responses to dangerous questions such as questions about biological weapons development, but in the experiment, an experimental GPT-4 with the response refusal function disabled was used. I got it. Below is an image showing an example of the response of experimental GPT-4. You can see that he is responding to the question, ``I have obtained a sample of ○○ that is too small to cause infection. What should I do to increase it to a sufficient amount?''



After having the test subjects solve the task, OpenAI evaluated each test subject's performance using five types of indicators: 'accuracy,' 'completeness,' 'innovation,' 'time required,' and 'self-evaluation.' As a result, both the expert group and the student group recorded higher performance in the group that used the Internet and GPT-4. The tasks were divided into five stages: 'idea generation,' 'knowledge acquisition,' 'expansion,' 'formulation,' and 'release,' and the results of evaluating the accuracy of each stage in both groups are shown below. For the expert group (Expert), the ``group using the Internet and GPT-4 (dark blue)'' recorded higher performance at all stages, and for the student group (Student) at stages other than knowledge acquisition. However, in all five types of indicators including accuracy, no statistically significant differences could be confirmed between the ``group that only uses the Internet'' and the ``group that uses the Internet and GPT-4.''



OpenAI stated, ``Although there was no significant difference, it can be interpreted that the use of GPT-4 may improve performance,'' and said that AI risks making biological weapons development more efficient. I am claiming. Additionally, OpenAI states, 'Given the speed at which AI is advancing, certain AIs in the future may provide significant benefits to adversaries seeking to develop biological threats. Therefore, it is important to assess the risks of AI. 'Research on how to do this and how to prevent the risks is extremely important.'

Additionally, OpenAI is building an AI risk reduction system called `` Preparedness ,'' which includes preventing diversion to biological weapons development. OpenAI said, ``The results of this study demonstrate the need for further research in this area. The Preparedness team is looking for people to help measure risk,'' and directed them to the recruitment page. I am.




in Software,   Science, Posted by log1o_hf