OpenAI, which develops GPT-4 and ChatGPT, announces "Our approach to AI safety"
Generative AI, which produces content such as images and text, continues to evolve rapidly, but calls for legal regulation of AI are also growing. Against this backdrop, OpenAI, the developer of the large language model GPT-4 and the conversational AI ChatGPT, has published "Our approach to AI safety."
Our approach to AI safety
https://openai.com/blog/our-approach-to-ai-safety
◆ Building a safer AI system
OpenAI says it "rigorously tests new systems before release, solicits feedback from external experts, uses techniques such as reinforcement learning from human feedback (RLHF) to improve model behavior, and builds broad safety and monitoring systems." For example, after GPT-4 finished training, OpenAI spent more than six months working across the organization to make the model safer and more aligned before releasing it publicly.
"We believe that powerful AI systems should be subject to rigorous safety evaluations," OpenAI says. "Regulation is needed to ensure that such practices are adopted, and we actively engage with governments on the best form that regulation could take."
◆ Learning from real-world use to improve safeguards
Even with extensive research and testing, it is impossible to predict every way people will use a technology, and there are limits to what can be learned in a lab. OpenAI says that "learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time," and that it therefore releases new AI systems cautiously and gradually to a steadily broadening group of people, with substantial safeguards in place, making continuous improvements based on the lessons it learns.
OpenAI also makes its most capable models available through its own services and an API so that developers can build the technology directly into their apps. This lets OpenAI monitor for misuse and take action against it, and continually build mitigations that respond to the real ways its AI systems are abused rather than only to hypothetical ones.
"Crucially, we believe that society must have time to update and adjust to increasingly capable AI, and that everyone who is affected by this technology should have a significant say in how AI develops further," OpenAI says.
◆ Protecting children
As part of its safety efforts, OpenAI requires that users of its AI tools be 18 or older, or 13 or older with parental approval, and says it is looking into verification options. OpenAI also does not permit its technology to be used to generate content in categories such as hate, harassment, violence, and adult content, and GPT-4 is 82% less likely than GPT-3.5 to respond to requests for disallowed content. OpenAI has also declared that it will block and report any attempt to upload child sexual abuse material to its image tools.
OpenAI is also working with the nonprofit Khan Academy to build an AI-powered assistant that serves as a virtual tutor for students and a classroom assistant for teachers.
◆ Respecting privacy
OpenAI says, "Our large language models are trained on a broad corpus of text that includes publicly available content, licensed content, and content generated by human reviewers. We don't use data for selling our services, advertising, or building profiles of people."
Also, because the training data includes personal information that is publicly available on the internet, OpenAI removes personal information from the training dataset where feasible, fine-tunes its models to reject requests for the personal information of private individuals, and responds to requests from individuals to have their personal information deleted from its systems. These steps minimize the possibility that the models will generate responses that include personal information.
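OpenAI has not published how it removes personal information from training data, but as a purely illustrative sketch, a first-pass filter might redact obvious patterns such as email addresses and phone numbers. The patterns and placeholder tokens below are hypothetical; a real pipeline would rely on far more robust detection:

```python
import re

# Hypothetical patterns for two common kinds of personal information.
# A production pipeline would use stronger detection (e.g. NER models).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace detected personal information with placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or 555-123-4567."
print(scrub_pii(sample))
# → Contact Jane at [EMAIL] or [PHONE].
```

Regex-based scrubbing like this misses names, addresses, and context-dependent identifiers, which is one reason the article notes that OpenAI also fine-tunes its models to refuse requests for personal information.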
◆ Improving factual accuracy
ChatGPT, which is built on the large language model GPT-4, generates natural-sounding text, but it does so by stochastically predicting likely next words rather than by consulting a store of verified facts. As a result, its output is not necessarily true.
OpenAI therefore uses user feedback on ChatGPT outputs that were flagged as incorrect as a main source of data for improving factual accuracy. As a result, GPT-4 is 40% more likely than GPT-3.5 to produce factual content.
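The stochastic generation described above can be illustrated with a toy sketch (not OpenAI's actual model): given a probability distribution over candidate next words, the model samples rather than always picking the most likely option, so repeated runs can yield different, and sometimes untrue, continuations. The distribution and probabilities here are invented for illustration:

```python
import random

# Toy next-word distribution for the prompt "The capital of Australia is".
# The most probable candidate here is factually wrong, illustrating why
# sampled text is not guaranteed to be true.
next_word_probs = {
    "Sydney": 0.5,      # common misconception
    "Canberra": 0.4,    # correct answer
    "Melbourne": 0.1,
}

def sample_next_word(probs: dict[str, float], rng: random.Random) -> str:
    """Sample one word according to its probability (roulette-wheel selection)."""
    r = rng.random()
    cumulative = 0.0
    for word, p in probs.items():
        cumulative += p
        if r < cumulative:
            return word
    return word  # fallback for floating-point rounding at the upper edge

rng = random.Random(0)
print([sample_next_word(next_word_probs, rng) for _ in range(5)])
```

Because each call draws a fresh random number, the wrong answer "Sydney" appears roughly half the time; real language models sample over tens of thousands of tokens, but the same principle explains why their output is probabilistic rather than guaranteed to be factual.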
◆ Ongoing research and engagement
OpenAI argues that a pragmatic approach to solving AI safety concerns is "to dedicate more time and resources to researching effective mitigations and alignment techniques and testing them against real-world abuse," and that improving AI safety and improving AI capability should go hand in hand. "Policymakers and AI providers will need to ensure that AI development and deployment is governed effectively at a global scale. It's a difficult challenge, but it's one we are eager to contribute to solving," OpenAI says.
in Software, Posted by log1i_yk