OpenAI establishes 'Safety and Security Committee' including CEO Sam Altman and begins training GPT-4 successor



The OpenAI Board of Directors announced that it has established a Safety and Security Committee to make recommendations on critical safety and security decisions for OpenAI's projects and operations, and that the company has begun training the next generation of models to succeed the GPT-4 series.

OpenAI Board Forms Safety and Security Committee | OpenAI

https://openai.com/index/openai-board-forms-safety-and-security-committee/



OpenAI trains its next major AI model, forms new safety committee | Ars Technica
https://arstechnica.com/information-technology/2024/05/openai-training-its-next-major-ai-model-forms-new-safety-committee/

OpenAI had a 'Superalignment' team that studied the safety and control of artificial general intelligence (AGI), but the team's researchers, including its co-leads Ilya Sutskever and Jan Leike, reportedly left OpenAI one after another, and the team was dissolved.

OpenAI's 'Superalignment' team, which was researching the control and safety of superintelligence, has been disbanded, with a former executive saying 'flashy products are being prioritized over safety' - GIGAZINE



Leike, who has since left the company, said, 'At OpenAI, flashy products are prioritized over safety,' sounding the alarm that safety is being neglected within OpenAI as it develops AGI. In response, Altman said, 'We will continue to conduct research into safety.'

The newly established Safety and Security Committee is led by CEO Sam Altman, Board Chair Bret Taylor, and Board members Adam D'Angelo and Nicole Seligman. Other members include OpenAI's Head of Preparedness Aleksander Madry, Head of Safety Systems Lilian Weng, Head of Alignment Science John Schulman, Head of Security Matt Knight, and Chief Scientist Jakub Pachocki. In addition, OpenAI advisors Rob Joyce and John Carlin will also advise the Safety and Security Committee.



The Safety and Security Committee will spend its first 90 days evaluating OpenAI's processes and safeguards, then present its recommendations to the full Board. After the Board's review, OpenAI will publicly share an update on the adopted recommendations in a manner consistent with safety and security.

What OpenAI means by 'safety and security' is set out in a statement the company released on May 21, 2024; it covers a broad range of 'safety' work, including alignment research, protecting children, preserving election integrity, assessing societal impacts, and implementing security measures.



OpenAI also stated, 'We have recently begun training our next frontier model, and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI,' revealing that it is developing a next-generation model to succeed GPT-4.

However, it is unclear whether this 'next-generation model' is intended to be GPT-5 or something beyond it. IT news site Ars Technica noted that 'there are persistent rumors that progress in large language models has plateaued around the GPT-4 level,' suggesting that the new model may be less a major upgrade of the underlying model than an evolution of the interface, as with GPT-4o.

In addition, Leike, who criticized OpenAI for neglecting safety, has moved to rival Anthropic, where he joined a newly established superalignment team.

Anthropic hires former OpenAI researchers to set up 'Superalignment Team' to strengthen AI safety and security - GIGAZINE

in Note, Software, Posted by log1i_yk