Anthropic hires former OpenAI researcher to set up 'Super Alignment Team' to strengthen AI safety and security
Anthropic, an AI company developing chat AI such as 'Claude,' has hired Jan Leike, a former OpenAI researcher, to set up a new 'Superalignment' team dedicated to strengthening AI safety and security.
Anthropic hires former OpenAI safety lead to head up new team | TechCrunch
https://techcrunch.com/2024/05/28/anthropic-hires-former-openai-safety-lead-to-head-up-new-team/
Anthropic's goal is to 'emphasize safety more than OpenAI,' and the newly established Superalignment team will focus on various aspects of AI safety and security, particularly 'scalable oversight,' 'weak-to-strong generalization,' and 'automated alignment research.'
I'm excited to join @AnthropicAI to continue the superalignment mission!
My new team will work on scalable oversight, weak-to-strong generalization, and automated alignment research.
If you're interested in joining, my dms are open.
— Jan Leike (@janleike) May 28, 2024
According to TechCrunch, Leike will report to Anthropic's chief science officer, Jared Kaplan, and Anthropic researchers currently working on scalable oversight will join Leike's team as it is formed.
✨ Woo! ✨
Jan led some seminally important work on technical AI safety and I'm thrilled to be working with him! We'll be leading twin teams aimed at different parts of the problem of aligning AI systems at human level and beyond. https://t.co/aqSFTnOEG0
— Sam Bowman (@sleepinyourhat) May 28, 2024
Leike previously co-led the Superalignment team at OpenAI with Ilya Sutskever, but left the company in May 2024, citing long-running disagreements with OpenAI leadership about the company's core priorities.
I joined because I thought OpenAI would be the best place in the world to do this research.
However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point.
— Jan Leike (@janleike) May 17, 2024
He also said that 'over the past years, safety culture and processes have taken a backseat to shiny products,' and argued that 'OpenAI must become a safety-first AGI company.'
But over the past years, safety culture and processes have taken a backseat to shiny products.
— Jan Leike (@janleike) May 17, 2024
Having lost key AI researchers such as Leike and Sutskever, OpenAI disbanded the Superalignment team in May 2024.
OpenAI's 'Super Alignment' team, which was researching the control and safety of superintelligence, has been disbanded, with a former executive saying 'flashy products are being prioritized over safety' - GIGAZINE
in Software, Posted by log1r_ut