Anthropic's AGI Safety Officer Moves to OpenAI as Head of AI Readiness



Dylan Scandinaro, who led safety measures for artificial general intelligence (AGI) at Anthropic, an AI company founded by former members of OpenAI, has been appointed Head of Readiness at rival OpenAI.

OpenAI Fills Safety Job Listed at $555,000 With Anthropic Hire - Bloomberg
https://www.bloomberg.com/news/articles/2026-02-03/openai-fills-safety-job-listed-at-555-000-with-anthropic-hire

OpenAI poached its new safety executive from Anthropic. | The Verge
https://www.theverge.com/ai-artificial-intelligence/873420/openai-poached-its-new-safety-executive-from-anthropic

According to the business social networking site LinkedIn, Scandinaro graduated from Dartmouth College and served as a product manager at AI-related companies including Unity Technologies, developer of the Unity game engine; Everyday Robots, a robotics developer spun off from the former Google X; and DeepMind. At Anthropic, he was responsible for AGI safety measures.

OpenAI CEO Sam Altman announced that he had hired Scandinaro away from Anthropic.

Altman said on his X account, 'I am very excited to announce that Dylan Scandinaro will be joining OpenAI as Head of Readiness. Things are moving forward very quickly and we will soon be working with very powerful models. This requires appropriate safeguards to ensure we continue to deliver significant benefits. Dylan will lead our efforts to prepare for and mitigate these serious risks. He is by far the best candidate I have ever met for this role. He certainly has a big job ahead of him, but I think he will sleep well tonight. I look forward to working very closely with him to achieve the changes we need across the company.'



Scandinaro also quoted Altman's post, writing, 'I will be joining OpenAI as Head of Readiness. I am deeply grateful for my time at Anthropic and the incredible people I worked with. AI is evolving rapidly. The potential benefits are great, as are the extreme and irreversible risks. There is so much work to do and so little time!'



According to a Bloomberg report, OpenAI's Chief Preparedness Officer will oversee AI risk and safety, preparing for the safe development and release of the company's AI and developing risk-response measures. The report notes that OpenAI listed the position in December 2025 with an annual salary of up to $555,000 (approximately 87 million yen).

In response to a lawsuit alleging that ChatGPT encouraged a child's suicide, OpenAI has stepped up its safety measures, adding parental control features to ChatGPT and making proposals to politicians who are drafting safety standards for teenagers using AI.

in AI, Posted by logu_ii