Safe Superintelligence, the new startup from OpenAI co-founder Ilya Sutskever, raises $1 billion at a reported $5 billion valuation



Safe Superintelligence, an AI-safety startup led by Ilya Sutskever, co-founder and former chief scientist of OpenAI, has raised $1 billion in funding just three months after its founding.

Exclusive: OpenAI co-founder Sutskever's new safety-focused AI startup SSI raises $1 billion | Reuters
https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-sutskevers-new-safety-focused-ai-startup-ssi-raises-1-billion-2024-09-04/

Safe Superintelligence Raises $1 Billion, Reaching $5 Billion Valuation in Just Three Months
https://www.maginative.com/article/safe-superintelligence-raises-1-billion-reaching-5-billion-valuation-in-just-three-months/

OpenAI co-founder raises $1B to build safe superintelligence
https://www.thestack.technology/openai-ssi-ilya-sutskever/

Safe Superintelligence has raised $1 billion to develop safe artificial intelligence systems whose capabilities far surpass those of humans, Reuters reported. The company declined to disclose its valuation, but sources said it is worth about $5 billion.

For comparison, the approximate valuations of other major AI companies are reported to be $100 billion for OpenAI, $24 billion for xAI, and $6 billion for Mistral AI.



In addition to Sutskever, Safe Superintelligence counts several well-known figures among its roughly 10 employees, including former OpenAI researcher Daniel Levy and Daniel Gross, who previously led Apple's AI initiatives. Reuters pointed out that a startup with no visible track record and such a small team attracting $1 billion in funding 'shows that investors are betting big on talented people focused on fundamental AI research.'

'It's important to us to be surrounded by investors who understand, respect and support our mission of making a single-minded push toward safe superintelligence,' Gross told Reuters in an interview. 'We will spend two to three years on research and development before bringing a product to market.'



Sutskever, who founded Safe Superintelligence, previously led OpenAI's Superalignment team, which worked on improving AI safety. Like that team, Safe Superintelligence researches how to advance AI capabilities while treating safety as the top priority, but it differs from OpenAI in that it operates as a conventional for-profit business. OpenAI, by contrast, uses a capped-profit structure, and any profits generated beyond the cap are returned to its nonprofit parent for the 'benefit of humanity.'
