What was 'Project Mario,' secretly pursued by Google subsidiary DeepMind? The impasse over AGI safety measures is now revealed.

DeepMind, acquired by Google in January 2014, secretly sought a highly independent governance structure to prevent its future AGI (artificial general intelligence) from being driven solely by the parent company's interests. Co-founders Demis Hassabis and Mustafa Suleyman explored a series of safeguards over several years starting in the fall of 2015, but external oversight and the establishment of committees proved ineffective, and they ultimately appear to have concluded that they had no choice but to exert influence from within the company themselves.
Project Mario - Colossus
https://colossus.com/article/project-mario-demis-hassabis-deepmind-mallaby/

We're publishing an exclusive chapter from @scmallaby 's brilliant new book about Demis Hassabis and DeepMind.
— Colossus (@colossusmag) March 31, 2026
This is the inside story of Project Mario. How DeepMind's co-founders spent 4 years trying every mechanism they could think of to put guardrails around AGI, only to… pic.twitter.com/acSas1dRzE
Colossus, which covers business, investment, and technology, has published an excerpt from Sebastian Mallaby's book 'The Infinity Machine' that discusses DeepMind's 'Project Mario.'
In August 2015, Hassabis and Suleyman met at SpaceX to discuss how to safely handle AGI, which is described as 'AI that can perform a wide range of tasks like a human.' The meeting was attended by SpaceX founder Elon Musk, LinkedIn co-founder Reid Hoffman, and Google executives, but they were divided on who should manage the AI and how, and were unable to reach an agreement.
Following this meeting, an initiative called 'Project Mario' began within DeepMind. This initiative aimed to make DeepMind less susceptible to the dictates of its parent company, Google, in order to move it into a semi-independent state while remaining under Google's umbrella. The reason for the name 'Project Mario' has not been revealed, but it is said that DeepMind executives liked to use mysterious code names.
Then, on August 10, 2015, Google announced the establishment of Alphabet, shifting to a structure in which specialized businesses operate as semi-independent 'bet' companies under the holding company.
Alphabet, the giant corporation that becomes Google's parent, is born - GIGAZINE

During the restructuring, Don Harrison, Google's head of M&A, suggested to Hassabis and Suleyman that 'this restructuring could restore some degree of DeepMind's independence.' This led to the proposal to establish a board of directors consisting of three from DeepMind, three from Alphabet, and three independent members from outside the company.
In 2016, negotiations surrounding this proposal became concrete, and Hassabis held numerous discussions with Larry Page, Google co-founder and then-leader of parent company Alphabet. However, in November 2016, Google CEO Sundar Pichai opposed the proposal, stating that 'AI is a crucial technology that is deeply involved not only in areas such as autonomous driving and life extension, but also in Google's core businesses such as search and cloud computing.'
Nevertheless, Project Mario continued. In discussions around January 2017, proposals were considered to raise funds from external investors and separate from Google, or to operate under a legal entity other than a for-profit company, but the deadlock remained unresolved.
Furthermore, in early 2018, DeepMind even presented documents to Alphabet's board of directors arguing that 'unprecedented technology requires an unprecedented organizational structure,' but even then, the level of independence that DeepMind desired was not achieved.
Mallaby explains that the meeting at SpaceX failed because each participant had their own position and motives, and that the proposal for a board with three members from DeepMind, three from Alphabet, and three from outside also hit a wall because the external members would likely be similarly swayed by their own positions and judgments.
As an example of these issues, Mallaby cites DeepMind's healthcare business, DeepMind Health. DeepMind Health was overseen by a committee of external experts, but the committee was disbanded when Google absorbed the business. Mallaby writes that even with external experts, committee members tend to prioritize their own reputation and position, making such a system seem unreliable to Google.
In a similar vein, Google's external advisory committee, established in 2019, was disbanded after a short period.
Google's external advisory committee supporting AI development disbands shortly after its launch - GIGAZINE

OpenAI also operated under a structure that combined non-profit and for-profit elements, but the weaknesses of this system became apparent in 2023 when the board of directors attempted to remove CEO Sam Altman.
What really happened behind the scenes of the OpenAI CEO Sam Altman's dismissal, which caused a global uproar? - GIGAZINE

Hassabis says that even if you create a board of directors or a charter outlining safety principles, there is no guarantee it will work when it really matters, and that it is unrealistic to decide years in advance 'how far we'll go and where we'll stop.'
Mallaby explains that through these trials and failures, Hassabis and Suleyman came to the conclusion that 'it is more realistic for us to exert influence from inside the company than to design safety mechanisms from the outside.'