OpenAI and Google DeepMind staff sign open letter warning that AI companies are running out of control



On June 4, 2024, current and former employees of OpenAI and Google DeepMind published an open letter accusing AI companies of excessive secrecy and inadequate internal oversight. In the letter, engineers at the forefront of AI development state that 'sufficient oversight is necessary to avoid the risks of AI technology while reaping its benefits,' and argue that current mechanisms neither adequately protect whistleblowers nor hold the companies accountable.

A Right to Warn about Advanced Artificial Intelligence

https://righttowarn.ai/

OpenAI and Google DeepMind workers warn of AI industry risks in open letter | Artificial intelligence (AI) | The Guardian
https://www.theguardian.com/technology/article/2024/jun/04/openai-google-ai-risks-letter

OpenAI, backed by Microsoft, and Google DeepMind, a subsidiary of Alphabet, are both pioneering AI developers with enormous capital behind them. OpenAI in particular, however, has drawn criticism from inside and outside the company over internal turmoil that began with the abrupt dismissal of CEO Sam Altman, and over an oppressive corporate culture that barred departing employees from criticizing the company.

OpenAI CEO Altman apologizes for 'aggressive tactics' used against departing employees in internal documents - GIGAZINE



The letter, titled 'A Right to Warn about Advanced Artificial Intelligence' and signed by current and former employees of OpenAI and Google DeepMind, argues that AI companies such as these two rely on broad non-disclosure agreements that shield them from being held accountable for their actions.

AI companies hold a large amount of non-public information about the capabilities, limitations, and risks of their systems, but they have few obligations to share this information with governments or civil society, so voluntary disclosure cannot be expected.

The letter also points out that because legal frameworks for AI technology lag behind the technology itself, ordinary whistleblower protections, which are designed around illegal corporate conduct, do not adequately cover employees who warn about risks that are not yet regulated.

The AI company employees wrote, 'We are current and former employees of cutting-edge AI companies and believe that AI technology has the potential to bring unprecedented benefits to humanity. We also understand the significant risks that AI technology poses and hope that with sufficient scientific, political and civil oversight, these risks can be sufficiently mitigated. However, we believe that AI companies have strong economic incentives to avoid effective oversight, and that their individual corporate governance structures are insufficient to change the status quo.'

To improve this situation, the letter calls on AI companies to do four things:

1. Avoid contracts that prohibit criticism, and do not retaliate against criticism.
2. Facilitate anonymous mechanisms for employees to raise concerns with management, regulators, and external organizations.
3. Allow employees to openly raise concerns about the risks of their company's AI, so long as doing so does not violate trade secrets or intellectual property rights.
4. Do not retaliate against employees who disclose risk-related confidential information when these goals cannot be achieved through other means.



Google did not respond to media requests for comment.

An OpenAI spokesperson said, 'We are proud of our track record of delivering the most capable and safe AI systems, and believe in a science-based approach to addressing risks. Given the importance of this technology, we agree that rigorous discussion is essential, and we will continue to engage with governments, civil society, and other communities around the world.'

The letter was signed by 11 current and former OpenAI employees and two current and former Google DeepMind employees. Signatories include Daniel Kokotajlo, who left OpenAI because he was not confident the company would act responsibly when artificial general intelligence (AGI) emerges, and William Saunders, a former member of OpenAI's Superalignment team, which was responsible for identifying potential risks related to AGI.
