OpenAI discovered that its internal messaging system had been hacked and that information related to its AI technology had been stolen, but the company kept the breach secret, notifying neither the FBI nor the public.



It has been revealed that OpenAI, the AI company known for developing the chat AI ChatGPT, was hacked in early 2023.

A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too - The New York Times
https://www.nytimes.com/2024/07/04/technology/openai-hack.html



NYT: Hacker stole details about OpenAI's AI tech, but the company kept it quiet - Neowin
https://www.neowin.net/news/nyt-hacker-stole-details-about-openais-ai-tech-but-the-company-kept-it-quiet/

OpenAI Was Hacked but Never Told the FBI - The Daily Beast
https://www.thedailybeast.com/openai-was-hacked-but-never-told-fbi

The New York Times reported that in early 2023, hackers gained access to OpenAI's internal messaging system and stole detailed information about the design of the company's AI technology. According to a source familiar with the incident, the hackers accessed an online forum where OpenAI employees discuss the company's latest technology and monitored those discussions. However, they were unable to penetrate the systems where OpenAI builds and stores its AI technology.

OpenAI executives notified employees of the breach at a company-wide meeting held at the San Francisco office in April 2023, which was also when the incident was first reported to the board of directors. However, because no AI technology had actually been stolen, and no confidential information about customers or partners had been compromised, executives decided not to make the incident public. In addition, because the hacker was believed to be a private individual with no ties to any foreign government, the breach was judged not to be a national security threat, and no report was made to law enforcement agencies, including the Federal Bureau of Investigation (FBI).

But some OpenAI employees worried that America's adversaries, such as China, could hack the company and steal its AI technology. The breach also raised questions about how seriously OpenAI takes security, 'opening a rift within the company,' The New York Times reported.



One such employee was Leopold Aschenbrenner, who served as a technical program manager at OpenAI. He argued to the board of directors that 'OpenAI has not taken sufficient measures to prevent the theft of confidential information by the Chinese government and other hostile foreign actors,' and appealed for stronger security. Aschenbrenner was subsequently fired in the spring of 2024 for leaking confidential information outside the company. On a podcast, he asserted that 'OpenAI's security is not strong enough to prevent the theft of important confidential information even if a foreigner breaks into the company,' and claimed he was actually fired because executives found his warnings inconvenient.

Regarding Aschenbrenner's firing, OpenAI spokesperson Liz Bourgeois told The New York Times, 'We appreciate the concerns that Leopold raised during his tenure at OpenAI, but they did not lead to his departure.' She explained that his dismissal was unrelated to the security concerns he had raised.

Bourgeois added, 'While I support Leopold's passion for building safe AGI (artificial general intelligence), I disagree with many of the points he has made about OpenAI's work since then, including his views on OpenAI's security, particularly since this incident was addressed and reported to the board before Leopold joined the company.'

Daniela Amodei, co-founder of Anthropic, a competitor of OpenAI, likewise argues that the theft of the latest generative AI designs would not pose a significant threat to national security.

in Software, Security, Posted by logu_ii