OpenAI and Microsoft report that 'hackers from China, Russia, North Korea, and Iran were using AI in their attacks'

OpenAI, which develops generative AI such as ChatGPT and DALL-E, has teamed up with Microsoft to block five groups of attackers who attempted to use AI for malicious cyber activities. The groups are said to be backed by states including Russia, North Korea, Iran, and China.

Disrupting malicious uses of AI by state-affiliated threat actors

Staying ahead of threat actors in the age of AI | Microsoft Security Blog

'Based on cooperation and information sharing with Microsoft, we disrupted the activities of malicious actor groups associated with specific nation states,' OpenAI reported, revealing that OpenAI accounts identified as associated with these groups had been terminated.

The five groups identified are as follows.

・Forest Blizzard (STRONTIUM)
Forest Blizzard is a military intelligence actor linked to Unit 26165 of the Main Intelligence Directorate (GRU) of the Russian Armed Forces' General Staff, and is highly active against targets in sectors such as defense, transportation and logistics, government, energy, non-governmental organizations, and information technology. Microsoft assesses that Forest Blizzard's operations play an important supporting role in Russia's foreign policy and military objectives in Ukraine and the broader region. According to Microsoft, Forest Blizzard used large language models (LLMs) to research satellite and radar technologies and to write scripts supporting those activities.

・Emerald Sleet (THALLIUM)
Emerald Sleet is a North Korea-linked threat actor group that was active throughout 2023, launching spear-phishing attacks against prominent North Korea experts and gathering information about other countries' foreign policies toward North Korea. In addition to espionage via spear phishing, Emerald Sleet used LLMs to research Windows vulnerabilities and to write scripts.

・Crimson Sandstorm (CURIUM)
Crimson Sandstorm is an Iranian threat actor group associated with the Islamic Revolutionary Guard Corps (IRGC) that has been active since at least 2017. Its attacks target multiple sectors such as defense, maritime shipping, transportation, and healthcare, and have included watering hole attacks and emailing malware that communicates with command-and-control servers. Crimson Sandstorm is said to have used LLMs to write the text of phishing emails used in attacks, to generate code snippets for tasks such as communicating with remote servers and web scraping, and to develop methods for evading malware detection and disabling Windows antivirus features.

・Charcoal Typhoon (CHROMIUM)
Charcoal Typhoon is a China-linked threat actor group; in recent years, actors linked to China have reportedly gained footholds in infrastructure projects in other countries. Charcoal Typhoon gathers intelligence on government organizations, higher education institutions, communications infrastructure, critical infrastructure such as oil and gas, and information technology infrastructure in Taiwan, Thailand, Mongolia, Malaysia, France, and Nepal. It was found to have used LLMs to gather this information, to generate scripts needed for these activities, and to assist with translation and social engineering.

・Salmon Typhoon (SODIUM)
Salmon Typhoon is another China-linked threat actor group, known to target American defense contractors, government agencies, and organizations in the cryptographic technology field. Salmon Typhoon used LLMs to research specific individuals and topics of interest, develop malicious code, and translate technical documents.

OpenAI states that, as of the time of writing, all accounts and data associated with the five attacker groups listed above have been disabled.

Microsoft and OpenAI stated that they will continue to take a proactive approach to various aspects of AI safety: they will 'monitor and disrupt malicious nation-state-linked threat actor groups,' 'work with industry partners and other stakeholders in the AI ecosystem to regularly exchange information on detected uses of AI,' 'improve AI safety based on lessons learned from abuse by attacker groups,' and 'continue to disclose information about such exploitation to increase public transparency.'

in Software, Security, Posted by log1i_yk