OpenAI announces it has blocked accounts using ChatGPT to generate disinformation on multiple topics, including the U.S. presidential election



On Friday, August 16, 2024 (local time), OpenAI announced that it had detected and blocked accounts using ChatGPT to generate false information on multiple topics, including the U.S. presidential election. However, there is no indication that the ChatGPT-generated content reached a meaningful audience of general internet users.

Disrupting a covert Iranian influence operation | OpenAI

https://openai.com/index/disrupting-a-covert-iranian-influence-operation/



OpenAI disrupts Iranian operation that used ChatGPT for disinformation
https://www.axios.com/2024/08/16/openai-iran-disinformation-chatgpt

OpenAI is committed to preventing misuse of AI-generated content and improving transparency. This includes efforts to detect and stop influence operations that attempt to manipulate public opinion or affect political outcomes while concealing the true identities and intentions of the parties behind them. With elections scheduled around the world in 2024, disrupting such operations is particularly important, and OpenAI has been using its own AI models to detect this kind of misuse.

These detection models identified an influence operation called 'Storm-2035' that was generating false information intended to influence the U.S. presidential election, and the company announced that it had blocked the operation's accounts from accessing ChatGPT. Storm-2035 used ChatGPT to generate disinformation on a range of topics, including commentary on candidates from both sides of the U.S. presidential race, and shared it through social media and websites.



However, OpenAI found that the majority of the social media posts in which Storm-2035 spread ChatGPT-generated disinformation had little impact, attracting few likes, shares, or comments. To measure the influence of disinformation campaigns, OpenAI uses the Breakout Scale, a framework published by the think tank Brookings that rates campaigns from 1 (lowest impact) to 6 (highest). On this scale, Storm-2035 was classified as Category 2: active across multiple platforms, but with no evidence that real people picked up or widely shared the content.
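For readers who want a concrete picture of the scale, the sketch below encodes the six Breakout Scale categories as a Python enum and classifies a campaign matching Storm-2035's profile. The category wording is paraphrased from Brookings' framework, and the classify() helper is a hypothetical simplification for illustration, not OpenAI's actual methodology.

```python
from enum import IntEnum

class BreakoutCategory(IntEnum):
    """Paraphrased Breakout Scale categories (Brookings); see the text above."""
    ONE = 1    # Confined to one community on one platform
    TWO = 2    # Spreads across platforms, but no evidence real users picked it up
    THREE = 3  # Real users organically share the content
    FOUR = 4   # Breaks out to new audiences across multiple platforms
    FIVE = 5   # Amplified by celebrities or mainstream media
    SIX = 6    # Triggers a policy response or a call to violence

def classify(platform_count: int, organic_engagement: bool) -> BreakoutCategory:
    """Toy classifier covering only the low end of the scale (an assumption,
    not part of the published framework)."""
    if organic_engagement:
        return BreakoutCategory.THREE
    return BreakoutCategory.TWO if platform_count > 1 else BreakoutCategory.ONE

# Storm-2035: posted on X and Instagram, but drew few likes, shares, or comments.
result = classify(platform_count=2, organic_engagement=False)
print(result.name, int(result))  # -> TWO 2
```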

Storm-2035 used ChatGPT to produce two kinds of output: long-form articles and short social media comments. For the long-form articles, the operation generated pieces mainly about American politics and world affairs and published them on five websites posing as both progressive and conservative news outlets. For the short comments, it generated posts in English and Spanish and shared them on social media. OpenAI identified 12 X accounts and one Instagram account used by Storm-2035, and some of the comments posted by these accounts were rewrites, produced with ChatGPT, of comments originally posted by other social media users.



Storm-2035 generated content mainly about the Gaza conflict, Israel's presence at the Olympic Games, and the U.S. presidential election. It also generated content about Venezuelan politics, the rights of Latino communities in the U.S., and Scottish independence. OpenAI points out that the political content was interspersed with comments about fashion and beauty, possibly an attempt to disguise the AI-generated nature of the accounts or to attract followers.



When the news outlet Axios asked Meta about the incident, the company responded that it had disabled the Instagram account in question, linking it to an Iranian disinformation campaign from 2021 that targeted users in Scotland.

Axios also reached out to X for comment but had not received a response at the time of writing; OpenAI noted that all of the social media accounts in question were 'inactive.'
