It turns out that just 10 X accounts posted more than 60,000 times ahead of the UK general election, potentially influencing the vote



On social media, all kinds of information is posted by all kinds of users, and it often spreads with no regard for whether it is true or who originally posted it. This irresponsible spread of information also affects elections, an activity fundamental to any country: one study found that just 10 social media accounts posted more than 60,000 posts containing false information, posts that may have been viewed 150 million times.

Bot-like tweets seen 150 million times ahead of UK elections | Global Witness

https://www.globalwitness.org/en/campaigns/digital-threats/investigation-reveals-content-posted-bot-accounts-x-has-been-seen-150-million-times-ahead-uk-elections/

Ahead of the UK general election scheduled for July 4, 2024, the NGO Global Witness investigated X (formerly Twitter) accounts spreading political messages during the campaign period.

The researchers focused on two of the most central topics in the UK general election debate: climate change and immigration. They collected all posts using specific hashtags related to these topics, then examined the accounts posting with those hashtags and tallied only the ones they suspected were bots.

The criteria for judging whether an account is a bot include: posting a huge volume of content per day; rarely writing original content and almost always amplifying other people's posts; a handle ending in a long string of digits (the default auto-generated ID); having no profile image; or having a profile image that shows signs of being AI-generated or stolen from elsewhere on the web. The research team combined multiple such signals to make each judgment. For example, an account with fewer than 1,000 followers that posts more than 60 times a day on average, 90% of which are reposts of other accounts' posts, is flagged as a possible bot.
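The combined-signals approach described above can be sketched as a simple scoring function. This is only an illustration of the idea, not Global Witness's actual methodology: the thresholds for the first three signals come from the example in the article, while the trailing-digit cutoff and all field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Account:
    followers: int
    posts_per_day: float
    repost_ratio: float   # fraction of the account's posts that are reposts
    handle: str
    has_profile_image: bool

def trailing_digits(handle: str) -> int:
    """Count digits at the end of a handle (the default-ID pattern)."""
    n = 0
    for ch in reversed(handle):
        if not ch.isdigit():
            break
        n += 1
    return n

def bot_score(acct: Account) -> int:
    """Count how many of the bot signals described in the article an
    account matches. More matched signals means stronger suspicion."""
    signals = [
        acct.posts_per_day > 60,            # huge posting volume
        acct.repost_ratio >= 0.9,           # almost never posts original content
        acct.followers < 1000,              # small audience despite high output
        trailing_digits(acct.handle) >= 6,  # default auto-generated handle
        not acct.has_profile_image,         # no avatar set
    ]
    return sum(signals)

# The article's example: <1,000 followers, >60 posts/day, 90%+ reposts
suspect = Account(followers=120, posts_per_day=85.0, repost_ratio=0.95,
                  handle="user84731920", has_profile_image=False)
print(bot_score(suspect))  # prints 5: matches every signal
```

A score threshold (say, three or more signals) would then decide which accounts get tallied; in practice each signal would be computed from the account's post history rather than supplied by hand.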



The team examined posts from May 22, 2024, when Prime Minister Rishi Sunak announced the dissolution of Parliament and called a general election, and identified 10 likely bot accounts that together posted more than 60,000 times in the weeks after the announcement. These posts are estimated to have been viewed more than 150 million times.

Most of these accounts were overtly partisan in their posts, not just expressing political opinions but explicitly supporting or opposing specific parties, such as urging people not to vote for the Conservatives or to vote for Reform UK, a party opposed to Labour.

Some of the accounts were found to be promoting imagery depicting Muslims as radical and violent, homophobia, and conspiracy theories such as that climate change is a 'hoax,' that vaccines will cause 'genocide,' and the 'Great Replacement' theory, which claims the country's leaders are admitting immigrants in order to reduce the number of white people.



'Social media companies bear responsibility for the proliferation of these accounts, which are designed to drive people into closed and harmful conversations,' the team said.

X has policies that prohibit interference with civic processes and hateful conduct based on race, religion, and other attributes, but the research team pointed out that 'the problem is that this policy is not sufficiently enforced.' The team called on X to investigate whether the bot accounts identified in this investigation violate those policies, and to invest more in protecting democratic discussion from manipulation.

in Web Service, Posted by log1p_kr