Is 'Fake News Made by Text-Generating AI' Really Dangerous for Democracy?



In recent years, AI capable of generating highly convincing text has been developed, leading to incidents such as a text-generating AI posting on the overseas bulletin board Reddit for a week without anyone noticing, and articles written by text-generating AI rising to the top of social news sites. Against this backdrop, Cooper Raterink of the Canadian AI startup Cohere, who studies the safety and responsible use of large language models, explains the actual risk that fake news created by text-generating AI poses to democracy.

Assessing the risks of language model “deepfakes” to democracy
https://techpolicy.press/assessing-the-risks-of-language-model-deepfakes-to-democracy/

Raterink points out that ahead of the 2020 US presidential election, experts worried that 'AI-generated fake news could have a major impact,' but in reality almost no AI-made fake news materialized. In fact, it was well-known political accounts and grassroots movements on social media that most influenced the election results, and it was these that contributed to the spread of fake news.

Although there are deep-rooted concerns about AI that generates high-quality text, such as ' GPT-3 ' and ' T5 ', there are several obstacles that make it hard for malicious actors to use these models. Raterink cites the following issues with using text-generating AI to spread fake news:

◆ 1: It is difficult to access cutting-edge text-generating AI or to train such AI from scratch
In 2019, OpenAI , a non-profit AI research organization, announced the text-generating AI ' GPT-2 ', but the development team raised concerns that the text it generated was too convincing. OpenAI therefore released reduced-capability versions of GPT-2 in stages, monitored how the AI was actually being used, and only then released the final version .

In addition, researchers are focusing on AI-based countermeasures against fake news, such as 'Grover', an AI developed by another research institute to detect fake news generated by AI, so it has been difficult for those aiming to spread fake news to use text-generating AI effectively. A malicious actor could instead train an AI themselves, but this is still not a realistic option because it requires advanced technical expertise and considerable cost.



◆ 2: Text produced by text-generating AI is still not convincing enough
Text created by 'GPT-3', the text-generating AI developed by OpenAI, is said to be difficult for humans to distinguish from human writing, but when text is generated specifically to deceive people, it is hard to fool tools such as ' GLTR ', which detects AI-generated text. It has also been pointed out that the text AI generates is based on its training data and does not reflect a genuinely human-like understanding of natural language.
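The idea behind detectors like GLTR can be illustrated with a toy sketch: for each token, look up its rank in a language model's predicted next-token distribution. Machine-generated text tends to draw heavily from the top of that distribution, while human writing contains more "surprising" low-probability words. The ranks below are made-up illustrative numbers, not real model output, and `top_k_fraction` is a simplified stand-in for GLTR's actual statistics.

```python
# Toy sketch of the core statistic behind GLTR-style detection.
# Assumption: we already have, for each token of a text, its rank in a
# language model's predicted distribution (1 = the model's top guess).

def top_k_fraction(token_ranks, k=10):
    """Fraction of tokens whose model rank falls within the top k.

    A value near 1.0 suggests very "predictable" text, which is a
    hallmark of machine generation; human text usually scores lower.
    """
    return sum(1 for rank in token_ranks if rank <= k) / len(token_ranks)

# Hypothetical ranks for a machine-generated and a human-written sentence.
machine_ranks = [1, 2, 1, 3, 1, 5, 2, 1, 4, 2]
human_ranks = [1, 40, 3, 120, 7, 2, 85, 15, 1, 230]

print(top_k_fraction(machine_ranks))  # 1.0 -> every token is a top-10 pick
print(top_k_fraction(human_ranks))    # 0.5 -> many surprising word choices
```

A real detector would obtain the ranks from an actual language model (GLTR uses GPT-2) and visualize them per token rather than reducing them to a single number, but the contrast between the two scores captures why deliberately deceptive AI text is hard to slip past such tools.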

◆ 3: Platforms are investing heavily in blocking fake news
The main places where fake news spreads are platforms such as social media, but the major platforms are investing heavily to combat spam and fake news. The measures taken by platforms such as Facebook and Twitter focus mainly on detecting bot accounts and similar behavior, and are unaffected by how accurate AI-generated text becomes. Raterink therefore points out that no matter how convincing a text-generating AI is, fake news cannot spread effectively as long as there is anything suspicious about how the accounts spreading it are created and operated.

◆ 4: The threat narrative around text-generating AI lacks real-world context
Many studies have pointed out the dangers of abusing text-generating AI, but few have actually investigated the AI in the context of real online platforms. Even if an AI can generate fake news, whether that news fits the context of today's platforms is another matter, and it is unclear whether it would actually work effectively on social media.

For these reasons, Raterink pointed out that even if more accurate text-generating AI is developed in the future, deploying it may not be worthwhile for those aiming to spread fake news. In fact, given that well-known political accounts and grassroots movements helped spread fake news in the 2020 US presidential election, Raterink argues that investing in such accounts is more cost-effective than investing in text-generating AI.



Raterink also argues that fake news generated with text-generating AI should be seen as an ongoing chase between attackers and regulators, rather than a time bomb that goes off after a certain amount of time. In other words, although the accuracy of text-generating AI will continue to improve, countermeasures by platforms, regulators, and research institutes will also be strengthened, so improved accuracy alone does not increase the risk of fake news.

As countermeasures against fake news produced by text-generating AI, Raterink considers approaches such as 'focusing on the platforms that distribute fake news rather than on the content itself', 'active ethical agreements among researchers', and 'strengthening media literacy among the general public' to be effective. On the other hand, those seeking to spread fake news may find it easier to target non-English-speaking communities rather than English-speaking communities, where countermeasures are well developed.

'General concerns about the current state of the threat of automated fake news are largely unfounded,' Raterink said, arguing that it is important for people to have an accurate understanding of the issue at hand. At the same time, the impact of fake news on democracy is serious, and regulators need to keep evolving their countermeasures.



in Software, Science, Posted by log1h_ik