YouTube announces three new efforts, including content removal and cross-platform sharing restrictions, to prevent the spread of misinformation

In January 2022, YouTube, which had been criticized by more than 80 fact-checking groups claiming that 'YouTube has become a hotbed of misinformation and hoaxes,' announced new efforts to prevent the spread of misinformation.

Inside Responsibility: What's next on our misinfo efforts

Over the past five years, YouTube has invested heavily in a framework called The Four Rs of Responsibility. This is an effort to 'Remove' and 'Reduce' inappropriate content such as hate speech, and to 'Raise' and 'Reward' reliable content. Under this framework, more than 100,000 videos and 17,000 channels were removed in a single three-month period in 2019.

The Four Rs of Responsibility combines machine learning and human reviewers to quickly remove content that violates YouTube policy, surface trusted sources, and limit the spread of problematic content. As a result, YouTube can keep inappropriate content from gaining prominence while maintaining freedom of expression. Still, in recent years various hoaxes and misinformation, such as false claims about the novel coronavirus, have continued to circulate, leading YouTube Chief Product Officer Neal Mohan to write, 'We need to evolve our approach.'

The three new challenges the YouTube product team aims to tackle are '1: Catching misinformation before it spreads,' '2: Addressing the cross-platform spread of misinformation,' and '3: Strengthening efforts against misinformation around the world.'

◆ 1: Catching misinformation before it spreads
Until now, YouTube has mainly had to deal with misinformation that had circulated for a long time, such as moon-landing conspiracy theories and the flat-earth theory. Because such narratives are largely fixed, YouTube could build an archive of the content and curb its spread relatively easily by matching new uploads against the archive and removing hits. In recent years, however, entirely new misinformation has emerged quickly; one example is the hoax that 'the epidemic of the new coronavirus is caused by 5G.' YouTube updated its guidelines to make this type of coronavirus-related hoax a policy violation.

YouTube announces it will restrict content pushing the conspiracy theory that '5G is contributing to the spread of the new coronavirus' - GIGAZINE
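The archive-and-match approach described above can be sketched roughly as follows. This is a hypothetical illustration, not YouTube's actual system: all function names are invented, and a real pipeline would use perceptual fingerprints robust to re-encoding rather than an exact hash.

```python
import hashlib

# Hypothetical archive of fingerprints for known misinformation videos.
known_misinfo_fingerprints = set()

def fingerprint(video_bytes: bytes) -> str:
    """Reduce a video to a compact fingerprint (here a plain SHA-256;
    a real system would use perceptual hashing robust to re-encoding)."""
    return hashlib.sha256(video_bytes).hexdigest()

def archive(video_bytes: bytes) -> None:
    """Record a confirmed piece of misinformation in the archive."""
    known_misinfo_fingerprints.add(fingerprint(video_bytes))

def is_known_misinfo(video_bytes: bytes) -> bool:
    """Check a new upload against the archive before it spreads."""
    return fingerprint(video_bytes) in known_misinfo_fingerprints

archive(b"moon landing hoax clip")
print(is_known_misinfo(b"moon landing hoax clip"))  # True: exact re-upload caught
print(is_known_misinfo(b"brand-new 5G hoax clip"))  # False: novel content slips through
```

The last line illustrates the limitation the article describes: matching against an archive only catches content that has been seen before, which is why novel hoaxes require a different approach.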

However, YouTube acknowledges that its current system is not sufficient to cope with new hoaxes that will appear in the future, and says it is continuously training it on the latest data. Specifically, it aims to catch content that the main classifier misses by combining multiple classifiers, keywords in additional languages, and information from regional analysts.
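Blending a main classifier with auxiliary signals, as the paragraph above describes, might look like the following minimal sketch. All thresholds, weights, and names here are invented assumptions for illustration, not YouTube's actual scoring.

```python
# Hypothetical blend of a main classifier score with two auxiliary signals:
# extra-language keyword matches and a regional-analyst flag.

def combined_score(main_score: float,
                   keyword_hits: int,
                   regional_flagged: bool) -> float:
    """Add a small bonus per keyword hit (capped) and a larger one
    when regional analysts have flagged the content."""
    score = main_score
    score += min(keyword_hits, 5) * 0.05  # each keyword hit adds a little
    if regional_flagged:
        score += 0.2                      # analyst flag is a strong signal
    return min(score, 1.0)

def needs_review(main_score: float, keyword_hits: int,
                 regional_flagged: bool, threshold: float = 0.6) -> bool:
    return combined_score(main_score, keyword_hits, regional_flagged) >= threshold

# A video the main classifier alone would miss (0.4 < 0.6) is caught
# once the keyword and regional signals are added (0.4 + 0.15 + 0.2 = 0.75).
print(needs_review(0.4, 3, True))   # True
print(needs_review(0.4, 0, False))  # False
```

The design point is the one the article makes: no single classifier is reliable for novel hoaxes, so weak signals are combined to raise overall recall.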

In addition to these spread-prevention measures, YouTube is considering protecting users from misinformation and hoaxes by surfacing reliable content in search results. For some topics, however, trusted content is scarce, so for major news-related searches such as natural disasters, YouTube plans to recommend text-based articles from reliable sources.

For niche topics, YouTube is also considering showing fact checks so that viewers can more easily judge whether content is trustworthy. Since it is difficult to fact-check every topic, a warning message may instead be displayed on content whose reliability is hard to determine.

◆ 2: Addressing the cross-platform spread of misinformation
A further problem is how to handle videos containing misinformation that are uploaded to YouTube and then spread to other platforms. In particular, so-called gray-zone content that 'does not violate the removal policy but should not be recommended to others' is a major issue. YouTube calls this borderline content and says its recommendation systems have already reduced the share of borderline content shown to less than 1%.

Other measures under consideration for borderline content include disabling the share button and breaking links to such videos. Because these steps restrict viewers' freedom, YouTube is proceeding carefully, weighing whether they would be an excessive response.

There are also cases where borderline content is cited in research papers and news reports. YouTube says it is important to preserve room for discussion of such sensitive and controversial topics, and it is considering treating some of this content as an exception.

YouTube is also considering displaying a warning message when borderline content is played, alerting viewers that it may contain incorrect information.

◆ 3: Strengthening efforts against misinformation around the world
YouTube offers its service in more than 100 countries and dozens of languages, so however successful the above efforts are, considerable complexity remains.

In the United Kingdom, for example, public broadcasters such as the BBC are treated as reliable sources, while in some other regions public broadcasters are regarded as unreliable outlets for government propaganda. In addition, some countries rigorously fact-check media coverage, while others have little oversight or verification.

To address this regional diversity and determine what is accurate and what is not, YouTube is considering investing more in partnerships with experts and non-governmental organizations around the world. It is also working to build region-specific misinformation-detection models by extending the content-detection systems described above to support local languages.

in Web Service, Posted by logu_ii