YouTube may see an increase in videos incorrectly flagged as policy violations due to the novel coronavirus
Following the outbreak of the novel coronavirus infection (COVID-19), a national state of emergency was declared in the United States, and on March 13, New York City announced measures such as suspending public transportation, closing roads, and imposing a nighttime curfew. Many companies are recommending that employees work from home, and Google is no exception. However, because human employees cannot come to the office, YouTube's content moderation will depend more heavily on AI, which may increase the number of incorrect policy-violation judgments.
YouTube Creator Blog: Protecting our extended workforce and the community
Actions to reduce the need for people to come into our offices
https://blog.google/inside-google/company-announcements/update-extended-workforce-covid-19
YouTube will rely more on AI moderation while human reviewers can't come to the office - The Verge
https://www.theverge.com/2020/3/16/21182011/youtube-ai-moderation-coronavirus-video-removal-increase-warning
Following the outbreak of the novel coronavirus, many companies, including Google, Amazon, Apple, Microsoft, and Facebook, are recommending that employees work from home. As a result, more than 100,000 Google employees across 26 offices in the United States and Canada are now working from home.
Google requests all employees in North America to work from home as a new coronavirus measure - GIGAZINE
In a blog post dated March 16, 2020, Google described efforts to reduce the need for employees to come to the office while ensuring that Google products continue to work without problems. These initiatives include 'remote access,' 'prioritizing workflows,' 'increasing automation,' and 'adjusting shifts.'
Regarding 'increasing automation,' Google said: 'We have always used both humans and machines to review content on platforms such as YouTube. To reduce the need for our employees to come to the office, we are temporarily relying more on automated systems. Our goal is to quickly remove content that violates our community guidelines and policies.'
In other words, AI will serve as YouTube's main content moderator for the foreseeable future. As a result, videos will be removed based purely on AI flags, and the analysis of their content may be less accurate than human review. Google acknowledged that during this period of reduced human involvement, more content may be classified for removal, including content that does not actually violate policy. However, Google predicts that much content will not be deleted by AI, except for content with a high probability of violating policy.
In addition, creators who believe their content was deleted by mistake can file an appeal on YouTube, but responses to appeals may be delayed during this period.
Google acknowledged that the above efforts could create confusion, but said, 'We believe this is the right thing to do for the people who work to keep YouTube safe and for the wider community.'
According to reports from The Intercept and The Irish Times, Facebook's moderators were still working in their offices as of March 13 and 14.
in Web Service, Posted by darkhorse_log