With the introduction of a machine learning system, YouTube now removes 75% of "violent extremism" videos before any user reports them
As of 2017, the volume of video uploaded to YouTube is enormous: over 400 hours per minute. The content ranges from movie trailers and music videos, to entertaining clips made by YouTubers around the world, to material produced by terrorists. Machine learning is now used to screen this content, and it is far faster and more accurate than the previous approach.
Official YouTube Blog: An update on our commitment to fight terror content online
For a long time, judging whether content on the net was problematic depended on human reviewers. At Google, for example, there is an account of a person engaged in screening for child pornography and graphic content who suffered a mental breakdown from the work.
A man who kept watch for child pornography and grotesque content - GIGAZINE
What has come to the rescue are "artificial intelligence (AI)" and "machine learning," which have advanced rapidly since around 2016.
YouTube had long struggled to deal with terrorist content uploaded alongside unproblematic videos, but the introduction of a machine learning system changed the situation significantly.
Specifically, in July 2017, 75% of the videos removed for "violent extremism" were taken down before any user flagged them as inappropriate. In addition, the number of videos processed has doubled.
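The flagging process described above can be sketched as a simple scoring pipeline: a classifier assigns each upload a policy-violation score, and anything above a threshold is queued for review before any user report arrives. YouTube has not disclosed its actual model, so everything below — the keyword features, the threshold, and the function names — is purely hypothetical, a minimal illustration of the general idea:

```python
# Illustrative sketch only: YouTube's real system is not public.
# A classifier scores each upload; scores above a cutoff are flagged
# automatically, before any user has a chance to report the video.

FLAG_THRESHOLD = 0.8  # hypothetical confidence cutoff

def machine_flag(videos, score_fn, threshold=FLAG_THRESHOLD):
    """Return IDs of videos whose violation score meets the threshold."""
    return [v["id"] for v in videos if score_fn(v) >= threshold]

def toy_score(video):
    """Toy stand-in for a trained classifier: fraction of risky tags."""
    risky = {"extremism", "violence"}
    hits = sum(1 for tag in video["tags"] if tag in risky)
    return hits / max(len(video["tags"]), 1)

uploads = [
    {"id": "a1", "tags": ["trailer", "movie"]},
    {"id": "b2", "tags": ["extremism", "violence"]},
    {"id": "c3", "tags": ["music", "violence"]},
]

print(machine_flag(uploads, toy_score))  # -> ['b2']
```

In a real system the score would come from a trained model over audio, video, and metadata features rather than a keyword count, but the threshold-then-review flow is the same shape.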
Going forward, these efforts will also be extended to videos that are not illegal but have been flagged by users as potentially violating YouTube's policies, such as hate speech and violent extremism.
in Web Service, Posted by logc_nt