Google develops a tool that uses AI to automatically find 'illegal pornographic content that exploits children' on the Internet



A large amount of child sexual abuse material (CSAM), such as child pornography, exists on the Internet. The work of removing CSAM is done manually, and reviewers currently cannot keep up with the new material that appears one after another. To help the people who confront CSAM, Google has developed the ' Content Safety API ', a tool that uses AI to identify likely CSAM.

Using AI to help organizations detect and report child sexual abuse material online

https://www.blog.google/around-the-globe/google-europe/using-ai-help-organizations-detect-and-report-child-sexual-abuse-material-online/

Content providers such as Google monitor their platforms to keep illegal child pornography off the Internet. However, because removing specific content can also infringe on 'freedom of expression,' careful human judgment about whether content is actually illegal remains unavoidable.

A man who kept monitoring child pornography and grotesque content at Google - GIGAZINE



The people who monitor CSAM are human beings, and the removal work is known to take a severe mental toll.

Microsoft online safety staff sue the company after developing PTSD from watching murder and child pornography - GIGAZINE



In addition, the tools used in traditional CSAM monitoring rely on matching content against the hash values of previously reported illegal CSAM in order to prevent re-uploading. As a result, newly created CSAM that has never been reported slips through, and the limited number of personnel cannot keep up with it.
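The limitation of hash matching can be sketched in a few lines. This is a minimal illustration assuming a simple exact-hash blocklist; real systems use shared industry hash lists and perceptual hashing such as PhotoDNA, which tolerates small image edits, but the core weakness is the same: a file that has never been reported produces no match.

```python
import hashlib

# Hypothetical blocklist: SHA-256 hashes of previously reported files.
# (This example hash is simply sha256(b"hello\n") for demonstration.)
KNOWN_HASHES = {
    "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03",
}

def file_hash(data: bytes) -> str:
    """Exact SHA-256 hash of the file's bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known_match(data: bytes) -> bool:
    """True only if this exact file was reported before.
    Brand-new material never matches, which is the gap
    described above."""
    return file_hash(data) in KNOWN_HASHES

print(is_known_match(b"hello\n"))    # previously reported file: blocked
print(is_known_match(b"new image"))  # never-seen file: slips through
```

Because the check is a set lookup, it is extremely fast, but it can only ever recognize content that a human has already found and reported once.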

Sharing a 'hash list' so that Google, Twitter, Facebook, etc. can automatically block violating images and eradicate child pornography - GIGAZINE



Google's newly developed 'Content Safety API' uses deep-learning-based image and video recognition to automatically flag content that appears to be CSAM. The API scans vast amounts of Internet content, detects suspicious items that need to be checked by human eyes, and sorts them by how urgently they need to be dealt with. Moderators can then work through the flagged content starting from the highest-priority items, deciding whether each one should be removed as CSAM, which makes the work far more efficient.
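The triage step described above can be sketched as a priority queue ordered by classifier confidence. Note that the score values, field names, and interface here are assumptions for illustration only, not the actual Content Safety API:

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class FlaggedItem:
    # Store the negated score so the highest-confidence item pops first
    # (heapq is a min-heap).
    priority: float
    url: str = field(compare=False)

def triage(scored_items):
    """Yield (url, score) pairs in descending order of urgency.
    `scored_items` is an iterable of (url, score) pairs, where
    `score` is a hypothetical 0-1 likelihood of being CSAM."""
    heap = [FlaggedItem(-score, url) for url, score in scored_items]
    heapq.heapify(heap)
    while heap:
        item = heapq.heappop(heap)
        yield item.url, -item.priority

# Reviewers see the most urgent content first.
queue = list(triage([("a.jpg", 0.42), ("b.jpg", 0.97), ("c.jpg", 0.71)]))
print(queue)  # b.jpg (0.97) comes out ahead of c.jpg and a.jpg
```

The design point is simply that human attention is the scarce resource: ordering the review queue by model confidence means the limited staff spend their time on the content most likely to require action.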

According to Google, introducing the Content Safety API has in some cases allowed reviewers to find eight times as much CSAM as before, so even a small team can work quickly and efficiently.


by Trey Ratcliff

Susie Hargreaves, who heads the Internet Watch Foundation, an organization working to eliminate CSAM, welcomed the AI-powered anti-CSAM tool: 'It exposes illegal content that could not be detected before, lets human experts review material on a much larger scale, and helps us keep up with criminals' illegal uploads. We are thrilled that an artificial intelligence tool has been developed to help with this,' she said.

in Software, Posted by darkhorse_log